\begin{document} \maketitle \begin{abstract} We present in this paper a new tool for outlier detection in the context of multiple regression models. This graphical tool is based on recursive estimation of the parameters. Simulations were carried out to illustrate the performance of this graphical procedure. Finally, the tool is applied to real data sets which contain outliers according to the classical available tools. \end{abstract} \section{Introduction} We consider the classical multiple regression model (or linear model). Let $Y$ be a random vector (called the response variable) in $\RR^n$ such that $\EE[Y]=X\beta$ and ${\mbox{cov}}(Y)=\sigma^2 I_n$, where $X \in {\cal{M}}_{n,p}(\RR)$ is a known matrix (the columns of $X$ contain the explanatory variables) and where $\beta \in \RR^p$ and $\sigma^2 \in \RR_+$ are the unknown parameters (to be estimated). If the rank of $X$ equals $p$ (as will be assumed here), then the solution $\widehat{\beta}$ of the least-squares problem is unique and given by $\widehat{\beta} = ({}^tXX)^{-1}{}^tXY$. This estimator is unbiased with covariance matrix $\sigma^2({}^tXX)^{-1}$. It follows that the prediction $\widehat{Y}$ is a linear transformation of the response variable $Y$: $\widehat{Y}= HY$ with $H=X({}^tXX)^{-1}{}^tX$ (called the hat matrix). \\[1ex] Sensitivity analysis is a crucial, but not obvious, task. Three important notions must be considered together: outliers, leverage points and influential points. The notion of outlier is not easy to define. In fact one has to distinguish between two cases: an observation can be an outlier with respect to the response variable and/or to the explanatory variable(s). An observation is said to be an outlier w.r.t. the response variable if its residual (standardized or not) is large enough. This notion is not sufficient in some cases, as for the fourth Anscombe data set \cite{Anscombe}: the residual of the extreme point is zero but it is clearly an outlier.
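The least-squares quantities introduced above are straightforward to compute directly; the following sketch (ours, in Python with NumPy, not part of the original analysis) computes $\widehat{\beta}$, the hat matrix $H$ and the prediction $\widehat{Y}$ on simulated data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 2
# Design matrix X: intercept column plus one explanatory variable
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 2.0])
Y = X @ beta + 0.1 * rng.normal(size=n)

# beta_hat = (X'X)^{-1} X'Y  (solve rather than invert, for numerical stability)
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Hat matrix H = X (X'X)^{-1} X' and prediction Y_hat = H Y
H = X @ np.linalg.solve(X.T @ X, X.T)
Y_hat = H @ Y

# Leverage: flag points whose diagonal element H_ii reaches 2p/n
# (twice the average leverage, a common rule of thumb)
high_leverage = np.where(np.diag(H) >= 2 * p / n)[0]
```

Note that $H$ is an orthogonal projection of rank $p$, so its trace equals $p$; this is a convenient consistency check on the computation.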
This leads to the second definition of an outlier: an observation is an outlier w.r.t. the explanatory variable(s) if it has high leverage. As noted by Chatterjee and Price \cite{ChatterjeePrice}, ``the leverage of a point is a measure of its 'outlyingness' [\ldots] in the [explanatory] variables and indicates how much that individual point influences its own prediction value''. A classical way to measure leverage is to consider the diagonal elements of the hat matrix $H$ (which depends only on the matrix $X$ and not on $Y$): the $i$-th observation is said to have high leverage if $H_{ii} \geqslant 2p/n$ (twice the average value of the diagonal elements of $H$). Any observation with high leverage has to be considered with care. Following the above quotation, one also has to define the notion of influential observation. An observation is ``an influential point if its deletion, singly or in combination with others [\ldots] causes substantial changes in the fitted'' model \cite{ChatterjeePrice}. There exist several measures of influence: among the most widely used are Cook's distance \cite{Cook} and the DFFIT distance \cite{BelsleyKuhWelsch}. These two distances are cross-validation (or jackknife) methods since they are defined on regressions with deletion of the $i$-th observation (when measuring its influence). Observations which are influential points also have to be considered with care. However, one has to consider leverage and influence measures simultaneously. Cook's procedure has been improved and used for defining several other procedures (see \cite{PenaYohai} for instance). For a survey of various methods for multiple outlier detection through Monte Carlo simulations, the reader may refer to \cite{WisnowskiMontgomerySimpson}. \\[1ex] In this paper we propose a new graphical tool for outlier detection in linear regression models (but not for the identification of the outlying observation(s)). This graphical method is based on recursive estimation of the parameters.
Recursive estimation over a sample provides a useful framework for outlier detection in various statistical models (multivariate data, time series, regression analysis, \ldots). The next section is devoted to the introduction of this tool. In order to study its performance, simulations were carried out on which our tool was applied in Section~3. First we apply our graphical method to the case of a data set with one single outlier, either in the explanatory variable and/or in the response variable. Second we apply the graphical tool to the case of multiple outliers. In the last section, our tool is applied to real data which are well known to contain one or two outliers. \section{A new graphical tool} On the one hand, many authors have suggested graphical tools for outlier detection in regression models. For instance Atkinson \cite{Atkinson-81} suggested half-normal plots for the detection of a single outlier (see also \cite{Atkinson-book} for a large panorama). On the other hand, the seminal paper by Brown {\em et al.} \cite{BrownDurbinEvans} (see also \cite{PhillipsHarvey}) about recursive residuals (we share Nelder's opinion - see his comments about \cite{BrownDurbinEvans} - about the misuse of 'recursive residuals' instead of, say, 'sequential residuals', but as noticed by Brown {\em et al.} \cite{BrownDurbinEvans}, ``the usage [of this term] is too well-established to change'') has been the source of various studies on outliers or related problems, most of them being based on the CUSUM test. Schweder \cite{Schweder} introduced a related version of the CUSUM test, the backward CUSUM test (the summation is made from $n$ to $i$ with $i \geqslant p+1$), which was proved to have greater average power than the classical CUSUM test. Later Chu {\em et al.} \cite{ChuHornikKuan} proposed MOSUM tests based on moving sums of recursive residuals.
\\[1ex] Comments by Barnett and Lewis \cite{BarnettLewis} about recursive residuals summarize well the difficulty of such an approach: ``There is a major difficulty in that the labeling of the observations is usually done at random, or in relation to some concomitant variable, rather than 'adaptively' in response to the observed sample values''. For instance Schweder \cite{Schweder}, in order to develop two methods of outlier detection, assumed that the data set could be divided into two subsets, one containing no outliers. In \cite{HadiSimonoff} the reader will find another case in which a half sample is used and assumed to be free of outliers. Since these methods are not satisfactory, Kianifard and Swallow \cite{KianifardSwallow-89} defined a test procedure for outlier detection applied to data ordered according to a given diagnostic measure (standardized residuals, Cook's distance, \ldots). Notice that recursive residuals can also be used to check the model assumptions of normality and homoscedasticity \cite{GalpinHawkins,HedayatRobson}. For a review of the use of recursive residuals in linear models, the reader may refer to the 1996 state of the art by Kianifard and Swallow \cite{KianifardSwallow-96} (see also an earlier state of the art by Hawkins \cite{Hawkins}). \\[1ex] For a given subset of observations, estimators of the parameters are invariant under any permutation of the observations, except when recursive estimation is applied. The idea of a (graphical or not) method based on recursive estimation (of the parameters) is to order the observations so that the presence of one or more outliers becomes visible (in a figure and/or a table). This point of view was used by Kianifard and Swallow \cite{KianifardSwallow-89} in the method described above.
However their procedure does not guarantee that outliers are detected: this unfortunate case happens for instance when the outlier is precisely one of the first $p$ observations (which are used for the initialization of the recursive computation of residuals). This point was already noticed by Clarke \cite{Clarke} (who focused on robust methods of outlier detection in the case of small sample sizes). For example, one can clearly observe this phenomenon on the fourth Anscombe data set \cite{Anscombe} if one uses the standardized residuals as a diagnostic measure (see the introduction above for previous comments on this data set). In each of these cases it is usually assumed that the initial subset (or elemental set) does not contain outliers (such subsets are called clean subsets), but with no guarantee that this assumption holds (see \cite{HawkinsBraduKass} for another such situation). \\[1ex] Since the graphical tool we propose is based on a recursive procedure, we introduce some notation for parameter estimation based on a subset of the observations. For any subset $I$ of $\{1, \ldots, n\}$, we denote by $\widehat{\beta}(I)$ the estimator of $\beta$ based on the observations $X_{i1}, \ldots, X_{ip}$ with $i \in I$. We denote by $X_I$ (resp. $Y_I$) the corresponding sub-matrix of $X$ (resp. sub-vector of $Y$). We will assume that for any subset $I$ such that $|I| \geqslant p$ the matrix $X_I$ has full rank. It follows that $\widehat{\beta}(I)$ is unique and given by $\widehat{\beta}(I) = ({}^tX_IX_I)^{-1}{}^tX_IY_I$. We will denote by $S_n$ the set of all permutations of $\{1, \ldots, n\}$ and, for any permutation $s \in S_n$, $I_i^s := \{s(1), \ldots, s(i)\}$. \\[1ex] The graphical procedure we suggest here consists in generating $p$ different graphical displays, one for each coordinate of $\beta$ (including the intercept if any).
On the $j$-th graphical display the points $(i,\widehat{\beta}_j(I_{p+i-1}^s))$ with $i \in \{1, \ldots, n-p+1\}$ are plotted, for a given number of permutations $s \in S_n$ (points can be joined with lines). Similar graphical displays can also be produced for the variance estimation and for various coefficients (determination coefficient, AIC, \ldots). This graphical tool can be viewed as one of the dynamic graphics defined by Cook and Weisberg \cite{CookWeisberg-89}. To the best of our knowledge this approach is new, even though recursive residuals are quite old (indeed early related papers are due to Gauss in 1821 and Pizzetti in 1891 - see the historical note by Farebrother \cite{farebrother}; see also \cite{Plackett}). In fact recursive residuals and recursive estimation are most of the time considered in the context of time series (see for instance the presentation proposed in \cite{BelsleyKuhWelsch}), since there then exists a natural order for the observations. It follows that in such a situation it is not possible to consider arbitrary permutations of the observations (this explains why recursive residuals are mainly used to check the constancy of the parameters over time). \\ The presence of one (or more) outlier in a data set should induce jumps/perturbations on at least some of these plots. However the effect will not be really visible if the outlier lies in the first observations (see the remark above about \cite{HawkinsBraduKass} and \cite{KianifardSwallow-89}) or in the last observations. In the first case, the effect is diluted by the small sample size, which induces a lack of precision in the estimations. In the second case the effect is also diluted because of a kind of law of large numbers (as noticed by Anderson in his comments on \cite{BrownDurbinEvans}, $\widehat{\beta}_n$ converges to $\beta$ in probability as $n$ tends to infinity if $({}^tX_nX_n)^{-1}$ converges to zero as $n$ tends to infinity).
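To make the procedure concrete, the data behind the curves $(i,\widehat{\beta}_j(I_{p+i-1}^s))$ can be computed as follows (our illustrative sketch in Python with NumPy; the helper name is ours). Here one curve is produced per circular permutation, matrices being inverted afresh at each step.

```python
import numpy as np

def recursive_estimates(X, Y, order):
    """beta_hat(I_i^s) for I_i^s = {s(1), ..., s(i)}, i = p, ..., n.

    Each estimate is recomputed by direct solving (no updating formula)."""
    n, p = X.shape
    out = []
    for i in range(p, n + 1):
        idx = order[:i]
        XI, YI = X[idx], Y[idx]
        out.append(np.linalg.solve(XI.T @ XI, XI.T @ YI))
    # Row i-p holds beta_hat based on the first i ordered observations
    return np.array(out)

rng = np.random.default_rng(1)
n, p = 30, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=n)

# One curve per circular permutation of {1, ..., n}
curves = [recursive_estimates(X, Y, np.roll(np.arange(n), -k)) for k in range(n)]
```

Whatever the permutation, the last point of every curve is the full-sample estimator $\widehat{\beta}$, so all curves merge at the right of the display.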
This suggests that there exist 'optimal' positions for the outlying individuals in order to be detected by a recursive approach. \\[1ex] The number of permutations used for the graphics should depend on the sample size. We distinguish the three following cases: \begin{enumerate} \item Large sample size: one can plot points for all the $n$ circular permutations. In this way, $n$ lines are represented on each graphical display. \item Medium or small sample size: if the sample size is not large enough to apply the above rule, one can choose $N$ permutations at random and plot the $N$ curves corresponding to the recursive estimations. The value of $N$ may depend on $n$: the smaller $n$ is, the larger $N$ has to be. \item Very small sample size: if $n$ is small enough (say smaller than 10), one can plot all the $n!$ sequences on each graphical display. Such a situation could appear in the context of experimental designs for instance. \end{enumerate} A major advantage of this new graphical tool is that it does not require the normality assumption. This assumption is generally required in former outlier detection procedures (especially those using standardized residuals for instance). Moreover it can be performed on data with few observations. \\[1ex] Before applying the graphical method described above to simulated and real data, we wish to consider some practical aspects: \begin{enumerate} \item In order to highlight the presence of outliers (and to reduce the effect induced by the lack of data), one could prefer to plot only the points $(i,\widehat{\beta}_j(I_{p+i-1}^s))$ for $i \geqslant \lfloor \alpha n \rfloor$ with $\alpha \in (0,1)$. The value of $\alpha$ may depend on the sample size: for small sample sizes, the value of $\alpha$ could reach up to $25\%$. This could emphasize the cases where the outliers are in the 'optimal' positions.
\item Since the graphical method suggested here relies on recursive estimation of the parameters, one may wish to apply the updating formulas given by Brown {\em et al.} \cite{BrownDurbinEvans}. However one should avoid such formulas, especially when dealing with large data sets, and prefer to invert the matrices for each point (since computers are more reliable and efficient than in the past). In fact using updating formulas may induce cumulative rounding-off errors, making the graphical method useless (this point was already noticed by Kendall in his comments about the paper by Brown {\em et al.} \cite{BrownDurbinEvans}). \end{enumerate} From now on we will assume that the response variable $Y$ is a Gaussian random vector. We will see how one can use cumulative sums (CUSUM) of recursive residuals in order to get similar graphical displays revealing the presence (or not) of outliers. This is fully inspired by \cite{BrownDurbinEvans} (see also \cite{GalpinHawkins}). In fact, as shown by McGilchrist {\em et al.} \cite{McgilchristLiantoBryon} (in a more general context), recursive residuals and recursive estimations of $\beta$ are related to each other by re-writing the updating formula as follows, for $i \in \{ 1, \ldots, n-p\}$, \begin{equation*} \widehat{\beta}(I^s_{p+i}) = \widehat{\beta}(I^s_{p+i-1}) + \frac{R(I^s_{p+i})\, ({}^tX_{I^s_{p+i-1}}X_{I^s_{p+i-1}})^{-1}x_{s(p+i)}}{\sqrt{1+{}^tx_{s(p+i)}({}^tX_{I^s_{p+i-1}}X_{I^s_{p+i-1}})^{-1}x_{s(p+i)}}} \;, \end{equation*} where $x_i$ denotes the $i$-th row of $X$ and where $R(I^s_{p+i})$ is the $i$-th recursive residual defined by: \begin{equation*} R(I^s_{p+i}) = \frac{Y_{s(p+i)}-{}^tx_{s(p+i)}\widehat{\beta}(I^s_{p+i-1})}{\sqrt{1+{}^tx_{s(p+i)}({}^tX_{I^s_{p+i-1}}X_{I^s_{p+i-1}})^{-1}x_{s(p+i)}}} \;. \end{equation*} As proved by Brown {\em et al.} (lemma~1 in \cite{BrownDurbinEvans}), $R(I^s_{p+1}), \ldots, R(I^s_{n})$ are iid random variables with the Gaussian distribution with mean 0 and variance $\sigma^2$.
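For illustration, the updating formulas above can be implemented and checked against direct least squares (our sketch in Python with NumPy; the inverse $({}^tX_IX_I)^{-1}$ is maintained via the standard Sherman-Morrison identity, although, as noted above, direct inversion is numerically safer in practice).

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 40, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=n)

# Initialization on the first p observations (identity permutation s)
M = np.linalg.inv(X[:p].T @ X[:p])        # current ({}^t X_I X_I)^{-1}
beta_hat = M @ X[:p].T @ Y[:p]
residuals = []
for i in range(p, n):
    x, y = X[i], Y[i]
    d = 1.0 + x @ M @ x                   # 1 + {}^t x M x
    R = (y - x @ beta_hat) / np.sqrt(d)   # recursive residual R(I_{p+i})
    # Updating formula: beta(I_{p+i}) = beta(I_{p+i-1}) + R * M x / sqrt(d)
    beta_hat = beta_hat + R * (M @ x) / np.sqrt(d)
    # Sherman-Morrison update of the inverse after adding row x
    M = M - np.outer(M @ x, M @ x) / d
    residuals.append(R)
```

After the last step, `beta_hat` coincides (up to floating-point error) with the full-sample least-squares estimator, and the sum of the squared recursive residuals equals the residual sum of squares $||Y-\widehat{Y}||^2$, a classical identity.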
This allows us to construct a continuous-time stochastic process using Donsker's theorem (see chapter~2 in \cite{Billingsley}): \begin{equation*} \forall t \in (0,1) \;, \quad X_n(t) = \frac{1}{\sigma\sqrt{n}} \left( S_{\lfloor nt \rfloor} + (nt-\lfloor nt \rfloor)R(I^s_{\lfloor nt \rfloor+p+1}) \right) \;, \end{equation*} where $S_0=0$ and $S_i = S_{i-1} + R(I^s_{p+i})$. The unknown variance $\sigma^2$ is estimated using all the observations: $\widehat{\sigma}^2 = ||Y-\widehat{Y}||^2/(n-p)$. If all the assumptions of the Gaussian linear model are satisfied, $\{ X_n(t) \,;\,t \in (0,1) \}$ converges in distribution to a Brownian motion as $n$ tends to infinity. It follows that this graphical method can only be used for large sample sizes. According to Brown {\em et al.} \cite{BrownDurbinEvans}, the probability that a sample path $W_t$ crosses one of the two following curves: \begin{equation*} y = 3a\sqrt{t} \quad {\mbox{or}} \quad y = -3a\sqrt{t} \end{equation*} equals $\alpha$ if $a$ is a solution of the equation: \begin{equation*} 1-\Phi(3a) + \exp(-4a^2)\Phi(a) = \frac{1}{2}\alpha \;, \end{equation*} where $\Phi$ is the cumulative distribution function of the standard Gaussian distribution (for instance, $\alpha=0.01$ gives $a=1.143$). \section{Simulations} In this section we provide some simulations in order to observe the phenomena which arise in such graphical displays in the presence of one or more outliers. We first consider the case where the data set contains only one outlier (either in the explanatory variable and/or in the response variable). Secondly we consider the case of multiple outliers, which is more difficult to detect with the classical tools. \subsection{Single outlier} We present here some simulations on which we apply our graphical tool.
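The constant $a$ can be recovered numerically from the last equation; a small stdlib-only Python sketch (ours), using $\Phi(x) = \frac{1}{2}(1+\mathrm{erf}(x/\sqrt{2}))$ and bisection on the (decreasing) left-hand side:

```python
import math

def Phi(x):
    # Standard Gaussian cumulative distribution function via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def crossing_constant(alpha, lo=0.1, hi=5.0, tol=1e-12):
    """Solve 1 - Phi(3a) + exp(-4 a^2) Phi(a) = alpha / 2 by bisection."""
    f = lambda a: 1.0 - Phi(3.0 * a) + math.exp(-4.0 * a * a) * Phi(a) - alpha / 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:   # f decreases in a, so the root lies to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $\alpha=0.01$ this indeed returns $a \approx 1.143$, matching the value quoted above.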
Data were generated as follows: \begin{equation*} \forall i \in \{ 1, \ldots, n\} \;, \quad y_i = 1 + 2 x_i + \varepsilon_i \;, \end{equation*} where the $(x_i)$ are iid random variables with a double exponential distribution with mean 1 and the $(\varepsilon_i)$ are iid random variables with a centered Gaussian distribution with standard deviation $\sigma=0.1$. From this model, we derive three perturbed bivariate data sets. First we construct the univariate data set $(\tilde{x}_i)$ as follows: $\tilde{x}_i = x_i$ for all $i \in \{1, \ldots, n\} \setminus \{\lfloor n/2 \rfloor\}$ and $\tilde{x}_{\lfloor n/2 \rfloor} = 10 x_{\lfloor n/2 \rfloor}$ (this corresponds to a typo in the decimal separator symbol). We construct the perturbed univariate data set $(\tilde{y}_i)$ similarly. We then combine these univariate data sets to produce four different scenarios: no outlier, one outlier in the explanatory variable ($x$), one outlier in the response variable ($y$) and one outlier simultaneously in the explanatory and response variables. \\[1ex] Figure~\ref{fig:large-single} shows these four situations (one per column) with $n=100$ observations (large sample size): the first two rows contain the recursive estimations of $\beta_0$ and $\beta_1$, the third one the recursive values of $R^2$ (determination coefficient) and the last one the recursive estimations of $\sigma^2$. The presence of one outlier (either in the explanatory variable and/or in the response variable) leads to perturbations in the recursive parameter estimations (especially in the variance estimation) and in the recursive computation of the determination coefficient.
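This simulation design can be reproduced as follows (our sketch in pure Python; the seed is arbitrary). A double exponential (Laplace) variate with mean 1 is generated as 1 plus the difference of two standard exponentials.

```python
import random

random.seed(0)
n = 100
# Double exponential (Laplace) variates with mean 1
x = [1.0 + random.expovariate(1.0) - random.expovariate(1.0) for _ in range(n)]
eps = [random.gauss(0.0, 0.1) for _ in range(n)]
y = [1.0 + 2.0 * xi + ei for xi, ei in zip(x, eps)]

# Perturbed versions: a decimal-separator typo at position floor(n/2)
k = n // 2
x_tilde = list(x); x_tilde[k] = 10.0 * x[k]
y_tilde = list(y); y_tilde[k] = 10.0 * y[k]

# The four scenarios: no outlier, outlier in x, outlier in y, outlier in both
scenarios = {"none": (x, y), "x": (x_tilde, y), "y": (x, y_tilde), "xy": (x_tilde, y_tilde)}
```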
\begin{figure} \caption{Graphical plots for simulated data: single outliers and large sample size} \label{fig:large-single} \end{figure} On figure~\ref{fig:large-single-cusum} (each column corresponds to one of the situations described above), the stochastic processes (which should be Brownian motions if the model assumptions are all satisfied) constructed with the CUSUM procedure (see the last part of the previous section) are plotted for all circular permutations. Even though these stochastic processes do not frequently cross the parabolic border, the presence of outliers in the three last cases is qualitatively clear. \begin{figure} \caption{CUSUM plots for simulated data: single outliers and large sample size} \label{fig:large-single-cusum} \end{figure} Figure~\ref{fig:small-single} contains the same outputs (as the first one) but with $n=10$ (small sample size) and $N=100$ (the number of random permutations on which the recursive estimations are computed). Similar outputs are obtained in this case, with slight differences due precisely to the sample size. The presence of the outlier is more visible in the recursive estimations of $\beta_1$ and of the variance $\sigma^2$. \begin{figure} \caption{Graphical plots for simulated data: single outliers and small sample size} \label{fig:small-single} \end{figure} When the response variable $Y$ is a non-Gaussian random vector, the method is still valid and it leads to the same kind of phenomena on the various plots. Moreover such an approach can also be used to detect regime switching in a regression model. Simulations for these cases were carried out (but are not presented here). \subsection{Multiple outliers} The presence of multiple outliers in a data set is more difficult to detect. Methods based on single deletion \cite{Atkinson-book,CookWeisberg-82} may fail and thus outliers may remain undetected.
This phenomenon is called the 'masking effect': in the presence of multiple outliers, ``least squares estimation of the parameters may lead to small residuals for the outlying observations'' \cite{AtkinsonRiani} (see also \cite{Lawrence} for a discussion of this effect). Moreover, ``if a data set contains more than one outlier, because of the masking effect, the very first observation [with the largest standardized residuals] may not be declared discordant [i.e. as an outlier]'' \cite{Paul}. However, since we initialize the recursive estimations at various positions in the data set, this consequence of the masking effect should disappear. \\[2ex] We consider the same model as in the previous section, but we introduce multiple outliers in the perturbed univariate data sets. Two cases are considered: first the outliers are consecutive observations, and second the outliers are at random positions in the data sets. Simulations were carried out only for large samples. Figures~\ref{fig:large-mult-1} and~\ref{fig:large-mult-2} contain the outputs obtained respectively with $5$ consecutive outliers and with $5$ outliers uniformly drawn at random over $\{1, \ldots, 100\}$. \begin{figure} \caption{Graphical plots for simulated data: multiple consecutive outliers} \label{fig:large-mult-1} \end{figure} \begin{figure} \caption{Graphical plots for simulated data: multiple outliers randomly chosen} \label{fig:large-mult-2} \end{figure} \section{Application to health data sets} We apply our graphical tool to two real data sets. A simple regression is performed on the first data set, which contains a single outlier, while a multiple regression is performed on the second data set, which contains two outliers. \begin{itemize} \item Alcohol and tobacco spending in Great Britain \cite{MooreMcCabe}. The data come from a British government survey of household spending in the eleven regions of Great Britain.
One can consider the simple regression of alcohol spending on tobacco spending. It appears that this data set contains one single outlier (corresponding to Northern Ireland - the last individual in the data set). On figure~\ref{fig:AT} the various recursive estimations are plotted: from left to right and from top to bottom, $\beta_0$, $\beta_1$, $R^2$ and $\sigma^2$. Red lines (resp. black) correspond to the data with (resp. without) the single outlier. These outputs were obtained by applying the rule for small data sets (with $N=100$ randomly chosen permutations). The graphical plots of the variance estimation and of the determination coefficient clearly indicate the presence of an outlier. \begin{figure} \caption{Graphical plots for alcohol and tobacco data} \label{fig:AT} \end{figure} \item Smoking and cancer data \cite{Fraumeni}. The data are per capita numbers of cigarettes smoked (sold) in 43 states and the District of Columbia in 1960, together with death rates per thousand population from various forms of cancer: bladder cancer, lung cancer, kidney cancer and leukemia. A classical sensitivity analysis leads to the conclusion that the data set contains two outliers, Nevada and the District of Columbia (the last two individuals in the data set), in the distribution of cigarette consumption (the response variable). Figure~\ref{fig:SC} contains the outputs in three cases (corresponding to the three columns): one of the two outliers has been removed in the first two cases and both outliers have been removed in the last case. As in the previous example, the red lines correspond to the original data set and the black ones to the data set with one or two outliers removed. The first five rows contain the plots for $\widehat{\beta}$, the sixth row the plot for the determination coefficient and the last row the plot for $\widehat{\sigma}$. The graphical plots for the variance estimation clearly indicate that removing only one outlier is not sufficient.
\begin{figure} \caption{Graphical plots for smoking and cancer data} \label{fig:SC} \end{figure} \end{itemize} \end{document}
\begin{document} \title[Monotone Paths on Cross-Polytopes]{Monotone Paths on Cross-Polytopes} \author[A.~Black \and J.A.~De Loera]{Alexander E. Black \and Jes\'{u}s A. De Loera} \address[AB]{Dept.\ Math., UC Davis, Davis, CA 95616, USA} \email{aeblack@ucdavis.edu} \address[JDL]{Dept.\ Math., UC Davis, Davis, CA 95616, USA} \email{deloera@ucdavis.edu} \begin{abstract} In the early 1990s, Billera and Sturmfels introduced the monotone path polytope (MPP), a special case of the general theory of fiber polytopes that associates a polytope to a pair $(P,\varphi)$ of a polytope $P$ and a linear functional $\varphi$. In that same paper, they showed that the MPPs of simplices and hyper-cubes are combinatorial cubes and permutahedra, respectively. Their work has led to many developments in combinatorics. Here we investigate the monotone paths of generic orientations of cross-polytopes. We show that the face lattice of the corresponding MPP is isomorphic to the lattice of intervals in the sign poset from oriented matroid theory. We also look at its $f$-vector, its realizations, and its facets. \end{abstract} \maketitle \section{Introduction} In their seminal paper \cite{BSFiberPoly}, Billera and Sturmfels developed a construction that, given a projection of polytopes, associates a new polytope to that projection called the \emph{fiber polytope} (see the books \cite{MYBOOK, zieg} for an introduction). Fiber polytopes have a rich combinatorial structure, as demonstrated by the fact that associahedra, permutahedra, cyclohedra, and other combinatorial polytopes are all fiber polytopes of canonical projections \cite{holypaper,chapoton_fomin_zelevinsky_2002, MYBOOK, HOHLWEGetal, Baues, vic-equifiber}. Fiber polytopes are of great importance beyond algebraic and geometric combinatorics too (see for example the connections to algebraic geometry in \cite{GKZbook, Mcdonald, SturmfelsYu} and recently to theoretical physics through total positivity in \cite{nimapaper, highersecond}).
Fiber polytopes extract complicated combinatorial structure even from one-dimensional projections. The fiber polytopes of one-dimensional projections are called \emph{monotone path polytopes} (MPPs), since they are each the convex hull of the average values of all monotone paths on the polytope. The MPPs of simplices for generic linear functionals are combinatorial hyper-cubes, and the MPPs of hyper-cubes for generic linear functionals are always permutahedra \cite{BSFiberPoly}. Few examples of MPPs, or of fiber polytopes in general, are known beyond these special cases, which makes studying them difficult. In this note, we develop a new, natural class of examples in depth: the MPPs of cross-polytopes for generic orientations. To compute the combinatorial type of a monotone path polytope, it suffices to understand the poset of cellular strings or \emph{Baues poset} (see \cite{Baues}). Cellular strings generalize monotone paths in the sense that monotone paths are increasing sequences of edges, while cellular strings are increasing sequences of faces. The face lattice of an MPP of $\diamond^{n}$ is isomorphic to the poset of \emph{coherent} cellular strings contained within the Baues poset. Coherence is a geometric restriction that we will make precise in Section \ref{sec:bg}; intuitively, it means that there is a projection of the polytope to a polygon, built using the linear functional, that takes the cells in the string to the lower edges of that polygon. For any hyper-cube or simplex and any edge-generic orientation, all cellular strings are coherent, as stated in \cite{BSFiberPoly}. The cellular strings of the simplex correspond to intervals $[A,B]$ in the poset of subsets of $[n-2]$, where $A$ is the set of endpoints of the string, and $B$ is the set of all vertices that appear anywhere in the string.
However, for general polytopes, not all monotone paths are coherent, and the characterization of the coherent paths often leads to interesting combinatorics, as in the cases discussed in \cite{cubepiles, edmanthesis, Hypersimps}. In this paper, we study the monotone paths and the cellular strings of the standard $n$-dimensional \emph{cross-polytope} $\diamond^{n}$ given by the convex hull of the $2n$ vectors $e_1,-e_1,e_2,-e_2,\dots,e_n,-e_n$. As a polyhedron, $\diamond^{n}$ is given by the $2^n$ inequalities of the form $\pm x_1 \pm x_2 \pm \dots \pm x_n \leq 1$ for all possible sign choices. Cross-polytopes are well-known simplicial polytopes, polar to cubes, and a subset of vertices of the cross-polytope is a face if and only if it does not contain a pair of antipodes. In what follows, $P^{\Delta}$ denotes the polar dual of a polytope $P$. With these facts in mind, we may state our main result: \begin{theorem} \label{MainTheorem1} For the standard cross-polytope $\diamond^{n}$ and for any generic linear functional $\varphi$ such that $\varphi(e_{i}) = a_{i}$ for all $i \in [n]$, \begin{enumerate} \item[(a)] If $0 < a_{1} < a_{2} < \dots < a_{n}$, the set of vertices of $MPP_{\varphi}(\diamond^{n})$ is precisely: \begin{align*} &\Bigg{\{} \left(1 - \frac{a_{i_{k}} + a_{i_{1}}}{2a_{n}}\right)e_{n} + \sum_{i = 1}^{k} \left(\frac{a_{i_{k-1}} + a_{i_{k+1}}}{2a_{n}}\right) e_{i_{k}}: \\ &-n = i_{0} < \dots < i_{k+1} = n \text{ and } i_{a} \neq -i_{b} \text{ for all } a,b \in [k]\Bigg{\}}. \end{align*} \item[(b)] There is an explicit polyhedral realization of $MPP_{\varphi}(\diamond^{n})$.
If $0< a_{1} < a_{2} < \dots < a_{n}$, then $MPP_{\varphi}(\diamond^{n})$ is given by \[\{x \in \mathbb{R}^{n}: \varphi(x) = 0 \text{ and } \varphi_{i, \varepsilon}(x) \geq -a_{i} - a_{n}, \varepsilon: [n-1] \to \{\pm 1\}, i \in [n-1]\},\] where we define $\varphi_{i, \varepsilon}$ on the basis $F_{1} \cup F_{2} \cup \{e_{n}\}$ by \[\varphi_{i, \varepsilon}(e_{k}) = \begin{cases} -a_{k} - a_{n} \text{ if } k \in F_{1} \\ \frac{a_{i} + a_{n}}{a_{n} - a_{i}}(a_{k} - a_{n}) \text{ if } k \in F_{2}\\ 0 \text{ if } k = n\end{cases}\] for $F_{1} = \{k: \varepsilon(k)k \leq i\}$ and $F_{2} = \{k: \varepsilon(k)k \geq i\}$. \item[(c)] The polytope $MPP_{\varphi}(\diamond^{n})$ is combinatorially equivalent to the cubical complex formed by gluing together all unit cubes of dimension $\leq n -2$ with vertices contained in $\{\pm 1, 0\}^{n-1} \setminus \{\mathbf{0}\}$. The face lattice of $MPP_{\varphi}(\diamond^{n})$ is isomorphic to the lattice of intervals in the sign poset $\{0, +, -\}^{n-1} \setminus \{\mathbf{0}\}$ ordered under inclusion. \item[(d)] Furthermore, $MPP_{\varphi}(\diamond^{n})$ is combinatorially equivalent to $(C_{n-1} + \diamond^{n-1})^{\Delta}$, where $C_{n} = [-1,1]^{n}$ is the $n$-dimensional regular cube. \item[(e)] The $f$-vector of $MPP_{\varphi}(\diamond^{n})$ is given by \[f_{m}(MPP_{\varphi}(\diamond^{n})) = \sum_{k=1}^{n-m-1} \binom{n-1}{k,m, n-k-m-1} 2^{k+m}. \] Hence, $MPP_{\varphi}(\diamond^{n})$ has precisely $3^{n-1} -1$ vertices. In particular, each vertex corresponds to a sign vector in $\{0, +, -\}^{n-1} \setminus \{\mathbf{0}\}$. \item[(f)] Two vertices in $MPP_{\varphi}(\diamond^{n})$ are adjacent if and only if their corresponding vectors are distance $1$ from one another in the Taxi Cab metric. As a result, $\text{diam}(MPP_{\varphi}(\diamond^{n})) = 2(n-1) = (n-1)\text{diam}(\diamond^{n}).$ \item[(g)] The total number of monotone paths in $\diamond^{n}$ is precisely $\frac{2^{2n-1} - 2}{3}.$ Not all paths are coherent.
The diameter of the entire flip graph of $\diamond^{n}$ is $2(n-1)$, and the longest flip distance from a monotone path to the nearest coherent path is $n-2$. \end{enumerate} \end{theorem} The combinatorial types of these MPPs correspond exactly to the poset of intervals of the sign poset from oriented matroid theory, as one would find in Chapter 7 of \cite{zieg}. For this reason, we call polytopes of this combinatorial type \emph{signohedra.} One may view this result as a type $B$ analog of the case of simplices. Namely, the MPPs of simplices are cubes. The face lattice of a cube corresponds to the lattice of intervals of subsets of $[n]$. The type $B$ analog of the simplex is the cross-polytope, and the type $B$ analog of the poset of subsets of $[n]$ is the sign poset. Hence, we may view the signohedron as a type $B$ cube. Furthermore, via a functorial lemma proven in \cite{BSFiberPoly}, projections take MPPs to MPPs. The projections of the cross-polytopes are precisely the \emph{centrally symmetric polytopes}, or equivalently the polyhedral unit balls in $\mathbb{R}^{d}$. Thus, Theorem \ref{MainTheorem1} yields the following corollary: \begin{cor} \label{cor:MainCS} Let $P$ be a centrally symmetric polytope with $2n$ vertices $\pm v_{1}, \dots, \pm v_{n}$ and linear functional $\ell$ such that $0 < \ell(v_{1}) < \dots < \ell(v_{n})$. Let $a_{i} = \ell(v_{i})$. Then we have the following: \begin{align*} MPP_{\ell}(P) &= \text{conv}\Bigg{(}\Bigg{\{}\left(1 - \frac{a_{i_{k}} + a_{i_{1}}}{2a_{n}}\right)v_{n} + \sum_{j = 1}^{k} \left(\frac{a_{i_{j-1}} + a_{i_{j+1}}}{2a_{n}}\right) v_{i_{j}}: \\ &-n = i_{0} < \dots < i_{k+1} = n \text{ and } i_{a} \neq -i_{b} \text{ for all } a,b \in [k]\Bigg{\}}\Bigg{)}. \end{align*} \end{cor} Note that a similar projection result may be obtained for all polytopes from projections of simplices and for all zonotopes from projections of cubes.
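The counts in parts (e) and (g) of Theorem \ref{MainTheorem1} can be checked by brute force for small $n$. The sketch below (in Python; all helper names are ours) enumerates intervals in the sign poset $\{-1,0,1\}^{n-1} \setminus \{\mathbf{0}\}$, which index the faces by part (c), and compares the counts against the multinomial formula of part (e); it also counts monotone paths as subsequences with no two consecutive antipodes, as in part (g).

```python
from itertools import product
from math import factorial

def multinom(n, ks):
    # multinomial coefficient n!/(k_1! k_2! ...); assumes sum(ks) == n
    out = factorial(n)
    for k in ks:
        out //= factorial(k)
    return out

def f_vector_formula(n, m):
    # f_m = sum_k binom(n-1; k, m, n-k-m-1) 2^{k+m}, as in part (e)
    return sum(multinom(n - 1, (k, m, n - k - m - 1)) * 2 ** (k + m)
               for k in range(1, n - m))

def sign_vectors(n):
    # nonzero sign vectors in {-1,0,1}^{n-1}
    return [v for v in product((-1, 0, 1), repeat=n - 1) if any(v)]

def leq(a, b):
    # a <= b in the sign poset: b agrees with a wherever a is nonzero
    return all(x == 0 or x == y for x, y in zip(a, b))

def count_faces(n, m):
    # m-faces <-> intervals [a,b] with b having m more nonzero entries than a
    V = sign_vectors(n)
    return sum(1 for a in V for b in V
               if leq(a, b)
               and sum(x != 0 for x in b) - sum(x != 0 for x in a) == m)

def count_monotone_paths(n):
    # subsequences of (-e_{n-1},...,-e_1,e_1,...,e_{n-1}) with no
    # consecutive antipodal pair, as in the proof of part (g)
    order = list(range(-(n - 1), 0)) + list(range(1, n))
    count = 0
    for mask in product((0, 1), repeat=len(order)):
        chosen = [v for v, bit in zip(order, mask) if bit]
        if chosen and all(x + y != 0 for x, y in zip(chosen, chosen[1:])):
            count += 1
    return count

for n in (3, 4, 5):
    assert count_faces(n, 0) == f_vector_formula(n, 0) == 3 ** (n - 1) - 1
    assert all(count_faces(n, m) == f_vector_formula(n, m) for m in range(n - 1))
    assert count_monotone_paths(n) == (2 ** (2 * n - 1) - 2) // 3
```

For $n = 4$, for example, this confirms $3^{3} - 1 = 26$ vertices and $(2^{7} - 2)/3 = 42$ monotone paths.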
The case of projections of cross-polytopes is more interesting in the sense that some monotone paths are incoherent, and no projection of an incoherent path may be coherent. Our result thus tells us that the longest coherent monotone path on a centrally symmetric polytope with $2n$ vertices has length at most $n$. Therefore, there cannot exist a centrally symmetric analogue of the Goldfarb cube from \cite{DefProds}, in which a coherent monotone path uses all of the vertices of the polytope. Understanding the structure of monotone paths on centrally symmetric polytopes could yield insight into the polynomial Hirsch conjecture (see \cite{Hirsch}), which asks for a bound on the diameter of the graph of a polytope that is polynomial in the number of facets and the dimension. This conjecture is of fundamental interest in applications due to its relationship to the run-time of the simplex method for linear programming. The same question for shortest coherent monotone paths also remains open and is connected to the study of the shadow vertex pivot rule studied in depth in \cite{borgwardt} and more recently in \cite{smoothedanalysis}. \section{Background} \label{sec:bg} Throughout this paper, we rely on a familiarity with convex polytopes at the level of \cite{zieg}. A comprehensive reference for the structure of MPPs may be found in \cite{Baues}, but this section will be sufficient to understand our results. \begin{defn} A \emph{monotone path polytope} (MPP) of a polytope $P$ with orientation induced by a linear functional $\ell: P \to \mathbb{R}$ is the fiber polytope induced by the map $\ell: P \to \ell(P)$. In particular, the monotone path polytope is given by \[MPP_{\ell}(P) = \text{conv}\left(\left\{\int_{\ell(P)} s(x) dx: s \text{ is a section of }\ell \right\}\right). \] \end{defn} Furthermore, Billera and Sturmfels showed in \cite{BSFiberPoly} that the integrals of the sections corresponding to monotone paths generate the monotone path polytope.
This observation yields a finite generating set for that infinite space of sections. From this result, we obtain a method to compute monotone path polytopes. \begin{theorem*}[Restatement of Theorem 5.3 from \cite{BSFiberPoly}] \label{thm:mppgenerator} For a linear functional $\varphi$ and polytope $P$ with vertices $p_{1}, p_{2}, \dots, p_{n}$ ordered by the linear functional, let $M$ be the set of subsets $S$ of $[n]$ such that $\{p_{s}: s \in S\}$ is a monotone path. Then, writing $S = \{i_{1} < i_{2} < \dots < i_{|S|}\}$, we have \[MPP_{\varphi}(P) = \text{conv}\left(\left\{\sum_{j=2}^{|S|} \frac{\varphi(p_{i_{j}} - p_{i_{j-1}})}{2 \varphi(p_{n} - p_{1})}(p_{i_{j-1}} + p_{i_{j}}): S \in M \right\} \right).\] \end{theorem*} The monotone paths that give rise to vertices in the resulting monotone path polytope are called coherent. Faces of the monotone path polytope correspond to coherent cellular strings, a generalization of coherent paths. \begin{defn}[Page $13$ of \cite{Baues}] \label{def:cohstring} Fix a $d$-polytope $P$ and orientation $\ell \in (\mathbb{R}^{d})^{\ast}$. A \emph{cellular string} is a sequence of faces $F_{1}, F_{2}, \dots, F_{n}$ that satisfies the following: \begin{enumerate} \item[(i)] $v_{\text{min}} \in F_{1}$ and $v_{\text{max}} \in F_{n}$, where $v_{\text{min}}$ and $v_{\text{max}}$ are the minimal and maximal vertices respectively with respect to $\ell$. \item[(ii)] $\ell$ is non-constant on each $F_{i}$. \item[(iii)] For each $i$, the $\ell$-maximizing face of $F_{i}$ is the $\ell$-minimizing face of $F_{i+1}$. \end{enumerate} A cellular string is called \emph{coherent} if there exists some linear functional $\ell' \in (\mathbb{R}^{d})^{\ast}$ such that $\bigcup_{i=1}^{n} F_{i}$ is the union over all $x \in \ell(P)$ of the $\ell'$-minimal points in the fibers $\ell^{-1}(x)$.
More concretely, a cellular string is coherent if there is a projection of the whole polytope onto a polygon, whose first coordinate is the value of $\ell$, taking the cellular string to the lower edges of that polygon. \end{defn} Using the tools we know about fiber polytopes, we may completely describe the face lattice of monotone path polytopes by identifying each face with a coherent cellular string. As an immediate consequence of Theorem $2.1$ in \cite{BSFiberPoly}, the face lattice of a monotone path polytope is equivalent to the lattice of coherent cellular strings with the partial order induced by refinement of subdivisions. We will use this connection to give a complete description of the MPPs of cross-polytopes up to combinatorial equivalence. Applying this result, the coherent monotone paths may be mapped to the lower vertices of some two-dimensional projection. Knowing this property yields a simple geometric obstruction to coherence captured by Figure \ref{fig:incoherentpath}. Namely, for any polytope $P$, any coherent monotone path $v_{1}, v_{2}, \dots, v_{n}$ on $P$ must satisfy \[\text{conv}(v_{i}, v_{j}) \cap \text{conv}\left(\bigcup_{k=i+1}^{j-1} v_{k}\right) = \emptyset,\] since the ordered lower vertices of a polygon must satisfy this condition. From this observation, we obtain a general result for coherent monotone paths on centrally symmetric polytopes. Namely, a coherent monotone path on a centrally symmetric polytope cannot contain a pair of antipodes other than its min and max, because convex hulls of distinct pairs of antipodes must intersect, as in Figure \ref{fig:incoherentpath}. \begin{figure} \caption{The left figure shows an example of an incoherent path on the octahedron. The obstruction is pictured on the right, since the path contains $-e_{3}, -e_{2}, e_{2},$ and $e_{3}$.
} \label{fig:incoherentpath} \end{figure} For cross-polytopes, this intuitive geometric obstruction, valid for any centrally symmetric polytope, turns out to be the only obstruction to coherence of monotone paths. We will generalize this observation to cellular strings in the next section. The last general fact we require is the following functorial lemma from \cite{BSFiberPoly} that allows for the computation of the monotone path polytope of a projection of a polytope. \begin{lemma*}[Lemma $2.3$ from \cite{BSFiberPoly}] \label{ProjLemma} Let $P \xrightarrow{\theta} Q \xrightarrow{\varphi} R$ be a sequence of surjective affine maps of polytopes. Then $\Sigma(Q,R) = \theta(\Sigma(P,R))$, where $\Sigma(A,B)$ denotes the fiber polytope for a projection from $A$ to $B$ for polytopes $A$ and $B$. In particular, when $\varphi$ is a linear functional, we find that $MPP_{\varphi}(Q) = \theta(MPP_{\varphi \circ \theta}(P)).$ \end{lemma*} This lemma allows for the computation of the monotone path polytope of any centrally symmetric polytope as the projection of a signohedron and makes Corollary \ref{cor:MainCS} immediate from the proof of Theorem \ref{MainTheorem1}(a). \section{Signohedra: Monotone Paths on Cross-Polytopes} \label{sec:signo} In this section, we completely describe signohedra, and equivalently the monotone path polytopes of cross-polytopes, via the proof of Theorem \ref{MainTheorem1}. To start studying the MPPs of cross-polytopes, we must first clarify what constitutes a generic orientation. Using this notion, we will fix an ordering of the vertices of the cross-polytope that will be used for all remaining computations of the MPP. \begin{lemma} \label{lem:cpgenorient} Generically, a monotone path polytope of $\diamond^{n}$ is affinely equivalent to one obtained from a linear functional with distinct positive values for each $e_{i}$.
\end{lemma} \begin{proof} The vertex-generic linear functionals on the cross-polytope are precisely those whose coefficients $a_{i}$ are nonzero with distinct absolute values. Each such functional maps $e_{i}$ to $a_{i}$, where, up to a change in indices, $|a_{1}| < |a_{2}| < \dots < |a_{n}|$. Then, by applying the reflection map taking $e_{i} \mapsto -e_{i}$ where needed, we may assume that each $a_{i}$ is positive. The cross-polytope $\diamond^{n}$ has vertices $\pm e_{i}$, so under this map, the vertices are ordered such that $-e_{i} < e_{i}$ for all $i \in [n]$ and $e_{j} < e_{k}$ for all $j < k$ in $[n]$. Hence, up to a permutation and reflection, we always obtain the same vertex ordering. Since these symmetries are linear, by Lemma $2.3$ from \cite{BSFiberPoly}, the affine isomorphism from the cross-polytope to itself induces an affine isomorphism of the monotone path polytopes. \end{proof} For cubes and simplices, all monotone paths are coherent. This property makes understanding their monotone path polytopes easier. For cross-polytopes with an orientation given by a generic linear functional, the monotone paths need not all be coherent, as noted in Section \ref{sec:bg}. The following theorem is the primary technical fact from which all of our remaining work on the characterization follows. \begin{theorem} \label{CoherenceThm} A cellular string on $\diamond^{n}$ is coherent if and only if the set of vertices that appear in the string contains only one pair of antipodes, namely the maximum and minimum pair. \end{theorem} \begin{proof} For both directions, by Lemma \ref{lem:cpgenorient}, we may assume without loss of generality that the linear functional $\ell: \mathbb{R}^{n} \to \mathbb{R}$ given by $\ell(e_{i}) = a_{i}$ satisfies $0 < a_{i} < a_{j}$ for all $i, j \in [n]$ with $i <j$. Suppose that a coherent cellular string contains $-e_{i}$ and $e_{i}$ as vertices in possibly distinct cells.
Then, by the definition of coherence, there exists a projection $\pi: \diamond^{n} \to \mathbb{R}^{2}$ that takes $-e_{i}$ and $e_{i}$ to possibly distinct lower edges of some polygon, where $\pi = \ell \times \varphi$ for some linear functional $\varphi: \mathbb{R}^{n} \to \mathbb{R}$. Since $\pi(-e_{i})$ and $\pi(e_{i})$ lie on the lower edges, and $-e_{n}$ and $e_{n}$ are minimal and maximal respectively for $\ell$, $\pi(e_{i})$ and $\pi(-e_{i})$ must lie below the line segment from $\pi(-e_{n})$ to $\pi(e_{n})$. It follows that the slope from $\pi(-e_{n})$ to $\pi(-e_{i})$ must be less than the slope from $\pi(-e_{n})$ to $\pi(e_{n})$. Similarly, the slope from $\pi(e_{i})$ to $\pi(e_{n})$ must be greater than the slope from $\pi(-e_{n})$ to $\pi(e_{n})$. Thus, we must have \[\frac{\varphi(e_{n}) - \varphi(e_{i})}{a_{n} - a_{i}} = \frac{\varphi(-e_{i}) - \varphi(-e_{n})}{-a_{i} + a_{n}} < \frac{\varphi(e_{n}) - \varphi(-e_{n})}{a_{n} + a_{n}}< \frac{\varphi(e_{n}) - \varphi(e_{i})}{a_{n} - a_{i}}, \] a contradiction. Suppose instead we have a cellular string such that the set of vertices contained in its cells has only one pair of antipodes. The vertices contained in the cellular string may be partitioned into the subsets of positive and negative basis vectors: $S_{+} \subseteq \{e_{1}, e_{2}, \dots, e_{n}\}$ and $S_{-} \subseteq \{-e_{1}, -e_{2}, \dots, -e_{n}\}$. Let $S_{0} = \{e_{i}: -e_{i}, e_{i} \notin S_{+} \cup S_{-}\}$ be the set of $e_{i}$ such that neither of $\pm e_{i}$ appears in the string. Then, by our assumption that the string contains only the single pair of antipodes $-e_{n}$ and $e_{n}$, we must have \[S_{+} \cap -S_{-} = \{e_{n}\}.\] It follows that $S_{+} \cup -S_{-} \cup S_{0} = \{e_{1}, e_{2}, \dots, e_{n}\}$, which is, in particular, linearly independent. Let $F_{1} < F_{2} < \dots < F_{k}$ denote the sequence of faces in the cellular string. Let $e_{-i} = -e_{i}$ and $a_{-i} = -a_{i}$ for $i \in [n]$.
To prove coherence, we must choose $\varphi: \mathbb{R}^{n} \to \mathbb{R}$ such that $\pi = \ell \times \varphi$ takes the endpoints of the cells of the cellular string to lower vertices of $\pi(\diamond^{n})$ and the other vertices of the cells to the interiors of the edges between those endpoints. We will construct such a choice of $\varphi$ inductively, first on the endpoints of the string, and then interpolate to find its value on the interior points. By linear independence, we may define $\varphi$ however we choose on each vertex that appears in the cellular string. Let $e_{b_{j}}$ and $e_{c_{j}}$ denote the minimal and maximal vertices of $F_{j}$. Define $\varphi(e_{n}) = 0$. Define $\varphi(e_{c_{1}})$ to be $-(a_{c_{1}}+a_{n})$. Then the slope from $(-a_{n}, \varphi(-e_{n}))$ to $(a_{c_{1}}, \varphi(e_{c_{1}}))$ will be precisely $-1$. Define $\varphi$ inductively so that the slope from $(a_{b_{j}}, \varphi(e_{b_{j}}))$ to $(a_{c_{j}}, \varphi(e_{c_{j}}))$ is $\frac{-1}{j}$ for all $1 \leq j < k$. For each remaining vertex $v$ in each $F_{j}$, define $\varphi$ so that $\pi(v)$ lies on the line segment from $\pi(e_{b_{j}})$ to $\pi(e_{c_{j}})$. Such a choice is always possible by linear independence. Finally, define $\varphi$ to be $0$ for all vertices in $S_{0}$. It remains to show that $\pi$ satisfies the properties from Definition \ref{def:cohstring}. Observe that, since $\varphi(-e_{n}) = -\varphi(e_{n}) = 0$ and the slope between consecutive endpoints of the string is negative, $\varphi$ is negative for each endpoint of the cellular string other than $-e_{n}$ and $e_{n}$. Thus, by interpolation, $\varphi$ must be negative for each vertex in the interior of a cell. For vertices $e_{i}$ not in the string, there are two cases. If $-e_{i}$ is in the string, then $\varphi(e_{i}) = - \varphi(-e_{i}) \geq 0$. Otherwise, $e_{i} \in S_{0}$, so $\varphi(e_{i}) = 0 \geq 0$. Hence, all vertices $v$ not contained in some cell of the string must satisfy $\varphi(v) \geq 0$.
Since $\varphi(-e_{n}) = \varphi(e_{n}) =0$, it follows that all vertices not in the string lie on or above the line segment from $\pi(-e_{n})$ to $\pi(e_{n})$. Thus, the lower vertices of the polygon must be some subset of the vertices contained in the string. By construction, each $F_{i}$ is mapped to an edge, and the slopes of these edges increase with $i$. These edges yield a path from $\pi(-e_{n})$ to $\pi(e_{n})$. Since the slope is increasing, this path is the graph of a piecewise linear convex function that lies below the line segment from $\pi(-e_{n})$ to $\pi(e_{n})$. It follows that the convex hull of the path has vertices $\{\pi(e_{i}): e_{i} \text{ is an endpoint of the string}\}$. Furthermore, again by construction, any other vertex in a cell of the cellular string is mapped to the interior of the edge between the endpoints of that cell. Hence, the projections of the $F_{i}$ are precisely the lower edges of the polygon, meaning that the cellular string must be coherent by definition. \end{proof} \begin{figure}\label{fig:mainproof} \end{figure} Interpreted for cellular strings corresponding to monotone paths, we have established what was suggested in Section \ref{sec:bg}. Namely, a monotone path on $\diamond^{n}$ is coherent if and only if the only antipodes it contains are the maximum and minimum pair. Note that the assumption that the linear functional is generic is necessary here. \begin{prop} For a cross-polytope and the orientation $\ell = \sum_{i=1}^{n} e_{i}^{T}$, the monotone path polytope is given by $\Delta_{n-1} -\Delta_{n-1}$. \end{prop} \begin{proof} In \cite{BSFiberPoly}, Billera and Sturmfels show that the fiber polytope is given by the Minkowski sum of the fibers over the barycenters of the cells of the subdivision induced by the projection. In this case, the subdivision induced by the projection is trivial, so the monotone path polytope is given by $\ell^{-1}(0)$.
In that case, we are taking a slice of the Cayley sum of $\Delta_{n-1}$ and $-\Delta_{n-1}$, so from either explicit computation or the Cayley trick as in \cite{Cayley}, we find that the resulting MPP is $\Delta_{n-1} - \Delta_{n-1}$. \end{proof} Here all monotone paths are given by single edges, so it is immediate that they are all coherent, unlike in the generic case. As an immediate corollary of this: \begin{cor} All monotone paths being coherent for one orientation does not imply that all monotone paths are coherent for all orientations. Furthermore, a polytope may fail to have all paths coherent for every generic orientation but still have some orientation for which all monotone paths are coherent. \end{cor} As stated in Section \ref{sec:bg}, by Theorem $2.1$ of \cite{BSFiberPoly}, the coherent paths correspond exactly to the vertices of the monotone path polytope. From this result, we may immediately compute the number of vertices. \begin{cor} \label{cor:seqbij} For a generic linear functional $\varphi$, $MPP_{\varphi}(\diamond^{n})$ has precisely $3^{n-1} -1$ vertices. In particular, they correspond to elements of $\{-1, 1, 0\}^{n-1} \setminus \{\mathbf{0}\}$. \end{cor} \begin{proof} Recall that any two non-antipodal vertices of the cross-polytope are connected by an edge. It follows from Theorem \ref{CoherenceThm} that the coherent monotone paths consist of a choice of $e_{i}$, $-e_{i}$, or neither to include in our sequence of vertices for each $i \in [n-1]$. That gives $3^{n-1}$ possible choices. Since $-e_{n}$ and $e_{n}$ are antipodal and hence not joined by an edge, we have to include at least $1$ intermediate vertex, so $\diamond^{n}$ has precisely $3^{n-1}-1$ coherent monotone paths. Since the vertices of $MPP_{\varphi}(\diamond^{n})$ correspond to coherent monotone paths, $MPP_{\varphi}(\diamond^{n})$ has precisely $3^{n-1}-1$ vertices. \end{proof} We may strengthen this result to find explicit vertices by computing the average value of each coherent monotone path.
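Before turning to the proof of part (a), it may help to see the generating formula of Theorem 5.3 in action. The sketch below (in Python; the helper names and the generic weights $a = (1, 2, 4)$ are our own arbitrary choices) evaluates that formula on every coherent monotone path of the octahedron $\diamond^{3}$, as characterized by Theorem \ref{CoherenceThm}, and confirms that the $3^{2} - 1 = 8$ resulting points are distinct.

```python
from itertools import product

n = 3
a = {1: 1.0, 2: 2.0, 3: 4.0}   # an arbitrary generic choice 0 < a_1 < a_2 < a_3

def e(i, sign=1):
    # the vertex sign * e_i of the cross-polytope, as a coordinate list
    v = [0.0] * n
    v[i - 1] = float(sign)
    return v

def phi(v):
    # the orientation with phi(e_i) = a_i
    return sum(a[i + 1] * v[i] for i in range(n))

def mpp_vertex(path):
    # Billera-Sturmfels formula: sum over edges (v, w) of the path of
    #   phi(w - v) / (2 phi(p_max - p_min)) * (v + w)
    denom = 2 * (phi(e(n)) - phi(e(n, -1)))   # = 4 a_n
    out = [0.0] * n
    for v, w in zip(path, path[1:]):
        c = (phi(w) - phi(v)) / denom
        for t in range(n):
            out[t] += c * (v[t] + w[t])
    return tuple(round(x, 9) for x in out)

# coherent paths: -e_n, then for each i < n one of e_i, -e_i, or neither
# (with at least one intermediate vertex in total), then e_n
vertices = set()
for signs in product((-1, 0, 1), repeat=n - 1):
    if not any(signs):
        continue
    mid = [e(i + 1, s) for i, s in enumerate(signs) if s]
    mid.sort(key=phi)                          # order the path monotonically
    vertices.add(mpp_vertex([e(n, -1)] + mid + [e(n)]))

assert len(vertices) == 3 ** (n - 1) - 1       # 8 distinct vertices for n = 3
```

For instance, the path $-e_{3}, e_{1}, e_{3}$ yields the point $(1/2,\, 0,\, -1/8)$ with these weights.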
\begin{proof}[Proof of Theorem \ref{MainTheorem1}(a)] The result is immediate from the characterization of coherent monotone paths in Theorem \ref{CoherenceThm} in combination with Theorem $5.3$ from \cite{BSFiberPoly}. \end{proof} At this point, we may prove Corollary \ref{cor:MainCS}. \begin{proof}[Proof of Corollary \ref{cor:MainCS}] Apply Lemma $2.3$ from \cite{BSFiberPoly}, as stated in Section \ref{sec:bg}, to the result of Theorem \ref{MainTheorem1}(a). \end{proof} While more involved, we may similarly provide an explicit facet-based description of the polytope from our proof of Theorem \ref{CoherenceThm}. \begin{proof}[Proof of Theorem \ref{MainTheorem1}(b)] To find the facet-defining relations for the MPP, we follow the method outlined by Ziegler in Section $9.1$ of \cite{zieg}. Namely, the facet-defining inequalities are obtained from the linear functionals that yield the polygons whose lower edges correspond to maximal coherent cellular strings. That is, for such a linear functional $\varphi$, the inequality is given by $\varphi(x) \geq \varphi(v)$, where $v$ is the vertex minimizing $\varphi$ on the polygon corresponding to the maximal coherent cellular string. From the combinatorial characterization in Theorem \ref{MainTheorem1}(c), the facets correspond precisely to the maximal intervals in the sign poset. Maximal intervals are obtained by starting with a choice of a separating vertex $e_{i}$ and choosing a maximal length monotone path through $e_{i}$. In the notation from the proof of Theorem \ref{CoherenceThm}, these are precisely the subdivisions corresponding to $F_{1} < F_{2}$ such that $\pm F_{1} \cup \pm F_{2} = \{k:k \in \pm [n-1]\}$, where we remove $e_{n}$ from $F_{2}$ and $-e_{n}$ from $F_{1}$ and identify each $e_{i}$ with $i$.
To obtain a lifting functional, following the proof of Theorem \ref{CoherenceThm}, we define \[\varphi(e_{n}) = 0 \text{ and } \varphi(e_{i}) = -a_{i} - a_{n}.\] Then, for the remaining vertices in $F_{1}$, we linearly interpolate between $0$ and $-a_{i} - a_{n}$. For the vertices in $F_{2}$, we provide a similar interpolation. In total, \[\varphi(e_{k}) = \begin{cases} -a_{k} - a_{n} \text{ if } k \in F_{1} \\ \frac{a_{i} + a_{n}}{a_{n} - a_{i}}(a_{k} - a_{n}) \text{ if } k \in F_{2}\\ 0 \text{ if } k = n\end{cases}.\] We denote any functional of this form by $\varphi_{i, \varepsilon}$, where $i \in \pm [n-1]$ is the choice of the splitting vertex, and $\varepsilon: [n-1] \to \{\pm 1\}$ denotes the sign sequence of the vertices. Then $F_{1} = \{k: \varepsilon(k)k \leq i\}$ and $F_{2} = \{k: \varepsilon(k)k \geq i\}$, which yields the result. \end{proof} We now have an explicit description of our polytope in terms of both vertices and facets. We may take this description a step further and use our characterization of coherent cellular strings to obtain a complete characterization of the face lattice of the MPP of $\diamond^{n}$ in terms of the sign poset on $\{+, - ,0\}^{n-1} \setminus \{\mathbf{0}\}$. \begin{figure} \caption{A plot of $(C_{3} + \diamond^{3})^{\Delta}$ made using \cite{sage}. By Theorem \ref{MainTheorem1}(d), any MPP of $\diamond^{4}$ for a generic orientation is combinatorially equivalent to the pictured polytope.} \label{fig:cpmpp} \end{figure} \begin{proof}[Proof of Theorem \ref{MainTheorem1}(c)] By Theorem $2.1$ of \cite{BSFiberPoly}, the face lattice of the monotone path polytope corresponds to the poset of coherent cellular strings under refinement.
Note that a coherent cellular string is uniquely determined by two pieces of data: its endpoints and the vertices included in the cells between the endpoints. These two pieces of data may be interpreted as two monotone paths: the one with vertex set given by the set of endpoints, $E$, of the cellular string, and the one given by $V$, the set of vertices that appear anywhere in the cellular string. Clearly $E \subseteq V$, which translates via the bijection to $V$ and $E$ being comparable. Furthermore, the monotone paths refining the cellular string correspond exactly to the vertex sets $S$ such that $E \subseteq S \subseteq V$. Hence, the partial order of inclusion of faces is, via this bijection, equivalent to the partial order given by inclusion of intervals. Note that each interval $I = [a, b]$ in $\{0,-1,1\}^{n-1} \setminus \{\mathbf{0}\}$ is Boolean. Furthermore, $\text{conv}([a,b])$ is isomorphic to $\text{conv}([a,b] - a) = \text{conv}([\mathbf{0}, b-a])$, where $\mathbf{0}$ is adjoined in the natural way to the partial order. Let $k$ be the length of the interval. Then $b-a$ has precisely $k$ nonzero entries, each equal to $\pm 1$. Let $A$ be the linear map defined by $A(e_{i}) = \sigma(i)e_{i}$, where $\sigma(i)$ denotes the sign of the $i$-th entry of $b-a$. Let $S = \{i \in [n-1]: \sigma(i) \neq 0\}$. Then $A([0,b-a]) = \left[0, \sum_{s \in S} e_{s} \right]$. By construction, $A$ is then an isometry taking $\text{conv}(I)$ to a unit cube. Hence, each interval corresponds to a unit cube of dimension $\leq n-2$ with vertices contained in $\{\pm 1, 0\}^{n-1} \setminus \{\mathbf{0}\}$. The converse is similar, since each such unit cube has vertices corresponding to an interval in the poset $\{+,-,0\}^{n-1} \setminus \{\mathbf{0}\}$. \end{proof} The next step is to show that this combinatorial type has a nice representative given by $(C_{n-1} + \diamond^{n-1})^{\Delta}$.
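The sign-vector encoding just established also makes the graph of the MPP small enough to examine directly. Anticipating part (f) of Theorem \ref{MainTheorem1}, the following sketch (in Python; helper names are ours) builds the graph on $\{-1,0,1\}^{n-1} \setminus \{\mathbf{0}\}$ with edges between vectors at taxicab distance $1$ and verifies by breadth-first search that its diameter is $2(n-1)$ for small $n$.

```python
from itertools import product
from collections import deque

def mpp_graph(n):
    # vertices: sign vectors in {-1,0,1}^{n-1} minus the origin (part (c));
    # edges: pairs at taxicab (L1) distance 1 (part (f))
    verts = [v for v in product((-1, 0, 1), repeat=n - 1) if any(v)]
    adj = {v: [] for v in verts}
    for v in verts:
        for w in verts:
            if sum(abs(x - y) for x, y in zip(v, w)) == 1:
                adj[v].append(w)
    return adj

def diameter(adj):
    # maximum eccentricity over all vertices, via breadth-first search
    best = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        assert len(dist) == len(adj)      # the graph is connected
        best = max(best, max(dist.values()))
    return best

for n in (3, 4, 5):
    assert diameter(mpp_graph(n)) == 2 * (n - 1)
```

Note that some pairs, such as $(1,0,\dots,0)$ and $(-1,0,\dots,0)$, are closer in the taxicab metric than in the graph, because the shortcut through the origin is not available; the diameter is nonetheless attained only by the all-$+1$ and all-$-1$ vectors.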
\begin{proof}[Proof of Theorem \ref{MainTheorem1}(d)] Note that the vertices of $C_{n} + \diamond^{n}$ are obtained precisely by adding the unique vertices of $C_{n}$ and $\diamond^{n}$ maximizing a common generic linear functional $\ell$. The maximizer on $\diamond^{n}$ is the vertex corresponding to the coordinate of $\ell$ of maximum absolute value, together with the corresponding sign, and the maximizer on $C_{n}$ is the vertex with $1$'s in the coordinates where $\ell$ is positive and $-1$'s where it is negative. The coordinate of maximum absolute value could be either positive or negative. The resulting polytope has vertices $\left\{B\left(2e_{1} + \sum_{i=2}^{n} e_{i}\right): B \in B_{n}\right\}$, where $B_{n}$ is the set of signed permutation matrices. Let $S \in \{+, -,0\}^{n} \setminus \{\mathbf{0}\}$. Then $S$ induces a partition of $[n]$ into $S_{+} \cup S_{-} \cup S_{0}$, and there is a naturally associated linear functional $\varphi_{S}$ given by $\sum_{i \in S_{+}} e_{i}^{T} - \sum_{j \in S_{-}} e_{j}^{T}.$ The vertices maximizing $\varphi_{S}$ are precisely \[\{\text{sign}(k)e_{k}: k \in S_{+} \cup S_{-}\} + \left\{\sum_{a \in S_{0}} \varepsilon(a) e_{a}: \varepsilon: S_{0} \to \{\pm 1\}\right\} + \sum_{k \in S_{+} \cup S_{-}} \text{sign}(k)e_{k}, \] where $\text{sign}(k)$ is $1$ if $k \in S_{+}$ and $-1$ if $k \in S_{-}$. These vectors span the affine hyperplane given by \[\sum_{i \in S_{+}} x_{i} - \sum_{j \in S_{-}} x_{j} = |S_{+}| + |S_{-}| + 1.\] Thus, each sign vector $S$ corresponds to a facet of the resulting polytope. Let $\varphi$ be a linear functional. Then any vertex maximized by $\varphi$ would also be maximized by the linear functional with the same sign pattern. Hence, the sign vectors induce all of the facets of $C_{n} + \diamond^{n}$, which gives us a polyhedral formulation of this polytope. Namely, for any partition $S_{+} \cup S_{-} \cup S_{0}$ of $[n]$, we must have \[\frac{1}{|S_{+}| + |S_{-}| + 1}\left(\sum_{i \in S_{+}} x_{i} - \sum_{j \in S_{-}} x_{j}\right) \leq 1.\] These are precisely the relations given by maximizing the sign functionals.
Observe that if a sign vector contains $0$'s, then any choice of $\pm e_{i}$ for $i \in S_{0}$ is allowable for a vertex in the corresponding facet. Furthermore, if two facets have conflicting positive or negative sets, then they cannot intersect, since each facet imposes the sign of $e_{i}$, for $i \in S_{+} \cup S_{-}$, for any vertex it contains. It follows that two facets intersect if and only if their corresponding sign vectors are comparable. In particular, $m$-dimensional faces correspond exactly to intervals of length $n-m$ in the sign poset. It follows that the face lattice of this polytope is isomorphic to the lattice of intervals of the sign poset under reverse inclusion. Hence, $(C_{n-1} + \diamond^{n-1})^{\Delta}$ and $\text{MPP}_{\varphi}(\diamond^{n})$ are combinatorially equivalent. \end{proof} An advantage of this representation is that we may read off the vertex for a sign vector as \[\frac{1}{|S_{+}| + |S_{-}|+1} \left(\sum_{i \in S_{+}} e_{i} - \sum_{j \in S_{-}} e_{j} \right). \] One may appeal to the theory of anti-prisms for an alternative proof of Theorem \ref{MainTheorem1}(d). Namely, for perfectly centered polytopes, Bj\"{o}rner showed in \cite{bjornerantiprism} that the lattice of intervals in the face lattice of a polytope $P$ under inclusion is isomorphic to the face lattice of $(P + P^{\Delta})^{\Delta}$. Combining this result with the characterization of the face lattice, we also now have a combinatorial framework for calculating the $f$-vector. \begin{proof}[Proof of Theorem \ref{MainTheorem1}(e)] From Theorem \ref{MainTheorem1}(c), faces correspond exactly to elements of the poset of intervals in the sign poset. Namely, they are identified precisely by pairs of comparable elements of the poset. An $m$-face corresponds to a pair of elements at distance $m$ from each other. That is, one element has $k$ nonzero entries, and the other has $k+m$ nonzero entries, including the $k$ nonzero entries of the first.
Thus, the faces correspond to flags of length two of subsets of $[n-1]$, of sizes $k$ and $k + m$, counted by $\binom{n-1}{k,m, n-k-m-1}$, together with $2^{k+m}$ choices of signs. Then we have \[f_{m}(MPP_{\varphi}(\diamond^{n})) = \sum_{k=1}^{n-m-1} \binom{n-1}{k,m, n-k-m-1} 2^{k+m}. \] \end{proof} Again by Theorem $2.1$ of \cite{BSFiberPoly}, edges in the monotone path polytope correspond to coherent cellular strings refined by exactly two coherent monotone paths. Geometrically, we may interpret this refinement as two monotone paths agreeing everywhere except on a single two-dimensional face. In the sense of the flip graph, this interpretation means that the two monotone paths differ by a polygonal flip. From this observation, we obtain the following lemma. \begin{lemma} Two vertices in the $MPP$ of $\diamond^{n}$ are adjacent if and only if their corresponding vectors are at distance one from one another in the taxicab metric. \end{lemma} \begin{proof} Recall that $\diamond^{n}$ is simplicial. It follows that a polygonal flip either deletes a single vertex from a path or adds a single new vertex. Deleting or adding a vertex corresponds to changing a $1$ or $-1$ to a $0$, or a $0$ to a $1$ or $-1$, in the sequence bijection from Corollary \ref{cor:seqbij}. The connectivity of $\diamond^{n}$ allows us to perform this operation on any element of the sequence. Two sequences of $1$'s, $0$'s, and $-1$'s are at distance $1$ in the taxicab metric if and only if they agree on all but one entry, in which one is a $0$ and the other is a $-1$ or $1$, which yields the result.
\end{proof} Via explicit computation, we may then compute the diameter of the $MPP$ of $\diamond^{n}$: \begin{proof}[Proof of Theorem \ref{MainTheorem1}(f)] By the triangle inequality, \[\sup_{x,y \in \text{Verts}(MPP(\diamond^{n}))} |x-y|_{1} \geq \left|\sum_{i=1}^{n-1} e_{i} - \sum_{i=1}^{n-1} (-e_{i})\right|_{1} = 2(n-1).\] Since each step along an edge can change the distance by at most one, the diameter is at least $2(n-1)$. That it is at most $2(n-1)$ may be seen by taking two vectors and changing the coordinates by $\pm 1$, one step at a time, until one vector equals the other. Each coordinate change represents a single step, and the path requires at most $2(n-1)$ coordinate changes. The only detail is avoiding the origin, which is easily arranged. \end{proof} Now that we understand the graph of the $MPP$ of $\diamond^{n}$, to obtain a more complete description of the space of monotone paths, we will describe the flip graph, which has as its vertices the set of all monotone paths on a polytope, with edges given by polygon flips. We then enumerate its vertices and compute its diameter. \begin{proof}[Proof of Theorem \ref{MainTheorem1}(g)] A monotone path corresponds to a subsequence $s_{1}, s_{2}, \dots, s_{m}$ of \[(-e_{n-1}, -e_{n-2}, \dots, -e_{1}, e_{1}, e_{2}, \dots, e_{n-1})\] such that $s_{k} + s_{k+1} \neq 0$, since every vertex is connected to all vertices other than its antipode. There are $2^{2(n-1)}-1$ non-empty subsets of $\{ e_{i} : i \in \{\pm 1, \pm 2, \dots, \pm (n-1)\}\}$. Then $2^{2(n-2)}$ of those subsets include $-e_{1}, e_{1}$; $2^{2(n-3)}$ include $-e_{2}, e_{2}$ but neither of $-e_{1}, e_{1}$; and in general $2^{2(n-1-k)}$ include $-e_{k}, e_{k}$ but none of the $e_{j}$ with $|j| < |k|$.
Hence, as one may verify via a geometric series, the resulting number of possible sequences is \[2^{2(n-1)}-1 - \sum_{k=1}^{n-1} 2^{2(n-k-1)} = \frac{2^{2n-1} - 2}{3}.\] For the diameter of the flip graph, since $\diamond^{n}$ is simplicial, two monotone paths are adjacent if and only if they differ by the addition or removal of a single vertex. A vertex may only be added or removed if doing so does not introduce consecutive antipodal points. Note that, given these restrictions, the distance between the path given by $e_{1}$ and the path $(-e_{n-1}, -e_{n-2}, \dots, -e_{1}, e_{2}, e_{3}, \dots, e_{n-1})$ is at least $2(n-1)$. By first removing all negative vertices, starting with $-e_{n-1}$ and ending with $-e_{1}$, then adding $e_{1}$, and finally removing $e_{2}$ through $e_{n-1}$, we achieve a sequence of flips between these paths of length precisely $2(n-1)$. Hence, the distance between those paths is precisely $2(n-1)$. Let $s = (e_{s_{i}})$ and $t = (e_{t_{j}})$ be two different paths. Let $s_{-}$ and $s_{+}$ denote the maximal negative element and minimal positive element of $s$ respectively. Define $t_{-}$ and $t_{+}$ similarly. If $s_{-} = t_{-}$ and $s_{+} = t_{+}$, we may go from $s$ to $t$ by adding the elements of $t$ that are not in $s$ and removing the elements of $s$ that are not in $t$. Such a path will have length at most $2(n-2)$. If $s_{-} < t_{-}$ and $s_{+} = t_{+}$, then we may add $t_{-}$ to $s$ and follow the same strategy. This results in a sequence of moves of length at most $2(n-2) + 1$. A similar idea works for any of the possible cases in which $s_{-} = t_{-}$ or $s_{+} = t_{+}$. Suppose that $s_{-} < t_{-}$ and $s_{+} < t_{+}$. If $t_{-} \neq - s_{+}$, we may add $t_{-}$ to the list $s$. Then we may follow the same strategy for the remaining list, keeping $s_{+}$ and $t_{-}$. Then, at the end, we remove $s_{+}$. The result takes at most $2(n-2)+2 \leq 2(n-1)$ moves.
Suppose instead that $s_{-} < t_{-} = -s_{+} < s_{+} < t_{+}$. First modify all elements of $s$ greater than $s_{+}$ to agree with $t$. If $t_{+} = -s_{-}$, remove $t_{+}$. Otherwise, leave it. In both cases, add $t_{-}$, add $t_{+}$ back, and modify all elements $< t_{-}$ to agree with $t$. The result takes $\leq 2(n-2)- 1+3 = 2(n-1)$ moves, since $e_{1} < t_{+}$. The only remaining case is that $s_{-} < t_{-}$ and $s_{+} > t_{+}$. In that case, add $t_{-}$ and $t_{+}$ to $s$ and make the required changes. The result takes fewer than $2(n-2) + 2$ moves. Hence, the diameter of the flip graph is $2(n-1)$. The longest distance from a monotone path to the nearest coherent path may be computed similarly. Start with a path $s$, with $s_{-}$ and $s_{+}$ defined as before. If $s_{+} < - s_{-}$, remove all parts of antipodal pairs after $s_{+}$. Otherwise, remove all parts of antipodal pairs before $s_{-}$. Since there are at most $n-2$ elements before $s_{-}$ or after $s_{+}$, the distance to the nearest coherent path is at most $n-2$. This bound is attained for the path $(-e_{n-1}, -e_{n-2}, \dots, -e_{1}, e_{2}, e_{3}, \dots, e_{n-1})$. \end{proof} Thus, the total number of monotone paths grows at a rate of $\Theta(4^{n})$, which is exponentially faster than the growth rate of the number of coherent monotone paths, which grows at a rate of $\Theta(3^{n})$. This last proof concludes our description of the structure of monotone paths on the cross-polytopes and our proof of Theorem \ref{MainTheorem1}. \section*{Acknowledgments} We are grateful to Lionel Pournin, Raman Sanyal, and Bernd Sturmfels for useful comments and support. The authors gratefully acknowledge partial support from NSF grant DMS-1818969. \end{document}
\begin{document} \title{Decoherence allows quantum theory to describe the use of itself} \author{Armando Rela\~{n}o} \affiliation{Departamento de Estructura de la Materia, F\'{\i}sica T\'ermica y Electr\'onica, and GISC, Universidad Complutense de Madrid, Av. Complutense s/n, 28040 Madrid, Spain} \email{armando.relano@fis.ucm.es} \begin{abstract} We show that the quantum description of measurement based on decoherence fixes the bug in quantum theory discussed in [D. Frauchiger and R. Renner, {\em Quantum theory cannot consistently describe the use of itself}, Nat. Comm. {\bf 9}, 3711 (2018)]. Assuming that the outcome of a measurement is determined by environment-induced superselection rules, we prove that different agents acting on a particular system always reach the same conclusions about its actual state. \end{abstract} \maketitle \section{I. Introduction} In \cite{Renner:18} Frauchiger and Renner propose a Gedankenexperiment to show that quantum theory is not fully consistent. The setup consists of an entangled system and a set of fully compatible measurements, from which four different agents infer contradictory conclusions. The key point of their argument is that all these conclusions are obtained from {\em certain} results, free of any quantum ambiguity: {\em 'In the argument presented here, the agents' conclusions are all restricted to supposedly unproblematic ``classical'' cases.'} \cite{Renner:18} The goal of this paper is to show that this statement is not true, at least if ``classical'' states arise from quantum mechanics as a consequence of environment-induced superselection rules. These rules are the trademark of the decoherence interpretation of the quantum origins of the classical world.
As discussed in \cite{Zurek:03}, a quantum measurement understood as a perfect correlation between a system and a measuring apparatus suffers from a number of ambiguities, which only disappear after a further interaction with a large environment ---if this interaction does not occur, the agent performing the measurement cannot be certain about the real state of the measured system. The main conclusion of this paper is that the contradictory conclusions discussed in \cite{Renner:18} disappear when the role of the environment in a quantum measurement is properly taken into account. In particular, we show that the considered cases only become ``classical'' after the action of the environment, and that this action removes all the contradictory conclusions. The paper is organized as follows. In Sec. II we review the Gedankenexperiment proposed in \cite{Renner:18}. In Sec. IIIA, we review the consequences of understanding a quantum measurement just as a perfect correlation between a system and a measuring apparatus; this discussion is based on \cite{Zurek:03,Zurek:81}. In Sec. IIIB, we re-interpret the Gedankenexperiment taking into account the conclusions of Sec. IIIA. In particular, we show that the contradictory conclusions obtained by the four measuring agents disappear due to the role of the environment. In Sec. IV we summarize our main conclusions. \section{II. The Gedankenexperiment} This section consists of a review of the Gedankenexperiment proposed in \cite{Renner:18}. The reader familiar with it can jump to section III. \subsection{A. Description of the setup} The Gedankenexperiment \cite{Renner:18} starts with an initial state in which a {\em quantum coin} $R$ is entangled with a $1/2$-spin $S$.
A spanning set of the quantum coin Hilbert space is $\left\{ \ensuremath{\ket{\text{head}}_R}, \ensuremath{\ket{\text{tail}}_R} \right\}$; for the $1/2$ spin we can use the usual basis, $\left\{ \ensuremath{\ket{\uparrow}_S}, \ensuremath{\ket{\downarrow}_S} \right\}$. The experiment starts from the following state: \begin{equation} \ensuremath{\ket{\text{init}}} = \coef{1}{3} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\rightarrow}_S}. \label{eq:inicial} \end{equation} From this state, four different agents, $W$, $F$, $\overline{W}$, and $\overline{F}$, perform different measurements. All these measurements are represented by unitary operators that correlate different parts of the system with their apparatus. Relying on the Born rule, they infer conclusions only from {\em certain} results ---results with probability $p=1$. These conclusions appear to be contradictory. To interpret the results of measurements on the initial state, Eq. (\ref{eq:inicial}), it is useful to rewrite it as different superpositions of linearly independent vectors, that is, by means of different orthonormal bases. As is pointed out in \cite{Zurek:81,Zurek:03}, this procedure suffers from what is called basis ambiguity ---due to the superposition principle, different bases entail different correlations between the different parts of the system. This problem is especially important when all the coefficients of the linear combination are equal \cite{Elby:94}; however, it is not restricted to this case. We give here four different possibilities for the initial state given by Eq. (\ref{eq:inicial}): \begin{equation} \ensuremath{\ket{\text{init}}}_{(1)} = \coef{1}{3} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S} + \coef{1}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\downarrow}_S} + \coef{1}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\uparrow}_S}.
\label{eq:uno} \end{equation} (The term in $\ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\uparrow}_S}$ does not show up, because its probability in this state is zero). \begin{equation} \ensuremath{\ket{\text{init}}}_{(2)} = \coef{1}{6} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\rightarrow}_S} - \coef{1}{6} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\leftarrow}_S} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\rightarrow}_S}. \label{eq:dos} \end{equation} (Again, the probability of the term in $\ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\leftarrow}_S}$ is zero). \begin{equation} \ensuremath{\ket{\text{init}}}_{(3)} = \coef{2}{3} \ensuremath{\ket{\text{h+t}}_R} \ensuremath{\ket{\downarrow}_S} + \coef{1}{6} \ensuremath{\ket{\text{h+t}}_R} \ensuremath{\ket{\uparrow}_S} - \coef{1}{6} \ensuremath{\ket{\text{h-t}}_R} \ensuremath{\ket{\uparrow}_S}. \label{eq:tres} \end{equation} (And again, the probability of the term $\ensuremath{\ket{\text{h-t}}_R} \ensuremath{\ket{\downarrow}_S}$ is zero). \begin{equation} \ensuremath{\ket{\text{init}}}_{(4)} = \coef{3}{4} \ensuremath{\ket{\text{h+t}}_R} \ensuremath{\ket{\rightarrow}_S} - \coef{1}{12} \ensuremath{\ket{\text{h+t}}_R} \ensuremath{\ket{\leftarrow}_S} - \coef{1}{12} \ensuremath{\ket{\text{h-t}}_R} \ensuremath{\ket{\rightarrow}_S} - \coef{1}{12} \ensuremath{\ket{\text{h-t}}_R} \ensuremath{\ket{\leftarrow}_S}. \label{eq:cuatro} \end{equation} It is worth remarking that $\ensuremath{\ket{\text{init}}}_{(1)}$, $\ensuremath{\ket{\text{init}}}_{(2)}$, $\ensuremath{\ket{\text{init}}}_{(3)}$ and $\ensuremath{\ket{\text{init}}}_{(4)}$ are all just different decompositions of the very same state, $\ensuremath{\ket{\text{init}}}$.
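These four decompositions are straightforward to check numerically. The following sketch ---our own verification, not part of the original argument, with the coin and spin encoded as qubits and the auxiliary states defined as in the next paragraph--- confirms that Eqs. (\ref{eq:uno})-(\ref{eq:cuatro}) all describe the state of Eq. (\ref{eq:inicial}):

```python
import numpy as np

s = np.sqrt  # shorthand for square roots of the rational coefficients

# Coin basis |head>, |tail> and spin basis |up>, |down>
head, tail = np.array([1.0, 0.0]), np.array([0.0, 1.0])
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Derived states used in the alternative decompositions
right, left = (up + down) / s(2), (up - down) / s(2)
hpt, hmt = (head + tail) / s(2), (head - tail) / s(2)

kron = np.kron
init = s(1/3) * kron(head, down) + s(2/3) * kron(tail, right)

init1 = s(1/3) * (kron(head, down) + kron(tail, down) + kron(tail, up))
init2 = (s(1/6) * kron(head, right) - s(1/6) * kron(head, left)
         + s(2/3) * kron(tail, right))
init3 = (s(2/3) * kron(hpt, down) + s(1/6) * kron(hpt, up)
         - s(1/6) * kron(hmt, up))
init4 = (s(3/4) * kron(hpt, right) - s(1/12) * kron(hpt, left)
         - s(1/12) * kron(hmt, right) - s(1/12) * kron(hmt, left))

# All four decompositions are the same vector
for v in (init1, init2, init3, init4):
    assert np.allclose(v, init)

# The amplitude whose square gives the probability 1/12 discussed later
assert np.isclose(np.kron(hmt, left) @ init, -s(1/12))
```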
In the equations above we have used the following notation: \begin{eqnarray} \ensuremath{\ket{\rightarrow}_S} &=& \coef{1}{2} \ensuremath{\ket{\uparrow}_S} + \coef{1}{2} \ensuremath{\ket{\downarrow}_S}, \\ \ensuremath{\ket{\leftarrow}_S} &=& \coef{1}{2} \ensuremath{\ket{\uparrow}_S} - \coef{1}{2} \ensuremath{\ket{\downarrow}_S}, \\ \ensuremath{\ket{\text{h+t}}_R} &=& \coef{1}{2} \ensuremath{\ket{\text{head}}_R} + \coef{1}{2} \ensuremath{\ket{\text{tail}}_R}, \\ \ensuremath{\ket{\text{h-t}}_R} &=& \coef{1}{2} \ensuremath{\ket{\text{head}}_R} - \coef{1}{2} \ensuremath{\ket{\text{tail}}_R}. \end{eqnarray} All the statements that the four agents make in this Gedankenexperiment are based on different measurements performed on the initial state, given by Eq. (\ref{eq:inicial}); their results are easily interpreted relying on Eqs. (\ref{eq:uno})-(\ref{eq:cuatro}). The procedure is designed to not perform two incompatible measurements. That is, each agent works on a different part of the setup, so the wave-function collapse after each measurement does not interfere with the next one. As a consequence of this, each agent can infer the conclusions obtained by the others, just by reasoning from their own measurements. To structure the interpretation of the Gedankenexperiment, we consider the following hypothesis for the measuring protocol: \begin{center} \ovalbox{\parbox{\columnwidth-2\fboxsep-2\fboxrule-\shadowsize}{ \centering \begin{hipotesis}[Measurement procedure] \texttt{} To perform a measurement, an initial state in which the system, $S$, and the apparatus, $A$, are uncorrelated, $\ket{\psi} = \ket{s} \otimes \ket{a}$, is transformed into a correlated state, $\ket{\psi'} = \sum_i c_i \ket{s_i} \otimes \ket{a_i}$, by the action of a unitary operator $U^{M}$. We assume that both $\left\{ \ket{s_i} \right\}$ and $\left\{ \ket{a_i} \right\}$ are linearly independent. 
Therefore, if the outcome of a measurement is $\ket{a_j}$, then the agent can safely conclude that the system is in the state $\ket{s_j}$. \end{hipotesis} }} \end{center} \subsection{B. Development of the experiment} Equations (\ref{eq:uno})-(\ref{eq:cuatro}) provide four different possibilities to establish a correlation between the system and the apparatus. Each of the four agents involved in the Gedankenexperiment works with one of them. A summary of the main results follows; more details are given in \cite{Renner:18}. {\bf Measurement 1.-} Agent $\overline{F}$ measures the state of the quantum coin $R$ in the basis $\left\{ \ensuremath{\ket{\text{head}}_R}, \ensuremath{\ket{\text{tail}}_R} \right\}$. According to hypothesis 1 above, this statement is based on the following facts. Agent $\overline{F}$ starts from Eq. (\ref{eq:dos}). Then, she performs a measurement by means of a unitary operator that correlates the quantum coin and the apparatus in the following way \begin{equation} \left( c_1 \ensuremath{\ket{\text{head}}_R} + c_2 \ensuremath{\ket{\text{tail}}_R} \right) \otimes \ket{\overline{F}_0} \longrightarrow c_1 \ensuremath{\ket{\text{head}}_R} \ket{\overline{F}_1} + c_2 \ensuremath{\ket{\text{tail}}_R} \ket{\overline{F}_2}. \label{eq:medida1} \end{equation} That is, for any initial state of the coin, $\ket{R} = c_1 \ensuremath{\ket{\text{head}}_R} + c_2 \ensuremath{\ket{\text{tail}}_R}$, the state $\ket{\overline{F}_1}$ of the apparatus becomes perfectly correlated with $\ensuremath{\ket{\text{head}}_R}$, and the state $\ket{\overline{F}_2}$ becomes perfectly correlated with $\ensuremath{\ket{\text{tail}}_R}$. This procedure is perfect if $\braket{\overline{F}_1}{\overline{F}_2}=0$, but this condition is not necessary to distinguish between the two possible outcomes. Since the same protocol must be valid for any initial state, the only constraint on the coefficients $c_1$ and $c_2$ is $\left| c_1 \right|^2 + \left| c_2 \right|^2 = 1$.
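As an illustration ---this concrete realization is ours, not taken from \cite{Renner:18}--- the correlating map of Eq. (\ref{eq:medida1}) can be implemented as a controlled-NOT on a two-level apparatus, with the (legitimate) choice $\ket{\overline{F}_0} = \ket{\overline{F}_1} = \ket{0}$ and $\ket{\overline{F}_2} = \ket{1}$:

```python
import numpy as np

s = np.sqrt
head, tail = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# Apparatus states: F0 = F1 = |0>, F2 = |1> (one possible choice)
F0, F1, F2 = np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Controlled-NOT: flip the apparatus qubit when the coin is |tail>
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
U = np.kron(np.outer(head, head), I2) + np.kron(np.outer(tail, tail), X)
assert np.allclose(U @ U.conj().T, np.eye(4))  # U is unitary

# Works for any amplitudes with |c1|^2 + |c2|^2 = 1
c1, c2 = s(1/3), s(2/3)
out = U @ np.kron(c1 * head + c2 * tail, F0)
expected = c1 * np.kron(head, F1) + c2 * np.kron(tail, F2)
assert np.allclose(out, expected)
```

Here $\braket{\overline{F}_1}{\overline{F}_2}=0$, so the pre-measurement is perfect in the sense described above.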
As a consequence of this, the measurement performed by agent $\overline{F}$ consists of \begin{equation} \begin{split} &\left( \coef{1}{6} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\rightarrow}_S} - \coef{1}{6} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\leftarrow}_S} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\rightarrow}_S} \right) \otimes \ket{\overline{F}_0} \longrightarrow \\ &\longrightarrow \coef{1}{6} \ensuremath{\ket{\text{head}}_R} \ket{\overline{F}_1} \ensuremath{\ket{\rightarrow}_S} - \coef{1}{6} \ensuremath{\ket{\text{head}}_R} \ket{\overline{F}_1} \ensuremath{\ket{\leftarrow}_S} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \ket{\overline{F}_2} \ensuremath{\ket{\rightarrow}_S}. \end{split} \end{equation} Furthermore, the quantum coin together with the agent $\overline{F}$ become the laboratory $\overline{L}$: \begin{eqnarray} \ensuremath{\ket{\text{head}}_R} \otimes \ket{\overline{F}_1} &\equiv& \ket{h}_{\overline{L}}, \\ \ensuremath{\ket{\text{tail}}_R} \otimes \ket{\overline{F}_2} &\equiv& \ket{t}_{\overline{L}}, \end{eqnarray} and therefore the state of the whole system becomes \begin{equation} \ensuremath{\ket{\text{init}}}_{(2)} = \coef{1}{6} \ket{h}_{\overline{L}} \ensuremath{\ket{\rightarrow}_S} - \coef{1}{6} \ket{h}_{\overline{L}} \ensuremath{\ket{\leftarrow}_S} + \coef{2}{3} \ket{t}_{\overline{L}} \ensuremath{\ket{\rightarrow}_S}. \label{eq:dosb} \end{equation} The main conclusion obtained from this procedure can be written as follows: {\bf Statement 1.-} If agent $\overline{F}$ finds her apparatus in the state $\ket{\overline{F}_2}$, then she can safely conclude that the quantum coin $R$ is in the state $\ensuremath{\ket{\text{tail}}_R}$. Then, as a consequence of Eq. (\ref{eq:dosb}), she can also conclude that the spin is in state $\ensuremath{\ket{\rightarrow}_S}$, and therefore that agent $W$ is going to obtain $\ensuremath{\ket{\text{fail}}_L}$ in his measurement (see below for details).
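Statement 1 can be checked directly on the state of Eq. (\ref{eq:inicial}): conditioning on $\ensuremath{\ket{\text{tail}}_R}$ leaves the spin in $\ensuremath{\ket{\rightarrow}_S}$, which is orthogonal to the spin direction associated with the ``ok'' outcome. A small numerical sketch (ours, with the obvious qubit encoding):

```python
import numpy as np

s = np.sqrt
head, tail = np.array([1.0, 0.0]), np.array([0.0, 1.0])
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
right = (up + down) / s(2)
ok = (down - up) / s(2)  # spin direction associated with the "ok" outcome

init = s(1/3) * np.kron(head, down) + s(2/3) * np.kron(tail, right)

# Project the coin onto |tail> and normalize the conditional spin state
proj = np.kron(np.outer(tail, tail), np.eye(2))
cond = proj @ init
spin = cond.reshape(2, 2)[1]      # spin amplitudes paired with |tail>
spin = spin / np.linalg.norm(spin)

assert np.allclose(spin, right)   # the spin is certainly |right>
assert abs(ok @ spin) < 1e-12     # so the "ok" outcome has probability zero
```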
{\bf Measurement 2.-} Agent $F$ measures the state of the spin $S$ in the basis $\left\{ \ensuremath{\ket{\uparrow}_S}, \ensuremath{\ket{\downarrow}_S} \right\}$. Again, according to hypothesis $1$, this statement is based on a perfect correlation between the apparatus and the spin states. In this case, agent $F$ starts from Eq. (\ref{eq:uno}). Taking into account the previous measurement, her measurement gives rise to the following correlation: \begin{equation} \begin{split} & \left( \coef{1}{3} \ket{h}_{\overline{L}} \ensuremath{\ket{\downarrow}_S} + \coef{1}{3} \ket{t}_{\overline{L}} \ensuremath{\ket{\downarrow}_S} + \coef{1}{3} \ket{t}_{\overline{L}} \ensuremath{\ket{\uparrow}_S} \right) \otimes \ket{F_0} \longrightarrow \\ &\longrightarrow \coef{1}{3} \ket{h}_{\overline{L}} \ensuremath{\ket{\downarrow}_S} \ket{F_1} + \coef{1}{3} \ket{t}_{\overline{L}} \ensuremath{\ket{\downarrow}_S} \ket{F_1} + \coef{1}{3} \ket{t}_{\overline{L}} \ensuremath{\ket{\uparrow}_S} \ket{F_2}. \end{split} \end{equation} It is worth noting that this measurement is totally independent of the previous one. As happened with agent $\overline{F}$, agent $F$ becomes entangled with her apparatus, and both together form the laboratory $L$: \begin{eqnarray} \ensuremath{\ket{\downarrow}_S} \otimes \ket{F_1} &\equiv& \ensuremath{\ket{-1/2}_L}, \\ \ensuremath{\ket{\uparrow}_S} \otimes \ket{F_2} &\equiv& \ensuremath{\ket{+1/2}_L}. \end{eqnarray} Then, the whole system becomes \begin{equation} \ket{\text{init}}_{(1)} = \coef{1}{3} \ket{h}_{\overline{L}} \ensuremath{\ket{-1/2}_L} + \coef{1}{3} \ket{t}_{\overline{L}} \ensuremath{\ket{-1/2}_L} + \coef{1}{3} \ket{t}_{\overline{L}} \ensuremath{\ket{+1/2}_L}. \label{eq:statement2} \end{equation} The main conclusion obtained by agent $F$ can be written as follows: {\bf Statement 2.-} If agent $F$ finds her apparatus in state $\ket{F_2}$, then she can safely conclude that the spin $S$ is in state $\ensuremath{\ket{\uparrow}_S}$. Then, as shown in Eq.
(\ref{eq:statement2}), she is also certain that laboratory $\overline{L}$ is in state $\ket{t}_{\overline{L}}$, and therefore she can safely conclude that agent $\overline{F}$ has obtained $\ensuremath{\ket{\text{tail}}_R}$ in her measurement. Finally, according to Statement 1, she can be sure that agent $W$ is going to obtain $\ensuremath{\ket{\text{fail}}_L}$ in his measurement. {\bf Measurement 3.-} Agent $\overline{W}$ measures the laboratory $\overline{L}$ in the basis $\left\{ \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}}, \ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \right\}$, where $\ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} = \left( \ket{h}_{\overline{L}} + \ket{t}_{\overline{L}} \right)/\sqrt{2}$, and $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} = \left( \ket{h}_{\overline{L}} - \ket{t}_{\overline{L}} \right)/\sqrt{2}$. Starting from (\ref{eq:tres}), and taking into account all the previous results, this measurement implies: \begin{equation} \begin{split} &\left( \coef{2}{3} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{-1/2}_L} + \coef{1}{6} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{+1/2}_L} - \coef{1}{6} \ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{+1/2}_L} \right) \otimes \ket{\overline{W}} \longrightarrow \\ &\longrightarrow \coef{2}{3} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{-1/2}_L} \otimes \ket{\overline{W}_1}+ \coef{1}{6} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{+1/2}_L} \otimes \ket{\overline{W}_1} - \coef{1}{6} \ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{+1/2}_L} \otimes \ket{\overline{W}_2}. \end{split} \label{eq:statement3} \end{equation} Again, as the measurement is not performed on either the spin $S$ alone or the quantum coin $R$ alone, it is fully compatible with the previous ones.
And again, agent $\overline{W}$ becomes entangled with his apparatus, in the same way that agents $F$ and $\overline{F}$ did. However, since no measurements are done over this new composite system, we do not introduce a new notation: state $\ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}}$ can be understood as $\ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \otimes \ket{\overline{W}_1}$, and $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}}$ as $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \otimes \ket{\overline{W}_2}$. The main conclusion that agent $\overline{W}$ obtains can be written as follows: {\bf Statement 3.-} If agent $\overline{W}$ finds his apparatus in state $\ket{\overline{W}_2}$, then he can safely conclude that laboratory $\overline{L}$ is in state $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}}$. Hence, as a consequence of Eq. (\ref{eq:statement3}), he can also conclude that laboratory $L$ is in state $\ensuremath{\ket{+1/2}_L}$. Therefore, from statement 2, agent $\overline{W}$ knows that agent $F$ has obtained $\ensuremath{\ket{\uparrow}_S}$ in her measurement, and from statement 1, he also knows that agent $\overline{F}$ has obtained $\ensuremath{\ket{\text{tail}}_R}$. Consequently, agent $\overline{W}$ can be certain that agent $W$ is going to obtain $\ensuremath{\ket{\text{fail}}_L}$ in his measurement on laboratory $L$. The key point in \cite{Renner:18} lies here. As all the agents use the same theory, and as all the measurements they perform are fully compatible, they must reach the same conclusion. This conclusion is: {\em Every time laboratory $\overline{L}$ is in state $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}}$, then laboratory $L$ is in state $\ensuremath{\ket{\text{fail}}_L}$.
Hence, it is not possible to find both laboratories in states $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}}$ and $\ensuremath{\ket{\text{ok}}_L}$, respectively.} It is worth remarking that agent $W$ must also obtain the same conclusion from statements 1, 2 and 3. {\bf Measurement 4.-} As the final step of the process, agent $W$ measures the laboratory $L$ in the basis $\left\{ \ensuremath{\ket{\text{fail}}_L}, \ensuremath{\ket{\text{ok}}_L} \right\}$, where $\ensuremath{\ket{\text{ok}}_L} = \left( \ensuremath{\ket{-1/2}_L} - \ensuremath{\ket{+1/2}_L} \right)/\sqrt{2}$, and $\ensuremath{\ket{\text{fail}}_L} = \left( \ensuremath{\ket{-1/2}_L} + \ensuremath{\ket{+1/2}_L} \right)/\sqrt{2}$. Starting from Eq. (\ref{eq:cuatro}), the result of this final measurement is \begin{equation} \begin{split} &\left( \coef{3}{4} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{\text{fail}}_L} + \coef{1}{12} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{\text{ok}}_L} - \coef{1}{12} \ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{\text{fail}}_L} + \coef{1}{12} \ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{\text{ok}}_L} \right) \otimes \ket{W} \rightarrow \\ & \rightarrow \coef{3}{4} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{\text{fail}}_L} \otimes \ket{W_1} + \coef{1}{12} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{\text{ok}}_L} \otimes \ket{W_2} - \coef{1}{12} \ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{\text{fail}}_L} \otimes \ket{W_1} + \coef{1}{12} \ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{\text{ok}}_L} \otimes \ket{W_2}.
\end{split} \label{eq:cuatrob} \end{equation} Therefore, and despite the previous conclusion that agent $W$ has obtained from statements 1, 2, and 3, after this measurement he can conclude that {\em the probability of $\overline{L}$ being in state $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}}$ and $L$ in state $\ensuremath{\ket{\text{ok}}_L}$ is not zero, but $1/12$.} This is the contradiction discussed in \cite{Renner:18}, from which the authors of that paper conclude that quantum theory cannot consistently describe the use of itself: {\em As Eq. (\ref{eq:cuatrob}) establishes that the probability of obtaining $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{\text{ok}}_L}$ after a proper measurement is $p=1/12$, and as the same theory, used to describe itself, allows us to conclude that this very same probability should be $p=0$, the conclusion is that quantum theory cannot be used as it is used in statements 1, 2 and 3. That is, quantum theory cannot consistently describe the use of itself.} In the next section, we will prove that this is a consequence of hypothesis 1, that is, a consequence of understanding a measurement just as a perfect correlation between a system and a measuring apparatus. If we consider that a proper measurement requires the action of an external environment, as discussed in \cite{Zurek:03,Zurek:81}, quantum theory recovers its ability to speak about itself. Environment-induced superselection rules, by determining the real state of the system after a measurement, remove all the contradictions coming from statements 1, 2 and 3. \section{III. Environment-induced superselection rules} \subsection{A. The problem of basis ambiguity} In \cite{Zurek:03,Zurek:81}, W. H. Zurek shows that a perfect correlation, like the one summarized in hypothesis $1$, is {\em not} enough to determine the result of a quantum measurement. The reason is the basis ambiguity due to the superposition principle.
To understand this statement, let us consider a simple measurement in which the state of the quantum coin $R$ is to be determined. This goal can be achieved by means of the following unitary operator: \begin{equation} U^{RA} = \ket{A_1} \otimes \ensuremath{\ket{\text{head}}_R} \bra{\text{head}}_R + \ket{A_2} \otimes \ensuremath{\ket{\text{tail}}_R} \bra{\text{tail}}_R, \end{equation} which establishes a perfect correlation between $\ensuremath{\ket{\text{head}}_R}$ and $\ket{A_1}$, and between $\ensuremath{\ket{\text{tail}}_R}$ and $\ket{A_2}$. Furthermore, if such apparatus states verify $\braket{A_1}{A_2}=0$, the measurement is perfect. Starting from an initial state \begin{equation} \ket{\Psi_0} = \left( \coef{1}{3} \ensuremath{\ket{\text{head}}_R} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \right) \otimes \ket{A_0}, \end{equation} the final state of the composite system, quantum coin plus apparatus, is \begin{equation} \ket{\Psi} = \coef{1}{3} \ensuremath{\ket{\text{head}}_R} \otimes \ket{A_1} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \otimes \ket{A_2}. \label{eq:psi} \end{equation} This measurement fulfills the conditions of Hypothesis $1$; indeed, it is equivalent to the one that agent $\overline{F}$ performs in measurement 1. However, {\em the basis ambiguity allows us to rewrite (\ref{eq:psi}) in the following way:} \begin{equation} \ket{\Psi} = \coef{1}{2} \left( \coef{1}{3} \ensuremath{\ket{\text{head}}_R} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \right) \otimes \ket{A'_1} + \coef{1}{2} \left( \coef{1}{3} \ensuremath{\ket{\text{head}}_R} - \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \right) \otimes \ket{A'_2}. \label{eq:psiprima} \end{equation} Note that this is the very same state as the one written in Eq. (\ref{eq:psi}) ---it is obtained from $\ket{\Psi_0}$ as a consequence of the action of $U^{RA}$.
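Both decompositions, Eqs. (\ref{eq:psi}) and (\ref{eq:psiprima}), can be checked to describe the same vector. The following numerical sketch is our own illustration, with $\ket{A_1}$ and $\ket{A_2}$ encoded as computational basis states and $\ket{A'_{1,2}} = (\ket{A_1} \pm \ket{A_2})/\sqrt{2}$, one choice compatible with the relations used in the text:

```python
import numpy as np

s = np.sqrt
head, tail = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A1, A2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A1p, A2p = (A1 + A2) / s(2), (A1 - A2) / s(2)  # primed apparatus basis

psi = s(1/3) * np.kron(head, A1) + s(2/3) * np.kron(tail, A2)
psi_prime = (s(1/2) * np.kron(s(1/3) * head + s(2/3) * tail, A1p)
             + s(1/2) * np.kron(s(1/3) * head - s(2/3) * tail, A2p))

assert np.allclose(psi, psi_prime)  # same state, two "perfect" correlations
assert abs(A1p @ A2p) < 1e-12       # the primed basis is also orthonormal
```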
The new states of the apparatus, \begin{eqnarray} \ket{A'_1} &=& \coef{1}{2} \left( \ket{A_1} + \ket{A_2} \right), \\ \ket{A'_2} &=& \coef{1}{2} \left( \ket{A_1} - \ket{A_2} \right), \end{eqnarray} also fulfill $\braket{A'_1}{A'_2}=0$, so they also give rise to a perfect measurement. Let us reinterpret measurement 1, as described in the previous section, taking into account this result. Hypothesis $1$ establishes that a measurement is performed when a perfect correlation between a system and an apparatus has been settled. But, as both Eqs. (\ref{eq:psi}) and (\ref{eq:psiprima}) fulfill this requirement, and both represent the very same state, $\ket{\Psi}$, the action of the operator $U^{RA}$ is not enough to be sure about the final state of both the system and the measuring apparatus. Indeed, the only possible conclusion we can reach is: {\em Measurement $U^{RA}$ cannot determine the final state of the system: if the outcome of the apparatus is ``ONE'', the system can either be in state $\ensuremath{\ket{\text{head}}_R}$ or state $\coef{1}{3} \ensuremath{\ket{\text{head}}_R} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R}$; and if the outcome of the apparatus is ``TWO'', the system can either be in state $\ensuremath{\ket{\text{tail}}_R}$ or state $\coef{1}{3} \ensuremath{\ket{\text{head}}_R} - \coef{2}{3} \ensuremath{\ket{\text{tail}}_R}$.} Hence, measurement 1, understood as the (only) consequence of Eq. (\ref{eq:medida1}), seems insufficient to support the conclusion summarized in statement 1. Both $\ensuremath{\ket{\text{tail}}_R}$ and $\coef{1}{3} \ensuremath{\ket{\text{head}}_R} - \coef{2}{3} \ensuremath{\ket{\text{tail}}_R}$ are fully compatible with the output ``TWO'' of the measuring apparatus. But what has really happened? What is the real state of the quantum coin after the measurement is completed? To which state does the wave function collapse?
We know that experiments provide precise results ---Schr\"odinger cats are always found dead or alive, not in a weird superposition like $\coef{1}{3} \ket{\text{alive}} - \coef{2}{3} \ket{\text{dead}}$---, so it is not possible that both possibilities are true. To answer this question, we introduce the following assumption: \begin{center} \Ovalbox{\parbox{\columnwidth-2\fboxsep-2\fboxrule-\shadowsize}{ \centering \begin{myteo}[``Classical'' reality] \texttt{} An event has certainly happened (at a certain time in the past) if and only if it is the only explanation for the current state of the universe. \end{myteo} }} \end{center} This assumption just reinforces our previous conclusion ---from the measurement $U^{RA}$, that is, from Eq. (\ref{eq:medida1}), we cannot make a certain statement about the state of the system. Both $\ensuremath{\ket{\text{tail}}_R}$ and $\coef{1}{3} \ensuremath{\ket{\text{head}}_R} - \coef{2}{3} \ensuremath{\ket{\text{tail}}_R}$ are compatible with the real state of the universe, given by $\ket{\Psi}$ and the measurement outcome ``TWO''. This is why W. H. Zurek establishes that {\em something} else has to happen before we can make a safe statement about the real state of the system. The procedure described in Hypothesis $1$ constitutes just a pre-measurement. The measurement itself requires another action, performed by another unitary operator, to determine the real state of the system. This action is done by an external (and large) environment, which becomes correlated with the system and the apparatus. As is described in \cite{Zurek:03,Zurek:81}, after the pre-measurement is completed, the system plus the apparatus interacts with a large environment by means of $U^{\mathcal E}$.
Let us suppose that the result of this interaction is \begin{equation} \ket{\Psi}_{{\mathcal E}} = \coef{1}{3} \ensuremath{\ket{\text{head}}_R} \otimes \ket{A_1} \otimes \ket{{\mathcal E}_1} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \otimes \ket{A_2} \otimes \ket{{\mathcal E}_2}, \label{eq:psi_e} \end{equation} with $\braket{{\mathcal E}_1}{{\mathcal E}_2}=0$. Then, this interaction establishes a perfect correlation between environmental and apparatus states, much as the pre-measurement correlates the system and the apparatus. The main difference between these two processes is given by the following theorem: \begin{center} \shadowbox{\parbox{\columnwidth-2\fboxsep-2\fboxrule-\shadowsize}{ \centering \begin{myteo2}[Triorthogonal uniqueness theorem \cite{Elby:94}] \texttt{} Suppose $\ket{\psi} = \sum_i c_i \ket{A_i} \otimes \ket{B_i} \otimes \ket{C_i}$, where $\{ \ket{A_i} \}$ and $\{ \ket{C_i} \}$ are linearly independent sets of vectors, while $\{ \ket{B_i} \}$ is merely noncollinear. Then there exist no alternative linearly independent sets of vectors $\{ \ket{A'_i} \}$ and $\{ \ket{C'_i} \}$, and no alternative noncollinear set $\{ \ket{B'_i} \}$, such that $\ket{\psi} = \sum_i d_i \ket{A'_i} \otimes \ket{B'_i} \otimes \ket{C'_i}$. (Unless each alternative set of vectors differs only trivially from the set it replaces.)
\end{myteo2} }} \end{center} In other words, this theorem establishes that the state $\ket{\Psi}_{{\mathcal E}}$ is unique, that is, we cannot find another decomposition for the very same state \begin{equation} \ket{\Psi}_{{\mathcal E}} = \coef{1}{2} \left( \coef{1}{3} \ensuremath{\ket{\text{head}}_R} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \right) \otimes \ket{A'_1} \otimes \ket{{\mathcal E}'_1} + \coef{1}{2} \left( \coef{1}{3} \ensuremath{\ket{\text{head}}_R} - \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \right) \otimes \ket{A'_2} \otimes \ket{{\mathcal E}'_2}, \label{eq:psiprima_e} \end{equation} with $\braket{{\mathcal E}'_1}{{\mathcal E}'_2}=0$. Hence, the interaction with the environment determines the real state of the system plus the apparatus. The action of $U^{{\mathcal E}}$ gives rise to Eq. (\ref{eq:psi_e}). {\em To obtain a state like the one written in Eq. (\ref{eq:psiprima_e}), a different interaction with the environment is mandatory, $U^{{\mathcal E}'}$.} Thus, we can formulate an alternative hypothesis: \begin{center} \ovalbox{\parbox{\columnwidth-2\fboxsep-2\fboxrule-\shadowsize}{ ~ \centering \begin{hipotesis}[Real measurement procedure (adapted from \cite{Zurek:03})] \texttt{} To perform a measurement, an initial state in which the system, $S$, the apparatus, $A$, and an external environment ${\mathcal E}$ are uncorrelated, $\ket{\psi} = \ket{s} \otimes \ket{a} \otimes \ket{\varepsilon}$, is transformed: {\em i)} first, into a state, $\ket{\psi'} = \left( \sum_i c_i \ket{s_i} \otimes \ket{a_i} \right) \otimes \ket{\varepsilon}$, by means of a procedure called pre-measurement; and {\em ii)} second, into a final state, $\ket{\psi''} = \left( \sum_i c_i \ket{s_i} \otimes \ket{a_i} \otimes \ket{\varepsilon_i} \right)$. This final state determines the real correlations between the system and the apparatus. 
If $\braket{\varepsilon_i}{\varepsilon_j}=0$ for $i \neq j$, then, after tracing out the environmental degrees of freedom, the state becomes \begin{equation} \rho = \sum_i \left| c_i \right|^2 \ket{s_i} \ket{a_i} \bra{s_i} \bra{a_i}. \end{equation} Therefore, the measuring agent can safely conclude that the result of the measurement certainly is one of the previous possibilities, $\left\{ \ket{a_i} \ket{s_i} \right\}$, each one with a probability given by $p_i = \left| c_i \right|^2$. The states $\left\{ \ket{a_i} \ket{s_i} \right\}$ are called ``pointer states''. They are selected by the environment, by means of environmental-induced superselection rules; they constitute the ``classical'' reality. ~ \end{hipotesis} }} \end{center} This hypothesis establishes that only after the real correlations between the system, the apparatus and the environment are settled does the observation of the agent become certain. Tracing out the environmental degrees of freedom, which are not the object of the measurement, the state given by Eq. (\ref{eq:psi_e}) becomes: \begin{equation} \rho_{{\mathcal E}} = \frac{1}{3} \ensuremath{\ket{\text{head}}_R} \ket{A_1} \bra{\text{head}}_R \bra{A_1} + \frac{2}{3} \ensuremath{\ket{\text{tail}}_R} \ket{A_2} \bra{\text{tail}}_R \bra{A_2}. \end{equation} In other words, the agent observes a mixture of the system being in state $\ensuremath{\ket{\text{head}}_R}$ with the apparatus in state $\ket{A_1}$ (with probability $p_1=1/3$), and the system being in state $\ensuremath{\ket{\text{tail}}_R}$ with the apparatus in state $\ket{A_2}$ (with probability $p_2=2/3$). And this happens because the environment has {\em chosen} $\ensuremath{\ket{\text{tail}}_R}$ and $\ensuremath{\ket{\text{head}}_R}$ as the ``classical'' states ---the ones observed as a consequence of quantum measurements--- by means of environmental-induced superselection rules.
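As an illustrative aside (not part of the paper's formalism), the partial trace above can be checked with a few lines of code: encode the two branches of Eq. (\ref{eq:psi_e}) on $\mathbb{C}^2 \otimes \mathbb{C}^2 \otimes \mathbb{C}^2$ and trace out the environment. The integer basis labels are hypothetical stand-ins for $\ket{\text{head/tail}}_R$, $\ket{A_i}$ and $\ket{{\mathcal E}_i}$.

```python
import math

# |Psi>_E = sqrt(1/3)|head, A1, E1> + sqrt(2/3)|tail, A2, E2>
# index convention (an assumption for this sketch): (system, apparatus, environment)
psi = {(0, 0, 0): math.sqrt(1 / 3), (1, 1, 1): math.sqrt(2 / 3)}

# partial trace over the environment:
# rho[(s,a),(s',a')] = sum_e psi[s,a,e] * conj(psi[s',a',e]), using <E_i|E_j> = delta_ij
rho = {}
for (s, a, e), amp in psi.items():
    for (s2, a2, e2), amp2 in psi.items():
        if e == e2:
            key = ((s, a), (s2, a2))
            rho[key] = rho.get(key, 0.0) + amp * amp2

print(rho)  # diagonal pointer probabilities 1/3 and 2/3; no off-diagonal terms survive
```

The off-diagonal (interference) entries vanish because the two branches are tagged by orthogonal environmental states, which is exactly the mechanism the hypothesis describes.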
In other words, following this interpretation, Schr\"odinger cats are always found either dead or alive because the interaction with the environment determines that $\ket{\text{dead}}$ and $\ket{\text{alive}}$ are the pointer ``classical'' states. \subsection{B. Re-interpretation of the Gedankenexperiment} Let us re-interpret the first statement of the Gedankenexperiment, in the terms discussed above. Agent $\overline{F}$ cannot reach any conclusion about the real state of the quantum coin before the pointer states are obtained by means of the interaction with a large environment ${\mathcal E}$. The key point is that {\em the environment ${\mathcal E}$ interacts with the whole system, that is, with the quantum coin $R$, the apparatus, and the spin $S$, because the three of them are entangled}. So, let us assume that a correlation like Eq. (\ref{eq:psi}) has happened as a consequence of the pre-measurement. In such a case, taking into account that the quantum coin $R$ is entangled with the spin $S$, the state after the pre-measurement is \begin{equation} \ket{\Psi} = \coef{1}{3} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S} \ket{A_1} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\rightarrow}_S} \ket{A_2}. \end{equation} The next step in the process is the interaction with the environment, which determines the pointer states of the system composed by the quantum coin and the spin. There are several possibilities for such an interaction. 
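Before writing candidate interactions down, it helps to expand the tail branch of $\ket{\Psi}$. The following sketch (illustrative only; it assumes the usual convention $\ket{\rightarrow}_S = (\ket{\downarrow}_S + \ket{\uparrow}_S)/\sqrt{2}$, which is consistent with the weights appearing in the decohered states below) shows that the pre-measured state splits into three equal-weight branches:

```python
import math

s = math.sqrt
# |Psi> = sqrt(1/3)|head,down,A1> + sqrt(2/3)|tail,right,A2>,
# with |right> = (|down> + |up>)/sqrt(2)  (assumed convention, not stated explicitly here)
branches = {
    ("head", "down", "A1"): s(1 / 3),
    ("tail", "down", "A2"): s(2 / 3) / s(2),
    ("tail", "up", "A2"): s(2 / 3) / s(2),
}
probs = {k: a * a for k, a in branches.items()}
print(probs)  # three branches, each of weight 1/3
```

This three-branch structure is what an environment monitoring the spin in the $\{\ket{\downarrow}_S, \ket{\uparrow}_S\}$ basis would track.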
Let us consider, for example, \begin{equation} U^{{\mathcal E}} = \ket{\varepsilon_1} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S} \ket{A_1} \bra{\text{head}}_R \bra{\downarrow}_S \bra{A_1} + \ket{\varepsilon_2} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\rightarrow}_S} \ket{A_2} \bra{\text{tail}}_R \bra{\rightarrow}_S \bra{A_2}, \label{eq:environment1} \end{equation} and \begin{equation} \begin{split} U^{{\mathcal E}'} &= \ket{\varepsilon'_1} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S} \ket{A_1} \bra{\text{head}}_R \bra{\downarrow}_S \bra{A_1} + \\ &+ \ket{\varepsilon'_2} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\downarrow}_S} \ket{A_2} \bra{\text{tail}}_R \bra{\downarrow}_S \bra{A_2} + \ket{\varepsilon'_3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\uparrow}_S} \ket{A_2} \bra{\text{tail}}_R \bra{\uparrow}_S \bra{A_2}. \end{split} \label{eq:environment2} \end{equation} If the real interaction with the environment is given by Eq. (\ref{eq:environment1}), the final state of the system, after tracing out the environmental degrees of freedom, is \begin{equation} \rho^{{\mathcal E}} = \text{Tr}_{{\mathcal E}} \left[ U^{{\mathcal E}} \ket{\Psi} \bra{\Psi} \big( U^{{\mathcal E}} \big)^{\dagger} \right] = \frac{1}{3} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S} \ket{A_1} \bra{\text{head}}_R \bra{\downarrow}_S \bra{A_1} + \frac{2}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\rightarrow}_S} \ket{A_2} \bra{\text{tail}}_R \bra{\rightarrow}_S \bra{A_2}. \label{eq:final1} \end{equation} On the contrary, if the real interaction with the environment is given by Eq.
(\ref{eq:environment2}), the final state is \begin{equation} \begin{split} \rho^{{\mathcal E}'} &= \text{Tr}_{{\mathcal E}} \left[ U^{{\mathcal E}'} \ket{\Psi} \bra{\Psi} \big( U^{{\mathcal E}'} \big)^{\dagger} \right] = \\ &= \frac{1}{3} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S} \ket{A_1} \bra{\text{head}}_R \bra{\downarrow}_S \bra{A_1} + \frac{1}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\downarrow}_S} \ket{A_2} \bra{\text{tail}}_R \bra{\downarrow}_S \bra{A_2} + \frac{1}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\uparrow}_S} \ket{A_2} \bra{\text{tail}}_R \bra{\uparrow}_S \bra{A_2}. \end{split} \label{eq:final2} \end{equation} At this point, the question is: what is the real state of the system after the measurement is completed? \begin{itemize} \item Eq. (\ref{eq:environment1}) establishes that it is a mixture in which the agent can find the system either in $\ensuremath{\ket{\text{head}}_R}$ and $\ensuremath{\ket{\downarrow}_S}$, with probability $p=1/3$, or in $\ensuremath{\ket{\text{tail}}_R}$ and $\ensuremath{\ket{\rightarrow}_S}$, with probability $p=2/3$. It is worth remarking that this is not a quantum superposition, but a classical mixture. That is, due to the interaction with the environment, $U^{{\mathcal E}}$, the state of the system is compatible with either a collapse to $\ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S}$, with $p=1/3$, or a collapse to $\ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\rightarrow}_S}$, with $p=2/3$. This is what agent $\overline{F}$ concludes in statement 1. \item But the other possible interaction with the environment, Eq.
(\ref{eq:environment2}), establishes that the real state of the system is a mixture in which the agent can find the system in $\ensuremath{\ket{\text{head}}_R}$ and $\ensuremath{\ket{\downarrow}_S}$, with probability $p=1/3$, $\ensuremath{\ket{\text{tail}}_R}$ and $\ensuremath{\ket{\downarrow}_S}$, with probability $p=1/3$, and $\ensuremath{\ket{\text{tail}}_R}$ and $\ensuremath{\ket{\uparrow}_S}$, with probability $p=1/3$. \end{itemize} At this stage, the key point is the following. As the measurement performed by agent $\overline{F}$ only involves the quantum coin $R$, her apparatus only reads $\ensuremath{\ket{\text{head}}_R}$ with probability $p=1/3$, and $\ensuremath{\ket{\text{tail}}_R}$, with probability $p=2/3$. {\em But both (\ref{eq:final1}) and (\ref{eq:final2}) are compatible with this result}. Hence, following assumption 1, {\em agent $\overline{F}$ cannot be certain about the state of the spin $S$ ---and thus, she cannot be certain either about what agent $W$ is going to find when he measures the state of the laboratory $L$}. The only way to distinguish between (\ref{eq:final1}) and (\ref{eq:final2}) is to perform a further measurement on the spin $S$. Such a procedure would provide the pointer ``classical'' states of the system composed by the quantum coin and the spin ---its outcome would determine whether the interaction with the environment is given by Eq. (\ref{eq:environment1}) or by Eq. (\ref{eq:environment2}). But such a procedure would be incompatible with the measurement performed by agent $F$. Hence, agent $\overline{F}$ has to choose between: {\em i)} not being certain about the real state of the quantum spin $S$, and therefore not being able to reach any conclusion about the measurement that agent $W$ will do in the future; or {\em ii)} performing a further measurement which would invalidate the conclusions of this Gedankenexperiment. This conclusion is enough to rule out the contradictions discussed in \cite{Renner:18}.
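The compatibility claim can be verified with a short computation (illustrative only, not from the paper): both decohered states assign the same marginal probabilities to the quantum coin $R$, so an apparatus that reads only $R$ cannot distinguish them.

```python
from collections import defaultdict

# diagonal entries of the two classical mixtures, keyed by (coin, spin)
rho_E = {("head", "down"): 1 / 3, ("tail", "right"): 2 / 3}   # Eq. (final1)
rho_Ep = {("head", "down"): 1 / 3, ("tail", "down"): 1 / 3,   # Eq. (final2)
          ("tail", "up"): 1 / 3}

def coin_marginal(rho):
    """Probabilities seen by an apparatus that reads only the quantum coin R."""
    out = defaultdict(float)
    for (coin, _spin), p in rho.items():
        out[coin] += p
    return dict(out)

print(coin_marginal(rho_E), coin_marginal(rho_Ep))
# both marginals are {head: 1/3, tail: 2/3}: agent F-bar cannot tell them apart
```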
As agent $\overline{F}$ cannot be certain about the outcome that agent $W$ will obtain in his measurement, none of the four agents can conclude that it is not possible to find laboratory $L$ in state $\ensuremath{\ket{\text{ok}}_L}$ and laboratory $\overline{L}$ in state $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}}$ at the same time. Hence, the outcome of measurement 4, whatever it is, becomes fully compatible with all the conclusions obtained by all the agents. It is worth noting that the same analysis can also be done over measurements 2 and 3. The conclusions are essentially the same. \section{IV. Conclusions} The main result of this paper is to show that assumption 1 and hypothesis 2 allow quantum theory to consistently describe the use of itself. This conclusion is based on the decoherence interpretation of quantum measurements \cite{Zurek:03}. Hence, a further statement can be set down: {\em To make quantum theory fully consistent, so that it can be used to describe itself, the decoherence interpretation of measurements (and origins of the classical world) is mandatory.} In any case, the main conclusion of this paper is applicable to other interpretations of quantum mechanics. The decoherence interpretation of the measurement process establishes that the wave-function collapse is not real ---the measuring agent {\em sees} the system as if its wave-function had collapsed onto one of the pointer states selected by the environment, even though the whole wave function remains in a quantum superposition. However, this distinction does not affect experimental results; from this point of view, it is compatible with the Copenhagen interpretation, because it assigns the same probabilities to all of the possible outcomes. Furthermore, it is also compatible with Everett's many-worlds interpretation \cite{Everett:57}: the branches onto which the universe splits after a measurement are determined by the environmental-induced super-selection rules.
The key point is that real ``classical'' states are not ambiguous, but they are the (unique) result of the interaction between the measured system, the measuring apparatus, and a large environment. \end{document}
\begin{document} \title{Reeb Dynamics of the Link of the $A_n$ Singularity} \begin{abstract} The link of the $A_n$ singularity, $L_{A_n} \subset \mathbb{C}^3$, admits a natural contact structure $\xi_0$ coming from the set of complex tangencies. The canonical contact form $\alpha_0$ associated to $\xi_0$ is degenerate and thus has no isolated Reeb orbits. We show that there is a nondegenerate contact form for a contact structure equivalent to $\xi_0$ that has two isolated simple periodic Reeb orbits. We compute the Conley-Zehnder index of these simple orbits and their iterates. From these calculations we compute the positive $S^1$-equivariant symplectic homology groups for $\left(L_{A_n}, \xi_0 \right)$. In addition, we prove that $\left(L_{A_n}, \xi_0 \right)$ is contactomorphic to the lens space $L(n+1,n)$, equipped with its canonical contact structure $\xi_{std}$. \end{abstract} \setcounter{tocdepth}{2} \tableofcontents \section{Introduction and Main results} The classical topological theory of isolated critical points of complex polynomials relates the topology of the link of the singularity to the algebraic properties of the singularity \cite{M}. More generally, the link of an irreducible affine variety $A^n \subset \mathbb{C}^N$ with an isolated singularity at $\mathbf{0}$ is defined by $L_A = A \cap S_\delta^{2N-1}$. For sufficiently small $\delta$, the link $L_A$ is a manifold of real dimension $2n-1$, which is an invariant of the germ of $A$ at $\mathbf{0}.$ The links of Brieskorn varieties can be homeomorphic to spheres without being diffeomorphic to them \cite{Br}, a preliminary result which further motivated the study of such objects. Recent developments in symplectic and contact geometry have shown that the algebraic properties of a singularity are strongly connected to the contact topology of the link and symplectic topology of (the resolution of) the variety.
A wide range of results demonstrating the power of investigating the symplectic and contact perspective of singularities includes \cite{K}, \cite{O}, \cite{McL}, \cite{R}, \cite{Se}, \cite{U}. In this paper we study the contact topology of the link of the $A_n$ singularity, providing a computation of positive $S^1$-equivariant symplectic homology. This is done via our construction of an explicit nondegenerate contact form and the computation of the Conley-Zehnder indices of the associated simple Reeb orbits and their iterates. Our computations show that positive $S^1$-equivariant symplectic homology is a free $\mathbb{Q}[u]$ module of rank equal to the number of conjugacy classes of the finite subgroup $A_n$ of SL$(2;\mathbb{C})$. This provides a concrete example of the relationship between the cohomological McKay correspondence and symplectic homology, which is work in progress by McLean and Ritter \cite{MR}. As a result, the topological nature of the singularity is reflected by qualitative aspects of the Reeb dynamics associated to the link of the $A_n$ singularity. The link of the $A_n$ singularity is defined by \begin{equation}\label{linkeq} L_{A_n} =f_{A_n}^{-1}(0) \cap S^5 \subset \mathbb{C}^3, \ \ \ f_{A_n}=z_0^{n+1} + 2 z_1 z_2. \end{equation} It admits a natural contact structure coming from the set of complex tangencies, \[ \xi_0:=TL_{A_n} \cap J_0(TL_{A_n}). \] The contact structure can be expressed as the kernel of the canonically defined contact form, \[ \alpha_0 = \frac{i}{2} \left( \sum_{j=0}^{2} ( z_j d\bar{z}_j -\bar{z}_jdz_j ) \right)\bigg \vert_{L_{A_n}}. \] The contact form $\alpha_0$ is degenerate and hence not appropriate for computing Floer theoretic invariants as the periodic orbits of the Reeb vector field defined by \[ {\alpha_0}(R_{\alpha_0})=1, \ \ \ \iota_{R_{\alpha_0}}d\alpha_0 =0, \] are not isolated.
Our first result is to construct a nondegenerate contact form $\alpha_\epsilon$ such that $( L_{A_n}, \ker \alpha_0)$ and $( L_{A_n}, \ker \alpha_\epsilon)$ are contactomorphic. Define the Hamiltonian on $\mathbb{C}^3$ by \[ \begin{array}{rlcl} H:& \mathbb{C}^3 &\to & \mathbb{R} \\ & (z_0,z_1,z_2) &\mapsto & |z|^2 + \epsilon(|z_1|^2 - |z_2|^2), \\ \end{array} \] where $\epsilon$ is chosen so that $H > 0$ on $S^5$. As will be shown, \begin{equation}\label{alphaepsilon} \alpha_\epsilon = \frac{1}{H} \left[ \frac{(n+1)i}{8}\left(z_0d\overline{z}_0 - \overline{z}_0 dz_0\right) + \frac{i}{4} \left(z_1 d\bar{z}_1 - \bar{z}_1 dz_1 + z_2 d\bar{z}_2 - \bar{z}_2 dz_2\right)\right], \end{equation} is a nondegenerate contact form. We also find the simple Reeb orbits of $R_{\alpha_\epsilon}$ and compute the Conley-Zehnder indices of their iterates with respect to the canonical trivialization of $\mathbb{C}^3$. \begin{thm}\label{CZcomputation} The 1-form $\alpha_\epsilon$ is a nondegenerate contact form for $L_{A_n}$ such that $(L_{A_n}, \ker \alpha_0)$ and $( L_{A_n}, \ker \alpha_\epsilon)$ are contactomorphic. The simple Reeb orbits of $R_{\alpha_\epsilon}$ are given by \begin{align*} \gamma_+(t) & = (0,e^{2i(1 + \epsilon)t},0) \quad \quad 0 \le t \le \frac{\pi}{1 + \epsilon}\\ \gamma_-(t) & = (0,0,e^{2i(1 - \epsilon)t}) \quad \quad 0 \le t \le \frac{\pi}{1 - \epsilon}. \end{align*} The Conley-Zehnder index for $\gamma = \gamma_{\pm}^N$ in $0 \le t \le \frac{N\pi}{1 \pm \epsilon}$ is \begin{align}\label{CZeq} \mu_{CZ}(\gamma_{\pm}^N) = 2\left( \left\lfloor \frac{2N}{(n+1)(1 \pm \epsilon)}\right \rfloor + \left\lfloor \frac{N(1 \mp \epsilon)}{1 \pm \epsilon} \right\rfloor - \left \lfloor \frac{2N}{1 \pm \epsilon} \right \rfloor \right) + 2N + 1.
\end{align} \end{thm} \begin{rem} If $\epsilon$ is chosen such that $0 <\epsilon \ll \frac{1}{N}$ then (\ref{CZeq}) can be further simplified: \begin{equation} \begin{array}{lcl} \mu_{CZ}(\gamma_{-}^N)& = &2 \left\lfloor \dfrac{2N}{(n+1)(1 - \epsilon)}\right \rfloor + 1;\\ &&\\ \mu_{CZ}(\gamma_{+}^N)& = &2 \left\lfloor \dfrac{2N}{(n+1)(1 + \epsilon)}\right \rfloor + 1. \\ \end{array} \end{equation} \end{rem} The proof of Theorem \ref{CZcomputation} is obtained by adapting methods of Ustilovsky \cite{U} to obtain both $\alpha_\epsilon$ and to compute the Conley-Zehnder indices. The Conley-Zehnder index is a Maslov index for arcs of symplectic matrices and is defined in Section \ref{CZsection}. These paths of matrices are obtained by linearizing the flow of the Reeb vector field along the Reeb orbit and restricting to $\xi_0$. To better understand how the iterates of the Reeb orbits are distributed among the various indices, we have the following example. \begin{example} Let $n=2$ and $0 <\epsilon \ll \frac{1}{10}$. \[ \begin{array}{lc c clcc} \mu_{CZ}(\gamma_- ) &=&1,&\ \ \ \ \ \ &\mu_{CZ} (\gamma_+ )&=&1 \\ \mu_{CZ}(\gamma_-^2 )& =&3,&\ \ \ \ \ \ &\mu_{CZ}(\gamma_+^2)& =&3 \\ \mu_{CZ}(\gamma_-^3 )&=&5, &\ \ \ \ \ \ &\mu_{CZ}(\gamma_+^3 )&=&3 \\ \mu_{CZ}(\gamma_-^4 )&=& 5,&\ \ \ \ \ \ &\mu_{CZ}(\gamma_+^4 )& =&5\\ \mu_{CZ}(\gamma_-^5 )& =& 7,&\ \ \ \ \ \ &\mu_{CZ}(\gamma_+^5 )& =&7\\ \mu_{CZ}(\gamma_-^6 )&=& 9,&\ \ \ \ \ \ &\mu_{CZ}(\gamma_+^6) &=& 7\\ \mu_{CZ}(\gamma_-^7 )&=& 9,&\ \ \ \ \ \ &\mu_{CZ}(\gamma_+^7) &=& 9 \\ \end{array} \] It is interesting to note that the spread of indices is not uniform between $\mu_{CZ}(\gamma_-^N)$ and $\mu_{CZ}(\gamma_+^N),$ and where these jumps in index occur. However, we see that there are $n=2$ Reeb orbits with Conley-Zehnder index 1 and $n+1=3$ orbits with Conley-Zehnder index $2k+1$ for each $k\geq1$.
\end{example} \begin{rem}\label{freehtpy} Extrapolating this to all values of $n$ and $N$ demonstrates that the numerology of the Conley-Zehnder index realizes the number of free homotopy classes of $L_{A_n}$. Recall that $[\Sigma L_{A_n}] = \pi_0(\Sigma L_{A_n}) = \pi_1(L_{A_n})/\{\mbox{conjugacy classes}\}$ and $H_1(L_{A_n}, \mathbb{Z}) = \mathbb{Z}_{n+1}$. The information that the $(n+1)$-st iterate of $\gamma_\pm$ is the first contractible Reeb orbit is also encoded in the above formulas. Qualitative aspects of the Reeb dynamics reflect this topological information in the following computation of a Floer-theoretic invariant of the contact structure $\xi_0$. \end{rem} Theorem \ref{CZcomputation} allows us to easily compute positive $S^1$-equivariant symplectic homology $SH_*^{+,S^1}$. Symplectic homology is a Floer type invariant of symplectic manifolds with contact type boundary; see also \cite{biased}. Under additional assumptions, one can prove that the positive $S^1$-equivariant symplectic homology $SH_*^{+,S^1}$ is in fact an invariant of the contact structure; see \cite[Theorems 1.2 and 1.3]{GuSH} and \cite[Section 4.1.2]{BO}. Because of the behavior of the Conley-Zehnder index in Theorem \ref{CZcomputation}, we can directly compute $SH_*^{+,S^1}(L_{A_n}, \xi_0)$ and conclude that it is a contact invariant. As a result, the underlying topology of the manifold determines qualitative aspects of any Reeb vector field associated to a contact form defining $\xi_0$. \begin{thm}\label{linksh} The positive $S^1$-equivariant symplectic homology of $(L_{A_n}, \xi_0)$ is \[ SH^{+,S^1}_*(L_{A_n}, \xi_0) = \left\{ \begin{array}{cl} \mathbb{Q}^n & * =1 \\ \mathbb{Q}^{n+1} & * \geq 3, \mbox{ odd } \\ 0 & * \ \mbox{ else } \\ \end{array} \right.
\] \end{thm} \begin{proof} To obtain a contact invariant from $SH^{+,S^1}_*$ we need to show in dimension three that all contractible Reeb orbits $\gamma$ satisfy $\mu_{CZ}(\gamma)\geq3$; see \cite[Theorems 1.2 and 1.3]{GuSH} and \cite[Section 4.1.2]{BO}. The first iterate of $\gamma_\pm$ which is contractible is the $(n+1)$-st iterate, and by Theorem \ref{CZcomputation} every contractible iterate satisfies $\mu_{CZ}(\gamma_\pm^{(n+1)N})\geq3$. If $\alpha$ is a nondegenerate contact form such that the Conley-Zehnder indices of all periodic Reeb orbits are lacunary, meaning they contain no two consecutive numbers, then we can appeal to \cite[Theorem 1.1]{GuSH}. This result of Gutt allows us to conclude that over $\mathbb{Q}$-coefficients the differential for $SH^{S^1,+}$ vanishes. In light of Theorem \ref{CZcomputation} we obtain the above result. \end{proof} Remark \ref{freehtpy} yields the following corollary of Theorem \ref{linksh}, indicating a Floer theoretic interpretation of the McKay correspondence \cite{IM} via the Reeb dynamics of the link of the $A_n$ singularity. The $A_n$ singularity is the singularity of $f^{-1}_{A_n}(0)$, where $f_{A_n}$ is described in (\ref{linkeq}). This is equivalent to its characterization as the absolutely isolated double point quotient singularity of $\mathbb{C}^2/A_n$, where $A_n$ is the cyclic subgroup of SL$(2;\mathbb{C})$ of order $n+1$; see Section \ref{contactgeomlens}. The cyclic group $A_n$ acts on $\mathbb{C}^2$ by $(u,v) \mapsto \left(e^\frac{2\pi i}{n+1}u, e^\frac{2\pi in}{n+1}v\right)$. \begin{cor} The positive $S^1$-equivariant symplectic homology $SH^{+,S^1}_*(L_{A_n}, \xi_0)$ is a free $\mathbb{Q}[u]$ module of rank equal to the number of conjugacy classes of the finite subgroup $A_n$ of $\mbox{\em SL}(2;\mathbb{C})$.
\end{cor} \begin{rem} Ongoing work of Nelson \cite{jocompute} and Hutchings and Nelson \cite{HN3} is needed in order to work under the assumption that a related Floer-theoretic invariant, cylindrical contact homology, is a well-defined contact invariant of $(L_{A_n},\xi_{0})$. Once this is complete, the index calculations provided in Theorem \ref{CZcomputation} show that positive $S^1$-equivariant symplectic homology and cylindrical contact homology agree up to a degree shift. In \cite{BO} Bourgeois and Oancea prove that there are restricted classes of contact manifolds for which one can prove that cylindrical contact homology (with a degree shift) is isomorphic to the positive part of $S^1$-equivariant symplectic homology, when both are defined over $\mathbb{Q}$-coefficients. Their isomorphism relies on having transversality for a generic choice of $J,$ which is presently the case for unit cotangent bundles $DT^*L$ such that dim $L \geq 5$ or when $L$ is a Riemannian manifold which admits no contractible closed geodesics \cite{BOcorrig}. Our computations confirm that their results should hold for many more closed contact manifolds. \end{rem} Our final result is an explicit proof that $(L_{A_n}, \xi_0)$ and the lens space $(L(n+1,n), \xi_{std})$ are contactomorphic. The lens space \[ L(n+1,n) = S^3/\big((u,v) \sim (e^{2\pi i/(n+1)}u,e^{2\pi ni/(n+1)}v)\big) \] admits a contact structure, which is induced by the one on $S^3$ and can be expressed as the kernel of the following contact form, \[ \lambda_{std}= \frac{i}{2} ( u d\bar{u} -\bar{u}du +v d\bar{v} -\bar{v}dv). \] \begin{thm}\label{lenslinkcontacto} The link of the $A_n$ singularity $(L_{A_n}, \xi_0=\ker \alpha_0)$ and the lens space $(L(n+1,n), \xi_{std}=\ker \lambda_{std})$ are contactomorphic. \end{thm} Theorems \ref{linksh} and \ref{lenslinkcontacto} allow us to reprove the following result of van Koert and Kwon \cite{O}.
Since $(L_{A_n}, \xi_0)$ and $(L(n+1,n), \xi_{std})$ are contactomorphic and $SH_*^{S^1,+}$ is a contact invariant, $SH_*^{S^1,+}(L(n+1,n),\xi_{std}) =SH_*^{S^1,+}(L_{A_n}, \xi_0)$. \begin{thm}[Appendix A \cite{O}] The positive $S^1$-equivariant symplectic homology of $(L(n+1,n),\xi_{std})$ is \[ SH^{+,S^1}_*(L(n+1,n), \xi_{std}) = \left\{ \begin{array}{cl} \mathbb{Q}^n & * =1 \\ \mathbb{Q}^{n+1} & * \geq 3, \mbox{ odd } \\ 0 & * \ \mbox{ else } \\ \end{array} \right. \] \end{thm} Their proof relies on the following nondegenerate contact form on $(L(n+1,n),\xi_{std})$. If $a_1,a_2$ are any rationally independent positive real numbers then \[ \lambda_{a_1,a_2} = \frac{i}{2} \sum_{ j = 1}^2 a_j(z_j d\overline{z}_j - \overline{z}_j dz_j)\] is a nondegenerate contact form for $(L(n+1,n), \xi_{std})$. The simple Reeb orbits on $L(n+1,n)$ are given by \begin{align*} \gamma_1 & = (e^{it/a_1},0) \quad \quad 0 \le t \le \frac{ 2 a_1\pi}{n+1}, \\ \gamma_2 & = (0,e^{it/a_2}) \quad \quad 0 \le t \le \frac{2a_2\pi}{n+1}, \end{align*} which descend from the simple isolated Reeb orbits on $S^3$. Again, the $n+1$ different free homotopy classes associated to this lens space are realized by covers of the isolated Reeb orbits $\gamma_i$ for $i=1$ or $2$. The Conley-Zehnder index for $\gamma_1^N$ is \begin{equation}\label{CZlens} \mu_{CZ}(\gamma_1^N) = 2\left(\left\lfloor \frac{N}{n+1}\right\rfloor + \left\lfloor \frac{N a_1}{(n+1)a_2}\right\rfloor\right) + 1, \end{equation} with a similar formula holding for $\gamma_2^N$. \textbf{Outline} The necessary background is given in Section \ref{background}. The construction of a nondegenerate contact form and the proof of Theorem \ref{CZcomputation} is given in Section \ref{CZcomputationsection}. The proof of Theorem \ref{lenslinkcontacto} is given in Section \ref{sectionlinklens}. 
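As a quick sanity check (illustrative only, not part of the paper), the index formula (\ref{CZeq}) can be evaluated numerically; for $n=2$ and small $\epsilon$ it reproduces the table in the example above, and the simplified formulas of the remark agree with it:

```python
import math

def mu_cz(N, n, eps, sign):
    """Conley-Zehnder index of gamma_{+/-}^N from Eq. (CZeq); sign = +1 or -1."""
    d = 1 + sign * eps
    return 2 * (math.floor(2 * N / ((n + 1) * d))
                + math.floor(N * (1 - sign * eps) / d)
                - math.floor(2 * N / d)) + 2 * N + 1

# n = 2 and eps = 1e-3 (small enough that no floor argument hits an integer)
n, eps = 2, 1e-3
minus = [mu_cz(N, n, eps, -1) for N in range(1, 8)]
plus = [mu_cz(N, n, eps, +1) for N in range(1, 8)]
print(minus, plus)  # matches the table: [1, 3, 5, 5, 7, 9, 9] and [1, 3, 3, 5, 7, 7, 9]

# the simplified formula from the remark gives the same values for these N
simp = [2 * math.floor(2 * N / ((n + 1) * (1 - eps))) + 1 for N in range(1, 8)]
assert simp == minus
```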
\section{Background}\label{background} In this section we recall all the necessary symplectic and contact background which is needed to prove Theorems \ref{CZcomputation} and \ref{lenslinkcontacto}. \subsection{Contact Structures} \hspace{\fill} \\ First we recall some notions from contact geometry. \begin{definition} Let $M$ be a manifold of dimension $2n+1$. A \textbf{contact structure} is a maximally non-integrable hyperplane field $\xi=\mbox{ker }\alpha \subset TM$. \end{definition} \begin{rem} The kernel of a 1-form $\alpha$ on $M^{2n+1},$ $\xi=\ker \alpha$, is a contact structure whenever \[ \alpha \wedge (d\alpha)^n \neq 0, \] which is equivalent to the condition that $d\alpha$ be nondegenerate on $\xi$. \end{rem} Note that the contact structure is unaffected when we multiply the contact form $\alpha$ by any nowhere-vanishing function on $M$. We say that two contact structures $\xi_0=\mbox{ker } \alpha_0$ and $\xi_1=\mbox{ker }\alpha_1$ on a manifold $M$ are \textbf{contactomorphic} whenever there is a diffeomorphism $\psi:M \to M$ such that $\psi$ sends $\xi_0$ to $\xi_1$: \[ \psi_*(\xi_0)=\xi_1. \] If a diffeomorphism $\psi: M\to M$ is in fact a contactomorphism then there exists a non-zero function $g:M \to \mathbb{R} $ such that $\psi^*\alpha_1=g\alpha_0$. Finding an explicit contactomorphism often proves to be a rather difficult and messy task, but an application of Moser's argument yields Gray's stability theorem, which essentially states that there are no non-trivial deformations of contact structures on a fixed closed manifold. First we give the statement of Moser's theorem, which says that one cannot vary a symplectic structure by perturbing it within its cohomology class. Recall that a \textbf{symplectic structure} on a smooth manifold $W^{2n}$ is a nondegenerate closed 2-form $\omega \in \Omega^2(W)$.
\begin{thm}[Moser's theorem] \cite[Thm 3.17]{MD} \label{moser} Let $W$ be a closed manifold and suppose that $\omega_t$ is a smooth family of cohomologous symplectic forms on $W$. Then there is a family of diffeomorphisms $\Psi_t$ of $W$ such that \[ \Psi_0=\mbox{id}, \ \ \ \Psi^*_t\omega_t=\omega_0. \] \end{thm} The aforementioned contact analogue of Moser's theorem is Gray's stability theorem, stated formally below. \begin{thm}[Gray's stability theorem] \cite[Thm 2.2.2]{G} Let $\xi_t, \ t \in [0,1]$, be a smooth family of contact structures on a closed manifold $V$. Then there is an isotopy $(\psi_t)_{t\in [0,1]}$ of $V$ such that \[ {\psi_t}_*(\xi_0) = \xi_t \ \mbox{ for each } t \in [0,1]. \] \end{thm} Next we give the most basic example of a contact structure. \begin{example} \em Consider $\mathbb{R}^{2n+1}$ with coordinates $(x_1, y_1,...,x_n,y_n,z)$ and the 1-form \[ \alpha=dz+\sum_{j=1} ^n x_jdy_j. \] Then $\alpha$ is a contact form for $\mathbb{R}^{2n+1}$. The contact structure $\xi=\mbox{ker }\alpha$ is called the standard contact structure on $\mathbb{R}^{2n+1}$. \end{example} As in symplectic geometry, a variant of Darboux's theorem holds. This states that locally all contact structures are diffeomorphic to the standard contact structure on $\mathbb{R}^{2n+1}$. A contact form gives rise to a unique Hamiltonian-like vector field as follows. \begin{definition} For any contact manifold $(M, \xi=\mbox{ker }\alpha)$ the \textbf{Reeb vector field} $R_\alpha$ is defined to be the unique vector field determined by $\alpha$: \[ \iota(R_\alpha)d\alpha=0, \ \ \ \alpha(R_\alpha)=1. \] We define the Reeb flow of $R_\alpha$ by $\varphi_t: M \to M$, $\dot{\varphi_t} = R_\alpha(\varphi_t)$. \end{definition} The first condition says that $R_\alpha$ points along the unique null direction of the form $d\alpha$ and the second condition normalizes $R_\alpha$.
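As an illustrative numerical sketch (not part of the text), one can verify the two defining conditions of the Reeb vector field, together with the contact condition $\alpha \wedge d\alpha \neq 0$, for the standard contact form $\alpha = dz + x\,dy$ on $\mathbb{R}^3$ (the $n=1$ case of the example above), whose Reeb field is $\partial/\partial z$:

```python
import random

# standard contact form on R^3: alpha = dz + x dy
def alpha(p, v):            # alpha_p(v) for p = (x, y, z), v = (vx, vy, vz)
    return v[2] + p[0] * v[1]

def d_alpha(u, v):          # d(alpha) = dx ^ dy, a constant 2-form
    return u[0] * v[1] - u[1] * v[0]

def alpha_wedge_dalpha(p, u, v, w):
    """(alpha ^ d alpha)_p(u, v, w), expanded over the three splittings."""
    return (alpha(p, u) * d_alpha(v, w)
            - alpha(p, v) * d_alpha(u, w)
            + alpha(p, w) * d_alpha(u, v))

p = tuple(random.random() for _ in range(3))
e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# contact condition: alpha ^ d alpha is a volume form (here identically dx^dy^dz)
vol = alpha_wedge_dalpha(p, e1, e2, e3)

# Reeb field R = d/dz: alpha(R) = 1 and R lies in the kernel of d(alpha)
R = (0, 0, 1)
v = tuple(random.random() for _ in range(3))
print(vol, alpha(p, R), d_alpha(R, v))  # 1.0, 1.0 and 0.0 at any point p
```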
Because \[ \mathcal{L}_{R_\alpha} \alpha = d \iota_{R_\alpha}\alpha + \iota_{R_\alpha} d\alpha = 0, \] the flow of $R_\alpha$ preserves the form $\alpha$ and hence the contact structure $\xi$. Note that if one chooses a different contact form $f \alpha$, the corresponding vector field $R_{f\alpha}$ is very different from $R_\alpha$, and its flow may have quite different properties. A {\bf{Reeb orbit}} $\gamma$ of period $T$ associated to $R_\alpha$ is defined to be a path $\gamma: \mathbb{R}/T\mathbb{Z} \to M$ given by an integral curve of $R_\alpha$. That is, \[ \frac{d\gamma}{dt} = R_\alpha \circ \gamma(t), \quad \gamma(0) = \gamma(T). \] Two Reeb orbits \[ \gamma_1, \ \gamma_0 : \mathbb{R}/T\mathbb{Z} \to M \] are considered equivalent if they differ by reparametrization, i.e. precomposition with a translation of $\mathbb{R}/T\mathbb{Z}.$ The $N$-fold cover $\gamma^N$ is defined to be the composition of $\gamma$ with $\mathbb{R}/NT\mathbb{Z} \to \mathbb{R}/T\mathbb{Z}$. A \textbf{simple Reeb orbit} is one such that $\gamma: \mathbb{R}/T\mathbb{Z} \to M$ is injective. \begin{rem} Since Reeb vector fields are autonomous, the terminology ``simple Reeb orbit $\gamma$" refers to the entire equivalence class of orbits, and likewise for its iterates. \end{rem} A Reeb orbit $\gamma$ is said to be {\bf{nondegenerate}} whenever the linearized return map \[ d(\varphi_T)_{\gamma(0)}: \xi_{\gamma(0)} \to \xi_{\gamma(T) = \gamma(0)}\] has no eigenvalue equal to 1. A {\bf{nondegenerate contact form}} is one whose Reeb orbits are all nondegenerate and hence isolated. Note that since the Reeb flow preserves the contact structure, the linearized return map is symplectic. Next we briefly review the canonical contact form on $S^3$ and its Reeb dynamics. \begin{example}[Canonical Reeb dynamics on the 3-sphere] \label{3-sphere} { If we define the following function $f\colon \mathbb{R}^4 \to \mathbb{R}$ \[ f(x_1, y_1, x_2, y_2)= x_1^2+y_1^2+x_2^2+ y_2^2, \] then $S^3=f^{-1}(1)$.
Recall that the canonical contact form on $S^3 \subset \mathbb{R}^4$ is given by \begin{equation} \label{ls} \lambda_0 := - \frac{1}{2} df \circ J = \left( x_1 dy_1 - y_1 dx_1 + x_2 dy_2 - y_2 dx_2 \right)\arrowvert_{S^3}. \end{equation} The Reeb vector field is given by \begin{equation}\label{reebreal} \begin{array}{lcl} R_{\lambda_0}&=&\left(x_1 \dfrac{\partial}{\partial y_1} - y_1 \dfrac{\partial}{\partial x_1} + x_2 \dfrac{\partial}{\partial y_2} - y_2 \dfrac{\partial}{\partial x_2}\right) \\ &=& (-y_1,x_1,-y_2,x_2). \\ \end{array} \end{equation} Equivalently we may reformulate these using complex coordinates by identifying $\mathbb{R}^4$ with $\mathbb{C}^2$ via \[ u = x_1+iy_1, \ \ \ v = x_2+iy_2. \] We obtain \[ \lambda_0=\frac{i}{2}\left(u d\bar{u} - \bar{u} du + v d\bar{v} - \bar{v} dv\right)\big |_{S^3}, \] and \begin{equation} \label{reeb3sphere2} \begin{array}{ccl} R_{\lambda_0} & =& i \left( u \dfrac{\partial}{\partial u} - \bar{u} \dfrac{\partial}{\partial \bar{u}} + v \dfrac{\partial}{\partial v} - \bar{v} \dfrac{\partial}{\partial \bar{v}}\right) \\ &=& (iu, iv) \\ \end{array} \end{equation} The second expression for $R_{\lambda_0}$ follows from (\ref{reebreal}) since $iu=(-y_1,x_1)$ and $iv=(-y_2,x_2)$. To see that the orbits of $R_{\lambda_0}$ define the fibers of the Hopf fibration recall that a fiber through a point \[ (u,v)=(x_1+iy_1, x_2+ iy_2) \in S^3 \subset \mathbb{C}^2, \] can be parameterized as \begin{equation} \label{reebflow} \varphi(t)=(e^{it}u, e^{it}v), \ t\in \mathbb{R}. \end{equation} We compute the time derivative of the fiber \[ \dot{\varphi}(0)=(iu,iv)=(i x_1 - y_1, i x_2 - y_2). \] Expressed as a real vector field on $\mathbb{R}^4$, which is tangent to $S^3$, this is the Reeb vector field $R_{\lambda_0}$ as it appears in (\ref{reeb3sphere2}), so the Reeb flow does indeed define the Hopf fibration.
} \end{example} \subsection{Hypersurfaces of contact type}\hspace{\fill}\\ Another notion that we need from symplectic and contact geometry is that of a hypersurface of contact type in a symplectic manifold. The following notion of a Liouville vector field allows us to define hypersurfaces of contact type. Liouville vector fields will be used to understand the Reeb dynamics of the nondegenerate contact form $\alpha_1$ as well as to construct the contactomorphism between $(L_{A_n},\xi_0)$ and $(L(n+1,n),\xi_{std})$. \begin{definition} \label{lioudef} A \textbf{Liouville vector field} $Y$ on a symplectic manifold $(W, \omega)$ is a vector field satisfying \[ \mathcal{L}_Y \omega = \omega. \] The flow $\psi_t$ of such a vector field is conformal symplectic, i.e. $\psi^*_t(\omega)=e^t \omega$. The flows of such vector fields are volume expanding, so a Liouville vector field can exist only locally on a closed symplectic manifold. \end{definition} Whenever there exists a Liouville vector field $Y$ defined in a neighborhood of a compact hypersurface $Q$ of $(W, \omega)$, which is transverse to $Q$, we can define a contact 1-form on $Q$ by \[ \alpha : = \iota_Y\omega. \] \begin{prop}[{\cite[Prop 3.58]{MD}}]\label{contacttype} Let $(W, \omega)$ be a symplectic manifold and $Q \subset W$ a compact hypersurface. Then the following are equivalent: \begin{itemize} \item[{(i)}] There exists a contact form $\alpha$ on $Q$ such that $d \alpha = \omega|_Q$. \item[{(ii)}] There exists a Liouville vector field $Y:U \to TW$ defined in a neighborhood $U$ of $Q$, which is transverse to $Q$. \end{itemize} If these conditions are satisfied then $Q$ is said to be of \textbf{contact type.} \end{prop} We will need the following application of Gray's stability theorem to hypersurfaces of contact type to prove Theorem \ref{lenslinkcontacto} in Section \ref{sectionlinklens}. \begin{lem}\cite[Lemma 2.1.5]{G}\label{graycor} Let $Y$ be a Liouville vector field on a symplectic manifold $(W,\omega)$.
Suppose that $M_1$ and $M_2$ are hypersurfaces of contact type in $W$. Assume that there is a smooth function \begin{equation}\label{heq} h:W \to \mathbb{R} \end{equation} such that the time-1 map of the flow of $hY$ is a diffeomorphism from $M_1$ to $M_2$. Then this diffeomorphism is in fact a contactomorphism from $(M_1, \ker \iota_Y \omega|_{TM_1})$ to $(M_2, \ker \iota_Y \omega|_{TM_2})$. \end{lem} \subsection{Symplectization}\hspace{\fill} \\ The symplectization of a contact manifold is an important notion in defining Floer theoretic theories like symplectic and contact homology. It will also be used in our calculation of the Conley-Zehnder index. Let $(M, \xi = \ker \alpha)$ be a contact manifold. The \textbf{symplectization} of $(M,\xi = \ker \alpha)$ is given by the manifold $\mathbb{R} \times M$ and symplectic form \[ \omega = e^t(d\alpha - \alpha \wedge dt) = d (e^t\alpha). \] Here $t$ is the coordinate on $\mathbb{R}$, and it should be noted that $\alpha$ is interpreted as a 1-form on $\mathbb{R} \times M$, as we identify $\alpha$ with its pullback under the projection $\mathbb{R} \times M \to M$. Any contact structure $\xi$ may be equipped with a complex structure ${J}$ such that $(\xi, {J})$ is a complex vector bundle; the space of such complex structures is nonempty and contractible. There is a unique canonical extension of the almost complex structure ${J}$ on $\xi$ to an $\mathbb{R}$-invariant almost complex structure $\tilde{J}$ on $T(\mathbb{R} \times M)$, whose existence is due to the splitting, \begin{equation} \label{decomp} T(\mathbb{R} \times M) = \mathbb{R} \frac{\partial}{\partial t} \oplus \mathbb{R} R_{\alpha} \oplus \xi. \end{equation} \begin{definition}[Canonical extension of ${J}$ to $\tilde{J}$ on $T(\mathbb{R} \times M)$]\label{complexstruc} Let $[a,b;v]$ be a tangent vector where $a, \ b \in \mathbb{R}$ and $v \in \xi$. We can extend ${J}: \xi \to \xi$ to $\tilde{J}: T(\mathbb{R} \times M) \to T(\mathbb{R} \times M)$ by \[ \tilde{J}[a,b;v] = [-b,a;{J}v].
\] Thus $\tilde{J}|_\xi = {J}$ and $\tilde{J}$ acts on $\mathbb{R} \frac{\partial}{\partial t} \oplus \mathbb{R} R_{\alpha}$ in the same manner as multiplication by $i$ acts on $\mathbb{C}$, namely $\tilde{J} \frac{\partial}{\partial t} = R_{\alpha}$. \end{definition} \subsection{The Conley-Zehnder index}\label{CZsection}\hspace{\fill} \\ The Conley-Zehnder index $\mu_{CZ}$ is a Maslov index for arcs of symplectic matrices which assigns an integer $\mu_{CZ}(\Phi)$ to every path of symplectic matrices $\Phi : [0,T] \to \mbox{Sp}(2n)$ with $\Phi(0) = \mathds{1} $. In order to ensure that the Conley-Zehnder index assigns the same integer to homotopic arcs, one must also stipulate that 1 is not an eigenvalue of the endpoint of this path of matrices, i.e. $\det(\mathds{1} - \Phi(T))\neq 0$. We define the following set of continuous paths of symplectic matrices that start at the identity and end on a symplectic matrix that does not have 1 as an eigenvalue: \[ \Sigma^*(n) = \{ \Phi :[0,T] \to \mbox{Sp}(2n) \ | \ \Phi \mbox{ is continuous}, \ \Phi(0)=\mathds{1}, \mbox{ and } \mbox{det}(\mathds{1} - \Phi(T)) \neq 0 \}. \] The Conley-Zehnder index satisfies the following properties and is uniquely determined by the homotopy, loop, and signature properties. \begin{thm}\label{CZpropthm}{\cite[Theorem 2.3, Remark 5.4]{RS}}, {\cite[Theorem 2, Proposition 8 \& 9]{GuCZ}}\label{CZprop} \\ There exists a unique map $\mu_{CZ}$ called the {\bf{Conley-Zehnder index}} that assigns the same integer to all homotopic paths $\Psi$ in $\Sigma^*(n)$, \[ \mu_{CZ}: \Sigma^*(n) \to \mathbb{Z}, \] such that the following hold. \begin{enumerate}[\em (1)] \item {\bf{Homotopy}}: The Conley-Zehnder index is constant on the connected components of $\Sigma^*(n)$. \item {\bf{Naturality}}: For any paths $\Phi, \Psi: [0,1] \to Sp(2n)$, $\mu_{CZ}(\Phi\Psi\Phi^{-1}) = \mu_{CZ}(\Psi)$.
\item {\bf{Zero}}: If $\Psi \in \Sigma^*(n)$ and $\Psi(t)$ has no eigenvalue on the unit circle for $t >0$, then $\mu_{CZ}(\Psi) = 0$. \item {\bf{Product}}: If $n = n' + n''$, identify $Sp(2n') \oplus Sp(2n'')$ with a subgroup of $Sp(2n)$ in the obvious way. If $\Psi' \in \Sigma^*(n')$ and $\Psi'' \in \Sigma^*(n'')$, then $\mu_{CZ}(\Psi' \oplus \Psi'') = \mu_{CZ}(\Psi') + \mu_{CZ}(\Psi'')$. \item {\bf{Loop}}: If $\Phi$ is a loop at $\mathds{1}$, then $\mu_{CZ}(\Phi\Psi) = \mu_{CZ}(\Psi) + 2\mu(\Phi)$ where $\mu$ is the Maslov Index. \item {\bf{Signature}}: If $S \in M(2n)$ is a symmetric matrix with $||S|| < 2\pi$ and $\Psi(t) = \exp(J_0St)$, then $\mu_{CZ}(\Psi) = \frac{1}{2}\sgn(S)$. \end{enumerate} \end{thm} The linearized Reeb flow of $\gamma$ yields a path of symplectic matrices \[ d(\varphi_t)_{\gamma(0)}: \xi_{\gamma(0)} \to \xi_{\gamma(t)} \] for $t\in[0,T],$ where $T$ is the period of $\gamma$. Thus we can compute the Conley-Zehnder index of $d\varphi_t, \ t\in[0,T].$ This index typically depends on the choice of trivialization $\tau$ of $\xi$ along $\gamma$ which was used in linearizing the Reeb flow. However, if $c_1(\xi)=0$ we can use the existence of an (almost) complex volume form on the symplectization to obtain a global means of linearizing the flow of the Reeb vector field. The choice of a complex volume form is parametrized by $H^1(\mathbb{R} \times M;\mathbb{Z})$, so an absolute integral grading is only determined up to the choice of volume form. See also \cite[\S 1.1.1]{jocompute}. We define \[ \mu_{CZ}^\tau(\gamma):=\mu_{CZ}\left( \left\{ d\varphi_t \right\}\arrowvert_{t\in[0,T]}\right). \] In the case at hand we will be able to work in the ambient space of $(\mathbb{C}^3, J_0)$, and use a canonical trivialization of $\mathbb{C}^3$.
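To illustrate how these properties determine the index in the simplest case, we include a standard example, which is not needed in the sequel. Take $n=1$ and $S = \mathds{1}_2$, so that $\Psi(t) = \exp(J_0 t)$ is the rotation of $\mathbb{R}^2$ by angle $t$. For $t \in [0,1]$ we have $||S|| = 1 < 2\pi$ and $\det(\mathds{1} - \Psi(1)) \neq 0$, so the signature property gives \[ \mu_{CZ}(\Psi) = \tfrac{1}{2}\sgn(\mathds{1}_2) = 1. \] Concatenating with the full loop $\Phi(t) = \exp(2\pi J_0 t)$, $t \in [0,1]$, which has Maslov index $\mu(\Phi) = 1$, the loop property yields \[ \mu_{CZ}(\Phi\Psi) = \mu_{CZ}(\Psi) + 2\mu(\Phi) = 3 \] for the rotation path through total angle $2\pi + 1$.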
\subsection{The canonical contact structure on Brieskorn manifolds}\hspace{\fill} \\ The $A_n$ link is an example of a Brieskorn manifold; these are defined generally by \[ \Sigma(\mathbf{a})= \left\{ (z_0,\dots,z_m) \in \mathbb{C}^{m+1} \ \bigg| \ f:= \sum_{j = 0}^m z_j^{a_j} = 0, \ a_j \in \mathbb{Z}_{>0} \text{ and } \sum_{j = 0}^m |z_j|^2 = 1 \right\}. \] The link of the $A_n$ singularity after a linear change of variables is $ \Sigma(n+1,2,2)$ for $n >3$; see (\ref{coorchange}). Brieskorn gave a necessary and sufficient condition on $\mathbf{a}$ for $\Sigma(\mathbf{a})$ to be a topological sphere, and a means of showing when these yield exotic differentiable structures on the topological $(2m-1)$-sphere in \cite{Br}. A standard calculus argument \cite[Lemma 7.1.1]{G} shows that $\Sigma(\mathbf{a})$ is always a smooth manifold. In the mid-1970s, Brieskorn manifolds were found to admit a canonical contact structure, given by their set of complex tangencies, \[ \xi_0=T\Sigma \cap J_0 (T\Sigma), \] where $J_0$ is the standard complex structure on $\mathbb{C}^{m+1}$. The contact structure $\xi_0$ can be expressed as $\xi_0 = \ker \alpha_0$ for the canonical 1-form \[ \alpha_0:= (- d\rho \circ J_0)|_\Sigma = \frac{i}{4} \left( \sum_{j=0}^m ( z_j d\bar{z}_j -\bar{z}_jdz_j ) \right)\bigg \vert_\Sigma, \] where $\rho=(||z||^2-1)/4$. A proof of this fact may be found in {\cite[Thm 7.1.2]{G}}. The Reeb dynamics associated to $\alpha_0$ are difficult to understand. There is a more convenient contact form $\alpha_1$ constructed by Ustilovsky \cite[Lemma 4.1.2]{U} via the following family. \begin{prop}[{\cite[Proposition 7.1.4]{G}}] The 1-form \[ \alpha_t = \frac{i}{4}\sum_{j = 0}^m \frac{1}{1 - t + \frac{t}{a_j}} (z_j d\bar{z}_j - \bar{z}_jdz_j) \] is a contact form on $\Sigma(\mathbf{a})$ for each $t\in [0,1]$. \end{prop} Via Gray's stability theorem we obtain the following corollary.
\begin{cor} For all $t \in (0,1]$, $(\Sigma(\mathbf{a}), \ker \alpha_0)$ is contactomorphic to $(\Sigma(\mathbf{a}), \ker \alpha_t)$. \end{cor} Next we compute the Reeb dynamics associated to $\alpha_1 = \frac{i}{4}\sum_{j=0}^m a_j (z_j d\bar{z}_j - \bar{z}_jdz_j) $. \begin{rem} While $\alpha_1$ is degenerate, one can still easily check that the Reeb vector field associated to $\alpha_1$ is given by \[ R_{\alpha_1} = 2i \sum_{j = 0}^m \frac{1}{a_j}\left( z_j \frac{\partial}{\partial z_j} - \bar{z}_j \frac{\partial}{\partial \bar{z}_j} \right) = 2i \left( \frac{z_0}{a_0},...,\frac{z_m}{a_m} \right). \] Indeed, one computes \[ df\left(R_{\alpha_1}\right) = 2i f(\mathbf{z}) \mbox{ and } d\rho \left(R_{\alpha_1}\right) =0. \] Since $f$ vanishes along $\Sigma(\mathbf{a})$, this shows that $R_{\alpha_1}$ is tangent to $\Sigma(\mathbf{a})$. The defining equations for the Reeb vector field are satisfied since \[ \alpha_1\left(R_{\alpha_1}\right) \equiv 1 \mbox{ and } \iota_{R_{\alpha_1}}d\alpha_1 = -d\rho, \] with the latter form being zero on $T_p\Sigma(\mathbf{a})$. The flow of $R_{\alpha_1}$ is given by \[ \varphi_t(z_0,...,z_m) = \left( e^{2it/a_0}z_0,...,e^{2it/a_m}z_m \right). \] All the orbits of the Reeb flow are closed, and the flow defines an effective $S^1$-action on $\Sigma(\mathbf{a})$. \end{rem} In the next section we perturb $\alpha_1$ to a nondegenerate contact form. \section{Proof of Theorem \ref{CZcomputation}}\label{CZcomputationsection} \subsection{Constructing a nondegenerate contact form} \hspace{\fill} \\ In this section we adapt a method used by Ustilovsky in \cite[Section 4]{U} to obtain a nondegenerate contact form $\alpha_\epsilon$ on $L_{A_n}$ whose kernel is contactomorphic to $\xi_0$. Ustilovsky's methods yielded a nondegenerate contact form on Brieskorn manifolds of the form $\Sigma(p,2,....,2)$, which are diffeomorphic to $S^{4m+1}$.
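Before perturbing, it may be helpful to specialize the preceding remark to the case of interest, $\mathbf{a} = (n+1,2,2)$; this is a routine unwinding of the formulas above, which we record for concreteness. The flow of $R_{\alpha_1}$ becomes \[ \varphi_t(z_0,z_1,z_2) = \left( e^{\frac{2it}{n+1}}z_0, \ e^{it}z_1, \ e^{it}z_2 \right). \] Orbits lying in $\{z_0 = 0\}$ have period $2\pi$; there are no orbits with $z_1 = z_2 = 0$, since the defining equation $z_0^{n+1}+z_1^2+z_2^2 = 0$ would then force $z_0 = 0$ as well; and an orbit through a point with $z_0 \neq 0$ and $(z_1,z_2) \neq (0,0)$ has period $(n+1)\pi$ for $n$ odd and $2(n+1)\pi$ for $n$ even. In particular every point lies on a closed orbit, so $\alpha_1$ is highly degenerate and the perturbation below is needed.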
We define the following change of coordinates to go from $\Sigma(n+1,2,2)$ with defining function $f=z_0^{n+1} + z_1^2+z_2^2$ to $L_{A_n}$ with defining function $f_{A_n}= w_0^{n+1} + 2w_1w_2.$ \begin{equation}\label{coorchange} \Psi(w_0,w_1,w_2) = \left(\underbrace{w_0}_{{:=z_0}} \ , \underbrace{\tfrac{\sqrt{2}}{2}(w_1+w_2)}_{:=z_1} \ , \underbrace{\tfrac{\sqrt{2}}{2}(-iw_1+iw_2)}_{:=z_2} \right) \end{equation} We obtain \begin{align} \Psi^*f(w_0,w_1,w_2)= w_0^{n+1} + 2w_1w_2. \end{align} Then the pull-back of \[ \frac{\alpha_1}{2} = \frac{i}{8}\sum_{j=0}^m a_j (z_j d\bar{z}_j - \bar{z}_jdz_j) \] is given by \[ \frac{\Psi^*\alpha_1}{2} = \frac{(n+1)i}{8}(w_0 d\overline{w}_0 - \overline{w}_0 dw_0) + \frac{i}{4}(w_1 d\overline{w}_1 - \overline{w}_1dw_1 + w_2 d\overline{w}_2 - \overline{w}_2 dw_2).\] We now construct the Hamiltonian function \[H(w)=|w|^2+ \epsilon(|w_{1}|^2-|w_{2}|^2).\] We choose $0<\epsilon<1$ small and irrational, so that $H(w)$ is positive on $S^5$, and define the contact form \begin{equation} \alpha_\epsilon= \frac{\Psi^*\alpha_1}{2H}. \end{equation} \begin{rem} The above shows that $(\Sigma(n+1,2,2), \ker \alpha_1)$ is contactomorphic to $(\Psi(\Sigma(n+1,2,2)), \ker \alpha_\epsilon)$. Moreover $L_{A_n}=\Psi(\Sigma(n+1,2,2))$, where $L_{A_n}$ was defined in $(\ref{linkeq})$. \end{rem} \begin{prop}\label{perturbedreebprop} The Reeb vector field for $\alpha_\epsilon$ is \begin{align}\label{perturbedreeb} R_{\alpha_\epsilon} & =\frac{4i}{n+1}w_0\frac{\partial}{\partial w_0}-\frac{4i}{n+1}\overline{w}_0\frac{\partial}{\partial \overline{w}_0} + 2i(1+\epsilon)\left(w_{1}\frac{\partial}{\partial w_{1}}-\overline{w}_{1}\frac{\partial}{\partial \overline{w}_{1}} \right)\notag \\ & + 2i(1 - \epsilon) \left(w_{2} \frac{\partial}{\partial w_{2}} - \overline w_{2}\frac{\partial}{\partial \overline w_{2}}\right) \notag \\ & = \left( \frac{4i}{n+1}w_0,2i(1+\epsilon)w_1,2i(1-\epsilon)w_2\right).
\end{align} \end{prop} \begin{rem} The second formulation of the Reeb vector field is equivalent to the first in the above Proposition via the standard identification of real and complex coordinates, as explained in Example \ref{3-sphere}, equation (\ref{reeb3sphere2}). \end{rem} Before proving Proposition \ref{perturbedreebprop} we need the following lemma. \begin{lem}\label{helper} On $\mathbb{C}^3$, the vector field \begin{align} X(w) = \frac{1}{2}\left(\sum_{j=0}^{2} w_j\frac{\partial}{\partial w_j} + \overline{w}_j\frac{\partial}{\partial \overline{w}_j}\right) \end{align} is a Liouville vector field for the symplectic form \[\omega_1=\frac{d(\Psi^*\alpha_1)}{2}=\frac{i(n+1)}{4}dw_0 \wedge d\overline{w}_0 + \frac{i}{2}\sum_{j=1}^{2} dw_j \wedge d\overline{w}_j.\] The Hamiltonian vector field $X_H$ of $H$ with respect to $\omega_1$ is $-R_{\alpha_\epsilon}$, as in \emph{(\ref{perturbedreeb})}. \end{lem} \begin{proof} Recall that the condition to be a Liouville vector field is $\mathcal{L}_X \omega_1 = \omega_1$. We show this with Cartan's formula: \begin{align*} \mathcal{L}_X\omega_1 & = \iota_X d\omega_1 + d(\iota_X \omega_1) \\ & = d(\iota_X \omega_1). \end{align*} We do the explicit calculation for the first term, and the remaining terms follow similarly: \begin{align*} d \left( \frac{i(n+1)}{4} dw_0 \wedge d\overline{w}_0 \left( \frac{1}{2} \left( w_0 \frac{\partial}{\partial w_0} + \overline{w}_0 \frac{\partial}{\partial \overline{w}_0}\right), \cdot \right) \right) & = d \left( \frac{i(n+1)}{8} \left( w_0 d\overline{w}_0 - \overline{w}_0 dw_0 \right) \right) \\ & = \frac{i(n+1)}{8} \left( dw_0 \wedge d\overline{w}_0 - d\overline{w}_0 \wedge dw_0 \right) \\ & = \frac{i(n+1)}{4} dw_0 \wedge d\overline{w}_0, \end{align*} so $X(w)$ is indeed a Liouville vector field for $\omega_1$. \\ Next we prove that $\omega_1(-R_{\alpha_\epsilon},\cdot) = dH(\cdot)$.
First we calculate $dH$, \[ dH = \sum_{j = 0}^{2} \left( w_j d\overline{w}_j + \overline{w}_j dw_j\right) + \epsilon(w_{1} d\overline{w}_{1} + \overline{w}_{1} dw_1 - w_{2} d\overline{w}_{2} - \overline{w}_{2}dw_{2}).\] Then we compare the coefficients of $dH$ to the coefficients of $\omega_1(-R_{\alpha_\epsilon},\cdot)$ associated to each term, $(dw_i \wedge d\overline{w}_i)$. The $(dw_0 \wedge d\overline{w}_0)$ term is \begin{align*} \frac{i(n+1)}{4} dw_0 \wedge d\overline{w}_0 \left(-\frac{4i}{n+1} w_0 \frac{\partial}{\partial w_0} + \frac{4i}{n+1} \overline{w}_0 \frac{\partial}{\partial \overline{w}_0} ,\cdot\right) & = \frac{i(n+1)}{4} \left(- \frac{4i}{n+1} w_0 d\overline{w}_0 - \frac{4i}{n+1} \overline{w}_0 dw_0 \right) \\ & = w_0 d\overline{w}_0 + \overline{w}_0 dw_0. \end{align*} The $(dw_{1} \wedge d\overline{w}_{1})$ term is \begin{align*} \frac{i}{2} dw_{1} \wedge d\overline{w}_{1} \left( -2i(1 + \epsilon) w_{1} \frac{\partial}{\partial w_{1} } + 2i(1 + \epsilon) \overline{w}_{1} \frac{\partial}{\partial \overline{w}_{1}}, \cdot \right) & = \frac{i}{2} \left( - 2i(1 + \epsilon)w_{1} d\overline{w}_{1} - 2i(1 + \epsilon)\overline{w}_{1}dw_{1}\right) \\ & = (1 + \epsilon) w_{1} d\overline{w}_{1} + (1 + \epsilon) \overline{w}_{1} dw_{1}. \end{align*} The $(dw_2 \wedge d\overline{w}_2)$ term is obtained similarly. Summing the terms yields $\omega_1(-R_{\alpha_\epsilon},\cdot) = dH(\cdot)$. \end{proof} \begin{proof}[Proof of Proposition \ref{perturbedreebprop}] First we show that $X_H =-R_{\alpha_\epsilon}$ is tangent to the link $\Psi(\Sigma(n+1,2,2) )$. We compute \begin{align*} (\Psi^*df)(R_{\alpha_\epsilon}) & = \left( (n+1)w_0^n dw_0 + 2w_{1}dw_{2} + 2w_{2} dw_1 \right) (R_{\alpha_\epsilon}) \\ & = 4i w_0^{n+1} + 4i(1 - \epsilon) w_{1}w_{2} + 4i(1 + \epsilon) w_{1}w_{2} \\ & = 4i (\Psi^*f) \\ & = 0, \end{align*} where the last equality holds because $\Psi^*f$ vanishes along $\Psi(\Sigma(n+1,2,2) )$. Now we have to show that $\dfrac{\Psi^*\alpha_1}{2}(X_H) = -H$.
We have \begin{align*} \dfrac{\Psi^*\alpha_1}{2}\left(\cdot\right) & = \iota_X\omega_1(\cdot) = \omega_1(X(w),\cdot) = - \omega_1(\cdot,X(w)) \\ \dfrac{\Psi^*\alpha_1}{2}(X_H) & = -\omega_1(X_H,X(w)) = - dH(X(w)) \\ & = - |w|^2 - \epsilon (|w_{1}|^2 - |w_{2}|^2) \\ & = -H. \end{align*} From these, we conclude \begin{align*} \alpha_\epsilon(X_H) & = -\frac{1}{H}H = -1 \\ d\alpha_\epsilon(X_H,\cdot) & = - \frac{1}{2H^2} (dH \wedge \Psi^* \alpha_1)(X_H,\cdot) + \frac{1}{2H} d\Psi^*\alpha_1(X_H,\cdot) \\ & = - \frac{1}{2H^2} dH(X_H) \Psi^*\alpha_1(\cdot) + \frac{1}{2H^2} \Psi^*\alpha_1(X_H) dH(\cdot) + \frac{1}{H} \omega_1 (X_H,\cdot) \\ & = - \frac{1}{2H^2}\omega_1(X_H,X_H) \Psi^*\alpha_1(\cdot) - \frac{1}{H} dH(\cdot) + \frac{1}{H} dH(\cdot) \\ & = 0. \end{align*} By Lemma \ref{helper}, we know $-X_H = R_{\alpha_\epsilon}$, so the result follows. \end{proof} \subsection{Isolated Reeb Orbits} \hspace{\fill} \\ In this short section, we prove the following proposition. \begin{prop} The only simple periodic Reeb orbits of $R_{\alpha_\epsilon}$ are nondegenerate and defined by \begin{align*} \gamma_+(t) & = (0,e^{2i(1 + \epsilon)t},0), \quad \quad 0 \le t \le \frac{\pi}{1 + \epsilon} \\ \gamma_-(t) & = (0,0,e^{2i(1 - \epsilon)t}), \quad \quad 0 \le t \le \frac{\pi}{1 - \epsilon}. \end{align*} \end{prop} \begin{proof} The flow of \[ R_{\alpha_\epsilon} = \left( \frac{4i}{n + 1}w_0,2i(1 + \epsilon)w_1,2i(1 - \epsilon)w_2\right)\] is given by \[\varphi_t(w_0,w_1,w_2) = \left(e^{\frac{4it}{n+1}}w_0,e^{2i(1+\epsilon)t}w_{1},e^{2i(1-\epsilon)t}w_{2}\right).\] Since $\epsilon$ is small and irrational, the only possible periodic trajectories are \begin{align*} \gamma_0(t) & = (e^{\frac{4i}{n+1}t},0,0) \\ \gamma_+(t) & = (0,e^{2i(1 + \epsilon)t},0) \\ \gamma_-(t) & = (0,0,e^{2i(1 - \epsilon)t}). \end{align*} It is important to note that the first trajectory does not lie in $\Psi(\Sigma(n+1,2,2))$, but rather in the ambient space $\mathbb{C}^3$.
This is because the point $\gamma_0(0) = (1,0,0)$ is not a zero of $f_{A_n}=w_0^{n+1}+2w_1w_2$. Next we need to check that the linearized return maps $d\varphi|_\xi$ associated to $\gamma_+$ and $\gamma_-$ have no eigenvalues equal to 1. We consider the first orbit $\gamma_+$ of period $\pi/(1 + \epsilon)$, as a similar argument applies to the return flow associated to $\gamma_-$. The differential of its total return map is \[ d\varphi_{T} = \left. \begin{pmatrix} e^{\frac{4iT}{n+1}} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & e^{2i(1 - \epsilon)T} \end{pmatrix}\right\arrowvert_{T=\frac{\pi}{1+\epsilon}}. \] Since $\epsilon$ is a small irrational number, the total return map has exactly one eigenvalue equal to 1. The eigenvector associated to this eigenvalue points in the direction of the Reeb orbit $\gamma_+$, but since we are restricting the return map to $\xi$, we can conclude that $\gamma_+$ is nondegenerate. \end{proof} \subsection{Computation of the Conley-Zehnder index}\hspace{\fill} \\ To compute the Conley-Zehnder indices of the Reeb orbits in Theorem \ref{CZcomputation} we use the same method as in \cite{U}, extending the Reeb flow to give rise to a symplectomorphism of $\mathbb{C}^3\setminus \{\mathbf{0} \}$. This permits us to do the computations in $\mathbb{C}^3$, equipped with the symplectic form \[ \omega_1=\frac{d(\Psi^*\alpha_1)}{2}=\frac{i(n+1)}{4}dw_0 \wedge d\overline{w}_0 + \frac{i}{2}\sum_{j=1}^{2} dw_j \wedge d\overline{w}_j. \] We may equip the contact structure $\xi_0$ with the symplectic form $\omega = d\alpha_1$ instead of $d\alpha_\epsilon$ when computing the Conley-Zehnder indices. This is because $ \ker \alpha_\epsilon = \ker \alpha_1 = \xi_0$, as $\alpha_\epsilon = \frac{1}{2H} \Psi^*\alpha_1$ with $H > 0$, and because $\omega|_\xi = Hd\alpha_\epsilon|_\xi$ and $H$ is constant along Reeb trajectories.
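For the reader's convenience, here is the short computation behind the last identity; it is a direct application of the Leibniz rule. Since $\alpha_\epsilon = \frac{1}{2H}\Psi^*\alpha_1$, we have \[ d\alpha_\epsilon = -\frac{1}{2H^2}\, dH \wedge \Psi^*\alpha_1 + \frac{1}{2H}\, d(\Psi^*\alpha_1) = -\frac{1}{2H^2}\, dH \wedge \Psi^*\alpha_1 + \frac{1}{H}\,\omega_1. \] For $v, u \in \xi = \ker \Psi^*\alpha_1$ the first term vanishes, because \[ (dH \wedge \Psi^*\alpha_1)(v,u) = dH(v)\,\Psi^*\alpha_1(u) - dH(u)\,\Psi^*\alpha_1(v) = 0, \] so $\omega_1|_\xi = H \, d\alpha_\epsilon|_\xi$.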
Our first proposition shows that we can construct a standard symplectic basis for the symplectic complement \[ \xi^\omega = \{ v \in \mathbb{C}^3 \ | \ \omega(v,u) = 0 \text{ for all $u \in \xi$}\} \] of $\xi$ in $\mathbb{C}^3$. As a result, $c_1(\xi^\omega)=0$. Since $c_1(\mathbb{C}^3)=0$, we know $c_1(\xi)=0$. Thus we may compute the Conley-Zehnder indices in the ambient space $\mathbb{C}^3$ and use additivity of the Conley-Zehnder index under direct sums of symplectic paths to compute it in $\xi$. \begin{prop} There exists a standard symplectic basis for the symplectic complement $\xi^\omega$ with respect to $\omega = d\alpha_1$. \end{prop} \begin{proof} Notice that $\xi^\omega = \mbox{span}(X_1, Y_1,X_2,Y_2)$ where \begin{align*} X_1 & = (\bar{w}_0^n,\bar{w}_1,\bar{w}_2) \quad Y_1 = iX_1 \\ X_2 & = R_{\alpha_\epsilon} \quad \quad \quad \quad \quad Y_2 = w. \end{align*} We make this into a standard symplectic basis for $\xi^\omega$ via a Gram-Schmidt process. The new basis is given by: \[ \begin{array}{rclc rcl} \tilde X_1 & = & \dfrac{X_1}{\sqrt{\omega(X_1,Y_1)}} & \ \ \ \ \ \ & \tilde Y_1 & =& \dfrac{Y_1}{\sqrt{\omega(X_1,Y_1)}} = i \tilde X_1 \\ &&&&&& \\ \tilde X_2 &=& X_2 &\ \ \ \ \ \ & \tilde Y_2 &= & Y_2 - \dfrac{\omega(X_1,Y_2)Y_1 - \omega(Y_1,Y_2)X_1}{\omega(X_1,Y_1)} \\ &&&&&& \\ &&&& \ \ \ \ \ \ &= &Y_2 - \dfrac{n-1}{2}\dfrac{w_0^{n+1}}{\omega(X_1,Y_1)}X_1. \\ \end{array} \] This is a standard basis for the symplectic vector space $\xi^\omega$, i.e. the form $\omega$ in this basis is given by \[\begin{pmatrix} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} & \\ & \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \end{pmatrix}.\] \end{proof} Now we are ready to prove the Conley-Zehnder index formula in Theorem \ref{CZcomputation}.
\begin{prop} The Conley-Zehnder index for $\gamma = \gamma_{\pm}^N$ in $0 \le t \le \frac{N\pi}{1 \pm \epsilon}$ is \begin{align} \mu_{CZ}(\gamma_{\pm}^N) = 2\left( \left\lfloor \frac{2N}{(n+1)(1 \pm \epsilon)}\right \rfloor + \left\lfloor \frac{N(1 \mp \epsilon)}{1 \pm \epsilon} \right\rfloor - \left \lfloor \frac{2N}{1 \pm \epsilon} \right \rfloor \right) + 2N + 1. \end{align} \end{prop} \begin{proof} The Reeb flow $\varphi$ which we introduced in the previous section can be extended to a flow on $\mathbb{C}^3 $, which we also denote by $\varphi$. The action of the extended Reeb flow on $\mathbb{C}^3$ is given by: \[ \begin{array}{lclclcl} d\varphi_t(w)\tilde X_1 & = & e^{4it}\tilde X_1(\varphi_t(w)) & \ \ \ &d\varphi_t(w)\tilde Y_1 & = & e^{4it}\tilde Y_1(\varphi_t(w)) \\ d\varphi_t(w)\tilde X_2 & = &\tilde X_2(\varphi_t(w)) & \ \ \ &d\varphi_t(w)\tilde Y_2 &=& \tilde Y_2(\varphi_t(w)). \\ \end{array} \] Define \[ \Phi := d\varphi_t \big |_{\mathbb{C}^3} = \diag \left(e^{\frac{4i}{n+1}t},e^{2i(1+\epsilon)t},e^{2i(1 - \epsilon)t}\right). \] We can now use the additivity of the Conley-Zehnder index under direct sums of symplectic paths, Theorem \ref{CZprop} (4), to get \begin{equation}\label{CZsum} \mu_{CZ}(\gamma_\pm) = \mu_{CZ}(\Phi) - \mu_{CZ}(\Phi_{\xi^\omega}), \end{equation} where \[ \Phi_{\xi^\omega} := d\varphi_t \big |_{\xi^\omega} = \diag \left(e^{4it},1\right). \] The right hand side of (\ref{CZsum}) is easily computed via the crossing form; see \cite[Rem 5.4]{RS}. In particular we have \[ \mu_{CZ}\left(\{e^{it}\} \big |_{t\in [0,T]}\right) = \left\{ \begin{array}{ll} \dfrac{T}{\pi}, & T \in 2\pi \mathbb{Z} \\ &\\ 2 \left \lfloor \dfrac{T}{2\pi} \right \rfloor + 1,\ \ \ & \text{otherwise.} \end{array} \right.
\] \noindent Thus for $\{ \Phi(t) \} = \{ e^{4it/(n+1)}\oplus e^{2it(1 + \epsilon)} \oplus e^{2it(1 - \epsilon)} \}$ with $0 \leq t \leq T$ we obtain: \\ \begin{align*} \mu_{CZ}(\Phi) & = \left\{ \begin{array}{ll} \dfrac{4T}{(n+1)\pi}, & T \in \frac{(n+1)\pi}{2}\mathbb{Z} \\ &\\ 2 \left\lfloor \dfrac{2T}{(n+1)\pi}\right \rfloor + 1,\ \ \ & T \notin \frac{(n+1)\pi}{2}\mathbb{Z} \end{array}\right. \ \ \ \ \ + \ \ \ \left\{ \begin{array}{ll} \dfrac{2T(1+\epsilon)}{\pi}, & T \in \frac{\pi}{1 + \epsilon}\mathbb{Z} \\ &\\ 2 \left\lfloor \dfrac{T(1 + \epsilon)}{\pi} \right\rfloor + 1,\ \ \ & T \notin \frac{\pi}{1 + \epsilon}\mathbb{Z} \end{array}\right. \\ &\\ & + \left\{ \begin{array}{ll} \dfrac{2T(1-\epsilon)}{\pi}, & T \in \frac{\pi}{1 - \epsilon}\mathbb{Z} \\ &\\ 2 \left \lfloor \dfrac{T(1 - \epsilon)}{\pi}\right \rfloor + 1,\ \ \ & T \notin \frac{\pi}{1 - \epsilon}\mathbb{Z}. \end{array}\right. \end{align*} \noindent Likewise for $\Phi_{\xi^\omega}$ with $0 \leq t \leq T$ we obtain: \begin{align*} \mu_{CZ}(\Phi_{\xi^\omega}) & = \left\{ \begin{array}{ll} \dfrac{4T}{\pi}, & T \in \frac{\pi}{2}\mathbb{Z} \\ &\\ 2 \left \lfloor \dfrac{2T}{\pi} \right \rfloor + 1,\ \ \ & T \notin \frac{\pi}{2}\mathbb{Z}. \end{array}\right. \end{align*} Hence we get that the Conley-Zehnder index for $\gamma_{\pm}^N$ in $0 \le t \le \frac{N\pi}{1 \pm \epsilon}$ is given by: \begin{equation} \mu_{CZ}(\gamma_{\pm}^N) = 2\left( \left\lfloor \frac{2N}{(n+1)(1 \pm \epsilon)}\right \rfloor + \left\lfloor \frac{N(1 \mp \epsilon)}{1 \pm \epsilon} \right\rfloor - \left \lfloor \frac{2N}{1 \pm \epsilon} \right \rfloor \right) + 2N + 1. \end{equation} \end{proof} \section{Proof of Theorem \ref{lenslinkcontacto}}\label{sectionlinklens} This section proves that $(L_{A_n},\xi_0)$ and $(L(n+1,n),\xi_{std})$ are contactomorphic. This is done by constructing a 1-parameter family of contact manifolds via a canonically defined Liouville vector field and applying Gray's stability theorem.
\subsection{Contact geometry of $(L(n+1,n),\xi_{std})$}\label{contactgeomlens}\hspace{\fill} \\ The lens space $L(n+1,n)$ is obtained via the quotient of $S^3$ by the binary cyclic subgroup $A_n \subset SL(2,\mathbb{C})$. The subgroup $A_n$ is given by the action of $\mathbb{Z}_{n+1}$ on $\mathbb{C}^2$ defined by \begin{align*} \begin{pmatrix} u \\ v \end{pmatrix} \mapsto \begin{pmatrix} e^{2\pi i/(n+1)} & 0 \\ 0 & e^{2 n\pi i/(n+1)} \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix} . \\ \end{align*} The following construction shows that $L(n+1,n)$ is homeomorphic to $L_{A_n}$. This construction will be needed later on in another proof, so we explain it here to set up the notation. The origin is the only fixed point of the $A_n$ action on $\mathbb{C}^2$ and hence is an isolated quotient singularity of $\mathbb{C}^2/A_n$. We can represent $\mathbb{C}^2/A_n$ as a hypersurface of $\mathbb{C}^3$ as follows. Consider the monomials \[ z_0 := uv, \quad z_1 := \tfrac{i}{\sqrt{2}}u^{n+1}, \quad z_2 := \tfrac{i}{\sqrt{2}}v^{n+1} .\] These are invariant under the action of $A_n$ and satisfy the equation $z_0^{n+1} + 2z_1z_2 = 0$. Recall that \[ f_{A_n}(z_0,z_1,z_2) = z_0^{n+1} + 2z_1z_2, \] and \[ L_{A_n}=S^5 \cap f_{A_n}^{-1}(0). \] Moreover, \begin{equation}\label{varphieq} \begin{array}{llcl} \tilde\varphi: &\mathbb{C}^2 &\to& \mathbb{C}^3 \\ &(u,v) &\mapsto &(uv,\tfrac{i}{\sqrt{2}}u^{n+1},\tfrac{i}{\sqrt{2}}v^{n+1})\\ \end{array} \end{equation} descends to the map \[ \varphi: \mathbb{C}^2/A_n \to \mathbb{C}^3, \] which maps $\mathbb{C}^2/A_n$ homeomorphically onto the hypersurface $f^{-1}_{A_n}(0)$. Rescaling away from the origin of $\mathbb{C}^3$ yields a homeomorphism between $\varphi(S^3/A_n)$ and $L_{A_n}$. As 3-manifolds which are homeomorphic are also diffeomorphic \cite{moise}, we obtain the following proposition. \begin{prop} $L(n+1,n)$ is diffeomorphic to $L_{A_n}$.
\end{prop} \begin{rem} In order to prove that two manifolds are contactomorphic, one must either construct an explicit diffeomorphism or make use of Gray's stability theorem. Unfortunately, $\varphi$ is not a diffeomorphism onto its image when $u=0$ or $v=0$. As the above diffeomorphism is only known to exist abstractly, we will need to appeal to the latter method to prove that $(L_{A_n},\xi_0)$ and $(L(n+1,n),\xi_{std})$ are contactomorphic. As a result, this proof is rather involved. \end{rem} Our application of Gray's stability theorem uses the flow of a Liouville vector field to construct a 1-parameter family of contactomorphisms. First we prove that $L(n+1,n)$ is a contact manifold whose contact structure descends from the quotient of $S^3$. Consider the standard symplectic form on $\mathbb{C}^2$ given by \begin{equation} \begin{array}{lcl} \omega_{\mathbb{C}^2}&=&d\lambda_{\mathbb{C}^2} \\ \lambda_{\mathbb{C}^2} &= &\dfrac{i}{2} \left(u d\bar u - \bar u du + v d\bar v - \bar v dv\right). \\ \end{array} \end{equation} The following proposition shows that $\lambda_{\mathbb{C}^2}$ restricts to a contact form on $L(n+1,n)$; we define $\xi_{std} := \ker \lambda_{\mathbb{C}^2}$ on $L(n+1,n)$. \begin{prop}\label{calcliou} The vector field \[ Y_0 = \frac{1}{2} \left( u\frac{\partial}{\partial u} + \bar{u} \frac{\partial }{\partial \bar{u}} + v\frac{\partial}{\partial v} + \bar{v} \frac{\partial }{\partial \bar{v}} \right) \] is a Liouville vector field on $(\mathbb{C}^2/A_n,\omega_{\mathbb{C}^2})$ away from the origin and transverse to $L(n+1,n)$. \end{prop} \begin{proof} We have that $\mathbb{C}^2/A_n$ is a smooth manifold away from the origin because the action of $A_n$ is free away from the origin. Write \[ S^3/A_n = \{ (u, v) \in \mathbb{C}^2/A_n\ \big | \ |u|^2+|v|^2 = 1\}. \] Then $L(n+1,n) =S^3/A_n$ is a regular level set of $g(u,v) = |u|^2+|v|^2$. With respect to the standard flat metric on $\mathbb{C}^2/A_n$ we have \[ Y_0 = \frac{1}{4} \nabla g. \] Thus $Y_0$ is transverse to $L(n+1,n)$.
Since \[ \mathcal{L}_{Y_0} \omega_{\mathbb{C}^2} = d(\iota_{Y_0}d\lambda_{\mathbb{C}^2}) = \omega_{\mathbb{C}^2}, \] we may conclude that $Y_0$ is indeed a Liouville vector field on $(\mathbb{C}^2/A_n, \omega_{\mathbb{C}^2})$ away from the origin. Thus by Proposition \ref{contacttype}, $L(n+1,n)$ is a hypersurface of contact type in $\mathbb{C}^2/A_n$. \end{proof} \subsection{The proof that $(L_{A_n},\xi_0)$ and $(L(n+1,n),\xi_{std})$ are contactomorphic}\hspace{\fill} \\ First we set up $L_{A_n}$ and $\varphi(L(n+1,n))$ as hypersurfaces of contact type in $f^{-1}_{A_n}(0) \setminus \{ \mathbf{0} \}$. Define $\rho : \mathbb{C}^3 \to \mathbb{R}$ by \[ \rho(z) = \frac{|z|^2 - 1}{4} = \frac{z_0\bar{z}_0 + \cdots + z_2\bar{z}_2 - 1}{4}. \] The standard symplectic structure on $\mathbb{C}^3$ is given by \[ \omega_{\mathbb{C}^3} = \frac{i}{2}( dz_0\wedge d\bar{z}_0 + \cdots + dz_2 \wedge d\bar{z}_2).\] Moreover, \begin{equation}\label{liouY} Y = \nabla \rho = \frac{1}{2} \sum_{j = 0}^2 \left( z_j \frac{\partial}{\partial z_j} + \bar{z}_j \frac{\partial}{\partial \bar{z}_j} \right) \end{equation} is a Liouville vector field for $(\mathbb{C}^3,\omega_{\mathbb{C}^3}).$ We define \[ \lambda_{\mathbb{C}^3}= \iota_Y \omega_{\mathbb{C}^3}. \] A standard calculation analogous to the proof of Proposition \ref{calcliou} shows that $Y$ is a Liouville vector field on $\left( f^{-1}_{A_n}(0) \setminus \{ \mathbf{0} \}, \omega_{\mathbb{C}^3} \right)$. \begin{rem} Both $\varphi(L(n+1,n))$ and $L_{A_n}$ are hypersurfaces of contact type in $\left( f^{-1}_{A_n}(0) \setminus \{ \mathbf{0} \}, \omega_{\mathbb{C}^3} \right)$.
Note that $\varphi(L(n+1,n))$ is in fact transverse to the Liouville vector field $Y$ because \[ \begin{array}{ccl} \varphi(L(n+1,n)) &=& \varphi \left( \left \{|u|^{2} + |v|^{2} = 1 \right \}/ A_n\right) \\ &=& \varphi ( \{|u|^{4} + 2|u|^2|v|^2+ |v|^{4} = 1 \}/ A_n) \\ &=&\left \{ 2|z_0|^2+4^{1/(n+1)}|z_1|^{4/(n+1)} + 4^{1/(n+1)} |z_2|^{4/(n+1)} = 1 \right \} \cap f_{A_n}^{-1}(0), \\ \end{array} \] and the function on the left-hand side of the last equation is strictly increasing along the radial flow of $Y$. \end{rem} We will want $\varphi(L(n+1,n))$ and $L_{A_n}$ to be disjoint in $f^{-1}_{A_n}(0)$. This is easily accomplished by rescaling the radius $r$ of the sphere in the definition of the link. \begin{definition} Define \[ L_{A_n}^r = f^{-1}_{A_n}(0) \cap S^5_r, \] with the assumption that $r$ has been chosen so that $\varphi(L(n+1,n))$ and $L_{A_n}^r$ are disjoint in $f^{-1}_{A_n}(0)$ and so that the flow of the Liouville vector field $Y$ ``hits" $\varphi(L(n+1,n))$ before $L_{A_n}^r$. \end{definition} The first result is the following lemma, which provides a 1-parameter family of diffeomorphic manifolds starting on $\varphi(L(n+1,n))$ and ending on $L_{A_n}^r$. First we set up some notation. Let \[ \psi:\mathbb{R} \times \left( f^{-1}_{A_n}(0) \setminus \{ \mathbf{0} \} \right) \to f^{-1}_{A_n}(0) \setminus \{ \mathbf{0} \} \] be the flow of $Y$, and let $\psi_t(z) = \gamma_z(t)$ be the unique integral curve passing through $z \in \varphi(L(n+1,n))$ at time $t = 0$. For any integral curve $\gamma$ of $Y$ we consider the following initial value problem: \begin{equation} \label{ivp} \begin{array}{ccl} \gamma'(t)&=&Y(\gamma(t))\\ \gamma(0)&=&z \in \varphi(L(n+1,n)) \\ \end{array} \end{equation} By means of the implicit function theorem and the properties of the Liouville vector field $Y$ we can prove the following claim. \begin{lem}\label{oneparameter} For every $\gamma_z$, there exists a $\tau(z) \in \mathbb{R}_{> 0}$ such that $\gamma_z(\tau(z)) \in L_{A_n}^r$. The time $\tau(z)$ depends smoothly on $z \in \varphi(L(n+1,n))$.
\end{lem} \begin{proof} In order to apply the implicit function theorem, we must show that \[ \frac{\partial (\rho \circ \gamma)}{\partial t} \neq 0 \] at every $(t,z)$ with $(\rho \circ \gamma)(t,z) =0$. Note that $\rho \circ \gamma$ is smooth. By the chain rule, \[ \left. \frac{\partial (\rho \circ \gamma)}{\partial t}\right|_{(s,p)} = \mbox{grad }\rho |_{\gamma(s,p)} \cdot \dot{\gamma}|_{(s,p)}, \] where $\dot{\gamma}|_{(s,p)} = \frac{\partial \gamma}{\partial t}|_{(s,p)}$. If $\mbox{grad } \rho \arrowvert_{\gamma(s,p)} \cdot \dot{ \gamma}|_{(s,p)} = 0$, then since $\mbox{grad } \rho \neq 0$, either $\mbox{grad } \rho$ fails to be transverse along $\{ (\rho \circ \gamma) \ (s,p)=0 \}$ or $ \dot{ \gamma}|_{(s,p)} = 0$. By construction grad $\rho = \nabla \rho$ is a Liouville vector field transverse to $L_{A_n}^r$. Furthermore, the conformal symplectic nature of a Liouville vector field implies that for any integral curve $\gamma$ satisfying the initial value problem given by equation (\ref{ivp}), $\dot{\gamma}|_{(s,p)} \neq 0$. Thus we see that the conditions for the implicit function theorem are satisfied and our claim is proven. \end{proof} \begin{rem}\label{helperrem} The time $\tau(z)$ can be normalized to 1 for each $z$, yielding a 1-parameter family of diffeomorphic contact manifolds $(M_t,\zeta_t)$ for $0 \le t \le 1$ given by \[ M_t = \psi_t( \varphi(L(n+1,n))), \quad \zeta_t = TM_t \cap J_{\mathbb{C}^3} (TM_t),\] where \[ M_0 = \psi_0(\varphi(L(n+1,n))) = \varphi(L(n+1,n)), \quad M_1 = \psi_1 (\varphi(L(n+1,n))) = L_{A_n}^r.\] \end{rem} Moreover, we can relate the standard contact structure on $L(n+1,n)$ to the contact structure induced on its image under $\varphi$. To avoid excessive parentheses, we use $S^3/A_n$ in place of $L(n+1,n)$ in this lemma.
\begin{lem}\label{technicalphi} On $\varphi(S^3/A_n)$, $\varphi_*\xi_{std}= T(\varphi(S^3/A_n)) \cap J_{\mathbb{C}^3} (T(\varphi(S^3/A_n)))$. \end{lem} \begin{proof} Since $A_n \subset SL(2,\mathbb{C})$ we have \[ \tilde \varphi_* (J_{\mathbb{C}^2}TS^3) = J_{\mathbb{C}^3}(T\tilde \varphi(S^3)). \] We examine $\varphi_*\big(\xi_{\text{std}}\big)$: \begin{align*} \varphi_*(T(S^3/A_n) \cap J_{\mathbb{C}^2} T(S^3/A_n)) &=\tilde \varphi_*(TS^3 \cap J_{\mathbb{C}^2}(TS^3)) \\ &=\tilde \varphi_*(TS^3) \cap \tilde \varphi_*(J_{\mathbb{C}^2}(TS^3)) \\ &=\tilde \varphi_*(TS^3) \cap J_{\mathbb{C}^3}\tilde \varphi_*(TS^3) \\ &= T\tilde\varphi(S^3) \cap J_{\mathbb{C}^3}(T\tilde\varphi(S^3)) \\ &= T(\varphi(S^3/A_n)) \cap J_{\mathbb{C}^3}(T\varphi(S^3/A_n)). \end{align*} \end{proof} Lemmas \ref{oneparameter} and \ref{technicalphi} in conjunction with Remark \ref{helperrem} and Lemma \ref{graycor} yield the following proposition. \begin{prop}\label{propatlast} The image $(\varphi(L(n+1,n)), \varphi_*\xi_{std})$ of the lens space is contactomorphic to $(L_{A_n}, \xi_{0})$. \end{prop} It remains to show that $(\varphi(L(n+1,n)), \varphi_*\xi_{std})$ is contactomorphic to $(L(n+1,n),\xi_{std})$. To accomplish this, we use Moser's Lemma to prove the following lemma. \begin{lem}\label{moserlem} The manifolds $(\mathbb{C}^2 \setminus \{ \mathbf{0} \},d\lambda_{\mathbb{C}^2})$ and $(\mathbb{C}^2 \setminus \{\mathbf{0} \}, d\tilde\varphi^*\lambda_{\mathbb{C}^3})$ are symplectomorphic. \end{lem} \begin{proof} Consider the family of 2-forms \[ \omega_t = (1 - t)\omega_{\mathbb{C}^2} + t\tilde \varphi^*\omega_{\mathbb{C}^3}\] for $0 \le t \le 1$.
Then $\omega_t$ is exact because $Y_0$ and $Y$ are Liouville vector fields for $\mathbb{C}^2\setminus \mathbf{0}$ equipped with the symplectic forms $\omega_{\mathbb{C}^2}$ and $\omega_{\mathbb{C}^3}$ respectively; thus $d\lambda_t = \omega_t$, where \[\lambda_t = (1 - t)\lambda_{\mathbb{C}^2} + t\tilde\varphi^*(\lambda_{\mathbb{C}^3})\] for $0 \le t \le 1$. We claim that $\omega_t$ is a symplectic form for each $t \in [0,1]$. We compute \begin{align*} \frac{2}{i} \tilde\varphi^*d\lambda_{\mathbb{C}^3} & = d(uv)\wedge d(\overline{uv} ) + d(u^{n+1})\wedge d(\bar u^{n+1}) + d(v^{n+1}) \wedge d(\bar v^{n+1}) \\ & = ((n+1)^2 |u|^{2n} + |v|^2)du\wedge d\bar u + 2\Re (u\bar v dv \wedge d\bar u) + ((n+1)^2|v|^{2n} + |u|^2) dv\wedge d\bar v. \end{align*} Since $\omega_t$ is exact for each $t\in[0,1]$, it is in particular closed: $d\omega_t=0$ for each $t\in[0,1]$. Moreover, a simple calculation reveals that $\omega_t \wedge \omega_t$ is a volume form on $\mathbb{C}^2$ for each $t\in[0,1]$. Thus $\omega_t$ is a symplectic form for each $t\in[0,1]$. Applying Moser's argument, Theorem \ref{moser}, yields the desired result. \end{proof} This yields the desired corollary. \begin{cor}\label{atlast} The manifolds $(L(n+1,n), \ker \lambda_{\mathbb{C}^2})$ and $(L(n+1,n), \ker \varphi^*\lambda_{\mathbb{C}^3})$ are contactomorphic. \end{cor} \begin{proof} Let $\phi:(\mathbb{C}^2 \setminus \{ \mathbf{0} \},d\lambda_{\mathbb{C}^2}) \to (\mathbb{C}^2 \setminus \{ \mathbf{0} \}, d\tilde\varphi^*\lambda_{\mathbb{C}^3})$ be the symplectomorphism provided by Lemma \ref{moserlem}. It induces the desired contactomorphism. On $\mathbb{C}^2 \setminus \{ \mathbf{0} \}$, \[ \phi^*d(\varphi^*\lambda_{\mathbb{C}^3}) = d\lambda_{\mathbb{C}^2}, \] thus \[ d\phi^*(\varphi^*\lambda_{\mathbb{C}^3}) = d\lambda_{\mathbb{C}^2}.
\] So indeed on $L(n+1,n)$, \[ \begin{array}{lcl} \phi_*(\xi_{std}) &=& \phi_*(\ker \lambda_{\mathbb{C}^2}) \\ &=& \ker \varphi_* \lambda_{\mathbb{C}^3} \\ & =& \varphi_* \xi_{std}. \\ \end{array} \] \end{proof} Proposition \ref{propatlast} and Corollary \ref{atlast} complete the proof of Theorem \ref{lenslinkcontacto}. \end{document}
\begin{document} \title[One distribution function on the Moran sets]{One distribution function on the Moran sets} \author{Symon Serbenyuk} \address{ 45~Shchukina St. \\ Vinnytsia \\ 21012 \\ Ukraine} \email{simon6@ukr.net} \subjclass[2010]{11K55, 11J72, 28A80, 26A09} \keywords{ s-adic representation, Moran set, Hausdorff dimension, monotonic function, distribution function.} \begin{abstract} In the present article, topological, metric, and fractal properties of certain sets are investigated. These sets are images, under a map $f$ that is a certain distribution function, of sets whose elements satisfy restrictions on the digits or combinations of digits used in their s-adic representations. \end{abstract} \maketitle \section{Introduction} Let us consider the space $\mathbb R^n$. In \cite{Moran1946}, P. A. P. Moran introduced the following construction of sets and calculated the Hausdorff dimension of the limit set \begin{equation} \label{eq: Cantor-like set} E=\bigcap^{\infty} _{n=1}{\bigcup_{i_1,\dots , i_n\in A_{0,p}}{\Delta_{i_1i_2\ldots i_n}}}. \end{equation} Here $p$ is a fixed positive integer, $A_{0,p}=\{1, 2, \dots , p\}$, and the sets $\Delta_{i_1i_2\ldots i_n}$ are basic sets having the following properties: \begin{itemize} \item the basic sets $\Delta_{i_1i_2\ldots i_n}$ are closed and pairwise disjoint; \item for any $i\in A_{0,p}$ the condition $\Delta_{i_1i_2\ldots i_ni}\subset\Delta_{i_1i_2\ldots i_n}$ holds; \item $$ \lim_{n\to\infty}{d\left(\Delta_{i_1i_2\ldots i_n}\right)}=0, \text{ where $d(\cdot)$ is the diameter of a set}; $$ \item each basic set is the closure of its interior; \item at each level the basic sets do not overlap (their interiors are disjoint); \item any basic set $\Delta_{i_1i_2\ldots i_ni}$ is geometrically similar to $\Delta_{i_1i_2\ldots i_n}$; \item $$ \frac{d\left(\Delta_{i_1i_2\ldots i_ni}\right)}{d\left(\Delta_{i_1i_2\ldots i_n}\right)}=\sigma_i, $$ where $\sigma_i\in (0,1)$ for $i=\overline{1,p}$.
\end{itemize} The Hausdorff dimension $\alpha_0$ of the set $E$ is the unique root of the following equation $$ \sum^{p} _{i=1}{\sigma^{\alpha_0} _i}=1. $$ It is easy to see that the set \eqref{eq: Cantor-like set} is a Cantor-like set and a self-similar fractal. The set $E$ is called \emph{the Moran set}. Much research has been devoted to Moran-like constructions and Cantor-like sets (for example, see \cite{{Falconer1997},{Falconer2004}, {Mandelbrot1977}, {PS1995}, DU2014, DU2014(2), HRW2000, {S.Serbenyuk 2017}, {S. Serbenyuk fractals}} and references therein). Fractal sets are widely applied in computer design, data-compression algorithms, quantum mechanics, solid-state physics, and the analysis and classification of signals of various forms arising in different areas (e.g., the analysis of exchange-rate fluctuations in economics). In addition, such sets are useful for checking whether certain functions preserve the Hausdorff dimension \cite{{S. Serbenyuk abstract1},{S.Serbenyuk 2017}}. However, for many classes of fractals the calculation of the Hausdorff dimension is a difficult problem, and the estimation of the parameters on which the Hausdorff dimension of certain classes of fractal sets depends is often left out of consideration. Let $s>1$ be a fixed positive integer. Let us consider the s-adic representation of numbers from~$[0,1]$: $$ x=\Delta^s _{\alpha_1\alpha_2...\alpha_n...}=\sum^{\infty} _{n=1}{\frac{\alpha_n}{s^n}}, $$ where $\alpha_n\in A=\{0,1,\dots, s-1\}$. In addition, we say that the following representation $$ x=\Delta^{-s }_{\alpha_1\alpha_2...\alpha_n...}=\sum^{\infty} _{n=1}{\frac{\alpha_n}{(-s)^n}} $$ is the nega-s-adic representation of numbers from $\left[-\frac{s}{s+1}, \frac{1}{s+1}\right]$. Here $\alpha_n\in A$ as well. Some articles (see \cite{ DU2014, DU2014(2), {S. Serbenyuk fractals},{S. Serbenyuk abstract 2}, {S. Serbenyuk abstract 3},{S. Serbenyuk abstract 5}, {Symon1}, {Symon2}, {S. Serbenyuk 2013}, {S.
Serbenyuk 2017 fractals}}) were devoted to sets whose elements have certain restrictions on the use of combinations of digits in their s-adic representations. Let us consider the following results. Let $s>2$ be a fixed positive integer. Let us consider a class $\Upsilon_s$ of sets $\mathbb S_{(s,u)}$ represented in the form \begin{equation*} \mathbb S_{(s,u)}= \left\{x: x=\frac{u}{s-1} +\sum^{\infty} _{n=1} {\frac{\alpha_n - u}{s^{\alpha_1+\dots+\alpha_n}}}, (\alpha_n) \in L, \alpha_n \ne u, \alpha_n \ne 0 \right\}, \end{equation*} where $u=\overline{0,s-1}$, the parameters $u$ and $s$ are fixed for the set $\mathbb S_{(s,u)}$. That is, the class $\Upsilon_s$ contains the sets $\mathbb S_{(s,0)}, \mathbb S_{(s,1)},\dots,\mathbb S_{(s,s-1)}$. We denote by $\Upsilon$ the class of sets that contains the classes $\Upsilon_3, \Upsilon_4,\dots ,\Upsilon_n,\dots$. It is easy to see that the set $\mathbb S_{(s,u)}$ can be defined by the s-adic representation in the following form \begin{equation*} \mathbb S_{(s,u)}=\left\{x: x= \Delta^{s}_{{\underbrace{u\ldots u}_{\alpha_1-1}} \alpha_1{\underbrace{u\ldots u}_{\alpha_2 -1}}\alpha_2 ...{\underbrace{u\ldots u}_{ \alpha_n -1}}\alpha_n...}, (\alpha_n) \in L, \alpha_n \ne u, \alpha_n \ne 0 \right\}. \end{equation*} \begin{theorem}[\cite{{Symon2}, {S. Serbenyuk 2017 fractals}, {S. Serbenyuk fractals}}] \label{th: theorem1} For an arbitrary $u \in A$ the set $\mathbb S_{(s,u)}$ is an uncountable, perfect, nowhere dense set of zero Lebesgue measure, and a self-similar fractal whose Hausdorff dimension $\alpha_0 (\mathbb S_{(s,u)})$ satisfies the following equation $$ \sum _{p \ne u, p \in A_0} {\left(\frac{1}{s}\right)^{p \alpha_0}}=1. $$ \end{theorem} \begin{remark} We note that the statement of the last-mentioned theorem is true for all sets $\mathbb S_{(s,0)}, \mathbb S_{(s,1)},\dots,\mathbb S_{(s,s-1)}$ (for fixed parameters $u=\overline{0,s-1}$ and any fixed $2<s\in\mathbb N$ ) except the sets $\mathbb S_{(3,1)}$ and $\mathbb S_{(3,2)}$.
\end{remark} \begin{theorem}[\cite{{Symon2}, {S. Serbenyuk 2017 fractals}, {S. Serbenyuk 2013}, {S. Serbenyuk fractals}}] \label{th: theorem2} Let $E$ be a set whose elements contain (in their s-adic or nega-s-adic representations) only digits or combinations of digits from a certain fixed finite set $\{\sigma_1, \sigma_2,\dots,\sigma_m\}$ of s-adic digits or combinations of digits. Then the Hausdorff dimension $\alpha_0$ of $E$ satisfies the following equation: $$ N(\sigma^1 _m)\left(\frac{1}{s}\right)^{\alpha_0}+N(\sigma^2 _m)\left(\frac{1}{s}\right)^{2\alpha_0}+\dots+N(\sigma^{k} _m)\left(\frac{1}{s}\right)^{k\alpha_0}=1, $$ where $N(\sigma^k_m)$ is the number of $k$-digit combinations $\sigma^k_m$ from the set $\{\sigma_1, \sigma_2,\dots,\sigma_m\}$, $k \in \mathbb N$, and $N(\sigma^1 _m)+N(\sigma^2 _m)+\dots+ N(\sigma^{k} _m)=m$. \end{theorem} Now we will describe the main function of our investigation. Let $\eta$ be a random variable defined by the s-adic representation $$ \eta= \frac{\xi_1}{s}+\frac{\xi_2}{s^2}+\frac{\xi_3}{s^3}+\dots +\frac{\xi_{k}}{s^{k}}+\dots = \Delta^{s} _{\xi_1\xi_2...\xi_{k}...}, $$ where the digits $\xi_k$ $(k=1,2,3, \dots )$ are random variables taking the values $0,1,\dots ,s-1$ with positive probabilities ${p}_{0}, {p}_{1}, \dots , {p}_{s-1}$. That is, the $\xi_k$ are independent and $P\{\xi_k=i_k\}={p}_{i_k}$, $i_k \in A$.
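As a numerical aside (not part of the original argument), one can simulate the random variable $\eta$ by drawing independent digits $\xi_k$ with prescribed probabilities and truncating the s-adic series. The sketch below uses hypothetical values $s=3$ and $P=\{0.2,\,0.5,\,0.3\}$; the sample mean is then compared with $\mathbb{E}[\eta]=\mathbb{E}[\xi_1]\sum_{k\ge 1}s^{-k}=\mathbb{E}[\xi_1]/(s-1)$.

```python
import random

def sample_eta(p, depth=40):
    """One truncated sample of eta = sum_k xi_k / s^k with P(xi_k = i) = p[i]."""
    s = len(p)
    return sum(random.choices(range(s), weights=p)[0] / s**k
               for k in range(1, depth + 1))

random.seed(0)
p = [0.2, 0.5, 0.3]                                # hypothetical probabilities, s = 3
samples = [sample_eta(p) for _ in range(1000)]
mean_digit = sum(i * w for i, w in enumerate(p))   # E[xi_1]
approx_mean = sum(samples) / len(samples)          # should be near E[xi_1]/(s-1)
```

Every sample lies in $[0,1]$, and the empirical mean of a moderate number of samples already sits close to the theoretical value $\mathbb{E}[\xi_1]/(s-1)$.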
From the definition of a distribution function and the following expressions $$ \{\eta<x\}=\{\xi_1<\alpha_1(x)\}\cup\{\xi_1=\alpha_1(x),\xi_2<\alpha_2(x)\}\cup \ldots $$ $$ \cup\{\xi_1=\alpha_1(x),\xi_2=\alpha_2(x),\dots ,\xi_{k-1}=\alpha_{k-1}(x), \xi_{k}<\alpha_{k}(x)\}\cup \dots, $$ $$ P\{\xi_1=\alpha_1(x),\xi_2=\alpha_2(x),\dots ,\xi_{k-1}=\alpha_{k-1}(x), \xi_{k}<\alpha_{k}(x)\} =\beta_{\alpha_{k}(x)}\prod^{k-1} _{j=1} {{p}_{\alpha_{j}(x)}}, $$ where $$ \beta_{\alpha_k(x)}=\begin{cases} \sum^{\alpha_k(x)-1} _{i=0} {p_{i}}&\text{whenever $\alpha_k(x)>0$}\\ 0&\text{whenever $\alpha_k(x)=0$,} \end{cases} $$ it is easy to see that the following statement is true. \begin{statement} The distribution function ${f}_{\eta}$ of the random variable $\eta$ can be represented in the following form $$ {f}_{\eta}(x)=\begin{cases} 0&\text{whenever $x< 0$}\\ \beta_{\alpha_1(x)}+\sum^{\infty} _{k=2} {\left({\beta}_{\alpha_k(x)} \prod^{k-1} _{j=1} {{p}_{\alpha_j(x)}}\right)}&\text{whenever $0 \le x<1$}\\ 1&\text{whenever $x\ge 1$,} \end{cases} $$ where ${p}_{\alpha_j(x)}>0$. \end{statement} The function $$ {f}(x)=\beta_{\alpha_1(x)}+\sum^{\infty} _{n=2} {\left({\beta}_{\alpha_n(x)}\prod^{n-1} _{j=1} {{p}_{\alpha_j(x)}}\right)} $$ can be used as a representation of numbers from $[0,1]$. That is, $$ x=\Delta^{P} _{\alpha_1(x)\alpha_2(x)...\alpha_n(x)...}=\beta_{\alpha_1(x)}+\sum^{\infty} _{n=2} {\left({\beta}_{\alpha_n(x)}\prod^{n-1} _{j=1} {{p}_{\alpha_j(x)}}\right)}, $$ where $P=\{p_0,p_1,\dots , p_{s-1}\}$, $p_0+p_1+\dots+p_{s-1}=1$, and $p_i>0$ for all $i=\overline{0,s-1}$. The last-mentioned representation is \emph{the P-representation of numbers from $[0,1]$}. In the present article, we will consider properties of images of the sets considered in Theorem~\ref{th: theorem1} and Theorem~\ref{th: theorem2} under the map $f$. We begin with definitions. Let $s$ be a fixed positive integer, $s> 2$.
Let $c_1, c_2,\dots ,c_m$ be an ordered tuple of integers such that $c_i\in\{0,1,\dots ,s-1\}$ for $i=\overline{1,m}$. \begin{definition} {\itshape A cylinder of rank $m$ with base $c_1c_2\ldots c_m$} is a set $\Delta^{P} _{c_1c_2\ldots c_m}$ formed by all numbers of the segment $[0,1]$ with P-representations in which the first $m$ digits coincide with $c_1,c_2,\dots ,c_m$, respectively, i.e., $$ \Delta^{P} _{c_1c_2\ldots c_m}=\left\{x: x=\Delta^{P} _{\alpha_1\alpha_2\ldots\alpha_n\ldots}, \alpha_j=c_j, j=\overline{1,m}\right\}. $$ \end{definition} Cylinders $\Delta^{P} _{c_1c_2\ldots c_m}$ have the following properties: \begin{enumerate} \item any cylinder $\Delta^{P} _{c_1c_2\ldots c_m}$ is a closed interval; \item $$ \inf \Delta^{P} _{c_1c_2\ldots c_m}= \Delta^{P} _{c_1c_2\ldots c_m000...}, \sup \Delta^{P} _{c_1c_2\ldots c_m}= \Delta^{P} _{c_1c_2\ldots c_m[s-1][s-1][s-1]...}; $$ \item $$ | \Delta^{P} _{c_1c_2\ldots c_m}|=p_{c_1}p_{c_2}\cdots p_{c_m}; $$ \item $$ \Delta^{P} _{c_1c_2\ldots c_mc}\subset \Delta^{P} _{c_1c_2\ldots c_m}; $$ \item $$ \Delta^{P} _{c_1c_2\ldots c_m}=\bigcup^{s-1} _{c=0} { \Delta^{P} _{c_1c_2\ldots c_mc}}; $$ \item $$ \lim_{m \to \infty} { |\Delta^{P} _{c_1c_2\ldots c_m}|}=0; $$ \item $$ \frac{| \Delta^{P} _{c_1c_2\ldots c_mc_{m+1}}|}{| \Delta^{P} _{c_1c_2\ldots c_m}|}=p_{c_{m+1}}; $$ \item $$ \sup\Delta^{P} _{c_1c_2...c_mc}=\inf \Delta^{P} _{c_1c_2...c_m[c+1]}, $$ where $c \ne s-1$; \item $$ \bigcap^{\infty} _{m=1} {\Delta^{P} _{c_1c_2\ldots c_m}}=x=\Delta^{P} _{c_1c_2\ldots c_m\ldots}. $$ \end{enumerate} \begin{definition} A number $x \in[0,1]$ is called {\itshape P-rational} if $$ x=\Delta^{P} _{\alpha_1\alpha_2\ldots\alpha_{n-1}\alpha_n000\ldots} $$ or $$ x=\Delta^{P} _{\alpha_1\alpha_2\ldots\alpha_{n-1}[\alpha_n-1][s-1][s-1][s-1]\ldots}. $$ The other numbers in $[0,1]$ are called {\itshape P-irrational}.
\end{definition} \section{The objects of research} Let $s>2$ be a fixed positive integer, $A=\{0,1,\dots ,s-1\}$, $A_0=A \setminus \{0\}=\{1,2,\dots , s -1\}$, and $$ L \equiv (A_0)^{\infty}= (A_0) \times (A_0) \times (A_0)\times\dots $$ be the space of one-sided sequences of elements of $ A_0$. Let $P=\{p_0,p_1, \dots , p_{s-1}\}$ be a fixed set of positive numbers such that $p_0+p_1+\dots + p_{s-1}=1$. Let us consider a class $\Gamma$ that contains classes $\Gamma_{P_s}$ of sets $\mathbb S_{(P_s,u)}$ represented in the form \begin{equation} \label{S(s,u)1} \mathbb S_{(P_s,u)}\equiv\left\{x: x= \Delta^{P}_{{\underbrace{u...u}_{\alpha_1-1}} \alpha_1{\underbrace{u...u}_{\alpha_2 -1}}\alpha_2 ...{\underbrace{u...u}_{ \alpha_n -1}}\alpha_n...}, (\alpha_n) \in L, \alpha_n \ne u, \alpha_n \ne 0 \right\}, \end{equation} where $u=\overline{0,s-1}$, the parameters $u$ and $s$ are fixed for the set $\mathbb S_{(P_s,u)}$. That is, the class $\Gamma_{P_s}$ contains the sets $\mathbb S_{(P_s,0)}, \mathbb S_{(P_s,1)},\dots,\mathbb S_{(P_s,s-1)}$. \begin{lemma} An arbitrary set $\mathbb S_{(P_s,u)}$ is an uncountable set. \end{lemma} \begin{proof} Let us consider the mapping $g: \mathbb S_{(P_s,u)} \to S_u$. That is, $$ \forall (\alpha_n)\in L: x= \Delta^{P}_{{\underbrace{u...u}_{\alpha_1-1}} \alpha_1{\underbrace{u...u}_{\alpha_2 -1}}\alpha_2 ...{\underbrace{u...u}_{ \alpha_n -1}}\alpha_n...} \stackrel{g}{\longrightarrow} \Delta^{s}_{\alpha_1\alpha_2 ...\alpha_n...}=y=g(x). $$ It follows from the definition of an arbitrary set $S_u$ that s-adic-rational numbers of the form $$ \Delta^{s} _{\alpha_1\alpha_2\ldots\alpha_{n-1}\alpha_n000\ldots} $$ do not belong to $ S_u$ (since the condition $\alpha_n\notin\{0,u\}$ holds). Hence each element of $ S_u$ has a unique s-adic representation. For any $x\in \mathbb S_{(P_s,u)}$ there exists $y=g(x)\in S_u$ and for any $y\in S_u$ there exists $x=g^{-1}(y)\in \mathbb S_{(P_s,u)}$.
Since P-rational numbers do not belong to $\mathbb S_{(P_s,u)}$, we have that for arbitrary $x_1\ne x_2$ the inequality $g(x_1)\ne g(x_2)$ holds. So the uncountability of the set $\mathbb S_{(P_s,u)}$ follows from the uncountability of $ S_u$. \end{proof} To investigate topological and metric properties of $\mathbb S_{(P_s,u)}$, we will study properties of cylinders. Let $c_1, c_2,\dots , c_n$ be an ordered tuple of integers such that $c_i\in\{0,1,\dots ,s-1\}$ for $i=\overline{1,n}$. \begin{definition} {\itshape A cylinder of rank $n$ with base $c_1c_2\ldots c_n$} is a set $\Delta^{(P,u)} _{c_1c_2\ldots c_n}$ of the form: $$ \Delta^{(P,u)} _{c_1c_2\ldots c_n}=\left\{x: x=\Delta^{P}_{{\underbrace{u...u}_{c_1-1}} c_1{\underbrace{u...u}_{c_2 -1}}c_2 ...{\underbrace{u...u}_{ c_n -1}}c_n{\underbrace{u...u}_{\alpha_{n+1}-1}}\alpha_{n+1}{\underbrace{u...u}_{\alpha_{n+2}-1}}\alpha_{n+2}...}, \alpha_j=c_j, j=\overline{1,n}\right\}. $$ \end{definition} By $(a_1a_2\ldots a_k)$ we denote the period $a_1a_2\ldots a_k$ in the representation of a periodic number.
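Before turning to the lemma, the cylinder machinery can be checked numerically. The sketch below (the probabilities and the base are hypothetical choices with $s=3$) evaluates truncated P-representations via the digitwise formula for $f$ and verifies the diameter property $|\Delta^{P}_{c_1c_2\ldots c_m}|=p_{c_1}p_{c_2}\cdots p_{c_m}$ of ordinary cylinders stated in the Introduction.

```python
def P_value(digits, p):
    """Finite truncation of x = beta_{a_1} + sum_{k>=2} beta_{a_k} prod_{j<k} p_{a_j}."""
    total, weight = 0.0, 1.0
    for d in digits:
        total += sum(p[:d]) * weight     # beta_d = p_0 + ... + p_{d-1}
        weight *= p[d]
    return total

p = [0.2, 0.5, 0.3]                      # hypothetical probabilities, s = 3
base = [2, 1, 2]                         # cylinder base c_1 c_2 c_3
inf_x = P_value(base + [0] * 60, p)      # tail of digits 0
sup_x = P_value(base + [2] * 60, p)      # tail of digits s - 1
length = p[2] * p[1] * p[2]              # p_{c_1} p_{c_2} p_{c_3}
```

Up to the (negligible) truncation error, $\sup\Delta^P_{c_1c_2c_3}-\inf\Delta^P_{c_1c_2c_3}$ equals $p_{c_1}p_{c_2}p_{c_3}$, since the tail of digits $s-1$ contributes exactly $\beta_{s-1}/(1-p_{s-1})=1$ times the accumulated weight.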
\begin{lemma} Cylinders $ \Delta^{(P,u)} _{c_1...c_n} $ have the following properties: \label{lm: Lemma on cylinders} \begin{enumerate} \item $$ \inf \Delta^{(P,u)} _{c_1...c_n}=\begin{cases} \Delta^{P} _{{\underbrace{0...0}_{c_1-1}} c_1{\underbrace{0...0}_{c_2 -1}}c_2 ...{\underbrace{0...0}_{ c_n -1}}c_n({\underbrace{0...0}_{ s-2}}[s-1])} &\text{if $u=0$}\\ \Delta^{P} _{{\underbrace{1...1}_{c_1-1}} c_1{\underbrace{1...1}_{c_2 -1}}c_2 ...{\underbrace{1...1}_{ c_n -1}}c_n({\underbrace{1...1}_{ s-2}}[s-1])} &\text{if $u=1$}\\ \Delta^{P} _{{\underbrace{u...u}_{c_1-1}} c_1{\underbrace{u...u}_{c_2 -1}}c_2 ...{\underbrace{u...u}_{ c_n -1}}c_n(1)}&\text{if $ u \in \{2,3,\dots ,s-1\}$,} \end{cases} $$ $$ \sup \Delta^{(P,u)} _{c_1...c_n}=\begin{cases} \Delta^{P} _{{\underbrace{[s-1]...[s-1]}_{c_1-1}} c_1 ...{\underbrace{[s-1]...[s-1]}_{ c_n -1}}c_n({\underbrace{[s-1]...[s-1]}_{ s-3}}[s-2])} &\text{if $u=s-1$}\\ \Delta^{P} _{{\underbrace{u...u}_{c_1-1}} c_1{\underbrace{u...u}_{c_2 -1}}c_2 ...{\underbrace{u...u}_{ c_n -1}}c_n({\underbrace{u...u}_{ u}}[u+1])} &\text{if $u\in\{1,\dots, s-2\}$}\\ \Delta^{P} _{{\underbrace{0...0}_{c_1-1}} c_1{\underbrace{0...0}_{c_2 -1}}c_2 ...{\underbrace{0...0}_{ c_n -1}}c_n(1)}&\text{if $ u=0$.} \end{cases} $$ \item If $d(\cdot) $ is the diameter of a set, then $$ d(\Delta^{(P,u)} _{c_1...c_n})=d(\mathbb S_{(P_s,u)})p^{c_1+c_2+\dots+c_n-n} _{u}\prod^{n} _{j=1}{p_{c_j}}. $$ \item $$ \frac{d(\Delta^{(P,u)} _{c_1...c_nc_{n+1}})}{d(\Delta^{(P,u)} _{c_1...c_n})}=p_{c_{n+1}}p^{c_{n+1}-1} _{u}. $$ \item $$ \Delta^{(P,u)} _{c_1c_2...c_n} =\bigcup^{s-1} _{i=1} { \Delta^{(P,u)} _{c_1c_2...c_ni}}~~~\forall c_n \in A_0,~~~n \in \mathbb N,~ i \ne u.
$$ \item The following relationships are satisfied: \begin{enumerate} \item if $ u\in \{0,1\}$, then $$ \inf \Delta^{(P,u)} _{c_1...c_np}> \sup \Delta^{(P,u)} _{c_1...c_n[p+1]}; $$ \item if $ u \in \{2,3,\dots ,s-3\}$, then $$ \begin{cases} \sup \Delta^{(P,u)} _{c_1...c_np}< \inf \Delta^{(P,u)} _{c_1...c_n[p+1]}&\text{for all $p+1\le u$}\\ \inf \Delta^{(P,u)} _{c_1...c_np}> \sup \Delta^{(P,u)} _{c_1...c_n[p+1]}&\text{for all $u<p$;} \end{cases} $$ \item if $ u \in \{s-2,s-1\}$, then $$ \sup \Delta^{(P,u)} _{c_1...c_np}< \inf \Delta^{(P,u)} _{c_1...c_n[p+1]} ~~~(\text{in this case, the condition $p\ne s-1$ holds}). $$ \end{enumerate} \end{enumerate} \end{lemma} \begin{proof} \emph{The first property} follows from the equality $$ x=\Delta^{P}_{{\underbrace{u...u}_{c_1-1}} c_1{\underbrace{u...u}_{c_2 -1}}c_2 ...{\underbrace{u...u}_{ c_n -1}}c_n{\underbrace{u...u}_{\alpha_{n+1}-1}}\alpha_{n+1}{\underbrace{u...u}_{\alpha_{n+2}-1}}\alpha_{n+2}...} $$ $$ =\Delta^{P}_{{\underbrace{u...u}_{c_1-1}} c_1{\underbrace{u...u}_{c_2 -1}}c_2 ...{\underbrace{u...u}_{ c_n -1}}c_n(0)}+p^{c_1+\dots +c_n-n} _{u}\left(\prod^{n} _{k=1}{p_{c_k}}\right)\Delta^{P}_{{\underbrace{u...u}_{\alpha_{n+1}-1}}\alpha_{n+1}{\underbrace{u...u}_{\alpha_{n+2}-1}}\alpha_{n+2}...} $$ and the definition of $ \mathbb S_{(P_s,u)}$. It is easy to see that \emph{the second property} follows from the first property, \emph{the third property} is a corollary of the first and second properties, and \emph{Property 4} follows from the definition of the set. Let us show that \emph{Property 5} is true. We now prove that the first inequality holds for $u\in\{0,1\}$.
In fact, $$ \inf \Delta^{(P,0)} _{c_1...c_np}- \sup \Delta^{(P,0)} _{c_1...c_n[p+1]}= \Delta^{P} _{{\underbrace{0...0}_{c_1-1}} c_1{\underbrace{0...0}_{c_2 -1}}c_2 ...{\underbrace{0...0}_{ c_n -1}}c_n{\underbrace{0...0}_{p -1}}p({\underbrace{0...0}_{ s-2}}[s-1])}-\Delta^{P} _{{\underbrace{0...0}_{c_1-1}} c_1{\underbrace{0...0}_{c_2 -1}}c_2 ...{\underbrace{0...0}_{ c_n -1}}c_n{\underbrace{0...0}_{p}}[p+1](1)} $$ $$ =\beta_pp^{c_1+...+c_n-n+p-1} _0\prod^{n} _{j=1}{p_{c_j}}+p_pp^{c_1+...+c_n-n+p-1} _0\left(\prod^{n} _{j=1}{p_{c_j}}\right)\inf{\mathbb S_{(P_s,0)}} $$ $$ -\beta_{p+1}p^{c_1+...+c_n-n+p} _0\prod^{n} _{j=1}{p_{c_j}}-p_{p+1}p^{c_1+...+c_n-n+p} _0\left(\prod^{n} _{j=1}{p_{c_j}}\right)\sup{\mathbb S_{(P_s,0)}} $$ $$ =p^{c_1+...+c_n-n+p} _0\left(\prod^{n} _{j=1}{p_{c_j}}\right)\left(\beta_pp^{-1} _0+p_pp^{-1} _0\inf{\mathbb S_{(P_s,0)}}-\beta_{p+1}-p_{p+1}\sup{\mathbb S_{(P_s,0)}}\right) $$ $$ =p^{c_1+...+c_n-n+p-1} _0\left(\prod^{n} _{j=1}{p_{c_j}}\right)(p_0(1-p_0-p_p-p_{p+1}\sup{\mathbb S_{(P_s,0)}})+(1-p_0)(p_1+...+p_{p-1})+p_p\inf{\mathbb S_{(P_s,0)}})>0 $$ because $$ 1-p_0-p_p-p_{p+1}\sup{\mathbb S_{(P_s,0)}}=1-p_0-p_p-p_{p+1}\frac{p_0}{1-p_1}= \frac{\sum_{i\notin\{0,1,p,p+1\}}p_i+p_{p+1}(1-p_0)+p_0p_1+p_1p_p}{1-p_1}>0. 
$$ Also, $$ \inf \Delta^{(P,1)} _{c_1...c_np}- \sup \Delta^{(P,1)} _{c_1...c_n[p+1]}= \Delta^{P} _{{\underbrace{1...1}_{c_1-1}} c_1{\underbrace{1...1}_{c_2 -1}}c_2 ...{\underbrace{1...1}_{ c_n -1}}c_n{\underbrace{1...1}_{p -1}}p({\underbrace{1...1}_{ s-2}}[s-1])}-\Delta^{P} _{{\underbrace{1...1}_{c_1-1}} c_1{\underbrace{1...1}_{c_2 -1}}c_2 ...{\underbrace{1...1}_{ c_n -1}}c_n\underbrace{1...1}_{p}[p+1](12)} $$ $$ =\beta_pp^{c_1+...+c_n+p-n-1} _1\prod^{n} _{j=1}{p_{c_j}}+p_pp^{c_1+...+c_n-n+p-1} _1\left(\prod^{n} _{j=1}{p_{c_j}}\right)\inf{\mathbb S_{(P_s,1)}} $$ $$ -\beta_{p+1}p^{c_1+...+c_n+p-n} _1\prod^{n} _{j=1}{p_{c_j}}-p_{p+1}p^{c_1+...+c_n-n+p} _1\left(\prod^{n} _{j=1}{p_{c_j}}\right)\sup{\mathbb S_{(P_s,1)}} $$ $$ =p^{c_1+...+c_n-n+p-1} _1\left(\prod^{n} _{j=1}{p_{c_j}}\right)\left(\beta_p+p_p\inf{\mathbb S_{(P_s,1)}}-\beta_{p+1}p_1-p_{p+1}p_1\sup{\mathbb S_{(P_s,1)}}\right) $$ $$ =p^{c_1+...+c_n-n+p-1} _1\left(\prod^{n} _{j=1}{p_{c_j}}\right)(p_p\inf{\mathbb S_{(P_s,1)}}+p_1(1-p_1-p_p-p_{p+1}\sup{\mathbb S_{(P_s,1)}})+(1-p_1)(p_0+p_2+...+p_{p-1}))>0, $$ since $$ \sup{\mathbb S_{(P_s,1)}}=\Delta^P _{(12)}=\beta_1+\sum^{\infty} _{k=1}{\beta_1p^k _{1} p^k _2}+\sum^{\infty} _{k=1}{\beta_2p^k _{1} p^{k-1} _2}=\frac{p_0+p_0p_1+p^2 _1}{1-p_1p_2}>0 $$ and $$ 1=p_0+p_1+\dots+ p_{s-1}. $$ Let us prove the system of inequalities. Consider the first inequality.
For the case when $p+1\le u$ we get $$ \inf \Delta^{(P,u)} _{c_1...c_n[p+1]}-\sup \Delta^{(P,u)} _{c_1...c_np}=\Delta^{P} _{{\underbrace{u...u}_{c_1-1}} c_1{\underbrace{u...u}_{c_2 -1}}c_2 ...{\underbrace{u...u}_{ c_n -1}}c_n{\underbrace{u...u}_{p}} [p+1](1)}-\Delta^{P} _{{\underbrace{u...u}_{c_1-1}} c_1{\underbrace{u...u}_{c_2 -1}}c_2 ...{\underbrace{u...u}_{ c_n -1}}c_n{\underbrace{u...u}_{p-1}} p({\underbrace{u...u}_{ u}}[u+1])} $$ $$ =\beta_up^{c_1+...+c_n-n+p-1} _u\prod^{n} _{j=1}{p_{c_j}}+\beta_{p+1}p^{c_1+...+c_n-n+p} _u\prod^{n} _{j=1}{p_{c_j}}+p_{p+1}p^{c_1+...+c_n-n+p} _u\left(\prod^{n} _{j=1}{p_{c_j}}\right)\cdot\Delta^P _{(1)} $$ $$ -\beta_pp^{c_1+...+c_n-n+p-1} _u\prod^{n} _{j=1}{p_{c_j}}-p_{p}p^{c_1+...+c_n-n+p-1} _u\left(\prod^{n} _{j=1}{p_{c_j}}\right)\cdot\Delta^P _{({\underbrace{u...u}_{ u}}[u+1])} $$ $$ =p^{c_1+...+c_n-n+p-1} _u\left(\prod^{n} _{j=1}{p_{c_j}}\right)\left(\beta_u+\beta_{p+1}p_u+p_{p+1}p_u\Delta^P _{(1)}-\beta_p-p_p\Delta^P _{({\underbrace{u...u}_{ u}}[u+1])}\right) $$ $$ =p^{c_1+...+c_n-n+p-1} _u\left(\prod^{n} _{j=1}{p_{c_j}}\right)\left(p_{p+1}p_u\Delta^P _{(1)}+(\beta_u-\beta_p)+p_up_p+\beta_pp_u-p_p\Delta^P _{({\underbrace{u...u}_{ u}}[u+1])}\right)>0 $$ since the conditions $p<u$, $\beta_u-\beta_p>0$, and $\beta_{p+1}=\beta_p+p_{p}$ hold. Let us prove that the second inequality is true. Here $ p>u $, i.e., $p-u \ge 1$. 
Similarly, $$ \inf \Delta^{(P,u)} _{c_1...c_np}-\sup \Delta^{(P,u)} _{c_1...c_n[p+1]}=\Delta^{P} _{{\underbrace{u...u}_{c_1-1}} c_1{\underbrace{u...u}_{c_2 -1}}c_2 ...{\underbrace{u...u}_{ c_n -1}}c_n{\underbrace{u...u}_{p-1}}p(1)}-\Delta^{P} _{{\underbrace{u...u}_{c_1-1}} c_1{\underbrace{u...u}_{c_2 -1}}c_2 ...{\underbrace{u...u}_{ c_n -1}}c_n{\underbrace{u...u}_{p}}[p+1]({\underbrace{u...u}_{ u}}[u+1])} $$ $$ =\beta_pp^{c_1+...+c_n-n+p-1} _u\prod^{n} _{j=1}{p_{c_j}}+p_{p}p^{c_1+...+c_n-n+p-1} _u\left(\prod^{n} _{j=1}{p_{c_j}}\right)\cdot\Delta^P _{(1)} $$ $$ -\beta_up^{c_1+...+c_n-n+p-1} _u\prod^{n} _{j=1}{p_{c_j}}-\beta_{p+1}p^{c_1+...+c_n-n+p} _u\prod^{n} _{j=1}{p_{c_j}} -p_{p+1}p^{c_1+...+c_n-n+p} _u\left(\prod^{n} _{j=1}{p_{c_j}}\right)\cdot\Delta^P _{({\underbrace{u...u}_{ u}}[u+1])} $$ $$ =p^{c_1+...+c_n-n+p-1} _u\left(\prod^{n} _{j=1}{p_{c_j}}\right)\left(\beta_p+p_{p}\Delta^P _{(1)}-\beta_u-\beta_{p+1}p_u-p_{p+1}p_u\Delta^P _{({\underbrace{u...u}_{ u}}[u+1])}\right) $$ $$ =p^{c_1+...+c_n-n+p-1} _u\left(\prod^{n} _{j=1}{p_{c_j}}\right)\left(p_{p}\Delta^P _{(1)} +(p_{u+1}+...+p_{p-1})+p_u(p_{p+1}+...+p_{s-1}-p_{p+1}\Delta^P _{({\underbrace{u...u}_{ u}}[u+1])})\right)>0 $$ since the conditions $p>u$, $\beta_p-\beta_u=p_u+p_{u+1}+...+p_{p-1}$, and $1-\beta_{p+1}=p_{p+1}+...+p_{s-1}$ hold. Suppose that $u=s-2$.
Then $$ \inf\Delta^{(P,s-2)} _{c_1c_2...c_n[p+1]}-\sup\Delta^{(P,s-2)} _{c_1c_2...c_np} $$ $$ =\Delta^{P} _{{\underbrace{[s-2]...[s-2]}_{c_1-1}} c_1{\underbrace{[s-2]...[s-2]}_{c_2 -1}}c_2 ...{\underbrace{[s-2]...[s-2]}_{ c_n -1}}c_n{\underbrace{[s-2]...[s-2]}_{p}} [p+1](1)} $$ $$ - \Delta^{P} _{{\underbrace{[s-2]...[s-2]}_{c_1-1}} c_1{\underbrace{[s-2]...[s-2]}_{c_2 -1}}c_2 ...{\underbrace{[s-2]...[s-2]}_{ c_n -1}}c_n{\underbrace{[s-2]...[s-2]}_{p-1}} p({\underbrace{[s-2]...[s-2]}_{ s-2}}[s-1])} $$ $$ =\beta_{s-2}p^{c_1+...+c_n-n+p-1} _{s-2}\prod^{n} _{j=1}{p_{c_j}}+\beta_{p+1}p^{c_1+...+c_n-n+p} _{s-2}\prod^{n} _{j=1}{p_{c_j}} $$ $$ +p_{p+1}p^{c_1+...+c_n-n+p} _{s-2}\left(\prod^{n} _{j=1}{p_{c_j}}\right)\cdot\Delta^{P} _{(1)}-\beta_pp^{c_1+...+c_n-n+p-1} _{s-2}\prod^{n} _{j=1}{p_{c_j}} $$ $$ -p_pp^{c_1+...+c_n-n+p-1} _{s-2}\left(\prod^{n} _{j=1}{p_{c_j}}\right)\cdot\Delta^P _{({\underbrace{[s-2]...[s-2]}_{ s-2}}[s-1])} $$ $$ =p^{c_1+...+c_n-n+p-1} _{s-2}\left(\prod^{n} _{j=1}{p_{c_j}}\right)(\beta_{s-2}+\beta_{p+1}p_{s-2}+p_{s-2}p_{p+1}\Delta^{P} _{(1)}-\beta_{p}-p_p\Delta^P _{({\underbrace{[s-2]...[s-2]}_{ s-2}}[s-1])}) $$ $$ =p^{c_1+...+c_n-n+p-1} _{s-2}\left(\prod^{n} _{j=1}{p_{c_j}}\right)(p_p(1-\Delta^P _{({\underbrace{[s-2]...[s-2]}_{ s-2}}[s-1])})+(p_{p+1}+...+p_{s-3})+\beta_{p+1}p_{s-2}+p_{s-2}p_{p+1}\Delta^{P} _{(1)})>0 $$ since $\beta_{s-2}-\beta_{p}=p_p+p_{p+1}+\dots+p_{s-3}$. Here $p\ne s-1$. Suppose that $u=s-1$.
Then $$ \inf\Delta^{(P,s-1)} _{c_1c_2...c_n[p+1]}-\sup\Delta^{(P,s-1)} _{c_1c_2...c_np} $$ $$ =\Delta^{P} _{{\underbrace{[s-1]...[s-1]}_{c_1-1}} c_1{\underbrace{[s-1]...[s-1]}_{c_2 -1}}c_2 ...{\underbrace{[s-1]...[s-1]}_{ c_n -1}}c_n{\underbrace{[s-1]...[s-1]}_{p}} [p+1](1)} $$ $$ -\Delta^{P} _{{\underbrace{[s-1]...[s-1]}_{c_1-1}} c_1 ...{\underbrace{[s-1]...[s-1]}_{ c_n -1}}c_n{\underbrace{[s-1]...[s-1]}_{p-1}}p({\underbrace{[s-1]...[s-1]}_{ s-3}}[s-2])} $$ $$ =\beta_{s-1}p^{c_1+...+c_n-n+p-1} _{s-1}\prod^{n} _{j=1}{p_{c_j}}+\beta_{p+1}p^{c_1+...+c_n-n+p} _{s-1}\prod^{n} _{j=1}{p_{c_j}} $$ $$ +p_{p+1}p^{c_1+...+c_n-n+p} _{s-1}\left(\prod^{n} _{j=1}{p_{c_j}}\right)\cdot\Delta^{P} _{(1)}-\beta_pp^{c_1+...+c_n-n+p-1} _{s-1}\prod^{n} _{j=1}{p_{c_j}} $$ $$ -p_pp^{c_1+...+c_n-n+p-1} _{s-1}\left(\prod^{n} _{j=1}{p_{c_j}}\right)\cdot\Delta^P _{({\underbrace{[s-1]...[s-1]}_{ s-3}}[s-2])} $$ $$ =p^{c_1+...+c_n-n+p-1} _{s-1}\left(\prod^{n} _{j=1}{p_{c_j}}\right)(\beta_{s-1}+\beta_{p+1}p_{s-1}+p_{s-1}p_{p+1}\Delta^{P} _{(1)}-\beta_{p}-p_p\Delta^P _{({\underbrace{[s-1]...[s-1]}_{ s-3}}[s-2])})>0 $$ since $\beta_{s-1}-\beta_{p}=p_p+p_{p+1}+\dots+p_{s-2}>p_p\Delta^P _{({\underbrace{[s-1]...[s-1]}_{ s-3}}[s-2])}$. \end{proof} \begin{theorem} The set $\mathbb S_{(P_s,u)}$ is a perfect and nowhere dense set of zero Lebesgue measure. \end{theorem} \begin{proof} We now prove that \emph{the set $\mathbb S_{(P_s,u)}$ is a nowhere dense set}. From the definition it follows that there exist cylinders $ \Delta^{(P,u)} _{c_1...c_n}$ of rank $n$ in an arbitrary subinterval of the segment $I=[\inf\mathbb S_{(P_s,u)},\sup\mathbb S_{(P_s,u)}]$. Since Property 5 from Lemma~\ref{lm: Lemma on cylinders} is true for these cylinders, we have that for any subinterval of $ I$ there exists a subinterval that does not contain points of $\mathbb S_{(P_s,u)}$. So $\mathbb S_{(P_s,u)}$ is a nowhere dense set. Let us show that \emph{$\mathbb S_{(P_s,u)}$ is a set of zero Lebesgue measure}.
Suppose that $ I^{(P_s,u)} _{c_1c_2...c_n} $ is a closed interval whose endpoints coincide with the endpoints of the cylinder $ \Delta^{(P,u)} _{c_1c_2...c_n}$, $$ |I^{(P_s,u)} _{c_1c_2...c_n}|=d(\Delta^{(P,u)} _{c_1c_2...c_n})=d(\mathbb S_{(P_s,u)})p^{c_1+c_2+\dots+c_n-n} _{u}\prod^{n} _{j=1}{p_{c_j}}, $$ and $$ \mathbb S_{(P_s,u)}= \bigcap^{\infty} _{k=1} E^{(P_s,u)} _k, $$ where $$ E^{(P_s,u)} _1=\bigcup_{c_1\in A_0\setminus\{u\}}{I^{(P_s,u)} _{c_1}}, $$ $$ E^{(P_s,u)} _2=\bigcup_{c_1,c_2\in A_0\setminus\{u\}}{I^{(P_s,u)} _{c_1c_2}}, $$ $$ \dots\dots\dots\dots\dots\dots\dots $$ $$ E^{(P_s,u)} _k= \bigcup_{c_1,c_2,...,c_k\in A_0\setminus\{u\}}{I^{(P_s,u)} _{c_1c_2...c_k}}, $$ $$ \dots\dots\dots\dots\dots\dots\dots $$ In addition, since $ E^{(P_s,u)} _{k+1} \subset E^{(P_s,u)} _k $, we have $$ E^{(P_s,u)} _k= E^{(P_s,u)} _{k+1} \cup \bar E^{(P_s,u)} _{k+1}. $$ Let $ I=[\inf \mathbb S_{(P_s,u)}, \sup\mathbb S_{(P_s,u)}]$ be the initial closed interval and let $\lambda(\cdot)$ be the Lebesgue measure of a set; then $\lambda(I)=d_0$, where $d_0=d(\mathbb S_{(P_s,u)})$. Then $$ \lambda(E^{(P_s,u)} _1)=\sum_{c_1\in A_0\setminus\{u\}}{|I^{(P_s,u)} _{c_1}|}=d_0\sum_{c_1\in A_0\setminus\{u\}}{p_{c_1}p^{c_1-1} _{u}}=\gamma_0 d_0, $$ where $\gamma_0=\sum_{c_1\in A_0\setminus\{u\}}{p_{c_1}p^{c_1-1} _{u}}<1$. We get $$ \lambda(\bar E^{(P_s,u)} _1)=d_0-\lambda(E^{(P_s,u)} _1)=d_0 - \gamma_0 d_0= d_0(1 - \gamma_0). $$ Similarly, $$ \lambda(\bar E^{(P_s,u)} _2)=\lambda(E^{(P_s,u)} _1)-\lambda(E^{(P_s,u)} _2)=\gamma_0d_0-\gamma^2 _0d_0=d_0\gamma_0(1-\gamma_0), $$ $$ \lambda(\bar E^{(P_s,u)} _3)=\lambda(E^{(P_s,u)} _2)-\lambda(E^{(P_s,u)} _3)=\gamma^2 _0d_0-\gamma^3 _0d_0=(1-\gamma_0)\gamma^2 _0d_0, $$ $$ \dots\dots\dots\dots\dots\dots\dots $$ So, $$ \lambda(\mathbb S_{(P_s,u)})=d_0-\sum^{\infty} _{k=1}{\lambda(\bar E^{(P_s,u)} _k)}=d_0-\sum^{\infty} _{k=1}{\gamma^{k-1} _0d_0(1-\gamma_0)}=d_0-\frac{d_0(1-\gamma_0)}{1-\gamma_0}=0. $$ The set $\mathbb S_{(P_s,u)}$ is a set of zero Lebesgue measure. Let us prove that \emph{$\mathbb S_{(P_s,u)}$ is a perfect set}.
Since $$ E^{(P_s,u)} _k= \bigcup_{c_1,c_2,...,c_k\in A_0\setminus\{u\}}{I^{(P_s,u)} _{c_1c_2...c_k}} $$ is a closed set ($E^{(P_s,u)} _k$ is a finite union of segments), we see that $$ \mathbb S_{(P_s,u)}= \bigcap^{\infty} _{k=1} E^{(P_s,u)} _k $$ is a closed set. Let $ x \in \mathbb S_{(P_s,u)} $, let $ P$ be any open interval that contains $ x $, and let $ J_n $ be a segment of $ E^{(P_s,u)} _n $ that contains $ x $. Since the lengths of the segments $J_n$ tend to zero, we can choose a number $ n $ such that $ J_n \subset P $. Let $ x_n $ be an endpoint of $ J_n $ with $ x_n \ne x $. Then $ x_n \in \mathbb S_{(P_s,u)} $ and $ x $ is a limit point of the set. Since $\mathbb S_{(P_s,u)}$ is a closed set and does not contain isolated points, we obtain that $\mathbb S_{(P_s,u)}$ is a perfect set. \end{proof} \begin{theorem} The set $\mathbb S_{(P_s,u)} $ is a self-similar fractal and the Hausdorff dimension $\alpha_0 (\mathbb S_{(P_s,u)})$ of the set satisfies the following equation $$ \sum _{i\in A_0\setminus\{u\}} {\left(p_ip^{i-1} _u\right)^{\alpha_0}}=1. $$ \end{theorem} \begin{proof} Since $ \mathbb S_{(P_s,u)} \subset I$ and $ \mathbb S_{(P_s,u)}$ is a perfect set, we obtain that $\mathbb S_{(P_s,u)}$ is a compact set. In addition, $$ \mathbb S_{(P_s,u)}=\bigcup_{i\in A_0\setminus\{u\}}{\left[I^{(P_s,u)} _i\cap \mathbb S_{(P_s,u)}\right]} $$ and $\left[I^{(P_s,u)} _i\cap \mathbb S_{(P_s,u)}\right]\stackrel{p_ip^{i-1} _u}{\sim}\mathbb S_{(P_s,u)}$ for all $i\in A_0\setminus\{u\}$. Since the set $\mathbb S_{(P_s,u)}$ is a compact self-similar set of the space $ \mathbb R^1 $, we have that the self-similar dimension of this set is equal to the Hausdorff dimension of $\mathbb S_{(P_s,u)}$. So the set $\mathbb S_{(P_s,u)} $ is a self-similar fractal, and its Hausdorff dimension $\alpha_0$ satisfies the equation $$ \sum _{i\in A_0\setminus\{u\}} {\left(p_ip^{i-1} _u\right)^{\alpha_0}}=1. 
$$ \end{proof} \begin{theorem} Let $E$ be a set whose elements are represented, in terms of the P-representation, by a finite number of fixed combinations $\tau_1, \tau_2,\dots,\tau_m$ of digits from the alphabet $A$. Then the Hausdorff dimension $\alpha_0$ of $E$ satisfies the following equation: $$ \sum^{m} _{j=1}{\left(\prod^{s-1} _{i=0}{p^{N_i(\tau_j)} _i}\right)^{\alpha_0}}=1, $$ where $N_i(\tau_k)$ ($k=\overline{1,m}$) is the number of occurrences of the digit $i$ in $\tau_k$ from the set $\{\tau_1, \tau_2,\dots,\tau_m\}$. \end{theorem} \begin{proof} Let $\{\tau_1, \tau_2,\dots,\tau_m\}$ be a set of fixed combinations of digits from $A$, and suppose that the P-representation of any number from $E$ contains only these combinations of digits. It is easy to see that there exist combinations $\tau', \tau''$ from the set $\Xi=\{\tau_1, \tau_2,\dots,\tau_m\}$ such that $\Delta^P _{\tau^{'}\tau^{'}...}=\inf E$, $\Delta^P _{\tau^{''}\tau^{''}...}=\sup E$, and $$ d(E)=\sup E - \inf E=\Delta^P _{\tau^{''}\tau^{''}...}-\Delta^P _{\tau^{'}\tau^{'}...}. $$ \emph{A cylinder $ \Delta^{(P,E)} _{\tau^{'} _1\tau^{'} _2\ldots\tau^{'} _n}$ of rank $n$ with base $\tau^{'} _1\tau^{'} _2\ldots\tau^{'} _n$} is a set formed by all numbers of $E$ with the P-representations in which the first $n$ combinations of digits are fixed and coincide with $\tau^{'} _1,\tau^{'} _2,\dots,\tau^{'} _n$ respectively ($\tau^{'} _j\in \Xi$ for all $j=\overline{1,n}$). It is easy to see that $$ d( \Delta^{(P,E)} _{\tau^{'} _1\tau^{'} _2...\tau^{'} _n})=d(E)\cdot p^{N_0(\tau^{'} _1\tau^{'} _2...\tau^{'} _n)} _0p^{N_1(\tau^{'} _1\tau^{'} _2...\tau^{'} _n)} _1\cdots p^{N_{s-1}(\tau^{'} _1\tau^{'} _2...\tau^{'} _n)} _{s-1}, $$ where ${N_i(\tau^{'} _1\tau^{'} _2...\tau^{'} _n)}$ is the number of occurrences of the digit $i\in A$ in $\tau^{'} _1\tau^{'} _2...\tau^{'} _n$. 
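As a numerical aside (not part of the proof), dimension equations of the type stated in the theorem above, $\sum_{j}\omega_j^{\alpha_0}=1$ with similarity ratios $0<\omega_j<1$ and $m\ge 2$, have a unique root, since the left-hand side is strictly decreasing in $\alpha_0$, from $m$ at $\alpha_0=0$ down to $0$. The root can therefore be found by bisection; a minimal Python sketch, with the middle-third Cantor set as a sanity check:

```python
import math

def hausdorff_dimension(ratios, tol=1e-12):
    """Solve sum(r**alpha for r in ratios) == 1 by bisection.

    Each similarity ratio satisfies 0 < r < 1, so the left-hand side
    is strictly decreasing in alpha: it equals len(ratios) at alpha = 0
    and tends to 0 as alpha grows.
    """
    lo, hi = 0.0, 1.0
    # Expand the bracket until the left-hand side drops below 1.
    while sum(r ** hi for r in ratios) > 1.0:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(r ** mid for r in ratios) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Two ratios 1/3 give the classical middle-third Cantor set,
# whose Hausdorff dimension is log 2 / log 3.
print(hausdorff_dimension([1/3, 1/3]))
```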
Since $E$ is a closed set, $ E \subset [\inf E, \sup E] $, and $$ \frac{d\left( \Delta^{(P,E)} _{\tau^{'} _1\tau^{'} _2...\tau^{'} _n\tau^{'} _{n+1}}\right)}{d\left( \Delta^{(P,E)} _{\tau^{'} _1\tau^{'} _2...\tau^{'} _n}\right)}=\prod^{s-1} _{i=0}{p^{N_i(\tau^{'} _{n+1})} _i}, $$ $$ E=[I_{\tau_{1}} \cap E]\cup [I_{\tau_{2}} \cap E]\cup\ldots\cup[I_{\tau_m}\cap E], $$ where $I_{\tau_j}=[\inf \Delta^{(P,E)} _{\tau_j},\sup \Delta^{(P,E)} _{\tau_j}]$ and $j=1,2,\dots,m,$ we have $$ {[I_{\tau_j} \cap E]} \stackrel{\omega_j}{\sim}E~\text{for all}~j=\overline{1,m}, $$ where $$ \omega_j=\prod^{s-1} _{i=0}{p^{N_i(\tau_j)} _i}. $$ Hence $E$ is a compact self-similar set with similarity ratios $\omega_1,\dots,\omega_m$, so its Hausdorff dimension coincides with its self-similar dimension $\alpha_0$, which satisfies $\sum^{m} _{j=1}{\omega^{\alpha_0} _j}=1$. This completes the proof. \end{proof} \end{document}
\begin{document} \title[Pseudofree ${\mathbb Z}/3$-actions on $K3$ surfaces] {Pseudofree ${\mathbb Z}/3$-actions on $K3$ surfaces} \author{Ximin Liu} \address{Department of Applied Mathematics, Dalian University of Technology, Dalian 116024, China (\emailfont{xmliu@ms.u-tokyo.ac.jp})} \author{Nobuhiro Nakamura} \address{Research Institute for Mathematical Sciences, Kyoto University, Kyoto, 606-8502, Japan (\emailfont{nakamura@kurims.kyoto-u.ac.jp})} \begin{abstract} In this paper, we give a weak classification of locally linear pseudofree actions of the cyclic group of order $3$ on a $K3$ surface, and prove the existence of such an action which can not be realized as a smooth action on the standard smooth $K3$ surface. \end{abstract} \subjclass[2000]{Primary: 57S17. Secondary: 57S25, 57M60, 57R57} \keywords{group actions, locally linear, pseudofree, $K3$ surface, Seiberg-Witten invariants.} \maketitle \section{Introduction}\label{sec:intro} Let $G$ be the cyclic group of order $3$ ($G={\mathbb Z}/3$), and suppose that $G$ acts locally linearly and pseudofreely on a $K3$ surface $X$. (An action on a space is called {\it pseudofree} if it is free on the complement of a discrete subset.) The purpose of this paper is to give a weak classification of such $G$-actions and to prove that there exists such an action on $X$ which can not be realized by a smooth action for the standard smooth structure on $X$. \begin{Theorem}\label{thm:main0} There exists a locally linear pseudofree $G$-action on a $K3$ surface $X$ which can not be realized by a smooth action for the standard smooth structure on $X$. \end{Theorem} After submitting this paper to the journal, the authors found that the $G$-action in \thmref{thm:main0} is unsmoothable for infinitely many smooth structures on $X$. This is proved in \remref{rem:inf}. To state the result more precisely, we prepare notation. Let $b_i$ be the $i$-th Betti number of $X$, and $b_+$ (resp. 
negative) definite subspace $H^+(X;{\mathbb R})$ (resp. $H^-(X;{\mathbb R})$) of $H^2(X;{\mathbb R})$. For any $G$-space $V$, let $V^G$ be the fixed point set of the $G$-action. Let $b_\bullet^G = \dim H^\bullet (X;{\mathbb R})^G$, where $\bullet = 2, +, -$. The Euler number of $X$ is denoted by $\chi (X)$ and the signature of $X$ by $\operatorname{Sign}(X)$. When we fix a generator $g$ of $G$, the representation at a fixed point can be described by a pair of nonzero integers $(a,b)$ modulo $3$ which is well-defined up to order and changing the sign of both together. Hence, there are two types of fixed points. \begin{itemize} \item The type ($+$): $(1,2) = (2,1)$. \item The type ($-$): $(1,1) = (2,2)$. \end{itemize} Let $m_+$ be the number of fixed points of the type ($+$), and $m_-$ be the number of fixed points of the type ($-$). \thmref{thm:main0} immediately follows from the next theorem. \begin{Theorem}\label{thm:main} Let $G$ be the cyclic group of order $3$. For locally linear pseudofree $G$-actions on a $K3$ surface $X$, we have the following{\textup :} \begin{enumerate} \item Every locally linear pseudofree $G$-action on $X$ belongs to one of the four types in \tabref{tab:actions}. Furthermore, each of the four types can actually be realized by a locally linear pseudofree $G$-action on $X$. \begin{table}[h] \caption{The classification of actions} \label{tab:actions} \begin{center} \begin{tabular}{l|c|c|c|c|c|c|c} Type & $\#X^G$ & $m_+$ & $m_-$ & $b_2^G$ & $b_+^G$ & $b_-^G$ & $\operatorname{Sign}(X/G)$ \\ \hline $A_0$ & $6$ & $6$ & $0$ & $10$ & $3$ & $7$ & $-4$ \\ $A_1$ & $9$ & $3$ & $6$ & $12$ & $3$ & $9$ & $-6$ \\ $A_2$ & $12$ & $0$ & $12$ & $14$ & $3$ & $11$ & $-8$ \\ \hline $B$ & $3$ & $0$ & $3$ & $8$ & $1$ & $7$ & $-6$ \\ \end{tabular} \end{center} \end{table} \item The type $A_1$ can not be realized by a smooth action on the standard smooth $K3$ surface. 
\end{enumerate} \end{Theorem} \begin{Remark}\label{rem:1} The assertion (1) in \thmref{thm:main} is an application of the remarkable result by A.~L.~Edmonds and J.~H.~Ewing \cite{EE} with Freedman's classification of simply-connected topological $4$-manifolds \cite{Freedman}. \end{Remark} \begin{Remark} To prove the assertion (2), we use the mod $p$ vanishing theorem of Seiberg-Witten invariants by F.~Fang \cite{Fang}, with the fact that the Seiberg-Witten invariant for the canonical $\operatorname{Spin}^c$-structure of the standard smooth $K3$ surface is $\pm 1$. \end{Remark} \begin{Remark} The types $A_0$, $A_1$ and $A_2$ are the actions which act trivially on $H^+(X;{\mathbb R})$. \end{Remark} \begin{Remark}\label{rem:std} The type $A_0$ is realized by a smooth action on the Fermat quartic surface. (See \propref{prop:Fermat}.) \end{Remark} \begin{Remark} We do not know whether or not $A_2$ and $B$ can be realized by a smooth action for some smooth structure on a $K3$ surface. \end{Remark} \begin{Remark} K.~Kiyono proved the existence of unsmoothable locally linear pseudofree actions on the connected sums of $S^2\times S^2$ \cite{Kiyono}. Although he also uses the Seiberg-Witten gauge theory, his method is different from ours. It is interesting that he invokes the ``$G$-invariant $10/8$-theorem'' instead of Seiberg-Witten invariants. (A related paper is \cite{KL}.) \end{Remark} \section{The proof of the assertion (1)}\label{sec:proof1} As mentioned in \remref{rem:1}, the proof of the assertion (1) of \thmref{thm:main} will rely on the realization theorem by A.~L.~Edmonds and J.~H.~Ewing \cite{EE}. First, we summarize their result in the very special case when $G={\mathbb Z}/3$. \begin{Theorem}[\cite{EE}]\label{thm:EE} Let $G$ be the cyclic group of order $3$. 
Suppose that one is given fixed point data $$ \mathcal D = \{(a_0,b_0), (a_1,b_1), \ldots, (a_n,b_n), (a_{n+1},b_{n+1})\}, $$ where $a_i, b_i \in {\mathbb Z}/3\setminus\{0\}$, and a $G$-invariant symmetric unimodular form $$ \Phi\colon V\times V\to {\mathbb Z}, $$ where $V$ is a finitely generated ${\mathbb Z}$-free ${\mathbb Z}[G]$-module. Then the data $\mathcal D$ and the form $(V,\Phi)$ are realizable by a locally linear, pseudofree, $G$-action on a closed, simply-connected, topological $4$-manifold if and only if they satisfy the following two conditions{\textup :} \begin{enumerate} \item The condition REP{\textup :} As a ${\mathbb Z}[G]$-module, $V$ splits into $F\oplus T$, where $F$ is free and $T$ is a trivial ${\mathbb Z}[G]$-module with $\operatorname{rank}_{\mathbb Z} T = n$. \item The condition GSF{\textup :} The $G$-Signature Formula is satisfied{\textup :} $$ \operatorname{Sign}(g, (V,\Phi)) = \sum_{i=0}^{n+1}\frac{(\zeta^{a_i} + 1)(\zeta^{b_i} + 1)}{(\zeta^{a_i} - 1)(\zeta^{b_i} - 1)}, $$ where $\zeta = \exp(2\pi\sqrt{-1}/3)$. \end{enumerate} \end{Theorem} \begin{Remark} In \cite{EE}, A.~L.~Edmonds and J.~H.~Ewing prove the realization theorem for all cyclic groups of prime order $p$, and for general $p$, the third condition {\it TOR}, which is related to the Reidemeister torsion, should be satisfied. However, when $p=3$, the condition {\it TOR} is redundant. This follows from the fact that the class number of ${\mathbb Z}[\zeta]$ is $1$, and Corollary 3.2 of \cite{EE}. \end{Remark} Now, let us begin the proof of the assertion (1). Suppose that a locally linear pseudofree $G$-action on $X$ is given. First of all, the ordinary Lefschetz formula should hold: $L(g,X) = 2 + \operatorname{tr} ( g|_{H^2(X)}) =\#X^G$. Noting that $\#X^G = m_+ + m_- $ and $ 2 + \operatorname{tr} ( g|_{H^2(X)}) \leq 24$, we obtain $$ m_+ + m_- \leq 24. $$ This is compatible with the condition {\it REP}. 
Note that \begin{equation*} \chi (X/G) = \frac13 \{ 24 + 2 (m_+ + m_- )\}. \end{equation*} By \thmref{thm:EE}, the $G$-Signature Formula should hold: \begin{align*} \operatorname{Sign} (g,X) &= \operatorname{Sign} (g^2,X) = \frac13 (m_+ - m_-),\\ \operatorname{Sign} (X/G) &= \frac13 \left\{ -16 + \frac23 (m_+ - m_-)\right\}.\\ \end{align*} Since $\operatorname{Sign} (X/G)$ is an integer, $ m_+ - m_- \equiv 6 \mod 9. $ This with the inequality $-24\leq m_+ - m_-\leq 24$ implies that \begin{equation}\label{eq:diff} m_+ - m_- = -21,-12,-3,6,15,24. \end{equation} We can calculate $b_+^G$ and $b_-^G$ from $\chi(X/G)$ and $\operatorname{Sign} (X/G)$. Since $b_+^G$ is $1$ or $3$, we obtain the following: \begin{itemize} \item When $b_+^G=1$, $2m_+ + m_-=3$. \item When $b_+^G=3$, $2m_+ + m_-=12$. \end{itemize} By these equations, \eqref{eq:diff} and non-negativity of $m_+$ and $m_-$, we obtain \tabref{tab:actions}. Next we will prove the existence of actions. First, we construct a smooth $G$-action of type $A_0$ on the Fermat quartic surface. \begin{Proposition}\label{prop:Fermat} There exists a smooth $G$-action of the type $A_0$ on the Fermat quartic surface $X$ which is defined by the equation $\sum_{i=0}^3 z_i^4 = 0$ in $\operatorname{\C P}^3$. \end{Proposition} \proof By the symmetry of the defining equation, the symmetric group of degree $4$ acts on $X$ as permutations of variables. Therefore $G$ acts smoothly on $X$ via this action. We can easily check that the $G$-action is pseudofree, and belongs to the type $A_0$. \endproof To prove the existence of actions of other types, we invoke \thmref{thm:EE}. We need to construct $G$-actions on the intersection form. Let $(V_{K3}, \Phi_{K3})$ be the intersection form of the $K3$ surface, which is even and indefinite. 
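Before turning to the construction, we note that the case analysis just carried out is small enough to check by machine. The following Python sketch (a verification aside, not part of the proof) enumerates all pairs $(m_+, m_-)$ allowed by the Lefschetz bound, the mod $9$ congruence, and the two possible values of $2m_+ + m_-$, and recovers exactly the four rows of \tabref{tab:actions}:

```python
# Enumerate pairs (m_plus, m_minus) satisfying the constraints above:
#   m_plus + m_minus <= 24          (Lefschetz fixed point formula)
#   m_plus - m_minus == 6 (mod 9)   (integrality of Sign(X/G))
#   2*m_plus + m_minus in {3, 12}   (from b_+^G = 1 or b_+^G = 3)
solutions = [
    (mp, mm)
    for mp in range(25)
    for mm in range(25 - mp)
    if (mp - mm) % 9 == 6 and 2 * mp + mm in (3, 12)
]
print(solutions)
```

The four solutions correspond, in the notation of the table, to the types $B$, $A_2$, $A_1$ and $A_0$ respectively.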
Since an even indefinite form is completely characterized by its rank and signature, $(V_{K3}, \Phi_{K3})$ is isomorphic to $3H\oplus \Gamma_{16}$, where $H$ is the hyperbolic form, and $\Gamma_{16}$ is a negative definite even form of rank $16$. We will construct $G$-actions on $3H$ and $\Gamma_{16}$ separately. \begin{Lemma}\label{lem:E16} For each integer $k$ which satisfies $0\leq k\leq5$, there is a $G$-action on $\Gamma_{16}$ such that $$ \Gamma_{16}\cong (16-3k){\mathbb Z}\oplus k{\mathbb Z}[G]\text{ as a ${\mathbb Z}[G]$-module}. $$ \end{Lemma} \proof When $k=0$, it suffices to take the trivial $G$-action. Hence we suppose $k\geq 1$. Recall that the lattice $\Gamma_{16}$ is the set of $(x_1,\ldots,x_{16})\in(\frac12 {\mathbb Z})^{16}$ which satisfy \begin{enumerate} \item $x_i\equiv x_j\mod {\mathbb Z}$ for any $i,j$, \item $\sum_{i=1}^{16}x_i\equiv 0\mod 2{\mathbb Z}$. \end{enumerate} The unimodular bilinear form on $\Gamma_{16}$ is defined by $-\sum_{i=1}^{16}x_i^2$. Note that the symmetric group of degree $16$ acts on $\Gamma_{16}$ as permutations of components. For a fixed generator $g$ of $G$, define the $G$-action on $\Gamma_{16}$ by $$g = (1,2,3)(4,5,6)\cdots(3k-2,3k-1,3k),$$ where $(l,m,n)$ is the cyclic permutation of $(x_l, x_m ,x_n)$. As a basis for $\Gamma_{16}$, we take \begin{equation*} f_i = \left\{ \begin{aligned} e_i+e_{16},& \quad\qquad (i=1,\ldots ,9), \\ e_i-e_{16},& \quad\qquad (i=10,\ldots,15),\\ \frac12 (e_1+e_2&+\cdots+e_{16}),\, (i=16), \end{aligned}\right. \end{equation*} where $e_1,\ldots,e_{16}$ is the usual orthonormal basis for ${\mathbb R}^{16}$. Then the basis $(f_1,f_2,\ldots,f_{16})$ gives the required direct splitting. \endproof \begin{Lemma}\label{lem:3H} There is a $G$-action on $3H$ such that $3H \cong{\mathbb Z}[G]\oplus{\mathbb Z}[G]$ as a ${\mathbb Z} [G]$-module, and $G$-fixed parts of a maximal positive definite subspace and a negative one of $3H\otimes{\mathbb R}$ both have rank $1$. 
\end{Lemma} \proof Such a $G$-action is given as permutations of three $H$'s. \endproof With \lemref{lem:E16} and \lemref{lem:3H} understood, for each of $A_1$, $A_2$ and $B$, the corresponding $G$-action on $(V_{K3}, \Phi_{K3})$ can be constructed. That is, \begin{itemize} \item for $A_1$, $3H \cong 6{\mathbb Z}$ and $\Gamma_{16}\cong {\mathbb Z}\oplus 5{\mathbb Z}[G]$, \item for $A_2$, $3H \cong 6{\mathbb Z}$ and $\Gamma_{16}\cong 4{\mathbb Z}\oplus 4{\mathbb Z}[G]$, \item for $B$, $3H \cong {\mathbb Z} [G]\oplus{\mathbb Z} [G]$ and $\Gamma_{16}\cong {\mathbb Z}\oplus 5{\mathbb Z}[G]$. \end{itemize} Now the conditions {\it REP} and {\it GSF} are satisfied. Therefore we have a locally linear pseudofree $G$-action on a closed simply-connected $4$-manifold $X$ whose intersection form is just $(V_{K3}, \Phi_{K3})$ by \thmref{thm:EE}. Since $X$ is simply-connected and its intersection form is even, we see that $X$ is homeomorphic to the $K3$ surface by Freedman's theorem \cite{Freedman}. Thus the assertion (1) is proved. \begin{Remark} By using Theorem 1.3 in \cite{BW}, we can prove that the topological conjugacy class of actions of the type $B$ is unique, that is, any action of the type $B$ is conjugate to the action which we have constructed. \end{Remark} \begin{Remark} We can also construct a locally linear pseudofree action of the type $A_0$ by \thmref{thm:EE}. For this purpose, we need to construct a $G$-action on $3H$ such that $3H \cong 3{\mathbb Z} \oplus{\mathbb Z} [G]$ as a ${\mathbb Z}[G]$-module, and the rank of a $G$-fixed maximal positive definite subspace of $3H\otimes{\mathbb R}$ is $3$ and the rank of a negative one is $1$. Such a $G$-action on $3H$ is constructed from the cohomology ring of a $4$-torus with a $G$-action as follows: Let $\zeta =\exp(2\pi\sqrt{-1}/3)$, and consider the lattice ${\mathbb Z}\oplus\zeta{\mathbb Z}\subset{\mathbb C}$. 
For each $i = 0,1,2$, let us consider a $2$-torus $T_{\zeta^i} = {\mathbb C}/({\mathbb Z}\oplus\zeta{\mathbb Z})$ with a $G$-action, where the $G$-action is defined by the multiplication by $\zeta^i$. Next, consider the $4$-torus $T_{12}=T_{\zeta}\times T_{\zeta^2}$ with the diagonal $G$-action. Then we can prove that the induced $G$-action on $H^2(T_{12};{\mathbb Z})$ has the required properties. Using this with a $G$-action on $\Gamma_{16}$ such that $\Gamma_{16}\cong {\mathbb Z}\oplus 5{\mathbb Z}[G]$, we obtain a $G$-action of the type $A_0$ by \thmref{thm:EE}. \end{Remark} \section{The proof of the assertion (2)}\label{sec:proof2} In this section, we consider $X$ as the smooth $K3$ surface with the standard smooth structure. Suppose now that a smooth action of the type $A_1$ exists. To obtain a contradiction, we use a Seiberg-Witten invariant of $X$. Recall that, for a smooth $4$-manifold with $b_1=0$ and $b_+\geq 2$, the Seiberg-Witten invariants constitute a map from the set of equivalence classes of $\operatorname{Spin}^c$-structures on $X$ to ${\mathbb Z}$. That is, for a $\operatorname{Spin}^c$-structure $c$, the corresponding Seiberg-Witten invariant $\operatorname{SW}_X(c)$ is given as an integer. We use the canonical $\operatorname{Spin}^c$-structure $c_0$, which, in the case of the $K3$ surface $X$, is characterized as the one whose determinant line bundle $L$ is trivial. Note that $c_0$ is also characterized as the $\operatorname{Spin}^c$-structure which is determined by the $\operatorname{Spin}$-structure. Since $X$ is simply-connected and $L$ is trivial, we can see that every $G={\mathbb Z}/3$-action on $X$ lifts to a $G$-action on the $\operatorname{Spin}^c$-structure $c_0$. 
Then, the $G$-index of the Dirac operator $D_X$ can be written as $\mathop{\text{\rm ind}}\nolimits_G D_X = \sum_{j=0}^2 k_j{\mathbb C}_j \in R(G) \cong {\mathbb Z}[t]/(t^3-1)$, where ${\mathbb C}_j$ is the complex $1$-dimensional weight $j$ representation of $G$ and $R(G)$ is the representation ring of $G$. F.~Fang \cite{Fang} proves the mod $p$ vanishing theorem under a ${\mathbb Z}/p$-action where $p$ is a prime. \begin{Theorem}[\cite{Fang}]\label{thm:Fang} Let $Y$ be a smooth closed oriented $4$-dimensional ${\mathbb Z}/p$-manifold with $b_1=0$ and $b_+\geq 2$, where $p$ is a prime. Suppose that $c$ is a $\operatorname{Spin}^c$-structure on which the ${\mathbb Z}/p$-action lifts, and that ${\mathbb Z}/p$ acts trivially on $H^+(Y;{\mathbb R})$. If $2k_j \leq b_+ -1$ for $j=0,\ldots,p-1$, then \begin{equation*} \operatorname{SW}_Y(c) \equiv 0 \mod p. \end{equation*} \end{Theorem} \begin{Remark} The second author generalized \thmref{thm:Fang} to the case when $b_1>0$ \cite{Nakamura}. \end{Remark} On the other hand, it is well-known that $\operatorname{SW}_X(c_0) = \pm 1$ for the standard $K3$ surface $X$. (See e.g. \cite{FM} or \cite{T}.) Therefore, in the case when $G$ acts on $(X,c_0)$, we have $k_j >1$ for some $j$ by \thmref{thm:Fang}. The coefficients $k_j$ are calculated by the $G$-spin theorem. (For the $G$-spin theorem, we refer to \cite{AB,AH,LM,Sh}.) For the fixed generator $g\in G$, the Lefschetz number $\mathop{\text{\rm ind}}\nolimits_g D_{X}$ is calculated by the formula \begin{equation*} \mathop{\text{\rm ind}}\nolimits_g D_{X} =\sum_{j=0}^{2} \zeta^j k_j = \sum_{P\in X^G} \nu(P), \end{equation*} where $\zeta=\exp (2\pi\sqrt{-1}/3)$ and $\nu(P)$ is a complex number associated to each fixed point $P$ given as follows. Suppose that a fixed point $P$ has the representation type $(a, b)$ with respect to $g$. 
Then the number $\nu (P)$ associated to $P$ is given by \begin{equation}\label{eq:nup} \nu (P) = \frac1{{(\zeta^{a})}^{1/2} - {(\zeta^{a})}^{-1/2}}\frac1{{(\zeta^{b})}^{1/2} - {(\zeta^{b})}^{-1/2}}. \end{equation} The signs of ${(\zeta^{a})}^{1/2}$ and ${(\zeta^{b})}^{1/2}$ are determined such that $$ \left\{{(\zeta^{a})}^{1/2}\right\}^3 =\left\{{(\zeta^{b})}^{1/2}\right\}^3 = 1. $$ (This is because, in our case, the $g$-action on the $\operatorname{Spin}$-structure generates a $G$-action on the $\operatorname{Spin}$-structure. See \cite[p.20]{AH} or \cite[p.175]{Sh}.) With the above understood, we obtain \begin{align*} \mathop{\text{\rm ind}}\nolimits_g D_X & =k_0 + \zeta k_1 + \zeta^2 k_2 = \frac13 (m_+ - m_-),\\ \mathop{\text{\rm ind}}\nolimits_{g^2} D_X & = k_0 + \zeta^2 k_1 + \zeta k_2 = \frac13 (m_+ - m_-),\\ \mathop{\text{\rm ind}}\nolimits_1 D_X & = k_0 + k_1 + k_2 = 2. \end{align*} Solving these equations, we have \begin{align*} k_0 &= \frac19 \left\{ 6 + 2 (m_+ - m_-)\right\},\\ k_1 = k_2 &= \frac19 \left\{ 6 - (m_+ - m_-)\right\}. \end{align*} In the case of an action of type $A_1$, $m_+=3$ and $m_- =6$. Hence, we have $k_0=0$ and $k_1=k_2=1$. Therefore there is no $j$ such that $k_j >1$. This is a contradiction. Thus the assertion (2) is proved. \begin{Remark}\label{rem:inf} It is clear that a proposition similar to (2) of \thmref{thm:main} is true for any smooth structure such that the Seiberg-Witten invariant for the $\operatorname{Spin}^c$-structure with trivial determinant line bundle is not congruent to $0$ modulo $3$. Let us examine elliptic surfaces which are homeomorphic to $K3$. Consider relatively minimal regular elliptic surfaces with at most two multiple fibers whose Euler number is $24$. Let $p$ and $q$ be the multiplicities of the multiple fibers, and let us write such an elliptic surface as $E(2)_{p,q}$. (We assume that $p$ and $q$ may be $1$.) The following are known about $E(2)_{p,q}$. 
\begin{enumerate} \item $E(2)_{1,1}$ (no multiple fiber) is diffeomorphic to the standard $K3$. \item $E(2)_{p,q}$ is homeomorphic to the $K3$ surface if and only if $\gcd(p,q)=1$. (See e.g.\ \cite{Ue}.) \item $E(2)_{p,q}$ is not diffeomorphic to $E(2)_{p^\prime,q^\prime}$ if $pq\neq p^\prime q^\prime$ \cite{FM0}. \item Let $c_0$ be the $\operatorname{Spin}^c$-structure with trivial determinant line bundle. If $p$ and $q$ are odd, then $\operatorname{SW}_{E(2)_{p,q}}(c_0)=\pm 1$ \cite{FM2,FS}. \end{enumerate} Thus we see that the type $A_1$ can not be realized by a smooth action on $E(2)_{p,q}$ such that $\gcd(p,q)=1$ and $p$ and $q$ are odd. Note that there are infinitely many $(p,q)$ which give different smooth structures. \end{Remark} \end{document}
\begin{document} \title{Sparse Isotropic Regularization for Spherical Harmonic Representations of Random Fields on the Sphere} \begin{abstract} This paper discusses sparse isotropic regularization for a random field on the unit sphere $\sph{2}$ in $\mathbb{R}^{3}$, where the field is expanded in terms of a spherical harmonic basis. A key feature is that the norm used in the regularization term, a hybrid of the $\ell_{1}$- and $\ell_{2}$-norms, is chosen so that the regularization preserves isotropy, in the sense that if the observed random field is strongly isotropic then so too is the regularized field. The Pareto efficient frontier is used to display the trade-off between the sparsity-inducing norm and the data discrepancy term, in order to help in the choice of a suitable regularization parameter. A numerical example using Cosmic Microwave Background (CMB) data is considered in detail. In particular, the numerical results explore the trade-off between regularization and discrepancy, and show that substantial sparsity can be achieved along with small $L_{2}$ error. \end{abstract} \section{Introduction} This paper presents a new algorithm for the sparse regularization of a real-valued random field $T$ on the sphere, with the regularizer taken to be a novel norm (a hybrid of $\ell_1$ and $\ell_2$ norms) imposed on the coefficients $\coesh$ of the spherical harmonic decomposition, \begin{equation*} \RF(\PT{x})=\sum_{\ell=0}^\infty\sum_{m=-\ell}^\ell \coesh \shY(\PT{x}), \quad \PT{x}\in \mathbb{S}^2. \end{equation*} Here $\shY$ for $m=-\ell,\ldots,\ell$ is a (complex) orthonormal basis for the space of homogeneous harmonic polynomials of degree $\ell$ in $\mathbb{R}^3$, restricted to the unit sphere $\mathbb{S}^2:=\{\PT{x}\in\mathbb{R}^3:|\PT{x}|=1\}$, with $|\cdot|$ denoting the Euclidean norm in $\mathbb{R}^3$. Random fields on the sphere have recently attracted much attention from both mathematicians \cite{MaPe2011} and astrophysicists. 
In particular, the satellite data used to form the map of the Cosmic Microwave Background (see \cite{Planck2016I,Planck2016IX,Planck2016XVI}) is usually viewed as, to a good approximation, a single realization of an isotropic Gaussian random field, after correction for the obscured portion of the map near the galactic plane. Sparse regularization of data (i.e.\ a regularized approximation in which many coefficients in an expansion are zero) is another topic that has recently attracted great attention, especially in compressed sensing and signal analysis, see for example \cite{CanRT2006a,DauFL2008,Don2006}. In the context of the CMB the use of sparse representations is somewhat controversial, see for example \cite{StDoFaRa2013}, but nevertheless has often been discussed, especially in the context of inpainting to correct for the obscuring effect of our galaxy near the galactic plane. In a recent paper, Cammarota and Marinucci \cite{CamMar15} considered a particular $\ell_1$-regularization problem based on spherical harmonics, and showed that if the true field is both Gaussian and isotropic (the latter meaning that the underlying law is invariant under rotation), then the resulting regularized solution is neither Gaussian nor isotropic. The problem of anisotropy has also been pointed out in sparse inpainting on the sphere \cite{Fe_etal2014}. The scheme analyzed in \cite{CamMar15} obtains a regularized field as the minimizer of \begin{equation}\label{eq:CRminimiser} \frac{1}{2}\|\RF-\RFo\|_{L_2(\mathbb{S}^2)}^2+ \lambda\sum_{\ell=0}^\infty\sum_{m=-\ell}^\ell |\coesh|, \end{equation} where $\RFo$ is the observed field, and $\lambda\ge0$ is a regularization parameter. Behind the non-preservation of isotropy in this scheme lies a more fundamental problem, namely that the regularizer in \eqref{eq:CRminimiser} is not invariant under rotation of the coordinate axes. 
For this reason the regularized field, and even the sparsity pattern, will in general depend on the choice of coordinate axes. The essential point is that for a given $\ell\ge 1$ the sum $\sum_{m=-\ell}^\ell|\coesh|^2$ is rotationally invariant, while the sum $\sum_{m=-\ell}^\ell|\coesh|$ is not. For convenience the rotational invariance property is proved in the next section. A simple example might be illuminating. Suppose that a particular realization of the field happens to take the (improbable!) form $$ \RF(\PT{x})=\PT{x}\cdot\PT{p}, \quad \PT{x}\in \mathbb{S}^2 $$ for some fixed point $\PT{p}$ on the celestial unit sphere. If the $z$ axis is chosen so that $\PT{p}$ is at the north pole, then $T(\PT{x})=\cos\theta$ where $\theta$ is the usual polar angle, and so $\RF(\PT{x})=\alpha\shY[1,0](\PT{x})$, where $\alpha=\sqrt{4\pi/3}$ (since $\shY[1,0](\theta,\phi) = \sqrt{3/(4\pi)} \cos\theta$). Thus with this choice we have $\coesh[1,0]= \alpha$, and all other coefficients are zero. On the other hand, if the axes are chosen so that $\PT{p}$ lies on the $x$ axis then the field has the polar coordinate representation $$ \RF(\PT{x})=\sin\theta \cos \phi =\frac{1}{2} (e^{i\phi} +e^{-i\phi})\sin \theta = \frac{\alpha}{\sqrt{2}}(\shY[1,1]-\shY[1,-1]), $$ so that now the only non-zero coefficients are $\coesh[1,1]=-\coesh[1,-1]=\alpha/\sqrt{2}$. Note that the sum of the absolute values in the second case is larger than that in the first case by a factor of $\sqrt{2}$. (Note that even the choice of complex basis for the spherical harmonics affects the sum of the absolute values of the coefficients; but not, of course, the sum of the squares of the absolute values.) 
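The factor of $\sqrt{2}$ in this example is easy to confirm numerically. The following Python sketch (our own illustration, using the degree-one coefficients of the example above; overall signs do not affect either sum) compares the two coordinate frames:

```python
import math

alpha = math.sqrt(4 * math.pi / 3)

# Degree-1 coefficients (m = -1, 0, 1) in the two frames of the example:
pole_frame = [0.0, alpha, 0.0]                               # p at the north pole
x_frame = [-alpha / math.sqrt(2), 0.0, alpha / math.sqrt(2)]  # p on the x axis

def l1(c):
    # Sum of absolute values of the coefficients.
    return sum(abs(z) for z in c)

def l2(c):
    # Square root of the sum of squared absolute values.
    return math.sqrt(sum(abs(z) ** 2 for z in c))

print(l1(x_frame) / l1(pole_frame))  # sqrt(2): the l1 sum depends on the frame
print(l2(x_frame) / l2(pole_frame))  # 1: the l2 sum is rotationally invariant
```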
With this motivation, in this paper we replace the regularizer in \eqref{eq:CRminimiser} by one that is manifestly rotationally invariant: in our scheme the regularized field is the minimizer of \begin{equation} \label{eq:our_minimiser} \frac{1}{2}\norm{\RF-\RFo}{\Lp{2}{2}}^2+ \lambda\sum_{\ell=0}^\infty\beta_\ell \bigg(\sum_{m=-\ell}^\ell |\coesh|^2\bigg)^{1/2}, \end{equation} where the $\beta_\ell$ are at this point arbitrary positive numbers normalized by $\beta_0 =1$. With an appropriate choice of $(\beta_\ell)_{\ell\in\Nz}$ and $\lambda$ our regularized solution will turn out to be sparse, but with the additional property of either preserving all or discarding all the coefficients $\coesh$ of a given degree $\ell$. It is easily seen that the regularized field, that is the minimizer of \eqref{eq:our_minimiser} for a given observed field $\RFo$, takes the form \begin{equation*} \RFr(\PT{x}) = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}\coeshr \shY(\PT{x}),\quad \PT{x}\in\sph{2}, \end{equation*} where (see Proposition~\ref{prop:reg.sol}) \begin{equation*} \coeshr := \left\{\begin{array}{ll} \displaystyle \left(1 - \frac{\lambda\beta_{\ell}}{\Alo}\right) \coesho, & \mbox{if~}\Alo>\lambda\beta_{\ell},\\ 0, & \mbox{if~}\Alo\le\lambda\beta_{\ell}, \end{array}\right. \end{equation*} where \begin{equation}\label{eq:Alo} \Alo := \left(\sum_{m=-\ell}^\ell | \coesho |^2 \right)^\frac{1}{2}, \quad \ell\ge0. \end{equation} Since the resulting sparsity pattern depends entirely on the sequence of ratios $\Alo/(\lambda\beta_{\ell})$ for $\ell\ge0$, it is clear that in any application of the present regularization scheme, the choices of the sequence $(\beta_{\ell})_{\ell\ge0}$ and the parameter $\lambda$ are crucial. In this paper we shall discuss these choices in relation to a particular dataset from the cosmic microwave background (CMB) project, first choosing $\beta_{\ell}$ to match the observed decay of the $\Alo$, and finally choosing the parameter $\lambda$. 
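In implementation terms, the closed-form minimizer above amounts to a degree-wise (group) soft thresholding of the observed coefficients: each degree is either shrunk as a block or discarded as a block. A minimal Python sketch (the nested-list storage of the coefficients is our own convention, not from the paper):

```python
import math

def regularize(coeffs, beta, lam):
    """Degree-wise soft thresholding implementing the minimizer above.

    coeffs[l] holds the observed coefficients (m = -l, ..., l) of degree l,
    beta[l] are the positive weights, and lam >= 0 is the regularization
    parameter.  A whole degree is either shrunk toward zero or discarded.
    """
    result = []
    for l, c in enumerate(coeffs):
        A = math.sqrt(sum(abs(z) ** 2 for z in c))  # A_l of the observed field
        if A > lam * beta[l]:
            factor = 1.0 - lam * beta[l] / A
            result.append([factor * z for z in c])
        else:
            result.append([0.0] * len(c))           # whole degree set to zero
    return result
```

With $\lambda=0$ the observed field is returned unchanged; as $\lambda$ grows, every degree whose coefficient norm falls below $\lambda\beta_\ell$ is zeroed out, which is the source of the sparsity.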
We shall see that the resulting sparsity can vary greatly as $\lambda$ varies for given $(\beta_{\ell})_{\ell\in\Nz}$, with little change to the $L_{2}$ error of the approximation. Because of its very nature, the regularized solution has in general a smaller norm than the observed field. We therefore explore the option of scaling the regularized field so that both the observed and regularized fields have the same $L_{2}$ norm. The paper is organized as follows: In Section~\ref{sec:prelim} we review key definitions and properties of isotropic random fields on the unit sphere, the choice of norm and the regularization model. In Section~\ref{sec:regsol} we give the analytic solution to the regularization model. Section~\ref{sec:iso} proves that the regularization scheme produces a strongly isotropic field when the observed field is strongly isotropic. Section~\ref{sec:err} estimates the approximation error of the sparsely regularized random field from the observed random field, and for a given error provides an upper estimate of the regularization parameter $\lambda$. In Section~\ref{sec:scaling}, we consider the option of scaling the regularized field so that the $L_2$-norm is preserved. In Section~\ref{sec:num} we describe the numerical experiments that illustrate the proposed regularization algorithm. In particular, Section~\ref{sec:beta} considers the choice of the scaling parameters in the norm, while Section~\ref{sec:eff} illustrates use of the Pareto efficient frontier to help guide the choice of regularization parameter. Finally, Section~\ref{sec:CMB} uses the CMB data to illustrate the regularization scheme. \section{Preliminaries}\label{sec:prelim} \subsection{Rotational invariance} In this subsection, randomness plays no role. Let $\sph{2}$ be the unit sphere in the Euclidean space $\R^3$. 
Let $\Lp{2}{2}:=L_{2}(\sph{2},\sigma)$ denote the space of complex-valued square-integrable functions on $\sph{2}$ with the surface measure $\sigma$ on $\sph{2}$ satisfying $\sigma(\sph{2})=4\pi$, endowed with the inner product $\int_{\sph{2}}f(\PT{x})\conj{g(\PT{x})}\IntDiff[]{x}$ for $f,g\in\Lp{2}{2}$ and with induced $L_2$-norm $\norm{f}{\Lp{2}{2}}=\sqrt{\int_{\sph{2}}|f(\PT{x})|^2\IntDiff[]{x}}$. The (complex-valued) spherical harmonics $\{\shY : \ell=0,1,2,\ldots; m=-\ell,\ldots,\ell\}$, which are the eigenfunctions of the Laplace-Beltrami operator for the sphere, form a complete orthonormal basis for $\Lp{2}{2}$. There are various conventions for defining the spherical harmonics; this paper uses the basis of \cite{Liboff2003}, which is widely used in physics. A function $f\in \Lp{2}{2}$ can be expanded in terms of a Fourier-Laplace series \begin{equation}\label{eq:Fseries} f \sim \sum_{\ell=0}^\infty \sum_{m=-\ell}^{\ell} \Fcoe{f} \shY, \text{ with } \widehat{f}_{\ell,m} := \int_{\sph{2}} f(\PT{x}) \conj{\shY(\PT{x})} \IntDiff[]{x}, \end{equation} with $\sim$ denoting convergence in the $\Lp{2}{2}$ sense. The $\Fcoe{f}$ are called the \emph{Fourier coefficients} for $f$ under the Fourier basis $\shY$. Parseval's theorem states that \[ \norm{f}{\Lp{2}{2}}^2 = \sum_{\ell=0}^\infty \sum_{m=-\ell}^{\ell} |\Fcoe{f}|^2, \qquad f \in \Lp{2}{2}. \] As promised in the Introduction, we now show that the sum over $m$ of the squared absolute values of the Fourier coefficients is rotationally invariant. Let $\RotGr[3]$ be the rotation group on $\R^3$. For a given rotation $\rho \in \RotGr[3]$ and a given function $f\in L^2(\sph{2})$, the linear operator $\rtop$ on $\Lp{2}{2}$ associated with the rotation $\rho$ is defined by \begin{equation*} \rtop f(\PT{x}):=f(\rho^{-1}\PT{x}), \qquad \PT{x} \in \sph{2}. \end{equation*} The rotated function $\rtop f$ is essentially the same function as $f$, but expressed with respect to a coordinate system rotated by $\rho$. 
The operators $\rtop$ form a representation of the group $\RotGr[3]$, in that \begin{align*} (\rtop[\rho_{1}]\rtop[\rho_{2}])f(\PT{x}) &= \rtop[\rho_{1}](\rtop[\rho_{2}]f)(\PT{x}) = (\rtop[\rho_{2}]f)(\rho_{1}^{-1}\PT{x}) = f(\rho_{2}^{-1}\rho_{1}^{-1}\PT{x})\\ &=f((\rho_{1}\rho_{2})^{-1}\PT{x})=\rtop[\rho_{1}\rho_{2}]f(\PT{x}), \quad \rho_{1},\rho_{2}\in \RotGr[3]. \end{align*} \noindent \textbf{Definition} A (non-linear) functional $\funct$ of $f\in L_2(\sph{2})$ is rotationally invariant if for all rotations $\rho\in \RotGr[3]$, \[ \funct(\rtop f) = \funct(f). \] \begin{proposition}\label{prop:rotinv} For $f\in L_2(\sph{2})$ and $\ell\ge 0$ the sum over $m$ of the squares of the absolute values of the Fourier coefficients $\Fcoe{f}$, \[ \ml(f):= \sum_{m=-\ell}^\ell |\Fcoe{f}|^2, \] is rotationally invariant. \end{proposition} \begin{proof} By Fubini's theorem we can write, using \eqref{eq:Fseries}, \begin{align*} \ml(f)= \sum_{m=-\ell}^\ell |\Fcoe{f}|^2 &= \int_{\sph{2}}\int_{\sph{2}}f(\PT{x})\conj{f(\PT{x}')} \sum_{m=-\ell}^\ell \conj{\shY(\PT{x})}\shY(\PT{x}') \IntDiff[]{x}\mathrm{d}\sigma(\PT{x}')\\ &=\int_{\sph{2}}\int_{\sph{2}}f(\PT{x})\conj{f(\PT{x}')} \frac{(2\ell+1)}{4\pi}\Legen(\PT{x}\cdot\PT{x}') \IntDiff[]{x}\mathrm{d}\sigma(\PT{x}'), \end{align*} where $P_\ell$ is the Legendre polynomial scaled so that $P_\ell(1)=1$, and in the last step we used the addition theorem for spherical harmonics \cite{Muller1966}. Similarly, we have \begin{align*} \ml(\rtop f) &=\int_{\sph{2}}\int_{\sph{2}}\rtop f(\PT{x})\conj{\rtop f(\PT{x}')} \frac{(2\ell+1)}{4\pi}\Legen(\PT{x}\cdot\PT{x}') \IntDiff[]{x}\mathrm{d}\sigma(\PT{x}')\\ &=\int_{\sph{2}}\int_{\sph{2}}f(\rho^{-1}\PT{x})\conj{f(\rho^{-1}\PT{x}')} \frac{(2\ell+1)}{4\pi}\Legen(\PT{x}\cdot\PT{x}') \IntDiff[]{x}\mathrm{d}\sigma(\PT{x}'). 
\end{align*} Now change variables to $\PT{z}:=\rho^{-1}\PT{x}$ and $\PT{z'}:=\rho^{-1}\PT{x'}$, and use the rotational invariance of the inner product, \[ \PT{x}\cdot\PT{x}' = (\rho^{-1}\PT{x})\cdot(\rho^{-1}\PT{x}') = \PT{z}\cdot\PT{z}', \] together with the rotational invariance of the surface measure to obtain \begin{align*} \ml(\rtop f) =\int_{\sph{2}}\int_{\sph{2}}f(\PT{z})\conj{f(\PT{z}')} \frac{(2\ell+1)}{4\pi}\Legen(\PT{z}\cdot\PT{z}') \IntDiff[]{z}\mathrm{d}\sigma(\PT{z}') = \ml(f), \end{align*} thus completing the proof. \end{proof} \subsection{Random fields on spheres} Let $(\Omega,\cF,\probm)$ be a probability space and let $\cB(\sph{2})$ denote the Borel algebra on $\sph{2}$. A real-valued random field on the sphere $\sph{2}$ is a function $T:\Omega \times \sph{2} \rightarrow \R$ which is measurable on $\cF \otimes \cB(\sph{2})$. Let $\Lppsph{2}{2}$ be the $L_{2}$ space on the product space $\prodpsph[2]$ with product measure $\prodpsphm[]$. In the paper, we assume that $\RF\in \Lppsph{2}{2}$. By Fubini's theorem, $\RF\in \Lp{2}{2}$ $\Pas$, in which case $T$ admits an expansion in terms of spherical harmonics, $\Pas$, \begin{equation}\label{eq:KL} T \sim \sum_{\ell=0}^\infty \sum_{m=-\ell}^{\ell} \coesh \shY, \qquad \coesh := \coesh(\omega) = \int_{\sph{2}} \RF(\PT{x}) \conj{\shY(\PT{x})} \IntDiff[]{x}. \end{equation} We will for brevity write $\RF(\omega,\PT{x})$ as $\RF(\omega)$ or $\RF(\PT{x})$ if no confusion arises. The rotational invariance of the sum of $|\coesh|^2$ over $m$ is a corollary to Proposition \ref{prop:rotinv}, which we state as follows. \begin{corollary}\label{cor:rot inv T} The coefficients $\coesh$ of the random field $\RF$ in \eqref{eq:KL} have the property that for each $\ell\ge 0$ \[ \sum_{m=-\ell}^\ell |\coesh(\omega)|^2\; \mbox{is rotationally invariant}, \quad \omega\in \probSp. 
\] \end{corollary} The coefficients $\coesh$ are assumed to be uncorrelated mean-zero complex-valued random variables, that is \[ \bbE[\coesh]=0,\quad \bbE[\coesh \conj{\coesh[\ell',m']}]= C_{\ell,m} \delta_{\ell,\ell'}\delta_{m m'}, \] where the $C_{\ell,m}$ are non-negative numbers. The sequence $(C_{\ell,m})$ is called the \emph{angular power spectrum} of the random field $\RF$. It follows that $\RF(\PT{x})$ has mean zero for each $\PT{x} \in \sph{2}$ and covariance \[ \bbE[T(\PT{x})T(\PT{y})]= \sum_{\ell=0}^\infty \sum_{m=-\ell}^{\ell}C_{\ell,m}\conj{\shY(\PT{x})}\shY(\PT{y}), \qquad \PT{x}, \PT{y} \in \sph{2}, \] assuming for the moment that the sum is convergent. In this paper we are particularly concerned with questions of isotropy. Following \cite{MaPe2011}, the random field $T$ is \emph{strongly isotropic} if for any $k \in \bbN$ and for any set of $k$ points $\PT{x}_1,\ldots,\PT{x}_k \in \sph{2}$ and for any rotation $\rho \in \RotGr[3]$, $T(\PT{x}_1), \ldots, T(\PT{x}_k)$ and $T(\rho \PT{x}_1),\ldots,T(\rho \PT{x}_k)$ have the same law, that is, have the same joint distribution in $\probSp^{k}$. A more easily satisfied property is weak isotropy: for an integer $n \ge 1$, $T$ is said to be $n$-\emph{weakly isotropic} if for all $\PT{x} \in \sph{2}$, the $n$th-moment of $T(\PT{x})$ is finite, i.e. $\bbE[ |T(\PT{x})|^n ] <\infty$, and if for $k=1,\ldots,n$, for all sets of $k$ points $\PT{x}_1,\ldots,\PT{x}_k \in \sph{2}$ and for any rotation $\rho \in \RotGr[3]$, \[ \bbE[T(\PT{x}_1) \cdots T(\PT{x}_k) ] = \bbE [ T(\rho \PT{x}_1) \cdots T(\rho \PT{x}_k) ]. 
\] If the field $T$ is at least 2-weakly isotropic and also satisfies $\bbE[T(\PT{x})]=0$ for all $\PT{x}\in\sph{2}$ then by definition the covariance $\bbE[T(\PT{x})T(\PT{y})]$ is rotationally invariant, and hence admits an $L_2$-convergent expansion in terms of Legendre polynomials, \begin{equation*} \bbE[T(\PT{x})T(\PT{y})] = \sum_{\ell=0}^\infty \frac{2\ell+1}{4\pi} \APS{\ell} \Legen(\PT{x} \cdot \PT{y}) =\sum_{\ell=0}^\infty \sum_{m=-\ell}^{\ell}\APS{\ell}\conj{\shY(\PT{x})}\shY(\PT{y}), \end{equation*} where in the last step we again used the addition theorem for spherical harmonics. Thus in this case we have $C_{\ell,m} = \APS{\ell}$, and the angular power spectrum is independent of $m$, and can be written as \[ C_\ell = \bbE[|a_{\ell,m}|^2 ] = \frac{1}{2\ell+1} \bbE\left[ \sum_{m=-\ell}^\ell |a_{\ell,m}|^2\right]. \] We note that the scaled angular power spectrum as used in astrophysics for the CMB data, see for example \cite{Gorski_etal2005}, is \[ D_\ell := \frac{\ell(\ell+1)}{2\pi} \; \APS{\ell}. \] A random field $T$ is \emph{Gaussian} if for each $k\in \bbN$ and each choice of $\PT{x}_1,\ldots,\PT{x}_k \in \sph{2}$ the vector $(T(\PT{x}_1),\ldots,T(\PT{x}_k))$ is a multivariate random variable with a Gaussian distribution. A Gaussian random field is completely specified by giving its mean and covariance function. The following proposition relates Gaussian and isotropy properties of a random field. \begin{proposition}\label{pro:GaussF} \cite[Proposition 5.10]{MaPe2011} Let $T$ be a Gaussian random field on $\sph{2}$. Then $T$ is strongly isotropic if and only if $T$ is 2-weakly isotropic. \end{proposition} By \cite[Theorem 5.13, p. 123]{MaPe2011}, a $2$-weakly isotropic random field is in $\Lp{2}{2}$ $\Pas$. In the present paper we are principally concerned with input random fields that are both Gaussian and strongly isotropic. Our main aim is to show that the resulting regularized field is also strongly isotropic. 
(Of course the Gaussianity of the field is inevitably lost, given that some of the coefficients may be replaced by zero.) \subsection{Norms and regularization models}\label{sec:reg} In this section the randomness of the field plays no real role. Thus the observed field $\RFo$ may be thought of either as a deterministic field or as one realization of a random field. Assume that the observed field $\RFo$ is given by \begin{equation}\label{eq:RF.o} \RFo = \sum_{\ell=0}^\infty \sum_{m=-\ell}^{\ell} \coesho \shY. \end{equation} Consider an approximating field $\RF$ with the spherical harmonic expansion \begin{equation*} \RF = \sum_{\ell=0}^\infty \sum_{m=-\ell}^{\ell} \coesh \shY. \end{equation*} Let $\Nz = \{0, 1, 2, \ldots\}$ and let \begin{equation}\label{eq:Aell} A_\ell := \left(\sum_{m=-\ell}^\ell | \coesh |^2 \right)^\frac{1}{2}, \quad \ell \in\Nz. \end{equation} Then \[ \|T\|_{L_2(\mathbb{S}^2)}^2= \sum_{\ell=0}^\infty\sum_{m=-\ell}^\ell|a_{\ell,m}|^2 = \sum_{\ell=0}^\infty A_\ell^2. \] Clearly $A_\ell = 0$ if and only if $\coesh = 0$ for $m = -\ell,\ldots,\ell$. For simplicity, we let $\vcoesh:=\vcoesh(\RF)$ (an infinite dimensional vector) denote the sequence of spherical harmonic coefficients $\coesh, m = -\ell,\ldots,\ell$, $\ell\in\Nz$, of the field $\RF$: \begin{equation}\label{eq:avec} \vcoesh := (\coesh[0,0], \coesh[1,-1], \coesh[1,0], \coesh[1,1], \ldots, \coesh[\ell,-\ell], \ldots, \coesh[\ell,\ell],\ldots)^T. \end{equation} For a positive sequence $\{\beta_{\ell}\}_{\ell\in\Nz}$, we define the norm \begin{equation} \label{eq:norm} \norm{\vcoesh}{1,2,\beta} := \norm{\vcoesh}{1,2} := \sum_{\ell=0}^\infty \beta_\ell \left( \sum_{m=-\ell}^\ell |\coesh|^2 \right)^\frac{1}{2} = \sum_{\ell=0}^\infty \beta_\ell A_\ell. \end{equation} We call $\beta_{\ell}$ the \emph{degree-scaling} sequence, because it describes the relative importance of different degrees $\ell$. (In Section~\ref{sec:beta}, we will discuss the choice of the parameters $\beta_\ell$.) 
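For a band-limited field the norm \eqref{eq:norm} is a finite sum and is straightforward to evaluate. A small illustrative Python sketch (per-degree coefficient lists; the function name is ours):

```python
import math

def norm_12(a, beta):
    """Mixed (1,2)-norm: sum over degrees l of beta_l * A_l, where A_l
    is the Euclidean norm of the degree-l coefficients a_{l,m}, m = -l..l."""
    return sum(
        b * math.sqrt(sum(abs(c) ** 2 for c in coeffs))
        for coeffs, b in zip(a, beta)
    )
```

For example, coefficients with $A_0=3$, $A_1=4$ and weights $\beta=(1,2)$ give the value $3 + 2\cdot4 = 11$.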
This choice of norm, a scaled hybrid between the standard $\ell_1$ and $\ell_{2}$ norms, is the key to preserving isotropy while still giving sparse solutions. We will measure the agreement between the observed data $\coesho$ and the approximation $a_{\ell,m}$ by the $\ell_{2}$ norm, or its square, the discrepancy, \begin{equation*} \norm{\vcoesh - \vcoesho}{2}^2 = \sum_{\ell=0}^\infty \sum_{m=-\ell}^\ell | \coesh - \coesho |^2 = \|\RF-\RFo\|^2_{\Lp{2}{2}}. \end{equation*} Given the observed data $\coesho$ for $\ell \in\Nz$, $m = -\ell,\ldots,\ell$ arranged in the vector $\vcoesho$ as in (\ref{eq:avec}), and a regularization parameter $\lambda \geq 0$, our regularized problem is \begin{equation}\label{eq:model1} \mathop{\mathrm{Minimize}}_{\vcoesh} \ \tfrac{1}{2}\norm{\vcoesh - \vcoesho}{2}^2 + \lambda \norm{\vcoesh}{1,2}. \end{equation} As the quadratic term is strictly convex and the penalty term is convex with $\lambda \geq 0$, the objective is strictly convex and there is a unique global minimizer. Moreover, first-order optimality conditions are both necessary and sufficient for a global minimizer (see \cite{Ber2009} for example). A closely related model is \begin{equation}\label{eq:model2} \begin{array}{cl} \displaystyle\mathop{\mathrm{Minimize}}_{\vcoesh} & \norm{\vcoesh}{1,2} \\[1ex] \mbox{Subject to} & \norm{\vcoesh - \vcoesho}{2}^2 \leq \sigma^2. \end{array} \end{equation} Again, as the feasible region is closed and bounded, a global solution exists, and as the norms are convex functions any local minimizer is a global minimizer and the necessary conditions for a local minimizer are also sufficient. When the constraint in (\ref{eq:model2}) is active, the Lagrange multiplier determines the value of $\lambda$ in (\ref{eq:model1}). 
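Since the objective in \eqref{eq:model1} is strictly convex, the closed-form minimizer can be sanity-checked numerically: its objective value should be no larger than at any nearby point. The following sketch (toy data; helper names are our own choosing, not from the paper's software) compares the objective at the closed-form solution against random perturbations:

```python
import math
import random

def objective(a, a_obs, lam, beta):
    """0.5 * ||a - a_obs||_2^2 + lam * ||a||_{1,2,beta}."""
    sq = sum(abs(x - y) ** 2
             for cs, os in zip(a, a_obs) for x, y in zip(cs, os))
    pen = sum(b * math.sqrt(sum(abs(c) ** 2 for c in cs))
              for cs, b in zip(a, beta))
    return 0.5 * sq + lam * pen

def closed_form(a_obs, lam, beta):
    """Degree-wise soft-thresholding solution of the regularized problem."""
    out = []
    for l, cs in enumerate(a_obs):
        A = math.sqrt(sum(abs(c) ** 2 for c in cs))
        out.append([(1 - lam * beta[l] / A) * c for c in cs]
                   if A > lam * beta[l] else [0.0] * len(cs))
    return out

random.seed(0)
a_obs = [[2.0], [0.1, -0.3, 0.2]]
lam, beta = 0.4, [1.0, 1.0]
a_star = closed_form(a_obs, lam, beta)
f_star = objective(a_star, a_obs, lam, beta)
# every perturbation of the closed-form solution increases the objective
for _ in range(200):
    a_pert = [[c + random.uniform(-0.05, 0.05) for c in cs] for cs in a_star]
    assert objective(a_pert, a_obs, lam, beta) >= f_star - 1e-12
```

Here degree $0$ is kept and shrunk while degree $1$ (with $\Alo[1]<\lambda\beta_1$) is discarded, exercising both branches of the solution.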
If the objective was $\| \vcoesh \|_1$, instead of $\| \vcoesh \|_{1,2}$, this would be a very simple example of the constrained $\ell_{1}$-norm minimization problem, widely used, see \cite{Don2006,CanRT2006a,vdBerFri2011} for example, to find sparse solutions to under-determined systems of linear equations. Such problems, with a separable structure, can be readily solved, see \cite{WriNF2009,spgl1:2007} for example. An alternative formulation would be a LASSO~\cite{Tib1996,OsbPT2000a} based approach: \begin{equation}\label{eq:model3} \begin{array}{cl} \displaystyle\mathop{\mathrm{Minimize}}_{\vcoesh} & \tfrac{1}{2} \norm{\vcoesh - \vcoesho}{2}^2 \\[1ex] \mbox{Subject to} & \norm{\vcoesh}{1,2} \leq \kappa. \end{array} \end{equation} Such problems, using the standard $\ell_{1}$ norm $\| \vcoesh\|_1$ instead of $\| \vcoesh\|_{1,2}$, and related problems have been widely explored in statistics and compressed sensing, see \cite{EfHaJoTi2004,CanRT2006a,Don2006} for example. The \emph{regularized random field} $\RFr$ is given in terms of the spherical harmonic expansion \begin{equation}\label{eq:RF.r} \RFr := \sum_{\ell=0}^\infty \sum_{m=-\ell}^{\ell} \coeshr \shY, \end{equation} where the regularized coefficients $\coeshr$ minimize one of the model problems (\ref{eq:model1}), (\ref{eq:model2}) or (\ref{eq:model3}). We will concentrate on the model (\ref{eq:model1}). The relation to the other models is detailed in the appendix. It is up to the user to choose which regularization model is easiest to interpret: in a particular application specifying a bound $\norm{\vcoesh - \vcoesho}{2}^2 \leq \sigma^2$ on the discrepancy or a bound $\norm{\vcoesh}{1,2} \leq \kappa$ on the norm of the regularized solution may be easier to interpret than directly specifying the regularization parameter $\lambda$. The appendix shows how to determine the corresponding value of the regularization parameter $\lambda$ given either $\sigma$ or $\kappa$ for these alternative models. 
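This determination can also be done numerically: for the closed-form solution the squared discrepancy is a continuous, nondecreasing function of $\lambda$, so when the constraint in \eqref{eq:model2} is active, bisection on $\lambda$ suffices. A minimal sketch (illustrative Python; function names are ours), assuming finitely many degrees and $0<\sigma^2<\norm{\vcoesho}{2}^2$:

```python
def discrepancy(A_obs, lam, beta):
    """Squared discrepancy ||a^r - a^o||_2^2 of the closed-form solution:
    (lam*beta_l)^2 on kept degrees, (A_l^o)^2 on discarded ones."""
    return sum((lam * beta[l]) ** 2 if A > lam * beta[l] else A ** 2
               for l, A in enumerate(A_obs))

def lambda_for_sigma2(A_obs, beta, sigma2, iters=60):
    """Bisection for lambda with discrepancy(lambda) ~= sigma2,
    assuming 0 < sigma2 < sum(A**2 for A in A_obs) (active constraint)."""
    lo, hi = 0.0, max(A / b for A, b in zip(A_obs, beta))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if discrepancy(A_obs, mid, beta) < sigma2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a single degree with $\Alo=2$, $\beta_0=1$ and $\sigma^2=1$, this returns $\lambda\approx1$, matching the closed form.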
\section{Analytic solution to the sparse regularization model}\label{sec:regsol} Consider the optimization problem (\ref{eq:model1}). The coefficients $\coesh$ and $\coesho$ are complex, while all the other quantities, such as $A_\ell, \Alo$, $\beta_\ell$ and $\alr$, are real. Temporarily we write the real and imaginary parts of $a_{\ell,m}$ explicitly, \[ a_{\ell,m}=x_{\ell,m}+ \imu\hspace{0.3mm} y_{\ell,m},\quad \mbox{and define} \quad \nabla_{a_{\ell,m}} =\frac{\partial}{\partial x_{\ell,m}}+ \imu\hspace{0.3mm} \frac{\partial}{\partial y_{\ell,m}}. \] Then for all degrees $\ell$ for which $A_\ell$ is positive the definition~(\ref{eq:Aell}) gives \begin{equation}\label{eq:normderiv} \nabla_{\coesh} A_\ell = A_\ell^{-1} \; \coesh, \qquad m = -\ell,\ldots,\ell, \end{equation} and hence from \eqref{eq:norm} \begin{equation*} \nabla_{\coesh}\norm{\vcoesh}{1,2} = \beta_\ell A_\ell^{-1} \; \coesh, \qquad m = -\ell,\ldots,\ell, \end{equation*} It follows that the necessary and sufficient conditions for a local/global minimum in \eqref{eq:model1} are \begin{eqnarray} (\coesh - \coesho) + \lambda \beta_\ell A_\ell^{-1} \coesh = 0, & & \quad m = -\ell,\ldots,\ell \qquad \mbox{when } A_\ell > 0, \label{eq:opt1a}\\ \coesh = 0, & & \quad m = -\ell,\ldots,\ell \qquad \mbox{when } A_\ell = 0. \notag \end{eqnarray} For each $\lambda\ge 0$ we define the degree sets \begin{equation}\label{eq:Gamma} \Gamma(\lambda) := \left\{\ell\in\Nz: \frac{\Alo}{\beta_\ell} > \lambda\right\}, \qquad \Gamma^c(\lambda) := \left\{\ell\in\Nz: \frac{\Alo}{\beta_\ell} \leq \lambda\right\}. \end{equation} Note that both $\Gamma(\lambda)$ and $\Gamma^c(\lambda)$ are random sets. For a particular realization of the field and for $\lambda = 0$ the set $\Gamma^c(0)$ consists only of those $\ell$ values for which $\Alo = 0$. For $\lambda \geq \sup_{\ell\in\Nz} \{\Alo/\beta_\ell\}$ the set $\Gamma(\lambda)$ is empty. For degrees $\ell\in\Gamma^c(\lambda)$, all the regularized coefficients are zero. 
For degrees $\ell \in \Gamma(\lambda)$ where $A_\ell > 0$, equation (\ref{eq:opt1a}) shows that the regularized coefficients are given by \begin{equation}\label{eq:Aellr} \coeshr = \alr \coesho \quad \mbox{and hence}\quad \Alr := \left(\sum_{m=-\ell}^\ell|\coeshr|^2\right)^{1/2} = \alr \Alo, \end{equation} where $\Alo$ is given by \eqref{eq:Alo}, and (\ref{eq:opt1a}) gives \begin{equation}\label{eq:alpha} \alr = \frac{\Alr}{\Alr + \lambda \beta_\ell} = \frac{\Alr}{\Alo} = \frac{\Alo - \lambda \beta_\ell}{\Alo} \in (0, 1]. \end{equation} Summing up, $\Alr$ is given by \begin{equation*} \label{eq:Alr} \Alr := \left\{ \begin{array}{cl} \displaystyle \Alo - \lambda \beta_\ell & \quad\mbox{for } \ell \in \Gamma(\lambda),\\ \displaystyle 0 & \quad\mbox{for } \ell \in \Gamma^c(\lambda), \end{array}\right. \end{equation*} and the regularized coefficients are \begin{equation*} \coeshr = \left\{\begin{array}{cl} \alr \coesho & \mbox{for } m = -\ell,\ldots,\ell, \quad \ell \in \Gamma(\lambda),\\ 0 & \mbox{for } m = -\ell,\ldots,\ell, \quad \ell\in\Gamma^c(\lambda). \end{array}\right. \end{equation*} In the vector notation introduced in \eqref{eq:avec}, \begin{eqnarray} & & \norm{\vcoesh^{\mathrm{r}}}{1,2} = \sum_{\ell\in\Gamma(\lambda)} \beta_\ell (\Alo - \lambda\beta_\ell) =\sum_{\ell\in\Gamma(\lambda)}\beta_\ell A_\ell^r, \label{eq:lamnrm}\\[1ex] & & \norm{\vcoesh^{\mathrm{r}} - \vcoesho}{2}^2 = \lambda^2\sum_{\ell\in\Gamma(\lambda)}\beta_\ell^2 +\sum_{\ell\in\Gamma^c(\lambda)} (\Alo)^2. \label{eq:lamssq} \end{eqnarray} The value $\lambda = 0$ gives the solution $\vcoesh^{\mathrm{r}} = \vcoesho$ (noting that in \eqref{eq:lamssq} the first term vanishes for $\lambda=0$ while in the second sum each term is zero). For $\lambda > \sup_{\ell\in \Nz} \{\Alo/\beta_\ell\}$ the solution is $\vcoesh^{\mathrm{r}} = 0$. We have established the solution to problem \eqref{eq:model1}, as summarized by the following proposition. 
\begin{proposition}\label{prop:reg.sol} Let $\coesho$, $m=-\ell,\dots,\ell$, $\ell\in\Nz$, be the Fourier coefficients for a random field $\RFo$ on $\sph{2}$. For a positive sequence $\{\beta_{\ell}\}_{\ell\in\Nz}$ and a positive regularization parameter $\lambda$, the solution of the regularization problem \eqref{eq:model1} is, in the $\Lppsph{2}{2}$ sense, \begin{equation*} \RFr = \sum_{\ell=0}^\infty \sum_{m=-\ell}^{\ell} \coeshr \shY \end{equation*} with regularized coefficients, for $m=-\ell,\dots,\ell$ and $\ell\in\Nz$, \begin{equation*} \coeshr := \left\{\begin{array}{ll} \displaystyle \left(1 - \frac{\lambda\beta_{\ell}}{\Alo}\right) \coesho, & \Alo>\lambda\beta_{\ell},\\ 0, & \Alo\le\lambda\beta_{\ell}, \end{array}\right. \quad \Alr = \left\{\begin{array}{ll} \displaystyle \left(1 - \frac{\lambda\beta_{\ell}}{\Alo}\right) \Alo, & \Alo>\lambda\beta_{\ell},\\ 0, & \Alo\le\lambda\beta_{\ell}, \end{array}\right. \end{equation*} where $\Alo$ is given by \eqref{eq:Alo}, and $A_\ell^r$ by \eqref{eq:Aellr}. \end{proposition} \section{Regularization preserves strong isotropy}\label{sec:iso} Marinucci and Peccati \cite[Lemma 6.3]{MaPe2011} proved that the Fourier coefficients of a strongly isotropic random field have the same law under any rotation of the coordinate axes, in a sense to be made precise in the first part of the following theorem. In the following theorem we prove that the converse is also true. In the theorem, $D^\ell(\rho)$, for a given $\ell\ge 0$ and a given rotation $\rho\in \RotGr[3]$, is the $(2\ell+1)\times (2\ell+1)$ Wigner matrix, which has the property \[ \sum_{m'=-\ell}^\ell D^\ell(\rho)_{m',m} \shY[\ell,m'](\PT{x}) =\shY( \rho^{-1} \PT{x}),\quad m=-\ell, \ldots, \ell. \] The Wigner matrices form (irreducible) $(2\ell+1)$-dimensional representations of the rotation group $\RotGr[3]$, in the sense that (as can easily be verified) \[ D^\ell(\rho_1)D^\ell(\rho_2)=D^\ell(\rho_1\rho_2), \quad \rho_1,\rho_2 \in \RotGr[3]. 
\] \begin{theorem} \label{lem:aell isotropy} Let $\RF$ be a real, square-integrable random field on $\sph{2}$, with the spherical harmonic coefficients $\coesh$. Let $\vcoesh_{\ell \cdot}$ denote the corresponding $(2\ell+1)$-dimensional vector, $\vcoesh_{\ell \cdot}:=(\coesh[\ell ,-\ell],\ldots,\coesh[\ell ,\ell])^T$.\\ (i) \cite[Lemma 6.3]{MaPe2011} If $T$ is strongly isotropic then for every rotation $\rho\in\RotGr[3]$, every $k\ge 1$ and every $\ell_1,\ldots,\ell_k\ge 0$, we have \begin{equation}\label{cond aell} (D^{\ell_1}(\rho) \vcoesh_{\ell_1 \cdot}, \ldots, D^{\ell_k}(\rho) \vcoesh_{\ell_k \cdot}) \overset{d}{=} (\vcoesh_{\ell_1 \cdot},\ldots,\vcoesh_{\ell_k \cdot}), \end{equation} where $\overset{d}{=}$ denotes identity in distribution.\\ (ii) If the condition \eqref{cond aell} holds for all $\rho\in \RotGr[3]$, all $k\ge1$ and any $\ell_1,\dots,\ell_{k}\ge0$, then the field $\RF$ is strongly isotropic. \end{theorem} \begin{proof}[Proof of (ii)] Let $\rho$ be a rotation in $\RotGr[3]$ and let $\PT{x}_1,\ldots,\PT{x}_k$ be $k$ arbitrary points on $\sph{2}$. 
Then \begin{align*} & \left( \RF(\rho^{-1}\PT{x}_1),\ldots,\RF(\rho^{-1}\PT{x}_k) \right)\\ & = \left( \sum_{\ell_1=0}^\infty \sum_{m_1=-\ell_1}^{\ell_1} a_{\ell_1,m_1} Y_{\ell_1,m_1}( \rho^{-1} \PT{x}_1),\ldots, \sum_{\ell_k=0}^\infty \sum_{m_k=-\ell_k}^{\ell_k} a_{\ell_k,m_k} Y_{\ell_k,m_k}( \rho^{-1} \PT{x}_k) \right) \\ & = \left( \sum_{\ell_1=0}^\infty \sum_{m_1=-\ell_1}^{\ell_1} a_{\ell_1,m_1} \sum_{m_1'=-\ell_1}^{\ell_1} D^{\ell_1}(\rho)_{m_1',m_1} \shY[\ell_1,m_1'](\PT{x}_1),\ldots, \right.\\ &\left.\qquad\qquad\sum_{\ell_k=0}^\infty \sum_{m_k=-\ell_k}^{\ell_k} a_{\ell_k,m_k} \sum_{m_k'=-\ell_k}^{\ell_k} D^{\ell_k}(\rho)_{m_k',m_k} \shY[\ell_k,m_k'](\PT{x}_k) \right) \\ & = \left( \sum_{\ell_1=0}^\infty \sum_{m_1'=-\ell_1}^{\ell_1} \wtd{\coesh[\ell_1,m_1']} \shY[\ell_1,m_1'](\PT{x}_1),\ldots, \sum_{\ell_k=0}^\infty \sum_{m_k'=-\ell_k}^{\ell_k} \wtd{\coesh[\ell_k,m_k']} \shY[\ell_k,m_k'](\PT{x}_k) \right) \end{align*} where we write \[ \wtd{ \coesh[\ell,m'] } := \sum_{m=-\ell}^\ell D^{\ell}(\rho)_{m',m}\: \coesh. \] Since condition \eqref{cond aell} holds, for all $\ell_1,\ldots,\ell_k\ge 0$ we have \[ (\wtd{\coesh[\ell_1,-\ell_1]},\ldots, \wtd{\coesh[\ell_1,\ell_1]}, \ldots, \wtd{\coesh[\ell_k,-\ell_k]},\ldots,\wtd{\coesh[\ell_k,\ell_k]}) \overset{d}{=} (\coesh[\ell_1,-\ell_1],\ldots, \coesh[\ell_1,\ell_1] \ldots, \coesh[\ell_k,-\ell_k],\ldots, \coesh[\ell_k,\ell_k]). \] Now we use a simple instance of the principle that if a finite set $B$ of random variables has the same joint distribution as another set $B'$, then, for any measurable real-valued function $f$, $f(B)$ will have the same joint distribution as $f(B')$. 
Thus, \begin{align*} \left( \RF(\rho^{-1}\PT{x}_1),\ldots,\RF(\rho^{-1}\PT{x}_k) \right) & \overset{d}{=} \left( \sum_{\ell_1=0}^\infty \sum_{m_1=-\ell_1}^{\ell_1} \coesh[\ell_1,m_1] \shY[\ell_1,m_1](\PT{x}_1),\ldots, \sum_{\ell_k=0}^\infty \sum_{m_k=-\ell_k}^{\ell_k} \coesh[\ell_k,m_k] \shY[\ell_k,m_k](\PT{x}_k) \right) \\ & = \left( \RF(\PT{x}_1),\ldots,\RF(\PT{x}_k) \right). \end{align*} In other words, the random field $T$ is strongly isotropic. \end{proof} The following theorem shows that the regularized random field $\RFr$ in \eqref{eq:RF.r} is strongly isotropic if the observed random field $\RFo$ is strongly isotropic. \begin{theorem}\label{thm:isotropy.reg.RF} Let $\RFo$ be a real observed random field on the sphere $\sph{2}$ as in \eqref{eq:RF.o} and let $\RFr$ given by Proposition~\ref{prop:reg.sol} be the correspondingly regularized random field. If $\RFo$ is strongly isotropic then the regularized random field $\RFr$ is also strongly isotropic. \end{theorem} \begin{proof} For an arbitrary realization of the regularized field we have \begin{align}\label{eq:RF.r.a} \RFr(\PT{x}) & = \sum_{\ell=0}^\infty \sum_{m=-\ell}^{\ell} \coeshr \shY(\PT{x})\notag\\[1mm] & = \sum_{\ell=0}^\infty \sum_{m=-\ell}^{\ell} \alr(\RFo)\coesho \shY(\PT{x})\notag,\quad \PT{x}\in\sph{2},\quad \end{align} where the $\alpha_\ell(\RFo)$, for $\ell=0,1,2,\ldots$ given by \eqref{eq:alpha}, are rotationally invariant as a consequence of Corollary~\ref{cor:rot inv T}. Since $\RFo$ is strongly isotropic, from Theorem~\ref{lem:aell isotropy} part (i), for any rotation $\rho \in \RotGr[3]$, every $k \ge 1$ and every $\ell_1,\ldots,\ell_k \ge 0$, we have \[ (D^{\ell_1}(\rho) \vcoesho_{\ell_1 \cdot}, \ldots, D^{\ell_k}(\rho) \vcoesho_{\ell_k \cdot}) \overset{d}{=} (\vcoesho_{\ell_1 \cdot},\ldots,\vcoesho_{\ell_k \cdot}). 
\] It follows from the rotational invariance of the $\alpha_{\ell}$ that \begin{equation}\label{eq:D.a} (\alpha_{\ell_1} D^{\ell_1}(\rho) \vcoesho_{\ell_1 \cdot},\ldots, \alpha_{\ell_k} D^{\ell_k}(\rho) \vcoesho_{\ell_k \cdot}) \overset{d}{=}(\alpha_{\ell_1} \vcoesho_{\ell_1 \cdot}, \ldots, \alpha_{\ell_k} \vcoesho_{\ell_k \cdot}). \end{equation} The equality in \eqref{eq:D.a} is equivalent to \[ (D^{\ell_1}(\rho) \vcoeshr_{\ell_1 \cdot},\ldots, D^{\ell_k}(\rho) \vcoeshr_{\ell_k \cdot}) \overset{d}{=} (\vcoeshr_{\ell_1 \cdot},\ldots, \vcoeshr_{\ell_k \cdot}), \] for any rotation $\rho$, every $k \ge 1$ and every $\ell_1,\ldots,\ell_k \ge 0$. So, by Theorem~\ref{lem:aell isotropy} part (ii), the field $\RFr$ is strongly isotropic. \end{proof} The above theorem and Proposition~\ref{pro:GaussF} imply the following corollary. \begin{corollary} The regularized random field $\RFr$ is strongly isotropic if the observed random field $\RFo$ is Gaussian and 2-weakly isotropic. \end{corollary} \section{Approximation error of the regularized solution}\label{sec:err} This section estimates the approximation error of the sparsely regularized random field from the observed random field, and gives one choice for the regularization parameter $\lambda$. Let $\{\coesho|\ell\in\Nz,\:m=-\ell,\dots,\ell\}$ and $\{\coeshr|\ell\in\Nz,\:m=-\ell,\dots,\ell\}$ be the Fourier coefficients for an observed random field $\RFo$ and the regularized field $\RFr$ on $\sph{2}$ respectively. \begin{lemma} Let $\RFo$ be a random field in $\Lppsph{2}{2}$. For any $\lambda>0$ and any positive sequence $\{\beta_{\ell}\}_{\ell=0}^{\infty}$, let $\RFr$ be the regularized solution to the regularization problem (\ref{eq:model1}) with regularization parameter $\lambda$. Then $\RFr$ is in $\Lppsph{2}{2}$. \end{lemma} \begin{proof} By \eqref{eq:alpha}, $0<\alpha_{\ell}\le1$ for $\ell\in\Gamma(\lambda)$. 
We now define $\alpha_{\ell}=0$ for $\ell\in \Gamma^{c}(\lambda)$, so that $\alpha_{\ell}\in[0,1]$ for all $\ell\in\Nz$. Since $\RFo$ is in $\Lppsph{2}{2}$, by Parseval's identity and Fubini's theorem, \begin{align*} \norm{\RFr}{\Lppsph{2}{2}}^{2}=\expect{\norm{\RFr}{\Lp{2}{2}}^{2}} &=\expect{\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell} |\coeshr|^2} =\expect{\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell} |\alpha_{\ell}\coesho|^2}\\ &\le\expect{\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell} |\coesho|^2} =\expect{\norm{\RFo}{\Lp{2}{2}}^{2}} =\norm{\RFo}{\Lppsph{2}{2}}^{2}<\infty. \end{align*} Thus, $\RFr$ is in $\Lppsph{2}{2}$. \end{proof} The following theorem shows that the $\Lppsph{2}{2}$ error of the regularized solution can be arbitrarily small with an appropriate regularization parameter $\lambda$. \begin{theorem}\label{thm:L2err.RFr} Let $\RFo$ be a random field in $\Lppsph{2}{2}$. For any $\eps>0$ and any positive sequence $\{\beta_{\ell}\}_{\ell=0}^{\infty}$, let $\RFr$ be the regularized field solving the regularization problem (\ref{eq:model1}) with regularization parameter satisfying $0\le\lambda<\frac{\eps}{2\sqrt{\sum_{\ell=0}^{\ell^{*}}\beta_{\ell}^{2}}}$, where $\ell^{*}$ is the smallest integer such that $\sum_{\ell>\ell^{*}} \expect{(\Alo)^2} \le \eps^{2}/4$, with $\Alo$ given by \eqref{eq:Alo}. Then, \begin{equation}\label{eq:Lppsph2.err} \norm{\RFo - \RFr}{\Lppsph{2}{2}} < \eps. \end{equation} \end{theorem} \begin{remark} The integer $\ell^{*}$ in the theorem exists as the series $\sum_{\ell=0}^{\infty}\expect{(\Alo)^2}=\norm{\RFo}{\Lppsph{2}{2}}^{2}$ is convergent. 
\end{remark} \begin{proof} Using Fubini's theorem and the degree sets defined in \eqref{eq:Gamma}, we split the squared $\Lppsph{2}{2}$ error of the regularized field $\RFr$ as \begin{align}\label{eq:I0} \norm{\RFo - \RFr}{\Lppsph{2}{2}}^{2} &= \expect{\norm{\RFo - \RFr}{\Lp{2}{2}}^{2}}\\ &= \expect{\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell} \bigl|\coesho - \coeshr\bigr|^2}\notag\\ &=\expect{\sum_{\ell \in \Gamma^c(\lambda)} \sum_{m=-\ell}^{\ell} |\coesho|^2 + \sum_{\ell \in \Gamma(\lambda)} \sum_{m=-\ell}^{\ell} \bigl|(1-\alpha_{\ell}) \coesho\bigr|^2}\notag\\ &=\expect{\sum_{\ell \in \Gamma^c(\lambda)} (\Alo)^2} + \expect{\sum_{\ell \in \Gamma(\lambda)}\bigl|(1-\alpha_{\ell}) \Alo\bigr|^2},\notag \end{align} where the second equality is by Parseval's identity, the third equality uses equation \eqref{eq:Aellr} and the fourth equality uses \eqref{eq:Alo}. Since $\RFo$ is in $\Lppsph{2}{2}$, \begin{align*} \sum_{\ell=0}^{\infty} \expect{(\Alo)^2}=\norm{\RFo}{\Lppsph{2}{2}}^{2}<\infty, \end{align*} thus there exists the smallest integer $\ell^{*}$ such that \begin{equation*} \expect{\sum_{\ell>\ell^{*}}(\Alo)^2}=\sum_{\ell>\ell^{*}}\expect{(\Alo)^2} \le \frac{\eps^{2}}{4}. 
\end{equation*} This shows that the first term of the right-hand side of \eqref{eq:I0} is bounded above by \begin{align} \expect{\sum_{\ell \in \Gamma^c(\lambda)}(\Alo)^2} &= \expect{\sum_{\ell \le \ell^{*}; \ell \in \Gamma^c(\lambda)} (\Alo)^2 + \sum_{\ell> \ell^{*}; \ell \in \Gamma^c(\lambda)} (\Alo)^{2}} \notag\\ &\le \expect{\sum_{\ell \le \ell^{*}; \ell \in \Gamma^c(\lambda)} (\Alo)^2} + \expect{\sum_{\ell> \ell^{*}} (\Alo)^{2}} \notag\\ &\le \expect{\sum_{\ell\le\ell^{*}; \ell \in \Gamma^c(\lambda)} (\Alo)^2} + \frac{\eps^{2}}{4}.\label{eq:I1} \end{align} For the first term of the right-hand side of \eqref{eq:I1}, we have $\Alo\le \lambda\beta_{\ell}$, and hence \begin{align}\label{eq:I1.1} \expect{\sum_{\ell\le\ell^{*};\ell \in \Gamma^c(\lambda)} (\Alo)^{2}} \le \expect{\sum_{\ell\le\ell^{*};\ell \in \Gamma^c(\lambda)} (\lambda\beta_{\ell})^{2}} &\le \lambda^2 \sum_{\ell=0}^{\ell^{*}} \beta^2_\ell < \frac{\eps^{2}}{4}, \end{align} where we used the condition \begin{equation*} \lambda <\frac{\eps}{2\sqrt{\sum_{\ell=0}^{\ell^{*}}\beta_{\ell}^{2}}}. \end{equation*} We now estimate the second term of the right-hand side of \eqref{eq:I0}. By \eqref{eq:alpha} for $\ell\in\Gamma(\lambda)$ we have $1-\alpha_{\ell}=\lambda\beta_{\ell}/\Alo \le 1$, thus \begin{align*} \expect{\sum_{\ell \in \Gamma(\lambda)}\bigl| (1-\alpha_{\ell}) \Alo\bigr|^{2}} &\le \expect{\sum_{\ell\le\ell^{*};\ell\in\Gamma(\lambda)}(\lambda\beta_{\ell})^{2} + \sum_{\ell>\ell^{*};\ell\in\Gamma(\lambda)}(\Alo)^{2}}\notag\\ &\le \sum_{\ell=0}^{\ell^{*}}\lambda^{2}\beta_{\ell}^{2} + \expect{\sum_{\ell>\ell^{*}}(\Alo)^2} <\frac{\eps^{2}}{4} + \frac{\eps^{2}}{4} = \frac{\eps^{2}}{2}. \end{align*} This with \eqref{eq:I1.1}, \eqref{eq:I1} and \eqref{eq:I0} gives \eqref{eq:Lppsph2.err}. \end{proof} \section{Scaling to preserve the $L_{2}$ norm}\label{sec:scaling} The sparse regularization leads to a reduction of the $L_2$-norm of the regularized field from that of the observed field. 
In this section, we scale the regularized field so that the $L_{2}$-norm of the resulting field is preserved. By \eqref{eq:Aell} and Parseval's identity, \begin{equation*} \begin{array}{ll} \norm{\RFo}{\Lp{2}{2}}^{2}&=\displaystyle\sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell}|\coesho|^{2}=\sum_{\ell=0}^{\infty}(\Alo)^{2},\\[5mm] \norm{\RFr}{\Lp{2}{2}}^{2}&=\displaystyle\sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell}|\coeshr|^{2}=\sum_{\ell=0}^{\infty}(\Alr)^{2}. \end{array} \end{equation*} For each realization $\RFo(\omega)$, $\omega\in\probSp$ of an observed field $\RFo$, we define a new random variable, the \emph{scaling (factor) for the $L_{2}$ norm}, by \begin{equation}\label{eq:scalf} \scalf:=\scalf(\omega):=\scalf(\RFo(\omega),\RFr(\omega)):=\frac{\norm{\RFo(\omega)}{\Lp{2}{2}}}{\norm{\RFr(\omega)}{\Lp{2}{2}}}=\sqrt{\sum_{\ell=0}^{\infty}(\Alo)^{2}/\sum_{\ell=0}^{\infty}(\Alr)^{2}}. \end{equation} Then, for the same realization, we scale up the regularized field $\RFr$ by multiplying by the factor $\scalf$ to obtain \begin{equation*} \widetilde{\RFr} := \scalf\: \RFr. \end{equation*} We say the resulting field $\widetilde{\RFr}$ is the \emph{scaled regularized field} of $\RFo$ for the parameter choices $\lambda$ and $\{\beta_{\ell}\}_{\ell=0}^{\infty}$. \section{Numerical experiments}\label{sec:num} In this section, we use cosmic microwave background (CMB) data on $\sph{2}$, see for example \cite{Planck2016I}, to illustrate the regularization algorithm. \subsection{CMB data}\label{sec:CMBdata} The CMB data giving the sky temperature of cosmic microwave background are available on $\sph{2}$ at HEALPix points (Hierarchical Equal Area isoLatitude Pixelation) \footnote{\url{http://healpix.sourceforge.net}} \cite{Gorski_etal2005}. These points provide an equal area partition of $\sph{2}$ and are equally spaced on rings of constant latitude. This enables the use of fast Fourier transform (FFT) techniques for spherical harmonics. 
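The HEALPix parameterization fixes the number of points as $N_{\rm pix}=12N_{\rm side}^{2}$. As a quick check of the resolution figures quoted below (the variable names in this trivial sketch are ours):

```python
# HEALPix resolution arithmetic (illustrative check only)
N_side = 2048
N_pix = 12 * N_side**2   # number of HEALPix points on the sphere: 50,331,648
L = 4000
n_coeff = (L + 1)**2     # number of coefficients a_{l,m} up to degree L: 16,008,001
```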
In the experiments, we use the CMB map with $N_{\rm side} = 2048$, giving $N_{\rm pix}=12\times 2048^2=50,331,648$ HEALPix points, see \cite{Planck2016IX}, as computed by SMICA \cite{CaLeDeBePa2008}, a component separation method for CMB data processing, see Figure~\ref{fig:CMB_Original}. In this map the mean $a^{\rm o}_{0,0}$ and first moments $a^{\rm o}_{1,m}$, for $m=-1,0,1$ are set to zero. A CMB map can be modelled as a realization of a strongly isotropic random field $\cmbRF$ on $\sph{2}$. \begin{figure} \caption{The CMB data with $N_{\rm side} = 2048$ as computed by SMICA.} \label{fig:CMB_Original} \end{figure} \subsection{Analysis of the CMB data} The Python HEALPy package \cite{Gorski_etal2005} was used to calculate the Fourier coefficients $\coesho$ of the observed field, using an equal weight quadrature rule at the HEALPix points. This instance of CMB data is band-limited with maximum degree $L=4,000$, thus \begin{equation*} \RFo = \tRFo =\sum_{\ell=0}^{\trdeg}\sum_{m=-\ell}^{\ell}\coesho\shY. \end{equation*} The observed $\Alo$ given by \eqref{eq:Alo} for $\ell=0,\dots,L$ are shown on a logarithmic scale in Figure~\ref{fig:CMB_Alo} for degree $\ell$ up to $4,000$. Once $\lambda$ and $\beta_\ell$ are chosen we easily calculate $a_{\ell,m}^r$ and $A_\ell^r$ using Proposition \ref{prop:reg.sol}, and so obtain the regularized field \begin{equation*} \RFr = \tRFr =\sum_{\ell=0}^{\trdeg}\sum_{m=-\ell}^{\ell}\coeshr\shY, \end{equation*} again with the use of the HEALPy package. \begin{figure} \caption{The observed field $\Alo$.} \label{fig:CMB_Alo} \end{figure} \subsection{Choosing the degree scaling parameters $\beta_\ell$}\label{sec:beta} The degree scaling parameters $\beta_{\ell}$ can be chosen to reflect the decay of the angular power spectrum of the observed data. For the CMB data in Figure~\ref{fig:CMB_Alo} there is remarkably little decay in $\Alo$ for degrees $\ell$ between $2,000$ and $4,000$, so we choose $\beta_{\ell}=1$ for $\ell=0,\dots,L$. 
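At the level of the quantities $\Alo$, the computation of $\coeshr$ via Proposition~\ref{prop:reg.sol} amounts to a soft thresholding across degrees: $\Alr=\max(\Alo-\lambda\beta_{\ell},0)$, with every coefficient of degree $\ell$ scaled by $\alpha_{\ell}=\Alr/\Alo$. A minimal NumPy sketch, using a toy decaying spectrum in place of the CMB $\Alo$ (names and values are ours, purely illustrative):

```python
import numpy as np

def regularize_degrees(A_o, beta, lam):
    """Degree-wise soft thresholding: A_l^r = max(A_l^o - lam*beta_l, 0);
    every coefficient a_{l,m}^o of degree l is scaled by alpha_l = A_l^r / A_l^o."""
    A_r = np.maximum(A_o - lam * beta, 0.0)
    alpha = np.divide(A_r, A_o, out=np.zeros_like(A_o), where=A_o > 0)
    return A_r, alpha

# toy spectrum A_l^o = 1/(l+1) with flat weights beta_l = 1
A_o = 1.0 / (1.0 + np.arange(11))
beta = np.ones(11)
A_r, alpha = regularize_degrees(A_o, beta, lam=0.2)
# degrees with A_l^o <= lam*beta_l (here l >= 4) are annihilated entirely
```

Annihilating a degree $\ell$ zeroes all $2\ell+1$ of its coefficients at once, which is what produces the high sparsity reported below.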
Note that, if the true data correspond to a field that is not band-limited but has finite $L_2(\sph{2})$ norm, then $\Alo$ must eventually decay, and decaying $\beta_{\ell}$ would then be appropriate for $\ell>L$. \subsection{Choosing the regularization parameter $\lambda$}\label{sec:eff} Now we turn to the choice of the regularization parameter $\lambda$. We recall from Proposition~\ref{prop:reg.sol} that $\Alr/\Alo$ depends directly on the ratio $\Alo/(\lambda \beta_\ell)$, and that $\Alr = 0$ if the latter ratio is $\le 1$, or, since we have chosen $\beta_\ell=1$, if $\Alo \le \lambda$. It is therefore clear from Figure~\ref{fig:CMB_Alo} that the sparsity (i.e. the percentage of the coefficients $\coeshr$ that are zero) depends sensitively on the choice of $\lambda$. In Figure~\ref{fig:CMBpowspec} we illustrate the effect of two choices of $\lambda$ on the computed values of $\Alr$. In the left panel of Figure~\ref{fig:CMBpowspec} the choice is $\lambda = 1.05\times 10^{-6}$, while in the right panel the value of $\lambda$ is $9.75\times 10^{-7}$, about $7\%$ smaller. In the right panel, the sparsity is less than $10\%$, whereas in the left panel it is $72.1\%$; for the larger value of $\lambda$ this means that, of the original coefficients (more than $16$ million of them), only about $4.5$ million remain non-zero. \begin{figure} \caption{The regularized field $\Alr$ with $\beta_{\ell}=1$, and with $\lambda=1.05\times10^{-6}$ (left graph) and $\lambda=9.75\times10^{-7}$ (right graph).} \label{fig:CMBpowspec} \end{figure} \subsection{Efficient frontier} A more systematic approach to choosing the regularization parameter $\lambda$ is to make use of the Pareto efficient frontier \cite{OsbPT2000a,DauFL2008,vdBerFri2009}. The efficient frontier of the multi-objective problem with two objectives, $\| \vcoesh \|_{1,2}$ and $\| \vcoesh - \vcoesho\|_2^2$, is the graph obtained by plotting the optimal values of these two quantities on the $y$ and $x$ axes respectively as $\lambda$ varies.
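Both objectives reduce to sums over degrees (per degree, the discrepancy contributes $\min(\Alo,\lambda\beta_\ell)^2$ and the norm contributes $\beta_\ell\Alr$), so the frontier is cheap to trace once the $\Alo$ are known. A sketch on a toy spectrum (ours, purely illustrative):

```python
import numpy as np

def frontier_point(A_o, beta, lam):
    """One frontier point: discrepancy ||a^r - a^o||_2^2 and norm ||a^r||_{1,2},
    both computed from the degree sums after soft thresholding."""
    A_r = np.maximum(A_o - lam * beta, 0.0)
    return float(np.sum((A_o - A_r) ** 2)), float(np.sum(beta * A_r))

# toy decaying spectrum in place of the CMB A_l^o
A_o = 1.0 / (1.0 + np.arange(50))
beta = np.ones(50)
pts = [frontier_point(A_o, beta, lam) for lam in np.linspace(0.0, 1.1, 200)]
xs, ys = np.array(pts).T
# moving right along the frontier, the discrepancy grows and the norm shrinks
```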
As illustrated in the left panel of Figure~\ref{fig:CMBpareto} for the CMB data, the efficient frontier is in this case a continuous piecewise quadratic curve, with knots where the number of degrees $\ell$ with $\Alo/\beta_\ell > \lambda$ changes, that is, where the degree set $\Gamma(\lambda)$ changes. In the figure, $\lambda$ increases from left (where $\lambda=0$) to right, where $\vcoeshr$ vanishes at $\lambda= 5.89\times10^{-5}$. The point on the graph for $\lambda=1.48 \times 10^{-5}$ is shown in the figure. At this value of $\lambda$ the discrepancy $\|\vcoeshr-\vcoesho\|_2^2$ has the value $10^{-7}$, while $\|\vcoeshr \|_{1,2}= \sum_{\ell=0}^\infty \beta_\ell \Alr$ has the value $\kappa=1.21 \times 10^{-3}$. \begin{figure} \caption{Efficient frontiers of $\norm{\vcoeshr}{1,2}$ (left) and $\norm{\vcoeshr}{0,2,L}$ (right) against discrepancy $\norm{\vcoeshr - \vcoesho}{2}^2$ on the CMB data for $\beta_{\ell}=1$ and $L=4,000$.} \label{fig:CMBpareto} \end{figure} The idea of the efficient frontier is that each point on the frontier corresponds to an optimal solution for some $\lambda$, while points above the frontier are feasible but not optimal. At points on the frontier, one objective can be improved only at the expense of making the other worse. One can specify the value of $\lambda$, or the discrete discrepancy $\norm{\vcoeshr -\vcoesho}{2}^{2}$ (equivalent to specifying $\sigma$ in \eqref{eq:model2}), or the norm $\norm{\vcoeshr}{1,2}$ (equivalent to specifying $\kappa$ in \eqref{eq:model3}). The appendix shows how to determine the value of the regularization parameter $\lambda$ corresponding to a given $\sigma$ or $\kappa$ for models (\ref{eq:model2}) and (\ref{eq:model3}).
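Since the discrepancy $\sum_{\ell\in\Gamma^c(\lambda)}(\Alo)^2+\sum_{\ell\in\Gamma(\lambda)}(\lambda\beta_\ell)^2$ is nondecreasing in $\lambda$, the $\lambda$ matching a prescribed discrepancy can also be recovered by simple bisection, as an alternative to the knot-sorting procedure of the appendix. A sketch (ours; the toy spectrum is hypothetical):

```python
import numpy as np

def discrepancy(A_o, beta, lam):
    """||a^r - a^o||_2^2 = sum_{Gamma^c}(A_l^o)^2 + sum_{Gamma}(lam*beta_l)^2,
    written compactly as a sum of min(A_l^o, lam*beta_l)^2."""
    return float(np.sum(np.minimum(A_o, lam * beta) ** 2))

def lambda_from_sigma(A_o, beta, sigma2, iters=200):
    """Bisect for the lambda whose discrepancy equals sigma2
    (the discrepancy is continuous and nondecreasing in lambda)."""
    lo, hi = 0.0, float(np.max(A_o / beta))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if discrepancy(A_o, beta, mid) < sigma2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

A_o = 1.0 / (1.0 + np.arange(50))
beta = np.ones(50)
lam = lambda_from_sigma(A_o, beta, sigma2=0.05)
```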
In the right panel of Figure~\ref{fig:CMBpareto}, we plot the $\ell_{0}$-norm defined by \begin{equation*} \norm{\vcoeshr}{0,2,L} := \sum_{\ell=0}^L 1_{\{ {\Alr>0}\}} =\# \Gamma(\lambda), \end{equation*} so $\| \vcoeshr \|_{0,2,L}$ counts the number of degrees $\ell=0,\ldots,L$ with at least one non-zero coefficient, against $\norm{\vcoeshr - \vcoesho}{2}^2$, in order to compare sparsity and data fitting more directly. This is a piecewise constant graph with discontinuities at the values of $\lambda$ at which the degree set $\Gamma(\lambda)$ changes. From this graph it is clear that high sparsity (or a small $\ell_0$ norm) implies a large discrepancy of the regularized field. \subsection{Scaling to preserve the $L_2$ norm} \begin{figure} \caption{The (quadratic) curve of discrepancy $\norm{\vcoesho-\gamma\vcoeshr}{2}^{2}$ with respect to norm-scaling factor $\gamma$ for $\beta_{\ell}=1$ for the CMB data with $\lambda=1.05\times10^{-6}$. The circle corresponds to the value of $\gamma$ for which $\|\gamma \vcoeshr \|_2 = \| \vcoesho\|_2$. The star corresponds to the value $\gamma_{\rm opt}$ for which the discrepancy is minimized.} \label{fig:CMBdiscrep} \end{figure} The scaling factor $\gamma$ can be chosen as in \eqref{eq:scalf} so that the $L_{2}$ norms of the observed data and the regularized solution are equal. Figure~\ref{fig:CMBdiscrep} illustrates the relation between the scaling factor $\gamma$ and the discrepancy $\norm{\vcoesho-\gamma\vcoeshr}{2}^{2}$ for the CMB data with $\lambda=1.05\times10^{-6}$, corresponding to the left panel in Figure~\ref{fig:CMBpowspec}. The choice of $\gamma$ in \eqref{eq:scalf} that equates the $L_2$-norms $\|\vcoesho\|_2$ and $\|\gamma \vcoeshr\|_2$ makes the discrepancy $\norm{\vcoesho-\gamma\vcoeshr}{2}^{2}$ close to the optimal choice in the sense of minimizing the discrepancy. The figure also shows that $\gamma=1$ (no scaling) gives a much larger discrepancy.
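Since the discrepancy $\norm{\vcoesho-\gamma\vcoeshr}{2}^{2}$ is quadratic in $\gamma$, its minimizer has the standard least-squares form $\gamma_{\rm opt}=\langle\vcoesho,\vcoeshr\rangle/\norm{\vcoeshr}{2}^{2}$. The sketch below compares $\gamma_{\rm opt}$ with the norm-matching choice \eqref{eq:scalf} and with $\gamma=1$, on synthetic coefficients (ours, not the CMB data):

```python
import numpy as np

rng = np.random.default_rng(0)
a_o = rng.normal(size=500)                              # stand-in observed coefficients
a_r = np.where(np.abs(a_o) > 0.5, 0.8 * a_o, 0.0)       # stand-in regularized coefficients

gamma_norm = np.linalg.norm(a_o) / np.linalg.norm(a_r)  # equates the L2 norms
gamma_opt = a_o @ a_r / (a_r @ a_r)                     # minimizes the discrepancy

def discrepancy(g):
    return float(np.sum((a_o - g * a_r) ** 2))
```

Here $\gamma_{\rm opt}=1.25$ up to rounding (since $a^r=0.8\,a^o$ on its support), the norm-matching $\gamma$ lies close to it, and $\gamma=1$ is noticeably worse, mirroring Figure~\ref{fig:CMBdiscrep}.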
\subsection{Errors and sparsity for the regularized CMB field}\label{sec:CMB} Table~\ref{tab:no.l2nrm.coeff.CMB} gives errors and sparsity results for the regularized CMB field. Included for comparison is the Fourier reconstruction of degree $L=4,000$ (with $(L+1)^{2}$ coefficients $\coesh$, $m=-\ell,\dots,\ell$, $\ell=0,\dots,L$), for which the errors should be zero in the absence of rounding errors. For the regularized field the computations use $\beta_{\ell}=1$ for $\ell=0,\dots,L$, and two values of the regularization parameter, namely $\lambda=1.05\times10^{-6}$ and $\lambda=9.75\times10^{-7}$ as used in Figure~\ref{fig:CMBpowspec}. The errors are given both for the unscaled case (i.e. with $\gamma=1$) and for the scaled case, with $\gamma$ chosen as in \eqref{eq:scalf} to equate the $L_{2}$-norms of the observed and regularized fields. The sparsity is the percentage of the regularized coefficients $\coeshr$ which are zero. The $L_{2}$ errors are estimated by equal weight quadrature at the HEALPix points, while the $L_{\infty}$ errors are estimated by the maximal absolute error at the HEALPix points.
\begin{table}[htb] \centering \begin{minipage}{0.98\textwidth} \centering \scriptsize \begin{tabular}{l*{6}{c}} \toprule & \multirow{2}{*}{Fourier} & \multicolumn{2}{c}{$\lambda=1.05e{-6}$} & &\multicolumn{2}{c}{$\lambda=9.75e{-7}$} \\ \cline{3-4}\cline{6-7} & & Unscaled regularized & Scaled regularized & & Unscaled regularized & Scaled regularized \\ \midrule Sparsity & $0\%$ & $72.1\%$ & $72.1\%$ && $9.44\%$ & $9.44\%$ \\ Scaling $\scalf$ & - & $1$ & $1.0953$ && $1$ & $1.0888$ \\ $L_2$ errors & $8.02e{-12}$ & $6.48e{-05}$ & $5.82e{-05}$ && $6.16e{-05}$ & $5.54e{-05}$ \\ $L_{\infty}$ errors & $6.34e{-11}$ & $1.52e{-03}$ & $1.50e{-03}$ && $1.47e{-03}$ & $1.44e{-03}$ \\ \bottomrule \end{tabular} \end{minipage} \begin{minipage}{0.9\textwidth} \caption{Sparsity and estimated $L_2$ and $L_{\infty}$ errors for the regularized fields, both scaled and unscaled, from the CMB data using $\beta_{\ell}=1$, degree $L=4,000$, and two values of $\lambda$.} \label{tab:no.l2nrm.coeff.CMB} \end{minipage} \end{table} Figures~\ref{fig:CMB.scale.beta1_lam1.05e-06} and \ref{fig:CMB.scale.beta1_lam1.05e-06.err} show respectively the realization of the scaled regularized field and its pointwise errors with $\beta_{\ell}=1$, $\lambda=1.05\times10^{-6}$ and $\gamma\approx1.0953$, the first parameter choice in Table \ref{tab:no.l2nrm.coeff.CMB}. This regularized field uses only $27.90\%$ of the coefficients in the Fourier approximation. Figures~\ref{fig:CMB.scale.beta1_lam9.75e-07} and \ref{fig:CMB.scale.beta1_lam9.75e-07.err} show the realization of the scaled regularized field and its errors for the second parameter choice in Table \ref{tab:no.l2nrm.coeff.CMB}, which uses $90.56\%$ of the coefficients.
\begin{figure} \caption{\scriptsize (a) and (b) show the realizations of the scaled regularized random field for $\beta_{\ell}=1$ with $\lambda=1.05\times10^{-6}$ and $\lambda=9.75\times10^{-7}$ from the original CMB field, truncation degree $L=4,000$; (c) and (d) show the pointwise errors of (a) and (b) respectively, with the range of the color map one tenth of that in (a) and (b).} \label{fig:CMB.scale.beta1_lam1.05e-06} \label{fig:CMB.scale.beta1_lam9.75e-07} \label{fig:CMB.scale.beta1_lam1.05e-06.err} \label{fig:CMB.scale.beta1_lam9.75e-07.err} \label{fig:CMB.reg.beta1} \end{figure} The errors in Figure \ref{fig:CMB.reg.beta1} should be considered in relation to the $\Lp{2}{2}$ and $\Lp{\infty}{2}$ norms of the original CMB field, which are $3.84e{-04}$ and $1.86e{-03}$ respectively. (The latter number implies that there are points of the original map corresponding to Figure \ref{fig:CMB_Original} with values that exceed the limits of the color map by a factor of nearly 4. However, points exceeding the limits of the color map are relatively rare.) We can observe that the magnitudes of the pointwise errors in Figure~\ref{fig:CMB.reg.beta1} are mostly an order of magnitude smaller than the magnitudes of the corresponding fields. The largest errors occur near the equator, where the original CMB map was masked and then inpainted using other parts of the data, see \cite{Planck2016IX}. Outside the region near the equator the errors in Figures~\ref{fig:CMB.scale.beta1_lam1.05e-06.err} and \ref{fig:CMB.scale.beta1_lam9.75e-07.err} vary from place to place but are on the whole evenly distributed. Table~\ref{tab:no.l2nrm.coeff.CMB} and Figure~\ref{fig:CMB.reg.beta1} show that an appropriate choice of the regularization parameter $\lambda$ makes the errors of the scaled regularized field satisfactorily small. Moreover, the larger of the two choices of $\lambda$ significantly increases the sparsity while only slightly increasing the approximation error.
\appendix \section{Relation to constrained models} Consider, for simplicity, the case when $\RF$ is band-limited with maximum degree $L$. When their constraints are active, the two constrained models (\ref{eq:model2}) and (\ref{eq:model3}) are equivalent to the regularized model (\ref{eq:model1}). This equivalence is detailed below, where we also show how to calculate the value of the regularization parameter corresponding to an active constraint, see (\ref{eq:siglam}) and (\ref{eq:kaplam}) below. Consider the optimization problem (\ref{eq:model2}) with the data fitting constraint $\| \vcoesh - \vcoesho \|_2^2 \leq \sigma^2$. Introducing a Lagrange multiplier $\mu\in\mathbb{R}$ for the constraint, the optimality conditions are, using (\ref{eq:normderiv}), \begin{equation} \begin{array}{ll} \beta_\ell A_\ell^{-1} \coesh + 2 \mu (a_{\ell, m} - \coesho) = 0, \quad m = -\ell,\ldots,\ell & \mbox{when } A_\ell > 0, \\ \coesh = 0, \quad m = -\ell,\ldots,\ell & \mbox{when } A_\ell = 0, \\ \norm{\vcoesh - \vcoesho}{2}^2 - \sigma^2 \leq 0, & \mbox{primal feasibility}, \\ \mu \geq 0, & \mbox{dual feasibility}, \\ \mu \left(\norm{\vcoesh - \vcoesho}{2}^2 - \sigma^2\right) = 0, & \mbox{complementarity}. \end{array} \label{eq:opt2} \end{equation} If $\sigma^2 \geq \| \vcoesho \|_2^2$, the unique solution is $\vcoesh = \mathbf{0}$, that is $\coesh = 0, \; m = -\ell,\ldots,\ell, \; \ell=0,\dots,L$. On the other hand, if $\sigma = 0$, the unique solution is $\vcoesh = \vcoesho$, so $\coesh = \coesho, \; m = -\ell,\ldots,\ell, \; \ell=0,\dots,L$. Comparing the optimality conditions (\ref{eq:opt1a}) and the first equation of (\ref{eq:opt2}) shows that, for $\mu > 0$, \[ \lambda = \frac{1}{2\mu}.
\] In terms of $\mu \geq 0$ we define the degree sets, similarly to \eqref{eq:Gamma}, by \begin{equation*} \widetilde{\Gamma}(\mu) := \left\{0 \leq \ell \leq L: \frac{\beta_\ell}{2 \Alo} < \mu\right\} =\Gamma(\lambda),\qquad \widetilde{\Gamma}^{c}(\mu) := \left\{0\leq\ell\leq L: \frac{\beta_\ell}{2 \Alo} \geq \mu\right\} = \Gamma^{c}(\lambda). \end{equation*} For $\mu = 0$, $\widetilde{\Gamma}(0) = \emptyset$, while for $\mu > \max_{0\le \ell\le L} \frac{\beta_\ell}{2 \Alo}$, the index set $\widetilde{\Gamma}^{c}(\mu) = \emptyset$. The optimality condition (\ref{eq:opt2}) gives, for all $\ell\in\widetilde{\Gamma}(\mu)$ with $A_\ell > 0$, \begin{equation*} \coesh = \alr \coesho, \quad m = -\ell,\ldots,\ell, \quad \mbox{where} \;\; \alr = \frac{A_\ell}{A_\ell + \beta_\ell/(2\mu)}. \end{equation*} As before, $0 < \alr < 1$, $A_\ell = \alr \Alo$, so \begin{equation*} A_{\ell} = \left\{ \begin{array}{cl} \displaystyle \Alo - \beta_\ell/(2\mu) & \quad\mbox{for } \ell\in \widetilde{\Gamma}(\mu), \\[1ex] 0 & \quad\mbox{for } \ell\in \widetilde{\Gamma}^{c}(\mu). \end{array}\right. \end{equation*} Given that the constraint is active, the value of $\lambda$ corresponding to $\sigma$ can be found by solving, see (\ref{eq:lamssq}), \begin{equation}\label{eq:siglam} \lambda^2 \sum_{\ell\in\Gamma(\lambda)} \beta_\ell^2 = \sigma^2 - \sum_{\ell\in\Gamma^{c}(\lambda)} (\Alo)^2 . \end{equation} The only issue here is finding the sets $\Gamma(\lambda)$ and $\Gamma^c(\lambda)$ when we start from the model \eqref{eq:model2}. As these sets change only when $\lambda$ equals one of the values $\frac{\Alo}{\beta_\ell}$, this can be done by sorting the values $\frac{\Alo}{\beta_\ell}$, finding the largest value $\lambda' = \frac{\Alo[\ell']}{\beta_{\ell'}}$ such that $\| \vcoesh - \vcoesho \|_2^2 \leq \sigma^2$, and then solving (\ref{eq:siglam}) using $\Gamma(\lambda')$ and $\Gamma^c(\lambda')$. Consider now the LASSO type model (\ref{eq:model3}) with a constraint $\| \vcoesh \|_{1,2}\leq \kappa$.
Introducing a Lagrange multiplier $\nu\in\mathbb{R}$ for the constraint, the optimality conditions are, again using (\ref{eq:normderiv}), \begin{equation} \begin{array}{ll} (\coesh - \coesho) + \nu \beta_\ell A_\ell^{-1} \coesh = 0, \quad m = -\ell,\ldots,\ell & \mbox{when } A_\ell > 0, \\ \coesh = 0, \quad m = -\ell,\ldots,\ell & \mbox{when } A_\ell = 0, \\ \norm{\vcoesh}{1,2} - \kappa \leq 0, & \mbox{primal feasibility}, \\ \nu \geq 0, & \mbox{dual feasibility}, \\ \nu \left( \norm{\vcoesh}{1,2} - \kappa \right) = 0, & \mbox{complementarity}. \end{array} \label{eq:opt3} \end{equation} If $\kappa \geq \norm{\vcoesho}{1,2}$ then the solution is $\vcoesh = \vcoesho$ with $\nu =0$. If $\kappa = 0$ then the solution is $\vcoesh = \mathbf{0}$. The first equation in (\ref{eq:opt3}), for $A_\ell > 0$, gives \begin{equation*} \coesh = \alr \coesho, \quad m = -\ell,\ldots,\ell, \quad \mbox{where} \;\; \alr = \frac{1}{1 + \nu \beta_\ell A_\ell^{-1}} = \frac{A_\ell}{A_\ell + \nu \beta_\ell}. \end{equation*} When the constraint is active and $\nu > 0$, comparing the first equation in (\ref{eq:opt3}) with (\ref{eq:opt1a}) shows that $\lambda = \nu$. Given a value for $\kappa$ with $0 < \kappa < \norm{\vcoesho}{1,2}$, the corresponding value of $\lambda$ satisfies, see (\ref{eq:lamnrm}), \begin{equation}\label{eq:kaplam} \lambda \sum_{\ell\in\Gamma(\lambda)} (\beta_\ell)^2 = \sum_{\ell\in\Gamma(\lambda)} \beta_\ell \Alo - \kappa. \end{equation} Again, the only issue is first determining the set $\Gamma(\lambda)$, defined in (\ref{eq:Gamma}), which can be done by sorting the $\frac{\Alo}{\beta_\ell}$ to find the smallest value $\lambda^* = \frac{\Alo[\ell^*]}{\beta_{\ell^*}}$ such that $\| \vcoesh \|_{1,2} \geq \kappa$, and then solving (\ref{eq:kaplam}) using $\Gamma(\lambda^*)$. \end{document}
\begin{document} \title{Gaussian-state quantum-illumination receivers for target detection} \author{Saikat Guha${}^{1}$, Baris I. Erkmen${}^{2}$} \affiliation{${}^{1}$Disruptive Information Processing Technologies, BBN Technologies, Cambridge, MA 02138 \\ ${}^{2}$California Institute of Technology, Jet Propulsion Laboratory, Pasadena, CA 91109} \begin{abstract} The signal half of an entangled twin-beam, generated using spontaneous parametric downconversion, interrogates a region of space that is suspected of containing a target, and has high loss and high (entanglement-breaking) background noise. A joint measurement is performed on the returned light and the idler beam that was retained at the transmitter. An optimal quantum receiver, whose implementation is not yet known, was shown to achieve $6\,$dB gain in the error-probability exponent relative to that achieved with a single coherent-state (classical) laser transmitter and the optimum receiver. We present two structured optical receivers that achieve up to $3\,$dB gain in the error exponent over that attained with the classical sensor. These are to our knowledge the first designs of quantum-optical sensors for target detection, which can be readily implemented in a proof-of-concept experiment, that appreciably outperform the best classical sensor in the low-signal-brightness, high-loss and high-noise operating regime. \end{abstract} \maketitle A distant region engulfed in bright thermal light, suspected of containing a weakly reflecting target, is interrogated using an optical transmitter. The return light is processed by a receiver to decide whether or not the target is present. 
Recent work \cite{sacchi2005, lloyd2008, tan2008} has shown that in the above scenario, a ``quantum illumination" (QI) transmitter, i.e., one that generates entangled Gaussian-state light via continuous-wave pumped spontaneous parametric downconversion (SPDC), in conjunction with the optimal quantum receiver, substantially outperforms a coherent-state (un-entangled) transmitter and the corresponding optimum-measurement receiver. This advantage accrues despite the loss of entanglement between the target-return and the idler beams due to the high loss and noise in the intervening medium. This is the first example of an entanglement-based performance gain in a bosonic channel where the initial entanglement does not survive the loss and noise in the system. The SPDC transmitter and optimal receiver combination has been shown to yield up to a factor of $4$ (i.e., $6$ dB) gain in the error-probability exponent over a coherent state transmitter and optimal receiver combination, in a highly lossy and noisy scenario \cite{tan2008}. The optimal receiver for the former source corresponds to the Helstrom minimum probability of error (MPE) measurement \cite{Helstrom1976} under two hypotheses -- $H_0$: target absent, and $H_1$: target present. It can be expressed as a projective measurement onto the positive eigenspace of the difference of the joint target-return and idler density operators under the two hypotheses. However, no known structured optical receiver is yet able to attain the full $6\,$dB predicted performance gain. In this paper we present two structured receivers, which, when used in conjunction with the SPDC transmitter, are shown to achieve up to a factor of $2$ error-exponent advantage---i.e., half of the full factor of $4$ predicted by the Helstrom bound---over the optimum-reception classical sensor, in the low signal brightness, high loss and high noise regime. 
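To put these exponent gains in concrete numerical terms, the short sketch below (ours) evaluates the asymptotic error exponents quoted above, using the closed forms stated later in the paper and parameter values from the $N_S \ll 1$, $\kappa \ll 1$, $N_B \gg 1$ regime:

```python
import math

# operating point in the low-brightness, high-loss, high-noise regime
kappa, N_S, N_B = 0.01, 0.01, 20.0

R_Q = kappa * N_S / N_B            # entangled transmitter, optimal receiver (QCB)
R_C = kappa * N_S / (4 * N_B)      # coherent-state transmitter, optimal receiver
R_rcvr = kappa * N_S / (2 * N_B)   # OPA / phase-conjugate receivers

gain_optimal_dB = 10 * math.log10(R_Q / R_C)      # full quantum advantage: ~6 dB
gain_receiver_dB = 10 * math.log10(R_rcvr / R_C)  # achieved by our receivers: ~3 dB
```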
The first receiver uses a low-gain optical parametric amplifier (OPA) and ideal photon counting \cite{guha2009}, whereas the second uses phase-conjugation followed by balanced dual detection. Both receivers attain the same asymptotic error exponent, although the second receiver yields slightly better performance than the first. Both receivers attempt to detect the remnant phase-sensitive cross correlation between the return-idler mode pairs when the target is present~\cite{erkmenshapiro:PCOCT}. Both of our proposed receivers, consisting of separable measurements over $M$ pairs of target-return and idler modes, offer strictly better performance than any classical-state transceiver, and have low-complexity implementations. Apart from the binary-hypothesis target-detection problem considered here, our receivers have been shown to yield considerable performance gains over conventional approaches in other optical sensing and communications applications built on the quantum illumination concept, such as two-way secure communications \cite{Shapiro2009} and standoff one-vs-two-target resolution sensing \cite{Guh2009}. Consider $M$ independent signal-idler mode pairs obtained from SPDC, $\{{\hat a}_S^{(k)}, {\hat a}_I^{(k)}\}$; $1 \le k \le M$. Each $T$-sec-long transmission comprises $M = WT \gg 1$ signal-idler mode pairs, where $W$ is the SPDC source's phase-matching bandwidth. Each mode pair is in an identical entangled two-mode-squeezed state with a Fock-basis representation~\cite{footnote1} \begin{equation} |\psi\rangle_{SI} = \sum_{n=0}^{\infty}\sqrt{\frac{N_S^n}{(N_S+1)^{n+1}}}|n\rangle_S|n\rangle_I, \end{equation} where $N_S$ is the mean photon number in each signal and idler mode.
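The Fock-basis weights of this two-mode squeezed state form a thermal (geometric) distribution, so its normalization and the mean photon number $N_S$ per beam can be checked directly (a quick numerical sketch, ours):

```python
import numpy as np

N_S = 0.01
n = np.arange(200)                   # truncation; the tail is negligible for N_S << 1
p = N_S**n / (N_S + 1.0)**(n + 1)    # p_n = |<n|<n|psi>_SI|^2

total = p.sum()                      # geometric series, sums to 1
mean_photons = (n * p).sum()         # mean photon number per beam, equals N_S
```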
$|\psi\rangle_{SI}$ is a pure maximally-entangled zero-mean Gaussian state with covariance matrix $V^{SI} = \langle[ \begin{array}{cccc} {\hat a}_S & {\hat a}_I & {\hat a}_S^{\dagger} & {\hat a}_I^\dagger \end{array} ]^{T}[ \begin{array}{cccc} {\hat a}_S^{\dagger} & {\hat a}_I^\dagger & {\hat a}_S & {\hat a}_I \end{array} ]\rangle$ given by \begin{equation*} \left [ \begin{smallmatrix} N_S + 1 & 0 & 0 & \sqrt{{N_S(N_S+1)}} \\ 0 & N_S+1 & \sqrt{{N_S(N_S+1)}} & 0 \\ 0 & \sqrt{{N_S(N_S+1)}} & N_S & 0 \\ \sqrt{{N_S(N_S+1)}} & 0 & 0 & N_S \end{smallmatrix} \right ]. \end{equation*} Under hypothesis $H_0$ (no target), the target-return mode ${\hat a_R} = {\hat a_B}$, where ${\hat a_B}$ is in a thermal state with mean photon number $N_B \gg 1$. Under hypothesis $H_1$ (target present), ${\hat a_R} = \sqrt{\kappa}{\hat a_S} + \sqrt{1-\kappa}{\hat a_B}$, where $\kappa \ll 1$, and ${\hat a_B}$ is in a thermal state with mean photon number $N_B/(1-\kappa)$, such that the mean noise photon number is equal under both hypotheses. Under $H_1$, each of the $M$ return-idler mode pairs are in a zero-mean Gaussian state, ${\hat \rho}_{RI}^{(1)}$, with the covariance matrix of each mode, $V^{RI} = \langle [ \begin{array}{cccc} {\hat a}_R & {\hat a}_I & {\hat a}_R^{\dagger} & {\hat a}_I^\dagger \end{array} ]^{T}[ \begin{array}{cccc} {\hat a}_R^{\dagger} & {\hat a}_I^\dagger & {\hat a}_R & {\hat a}_I \end{array} ]\rangle$, given by \begin{equation*} \left[ \begin{smallmatrix} {\kappa}N_S + N_B + 1 & 0 & 0 & \sqrt{\kappa{N_S(N_S+1)}} \\ 0 & N_S+1 & \sqrt{\kappa{N_S(N_S+1)}} & 0 \\ 0 & \sqrt{\kappa{N_S(N_S+1)}} & {\kappa}N_S+N_B & 0 \\ \sqrt{\kappa{N_S(N_S+1)}} & 0 & 0 & N_S \end{smallmatrix} \right]. 
\end{equation*} Under $H_0$, the joint return-idler state for each of the $M$ mode pairs, ${\hat \rho}_{RI}^{(0)}$, is a product of two zero-mean thermal states $({\hat \rho}_{N_B} \otimes {\hat \rho}_{N_S})$ with mean photon numbers $N_B$ and $N_S$ respectively, viz., $V^{RI} = {\rm{diag}}(N_B+1, N_S+1, N_B, N_S)$. The {\em{binary detection problem}} is the MPE discrimination between $H_{0}$ and $H_{1}$ using the optimal measurement on the $M$ return-idler mode pairs, $({\hat \rho}_{RI}^{(m)})^{\otimes M}$ for $m = 0$ or $1$. The minimum probability of error is given by $ P_{e,{\min}}^{(M)} = [1 - \sum_n\gamma_n^{(+)}]/2 $, where $\gamma_n^{(+)}$ are the non-negative eigenvalues of $({\hat \rho}_{RI}^{(1)})^{\otimes M} - ({\hat \rho}_{RI}^{(0)})^{\otimes M}$ \cite{Helstrom1976}. The quantum Chernoff bound (QCB), given by $Q_{\rm QCB} \triangleq \min_{0 \le s \le 1}Q_s$ where $Q_s \triangleq {\rm{Tr}}\bigl[({\hat \rho}_{RI}^{(0)})^s({\hat \rho}_{RI}^{(1)})^{1-s} \bigr]$, is an upper bound on $P_{e,{\min}}^{(M)}$ and is asymptotically tight in the exponent of the minimum error probability \cite{audenaert2007}. In particular, we have \begin{equation} P_{e,{\min}}^{(M)} \le \frac{1}{2}Q_{\rm QCB}^M \le \frac{1}{2}Q_{0.5}^M, \label{eq:bounds} \end{equation} where the first inequality (QCB) is asymptotically tight as $M \to \infty$~\footnote{A loose lower bound on the error probability is available in~\cite{tan2008}. As the Chernoff bound is asymptotically tight in error exponent to the optimal receiver's performance as $M\rightarrow \infty$, we focus only on the upper bound in this paper.}. The QCB is customarily represented as $P_{e,{\min}}^{(M)} \le e^{-MR_Q}/2$ in terms of an error exponent $R_Q \triangleq -{\ln}(Q_{\rm QCB})$. The second inequality is a looser upper bound known as the Bhattacharyya bound.
Symplectic decomposition of Gaussian-state covariance matrices was used to compute the QCB \cite{Pirandola2008, tan2008}, and it was shown that in the high loss, weak transmission and bright background regime, i.e., with $N_S \ll 1$, $\kappa \ll 1$, and $N_B \gg 1$, the entangled transmitter yields a QCB error-exponent $R_Q = {\kappa}N_S/N_B$, which is four times (or $6$ dB) higher than the error-exponent $R_C = {\kappa}N_S/(4N_B)$ for a coherent-state transmitter with a mean photon number $N_{S}$ per mode. In Fig.~\ref{fig:bounds}, we plot the QCB for the entangled and coherent state transmitters, showing a clear advantage of quantum over classical illumination. \begin{figure} \caption{(Color online) The figure shows five plots of error-probabilities and bounds thereof as a function of $M$. The two solid curves marked by arrows (from top to bottom respectively) are the coherent-state (blue) and the entangled Gaussian-state (red) transmitters. The third solid curve (black, in between the aforementioned QCB curves) plots the error-probability performance of the OPA receiver, whereas the dash-dotted curve shows the performance of the phase-conjugate receiver. The curve plotted with circles depicts the error-probability performance of the coherent-state transmitter and homodyne detection receiver, which is in fact a lower bound to the performance of an arbitrary classical-state transmitter, including classically correlated signal-idler transmitter states. The parameters used to generate the plots are $N_S=0.01$, $N_B = 20$, and $\kappa = 0.01$.} \label{fig:bounds} \end{figure} When a coherent-state transmitter is used, each received mode ${\hat a_R}$ is in a thermal state with mean photon number $N_B$, and a mean-field $\langle{\hat a_R}\rangle = 0$ or $\sqrt{\kappa{N_S}}$ for hypotheses $H_0$ and $H_1$ respectively. 
Homodyne detection on each received mode ${\hat a_R^{(k)}}$ yields a variance-$(2N_B+1)/4$ Gaussian-distributed random variable $X_k$ with mean $0$ or $\sqrt{\kappa{N_S}}$ given the hypothesis. The minimum error probability decision rule is to compare $X = X_1 + \ldots + X_M$ against a threshold: $``H_0"$ is declared if $X < (M\sqrt{\kappa{N_S}})/2$ and $``H_1"$ otherwise. The corresponding probability of error is \begin{equation*} P_{e,{\rm{hom}}}^{(M)} = \frac{1}{2}{\rm{erfc}}\left(\sqrt{\frac{\kappa{N_S}M}{4N_B+2}}\right) \approx \frac{e^{-MR_{C_{\rm{hom}}}}}{2 \sqrt{ \pi MR_{C_{\rm{hom}}}}}, \nonumber \end{equation*} where $\text{erfc}(x) \triangleq (2/\sqrt{\pi}) \int_{x}^{\infty} e^{-t^{2}} {\rm d}t $, $R_{C_{\rm{hom}}} ={\kappa{N_S}}/({4N_B+2})$ is the error exponent, and the approximation holds for ${\kappa}N_SM/(4N_B+2) \gg 1$. When $N_B \gg 1$, $R_{C_{\rm{hom}}} \approx {\kappa}N_S/(4N_B) = R_{C}$, so mode-by-mode homodyne detection is asymptotically optimal for the coherent-state transmitter. \subsection{The OPA Receiver} Unlike the coherent-state transmitter, the entangled transmitter results in zero-mean joint return-idler states under both hypotheses. The sole feature distinguishing the two hypotheses, and the one that makes quantum illumination outperform the unentangled coherent-state transmitter, is the set of off-diagonal terms of $V^{RI}$ bearing the remnant phase-sensitive cross correlations of the return-idler mode pairs when the target is present. The OPA receiver uses an optical parametric amplifier to combine the incident return and idler modes ${\hat a}_R^{(k)}$ and ${\hat a}_I^{(k)}$, $1 \le k \le M$, producing output mode-pairs: \begin{equation} {\hat c}^{(k)} = \sqrt{G}{\hat a}_I^{(k)} + \sqrt{G-1}{\hat a}_R^{\dagger{(k)}} \end{equation} and \begin{equation} {\hat d}^{(k)} = \sqrt{G}{\hat a}_R^{(k)} + \sqrt{G-1}{\hat a}_I^{\dagger{(k)}}, \end{equation} where $G > 1$ is the gain of the OPA (see Fig.~\ref{fig:OPArcvr}).
Thus, under both hypotheses the modes ${\hat c}^{(k)}$ are in independent, identical, zero-mean thermal states, ${\hat \rho}_c = \sum_{n=0}^{\infty}(N_m^n/(1+N_m)^{1+n})|n\rangle\langle{n}|$, for $m \in \left\{0, 1\right\}$, where the mean photon number is given by $N_{0} \triangleq GN_S + (G-1)(1+N_B)$ under $H_{0}$, and $N_{1} \triangleq GN_S + (G-1)(1+N_B + {\kappa}N_S) + 2\sqrt{G(G-1)}\sqrt{{\kappa}N_S(N_S+1)}$ under $H_{1}$. \begin{figure}\label{fig:OPArcvr} \end{figure} The joint state of the $M$ received modes $\left\{{\hat c}^{(k)}, 1 \le k \le M\right\}$ is the $M$-fold tensor product ${\hat \rho}_c^{\otimes{M}}$, which is diagonal in the $M$-fold tensor product of photon-number bases. Therefore, the optimum joint quantum measurement to distinguish between the two hypotheses is to count photons on each output mode ${\hat c}^{(k)}$ and decide between the two hypotheses based on the total photon count $N$ over all $M$ detected modes, using a threshold detector. The probability mass function of $N$ under the two hypotheses is given by \begin{equation} P_{N|H_m}(n|H_m) = \left(\begin{array}{c}n+M-1\\n\end{array}\right) \frac{N_m^{n}}{(1+N_m)^{n+M}}, \nonumber \end{equation} where $n=0,1,2,\dots$ and $m = 0$ or $1$. The mean and variance of this distribution are $MN_m$ and $M\sigma_m^2$ respectively, where $\sigma_m^2 = N_m(N_m+1)$. The minimum error probability to distinguish between the two distributions $P_{N|H_0}(n|H_0)$ and $P_{N|H_1}(n|H_1)$ using $M$ i.i.d.
observations is bounded above by the classical Bhattacharyya bound \cite{guha2009}, \begin{equation} P_{e,{\rm{OPA}}}^{(M)} \le \frac{1}{2}e^{-MR_B}, \end{equation} where, for a small OPA gain $G=1+\epsilon^2$, $\epsilon \ll 1$, the error exponent $R_B$ is given by \begin{eqnarray} R_B &=& \frac{{\epsilon}^2\kappa{N_S(N_S+1)}}{2N_S(N_S+1)+2{\epsilon^2}(1+2N_S)(1+N_S+N_B)} \nonumber \\ &\approx& \kappa{N_S}/(2N_B), \end{eqnarray} for the choice $\epsilon^2 = N_S/\sqrt{N_B}$ and for $N_S \ll 1$, $\kappa \ll 1$, $N_B \gg 1$ \footnote{A different $\epsilon$ with $N_S/N_B \ll \epsilon^2 \ll 1/N_B$ works as well.}. Therefore, for a weak transmitter operating in a highly lossy and noisy regime, the OPA receiver achieves at least a $3$ dB gain in error exponent over the optimum-receiver classical sensor, whose QCB error exponent is $R_C = \kappa{N_S}/(4N_B)$. For $N_S \ll 1$ and $\epsilon \ll 1$, both $N_0$ and $N_1 \ll 1$. Hence, a single-photon detector (as opposed to a full photon-counting measurement) suffices to achieve the performance of the receiver depicted in Fig.~\ref{fig:OPArcvr}. Due to the central limit theorem, for $M \gg 1$, the distributions $P_{N|H_m}(n|H_m)$, $m \in \left\{0,1\right\}$, approach Gaussian distributions with means and variances $MN_m$ and $M\sigma_m^2$ respectively. Hence for $M \gg 1$, \begin{equation} P_{e,{\rm{OPA}}}^{(M)} = \frac{1}{2}{\rm{erfc}}\left(\sqrt{R_{{\rm{OPA}}}M}\right) \approx \frac{e^{-MR_{{\rm{OPA}}}}}{2\sqrt{\pi MR_{\rm{OPA}}}}, \nonumber \end{equation} where the error exponent $R_{{\rm{OPA}}} = (N_1-N_0)^2/[2(\sigma_0+\sigma_1)^2]$ can be achieved using a threshold detector that decides in favor of hypothesis $H_0$ if $N < N_{\rm{th}}$, and $H_{1}$ otherwise, where $N_{\rm{th}} \triangleq \lceil{M(\sigma_1N_0+\sigma_0N_1)/(\sigma_0+\sigma_1)}\rceil$.
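Two statements above lend themselves to a quick numerical check: that the $M$-mode count distribution has mean $MN_m$ and variance $MN_m(N_m+1)$, and that $R_B$ approaches $\kappa N_S/(2N_B)$, twice the classical exponent $R_C$. A minimal sketch with illustrative parameter values (not tied to any experiment):

```python
from math import comb, sqrt

# Photon-count distribution of M i.i.d. thermal modes (negative binomial)
def pmf(n, M, Nm):
    return comb(n + M - 1, n) * Nm**n / (1 + Nm) ** (n + M)

M, Nm = 50, 0.04                  # illustrative values with N_m << 1
probs = [pmf(n, M, Nm) for n in range(200)]
mean = sum(n * p for n, p in enumerate(probs))
var = sum((n - mean) ** 2 * p for n, p in enumerate(probs))
print(mean, M * Nm)               # mean equals M*N_m
print(var, M * Nm * (Nm + 1))     # variance equals M*N_m*(N_m + 1)

# Bhattacharyya exponent of the OPA receiver for gain G = 1 + eps^2,
# with the choice eps^2 = N_S/sqrt(N_B) suggested in the text
def R_B(N_S, kappa, N_B):
    eps2 = N_S / sqrt(N_B)
    return (eps2 * kappa * N_S * (N_S + 1)) / (
        2 * N_S * (N_S + 1)
        + 2 * eps2 * (1 + 2 * N_S) * (1 + N_S + N_B))

N_S, kappa = 0.01, 0.01
for N_B in (20.0, 1e4):
    # ratio tends to 1, i.e. R_B -> kappa*N_S/(2*N_B), twice R_C (3 dB)
    print(N_B, R_B(N_S, kappa, N_B) / (kappa * N_S / (2 * N_B)))
```

At $N_B=20$ (the Fig.~\ref{fig:bounds} value) the ratio is already close to one, and it converges to one as $N_B$ grows.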
Fig.~\ref{fig:bounds} shows that $P_{e, {\rm OPA}}^{(M)}$ is strictly smaller (by $3$ dB in error-exponent) than $P_{e, {\rm hom}}^{(M)}$ --- the error probability achieved by the coherent state transmitter with a homodyne detection receiver. One can show using convexity arguments that in the high background regime, $P_{e, {\rm hom}}^{(M)}$ is in fact a strict lower bound to the error probability achievable by an arbitrary classical-state transmitter, which includes classically-correlated signal-idler mixed states (i.e., those that admit a Glauber P-representation). \subsection{The Phase-Conjugate Receiver} The phase-conjugate (PC) receiver is another receiver whose error-probability achieves the same $3$ dB error-exponent gain over the optimal classical transceiver in the asymptotic operating regime $N_S \ll 1$, $\kappa \ll 1$, $N_B \gg 1$, and has slightly better performance than the OPA receiver (see Fig.~\ref{fig:bounds}). As illustrated in Fig.~\ref{fig:PCreceiver}, the receiver phase-conjugates all $M$ return modes ${\hat a}_R^{(k)}$, $1 \le k \le M$ according to \begin{equation} {\hat a}_C^{(k)} = \sqrt{2}{\hat a}_V^{(k)} + {\hat a}_R^{{\dagger}(k)}\,, \end{equation} where ${\hat a}_V^{(k)}$ are vacuum-state operators needed to preserve the commutator. The conjugated return and the retained idler are then detected by a dual, balanced difference detector: the output modes of the 50-50 beam splitter, ${\hat a}_X^{(k)} = ({\hat a}_C^{(k)}+{\hat a}_I^{(k)})/\sqrt{2}$ and ${\hat a}_Y^{(k)} = ({\hat a}_C^{(k)}-{\hat a}_I^{(k)})/\sqrt{2}$, are detected and fed into a unity-gain difference amplifier, such that the final measurement is equivalent to \begin{equation} {\hat N}^{(k)} = {\hat N}_X^{(k)} - {\hat N}_Y^{(k)}\,, \end{equation} where ${\hat N}_X^{(k)} = {\hat a}_X^{\dagger{(k)}}{\hat a}_X^{(k)}$ and ${\hat N}_Y^{(k)} = {\hat a}_Y^{\dagger{(k)}}{\hat a}_Y^{(k)}$. The final decision is based on the sum of the photon counts $N$ over all $M$ modes. 
\begin{figure}\label{fig:PCreceiver} \end{figure} To simplify the subsequent analysis, let us define ${\bar N}_X \triangleq \langle{{\hat N}_X^{(k)}}\rangle$, ${\bar N}_Y \triangleq \langle{{\hat N}_Y^{(k)}}\rangle$, ${\bar N}_C \triangleq \langle{{\hat a}_C^{\dagger{(k)}}{\hat a}_C^{(k)}}\rangle$, and ${\bar N}_I \triangleq \langle{{\hat a}_I^{\dagger{(k)}}{\hat a}_I^{(k)}}\rangle$. Under hypothesis $H_0$, the modes ${\hat a}_C^{(k)}$ and ${\hat a}_I^{(k)}$ are in product thermal states, whereas under $H_1$ they are in a zero-mean joint Gaussian state with nonzero phase-insensitive cross correlation given by $\langle{{\hat a}_C^{\dagger{(k)}}{\hat a}_I^{(k)}}\rangle = \sqrt{\kappa{N_S}(N_S+1)} = C_q$. Measurement of ${\hat N}^{(k)}$, $1 \le k \le M$, produces a sequence of i.i.d. random variables $N_k$ with mean and variance given by ${N}_0 = 0$ and $\sigma_0^2 = {\bar N}_X({\bar N}_X+1) + {\bar N}_Y({\bar N}_Y+1) - ({\bar N}_C - {\bar N}_I)^2/2$ under hypothesis $H_0$, and ${N}_1 = 2C_q$ and $\sigma_1^2 = {\bar N}_X({\bar N}_X+1) + {\bar N}_Y({\bar N}_Y+1) - ({\bar N}_C - {\bar N}_I)^2/2$ under hypothesis $H_1$; the subtracted term is the common-mode photon-number correlation between ${\hat N}_X^{(k)}$ and ${\hat N}_Y^{(k)}$, which the balanced difference measurement removes under both hypotheses. Under hypothesis $H_{0}$ we have ${\bar N}_X = {\bar N}_Y = ({\bar N}_C + {\bar N}_I)/2$, whereas ${\bar N}_X = ({\bar N}_C + {\bar N}_I)/2 + C_q$, and ${\bar N}_Y = ({\bar N}_C + {\bar N}_I)/2 - C_q$ holds for hypothesis $H_1$. Finally, we have ${\bar N}_C = 1 + N_B$ for $H_0$, ${\bar N}_C = 1 + {\kappa}N_S + N_B$ for $H_1$, and because the idler is unaffected under either hypothesis, ${\bar N}_I = N_S$. For large $M$, and hypothesis $H_m$, $m \in \left\{0,1\right\}$, the distribution of $N = \sum_{k=1}^MN_k$ approaches a Gaussian distribution with mean and variance given by $MN_m$ and $M\sigma_m^2$ respectively.
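These per-mode moments fix the PC receiver's Gaussian-approximation error exponent. The sketch below evaluates them at the Fig.~\ref{fig:bounds} operating point; note that the common-mode term $({\bar N}_C-{\bar N}_I)^2/2$ must be subtracted from the variance under both hypotheses for the deflection exponent to approach the quantum-illumination value $\kappa N_S/(2N_B)$:

```python
import math

# Fig. 1 operating point
N_S, kappa, N_B = 0.01, 0.01, 20.0
C_q = math.sqrt(kappa * N_S * (N_S + 1))   # phase-insensitive cross correlation

def var_Nk(NC, NI, Cq):
    """Per-mode variance of N_k = N_X - N_Y.  The subtracted term is the
    common-mode photon-number correlation between N_X and N_Y removed by
    the balanced difference measurement (present under both hypotheses)."""
    NX = (NC + NI) / 2 + Cq
    NY = (NC + NI) / 2 - Cq
    return NX * (NX + 1) + NY * (NY + 1) - (NC - NI) ** 2 / 2

NI = N_S                                    # idler mean photon number
s0 = math.sqrt(var_Nk(1 + N_B, NI, 0.0))                # H0
s1 = math.sqrt(var_Nk(1 + kappa * N_S + N_B, NI, C_q))  # H1

# Deflection exponent (N_1 - N_0)^2 / [2 (s0 + s1)^2], with N_1 = 2 C_q
R_pc = (2 * C_q) ** 2 / (2 * (s0 + s1) ** 2)
print(s0, s1)                               # nearly equal
print(R_pc / (kappa * N_S / (2 * N_B)))     # -> 1 in the N_B >> 1 limit
```

Already at $N_B = 20$ the exponent sits within a few percent of $\kappa N_S/(2N_B)$, the same $3$ dB gain over $R_C$ achieved by the OPA receiver.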
Therefore the probability of error is \begin{equation} P_{e,{\rm{PCR}}}^{(M)} \approx \frac{1}{2}{\rm{erfc}}\left(\sqrt{R_{{\rm{PCR}}}M}\right) \approx \frac{e^{-MR_{{\rm{PCR}}}}}{2\sqrt{\pi MR_{\rm{PCR}}}}, \nonumber \end{equation} where the error exponent $R_{{\rm{PCR}}} = (N_1-N_0)^2/[2(\sigma_0+\sigma_1)^2]$ can be achieved using a threshold detector that decides in favor of hypothesis $H_0$ when $N < N_{\rm{th}}$ and in favor of $H_{1}$ otherwise, where $N_{\rm{th}} = \lceil{M(\sigma_1N_0+\sigma_0N_1)/(\sigma_0+\sigma_1)}\rceil$. The corresponding error exponent is given by \begin{eqnarray} R_{\rm{PCR}} &=& \frac{\kappa{N_S}(N_S+1)}{2N_B + 4N_SN_B+6N_S+4\kappa{N_S^2}+3{\kappa}N_S+2} \nonumber\\ &\approx& \kappa{N_S}/(2N_B), \end{eqnarray} where $N_S \ll 1$, $\kappa \ll 1$, $N_B \gg 1$. The PC receiver achieves the same $3$ dB error-exponent gain as the OPA receiver over the optimum-reception classical transceiver, though the performance of the former is slightly better in absolute terms (see Fig.~\ref{fig:bounds}). One reason for this is that balanced dual-detection cancels the common-mode excess noise in $\hat{a}_{X}$ and $\hat{a}_{Y}$, which is reflected by the negative term $({\bar N}_C - {\bar N}_I)^2/2$ in the variance of $N_k$. On the other hand, the OPA receiver operates at very low gain, and thus requires much less pump power than unity-gain phase-conjugation. In summary, we have proposed two receiver structures, both viable for low-complexity proof-of-concept experimental demonstrations using off-the-shelf optical components, which, in conjunction with the SPDC entangled-state source, could substantially outperform classical transceivers for various entangled-state optical sensing applications, such as standoff target detection, one-vs-two-target resolution sensing \cite{Guh2009} and two-way secure communications \cite{Shapiro2009}. The authors thank J. H.
Shapiro for making the observation about generalizing the coherent-state performance bound to arbitrary signal-idler classically-correlated transmitters. The authors also thank F. Wong, S. Lloyd, and Z. Dutton for valuable discussions. SG thanks the DARPA Quantum Sensors Program and BBN Technologies. BIE's contribution to the research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. \end{document}
\begin{document} \title{Confining the state of light to a quantum manifold by engineered two-photon loss} \author{Zaki Leghtas} \affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA} \author{Steven Touzard} \affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA} \author{Ioan M. Pop} \affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA} \author{Angela Kou} \affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA} \author{Brian Vlastakis} \affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA} \author{Andrei Petrenko} \affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA} \author{Katrina M. Sliwa} \affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA} \author{Anirudh Narla} \affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA} \author{Shyam Shankar} \affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA} \author{Michael J. Hatridge} \affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA} \author{Matthew Reagor} \affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA} \author{Luigi Frunzio} \affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA} \author{Robert J. Schoelkopf} \affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA} \author{Mazyar Mirrahimi} \affiliation{INRIA Paris-Rocquencourt, Domaine de Voluceau, B.P.~105, 78153 Le Chesnay Cedex, France} \affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA} \author{Michel H. 
Devoret} \affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA} \date{\today} \begin{abstract} Physical systems usually exhibit quantum behavior, such as superpositions and entanglement, only when they are sufficiently decoupled from a lossy environment. Paradoxically, a specially engineered interaction with the environment can become a resource for the generation and protection of quantum states. This notion can be generalized to the confinement of a system into a manifold of quantum states, consisting of all coherent superpositions of multiple stable steady states. We have experimentally confined the state of a harmonic oscillator to the quantum manifold spanned by two coherent states of opposite phases. In particular, we have observed a Schr\"{o}dinger cat state spontaneously squeeze out of vacuum, before decaying into a classical mixture. This was accomplished by designing a superconducting microwave resonator whose coupling to a cold bath is dominated by photon pair exchange. This experiment opens new avenues in the fields of nonlinear quantum optics and quantum information, where systems with multi-dimensional steady state manifolds can be used as error corrected logical qubits. \end{abstract} \maketitle {Maintaining the state of a system in the vicinity of a predefined state despite the presence of external perturbations plays a central role in science and engineering. Examples of this notion, called stabilization, include devices such as Watt's governor for regulating the angular velocity in a steam engine, and the escapement mechanism which prevents decay of the oscillation of a pendulum in clocks. The problem of stabilizing a quantum system is fundamentally more subtle than stabilizing a classical one. Stabilizing a system requires an interaction which, quantum mechanically, is always invasive. The mere act of learning something about a system perturbs it. 
Carefully designed non-destructive quantum measurements have recently been incorporated in feedback loops to stabilize a \emph{single} quantum state \cite{Sayrin2011,Vijay2012,Riste2012,Campagne-Ibarcq2013}. Alternatively, adequately engineering an interaction with an auxiliary dissipative system, termed engineered dissipation, can also stabilize a single quantum state \cite{Krauter2011,Murch2012,Geerlings2013,Shankar2013,Lin2013}. } {Can engineered dissipation protect \emph{all} unknown superpositions of two states, thus protecting quantum information? In fact, the static random access memory of a computer chip dynamically stabilizes the states representing 0 and 1 by combining the energy supply and dissipation, providing fast access time and robustness against noise. For a quantum memory, however, one must construct a system with not only one or two stable steady states (SSSs), but rather a whole quantum manifold composed of {all} coherent superpositions of two SSSs, see Fig.~\ref{fig:schema}a. By construction, such a system does not distinguish between all its SSSs and hence cannot correct for errors within the SSS manifold. However, quantum information encoded in this manifold will be protected against perturbations which move it out of the manifold.} {An oscillator which exchanges only pairs of photons with a dissipative auxiliary system \cite{Wolinsky1988} is a practical example which displays a manifold of SSSs. This two-photon loss will force and confine the state of the oscillator into the quantum manifold spanned by two oscillation states with opposite phases. Uncontrolled energy decay, termed single photon loss, causes decoherence within the SSS manifold, and hence quantum superpositions will eventually decay into classical mixtures.
Nevertheless, in the regime where pairs of photons are extracted at a rate at least as large as the single photon decay rate, transient quantum coherence can be observed.} {This regime, essential to the proposal by Wolinsky and Carmichael \cite{Wolinsky1988}, had not been reached as it requires combining strong non-linear interactions between modes and low single photon decay rates. Our experiment enters this regime through a circuit quantum electrodynamics (cQED) architecture \cite{Wallraff2004}, benefiting from the strong non-linearity and low loss of a Josephson junction.} Our setup, schematically described in Fig.~\ref{fig:schema}b, is based on a recent proposal \cite{Mirrahimi2014}. It consists of two superconducting microwave oscillators coupled through a Josephson junction in a bridge transmon configuration \cite{Kirchmair2013}. These oscillators are the fundamental modes of two superconducting cavities. One cavity, termed the storage, holds the manifold of SSSs and is designed to have minimal {single photon} dissipation. The other, termed the readout, is over-coupled to a transmission line and its role is to evacuate entropy from the storage. {In a variety of non-linear systems, {the} interaction of a pump tone with relevant degrees of freedom provides cooling \cite{Teufel2011}, squeezing \cite{Drummond2010}, and amplification \cite{Siddiqi2004,Castellanos-Beltran2008}}. Similarly, we use the four-wave mixing capability of the Josephson junction {to} generate a {coupling which exchanges pairs of photons in the storage with single photons in the readout}. By off-resonantly pumping the readout at an angular frequency \begin{equation} \omega_p=2\omega_s-\omega_r\;, \label{eq:freqmatching} \end{equation} where $\omega_{r,s}$ are the readout and storage {angular} frequencies, respectively, {the pump stimulates the conversion of two storage photons into one readout and one pump photon.
The readout photon then rapidly dissipates through the transmission line, resulting in a loss of photon pairs from the storage, as illustrated in Fig.~\ref{fig:schema}d}. This engineered dissipation is the key ingredient in our experiment. The input power that balances this dissipation is provided by {the readout drive}: a weak resonant irradiation of the readout. Due to {the non-linear mixing with the pump, these input readout photons are converted into pairs of storage photons, as illustrated in Fig.~\ref{fig:schema}e}. {Unlike the usual linear driven-dissipative oscillator which adopts only one oscillation state, our non-linear driven-dissipative system displays a quantum manifold of SSSs corresponding to all superpositions of two oscillation states with opposite phases.} In the experiment, we employed a third mode besides the storage and readout: the excitation of the bridge transmon qubit, restricted to its ground and first excited state. It served as a calibration tool for all the experimental parameters, and as a means to directly measure the Wigner function of the storage. {Our system is well described by the effective Hamiltonian for the storage and the readout \cite{supp}:} \begin{eqnarray} \text{\bf{H}}_{sr}/\hbar&=&g_2^*\text{\bf{a}}_s^2\text{\bf{a}}_r^\dag+g_2(\text{\bf{a}}_s^\dag)^2\text{\bf{a}}_r+\epsilon_d\text{\bf{a}}_r^\dag+\epsilon_d^*\text{\bf{a}}_r\notag\\ &-&\chi_{rs}\text{\bf{a}}_r^\dag\text{\bf{a}}_r\text{\bf{a}}_s^\dag\text{\bf{a}}_s-\sum_{m=r,s}\frac{\chi_{mm}}{2}{\text{\bf{a}}_m^\dag}^2\text{\bf{a}}_m^2\;. \label{eq:twomodesH} \end{eqnarray} The readout and storage annihilation operators are denoted $\text{\bf{a}}_{r}$ and $\text{\bf{a}}_s$, respectively.
The first line is a microscopic Hamiltonian of the degenerate parametric oscillator \cite{Carmichael2007} with $$ g_2=\chi_{rs}\xi_p^*/2\;,\qquad \xi_p\approx{-i\epsilon_p}/\left({\frac{\kappa_r}{2}+i(\omega_r-\omega_p)}\right)\;, $$ where $\chi_{rs}/2\pi=206~$kHz is the dispersive coupling between the readout and the storage, and $\epsilon_{p}, \epsilon_d$ are the pump and drive amplitudes, respectively. The terms in $g_2$ correspond to the conversion of pairs of photons in the storage into single photons in the readout (Fig.~\ref{fig:schema}d-e). The readout and storage have a Kerr non-linearity: $\chi_{rr}/2\pi=2.14~$MHz and $\chi_{ss}/2\pi\approx4~$kHz, respectively. The Kerr interactions can be considered as perturbations which do not significantly disturb the two-photon conversion effects \cite{supp}. The storage and readout single photon lifetimes are $1/\kappa_s=20~\mu$s and $1/\kappa_r=25~$ns, respectively. The two-photon processes shown in Fig.~\ref{fig:schema}d-e are only activated when the frequency matching condition \eqref{eq:freqmatching} is met. We satisfy this condition by performing a calibration experiment {as shown} in Fig.~\ref{fig:readoutspec_vs_pumpfreq}. We excited the readout with a weak CW probe tone ($\approx1$ photon), and measured its transmitted power, in the presence of the pump tone, while sweeping the frequency of both tones. The pump power was kept fixed during this measurement, and its value was chosen as the largest that did not degrade the coherence times of our system \cite{supp}. When the frequency matching condition is met, the probe photons are converted back and forth into pairs of storage photons (Fig.~\ref{fig:schema}d-e). When equilibrium is reached for this process, the input probe photons interfere destructively with the back-converted storage photons and are now reflected {back into the probe input port} \cite[Section 12.1.1]{Carmichael2007}: the readout is in an induced dark state.
The {dip} in Fig.~\ref{fig:readoutspec_vs_pumpfreq}(a-b) is a signature of this {interference}. {Its depth indicates that we have achieved a large non-linear coupling $g_2\gg\kappa_s$ \cite{supp}}. For the subsequent experiments, we fixed the pump frequency to $\omega_p/2\pi=8.011~$GHz, {which makes the dip coincide with the readout resonance frequency}. We demonstrate that photons are inserted into the storage by measuring the {probability of having $n>0$ photons in the storage while sweeping} the readout drive frequency, as shown in Fig.~\ref{fig:readoutspec_vs_pumpfreq}(c). {We apply simultaneous 10~$\mu$s square pulses of the pump and drive tones, and then excite the qubit from its ground to its excited state, conditioned on there being $n=0$ photons in the storage \cite{Johnson2010}. Reading out the qubit state then answers the question: are there 0 photons in the storage?} The peak at zero detuning shows that the readout drive and the pump combine non-linearly to insert photons {into} the storage. {We then choose the drive tone frequency which maximizes the number of photons in the storage,} and the drive power is fixed to ensure an equilibrium average photon number in the storage of $\approx 4$ \cite{supp}. Adiabatically eliminating the readout from \eqref{eq:twomodesH} \cite{supp,Carmichael2007}, we obtain dynamics for the storage governed by the Hamiltonian \begin{equation*} \text{\bf{H}}_s/\hbar=\epsilon_2^*\text{\bf{a}}_s^2+\epsilon_2(\text{\bf{a}}_s^\dag)^2-\frac{\chi_{ss}}{2}{\text{\bf{a}}_s^\dag}^2\text{\bf{a}}_s^2\;, \end{equation*} and loss operators $\sqrt{\kappa_2}\text{\bf{a}}_s^2$ and $\sqrt{\kappa_s}\text{\bf{a}}_s$, where $$ \epsilon_2=-i\frac{\chi_{sr}}{\kappa_r}\xi_p^*\epsilon_d\;,\qquad \kappa_2=\frac{\chi_{sr}^2}{\kappa_r}\abs{\xi_p}^2\;. $$ The $\epsilon_2$ non-linear drive inserts pairs of photons in the storage (Fig.~\ref{fig:schema}e) and is analogous to the usual squeezing drive of a non-linear oscillator \cite{Drummond2010}.
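This reduced model can be illustrated with a small Lindblad simulation (a hedged sketch, not the simulations reported here: the values of $\epsilon_2$ and $\kappa_2$ are arbitrary, $\chi_{ss}$ and $\kappa_s$ are set to zero, and the Fock space is truncated). For this idealized model the dark states satisfy $\text{\bf{a}}_s^2\ket{\psi}=\alpha^2\ket{\psi}$ with $\alpha^2=-2i\epsilon_2/\kappa_2$ (taking $\epsilon_2$ real), and the vacuum, being an even-parity state, converges to the even cat:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

N = 15                                            # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)        # annihilation operator
a2 = a @ a

# Two-photon drive + two-photon loss (chi_ss = kappa_s = 0; rates arbitrary)
eps2, kappa2 = 0.2, 0.4
H = eps2 * (a2 + a2.conj().T)                     # eps2 taken real
L = np.sqrt(kappa2) * a2
I = np.eye(N)

# Liouvillian superoperator; column-stacked vec(A X B) = (B^T kron A) vec(X)
LdL = L.conj().T @ L
Liou = (-1j * (np.kron(I, H) - np.kron(H.T, I))
        + np.kron(L.conj(), L)
        - 0.5 * np.kron(I, LdL) - 0.5 * np.kron(LdL.T, I))

# Propagate the vacuum far past the confinement time ~ 1/kappa2
v = np.zeros(N * N, dtype=complex); v[0] = 1.0    # vec(|0><0|)
U = expm(Liou)                                    # one time unit
for _ in range(600):
    v = U @ v
rho = v.reshape(N, N, order="F")

# Pairwise photon exchange conserves parity, so <P> stays +1
parity = np.real(np.trace(np.diag((-1.0) ** np.arange(N)) @ rho))

# Even cat |alpha> + |-alpha> with alpha^2 = -2i*eps2/kappa2 (either root works)
alpha = np.sqrt(-2j * eps2 / kappa2)
coh = np.array([alpha**n / np.sqrt(factorial(n)) for n in range(N)])
cat = coh + np.array([(-alpha)**n / np.sqrt(factorial(n)) for n in range(N)])
cat = cat / np.linalg.norm(cat)
fidelity = np.real(cat.conj() @ rho @ cat)
print(round(parity, 4), round(fidelity, 4))       # both close to 1
```

With single-photon loss added, the coherence within the manifold would decay, as observed in the experiment.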
The novel element in this experiment is the non-linear decay, of rate $\kappa_2$, which extracts only photons in pairs from the storage (Fig.~\ref{fig:schema}d). In the absence of the unavoidable loss $\kappa_s$ and neglecting the effect of $\chi_{ss}$ \cite{supp}, the storage converges into the two-dimensional quantum manifold spanned by coherent states $\ket{\pm\alpha_\infty}$, where \begin{equation*} \alpha_\infty\Big|_{\chi_{ss}=\kappa_s=0}=i\sqrt{\frac{2\epsilon_d}{\xi_p\chi_{sr}}}\;. \end{equation*} In a classical model where quantum noise is just ordinary noise \cite{DYKMAN1980}, our system behaves as a bi-stable oscillator with two oscillation states of amplitudes $\pm\alpha_\infty$. The storage then evolves to $+\alpha_\infty$ \emph{or} $-\alpha_\infty$. However in the full quantum model, the storage must evolve to $+\alpha_\infty$ \emph{and} $-\alpha_\infty$ when initialized in the vacuum state, thus forming an even Schr\"{o}dinger cat state: ${\mathcal N}(\ket{\alpha_\infty}+\ket{-\alpha_\infty})={\mathcal N}(\sum_{n=0}^\infty{(\alpha_\infty^{2n}/\sqrt{(2n)!})\ket{2n}})$ (${\mathcal N}$ is a normalization constant) \cite{Ourjoumtsev2007,Vlastakis2013,Deleglise2008,Monroe1996,Hofheinz2009}. {We visualize these dynamics by measuring the state of the storage by direct Wigner tomography \cite{Lutterbach1997,Vlastakis2013}. The Wigner function \cite{Haroche2006} is a representation of a quantum state defined over the complex plane as $W(\alpha)=\frac{2}{\pi}\bket{\text{\bf{D}}_\alpha \text{\bf{P}} \text{\bf{D}}_{-\alpha}}$, the normalized expectation value of the parity operator $\text{\bf{P}}=e^{i\pi\text{\bf{a}}_s^\dag\text{\bf{a}}_s}$ for the state displaced by the operator $\text{\bf{D}}_\alpha=e^{\alpha \text{\bf{a}}_s^\dag-\alpha^* \text{\bf{a}}_s}$.} This quasi-probability distribution vividly displays the quantum features of a coherent superposition.
The bi-stable property of our system is demonstrated in Fig.~\ref{fig:double_well} by initializing the storage in coherent states with a mean photon number of 6.8 and various phases, and observing their convergence to the closest equilibrium state (Fig.~\ref{fig:double_well}, {displacement angle = \{0,$\pm \pi/4$,$\pm 3\pi/4$,$\pi$\}}). The upper and lower middle panels (Fig.~\ref{fig:double_well}, {displacement angle = $\pm\pi/2$}) correspond to states initialized at almost equal distance from $\pm\alpha_\infty$ which {randomly evolve} to one equilibrium state or the other, thus converging to the statistical mixture of $\pm\alpha_\infty$. The coherent splitting of the vacuum into the quantum superposition of $\ket{\pm\alpha_\infty}$ is demonstrated in Fig.~\ref{fig:time_evolution}. In the absence of loss in the storage, the pairwise exchange of photons between the storage and the environment conserves parity. Therefore, since the vacuum state is an even parity state, it must transform into the even cat state: the unique even state contained in the manifold of equilibrium states. Similarly, Fock state $\ket{1}$ being an odd parity state, it must transform into the odd cat state \cite{supp}. In the presence of $\kappa_s$, all coherences will ultimately disappear. However, for large enough $\kappa_2$, a transient {quantum superposition} state is observed. In this experiment, we achieve $\abs{\xi_p}^2=1.2$, which implies {$g_2/2\pi=111~$kHz} and {$\kappa_2/\kappa_s=1.0$}. The quantum {nature} of the transient storage state is visible in the negative fringes of the Wigner function (see Fig.~\ref{fig:time_evolution}a-b, $7~\mu$s), and the non-Poissonian photon number statistics (Fig.~\ref{fig:time_evolution}d, $7~\mu$s). After 7 $\mu$s of pumping, we obtain a state with an average photon number $\bar n=2.4$, and a parity of 42$\%$, which is larger than the parity of a thermal state ($17\%$) or a coherent state (0.8$\%$) with equal $\bar n$.
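These quoted numbers can be cross-checked against the expressions given earlier (a consistency sketch; the thermal- and coherent-state parities $1/(2\bar n+1)$ and $e^{-2\bar n}$ are standard results, not taken from the text):

```python
import math

# Parameter values quoted in the text
chi_rs = 2 * math.pi * 206e3       # rad/s, dispersive coupling
kappa_r = 1 / 25e-9                # readout decay rate, 1/kappa_r = 25 ns
kappa_s = 1 / 20e-6                # storage decay rate, 1/kappa_s = 20 us
xi_p2 = 1.2                        # pump strength |xi_p|^2

g2 = chi_rs * math.sqrt(xi_p2) / 2           # |g_2| = chi_rs |xi_p| / 2
kappa_2 = chi_rs**2 * xi_p2 / kappa_r        # two-photon loss rate

print(g2 / (2 * math.pi) / 1e3)    # ~113 kHz, consistent with quoted 111 kHz
print(kappa_2 / kappa_s)           # ~1.0, consistent with the quoted ratio

# Parity benchmarks at the quoted nbar = 2.4
nbar = 2.4
print(1 / (2 * nbar + 1))          # thermal-state parity ~0.17
print(math.exp(-2 * nbar))         # coherent-state parity ~0.008
```

The measured $42\%$ parity therefore exceeds both classical benchmarks by a wide margin, consistent with a coherent superposition of the two SSSs.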
After 19~$\mu$s of pumping, although the negative fringes vanish, the phase and amplitude of the SSSs $\ket{\pm\alpha_\infty}$ are conserved. Our data are in good agreement with numerical simulations (Fig.~\ref{fig:time_evolution}c), {indicating that our dominant source of imperfection is single photon loss}. {These results illustrate the confinement of the storage state into the manifold of SSSs, and how it transits through a quantum superposition of $\ket{\pm\alpha_\infty}$.} In conclusion, we have realized a non-linearly driven-dissipative oscillator which spontaneously evolves towards the quantum manifold spanned by two coherent states. Starting from the vacuum, a Schr\"{o}dinger cat state is produced, as shown by negativities in the Wigner function and a non-Poissonian photon number distribution. This was achieved by attaining the regime in which the photon pair exchange rate is of the same order as the single photon decay rate. The ratio between these two rates can be further improved within the present technology by using a higher $Q$ oscillator and increasing its non-linear coupling to the bath. Our experiment is an essential step towards a new paradigm for universal quantum computation \cite{Mirrahimi2014}. By combining higher order forms of our non-linear dissipation with efficient error syndrome measurements \cite{Sun2014}, quantum information can be encoded and manipulated in a protected manifold of quantum states. {\textbf{Acknowledgements}: The authors thank L. Jiang and V. V. Albert for helpful discussions. Facilities use was supported by YINQE and NSF MRSEC DMR 1119826. This research was supported by ARO under Grant No. W911NF-14-1-0011. MM acknowledges support from the French ``Agence Nationale de la Recherche'' under the project EPOQ2 number ANR-09-JCJC-0070.} \begin{thebibliography}{40} \bibitem{Sayrin2011} C.~Sayrin, {\it et~al.\/}, {\it Nature\/} {\bf 477}, 73 (2011).
\bibitem{Vijay2012} R.~Vijay, {\it et~al.\/}, {\it Nature\/} {\bf 490}, 77 (2012). \bibitem{Riste2012} D.~Riste, C.~C. Bultink, K.~W. Lehnert, L.~DiCarlo, {\it Phys. Rev. Lett.\/} {\bf 109}, 240502 (2012). \bibitem{Campagne-Ibarcq2013} P.~Campagne-Ibarcq, {\it et~al.\/}, {\it Phys. Rev. X\/} {\bf 3}, 021008 (2013). \bibitem{Krauter2011} H.~Krauter, {\it et~al.\/}, {\it Phys. Rev. Lett.\/} {\bf 107}, 080503 (2011). \bibitem{Murch2012} K.~W. Murch, {\it et~al.\/}, {\it Phys. Rev. Lett.\/} {\bf 109}, 183602 (2012). \bibitem{Geerlings2013} K.~Geerlings, {\it et~al.\/}, {\it Phys. Rev. Lett.\/} {\bf 110}, 120501 (2013). \bibitem{Shankar2013} S.~Shankar, {\it et~al.\/}, {\it Nature\/} {\bf 504}, 419 (2013). \bibitem{Lin2013} Y.~Lin, {\it et~al.\/}, {\it Nature\/} {\bf 504}, 415 (2013). \bibitem{Wolinsky1988} M.~Wolinsky, H.~J. Carmichael, {\it Phys. Rev. Lett.\/} {\bf 60}, 1836 (1988). \bibitem{Wallraff2004} A.~Wallraff, {\it et~al.\/}, {\it Nature\/} {\bf 431}, 162 (2004). \bibitem{Mirrahimi2014} M.~Mirrahimi, {\it et~al.\/}, {\it New J. Phys.\/} {\bf 16}, 045014 (2014). \bibitem{Kirchmair2013} G.~Kirchmair, {\it et~al.\/}, {\it Nature\/} {\bf 495}, 205 (2013). \bibitem{Teufel2011} J.~D. Teufel, {\it et~al.\/}, {\it Nature\/} {\bf 475}, 359 (2011). \bibitem{Drummond2010} P.~Drummond, Z.~Ficek, {\it Quantum Squeezing\/} ({Springer}, 2010). \bibitem{Siddiqi2004} I.~Siddiqi, {\it et~al.\/}, {\it Phys. Rev. Lett.\/} {\bf 93}, 207002 (2004). \bibitem{Castellanos-Beltran2008} M.~A. Castellanos-Beltran, K.~D. Irwin, G.~C. Hilton, L.~R. Vale, K.~W. Lehnert, {\it Nature Physics\/} {\bf 4}, 929 (2008). \bibitem{supp} See supplementary materials for details. \bibitem{Carmichael2007} H.~J. Carmichael, {\it Statistical Methods in Quantum Optics 2\/} ({Springer}, 2007). \bibitem{Johnson2010} B.~Johnson, {\it et~al.\/}, {\it Nature Physics\/} {\bf 6}, 663 (2010). \bibitem{DYKMAN1980} M.~I. Dykman, M.~A. Krivoglaz, {\it Physica A\/} {\bf 104}, 480 (1980). 
\bibitem{Ourjoumtsev2007} A.~Ourjoumtsev, H.~Jeong, R.~Tualle-Brouri, P.~Grangier, {\it Nature\/} {\bf 448}, 784 (2007). \bibitem{Vlastakis2013} B.~Vlastakis, {\it et~al.\/}, {\it Science\/} {\bf 342}, 607 (2013). \bibitem{Deleglise2008} S.~Del{\'e}glise, {\it et~al.\/}, {\it Nature\/} {\bf 455}, 510 (2008). \bibitem{Monroe1996} C.~Monroe, D.~M. Meekhof, B.~E. King, D.~J. Wineland, {\it Science\/} {\bf 272}, 1131 (1996). \bibitem{Hofheinz2009} M.~Hofheinz, {\it et~al.\/}, {\it Nature\/} {\bf 459}, 546 (2009). \bibitem{Lutterbach1997} L.~G. Lutterbach, L.~Davidovich, {\it Phys. Rev. Lett.\/} {\bf 78}, 2547 (1997). \bibitem{Haroche2006} S.~Haroche, J.~Raimond, {\it Exploring the Quantum: Atoms, Cavities and Photons.\/} (Oxford University Press, 2006). \bibitem{Sun2014} L.~Sun, {\it et~al.\/}, {\it Nature\/} {\bf 511}, 444 (2014). \bibitem{Schuster2007} D.~Schuster, {\it et~al.\/}, {\it Nature\/} {\bf 445}, 515 (2007). \end{thebibliography} \begin{figure*}\label{fig:schema} \end{figure*} \begin{figure*}\label{fig:readoutspec_vs_pumpfreq} \end{figure*} \begin{figure*} \caption{Bi-stable behavior of the steady state manifold of the non-linearly driven-damped storage oscillator. The central panel shows the theoretical classical equivalent of a potential of the storage nonlinear dynamics. The modulus of the velocity (color) has three zeros corresponding to two SSSs $\ket{\pm\alpha_\infty}$ and the saddle point $\ket{0}$. Trajectories initialized on the panel border converge to one of these two SSSs. {These trajectories are curved due to the Kerr effect}. The outside panels show the measured Wigner function of the storage after $10~\mu$s of pumping for different initial states. For each panel, we initialize the storage in a coherent state of amplitude $\alpha_k$, where $\abs{\alpha_k}=2.6$ and $\arg(\alpha_k)$ is indicated in each panel. The storage converges to a combination of $\ket{\pm\alpha_\infty}$. 
The weight of each of these two states and the coherence of their superposition are set by the initial state. For the initial phases $\arg(\alpha_k)=0,\pm\pi/4$, the storage mainly evolves to $\ket{\alpha_\infty}$, with only a small weight on $\ket{-\alpha_\infty}$. On the other hand, for initial phases $\arg(\alpha_k)=\pm3\pi/4,\pi$, the state mainly evolves to $\ket{-\alpha_\infty}$ with a small weight on $\ket{\alpha_\infty}$. For the initial phases $\arg(\alpha_k)=\pm\pi/2$, the initial state is almost symmetrically positioned with respect to the two states $\ket{\pm\alpha_\infty}$ and has no definite parity (even and odd photon number states are almost equally populated). Hence, the state evolves to a mixture of $\ket{\pm\alpha_\infty}$.} \label{fig:double_well} \end{figure*} \begin{figure*} \caption{Time evolution of the storage state in the presence of the nonlinear drive and dissipation processes described in Fig.~\ref{fig:schema}. The panels correspond to measured data (a), to reconstructed density matrices \cite{Vlastakis2013} (b), and to numerical simulations (c). They display the Wigner function after a pumping duration indicated at the top of each panel. The storage is initialized in the quantum vacuum state at $t=0~\mu$s. First, the state squeezes in the $Q$ quadrature ($t=2~\mu$s). Small but visible negativities appearing at $t=7~\mu$s indicate that the superposition of the SSSs shown in Fig.~\ref{fig:double_well}, panel 2, is now coherent, and that a continuous evolution from a squeezed state to a quantum state approximating a Schr\"{o}dinger cat state is taking place. Finally, these negativities disappear as a consequence of the unavoidable storage photon loss, and the state decays into a statistical mixture of the two SSSs ($t=19~\mu$s). (d) Storage photon number distribution measured using the photon number splitting of the qubit \cite{Schuster2007}. At $t=2,7~\mu$s, the $n=2$ population is larger than the $n=1$ population.
A similar population inversion is also present between $n=4$ and $n=3$ at $t=7~\mu$s. The non-Poissonian character of the photon number distribution at $t=2,7~\mu$s confirms the non-classical nature of the dynamical states of the storage for these intermediate times.} \label{fig:time_evolution} \end{figure*} \beginsupplement \renewcommand{\bibnumfmt}[1]{[S#1]} \renewcommand{\citenumfont}[1]{S#1} \widetext \begin{center} \textbf{\large Supplementary material for ``Confining the state of light to a quantum manifold by engineered two-photon loss''} \end{center} \section{Materials and methods} \subsection{Qubit fabrication} The transmon qubit was fabricated with a double-angle-evaporated Al/AlO$_x$/Al Josephson junction, defined using the bridge-free fabrication technique \cite{Lecocq2011} on a double-side-polished 2 mm-by-19 mm chip of c-plane sapphire with a 0.43 mm thickness. The aluminum film thicknesses for the two depositions were 20 nm and 30 nm, respectively. The Josephson junction has {an area of 0.09 $\pm$ 0.02 $\mu$m$^2$}. Between these two depositions, an AlO$_x$ barrier was grown via thermal oxidation for {6 minutes} in a 100 Torr static pressure of a gaseous mixture of 85\% argon and 15\% oxygen. The room-temperature junction resistance was 6.67 k$\Omega$. The sapphire chip was placed across two 3D aluminum cavities separated by a 2 mm wall, {as shown in Fig.~\ref{fig:cavPic}}. These cavities were machined out of high purity (99.99\%) aluminum, and prepared by removing $\approx~200~\mu$m of material with acid etching \cite{Reagor2013}. The antenna pads on each side of the Josephson junction couple to the TE101 mode of each cavity. On the readout cavity side, the antenna is 0.5 mm wide and 7.5 mm long. On the storage cavity side, the antenna is 0.5 mm wide and 4.2 mm long with a 0.01 mm gap capacitor for extra coupling tunability.
These dimensions were optimized to meet the desired coupling strengths using finite element simulations and black box circuit quantization analysis \cite{Nigg2012}. \subsection{Measurement setup} \subsubsection{Waveguide Purcell filter} The output of the readout cavity is coupled to a transmission line through a WR-102 waveguide which exponentially attenuates signals below a cutoff frequency of 5.8 GHz. This way, the readout (7.152 GHz) is above cutoff, and is hence well coupled to the transmission line. On the other hand, the qubit (4.9007 GHz) is below, and is hence isolated from the transmission line. With this architecture, we obtained a qubit lifetime of $T_1=23~\mu$s despite its strong coupling to the low $Q$ readout cavity ($\chi_{qr}/2\pi=35~$MHz, $\kappa_r=(26~\text{ns})^{-1}$). Waveguide transmission at the qubit frequency is set by the waveguide length (7.62 cm) and detuning below cutoff, and in our case is -70 dB (at 300 K), while only -0.16 dB (at 300 K) at the cavity frequency. The coupling between the cavity and waveguide is through an aperture, whose dimensions (7.4 mm long, 3.96 mm wide, 5.64 mm deep) determine the coupling strength, which is measured to be $Q^{\text{out}}_r$ = 7500 (assuming internal quality factor $Q_r^\text{in}\gg Q^{\text{out}}_r$). The input couplings for the readout and storage cavities were measured at room temperature to be $Q^\text{in}_r$=4,000,000 and $Q^\text{in}_s$=15,000,000. The output port of the storage cavity $Q^\text{out}_s\approx Q^\text{in}_s$ was not used in this experiment. \subsubsection{Amplification chain} The transmission line is connected to a Josephson parametric converter (JPC) acting as a phase preserving amplifier \cite{Bergeal2010,Bergeal2010a,Roch2012}, operating near the quantum limit with a gain of 20 dB over a bandwidth of 4.6 MHz. 
We obtain an input noise visibility ratio for the amplification chain of 8 dB \cite{Narla2014}, indicating that $\approx$90\% of the noise measured at room temperature consists of amplified quantum fluctuations. The qubit state is measured by sending a square pulse of length $T_\text{pulse}=1~\mu$s through the input readout port. The frequency of this pulse is centered on the readout cavity frequency corresponding to the qubit being in its ground state $\ket{g}$, so that the pulse is transmitted to the cavity output port towards the JPC. Since the dispersive shift is much larger than the cavity linewidth ($\chi_{qr}\gg \kappa_r\gg1/T_\text{pulse}$), the pulse is instead reflected off the input port when the qubit is in its excited state $\ket{e}$. When the qubit is in $\ket{g}$, the steady state number of photons in the readout cavity during this pulse is about 4 (calibrated using qubit measurement induced dephasing \cite{Gambetta2006}). When exiting the JPC, the pulse propagates through two isolators at 20 mK and a superconducting line between the 20 mK stage and the 4 K stage, where it is amplified by a HEMT amplifier with 40 dB gain. At room temperature, the signal is further amplified, mixed down to 50 MHz and digitized with an analog-to-digital converter (ADC) (see Fig.~\ref{fig:setup}). For each measurement, we record the two quadratures ($I$ and $Q$) of the digitized signal. A histogram of 820,000 measured ($I$,$Q$) values is shown in Fig.~\ref{fig:JPC}. This histogram is the sum of two gaussians: the right one corresponds to the qubit in $\ket{g}$ and the left one to the qubit in $\ket{e}$ (corresponding to a qubit thermal excited state occupancy of 20\%). The $I$ and $Q$ quadratures are rotated such that the information lies in the $I$ quadrature only. The right gaussian is squeezed in the $Q$ quadrature, which is a consequence of the JPC saturation.
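The equal-error thresholding of two such gaussian clouds can be sketched numerically. The means and widths below are hypothetical placeholders, not the measured histogram of Fig.~\ref{fig:JPC}; the sketch only illustrates the procedure of choosing the threshold and extracting a separability fidelity:

```python
import math

def norm_cdf(x, mu, sigma):
    # Cumulative distribution function of a gaussian N(mu, sigma^2)
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def equal_error_threshold(mu_e, sig_e, mu_g, sig_g):
    # Bisect for the threshold t where P(I < t | g) = P(I > t | e),
    # i.e. the two misclassification errors are equal (assumes mu_e < mu_g,
    # matching the |e> cloud on the left and the |g> cloud on the right).
    lo, hi = mu_e, mu_g
    for _ in range(100):
        t = 0.5 * (lo + hi)
        err_g = norm_cdf(t, mu_g, sig_g)        # mistaking |g> for |e>
        err_e = 1.0 - norm_cdf(t, mu_e, sig_e)  # mistaking |e> for |g>
        if err_g < err_e:
            lo = t
        else:
            hi = t
    return t

# Hypothetical gaussian parameters in rotated (I, Q) units:
t = equal_error_threshold(mu_e=-1.0, sig_e=0.4, mu_g=1.0, sig_g=0.4)
err = norm_cdf(t, 1.0, 0.4)   # common misclassification error at the threshold
fidelity = 1.0 - err          # separability fidelity
```

For clouds of equal width the bisection returns the midpoint; for unequal widths (e.g. a squeezed $\ket{g}$ gaussian) the equal-error threshold shifts towards the narrower cloud.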
An ($I$,$Q$) value lying on the right (left) hand side of the threshold indicated by the dotted line in Fig.~\ref{fig:JPC} is associated with a qubit in the ground (excited) state. This threshold is calculated such that the errors of mistaking $\ket{g}$ for $\ket{e}$ and $\ket{e}$ for $\ket{g}$ are equal. The resulting separability fidelity is 99\%, which would coincide with the measurement fidelity in the limit of large $T_1$. \subsection{System parameters} \subsubsection{Parameter values} The system parameters are shown in Table~\ref{table:parameters}. \begin{table}[!h] \begin{tabular}{| l | c | c | c | c | } \hline Mode & Frequency (GHz) & T$_1$ ($\mu$s) & T$_2$ ($\mu$s) & Thermal population\\ \hline Qubit & $ 4.9007 $ & 23 & 1 & 20\%\\ Storage & 7.57861 & 20 & - &$\le$ 5\%\\ Readout & 7.152 & 0.025 & - &$\le$ 2\%\\ \hline \end{tabular} \caption{Frequencies, thermal populations and coherence times of each mode.} \label{table:parameters} \end{table} \begin{table}[!h] \begin{tabular}{| l | c | c | c | } \hline $\chi/2\pi$ (MHz) & Qubit & Storage & Readout\\ \hline Qubit & $ 130 $ & & \\ Storage & 1.585 & (0.004) & \\ Readout & 35 & 0.206 & 2.14\\ \hline \end{tabular} \caption{Dispersive couplings between the qubit, storage and readout modes. The diagonal elements in this table refer to the self-Kerr terms, which enter the Hamiltonian as $\sum_{m=q,r,s}-\frac{\chi_{mm}}{2}{\text{\bf{a}}_m^\dag}^2\text{\bf{a}}_m^2$, where the subscripts $m=q,r,s$ stand respectively for the qubit, readout and storage. The off-diagonal terms in the table are the cross-Kerr terms, which enter the Hamiltonian as $-\chi_{qs}\text{\bf{a}}_q^\dag\text{\bf{a}}_q\text{\bf{a}}_s^\dag\text{\bf{a}}_s-\chi_{qr}\text{\bf{a}}_q^\dag\text{\bf{a}}_q\text{\bf{a}}_r^\dag\text{\bf{a}}_r-\chi_{rs}\text{\bf{a}}_r^\dag\text{\bf{a}}_r\text{\bf{a}}_s^\dag\text{\bf{a}}_s$.
The value for the storage Kerr (between brackets) was not directly measured, but only estimated from other measured quantities using the geometric equality $\chi_{ss}=\chi_{qs}^2/4\chi_{qq}$ \cite{Nigg2012}.} \end{table} \subsubsection{Choice of parameters} As described in the main text, the goal of this experiment was to obtain a non-linear dissipation rate $\kappa_2=\frac{\chi_{sr}^2}{\kappa_r}\abs{\xi_p}^2$ as large as possible. This rate is proportional to the pump power and to the square of the readout-storage cross-Kerr $\chi_{sr}$. It is not possible to pump arbitrarily hard, since mixing of the pump by higher-order non-linear terms eventually produces undesirable effects. For example, in Fig.~\ref{fig:Stark}, we can see that for pump powers larger than 100 mW {(measured at the output of the generator)}, the storage mode linewidth increases above the linewidth in the absence of the pump. We have also seen that for pump powers larger than 200 mW, the qubit thermal population starts to increase. This is why we fix the pump power to 100 mW for the rest of the experiment. From the AC Stark shift on the qubit, we know that this corresponds to $\abs{\xi_p}^2=1.2$. Therefore, it is useful to have a large enough $\chi_{sr}$ in order to achieve $\kappa_2$ of the same order as $\kappa_s$ for $\abs{\xi_p}\approx1$. For our parameter values, this corresponds to $\chi_{rs}/2\pi$ of the order of 200 kHz. We designed our system to obtain the latter coupling. We cannot increase this coupling too much, since we believe this would decrease the storage cavity lifetime due to the Purcell effect (in the near future, we plan on designing a pass-band, instead of a high-pass, Purcell filter to lift this constraint).
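As a numerical sanity check of this choice, the two-photon dissipation rate implied by the quoted parameters can be evaluated directly; a minimal sketch using the values above and in the tables ($\chi_{rs}/2\pi=206$~kHz, $\abs{\xi_p}^2=1.2$, $\kappa_r=(26~\text{ns})^{-1}$, and $\kappa_s=1/T_1$ with $T_1=20~\mu$s for the storage):

```python
import math

# Parameter values quoted in the text and tables (angular rates in rad/s)
chi_sr = 2 * math.pi * 0.206e6   # readout-storage cross-Kerr
xi_p_sq = 1.2                    # |xi_p|^2 at 100 mW pump power
kappa_r = 1 / 26e-9              # readout decay rate, (26 ns)^-1
kappa_s = 1 / 20e-6              # storage decay rate, 1/T1

# Engineered two-photon dissipation rate kappa_2 = chi_sr^2 |xi_p|^2 / kappa_r
kappa_2 = chi_sr**2 * xi_p_sq / kappa_r

kappa_2_kHz = kappa_2 / (2 * math.pi) / 1e3   # of order 10 kHz
ratio = kappa_2 / kappa_s                     # of order 1: comparable to single-photon loss
```

With these numbers $\kappa_2$ indeed comes out of the same order as $\kappa_s$, as targeted in the design discussion above.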
Since we have $\chi_{rs}=2\sqrt{\chi_{rr}\chi_{ss}}$, and we want a storage Kerr at most of the order of its linewidth (in order to minimize the distortion of the coherent state superpositions), we had to increase the readout Kerr $\chi_{rr}$ (by increasing the junction participation in this mode \cite{Nigg2012}) until we obtained the desired $\chi_{rs}$. Moreover, we needed to have a qubit mode to perform Wigner tomography. The latter necessitates short un-selective pulses on the qubit \cite{Vlastakis2013}. For this reason, we needed a large enough transmon anharmonicity, which necessarily implied a very large qubit-readout cross-Kerr (here $\chi_{qr}/2\pi=35~$MHz). Strongly coupling a qubit to a lossy resonator reduces its coherence times due to the Purcell effect. The use of a Purcell filter \cite{Reed2010} (described above) seemed favorable. This is why we designed our qubit frequency to be around 5 GHz, and the readout mode around 7 GHz, the former below and the latter above the waveguide cutoff frequency. The pump tone needs to be at $\omega_p=2\omega_s-\omega_r$, which is below the readout if $\omega_s<\omega_r$ and above otherwise. We thought it would be more cautious to have this strong pump tone as far as possible from the qubit (to avoid the pump coupling to the qubit mode), and therefore designed the storage mode to be about half a GHz above the readout. This way, the pump is one GHz above the readout mode, and hence three GHz above the qubit. The drawback of this design is that the storage mode is not protected by the Purcell filter since it is above cutoff. In the near future we will repeat this experiment with a pass-band Purcell filter. \subsection{Measurement methods} \subsubsection{Spectroscopy} Readout mode and qubit spectroscopy are obtained by performing transmission spectroscopy and saturation spectroscopy, respectively. 
Storage mode spectroscopy is obtained by sequentially sending a long ($100~\mu s$) and weak probe tone to the storage input port, then performing a selective $\pi$ pulse \cite{Johnson2010} on the qubit conditioned on there being zero photons in the storage, and finally measuring the qubit through the readout mode. If the probe tone is off-resonant with the storage mode frequency, the storage photon number remains zero, the $\pi$ pulse therefore inverts the qubit state. On the other hand, if the probe tone is resonant, the storage gets populated to larger photon numbers, hence the $\pi$ pulse cannot completely invert the qubit state. This change in qubit state vs. probe frequency is detected by the measurement pulse through the readout mode. \subsubsection{Lifetimes} Qubit lifetime $T_1$ and coherence time $T_2$ are measured with the usual $T_1$ and Ramsey pulse sequences. The readout mode lifetime is extracted from its linewidth. Since the readout mode has a relatively large Kerr ($\chi_{rr}/2\pi=2.14~$MHz), the transmission spectra are broadened by this Kerr as we increase the power of the probe tone. Hence, we perform transmission spectroscopy for decreasing probe power until the linewidth stops narrowing. The mode lifetime is then $1/\kappa_r$ where $\kappa_r/2\pi$ is the spectral linewidth at small probe powers. The storage mode lifetime is obtained by first displacing the storage state, and after a variable wait time, measuring the parity of the storage state. By fitting the data, we obtain the storage lifetime. \subsubsection{Thermal population} Qubit thermal population is obtained by taking a single shot histogram of the qubit state (see Fig.~\ref{fig:JPC}). We get the thermal excited state occupancy by extracting the probability of getting a count on the left hand side of the threshold (dotted line). We can give a bound on the thermal population $n^{\text{th}}_r$ of the readout mode. 
This thermal population $n^{\text{th}}_r$ induces a dephasing rate for the qubit given by $\kappa_{\phi,th}=n^{\text{th}}_r\kappa_r$, in the limit where $\chi_{qr}\gg \kappa_r$ \cite{Sears2012}. We know that the measured dephasing rate $\kappa_\phi=1/T_2-1/2T_1\approx 1/T_2$ (since $T_1\gg T_2$) is at least as large as $\kappa_{\phi,th}$. The inequality $\kappa_\phi\ge\kappa_{\phi,th}$ is equivalent to $$ n^{\text{th}}_r\le1/(T_2\kappa_r)= 2\%\;. $$ By measuring the qubit spectrum, number-split by the storage mode, we should in principle be able to measure the storage thermal occupancy. However, the spectrum linewidth sets a bound on the smallest storage thermal population that one can robustly measure. This linewidth $\kappa_\text{spec}/2\pi$ is due to the finite spectroscopy pulse length and power, and is bounded from below by $(2\pi T_2)^{-1}$. Assuming a small number of thermal photons $n^{\text{th}}_s\ll1$, the storage is, at equilibrium, in a mixture of the vacuum state with probability $(1-n^{\text{th}}_s)$ and the first excited state with probability $n^{\text{th}}_s$. The spectrum of the qubit is then $S(\omega)=(1-n^{\text{th}}_s)S_0(\omega)+n^{\text{th}}_sS_1(\omega)$, where $S_0$ and $S_1$ are the qubit spectra when the number of photons in the storage is 0 or 1, respectively. We have $S_k(\omega)=\frac{\abs{\epsilon_\text{probe}}^2}{\left(\frac{\kappa_\text{spec}}{2}\right)^2+(\omega-\omega_q+k\chi_{qs})^2}$, where $\epsilon_\text{probe}$ is the probe amplitude, and we have neglected the effect of $\kappa_s$ on $\kappa_\text{spec}$ since in practice $\kappa_s\ll\kappa_\text{spec}$. When we measured the spectrum $S$ while the storage was in thermal equilibrium, we could not resolve a peak at $\omega_q-\chi_{qs}$ corresponding to one photon. This implies that we have $n^{\text{th}}_sS_1(\omega_q-\chi_{qs})\le(1-n^{\text{th}}_s)S_0(\omega_q-\chi_{qs})$.
In our case, we took a qubit spectrum with a gaussian $\pi$ pulse ($800$ ns standard deviation), and we observed a linewidth $\kappa_{\text{spec}}=1/(0.23~\mu\text{s})$. In the limit where $\kappa_\text{spec}\ll \chi_{qs}$, this sets the following bound on our measure of $n^{\text{th}}_s$: $$ n^{\text{th}}_s\le(\kappa_{\text{spec}}/2\chi_{qs})^2 = 5\%\;. $$ \subsubsection{Cross-Kerr terms} The qubit-to-readout cross-Kerr is obtained by measuring the readout spectrum. Due to the thermal occupancy of the qubit, this spectrum exhibits two peaks, separated by $\chi_{qr}/2\pi$. The qubit-to-storage cross-Kerr is obtained by inserting photons in the storage and measuring a qubit spectrum. We see many peaks, each one corresponding to a photon number state in the storage. The linear dependence of the central frequency of each peak on the peak number gives the qubit-storage cross-Kerr (see Fig.~\ref{fig:qubitnumbersplitting}). This measurement is further refined by performing a parity revival experiment \cite{Vlastakis2013}. The readout-to-storage cross-Kerr is obtained by measuring the readout frequency as a function of the number of photons inserted in the storage. The readout mode frequency decreases linearly with the storage photon number, with a proportionality constant corresponding to the cross-Kerr. \subsubsection{Kerr terms} The transmon anharmonicity (also termed qubit Kerr $\chi_{qq}$) is obtained by performing qubit spectroscopy with increasing probe power until we observe the two-photon transition from $\ket{g}$ to $\ket{f}$, which is detuned from the main $\ket{g}$ to $\ket{e}$ peak by half the qubit anharmonicity. The readout mode Kerr is obtained from the pump Stark shift (Fig.~\ref{fig:Stark}). Indeed, as we will show in the following section, due to the pump, all three mode frequencies decrease linearly with the pump power. The ratio of the slope of the qubit shift to that of the readout shift is $\chi_{qr}/2\chi_{rr}$. Hence, knowing $\chi_{qr}$, we extract $\chi_{rr}$.
A useful check is to make sure that the ratio of slopes of the qubit and storage shifts is indeed $\chi_{qr}/\chi_{rs}$. We find that this value agrees with the independently measured cross-Kerr values with a deviation of $5\%$. The storage Kerr was not measured, but merely estimated from the formula $\chi_{ss}=\chi_{qs}^2/4\chi_{qq}$ \cite{Nigg2012}. \subsubsection{Photon number calibration} The storage cavity was displaced using a 20 ns square pulse. Similarly to \cite{Kirchmair2013}, we calibrate the amplitude of this pulse by measuring a cut of the Wigner function of the vacuum state, and fitting a gaussian to the data. The DAC-amplitude to photon-number correspondence is obtained by imposing that the standard deviation of this gaussian be $1/2$, since the Wigner function of the vacuum is the gaussian $W(\alpha)=\frac{2}{\pi}e^{-2\abs{\alpha}^2}$. We calibrate the number of photons in the readout mode by measuring the measurement-induced dephasing rate on the qubit while a tone is applied to the readout mode \cite{Gambetta2006}. \subsubsection{Phase locking} The quantum state produced in the storage is a consequence of non-linear mixing of the pump and drive tones in our Josephson circuit. If we used a third generator to probe the state of the storage, this generator would not be phase locked to the state in the storage, and we would expect all our Wigner functions to be completely smeared and to exhibit no phase coherence. To avoid this problem, we generate the pump and storage tones from two separate generators at $\omega_p$ and $\omega_s$, respectively, and we mix them at room temperature to generate the drive tone (see the dashed box in Fig.~\ref{fig:setup}). This is achieved by doubling the frequency of the storage generator to $2\omega_s$ using a mixer, and then mixing this doubled frequency with the pump to obtain $2\omega_s\pm\omega_p$. The upper sideband at $2\omega_s+\omega_p$ is then rejected by a low-pass filter with a 12 GHz cutoff frequency, so that only the drive tone at the desired frequency $\omega_d=2\omega_s-\omega_p$ enters our device.
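These consistency relations can be evaluated directly from the table values; a minimal numerical sketch (couplings $\chi/2\pi$ in MHz, taken from the dispersive-coupling table):

```python
import math

# Measured couplings chi/2pi in MHz, from the dispersive-coupling table
chi_qq, chi_qs, chi_qr = 130.0, 1.585, 35.0
chi_rr, chi_rs = 2.14, 0.206

# Storage self-Kerr estimated from the geometric relation chi_ss = chi_qs^2 / (4 chi_qq)
chi_ss = chi_qs**2 / (4 * chi_qq)         # a few kHz, the bracketed table entry

# Cross-check: chi_rs should satisfy chi_rs = 2 sqrt(chi_rr chi_ss)
chi_rs_pred = 2 * math.sqrt(chi_rr * chi_ss)

# Stark-shift slope ratios quoted in the text
slope_q_over_r = chi_qr / (2 * chi_rr)    # qubit shift / readout shift
slope_q_over_s = chi_qr / chi_rs          # qubit shift / storage shift
```

With these numbers, $\chi_{rs}^{\text{pred}}$ agrees with the measured 0.206 MHz at the few-percent level, consistent with the 5\% deviation quoted above.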
We use fast microwave switches controlled by markers from the arbitrary waveform generator (AWG) to produce the pulse sequences for the experiment. \subsubsection{Parity measurement and Wigner tomography} \label{sec:WignerTomo} The Wigner function uniquely defines the quantum state $\rho_s$ of an oscillator. It is defined as $W(\alpha)=\frac{2}{\pi}P(\alpha)$, where $P(\alpha)=\tr{\text{\bf{D}}_{-\alpha}\rho_s\text{\bf{D}}_{\alpha}e^{i\pi\text{\bf{a}}_s^\dag\text{\bf{a}}_s}}$ \cite{Haroche2006}. In this experiment, we directly measured $P(\alpha=I+iQ)$ following the measurement protocol of \cite{Vlastakis2013,Sun2014} (see Fig.~\ref{fig:pulseSeq}). In the data of Figs.~\ref{fig:time_evolution}-\ref{fig:Fock_time_evolution}, for each point $(I_k,Q_k)$ of the $I-Q$ plane, we repeat 10,000 times: \begin{enumerate} \item Initialize the qubit by measuring its state and post-selecting on it being in the ground state \item Displace the cavity state with a 20 ns square pulse of amplitude $a_k=\sqrt{I_k^2+Q_k^2}$ and phase $\phi_k=\arg(I_k+iQ_k)$ \item Perform a $+\pi/2$ pulse on the qubit around the X-axis \item Wait for $\pi/\chi_{qs}$ \item Perform a $+\pi/2$ pulse on the qubit around the X-axis (then repeat all steps with a $-\pi/2$ pulse) \item Measure the qubit state \end{enumerate} All measurements are single shot and are binned as 0 or 1 depending on whether the data point lies on the left or right of the threshold (see Fig.~\ref{fig:JPC}). Each one of the qubit pulses is a gaussian pulse with a 4 ns standard deviation, and we truncate the pulse length to 5 standard deviations. After post-selecting on the initial measurement, the data is averaged, and two Wigner maps are obtained: one corresponding to both pulses with a $+\pi/2$ angle, and the other to sequences where the second pulse has a $-\pi/2$ angle.
We then subtract these two maps in order to correct for systematic errors due to the readout-storage cross-Kerr and the finite un-selectivity of the $\pi/2$ pulses \cite{Vlastakis2013,Sun2014}. {Indeed, assume the storage is in a pure state $\ket{\psi}$, and we want to measure its Wigner function. We model the finite un-selectivity of the $\pi/2$ pulses by assuming that there is an $N_{\text{max}}$ such that if there are $n\le N_\text{max}$ photons in the cavity, the pulses are able to rotate the qubit state, whereas for all $n> N_\text{max}$, the qubit state is unaffected by the pulse. Each qubit measurement is thresholded and associated with the qubit being in state $g$ or $e$. The probability of measuring $m=g,e$ when the qubit was in fact in state $t=g,e$ is denoted $p^\alpha(m|t)$. The superscript $\alpha$, the displacement amplitude of the storage, is a simplified way of incorporating the readout-storage cross-Kerr and its effect on the readout fidelity in the presence of photons in the storage. First, we displace the state by $\alpha$, and denote the displaced state $\ket{\psi_\alpha}$. Second, we perform two $\pi/2$ pulses separated by a $\pi/\chi_{qs}$ wait time. We then obtain the following qubit-storage entangled state: $$\ket{\psi_\alpha}^+=\text{\bf{P}}_\text{even}\ket{\psi_\alpha}\ket{e}+\text{\bf{P}}_\text{odd}\ket{\psi_\alpha}\ket{g}+\text{\bf{P}}_{>N_\text{max}}\ket{\psi_\alpha}\ket{g}\;,$$ where $\text{\bf{P}}_\text{even}=\sum_{2n\le N_\text{max}}{\ket{2n}\bra{2n}}$, $\text{\bf{P}}_\text{odd}=\sum_{2n+1\le N_\text{max}}{\ket{2n+1}\bra{2n+1}}$ and $\text{\bf{P}}_{>N_\text{max}}=\sum_{n > N_\text{max}}{\ket{n}\bra{n}}$.
The measured quantity, which is the expectation value of the qubit energy, is \begin{eqnarray*} \bket{\sigma_z}^+&=&\norm{\text{\bf{P}}_\text{even}\ket{\psi_\alpha}}^2(p^\alpha(e|e)-p^\alpha(g|e))-\norm{\text{\bf{P}}_\text{odd}\ket{\psi_\alpha}}^2(p^\alpha(g|g)-p^\alpha(e|g))\\ &+&\norm{\text{\bf{P}}_{>N_\text{max}}\ket{\psi_\alpha}}^2 (p^\alpha(e|g)-p^\alpha(g|g))\;. \end{eqnarray*} When the second $\pi/2$ pulse has a $\pi$ phase shift, we get $$\ket{\psi_\alpha}^-=\text{\bf{P}}_\text{even}\ket{\psi_\alpha}\ket{g}+\text{\bf{P}}_\text{odd}\ket{\psi_\alpha}\ket{e}+\text{\bf{P}}_{>N_\text{max}}\ket{\psi_\alpha}\ket{g}\;,$$ and hence \begin{eqnarray*} \bket{\sigma_z}^-&=&\norm{\text{\bf{P}}_\text{even}\ket{\psi_\alpha}}^2(p^\alpha(e|g)-p^\alpha(g|g))-\norm{\text{\bf{P}}_\text{odd}\ket{\psi_\alpha}}^2(p^\alpha(g|e)-p^\alpha(e|e))\\ &+&\norm{\text{\bf{P}}_{>N_\text{max}}\ket{\psi_\alpha}}^2 (p^\alpha(e|g)-p^\alpha(g|g))\;. \end{eqnarray*} We then subtract these two expectation values and obtain $\Delta\bket{\sigma_z}=C_\alpha(\norm{\text{\bf{P}}_\text{even}\ket{\psi_\alpha}}^2-\norm{\text{\bf{P}}_\text{odd}\ket{\psi_\alpha}}^2)=C_\alpha P(\alpha)$, where the contrast $C_\alpha$ is given by $C_\alpha=\frac{1}{2}(p^\alpha(g|g)+p^\alpha(e|e)-p^\alpha(e|g)-p^\alpha(g|e))$. In the case of perfect readout: $p^\alpha(g|g)=p^\alpha(e|e)=1$ and $p^\alpha(e|g)=p^\alpha(g|e)=0$, and hence $C_\alpha= 1$. Notice that this subtraction eliminates the third term in $\bket{\sigma_z}^\pm$, which is due to the finite un-selectivity of the pulses and would otherwise appear as an offset in the Wigner tomography. This subtraction also makes the effect of the storage-readout cross-Kerr symmetric, introducing no bias towards positive or negative values.} {From these measured Wigner functions, one can reconstruct a density matrix that best reproduces the data \cite{Vlastakis2013}.
As a consistency check, we can compare the diagonal elements of this reconstructed density matrix to the directly measured photon number probabilities using qubit spectroscopy. As shown in Fig.~\ref{fig:Numbersplitting_time_evolution}, there is good agreement between these two independent measurements. One can also extract the expectation value of any observable directly from the measured Wigner function, and compare it to the theoretical predictions through numerical simulations. This comparison is made in Figs.~\ref{fig:Obs_vs_time}-\ref{fig:Obs_vs_time_2}, and we observe good agreement between theory and experiment.} \subsubsection{Qubit dynamics during the pumping} When the pump and the drive tones are on, the readout mode remains mainly in vacuum and the storage state evolves from vacuum to a mixture of coherent states, while transiting through a coherent state superposition (see Fig.~4 of the main text). In principle, if the dynamics of the three modes (qubit, readout, storage) were fully captured by the Hamiltonian of Eqs.~\eqref{eq:totalH1} and \eqref{eq:totalH2}, the qubit state would not be influenced by the pumping. For example, if we initialize the qubit in its ground state before activating the pump and drive tones, the qubit should remain in its ground state, unless it absorbs a thermal photon, and this thermal absorption rate should be independent of the number of photons in the two other modes. However, we have observed that when the pumping is on, as the photon number in the storage mode increases, the qubit thermal occupation increases significantly. This is most likely related to the previously unexplained mechanism which causes the qubit lifetime to decrease when photons are inserted in the readout mode \cite{Slichter2012}. The parametric pumping mechanism relies on the frequency matching condition $\omega_p=2\omega_s-\omega_r$, where $\omega_{p,s,r}$ are the pump, storage and readout frequencies, respectively.
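The frequency-matching bookkeeping can be reproduced from the table values; a minimal sketch (frequencies in GHz, couplings $\chi/2\pi$ in MHz, using the relation $g_2=\chi_{sr}\xi_p^*/2$ derived in the supplementary text below; an illustrative check, not a calibration):

```python
import math

# Mode frequencies (GHz) and couplings chi/2pi (MHz) from the tables
f_s, f_r, f_q = 7.57861, 7.152, 4.9007
chi_qr, chi_qs, chi_sr = 35.0, 1.585, 0.206
xi_p_sq = 1.2   # |xi_p|^2 at the operating pump power

# Frequency matching condition for the pump
f_p = 2 * f_s - f_r              # ~8.005 GHz: ~0.9 GHz above the readout, ~3.1 GHz above the qubit

# Dispersive mismatch when the qubit jumps to |e>
mismatch = chi_qr - chi_qs       # ~33.4 MHz, far larger than g2

# Two-photon coupling strength |g2| = chi_sr |xi_p| / 2 (MHz)
g2 = chi_sr * math.sqrt(xi_p_sq) / 2   # ~0.11 MHz
```

The mismatch being several hundred times $g_2$ is what interrupts the pumping process when the qubit jumps to its excited state.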
The pump and drive tone frequencies need to be tuned with a precision of order $g_2$, as observed in Fig.~2 of the main paper, and computed in the next section (see Eq.~\eqref{eq:g2}). In our experiment, we tune these tones to fulfill this condition when the qubit is in its ground state. If the qubit suddenly jumps to the excited state while the pumping is activated, the readout and storage frequencies will shift by their respective dispersive couplings to the qubit $\chi_{qr}$ and $\chi_{qs}$. In particular, $(\chi_{qr}-\chi_{qs})/2\pi=33.4~$MHz $\gg g_2/2\pi=111~$kHz. Hence, the frequency matching condition no longer holds, and the pumping process is interrupted. This undesirable process can be partially filtered out by measuring the qubit state after completing the pumping, and post-selecting on the qubit being in its ground state (see Fig.~\ref{fig:pulseSeq}). However, we do not filter out processes where the qubit jumped up to the excited state for a random time, and jumped back down to its ground state before the measurement is performed. We believe it is these kinds of processes that produce an excess of $n=0$ population in the storage (see Fig.~\ref{fig:Numbersplitting_time_evolution}). The effect of these large pumps and populated modes on the qubit decay rates is subject to ongoing research. \section{Supplementary text} \subsection{The pumped Josephson circuit Hamiltonian} We start by writing the Hamiltonian of the qubit, readout and storage modes coupled to a Josephson junction, with two tones (the drive and the pump) applied to the readout mode. \begin{eqnarray*} \text{\bf{H}}/\hbar&=&\sum_{m=q,r,s}\bar{\omega}_m\text{\bf{a}}_m^\dag\text{\bf{a}}_m-\frac{E_J}{\hbar}\left(\cos(\text{\boldsymbol{$\varphi$}})+\text{\boldsymbol{$\varphi$}}^2/2\right) +2\Re\left(\epsilon_pe^{-i\omega_p t}+\epsilon_de^{-i\omega_d t}\right)(\text{\bf{a}}_r+\text{\bf{a}}_r^\dag)\;,\\ \text{\boldsymbol{$\varphi$}}&=&\sum_{m=q,r,s}\varphi_m(\text{\bf{a}}_m+\text{\bf{a}}_m^\dag)\;.
\end{eqnarray*} The first term corresponds to the linear Hamiltonian of each mode, with annihilation operator $\text{\bf{a}}_m$. Their bare frequencies $\bar\omega_m$ are shifted towards the measured frequencies $\omega_m$ due to the contribution of the Josephson junction in the Hamiltonian. The latter is represented by the cosine term, from which we have removed the quadratic terms by including them in the linear part of the Hamiltonian. $E_J$ is the Josephson energy, and $\text{\boldsymbol{$\varphi$}}$ is the phase across the junction, which can be decomposed as the linear combination of the phase across each mode, with $\varphi_m$ denoting the contribution of mode $m$ to the zero point fluctuations of $\text{\boldsymbol{$\varphi$}}$. The system is irradiated by drive and pump tones with complex amplitudes $\epsilon_d,~\epsilon_p$ and frequencies $\omega_d,~\omega_p$, respectively. $\Re()$ denotes the real part. The pump is a large amplitude, far off-resonant tone, while the drive is a weak tone nearly resonant with the readout mode. We place ourselves in a regime where $$ \omega_{p},~\omega_d,~\bar\omega_m \gg \epsilon_p\sim (\omega_p-\bar\omega_r)\gg \frac{E_J}{\hbar} \norm{\text{\boldsymbol{$\varphi$}}}^4/4!\;. $$ In order to eliminate the fastest time scales corresponding to the system frequencies and the pump amplitude, we make a change of frame using the unitary \begin{eqnarray*} U&=&{e^{i\bar\omega_q t \text{\bf{a}}_q^\dag\text{\bf{a}}_q}}{e^{i\omega_d t \text{\bf{a}}_r^\dag\text{\bf{a}}_r}}{e^{i\frac{\omega_p+\omega_d}{2} t \text{\bf{a}}_s^\dag\text{\bf{a}}_s}}e^{-\tilde\xi_p\text{\bf{a}}_r^\dag+\tilde\xi_p^*\text{\bf{a}}_r}\;,\\ \frac{d\tilde\xi_p}{dt}&=&-i\bar\omega_r\tilde\xi_p-i2\Re\left(\epsilon_pe^{-i\omega_pt}\right)-\frac{\kappa_r}{2}\tilde\xi_p\;.
\end{eqnarray*} After a time scale of order $1/\kappa_r$ we have $\tilde\xi_p\approx \xi_pe^{-i\omega_p t}$, $\xi_p={-i\epsilon_p}/\left({\frac{\kappa_r}{2}+i(\bar\omega_r-\omega_p)}\right)\approx{-i\epsilon_p}/\left({\frac{\kappa_r}{2}+i(\omega_r-\omega_p)}\right)$. In this new frame, the Hamiltonian is \begin{eqnarray*} \tilde\text{\bf{H}}/\hbar&=&(\bar\omega_r-\omega_d)\text{\bf{a}}_r^\dag\text{\bf{a}}_r+(\bar\omega_s-\frac{\omega_p+\omega_d}{2})\text{\bf{a}}_s^\dag\text{\bf{a}}_s-\frac{E_J}{\hbar}(\cos(\tilde\text{\boldsymbol{$\varphi$}})+\tilde\text{\boldsymbol{$\varphi$}}^2/2)\;,\\ \tilde\text{\boldsymbol{$\varphi$}}&=&\sum_{k=q,r,s}\phi_k(\tilde\text{\bf{a}}_k+\tilde\text{\bf{a}}_k^\dag)+(\tilde\xi_p+\tilde\xi_p^*)\phi_r\;,\\ \tilde\text{\bf{a}}_q&=&e^{-i\bar\omega_q t}\text{\bf{a}}_q\;,\tilde\text{\bf{a}}_r=e^{-i\omega_d t}\text{\bf{a}}_r\;, \tilde\text{\bf{a}}_s=e^{-i\frac{\omega_p+\omega_d}{2} t}\text{\bf{a}}_s\;. \end{eqnarray*} We now expand the cosine up to the fourth order, and only keep non rotating terms: \begin{eqnarray} \tilde\text{\bf{H}}&\approx&\text{\bf{H}}_{\text{shift}}+\text{\bf{H}}_{\text{Kerr}}+\text{\bf{H}}_{2}\;, \label{eq:totalH1} \end{eqnarray} where : \begin{eqnarray} \text{\bf{H}}_{\text{shift}}&=&(-\delta_q-\chi_{qr}\abs{\xi_p}^2)\text{\bf{a}}_q^\dag\text{\bf{a}}_q\notag\\ &+&(\bar\omega_r-\omega_d-\delta_r-{2\chi_{rr}}\abs{\xi_p}^2)\text{\bf{a}}_r^\dag\text{\bf{a}}_r\notag\\ &+&(\bar\omega_s-\frac{\omega_p+\omega_d}{2}-\delta_s-\chi_{rs}\abs{\xi_p}^2)\text{\bf{a}}_s^\dag\text{\bf{a}}_s\;,\notag\\ \text{\bf{H}}_{\text{Kerr}}&=&-\sum_{m=q,r,s}\frac{\chi_{mm}}{2}{\text{\bf{a}}_m^\dag}^2\text{\bf{a}}_m^2-\chi_{qr}\text{\bf{a}}_q^\dag\text{\bf{a}}_q\text{\bf{a}}_r^\dag\text{\bf{a}}_r-\chi_{qs}\text{\bf{a}}_q^\dag\text{\bf{a}}_q\text{\bf{a}}_s^\dag\text{\bf{a}}_s-\chi_{rs}\text{\bf{a}}_r^\dag\text{\bf{a}}_r\text{\bf{a}}_s^\dag\text{\bf{a}}_s\;,\notag\\ 
\text{\bf{H}}_{2}&=&g_{2}^*\text{\bf{a}}_s^2\text{\bf{a}}_r^\dag+g_{2}(\text{\bf{a}}_s^\dag)^2\text{\bf{a}}_r+\epsilon_d\text{\bf{a}}_r^\dag+\epsilon_d^*\text{\bf{a}}_r\;. \label{eq:totalH2} \end{eqnarray} The first term $\text{\bf{H}}_{\text{shift}}$ corresponds to the modes' frequency shifts. The bare frequencies are shifted by $\delta_{q,r,s}$, which arise from the operator ordering chosen in $\text{\bf{H}}_{\text{Kerr}}$. Moreover, the frequencies are shifted down by a term proportional to $\abs{\xi_p}^2$, which corresponds to the AC Stark shift induced by the pump. We observe this linear shift vs. pump power in Fig.~\ref{fig:Stark}. The second term $\text{\bf{H}}_{\text{Kerr}}$ corresponds to self-Kerr and cross-Kerr coupling terms \cite{Nigg2012}. We have $\chi_{mm}=\frac{E_J}{\hbar}\varphi_m^4/2$ and $\chi_{mm'}=\frac{E_J}{\hbar}\varphi_m^2\varphi_{m'}^2$. The last term $\text{\bf{H}}_2$ contains the terms responsible for the physics observed in this paper: it is the microscopic Hamiltonian of a degenerate parametric oscillator \cite{Wolinsky1988}. The first term in this Hamiltonian is a non-linear coupling between the storage and readout modes: two photons from the storage can swap with a single photon in the readout. In contrast to the usual parametric oscillator, the frequency of our readout mode is not twice that of the storage mode. This term is produced by four-wave mixing of the pump and the readout and storage modes, and the coupling strength is given by $$ g_{2}=\chi_{rs}\xi_p^*/2\;. $$ The second term in $\text{\bf{H}}_2$, proportional to $\epsilon_d$, is a coherent drive on the readout mode. It corresponds to the input energy which is converted into pairs of photons in the storage, thus creating coherent state superpositions. \subsection{Two-mode model and semi-classical analysis} Here we assume the qubit remains in its ground state.
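As an aside, the magnitudes of the pump-induced displacement $\xi_p$ and of the effective coupling $g_2$ can be checked with a few lines of arithmetic. The following sketch uses illustrative, unitless parameter values (assumptions for the example, not the experimental numbers):

```python
import math

# Illustrative, unitless parameters (kappa_r sets the frequency unit);
# these values are assumptions, not the experimental ones.
kappa_r = 1.0      # readout mode linewidth
delta_rp = 400.0   # pump detuning  omega_r - omega_p
eps_p = 2000.0     # pump amplitude
chi_rs = 0.01      # readout-storage cross-Kerr

# steady-state pump-induced displacement of the readout mode:
# xi_p = -i eps_p / (kappa_r/2 + i (omega_r - omega_p))
xi_p = -1j * eps_p / (kappa_r / 2 + 1j * delta_rp)

# effective two-photon coupling  g2 = chi_rs * conj(xi_p) / 2
g2 = chi_rs * xi_p.conjugate() / 2

# for a far-detuned pump, |xi_p| ~ eps_p / |detuning|
```

For these numbers $\abs{\xi_p}\approx5$ and $\abs{g_2}\approx0.025\,\kappa_r$; since $\abs{\xi_p}^2\propto\abs{\epsilon_p}^2$, the AC Stark shifts scale linearly with pump power, as in Fig.~\ref{fig:Stark}.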
The storage and readout modes evolve under the Hamiltonian: \begin{eqnarray} \text{\bf{H}}_{sr}&=&\Delta_d\text{\bf{a}}_r^\dag\text{\bf{a}}_r+\frac{\Delta_p+\Delta_d}{2}\text{\bf{a}}_s^\dag\text{\bf{a}}_s\notag\\ &+&g_{2}^*\text{\bf{a}}_s^2\text{\bf{a}}_r^\dag+g_{2}(\text{\bf{a}}_s^\dag)^2\text{\bf{a}}_r+\epsilon_d\text{\bf{a}}_r^\dag+\epsilon_d^*\text{\bf{a}}_r\notag\\ &-&\chi_{rs}\text{\bf{a}}_r^\dag\text{\bf{a}}_r\text{\bf{a}}_s^\dag\text{\bf{a}}_s-\sum_{m=r,s}\frac{\chi_{mm}}{2}{\text{\bf{a}}_m^\dag}^2\text{\bf{a}}_m^2\;, \label{eq:Hsr} \end{eqnarray} where $\Delta_d=\bar\omega_r-\omega_d-\delta_r-{2\chi_{rr}}\abs{\xi_p}^2$ and $\Delta_p=-\Delta_d+2(\bar\omega_s-\frac{\omega_p+\omega_d}{2}-\delta_s-\chi_{rs}\abs{\xi_p}^2)$. Theory curves of Fig.~2 in the main paper are obtained by numerically finding the steady state density matrix of the Lindblad equation with damping operators $\sqrt{\kappa_r}\text{\bf{a}}_r$ and $\sqrt{\kappa_s}\text{\bf{a}}_s$ and Hamiltonian $\text{\bf{H}}_{sr}$. We now write the quantum Langevin equations with damping, which require including incoming bath fields $\text{\bf{a}}_r^{in}$ and $\text{\bf{a}}_s^{in}$ \cite{Drummond2010}: \begin{eqnarray*} \frac{d}{dt}\text{\bf{a}}_r&=&-i[\text{\bf{a}}_r,\text{\bf{H}}_{sr}]-\frac{\kappa_r}{2}\text{\bf{a}}_r+\sqrt{\kappa_r}\text{\bf{a}}_r^{in}\;,\\ \frac{d}{dt}\text{\bf{a}}_s&=&-i[\text{\bf{a}}_s,\text{\bf{H}}_{sr}]-\frac{\kappa_s}{2}\text{\bf{a}}_s+\sqrt{\kappa_s}\text{\bf{a}}_s^{in}\;. \end{eqnarray*} The remainder of this section is devoted to gaining some insight into the steady state solutions of the equations above. We simplify this task by neglecting the Kerr terms in the Hamiltonian.
This leads us to: \begin{eqnarray} \frac{d}{dt}\text{\bf{a}}_r&=&-i\Delta_d\text{\bf{a}}_r-ig_{2}^*\text{\bf{a}}_s^2-i\epsilon_d-\frac{\kappa_r}{2}\text{\bf{a}}_r+\sqrt{\kappa_r}\text{\bf{a}}_r^{in}\label{Langevin2modes_r}\;,\\ \frac{d}{dt}\text{\bf{a}}_s&=&-i\frac{\Delta_p+\Delta_d}{2}\text{\bf{a}}_s-2ig_{2}\text{\bf{a}}_s^\dag\text{\bf{a}}_r-\frac{\kappa_s}{2}\text{\bf{a}}_s+\sqrt{\kappa_s}\text{\bf{a}}_s^{in}\;. \label{Langevin2modes_s} \end{eqnarray} We can further simplify these nonlinear Langevin equations by taking the classical limit, where the field operators are replaced by their complex expectation values \cite[chapter 4]{Drummond2010}: \begin{eqnarray*} 0&=&-i\Delta_da_r-ig_{2}^*a_s^2-i\epsilon_d-\frac{\kappa_r}{2}a_r\;,\\ 0&=&-i\frac{\Delta_p+\Delta_d}{2}a_s-2ig_{2}a_s^*a_r-\frac{\kappa_s}{2}a_s\;. \end{eqnarray*} One solution is \begin{equation} a_s=0 \;, a_r=-i\epsilon_d/(\frac{\kappa_r}{2}+i\Delta_d)\;. \label{eq:g2} \end{equation} This is the usual classical Lorentzian response of a driven-damped oscillator. Now assuming $a_s\ne 0$, we obtain a second solution for $a_r$: $$ a_r=\frac{-\Delta_p-\Delta_d+i\kappa_s}{4g_{2}}e^{2i\theta_s}\;, $$ where $\theta_s$ is the phase of $a_s$. Here, the modulus squared of $a_r$ is a parabolic function of the detuning $\Delta_p+\Delta_d$, with curvature $1/(16\abs{g_{2}}^2)$ and minimal value $\abs{\kappa_s/4g_2}^2$. This corresponds to the dip observed in Fig.~2 of the main paper, and its depth is a direct signature of the fact that $g_2\gg \kappa_s$. The response of the storage cavity $a_s$ satisfies: \begin{eqnarray*} a_s^2&=&\frac{1}{g_{2}^*}(-\Delta_d+i\frac{\kappa_r}{2}) a_r-\frac{\epsilon_d}{g_{2}^*}\;,\\ \abs{a_s}^2&=&\frac{1}{4\abs{g_{2}}^2}(\Delta_d-i\frac{\kappa_r}{2})(\Delta_p+\Delta_d-i\kappa_s)-\frac{\epsilon_d}{g_{2}^*}e^{-2i\theta_s}\;.
\end{eqnarray*} A sufficient condition for this equation to have a solution is $$ \frac{\abs{\Delta_d-i\frac{\kappa_r}{2}}\abs{\Delta_p+\Delta_d-i\kappa_s}}{4\abs{g_{2}\epsilon_d}}\le 1\;. $$ {A model for the response $\abs{a_r}^2$ of the readout mode as a function of the readout probe and pump tone detunings is:} $$ \abs{a_r}^2(\Delta_d,\Delta_p)=\text{min}\left(\frac{\abs{\epsilon_d}^2}{\frac{\kappa_r^2}{4}+\Delta_d^2},\frac{(\Delta_p+\Delta_d)^2+\kappa_s^2}{16\abs{g_{2}}^2}\right)\;. $$ We have checked that this simple semi-classical expression without Kerr terms captures the main features of the data in Fig.~2 (a) of the main paper. However, the transient coherent state superposition shown in Fig.~4 of the main paper cannot be explained by such a semi-classical model: it is a quantum signature of our system. \subsection{Single-mode model and classical analysis} \subsubsection{Adiabatic elimination of the readout mode} {We can adiabatically eliminate the readout mode \cite{Carmichael2007}, and obtain a master equation for the reduced density matrix of the storage mode alone. Let $\rho_{sr}$ be the density matrix which represents the joint readout and storage state. It satisfies} \begin{equation} \frac{d}{dt}\rho_{sr}=-i[\text{\bf{H}}_{sr},\rho_{sr}]+\frac{\kappa_r}{2}\text{\bf{D}}[\text{\bf{a}}_r]\rho_{sr}+\frac{\kappa_s}{2}\text{\bf{D}}[\text{\bf{a}}_s]\rho_{sr}\;, \label{eq:Lindblad_sr} \end{equation} {where the Hamiltonian $\text{\bf{H}}_{sr}$ is given in \eqref{eq:Hsr}, and here we take $\Delta_d=\Delta_p=0$. Let $\delta\ll1$ be a small dimensionless parameter. We place ourselves in the regime where $g_2/\kappa_r, \epsilon_d/ \kappa_r, \chi_{rs}/\kappa_r\sim \delta$ and $\chi_{ss}/\kappa_r,\kappa_s/\kappa_r\sim\delta^2$. We assume that the number of photons in the readout mode is always much smaller than one.
We then search for a solution of \eqref{eq:Lindblad_sr} in the form} $$ \rho_{sr}=\rho_{00}\ket{0}\bra{0}+\delta\left(\rho_{01}\ket{0}\bra{1}+\rho_{10}\ket{1}\bra{0}\right)+\delta^2\left(\rho_{11}\ket{1}\bra{1}+\rho_{02}\ket{0}\bra{2}+\rho_{20}\ket{2}\bra{0}\right)+O(\delta^3)\;, $$ {where $\rho_{mn}$ acts on the storage Hilbert space, whereas $\ket{m}\bra{n}$ acts on the readout Hilbert space. The goal here is to derive the dynamics of $\rho_s=\text{Tr}_r(\rho_{sr})=\rho_{00}+\delta^2\rho_{11}$ up to second order in $\delta$, where $\text{Tr}_r$ denotes the partial trace over the readout degrees of freedom. First, let's multiply \eqref{eq:Lindblad_sr} by $\bra{0}$ and $\ket{0}$. We get, up to second order terms in $\delta$:} \begin{eqnarray} \frac{d}{\kappa_r dt}\rho_{00}&=&-\frac{i}{\kappa_r}\bra{0}[\text{\bf{H}}_{sr},\rho_{sr}]\ket{0}+\delta^2\rho_{11}+\frac{\kappa_s}{2\kappa_r}\text{\bf{D}}[\text{\bf{a}}_s]\rho_{00}+O(\delta^3)\notag\\ &=&-i\delta^2\left(\text{\bf{A}}^\dag\rho_{10}-\rho_{01}\text{\bf{A}}\right)-i[-\frac{\chi_{ss}}{2\kappa_r}(\text{\bf{a}}_s^\dag)^2\text{\bf{a}}_s^2,\rho_{00}]+\delta^2\rho_{11}+\frac{\kappa_s}{2\kappa_r}\text{\bf{D}}[\text{\bf{a}}_s]\rho_{00}\label{eq:lindblad00}\\ &+&O(\delta^3)\notag\;, \end{eqnarray} {where $\text{\bf{A}}=\frac{1}{\delta\kappa_r}(g_2^*\text{\bf{a}}_s^2+\epsilon_d)$, and hence $\norm{\text{\bf{A}}}=O(1)$ in $\delta$. We now need to find expressions for $\rho_{01}$, $\rho_{10}$ and $\rho_{11}$ up to zeroth order in $\delta$. We find, neglecting terms of order $\delta$ and higher:} \begin{eqnarray} \frac{d}{\kappa_rdt}\rho_{10}&=&-i\text{\bf{A}}\rho_{00}-\frac{1}{2}\rho_{10}+O(\delta)\;,\\ \frac{d}{\kappa_rdt}\rho_{11}&=&-i\left(\text{\bf{A}}\rho_{01}-\rho_{10}\text{\bf{A}}^\dag\right)-\rho_{11}+O(\delta)\;. \end{eqnarray} {The derivative of $\rho_{10}$ has two terms: the first one can be interpreted as an external driving term, and the second is a damping term.
Although the first term is time dependent, making this equation difficult to solve exactly, we know that its temporal variation is slow (of order $\delta^2$) in comparison to the damping rate (of order 1). This is where we make the adiabatic approximation: we assume that $\rho_{10}$ is continuously in its steady state. The same reasoning then applies to $\rho_{11}$, which yields: } \begin{eqnarray} \rho_{10}&=&-2i\text{\bf{A}}\rho_{00}+O(\delta)\;,\\ \rho_{11}&=&-i\left(\text{\bf{A}}\rho_{01}-\rho_{10}\text{\bf{A}}^\dag\right)+O(\delta)\\ &=&4\text{\bf{A}}\rho_{00}\text{\bf{A}}^\dag+O(\delta)\;. \end{eqnarray} { Injecting these expressions in \eqref{eq:lindblad00}, and rearranging terms, we find} \begin{eqnarray*} \frac{d}{dt}\rho_s&=&-i[\text{\bf{H}}_s,\rho_s]+\frac{\kappa_2}{2}D[\text{\bf{a}}_s^2]\rho_s+\frac{\kappa_s}{2}D[\text{\bf{a}}_s]\rho_s\;,\\ \text{\bf{H}}_s&=&\epsilon_2^*\text{\bf{a}}_s^2+\epsilon_2(\text{\bf{a}}_s^\dag)^2-\frac{\chi_{ss}}{2}{\text{\bf{a}}_s^\dag}^2\text{\bf{a}}_s^2\;, \end{eqnarray*} {with} $$ \kappa_2=4\abs{g_2}^2/\kappa_r\;,\qquad \epsilon_2=-2ig_2\epsilon_d/\kappa_r\;. $$ \subsubsection{Semi-classical analysis} Let's define $\alpha(t)=\tr{\text{\bf{a}}_s\rho_s}$ and calculate its dynamics. Using $[\text{\bf{a}}_s,(\text{\bf{a}}_s^\dag)^2]=2\text{\bf{a}}_s^\dag$, $[\text{\bf{a}}_s,{\text{\bf{a}}_s^\dag}^2\text{\bf{a}}_s^2]=2\text{\bf{a}}_s^\dag\text{\bf{a}}_s^2$ and $\tr{\text{\bf{a}}_s D[\text{\bf{a}}_s^2]\rho_s}=-2\tr{\text{\bf{a}}_s^\dag\text{\bf{a}}_s^2\rho_s}$, we find \begin{eqnarray*} \frac{d}{dt}\alpha&=&-2i\epsilon_2\tr{\text{\bf{a}}_s^\dag\rho_s}+i\chi_{ss}\tr{\text{\bf{a}}_s^\dag\text{\bf{a}}_s^2\rho_s} -\kappa_2\tr{\text{\bf{a}}_s^\dag\text{\bf{a}}_s^2\rho_s}-\frac{\kappa_s}{2}\alpha\;. 
\end{eqnarray*} Assuming a solution in the form of a coherent state $\rho_s(t)=\ket{\alpha(t)}\bra{\alpha(t)}$, we find \begin{eqnarray*} \frac{d}{dt}\alpha&=&-2i\epsilon_2\alpha^*-\left(-i\chi_{ss}+\kappa_2\right)\abs{\alpha}^2\alpha-\frac{\kappa_s}{2}\alpha\;. \end{eqnarray*} The central panel of Fig.~3 of the main paper illustrates this equation. The white lines correspond to trajectories governed by this equation, and the absolute value $\abs{\frac{d}{dt}\alpha}$ is represented by the colormap. In steady state $\alpha(t)\rightarrow\alpha_\infty$, and we have \begin{eqnarray*} 0&=&-2i\epsilon_2\alpha_\infty^*-\left(-i\chi_{ss}+\kappa_2\right)\abs{\alpha_\infty}^2\alpha_\infty-\frac{\kappa_s}{2}\alpha_\infty\;. \end{eqnarray*} We write $\alpha_\infty$ in the form $\alpha_\infty=r_\infty e^{i\theta_\infty}$ and $-i\chi_{ss}+\kappa_2=r_2e^{i\varphi_2}$: \begin{eqnarray*} 2i\epsilon_2r_\infty e^{-i\theta_\infty}&=&-r_2e^{i\varphi_2}r_\infty^3 e^{i\theta_\infty}-\frac{\kappa_s}{2}r_\infty e^{i\theta_\infty}\;. \end{eqnarray*} Notice that $\alpha_\infty=0$ is a solution; now assume $\alpha_\infty\ne 0$: \begin{eqnarray*} -2i\epsilon_2e^{-2i\theta_\infty}&=&r_2e^{i\varphi_2}r_\infty^2+\frac{\kappa_s}{2}\;. \end{eqnarray*} Taking the modulus squared of this equation we get \begin{eqnarray*} r_2^2r_{\infty}^4+r_2\kappa_s\cos(\varphi_2)r_{\infty}^2+\frac{\kappa_s^2}{4}-4\abs{\epsilon_2}^2&=&0\;. \end{eqnarray*} The latter equation is quadratic in $r_\infty^2$, and we assume $\varphi_2$ small enough in order for its discriminant to be positive. If $\abs{\epsilon_2}\le\frac{\kappa_s}{4}$, this equation has no positive roots and hence $\alpha_\infty=0$ is the unique solution.
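This threshold behaviour can be checked by integrating the semi-classical equation of motion directly. A minimal forward-Euler sketch (all parameter values are illustrative; the supercritical prediction uses the $\chi_{ss}=0$ root of the quadratic above):

```python
import math

def steady_state_modulus(eps2, kappa2, kappa_s, chi_ss=0.0,
                         alpha0=0.1 + 0.0j, dt=0.01, steps=10_000):
    """Integrate d(alpha)/dt = -2i*eps2*conj(alpha)
    - (kappa2 - i*chi_ss)*|alpha|^2*alpha - (kappa_s/2)*alpha
    by forward Euler and return |alpha| at the final time."""
    a = alpha0
    for _ in range(steps):
        da = (-2j * eps2 * a.conjugate()
              - (kappa2 - 1j * chi_ss) * abs(a) ** 2 * a
              - 0.5 * kappa_s * a)
        a += dt * da
    return abs(a)

# below threshold (|eps2| <= kappa_s/4): alpha decays to zero
r_sub = steady_state_modulus(eps2=0.05, kappa2=1.0, kappa_s=0.4)

# above threshold, chi_ss = 0: |alpha| -> sqrt((2|eps2| - kappa_s/2)/kappa2)
r_sup = steady_state_modulus(eps2=1.0, kappa2=1.0, kappa_s=0.4)
r_pred = math.sqrt((2 * 1.0 - 0.4 / 2) / 1.0)
```

In the supercritical case the modulus settles at $r_\infty$ while the phase selects one of two opposite values, consistent with the two steady states of opposite sign.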
Now let's assume $\abs{\epsilon_2}>\frac{\kappa_s}{4}$, then \begin{eqnarray*} r_{\infty}^2&=&\frac{1}{2r_2^2}\left(-r_2\kappa_s\cos(\varphi_2)+\sqrt{\left(r_2\kappa_s\cos(\varphi_2)\right)^2-4r_2^2(\frac{\kappa_s^2}{4}-4\abs{\epsilon_2}^2)}\right)\;, \end{eqnarray*} and two solutions exist for the phase $\theta_\infty$: \begin{eqnarray*} \theta_\infty^-&=&\theta_{2}/2+3\pi/4-\varphi_K/2\;,\\ \theta_\infty^+&=&\theta_\infty^-+\pi\;, \end{eqnarray*} where $\theta_{2}$ is the phase of $\epsilon_2$, and $\varphi_K=\arctan(\frac{r_{\infty}^2r_2\sin(\varphi_2)}{r_{\infty}^2r_2\cos(\varphi_2)+\kappa_s/2})$. Note that if $\chi_{ss}=0$, then $r_2=\kappa_2$ and $\varphi_2=0$ and we find $$r_\infty\Big|_{\chi_{ss}=0}=\sqrt{\frac{2\abs{\epsilon_2}-{\kappa_s}/{2}}{\kappa_2}}\;.$$ \begin{figure} \caption{Experiment schematic.} \label{fig:setup} \end{figure} \begin{figure} \caption{{Pictures of the device. (a) Photograph of the two halves of our 3D aluminum cavities, the bridge transmon on a sapphire chip, and the rectangular waveguide. The left half is screwed on to the right one. The readout cavity has a hole which couples it to the rectangular waveguide behind it, which in turn is coupled to a transmission line through a waveguide to SMA adapter. (b) Left: schematic of the JJ and the antenna pads. Top right: optical image of the region containing the JJ and the gap capacitor. Bottom right: scanning electron microscope image of the JJ.}} \label{fig:cavPic} \end{figure} \begin{figure} \caption{Single shot readout of the qubit state with the JPC. Top left panel: two-dimensional histogram of the ($I$,$Q$) values of 820,000 measurements of the qubit in thermal equilibrium (20\% ground state and 80\% excited state). This histogram was rotated such that the information about the qubit state is encoded in the $I$ quadrature. The right and left gaussian distributions correspond to the qubit in $\ket{g}$ and $\ket{e}$ respectively.
Bottom panel: histogram of the $I$ values, where the sum of two gaussians (full line) is fitted to the data (full dots). Right panel: histogram of the $Q$ values, where a single gaussian (full line) is fitted to the data (full dots). The dotted line is the measurement threshold: if a data point lies on the left or right of this threshold, the outcome is associated with $\ket{e}$ or $\ket{g}$ respectively. The right gaussian is squeezed in the $I$ quadrature due to the amplifier saturation. } \label{fig:JPC} \end{figure} \begin{figure}\label{fig:qubitnumbersplitting} \end{figure} \begin{figure} \caption{AC Stark shift due to the pump tone. We place the pump tone at $\omega_p=8.011~$GHz, and vary its power. For each power, we measure the spectrum of the qubit (a,d), readout mode (b,e) and storage mode (c,f). The frequencies of these modes (a-c) decrease linearly with the pump power, as shown by the linear fit (full line) to the data (full dots). The linewidths are represented in panels (d-f).} \label{fig:Stark} \end{figure} \begin{figure} \caption{Pulse sequence which generates the data of Fig.~\ref{fig:time_evolution}. First we initialize the qubit state by measuring it and post-selecting on it being in its ground state. Then we switch the pump and drive on for a variable amount of time. Finally, we perform Wigner tomography. The pulse sequence corresponding to the tomography is in the dashed rectangle and is described in Section~\ref{sec:WignerTomo}.} \label{fig:pulseSeq} \end{figure} \begin{figure*}\label{fig:time_evolution} \end{figure*} \begin{figure*} \caption{Evolution of the storage mode state during pumping. We initialize the storage state in Fock state $\ket{1}$ and switch on the pump and drive tones for various times $t_k$. Each one of the 10 panels in (a,b), ordered from left to right, is the Wigner function of the storage state after $t_k=k~\mu$s of pumping.
We compare the raw data (a) to the Wigner functions obtained from a reconstructed density matrix (b). The Fock state is prepared by displacing the storage mode to a coherent state with an average photon number of 0.5, and then projecting onto the odd parity manifold by measurement \cite{Sun2014}. As in Fig.~\ref{fig:time_evolution}, the state starts by squeezing in the $Q$ quadrature. At $t=3~\mu$s, the state resembles an odd Schr\"{o}dinger cat state: a cut of the Wigner function along $I=0$ goes from 0 to positive values, back to 0 at the center (this would be negative in the ideal lossless case), positive again, and finally back to 0. Indeed, since we initialize the storage mode in an odd parity state, its evolution under exchanges of photon pairs conserves parity, and hence the transient superposition state has odd parity. As in Fig.~\ref{fig:time_evolution}, the state finally converges to a classical mixture of the two pointer states centered around $\ket{\pm\alpha_\infty}$.} \label{fig:Fock_time_evolution} \end{figure*} \begin{figure}\label{fig:Obs_vs_time} \end{figure} \begin{figure} \caption{Same description as Fig.~\ref{fig:Obs_vs_time}, but with the values extracted from the numerical simulations described in Fig.~\ref{fig:time_evolution}.} \label{fig:Obs_vs_time_2} \end{figure} \begin{figure*} \caption{Photon number distribution of the storage state during pumping. Each panel $k$ of the 20 panels in (a,b), ordered from left to right and top to bottom, represents the photon number distribution of the storage state after $t_k=k~\mu$s of pumping. In (a) we perform qubit spectroscopy with a gaussian $\pi$ pulse of $400~$ns sigma. Due to the qubit-storage number splitting, this is a measure of the photon number distribution in the storage. In (b), we represent the diagonal of the reconstructed density matrix obtained from the Wigner tomography.
These two independent measurements give consistent results, and exhibit the non-Poissonian character of the photon number distribution during the transient evolution.} \label{fig:Numbersplitting_time_evolution} \end{figure*} \end{document}
\begin{document} \title{\Large \bf One-sided FKPP travelling waves in the context of homogeneous fragmentation processes} \author{Robert Knobloch\thanks{Department of Mathematics, Saarland University, PO Box 151150, 66041 Saarbr\"ucken, Germany \newline e-mail: knobloch@math.uni-sb.de} } \date{\today} \maketitle \begin{abstract} In this paper we introduce the one-sided FKPP equation in the context of homogeneous fragmentation processes. The main result of the present paper is concerned with the existence and uniqueness of one-sided FKPP travelling waves in this setting. Moreover, we prove some analytic properties of such travelling waves. Our techniques make use of fragmentation processes with killing, an associated product martingale as well as various properties of L\'evy processes. \end{abstract} \noindent {\bf 2010 Mathematics Subject Classification:} 60G09, 60J25. \noindent {\bf Keywords and phrases:} FKPP equation, fragmentation process, L\'evy process, travelling wave. \section{Introduction} This paper deals with an integro-differential equation that is defined in terms of the dislocation measure of a fragmentation process. Given its probabilistic interpretation we consider this equation as an analogue of the one-sided Fisher-Kolmogorov-Petrovskii-Piscounov (FKPP) travelling wave equation in the context of fragmentation processes. In particular, we are concerned with the existence and uniqueness of solutions to this equation in the setting of conservative or dissipative homogeneous fragmentation processes and we derive certain analytic properties of the solutions when they exist. The FKPP travelling wave equation in the context of fragmentations has a similar probabilistic interpretation as the classical FKPP travelling wave equation whose probabilistic interpretation is related to branching Brownian motions, see Section~\ref{s.cFKPPe}. 
In this respect we also refer to \cite{104}, where the two-sided FKPP travelling wave equation for conservative homogeneous fragmentations is studied. It turns out that solutions in the setting of fragmentation processes have similar properties to their classical counterparts. However, the techniques for proving existence and uniqueness results differ considerably between the two cases, not least because the equations differ significantly. Indeed, whereas the classical FKPP travelling wave equation is a differential equation of second order, the FKPP travelling wave equation in our setting is an integro-differential equation of first order. This difference results from the non-diffusive behaviour of fragmentation processes and the more complicated jump structure of fragmentations in comparison with branching Brownian motions. In the context of homogeneous fragmentation processes we prove the existence and uniqueness of one-sided travelling waves within a certain range of wave speeds. More precisely, the problem we are concerned with in this paper can be roughly described as follows. Consider the integro-differential equation \[ cf'(x)+\int_{\mathcal P}\left(\prod_{n}f(x+\ln(|\pi_n|))-f(x)\right)\mu(\text{d}\pi)=0 \] for certain $c\in\R^+:=(0,\infty)$ and all $x\in\R^+_0:=[0,\infty)$, where the product is taken over all $n\in{\mathbb N}$ with $|\pi_n|\in\R^+$. Here $\mathcal P$ is the space of partitions $(\pi_n)_{n\in{\mathbb N}}$ of ${\mathbb N}$ and $\mu$ is the so-called dislocation measure on $\mathcal P$. This notation is introduced in more detail in the next section. We are interested in solutions $f:{\mathbb R}\to[0,1]$ of the above equation that satisfy \[ f|_{\R^+_0}\in C^1(\R^+_0,[0,1])\qquad\text{and}\qquad f|_{(-\infty,0)}\equiv1 \] as well as the boundary condition \[ \lim_{x\to\infty}f(x)=0.
\] Roughly speaking, the main result of this paper states that there is some constant $c_0>0$ such that there exists a unique solution of the above boundary value problem for every $c>c_0$ and there does not exist such a solution for any $c\le c_0$. Our approach is based on using fragmentation processes with killing at an exponential barrier. These processes have been studied in \cite{KK12} and we briefly describe the corresponding concepts below. The outline of this paper is as follows. In the next section we give a brief introduction to homogeneous fragmentation processes as well as appropriately killed versions of such fragmentations. Subsequently, in Section~\ref{s.FKPPfrag} we introduce the one-sided FKPP equation in our setting and we state our main results. Afterwards, in the fourth section we provide some motivation for the problems considered in this paper by explaining some related results that are known in the literature on fragmentation processes and branching Brownian motions, respectively. In Section~\ref{s.fa} we show how the existence and uniqueness of one-sided FKPP travelling waves for fragmentation processes can be obtained if the dislocation measure is finite. This provides some motivation for the existence and uniqueness result in the setting of general fragmentation processes, which this paper is mainly concerned with. The subsequent three sections are devoted to the proofs of our main results. Throughout the present paper we adopt the notation ${\mathbb R}_\infty:=[-\infty,\infty)$ as well as the conventions $\ln(0):=-\infty$ and $\inf(\emptyset):=\infty$. The notation $C^n$, $n\in{\mathbb N}_0:={\mathbb N}\cup\{0\}$, refers to the set of $n$-times continuously differentiable functions. The integral of a real-valued function $f$ with respect to the Lebesgue measure on a set $[s,t]\subseteq{\mathbb R}$ is denoted by $\int_{[s,t]}f(u)\text{d} u$ and $\int_s^tf(u)\text{d} u$ denotes the Riemann integral. 
The operators $\land$ and $\lor$ refer to the minimum and maximum, respectively. Furthermore, we shall use the abbreviation DCT for the dominated convergence theorem. All the random objects are assumed to be defined on a complete probability space $(\Omega,\mathscr F,\mathbb P)$. \section{Homogeneous fragmentation processes with killing}\label{s.khfp} In this section we provide a brief introduction to partition-valued fragmentation processes and we present the main tools that we need in the subsequent sections. In addition, we introduce a specific killing mechanism for these processes. The advantage of partition-valued fragmentation processes compared to so-called mass fragmentations is their explicit genealogical structure of blocks. This structure is crucial for the killing mechanism that we introduce below. Regarding the state space of partition-valued fragmentation processes let $\mathcal{P}$ be the space of partitions $\pi=(\pi_n)_{n\in{\mathbb N}}$ of ${\mathbb N}$, where the blocks of $\pi$ are ordered by their least element such that $\inf(\pi_i)<\inf(\pi_j)$ if $i<j$. For every $\pi\in\mathcal P$ let $(|\pi|^\downarrow_n)_{n\in{\mathbb N}}$ be the decreasing reordering of the sequence given by \[ |\pi_n|:=\limsup_{k\to\infty}\frac{\sharp(\pi_n\cap\{1,\ldots,k\})}{k} \] for every $n\in{\mathbb N}$, where $\sharp$ denotes the counting measure on ${\mathbb N}$. Throughout this paper we consider a homogeneous $\mathcal P$-valued fragmentation process $\Pi:= (\Pi(t))_{t\in\R^+_0}$, where $\Pi(t) = (\Pi_n(t))_{n\in{\mathbb N}}$, and we denote by $\mathscr F:=(\mathscr F_t)_{t\in\R^+_0}$ the completion of the filtration generated by $\Pi$. Homogeneous $\mathcal{P}$-valued fragmentations are exchangeable Markov processes that were introduced in \cite{85}, see also \cite{92}. 
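The asymptotic frequencies defined above can be illustrated numerically: partition $\{1,\ldots,k\}$ by assigning each integer independently to block $i$ with probability $s_i$ (this is Kingman's paint-box construction), and the empirical frequencies $\sharp(\pi_i\cap\{1,\ldots,k\})/k$ recover the masses $s_i$. A minimal sketch, with an arbitrary example mass sequence:

```python
import random

def paintbox_block_freqs(s, k):
    """Assign each of 1..k to a block via a paint-box based on the mass
    sequence s (s_1 >= s_2 >= ..., sum(s) <= 1); any leftover mass would
    produce singleton dust.  Returns the empirical block frequencies
    #(pi_i intersect {1,...,k}) / k."""
    # cumulative interval endpoints c_i = s_1 + ... + s_i
    cum, c = [], 0.0
    for si in s:
        c += si
        cum.append(c)
    counts = [0] * len(s)
    for _ in range(k):
        u = random.random()           # paint integer with colour i if u < c_i
        for i, ci in enumerate(cum):
            if u < ci:
                counts[i] += 1
                break
    return [cnt / k for cnt in counts]

random.seed(0)
freqs = paintbox_block_freqs([0.5, 0.3, 0.2], 200_000)
# by the law of large numbers, freqs is close to [0.5, 0.3, 0.2]
```

The mass sequence $(0.5,0.3,0.2)$ is an illustrative choice, not derived from any particular dislocation measure.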
Bertoin showed in \cite{85} that the distribution of $\Pi$ is determined by some constant $d\in\R^+_0$ (the {\it rate of erosion}, which describes the drift of $\Pi$) and a $\sigma$-finite measure $\nu$ (the so-called {\it dislocation measure}, which indirectly describes the jumps of $\Pi$) on the infinite simplex \[ \mathcal S:=\left\{{\bf s}:=(s_n)_{n\in{\mathbb N}}:s_1\ge s_2\ge\ldots\ge0,\,\sum_{n\in{\mathbb N}}s_n\le1\right\}, \] such that $\nu(\{(1,0,\ldots)\})=0$ and \begin{equation}\label{e.levymeasure} \int_{\mathcal{S}}(1-s_1)\nu(d{\bf s})<\infty. \end{equation} The process $\Pi$ is said to be {\it conservative} if $\nu(\sum_{n\in{\mathbb N}}s_n<1)=0$, i.e. if there is no loss of mass by sudden dislocations, and {\it dissipative} otherwise. In this paper we allow for both of these cases. {\it Throughout this paper we assume that $d=0$ as well as $\nu({\bf s}\in\mathcal S:s_2=0)=0$.} \\ In view of the forthcoming assumption (\ref{e.L.1.Bertoin}) this enables us to resort to the results of \cite{KK12}, where the same assumptions are made. Let us mention that the assumption $d=0$ does not result in any loss of generality, see Remark~\ref{r.drift}. Consider the exchangeable partition measure $\mu$ on $\mathcal{P}$ given by \[ \mu(d\pi) = \int_{\mathcal{S}}\varrho_{\bf s}(d\pi)\nu(d{\bf s}), \] where $\varrho_{\bf s}$ is the law of Kingman's paint-box based on ${\bf s}\in\mathcal S$. Similarly to $\nu$ the measure $\mu$ describes the jumps of $\Pi$, although more directly, and is also referred to as {\it dislocation measure}. In \cite{85} Bertoin showed that the homogeneous fragmentation process $\Pi$ is characterised by a Poisson point process.
More precisely, there exists a $\mathcal P\times\mathbb N$-valued Poisson point process $(\pi(t),\kappa(t))_{t\in\R^+_0}$\label{p.pi_t} with characteristic measure $\mu\otimes\sharp$ such that $\Pi$ changes state only at the times $t\in\R^+_0$ for which an atom $(\pi(t),\kappa(t))$ occurs in $(\mathcal P\setminus({\mathbb N},\emptyset,\ldots))\times\mathbb N$. At such a time $t\in\R^+_0$ the sequence $\Pi(t)$ is obtained from $\Pi(t-)$ by replacing its $\kappa(t)$-th term, $\Pi_{\kappa(t)}(t-)\subseteq{\mathbb N}$, with the restricted partition $\pi(t)|_{\Pi_{\kappa(t)}(t-)}$ and reordering the terms such that the resulting partition of ${\mathbb N}$ is an element of $\mathcal P$. We denote the possible random jump times of $\Pi$, i.e. the times at which the abovementioned Poisson point process has an atom in $(\mathcal P\setminus({\mathbb N},\emptyset,\ldots))\times\mathbb N$, by $(t_i)_{i\in\mathcal I}$, where the index set $\mathcal I\subseteq\R^+_0$ is countable. Moreover, by exchangeability, the limit \[ |\Pi_n(t)|:=\lim_{k\to\infty}\frac{\sharp(\Pi_n(t)\cap\{1,\ldots,k\})}{k}, \] referred to as {\it asymptotic frequency}, exists $\mathbb P$-a.s. simultaneously for all $t\in\R^+_0$ and $n\in{\mathbb N}$. Let us point out that the concept of asymptotic frequencies provides us with a notion of {\it size} for the blocks of a $\mathcal P$-valued fragmentation process. In Theorem~3 of \cite{85} Bertoin showed that the process $(-\ln(|\Pi_1(t)|))_{t\in\R^+_0}$ is a killed subordinator, a fact we shall make use of below. For the time being, let $x\in{\mathbb R}_\infty$. In this paper we are concerned with a specific procedure of killing blocks of $\Pi$, see Figure~\ref{f.kfp.2a}, that was introduced in \cite{KK12}. More precisely, for $c>0$ a block $\Pi_n(t)$ is killed, with cemetery state $\emptyset$, at the moment of its creation $t\in\R^+_0$ if $|\Pi_n(t)|<e^{-(x+ct)}$. 
We denote the resulting fragmentation process with killing by $\Pi^x:=(\Pi^x(t))_{t\in\R^+_0}$ and the cemetery state of $\Pi^x$ is $(\emptyset,\ldots)$. Note that possibly $\Pi^x(t)\not\in\mathcal P$, since $\bigcup_{n\in{\mathbb N}}\Pi^x_n(t)\subsetneq{\mathbb N}$ is possible due to the killing of blocks. We denote by $\zeta^x$, $x\in{\mathbb R}_\infty$, the random extinction time of $\Pi^x$, i.e. $\zeta^x$ is the supremum of all the killing times of individual blocks. The question of whether $\zeta^x$ is finite or infinite was considered in Theorem~2 of \cite{KK12}, see also Proposition~\ref{positivesurviaval} below. Furthermore, define a function $\varphi:{\mathbb R}_\infty\to[0,1]$ by \begin{equation}\label{e.phi} \varphi(x):=\mathbb P(\zeta^x<\infty) \end{equation} for all $x\in{\mathbb R}_\infty$. The function $\varphi$ will be of utmost interest in the present paper. Let us point out that $\varphi$ depends on the drift $c>0$ of the killing line. However, in order to keep the notation as simple as possible, we suppress this dependence, as the constant $c$ does not vary within results or proofs. Note that if $x<0$, then $\zeta^x=0$ and thus $\varphi(x)=1$. Let us remark that we could choose a non-zero rate $d$ of erosion by changing the slope $c$ of the killing line: \begin{rem}\label{r.drift} The results of this paper remain valid if we omit the assumption $d=0$ and replace the slope $c>0$ by $c_d:=c+d$. Consequently, the assumption $d=0$ is merely made for the sake of simplicity, but does not restrict the generality of our results. \end{rem} Set \[ \underline p:=\inf\left\{p\in{\mathbb R}:\int_{\mathcal S}\left|1-\sum_{n\in{\mathbb N}}s^{1+p}_n\right|\nu(\text{d}{\bf s})<\infty\right\}\in[-1,0] \] and for any $p>\underline p$ define \[ \Phi(p):=\int_{\mathcal S}\left(1-\sum_{n\in{\mathbb N}}s^{1+p}_n\right)\nu(\text{d}{\bf s}) \] as well as \[ \Phi(\underline p):=\lim_{p\downarrow \underline p}\Phi(p).
\] Moreover, for each $p\in[\underline p,\infty)$ set \begin{equation}\label{c_p.2} c_p:=\frac{\Phi(p)}{1+p}. \end{equation} Throughout this paper we assume that there exists some $p\in(\underline p,\infty)$ such that \begin{equation}\label{e.L.1.Bertoin} (1+p)\Phi'(p)>\Phi(p), \end{equation} where $\Phi'$ denotes the derivative of $\Phi$. Let us point out that a sufficient condition for (\ref{e.L.1.Bertoin}) is the existence of some $p^*\in[\underline p,\infty)$ such that $\Phi(p^*)=0$. In particular, (\ref{e.L.1.Bertoin}) holds if $\Pi$ is conservative. In view of (\ref{e.L.1.Bertoin}) the same line of argument as in Lemma~1 of \cite{89} yields the existence of a unique solution of the equation \begin{equation}\label{e.bar_p} (1+p)\Phi'(p)=\Phi(p) \end{equation} on $(\underline p,\infty)$. We denote this unique solution of (\ref{e.bar_p}) by $\bar p$. The definition in (\ref{c_p.2}) then entails that $c_{\bar p}=\Phi'(\bar p)$. According to \cite{KK12} the fragmentation process with killing survives with positive probability if the drift of the killing line is greater than $c_{\bar p}$ and becomes extinct almost surely otherwise. \begin{proposition}[Theorem~2 of \cite{KK12}]\label{positivesurviaval} If $c>c_{\bar p}$, then $\varphi(x)\in(0,1)$ for all $x\in\R^+_0$. If, on the other hand, $c\le c_{\bar p}$, then $\varphi\equiv1$. \end{proposition} For any $t\in\R^+_0$ we denote by $B_n(t)$ the block of $\Pi(t)$ that contains the element $n\in{\mathbb N}$. According to Theorem~3 (ii) in \cite{85} it follows by means of the exchangeability of $\Pi$ that under $\mathbb P$ the process \[ \xi_n:=(-\ln(|B_n(t)|) )_{t\in\R^+_0}\,, \] cf. Figure~\ref{f.kfp.2a}, is a killed subordinator with Laplace exponent $\Phi$, cemetery state $\infty$ and killing rate \[ \int_{\mathcal S}\left(1-\sum_{k\in{\mathbb N}}s_k\right)\nu(\text{d}{\bf s}). 
\] Hence, the process $X_n:=(X_n(t))_{t\in\R^+_0}$, defined by \[ X_n(t):=ct-\xi_n(t) \] for all $t\in\R^+_0$, is a spectrally negative L\'evy process of bounded variation. Let $\mathcal I_n\subset\mathcal I$ be such that the jump times of $X_n$ are given by $(t_i)_{i\in\mathcal I_n}$. Note that $(t_i)_{i\in\mathcal I_n}$ are precisely the times when the subordinator $\xi_n$ jumps. For $n\in{\mathbb N}$ and $x\in\R^+_0$ we shall make use of the shifted and killed process $X^x_n:=(X^x_n(t))_{t\in\R^+_0}$, see Figure~\ref{f.kfp.3killed}, given by \[ X^x_n(t):=(X_n(t)+x)\mathds1_{\{\tau^-_{n,x}>t\}}-\infty\cdot\mathds1_{\{\tau^-_{n,x}\le t\}}=\left(x+ct+\ln(|B_n(t)|)\right)\mathds1_{\{\tau^-_{n,x}>t\}}-\infty\cdot\mathds1_{\{\tau^-_{n,x}\le t\}} \] for each $t\in\R^+_0$, where \[ \tau^-_{n,x}:=\inf\{t\in\R^+_0:X_n(t)<-x\} \qquad\text{as well as}\qquad \infty\cdot0:=0. \] \begin{figure}\label{f.kfp.2a} \label{f.kfp.3killed} \end{figure} For every $t\in\R^+_0$ set \[ \mathcal N^x_t:=\left\{n\in{\mathbb N}:\left[t<\tau^-_{n,x}\right]\land\left[\exists\,k\in{\mathbb N}:n=\min\Pi^x_k(t)\right]\right\}.\label{p.Nxt} \] That is to say, $\mathcal N^x_t$ consists of all the indices of blocks $B_n(t)$ that are not yet killed by time $t$. Let us remark that the first condition ``$t<\tau^-_{n,x}$'' ensures that the block containing $n\in{\mathbb N}$ is still alive at time $t$ and the second condition ``$\exists\,k\in{\mathbb N}:n=\min\left(\Pi^x_k(t)\right)$'' is used to avoid considering the same block multiple times. More precisely, for a block $B_n(t)$ that is alive at time $t\in\R^+_0$ only its least element is an element of $\mathcal N^x_t$. Without this condition all elements of $B_n(t)$ would be in $\mathcal N^x_t$. Note that in \cite{KK12} the notation $\mathcal N^x_t$ refers to a different ordering of the same set of blocks and the set of indices that we denote here by $\mathcal N^x_t$ is the quotient space $\widetilde{\mathcal{N}}_t^x/\sim$ in \cite{KK12}. 
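To fix ideas, consider a hypothetical running example (ours, not taken from the paper): the binary dislocation measure $\nu=\delta_{(\nicefrac12,\nicefrac12,0,\ldots)}$, under which every dislocation splits a block into two equal halves. The quantities introduced above can then be computed explicitly:

```latex
% Binary example: nu = delta_{(1/2,1/2,0,...)}, every dislocation
% produces two blocks of half the parent's size, so
\Phi(p)=1-2\cdot\Bigl(\tfrac12\Bigr)^{1+p}=1-2^{-p},
\qquad \underline p=-1,
\qquad c_p=\frac{1-2^{-p}}{1+p}.
% Phi(0)=0, so the process is conservative and condition
% (e.L.1.Bertoin) is satisfied.  The critical equation
% (1+p)Phi'(p)=Phi(p) here reads
(1+p)\,2^{-p}\ln 2 = 1-2^{-p}
\quad\Longleftrightarrow\quad
(1+p)\ln 2 = 2^{p}-1.
```

Solving the last equation numerically gives $\bar p\approx1.42$ and hence $c_{\bar p}=\Phi'(\bar p)=2^{-\bar p}\ln 2\approx0.26$; in this toy model the killed fragmentation thus survives with positive probability precisely when the killing line has drift $c>c_{\bar p}\approx0.26$.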
Throughout this paper we shall repeatedly need an estimate regarding the number of fragments alive at a given time $t\in\R^+_0$. To this end, set $N^x_t:=\text{card}(\mathcal N^x_t)$ and observe that $N^x_t<\infty$ $\mathbb P$-almost surely. Indeed, every block counted by $\mathcal N^x_t$ has not yet crossed the killing line, so its size is at least $e^{-(x+ct)}$; since $\sum_{n\in{\mathbb N}}|\Pi_n(t)|\le1$, at most $e^{x+ct}$-many $n\in{\mathbb N}$ can satisfy $|\Pi_n(t)|\ge e^{-(x+ct)}$. Hence, \begin{equation}\label{e.N_x} N^x_t\le e^{x+ct}. \end{equation} \section{The one-sided FKPP equation for fragmentations}\label{s.FKPPfrag} In this section we establish the set-up for our considerations by defining the FKPP travelling wave equation in the context of fragmentation processes. Moreover, this section is devoted to presenting our main results. The main problem addressed in this paper is to find a range of wave speeds for which we can prove the existence of a unique travelling wave solution to the one-sided FKPP equation for homogeneous fragmentations as defined below. In order to tackle this problem we shall derive a connection between solutions of the abovementioned FKPP equation and a product martingale that was introduced in \cite{KK12}. Furthermore, we aim at studying some analytic properties of such travelling waves. \subsection{Set-up} For any function $f$ on some subset of ${\mathbb R}_\infty$ set \[ \mathcal C_f:=\left\{x\in\R^+:f'(x)\text{ exists}\right\}, \] where $f'$ denotes the derivative of $f$. Since we do not know a priori whether the functions $f$ we are interested in are differentiable, we need to define the integro-differential equations below for arguments in $\mathcal C_f$. Regarding solutions of such an equation we shall be particularly concerned with the function $\varphi$, given by (\ref{e.phi}), and our main results show in particular that $\mathcal C_\varphi=\R^+$. 
For functions $u:\R^+_0\times{\mathbb R}_\infty\to[0,1]$, with $u(t,\cdot)|_{[-\infty,ct)}\equiv1$ for each $t\in\R^+_0$ and $u(0,\cdot)|_{\R^+_0}=g$ for some continuous function $g:\R^+_0\to[0,1]$, consider the initial value problem \begin{equation}\label{e.FKPP.0} \frac{\partial u}{\partial t}(t,x)=\int_{\mathcal P}\left(\prod_{n\in{\mathbb N}}u(t,x+\ln(|\pi_n|))-u(t,x)\right)\mu(\text{d}\pi) \end{equation} for all $x\in\R^+_0$ and $t\in\mathcal C_{u(\cdot,x)}$. We call this initial value problem the \emph{one-sided FKPP equation for fragmentation processes}. Here we are interested in the so-called \emph{FKPP travelling wave solutions} of (\ref{e.FKPP.0}) with wave speed $c\in\R^+_0$, that is, in solutions of (\ref{e.FKPP.0}) which are of the form $u(t,x)=f(x-ct)$ for all $t,x\in\R^+_0$ with $x-ct\ge0$. \begin{definition} A \emph{one-sided FKPP travelling wave for fragmentation processes} is a monotone function $f:{\mathbb R}_\infty\to[0,1]$, with $f|_{[-\infty,0)}\equiv1$, that satisfies the following \emph{one-sided FKPP travelling wave equation} \begin{equation}\label{e.FKPP.1} cf'(x)+\int_{\mathcal P}\left(\prod_{n\in{\mathbb N}}f(x+\ln(|\pi_n|))-f(x)\right)\mu(\text{d}\pi)=0 \end{equation} for all $x\in\mathcal C_f$ with the boundary condition \begin{equation}\label{e.FKPP.2} \lim_{x\to\infty}f(x)=0. \end{equation} \end{definition} Let us now introduce an operator whose definition is inspired by the integro-differential equation~(\ref{e.FKPP.1}). To this end, let $\mathcal D_L$ be the set of all functions $f:{\mathbb R}_\infty\to[0,1]$, with $f|_{[-\infty,0)}\equiv1$, for which the mapping \[ \pi\mapsto\prod_{n\in{\mathbb N}}f(x+\ln(|\pi_n|))-f(x) \] on $\mathcal P$ is $\mu$-integrable. Then we define an integral operator $L$ with domain $\mathcal D_L$ by \[ Lf(x):=\int_{\mathcal P}\left(\prod_{n\in{\mathbb N}}f(x+\ln(|\pi_n|))-f(x)\right)\mu(\text{d}\pi)\label{p.L} \] for each $f\in\mathcal D_L$ and all $x\in\R^+_0$. 
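In the hypothetical binary example $\nu=\delta_{(\nicefrac12,\nicefrac12,0,\ldots)}$ (our illustration, not from the paper, and using the rewriting of the $\mu$-integral as a $\nu$-integral that is available for conservative processes), the travelling wave equation~(\ref{e.FKPP.1}) takes a transparent delay-differential form:

```latex
% Both fragments are shifted by ln(1/2) = -ln 2, so the product in the
% travelling wave equation collapses to a square:
cf'(x) + f(x-\ln 2)^2 - f(x) = 0, \qquad x\ge\ln 2,
% while for 0 <= x < ln 2 the shifted argument is negative, where
% f \equiv 1, and the equation reduces to a linear ODE:
cf'(x) + 1 - f(x) = 0, \qquad 0\le x<\ln 2.
```

On $[0,\ln 2)$ the solutions of the linear equation are $f(x)=1-Ke^{x/c}$ for a constant $K\ge0$; for $x\ge\ln2$ the equation propagates this boundary stretch forward, which illustrates the delayed, non-local character of (\ref{e.FKPP.1}).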
\iffalse Further, we consider the integro-differential operator $T$ on $\mathcal D_L$ given by \[ Tf(x)=cf'(x)+Lf(x)\label{p.T_p+} \] for any $f\in\mathcal D_L$ and every $x\in\mathcal C_f$. \fi Recall that the upper right Dini derivative of a function $f:{\mathbb R}_\infty\to[0,1]$ is defined by \[ f'_+(x):=\limsup_{h\downarrow 0}\frac{f(x+h)-f(x)}{h} \] for all $x\in{\mathbb R}$. Observe that this Dini derivative is always well defined, but may take the values $\infty$ or $-\infty$. The following class of monotone functions plays a crucial role in the analysis of the one-sided FKPP travelling wave equation. \begin{definition}\label{d.T} We denote by $\mathcal T$\label{p.Dp} the set consisting of all nonincreasing functions $f:{\mathbb R}_\infty\to[0,1]$, with $f|_{[-\infty,0)}\equiv1$ and such that $f|_{\R^+}$ is continuous, that satisfy (\ref{e.FKPP.2}) as well as \begin{equation}\label{finite_derivative} \sup_{x\in[s,t]}|f'_+(x)|<\infty \end{equation} for all $s,t\in\R^+$. \end{definition} \subsection{Main results} Here we present the main results of this paper. For this purpose, let us consider the following process, which several of our proofs make use of: for any function $f:{\mathbb R}_\infty\to[0,1]$ and $x\in\R^+_0$ let $Z^{x,f}:=(Z^{x,f}_t)_{t\in\R^+_0}$ be given by \begin{equation}\label{e.prod_martingale} Z^{x,f}_t:=\prod_{n\in\mathcal N^x_t}f\left(X^x_n(t)\right) \end{equation} for all $t\in\R^+_0$. This process was considered in Section~5 of \cite{KK12} and in some proofs we shall resort to Theorem~10 in \cite{KK12}, which is concerned with the martingale property of $Z^{x,f}$. In this spirit, the first main result of the present paper reads as follows: \begin{theorem}\label{p.t.1.2} Let $c>c_{\bar p}$. In addition, let $f\in\mathcal T$ and assume that $Z^{x,f}$ is a martingale. Then $f$ solves (\ref{e.FKPP.1}). \end{theorem} The above theorem will be proven in Section~\ref{s.cspm}. 
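The class $\mathcal T$ contains functions that are not differentiable everywhere, which is why the Dini derivative appears in Definition~\ref{d.T}. A hypothetical example of our own (not from the paper):

```latex
% A kinked element of the class T: equal to 1 on [-infty,0), continuous
% and nonincreasing on R^+, tending to 0, with upper right Dini
% derivative bounded on compact subsets of R^+:
f(x):=\begin{cases}
1, & x\in[-\infty,0),\\[0.5ex]
1-\tfrac{x}{2}, & x\in[0,1],\\[0.5ex]
\tfrac12\,e^{-2(x-1)}, & x\in(1,\infty).
\end{cases}
% At x=1 the left derivative equals -1/2 while f'_+(1) = -1, so f is
% not differentiable at x=1, although f lies in T.
```

By Theorem~\ref{t.ap} below, such a kinked function, although a member of $\mathcal T$, can never be a one-sided FKPP travelling wave, since travelling waves in $\mathcal T$ are necessarily continuously differentiable on $\R^+$.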
The following result, whose proof will be provided in Section~\ref{s.ap}, deals with some analytic properties of one-sided FKPP travelling waves. \begin{theorem}\label{t.ap} Every one-sided FKPP travelling wave $f\in\mathcal T$ is right-continuous at $0$ and the function $f|_{\R^+}$ is strictly monotonically decreasing and continuously differentiable. \end{theorem} In particular, it follows from Theorem~\ref{t.ap} that $-cf'=Lf$ on $\R^+$ for every one-sided FKPP travelling wave $f\in\mathcal T$, where $c>0$ denotes the wave speed. The main goal of this paper is to establish the existence of a unique travelling wave in $\mathcal T$ to (\ref{e.FKPP.0}) with wave speed $c$ for $c>c_{\bar p}$ as well as the nonexistence of such a travelling wave with wave speed $c\le c_{\bar p}$. More specifically, the following result states that the extinction probability of the fragmentation process with killing solves equation~(\ref{e.FKPP.1}) with boundary condition~(\ref{e.FKPP.2}) for $c>c_{\bar p}$ and, moreover, is the only such function. Recall the function $\varphi$ defined in (\ref{e.phi}). \begin{theorem}\label{t.mainresult.1} If $c>c_{\bar p}$, then there exists a unique one-sided FKPP travelling wave in $\mathcal T$ with wave speed $c$, given by $\varphi$. On the other hand, if $c\le c_{\bar p}$, then there is no one-sided FKPP travelling wave in $\mathcal T$ with wave speed $c$. \end{theorem} We shall prove Theorem~\ref{t.mainresult.1} in Section~\ref{s.mainresult.1}. In view of the forthcoming Remark~\ref{r.BHK09} this theorem shows that one-sided FKPP travelling waves exist precisely for those positive wave speeds for which there do not exist two-sided travelling waves. Moreover, in view of \cite[Theorem~4]{KK12}, Theorem~\ref{t.mainresult.1} shows that travelling wave solutions exist exactly for those wave speeds that are larger than the asymptotic decay of the largest fragment in the fragmentation process with killing on the event of survival. 
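Theorem~\ref{t.mainresult.1} identifies the extinction probability $\varphi$ as the unique travelling wave for $c>c_{\bar p}$. As a purely illustrative numerical companion (all modelling choices below are our own assumptions, not the paper's method), the following sketch estimates a Monte Carlo lower bound for $\varphi(x)$ in the toy binary model in which every block splits into two equal halves at rate $1$. Since the killed process has bounded variation and gains height between jumps, the killing line can only be crossed at split times, so it suffices to test $x+cs+\ln|B|<0$ at each split.

```python
import math
import random

def extinction_lb(x, c, T=10.0, trials=400, seed=3):
    """Monte Carlo lower bound for phi(x) = P(zeta^x < infinity) in a toy
    binary model (an assumption for illustration): every block splits into
    two equal halves at rate 1, i.e. nu = delta_{(1/2,1/2,0,...)}.  A block
    is killed at the first split time s with x + c*s + log|B| < 0.  Runs in
    which every block is killed before the horizon T count as extinct, so
    the returned fraction underestimates phi(x).  Requires x >= 0, c > 0."""
    random.seed(seed)
    extinct = 0
    for _ in range(trials):
        stack = [(0.0, 0.0)]                   # (log block size, current time)
        survived = False
        while stack and not survived:
            logsize, s = stack.pop()
            while True:
                s += random.expovariate(1.0)   # waiting time to next split
                if s >= T:                     # block still alive at horizon T
                    survived = True
                    break
                logsize += math.log(0.5)       # split into two equal halves
                if x + c * s + logsize < 0:    # both halves fall below the line
                    break
                stack.append((logsize, s))     # sibling half, explored later
        if not survived:
            extinct += 1
    return extinct / trials
```

Because survival is only checked up to the finite horizon `T`, the estimate is a lower bound for $\varphi(x)$; increasing `T` tightens it. With a drift well below $c_{\bar p}\approx0.26$ of the binary model the estimate is close to $1$, in line with Proposition~\ref{positivesurviaval}.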
Let us further point out that if $\Pi$ is conservative, i.e. if loss of mass by sudden dislocations is excluded, then in view of (3) in \cite{HKK10} equation (\ref{e.FKPP.1}) can be written as \[ cf'(x)+\int_{\mathcal S}\left(\prod_{n\in{\mathbb N}}f(x+\ln(s_n))-f(x)\right)\nu(\text{d}{\bf s})=0, \] which is the analogue of the two-sided FKPP equation considered in \cite{104} in the conservative setting, cf. (\ref{e.FKPP.0.a}). \section{Motivation -- The classical FKPP equation}\label{s.cFKPPe} In order to present the framework in which Theorem~\ref{t.mainresult.1} should be seen let us now briefly mention some known results that are related to our work. To this end we denote by $C^{1,2}(\R^+_0\times A,[0,1])$\label{p.C^1_2}, $A\subseteq{\mathbb R}$, the space of all functions $f:\R^+_0\times A\to[0,1]$ such that $f(x,\cdot)\in C^2(A,[0,1])$ and $f(\cdot,y)\in C^1(\R^+_0,[0,1])$ for all $x\in\R^+_0$ and $y\in A$. The classical FKPP equation in the form that is of most interest for us, cf. \cite{McK75}, is the following parabolic partial differential equation: \begin{equation}\label{e.FKPPclassical.0} \frac{\partial u}{\partial t}=\frac{1}{2}\frac{\partial^2u}{\partial x^2}+\beta(u^2-u) \end{equation} with $u\in C^{1,2}(\R^+_0\times{\mathbb R},[0,1])$. This equation, which first arose in the context of a genetics model for the spread of an advantageous gene through a population, was originally introduced by Fisher \cite{Fis30,Fis37} as well as by Kolmogorov, Petrovskii and Piscounov \cite{KPP37}. Since then it has attracted much attention by analysts and probabilists alike. In fact, several authors showed that this equation is closely related to dyadic branching Brownian motions, e.g. \cite{McK75} (see also \cite{McK76}), thus establishing a link of this analytic problem to probability theory. 
In this probabilistic interpretation the term ``$\frac{1}{2}\frac{\partial^2u}{\partial x^2}$'' corresponds to the motion of the underlying Brownian motion, the ``$\beta$'' is the rate at which the particles split and the term ``$u^2-u$'' results from the binary branching, where two particles replace one particle at each branching time. A solution $u$ of equation (\ref{e.FKPPclassical.0}) can be interpreted in different ways. The classical work concerning this partial differential equation, such as \cite{Fis30}, \cite{Fis37} and \cite{KPP37}, describes the wave of advance of advantageous genes. More precisely, there are two types of individuals (or genes) in a population and $u(t,x)$ measures the frequency or concentration of the advantageous type at the time-space point $(t,x)$. In the setting regarding the abovementioned probabilistic interpretation, which links (\ref{e.FKPPclassical.0}) with a dyadic branching Brownian motion, let $u(t,x)$ be the probability that at time $t$ the largest particle of the branching Brownian motion has a value less than $x$. Then $u$ satisfies equation (\ref{e.FKPPclassical.0}), see (7) in \cite{McK75}. That is to say, whereas in \cite{Fis30}, \cite{Fis37} and \cite{KPP37} the FKPP equation~(\ref{e.FKPPclassical.0}) describes the bulk of a population, in \cite{McK75} it describes the most advanced particle of a branching Brownian motion. The classical FKPP travelling waves are solutions of (\ref{e.FKPPclassical.0}) of the form \[ u(t,x)=f(x-c t) \] for some $f\in C^2({\mathbb R},[0,1])$ and some constant $c\in{\mathbb R}$. This leads to the so-called FKPP travelling wave equation with wave speed $c\in{\mathbb R}$, \begin{align*}\label{e.FKPPclassical} \frac{1}{2}f''+cf'+\beta(f^2-f) &= 0\notag \\[0.5ex] \lim_{x\to-\infty}f(x) &= 0 \\[0.5ex] \lim_{x\to\infty}f(x) &= 1,\notag \end{align*} where $\beta>0$. 
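The critical wave speed $\sqrt{2\beta}$ of the classical theory can be read off from a standard linearisation heuristic (a sketch for $c>0$, not an argument taken from the paper): substituting $g:=1-f$ brings the boundary value problem into the usual KPP form, and exponential decay at $+\infty$ forces the characteristic roots to be real.

```latex
% With g := 1-f the travelling wave system above becomes
\tfrac12 g'' + cg' + \beta g(1-g) = 0,
\qquad \lim_{x\to-\infty}g(x)=1,\qquad \lim_{x\to\infty}g(x)=0.
% Linearising near g = 0 (i.e. as x -> infinity) and inserting the
% ansatz g(x) = e^{-lambda x}, lambda > 0, yields
\tfrac12\lambda^2 - c\lambda + \beta = 0
\quad\Longrightarrow\quad
\lambda = c \pm \sqrt{c^2-2\beta}.
% For 0 < c < sqrt(2 beta) the roots are complex, so g oscillates
% around 0 and leaves [0,1]; monotone waves require c >= sqrt(2 beta).
```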
This travelling wave boundary value problem was studied by various authors, using both analytic as well as probabilistic techniques, and it is known that it has a unique (up to additive translation) solution $f\in C^2({\mathbb R},[0,1])$ if $|c|\ge\sqrt{2\beta}$. In the opposite case that $0\le|c|<\sqrt{2\beta}$ there is no travelling wave solution. Regarding probabilistic approaches to the classical FKPP travelling wave equation we also refer for instance to the work of Bramson \cite{Bra78,Bra83}, Chauvin and Rouault \cite{CR88,CR90}, Uchiyama \cite{Uch77,Uch78} as well as \cite{Har99}, \cite{101} and \cite{Nev87}. Interesting with regard to our work is that the above boundary value problem was extended to continuous-time branching random walks, cf. \cite{Kyp99}, and to conservative homogeneous fragmentation processes, see \cite{104}. In the context of such fragmentation processes the corresponding partial integro-differential equation, referred to as FKPP equation, is given by \begin{equation}\label{e.FKPP.0.a} \frac{\partial u}{\partial t}(t,x)=\int_{\mathcal S}\left(\prod_{n\in{\mathbb N}}u(t,x+\ln(s_n))-u(t,x)\right)\nu(\text{d}{\bf s}) \end{equation} for certain $u:\R^+_0\times{\mathbb R}\to[0,1]$. Note that (\ref{e.FKPP.0.a}) looks quite different compared to the classical FKPP equation (\ref{e.FKPPclassical.0}). This difference results from the fact that fragmentation processes have no spatial motion except at jump times and from the more complicated jump structure of fragmentations in comparison with dyadic branching Brownian motions. However, the name {\it FKPP equation} for (\ref{e.FKPP.0.a}) stems from the similarity of the probabilistic interpretations of these two equations. Of particular interest to us are the FKPP travelling waves to (\ref{e.FKPP.0.a}) with wave speed $c\in{\mathbb R}$, i.e. 
solutions of (\ref{e.FKPP.0.a}) which are of the form $u(t,x)=f(x-ct)$ for all $t\in\R^+_0$ and $x\in{\mathbb R}$. These travelling wave solutions are functions $f\in C^1({\mathbb R},[0,1])$ that satisfy the following FKPP travelling wave equation \[ cf'(x)+\int_{\mathcal S}\left(\prod_{n\in{\mathbb N}}f(x+\ln(s_n))-f(x)\right)\nu(\text{d}{\bf s})=0 \] for all $x\in{\mathbb R}$ with boundary conditions \[ \lim_{x\to-\infty}f(x)=0\qquad\text{and}\qquad\lim_{x\to\infty}f(x)=1. \] \begin{rem}\label{r.BHK09} For every $p\in(\underline{p},\bar p]$ let $\mathcal T_2(p)$ denote the space of monotonically increasing functions $f\in C^1({\mathbb R},[0,1])$ satisfying the boundary conditions $\lim_{x\to-\infty}f(x)=0$ as well as $\lim_{x\to\infty}f(x)=1$ and such that the mapping $x\mapsto e^{(1+p)x}(1-f(x))$ is monotonically increasing. In Theorem~1 of \cite{104} Berestycki et al. showed that for $p\in(\underline{p},\bar p]$ there exists a unique (up to additive translation) FKPP travelling wave solution in $\mathcal T_2(p)$ with wave speed $c_p$, cf. (\ref{c_p.2}). According to Lemma~1 of \cite{89} the mapping $p\mapsto\nicefrac{\Phi(p)}{(1+p)}=c_p$ is monotonically increasing on $(\underline p,\bar p]$ and thus it follows in view of Theorem~3 (ii) in \cite{104} that $c_{\bar p}$ is the maximal wave speed for two-sided travelling waves. \end{rem} In this paper we are interested in the one-sided counterpart of the abovementioned FKPP equation. In the classical setting the one-sided FKPP equation is the following partial differential equation \[ \frac{\partial u}{\partial t}=\frac{1}{2}\frac{\partial^2u}{\partial x^2}+\beta(u^2-u) \] on $\R^+\times\R^+$ with $u\in C^{1,2}(\R^+_0\times\R^+_0,[0,1])$. Observe that this equation is the analogue of (\ref{e.FKPPclassical.0}) for functions defined on $\R^+_0\times\R^+_0$. 
The corresponding one-sided FKPP travelling wave equation with wave speed $c\in{\mathbb R}$ is given by the differential equation \begin{equation}\label{e.FKPPclassical.2} \frac{1}{2}f''+cf'+\beta(f^2-f) = 0 \end{equation} on $\R^+$ for $f\in C^2(\R^+_0,[0,1])$ satisfying the boundary conditions \begin{equation}\label{e.FKPPclassical.2.0a} \lim_{x\to0}f(x) = 1\qquad\text{as well as}\qquad\lim_{x\to\infty}f(x) = 0. \end{equation} By considering branching Brownian motion with drift, killed upon hitting the origin, Harris et al. proved in \cite{HHK06} that solutions of the one-sided FKPP travelling wave boundary value problem (\ref{e.FKPPclassical.2}) and (\ref{e.FKPPclassical.2.0a}) exist and are unique (up to additive translation) for all $c\in(-\sqrt{2\beta},\infty)$ and that there is no such travelling wave solution for $c\in(-\infty,-\sqrt{2\beta}]$. Notice that the negative wave speeds for which one-sided travelling wave solutions exist are precisely those for which there does not exist a two-sided travelling wave. With regard to the one-sided FKPP travelling wave equation in the classical setting we refer also to \cite{Wat65}, concerning existence of a solution, as well as \cite{Pin95} for existence and uniqueness of a solution of (\ref{e.FKPPclassical.2}) and (\ref{e.FKPPclassical.2.0a}) obtained by means of analytic techniques. \section{The finite activity case}\label{s.fa} In this section we prove existence of a one-sided FKPP travelling wave in the situation of a finite dislocation measure $\nu$, sometimes referred to as the {\it finite activity case}. In this respect note that a homogeneous fragmentation process with finite $\nu$ may still have infinitely many jumps in any finite time interval after the first jump, because infinitely many blocks may be present at any such time and each block fragments with the same rate. 
However, in this setting of a finite dislocation measure every block $B_n$, $n\in{\mathbb N}$, has only finitely many jumps up to any $t\in\R^+_0$. In particular, this implies that the fragmentation process with killing and with finite $\nu$ has finite activity in bounded time intervals, since at any time there are only finitely many blocks alive. Therefore, in this finite activity situation it is possible to consider the time $\tau_k:\Omega\to\R^+\cup \{\infty\}$, $k\in{\mathbb N}$, of the $k$-th jump of the killed process $\Pi^x$, a fact we shall make use of below. An approach to solve the classical one-sided FKPP equation with boundary condition $u(0,x)=g(x)$ for some suitable function $g:\R^+_0\to[0,1]$ is to show that the function $u:\R^+_0\times\R^+_0\to[0,1]$, given by \[ u(t,x)=\mathbb E\left(\prod_{n\in{\mathbb N}}g(x+Y_n(t))\right) \] for all $t,x\in\R^+_0$, is a solution of the considered boundary value problem, where the $Y_n(t)$ are the positions of the particles at time $t$ in a dyadic branching Brownian motion. In this section we show that for fragmentations with a finite dislocation measure $\nu$ a similar approach as above works for the initial value problem (\ref{e.FKPP.0}). More precisely, we prove that for certain functions $g:{\mathbb R}_\infty\to[0,1]$ the function $u:\R^+_0\times{\mathbb R}_\infty\to[0,1]$, defined by \begin{equation}\label{e.classicalFKPP} \forall\,x\in[ct,\infty):\,u(t,x)=\mathbb E\left(\prod_{n\in\mathcal N^{x-ct}_t}g(x+\ln(|B_n(t)|))\right)\qquad\text{and}\qquad u(t,\cdot)|_{[-\infty,ct)}\equiv1 \end{equation} for all $t\in\R^+_0$, solves equation~(\ref{e.FKPP.0}) with boundary condition $u(0,\cdot)=g$. \begin{proposition}\label{p.t.mainresult.1.1} Assume that $\nu(\mathcal S)<\infty$ and let $c>0$. 
Then every function $u:\R^+_0\times{\mathbb R}_\infty\to[0,1]$ defined by (\ref{e.classicalFKPP}), for some function $g:{\mathbb R}_\infty\to[0,1]$ with $g|_{\R^+_0}\in C^0(\R^+_0)$ and $g|_{[-\infty,0)}\equiv1$, satisfies the boundary condition \begin{equation}\label{e.bc} u(0,\cdot)=g \end{equation} and solves equation~(\ref{e.FKPP.0}) for any $x\in\R^+_0$. In particular, any such function $u$ of the form $u(t,x)=f(x-ct)$ for some $f:{\mathbb R}_\infty\to[0,1]$, with $f|_{[-\infty,0)}\equiv1$, and all $t\in\R^+_0$ and $x\in{\mathbb R}_\infty$ solves (\ref{e.FKPP.1}). \end{proposition} The following corollary of Proposition~\ref{p.t.mainresult.1.1} provides a short proof that the extinction probability $\varphi$ of the fragmentation process with killing solves equation~(\ref{e.FKPP.1}) in the special case of a finite dislocation measure. \begin{corollary}\label{c.p.t.mainresult.1.1.1} Assume that $\nu(\mathcal S)<\infty$ and let $c>c_{\bar p}$. Then $\varphi$ is an FKPP travelling wave with wave speed $c$. \end{corollary} \begin{pf} Let us first show that $\varphi$ solves (\ref{e.FKPP.1}). For this purpose, observe that the fragmentation property, in conjunction with the tower property of conditional expectations, yields that \begin{align*} \varphi(x-ct) &= \mathbb E\left(\prod_{n\in\mathcal N^{x-ct}_t}\left.\mathbb P\left(\zeta^{x-ct+ct+y}<\infty\right)\right|_{y=\ln(|B_n(t)|)}\right) \\[0.5ex] &= \mathbb E\left(\prod_{n\in\mathcal N^{x-ct}_t}\varphi(x+\ln(|B_n(t)|))\right), \end{align*} and thus $u:\R^+_0\times{\mathbb R}_\infty\to[0,1]$, given by $u(t,x):=\varphi(x-ct)$, satisfies (\ref{e.classicalFKPP}) with $g=\varphi$. Hence, according to Proposition~\ref{p.t.mainresult.1.1} the function $\varphi$ solves (\ref{e.FKPP.1}). Since $c>c_{\bar p}$, it follows from Theorem~10 in \cite{KK12} that $\varphi$ also satisfies the boundary condition~(\ref{e.FKPP.2}), which completes the proof. \end{pf} The major part of this paper, cf. 
Theorem~\ref{t.mainresult.1}, is concerned with the proof that the conclusion of Corollary~\ref{c.p.t.mainresult.1.1.1} holds true also in the general case of an infinite dislocation measure. {\bf Proof of Proposition~\ref{p.t.mainresult.1.1}}\quad The proof is based on a decomposition according to the first and second jump times of a killed fragmentation. Treating these parts separately, we obtain the desired expression for the right derivative of $u(\cdot,x)$, $x\in\R^+_0$. Let $g:{\mathbb R}_\infty\to[0,1]$ be some function that satisfies $g|_{\R^+_0}\in C^0(\R^+_0)$ and $g|_{[-\infty,0)}\equiv1$. Further, consider the function $u:\R^+_0\times{\mathbb R}_\infty\to[0,1]$ defined by (\ref{e.classicalFKPP}) and fix some $x\in\R^+_0$ as well as $t\in\mathcal C_{u(\cdot,x)}$. In the light of the c\`adl\`ag paths of $\Pi$ and the DCT, note first that $u$ satisfies the boundary condition (\ref{e.bc}), since $|B_1(0)|=1$ and $|B_n(0)|=0$, i.e. $g(x+\ln(|B_n(0)|))=1$, for all $n\in{\mathbb N}\setminus\{1\}$. In order to prove that $u$ solves (\ref{e.FKPP.0}) Lebesgue-a.e., let $(t_i)_{i\in\mathcal I^x}$ be the jump times of $\Pi^x$; in view of the finiteness of the dislocation measure and $N^x_t\le e^{x+ct}$, cf. (\ref{e.N_x}), we may assume without loss of generality that $\mathcal I^x={\mathbb N}$ and that $0<t_i< t_j$ for any $i,j\in{\mathbb N}$ with $i<j$. 
Since $t_1$ is exponentially distributed with parameter $\mu(\mathcal P)$, we have \[ \lim_{h\downarrow0}\frac{\mathbb P\left(t_1\le h\right)}{h}\notag = \lim_{h\downarrow0}\frac{1-e^{-h\mu(\mathcal P)}}{h} = \mu(\mathcal P) \] and deduce by resorting to the strong fragmentation property of $\Pi$ that \begin{equation}\label{second_jump} \begin{aligned} \lim_{h\downarrow0}\frac{\mathbb P(t_2\le h)}{h} &\le \lim_{h\downarrow0}\frac{\mathbb P(t_1\le h)}{h}\lim_{h\downarrow0}\mathbb E\left(\left.\mathbb P\left(\mathfrak{e}_{\mu(\mathcal P)e^{x+c t}}\le h\right)\right|_{t=t_1}\right) \\[0.5ex] &= \mu(\mathcal P)\mathbb E\left(\lim_{h\downarrow0}\left(1-e^{-h\mu(\mathcal P)e^{x+ct_1}}\right)\right) \\[0.5ex] &=0, \end{aligned} \end{equation} where $\mathfrak{e}_{\mu(\mathcal P)e^{x+c t_1}}$ denotes a random variable that is exponentially distributed with parameter $\mu(\mathcal P)e^{x+c t_1}$. Consequently, \begin{equation}\label{e.p.t.mainresult.1.1.3} \lim_{h\downarrow0}\frac{\mathbb P(t_1\le h<t_2)}{h} = \lim_{h\downarrow0}\frac{\mathbb P(t_1\le h)}{h}-\lim_{h\downarrow0}\frac{\mathbb P(t_2\le h)}{h} = \mu(\mathcal P). 
\end{equation} By means of the strong Markov property, the fact that the distribution of $\pi(t_1)$ is given by $\nicefrac{\mu(\cdot)}{\mu(\mathcal P)}$ and the independence between $\pi(t_1)$ and the random vector $(t_1,t_2)$, see Proposition~2 in Section 0.5 of \cite{Ber96}, we have \begin{align*} & \mathbb E\left(\prod_{n\in\mathcal N^{x-c(t+h)}_{t+h}}g(x+\ln(|B_n(t+h)|))\mathds1_{\left\{t_1\le h<t_2\right\}}\right) \\[0.75ex] &= \mathbb E\Bigg(\prod_{n\in{\mathbb N}}\Bigg(\mathds1_{\{x-c(t+h)+ct_1<-\ln(|\pi_n(t_1)|)\}}+\mathds1_{\{x-c(t+h)+ct_1\ge-\ln(|\pi_n(t_1)|)\}} \\[0.5ex] &\qquad \cdot\mathbb E\Bigg(\prod_{k\in\mathcal N^{x-c(t+h)+ct_1+\ln(|\pi_n(t_1)|)}_t}g\left(x+\ln(|\pi_n(t_1)|)+\ln\left(\left|B^{(n)}_k(t)\right|\right)\right)\mathds1_{\left\{t_1\le h<t_2\right\}}\Bigg|\mathscr F_{t_1}\Bigg)\Bigg)\Bigg) \\[0.75ex] &= \mathbb E\Bigg(\prod_{n\in{\mathbb N}}\Bigg(\mathds1_{\{x-c(t+h-t_1)<-\ln(|\pi_n(t_1)|)\}}+\mathds1_{\{x-c(t+h-t_1)\ge-\ln(|\pi_n(t_1)|)\}} \\[0.5ex] &\qquad \cdot\mathbb E\Bigg(\mathds1_{\left\{t_1\le h<t_2\right\}}\mathbb E\Bigg(\prod_{k\in\mathcal N^{x-c(t+h-t_1)+\ln(|\pi_n(t_1)|)}_t}g\left(x+\ln(|\pi_n(t_1)|)+\ln\left(\left|B^{(n)}_k(t)\right|\right)\right)\Bigg|\mathscr F_h\Bigg)\Bigg|\mathscr F_{t_1}\Bigg)\Bigg)\Bigg) \\[0.75ex] &= \mathbb E\Bigg(\mathds1_{\left\{t_1\le h<t_2\right\}}\prod_{n\in{\mathbb N}}\Bigg(\mathds1_{\{x-c(t+h-t_1)<-\ln(|\pi_n(t_1)|)\}}+\mathds1_{\{x-c(t+h-t_1)\ge-\ln(|\pi_n(t_1)|)\}} \\[0.5ex] &\qquad \cdot\mathbb E\Bigg(\prod_{k\in\mathcal N^{x+\ln(u)-ct}_t}g\left(x+\ln\left(u\right)+\ln(|B_k(t)|)\right)\Bigg)\Bigg|_{u=|\pi_n(t_1)|}\Bigg)\Bigg) \\[0.75ex] &= \mathbb P(t_1\le h<t_2)\mathbb E\left(\prod_{n\in{\mathbb N}}u_h(t,x+\ln(|\pi_n(t_1)|))\right) \\[0.75ex] &= \mathbb P(t_1\le h<t_2)\int_\mathcal P\prod_{n\in{\mathbb N}}u_h(t,x+\ln(|\pi_n|))\frac{\mu(\text{d}\pi)}{\mu(\mathcal P)}\, , \end{align*} where \[ u_h(t,\cdot)|_{[c(t+h-t_1),\infty)}:= 
u|_{[c(t+h-t_1),\infty)}\qquad\text{as well as}\qquad u_h(t,\cdot)|_{[-\infty,c(t+h-t_1))}:\equiv1. \] Therefore, (\ref{e.p.t.mainresult.1.1.3}) yields that \begin{align}\label{e.p.t.mainresult.1.1.3_b} & \lim_{h\downarrow0}\mathbb E\left(\frac{1}{h}\prod_{n\in\mathcal N^{x-c(t+h)}_{t+h}}g(x+\ln(|B_n(t+h)|))\mathds1_{\left\{t_1\le h<t_2\right\}}\right) \notag \\[0.75ex] &= \int_\mathcal P\prod_{n\in{\mathbb N}}\lim_{h\downarrow0}u_h(t,x+\ln(|\pi_n|))\mu(\text{d}\pi)\frac{1}{\mu(\mathcal P)}\lim_{h\downarrow0}\frac{\mathbb P(t_1\le h<t_2)}{h} \\[0.75ex] &= \int_\mathcal P\prod_{n\in{\mathbb N}}u(t,x+\ln(|\pi_n|))\mu(\text{d}\pi).\notag \end{align} Moreover, \begin{align*} &\mathbb E\left(\prod_{n\in\mathcal N^{x-ct}_t}g(x+\ln(|B_n(t)|))\mathds1_{\left\{t^{(n)}_1\le h<t^{(n)}_2\right\}}\right) \\[0.5ex] &= \mathbb E\left(\prod_{n\in\mathcal N^{x-ct}_t}g(x+\ln(|B_n(t)|))\mathbb P\left(\left.t^{(n)}_1\le h<t^{(n)}_2\right|\mathscr F_t\right)\right) \\[0.5ex] &=u(t,x)\mathbb P(t_1\le h<t_2) \\[0.5ex] &=\int_\mathcal Pu(t,x)\mu(\text{d}\pi)\frac{1}{\mu(\mathcal P)}\mathbb P(t_1\le h<t_2) \end{align*} holds for all $h>0$, where conditionally on $\mathscr F_t$ the $t^{(n)}_1$ and $t^{(n)}_2$ are independent copies of $t_1 $ and $t_2$, respectively. Hence, \begin{align}\label{e.p.t.mainresult.1.1.3_c} \lim_{h\downarrow0}\mathbb E\left(\frac{1}{h}\prod_{n\in\mathcal N^{x-ct}_t}g(x+\ln(|B_n(t)|))\mathds1_{\left\{t^{(n)}_1\le h<t^{(n)}_2\right\}}\right) \notag &= \int_\mathcal Pu(t,x)\mu(\text{d}\pi)\frac{1}{\mu(\mathcal P)}\lim_{h\downarrow0}\frac{\mathbb P(t_1\le h<t_2)}{h} \\[0.5ex] &= \int_\mathcal Pu(t,x)\mu(\text{d}\pi). 
\end{align} Since $|B_1(h)|=1$ and $\mathcal N^{x-c(t+h)}_h=\{1\}$ on $\{t_1> h\}$, we deduce with $t^{(n)}_1$ being defined as above that \begin{equation}\label{no_jump} \begin{aligned} & \mathbb E\left(\prod_{n\in\mathcal N^{x-c(t+h)}_{t+h}}g(x+\ln(|B_n(t+h)|))\mathds1_{\left\{t_1> h\right\}}\right) \\[0.75ex] &= \mathbb E\left(\left.\mathbb E\left.\left(\prod_{n\in\tilde{\mathcal N}^{x-c(t+h)+ch}_t}g(x+\ln(\gamma)+\ln(|B^{(n)}(t)|))\right|\mathscr F_h\right)\right|_{\gamma=|B_1(h)|}\mathds1_{\left\{t_1> h\right\}}\right) \\[0.75ex] &= \mathbb E\left(\mathbb E\left(\prod_{n\in\mathcal N^{x-ct}_t}g(x+\ln(|B_n(t)|))\right)\mathds1_{\left\{t_1> h\right\}}\right) \\[0.75ex] &= \mathbb E\left(\prod_{n\in\mathcal N^{x-ct}_t}g(x+\ln(|B_n(t)|))\right)\mathbb P\left(t_1> h\right) \\[0.75ex] &= \mathbb E\left(\prod_{n\in\mathcal N^{x-ct}_t}g(x+\ln(|B_n(t)|))\mathbb P\left(t_1> h\right)\right) \\[0.75ex] &= \mathbb E\left(\prod_{n\in\mathcal N^{x-ct}_t}g(x+\ln(|B_n(t)|))\mathbb P\left(\left.t^{(n)}_1> h\right|\mathscr F_t\right)\right) \\[0.75ex] &=\mathbb E\left(\prod_{n\in\mathcal N^{x-ct}_t}g(x+\ln(|B_n(t)|))\mathds1_{\left\{t^{(n)}_1> h\right\}}\right) \end{aligned} \end{equation} holds for each $h>0$, where conditionally on $\mathscr F_h$ the $\tilde{\mathcal N}^{(\cdot)}_t$ and $B^{(n)}$ are independent copies of $\mathcal N^{(\cdot)}_t$ and $B_n$, respectively. Furthermore, note that (\ref{second_jump}) results in \begin{equation}\label{second_jump_b} \lim_{h\downarrow0}\frac{\mathbb P\left(t^{(n)}_2\le h\right)}{h}=\lim_{h\downarrow0}\frac{\mathbb E\left(\mathbb P\left(\left.t^{(n)}_2\le h\right|\mathscr F_t\right)\right)}{h}=\lim_{h\downarrow0}\frac{\mathbb P\left(t_2\le h\right)}{h}=0, \end{equation} where $t^{(n)}_2$ is defined as above. 
Bearing in mind that $|g|\le1$ it follows from the DCT in conjunction with (\ref{second_jump}) and (\ref{second_jump_b}), respectively, that \begin{align*} \lim_{h\downarrow 0}\mathbb E\left(\prod_{n\in\mathcal N^{x-c(t+h)}_{t+h}}g(x+\ln(|B_n(t+h)|))\mathds1_{\left\{t_2\le h\right\}}\right) &=0 \\[0.5ex] &=\lim_{h\downarrow 0}\mathbb E\left(\prod_{n\in\mathcal N^{x-ct}_t}g(x+\ln(|B_n(t)|))\mathds1_{\left\{t^{(n)}_2\le h\right\}}\right). \end{align*} Since \begin{align*} u(t+h,x) &= \mathbb E\left(\prod_{n\in\mathcal N^{x-c(t+h)}_{t+h}}g(x+\ln(|B_n(t+h)|))\mathds1_{\left\{t_1>h\right\}}\right) \\[0.75ex] &\qquad +\mathbb E\left(\prod_{n\in\mathcal N^{x-c(t+h)}_{t+h}}g(x+\ln(|B_n(t+h)|))\mathds1_{\left\{t_2\le h\right\}}\right) \\[0.75ex] &\qquad +\mathbb E\left(\prod_{n\in\mathcal N^{x-c(t+h)}_{t+h}}g(x+\ln(|B_n(t+h)|))\mathds1_{\left\{t_1\le h<t_2\right\}}\right) \end{align*} and \begin{align*} u(t,x) &= \mathbb E\left(\prod_{n\in\mathcal N^{x-ct}_t}g(x+\ln(|B_n(t)|))\mathds1_{\left\{t^{(n)}_1> h\right\}}\right)+ \mathbb E\left(\prod_{n\in\mathcal N^{x-ct}_t}g(x+\ln(|B_n(t)|))\mathds1_{\left\{t^{(n)}_2\le h\right\}}\right) \\[0.5ex] &\qquad +\mathbb E\left(\prod_{n\in\mathcal N^{x-ct}_t}g(x+\ln(|B_n(t)|))\mathds1_{\left\{t^{(n)}_1\le h<t^{(n)}_2\right\}}\right) \end{align*} hold for every $h>0$, it thus follows from (\ref{e.p.t.mainresult.1.1.3_b}), (\ref{e.p.t.mainresult.1.1.3_c}) and (\ref{no_jump}) that \[ \lim_{h\downarrow 0}\frac{u(t+h,x)-u(t,x)}{h} = \int_\mathcal P\left(\prod_{n\in{\mathbb N}}u(t,x+\ln(|\pi_n|))-u(t,x)\right)\mu(\text{d}\pi), \] which completes the proof, since $t\in\mathcal C_{u(\cdot,x)}$. $\square$ \section{Sufficiency criterion for the existence of travelling waves}\label{s.cspm} The goal of this section is to provide the proof of Theorem~\ref{p.t.1.2}. A first approach to proving Theorem~\ref{p.t.1.2} might be to follow the line of argument of the proof of Theorem~1 in \cite{104}. 
But that proof relies on $f$ being continuously differentiable, whereas in our situation we cannot use any differentiability assumption. In fact, even if we knew that $f$ is differentiable with a bounded derivative $f'$, we would at least need that the set of discontinuities of $f'$ is a Lebesgue null set. However, in general the set of such discontinuities may have positive Lebesgue measure, cf. Example~3.5 in \cite{119}. Let us start with the following auxiliary result. \begin{lemma}\label{l.MVT} Let $f\in\mathcal T$ and let $a,b\in\R^+$ with $a<b$. Then we have \[ f(a)-f(b)\le(b-a)\sup_{x\in(a,b)}|f'_+(x)|. \] \end{lemma} \begin{pf} Define a function $\phi:[a,b]\to{\mathbb R}$ by \[ \phi(x):=f(x)-\frac{f(b)-f(a)}{b-a}(x-a) \] for all $x\in[a,b]$. Let us first show that there exists some $x_0\in(a,b)$ such that \begin{equation}\label{e.MVT.-1} \limsup_{h\downarrow0}\frac{\phi(x_0+h)-\phi(x_0)}{h}\le0. \end{equation} To this end, assume \begin{equation}\label{e.MVT.0} \phi'_+(x):=\limsup_{h\downarrow0}\frac{\phi(x+h)-\phi(x)}{h}>0 \end{equation} for each $x\in(a,b)$. Then for every $x\in(a,b)$ there exists some $\epsilon_x>0$ such that for every $\epsilon\in(0,\epsilon_x]$ we have \begin{equation}\label{e.MVT.1} \frac{\phi(x+h)-\phi(x)}{h}>0 \end{equation} for some $h\in(0,\epsilon)$. We now show that this implies that $\phi$ is nondecreasing on $[a,b]$. For this purpose, consider $c,d\in[a,b]$ and assume \begin{equation}\label{e.MVT.2} \max_{x\in[c,d]}\phi(x)\ne\phi(d), \end{equation} where the existence of this maximum follows from the continuity of $\phi$, which in turn follows from $f\in\mathcal T$ being continuous. Then there exists some $x_0\in[c,d)$ such that \[ \max_{x\in[c,d]}\phi(x)=\phi(x_0). \] However, this implies that $\phi(x_0)\ge\phi(x)$ for all $x\in(x_0,(x_0+\epsilon_{x_0})\land d)$, which contradicts (\ref{e.MVT.1}).
Hence, (\ref{e.MVT.2}) cannot be true and consequently we infer that \begin{equation}\label{e.MVT.3} \max_{x\in[c,d]}\phi(x)=\phi(d) \end{equation} for all $c,d\in[a,b]$ under assumption~(\ref{e.MVT.0}). Note that $\phi$ not being nondecreasing on $[a,b]$ would entail that there exist $c,d\in[a,b]$, with $c<d$, such that $\phi(c)>\phi(d)$, which contradicts (\ref{e.MVT.3}). Therefore, we conclude that $\phi$ is nondecreasing and, by (\ref{e.MVT.1}), nonconstant on $[a,b]$ if (\ref{e.MVT.0}) holds. This, however, contradicts the fact that \[ \phi(a)=f(a)=\phi(b), \] since a nondecreasing nonconstant function on $[a,b]$ satisfies $\phi(a)<\phi(b)$. We thus deduce that (\ref{e.MVT.0}) cannot hold and hence there exists some $x_0\in(a,b)$ such that (\ref{e.MVT.-1}) holds. With $x_0\in(a,b)$ given by (\ref{e.MVT.-1}) we obtain \[ 0\ge\phi'_+(x_0)=f'_+(x_0)-\frac{f(b)-f(a)}{b-a}, \] which results in \[ 0\le \sup_{x\in(a,b)}|f'_+(x)|-\frac{f(a)-f(b)}{b-a} \] and thus \[ f(a)-f(b)\le (b-a)\sup_{x\in(a,b)}|f'_+(x)|. \] \end{pf} We proceed by establishing two auxiliary results, which in spirit are analogues of respective results in \cite{104}. Afterwards we provide a lemma giving conditions under which only the block containing $1$ is alive in the fragmentation process with killing. Finally, having all these auxiliary results at hand, we finish this section with the proof of Theorem~\ref{p.t.1.2}. Observe first that a straightforward argument by induction yields that \begin{equation}\label{e.estimate} \left|\prod_{n\in{\mathbb N}}a_n-\prod_{n\in{\mathbb N}}b_n\right|\le\sum_{n\in{\mathbb N}}|a_n-b_n| \end{equation} holds for all sequences $(a_n)_{n\in{\mathbb N}},(b_n)_{n\in{\mathbb N}}\in[0,1]^{\mathbb N}$. The following lemma, whose proof is based on (\ref{e.estimate}), shows in particular that $f\in\mathcal D_L$ and, moreover, that $Lf$ is bounded on compact sets for any $f\in\mathcal T$. \begin{lemma}\label{l.p.t.1.2.4.0} Let $f\in\mathcal T$ and let $a,b\in\R^+$.
Then \[ \int_{\mathcal P}\sup_{x\in[a,b]}\left|\prod_{n\in{\mathbb N}}f(x+\ln(|\pi_n|))-f(x)\right|\mu(\text{d}\pi)<\infty. \] \end{lemma} \begin{pf} By means of (\ref{e.estimate}) we have \begin{align}\label{e.l.p.t.1.2.3.1} & \int_{\mathcal P}\sup_{x\in[a,b]}\left|\prod_{n\in{\mathbb N}}f(x+\ln(|\pi_n|))-f(x)\right|\mu(\text{d}\pi) \\[0.5ex] &\le \int_{\mathcal P}\sup_{x\in[a,b]}\left|f(x+\ln(|\pi|^\downarrow_1))-f(x)\right|\mu(\text{d}\pi)+\int_{\mathcal P}\sum_{n\in{\mathbb N}\setminus\{1\}}\sup_{x\in[a,b]}\left|f(x+\ln(|\pi|^\downarrow_n))-1\right|\mu(\text{d}\pi).\notag \end{align} Since \[ \frac{\text{d}}{\text{d} x}[\ln(x)+2(1-x)]=\frac{1}{x}-2 \] and $\ln(1)+2(1-1)=0$, we deduce that \[ -\ln(x)\le2(1-x) \] holds for all $x\in[\nicefrac{1}{2},1]$. Therefore, for every $\epsilon\in(0,\nicefrac{1}{2}]$ we have \begin{equation}\label{e.l.p.t.1.2.3.1a} -\ln(|\pi|^\downarrow_1)\le2(1-|\pi|^\downarrow_1) \end{equation} for all $\pi\in\mathcal P$ with $1-|\pi|^\downarrow_1\le\epsilon$. Moreover, by means of Lemma~\ref{l.MVT} we have for any $x\in\R^+$ and $\pi\in\mathcal P$ with $|\pi|^\downarrow_1>e^{-x}$ the estimate \begin{equation}\label{e.l.p.t.1.2.3.1b2} \left|f\left(x+\ln(|\pi|^\downarrow_1)\right)-f(x)\right|\le-\ln(|\pi|^\downarrow_1)\sup_{y\in\left(x+\ln(|\pi|^\downarrow_1),\,x\right)}|f'_+(y)|. \end{equation} Furthermore, for every $\gamma\in(0,a)$ define \[ A_{a,\gamma}:=\left\{\pi\in\mathcal P:a+\ln(|\pi|^\downarrow_1)\in[0,\gamma)\right\}=\left\{\pi\in\mathcal P:|\pi|^\downarrow_1\in[e^{-a},e^{\gamma-a})\right\} \] and observe that in view of $\gamma-a<0$ and (\ref{e.levymeasure}) we have \[ \mu\left(A_{a,\gamma}\right)\le\mu\left(\{\pi\in\mathcal P:|\pi|^\downarrow_1<e^{\gamma-a}\}\right)<\infty. 
\] Hence, resorting to (\ref{e.levymeasure}), (\ref{finite_derivative}) and (\ref{e.l.p.t.1.2.3.1a}) as well as (\ref{e.l.p.t.1.2.3.1b2}) we conclude in the light of (3) in \cite{HKK10} and $f(x)\in[0,1]$ for every $x>0$ that \begin{align*} &\int_{\mathcal P}\sup_{x\in[a,b]}\left|f(x+\ln(|\pi|^\downarrow_1))-f(x)\right|\mu(\text{d}\pi)\notag \\[0.5ex] &\le \int_{\{\pi\in\mathcal P:1-|\pi|^\downarrow_1>\epsilon\}\cup A_{a,\gamma}}\sup_{x\in[a,b]}\left|f(x+\ln(|\pi|^\downarrow_1))-f(x)\right|\mu(\text{d}\pi)\notag \\[0.5ex] &\qquad +\int_{\{\pi\in\mathcal P:1-|\pi|^\downarrow_1\le\epsilon\}\setminus A_{a,\gamma}}\sup_{x\in[a,b]}\left|f(x+\ln(|\pi|^\downarrow_1))-f(x)\right|\mu(\text{d}\pi) \\[0.5ex] &\le \mu\left(\{\pi\in\mathcal P:1-|\pi|^\downarrow_1>\epsilon\}\cup A_{a,\gamma}\right)+\int_{\{\pi\in\mathcal P:1-|\pi|^\downarrow_1\le\epsilon\}\setminus A_{a,\gamma}}-\ln(|\pi|^\downarrow_1)\sup_{y\in(a+\ln(|\pi|^\downarrow_1),b)}|f_+'(y)|\,\mu(\text{d}\pi)\notag \\[0.5ex] &\le \mu\left(\{\pi\in\mathcal P:|\pi|^\downarrow_1<1-\epsilon\}\right)+\mu\left(A_{a,\gamma}\right)+2\sup_{y\in[\gamma,b)}|f_+'(y)|\int_{\mathcal P}(1-|\pi|^\downarrow_1)\,\mu(\text{d}\pi)\notag \\[0.5ex] &< \infty\notag \end{align*} for any $\epsilon\in(0,\nicefrac{1}{2}]$, which shows that the first term on the right-hand side of (\ref{e.l.p.t.1.2.3.1}) is finite. 
In order to deal with the second term on the right-hand side of (\ref{e.l.p.t.1.2.3.1}), note that the monotonicity of $f$ together with $f|_{(-\infty,0)}\equiv1$ and $f|_{[0,\infty)}\in[0,1]$ yields that \begin{align*} \int_{\mathcal P}\sum_{n\in{\mathbb N}\setminus\{1\}}\sup_{x\in[a,b]}|1-f(x+\ln(|\pi|^\downarrow_n))|\mu(\text{d}\pi) &\le \int_{\mathcal P}\sum_{n\in{\mathbb N}\setminus\{1\}}|1-f(b+\ln(|\pi|^\downarrow_n))|\mu(\text{d}\pi) \\[0.5ex] &\le \int_{\mathcal P}\sum_{n\in{\mathbb N}\setminus\{1\}}e^{(b+\ln(|\pi|^\downarrow_n))}\mu(\text{d}\pi)\notag \\[0.5ex] &= e^b\int_{\mathcal P}\sum_{n\in{\mathbb N}\setminus\{1\}}|\pi|^\downarrow_n\mu(\text{d}\pi) \\[0.5ex] &< \infty. \end{align*} Observe that the finiteness holds, since \[ \int_{\mathcal P}\sum_{n\in{\mathbb N}\setminus\{1\}}|\pi|^\downarrow_n\mu(\text{d}\pi) =\int_{\mathcal P}\left(\left(1-|\pi|^\downarrow_1\right)+\left(\sum_{n\in{\mathbb N}}|\pi|^\downarrow_n-1\right)\right)\mu(\text{d}\pi) \le \int_{\mathcal P}(1-|\pi|^\downarrow_1)\mu(\text{d}\pi) < \infty, \] where we used that $\sum_{n\in{\mathbb N}}|\pi|^\downarrow_n\le1$ for every $\pi\in\mathcal P$. Consequently, also the second term on the right-hand side of (\ref{e.l.p.t.1.2.3.1}) is finite. \end{pf} As already mentioned, the previous lemma implies that $Lf$ exists for each $f\in\mathcal T$. The next lemma goes a step further in that it shows that $Lf$ is continuous for every $f\in\mathcal T$. \begin{lemma}\label{l.p.t.1.2.4} Let $f\in\mathcal T$. Then the function $Lf$ is continuous on $\R^+$. \end{lemma} \begin{pf} Fix some $x\in\R^+$ and let $(x_k)_{k\in{\mathbb N}}$ be a sequence in $\R^+$ with $x_k\to x$ as $k\to\infty$. In addition, fix some $\epsilon\in(0,x)$ and let $k_\epsilon\in{\mathbb N}$ be such that $|x-x_k|\le\epsilon$ for all $k\ge k_\epsilon$.
Observe that \begin{align}\label{e.l.p.t.1.2.3.3} & \int_{\mathcal P}\sup_{k\ge k_\epsilon}\left|\prod_{n\in{\mathbb N}}f(x+\ln(|\pi_n|))-f(x)-\prod_{n\in{\mathbb N}}f(x_k+\ln(|\pi_n|))+f(x_k)\right|\mu(\text{d}\pi) \\[0.5ex] &\le \int_{\mathcal P}\left|\prod_{n\in{\mathbb N}}f(x+\ln(|\pi_n|))-f(x)\right|\mu(\text{d}\pi) +\int_{\mathcal P}\sup_{y\in[x-\epsilon,\,x+\epsilon]}\left|\prod_{n\in{\mathbb N}}f(y+\ln(|\pi_n|))-f(y)\right|\mu(\text{d}\pi).\notag \end{align} According to Lemma~\ref{l.p.t.1.2.4.0} both of the integrals on the right-hand side of (\ref{e.l.p.t.1.2.3.3}) are finite. Hence, in view of the triangle inequality \[ |Lf(x)-Lf(x_k)|\le\int_{\mathcal P}\left|\prod_{n\in{\mathbb N}}f(x+\ln(|\pi_n|))-f(x)-\prod_{n\in{\mathbb N}}f(x_k+\ln(|\pi_n|))+f(x_k)\right|\mu(\text{d}\pi), \] we can apply the DCT and deduce that \begin{align*} &\limsup_{k\to\infty}|Lf(x)-Lf(x_k)| \\[0.5ex] &\le \int_{\mathcal P}\lim_{k\to\infty}\left|\prod_{n\in{\mathbb N}}f(x+\ln(|\pi_n|))-f(x)-\prod_{n\in{\mathbb N}}f(x_k+\ln(|\pi_n|))+f(x_k)\right|\mu(\text{d}\pi) \\[0.5ex] &= \int_{\mathcal P}\left|\prod_{n\in{\mathbb N}}f(x+\ln(|\pi_n|))-\prod_{n\in{\mathbb N}}\lim_{k\to\infty}f(x_k+\ln(|\pi_n|))-f(x)+\lim_{k\to\infty}f(x_k)\right|\mu(\text{d}\pi) \\[0.5ex] &= 0, \end{align*} where the final equality follows from $f\in\mathcal T$ being continuous on $\R^+$. Notice that we can interchange the limit and the product in the penultimate equality, since only finitely many factors of the product differ from 1. Hence, we have proven the continuity of $Lf$ at $x$ and since $x\in\R^+$ was chosen arbitrarily, this completes the proof. \end{pf} Recall the process $Z^{x,f}$ that we defined in (\ref{e.prod_martingale}) and set \[ \Delta Z^{x,f}_t:=Z^{x,f}_t-Z^{x,f}_{t-} \] for every $t>0$. We are now in a position to prove Theorem~\ref{p.t.1.2}. {\bf Proof of Theorem~\ref{p.t.1.2}} Throughout the proof let $x\in\mathcal C_f$ and let $(a_n)_{n\in{\mathbb N}}$ be a sequence in $(0,1)$ with $a_n\downarrow 0$ as $n\to\infty$.
Moreover, consider the following stopping time \begin{equation}\label{d.delta} \delta:=\inf\left\{t>0:x+ct+\ln\left(|\Pi(t)|^\downarrow_2\right)>0\right\}\land\tau^-_{1,0}\land1. \end{equation} Recall from Lemma~\ref{l.p.t.1.2.4.0} that $f\in\mathcal D_L$. The idea of the proof is to consider an appropriate decomposition of the limit of $\mathbb E(Z^{x,f}_{\delta\land a_n}-Z^{x,f}_{0})a_n^{-1}$ as $n\to\infty$, which by the martingale property of $Z^{x,f}$ equals $0$. In this spirit the proof deals with the jumps and drift that contribute to the difference $Z^{x,f}_{\delta\land a_n}-Z^{x,f}_{0}$ separately and eventually combines these considerations in order to prove the assertion. Let us first deal with the jumps of $\Pi^x$ that contribute to the difference $Z^{x,f}_{\delta\land a_n}-Z^{x,f}_{0}$. To this end, we start by pointing out that $\delta>0$ $\mathbb P$-almost surely. Indeed, if $\delta=\tau^-_{1,0}\land1$, then the $\mathbb P$-a.s. positivity of $\delta$ follows, since for $X_1$ the point $0$ is irregular for $(-\infty,0)$. In order to deal with the case $\delta<\tau^-_{1,0}\land1$, note that \[ x+c\delta+\ln\left(|\Pi(\delta)|^\downarrow_1\right)\ge x+c\delta+\ln\left(|B_1(\delta)|\right)=X^x_1(\delta)\ge x\Longrightarrow|\Pi(\delta)|^\downarrow_1\ge e^{-c\delta} \] and \[ x+c\delta+\ln\left(|\Pi(\delta)|^\downarrow_2\right)\ge0 \Longrightarrow|\Pi(\delta)|^\downarrow_2\ge e^{-(x+c\delta)} \] hold on the event $\{\delta<\tau^-_{1,0}\land1\}$. Therefore, since $|\Pi(\delta)|^\downarrow_1+|\Pi(\delta)|^\downarrow_2\le1$, on this event we have \[ e^{-c\delta}\le|\Pi(\delta)|^\downarrow_1\le1-e^{-(x+c\delta)}, \] which implies that \[ \delta\ge\frac{1}{c}\ln\left(1+e^{-x}\right)>0 \] on $\{\delta<\tau^-_{1,0}\land1\}$.
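For completeness, the elementary algebra behind this lower bound reads:
\[
e^{-c\delta}\le1-e^{-(x+c\delta)}
\;\Longleftrightarrow\;
e^{-c\delta}\left(1+e^{-x}\right)\le1
\;\Longleftrightarrow\;
\delta\ge\frac{1}{c}\ln\left(1+e^{-x}\right),
\]
so the bound is deterministic and depends only on $x$ and $c$.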
The compensation formula for Poisson point processes yields that \begin{align*} & \frac{1}{a_n}\mathbb E\left(\sum_{i\in\mathcal I}\mathds1_{(0,\delta\land a_n]}(t_i)\Delta Z^{x,f}_{t_i}\right) \\[0.5ex] &= \frac{1}{a_n}\mathbb E\left(\sum_{i\in\mathcal I}\mathds1_{(0,\delta\land a_n]}(t_i)\left(\prod_{l\in{\mathbb N}}f(X^x_1(t_i-)+\ln(|\pi_l(t_i)|))-f(X^x_1(t_i-))\right)\right) \\[0.5ex] &= \frac{1}{a_n}\mathbb E\left(\int_{(0,\delta\land a_n]}\int_\mathcal P\left(\prod_{l\in{\mathbb N}}f(X^x_1(t-)+\ln(|\pi_l|))-f(X^x_1(t-))\right)\mu(\text{d}\pi)\text{d} t\right) \\[0.5ex] &= \mathbb E\left(\frac{1}{a_n}\int_{(0,\delta\land a_n]}Lf\left(X^x_1(t-)\right)\text{d} t\right). \end{align*} Since \[ \min_{y\in[x,\,x+ca_n]}Lf(y)\mathbb E\left(\frac{\delta\land a_n}{a_n}\right) \le \mathbb E\left(\frac{1}{a_n}\int_{(0,\delta\land a_n]}Lf\left(X^x_1(t-)\right)\text{d} t\right) \le\max_{y\in[x,\,x+ca_n]}Lf(y)\mathbb E\left(\frac{\delta\land a_n}{a_n}\right), \] and since the $\mathbb P$-a.s. positivity of $\delta$ together with $a_n\downarrow0$ and the DCT yields $\mathbb E\left(\frac{\delta\land a_n}{a_n}\right)\to1$ as $n\to\infty$, we thus infer by means of the continuity of $Lf$ established in Lemma~\ref{l.p.t.1.2.4} that \begin{equation}\label{e.jumps_2} \lim_{n\to\infty}\frac{\mathbb E\left(\sum_{i\in\mathcal I}\mathds1_{(0,\delta\land a_n]}(t_i)\Delta Z^{x,f}_{t_i}\right)}{a_n} =Lf(x). \end{equation} Let us now deal with the remaining contribution to the difference $Z^{x,f}_{\delta\land a_n}-Z^{x,f}_{0}$. For this purpose, consider the process $(\hat Z^{x,f}_t)_{t\in\R^+_0}$, given by \[ \hat Z^{x,f}_t:=Z^{x,f}_t-\sum_{i\in\mathcal I:t_i\le t}\Delta Z^{x,f}_{t_i}.
\] Since, according to Lemma~\ref{l.MVT} and (\ref{finite_derivative}), \[ \mathbb E\left(\sup_{n\in{\mathbb N}}\left|\frac{f(x+c(\delta\land a_n))-f(x)}{c(\delta\land a_n)}\cdot\frac{\delta\land a_n}{a_n}\right|\right) \le\mathbb E\left(\sup_{y\in(x,x+c(\delta\land a_n))}|f'_+(y)|\right) \le\sup_{y\in(x,x+c)}|f'_+(y)|<\infty, \] we deduce by applying the DCT that \begin{equation}\label{e.derivative_f} \begin{aligned} \lim_{n\to\infty}\frac{\mathbb E\left(\hat Z^{x,f}_{\delta\land a_n}-\hat Z^{x,f}_0\right)}{a_n} &= \lim_{n\to\infty}\mathbb E\left(\frac{f(x+c(\delta\land a_n))-f(x)}{a_n}\right) \\[0.5ex] &= c\,\lim_{n\to\infty}\mathbb E\left(\frac{f(x+c(\delta\land a_n))-f(x)}{c(\delta\land a_n)}\cdot\frac{\delta\land a_n}{a_n}\right) \\[0.5ex] &= c\,\mathbb E\left(\lim_{n\to\infty}\frac{f(x+c(\delta\land a_n))-f(x)}{c(\delta\land a_n)}\cdot\lim_{n\to\infty}\frac{\delta\land a_n}{a_n}\right) \\[0.5ex] &=cf'(x). \end{aligned} \end{equation} Combining (\ref{e.jumps_2}) with (\ref{e.derivative_f}) yields that \begin{align*} 0 &=\lim_{n\to\infty}\frac{\mathbb E\left(Z^{x,f}_{\delta\land a_n}-Z^{x,f}_{0}\right)}{a_n}\notag \\[0.5ex] &= \lim_{n\to\infty}\frac{\mathbb E\left(\sum_{i\in\mathcal I}\mathds1_{\{t_i\in(0,\delta\land a_n]\}}\Delta Z^{x,f}_{t_i}\right)}{a_n}+\lim_{n\to\infty}\frac{\mathbb E\left(\hat Z^{x,f}_{\delta\land a_n}-\hat Z^{x,f}_0\right)}{a_n}\notag \\[0.5ex] &= Lf(x)+cf'(x) \end{align*} holds for all $x\in\mathcal C_f$, where the first equality results from the martingale property of $Z^{x,f}$ in conjunction with the optional sampling theorem. Consequently, $f$ solves (\ref{e.FKPP.1}), which completes the proof. 
\iffalse In the light of (\ref{finite_derivative}) it thus follows from Proposition~\ref{fTCDD} and Lebesgue's integrability criterion for Riemann integrals as well as the continuity of $f$ that \begin{equation}\label{e.final_step} f(t)-f(s) = \int_{[s+a_n,t+a_n]} f'_+(u)\text{d} u =\int_s^t-\frac{1}{c}Lf(u)\text{ d} u =F(t)-F(s) \end{equation} for all $s,t\in\R^+$, where $F$ is an antiderivative of $-c^{-1}Lf$. Note that above we used that $\R^+\setminus\mathcal C_f$ is a Lebesgue null set in order to use (\ref{e.l.t.1.1.2c}) for getting the second equality. According to (\ref{e.final_step}) we have $f=F+\text{const.}$ and consequently $f$ solves (\ref{e.FKPP.1}). \fi $\square$ \section{Analytic properties of one-sided FKPP travelling waves}\label{s.ap} In this section we provide the proof of Theorem~\ref{t.ap}. For this purpose we shall resort to the following version of the fundamental theorem of calculus for Dini derivatives, taken from \cite{HT06}. \begin{proposition}[Theorem~11 in \cite{HT06}]\label{fTCDD} Let $a,b\in{\mathbb R}$ with $a<b$. If $f$ is a continuous function that has a finite Dini derivative $f'_+(y)$ for every $y\in[a,b]$, then \begin{equation}\label{e.fTCDD} f(b)-f(a) = \int_{[a,b]} f'_+(y)\text{ d} y, \end{equation} provided that $f'_+$ is Lebesgue integrable over $[a,b]$. \end{proposition} This version of the fundamental theorem of calculus for Dini derivatives will be used in the proof of Proposition~\ref{l.uniqueness} that we are now going to present. Furthermore, we shall resort to Proposition~\ref{fTCDD} also in the proof of Theorem~\ref{t.ap}, where we show that travelling waves are continuously differentiable. Let us point out that $f$ having finite Dini derivatives is essential in Proposition~\ref{fTCDD}. Indeed, for singular functions $f$, such as the Cantor function, the equality in (\ref{e.fTCDD}) does not hold true, since in that case $f'_+=0$ Lebesgue-a.e. but $f$ is not a constant function.
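Let us sketch why Proposition~\ref{fTCDD} is applicable to any $f\in\mathcal T$ on an interval $[a,b]\subset\R^+$. Since $f$ is continuous, $f'_+(y)=\inf_{n\in{\mathbb N}}\sup_{h\in(0,\nicefrac1n)\cap\mathbb Q}\frac{f(y+h)-f(y)}{h}$ is Borel measurable as a countable infimum of countable suprema of continuous difference quotients, and in view of (\ref{finite_derivative}) we have
\[
\int_{[a,b]}\left|f'_+(y)\right|\text{ d} y\le(b-a)\sup_{y\ge a}\left|f'_+(y)\right|<\infty,
\]
so $f'_+$ is finite on $[a,b]$ and Lebesgue integrable over $[a,b]$.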
Let us proceed with the following proposition that shows uniqueness of one-sided FKPP travelling waves in $\mathcal T$ with wave speed $c>c_{\bar p}$. Our method of proof for this result makes use of Proposition~\ref{fTCDD}. \begin{proposition}\label{l.uniqueness} Any one-sided FKPP travelling wave $f\in\mathcal T$ with wave speed $c>c_{\bar p}$ satisfies \[ f=\varphi. \] \end{proposition} \begin{pf} Let $f\in\mathcal T$ be a function that solves (\ref{e.FKPP.1}) and fix some $x>0$. Notice first that the map \[ t\mapsto \hat Z^{x,f}_t:=Z^{x,f}_t-\sum_{i\in\mathcal I:t_i\le t}\Delta Z^{x,f}_{t_i} \] is continuous. In addition, recall from (\ref{d.delta}) the definition \[ \delta:=\inf\left\{t>0:x+ct+\ln\left(|\Pi(t)|^\downarrow_2\right)>0\right\}\land\tau^-_{1,0}\land1 \] and observe by means of (\ref{finite_derivative}) that the Dini derivative $\hat Z^{x,f}_+$, given by \begin{equation}\label{e.l.t.1.1.0.1} \hat Z^{x,f}_+(s):=\limsup_{h\downarrow0}\frac{\hat Z^{x,f}_{s+h}-\hat Z^{x,f}_s}{h} =c f'_+(X^x_1(s)) \end{equation} for all $s\in[0,\delta]$, is a finite Lebesgue measurable function, since $f'_+$ and $s\mapsto X^x_1(s)$ are Lebesgue measurable. Moreover, in view of (\ref{finite_derivative}) we also infer that \[ \int_{[0,\delta]}\left|\hat Z^{x,f}_+(s)\right|\text{d} s \le c\int_{[0,\delta)}\left|f'_+(X^x_1(s))\right|\text{d} s \le c \int_{[0,1)}\sup_{y\in[x,\,x+c]}\left|f'_+(y)\right|\text{d} s = c\sup_{y\in[x,\,x+c]}\left|f'_+(y)\right| <\infty \] holds $\mathbb P$-almost surely. Hence, $\hat Z^{x,f}_+(s)$ is Lebesgue integrable over $[0,\delta]$.
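To clarify where the factor $c$ in (\ref{e.l.t.1.1.0.1}) comes from: heuristically, on $[0,\delta]$ the only varying factor of $\hat Z^{x,f}$ is the one attached to the first block, which drifts linearly at speed $c$ between dislocation times, and for the Dini derivative the substitution $h'=ch$ gives, as a sketch, for $y\in\R^+$,
\[
\limsup_{h\downarrow0}\frac{f(y+ch)-f(y)}{h}
=c\,\limsup_{h\downarrow0}\frac{f(y+ch)-f(y)}{ch}
=c\,\limsup_{h'\downarrow0}\frac{f(y+h')-f(y)}{h'}
=c\,f'_+(y),
\]
which, applied with $y=X^x_1(s)$, yields the right-hand side of (\ref{e.l.t.1.1.0.1}).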
According to Proposition~\ref{fTCDD} we thus obtain that \begin{equation}\label{e.l.t.1.1.0.2} Z^{x,f}_{\delta\land a_n}-Z^{x,f}_0=\hat Z^{x,f}_{\delta\land a_n}-\hat Z^{x,f}_0+\sum_{i\in\mathcal I:t_i\le \delta\land a_n}\Delta Z^{x,f}_{t_i}=\int_{[0,\delta\land a_n]} \hat Z^{x,f}_+(s)\text{ d} s+\sum_{i\in\mathcal I:t_i\le \delta\land a_n}\Delta Z^{x,f}_{t_i} \end{equation} for all $n\in{\mathbb N}$, where $(a_n)_{n\in{\mathbb N}}$ is a sequence in $(0,1)$ with $a_n\downarrow 0$ as $n\to\infty$. \iffalse since \[ \mathbb E\left(f'(X^x_k(s))\right)=\mathbb E\left(\lim_{h\to0}\frac{f(X^x_k(s)+h)-f(X^x_k(s))}{h}\right)\le \liminf_{h\to0}\frac{\mathbb E(f(X^x_k(s)+h))-\mathbb E(f(X^x_k(s)))}{h} \] \fi With $\eta$ being the Poisson random measure on $\R^+_0\times\mathcal P$ that determines $X_1$, we deduce from (\ref{e.l.t.1.1.0.2}), in conjunction with the compensation formula for Poisson point processes and Fubini's theorem that \begin{align}\label{e.l.t.1.1.1.2b} &\mathbb E\left(Z^{x,f}_{\delta\land a_n}\right)-\mathbb E\left(Z^{x,f}_0\right)\notag \\[0.5ex] &= \mathbb E\left(\int\limits_{[0,\delta\land a_n]} \hat Z^{x,f}_+(s)\text{ d} s\right)\notag \\ &\qquad +\mathbb E\left(\int\limits_{[0,1]\times\mathcal P}\mathds1_{[0,\delta\land a_n]}(s)\left(\prod_{k\in{\mathbb N}}f(X^x_1(s-)+\ln(|\pi_k|))-f(X^x_1(s-))\right)\eta(\text{d} s,\text{d}\pi)\right) \\[0.5ex] &= \mathbb E\left(\int\limits_{[0,\delta\land a_n]} \hat Z^{x,f}_+(s)\text{ d} s\right)+\mathbb E\left(\int\limits_{[0,\delta\land a_n]}\int\limits_{\mathcal P}\left(\prod_{k\in{\mathbb N}}f(X^x_1(s-)+\ln(|\pi_k|))-f(X^x_1(s-))\right)\mu(\text{d}\pi)\text{ d} s\right)\notag \\[0.5ex] &= \mathbb E\left(\int\limits_{[0,\delta\land a_n]} \hat Z^{x,f}_+(s)\text{ d} s\right)+\mathbb E\left(\int\limits_{[0,\delta\land a_n]}Lf(X^x_1(s-))\text{ d} s\right)\notag \end{align} for all $n\in{\mathbb N}$. Observe that $f$ being monotone and $X^x_1$ having only at most countably many jumps $\mathbb P$-a.s.
implies that $X^x_1(s)\in \mathcal C_f$ for Lebesgue-a.a. $s\in(0,1)$ $\mathbb P$-almost surely. By means of (\ref{e.l.t.1.1.0.1}) and (\ref{e.l.t.1.1.1.2b}) as well as the fact that any $u\in(0,1)$ is $\mathbb P$-a.s. not a jump time of $\Pi$ this results in \[ \mathbb E\left(Z^{x,f}_{\delta\land a_n}\right)-\mathbb E\left(Z^{x,f}_0\right) = \mathbb E\left(\int_{[0,\delta\land a_n]}\left(c f'_++Lf\right)(X^x_1(s))\text{d} s\right)= 0, \] i.e. \[ \mathbb E\left(Z^{x,f}_{\delta\land a_n}\right)=\mathbb E\left(Z^{x,f}_0\right)=f(x). \] By means of the strong fragmentation property of $\Pi$ we thus conclude that \[ \mathbb E\left(\left.Z^{x,f}_{\tau+\delta\land a_n}\right|\mathscr F_\tau\right)=\prod_{n\in\mathcal N^x_\tau}\left.\mathbb E\left(Z^{y,f}_{\delta\land a_n}\right)\right|_{y=X^x_n(\tau)}=\prod_{n\in\mathcal N^x_\tau}f(X^x_n(\tau))=Z^{x,f}_\tau \] holds $\mathbb P$-a.s. for every finite stopping time $\tau$. Therefore, \begin{equation}\label{uniqueness_martingale} \begin{aligned} \mathbb E\left(\left.Z^{x,f}_{t+k(\delta\land a_n)}\right|\mathscr F_t\right) &= \mathbb E\left(\left.Z^{x,f}_{t+\delta\land a_n}+\sum_{m=2}^k\mathbb E\left(\left.Z^{x,f}_{t+m(\delta\land a_n)}-Z^{x,f}_{t+(m-1)(\delta\land a_n)}\right|\mathscr F_{t+(m-1)(\delta\land a_n)}\right)\right|\mathscr F_t\right) \\ &= \mathbb E\left(\left.Z^{x,f}_{t+\delta\land a_n}\right|\mathscr F_t\right) \\ &= Z^{x,f}_t \end{aligned} \end{equation} $\mathbb P$-a.s. for all $t\in\R^+_0$ and every measurable $k:\Omega\to{\mathbb N}$. For any $s,t\in\R^+_0$ set \[ k_s:=\left\lfloor\frac{s}{\delta\land a_n}\right\rfloor\qquad\text{as well as}\qquad r_s:=\frac{s}{\delta\land a_n}-k_s\in[0,1) \] and observe that $Z^{x,f}$ is $\mathbb P$-a.s. left-continuous at $t+s$, since $t+s$ is $\mathbb P$-a.s. not a jump time of $\Pi$.
In conjunction with the DCT for conditional expectations and (\ref{uniqueness_martingale}) this implies that \[ \mathbb E\left(\left.Z^{x,f}_{t+s}\right|\mathscr F_t\right) =\lim_{n\to\infty}\mathbb E\left(\left.Z^{x,f}_{t+s-r_s(\delta\land a_n)}\right|\mathscr F_t\right) =\lim_{n\to\infty}\mathbb E\left(\left.Z^{x,f}_{t+k_s(\delta\land a_n)}\right|\mathscr F_t\right)=Z^{x,f}_t \] for all $s,t\in\R^+_0$. Hence, $Z^{x,f}$ is a martingale and thus we deduce from Theorem~10 in \cite{KK12} that $f=\varphi$. \end{pf} Proposition~\ref{l.uniqueness} shows that in order to derive analytic properties of one-sided FKPP travelling waves in $\mathcal T$ we only need to consider the function $\varphi$. Bearing this in mind we proceed to prove Theorem~\ref{t.ap}. In order to obtain strict monotonicity of $\varphi$ we shall use the following result. \begin{lemma}\label{l.continuity.1.a0} Let $c>c_{\bar p}$. For any $0\le x<y<\infty$ there exists some $\alpha_{x,y}>0$\label{p.a_x_y} such that \[ \varphi(x)-\varphi(x+h)\ge\alpha_{x,y}\left(\varphi(y)-\varphi(y+h)\right) \] for all $h>0$. \end{lemma} \begin{pf} In the first part of this proof we show that for every deterministic time $t>0$ the probability that $X_1$ reaches level $x>0$ before time $t$ is positive. In the second part we use this fact in order to obtain a lower bound of the probability that for some $n\in{\mathbb N}$ the process $X^x_n$ hits a given level $y>x$ before some deterministic time $s>0$. Subsequently, we combine this lower bound with the estimate (\ref{e.N_x}) of the number of blocks that are alive at a given time and with the positivity of the probability of extinction. \underline{Part I} For every $x\in\R^+_0$ set \[ \quad \tau^+_{1,x}:=\inf\{t\in\R^+_0:X_1(t)>x\}. \] According to Corollary~3.14 in \cite{103} we have that $(\tau^+_{1,x})_{x\in\R^+_0}$ is a subordinator with either killing at an independent exponential ``time'' $\mathfrak e$ or with no killing in which case we set $\mathfrak e:=\infty$. 
Moreover, by means of Proposition~1.7 in \cite{Ber99} we thus infer that \begin{equation}\label{e.estimate.partI.b} \mathbb P\left(\tau^+_{1,x}< t\right)=\mathbb P\left(\{\tilde\tau^+_{1,x}< t\}\cap\{x<\mathfrak e\}\right)=\mathbb P\left(\tilde\tau^+_{1,x}< t\right)\mathbb P\left(x< \mathfrak e\right)>0 \end{equation} holds for all $t>0$ and $x\in\R^+_0$, where $(\tilde\tau^+_{1,x})_{x\in\R^+_0}$ is some non-killed subordinator, independent of $\mathfrak e$, satisfying \[ \tilde\tau^+_{1,x}\mathds1_{\{x< \mathfrak e\}}=\tau^+_{1,x}\mathds1_{\{x< \mathfrak e\}}. \] For the time being, fix some $x\in\R^+_0$. Let us now show that \begin{equation}\label{e.estimate.partI.b2} \forall\,t>0:\,\mathbb P(\tau^+_{1,x}<\tau^-_{1,0}\land t)>0. \end{equation} To this end, assume we have \begin{equation}\label{e.estimate.partI.b2b} \exists\,t_0>0:\,\mathbb P(\tau^+_{1,x}<\tau^-_{1,0}\land t_0)=0. \end{equation} Our goal is to show that this results in a contradiction. For this purpose, set $\tau^2_0:=\tilde\tau^2_0:=0$ and for every $n\in{\mathbb N}$ define \[ \tilde\tau^1_n := \inf\{t\ge\tilde\tau^2_{n-1}:X_1(t)<0\} \qquad\text{as well as}\qquad \tilde\tau^2_n := \inf\{t\ge\tilde\tau^1_n:X_1(t)=0\}. \] In addition, set \[ n^*:=\sup\left\{n\in{\mathbb N}:\tilde\tau^2_n<\infty\right\} \] as well as \[ \tau^1_n := \inf\{t\ge\tau^2_{n-1}:X_1(t)<0\} \qquad\text{and}\qquad \tau^2_n := \inf\{t\ge\tau^1_n:X_1(t)=0\}\land \tilde\tau^2_{n^*}, \] where $\tilde\tau^2_\infty:=\infty$. Since for $X_1$ the point $0$ is irregular for $(-\infty,0)$, there exists some $\varepsilon>0$ such that $\mathbb P(\tau^-_{1,0}\ge\varepsilon)>0$ and consequently we obtain by means of the strong Markov property of $\Pi$ that \begin{equation}\label{e.c.l.l.p.2.2.0.1b} \sum_{n\in{\mathbb N}}\mathbb P\left(\tau^1_n-\tau^2_{n-1}\ge\varepsilon\left|\mathscr F_{\tau^2_{n-1}}\right.\right)=\sum_{n\in{\mathbb N}}\mathbb P\left(\tau^-_{1,0}\ge\varepsilon\right)=\infty \end{equation} $\mathbb P$-almost surely. 
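For reference, the extended Borel--Cantelli lemma invoked in the next step states the following, in the form we use it (cf. \cite{Dur91}): if $(\mathscr G_n)_{n\in{\mathbb N}_0}$ is a filtration and $A_n\in\mathscr G_n$ for every $n\in{\mathbb N}$, then
\[
\left\{A_n\text{ occurs for infinitely many }n\in{\mathbb N}\right\}
=\left\{\sum_{n\in{\mathbb N}}\mathbb P\left(\left.A_n\right|\mathscr G_{n-1}\right)=\infty\right\}
\qquad\mathbb P\text{-almost surely.}
\]
Below this is applied with $A_n=\{\tau^1_n-\tau^2_{n-1}\ge\varepsilon\}$.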
Since $\{\tau^1_n-\tau^2_{n-1}\ge\varepsilon\}$ is $\mathscr F_{\tau^2_{n}}$-measurable, we can apply an extended Borel-Cantelli lemma (see e.g. \cite[(3.2) Corollary in Chapter~4]{Dur91} or \cite[Corollary~5.29]{113}) to deduce that \[ \left\{\{\tau^1_n-\tau^2_{n-1}\ge\varepsilon\}\text{ holds for infinitely many $n\in{\mathbb N}$}\right\}=\left\{\sum_{n\in{\mathbb N}}\mathbb P\left(\tau^1_n-\tau^2_{n-1}\ge\varepsilon\left|\mathscr F_{\tau^2_{n-1}}\right.\right)=\infty\right\}. \] Thus (\ref{e.c.l.l.p.2.2.0.1b}) implies that $\tau^2_n\to\infty$ $\mathbb P$-a.s. on the event $\{n^*=\infty\}$ as $n\to\infty$. With $t_0$ given by (\ref{e.estimate.partI.b2b}) another application of the strong Markov property therefore yields that \begin{equation}\label{e.estimate.partI.c} \mathbb P\left(\tau^+_{1,x}< t_0\right) \le \mathbb E\left(\sum_{n\in{\mathbb N}}\mathbb P\left(\tau^3_{n,x}<\tau^1_n\land t_0\left|\mathscr F_{\tau^2_{n-1}}\right.\right)\right) = \sum_{n\in{\mathbb N}}\mathbb P\left(\tau^+_{1,x}<(\tau^-_{1,0}\land t_0)\right) =0, \end{equation} where \[ \tau^3_{n,x} := \inf\{t\ge\tau^2_{n-1}:X_1(t)>x\} \] for all $n\in{\mathbb N}$. Since (\ref{e.estimate.partI.c}) contradicts (\ref{e.estimate.partI.b}), we conclude that (\ref{e.estimate.partI.b2}) does indeed hold true. \underline{Part II} Let $0\le x<y<\infty$ and for any $t\in\R^+_0$ set $R^x_1(t):=\sup_{n\in{\mathbb N}}X^x_n(t)$. In addition, we define \[ \tau^+_y(x):=\inf\left\{t\in\R^+_0:R^x_1(t)\ge y\right\}. \] Note that $R^x_1(\tau^+_y(x))=y$ if $\tau^+_y(x)<\infty$, since $R^x_1$ does not jump upwards and thus creeps over the level $y$. Furthermore, let $s>0$ and set $\gamma:=e^{x+cs}-1$ as well as \[ \alpha_{x,y}:=\mathbb P\left(\tau^+_y(x)<\zeta^x\land s\right)\mathbb P(\zeta^{y}<\infty)^\gamma. 
\] Observe that (\ref{e.estimate.partI.b2}) and Proposition~\ref{positivesurviaval} imply that $\alpha_{x,y}>0$, since \[ \mathbb P\left(\tau^+_y(x)<\zeta^x\land s\right)\ge\mathbb P\left(\tau^+_{1,y-x}<\tau^-_{1,0}\land s\right). \] By means of the strong fragmentation property of $\Pi$ we deduce that \begin{align*} \varphi(x)-\varphi(x+h) &= \mathbb P(\zeta^x<\infty)-\mathbb P(\zeta^{x+h}<\infty)\notag \\[0.5ex] &\overset{(*)}\ge \mathbb P\left(\tau^+_y(x)<\zeta^x\land s\right)\mathbb P(\zeta^y<\infty)^\gamma\left(\mathbb P(\zeta^y<\infty)-\mathbb P(\zeta^{y+h}<\infty)\right) \\[0.5ex] &= \alpha_{x,y}\left(\varphi(y)-\varphi(y+h)\right)\notag \end{align*} holds true for any $h>0$, where the exponent $\gamma$ in $(*)$ results from the estimate \[ N^x_{\tau^+_y(x)}\le e^{x+c\tau^+_y(x)}< e^{x+cs}=\gamma+1 \] $\mathbb P$-a.s. on $\{\tau^+_y(x)<s\}$. Notice that in $(*)$ we have used that the value of $X^x_n$, $n\in{\mathbb N}$, at time $\tau^+_y(x)$ is less than or equal to $y$ as well as the monotonicity of the probability of extinction. \end{pf} Observe that $\varphi$ is clearly a monotone function. However, even though monotonicity is trivial, it is not obvious whether $\varphi$ is strictly monotone. The following lemma answers the question regarding strict monotonicity of $\varphi$ affirmatively. \begin{lemma}\label{positivesurviaval.monotonicity} Let $c>c_{\bar p}$. Then $\varphi$ is strictly monotonically decreasing on $\R^+_0$. \end{lemma} \begin{pf} Let $x\in\R^+_0$ and set \[ \gamma_x:=\ln\left(|\pi(\zeta^x)|^\downarrow_1\cdot\left|\Pi^x_{\kappa(\zeta^x)}(\zeta^x-)\right|\right). \] According to Proposition~\ref{positivesurviaval} we have $\mathbb P(\zeta^x<\infty)>0$ and hence \begin{align*} & \mathbb P\left(\{\zeta^x<\infty\}\cap\bigcup_{n\in{\mathbb N}}\{x+c\zeta^x+\gamma_x\in(-n,0)\}\right) \\[0.5ex] &= \mathbb P(\{\zeta^x<\infty\}\cap\{x+c\zeta^x+\gamma_x\in (-\infty,0)\}) \\[0.5ex] &= \mathbb P(\zeta^x<\infty) \\[0.5ex] &> 0. 
\end{align*} Therefore, there exists some $z>0$ such that \[ \mathbb P\left(\{\zeta^x<\infty\}\cap\{x+c\zeta^x+\gamma_x\in(-z,0)\}\right)>0 \] and thus the strong fragmentation property, in conjunction with Proposition~\ref{positivesurviaval}, yields that \[ \mathbb P(\{\zeta^x<\infty\}\cap\{\zeta^{x+z}=\infty\})\ge \mathbb P(\{\zeta^x<\infty\}\cap\{x+c\zeta^x+\gamma_x\in(-z,0)\})\mathbb P(\zeta^0=\infty) > 0. \] Consequently, there exists some $z>0$ such that \begin{align}\label{e.l.positivesurviaval.2.1c5} \mathbb P(\zeta^x<\infty) &= \mathbb P(\{\zeta^x<\infty\}\cap\{\zeta^{x+z}=\infty\})+\mathbb P(\{\zeta^x<\infty\}\cap\{\zeta^{x+z}<\infty\})\notag \\[0.5ex] &> \mathbb P(\zeta^{x+z}<\infty), \end{align} where the final estimate follows from $\{\zeta^{x+z}<\infty\}\subseteq\{\zeta^x<\infty\}$. Observe that (\ref{e.l.positivesurviaval.2.1c5}) implies that for every $h>0$ there exists some $y\ge x$ such that \begin{equation}\label{e.l.positivesurviaval.2.1e} \varphi(y)>\varphi(y+h). \end{equation} \iffalse To see that (\ref{e.l.positivesurviaval.2.1e}) does indeed hold true, assume that for every $y\ge x$ there exists some $z_y>y$ for which $\mathbb P(\zeta^y<\infty)\le\mathbb P(\zeta^{z_y}<\infty)$. By means of the monotonicity of $\varphi$ this would imply that \[ \mathbb P(\zeta^y<\infty)=\mathbb P(\zeta^z<\infty) \] for all $y\ge x$ and $z\in[y,z_y]$, which in turn is equivalent to $\varphi$ being constant on $[x,\infty)$. However, this contradicts (\ref{e.l.positivesurviaval.2.1c5}), and thus (\ref{e.l.positivesurviaval.2.1e}) holds. \fi According to Lemma~\ref{l.continuity.1.a0}, for all $h>0$ and $y\ge x$ satisfying (\ref{e.l.positivesurviaval.2.1e}) there exists some $\alpha_{x,y}>0$ such that \[ \varphi(x)-\varphi(x+h) \ge \alpha_{x,y}\left(\varphi(y)-\varphi(y+h)\right) >0. \] Since $x\in\R^+_0$ was chosen arbitrarily, this proves the assertion that $\varphi$ is strictly monotonically decreasing on $\R^+_0$. 
\end{pf} In the proof of Theorem~\ref{t.ap} we shall make use of the theory of scale functions for spectrally negative L\'evy processes. For this purpose, let $W$ be the scale function of the spectrally negative L\'evy process $X_1$. That is to say, $W$ is the unique continuous and strictly monotonically increasing function $W:\R^+_0\to\R^+_0$, whose Laplace transform satisfies \[ \int_{(0,\infty)}e^{-\beta x}W(x)\text{ d} x=\frac{1}{\psi(\beta)} \] for all $\beta>\Psi(0)$, where $\psi$ denotes the Laplace exponent of $X_1$ and $\Psi(0):=\sup\{\lambda>0:\psi(\lambda)=0\}$. Let us now tackle the proof of Theorem~\ref{t.ap}. {\bf Proof of Theorem~\ref{t.ap}} This proof is divided into two parts. In the first part we show that $\varphi\in \mathcal T$ and that $\varphi$ is right-continuous at $0$. Subsequently, in the second part we use the continuity of $\varphi|_{\R^+}$ in order to prove that $\varphi|_{\R^+}$ is continuously differentiable, if $\varphi$ solves (\ref{e.FKPP.1}). According to Proposition~\ref{l.uniqueness} and Lemma~\ref{positivesurviaval.monotonicity} the proof is then complete. \underline{Part I} Recall the definition of $\mathcal T$ in Definition~\ref{d.T} and note that Theorem~10 in \cite{KK12} yields that $\varphi$ satisfies (\ref{e.FKPP.2}). Hence, since $\varphi$ is nonincreasing, in order to prove $\varphi\in\mathcal T$ it remains to show that $\varphi|_{\R^+}$ is continuous and that (\ref{finite_derivative}) holds. To this end, let $\mathfrak n$ denote the excursion measure of excursions $({\bf e}_s)_{s\in\R^+_0}$ indexed by the local time at the running maximum of the L\'evy process $X_1$. Furthermore, for any such excursion ${\bf e}$ let $\bar{\bf e}$ denote the height of this excursion. According to Lemma~8.2 in \cite{103} the scale function $W$ has a right-derivative on $\R^+$ given by \[ W'_+(x)=W(x)\mathfrak n(\bar{\bf e}> x) \] for any $x\in\R^+$. Note that $\mathfrak n$ being $\sigma$-finite (cf. 
Theorem~6.15 in \cite{103}) implies that \[ \sup_{y\ge x}\mathfrak n(\bar{\bf e}> y)=\mathfrak n(\bar{\bf e}> x)<\infty \] for every $x\in\R^+_0$. Moreover, we have \[ \mathbb P(\xi^x=\infty)\ge\mathbb P_x\left(\tau^+_{1,y}<\tau^-_{1,0}\right)\mathbb P(\xi^y=\infty) \] for all $x,y\in\R^+_0$ with $x<y$, where under $\mathbb P_x$ the process $X_1$ is shifted to start at $x$. Therefore, we obtain \begin{equation}\label{e.t.mainresult.1.2.p2} \varphi(x)-\varphi(x+h)\le(1-\varphi(x))\left(\frac{1}{\mathbb P_x\left(\tau^+_{1,x+h}<\tau^-_{1,0}\right)}-1\right)\le\frac{1}{\mathbb P_x\left(\tau^+_{1,x+h}<\tau^-_{1,0}\right)}-1 \end{equation} for all $h\in\R^+$ and $x\in\R^+_0$. By means of (8.8) in Theorem~8.1 of \cite{103} we have \begin{equation}\label{e.t.mainresult.1.2.p2_c} \frac{1}{\mathbb P_x\left(\tau^+_{1,x+h}<\tau^-_{1,0}\right)}-1=\frac{W(x+h)-W(x)}{W(x)} \end{equation} for any $h\in\R^+$ and $x\in\R^+_0$. Consequently, \[ \sup_{y\ge x}\left|\varphi'_+(y)\right|\le\sup_{y\ge x}\limsup_{h\downarrow0}\left(\frac{1}{W(y)}\frac{W(y+h)-W(y)}{h}\right)=\sup_{y\ge x}\mathfrak n(\bar{\bf e}> y)<\infty \] holds for every $x\in\R^+$. Moreover, in view of (\ref{e.t.mainresult.1.2.p2}), (\ref{e.t.mainresult.1.2.p2_c}) and the continuity of $W|_{\R^+_0}$ we deduce that $\varphi|_{\R^+}$ is continuous. Therefore, we conclude that $\varphi\in\mathcal T$. Since $X_1$ has bounded variation, we infer by means of Lemma~8.6 in \cite{103} that $W(0)>0$ and thus the above line of argument also yields that $\varphi$ is right-continuous at $0$. \underline{Part II} Assume that $\varphi$ satisfies (\ref{e.FKPP.1}) on $\mathcal C_\varphi$. In view of Part~I it follows from Lemma~\ref{l.p.t.1.2.4} that $L\varphi$ is continuous. Moreover, we deduce from (\ref{e.FKPP.1}) and the monotonicity of $\varphi$ that $\varphi'_+=-c^{-1}L\varphi$ Lebesgue-almost everywhere on $\R^+$.
Since the upper Dini derivative $\varphi'_+$ is bounded on any interval $[a,b]\subseteq\R^+$, it thus follows from Proposition~\ref{fTCDD} and Lebesgue's integrability criterion for Riemann integrals that \[ \varphi(b)-\varphi(a)=\int_a^b\varphi'_+(x)\text{ d} x=-\frac{1}{c}\int_a^bL\varphi(x)\text{ d} x=F(b)-F(a) \] for Lebesgue-almost all $a,b\in\R^+$, where $F\in C^1(\R^+,{\mathbb R})$ is an antiderivative of $-c^{-1}L\varphi$ on $\R^+$. Hence, we have $\varphi=F+\text{const.}$ Lebesgue-almost everywhere on $\R^+$. Since $\varphi|_{\R^+_0}$ and $F$ are continuous, this implies that $\varphi|_{\R^+}=F+\text{const.}$ and consequently $\varphi|_{\R^+}\in C^1(\R^+,[0,1])$. In the light of Proposition~\ref{l.uniqueness} and Lemma~\ref{positivesurviaval.monotonicity} this proves the assertion. $\square$ \iffalse Above we have shown that $\sup_{y\ge x}|\varphi'_+(y)|<\infty$ for every $x>0$. The final result of this section is concerned with the question whether even $\sup_{y\in\R^+_0}|\varphi'_+(y)|<\infty$ holds true. As it turns out, an integral test is required to answer this question. \begin{lemma}\label{l.der0} Let $c>c_{\bar p}$. Then $\sup_{y\in\R^+_0}|\varphi'_+(y)|<\infty$ if and only if \begin{equation}\label{e.l.der0.1} \limsup_{z\downarrow0}\int_{(0,\alpha)}\frac{\Pi(-(x+z),-x)}{z}\text{ d} x<\infty \end{equation} for some $\alpha>0$. \end{lemma} \begin{pf} Let $x\in\R^+$ and recall that $\sup_{y\ge x}|\varphi'_+(y)|<\infty$ was shown in Part I of the proof of Theorem~\ref{t.ap}. In order to show boundedness of $\varphi'_+$ on $[0,x)$, let $\epsilon,t>0$ be such that $e^{\epsilon+ct}<2$. It follows from (\ref{e.N_x}) that $N^\epsilon_s\le1$ for all $s\le t$. Notice that $\Pi^0_1([\zeta^0\land t]-)$ is the only block alive at time $[\zeta^0\land t]-$ and set $k:=\min(\Pi^0_1([\zeta^0\land t]-))$. 
Since extinction of $\Pi^0$ occurs by $X_{k}([\zeta^0\land t]-)$ jumping below zero (as opposed to creeping), we have that $\delta_0:=|B_{k}([\zeta^0\land t]-)|-|B_{k}([\zeta^0\land t])|>0$ on $\{\zeta^0< t\}$. More precisely, according to (8.29) in \cite{103} we have \begin{align*} \mathbb P\left(X_{\zeta^0}\in(-z,0),X_{\zeta^0-}\in(0,\alpha)\right) &= \int_{(0,\alpha)}\mathbb P\left(X_{\tau^-_{0,k}}\in(-z,0),X_{\tau^-_{0,k}-}\in\text{d} x\right) \\[0.5ex] &= \int_{(0,\alpha)}\Pi(-(x+z),-x)ce^{-\Phi(0)x}\text{d} x, \end{align*} where we also resorted to Lemma~8.6 in \cite{103} as well as to the fact that $W\equiv0$ on $(-\infty,0)$. Hence, for any $n\in{\mathbb N}$ there exists some $x_n\in(0,\epsilon\land\nicefrac{1}{n})$ such that $\mathbb P(x_n>\delta_0)\le\nicefrac{1}{n}$. Then we have \[ \left|\varphi(0)-\varphi(x_n)\right|\mathds1_{\{\zeta^0<t\}}\le\mathbb P\left(X_{\zeta^0}\in(-x_n,0),X_{\zeta^0-}\in(0,\alpha)\right)\le\gamma x_n \] and consequently \begin{align*} \varphi(0)-\varphi(x_n) &\le \left|\varphi(0)-\varphi(x_n)\right|\mathds1_{\{\zeta^0\ge t\}}+\left|\varphi(0)-\varphi(x_n)\right|\mathds1_{\{\zeta^0<t\}} \\[0.5ex] &\le \left|\varphi(X^0(t))-\varphi((X^0(t)+\nicefrac{1}{n})\right|\mathds1_{\{X^0(t)>0\}}+\frac{1}{n}\mathds1_{\{\zeta^0<t\}} \end{align*} for every $n\in{\mathbb N}$. By means of the continuity of $\varphi$ on $\R^+$ we thus conclude that $\varphi(x_n)\uparrow\varphi(0)$ as $n\to\infty$, which proves that $\varphi$ is right-continuous at $0$. On the other hand, we have \[ \varphi(0)-\varphi(x_n)\ge\mathbb P\left(X_{\zeta^0}\in(-x_n,0),X_{\zeta^0-}\in(0,\alpha)\right), \] and thus $\varphi'_+=\infty$ if (\ref{e.l.der0.1}) does not hold true. \end{pf} \fi \section{Existence and uniqueness of one-sided travelling waves}\label{s.mainresult.1} This section is devoted to the proof of Theorem~\ref{t.mainresult.1}. Our method of proof makes use of the results that we developed in the previous two sections. 
{\bf Proof of Theorem~\ref{t.mainresult.1}} The first part of the proof shows the nonexistence of one-sided FKPP travelling waves in $\mathcal T$ for wave speeds $c\le c_{\bar p}$ and the second part proves the existence of such travelling waves for wave speeds above the critical value $c_{\bar p}$. Uniqueness was shown in Proposition~\ref{l.uniqueness}. \underline{Part I} Fix some $c\le c_{\bar p}$ as well as $x\in\R^+_0$ and let $f\in\mathcal T$. Further, assume that $f$ satisfies (\ref{e.FKPP.1}). Then the proof of Proposition~\ref{l.uniqueness} shows that $(Z^{x,f}_t)_{t\in\R^+_0}$ is a uniformly integrable martingale and hence the $\mathbb P$-a.s. martingale limit $Z^{x,f}_\infty:=\lim_{t\to\infty}Z^{x,f}_t$ satisfies \begin{equation}\label{e.l.t.mainresult.1.1.1} \mathbb E\left(Z^{x,f}_\infty\right)=\mathbb E\left(Z^{x,f}_0\right)=f(x). \end{equation} Since $c\le c_{\bar p}$, we have according to Proposition~\ref{positivesurviaval} that $\mathbb P(\zeta^x<\infty)=1$, that is to say $\mathcal N^x_t\to\emptyset$ $\mathbb P$-a.s. as $t\to\infty$. Because the empty product equals 1, we thus infer that \[ Z^{x,f}_\infty=\lim_{t\to\infty}\prod_{n\in\mathcal N^x_t}f(X^x_n(t))=1 \] $\mathbb P$-almost surely. In view of (\ref{e.l.t.mainresult.1.1.1}) this implies that $f\equiv1$, which contradicts $f\in\mathcal T$, since every $f\in\mathcal T$ satisfies (\ref{e.FKPP.2}). Consequently, there does not exist a function $f\in\mathcal T$ that satisfies (\ref{e.FKPP.1}). \underline{Part II} Now let $c>c_{\bar p}$ and $x\in\R^+_0$. In the light of Proposition~\ref{l.uniqueness} it only remains to show that $\varphi$ is indeed a one-sided FKPP travelling wave with wave speed $c$. Since $\varphi\in\mathcal T$ satisfies the boundary condition (\ref{e.FKPP.2}), we only have to deal with (\ref{e.FKPP.1}). In order to prove that $\varphi$ solves (\ref{e.FKPP.1}) we aim at applying Theorem~\ref{p.t.1.2}.
To this end, observe that the fragmentation property of $\Pi$ yields that \[ \varphi(x)=\mathbb E(\mathbb P(\zeta^x<\infty|\mathscr F_t))=\mathbb E\left(\prod_{n\in\mathcal N^x_t}\varphi(X^x_n(t))\right)=\mathbb E\left(Z^{x,\varphi}_t\right) \] for every $t\in\R^+_0$. By means of another application of the fragmentation property we therefore deduce that \[ \mathbb E\left(\left.Z^{x,\varphi}_{t+s}\right|\mathscr F_t\right)=\prod_{n\in\mathcal N^x_t}\left.\mathbb E\left(Z^{y,\varphi}_s\right)\right|_{y=X^x_n(t)}=\prod_{n\in\mathcal N^x_t}\varphi(X^x_n(t))=Z^{x,\varphi}_t \] holds $\mathbb P$-a.s. for all $s,t\in\R^+_0$. Hence, $Z^{x,\varphi}$ is a $\mathbb P$-martingale. In the proof of Theorem~\ref{t.ap} we have shown that $\varphi\in \mathcal T$ and consequently we infer from Theorem~\ref{p.t.1.2} that $\varphi$ solves the integro-differential equation (\ref{e.FKPP.1}). $\square$ \end{document}
\begin{document} \title{Molecular machines for quantum error correction} \author{Thiago Guerreiro} \email{barbosa@puc-rio.br} \affiliation{Department of Physics, Pontif\'icia Universidade Cat\'olica, Rio de Janeiro, Brazil} \begin{abstract} Inspired by biological molecular machines we explore the idea of an active quantum robot whose purpose is to delay decoherence. A conceptual model capable of partially protecting arbitrary logical qubit states against single physical qubit errors is presented. Implementation of an instance of that model - the entanglement qubot - is proposed using laser-dressed Rydberg atoms. Dynamics of the system is studied using stochastic wavefunction methods. \end{abstract} \maketitle \section{Introduction} The living cell can be seen as a Brownian computer \cite{Bennett1982}. At its core, machines of molecular dimensions store, correct and process information in the presence of noise, with the goal of keeping the state of the living creature away from thermodynamical equilibrium. The machinery of life \cite{Goodsell1993} is responsible for gene expression, matter transport across the cell and energy harvesting, among a vast number of other tasks \cite{Alberts}. An example of such molecular devices is \textit{RNA polymerase} (RNAP): an enzyme with $\sim 40{,}000$ atoms, roughly $ \SI{10}{nm} $ of linear size, capable of synthesising a strand of RNA from a DNA template in the presence of Brownian noise, at error rates as low as $ 10^{-7} $ \cite{Milo}. Molecular devices such as RNAP have inspired nanotechnology \cite{Feynman, Zhang2018} and various artificial molecular machines have been built, such as molecular ratchets \cite{Serreli2007}, pumps \cite{Stoddart2015}, motors \cite{Kassem2017}, and gene editing tools \cite{CRISPR}.
A detailed, unified understanding of biological molecular machines in the tradition of theoretical physics is yet to be achieved \cite{Bialek}, but there is little doubt that experimental \cite{Block} and computational methods \cite{Bressloff} in physics play a key role in that endeavour. It is also expected that the coming age of quantum information processing will illuminate biological systems through simulation of quantum chemistry \cite{Google_ai} and quantum enhanced learning \cite{Outeiral2020, Emani2021}. Conversely \cite{Frauenfelder2014}, one could ask whether biological molecular machines will inspire new ideas for engineering autonomous molecular-sized quantum information processing devices with the goal of keeping quantum states away from thermodynamical equilibrium. It is the purpose of this work to explore this idea. \begin{figure}\label{landscape} \label{color} \end{figure} A quantum molecular machine would be a device composed of at most a few thousand atoms capable of autonomously storing, protecting and/or processing quantum states in the presence of external decoherence and thermalization. We refer to these bio-inspired devices as quantum robots, or qubots \cite{Guerreiro2020}. Devising qubots is a problem in coherent quantum chemistry \cite{Carr2009, Krems} much like engineering artificial molecular machines is a problem in synthetic chemistry \cite{Lau2017}. Hence, the ultracold atom \cite{Budker} and molecular toolbox \cite{Ospelkaus2010, Liu2017} is expected to play a key role in the conception of these active quantum devices. As we will see, qubots exploit open system dynamics to achieve their purpose and thus have a close connection to the idea of engineered environments constructed to produce desired quantum states \cite{Plenio1999, Plenio2001, Diehl2008, Verstraete2009, Vacanti2009, Reiter2012, Brask2015, Reiter2017}.
Their nature, however, is much closer to that of artificial molecular ratchets and pumps that respond to the environment and consume resources to maintain nonequilibrium states \cite{Cheng2015}. In what follows, we explore various aspects around the idea of qubots. We begin by introducing a conceptual model for a quantum robot capable of partially protecting a logical qubit state against single physical qubit errors. It is interesting that the model can handle almost all combinations of phase and bit-flip errors since, as pointed out by Kitaev, \textit{it is generally easy to get rid of one kind of errors, but not both} \cite{Kitaev2001}. The construction is somewhat inspired by the surface code \cite{Fowler2012}, only here syndrome detection and correction are part of the system's dynamics rather than a consequence of measurement followed by external conditional action. Next, a specific physical implementation of instances of the model based on laser-dressed Rydberg atoms is discussed. More specifically, we exhibit potential landscapes implementing an \textit{entanglement qubot}, a device that stabilizes a Bell state against single qubit errors. The stabilized Bell state is only one possible state of the logical qubit, but in this case we can view the qubot as preserving a maximally entangled state. An ensemble of entanglement qubots could therefore preserve vast amounts of entanglement, a useful resource. Simulation of the entanglement qubot dynamics is performed with the help of stochastic wavefunction methods, and we evaluate the effects of coupling the motional degrees of freedom of the robot to an external heat bath. We conclude with a discussion on potential future developments regarding active quantum matter. \section{Conceptual model} We would like to introduce the conceptual model of a quantum robot capable of protecting an arbitrary logical qubit state against errors. 
Our quantum robot consists of two parts, called the \textit{nucleus} and the \textit{correctors}. See Figure \ref{landscape}(a) for a schematic representation. A pair of particles denoted $ a $ and $ b $ constitute the nucleus. Quantum information is stored in the particles' internal spin degrees of freedom taken to be two spin-1/2 systems with Hilbert space $ \mathbb{C}^{2} \otimes \mathbb{C}^{2} $ and basis states denoted $ \lbrace \vert 0 \rangle \vert 0 \rangle , \vert 0 \rangle \vert 1 \rangle, \vert 1 \rangle \vert 0 \rangle , \vert 1 \rangle \vert 1 \rangle \rbrace $. Particle $ a $ is held fixed at the origin by an optical tweezer while $ b $ is subject to the potential \begin{eqnarray} V(R) = V_{t}(R) + V_{I}(R) \ , \end{eqnarray} where $ R $ is the relative distance between $ a $ and $ b $, $ V_{t}(R) $ is a trap potential for particle $ b $ and \begin{align} V_{I}(R) = J_{z} Z_{a} Z_{b} + J_{x} X_{a} X_{b} + J_{y} Y_{a} Y_{b} \ , \label{interaction} \end{align} is the interaction energy between the particles, where $ X_{\lambda}, Y_{\lambda}, Z_{\lambda} $ are the Pauli operators for particle $ \lambda $ ($ = a, b $) and the coefficients $ J_{\alpha} = J_{\alpha}(R) $ form a spatial-dependent spin-spin interaction pattern. We assume for simplicity that particle $ b $ can only move along the direction $ \hat{R} $. As an example of trap potential one may consider an optical tweezer, \begin{eqnarray} V_{t}(R) &=& V_{0} \left( R - \delta \right)^{2} \ , \label{lattice} \end{eqnarray} where $ V_{0} $ and $ \delta $ are constants. Tunneling outside the confining potential is considered negligible.
Note also that dipole-dipole interactions among atoms and polar molecules are of the form \eqref{interaction}, and typically for molecules \cite{Wei2011, Pietraszewicz2013} and spin impurities in diamond \cite{Choi2017}, \begin{eqnarray} J_{\alpha} = (d^{2} / R^{3} ) j_{\alpha} \ , \label{pattern} \end{eqnarray} where $ d $ is the dipole moment \cite{Weinberg2012} and $ j_{\alpha} $ is a proportionality constant with $ \alpha = x, y, z $. Throughout the remainder of this section we will consider this radial dependence as an illustration of the qubot's functioning. Note however that effective spin interactions of the so-called $XYZ$ form with more general radial dependencies can be engineered within a number of different systems, including trapped ions \cite{Porras2004, Kim2010}, atoms in dressed Rydberg states \cite{Glaetzle2015, Bijnen2015} and microwave-excited polar molecules in optical lattices \cite{Micheli2006, Brennen2007}. In the next section an implementation using laser dressed Rydberg atoms will be discussed. Bell states of the particles' spins are eigenstates of $ V_{I} $ with eigenvalues given by \begin{eqnarray} V_{I} \vert \psi^{-} \rangle &=& \left( -J_{x} - J_{y} - J_{z} \right) \vert \psi^{-} \rangle \label{V_1} \ , \\ V_{I} \vert \phi^{-} \rangle &=& \left( -J_{x} + J_{y} + J_{z} \right) \vert \phi^{-} \rangle \label{V_2} \ , \\ V_{I} \vert \psi^{+} \rangle &=& \left( J_{x} + J_{y} - J_{z} \right) \vert \psi^{+} \rangle \label{V_3} \ , \\ V_{I} \vert \phi^{+} \rangle &=& \left( J_{x} - J_{y} + J_{z} \right) \vert \phi^{+} \rangle \label{V_4} \ . \end{eqnarray} This implies that the total potential $ V(R) $ exhibits collective spin-dependent landscapes. As an example consider the trap potential \eqref{lattice} and the spin pattern \eqref{pattern}.
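The eigenvalue relations \eqref{V_1}--\eqref{V_4} are straightforward to check numerically. The following pure-Python sketch (an illustration, not part of the original text; the coupling values $J_x, J_y, J_z$ are arbitrary) builds the $4\times4$ matrix of $V_{I}$ and verifies that each Bell state is an eigenvector with the stated eigenvalue:

```python
# Check that the Bell states diagonalize V_I = Jz Z(x)Z + Jx X(x)X + Jy Y(x)Y.
# Pure Python; Jx, Jy, Jz are arbitrary illustrative couplings.

def kron(A, B):
    """Kronecker product of two 2x2 matrices given as lists of lists."""
    n = len(A) * len(B)
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(n)]
            for i in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

Jx, Jy, Jz = 0.7, -0.3, 1.1  # arbitrary couplings
XX, YY, ZZ = kron(X, X), kron(Y, Y), kron(Z, Z)
VI = [[Jz * ZZ[i][j] + Jx * XX[i][j] + Jy * YY[i][j] for j in range(4)]
      for i in range(4)]

s = 2 ** -0.5  # basis order: |00>, |01>, |10>, |11>
bell = {
    "psi-": ([0, s, -s, 0], -Jx - Jy - Jz),
    "phi-": ([s, 0, 0, -s], -Jx + Jy + Jz),
    "psi+": ([0, s, s, 0],  Jx + Jy - Jz),
    "phi+": ([s, 0, 0, s],  Jx - Jy + Jz),
}
for name, (v, ev) in bell.items():
    w = matvec(VI, v)
    assert max(abs(w[k] - ev * v[k]) for k in range(4)) < 1e-12, name
```

Since the couplings drop out of the eigenvectors, the same check passes for any choice of $J_x, J_y, J_z$.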
If local equilibrium positions $ R_{0}(\vert \psi \rangle) $ exist, they satisfy the condition \begin{eqnarray} R_{0}^{4} (R_{0} - \delta) = \dfrac{ 3d^{2} \langle \psi \vert W \vert \psi \rangle}{2 V_{0}} \ , \label{equilibria} \end{eqnarray} where $ \langle \psi \vert W \vert \psi \rangle = \langle \psi \vert \left( j_{z} Z_{a} Z_{b} + j_{x} X_{a} X_{b} + j_{y} Y_{a} Y_{b} \right) \vert \psi \rangle $ are possible expectation values with respect to each of the four Bell states. Figure \ref{landscape}(b) shows the total potential landscape seen by particle $ b $ for each of the spin Bell states, displaying the spin-dependent potentials. Note that the state $ \vert \psi^{+} \rangle $ does not exhibit a minimum; this is not a problem provided the protected logical qubit states do not involve $ \vert \psi^{+} \rangle $. In between equilibrium points of the potential landscapes in Figure \ref{landscape}(b) there are \textit{corrective} sites, where devices we call correctors are present. Correctors are represented in Figure \ref{landscape}(a) as \textit{loops}. The function of the corrective devices is to execute a unitary operation on the spin subspace once the particle approaches their sites. There are two correctors, denoted $ L1$ and $ L2 $. To illustrate the device's functioning, in the remainder of this section we treat the correctors $ L1 $ and $ L2 $ as qubits. Note however that there are a number of ways of implementing such devices and alternatives to the qubit model will be discussed in the following implementation section. Suppose the $ L1 $ device has basis states $ \lbrace \vert \mu_{0}^{1} \rangle, \vert \mu_{1}^{1} \rangle \rbrace $. Whenever the particle enters one of the $ L1 $ loops, the unitary operation $ Z_{b} X_{L1}$ is executed, where $ X_{L1} = \vert \mu_{0}^{1} \rangle \langle \mu_{1}^{1} \vert + \vert \mu_{1}^{1} \rangle \langle \mu_{0}^{1} \vert $.
It is important that $ L1 $ is insensitive to whether particle $ b $ entered the innermost or outermost loop, since obtaining that information would collapse the spin state of the system as it is correlated to motion. The $ L2 $ system, or middle corrector, has basis states $ \lbrace \vert \mu_{0}^{2} \rangle, \vert \mu_{1}^{2} \rangle \rbrace $ and whenever particle $ b $ enters $L2 $, the unitary $ X_{b} X_{L2} $ is executed, where $ X_{L2} $ is once again the bit-flip operator on the corresponding basis states of $ L2 $. We have the following operations: \begin{align} L1: \ Z_{b} X_{L1} \ , \ \ L2: \ X_{b} X_{L2} \ . \label{loop_eqs} \end{align} Note that these unitaries act on the spins \textit{conditional} on the particle's position. Hence, when tracing out the position degree of freedom, the action of the corrective sites manifests as dissipative maps on the spin subspace. Logical basis states of the nucleus are defined as \begin{eqnarray} \vert \bar{0} \rangle &=& \vert \psi^{-} \rangle \ , \\ \vert \bar{1} \rangle &=& \vert \phi^{-} \rangle \ , \end{eqnarray} and an arbitrary logical qubit state is \begin{eqnarray} \vert \Psi \rangle = \alpha \vert \bar{0} \rangle + \beta \vert \bar{1} \rangle \ . \end{eqnarray} Note that a superposition of the $ \vert \bar{0} \rangle, \vert \bar{1} \rangle $ states implies particle $ b $ is in a superposition of singlet and triplet spin states, implying a superposition of different spatial equilibrium points. To understand how the qubot delays decoherence and partially protects the logical qubit, one must follow carefully what happens to the particles when a physical error occurs in one of the spins. Single physical qubit errors are assumed to be much more likely than multi-qubit errors \cite{Fowler2012} and the depolarizing channel is taken as the decoherence model. A summary of possible errors and how they act on logical basis states is shown in Table \ref{errors}. \begin{table}[h!]
\centering \begin{tabular}{cccc} \hline \hline Error & $ \ \ \vert \psi^{-} \rangle $ & \multicolumn{1}{c}{$ \ \ \vert \phi^{-} \rangle $} & Corrected state \\ \hline $ X_{a} $ & $ - \vert \phi^{-} \rangle $ & $ - \vert \psi^{-} \rangle $ & $ - \alpha \vert \bar{0} \rangle - \beta \vert \bar{1} \rangle $ \\ $ X_{b} $ & $ + \vert \phi^{-} \rangle $ & $ + \vert \psi^{-} \rangle $ & $ \alpha \vert \bar{0} \rangle + \beta \vert \bar{1} \rangle $ \\ $ Z_{a} $ & $ + \vert \psi^{+} \rangle $ & $ + \vert \phi^{+} \rangle $ & $ \alpha \vert \bar{0} \rangle - \beta \vert \bar{1} \rangle $ \\ $ Z_{b} $ & $ - \vert \psi^{+} \rangle $ & $ + \vert \phi^{+} \rangle $ & $ -\alpha \vert \bar{0} \rangle - \beta \vert \bar{1} \rangle $ \\ $ Z_{a} X_{a} $ & $ - \vert \phi^{+} \rangle $ & $ - \vert \psi^{+} \rangle $ & $ \alpha \vert \bar{0} \rangle - \beta \vert \bar{1} \rangle $ \\ $ Z_{b} X_{b} $ & $ + \vert \phi^{+} \rangle $ & $ - \vert \psi^{+} \rangle $ & $ -\alpha \vert \bar{0} \rangle - \beta \vert \bar{1} \rangle $ \\ \hline \hline \end{tabular} \caption{Effect of physical errors on logical basis states and the final corrected state after action of the qubot. \label{errors}} \end{table} \begin{figure}\label{cycle} \label{color} \end{figure} As an illustration, consider the example of a bit-flip in the first spin described by the $ X_{a} $ operator. Initially, an arbitrary logical qubit state $ \vert \Psi \rangle = \alpha \vert \psi^{-} \rangle + \beta \vert \phi^{-} \rangle $ is in a superposition of equilibrium positions $ R_{0}(\vert \psi^{-} \rangle) $ and $ R_{0}(\vert \phi^{-} \rangle) $ given by solutions of \eqref{equilibria}. The $ X_{a} $ error changes the spin state of the particles according to \begin{eqnarray} \alpha \vert \psi^{-} \rangle + \beta \vert \phi^{-} \rangle \rightarrow - \alpha \vert \phi^{-} \rangle - \beta \vert \psi^{-} \rangle \end{eqnarray} and hence the particles' interaction potential is changed accordingly. 
After the error, the possible positions of particle $b$ are no longer equilibrium points of the potential landscapes. For the case in which $b$ was initially at $ R_{0}( \vert \psi^{-} \rangle) $, the particles repel, forcing $b$ into $L2 $. Similarly, for $ R_{0}( \vert \phi^{-} \rangle ) $, occurrence of the error causes an attractive force which pulls $ b $ into $ L2 $. Once $b$ reaches the loop, the operator $ X_{b} X_{L2} $ is applied, restoring the logical qubit to the original state and driving the system back to the initial superposition of equilibrium points. Naturally this process introduces kinetic energy in the form of phonons, which must be removed if particle $ b $ is to settle back into the original state. This implies the need for a dissipative force acting on $ b $, which could be provided by state-independent cooling of the atom motion. For now, we will assume that such cooling is present, and this \textit{phonon} issue will be discussed further in the implementation section. Similar processes occur for $ X_{b} $ and $ Z_{b} $ errors: a combination of spin-motion dynamics and subsequent application of the loop operators corrects errors and restores the system to the initial arbitrary logical state. The qubot is also able to correct a concatenation of phase and bit-flip errors, given by $ Y_{b} $. Note that this requires a passage through two correctors. The present qubot model is not able to correct all errors. As can be seen in Table \ref{errors}, logical basis states transform under $ Z_{a} $ with opposite parity, thus inducing a phase error in the logical qubit. This carries over to the $ Y_{a} $ error, since $ i Y_{a} = Z_{a} X_{a} $. This imperfection can be traced back to the fact that the qubot uses two physical qubits to encode a logical state. The quantum Hamming bound \cite{Gottesman1996} implies that for single qubit errors, a minimum of five physical qubits is required to achieve complete fault tolerance for one logical qubit.
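The error actions listed in Table \ref{errors} follow from direct matrix algebra. As an illustration (not part of the original text), the following pure-Python sketch applies each error operator to $\vert\psi^-\rangle$ and $\vert\phi^-\rangle$ and compares with the table entries, signs included:

```python
# Reproduce the rows of the error table: each single-qubit error maps the
# logical basis (Bell) states to signed Bell states. Basis: |00>,|01>,|10>,|11>.

def kron(A, B):
    n = len(A) * len(B)
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(n)]
            for i in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

s = 2 ** -0.5
psi_m, phi_m = [0, s, -s, 0], [s, 0, 0, -s]  # |psi->, |phi->
psi_p, phi_p = [0, s, s, 0], [s, 0, 0, s]    # |psi+>, |phi+>

Xa, Xb = kron(X, I2), kron(I2, X)
Za, Zb = kron(Z, I2), kron(I2, Z)
ZaXa, ZbXb = matmul(Za, Xa), matmul(Zb, Xb)

neg = lambda v: [-c for c in v]
# (error, input state, expected signed Bell state) -- one triple per table cell
table = [
    (Xa, psi_m, neg(phi_m)),   (Xa, phi_m, neg(psi_m)),
    (Xb, psi_m, phi_m),        (Xb, phi_m, psi_m),
    (Za, psi_m, psi_p),        (Za, phi_m, phi_p),
    (Zb, psi_m, neg(psi_p)),   (Zb, phi_m, phi_p),
    (ZaXa, psi_m, neg(phi_p)), (ZaXa, phi_m, neg(psi_p)),
    (ZbXb, psi_m, phi_p),      (ZbXb, phi_m, neg(psi_p)),
]
for E, v, expected in table:
    w = matvec(E, v)
    assert max(abs(w[k] - expected[k]) for k in range(4)) < 1e-12
```

In particular, the check confirms that $Z_a$ flips the relative sign between the logical basis states, the uncorrectable case discussed above.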
Despite this partial fault tolerance the qubot can delay decoherence of arbitrary logical qubit states, and for some specific states it is even able to preserve them regardless of the error, as for example the singlet $ \vert \psi \rangle = \vert \psi^{-} \rangle $. More general models implementing \textit{perfect} quantum error correcting codes \cite{Laflamme1996} can nevertheless be devised at the expense of more particles or higher spin states. Note that to protect arbitrary logical qubit states, the qubot potential landscapes must distinguish between all four elements of the Bell basis, as in Figure \ref{landscape}(b). If the landscapes for two or more Bell states are indistinguishable, certain errors will have no effect on the atom, preventing the action of the correctors. Note also that the order of the potential minima for each Bell state defines the choice of position and action for the corrective sites, as well as the choice of logical basis states. It is instructive to consider the qubot operation under a depolarizing channel acting on particle $ b $ alone. Denote environment states as $ \vert e_{j} \rangle $.
Decoherence causes the joint particle-environment-corrector state to evolve according to, \begin{align} \vert \Psi \rangle \vert e_{0} \rangle \vert \mu_{0}^{1} \mu_{0}^{2} \rangle \rightarrow \sqrt{1-p} \left( \alpha \vert \psi^{-} \rangle + \beta \vert \phi^{-} \rangle \right) \vert e_{0} \rangle \vert \mu_{0}^{1} \mu_{0}^{2} \rangle \nonumber \\ \nonumber \\ + \sqrt{\dfrac{p}{3}} \left( \alpha \vert \phi^{-} \rangle + \beta \vert \psi^{-} \rangle \right) \vert e_{1} \rangle \vert \mu_{0}^{1} \mu_{0}^{2} \rangle \nonumber \\ \nonumber \\ + \sqrt{\dfrac{p}{3}} \left( - \alpha \vert \psi^{+} \rangle + \beta \vert \phi^{+} \rangle \right) \vert e_{2} \rangle \vert \mu_{0}^{1} \mu_{0}^{2} \rangle \nonumber \\ \nonumber \\ + \sqrt{\dfrac{p}{3}} \left( \alpha \vert \phi^{+} \rangle - \beta \vert \psi^{+} \rangle \right) \vert e_{3} \rangle \vert \mu_{0}^{1} \mu_{0}^{2} \rangle \label{decoherence1} \end{align} where $ p $ denotes the error probability. Equation \eqref{decoherence1} describes the depolarizing dynamics suffered by the logical qubit, with the first term proportional to $ \sqrt{1-p}$ corresponding to no decoherence and the subsequent terms proportional to $ \sqrt{p/3} $ corresponding to errors on the logical qubit. Note that at this stage, the corrective devices remain unaffected while the system undergoes errors and the environment \textit{learns} when an error has occurred. Tracing out the environment, the above evolution induces a dissipative map on the spin system increasing its entropy and causing decoherence of the original state. With the occurrence of errors the potential landscapes acting on $ b $ undergo a change forcing the action of the correctors upon the spin state of the nucleus. 
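The branch weights $1-p, p/3, p/3, p/3$ appearing in \eqref{decoherence1} determine how much entropy a correction cycle must move out of the logical qubit. As a numerical illustration (a sketch, not from the original text, under the assumption that these weights end up as the diagonal mixture of the corrector register; the function name is ours):

```python
# Entropy (in bits) absorbed per depolarizing cycle, assuming the branch
# weights of the post-error state carry over to the corrector register.
from math import log2

def corrector_entropy(p):
    """Shannon entropy H(1-p, p/3, p/3, p/3) of the corrector mixture."""
    probs = [1 - p, p / 3, p / 3, p / 3]
    return -sum(q * log2(q) for q in probs if q > 0)

assert corrector_entropy(0.0) == 0.0             # no errors, no entropy
assert abs(corrector_entropy(0.75) - 2) < 1e-12  # uniform mixture: 2 bits
```

By Landauer's bound, erasing this record when the correctors are reset costs at least $k_B T \ln 2$ per bit, which is the thermodynamic price of the correction cycle.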
Purity of the logical qubit is restored at the expense of an increase in entropy for the correctors; after a correction event, \eqref{decoherence1} evolves to \begin{align} \vert \Psi \rangle \left( \sqrt{1-p} \vert e_{0} \rangle \vert \mu_{0}^{1} \mu_{0}^{2} \rangle + \sqrt{\dfrac{p}{3}} \vert e_{1} \rangle \vert \mu_{0}^{1} \mu_{1}^{2} \rangle \right. \nonumber \\ \left. - \sqrt{\dfrac{p}{3}} \vert e_{2} \rangle \vert \mu_{1}^{1} \mu_{0}^{2} \rangle - \sqrt{\dfrac{p}{3}} \vert e_{3} \rangle \vert \mu_{1}^{1} \mu_{1}^{2} \rangle \right) \end{align} where we can see that the original logical qubit state is restored and the environment becomes correlated with the correctors' state. After a single error correction, the correctors' states must be reset to the pure initial state $ \vert \mu_{0}^{1} \mu_{0}^{2} \rangle $. This is a non-unitary operation which requires energy expenditure, similar to erasing a quantum state \cite{Bennett2003, Bub2001}, and can be implemented as a non-equilibrium stochastic process. This corresponds to a consumption of resources by the qubot analogous to the consumption of resources by biological molecular machines and living organisms. Irrespective of the physical implementation of the corrective sites, such consumption of resources is a mandatory part of the qubot operation in accordance with the laws of thermodynamics. \section{Implementation} \textit{Potential engineering}. Spin-spin interactions of the form \eqref{interaction} suitable for implementing quantum robots could be engineered in a number of different atomic and molecular systems. In this section a physical implementation using laser-dressed Rydberg atoms \cite{Glaetzle2015, Bijnen2015, Jau2015} is discussed. As will be shown, instances of the qubot described in the previous section can be realized for realistic experimental parameters, provided one chooses the correct logical basis elements and position of corrective sites.
We will focus on a qubot that stabilizes an effective entangled spin state against a depolarizing environment similar to the one outlined in \cite{Guerreiro2020}. We shall refer to this device as an \textit{entanglement qubot}. \begin{figure}\label{levels} \label{color} \end{figure} \begin{figure*}\label{Rydberg_potentials} \label{color} \end{figure*} A pair of $ ^{87} $Rb atoms labelled $ a $ and $ b $ constitutes the qubot nucleus. Effective spin states are provided by hyperfine levels of $ b $, specifically \begin{eqnarray} \vert 0 \rangle &=& \vert 5^{2} S_{1/2}, F = 1, m_{F} = 1 \rangle \ , \\ \vert 1 \rangle &=& \vert 5^{2} S_{1/2}, F = 2, m_{F} = 2 \rangle \ , \end{eqnarray} with energy difference $ \omega_{01} $. The atom-atom interaction potential is induced by dressing the $ \vert 0 \rangle, \vert 1 \rangle $ states with two strongly interacting Rydberg Zeeman sublevels in the $n^{2} P_{1/2}$ manifold via Rabi oscillations with detunings $ \Delta_{\pm} $ and frequencies $ \Omega_{\pm} $ using $ \sigma^{\pm} $ polarized light. The interaction between Rydberg states arises from a van der Waals potential of the form $ C_{6} R^{-6} $, and a fixed orientation of the two particles is considered, with the atoms polarized perpendicular to the plane. Large detunings guarantee that only a small fraction of the Rydberg states is admixed to the $ \vert 0 \rangle, \vert 1 \rangle $ levels while maintaining a long lifetime. Following \cite{Glaetzle2015}, the Rydberg states are \begin{eqnarray} \vert r_{\pm} \rangle = \vert n^{2} P_{1/2}, m_{j} = \pm 1/2 \rangle \vert m_{I} = 3/2 \rangle \ , \end{eqnarray} with an energy difference $ \Delta E_{r} $. Detunings are chosen such that the energy conservation condition $ \Delta E_{r} = (\Delta_{+} - \Delta_{-}) $ is satisfied. A level diagram is shown in Figure \ref{levels}. The atoms are trapped in one-dimensional potentials, insensitive to their internal states.
State-independent trapping of Rydberg dressed atoms can be achieved in so-called magic \cite{Ye2008, Zhang2011} and magnetic traps \cite{Boetes2018}. While atom $ a $ is fixed at the origin, $ b $ is able to move under the influence of a force resulting from the combination of an external tweezer and the atom-atom interaction potential. As in quantum chemistry \cite{Carr2009, Krems}, the time scale associated with electronic dynamics is much shorter than the time scale of nuclear motion. An effective spin-dependent Born-Oppenheimer potential can therefore be derived at fixed atomic separations $ R $. In the limit of large detunings $ \Omega_{\pm} \ll \Delta_{\pm} $ and for $ \Delta_{+}/ \Delta_{-} < 0, \Delta_{+} + \Delta_{-}< 0 $, adiabatic elimination \cite{Paulisch2014} can be used in the rotating frame to obtain an effective interaction acting on the subspace generated by the $ \vert 0 \rangle, \vert 1 \rangle $ states to fourth order in $ \Omega_{\pm} / \Delta_{\pm} $, \begin{align} V_{I}(R) = J_{z} Z_{a} Z_{b} + J_{x} X_{a} X_{b} + J_{y} Y_{a} Y_{b} +J_{\parallel} \left( Z_{a} + Z_{b} \right) \ , \label{real_int} \end{align} where $ J_{\alpha}(R) $ ($ \alpha = x,y,z $) are radial steplike coefficients depending on the Rabi frequencies $ \Omega_{\pm} $, detunings $ \Delta_{\pm} $ and van der Waals $ C_{6} $ coefficients for the $ n^{2} P_{1/2}$ manifold. $ J_{\parallel} $ is an effective magnetic field, which we assume can be cancelled by an additional weak non-homogeneous field on the order of \SI{2}{G}. See Appendices A and B for explicit definitions, formulas and details on the potential and effective magnetic field, respectively. A plot of the $ J_{\alpha} $ spin pattern for $ n = 60 $, detunings $ \Delta_{-} = - \Delta_{+} = 2\pi \times \SI{50}{MHz} $ and Rabi frequencies $ \Omega_{-} = \Omega_{+} / 3 = 2 \pi \times \SI{3}{MHz} $ can be seen in Figure \ref{Rydberg_potentials}(a).
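As a consistency check of Eq. \eqref{real_int}: once the $ J_{\parallel} $ term is cancelled, the remaining interaction is diagonal in the Bell basis, and the four eigenvalues are the spin-dependent potential landscapes evaluated at that separation. A numpy sketch with placeholder coupling values (illustrative numbers, not those of Figure \ref{Rydberg_potentials}):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Placeholder couplings at one fixed separation R (illustrative values)
Jz, Jx, Jy = 0.8, 0.3, -0.5

# Interaction of Eq. (real_int) with the J_parallel field term cancelled
V = Jz * np.kron(Z, Z) + Jx * np.kron(X, X) + Jy * np.kron(Y, Y)

s2 = 1 / np.sqrt(2)
bell = {
    "phi+": s2 * np.array([1, 0, 0, 1], dtype=complex),
    "phi-": s2 * np.array([1, 0, 0, -1], dtype=complex),
    "psi+": s2 * np.array([0, 1, 1, 0], dtype=complex),
    "psi-": s2 * np.array([0, 1, -1, 0], dtype=complex),
}

# Each Bell state is an eigenstate; its eigenvalue is the value of the
# corresponding potential landscape at this separation
expected = {
    "phi+": Jz + Jx - Jy,
    "phi-": Jz - Jx + Jy,
    "psi+": -Jz + Jx + Jy,
    "psi-": -Jz - Jx - Jy,
}
for name, v in bell.items():
    assert np.allclose(V @ v, expected[name] * v)
```

Since the $ J_{\alpha}(R) $ vary with separation, each Bell state sees its own eigenvalue curve, which is exactly the set of landscapes plotted in Figure \ref{Rydberg_potentials}(b).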
Note that these are in the same parameter region as that used for realizing the quantum spin ice Hamiltonian on a kagome lattice in \cite{Glaetzle2015, Glaetzle2014}. The parameters defining a qubot potential are not unique, allowing some freedom in the construction; for an example of a different set of numbers and the resulting spin pattern see Appendix C. From the spin pattern coefficients together with Eqs.\eqref{V_1}-\eqref{V_4} and a trap potential $ V_{t}(R) $ we can derive the collective spin-dependent potentials acting on particle $ b $. Consider a trap potential provided by two neighboring optical tweezers, \begin{eqnarray} V_{t}(R) = V_{0} \left[ \left( R - \delta_{1} \right)^{2} + \left( R - \delta_{2} \right)^{2} \right] \end{eqnarray} where $ V_{0} = \SI{15}{kHz / \mu m^{2}} $, $ \delta_{1} = \SI{1.6}{\mu m} $ and $ \delta_{2} = \SI{2.0}{\mu m} $. The resulting spin-dependent potential landscapes $ V(R) $ can be seen in Figure \ref{Rydberg_potentials}(b), where each trace corresponds to a different Bell state of the two atoms. Note that the equilibrium positions are separated by approximately $ \SI{0.3}{\mu m} $. Trap frequencies are approximately $ \omega_{t} / 2 \pi \approx \SI{1}{kHz} $. Possible positions for the corrective sites $ L1 $ and $ L2 $, corresponding to the transformations \eqref{loop_eqs}, are represented by dashed vertical lines. Note that the potential landscapes for the Bell states $ \vert \psi^{-} \rangle $ and $ \vert \psi^{+} \rangle $ overlap. This implies that neither $ \vert \psi^{-} \rangle $ nor $ \vert \psi^{+} \rangle $ can be chosen as the protected state, since in that case phase errors could not be corrected. The protected logical state is chosen to be $ \vert \phi^{+} \rangle $. \textit{Corrective sites}. Correctors $ L1 $ and $ L2 $ were previously considered to be qubits acting as an entropy sink for maintaining the purity of the protected logical qubit state carried by the nucleus.
The interaction between superconducting quantum electronics and atomic \cite{Kielpinski2012}, molecular \cite{Andre2006} and mesoscopic particles \cite{Martinetz2020} has been extensively studied in the context of hybrid quantum systems, and the coupling between NV centers and superconductors has been observed \cite{Kubo2011}. A number of different implementations involving superconducting qubit systems can therefore be expected. Beyond qubits, one may consider additional atoms as candidates for implementing corrective devices. Controlled atomic collisions \cite{Jaksch1999} would provide the mechanism for position-dependent unitary operations. One could envision a lattice with arrays of \textit{data} particles interspersed with \textit{corrective} particles, analogous to the surface code \cite{Fowler2012}; occurrence of errors would alter the interaction between data particles, enabling or inhibiting motion and tunneling - and consequently interactions - with neighboring corrective sites. It would be like a surface code \textit{in motion}, where errors induce controlled motion leading to correction feedbacks. It is important to stress that in the course of the qubot action, the entropy of the corrective atoms would increase, and a dissipative map for restarting the correctors in their original state would have to be continuously enforced, for example through an amplitude damping channel \cite{Guerreiro2020}. Corrective devices could also be implemented using Rabi oscillations between the $ \vert 0 \rangle, \vert 1 \rangle $ levels.
By carefully tuning the Rabi frequency of the transition and the profile of the spin-dependent potentials in Figure \ref{Rydberg_potentials}(b) it is in principle possible to engineer the transit time of atom $ b $ through $ L1 $ and $ L2 $ such that $ Z_{b} $ and $ X_{b} $ operations are applied, analogous to the transit-time stimulated decay in ammonia masers \cite{Feynman_lect} and Ramsey interferometry in atomic fountain clocks \cite{Wynands2005}. In this implementation - probably the most practical from an experimental point of view - the electromagnetic field assumes the role of entropy sink, since conditional $ X $ and $ Z $ operations on the atom would introduce uncertainties in the intensity and phase of the field, respectively. A schematic of this implementation is shown in Figure \ref{setupB}. \begin{figure}\label{setupB} \label{color} \end{figure} \textit{Operation, cooling and lifetime}. Operation of the qubot proceeds as described in the previous section: the occurrence of an error induces a change in the potential landscape seen by atom $ b $, thus forcing it into one of the corrective sites $ L1 $ or $ L2 $. Note that errors can occur due to external environmental influence or intrinsically due to thermal and quantum fluctuations of the atomic motion. Consider atom $ b $ in a thermal state. For temperatures on the order of \SI{10}{nK}, reachable for atomic ensembles \cite{Weld2009}, the occupation number of the atomic motion is $ \bar{n} \approx 0.1 $, showing that the atom is effectively in the trap ground state. The zero-point motion of the atom is approximately $ R_{\mathrm{zpm}} \simeq \sqrt{\hbar/2m\omega_{t}} \approx \SI{0.23}{\mu m} $, indicating that at \SI{10}{nK} quantum fluctuations can cause the atom to reach the corrective sites even when no environmental error took place, inducing a change in the qubot state. Hence, intrinsic fluctuation errors are expected to constitute a portion of the total errors.
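The quoted zero-point spread follows directly from the trap parameters; a quick back-of-the-envelope check (standard CODATA constants and the $^{87}$Rb mass):

```python
import math

hbar = 1.054571817e-34   # J s
m_rb87 = 1.44316060e-25  # kg, mass of 87Rb
omega_t = 2 * math.pi * 1e3  # rad/s, ~1 kHz trap frequency

# Zero-point spread R_zpm = sqrt(hbar / (2 m omega_t))
r_zpm = math.sqrt(hbar / (2 * m_rb87 * omega_t))
print(f"R_zpm = {r_zpm * 1e6:.2f} um")  # ~0.24 um, consistent with the ~0.23 um quoted
```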
In the next section, a model of the qubot operation taking into account intrinsic and external errors will be discussed. Errors can be effectively corrected provided the qubot nucleus undergoes constant cooling of its motional degrees of freedom to dissipate the kinetic energy gained from mechanical forces due to potential changes. Such a cooling mechanism needs to preserve the quantum information stored in the nucleus, so it must be insensitive to the quantum state stored in the spins. State-insensitive cooling of neutral atoms can be achieved via superfluid immersion \cite{Daley2004}, cavity cooling \cite{Griessner2004} or sympathetic cooling through spin-independent Rydberg interactions with neighboring atoms \cite{Belyansky2019}. What is the order of magnitude of the expected lifetime for the protected entangled state? The $ 60P_{1/2} $ Rydberg state has a lifetime on the order of $ \tau_{r} \approx \SI{133}{\mu s} $ \cite{Beterov2009}. This implies a bare lifetime for the effective spin state of $ \tau_{s} \approx (2 \Delta_{-}/\Omega_{-})^{2} \tau_{r} \approx \SI{9}{ms} $ \cite{Glaetzle2015}, corresponding to a spin decoherence rate $ \Gamma \approx \SI{111}{Hz} $. A decay process to the ground state $ \vert 0 \rangle $ is defined by the following transformations, \begin{eqnarray} \vert 0 \rangle \vert e_{0} \rangle &\rightarrow & \vert 0 \rangle \vert e_{0} \rangle \\ \vert 1 \rangle \vert e_{0} \rangle &\rightarrow & \sqrt{1 - \tau_{s}^{-1} dt} \vert 1 \rangle \vert e_{0} \rangle + \sqrt{\tau_{s}^{-1} dt} \vert 0 \rangle \vert e_{1} \rangle \end{eqnarray} where the first ket corresponds to the spin of the particle while the second ket represents the environment state. The action of this quantum channel upon the elements of the Bell basis can be written in terms of strings of Pauli errors \cite{Preskill}. It is thus expected that the qubot is able to extend the lifetime of Rydberg-dressed entangled states.
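The decay transformations above are those of an amplitude-damping channel with per-step decay probability $ p = \tau_{s}^{-1} dt $. A short numpy check of its Kraus form (the value of $ p $ here is arbitrary):

```python
import numpy as np

# Kraus operators of the decay channel implied by the transformations above,
# with p = dt / tau_s the decay probability per time step (arbitrary value)
p = 1e-3
K0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)
K1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)

# Completeness: sum_k K_k^dag K_k = I, so the channel is trace preserving
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

# Acting on |1><1|, a fraction p of the population decays to |0>
rho1 = np.array([[0, 0], [0, 1]], dtype=complex)
rho_out = K0 @ rho1 @ K0.conj().T + K1 @ rho1 @ K1.conj().T
assert np.isclose(rho_out[0, 0].real, p)
assert np.isclose(rho_out[1, 1].real, 1 - p)
```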
\section{Dynamics Simulation} Exploration of the qubot requires simulation of its error-correction dynamics. Any such simulation must take into account the effects of quantum fluctuations of the atomic motion, as these fluctuations are in themselves a source of errors that can disturb the protected Bell state. A first-principles description of the spin and motion degrees of freedom is intricate, as the spin state is subject to transformations conditional on the motion state, which is in turn conditioned on the spin through the spin-dependent potential. As Wheeler would say \cite{Wheeler}: \textit{spin tells matter how to move, matter tells spin how to turn}. To capture the essential features of the qubot we propose an open quantum system model in which the motion and spin degrees of freedom follow a set of discrete-time coupled stochastic Schr\"odinger equations. Each realization of the evolution is described in terms of sequences of quantum state pairs, denoted $ \vert \psi \rangle $ for the spin and $ \vert \phi \rangle $ for the motion degree of freedom. Averaging over many realizations of the stochastic process yields the mean behavior of the system. The spin and motion degrees of freedom act as environments for each other. This idea can be used to motivate the model as follows. For simplicity, discretize (1D) space into a set of points $ R_{k} $. The position state reads \begin{eqnarray} \vert \phi \rangle = \sum_{k} \phi(R_{k}) \vert R_{k} \rangle \end{eqnarray} where $ \vert \phi(R_{k}) \vert^{2} $ gives the probability of finding the particle at position $ R_{k} $.
The initial state evolves in a small time increment $ \delta t $ according to \begin{align} \vert \psi \rangle \vert \phi \rangle \xrightarrow{\delta t} \sum_{i} \phi(R_{i}) ( T(R_{i}) \vert \psi \rangle ) (W(\vert \psi \rangle) \vert R_{i} \rangle ) \nonumber \\ = \vert \Psi(t + \delta t) \rangle \end{align} where $ T(R_{i}) $ is the identity operator unless $ R_{i} = R_{L1} $ or $ R_{i} = R_{L2} $, for which \begin{eqnarray} T(R_{L1}) = Z_{b} \\ T(R_{L2}) = X_{b} \end{eqnarray} The operator $ W(\vert \psi \rangle) $ contains information on the spin-dependent potential and is responsible for the evolution of the motion state. Expanding $ \vert \Psi(t + \delta t) \rangle $, \begin{eqnarray} \vert \Psi(t + \delta t) \rangle &=& \sum_{i \neq L1, L2} \phi(R_{i}) \vert \psi \rangle (W(\vert \psi \rangle) \vert R_{i} \rangle ) \nonumber \\ &+& \phi(R_{L1}) ( Z_{b} \vert \psi \rangle ) (W(\vert \psi \rangle) \vert R_{L1} \rangle ) \nonumber \\ &+& \phi(R_{L2}) ( X_{b} \vert \psi \rangle ) (W(\vert \psi \rangle) \vert R_{L2} \rangle ) \label{unitary_spin_motion} \end{eqnarray} Assuming the spin state is continuously monitored in the Bell basis, the above state continuously collapses to a random separable state, allowing the phase information and correlations of the global state to be ignored. Note that under this \textit{monitoring} assumption one can describe the dynamics of the system within a simpler scenario and yet verify the error correction capability of the proposed qubot. Moreover, monitoring of the joint spin state in the Bell basis can be achieved by continuous measurement of the force acting on particle $ a $, since the interaction between the particles is determined by their joint spin state.
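The conditional evolution above can be sketched on a toy grid with $ W $ set to the identity, so that only the role of $ T(R_{i}) $ is visible; the grid size and site indices are illustrative:

```python
import numpy as np

# Discretized 1D positions; sites 2 and 7 play the role of the corrective
# sites L1 and L2 (grid size and site indices are illustrative)
n_sites = 10
i_L1, i_L2 = 2, 7

Z = np.diag([1.0, -1.0]).astype(complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def T(i):
    # Identity everywhere except the corrective sites
    if i == i_L1:
        return Z
    if i == i_L2:
        return X
    return np.eye(2, dtype=complex)

# Spin state |psi> = |0> and a normalized Gaussian wavepacket phi(R_i)
psi = np.array([1, 0], dtype=complex)
phi = np.exp(-0.5 * ((np.arange(n_sites) - 4.0) / 1.5) ** 2).astype(complex)
phi /= np.linalg.norm(phi)

# |Psi(t+dt)> = sum_i phi(R_i) (T(R_i)|psi>) |R_i>, with W set to identity
Psi = np.zeros((2, n_sites), dtype=complex)
for i in range(n_sites):
    Psi[:, i] = phi[i] * (T(i) @ psi)

# The map is norm preserving ...
assert np.isclose(np.linalg.norm(Psi), 1.0)
# ... and the probability that X_b was applied to the spin equals the
# probability of finding the particle at L2, i.e. gamma_L2 * dt below
assert np.isclose(np.sum(np.abs(Psi[1, :]) ** 2), np.abs(phi[i_L2]) ** 2)
```

This is the origin of the correction rates defined next: the weight of each corrective branch is the position-space population at the corresponding site.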
The motion state then acts as an environment for the spin, inducing \textit{corrective} jump operators, \begin{eqnarray} L_{1} = \sqrt{\gamma_{L1}} Z_{b} \\ L_{2} = \sqrt{\gamma_{L2}} X_{b} \end{eqnarray} where we define \textit{correction rates} as \begin{eqnarray} \gamma_{L1} \delta t &=& \vert \phi(R_{L1})\vert^{2} \\ \gamma_{L2} \delta t &=& \vert \phi(R_{L2}) \vert ^{2} \end{eqnarray} Note that the probability of a given corrective jump occurring is also the probability of finding the particle in the corresponding corrective site. In addition to corrective jumps, the spin state is also under the effect of a depolarizing channel due to an external decoherence environment, defined in terms of the collapse operators \begin{eqnarray} L_{3} = \sqrt{\dfrac{\Gamma}{3}} X_{b} \ , \ L_{4} = \sqrt{\dfrac{\Gamma}{3}} Y_{b} \ , \ L_{5} = \sqrt{\dfrac{\Gamma}{3}} Z_{b} \end{eqnarray} where $ \Gamma $ is the decoherence rate. Conversely, the spin acts as an environment for the motion state. If no spin corrective jump occurs, the motion state is left almost unperturbed, according to \eqref{unitary_spin_motion}, and evolves through the unitary predicted by the spin state $ \vert \psi \rangle $ plus the effects of a damping collapse operator provided by an additional spin-insensitive cooling environment with damping rate $ \kappa $, acting as a drain of kinetic energy, as discussed previously. On the other hand, if a corrective jump $ L_{1} $ or $ L_{2} $ happens, the motion state collapses to $ \vert R_{L1} \rangle $ or $ \vert R_{L2} \rangle $, respectively. The collapsed state subsequently evolves according to the unitary predicted by the spin state $ \vert \psi \rangle $ plus the additional damping collapse operator. When spin jumps happen, the motion Hamiltonian must be updated accordingly for the next time iteration. This evolution can be implemented via a \textit{coupled} Monte-Carlo method.
First, define the Motion Monte-Carlo procedure (MMC) for a damped harmonic oscillator as follows: \begin{itemize} \item[\textbf{(1)}] Define motion state $ \vert \phi \rangle $ and Hamiltonian $ H $; \item[\textbf{(2)}] Compute $ \delta v = \kappa \delta t \langle \phi \vert a^{\dagger} a \vert \phi \rangle $; \item[\textbf{(3)}] Choose a uniformly distributed random number $ q \in [0,1] $; \item[\textbf{(4)}] If $ q < \delta v $, update $\vert \phi \rangle \leftarrow \sqrt{\kappa} \, a\vert \phi \rangle / \sqrt{ \delta v / \delta t} $; \item[\textbf{(5)}] If $ q > \delta v $, update $ \vert \phi \rangle \leftarrow e^{-i \hat{H} \delta t} \vert \phi \rangle / \sqrt{1 - \delta v} $, where $ \hat{H} = H - \frac{i \kappa}{2} a^{\dagger} a $; \end{itemize} We denote by \textbf{MMC}$(\vert \phi \rangle, H, \delta t) $ the output of the above procedure for input state $ \vert \phi \rangle $, Hamiltonian $ H $, over a time step $ \delta t $. This output consists of the updated motion state after one time step. The following algorithm, dubbed Spin-Motion Monte Carlo (\textbf{SMMC}), summarizes one time iteration of the qubot dynamics: \begin{itemize} \item[\textbf{(1)}] Define (update) motion and spin states $ \vert \phi \rangle $ and $ \vert \psi \rangle $ and motion Hamiltonian $ H = H(\vert \psi \rangle) $; \item[\textbf{(2)}] Define correction rates $ \gamma_{L1} \delta t = \vert \langle R_{L1} \vert \phi \rangle \vert^{2} , \gamma_{L2} \delta t = \vert \langle R_{L2} \vert \phi\rangle \vert^{2} $, where $ \delta t $ is the discrete time increment; \item[\textbf{(3)}] Compute $ \delta p_{k} = \delta t \langle \psi \vert L_{k}^{\dagger} L_{k} \vert \psi \rangle $ and $ \delta p = \sum_{k} \delta p_{k} $; \item[\textbf{(4)}] Choose a uniformly distributed random number $ r \in [0,1] $; \item[\textbf{(5)}] If $ r < \delta p $, update $\vert \psi \rangle \leftarrow L_{k} \vert \psi \rangle / \sqrt{\delta p_{k}/ \delta t} $ with probability $ \delta p_{k} / \delta p $; \textbf{(5.1)} If a jump $ L_{k} $ with $ k = 1 $ or $ 2 $ occurred, update $ \vert \phi \rangle \leftarrow \vert R_{L_{k}} \rangle $ and run \textbf{MMC}$(\vert R_{L_{k}} \rangle, H,\delta t)$. After \textbf{MMC}, update the motion state accordingly and set the motion Hamiltonian to $ H = H(L_{k}\vert \psi \rangle) $; \textbf{(5.2)} If a jump $ L_{k} $ with $ k = 3, 4 $ or $ 5 $ occurred, run \textbf{MMC}$(\vert \phi \rangle, H,\delta t)$. After \textbf{MMC}, update the motion state accordingly and set the motion Hamiltonian to $ H = H(L_{k}\vert \psi \rangle) $; \item[\textbf{(6)}] If $ r > \delta p $, update $ \vert \psi \rangle \leftarrow e^{-i H_{s} \delta t} \vert \psi \rangle / \sqrt{1 - \delta p} $, where $ H_{s} = -\frac{i}{2} \sum_{k} L_{k}^{\dagger} L_{k} $; \textbf{(6.1)} Run \textbf{MMC}$(\vert \phi \rangle, H,\delta t)$. After \textbf{MMC}, update the motion state accordingly and set the motion Hamiltonian to $ H = H( \vert \psi\rangle ) $; \item[\textbf{(7)}] Go to \textbf{(1)} for the next iteration. \end{itemize} A time series of quantum states $ \lbrace \vert \psi(t) \rangle, \vert \phi(t) \rangle \rbrace $ is called a quantum trajectory of the system, and can be obtained by iterating \textbf{SMMC}. The mean behavior of the qubot can be obtained by averaging quantities of interest over many quantum trajectories. For example, we can define the \textit{overlap} between the qubot spin state and the protected Bell state as $ F = \mathbb{E} \left[ \vert \langle \psi(t) \vert \phi^{+} \rangle \vert^{2} \right] $, where $\mathbb{E} \left[ ... \right] $ denotes the ensemble average over all quantum trajectories. The quantity $ F $ then measures how close the qubot spin state is on average to the protected state and hence quantifies how well the qubot functions. To simplify the dynamics simulation, spin-dependent potentials are taken to be harmonic traps of equal resonance frequency. This removes any issues due to anharmonicity in the potentials and allows for the definition of fixed phonon creation and annihilation operators.
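A self-contained toy implementation of \textbf{MMC} and \textbf{SMMC} is sketched below. It departs from the text in two labelled simplifications: truncated-Fock-space coherent states stand in for the position kets $ \vert R_{L1} \rangle, \vert R_{L2} \rangle $, and the correction rates carry an overall scale factor so that jump probabilities stay small. All parameter values are illustrative, not those of the paper's QuTiP simulation.

```python
import math
import numpy as np

rng = np.random.default_rng(7)

# --- Truncated Fock space for the motion of atom b ---
N = 15
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
ad = a.conj().T
num = ad @ a

dt = 1e-3          # time step (dimensionless units)
omega_t = 1.0      # trap frequency
kappa = 0.5        # cooling (damping) rate
Gamma = 0.05       # depolarizing rate
gamma0 = 5.0       # overall scale of correction rates (toy rescaling)

# --- Two-spin operators and the Bell basis ---
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2)
Xb, Yb, Zb = np.kron(I2, X), np.kron(I2, Y), np.kron(I2, Z)

s2 = 1 / np.sqrt(2)
bell = {
    "phi+": s2 * np.array([1, 0, 0, 1], dtype=complex),
    "phi-": s2 * np.array([1, 0, 0, -1], dtype=complex),
    "psi+": s2 * np.array([0, 1, 1, 0], dtype=complex),
    "psi-": s2 * np.array([0, 1, -1, 0], dtype=complex),
}
# Trap centers conditioned on the Bell state (toy analogue of R_0(|psi>))
R0 = {"phi+": 0.0, "phi-": 2.0, "psi+": -2.0, "psi-": -2.0}

def coherent(alpha):
    # Coherent state in the truncated Fock basis, renormalized
    c = np.array([alpha**k / math.sqrt(math.factorial(k)) for k in range(N)],
                 dtype=complex)
    return c / np.linalg.norm(c)

# Stand-ins for the position kets |R_L1>, |R_L2> (simplification)
ket_L = [coherent(2.0), coherent(-2.0)]

def H_motion(psi):
    # Displaced-oscillator Hamiltonian whose center follows the Bell state
    shift = sum(abs(np.vdot(v, psi)) ** 2 * R0[k] for k, v in bell.items())
    return omega_t * num - omega_t * shift * (a + ad)

def mmc_step(phi, H):
    # One MMC step for the damped oscillator (steps 1-5)
    dv = kappa * dt * np.vdot(phi, num @ phi).real
    if rng.random() < dv:
        phi = a @ phi                        # damping jump
    else:
        Heff = H - 0.5j * kappa * num        # no-jump effective Hamiltonian
        phi = phi - 1j * dt * (Heff @ phi)   # first-order integrator
    return phi / np.linalg.norm(phi)

def smmc_step(psi, phi):
    # Correction rates from the population at the corrective sites (step 2)
    g = [gamma0 * abs(np.vdot(k, phi)) ** 2 for k in ket_L]
    ops = [Zb, Xb, Xb, Yb, Zb]
    dps = np.array(g + [Gamma / 3] * 3) * dt   # the L_k are scaled Paulis
    dp = dps.sum()
    if rng.random() < dp:                      # a spin jump occurred (step 5)
        k = rng.choice(5, p=dps / dp)
        psi = ops[k] @ psi / np.linalg.norm(ops[k] @ psi)
        if k < 2:                              # corrective jump: collapse motion
            phi = ket_L[k].copy()
    # (step 6) the no-jump spin evolution only renormalizes here, since all
    # L_k are proportional to unitaries; the motion always takes an MMC step
    phi = mmc_step(phi, H_motion(psi))
    return psi, phi

psi = bell["phi+"].copy()                      # protected state
phi = np.zeros(N, dtype=complex); phi[0] = 1   # motional ground state
for _ in range(2000):
    psi, phi = smmc_step(psi, phi)

# Pauli jumps keep the spins inside the Bell basis (up to a phase)
overlap = max(abs(np.vdot(v, psi)) ** 2 for v in bell.values())
assert np.isclose(overlap, 1.0)
assert np.isclose(np.linalg.norm(phi), 1.0)
```

Averaging `abs(np.vdot(bell["phi+"], psi))**2` over many such trajectories gives the toy analogue of the overlap $ F $.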
The potentials shown in Figure \ref{Rydberg_potentials}(b) are approximated as \begin{eqnarray} V(\vert \psi \rangle ,R) = \dfrac{m \omega_{t}^{2}}{2} \left[ R - R_{0}(\vert \psi \rangle) \right]^{2} \end{eqnarray} where $ \omega_{t} / 2 \pi = \SI{1}{kHz} $ and the trap position $ R_{0}(\vert \psi \rangle) $ is given by \begin{figure}\label{simulation1} \label{color} \end{figure} \begin{eqnarray} R_{0}(\vert \psi \rangle) = \left\{ \begin{array}{ll} R_{01}, & \mathrm{if} \ \vert \phi^{+} \rangle \\ R_{10}, & \mathrm{if} \ \vert \phi^{-} \rangle \\ R_{00}, & \mathrm{if} \ \vert \psi^{\pm} \rangle \\ \end{array} \right. \end{eqnarray} The positions $ R_{\alpha \beta} $ are dependent on the details of the experimental implementation. Inspired by Figure \ref{Rydberg_potentials}(b) we consider $ R_{01} = \SI{1.90}{\mu m} $, $ R_{10} = \SI{2.20}{\mu m} $ and $ R_{00} =\SI{1.64}{\mu m} $. Since the Hamiltonian always appears inside a commutator, constant terms can be neglected without affecting the dynamics. Defining the origin of our reference frame at the minimum of the potential $ V(\vert \phi^{+} \rangle) $ and neglecting constant shifts, the Hamiltonian reads \begin{eqnarray} H(\vert \psi \rangle) = \omega_{t} a^{\dagger} a - m \omega_{t}^{2} \Delta R_{0}( \vert \psi \rangle ) R_{\mathrm{zpm}} \left( a^{\dagger} + a \right) \label{Hamiltonian_sim} \end{eqnarray} with $ a^{\dagger}, a $ the creation and annihilation operators for the $ \vert \phi^{+} \rangle $ potential, given by, \begin{eqnarray} a = \sqrt{\dfrac{m\omega_{t}}{2}} \left( R + \dfrac{i}{m\omega_{t}} P \right) \\ a^{\dagger} = \sqrt{\dfrac{m\omega_{t}}{2}} \left( R - \dfrac{i}{m\omega_{t}} P \right) \end{eqnarray} with $ R, P $ the atom position and momentum operators of particle $ b $, respectively, $ R_{\mathrm{zpm}} $ the corresponding zero-point motion and $ \Delta R_{0}(\vert \psi \rangle) = R_{0}(\vert \psi \rangle) - R_{0}(\vert \phi^{+} \rangle ) $. 
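Equation \eqref{Hamiltonian_sim} is a displaced harmonic oscillator, so its ground state is a wavepacket centered at the conditional trap minimum. A numpy check in units $ \hbar = m = 1 $ (the displacement value is arbitrary):

```python
import numpy as np

N = 40                                   # Fock-space truncation
omega_t = 1.0                            # trap frequency (hbar = m = 1)
r_zpm = np.sqrt(1.0 / (2 * omega_t))     # zero-point motion
dR0 = 0.7                                # trap-minimum displacement (arbitrary)

a = np.diag(np.sqrt(np.arange(1, N)), 1)
ad = a.conj().T

# H = omega a^dag a - m omega^2 dR0 R_zpm (a^dag + a)   [Eq. (Hamiltonian_sim)]
H = omega_t * (ad @ a) - omega_t**2 * dR0 * r_zpm * (ad + a)

# Ground state of the truncated Hamiltonian
w, v = np.linalg.eigh(H)
g = v[:, 0]

# Position operator R = R_zpm (a + a^dag); its ground-state mean is dR0,
# i.e. the wavepacket sits at the displaced trap minimum
R = r_zpm * (a + ad)
assert np.isclose(g.conj() @ R @ g, dR0, atol=1e-6)
```

This is why a spin jump acts as a sudden force: the trap center moves to a new $ R_{0}(\vert \psi \rangle) $ while the wavepacket is still centered at the old one.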
The effect of a change in the spin state can be interpreted as the appearance of an additional force acting on particle $ b $. Figure \ref{simulation1} shows the result of iterating \textbf{SMMC} averaged over $ 10^{3} $ quantum trajectories, implemented using QuTiP \cite{qutip}, for the initial Bell-position state $ \vert \phi^{+} \rangle \vert \chi \rangle $, where $ \vert \chi \rangle $ is a Gaussian wavepacket in position with uncertainty $ \Delta R $. See the figure caption for details on the parameters used in the simulation. The top graph shows the mean overlap $ F = \mathbb{E} \left[ \vert \langle \psi(t) \vert \phi^{+} \rangle \vert^{2} \right] $ as a function of time for the qubot (thick green line) compared to the depolarizing channel alone (thin purple line). We can see that initially the qubot overlap drops faster than that of the free spins, but it stabilizes at about $ 70\% $, while the overlap of the freely decohering spins falls well below this value. The middle plot shows the atom position and its quantum uncertainty as a function of time: the action of the qubot stabilizes the location of the atom. Note that motion of the atom towards one corrective site is expected to increase the correction rate of that site and decrease that of the other. This behavior can be seen in the bottom graph, where rates are shown as a function of time. As expected, $ \gamma_{L1} $ (light yellow line) displays significant anti-correlation with $ \gamma_{L2} $ (thick green line). The effect of finite temperature can be evaluated by adapting \textbf{SMMC} to include motion collapse operators $ \sqrt{\kappa(\bar{n} + 1)} a $ and $ \sqrt{\kappa \bar{n}} a^{\dagger} $ representing contact with a thermal bath of phonons at temperature $ T $ with coupling $ \kappa $ and thermal occupation number $ \bar{n} $, where $ \bar{n} = 1 / (e^{\hbar \omega_{t} / k_{B} T} - 1) $.
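The thermal occupation formula is easy to sanity-check numerically (trap frequency from the main text; the test temperatures are arbitrary):

```python
import math

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K

def nbar(omega, T):
    # Bose-Einstein occupation of a trap mode at temperature T
    return 1.0 / math.expm1(hbar * omega / (kB * T))

omega_t = 2 * math.pi * 1e3   # ~1 kHz trap

# At k_B T = hbar omega_t / ln 2 the occupation is exactly 1
T_star = hbar * omega_t / (kB * math.log(2))
assert abs(nbar(omega_t, T_star) - 1.0) < 1e-9

# Occupation rises steeply with temperature (arbitrary test points)
assert nbar(omega_t, 100e-9) > nbar(omega_t, 10e-9)
```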
When in contact with a thermal bath, the particle, initially in the ground state, evolves to a thermal state with mean number of phonons $ \bar{n} $, increasing the position spread and consequently the intrinsic qubot error rate. The spin overlap is thus expected to decrease with temperature. \begin{figure}\label{Temperature} \label{color} \end{figure} The time-averaged steady state overlap $ \langle F \rangle_{s} $ as a function of temperature is plotted in Figure \ref{Temperature}. Each point is the result of time-averaging $ 10^{2} $ quantum trajectories, with error bars corresponding to one standard deviation. As expected, the effect of contact with a heat bath is to decrease the overlap. \begin{figure}\label{optimal} \label{color} \end{figure} Quantum fluctuations of the atomic motion can induce \textit{internal} errors if the atom interacts with the correctors when no external (decoherence) error has taken place. To quantify that effect, the steady state overlap $ \langle F \rangle_{s} $ and correction rates $ \langle \gamma \rangle_{s} $ are numerically calculated for different values of the $ L1 $ position $ \vert R_{L1} \vert $, shown in Figure \ref{optimal}; $ R_{L1} = - R_{L2} $ is assumed. Note that if the correctors are too close to the equilibrium position of $ \vert \phi^{+} \rangle $ ($\vert R_{L1} \vert < \SI{0.40}{\mu m}$), the steady state overlap $ \langle F \rangle_{s} $ falls below 50\%, while the mean rate of `correction' events is on the order of \SI{1}{kHz}, due to the atom fluctuating towards $ L1 $ or $ L2 $ even in the absence of an error. As $\vert R_{L1} \vert $ is increased, the steady state overlap increases, reaching a maximum value $ \langle F \rangle_{s} \approx 0.7 $ for $ \vert R_{L1} \vert \approx \SI{0.63}{\mu m} $, and then decreases again as the correctors are placed further apart from the atom.
The mean correction rates can be seen to decrease as the position $\vert R_{L1} \vert $ is further increased, which is intuitive since larger distances imply longer correction times. The optimal operation point $ \vert R_{L1} \vert \approx \SI{0.63}{\mu m} $ is such that the mean correction rates $ \langle \gamma \rangle $ are of the same order as the decoherence rate $ \Gamma = \SI{100}{Hz}$. See Appendix D for more details. \section{Discussion} Throughout this work we discussed quantum robots, devices such as the one conceptualized in \cite{Guerreiro2020}, capable of harnessing interactions between their constituent parts and the surrounding environment to achieve targeted tasks such as state protection against decoherence. We have introduced for the first time a model of a qubot capable of partially protecting an arbitrary logical qubit state against general single physical qubit errors. The first physical implementation of an instance of such a device, capable of protecting a Bell state against the detrimental action of a depolarizing environment, has been described, as well as Monte-Carlo simulations of the qubot dynamics and the inclusion of effects due to contact of the device with a thermal bath. From where we stand, several directions for future exploration can be identified. For instance, a more thorough investigation of the capabilities of the proposed \textit{entanglement qubot} remains to be done: by tuning the relevant parameters, such as the Rydberg level detunings $ \Delta_{\pm} $ and trap potential $ V_{t}(R) $, can we engineer a qubot capable of protecting entangled states other than the $ \vert \phi^{+} \rangle $ state? What about implementing a system analogous to the conceptual model, capable of protecting an arbitrary logical qubit? Could we extend the device to handle multiple qubits? Would the protection work against general physical errors? We have focused on the implementation using Rydberg-dressed atoms, but that is certainly not the only possibility.
What other opportunities are offered by considering different physical setups for qubots? Polar molecules provide a promising platform \cite{Wei2011, Micheli2006, Brennen2007, Carr2009} with the possibility of coupling to superconducting quantum electronics \cite{Andre2006}. Synthetic molecular machines are one of the frontiers of nanotechnology \cite{Zhang2018, Stoddart2015, Kassem2017, Lau2017}. Enabled by the idea of a quantum robot, we can envision extensions of the molecular machinery toolbox where the quantum states of the nanomachines play a fundamental role in their dynamics. These devices would combine resources from the environment, stochasticity and non-equilibrium dynamics to execute coupled quantum motion and processing of quantum information, entering the realm of quantum nanomechanics. For example, in the entanglement qubot one could set the correction sites to perform the operation $ L1 = L2 = X_{b} $, and initialize the spins in the state $ \vert \psi^{+} \rangle $. This would cause a periodic spin-driven motion of the atom. It would be interesting to investigate the possibility of building quantum time crystals \cite{Choi2017, Wilczek2012, Zhang2017} using this scheme. Quantum robots with no moving parts are also a hitherto unexplored direction. In such devices an error in one degree of freedom would unleash a chain of reactions in other internal non-mechanical parts of the system, which would act back on the affected degree of freedom and steer it to a desired state. This touches upon the theoretical issue of quantum feedback \cite{Milburn, Ahn2002}, in a situation where the feedback itself is carried by quantum mechanical information, rather than the usual classical information scheme in which a measurement result is used to act back on the system. Finally, a very intriguing thought is the combination of a large number of quantum robots interacting with each other.
Large numbers of interacting classical \textit{active} agents display fascinating emergent behavior \cite{Vicsek1995, Geyer2018}. Ensembles of active quantum agents, on the other hand, remain unexplored. Qubots offer a concrete path towards experimentally uncovering the physics of \textit{quantum active} matter. \section{Appendix A: effective potentials} As described in the main text, admixing strongly interacting Rydberg states from the $ n^{2}P_{1/2} $ manifold to the low-lying $ 5^{2}S_{1/2} $ Zeeman sublevels induces spatially dependent spin-spin interactions of the form \eqref{real_int}. For completeness we reproduce the main results of \cite{Glaetzle2015} outlining the toolbox for engineering a wide range of effective spin interactions. The interaction coefficients $ J_{\alpha} $ are calculated by adiabatic elimination of the Rydberg levels $ \vert r_{\pm} \rangle $ up to fourth order in $ \Omega / \Delta $, and are given by \begin{align} J_{z}(R) = \dfrac{1}{4} \left( \tilde{V}_{--}(R) - 2 \tilde{V}_{+-}(R) + \tilde{V}_{++}(R) \right) \ , \\ J_{x}(R) = 2 \left( \tilde{W}_{+-}(R) + \tilde{W}_{++}(R) \right) \ , \\ J_{y}(R) = 2 \left( \tilde{W}_{+-}(R) - \tilde{W}_{++}(R) \right) \ , \\ J_{\parallel}(R) = \dfrac{1}{4} \left( \tilde{V}_{--}(R) - \tilde{V}_{++}(R) \right) \ , \end{align} where the functions $ \tilde{W}_{\alpha \beta}, \tilde{V}_{\alpha \beta} $ are effective radially dependent steplike potentials, \begin{eqnarray} \tilde{V}_{\alpha \alpha}(R) = \dfrac{\Omega^{2}_{\bar{\alpha}}}{2 \Delta_{\bar{\alpha}}} - \dfrac{\Omega^{4}_{\bar{\alpha}}}{4 \Delta_{\bar{\alpha}}^{3}} + \dfrac{\Omega_{\bar{\alpha}}^{4}}{4 \Delta_{\bar{\alpha}}^{2}} \dfrac{ V_{++} - 2 \Delta_{\alpha} }{W_{++}^{2} - ( V_{++} - 2 \Delta_{+} )( V_{++} - 2 \Delta_{-} ) } \end{eqnarray} \begin{eqnarray} \tilde{V}_{+-}(R) = \dfrac{\Omega^{2}_{-}}{4 \Delta_{-}} + \dfrac{\Omega^{2}_{+}}{4 \Delta_{+}} - \dfrac{\Omega_{+}^{2} \Omega_{-}^{2}}{16 \Delta_{+}^{2} \Delta_{-}} - \dfrac{\Omega_{+}^{2} \Omega_{-}^{2}}{16 \Delta_{-}^{2} \Delta_{+}} - \dfrac{\Omega^{4}_{-}}{16 \Delta_{-}^{3}} - \dfrac{\Omega^{4}_{+}}{16 \Delta_{+}^{3}} + \dfrac{\Delta_{\pm}^{2} \Omega_{+}^{2} \Omega_{-}^{2}}{16 \Delta_{+}^{2} \Delta_{-}^{2}} \dfrac{ ( \Delta_\pm - V_{+-} ) }{(\Delta_\pm - V_{+-})^{2} - W_{+-}^{2}} \label{steplike1} \end{eqnarray} \ \begin{eqnarray} \tilde{W}_{+-}(R) &=& \dfrac{\Omega_{+}^{2} \Omega_{-}^{2}}{16 \Delta_{+}^{2} \Delta_{-}^{2}} \dfrac{ \Delta_{\pm}^{2} W_{+-} }{ (\Delta_\pm - V_{+-})^{2} - W_{+-}^{2} } \end{eqnarray} \begin{eqnarray} \tilde{W}_{++}(R) &=& \dfrac{\Omega_{+}^{2} \Omega_{-}^{2}}{4 \Delta_{+} \Delta_{-}} \dfrac{W_{++}}{W_{++}^{2} - ( V_{++} - 2 \Delta_{+} )( V_{++} - 2 \Delta_{-} ) } \label{steplike2} \end{eqnarray} written in terms of the $ n^{2}P_{1/2} $ van der Waals potentials $ V_{\alpha \beta}, W_{\alpha \beta} $. Note that the single-particle light shifts have been included in the above expressions. Moreover, $ \tilde{V}_{+-} = \tilde{V}_{-+} $, and we have defined $ \Delta_{\pm} = \Delta_{+} + \Delta_{-} $ and $ \bar{\alpha} = - \alpha $. In the parameter region $ \Delta_{\pm} < 0 $, $ \Delta_{+} / \Delta_{-} < 0 $ resonant Rydberg excitations are avoided for all values of $ R $. For atomic orientation $ \theta = \pi / 2 $ (polar), $ \phi = 0 $ (azimuthal) the van der Waals potentials are \begin{eqnarray} V_{\alpha \beta} = \dfrac{c_{\alpha \beta}}{R^{6}} \ , \ W_{+-} = \dfrac{w}{R^{6}} = -\dfrac{1}{3} W_{++} \ .
\end{eqnarray} where the so-called $ C_{6} $ coefficients $ c_{\alpha \beta} $ and $ w $ are obtained from second order perturbation theory, and are given by \begin{eqnarray} c_{++} &=&\dfrac{2}{81} \left( 5 C_{6}^{(a)} + 14 C_{6}^{(b)} + 8 C_{6}^{(c)} \dfrac{}{} \right) \\ c_{+-} &=&\dfrac{2}{81} \left( C_{6}^{(a)} + 10 C_{6}^{(b)} + 16 C_{6}^{(c)} \dfrac{}{} \right) \\ w &=&\dfrac{2}{81} \left( C_{6}^{(a)} + C_{6}^{(b)} - 2 C_{6}^{(c)} \dfrac{}{} \right) \end{eqnarray} The \textit{individual channel} coefficients $ C_{6}^{(\nu)} $, $ \nu = a,b,c $, do not depend on the magnetic quantum numbers and characterize the interaction strength. There is one channel for each non-vanishing matrix element of the dipole-dipole interaction potential \cite{Glaetzle2015}, \begin{eqnarray} a &:& \ P_{1/2} + P_{1/2} \rightarrow S_{1/2} + S_{1/2} \\ b &:& \ P_{1/2} + P_{1/2} \rightarrow D_{3/2} + D_{3/2} \\ c &:& \ P_{1/2} + P_{1/2} \rightarrow D_{3/2} + S_{1/2} \end{eqnarray} and each $ C_{6}^{(\nu)} $ is calculated from the radial part of the dipole-dipole matrix element \cite{Walker2007}, \begin{eqnarray} C^{(\nu)}_{6} = \sum_{n_{\alpha} n_{\beta}} \dfrac{e^{4}}{\delta_{\alpha \beta}} \left( R_{nl}^{n_{\alpha} l_{\alpha}} R_{nl}^{n_{\beta} l_{\beta}} \right)^{2} \label{C6_coeff} \end{eqnarray} where \begin{eqnarray} R_{nl}^{n_{i} l_{i}} = \int dr r^{2} \psi_{n,l,j}(r)^{*} r \psi_{n_{i},l_{i},j_{i}}(r) \ , \end{eqnarray} and $ \delta_{\alpha \beta} $ is the energy defect between levels $ n_{\alpha}$ and $ n_{\beta} $. \begin{figure}\label{C6} \label{color} \end{figure} To numerically obtain the coefficients \eqref{C6_coeff}, and consequently the step-like potentials \eqref{steplike1} and \eqref{steplike2}, we use the ARC Python library for alkali Rydberg atoms \cite{ARC}. Numerical results are shown in Figure \ref{C6} as a function of the principal quantum number for the $ n^{2}P_{1/2} $ manifold. 
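The channel combinations above are simple linear maps of the $ C_{6}^{(\nu)} $. As a quick numerical cross-check, the following sketch evaluates $ c_{++} $, $ c_{+-} $ and $ w $ from the $ n = 60 $ channel values quoted in this appendix (taken here as inputs, in units of $ 2\pi \times \mathrm{MHz}\,\mu\mathrm{m}^{6} $):

```python
# Combine the individual-channel C6 coefficients into the pair-state
# coefficients c_{++}, c_{+-} and w, following the expressions above.
# Inputs: the n = 60 values quoted in this appendix (channel a is negative).
C6a, C6b, C6c = -2.7e5, 1.1e3, 4.9e4  # units: 2*pi x MHz um^6

c_pp = (2 / 81) * (5 * C6a + 14 * C6b + 8 * C6c)
c_pm = (2 / 81) * (C6a + 10 * C6b + 16 * C6c)
w = (2 / 81) * (C6a + C6b - 2 * C6c)

print(f"c_++ = {c_pp:.3g}, c_+- = {c_pm:.3g}, w = {w:.3g}")
```

The signs that come out (attractive $ c_{++} $, repulsive $ c_{+-} $, negative $ w $) set the relative sign of $ W_{+-} $ and $ W_{++} $ used in the step-like potentials.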
For $ n = 60 $, as used in the main text, we find \begin{eqnarray} - C_{6}^{(a)} \approx 2 \pi \times \SI{2.7E5}{MHz \cdot \mu m^{6}} \\ C_{6}^{(b)} \approx 2 \pi \times \SI{1.1E3}{MHz \cdot \mu m^{6}} \\ C_{6}^{(c)} \approx 2 \pi \times \SI{4.9E4}{MHz \cdot \mu m^{6}} \end{eqnarray} \section{Appendix B: Magnetic field $ J_{\parallel} $} Besides the $ J_{\alpha}(R) $ coefficients, the Rydberg dressing generates an effective magnetic field term $ J_{\parallel} (Z_{a} + Z_{b}) $ in the interaction energy. Under the influence of this term, Bell states of the $ ab $ pair are no longer eigenstates of the interaction. To obtain the spin dependent potential landscapes given by the eigenvalues in Eqs.~\eqref{V_1}-\eqref{V_4}, we need to cancel $ J_{\parallel} $ by applying an external spatially dependent static field. How large does such a field need to be? A plot of $ J_{\parallel} $ can be seen in Figure \ref{J_par}. \begin{figure}\label{J_par} \label{color} \end{figure} Note that $ \langle J_{\parallel} \rangle \approx \SI{1401}{kHz} $. Considering the Zeeman sensitivity $ \vert g_{F} \vert \mu_{B} / h \approx \SI{0.70}{MHz / G} $ for the $ 5^{2} S_{1/2} $ states \cite{Bize1999}, this effective magnetic field can be cancelled by an additional weak inhomogeneous field of order of magnitude $ \vert B_{c} \vert \approx \SI{2}{G} $. \section{Appendix C: Alternative spin pattern} Alternative spin dependent potentials, defined by parameters different from the ones employed in the main text, are shown in Figure \ref{alternative_potentials}. Here, we consider detunings $ \Delta_{+} = -2 \pi \times \SI{70}{MHz} $, $ \Delta_{-} = 2 \pi \times \SI{30}{MHz} $, Rabi frequencies $ \Omega_{+} = \Omega_{-} = -2 \pi \times \SI{7}{MHz} $ and the trap potential \begin{eqnarray} V_{t}(R) = V_{0} \left( R - \delta \right)^{2} \end{eqnarray} where $ V_{0} = \SI{15}{kHz / \mu m^{2}} $ and $ \delta = \SI{2.30}{\mu m} $. 
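The compensating-field estimate of Appendix B is a one-line calculation, dividing the mean dressing shift by the ground-state Zeeman sensitivity; as a minimal sketch (both input numbers are those quoted above):

```python
# Back-of-the-envelope compensating field for the dressing-induced
# effective magnetic field: |B_c| ~ <J_par> / (Zeeman shift per Gauss).
j_par_mean_MHz = 1.401  # <J_par> ~ 1401 kHz, from Appendix B
zeeman_MHz_per_G = 0.70  # 5 S_1/2 sensitivity, ~0.70 MHz/G

B_c = j_par_mean_MHz / zeeman_MHz_per_G  # Gauss
print(f"|B_c| ~ {B_c:.1f} G")
```

which reproduces the quoted $ \vert B_{c} \vert \approx \SI{2}{G} $.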
\begin{figure}\label{alternative_potentials} \label{color} \end{figure} Note that the resulting landscapes in Figure \ref{alternative_potentials}(c) suggest $ \vert \phi^{-} \rangle $ as the protected state, while the corrective loops $ L1$ and $ L2 $ should be reversed with respect to the choice discussed in the main text. The effective magnetic field has a mean value $ \langle J_{\parallel} \rangle \approx \SI{1803}{kHz} $, which requires a slightly higher compensating magnetic field, but still on the order of a few Gauss. The spatial profile $ J_{\parallel}(R) $ is shown in Figure \ref{alternative_J_par}. \begin{figure}\label{alternative_J_par} \label{color} \end{figure} \section{Appendix D: Optimal operation} To evaluate the effect of the positioning of the correctors $ L1 $ and $ L2 $, we ran \textbf{SMMC}, as described in the main text, for different values of the positions $R_{L1} = - R_{L2} $. \begin{figure}\label{fidelity_optimal_comparison} \label{color} \end{figure} Figure \ref{fidelity_optimal_comparison} shows traces of the overlap $ F $ as a function of time. Each trace corresponds to a different corrector position (see caption), and the overlap of free spins under the action of the depolarizing channel is shown as the grey dashed line for comparison. The points in Figure \ref{optimal} (see main text) are obtained by time-averaging the overlap beyond $ \SI{10}{ms} $ for each of the traces in Figure \ref{fidelity_optimal_comparison}. We can see that if the correctors' positions are too close to the atom's equilibrium position, the overlap quickly decays due to \textit{internal} errors, occurring when a quantum fluctuation in the atomic position places the atom near a corrective site. This fast drop in overlap can be mitigated by positioning the correctors further apart from the $ \vert \phi^{+} \rangle $ equilibrium point. 
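The steady-state overlap can be estimated from the decoherence accumulated during stabilization; a minimal sketch, using $ \tau_{D} = \SI{10}{ms} $ and $ t_{s} \approx \SI{4}{ms} $, the values adopted in this appendix:

```python
import math

# Decoherence at rate Gamma = 1/tau_D acts for roughly the stabilization
# time t_s before the system settles, so F ~ exp(-t_s / tau_D).
tau_D = 10e-3  # s, characteristic decay time (Gamma = 100 Hz)
t_s = 4e-3  # s, stabilization time from the main text

F_est = math.exp(-t_s / tau_D)
print(f"F ~ {F_est:.2f}")
```

giving $ F \approx 0.67 $, consistent with the maximum steady-state overlap reported below.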
There is, however, a trade-off: the maximum steady state overlap $ \approx 70 \% $ is reached for a position $ \vert R_{L1} \vert \approx \SI{0.63}{\mu m} $, while placing the correctors further than that reduces the correction rates below the decoherence rate and consequently lowers the steady state overlap. Decoherence causes the overlap to decrease exponentially according to $ e^{-\Gamma t} = e^{-t / \tau_D} $, where $ \tau_{D} = \Gamma^{-1} = (\SI{100}{Hz})^{-1} = \SI{10}{ms} $ is the characteristic decay time of the system. Decoherence effectively freezes when the system reaches its steady state, which happens after a stabilization time $ t_s $ elapses. From Figure \ref{simulation1} in the main text, we see that $ t_{s} \approx \SI{4}{ms} $, yielding an expected overlap of $ F \approx e^{-t_{s}/\tau_{D}} \approx 0.67 $, in accordance with the simulation results. \\ \twocolumngrid \end{document}
\begin{document} \title{\bf Indecomposable Decomposition and Couniserial Dimension} \begin{abstract} {\noindent Dimensions such as the Gelfand, Krull, and Goldie dimensions play an intrinsic role in the study of rings and modules, providing useful technical tools for investigating their structure. In this paper we define a dimension, called couniserial dimension, that measures how close a ring or module is to being uniform. Despite their different objectives, the couniserial dimension and the Krull dimension share certain common properties: for instance, every module having such a dimension contains a uniform submodule and has finite uniform dimension. Like all dimensions, this is an ordinal valued invariant. Every module of finite length has couniserial dimension, and its value lies between the uniform dimension and the length of the module. Modules with countable couniserial dimension are shown to possess an indecomposable decomposition. In particular, a von Neumann regular ring with countable couniserial dimension is semisimple artinian. If the maximal right quotient ring of a non-singular ring $R$ has couniserial dimension as an $R$-module, then $R$ is a semiprime right Goldie ring. As one of the applications, it follows that all right $R$-modules have couniserial dimension if and only if $R$ is a semisimple artinian ring. } \end{abstract} \noindent{\bf 0. Introduction} { In this article we introduce a notion of dimension of a module, to be called couniserial dimension. It is an ordinal valued invariant that is in some sense a measure of how far a module is from being uniform. In order to define couniserial dimension for modules over a ring $R$, we first define, by transfinite induction, classes $\zeta _{\alpha }$ of $R$-modules for all ordinals $\alpha \geq 1$. First we remark that if a module $M$ is isomorphic to all its non-zero submodules, then $M$ must be uniform. 
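For instance, the $\Bbb{Z}$-module $\Bbb{Z}$ illustrates this: it is isomorphic to each of its non-zero submodules, since $n\Bbb{Z}\cong \Bbb{Z}$ for every integer $n\geq 1$, and it is indeed uniform, because any two non-zero submodules $m\Bbb{Z}$ and $n\Bbb{Z}$ meet in ${\rm lcm}(m,n)\Bbb{Z}\neq 0$.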
To start with, let $\zeta _{1}$ be the class of all uniform modules. Next, consider an ordinal $\alpha >1$; if $ \zeta _{\beta }$ has been defined for all ordinals $\beta <\alpha $, let $ \zeta _{\alpha }$ be the class of those $R$-modules $M$ such that, for every non-zero submodule $N$ of $M$ with $N\ncong M$, we have $N\in \bigcup_{\beta <\alpha }\zeta _{\beta }$. If an $R$-module $M$ belongs to some $\zeta _{\alpha }$, then the least such $\alpha $ is called the {\it couniserial dimension} of $M$, denoted by c.u.dim$(M)$. For $M=0$, we define c.u.dim$(M)=0$. If a non-zero module $M$ does not belong to any $\zeta_{\alpha }$, then we say that c.u.dim$(M)$ is not defined, or that $M$ has no couniserial dimension. Equivalently, Proposition \ref{2.2} shows that }{an $R$-module $M$ has couniserial dimension if and only if for each descending chain of submodules of $M$, $M_{1}\geq M_{2}\geq ...$, there exists $n\geq 1$ such that either $M_{n}$ is uniform or $M_{n}\cong M_{k}$ for all $ k\geq n$.}{ It is clear from the definition that every submodule, and so every summand, of a module with couniserial dimension has couniserial dimension. Also note that, for a positive integer $n$, the couniserial dimension of }$\Bbb{Z}^{n}$ is $n$. {An example is given to show that the direct sum of two modules each with couniserial dimension (even two copies of the same module) need not have couniserial dimension. In Section 2, we prove some basic properties of the couniserial dimension. In Section 3, we prove our main results. It is shown in Theorem \ref{decomposation} that a module of countable (finite or infinite) couniserial dimension can be decomposed into indecomposable modules. Theorem \ref{dedekind} shows that a Dedekind finite module with couniserial dimension is a finite direct sum of indecomposable modules. 
Theorem \ref{2.5} in Section 3 shows that for a right non-singular ring $R$ with maximal right quotient ring $Q$, if $Q_R$ has couniserial dimension, then $R$ is a semiprime right Goldie ring which is a finite product of piecewise domains. The reader may compare this with the well-known result that a prime ring with Krull dimension is a right Goldie ring but need not be a piecewise domain. Furthermore, a prime right Goldie ring need not have couniserial dimension, as is also the case for Krull dimension. \\ \indent In Section 4, we give some applications of couniserial dimension. It is shown in Proposition \ref{Artinian} that a module $M$ with finite length is semisimple if and only if for every submodule $N$ of $M$ the right $R$-module $\oplus _{i=1}^{\infty }M/N$ has couniserial dimension. As a consequence, a commutative noetherian ring $R$ is semisimple if and only if for every finite length module $M$ the module $\oplus _{i=1}^{\infty }M$ has couniserial dimension.} It is shown in Proposition \ref{anti is injective} that if $P$ is an anti-coHopfian projective right $R$-module and $\oplus _{i=1}^{\infty }E(P)$ has couniserial dimension, then $P$ is injective. As another application we show that all right (left) $R$-modules have couniserial dimension if and only if $R$ is semisimple artinian (see Theorem \ref{final}). Several examples are included in the paper that demonstrate why the conditions imposed are necessary and what relation, if any, they bear to corresponding results in the literature. \section{\hspace{-6mm}. Definitions and Notation.} Recall that a semisimple module $M$ is said to be {\it homogeneous} if $M$ is a direct sum of pairwise isomorphic simple submodules. A module $M$ has {\it finite uniform dimension} (or {\it finite Goldie rank}) if $M$ contains no infinite direct sum of non-zero submodules, or equivalently, there exist independent uniform submodules $U_1, ... 
,U_n$ in $M$ such that $\oplus _{i = 1}^n U_i$ is an essential submodule of $M$. Note that $n$ is uniquely determined by $M$. In this case, we write u.dim$(M) = n$.\\ \indent For any module $M$, we define Z$(M) = \lbrace x \in M : $ r.ann$(x)$ is an essential right ideal of $R \rbrace$. It can be easily checked that Z$(M)$ is a submodule of $M$. If Z$(M) = 0$, then $ M$ is called a {\it non-singular} module. In particular, if we take $M = R_R$, then $R$ is called right non-singular if Z$(R_R) = 0$.\\ \indent A ring $R$ is called a {\it right Goldie} ring if it satisfies the following two conditions: (i) $R$ has the ascending chain condition on right annihilator ideals and, (ii) u.dim$(R_R)$ is finite. \\ \indent Recall that a ring $R$ is a {\it right V-ring} if all simple right $R$-modules are injective. A ring $R$ is called {\it fully right idempotent} if $I = I^2$ for every right ideal $I$. We recall that a right V-ring is fully right idempotent (see \cite [Corollary 2.2] {7}) and a prime fully right idempotent ring is right non-singular (see \cite[Lemma 4.3]{2}). So a prime right V-ring is right non-singular. Recall that a module $M$ is called {\it $\Sigma$-injective} if every direct sum of copies of $M$ is injective. A ring $R$ is called a {\it right $\Sigma$-V-ring} if each simple right module is $\Sigma$-injective. \\ \indent In this paper, for a ring $R$, $Q = Q _{max} (R)$ stands for the maximal right quotient ring of $R$. It is well known that if $R$ is right non-singular, then the injective hull of $R_R$, $E(R_R)$, is a ring and is equal to the maximal right quotient ring of $R$, \cite[Corollary 2.31]{Gooderlnonsingular}.\\ \indent A module $M$ is called {\it Hopfian} if $M$ is not isomorphic to any of its proper factor modules (equivalently, every onto endomorphism of $M$ is 1-1). {\it Anti-Hopfian} modules were introduced by Hirano and Mogami \cite{Hirano}. Such modules are isomorphic to all their non-zero factor modules. 
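A standard example is the Pr\"ufer group $\Bbb{Z}_{{p}^{\infty}}$: its proper non-zero submodules are the finite cyclic groups $\Bbb{Z}_{p^{k}}$, and $\Bbb{Z}_{{p}^{\infty}}/\Bbb{Z}_{p^{k}}\cong \Bbb{Z}_{{p}^{\infty}}$ for every $k\geq 0$ (multiplication by $p^{k}$ induces the isomorphism), so every non-zero factor module is isomorphic to $\Bbb{Z}_{{p}^{\infty}}$ itself.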
A module $M$ is called {\it uniserial} if the lattice of its submodules is linearly ordered. Anti-Hopfian modules are uniserial and artinian. \\ \indent Recall that a module $M$ is called {\it coHopfian} if it is not isomorphic to a proper submodule (equivalently, every 1-1 endomorphism of $M$ is onto). Varadarjan \cite{varadarjan} dualized the concept of anti-Hopfian module and called it an anti-coHopfian module. With a slight modification, we will call a non-zero module {\it anti-coHopfian} if it is isomorphic to all its non-zero submodules. A non-zero module $M$ is called {\it uniform} if the intersection of any two non-zero submodules is non-zero. We will see that an anti-coHopfian module is noetherian and uniform.\\ \indent An $R$-module $M$ has the cancellation property if for all $R$-modules $N$ and $T$, $M \oplus N \cong M\oplus T $ implies $N\cong T$. Every module with semilocal endomorphism ring has the cancellation property \cite{crash}. Since the endomorphism ring of a simple module is a division ring, every simple module has the cancellation property. \\ \indent Throughout this paper, $R$ denotes an arbitrary ring with identity and all modules are assumed to be unitary right modules, unless otherwise stated. If $N$ is a submodule (resp. proper submodule) of $M$ we write $N\leq M$ (resp. $N<M$). Also, for a module $M$, $\oplus _{i=1}^{\infty }M$ stands for a countably infinite direct sum of copies of $M$. If $N$ is a submodule of $M$ and $k > 1$, then $\oplus _{i = k}^{\infty} N = \oplus _{i = 1}^{\infty} N_{i}$ is the submodule of $\oplus _{i = 1}^{\infty} M$ with $N_1 = N_2 = ... = N_{k - 1} = 0$ and $N_i = N$ for $i \geq k$. \section{\hspace{-6mm}. Basic and Preliminary Results.} As defined in the introduction, couniserial dimension is an ordinal valued invariant. The reader may refer to \cite {stoll} regarding ordinal numbers. We begin this section with a lemma and a remark on the definition of couniserial dimension. \begin{lemma} \label{anti-coHopfian} An anti-coHopfian module is uniform and noetherian. 
\end{lemma} \begin{proof} Since $M$ is isomorphic to each of its non-zero cyclic submodules, $M$ is cyclic; hence every non-zero submodule of $M$, being isomorphic to $M$, is cyclic, and so $M$ is noetherian. Thus $M$ has a uniform submodule, say $U$. Since $U \cong M$, $M$ is uniform.$~\square$ \end{proof} \begin {remark} \label{1.3} {\rm We make the convention that a statement ``c.u.dim$(M) = \alpha$'' will mean that the couniserial dimension of $M$ exists and equals $\alpha$. By the definition of couniserial dimension, if $M$ has couniserial dimension and $N$ is a submodule of $M$, then $N$ has couniserial dimension and c.u.dim$(N) \leq $ c.u.dim$(M)$. Moreover, if $M$ is not uniform and c.u.dim$(M) = $ c.u.dim$(N)$, where $N$ is a submodule of $M$, then $M \cong N$. On the other hand, since every set of ordinal numbers has a supremum, it follows immediately from the definition that $M$ has couniserial dimension if and only if c.u.dim$(N)$ is defined for all submodules $N$ of $M$ with $N \ncong M$. In the latter case, if $\alpha = $ sup$\lbrace$ c.u.dim$(N) ~ \vert \ N \leq M, N \ncong M \rbrace $, then c.u.dim$(M) \leq \alpha + 1$. } \end{remark} The next proposition provides a working definition of couniserial dimension. \begin{proposition}\label{2.2} An $R$-module $M$ has couniserial dimension if and only if for every descending chain of submodules $ M_1 \geq M_2 \geq ... $, there exists $ n \geq 1$ such that $M_n$ is uniform or $ M_n \cong M_k$ for all $k \geq n $. \end{proposition} \begin{proof} {\rm ($\Rightarrow$) Let $ M_1 \geq M_2 \geq ... $ be a descending chain of submodules of $M$. Put $\gamma =$ inf $\lbrace$c.u.dim$(M_n) ~ \vert ~ n\geq 1\rbrace $. Then $\gamma = $ c.u.dim$(M_n)$ for some $n \geq 1$. If $ M_n$ is not uniform, then $ M_n \cong M_k$ for all $k \geq n $, because $\gamma $ is the infimum. 
\\ ($ \Leftarrow $) If $M$ does not have couniserial dimension, then $M$ is not uniform and so, by the above remark, there exists a submodule $M_1$ of $M$ such that $M_1 \ncong M$ and $M_1$ does not have couniserial dimension. So there exists a submodule $M_2$ of $M_1$ such that $M_2 \ncong M_1$ and $M_2$ does not have couniserial dimension. Continuing in this manner, we obtain a descending chain of submodules $ M_1 \geq M_2 \geq ... $ such that, for every $i \geq 1$, $M_i$ does not have couniserial dimension and $M_i \ncong M_{i+1}$, a contradiction. This completes the proof. $~\square$ } \end{proof} As a consequence, we have the following corollary. \begin{corollary}\label{2.3} Every artinian module has couniserial dimension. \end{corollary} \begin{lemma} \label{less} If $M$ is an $R$-module and {\rm c.u.dim}$(M) = \alpha$, then for any $ 0 \leq \beta \leq \alpha$, there exists a submodule $N$ of $M$ such that {\rm c.u.dim}$(N) = \beta$. \end{lemma} \begin{proof} {\rm The proof is by transfinite induction on c.u.dim$(M) = \alpha$. The case $\alpha = 1$ is clear. Let $\alpha > 1$ and $0 \leq \beta < \alpha$ (for $\beta = \alpha$ one may take $N = M$); then, using Remark \ref{1.3}, there exists a submodule $K$ of $M$ such that $K \ncong M$ and $\beta \leq$ c.u.dim$(K)$. Now since $\beta \leq$ c.u.dim$(K) < \alpha$, by the induction hypothesis, there exists a submodule $N$ of $K$ such that c.u.dim$(N) = \beta$. $~\square$ } \end{proof} As a consequence we have the following. \begin{lemma} \label{uniform submodule} Every module with couniserial dimension has a uniform submodule. \end{lemma} In the next lemma we observe that every module of finite couniserial dimension has finite uniform dimension. \begin{lemma} \label{non} Let $M$ be an $R$-module of finite couniserial dimension. Then $M$ has finite uniform dimension and {\rm u.dim}$(M) \leq$ {\rm c.u.dim}$(M)$. \end{lemma} \begin{proof} The proof is by induction on c.u.dim$(M) = n$. The case $n = 1$ is clear. 
Let $n > 1 $ and let $N$ be a submodule of $M$ such that c.u.dim$(N) = n - 1 $. By the inductive hypothesis, $N$ has finite uniform dimension; put $m = $ u.dim$(N)$. If $N $ is not essential in $M$, then there exists a uniform submodule $U$ of $M$ such that $N \cap U = 0 $. Thus $N \oplus U$ is a submodule of $M$ of uniform dimension $m + 1$. Then $ (N \oplus U) \ncong N$ and so $ n - 1 < $ c.u.dim$(N \oplus U) \leq n$. Thus $(N \oplus U) \cong M$, by Remark \ref{1.3}. This proves the lemma. $~\square$ \end{proof} \begin{example} {\rm There exist modules of infinite couniserial dimension but of finite uniform dimension. Take $M = \Bbb{Z}_{{p}^{\infty}} \oplus \Bbb{Z}_{{p}^{\infty}}$. Then $M$ is an artinian $\Bbb{Z}$-module of infinite couniserial dimension but of finite uniform dimension $2$.} \end{example} In the following we consider equality in the above lemma in a special case. \begin{lemma} \label{non1} Let $M$ be an injective non-uniform $R$-module of finite couniserial dimension. Then {\rm c.u.dim}$(M) = $ {\rm u.dim}$(M)$ if and only if $M$ is a finitely generated semisimple module. \end{lemma} \begin{proof}{ $(\Leftarrow) $ is clear.\\ $(\Rightarrow)$. Let c.u.dim$(M) = $ u.dim$(M) = m > 1$. Then $M = E_1 \oplus ... \oplus E_m$, where the $E_i$ are uniform injective modules. If $E_1$ is not simple, then there exists a non-zero non-injective submodule $K$ of $E_1$. Thus $K \oplus E_2 \oplus ... \oplus E_m$ is not isomorphic to $M$. But clearly c.u.dim$(K \oplus E_2 \oplus ... \oplus E_m) \geq m$, a contradiction. This completes the proof. $~\square$} \end{proof} Note that the injectivity condition is necessary in the above lemma. \begin{example} {\rm One can easily see that for $M = \Bbb{Z} \oplus \Bbb{Z}$, c.u.dim$(M) = $ u.dim$(M)$ $ = 2$ but $M$ is not semisimple. 
Also, the next lemma shows that there exists a module of finite uniform dimension without couniserial dimension.} \end{example} The following lemma shows that the direct sum of two uniform modules may fail to have couniserial dimension. \begin{lemma} \label{example} Let $D$ be a domain and $S $ a simple $D$-module. If $S \oplus D$ as a $D$-module has couniserial dimension, then $D$ is a principal right ideal domain. \end{lemma} \begin{proof} Suppose, on the contrary, that $I$ is a non-cyclic right ideal of $D$. Choose a non-zero element $x \in I$. Set $J_1 = xD$, which is isomorphic to $D$. Thus there exists a right ideal $J_2$ of $D$ such that $J_2 \cong I$ and $J_2 \leq J_1$. Now let $J_3$ be a non-zero cyclic right ideal contained in $J_2$; continuing in this manner we obtain a descending chain $J_1 \geq J_2 \geq ...$ of right ideals of $D$ such that for each odd integer $i$, $J_i$ is cyclic and for each even integer $i$, $J_i$ is not cyclic. Now consider the descending chain $S \oplus J_1 \geq S \oplus J_2 \geq ...$ of submodules of $S \oplus D$. Since $S$ has the cancellation property and, for each $i$, $S \oplus J_i$ is not uniform, Proposition \ref{2.2} shows that, for some $n$, $J_n \cong J_{n + 1}$, a contradiction, since one of $J_n$, $J_{n+1}$ is cyclic and the other is not. Thus $D$ is a principal right ideal domain. \end{proof} \begin{remark} {\rm (1) The simple module $S$ in the statement of Lemma \ref{example} can be replaced by any cancellable module. Indeed, it follows from Theorem \ref{2.5}, proved later, that if the maximal right quotient ring $Q$ of a domain $D$ has couniserial dimension as a $D$-module, then $Q_D$ has the cancellation property, and so if $Q \oplus D$ as a $D$-module has couniserial dimension, $D$ must be a principal right ideal domain. \\ (2) Also, since a Dedekind domain has the cancellation property, a similar proof shows that if $D$ is a Dedekind domain which is not a principal ideal domain, then $D \oplus D$ does not have couniserial dimension. 
This example shows that even the direct sum of a uniform module with itself may fail to have couniserial dimension. } \end{remark} The definition of the sum of two ordinal numbers can be given inductively. If $ \alpha $ and $ \beta $ are two ordinal numbers, then $ \alpha + 0 = \alpha $, $ \alpha + (\beta + 1) = (\alpha + \beta) + 1 $, and if $ \gamma $ is a limit ordinal, then $ \alpha+\gamma $ is the limit of $ \alpha + \beta $ for all $ \beta < \gamma $ (see \cite{stoll}). \begin{lemma} \label{ord}\rm (See \cite [Theorem 7.10]{stoll}). For ordinal numbers $\alpha $, $\beta $ and $\gamma$, we have the following:\\ {\rm (1)} If $\alpha < \beta $, then $\gamma + \alpha < \gamma + \beta $. \\ {\rm (2)} If $\alpha < \beta $, then $ \alpha + \gamma \leq \beta + \gamma$. \end{lemma} We call an $R$-module $M$ {\it fully coHopfian} if every submodule of $M$ is coHopfian. Note that artinian modules are fully coHopfian. If $I$ is the set of prime numbers, then $\oplus _{p \in I} {\Bbb {Z} _p}$ is an example of a fully coHopfian $\Bbb{Z}$-module that is not artinian. \begin{proposition}{\label{b}} Let $M = M_1 \oplus M_2$ be a fully coHopfian $R$-module with couniserial dimension. Then c.u.dim$(M) \geq $ c.u.dim$(M_1) + $ c.u.dim$(M_2)$. \end{proposition} \begin{proof} {\rm We may assume $M_1, M_2 \neq 0$. We use transfinite induction on c.u.dim$(M_2) = \alpha$. Since $M$ is coHopfian and $M_2 \neq 0$, we have $M_1 \ncong M$, and hence c.u.dim$(M)$ $\geq $ c.u.dim$(M_{1}) + 1$. So the case $\alpha = 1$ is clear. Thus, suppose $\alpha > 1$ and that for every right $R$-module $L$ of couniserial dimension less than $\alpha$, c.u.dim$(M_1 \oplus L ) \geq $ c.u.dim$(M_{1}) + $ c.u.dim$(L)$. If $\alpha$ is a successor ordinal, then there exists an ordinal number $\gamma$ such that c.u.dim$(M_2) = \gamma + 1$. Using Lemma \ref{less}, there exists a non-zero submodule $K$ of $M_2$ such that c.u.dim$(K) = \gamma < \alpha $. 
So by the induction hypothesis $${\rm c.u.dim}(M_1) +\gamma = {\rm c.u.dim}(M_1) + {\rm c.u.dim}(K) \leq {\rm c.u.dim}(M_1 \oplus K).$$ Using our assumption and Remark \ref{1.3}, we have c.u.dim$(M_1 \oplus K) < $ c.u.dim$(M)$ and hence c.u.dim$(M_{1}) + $ c.u.dim$(M_{2}) \leq$ c.u.dim$(M)$.\\ \indent If $\alpha$ is a limit ordinal and $ 1 \leq \beta < \alpha$, then by Remark \ref{1.3}, there exists a non-zero submodule $K$ of $M_2$ such that $\beta \leq $ c.u.dim$(K)$. Then by the induction hypothesis ${\rm c.u.dim}(M_1) + \beta \leq {\rm c.u.dim}(M_1) + {\rm c.u.dim}(K) \leq {\rm c.u.dim}(M_1 \oplus K) < {\rm c.u.dim}(M).$ Therefore c.u.dim$(M_1) + \alpha = $ sup$\lbrace$ c.u.dim$(M_{1}) + \beta \mid \beta < \alpha \rbrace \leq $ c.u.dim$(M)$. $~\square$ } \end{proof} The fully coHopfian condition in Proposition \ref{b} is necessary. \begin{example} {\rm For the $\Bbb{Z}$-modules $M = \oplus_{i = 1}^{\infty} \Bbb{Z}_{p}$ and $ L = \Bbb{Z}_{p}$, we have $M \cong M \oplus L$. One can see that c.u.dim$(M) = \omega$, and so c.u.dim$(M) \ngeq $ c.u.dim$(M) + $ c.u.dim$(L)$. Also, equality need not hold in Proposition \ref{b} in general. Consider the $\Bbb{Z}$-module $ M = \Bbb{Z}_2 \oplus \Bbb{Z}_4$. Then $M$ is fully coHopfian and $3 = $ c.u.dim$(M) > $ c.u.dim$(\Bbb{Z} _2) + $ c.u.dim$(\Bbb{Z} _4 )$.} \end{example} Here we prove another result on fully coHopfian modules: \begin{proposition} \label{simple2} Let $M$ be an $R$-module and $N$ a cancellable module (for example, a simple module) such that $N \oplus M$ has couniserial dimension. If $M$ is fully coHopfian, then $M$ is artinian. \end{proposition} \begin{proof} Let $M$ be fully coHopfian and let $M_1 \geq M_2 \geq ... $ be a descending chain of submodules of $M$. Then $N \oplus M_1 \geq N \oplus M_2 \geq ... $ is a descending chain of submodules of $N \oplus M$, and so, by Proposition \ref{2.2} and cancellation of $N$, for some $n$, $M_i \cong M_n$ for each $i \geq n$. Now since $M$ is fully coHopfian, we have $M_i = M_n$ for each $i \geq n$. 
$~\square$ \end{proof} Let us recall the definition of uniserial dimension \cite{j.algebra}. \begin {definition} \label{uniserial dimension} {\rm In order to define uniserial dimension for modules over a ring $R$, we first define, by transfinite induction, classes $ \zeta_\alpha $ of $R$-modules for all ordinals $ \alpha \geq 1 $. To start with, let $ \zeta_1 $ be the class of non-zero uniserial modules. Next, consider an ordinal $ \alpha > 1 $; if $ \zeta_\beta $ has been defined for all ordinals $ \beta < \alpha $, let $ \zeta_\alpha $ be the class of those $R$-modules $M$ such that, for every submodule $N < M$, where $M/N \ncong M$, we have $M/N \in \bigcup_{\beta < \alpha} \zeta_\beta$. If an $R$-module $M$ belongs to some $\zeta_\alpha$, then the least such $\alpha$ is the {\it uniserial dimension of} $M$, denoted u.s.dim$(M)$. For $M = 0$, we define u.s.dim$(M) = 0$. If $M$ is non-zero and $M$ does not belong to any $\zeta_\alpha$, then we say that ``u.s.dim$(M)$ is not defined,'' or that ``$M$ has no uniserial dimension.''} \end {definition} \begin{remark}\label{semisimple eq} {\rm Note that, in general, there is no relation between the existence of the uniserial dimension and the existence of the couniserial dimension of a module. For example, the polynomial ring $R = k[x_1, x_2, ...]$ in infinitely many commuting indeterminates over a field $k$ satisfies c.u.dim$(R_R) = 1$, but $R_R$ does not have uniserial dimension (see \cite[Remark 2.3]{j.a.its}). It follows from the definition that a semisimple module $M$ has uniserial dimension if and only if $M$ has couniserial dimension, in which case u.s.dim$(M) = \alpha$ if and only if c.u.dim$(M) = \alpha$. Furthermore, a semisimple module $M$ has couniserial dimension if and only if $M$ is a finite direct sum of homogeneous semisimple modules (see \cite[Proposition 1.18]{j.algebra}).} \end{remark} Using the above remark we have the following interesting results. 
\begin{corollary} \label{finite simple} All semisimple right modules over a ring $R$ have couniserial dimension if and only if there exist only finitely many non-isomorphic simple right $R$-modules. \end{corollary} \begin{lemma} \label{semisimple1} Suppose that $M$ is simple. Then $\oplus _{i = 1}^{\infty} E(M)$ has couniserial dimension if and only if $M$ is injective. \end{lemma} \begin{proof} ($\Leftarrow$) This is clear by Remark \ref{semisimple eq}. \\ ($\Rightarrow$) Consider the descending chain $$ M \oplus ( \oplus _{ i = 2}^{\infty} E(M)) \geq M^{(2)} \oplus ( \oplus _{ i = 3}^{\infty} E(M)) \geq ... $$ of submodules of $\oplus _{i = 1}^{\infty} E(M)$, where $M^{(n)} = \oplus_{i = 1}^{\infty} M_i $ with $M_1 = ... = M_n = M$ and $M _i = 0$ for each $i > n$. Then, by Proposition \ref{2.2}, there exists $n\geq 1$ such that $$M^{(n)} \oplus ( \oplus _{i = n + 1}^{\infty}E(M))\cong M^{(n + 1)}\oplus(\oplus _{i = n + 2}^{\infty} E(M))$$ and so $\oplus _{i = n + 1}^{\infty} E(M) \cong M \oplus (\oplus _{i = n + 2}^{\infty} E(M))$, because $M$ is cancellable. Since $M$ is cyclic, there exists a right module $L$ such that $E(M) ^{k} \cong M \oplus L $ for some $k$; being a direct summand of an injective module, $M$ is injective. $~\square$ \end{proof} \section{\hspace{-6mm}. Main Results.} In this section we use our basic results to prove the main results. \begin{proposition} \label{simple1} Let $M_R$ be an injective module and $N_R$ a cancellable module (for example, a simple module) over a commutative ring $R$ such that $N \oplus M$ has couniserial dimension. Then $M$ is $\Sigma$-injective. \end{proposition} \begin{proof} According to \cite [Theorem 6.17]{cyclic}, it is enough to show that $R$ satisfies the ascending chain condition on ideals of $R$ that are annihilators of subsets of $M$. Let $I_1 \leq I_2 \leq ... $ be a chain of such annihilator ideals. 
Then for each $i$, $M_i = $ ann$ _{M}(I_i) $ is a submodule of $M$ and so we have the descending chain $N \oplus M_1 \geq N \oplus M_2 \geq ... $ of submodules of $N \oplus M$. Then there exists a positive integer $n$ such that $M_n \cong M_i$ for all $i \geq n$. Thus ann$(M_i) = $ ann$(M_n) $. Therefore, for each $i \geq n$, $ I_i = I_n$. $~\square$ \end{proof} \begin{remark} {\rm One can see that the above result provides another proof of the fact that commutative V-rings (i.e., von Neumann regular rings) are $\Sigma$-V-rings. For an example of a right V-ring that is not a $\Sigma$-V-ring, the reader may refer to \cite[Example, page 60]{cyclic}.} \end{remark} The next result shows that if a module has countable couniserial dimension, then it can be decomposed into indecomposable modules. \begin{theorem}\label{decomposation} For an $R$-module $M$, if {\rm c.u.dim}$(M) \leq \omega $, then $M$ has an indecomposable decomposition. \end{theorem} \begin{proof} The proof is by induction on c.u.dim$(M) = \alpha$. The case $\alpha = 1$ is clear. If $\alpha > 1$ and $M$ is not indecomposable, then $M = N_1 \oplus N_2$, where $N_1$ and $N_2$ are non-zero submodules of $M$. If c.u.dim$(N_i) < $ c.u.dim$(M)$, $ i = 1, 2$, then by the induction hypothesis $M$ has an indecomposable decomposition. If not, for definiteness let c.u.dim$(N_1) = $ c.u.dim$(M)$. Then $M \cong N_1$, by Remark \ref{1.3}. Thus $M$ contains an infinite direct sum of uniform modules, say $\oplus_{i = 1}^{\infty} K_i $. Clearly, c.u.dim$(\oplus_{i = 1}^{\infty} K_i ) \geq \omega$. Thus we have $M \cong \oplus_{i = 1}^{\infty} K_i $. $~\square$ \end{proof} \begin{remark} {\rm We do not know whether the above theorem holds for a module of arbitrary couniserial dimension. For countably infinite couniserial dimension one can show, under some conditions, that the module can be represented as a direct sum of uniform modules. 
} \end{remark} Recall that a module $M$ is called {\it Dedekind finite} if $M$ is not isomorphic to any proper direct summand of itself. Clearly, every direct summand of a Dedekind finite module is Dedekind finite. Obviously, a Hopfian module is Dedekind finite. Since all finitely generated modules over a commutative ring are Hopfian (see \cite{good}), they provide examples of Dedekind finite modules. \begin{theorem} \label{dedekind} If $M$ is a Dedekind finite module with couniserial dimension, then $M$ has a finite indecomposable decomposition. \end{theorem} \begin{proof} The proof is by induction on c.u.dim$(M) = \alpha$. The case $\alpha = 1$ is clear. Let $\alpha > 1$ and assume that every Dedekind finite module with couniserial dimension less than $\alpha$ decomposes into finitely many indecomposable modules. If $M$ is not indecomposable, then $M = M_1 \oplus M_2$. Since $M$ is Dedekind finite, $M_i \ncong M$; using Remark \ref{1.3}, c.u.dim$(M_i) < $ c.u.dim$(M)$ and so, by the induction hypothesis, each $M_i$ has a finite indecomposable decomposition. This completes the proof. $~\square$ \end{proof} A ring $R$ is called a {\it von Neumann regular ring} if for each $x \in R$ there exists $y \in R$ such that $xyx = x$; equivalently, every principal right ideal is a direct summand. $R$ is a {\it unit regular ring} if for each $x \in R$ there exists a unit element $u \in R$ such that $x = xux.$ As a consequence of the above theorem we have the following corollary. \begin{corollary} Every Dedekind finite von Neumann regular ring (in particular, every unit regular ring) with couniserial dimension is semisimple artinian. \end{corollary} A ring $R$ is called a PWD {\it (piecewise domain)} if it possesses a complete set $ \lbrace e_{i} \vert 0 \leq i \leq n \rbrace$ of orthogonal idempotents such that $xy = 0$ implies $x = 0 $ or $y = 0$ whenever $x \in e_i Re_k$ and $y \in e_k Re_j $. Note that the definition is left-right symmetric and all $e_i R e_i$ are domains, see \cite{lance small}.
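A standard illustration of a PWD (not taken from \cite{lance small} verbatim, but the motivating example for the definition): for any domain $D$, the upper triangular matrix ring $$ T_2(D) = \left( \begin{array}{cc} D & D \\ 0 & D \end{array}\right) $$ is a piecewise domain with respect to the idempotents $e_1 = E_{11}$ and $e_2 = E_{22}$. Indeed, each $e_i T_2(D) e_j$ is either zero or a copy of $D$, so for $x \in e_i T_2(D) e_k$ and $y \in e_k T_2(D) e_j$ the implication $xy = 0 \Rightarrow x = 0$ or $y = 0$ reduces to the absence of zero divisors in $D$.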
An element $x$ of $R$ is called regular if its right and left annihilators are zero. \begin{proposition} \label{semiprime right Goldie} Let $R$ be a semiprime right Goldie ring with couniserial dimension. If u.dim$(R_R) = n$, then $R$ has a decomposition into $n$ uniform modules. In particular, it is a piecewise domain. \end{proposition} \begin{proof} We can assume that $n > 1$. Since u.dim$(R_R) = n$, there exists an essential right ideal $I_1 = U_1 \oplus ... \oplus U_n$ of $R$, where each $U_i$ is uniform. Then, by \cite[Proposition 6.13]{goodearl}, $I_1$ contains a regular element $x$ and thus $J_1 = xR$ is a right ideal of $R$ which is $R$-isomorphic to $ R$. So u.dim$(J_1) = n$ and $J_1$ contains an essential right ideal $I_2$ of $R$ which is a direct sum of $n$ uniform right ideals. Continuing in this manner we obtain a descending chain $ I_1 \geq J_1 \geq I_2 \geq ...$ of right ideals of $R$ such that each $I_i$ is a direct sum of $n$ uniform right ideals and each $J_i$ is isomorphic to $R$. Since $R$ has couniserial dimension, $ I_m \cong R$ for some $m$, and hence $R$ is a direct sum of $n$ uniform right ideals. The last statement follows from \cite[Pages 2-3]{lance small}. This completes the proof. $~\square$ \end{proof} \begin{remark} \label{example of prime right Goldie} {\rm There exists a simple noetherian ring of uniform dimension $2$ which has no non-trivial idempotents (cf. \cite[Example 7.16, page 441]{robson}). So, by the above proposition, this provides an example of a prime right Goldie ring without couniserial dimension. } \end{remark} \begin{lemma}\label{Q-map} Let $R$ be a right non-singular ring with maximal right quotient ring $Q$ and let $M$ be a $Q$-module. If $M$ is a non-singular $R$-module such that $M_R$ has couniserial dimension, then $M_Q$ has couniserial dimension. \end{lemma} \begin{proof} {\rm Let $M \geq M_1 \geq M_2 \geq ...$ be a descending chain of $Q$-submodules of $M$. It is then a descending chain of $R$-submodules of $M$ and thus, for some $n$, $M_n$ is a uniform $R$-module or $M_n \cong M_i$ as $R$-modules for all $i \geq n$.
If $M_n $ is a uniform $R$-module, then it is also a uniform $Q$-module. So let $M_n \cong M_i$ as $R$-modules and let $\varphi_{i}$ be this isomorphism. If $q \in Q$ and $t \in M_n$, there exists an essential right ideal $E$ of $R$ such that $qE \leq R$. So $\varphi_{i} (tqE) = \varphi_{i} (tq) E $ and also $ \varphi_{i} (tqE) = \varphi_{i} (t) qE$. Then $\varphi_{i} (tq) E = \varphi_{i} (t) qE$. Since $M$ is non-singular, $\varphi_{i} (tq) = \varphi_{i} (t)q$. Thus $\varphi_{i}$ is a $Q$-isomorphism. This completes the proof. $~\square$} \end{proof} A ring $R$ is a semiprime (prime) right Goldie ring if and only if its maximal right quotient ring is a semisimple (simple) artinian ring, \cite[Theorems 3.35 and 3.36]{Gooderlnonsingular}. Semiprime right Goldie rings are non-singular. A right non-singular ring $R$ is a semiprime right Goldie ring if and only if u.dim$(R_R)$ is finite, \cite[Theorem 3.17]{Gooderlnonsingular}. Recall that a {\it right full linear ring} is the ring of all linear transformations (written on the left) of a right vector space over a division ring. If the dimension of the vector space is finite, a right full linear ring is exactly a simple artinian ring. \begin{thm}\label{2.5} Let $R$ be a right non-singular ring with maximal right quotient ring $Q$. If $Q$ as an $R$-module has couniserial dimension, then $R$ is a semiprime right Goldie ring which is a finite product of prime Goldie rings, each of which is a piecewise domain. \end{thm} \begin{proof} It is enough to show that $R$ has finite uniform dimension. Since $Q_R$ has couniserial dimension, $R_R$ has couniserial dimension and so every right ideal of $R$ has couniserial dimension. Thus Lemma \ref{uniform submodule} implies that every right ideal contains a uniform submodule. Now by \cite[Theorem 3.29]{goodearl} the maximal right quotient ring of $R$ is a product of right full linear rings, say $Q = \prod _{ i \in I} Q_{i}$, where the $Q_{i}$ are right full linear rings.
Note that since $R$ is right non-singular, $Q_R$ is also non-singular and so, using Lemma \ref{Q-map}, $Q_Q$ has couniserial dimension. First we claim that each $Q_{i}$ is the endomorphism ring of a finite dimensional vector space. Assume the contrary. Then $Q_{j}$ is the endomorphism ring of an infinite dimensional vector space for some $j$. Thus $Q_{j} \cong Q_{j} \times Q_{j} $ and so, if $\iota: Q_{j} \longrightarrow Q$ is the canonical embedding, then $\iota ( Q_{j})$ is a right ideal of $Q$ and there exists a $Q$-isomorphism $Q \cong \iota ( Q_{j}) \times Q$. Then there exist right ideals $T_1$ and $T$ of $Q$ such that $Q = T_1 \oplus T$, $T_1$ and $Q$ are isomorphic as $Q$-modules and $T \cong \iota ( Q_{j})$ as $Q$-modules. Because $Q_{j}$ is the endomorphism ring of an infinite dimensional vector space, it has a right ideal which is not principal, for example its socle. So $\iota ( Q_{j})$, and thus $T$, contains a non-cyclic right ideal of $Q$, and since $T \cong Q/T_1$, there exists a non-cyclic right ideal $K_1$ of $Q$ such that $ Q \geq K_1 \geq T_1 $. Now $T_1 $ is isomorphic to $Q$, so we obtain a descending chain $Q > K_1 > T_1 > K_2 > T_2 > ... $ of right ideals of $Q$ such that the $T_i$ are cyclic but the $K_i$ are not cyclic. This is a contradiction. So all $Q_i$ are endomorphism rings of finite dimensional vector spaces. Now, to show $R$ is a semiprime right Goldie ring, it is enough to show that the index set $I$ is finite. If $I$ is infinite, there exist infinite subsets $I_1 $ and $I_2$ of $I$ such that $I = I_1 \cup I_2$ and $I_1\cap I_2 = \emptyset$. Let $T_1 = \prod _{i \in I} N_i$, where $N_i = Q_i$ for all $i \in I_1$ and $N_i = 0$ for all $i \in I_2$. Similarly let $ T = \prod _ {i \in I} M_i$, where $M_i = Q_i$ for all $i \in I_2$ and $M_i = 0$ for all $i \in I_1$. Then $T_1$ and $T$ are right ideals of $Q$ and $Q = T_1 \oplus T$. $T$ contains a right ideal of $Q$ which is not cyclic, for example $\oplus_{i \in I} M_i$.
Since $T \cong Q/ T_1$, there exists a non-cyclic right ideal $K_1$ of $Q$ such that $Q \geq K_1 \geq T_1$. Note that $T_1$ is a cyclic $Q$-module and, because $I_1$ is infinite, the structure of $T_1$ is similar to that of $Q$. We can continue in this manner and find a descending chain of right ideals of $Q$ such that the $K_i$ are non-cyclic $Q$-modules and the $T_i$ are cyclic $Q$-modules, which is a contradiction. Therefore $I$ is finite and $R$ must have finite uniform dimension. This shows $R$ is a semiprime right Goldie ring and so Proposition \ref{semiprime right Goldie} and \cite[Corollary 3]{lance small} imply that it is a direct sum of prime right Goldie rings. $~\square$ \end{proof} The reader may ask what happens if $R_R$ has couniserial dimension instead of $Q_R$. Indeed we may point out that, unlike a semiprime ring with right Krull dimension, a semiprime ring with couniserial dimension need not be a right Goldie ring. See Dubrovin \cite{Uniserial with nil}, which contains an example of a primitive uniserial ring with non-zero nilpotent elements. \\ Next we show that the converse of the above theorem is not true in general. In fact we show that there exists a prime right Goldie ring $R$ such that c.u.dim$(R_R) = 2$ and $Q _R$ does not have couniserial dimension. We need the following lemma to give the example. \begin{lemma}\label{morita} For an ordinal number $\alpha$, being of couniserial dimension $\alpha$ is a Morita invariant property for modules. \end{lemma} \begin{proof} This is clear by the definition of couniserial dimension and \cite[Proposition 21.7]{Anderson}. $~\square$ \end{proof} \begin{example} {\rm Here we give an example of a prime right Goldie ring $R$ with maximal right quotient ring $Q$ such that $Q_R$ does not have couniserial dimension. Take $R = M_2 (\Bbb{Z})$, the $2 \times 2$ matrix ring over $\Bbb{Z}$. Then $R$ is a prime right Goldie ring with maximal right quotient ring $Q = M_2(\Bbb{Q})$.
Note that under the standard Morita equivalence between the ring $\Bbb{Z}$ and $R= M_2(\Bbb{Z})$, see \cite[Theorem 17.20]{Lam}, $R$ corresponds to $\Bbb{Z} \oplus \Bbb{Z}$, and so, using the above lemma, $R$ has couniserial dimension $2$. If $\lbrace p_i \vert i \geq 1 \rbrace$ is the set of all prime numbers, then $\Bbb{Q}/ \Bbb{Z} = \sum _{i = 1}^{\infty} K_i /\Bbb{Z}$, where $K_i = \lbrace m/p_{i}^{n} \vert n \geq 0 $ and $m \in \Bbb{Z}\rbrace$. Take $Q_{n} = \sum _{i = n} ^{\infty} K_i$. Then $M_2(Q_1) \geq M_2({Q_2}) \geq ... $ is a descending chain of $R$-submodules of $Q$ which are not uniform $R$-modules. Assume that for some $n$, $M_2({Q_{n}}) \cong M_2({Q_{n + 1}})$ with an $R$-isomorphism $\phi$. Let $\phi (\left( \begin{array}{cc} 1& 0 \\ 0& 1 \end{array}\right)) = \left( \begin{array}{cc} m_1/t_1& m_2/t_2\\ m_3/t_3& m_4/t_4 \end{array}\right)$, where $m_i/t_i \in Q_{n + 1}$. Suppose that $j \geq 1$ and $\phi (\left( \begin{array}{cc} 1/p_{n}^{j}& 0 \\ 0& 1/p_{n}^{j} \end{array}\right) )= \left( \begin{array}{cc} m_{1,j}/t_{1,j} & m_{2,j}/t_{2,j}\\ m_{3,j}/t_{3,j}& m_{4,j}/t_{4,j} \end{array}\right)$, where $p_n$ divides none of the $t_{i,j}$ for $1 \leq i \leq 4$. Then, since $\phi$ is additive, we can easily see that $\left( \begin{array}{cc} m_{1,j} p_n^{j}/t_{1,j} & m_{2,j}p_n^{j}/t_{2,j}\\ m_{3,j}p_n^{j}/t_{3,j}& m_{4,j}p_n^{j}/t_{4,j} \end{array}\right) = \left( \begin{array}{cc} m_1/t_1& m_2/t_2\\ m_3/t_3& m_4/t_4 \end{array}\right)$, and this implies that $p_n^{j} \vert m_i$ for all $j \geq 1$ and $1 \leq i \leq 4$, and so $m_i = 0$ for all $i$, a contradiction. So $Q _R$ does not have couniserial dimension. } \end{example} \section{\hspace{-6mm}. Some Applications.} A right $R$-module $M$ which has a composition series is called a module of {\it finite length.} A right $R$-module $M$ is of finite length if and only if $M$ is both artinian and noetherian.
The length of a composition series of $M_R$ is said to be the length of $M_R$ and is denoted by length$(M)$. Clearly, by Corollary \ref{2.3}, a module of finite length has couniserial dimension. The next result shows a relation between the couniserial dimension of a finite length module $M$ and length$(M)$. \begin{proposition}\label{semi1} Let $M$ be a right $R$-module of finite length. Then the following statements hold: \\ {\rm (1)} If $N$ is a submodule of $M$, then {\rm c.u.dim}$(M/N) \leq $ {\rm c.u.dim}$(M)$.\\ {\rm (2)} {\rm c.u.dim}$(M) \leq $ {\rm length}$(M)$. \end{proposition} \begin{proof} {\rm (1) The proof is by induction on $n$, where length$(M) = n$. The case $n = 1$ is clear. Now, let $ n > 1 $ and assume that the assertion is true for all modules with length less than $n$. If $N$ is a non-zero submodule of $M$, then length$(M/N) < n$. Thus for every proper submodule $K/N$ of $M/N$, by induction, c.u.dim$(K/N) \leq $ c.u.dim$(K) < $ c.u.dim$(M)$. Now, Remark \ref{1.3} implies that c.u.dim$(M/N) \leq $ c.u.dim$(M)$. \\ (2) The proof is by induction on length$(M) = n$. The case $n = 1$ is clear. Now if $n > 1$ and $K$ is a proper submodule of $M$, then, by the induction hypothesis, c.u.dim$(K) \leq $ length$(K) < $ length$(M)$. Thus by Remark \ref{1.3}, c.u.dim$(M) \leq $ length$(M)$. $~\square$} \end{proof} Recall that an $R$-module $M$ is called {\it co-semisimple} if every simple $R$-module is $M$-injective, or equivalently, Rad$(M/N) = 0$ for every submodule $N\leq M$ (see \cite[Theorem 23.1]{wis}). The next proposition gives a condition as to when a module of finite length is semisimple. It may be of interest to state that for the finite length $\Bbb{Z}$-module $\Bbb{Z} _4$, $\oplus _{i = 1}^{\infty}\Bbb{Z} _4$ does not possess couniserial dimension. \begin{proposition}\label{Artinian} Let $M$ be a non-zero right $R$-module of finite length.
Then $M$ is a semisimple $R$-module if and only if for every submodule $N$ of $M$ the right $R$-module $\oplus _{i = 1}^{\infty }M/N$ has couniserial dimension. \end{proposition} \begin{proof} ($\Rightarrow$) cf. Remark \ref{semisimple eq}.\\ ($\Leftarrow$) By hypothesis, for every submodule $N$ of $M$ the right $R$-module $\oplus _{i = 1}^{\infty }M/N$ has couniserial dimension. Clearly, this also holds for any factor module of $M$. We prove the result by induction on length$(M) =n $. The case $n = 1$ is clear. Now assume that $n > 1$ and the result is true for all modules of length less than $n$. Let $K $ be a non-zero submodule of $M$. Since length$(M/K) < n$, by the induction hypothesis, $M/K$ is semisimple. Therefore, for every non-zero submodule $K$ of $M$, Rad$(M/K) = 0$. If Rad$(M) = 0$, then $M$ is co-semisimple. Let $S$ be a simple submodule of $M$. Consider the exact sequence $0 \longrightarrow S \longrightarrow M \longrightarrow M/S \longrightarrow 0$, which splits because $M$ is co-semisimple. Therefore, $M$ is semisimple. Next suppose that Rad$(M) \neq 0$. Let $S$ be a simple submodule of $M$. Since, by the above, Rad$(M/S) = 0$, we obtain Rad$(M) \leq S$. This implies Rad$(M) = S$ and so $M$ has only one simple submodule. Thus Rad$(M) =$ soc$(M) = S$ is a simple module. Suppose that $M$ is not semisimple. Let $N$ be a maximal submodule of $M$. Then for every submodule $K \leq N < M$, $\oplus _{i = 1}^{\infty }N/K $ is a submodule of $ \oplus _{i = 1}^{\infty }M/K$ and thus $\oplus _{i = 1}^{\infty }N/K$ has couniserial dimension. Since length$(N) < n$, we conclude that $N$ is semisimple. Thus $N = $ soc$(M) = $ Rad$(M)$ is a simple module and so $M$ is of length $2$. \\ \indent Now consider the descending chain $$ N \oplus (\oplus _{i = 2}^{\infty }M ) > N^{(2)} \oplus (\oplus _{i = 3}^{\infty }M ) > ... $$ of submodules of $\oplus _{i = 1}^{\infty }M $.
Using Proposition \ref{2.2}, there exists $k \geq 1$ such that $N^{(k)} \oplus (\oplus _{i = k + 1}^{\infty }M) \cong N^{(k + 1)} \oplus (\oplus _{i = k + 2}^{\infty }M) $. Since $ N^{(k + 1)}$ is finitely generated, there exist $m \geq 0$ and an $R$-module $T$ such that $ N^{(k)} \oplus M^{m} \cong N^{(k + 1)} \oplus T$. $N$ is simple and so it has the cancellation property; thus $M^{m} \cong N \oplus T$. This implies that Rad$(T)$ is semisimple of length $m$ while length$($soc$(T)) = m-1$, a contradiction, since Rad$(T) \leq $ soc$(T)$. $~\square$ \end{proof} Recall that a ring $R$ is called {\it right bounded} if every essential right ideal contains a two-sided ideal which is essential as a right ideal. A ring $R$ is called right {\it fully bounded} if every prime factor ring is right bounded. A right noetherian right fully bounded ring is commonly abbreviated as a right FBN ring. Clearly all commutative noetherian rings are examples of right FBN rings. Matrix rings over commutative noetherian rings form a large class of right FBN rings which are not commutative. In \cite[Theorem 2.11]{Hiranocom}, Hirano et al. showed that a right FBN ring $R$ is semisimple if and only if every right module of finite length is semisimple. As a consequence of the above proposition we have: \begin{corollary} A right FBN ring $R$ is semisimple if and only if for every finite length module $M$, the module $\oplus _{i = 1}^{\infty }M$ has couniserial dimension. \end{corollary} \begin{proposition} \label{anti is injective} Let $P$ be an anti-coHopfian projective right $R$-module. If $\oplus_{ i = 1}^{\infty}E(P) $ has couniserial dimension, then $P$ is injective. \end{proposition} \begin{proof} We first show that $P$ has the cancellation property. Let $M = P \oplus B \cong P \oplus B'$. Then there exist submodules $P'$ and $C $ of $M$ such that $M = P \oplus B = P' \oplus C$, $P' \cong P$ and $ C \cong B'$. Let $p_1$ be the projection map from $M = P \oplus B$ onto $P$.
Restricting $p_1$ to $C$, we obtain an exact sequence $ 0 \longrightarrow C \cap B \longrightarrow C \longrightarrow I \longrightarrow 0$, where $I$ is a submodule of $P$. Note that every submodule of $P$ is projective, because $P$ is anti-coHopfian. So $I$ is projective and thus $C \cong (C \cap B) \oplus I$. Similarly, by considering the projection $p_2$ from $M = P' \oplus C$ onto $P'$, we have $B \cong (C \cap B) \oplus J$ for some submodule $J$ of $P'$. Since $J \cong I \cong P$, we have $B\cong C$ and so $B\cong B'$. Hence $P$ has the cancellation property. Now consider the descending chain $$ P \oplus ( \oplus _{ i = 2}^{\infty} E(P)) \geq P^{(2)} \oplus ( \oplus _{ i = 3}^{\infty} E(P)) \geq ... $$ of submodules of $\oplus _{i = 1}^{\infty} E(P)$. Then, by Proposition \ref{2.2}, there exists $n\geq 1$ such that $$P^{(n)} \oplus ( \oplus _{i = n + 1}^{\infty}E(P))\cong P^{(n + 1)}\oplus(\oplus _{i = n + 2}^{\infty} E(P))$$ and so $\oplus _{i = n + 1}^{\infty} E(P) \cong P \oplus (\oplus _{i = n + 2}^{\infty} E(P))$, because $P$ is cancellable. Since $P$ is finitely generated, there exists a right module $L$ such that for some $k$, $E(P) ^{k} \cong P\oplus L $. This shows $P$ is injective. $~\square$ \end{proof} As a consequence of the above proposition we have the following corollary: \begin{corollary} \label{domain} Let $R$ be a principal right ideal domain with maximal right quotient ring $Q$ (which is a division ring). If the right $R$-module $\oplus_{i = 1}^{\infty} Q$ has couniserial dimension, then $R = Q$. \end{corollary} We need the following lemmas to prove the next theorem. Using Proposition \ref{2.2} we can see that: \begin{lemma} \label{factor} Let $I$ be a two-sided ideal of $R$ and $M$ be an $R/I$-module. If $M$ as an $R$-module has couniserial dimension, then $M$ as an $R/I$-module has couniserial dimension.
\end{lemma} \begin{lemma} \label{notherian uniform} If all finitely generated right modules have couniserial dimension, then every non-zero right module contains a noetherian uniform submodule. \end{lemma} \begin{proof} By Lemma \ref{anti-coHopfian} it is enough to show that every non-zero cyclic module contains an anti-coHopfian submodule. Let $M$ be a non-zero cyclic right module which does not contain an anti-coHopfian submodule and let $S$ be a simple module. Since $M$ is not anti-coHopfian, $M$ has a non-zero submodule $M_1 \ncong M$, and $M_ 1$ has a non-zero submodule $M_2$ such that $M_2 \ncong M_1$. Continuing in this manner we obtain a descending chain $ S \oplus M \geq S\oplus M_1 \geq S \oplus M_2 \geq ... $ of submodules of $S \oplus M$. Since $S\oplus M$ is finitely generated, by Proposition \ref{2.2}, $ S \oplus M_n \cong S \oplus M_{n + 1}$ for some $n$. Since $S$ is cancellable, this implies that $M_n \cong M_{n + 1}$, a contradiction. $~\square$ \end{proof} \begin{theorem} \label{final} For a ring $R$ the following are equivalent. \\ {\rm (1)} $R$ is a semisimple artinian ring.\\ {\rm (2)} All right $R$-modules have couniserial dimension.\\ {\rm (3)} All left $R$-modules have couniserial dimension.\\ {\rm (4)} All right $R$-modules have uniserial dimension.\\ {\rm (5)} All left $R$-modules have uniserial dimension. \end{theorem} \begin{proof} For the equivalence of (1), (4) and (5) refer to \cite[Theorem 2.6]{j.algebra}. \\ $(1) \Rightarrow (2)$. This is clear by Corollary \ref{finite simple}.\\ $(2) \Rightarrow (1)$. First we show that $R$ satisfies the ascending chain condition on two-sided ideals. Let $I_1 \leq I_2 \leq ... $ be a chain of ideals of $R$. Since the right module $\oplus _{i = 1} ^{\infty} R/I_i $ has couniserial dimension, there exists $n$ such that, for each $j \geq n$, $\oplus _{i = n} ^{\infty} R/I_i \cong \oplus _{i = j} ^{\infty} R/I_i $. Thus they have the same annihilators and so, for each $j \geq n$, $I_n = I_j$.
Suppose $R$ is not semisimple. By Lemma \ref{factor}, every module over a factor ring of $R$ also has couniserial dimension. Thus, by invoking the ascending chain condition on two-sided ideals, we may assume that $R$ is not semisimple artinian but every proper factor ring of $R$ is semisimple artinian. Using Lemma \ref{semisimple1}, $R$ is a right V-ring. First let us assume that $R$ is primitive. So, by Theorem \ref{2.5}, $R$ is a prime right Goldie ring. By \cite[Theorem 5.16]{simple noetherian ring}, a prime right Goldie right V-ring is simple. By Lemma \ref{notherian uniform}, $R$ has a noetherian uniform right submodule and so, using \cite[Corollary 7.25]{goodearl}, $R$ is right noetherian. Now we show that $R$ is Morita equivalent to a domain. By \cite[Lemma 5.12]{5}, the endomorphism ring of every uniform right ideal of a prime right Goldie ring is a right Ore domain. So, by \cite[Theorem 1.2]{simple ring}, it is enough to show that $R$ has a uniform projective generator $U$. Let us assume that $R$ is not uniform with u.dim$(R) = n$, and let $U$ be a uniform right ideal of $R$. By \cite[Corollary 7.25]{goodearl}, $U^{n}$ can be embedded in $R$ and also $R$ can be embedded in $U^{n}$. Then c.u.dim$(R) = $ c.u.dim$(U^{n})$ and hence $R\cong U^{n}$, because $R$ is not uniform. Thus $U$ is a uniform right ideal of $R$ which is a projective generator. So $R$ is Morita equivalent to a domain. Now Lemma \ref{morita}, Lemma \ref{example} and Corollary \ref{domain} show that $R$ is simple artinian, a contradiction. So $R$ is not primitive, but every primitive factor ring is artinian (indeed all proper factor rings are artinian). Then, since $R$ is a right V-ring, by \cite{Bacel proc}, $R$ is regular and a $\Sigma$-V-ring. Also every right ideal contains a non-zero uniform right ideal, hence a minimal right ideal. So $R$ has non-zero essential socle soc$(R)$. But $R$ is a $\Sigma$-V-ring and, by Corollary \ref{finite simple}, we have only finitely many non-isomorphic simple modules.
Thus soc$(R)$ is injective. This implies $R$ is semisimple, a contradiction. This completes the proof. $~\square$ \end{proof} {\bf ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary} This paper defines the couniserial dimension of a module, which measures how far a module is from being uniform. The results proved in the paper demonstrate its importance for studying the structure of modules and rings, and they are the beginning of a larger project to study its impact. We close with some open questions:\\ 1) Does a module with arbitrary couniserial dimension possess an indecomposable decomposition?\\ 2) Is there a theory for modules with both finite uniserial and couniserial dimensions that parallels the Krull-Schmidt-Remak-Azumaya theorem? {\bf ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Acknowledgments} This paper was written when the third author was visiting Ohio University, United States, during May-August 2014. She wishes to express her deepest gratitude to Professor S. K. Jain for his kind guidance in her research project and to Mrs. Parvesh Jain for the warm hospitality extended to her during her stay. She would also like to express her thanks to Professor E. Zelmanov for his kind invitation to visit the University of California at San Diego and to give a talk. \end{document}
\begin{document} \maketitle \let\thefootnote\relax\footnote{{\hspace{-0.4cm}1. Institute of Mechanics, Polish Academy of Sciences, \'Sniadeckich 8, 00-656 Warszawa, Poland, email: pgwiazda@mimuw.edu.pl\\ 2. Institute of Mathematics of the Academy of Sciences of the Czech Republic, \v{Z}itn\'{a} 25, CZ-115 67 Praha~1, Czech Republic, email: michalek@math.cas.cz\\ 3. Institute of Applied Mathematics and Mechanics, University of Warsaw, Banacha 2, 02-097 Warszawa, Poland, phone: {\it +48 22 5544115}, email: aswiercz@mimuw.edu.pl\\ }} \begin{abstract} A common feature of systems of conservation laws of continuum physics is that they are endowed with natural companion laws, which in this case are most often related to the second law of thermodynamics. This observation easily generalizes to any symmetrizable system of conservation laws: such systems are endowed with nontrivial companion conservation laws, which are immediately satisfied by classical solutions. Not surprisingly, weak solutions may fail to satisfy companion laws, which are then often relaxed from an equality to an inequality and take over the role of a physical admissibility condition for weak solutions. We want to answer the question of what is the critical regularity of weak solutions to a general system of conservation laws needed to satisfy an associated companion law as an equality. An archetypal example of such a result was derived for the incompressible Euler system by Constantin et al. (\cite{ConstETiti}) in the context of Onsager's seminal conjecture. This general result can serve as a simple criterion for numerous systems of mathematical physics to prescribe the regularity of solutions needed for an appropriate companion law to be satisfied.
\noindent{\bf Keywords}: energy conservation, first order hyperbolic system, Onsager's conjecture \end{abstract} \section{Introduction} The past decade has, to a significant extent, been directed to solving the famous conjecture of Onsager, which says that solutions to the incompressible Euler system conserve the total kinetic energy as long as they are H\"older continuous with a H\"older exponent $\alpha>1/3$; otherwise they may dissipate the energy. The ideas used to prove the celebrated Nash-Kuiper theorem turned out to have wide applicability in the context of fluid mechanics, and the incompressible Euler system in particular. Interestingly, the construction of weak solutions via an appropriate refinement of the method of convex integration made it possible to generate solutions, with a regularity exactly as prescribed by Onsager, that do not conserve the energy. We shall summarize the recent achievements in this direction in the sequel; however, our main interest in the current paper is an analogue and generalization of the first part of Onsager's statement. The positive direction of this claim was fully solved by Constantin et al. already in the early nineties, cf.~\cite{ConstETiti}, see also~\cite{cheskidov, DuRo2000, eying}. A sufficient regularity for the energy to be conserved has been established for a variety of models, including the incompressible inhomogeneous Euler system and the compressible Euler system in \cite{Feireisletal}, the incompressible inhomogeneous Navier-Stokes system in \cite{LeSh2016}, the compressible Navier-Stokes system in~\cite{Yu} and the equations of magneto-hydrodynamics in \cite{Cafetal}. The above list gives a flavor of how broad the class of systems is for which one can specify the regularity of weak solutions ensuring that the energy is conserved. This motivates us, instead of developing tools for another dozen systems, to look at general systems of conservation laws.
It turns out that one can prescribe conditions on weak solutions ensuring that, in addition to a conservation law, they satisfy also a companion conservation law. To make the statement more precise, let us consider a conservation law, not necessarily hyperbolic, in the general form \begin{equation} \label{eq:balance} \diver_{X}(G(U(X)))=0 \quad \quad \mbox{for } X\in \CX \end{equation} for an unknown (vector) function $U=U(X)\colon \CX\to \CO $ and a given matrix field $G\colon \CO \to \BM^{n\times (k+1)}$. Let us assume that $\CO$ and $\CX$ are open sets, $\CX \subseteq \BR^{k+1}$ or $\CX \subseteq \BR\times\BT^{k}$ and $\CO\subseteq \BR^n$, where $\BT^{k}$ denotes the flat torus of dimension $k$ (imposing the periodic boundary conditions). We denote by $X = (x_0,x_1,\dots, x_k)^T$ the standard coordinates on $\BR^{k+1}$ or $\BR\times \BT^k$ and we consider on $\CO$ the coordinates $Y=(y_1,\dots,y_n)^T$ with respect to the canonical basis. For a matrix field $M=(M_{i,j})_{i=1,\dots, n,\ j=0,\dots, k}$, $M_{i,j}\colon \BR^n \to \BR$, we denote by $M_{j}$ the $j$--th column vector. Moreover, we use the standard definition \begin{equation*} \diver_{X} M(X) = \sum_{j=0}^k \partial_{x_j}M_{j}(X). \end{equation*} We denote by $D_X$ (respectively $D_Y$, $D_U$) the differential ($D_X=(\partial_{x_0},\dots,\partial_{x_k})$) with respect to the variables $X$ (respectively $Y$, $U$). Following the notation in~\cite{dafermos}, we shall say that a smooth function $Q\colon \CO \to \BR^{s\times (k+1)}$ is a {\it companion} of $G$ if there exists a smooth function ${\mathcal B}\colon \CO \to \BM^{s\times n}$ such that \begin{equation} \label{eq:algebraic_cons} D_{U}Q_j(U)={\mathcal B}(U) D_{U}G_j(U)\quad \mbox{for all $U\in \CO$, $j\in\{0,\dots,k\}$}.
\end{equation} Observe that for any classical solution $U$ of \eqref{eq:balance}, we obtain \begin{equation} \label{eq:companion} \diver_X (Q(U(X)))=0 \quad \mbox{for $X\in \CX$} \end{equation} where by a {\it classical solution} we mean a Lipschitz continuous vector field $U$ satisfying \eqref{eq:balance} for almost all $X\in\CX$. Identity \eqref{eq:companion} is called a {\it companion law} associated to $G$ (see e.g. \cite{dafermos}). In many applications, which we partially recall in Section \ref{sec:appl}, some relevant companion laws are the {\it conservation of energy} or the {\it conservation of entropy}. Before we discuss the relations between weak solutions and companion laws, let us remark that it was observed by Godunov \cite{godunov} that systems of conservation laws are symmetrizable if and only if they are endowed with nontrivial companion laws. We consider the standard definition of weak solutions to a conservation law. \begin{dfn} We call a function $U\colon \CX \to \CO$ a weak solution to \eqref{eq:balance} if $G(U)$ is locally integrable in $\CX$ and the equality \begin{equation} \label{eq:weak_sol} \int_{\CX} G(U(X))\colon D_X \psi(X) \de{X}= 0 \end{equation} holds for all smooth test functions $\psi\colon \CX \to \BR^n$ with compact support in $\CX$. \end{dfn} Analogously, we can define weak solutions to \eqref{eq:companion}; however, weak solutions of \eqref{eq:balance} need not be weak solutions of \eqref{eq:companion}. The main question we deal with in this paper reads as follows: \emph{What are sufficient conditions for a weak solution of \eqref{eq:balance} to satisfy also \eqref{eq:companion}?} Let us comment in more detail on results related to the question of energy conservation for weak solutions of some conservation laws. Both parts of Onsager's conjecture for the incompressible inviscid Euler system have been resolved.
Due to recent results of Isett \cite{isett} and Buckmaster et al.\,\cite{delell_rec}, we know that there exist solutions of the incompressible Euler equations of class $C([0,T];C^{1\slash 3}(\BT^3))$ which do not satisfy the energy equality. These results were preceded by a series of papers showing firstly the existence of bounded (\cite{DLS09}), later continuous (\cite{DLS13}) and H\"older continuous (\cite{DLS14}, with $\alpha=1/10$) solutions. Further results aimed at increasing the H\"older exponent, see \cite{Buc15, BSLISJ15, BDKS16, Ise13}. In the context of our studies, the second part of Onsager's conjecture is more relevant. Constantin et al. \cite{ConstETiti} showed the conservation of the global kinetic energy if the velocity field $u$ is of class $L^3(0,T;B^{\alpha}_{3,\infty}(\BT^3))\cap C([0,T];L^2(\BT^3))$ with $\alpha>\frac 13$, see also \cite{eying}. Here $B^{\alpha}_{3,\infty}$ stands for a Besov space (the definition is recalled in Section~\ref{sec:besov}). For the same system it was observed by Cheskidov et al.\,in \cite{cheskidov} that it is sufficient for $u$ to belong to the larger space $L^3(0,T;B^{1\slash 3}_{3,q}(\BT^3))$, where $q\in(1,\infty)$. We refer the reader to \cite{shvydkoy_energy} and \cite{shvydkoy} for more refinements in the case of the incompressible Euler system. For the incompressible inviscid equations of magneto--hydrodynamics, a result comparable to \cite{ConstETiti} was proved by Caflisch et al.\,\cite{Cafetal}, see also \cite{KangLee}. The standard technique developed in \cite{ConstETiti} is based on the convolution of the Euler system with a standard family of mollifiers. The crucial part of the proof is then to estimate an appropriate nonlinear commutator. Most of the mentioned results have been derived for systems with a bilinear non--linearity. Recently, similar results for the compressible Euler system were presented by Feireisl et al. in \cite{Feireisletal}.
A sufficient condition for the energy conservation is that the solution belongs to $B^{\alpha}_{3, \infty}((0,T)\times \BT^3)$ with $\alpha>1\slash 3$. To our knowledge, this was the first result treating a nonlinearity which is not in a multilinear form. We extend this approach to a general class of conservation laws of the form~\eqref{eq:balance}. Let us mention that we are not aware of any reference where the problem would be treated in such generality. We believe that this general scenario might be of interest. Moreover, at least the application to the equations of polyconvex elastodynamics (Subsection \ref{subsec:polyconv}) is an original contribution of this paper. Let us present the main results of the paper. For the notation, we refer the reader to Section \ref{sec:besov}. \begin{thm} \label{thm:bes} Let $U\in B^{\alpha}_{3,\infty}(\CX; \CO)$ be a weak solution of \eqref{eq:balance} with $\alpha>\frac{1}{3}$. Assume that $G \in C^2(\CO;\BM^{n\times (k+1)})$ is endowed with a companion law with flux $Q\in C(\CO;\BM^{1\times (k+1)})$ for which there exists ${\mathcal B}\in C^1(\CO;\BM^{1\times n})$ related through identity \eqref{eq:algebraic_cons}, and that all the following conditions hold \begin{equation} \label{eq:assumpt_convex} \left. \begin{aligned} \mbox{$\CO$ is convex}, \\ {\mathcal B}\in W^{1,\infty}(\CO;\BM^{1\times n}), \\ |Q(V)|\leq C(1+|V|^3)\ \mbox{for all $V\in\CO$}, \\ \sup_{i,j \in\{1,\dots,d\}}\|\partial_{U_i}\partial _{U_j} G(U)\|_{C(\CO;\,\BM^{n\times (k+1)})}<+\infty. \end{aligned} \right\} \end{equation} Then $U$ is a weak solution of the companion law \eqref{eq:companion} with the flux $Q$. \end{thm} \begin{rem} \begin{itemize} \item We consider only the special case when the companion law is a scalar equation. If $Q\colon \CO \to \BM^{s\times (k+1)}$ and $s>1$, we can apply Theorem \ref{thm:bes} to each row of~\eqref{eq:companion}.
\item The growth condition on $Q$ can be relaxed whenever $B^{\alpha}_{3,\infty}$ is embedded into an appropriate Lebesgue space. \item Under suitable assumptions, one can extend the theory to non--homogeneous fluxes $G=G(X,U)$ and to equation \eqref{eq:balance} with a non--zero right--hand side $h=h(X,U)$. \item Due to the definition of weak solutions, it is enough to consider the integrability and regularity of $U$ only locally in $\CX$. \end{itemize} \end{rem} Due to the assumption on the convexity of $\CO$, Theorem \ref{thm:bes} could be straightforwardly deduced from \cite{Feireisletal}; however, for the reader's convenience, we present the proof in Section \ref{sec:main}. It is worth noting that the convexity of $\CO$ might not be natural for all applications (this is e.g.\,the case of polyconvex elasticity, see Section \ref{sec:appl}). To this purpose, we present a theorem dealing with the case of a non--convex $\CO$. \begin{thm} \label{thm:non_convex} Let the assumptions of Theorem \ref{thm:bes} be satisfied, but instead of \eqref{eq:assumpt_convex} we assume that \begin{equation} \label{eq:assumpt_compact} \mbox{the essential range of $U$ is compact in $\CO$.} \end{equation} Then $U$ is a weak solution of the companion law \eqref{eq:companion} with the flux $Q$. \end{thm} Admittedly, the conclusions of the previous theorems are somewhat weaker than some known results for particular conservation laws. As an example, the result of Constantin et al. in \cite{ConstETiti} does not need the Besov--type regularity with respect to time. Having more knowledge about the nonlinear part of $G$, we may be able to relax the class of solutions in Theorem \ref{thm:bes}, which is discussed in Section \ref{sec:appl}. Finally, we observe that in the case of hyperbolic systems, the opposite direction of Onsager's hypothesis is almost trivial.
This is of course a completely different situation from the case of the incompressible Euler system, which is not a hyperbolic conservation law and for which the construction of solutions dissipating the energy was a challenge. It is well known, cf.~\cite[Chapter 1]{dafermos} among others, that shock solutions dissipate energy. Following Dafermos again, we note that crucial properties of the local behavior of shocks may be investigated, without loss of generality, within the framework of systems in one space dimension. Thus the essence can already be seen in the simple example of Burgers' equation $u_t+(u^2/2)_x=0$. Classical solutions also satisfy $(u^2/2)_t+(u^3/3)_x=0$, which can be considered as a companion law. The shock solutions to the equation in the first form satisfy the Rankine--Hugoniot condition $s(u_{l}-u_{r})=(u^2_l-u^2_r)/2$, thus the speed of the shock is $ s=(u_l+u_r)/2$, where $u_l=\lim_{y\to x(t)^-}u(y,t)$ and $u_r$ is defined correspondingly. Considering the second equation one gets $s=2(u_l^2+u_lu_r+u_r^2)/(3(u_l+u_r))$, which is in general different. More generally, multiplying~\eqref{eq:balance} by the function ${\mathcal B}$, one easily concludes that requiring the Rankine--Hugoniot conditions to be satisfied also for the companion law forces the companion law to be trivial, namely ${\mathcal B}\equiv \mathrm{const}$. Thus, knowing the regularity of shock solutions, namely that, as was shown in \cite[Prop. 2.1]{Feireisletal}, $$(BV\cap L^\infty)(\Omega)\subset B^{1\slash q}_{q,\infty}(\Omega)$$ for every $q\in[1,+\infty]$, we observe that our assumptions are sharp. Let us briefly mention the outline of the rest of the paper. In Section \ref{sec:besov}, we introduce the notation. Section \ref{sec:main} contains the proofs of the main results. Section \ref{sec:appl} is devoted to some relaxation of the conditions in Theorem \ref{thm:bes}; applications of the main theorems are also presented.
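The Rankine--Hugoniot shock-speed computation for Burgers' equation above can be reproduced numerically. The following sketch is an illustration only (it is not part of the paper; the helper \texttt{rh\_speed} is our own), evaluating the jump condition $s=[\mbox{flux}]/[\mbox{state}]$ for both the conservation law and its companion law on the same shock:

```python
# Illustrative sketch (not from the paper): Rankine-Hugoniot shock speeds
# for Burgers' equation u_t + (u^2/2)_x = 0 and its companion (energy) law
# (u^2/2)_t + (u^3/3)_x = 0, evaluated for the same jump (u_l, u_r).

def rh_speed(flux, state, u_l, u_r):
    """Rankine-Hugoniot speed s = [flux jump] / [state jump]."""
    return (flux(u_l) - flux(u_r)) / (state(u_l) - state(u_r))

u_l, u_r = 1.0, 0.0
# conservation law: state u, flux u^2/2  ->  s = (u_l + u_r)/2
s_balance = rh_speed(lambda u: u**2 / 2, lambda u: u, u_l, u_r)
# companion law: state u^2/2, flux u^3/3
#   ->  s = 2(u_l^2 + u_l*u_r + u_r^2) / (3(u_l + u_r))
s_companion = rh_speed(lambda u: u**3 / 3, lambda u: u**2 / 2, u_l, u_r)

print(s_balance, s_companion)  # 0.5 versus 0.666..., the two speeds disagree
```

Since no single shock speed satisfies both jump conditions, a shock solution of the balance law cannot be a weak solution of the companion law, in line with the discussion above.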
\section{Notation and auxiliary estimates} \label{sec:besov} We will briefly present some properties of the Besov spaces $B^{\alpha}_{q,\infty}$. Let $\CX$ be as above, $\alpha\in(0,1)$ and $q\in[1,\infty)$. We denote by $B^{\alpha}_{q,\infty}(\CX)$ the Besov space which is defined as follows \begin{equation*} B^{\alpha}_{q,\infty}(\CX) = \left\{ U\in L^q(\CX) \colon \quad |U|_{B^{\alpha}_{q, \infty}(\CX)} <\infty \right\} \end{equation*} with \begin{align*} |U|_{B^{\alpha}_{q,\infty}(\CX)} = \sup_{\xi\in \BR^k}\frac{\|U(\cdot)-U(\cdot-\xi)\|_{L^q(\CX\cap (\CX+\xi))}}{|\xi|^{\alpha}}. \end{align*} On $B^{\alpha}_{q,\infty}(\CX)$ we consider the standard norm \begin{equation*} \|U\|^q_{B^{\alpha}_{q,\infty}(\CX)} = \|U\|^q_{L^q(\CX)}+|U|^q_{B^{\alpha}_{q,\infty}(\CX)}. \end{equation*} Assume that a non--negative function $\eta_1 \in C^{\infty}(\BR^k)$ has compact support in $B(0,1)$ and $\int_{\BR^k}\eta_1(X)\de{X}=1$. For $\eps>0$ we denote $\eta_{\eps}(X) = \frac{1}{\eps^k}\eta_1(\frac{X}{\eps})$ and \begin{equation*} [f]_{\eps}(X)=f\ast \eta_{\eps}(X) \end{equation*} which is defined at least in $\CX_{\eps}= \{X\in \CX \colon \dist(X,\partial \CX)>\eps\}$. For vector-- or matrix--valued functions the convolution is defined component--wise. For $\CK \subseteq \BR^k$ and $\delta>0$ we also use the notation \begin{equation*} \CK^{\delta}=\{X\in \BR^k \colon \dist(X,\CK)<\delta\}=\bigcup_{X\in \CK}B(X,\delta). \end{equation*} One easily shows that for $f\in B^{\alpha}_{q,\infty}(\CX)$ the following estimates hold \begin{align} \label{eq:mollif_nabla} \|D_X [f]_{\eps}\|_{L^q(\CX_{\eps})}&\leq C \|f\|_{B^{\alpha}_{q,\infty}(\CX)}\eps^{\alpha-1}, \\ \label{eq:mollif_diff} \|[f]_{\eps}-f\|_{L^q(\CX_{\eps})}&\leq C\|f\|_{B^{\alpha}_{q,\infty}(\CX) }\eps^{\alpha}, \\ \label{eq:transport} \|f(\cdot-y)-f(\cdot)\|_{L^q(\CX\cap(\CX+y))}&\leq C \|f\|_{B^{\alpha}_{q,\infty}(\CX)}|y|^{\alpha} \end{align} where $C$ depends only on $\CX$.
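The scaling of the mollification estimates above can be illustrated numerically. The sketch below is our own illustration, not taken from the paper: it mollifies $f(x)=|x|^{\alpha}$, which is H\"older continuous with exponent $\alpha$ and hence locally of Besov regularity $\alpha$, and checks that halving $\eps$ shrinks the error at $x=0$ by the factor $2^{\alpha}$, matching the rate $\eps^{\alpha}$ in the second estimate:

```python
import numpy as np

# Illustration (our own choice of f and kernel, not from the paper):
# after the change of variables y = eps*z, the mollification of
# f(x) = |x|^alpha at x = 0 equals eps^alpha * int |z|^alpha eta_1(z) dz,
# so the error |[f]_eps(0) - f(0)| scales exactly like eps^alpha.
alpha = 0.5

z = np.linspace(-1.0, 1.0, 20001)      # kernel variable, support in [-1, 1]
w = np.maximum(1.0 - z**2, 0.0) ** 2   # polynomial bump playing the role of eta_1
w /= w.sum()                           # discrete normalization: weights sum to 1

def mollified_at_zero(eps):
    # discrete version of [f]_eps(0) = int f(eps*z) eta_1(z) dz, with f(0) = 0
    return float(np.sum(np.abs(eps * z) ** alpha * w))

ratio = mollified_at_zero(0.1) / mollified_at_zero(0.05)
print(ratio)  # close to 2**alpha, i.e. about 1.414
```

The same experiment with a smoother $f$ would show a faster decay, which is why the Besov exponent, and not the kernel, dictates the rates in the three estimates above.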
\section{The proof of the main results} \label{sec:main} In what follows, we will denote by $C$ a constant independent of $\eps$. \subsection{Commutator estimates} \label{subsec:convex} The essential part of the proof of Theorem \ref{thm:bes} pertains to the estimation of the nonlinear commutator \begin{equation*} [G(U)]_{\eps}-G([U]_{\eps}). \end{equation*} It is based on the following observation, which appears in a special form in \cite{Feireisletal}. The rest of the proof of Theorem \ref{thm:bes} is reminiscent of the paper \cite{ConstETiti}. \begin{lem} \label{lem:nonlin_commut} Let $\CO$ be a convex set, $U\in L^2_{loc}(\CX,\CO)$, $G\in C^2(\CO;\BR^n)$ and let \begin{equation} \label{eq:sec_der_bdd} \sup_{i,j \in\{1,\dots,d\}}\|\partial_{U_i}\partial_{U_j} G(U)\|_{L^{\infty}(\CO)}<+\infty. \end{equation} Then there exists $C>0$ depending only on $\eta_1$, the second derivatives of $G$ and $d$ (the dimension of $\CO$) such that \begin{equation*} \left\| [G(U)]_{\eps}-G([U]_{\eps}) \right\|_{L^{q}(K)} \leq C\Big(\|[U]_{\eps}-U\|^2_{L^{2q}(K)} + \sup_{Y\in \supp \eta_{\eps}}\|U(\cdot)-U(\cdot-Y)\|^2_{L^{2q}(K)} \Big) \end{equation*} for $q\in [1,\infty)$, where $K\subseteq \CX$ satisfies $K^{\eps} \subseteq \CX$. \end{lem} \begin{proof} Without loss of generality, we assume that $G$ is a scalar function and that $U$ is finite everywhere on $\CX$. Then, because of \eqref{eq:sec_der_bdd}, we get for $X$, $Y\in K$ \begin{align} \label{eq:first_tay} \left|G(U(X))-G([U]_{\eps}(X))-D_U G\circ U(X)(U(X)-[U]_{\eps}(X))\right| &\leq C |U(X)-[U]_{\eps}(X)|^2, \\ \label{eq:sec_tay} \left|G(U(X))-G(U(Y))-D_U G\circ U(X)(U(X)-U(Y))\right| &\leq C |U(X)-U(Y)|^2. \end{align} We convolve \eqref{eq:sec_tay} with $\eta_{\eps}$ in the variable $Y$ and apply Jensen's inequality on the left--hand side \begin{equation} \label{eq:convol_tayl} \left|G(U(X))-[G(U)]_{\eps}(X)-D_U G\circ U(X)(U(X)-[U]_{\eps}(X))\right| \leq C \left(|U(X)-U(\cdot)|^2\ast_Y \eta_{\eps}\right)(X).
\end{equation} Finally, coupling \eqref{eq:first_tay} and \eqref{eq:convol_tayl} yields \begin{equation} \label{eq:eq3} \left| G([U]_{\eps}(X))-[G(U)]_{\eps}(X) \right| \leq C\left(|U(X)-[U]_{\eps}(X)|^2+|U(X)-U(\cdot)|^2\ast_Y \eta_{\eps}(X)\right). \end{equation} In order to complete the proof, we use Jensen's inequality to estimate the $L^q$ norm of the second term on the right--hand side of \eqref{eq:eq3} \begin{align*} &\int_{K} \left| \int_{\supp \eta_{\eps}}|U(X)-U(X-Y)|^2\eta_{\eps}(Y)\de{Y}\right|^q\de{X} \\ & \ \leq \int_{\supp \eta_{\eps}} \int_{K}|U(X)-U(X-Y)|^{2q}\eta_{\eps}(Y)\de{X}\de{Y} \leq \sup_{Y\in \supp \eta_{\eps}}\|U(\cdot)-U(\cdot-Y)\|^{2q}_{L^{2q}(K)}. \end{align*} \end{proof} \subsection{Proof of Theorem \ref{thm:bes}} \label{subsec:proof1} Let $\eps_0>0$ and consider a test function $\psi\in C^{\infty}(\CX)$ such that $\supp \psi\subseteq \CX_{\eps_0}$. Mollifying \eqref{eq:balance} by $\eta_{\eps}$, we obtain \begin{equation} \label{eq:moll_bal} \diver_{X} [G(U)]_{\eps}=0 \quad \mbox{in } \CX_{\eps_0} \end{equation} whenever $\eps<\eps_0$. We multiply both sides of \eqref{eq:moll_bal} by $\psi {\mathcal B}([U]_{\eps})$ (where ${\mathcal B}$ comes from \eqref{eq:algebraic_cons}) from the left and get \begin{equation} \label{eq:prep} \int_{\CX}\psi(X){\mathcal B}([U]_{\eps}(X)) \diver_{X} ([G(U)]_{\eps}(X)) \de{X}=0. \end{equation} We can recast the previous equality as follows \begin{equation*} \int_{\CX}\psi(X){\mathcal B}([U]_{\eps}(X)) \diver_{X} G([U]_{\eps}(X)) \de{X}= \int_{\CX}R_{\eps}\de{X} \end{equation*} with the commutator \begin{equation}\label{error} R_{\eps}=\psi(X){\mathcal B}([U]_{\eps}(X)) \diver_{X} \Big( G([U]_{\eps}(X))-[G(U)]_{\eps}(X) \Big). \end{equation} Due to \eqref{eq:algebraic_cons}, equality \eqref{eq:prep} can be rewritten in the form \begin{equation} \label{eq:mollifRendef} -\int_{\CX} Q([U]_{\eps}(X)) (D_X\psi(X))^T \de{X} = \int_{\CX}R_{\eps}\de{X}.
\end{equation} In order to show that the right--hand side of \eqref{eq:mollifRendef} converges to zero as $\varepsilon\to0$, we write \begin{align}\nonumber \label{eq:commut_full} \int_{\CX}R_{\eps}(X)\de{X}&=\int_{\CX} \Big( G([U]_{\eps})-[G(U)]_{\eps} \Big):\left((D_U {\mathcal B}^T)([U]_{\eps})D_X [U]_{\eps} \psi\right)\de{X} \\ &+ \int_{\CX} \Big( G([U]_{\eps})-[G(U)]_{\eps} \Big)\colon\left({\mathcal B}^T([U]_{\eps})D_{X}\psi\right)\de{X} \\\nonumber &=I^{1}_{\eps}+I^{2}_{\eps}. \end{align} The first integral is estimated using Lemma \ref{lem:nonlin_commut} and \eqref{eq:mollif_nabla} as follows \begin{align*} |I^1_{\eps}|&\leq C\|{\mathcal B}\|_{W^{1,\infty}(\CO)}\|D_X [U]_{\eps}\|_{L^3(\CX_{\eps_0})}\|[U]_{\eps}-U\|^2_{L^3(\CX_{\eps_0})}\|\psi\|_{W^{1,\infty}(\CX_{\eps_0})} \\ &\leq C \eps^{\alpha-1}\eps^{2\alpha}. \end{align*} Similarly, we have \begin{equation*} |I^2_{\eps}|\leq C \eps^{\alpha}, \end{equation*} hence \begin{equation*} \int_{\CX}R_{\eps}\de{X}\to 0 \quad \mbox{as $\eps\to 0$} \end{equation*} as long as $\alpha>\frac{1}{3}$. The convergence of the left--hand side of \eqref{eq:mollifRendef} follows from the Vitali convergence theorem. Indeed, the equi-integrability of $Q([U]_{\eps})$ in $\CX_{\eps_0}$ is a consequence of that of $|[U]_{\eps}|^3$ and of the growth conditions on $Q$. \begin{rem} If $\CO$ is non--convex, we face the problem that $[U]_{\eps}$ need not belong to $\CO$. The convexity was crucial to conduct the Taylor expansion argument in Lemma \ref{lem:nonlin_commut}. However, we will see that a suitable extension of the functions $G$, ${\mathcal B}$ and $Q$ does not alter the previous proof significantly. \end{rem} \subsection{Proof of Theorem \ref{thm:non_convex}} \label{subsec:proof2} Let $\CK$ denote the essential range of $U$. There exists $\delta>0$ depending only on $\CK$ and $\CO$ such that $\CK^{2\delta}\subseteq \CO$.
Let $\tilde{G}\in C^2(\BR^n;\BM^{n\times (k+1)})$, $\tilde{{\mathcal B}}\in C^1(\BR^n;\BM^{1\times n})$ and $\tilde{Q}\in C(\BR^n;\BM^{1\times (k+1)})$ be compactly supported functions satisfying $\tilde{G}=G$, $\tilde{{\mathcal B}}={\mathcal B}$ and $\tilde{Q}=Q$ on $\CK^{\delta}$. Such functions exist as there is a set $\CR$ with a smooth boundary satisfying $\CK^{\delta}\subseteq \CR \subseteq \CO$. Thus, relation~\eqref{eq:algebraic_cons} holds also for $\tilde{G}$, $\tilde{{\mathcal B}}$ and $\tilde{Q}$ on $\CK^{\delta}$. Similarly to the proof of Theorem \ref{thm:bes}, for a function $\psi \in C^{\infty}(\CX)$ compactly supported in $\CX_{\eps_0}$, we obtain for $\varepsilon<\varepsilon_0$ \begin{equation*} \int_{\CX}\psi\tilde{{\mathcal B}}([U]_{\eps}) \diver_{X} [\tilde{G}(U)]_{\eps} \de{X}=0. \end{equation*} We can write the previous equality as follows \begin{equation} \label{eq:molif} \int_{\CX}\psi\tilde{{\mathcal B}}([U]_{\eps}) \diver_{X} \tilde{G}([U]_{\eps}) \de{X}= \int_{\CX}\tilde{R}_{\eps}\de{X} \end{equation} with the commutator \begin{equation} \label{eq:comm_tilda} \tilde{R}_{\eps}=\psi\tilde{{\mathcal B}}([U]_{\eps}) \diver_{X}\Big( \tilde{G}([U]_{\eps})-[\tilde{G}(U)]_{\eps} \Big). \end{equation} Analogously to Subsection \ref{subsec:proof1}, $\int_{\CX}\tilde{R}_{\eps}\de{X}$ vanishes as $\eps \to 0$ due to Lemma \ref{lem:nonlin_commut}; hence, we may turn our attention to the left--hand side of \eqref{eq:molif}. We show that it converges to \begin{equation} -\int_{\CX}Q(U) (D_X\psi)^T \de{X}.
\end{equation} To this end, we put \begin{equation*} \CG_{\eps}^\delta = \{X\in \CX \colon |U(X)-[U]_{\eps}(X)|<\delta\} \end{equation*} and since $D_U \tilde{Q}_{j}([U]_{\eps})= \tilde{{\mathcal B}}([U]_{\eps})D_U \tilde{G}_{j}([U]_{\eps})$ on $\CG_{\eps}^\delta$ we obtain \begin{align*} &\left| \int_{\CX}\psi\tilde{{\mathcal B}}([U]_{\eps}) \diver_{X} \tilde{G}([U]_{\eps}) \de{X}+\int_{\CX} Q(U)(D_X\psi)^T \de{X} \right| \\ & \quad \leq \left| \int_{\CX \backslash\CG_{\eps}^\delta} \psi\tilde{{\mathcal B}}([U]_{\eps}) \diver_{X} \tilde{G}([U]_{\eps}) \de{X} \right| + \left| \int_{\CX \backslash\CG_{\eps}^\delta} Q(U)(D_X \psi)^T\de{X} \right| \\ & \quad +\left|\int_{\CG_{\eps}^\delta}(\tilde{Q}(U)-\tilde{Q}([U]_{\eps}))(D_X \psi)^T\de{X} \right| = I^1_{\eps}+I^2_{\eps}+I^3_{\eps}. \end{align*} To estimate $I^1_{\eps}$, recall that $\tilde{G}$ and $\tilde{{\mathcal B}}$ are compactly supported, therefore \begin{align*} I^1_{\eps}\leq \int_{\CX \backslash\CG_{\eps}^\delta} \left| \psi\tilde{{\mathcal B}}([U]_{\eps}) D_U\tilde{G}([U]_{\eps}) D_{X} [U]_{\eps} \right|\de{X} \leq C\|\psi\|_{C^1} \int_{\CX \backslash\CG_{\eps}^\delta}|D_X [U]_{\eps}|\de{X}. \end{align*} By means of H\"older's and Chebyshev's inequalities, \eqref{eq:mollif_nabla} and \eqref{eq:mollif_diff}, we observe that \begin{align*} I^1_{\eps}&\leq C\|\psi\|_{C^1} \|D_X [U]_{\eps}\|_{L^3(\CX_{\eps_0})} \left| \CX\backslash \CG_{\eps}^\delta \right|^{\frac{2}{3}} = C\|\psi\|_{C^1} \|D_X [U]_{\eps}\|_{L^3(\CX_{\eps_0})} \left|\{ |U-[U]_{\eps}|>\delta \} \right|^{\frac{2}{3}} \\& \leq \frac{ C\|\psi\|_{C^1}}{\delta^{2}} \|D_X [U]_{\eps}\|_{L^3(\CX_{\eps_0})} \|U-[U]_{\eps}\|^2_{L^3(\CX_{\eps_0})}\leq \frac{ C\|\psi\|_{C^1}}{\delta^{2}} \eps^{3\alpha -1}. \end{align*} The integral $I^2_{\eps}$ vanishes as $\eps\to 0$, since $|\CX\backslash\CG_{\eps}^\delta|\to 0$ and $\|Q(U)\|_{L^{\infty}(\CX)}<\infty$. Finally, we observe that \begin{align*} I^3_{\eps}\leq \|\psi\|_{C^1} \int_{\CX_{\eps_0}}|\tilde{Q}(U)-\tilde{Q}([U]_{\eps})|\de{X}.
\end{align*} Therefore, $I^3_{\eps}\to 0$ due to the almost everywhere convergence of $\tilde{Q}([U]_{\eps})$ to $\tilde{Q}(U)$ and the boundedness of $\tilde{Q}$. \section{Applications} \label{sec:appl} Observe that so far we have considered general nonlinear fluxes $G$. The key part of the proof was to estimate \begin{equation} \int_{\CX} \Big( G([U]_{\eps})-[G(U)]_{\eps} \Big):\left((D_U {\mathcal B}^T)([U]_{\eps})D_X [U]_{\eps} \psi\right)\de{X}, \end{equation} where the integral vanishes whenever $G$ is affine. Using this observation we might expect to drop some conditions on $U$ in the main theorems if some components of $G$ are affine functions. We present three extensions of Theorem \ref{thm:bes}, which follow directly from the previous observation. The first gives a sufficient condition allowing one to drop the Besov regularity with respect to some variables. It is connected with the columns of $G$. \begin{cor} \label{cor1} Let $G=(G_{1},\dots,G_{s},G_{s+1},\dots, G_{k+1})$ where $G_{1},\dots,G_{s}$ are affine vector--valued functions and $\CX = \CY\times \CZ$ where $\CY\subseteq \BR^s$ and $\CZ\subseteq \BR^{k+1-s}$. Then it is enough to assume that $U\in L^3(\CY;B^{\alpha}_{3,\infty}(\CZ))$ in Theorem \ref{thm:bes}. \end{cor} Next, we specify when we can omit the Besov regularity with respect to some components of $U$. \begin{cor} \label{cor2} Assume that $U=(V_1,V_2)$ where $V_1=(U_1,\dots,U_s)$ and $V_2=(U_{s+1},\dots,U_n)$. If ${\mathcal B}$ does not depend on $V_1$, $G=G(V_1,V_2)=G_1(V_1)+G_2(V_2)$ and $G_1$ is linear, then it is enough to assume $U_1,\dots,U_s \in L^3(\CX)$ in Theorem \ref{thm:bes}. \end{cor} Finally, we deal with the case when some components of ${\mathcal B}$ are not Lipschitz on $\CO$, but the appropriate rows of $G$ are affine functions. \begin{cor} \label{cor3} Assume that the $j$--th row of $G$ is an affine function.
Then the statement of Theorem \ref{thm:bes} holds even if we assume that ${\mathcal B}_{j}$ is only locally Lipschitz in $\CO$. \end{cor} In the rest of this paper, we present a few examples to which the general theory applies. Some of them show how the general framework allows one to recover some known results. In what follows, we consider $\CX = (0,T)\times \BT^3$, $X=(t,x)$ and $\alpha>\frac{1}{3}$. We also present the systems in their standard form, denoting by $\nabla_x$ and $\diver_x$ the corresponding operators with respect to the spatial coordinate $x$. \subsection{Incompressible Euler system} \label{subsec:incom_eul} Let us consider the system of equations \begin{align*} \left.\begin{aligned}\diver_{x}\bu^T &= 0\\ \partial_t \bu + (\bu\cdot \nabla_x) \bu + \nabla_x p &= 0 \end{aligned} \right\} \quad \mbox{in } \CX \end{align*} for an unknown vector field $\bu\colon (0,T)\times \BT^3 \to \BR^3$ and a scalar $p\colon (0,T)\times \BT^3 \to \BR$. The system can be rewritten into the divergence form with respect to $X=(t,x)$ \begin{align} \label{eq:eul_div_form} \left. \begin{aligned} \diver_{x}\bu^T &= 0, \\ \partial_t \bu +\diver_x(\bu\otimes\bu + p\BI) &= 0. \end{aligned} \right\} \end{align} By multiplying \eqref{eq:eul_div_form} with ${\mathcal B}(p,\bu)= (p-1\slash 2 |\bu|^2,\bu^T)$ we obtain the conservation law for the energy \begin{equation} \label{eq:inc_eul} \partial_t \left(\frac{1}{2}|\bu|^2\right) + \diver_x \left(\left(\frac{1}{2}|\bu|^2 + p\right)\bu^T \right)=0. \end{equation} Corollaries \ref{cor1}, \ref{cor2} and \ref{cor3} imply that any weak solution $(p,\bu)\in L^3(\CX)\times L^3(0,T;B^{\alpha}_{3,\infty}(\BT^3))$ is a weak solution to \eqref{eq:inc_eul}. \begin{rem} This result is comparable to \cite{ConstETiti}. \end{rem} \subsection{Compressible Euler system} \label{subsec:comp_eul} We consider the compressible Euler equations in the following form \begin{align} \label{eq:comp_eul} \left.
\begin{aligned} \partial_t \rho+ \diver_{x}(\rho\bu^T) &= 0 \\ \partial_t \bu + \diver_x(\bu\otimes \bu) + \frac{\nabla_x p(\rho)}{\rho} &= 0 \end{aligned} \right\} \quad \mbox{in } \CX \end{align} for an unknown vector field $\bu\colon \CX \to \BR^3$ and scalar $\rho\colon \CX \to \BR$. The function $p\colon [0,\infty)\to\BR$ is given. Let $P$ be a primitive function to $\frac{p(\rho)}{\rho}$ such that $P(1)=0$. Then the system can be rewritten into the divergence form \begin{align} \label{eq:comp_eul_div} \left. \begin{aligned} \partial_t \rho+ \diver_{x}(\rho\bu^T) &= 0,\\ \partial_t \bu + \diver_x\left(\bu\otimes \bu+P(\rho)\BI\right) &= 0. \end{aligned} \right. \end{align} To get the conservation of the energy, we multiply \eqref{eq:comp_eul_div} with \begin{equation*} {\mathcal B}(\rho,\bu)=\left( P(\rho)+ \rho P'(\rho)-\frac{1}{2}|\bu|^2, \rho \bu^T\right) \end{equation*} and obtain \begin{equation} \label{eq:comp_cons} \partial_t\left( \frac{1}{2}\rho |\bu|^2 + \rho P(\rho) \right) +\diver_x\left[ \left( \frac{1}{2}\rho |\bu|^2 +\rho P(\rho)+p(\rho) \right) \bu^T\right] =0 \end{equation} Let $(\rho,\bu)\in L^3(0,T;B^{\alpha}_{3, \infty}(\BT^3))\times L^3(0,T;B^{\alpha}_{3, \infty}(\BT^3;\BR^3))$ be a weak solution to \eqref{eq:comp_eul_div} such that $\rho\in [\ubar{\rho},\bar{\rho}]$ for some $0<\ubar{\rho}<\bar{\rho}<\infty$ and $\bu\in B(0,R)$ for some $R>0$. Moreover, if $p\in C^2([\ubar{\rho},\bar{\rho}])$, we use\footnote{We can extend $p$ from $[\ubar{\rho},\bar{\rho}]$ on $\BR$ such that the extended function will be of class $C^2$ and compactly supported in $\BR$. Moreover, due to the boundedness of $|\bu|$ we can write $|\bu|^2 = \bu\cdot T(\bu)$ in $\CX$ where $T$ is a bounded Lipschitz function on $\BR^3$.} Corollary \ref{cor1} to show that $(\rho,\bu)$ is a weak solution to \eqref{eq:comp_cons}. 
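The columnwise algebraic relation \eqref{eq:algebraic_cons}, $D_U Q_j = {\mathcal B}\, D_U G_j$ for each column $j$, can be verified symbolically for the one-dimensional isentropic Euler system written in the conservative variables $(\rho,\bm)$ with $\bm=\rho\bu$ (cf. the remark in this subsection). The sketch below is our own illustration, not part of the paper, and it assumes the normalization $P'(\rho)=p(\rho)/\rho^{2}$ for the pressure potential of the conservative formulation:

```python
import sympy as sp

# Symbolic check (illustration, not from the paper) of the columnwise
# identity D_U Q_j = B * D_U G_j for 1-D isentropic Euler in the
# conservative variables U = (rho, m), m = rho*u.
# Assumption for this sketch: the pressure potential satisfies
# P'(rho) = p(rho)/rho**2.
rho, m = sp.symbols('rho m', positive=True)
p = sp.Function('p')(rho)          # barotropic pressure, kept symbolic
P = sp.Function('P')(rho)          # pressure potential
Pprime = p / rho**2                # assumed normalization P' = p/rho^2

U = sp.Matrix([rho, m])
G_t = sp.Matrix([rho, m])                         # time column of G
G_x = sp.Matrix([m, m**2 / rho + p])              # space column of G
Q_t = m**2 / (2 * rho) + rho * P                  # energy density
Q_x = (m**2 / (2 * rho) + rho * P + p) * m / rho  # energy flux
B = sp.Matrix([[P + rho * Pprime - m**2 / (2 * rho**2), m / rho]])

def columns_match(Qcol, Gcol):
    # compare D_U Q_j with B * D_U G_j after substituting P' = p/rho^2
    lhs = sp.Matrix([[sp.diff(Qcol, rho), sp.diff(Qcol, m)]])
    rhs = B * Gcol.jacobian(U)
    diff = (lhs - rhs).subs(sp.Derivative(P, rho), Pprime)
    return sp.simplify(diff) == sp.zeros(1, 2)

print(columns_match(Q_t, G_t), columns_match(Q_x, G_x))  # expect: True True
```

The same pattern, checking \eqref{eq:algebraic_cons} column by column, applies to any of the systems in this section once $G$, $Q$ and ${\mathcal B}$ are written out explicitly.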
In contrast to the incompressible case, the continuity equation (the first equation of \eqref{eq:comp_eul}) is not linear with respect to $\rho$ and $\bu$. Therefore, we have to assume that $\bu$ is bounded to ensure that ${\mathcal B}(\rho,\bu)$ is Lipschitz on the range of $(\rho,\bu)$. \begin{rem} We have considered the formulation of the compressible Euler system with the time derivative acting on a linear function of $(\rho, \bu)$. This has led to a slightly different sufficient condition in comparison to \cite{Feireisletal}. \end{rem} \begin{rem} If $\rho>0$, system \eqref{eq:comp_eul} can be rewritten with respect to the quantities $\rho$ and $\bm = \rho \bu$ as follows \begin{align} \label{eq:comp_eul_div_2} \left. \begin{aligned} \partial_t \rho+ \diver_{x}(\bm^T) &= 0\\ \partial_t \bm + \diver_x\left(\frac{\bm\otimes \bm}{\rho}+p(\rho)\BI\right) &= 0 \end{aligned} \right\}\quad \mbox{in } \CX. \end{align} A suitable choice of ${\mathcal B}$ is then \begin{equation} {\mathcal B}(\rho,\bm) = \left(P(\rho)+ \rho P'(\rho)-\frac{|\bm|^2}{2\rho^2},\frac{\bm^T}{\rho}\right), \end{equation} which leads to the companion law \begin{equation} \label{eq:comp_cons_2} \partial_t\left( \frac{|\bm|^2}{2\rho} + \rho P(\rho) \right) +\diver_x\left[ \left( \frac{|\bm|^2}{2\rho} +\rho P(\rho)+p(\rho) \right) \frac{\bm^T}{\rho}\right] =0. \end{equation} As the continuity equation is now linear with respect to $(\rho,\bm)$, we can apply Corollaries \ref{cor1} and \ref{cor3}. As a consequence, a weak solution \[(\rho,\bm)\in L^3(0,T;B^{\alpha}_{3, \infty}(\BT^3))\times L^3(0,T;B^{\alpha}_{3, \infty}(\BT^3;\BR^3)) \] such that $\rho\in [\ubar{\rho},\bar{\rho}]$ for some $0<\ubar{\rho}<\bar{\rho}<\infty$ is also a weak solution to \eqref{eq:comp_cons_2}. \end{rem} \subsection{Polyconvex elasticity} \label{subsec:polyconv} Let us consider the evolution equations of nonlinear elasticity, see e.g.\,\cite{Dafermos1985} or \cite{Demoulini2001}, \begin{align} \label{eq:non_elast} \left.
\begin{aligned} \partial_t F &= \nabla_x \bv \\ \partial_t \bv &= \diver_x \left(D_{F}W(F)\right) \end{aligned} \right\} \quad \mbox{in } \CX, \end{align} for an unknown matrix field $F\colon \CX \to \BM^{k\times k}$ and an unknown vector field $\bv\colon \CX \to\BR^k$. The function $W\colon \CU \to \BR$ is given. For many applications, $\CU= \BM_+^{k\times k}$, where $\BM_+^{k\times k}$ denotes the subset of $\BM^{k\times k}$ containing only matrices with positive determinant; see e.g.\,\cite{ball_open_prob} for a discussion of the form of $W$ and $\CU$. Let us point out that $\BM_+^{k\times k}$ is a non--convex connected set. System \eqref{eq:non_elast} can be rewritten into the divergence form in $(t,x)$ as follows \begin{align} \label{eq:polyconv_div} \left. \begin{aligned} \partial_t F_{i,j} &= \partial_{x_i}v_j =\diver_{x}\left(\left(\mathbf{e}^i\right)^T v_j\right), \quad \mathbf{e}^i_j=\delta_{i,j}, \\ \partial_t \bv &= \diver_x \left(D_{F}W(F)\right)^T. \end{aligned} \right. \end{align} By considering $F$ to have values in $\BR^{k^2}$ and taking ${\mathcal B}(F,\bv)=(\{D_F W(F)\}^T,\bv^T)$, we obtain the companion law \begin{equation} \label{eq:energy_polyc} \partial_t \left(\frac{1}{2}|\mathbf{v}|^2+W(F)\right) -\diver_x\left( D_{F}W(F)\mathbf{v} \right)=0. \end{equation} Let $(F,\bv)\in B^{\alpha}_{3,\infty}(\CX;\BM^{k\times k})\times B^{\alpha}_{3,\infty}(\CX;\BR^3)$ be a weak solution to \eqref{eq:polyconv_div} such that $F$ has a compact range in $\CU$ and $\bv$ in $\BR^k$. Directly from Theorem \ref{thm:non_convex}, $(F,\bv)$ is a weak solution to \eqref{eq:energy_polyc} whenever $W\in C^3(\CU)$. Note that, to the best of our knowledge, this observation for polyconvex elasticity is an original contribution. \subsection{Magnetohydrodynamics} \label{subsec:MHD} Let us consider the system \begin{align} \label{eq:MHD} \left.
\begin{aligned} \diver_{x}\bu^T &= 0 \\ \diver_{x}\bh^T &= 0 \\ \partial_t \bu + (\bu\cdot \nabla_x) \bu + \nabla_x p &= (\curl_x \bh)\times\bh \\ \partial_t \bh + \curl_x(\bh \times \bu) &= 0 \end{aligned} \right\} \quad &\mbox{in } \CX \end{align} for unknown vector fields $\bu\colon \CX \to \BR^3$ and $\bh\colon \CX \to \BR^3$ and an unknown scalar function $p\colon \CX\to \BR$. It describes the motion of an ideal electrically conducting fluid, see e.g. \cite[Chapter VIII]{landau}. Using standard vector calculus identities, \eqref{eq:MHD} can be written in the divergence form as follows: \begin{align*} \diver_{x}\bu^T &= 0,\\ \diver_{x}\bh^T &= 0,\\ \partial_t \bu + \diver_x\left(\bu\otimes \bu + p\BI +\frac{1}{2}|\bh|^2\BI-\bh\otimes \bh\right)&= 0,\\ \partial_t \bh + \diver_x(\bh \otimes \bu-\bu\otimes \bh) &= 0. \end{align*} With ${\mathcal B}(p,\bu,\bh)=(p-1\slash 2 |\bu|^2,-\bh\cdot\bu,\bu^T,\bh^T)$, the conservation of the total energy reads: \begin{align} \label{eq:MHD_en} &\partial_t\left( \frac{1}{2}|\mathbf{u}|^2 + \frac{1}{2}|\bh|^2 \right)+\diver_x\left[ \left(\frac{1}{2} |\mathbf{u}|^2+p+|\bh|^2\right) \mathbf{u}^T -(\bu\cdot\bh)\bh^T \right]=0. \end{align} A combination of Corollaries \ref{cor1}, \ref{cor2} and \ref{cor3} implies that any weak solution \begin{equation} (p,\bu,\bh)\in L^3(\CX)\times\Big(L^3(0,T;B^{\alpha}_{3,\infty}(\BT^3))\Big)^2 \end{equation} is a weak solution to \eqref{eq:MHD_en}. A similar result was obtained e.g. in \cite{Cafetal}. \subsection{Further examples} The list of examples is far from complete; however, it is not our goal, nor presumably the reader's expectation, to provide an exhaustive list. Among numerous further examples we mention only inviscid compressible magneto--hydrodynamics. A direct combination of Subsections \ref{subsec:comp_eul} and \ref{subsec:MHD} gives a sufficient condition for the relevant energy equality to be satisfied.
Another example worth mentioning is the heat conducting gas, see also \cite{Drivas:2017aa}. \noindent {\bf Acknowledgements} This work was partially supported by the Simons Foundation grant 346300 and the Polish Government MNiSW 2015-2019 matching fund. P.G. and A. \'S.-G. received support from the National Science Centre (Poland), 2015/18/MST1/00075. The research of M. M. leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ ERC Grant Agreement 320078. The Institute of Mathematics of the Academy of Sciences of the Czech Republic is supported by RVO:67985840. \end{document}
\begin{document} \title{Sublinear signal production in a two-dimensional Keller-Segel-Stokes system} \begin{abstract} \noindent {\textbf{Abstract:} We study the chemotaxis-fluid system \begin{align*} \left\{\begin{array}{r@{\,}l@{\quad}l@{\,}c} n_{t}\ &=\Delta n-\nabla\!\cdot(n\nabla c)-u\cdot\!\nabla n,\ &x\in\Omega,& t>0,\\ c_{t}\ &=\Delta c-c+f(n)-u\cdot\!\nabla c,\ &x\in\Omega,& t>0,\\ u_{t}\ &=\Delta u+\nabla P+n\nabla\phi,\ &x\in\Omega,& t>0,\\ \nabla\cdot u\ &=0,\ &x\in\Omega,& t>0, \end{array}\right. \end{align*} where $\Omega\subset\mathbb{R}^2$ is a bounded and convex domain with smooth boundary, $\phi\in W^{1,\infty}\left(\Omega\right)$ and $f\in C^1([0,\infty))$ satisfies $0\leq f(s)\leq K_0 s^\alpha$ for all $s\in[0,\infty)$, with $K_0>0$ and $\alpha\in(0,1]$. This system models the chemotactic movement of actively communicating cells in a slow-moving liquid. We will show that in the two-dimensional setting, for any $\alpha\in(0,1)$, the classical solution to this Keller-Segel-Stokes system is global and remains bounded for all times. } \\ {\textbf{Keywords:} chemotaxis, Keller-Segel, Stokes, chemotaxis-fluid interaction, global existence, boundedness}\\ {\textbf{MSC (2010):} 35K35 (primary), 35A01, 35Q35, 35Q92, 92C17} \end{abstract} \section{Introduction}\label{sec1:intro} \textbf{Keller-Segel models.}\quad Chemotaxis is the biological phenomenon of oriented movement of cells under the influence of a chemical signal substance. This process is known to play a large role in various biological applications (\cite{HP09}). One of the first mathematical models concerning chemotaxis was introduced by Keller and Segel to describe the aggregation of bacteria (see \cite{KS70} and \cite{KS71}).
A simple realization of a standard Keller-Segel system, which models the assumption that the cells are not only attracted by higher concentrations of the signal chemical but also produce the chemical themselves, can be expressed by \begin{align}\label{KS}\tag{$KS$} \left\{\begin{array}{r@{\,}l@{\quad}l@{\,}c} n_{t}\ &=\Delta n-\nabla\!\cdot(n\nabla c),\ &x\in\Omega,& t>0,\\ c_{t}\ &=\Delta c-c+n,\ &x\in\Omega,& t>0, \end{array}\right. \end{align} in a bounded domain $\Omega\subset\mathbb{R}^N$ with $N\geq1$. Herein, $n=n(x,t)$ denotes the unknown density of the involved cells and $c=c(x,t)$ the unknown concentration of the attracting chemical substance. The Keller-Segel system alone has been studied intensively in the last decades and a wide array of interesting properties, such as finite-time blow-up and spatial pattern formation, has been discovered (see also the surveys \cite{BBWT15},\cite{HP09},\cite{Ho03}). For instance, the Keller-Segel system obtained from \eqref{KS} with homogeneous Neumann boundary conditions, where $\Omega\subset\mathbb{R}^N$ is a ball, admits blow-up solutions for $N\geq2$ if the total initial mass of cells lies above a critical value (\cite{mizoguchi_winkler_13},\cite{win10jde}), while all solutions remain bounded when either $N=1$, or $N=2$ and the initial total mass of cells is below the critical value (\cite{OY01},\cite{NSY97}). Through its application to various biological contexts, many variants of the Keller-Segel model have been proposed over the years. In particular, adaptations of \eqref{KS} in the form of \begin{align}\label{kssens} \refstepcounter{gleichung} n_{t} =\Delta n-\nabla\!\cdot(nS(x,n,c)\cdot\nabla c),\quad x\in\Omega, t>0, \end{align} with a given chemotactic sensitivity function $S$, which can either be a scalar function or, more generally, a tensor-valued function (see e.g.
\cite{XO09-MSmodels}), for the first equation or \begin{align}\label{ksox} \refstepcounter{gleichung} c_{t} =\Delta c-ng(c),\quad x\in\Omega, t>0, \end{align} with given function $g$ for the second equation, have been studied. Both of these adjustments are known to have an influence on the boundedness of solutions to their respective systems. For instance, if we replace the first equation of \eqref{KS} with \eqref{kssens} for a scalar function $S$ satisfying $S(r)\leq C(1+r)^{-\gamma}$ for all $r\geq1$ and some $\gamma>1-\frac{2}{N}$, then all solutions to the corresponding Neumann problem are global and uniformly bounded. On the other hand, if $N\geq2$, $\Omega\subset\mathbb{R}^N$ is a ball and $S(r)>cr^{-\gamma}$ for some $\gamma<1-\frac{2}{N}$, then the solution may blow up (\cite{HoWin05_bvblowchemo}). Considering the adaptation of \eqref{KS} with \eqref{ksox} as second equation, which corresponds to the assumption that the cells consume some of the chemical instead of producing it, it was shown in \cite{TaoWin12_evsmooth} that for $N=2$ the corresponding Neumann problem possesses a bounded classical solution for suitably regular initial data, without any smallness condition. For $N=3$ it was proved that there exist global weak solutions which eventually become smooth and bounded after some waiting time. A combination of both adjustments, where $S$ is matrix-valued with non-trivial nondiagonal parts, was studied in \cite{win15_chemorot}. There it was shown that, under fairly general assumptions on $g$ and $S$, at least one global generalized solution exists. This result contains neither a restriction on the spatial dimension nor one on the size of the initial data.
One last adaptation of \eqref{KS} we would like to mention has only recently been studied thoroughly and concerns the system \begin{align}\label{KSa}\tag{$KS^\alpha$} \left\{\begin{array}{r@{\,}l@{\quad}l@{\,}c} n_{t}\ &=\Delta n-\nabla\!\cdot(n\nabla c),\ &x\in\Omega,& t>0,\\ c_{t}\ &=\Delta c-c+f(n),\ &x\in\Omega,& t>0, \end{array}\right. \end{align} with $f\in C^1\left([0,\infty)\right)$ satisfying $0\leq f(n)\leq K n^\alpha$ for any $n\geq0$ with $K>0$ and $\alpha>0$. In this setting it is known that the system \eqref{KSa} does not admit any blow-up solutions if $\alpha<\frac{2}{N}$ (\cite{liudongmei15_boundchemo}), but it remains an open question whether this exponent is indeed critical. Similar forms of $f(n)$ have been treated before, either in the linear case $f(n)=n$ (\cite{Mimura1996499}) or in (sub-)linear cases with an additional logistic growth term introduced to the first equation (e.g.\ \cite{Os02-chemologatract},\cite{Win10-chemolog},\cite{NaOs13}). \textbf{Chemotaxis-fluid systems.}\quad Nonetheless, one assumption is shared by all of these adapted Keller-Segel models. That is, only the cell density $n$ and the chemical concentration $c$ are unknown and all other system parameters are fixed. In particular, the models assume that there is no interaction between the cells and their surroundings. However, experimental observations indicate that chemotactic motion inside a liquid can be substantially influenced by the mutual interaction between cells and fluid. For instance, in \cite{tuval2005bacterial} the dynamical generation of patterns and emergence of turbulence in populations of aerobic bacteria suspended in sessile drops of water is reported, whereas examples involving instationary fluids are important in the context of broadcast spawning phenomena related to successful coral fertilization (\cite{coll1994chemical},\cite{miller1985demonstration}).
A model considering the chemotaxis-fluid interaction, building on experimental observations of Bacillus subtilis, was given in \cite{tuval2005bacterial}. In the system in question, the fluid velocity $u=u(x,t)$ and the associated pressure $P=P(x,t)$ are introduced as additional unknown quantities utilizing the incompressible Navier-Stokes equations. One of the first theoretical results concerning solvability in this context was obtained in \cite{lorz10}, where the local existence of weak solutions for $N\in\{2,3\}$ was established. This setting, however, involved signal consumption in the form of per-capita oxygen consumption of the bacteria, which corresponds to an equation of the form \eqref{ksox}. Since we want to focus on the case of signal production by the cells as realized in \eqref{KS}, a more suitable system in this context is the Keller-Segel-Navier-Stokes system \begin{align}\label{KSNS}\tag{$KSNS$} \left\{\begin{array}{r@{\,}r@{\,}r@{\,}l@{\quad}l@{\,}c} n_{t}\,&+\,&u\cdot\!\nabla n\ &=\Delta n-\nabla\!\cdot(n\nabla c),\ &x\in\Omega,& t>0,\\ c_{t}\,&+\,&u\cdot\!\nabla c\ &=\Delta c-c+n,\ &x\in\Omega,& t>0,\\ u_{t}\,&+\,&u\cdot\!\nabla u\ &=\Delta u+\nabla P+n\nabla\phi,\ &x\in\Omega,& t>0,\\ &&\dive u\ &=0,\ &x\in\Omega,& t>0, \end{array}\right. \end{align} where the fluid is supposed to be driven by forces induced by the fixed gravitational potential $\phi$ and transports both the cells and the chemical. The mathematical analysis of \eqref{KSNS} regarding global and bounded solutions is far from trivial, as on the one hand its Navier-Stokes subsystem lacks a complete existence theory (\cite{Wie99-NS}) and on the other hand the previously mentioned properties of the Keller-Segel system can still emerge.
In order to lessen the analytical effort necessary, a commonly made simplification is to assume that the fluid flow is comparatively slow, so that the fluid velocity evolution may be described by the Stokes equation, i.e.\ by neglecting the convective term $u\cdot\nabla u$, rather than by the full Navier-Stokes system. Of course, all alterations to \eqref{KS} described above can be included as adjustments to the systems in this Keller-Segel(-Navier)-Stokes setting as well. Their influence on global and bounded solutions is one focal point of recent studies. For instance, an adjustment making use of both sensitivity and chemical consumption has been applied to Keller-Segel-Stokes systems in \cite{win15_globweak3d}, where for scalar-valued sensitivity functions $S$ the existence of global weak solutions for bounded three-dimensional domains has been established. Building on this existence result, it was shown in \cite{win15_chemonavstokesfinal} that the generalized solution approaches a spatially homogeneous steady state under fairly weak assumptions imposed on the parameter functions $S$ and $g$. Under similar assumptions the existence of global weak solutions for suitable non-linear diffusion types has been proven in \cite{francescolorz10}, and the existence of global bounded weak solutions, even for matrix-valued $S$ without a decay assumption, in \cite{win_ct_fluid_3d}. A Keller-Segel-Stokes system corresponding to the adjustment made to \eqref{KS} by only making use of rotational sensitivity was studied in \cite{Wang20157578}, where it was shown that the Neumann problem for the Keller-Segel-Stokes system possesses a unique global classical solution which remains bounded for all times, if we assume $S$ to satisfy $|S(x,n,c)|\leq C_S(1+n)^{-\alpha}$ with $C_S>0$ for some $\alpha>0$.
Regarding the introduction of the additional logistic growth term $+rn-\mu n^2$ with $r\geq0$ and $\mu>0$ to the first equation, it was shown in \cite[Theorem 1.1]{tao_winkler15_zampfinal} for space dimension $N=3$ that every solution remains bounded if $\mu\geq23$, and thus any blow-up phenomena are excluded. Moreover, these solutions tend to zero (\cite[Theorem 1.2]{tao_winkler15_zampfinal}). Some of these results have in part been transferred to the full chemotaxis-Navier-Stokes system. These include global existence of classical solutions for $N=2$ with scalar-valued sensitivity (\cite{win_fluid_final}), large time behavior and eventual smoothness of such solutions (\cite{win15_chemonavstokesfinal}) and even global existence of mild solutions to double chemotaxis systems under the effect of an incompressible viscous fluid (\cite{kozono15}). Boundedness results with matrix-valued sensitivity without decay requirements but for small initial data have been discussed in \cite{caolan16_smalldatasol3dnavstokes} and boundedness results under the influence of a logistic growth term in \cite{tao_winkler_non2015}. \textbf{Main results.}\quad Most of the results stated above are concerned with the chemical consumption version of the chemotaxis model (\cite{Wang20157578} and \cite{tao_winkler_non2015} being the exceptions). To the best of our knowledge, the Stokes variant of chemotaxis-fluid interaction has only been discussed outside of the chemical consumption case either by introducing a logistic growth term as in \cite{tao_winkler_non2015} or by taking a more general chemotactic sensitivity as in \cite{Wang20157578}. Motivated by this fact and the result of \cite{liudongmei15_boundchemo} for \eqref{KSa} mentioned above, we are now interested in whether the influence of a coupled slow-moving fluid described by the Stokes equation affects the possible choice for $\alpha\in(0,1)$, while still maintaining the exclusion of possible unbounded solutions.
Henceforth, we will consider that the evolution of $(n,c,u,P)$ is governed by the Keller-Segel-Stokes system \begin{align}\label{KSaS}\tag{$KS^{\alpha}S$} \left\{\begin{array}{r@{\,}l@{\quad}l@{\,}c} n_{t}\ &=\Delta n-\nabla\!\cdot(n\nabla c)-u\cdot\!\nabla n,\ &x\in\Omega,& t>0,\\ c_{t}\ &=\Delta c-c+f(n)-u\cdot\!\nabla c,\ &x\in\Omega,& t>0,\\ u_{t}\ &=\Delta u+\nabla P+n\nabla\phi,\ &x\in\Omega,& t>0,\\ \dive u\ &=0,\ &x\in\Omega,& t>0, \end{array}\right. \end{align} where $\Omega\subset\mathbb{R}^2$ is a bounded and smooth domain and $f\in C^1([0,\infty))$ satisfies \begin{align}\label{fprop} \refstepcounter{gleichung} 0\leq f(s)\leq K_0 s^\alpha\quad\mbox{for all }s\in[0,\infty) \end{align} with some $\alpha\in(0,1]$ and $K_0>0$. We shall examine this system along with no-flux boundary conditions for $n$ and $c$ and a no-slip boundary condition for $u$, \begin{align}\label{bcond} \refstepcounter{gleichung} \frac{\partial n}{\partial\nu}=\frac{\partial c}{\partial\nu}=0\quad\mbox{and}\quad u=0\qquad \mbox{for }x\in\romega\mbox{ and }t>0, \end{align} and initial conditions \begin{align}\label{idcond} \refstepcounter{gleichung} n(x,0)=n_0(x),\quad c(x,0)=c_0(x),\quad u(x,0)=u_0(x),\quad x\in\Omega. \end{align} For simplicity we will assume $\phi\in\W[1,\infty]$ and that for some $\theta>2$ and $\delta\in(\frac{1}{2},1)$ the initial data satisfy the regularity and positivity conditions \begin{align}\label{idreg} \refstepcounter{gleichung} \begin{cases}&n_0\in \CSp{0}{\bomega}\mbox{ with }n_0> 0\mbox{ in }\bomega,\\ &c_0\in\W[1,\theta]\mbox{ with }c_0> 0\mbox{ in }\bomega,\\ &u_0\in \DA,\end{cases} \end{align} where here and below $A^\delta$ denotes the fractional power of the Stokes operator $A:=-\mathcal{P}\Delta$ in $\Lo[2]$ regarding homogeneous Dirichlet boundary conditions, with the Helmholtz projection $\mathcal{P}$ from $\Lo[2]$ to the solenoidal subspace $L^2_\sigma(\Omega):=\left\{\left.\varphi\in\Lo[2]\right\vert\dive\varphi=0\right\}$.
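For illustration, one possible choice of a production term satisfying \eqref{fprop} (given here merely as an example) is
\begin{align*}
f(s):=K_0\,\frac{s}{(1+s)^{1-\alpha}},\qquad s\geq0,
\end{align*}
which belongs to $C^1([0,\infty))$ since $(1+s)^{1-\alpha}$ is smooth and positive on $[0,\infty)$, and which, due to $(1+s)^{1-\alpha}\geq s^{1-\alpha}$, satisfies $0\leq f(s)\leq K_0\,s\cdot s^{-(1-\alpha)}=K_0 s^\alpha$ for all $s>0$ as well as $f(0)=0$. Note that for $\alpha<1$ the pure power $s\mapsto K_0 s^\alpha$ is covered by \eqref{fprop} as an upper bound, but fails itself to be differentiable at $s=0$.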
In this framework we can state our main result in the following way: \begin{theo}\label{Thm:globEx} Let $\theta>2$, $\delta\in(\frac{1}{2},1)$ and $\Omega\subset\mathbb{R}^2$ be a bounded and convex domain with smooth boundary. Assume $\phi\in\W[1,\infty]$ and that $n_0,c_0$ and $u_0$ comply with \eqref{idreg}. Then for any $\alpha\in(0,1)$, the PDE system \eqref{KSaS} coupled with boundary conditions \eqref{bcond} and initial conditions \eqref{idcond} possesses a solution $(n,c,u,P)$ satisfying \begin{align*} \begin{cases} n\in\CSp{0}{\bomega\times[0,\infty)}\cap\CSp{2,1}{\bomega\times(0,\infty)},\\ c\in\CSp{0}{\bomega\times[0,\infty)}\cap\CSp{2,1}{\bomega\times(0,\infty)},\\ u\in\CSp{0}{\bomega\times[0,\infty)}\cap\CSp{2,1}{\bomega\times(0,\infty)},\\ P\in\CSp{1,0}{\bomega\times[0,\infty)}, \end{cases} \end{align*} which solves \eqref{KSaS} in the classical sense and remains bounded for all times. This solution is unique within the class of functions which for all $T\in(0,\infty)$ satisfy the regularity properties \begin{align}\label{locExUClass} \refstepcounter{gleichung} \begin{cases} n\in\CSp{0}{[0,T);\Lo[2]}\cap\LSp{\infty}{(0,T);\CSp{0}{\bomega}}\cap\CSp{2,1}{\bomega\times(0,T)},\\ c\in\CSp{0}{[0,T);\Lo[2]}\cap\LSp{\infty}{(0,T);\W[1,\theta]}\cap\CSp{2,1}{\bomega\times(0,T)},\\ u\in\CSp{0}{[0,T);\Lo[2]}\cap\LSp{\infty}{(0,T);\DA}\cap\CSp{2,1}{\bomega\times(0,T)},\\ P\in \LSp{1}{(0,T);\W[1,2]}, \end{cases} \end{align} up to addition of functions $\hat{p}$ to $P$, such that $\hat{p}(\cdot,t)$ is constant for any $t\in(0,\infty)$. \end{theo} In view of Theorem \ref{Thm:globEx}, there is no evident difference regarding $\alpha$ between the coupled system \eqref{KSaS} and the chemotaxis system without fluid \eqref{KSa} for dimension $N=2$. In Section \ref{sec1:lEx} we will briefly discuss local existence of classical solutions and basic a priori estimates. 
Section \ref{sec2:regu} is dedicated to the connection between the regularity of $n$ and the regularity of the spatial derivative of $u$, which plays a crucial part in obtaining additional information on the regularity of $c$. In Section \ref{sec3:N2} we will combine standard testing procedures with the results from the previous sections to prove global existence and boundedness of classical solutions to \eqref{KSaS}. \setcounter{gleichung}{0} \section{Local existence of classical solutions}\label{sec1:lEx} The following lemma concerning the local existence of classical solutions, as well as an extensibility criterion, can be proven with exactly the same steps demonstrated in \cite[Lemma 2.1]{win_fluid_final} and \cite[Lemma 2.1]{tao_winkler_chemohapto11siam}. \begin{lem}[Local existence of classical solutions]\label{Lem:locEx} Let $\theta>2$, $\delta\in(\frac{1}{2},1)$ and $\Omega\subset\mathbb{R}^2$ be a bounded and convex domain with smooth boundary. Suppose $\phi\in\W[1,\infty]$ and that $n_0,c_0$ and $u_0$ satisfy \eqref{idreg}. Then there exist $T_{max}\in(0,\infty]$ and functions $(n,c,u,P)$ satisfying \begin{align*} \begin{cases} n\in\CSp{0}{\bomega\times[0,T_{max})}\cap\CSp{2,1}{\bomega\times(0,T_{max})},\\ c\in\CSp{0}{\bomega\times[0,T_{max})}\cap\CSp{2,1}{\bomega\times(0,T_{max})},\\ u\in\CSp{0}{\bomega\times[0,T_{max})}\cap\CSp{2,1}{\bomega\times(0,T_{max})},\\ P\in\CSp{1,0}{\bomega\times[0,T_{max})}, \end{cases} \end{align*} that solve \eqref{KSaS} with \eqref{bcond} and \eqref{idcond} in the classical sense in $\Omega\times(0,T_{max})$. Moreover, we have $n>0$ and $c>0$ in $\bomega\times[0,T_{max})$ and the alternative \begin{align}\label{lExAlt} \refstepcounter{gleichung} \mbox{either }\,T_{max}=\infty\,\mbox{ or }\,\|n(\cdot,t)\|_{\Lo[\infty]}+\|c(\cdot,t)\|_{\W[1,\theta]}+\|A^{\delta}u(\cdot,t)\|_{\Lo[2]}\to\infty\ \mbox{as }t\nearrow T_{max}.
\end{align} This solution is unique among all functions satisfying \eqref{locExUClass} for all $T\in(0,T_{max})$, up to addition to the pressure $P$ of functions $\hat{p}$ such that $\hat{p}(\cdot,t)$ is constant for any $t\in(0,T)$. \end{lem} With local existence at hand, we can immediately prove two elementary properties, which will be the starting point for all of our regularity results to come. \begin{lem}\label{Lem:masscons} Under the assumptions of Lemma \ref{Lem:locEx}, the solution of \eqref{KSaS} satisfies \begin{align}\label{masscons-n} \refstepcounter{gleichung} \int_{\Omega}\! n(x,t)\intd x=\int_{\Omega}\! n_0=:m\quad\mbox{for all}\ t\in(0,T_{max}) \end{align} and there exists a constant $C>0$ such that \begin{align}\label{L1-bound-c} \refstepcounter{gleichung} \int_{\Omega}\! c(x,t)\intd x\leq C\quad\mbox{for all}\ t\in(0,T_{max}). \end{align} \end{lem} \begin{bew} The first property follows immediately from simple integration of the first equation in \eqref{KSaS}. For \eqref{L1-bound-c} we integrate the second equation of \eqref{KSaS} and recall \eqref{fprop} to obtain \begin{align*} \frac{\intd}{\intd t}\int_{\Omega}\! c+\int_{\Omega}\! c\leq K_0\int_{\Omega}\! n^\alpha\quad\mbox{for all }t\in(0,T_{max}). \end{align*} Hence, making use of the Hölder inequality, \eqref{masscons-n} and the fact $\alpha\leq1$, $y(t)=\int_{\Omega}\! c(x,t)\intd x$ satisfies the ODI \begin{align*} y'(t)+y(t)\leq C_1\|n_0\|_{\Lo[1]}^{\alpha}=C_2\quad\mbox{for all }t\in(0,T_{max}) \end{align*} with $C_1:=K_0|\Omega|^{1-\alpha}>0$ and $C_2:=C_1m^\alpha>0$ in view of \eqref{idreg}. Upon integration we infer \begin{align*} y(t)\leq y(0)e^{-t}+C_2\left(1-e^{-t}\right)\quad\mbox{for all }t\in(0,T_{max}), \end{align*} which, due to the assumed regularity of $c_0$ in \eqref{idreg}, completes the proof.
\end{bew} \setcounter{gleichung}{0} \section{Regularity of u implied by regularity of n}\label{sec2:regu} Let us recall that $\mathcal{P}$ denotes the Helmholtz projection from $\Lo[2]$ to the subspace $L^2_\sigma\left(\Omega\right)=\left\{\varphi\in\Lo[2]\,\vert\,\dive\varphi=0\right\}$ and $A:=-\mathcal{P}\Delta$ denotes the Stokes operator in $\Lo[2]$ under homogeneous Dirichlet boundary conditions. For now we limit our observations to a projected version of the Stokes subsystem $\frac{\intd}{\intd t}u+Au=\mathcal{P}\left(n\nabla\phi\right)$ in \eqref{KSaS} without regard for the rest of the system. In contrast to the setting with the full Navier-Stokes equations we can make use of the absence of the convective term $(u\cdot\nabla)u$ in the Stokes equation to gain results concerning the regularity of the spatial derivative $Du$ based on the regularity of the term $\mathcal{P}\left(n\nabla\phi\right)$, which in fact solely depends on the regularity of $n$, due to the assumed boundedness of $\nabla\phi$. In \cite[Lemma 2.4]{Wang20157578} this correlation between the regularity of $u$ and $n$ is proven in space dimension $N=2$. The proof of \cite[Lemma 2.4]{Wang20157578} is based on an approach employed in \cite[Section 3.1]{win_ct_fluid_3d}, which makes use of general results for sectorial operators shown in \cite{fr69}, \cite{hen81} and \cite{gig81} and mainly relies on an embedding of the domains of fractional powers $D\left(A^\beta\right)$ into $\Lo[p]$, see \cite[Theorem 1.6.1]{hen81} or \cite[Theorem 3]{gig81}, for instance. Since we are only working in two-dimensional domains we will only state the result from \cite[Lemma 2.4]{Wang20157578} here and refer the reader to \cite[Corollary 3.4]{win_ct_fluid_3d} and \cite[Lemma 2.5]{Wang20157578} for the remaining details regarding the proof. 
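For the reader's orientation, the core of this argument can be sketched as follows (a heuristic outline along the lines of the cited proofs, not a complete argument; in particular we gloss over the treatment of small $p$, for which we refer to the cited works). Starting from the variation-of-constants representation of the projected Stokes subsystem,
\begin{align*}
u(\cdot,t)=e^{-tA}u_0+\int_0^t e^{-(t-s)A}\mathcal{P}\big(n(\cdot,s)\nabla\phi\big)\,\intd s,\qquad t\in(0,T),
\end{align*}
a bound for $n$ in $\Lo[p]$ yields, via smoothing estimates of the form $\|A^\beta e^{-tA}\mathcal{P}\varphi\|_{\Lo[2]}\leq C t^{-\beta-(\frac{1}{p}-\frac{1}{2})}e^{-\mu t}\|\varphi\|_{\Lo[p]}$ for the analytic semigroup generated by the Stokes operator in two space dimensions, a bound for $A^\beta u$ in $\Lo[2]$ whenever the time singularity is integrable, that is whenever $\beta+\frac{1}{p}-\frac{1}{2}<1$. Combined with the embedding $D(A^\beta)\hookrightarrow W^{1,r}(\Omega)$, which in the two-dimensional setting essentially requires $\beta\geq1-\frac{1}{r}$, this leads precisely to the restriction $r<\frac{2p}{2-p}$ for $p\leq2$ imposed below.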
\begin{lem}\label{Lem:u_w1r_from_n_lp} Let $p\in[1,\infty)$ and $r\in[1,\infty]$ be such that \begin{align*} \begin{cases} r<\frac{2p}{2-p}\quad&\mbox{if }p\leq 2,\\ r\leq\infty\quad&\mbox{if }p>2. \end{cases} \end{align*} Furthermore, let $T>0$ be such that $n:\Omega\times(0,T)\to\mathbb{R}$ satisfies \begin{align*} \|n(\cdot,t)\|_{\Lo[p]}\leq \eta\quad\mbox{for all }t\in(0,T), \end{align*} with some $\eta>0$. Then for $u_0\in\DA$ with $\delta\in\left(\frac{1}{2},1\right)$ and $\phi\in\W[1,\infty]$ all solutions $u$ of the third and fourth equations in \eqref{KSaS} fulfill \begin{align*} \|Du(\cdot,t)\|_{\Lo[r]}\leq C\quad\mbox{for all }t\in(0,T), \end{align*} with a constant $C=C(p,r,\eta,u_0,\phi)>0$. \end{lem} Evidently, once a bound for $n$ is at hand, we immediately obtain the desired boundedness of $u$ in view of Sobolev embeddings. Nevertheless, since we only have the time-independent $L^1$--bound of $n$ from Lemma \ref{Lem:masscons} as a starting point, obtaining a bound for $n$ in $\Lo[p]$ with suitably large $p>1$ will require additional work. \setcounter{gleichung}{0} \section{Global existence and boundedness in two-dimensional domains}\label{sec3:N2} For this section we fix $\theta>2,\delta\in(\frac{1}{2},1)$ and initial data satisfying \eqref{idreg}. In particular, this ensures that all requirements of Lemma \ref{Lem:locEx} are met. Let $(n,c,u,P)$ denote the solution given by Lemma \ref{Lem:locEx} and $T_{max}$ its maximal time of existence. Making use of the connection between the regularity of $u$ and $n$ discussed in the previous section, we immediately obtain \begin{prop}\label{Prop:u_bound_N2_all_p} For all $r<2$ and all $q<\infty$ there exist constants $C_1>0$ and $C_2>0$ such that the solution to \eqref{KSaS} satisfies \begin{align*} \|Du(\cdot,t)\|_{\Lo[r]}\leq C_1\quad\mbox{for all }t\in(0,T_{max}) \end{align*} and \begin{align*} \|u(\cdot,t)\|_{\Lo[q]}\leq C_2\quad\mbox{for all }t\in(0,T_{max}).
\end{align*} \end{prop} \begin{bew} Due to \eqref{masscons-n} and the regularity of $n_0$ we can find $C_3>0$ such that $\|n(\cdot,t)\|_{\Lo[1]}=\|n_0\|_{\Lo[1]}\leq C_3$ holds for all $t\in(0,T_{max})$. Thus, we may apply Lemma \ref{Lem:u_w1r_from_n_lp} with $p=1$ to obtain for any $r<2$ that $\|Du(\cdot,t)\|_{\Lo[r]}\leq C_1\ \mbox{for all }t\in(0,T_{max})$ with some $C_1>0$. The second claim then follows immediately from the Sobolev embedding theorem (\cite[Theorem 5.6.6]{evans}). \end{bew} \subsection{Obtaining first information on the gradient of c}\label{ssec32:reg_c} In order to derive the bounds necessary in our approach towards the boundedness result, we require an estimate on the gradient of $c$ as a starting point. To obtain first information in this matter, we apply standard testing procedures to derive an energy inequality involving integrals of $n\ln n$ and $|\nabla c|^2$. But first, let us briefly recall Young's inequality in order to fix notation. \begin{lem}\label{young} Let $a,b,\varepsilon>0$ and $1<p,q<\infty$ with $\frac{1}{p}+\frac{1}{q}=1$. Then \begin{align*} ab\leq\varepsilon a^p+C(\varepsilon,p,q) b^q, \end{align*} where $C(\varepsilon,p,q)=(\varepsilon p)^{-\frac{q}{p}}q^{-1}$. \end{lem} Before we derive an inequality for the time evolution of $\int_{\Omega}\! n\ln n$ we employ the Gagliardo-Nirenberg inequality\ to show one simple preparatory lemma on which we will rely multiple times later on. \begin{lem}\label{Lem:ngnb} Let $\Omega\subset\mathbb{R}^2$ be a bounded domain with smooth boundary. Let $r\geq1$ and $s\geq1$. Then for any $\eta>0$ there exists $C>0$ such that \begin{align*} \int_{\Omega}\!|\varphi|^{rs}\leq C\left(\int_{\Omega}\!|\nabla(|\varphi|^\nfrac{r}{2})|^2\right)^{\frac{(rs-1)}{r}}+C \end{align*} holds for all functions $\varphi\in\Lo[1]$ satisfying $\nabla(|\varphi|^{\nfrac{r}{2}})\in\Lo[2]$ and $\int_{\Omega}\!|\varphi|\leq\eta$.
\end{lem} \begin{bew} By an application of the Gagliardo-Nirenberg inequality\ (see \cite[Lemma 2.3]{lankchapto15} for a version including integrability exponents less than $1$) we can pick $C_1>0$ such that \begin{align*} \int_{\Omega}\!|\varphi|^{rs}=\||\varphi|^\nfrac{r}{2}\|_{\Lo[2s]}^{2s}\leq C_1\|\nabla(|\varphi|^\nfrac{r}{2})\|_{\Lo[2]}^{2sa}\||\varphi|^\nfrac{r}{2}\|_{\Lo[\nfrac{2}{r}]}^{2s(1-a)}+C_1\||\varphi|^\nfrac{r}{2}\|_{\Lo[\nfrac{2}{r}]}^{2s} \end{align*} holds for all $\varphi\in\Lo[1]$ with $\nabla(|\varphi|^{\nfrac{r}{2}})\in\Lo[2]$, with $a\in(0,1)$ provided by \begin{align*} a=\frac{\frac{r}{2}-\frac{1}{2s}}{\frac{r}{2}+\frac{1}{2}-\frac{1}{2}}=1-\frac{1}{rs}. \end{align*} Since $\int_{\Omega}\!|\varphi|\leq\eta$ we have $\||\varphi|^\nfrac{r}{2}\|_{\Lo[\nfrac{2}{r}]}=\left(\int_{\Omega}\!|\varphi|\right)^\nfrac{r}{2}\leq\eta^\nfrac{r}{2}$ and thus \begin{align*} \int_{\Omega}\!|\varphi|^{rs}\leq C_2\left(\int_{\Omega}\!|\nabla(|\varphi|^\nfrac{r}{2})|^2\right)^{\frac{(rs-1)}{r}}+C_2 \end{align*} for all $\varphi\in\Lo[1]$ satisfying $\nabla(|\varphi|^{\nfrac{r}{2}})\in\Lo[2]$, where $C_2=C_1\max\{\eta,\eta^{rs}\}>0$. \end{bew} The particular form in which we will need this inequality most often is the following: \begin{cor}\label{cor:ngnb} There exists a constant $K_1>0$ such that the solution of \eqref{KSaS} fulfills \begin{align*} \int_{\Omega}\! n^2\leq K_1\int_{\Omega}\!|\nabla(n^\nfrac{1}{2})|^2+K_1 \end{align*} for all $t\in(0,T_{max})$. \end{cor} Testing the first equation in \eqref{KSaS} with $\ln n$ yields the following estimate. \begin{lem}\label{Lem:n-energy} There exists a constant $K_2>0$ such that the solution of \eqref{KSaS} fulfills \begin{align}\label{n-energy} \refstepcounter{gleichung} \frac{\intd}{\intd t}\int_{\Omega}\! n\ln n +\int_{\Omega}\!|\nabla( n^{\nfrac{1}{2}})|^2\leq K_2\int_{\Omega}\!\left|\Delta c\right|^2+K_2\quad\mbox{for all }t\in(0,T_{max}).
\end{align} \end{lem} \begin{bew} Making use of \eqref{masscons-n} and $\dive u=0$ in $\Omega$, multiplication of the first equation in \eqref{KSaS} with $\ln n$ and integration by parts yield \begin{align}\label{n-energy-proof-eq1} \refstepcounter{gleichung} \frac{\intd}{\intd t}\int_{\Omega}\! n\ln n+\int_{\Omega}\!\frac{|\nabla n|^2}{n}=\int_{\Omega}\!\nabla c\divdot\nabla n\quad\mbox{for all }t\in(0,T_{max}). \end{align} To further estimate the right hand side, we first let $K_1>0$ be as in Corollary \ref{cor:ngnb}. Then, integrating the right hand side of \eqref{n-energy-proof-eq1} by parts once more and applying Young's inequality with $p=q=2$ and $\varepsilon=\frac{3}{K_1}$ (see Lemma \ref{young}) and Corollary \ref{cor:ngnb}, we obtain \begin{align*} \frac{\intd}{\intd t}\int_{\Omega}\! n\ln n+4\int_{\Omega}\!|\nabla(n^{\nfrac{1}{2}})|^2&\leq\frac{3}{K_1}\int_{\Omega}\! n^2+C_1\int_{\Omega}\!|\Delta c|^2\\ &\leq\frac{3}{K_1}\left(K_1\int_{\Omega}\!|\nabla(n^{\nfrac{1}{2}})|^2+K_1\right)+C_1\int_{\Omega}\!|\Delta c|^2 \end{align*} for all $t\in(0,T_{max})$ and some $C_1>0$. Reordering the terms appropriately completes the proof with $K_2:=\max\{3,C_1\}$. \end{bew} The second of the separate inequalities treats the time evolution of $\int_{\Omega}\!|\nabla c|^2$. \begin{lem}\label{Lem:c-energy} Given any $\xi>0$, there exists a constant $K_3>0$ such that \begin{align}\label{c-energy} \refstepcounter{gleichung} \frac{\xi}{2}\frac{\intd}{\intd t}\int_{\Omega}\! |\nabla c|^2 +\frac{\xi}{4}\int_{\Omega}\!|\Delta c|^2+ \xi\int_{\Omega}\!|\nabla c|^2\leq \frac{1}{2}\int_{\Omega}\!|\nabla(n^{\nfrac{1}{2}})|^2+K_3 \end{align} holds for all $t\in(0,T_{max})$. \end{lem} \begin{bew} Testing the second equation of \eqref{KSaS} with $-\xi\Delta c$ and integrating by parts we obtain \begin{align*} \frac{\xi}{2}\frac{\intd}{\intd t}\int_{\Omega}\!|\nabla c|^2 +\xi\int_{\Omega}\!|\Delta c|^2+\xi\int_{\Omega}\!|\nabla c|^2=-\xi\int_{\Omega}\! 
f(n)\Delta c+\xi\int_{\Omega}\!\Delta c\nabla c\cdot u \end{align*} for all $t\in(0,T_{max})$. An application of Young's inequality to both integrals on the right side therefore implies that \begin{align}\label{c-energy-eq0} \refstepcounter{gleichung} \frac{\xi}{2}\frac{\intd}{\intd t}\int_{\Omega}\!|\nabla c|^2+\xi\int_{\Omega}\!|\Delta c|^2+\xi\int_{\Omega}\!|\nabla c|^2 \leq \xi\int_{\Omega}\! f(n)^{2}+\frac{\xi}{2}\int_{\Omega}\!|\Delta c|^2+\xi\int_{\Omega}\!|\nabla c|^2 |u|^2 \end{align} holds for all $t\in(0,T_{max})$. We fix $q>2$ and make use of the Hölder inequality to see that \begin{align}\label{c-energy-eq1} \refstepcounter{gleichung} \xi\int_{\Omega}\!|\nabla c|^2 |u|^2\leq \xi\|\nabla c\|_{\Lo[\frac{2q}{q-2}]}^2\|u \|_{\Lo[q]}^2 \end{align} is valid for all $t\in(0,T_{max})$. An application of the Gagliardo-Nirenberg inequality\ combined with \cite[Theorem 3.4]{sima90m} allows us to further estimate \begin{align*} \|\nabla c\|_{\Lo[\frac{2q}{q-2}]}^2&\leq C_1\|\Delta c\|_{\Lo[2]}^{\frac{4q+4}{3q}}\| c\|_{\Lo[1]}^{\frac{2q-4}{3q}}+C_1\|c\|_{\Lo[1]}^2\\ &\leq C_2\|\Delta c\|_{\Lo[2]}^{\frac{4}{3}+\frac{4}{3q}}+C_2\quad\mbox{for all }t\in(0,T_{max}) \end{align*} for some $C_1>0$ and $C_2>0$ in view of \eqref{L1-bound-c}. Plugging this into \eqref{c-energy-eq1} and recalling Proposition \ref{Prop:u_bound_N2_all_p}, we thus find $C_3>0$ such that \begin{align*} \xi\int_{\Omega}\!|\nabla c|^2|u|^2\leq C_3\|\Delta c\|_{\Lo[2]}^{\frac{4}{3}+\frac{4}{3q}}+C_3\quad\mbox{for all }t\in(0,T_{max}). \end{align*} Since $q>2$, we have $\frac{4}{3}+\frac{4}{3q}<2$ and may apply Young's inequality to obtain \begin{align}\label{c-energy-eq2} \refstepcounter{gleichung} \xi\int_{\Omega}\!|\nabla c|^2|u|^2\leq \frac{\xi}{4}\|\Delta c\|_{\Lo[2]}^2+C_4, \end{align} for some $C_4>0$ and all $t\in(0,T_{max})$. To estimate the term containing $f(n)^{2}$ in \eqref{c-energy-eq0} we let $K_1$ denote the positive constant from Corollary \ref{cor:ngnb}. 
Then, recalling \eqref{fprop} and making use of the fact $\alpha<1$, an application of Young's inequality yields $C_5>0$ fulfilling $\xi f(n)^{2}\leq \frac{1}{2K_1}n^2+C_5$ for all $(x,t)\in\Omega\times(0,T_{max})$ and thus, by Corollary \ref{cor:ngnb}, \begin{align}\label{c-energy-eq3} \refstepcounter{gleichung} \xi\int_{\Omega}\! f(n)^{2}\leq \frac{1}{2K_1}\int_{\Omega}\! n^2+C_5|\Omega|\leq \frac{1}{2}\int_{\Omega}\!|\nabla(n^\nfrac{1}{2})|^2+C_6\quad\mbox{for all }t\in(0,T_{max}) \end{align} with $C_6:=\frac{1}{2}+C_5|\Omega|$. Combining \eqref{c-energy-eq0}, \eqref{c-energy-eq2} and \eqref{c-energy-eq3} completes the proof. \end{bew} Before we are able to combine the previous lemmata to derive an ODI appropriate for our purpose, we require one additional result, which is a corollary of Lemma \ref{Lem:ngnb}. \begin{cor}\label{cor:ln_n-gradbound} There exists a constant $K_4>0$ such that the solution to \eqref{KSaS} obeys \begin{align*} \frac{1}{2}\int_{\Omega}\!|\nabla(n^{\nfrac{1}{2}})|^2\geq K_4\int_{\Omega}\! n\ln n -\frac{1}{2}\quad \mbox{for all }t\in(0,T_{max}). \end{align*} \end{cor} \begin{bew} In view of the pointwise inequality $x\ln x\leq x^2$ for $x\in(0,\infty)$ and the positivity of $n$ ascertained in Lemma \ref{Lem:locEx}, we have $n\ln n\leq n^2$ in $\Omega\times(0,T_{max})$, and thus an application of Corollary \ref{cor:ngnb} immediately shows that there exists $C_1>0$ such that \begin{align*} \int_{\Omega}\! n\ln n\leq\int_{\Omega}\! n^2\leq C_1\|\nabla(n^{\nfrac{1}{2}})\|_{\Lo[2]}^2+C_1 \end{align*} holds for all $t\in(0,T_{max})$. Therefore, multiplying by $K_4:=\frac{1}{2C_1}$ and reordering the terms appropriately proves the asserted inequality. \end{bew} Adding up suitable multiples of the differential inequalities in Lemma \ref{Lem:n-energy} and Lemma \ref{Lem:c-energy}, we obtain a first bound on the gradient of $c$.
\begin{prop}\label{Prop:grad_c2-bound} There exists a constant $C>0$ such that the solution of \eqref{KSaS} fulfills \begin{align}\label{grad_c2-bound} \refstepcounter{gleichung} \int_{\Omega}\!|\nabla c|^2\leq C\quad\mbox{for all }t\in(0,T_{max}). \end{align} \end{prop} \begin{bew} Letting $K_2$ denote the positive constant from Lemma \ref{Lem:n-energy}, we set $\xi=4K_2+4$ and then let $K_3>0$ be the corresponding constant given by Lemma \ref{Lem:c-energy}. With the constants defined this way, we know that the inequality \begin{align}\label{grad_c2_sptemp-eq1} \refstepcounter{gleichung} (2K_2+2)\frac{\intd}{\intd t}\int_{\Omega}\!|\nabla c|^2+(K_2+1)\int_{\Omega}\!|\Delta c|^2+(4K_2+4)\int_{\Omega}\!|\nabla c|^2\leq\frac{1}{2}\int_{\Omega}\!|\nabla(n^\nfrac{1}{2})|^2+K_3, \end{align} holds for all $t\in(0,T_{max})$ due to Lemma \ref{Lem:c-energy}. Thus, adding up \eqref{n-energy} and \eqref{grad_c2_sptemp-eq1} we obtain \begin{align*} \frac{\intd}{\intd t}\bigg(\int_{\Omega}\! n\ln n+(2K_2+2)\int_{\Omega}\!|\nabla c|^2\bigg)+\frac{1}{2}\int_{\Omega}\!|\nabla(n^\nfrac{1}{2})|^2+\int_{\Omega}\!|\Delta c|^2+(4K_2+4)\int_{\Omega}\!|\nabla c|^2\leq C_1 \end{align*} for all $t\in(0,T_{max})$ with $C_1=K_2+K_3>0$. By Corollary \ref{cor:ln_n-gradbound} we can estimate $\frac{1}{2}\int_{\Omega}\!|\nabla(n^\nfrac{1}{2})|^2$ from below to obtain \begin{align*} \frac{\intd}{\intd t}\bigg(\int_{\Omega}\! n\ln n+(2K_2+2)\int_{\Omega}\!|\nabla c|^2\bigg)+K_4\int_{\Omega}\! n\ln n+\!\int_{\Omega}\!|\Delta c|^2+2(2K_2+2)\!\int_{\Omega}\!|\nabla c|^2\leq C_2 \end{align*} for all $t\in(0,T_{max})$, with $K_4>0$ as in Corollary \ref{cor:ln_n-gradbound} and $C_2=C_1+\frac{1}{2}>0$. Dropping the non-negative term involving $|\Delta c|^2$, this implies that $y(t):=\int_{\Omega}\! n\ln n+(2K_2+2)\int_{\Omega}\!|\nabla c|^2$, $t\in[0,T_{max})$ satisfies \begin{align*} y'(t)+C_3y(t)\leq C_2\quad\mbox{for all }t\in(0,T_{max}), \end{align*} where $C_3:=\min\left\{K_4,2\right\}>0$.
Upon an ODE comparison, this leads to the boundedness of $y$ and hence \eqref{grad_c2-bound}, due to $n\ln n$ being bounded from below by the positivity of $n$. \end{bew} \subsection{Further testing procedures}\label{ssec:33:testing} The $L^2$--bound of the gradient of $c$ proven in the previous proposition will be our starting point in improving the regularity of both $n$ and $c$. The preparation and combination of differential inequalities concerning $n^p$ and $|\nabla c|^{2q}$, for appropriately chosen $q$ and $p$, will be the main part of this section. The testing procedures employed in this approach are adapted from those applied to a similar chemotaxis-Stokes system in \cite{win_ct_fluid_3d}. The following preparatory result, taken from \cite[Lemma 2.5]{win_ct_fluid_2d}, will be a useful tool in later estimates and is a simple consequence of Young's inequality. \begin{lem}\label{young_exp-sum1} Let $a>0$ and $b>0$ be such that $a+b<1$. Then for all $\varepsilon>0$ there exists $C>0$ such that \begin{align*} x^ay^b\leq\varepsilon(x+y)+C\quad\mbox{for all }x\geq0\mbox{ and }y\geq0. \end{align*} \end{lem} The first step in improving the known regularities of $n$ and $c$ consists of an application of standard testing procedures to obtain separate inequalities describing the time evolution of $\int_{\Omega}\! n^p$ and $\int_{\Omega}\!|\nabla c|^{2q}$, respectively. \begin{lem}\label{Lem:np-ineq} Let $p>1$. Then the solution of \eqref{KSaS} satisfies \begin{align}\label{np-ineq} \refstepcounter{gleichung} \frac{1}{p}\frac{\intd}{\intd t}\int_{\Omega}\! n^p+\frac{2(p-1)}{p^2}\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\leq\frac{p-1}{2}\int_{\Omega}\! n^p|\nabla c|^2 \end{align} for all $t\in(0,T_{max})$. \end{lem} \begin{bew} We multiply the first equation of \eqref{KSaS} with $n^{p-1}$ and integrate by parts to see that \begin{align*} \frac{1}{p}\frac{\intd}{\intd t}\int_{\Omega}\! n^p=-(p-1)\int_{\Omega}\!|\nabla n|^2 n^{p-2}+(p-1)\int_{\Omega}\! 
n^{p-1}\nabla c\cdot\nabla n-\frac{1}{p}\int_{\romega}\! n^{p}u\cdot\vec{\nu} \end{align*} holds for all $t\in(0,T_{max})$, where we made use of the fact $\dive u=0$ and the divergence theorem to rewrite the last term accordingly. Due to the boundary condition imposed on $u$ the last term disappears, such that an application of Young's inequality to the second to last term implies \begin{align*} \frac{1}{p}\frac{\intd}{\intd t}\int_{\Omega}\! n^p+(p-1)\int_{\Omega}\!|\nabla n|^2n^{p-2}\leq\frac{p-1}{2}\int_{\Omega}\!|\nabla n|^2 n^{p-2}+\frac{p-1}{2}\int_{\Omega}\! n^p|\nabla c|^2 \end{align*} for all $t\in(0,T_{max})$. Reordering the terms and rewriting $|\nabla n|^2n^{p-2}=\frac{4}{p^2}|\nabla(n^\nfrac{p}{2})|^2$ completes the proof. \end{bew} \begin{lem}\label{Lem:grad_c2q} Let $q>1$. Then \begin{align}\label{grad_c2q} \refstepcounter{gleichung} \frac{1}{2q}\frac{\intd}{\intd t}\int_{\Omega}\!|\nabla c|^{2q}&+\frac{2(q-1)}{q^2}\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big\vert^2+\int_{\Omega}\!|\nabla c|^{2q}\nonumber\\ &\leq\left(K_0(q-1)+\frac{K_0}{\sqrt{2}}\right)^2\int_{\Omega}\! n^{2\alpha}|\nabla c|^{2q-2}+\int_{\Omega}\!|\nabla c|^{2q}|Du| \end{align} for all $t\in(0,T_{max})$. \end{lem} \begin{bew} Differentiating the second equation of \eqref{KSaS} and making use of the fact $\Delta|\nabla c|^2=2\nabla c\cdot \nabla\Delta c+2|D^2c|^2$, we obtain \begin{align*} \frac{1}{2}\left(|\nabla c|^2\right)_t&=\nabla c\cdot\nabla\left(\Delta c-c+f(n)-u\cdot\nabla c\right)\\ &=\frac{1}{2}\Delta|\nabla c|^2-|D^2c|^2-|\nabla c|^2+\nabla c\cdot\nabla f(n)-\nabla c\cdot\nabla\left(u\cdot\nabla c\right)\mbox{ in }\Omega\times(0,T_{max}). \end{align*} We multiply this identity by $\left(|\nabla c|^2\right)^{q-1}$ and integrate by parts over $\Omega$, where due to the Neumann boundary conditions imposed on $n$ and $c$ every boundary integral except the one involving $\frac{\partial|\nabla c|^2}{\partial \nu}$ disappears. 
Thus, the equality \begin{align}\label{grad_c2q-eq1} \refstepcounter{gleichung} \frac{1}{2q}\frac{\intd}{\intd t}\int_{\Omega}\!|\nabla c|^{2q}&+\frac{q-1}{2}\int_{\Omega}\!|\nabla c|^{2q-4}\Big\vert\nabla|\nabla c|^2\Big\vert^2+\int_{\Omega}\!|\nabla c|^{2q-2}|D^2 c|^2+\int_{\Omega}\!|\nabla c|^{2q}\\ &=\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\nabla f(n)-\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\nabla\left(u\cdot\nabla c\right)+\frac{1}{2}\int_{\romega}\!|\nabla c|^{2q-2}\frac{\partial|\nabla c|^2}{\partial\nu}\nonumber \end{align} holds for all $t\in(0,T_{max})$. Recalling \eqref{fprop}, we integrate the first integral by parts to see that \begin{align*} \int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\nabla f\left(n\right)\leq K_0\int_{\Omega}\!\Big\vert\nabla|\nabla c|^{2q-2}\Big\vert|\nabla c|\,n^\alpha +K_0\int_{\Omega}\!|\nabla c|^{2q-2}|\Delta c|\,n^\alpha \end{align*} holds for all $t\in(0,T_{max})$. Since $\nabla|\nabla c|^{2q-2}=2(q-1)|\nabla c|^{2q-4} D^2 c\cdot\nabla c$ in $\Omega\times(0,T_{max})$, and since the Cauchy-Schwarz inequality implies $|\Delta c|\leq \sqrt{2}|D^2 c|$, we may apply Young's inequality to obtain \begin{align}\label{grad_c2q-eq2} \refstepcounter{gleichung} \int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\nabla f\left(n\right)&\leq\int_{\Omega}\!|\nabla c|^{2q-2}|D^2 c|^2+\frac{\left(2K_0(q-1)+\sqrt{2}K_0\right)^2}{4}\int_{\Omega}\!|\nabla c|^{2q-2} n^{2\alpha}\nonumber\\ &=\int_{\Omega}\!|\nabla c|^{2q-2}|D^2 c|^2+\left(K_0(q-1)+\frac{K_0}{\sqrt{2}}\right)^2\int_{\Omega}\!|\nabla c|^{2q-2} n^{2\alpha} \end{align} for all $t\in(0,T_{max})$. 
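In more detail, the sum of the two integrands above is bounded by $K_0\big(2(q-1)+\sqrt{2}\big)|\nabla c|^{2q-2}|D^2c|\,n^{\alpha}$, so that Young's inequality in the form $ab\leq a^2+\frac{1}{4}b^2$, applied with $a=|\nabla c|^{q-1}|D^2c|$ and $b=K_0\big(2(q-1)+\sqrt{2}\big)|\nabla c|^{q-1}n^{\alpha}$, yields the pointwise estimate
\begin{align*}
K_0\big(2(q-1)+\sqrt{2}\big)|\nabla c|^{2q-2}|D^2c|\,n^{\alpha}\leq|\nabla c|^{2q-2}|D^2c|^2+\left(K_0(q-1)+\frac{K_0}{\sqrt{2}}\right)^2|\nabla c|^{2q-2}n^{2\alpha}
\end{align*}
in $\Omega\times(0,T_{max})$, from which \eqref{grad_c2q-eq2} follows upon integration.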
To treat the second integral on the right hand side of \eqref{grad_c2q-eq1}, we first rewrite \begin{align}\label{grad_c2q-eq3} \refstepcounter{gleichung} -\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\nabla\left(u\cdot \nabla c\right)&=-\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\left(Du\cdot\nabla c\right)-\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\left(D^2 c\cdot u\right) \end{align} for all $t\in(0,T_{max})$, and then make use of the pointwise equality \begin{align*} |\nabla c|^{2q-2}\nabla c\cdot\left(D^2c\cdot u\right)=\frac{1}{2q}u\cdot\nabla|\nabla c|^{2q}\mbox{ in }\Omega\times(0,T_{max}), \end{align*} to see that, since $u$ is divergence free, \begin{align*} -\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\left(D^2 c\cdot u\right)=\frac{1}{2q}\int_{\Omega}\!(\dive u)|\nabla c|^{2q}=0 \end{align*} holds for all $t\in(0,T_{max})$. Thus, \eqref{grad_c2q-eq3} implies \begin{align}\label{grad_c2q-eq4} \refstepcounter{gleichung} -\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\nabla\left(u\cdot \nabla c\right)\leq\int_{\Omega}\!|\nabla c|^{2q}|Du|\quad\mbox{for all }t\in(0,T_{max}). \end{align} For the remaining boundary integral in \eqref{grad_c2q-eq1} we recall that the convexity of $\Omega$ ensures $\frac{\partial|\nabla c|^2}{\partial\nu}\leq0$ on $\romega$ (see \cite[Lemme I.1, p.350]{lion}). Combining this with \eqref{grad_c2q-eq1}, \eqref{grad_c2q-eq2} and \eqref{grad_c2q-eq4} completes the proof due to the identity \[ \Big\vert\nabla|\nabla c|^q\Big\vert^2=\Big\vert\nabla\left(|\nabla c|^{2\cdot\nfrac{q}{2}}\right)\Big\vert^2=\frac{q^2}{4}|\nabla c|^{2q-4}\Big\vert\nabla|\nabla c|^2\Big\vert^2\quad\mbox{in }\Omega\times(0,T_{max}).\quad\mbox{ }\qedhere \] \end{bew} Before uniting the inequalities from \eqref{np-ineq} and \eqref{grad_c2q} into a single energy-type inequality, we estimate the right hand sides therein separately. \begin{lem}\label{Lem:right_hand_sides} Let $\infty>q>\max\{2,\frac{1}{\alpha}\}$, $p=\alpha q $. 
For any $\kappa>0$ there exist constants $K_5,K_6$ and $K_7>0$ such that \begin{align}\label{right_hand_sides-1} \refstepcounter{gleichung} \frac{p-1}{2}\int_{\Omega}\! n^p|\nabla c|^2\leq \frac{\kappa}{6}\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2+\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big\vert^2\right)+K_5, \end{align} \begin{align}\label{right_hand_sides-2} \refstepcounter{gleichung} \left(K_0(q-1)+\frac{K_0}{\sqrt{2}}\right)^2\!\int_{\Omega}\! n^{2\alpha}|\nabla c|^{2q-2}\leq\frac{\kappa}{6}\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2+\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big\vert^2\right)+K_6 \end{align} and \begin{align}\label{right_hand_sides-3} \refstepcounter{gleichung} \int_{\Omega}\! |\nabla c|^{2q}|Du|\leq\frac{\kappa}{6}\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big\vert^2+K_7 \end{align} hold for all $t\in(0,T_{max})$. \end{lem} \begin{bew} To prove \eqref{right_hand_sides-1}, we first fix some $\beta_1>1$ and apply Hölder's inequality to obtain \begin{align}\label{right_hand_sides-eq1} \refstepcounter{gleichung} \frac{p-1}{2}\int_{\Omega}\! n^p|\nabla c|^2\leq\frac{p-1}{2}\left(\int_{\Omega}\! n^{p\beta_1}\right)^{\nfrac{1}{\beta_1}}\left(\int_{\Omega}\! |\nabla c|^{2\beta'_1}\right)^{\nfrac{1}{\beta'_1}} \end{align} for all $t\in(0,T_{max})$, where $\beta'_1$ denotes the Hölder conjugate of $\beta_1$. By \eqref{masscons-n} and Lemma \ref{Lem:ngnb} applied to $\varphi=n$, $\eta=m$, $r=p$ and $s=\beta_1$, we can find $C_1>0$ such that \begin{align}\label{right_hand_sides-eq1.3} \refstepcounter{gleichung} \left(\int_{\Omega}\! n^{p\beta_1}\right)^\nfrac{1}{\beta_1}\leq C_1\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\right)^{1-\nfrac{1}{p\beta_1}}+C_1\quad\mbox{for all }t\in(0,T_{max}). 
\end{align} Similarly to the application of the Gagliardo-Nirenberg inequality\ (\cite[Lemma 2.3]{lankchapto15}) utilized in Lemma \ref{Lem:ngnb}, we can show that the second integral on the right in \eqref{right_hand_sides-eq1} satisfies \begin{align}\label{right_hand_sides-eq1.6} \refstepcounter{gleichung} \left(\int_{\Omega}\! |\nabla c|^{2\beta'_1}\right)^{\nfrac{1}{\beta'_1}}\leq C_2\left(\Big\|\nabla|\nabla c|^q\Big\|_{\Lo[2]}^\nfrac{2b_1}{q}\Big\||\nabla c|^q\Big\|_{\Lo[\nfrac{2}{q}]}^\nfrac{(2-2b_1)}{q}+\Big\||\nabla c|^q\Big\|^{\nfrac{2}{q}}_{\Lo[\nfrac{2}{q}]}\right) \end{align} for all $t\in(0,T_{max})$ with $C_2>0$ and $b_1\in(0,1)$ provided by \begin{align*} b_1=\frac{\frac{q}{2}-\frac{q}{2\beta'_1}}{\frac{q}{2}+\frac{1}{2}-\frac{1}{2}}=1-\frac{1}{\beta'_1}=\frac{1}{\beta_1}. \end{align*} Since Proposition \ref{Prop:grad_c2-bound} implies the boundedness of $\||\nabla c|^q\|_{\Lo[\nfrac{2}{q}]}$, plugging \eqref{right_hand_sides-eq1.3} and \eqref{right_hand_sides-eq1.6} into \eqref{right_hand_sides-eq1} we obtain $C_3>0$ such that \begin{align*} \frac{p-1}{2}\int_{\Omega}\! n^p|\nabla c|^2 \leq C_3&\bigg(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\bigg)^{1-\nfrac{1}{p\beta_1}}\left(\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big|^2\right)^{\nfrac{1}{q\beta_1}}\\ +&\ C_3\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\right)^{1-\nfrac{1}{p\beta_1}}+C_3\left(\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big|^2\right)^{\nfrac{1}{q\beta_1}}+C_3 \end{align*} holds for all $t\in(0,T_{max})$. Due to $\alpha<1$ the choice of $p=\alpha q$ implies $p<q$ and thus, $1-\frac{1}{p\beta_1}+\frac{1}{q\beta_1}<1$. Therefore, we may apply Lemma \ref{young_exp-sum1} with $\varepsilon=\frac{\kappa}{12}$ to the three terms on the right hand side containing an integral and obtain for some $C_4>0$ that \begin{align*} \frac{p-1}{2}\int_{\Omega}\! 
n^p|\nabla c|^2\leq\frac{\kappa}{6}\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2+\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big|^2\right)+C_4 \end{align*} holds for all $t\in(0,T_{max})$, which proves \eqref{right_hand_sides-1}. The proof of \eqref{right_hand_sides-2} follows the same reasoning. First, we apply Hölder's inequality with $\beta_2=\frac{q+1}{2}$ and $\beta'_2$ as corresponding Hölder conjugate to obtain \begin{align}\label{right_hand_sides-eq2} \refstepcounter{gleichung} \int_{\Omega}\! n^{2\alpha}|\nabla c|^{2q-2}\leq\left(\int_{\Omega}\! n^{2\alpha\beta_2}\right)^{\nfrac{1}{\beta_2}}\left(\int_{\Omega}\! |\nabla c|^{(2q-2)\beta'_2}\right)^{\nfrac{1}{\beta'_2}} \end{align} for all $t\in(0,T_{max})$. Since the choices of $\beta_2$ and $p$ imply $\frac{2\alpha\beta_2}{p}=\frac{\alpha(q+1)}{\alpha q}>1$, we can utilize Lemma \ref{Lem:ngnb} with $\varphi=n$, $r=p$ and $s=\frac{2\alpha\beta_2}{p}$ to estimate \begin{align}\label{right_hand_sides-eq3} \refstepcounter{gleichung} \left(\int_{\Omega}\! n^{2\alpha\beta_2}\right)^{\nfrac{1}{\beta_2}}\leq C_5\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\right)^{\nfrac{(2\alpha\beta_2-1)}{p\beta_2}}+C_5\quad\mbox{for all }t\in(0,T_{max}), \end{align} with some $C_5>0$. For the integral involving $|\nabla c|^{(2q-2)\beta'_2}$, we make use of the Gagliardo-Nirenberg inequality\ as shown before to obtain $C_6>0$ such that \begin{align}\label{right_hand_sides-eq4} \refstepcounter{gleichung} \left(\int_{\Omega}\! |\nabla c|^{(2q-2)\beta'_2}\right)^{\nfrac{1}{\beta'_2}}\leq C_6\left(\int_{\Omega}\!\Big|\nabla|\nabla c|^q\Big|^2\right)^{\frac{q-1}{q}b_2}+C_6 \end{align} holds for all $t\in(0,T_{max})$, with $b_2\in(0,1)$ determined by \begin{align*} b_2=\frac{\frac{q}{2}-\frac{q}{2(q-1)\beta'_2}}{\frac{q}{2}+\frac{1}{2}-\frac{1}{2}}=1-\frac{1}{(q-1)\beta'_2}=1-\frac{1}{(q-1)}+\frac{1}{(q-1)\beta_2}. 
\end{align*} Thus, a combination of \eqref{right_hand_sides-eq2}, \eqref{right_hand_sides-eq3} and \eqref{right_hand_sides-eq4} leads to \begin{align*} \bigg(K_0(q-1)+\frac{K_0}{\sqrt{2}}&\bigg)^2\!\int_{\Omega}\! n^{2\alpha}|\nabla c|^{2q-2}\leq C_7\bigg(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\bigg)^{\nfrac{(2\alpha\beta_2-1)}{p\beta_2}}\left(\int_{\Omega}\!\Big|\nabla|\nabla c|^q\Big|^2\right)^{\frac{q-1}{q}b_2}\\ &\ +C_7\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\right)^{\nfrac{(2\alpha\beta_2-1)}{p\beta_2}}+C_7\left(\int_{\Omega}\!\Big|\nabla|\nabla c|^q\Big|^2\right)^{\frac{q-1}{q}b_2}+C_7 \end{align*} for all $t\in(0,T_{max})$ with some $C_7>0$. Here the choice of $p$ and the fact that $\alpha<1$ imply \begin{align*} \frac{2\alpha\beta_2-1}{p\beta_2}+\frac{q-1}{q}b_2&=\frac{2\alpha}{p}-\frac{1}{p\beta_2}+\frac{q-2}{q}+\frac{1}{q\beta_2}\\ &=\frac{2}{q}-\frac{1}{\alpha q\beta_2}+\frac{q-2}{q}+\frac{1}{q\beta_2}=1-\frac{1-\alpha}{\alpha q\beta_2}<1. \end{align*} Therefore, the requirements of Lemma \ref{young_exp-sum1} are satisfied again and an application thereof yields $C_8>0$ such that \begin{align*} \left(K_0(q-1)+\frac{K_0}{\sqrt{2}}\right)^2\!\int_{\Omega}\! n^{2\alpha}|\nabla c|^{2q-2}\leq \frac{\kappa}{6}\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2+\int_{\Omega}\!\Big|\nabla|\nabla c|^q\Big|^2\right)+C_8 \end{align*} holds for all $t\in(0,T_{max})$, thus proving \eqref{right_hand_sides-2}. To verify \eqref{right_hand_sides-3} we fix $\beta_3=\frac{3}{2}$ and $\beta'_3=3$. Since $\beta_3<2$, Hölder's inequality yields \begin{align*} \int_{\Omega}\!|\nabla c|^{2q}|Du|\leq\left(\int_{\Omega}\!|\nabla c|^{2q\beta'_3}\right)^{\nfrac{1}{\beta'_3}}\left(\int_{\Omega}\!|D u|^{\beta_3}\right)^{\nfrac{1}{\beta_3}}\leq C_9\Big\||\nabla c|^q\Big\|_{\Lo[6]}^2 \end{align*} for some $C_9>0$, in view of the boundedness of $\|D u\|_{\Lo[\frac{3}{2}]}$ shown in Proposition \ref{Prop:u_bound_N2_all_p}. 
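Here we have used that $2q\beta'_3=6q$, whence the first factor is precisely the squared $L^6$--norm of $|\nabla c|^q$:
\begin{align*}
\left(\int_{\Omega}\!|\nabla c|^{2q\beta'_3}\right)^{\nfrac{1}{\beta'_3}}=\left(\int_{\Omega}\!\left(|\nabla c|^{q}\right)^{6}\right)^{\nfrac{1}{3}}=\Big\||\nabla c|^q\Big\|_{\Lo[6]}^{2}.
\end{align*}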
Similarly to the previous applications of the Gagliardo-Nirenberg and Young inequalities we can make use of the boundedness of $\||\nabla c|^q\|_{\Lo[\nfrac{2}{q}]}$ to obtain $C_{10}>0$ such that \begin{align*} \int_{\Omega}\!|\nabla c|^{2q}|Du|\leq\frac{\kappa}{6}\int_{\Omega}\!\Big|\nabla|\nabla c|^q\Big|^2+C_{10} \end{align*} for all $t\in(0,T_{max})$, which completes the proof. \end{bew} Combining the three previous lemmata we are now in a position to control $L^p$--norms of $n$ and $\nabla c$ with arbitrarily high $p$. In fact we have \begin{prop}\label{Prop:np_grad_c2q-bounds} Let $\infty>q>\max\{2,\frac{1}{\alpha}\}$ and $p=\alpha q$. Then we can find $C>0$ such that the solution to \eqref{KSaS} satisfies \begin{align}\label{np-bound} \refstepcounter{gleichung} \int_{\Omega}\! n^p\leq C\quad\mbox{for all }t\in(0,T_{max}) \end{align} and \begin{align}\label{grad_c2q-bound} \refstepcounter{gleichung} \int_{\Omega}\! |\nabla c|^{2q}\leq C\quad\mbox{for all }t\in(0,T_{max}). \end{align} \end{prop} \begin{bew} Given $q>\max\{2,\frac{1}{\alpha}\}$ and $p=\alpha q$ we fix $\kappa=\min\left\{\frac{2(q-1)}{q^2},\,\frac{2(p-1)}{p^2}\right\}$. By Lemmata \ref{Lem:np-ineq}, \ref{Lem:grad_c2q} and \ref{Lem:right_hand_sides}, we can find $C_1:=K_5+K_6+K_7>0$ such that \begin{align*} \frac{\intd}{\intd t}\bigg(\frac{1}{p}\int_{\Omega}\! n^p&+\frac{1}{2q}\int_{\Omega}\!|\nabla c|^{2q}\bigg)+\frac{2(p-1)}{p^2}\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\\ &+\frac{2(q-1)}{q^2}\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big\vert^2+\int_{\Omega}\!|\nabla c|^{2q} \leq\frac{\kappa}{2}\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2+\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big\vert^2\right)+C_1 \end{align*} holds for all $t\in(0,T_{max})$. Herein the choice of $\kappa$ implies \begin{align}\label{np_grad_c2q-bounds-eq1} \refstepcounter{gleichung} \frac{\intd}{\intd t}\bigg(\frac{1}{p}\int_{\Omega}\! 
n^p+\frac{1}{2q}\int_{\Omega}\!|\nabla c|^{2q}\bigg)+\frac{p-1}{p^2}\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2+\frac{q-1}{q^2}\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big\vert^2+\int_{\Omega}\!|\nabla c|^{2q}\leq C_1 \end{align} for all $t\in(0,T_{max})$. We drop the non-negative term $\frac{q-1}{q^2}\int_{\Omega}\!|\nabla|\nabla c|^q|^2$ and apply Lemma \ref{Lem:ngnb} to estimate $\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2$ from below in \eqref{np_grad_c2q-bounds-eq1}, to obtain $C_2,C_3>0$ such that $y(t):=\frac{1}{p}\int_{\Omega}\! n^p+\frac{1}{2q}\int_{\Omega}\!|\nabla c|^{2q}$, $t\in(0,T_{max})$ satisfies \begin{align*} y'(t)+C_2 y(t)\leq C_3\quad\mbox{for all }t\in(0,T_{max}), \end{align*} from which we infer the boundedness of $y$ upon an ODE comparison and thus \eqref{np-bound} and \eqref{grad_c2q-bound}. \end{bew} \subsection{Global existence and boundedness}\label{ssec:34:bound} We can now begin to verify the boundedness of the three quantities appearing in the extensibility criterion \eqref{lExAlt}. The first of these quantities will be $\|A^\delta u(\cdot,t)\|_{\Lo[2]}$. \begin{prop}\label{Prop:Au2-bound} Let $\delta\in(\frac{1}{2},1)$ be as in Lemma \ref{Lem:locEx}. There exists a constant $C>0$ such that the solution of \eqref{KSaS} satisfies \begin{align*} \|A^\delta u(\cdot,t)\|_{\Lo[2]}\leq C\quad\mbox{for all }t\in(0,T_{max}). \end{align*} \end{prop} \begin{bew} The proof essentially follows the argumentation of \cite[Lemma 2.3]{win_ct_fluid_2d}, whilst making use of the previously proven bound $\|n\|_{\Lo[p]}\leq C$ for all $t\in(0,T_{max})$ with some $p>2$. Nonetheless, let us recount the main arguments. 
It is well known, see \cite[Theorem 38.6]{sellyou} and \cite[p.204]{sohr} for instance, that the Stokes operator $A$ is a positive, sectorial operator and generates a contraction semigroup $\left(e^{-tA}\right)_{t\geq0}$ in $L^2_\sigma(\Omega)$ with operator norm bounded by \begin{align*} \|e^{-tA}\|\leq e^{-\mu_1 t}\quad\mbox{for all }t\geq0, \end{align*} with some $\mu_1>0$. Furthermore, the operator norms of the fractional powers of the Stokes operator satisfy an exponential decay property (\cite[Theorem 37.5]{sellyou}). That is, there exists $C_1>0$ such that \begin{align}\label{Au2-bound-expdecprop} \refstepcounter{gleichung} \left\| A^\delta e^{-tA}\right\|\leq C_1t^{-\delta}e^{-\mu_1 t}\quad\mbox{for all }t>0. \end{align} Thus, representing $u$ by its variation of constants formula \begin{align*} u(\cdot,t)=e^{-tA} u_0+\int_0^{t} e^{-(t-s)A}\mathcal{P}\left(n(\cdot,s)\nabla\phi\right)\intd s,\quad t\in(0,T_{max}), \end{align*} and applying the fractional power $A^\delta$, we can make use of the fact that $e^{-tA}$ commutes with $A^\delta$ (\cite[IV.(1.5.16), p.206]{sohr}), the contraction property and \eqref{Au2-bound-expdecprop} to find $C_2>0$ such that \begin{align}\label{Au2-bound-eq1} \refstepcounter{gleichung} \|A^\delta u(\cdot,t)\|_{\Lo[2]}&\leq \|A^\delta u_0\|_{\Lo[2]}+C_1\int_0^{t} \left(t-s\right)^{-\delta}e^{-\mu_1(t-s)}\left\|\mathcal{P}\left(n(\cdot,s)\nabla\phi\right)\right\|_{\Lo[2]}\intd s\nonumber\\ &\leq \|A^\delta u_0\|_{\Lo[2]}+C_2\sup_{t\in(0,T_{max})}\left\|n(\cdot,t)\right\|_{\Lo[2]}\int_0^\infty\!\! \sigma^{-\delta} e^{-\mu_1\sigma}\intd \sigma \end{align} holds for all $t\in(0,T_{max})$, by the boundedness of $\nabla\phi$. Due to \eqref{idreg} we have $\|A^\delta u_0\|_{\Lo[2]}\leq C_3$ for some $C_3>0$. Furthermore, since $\delta<1$ the integral converges and by Proposition \ref{Prop:np_grad_c2q-bounds} we can find $C_4>0$ such that $\|n(\cdot,t)\|_{\Lo[2]}\leq C_4$ for all $t\in(0,T_{max})$. 
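We remark that the last integral in \eqref{Au2-bound-eq1} can even be evaluated explicitly: the substitution $\tau=\mu_1\sigma$ turns it into a Gamma integral,
\begin{align*}
\int_0^\infty\!\sigma^{-\delta}e^{-\mu_1\sigma}\intd\sigma=\mu_1^{\delta-1}\int_0^\infty\!\tau^{-\delta}e^{-\tau}\intd\tau=\mu_1^{\delta-1}\Gamma(1-\delta),
\end{align*}
which is finite precisely because $\delta<1$.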
Combined with \eqref{Au2-bound-eq1} these facts yield \begin{align*} \|A^\delta u(\cdot,t)\|_{\Lo[2]}\leq C_5\quad\mbox{for all }t\in(0,T_{max}) \end{align*} with some $C_5>0$, which completes the proof. \end{bew} The second quantity of the extensibility criterion we treat is $\|c(\cdot,t)\|_{\W[1,\theta]}$. In view of Proposition \ref{Prop:np_grad_c2q-bounds}, we can take some $q>\max\{2,\theta\}$ and obtain, by a simple application of the Poincaré inequality, the following. \begin{cor}\label{Cor:grad_ctheta-bound} There exists a constant $C>0$ such that \begin{align*} \|c(\cdot,t)\|_{\W[1,\theta]}\leq C \end{align*} holds for all $t\in(0,T_{max})$. \end{cor} Now, to prove the last remaining bound required for the extensibility criterion \eqref{lExAlt}, as well as one of the estimates required for the boundedness result, we require some well-known results concerning the Neumann heat semigroup $\left(e^{t\Delta}\right)_{t\geq0}$. These semigroup estimates and Proposition \ref{Prop:np_grad_c2q-bounds} will be the main ingredients of our proof. For more details concerning the estimations used, we refer the reader to \cite[Lemma 2.1]{cao2014global}, \cite[Lemma 1.3]{win10jde} and \cite{hen81}. \begin{prop}\label{Prop:ninf-bound} There exists a constant $C>0$ such that \begin{align*} \|n(\cdot,t)\|_{\Lo[\infty]}\leq C \end{align*} holds for all $t\in(0,T_{max})$. \end{prop} \begin{bew} First, we fix $p>2$ and represent $n$ by its variation of constants formula \begin{align*} n(\cdot,t)=e^{t\Delta} n_0-\int_0^{t}e^{(t-s)\Delta}\big(\nabla\cdot\left(n\nabla c\right)+u\cdot\nabla n\big)(\cdot,s)\intd s,\quad t\in(0,T_{max}). \end{align*} The fact that $\dive u=0$ and the maximum principle therefore yield \begin{align*} \|n(\cdot,t)\|_{\Lo[\infty]}&\leq \| n_0\|_{\Lo[\infty]}+\int_0^{t}\left\|e^{(t-s)\Delta}\big(\nabla\cdot\left(n\nabla c+u n\right)\big)(\cdot,s)\right\|_{\Lo[\infty]}\intd s \end{align*} for all $t\in(0,T_{max})$. 
Now, we can make use of the well-known smoothing properties of the Neumann heat semigroup (see \cite[Lemma 2.1 (iv)]{cao2014global}) to estimate \begin{align}\label{ninf-bound-eq1} \refstepcounter{gleichung} \|n(\cdot,t)\|_{\Lo[\infty]}&\leq \|n_0\|_{\Lo[\infty]}\\&\ +C_1\int_0^{t}\!\!\left(1+\left(t-s\right)^{-\frac{1}{2}-\frac{1}{p}}\right)e^{-\lambda_1(t-s)}\left(\left\|\left(n\left(\nabla c +u\right)\right)(\cdot,s)\right\|_{\Lo[p]}\right)\intd s\nonumber \end{align} for all $t\in(0,T_{max})$ and some $C_1>0$, where $\lambda_1$ denotes the first nonzero eigenvalue of $-\Delta$ in $\Omega$ with respect to the homogeneous Neumann boundary conditions. To estimate $\|n(\nabla c+u)\|_{\Lo[p]}$ we apply Hölder's inequality to obtain some $C_2>0$ such that \begin{align*} \|n(\nabla c+u)(\cdot,t)\|_{\Lo[p]}\leq\|n(\cdot,t)\|_{\Lo[2p]}\left(\|\nabla c(\cdot,t)\|_{\Lo[2p]}+\|u(\cdot,t)\|_{\Lo[2p]}\right)\leq C_2 \end{align*} holds for all $t\in(0,T_{max})$, where the boundedness of all quantities on the right hand side follows from Propositions \ref{Prop:u_bound_N2_all_p} and \ref{Prop:np_grad_c2q-bounds}. Plugging this into \eqref{ninf-bound-eq1}, recalling $n_0\in \CSp{0}{\bomega}$ and substituting $\sigma=t-s$ yields $C_3>0$ such that \begin{align*} \|n(\cdot,t)\|_{\Lo[\infty]}&\leq C_3+C_3\int_0^\infty\!\!\left(1+\sigma^{-\frac{1}{2}-\frac{1}{p}}\right)e^{-\lambda_1\sigma}\intd \sigma \end{align*} is valid for all $t\in(0,T_{max})$. By the choice of $p$ we have $-\frac{1}{2}-\frac{1}{p}>-1$, so that the above integral is finite and there exists $C_4>0$ such that \begin{align*} \|n(\cdot,t)\|_{\Lo[\infty]}\leq C_4\quad\mbox{for all }t\in(0,T_{max}), \end{align*} which completes the proof. \end{bew} Let us gather the previous results to prove our main theorem. 
\begin{proof}[\textbf{Proof of Theorem \ref{Thm:globEx}:}] As an immediate consequence of the bounds in Proposition \ref{Prop:Au2-bound}, Corollary \ref{Cor:grad_ctheta-bound} and Proposition \ref{Prop:ninf-bound}, we obtain $T_{max}=\infty$ in view of the extensibility criterion \eqref{lExAlt}. Secondly, since $\theta>2$ we have $\W[1,\theta]\hookrightarrow\Co[\mu_1]$ with $\mu_1=\frac{\theta-2}{\theta}$ (\cite[Theorem 5.6.5]{evans}). Thus, Corollary \ref{Cor:grad_ctheta-bound} implies $\|c(\cdot,t)\|_{\Lo[\infty]}\leq C$ for all $t\in(0,T_{max})$. Additionally, since for $\delta\in(\frac{1}{2},1)$ the fractional powers of the Stokes operator satisfy $D(A^\delta)\hookrightarrow\Co[\mu_2]$ for any $\mu_2\in(0,2\delta-1)$ (see \cite[Lemma III.2.4.3]{sohr} and \cite[Theorem 5.6.5]{evans}), Proposition \ref{Prop:Au2-bound} shows that $\|u(\cdot,t)\|_{\Lo[\infty]}\leq C$ for all $t\in(0,T_{max})$ and the boundedness of $\|n(\cdot,t)\|_{\Lo[\infty]}$ for all $t\in(0,T_{max})$ follows directly from Proposition \ref{Prop:ninf-bound}. \end{proof} \end{document}
\begin{document} \title{The vortex-wave system with gyroscopic effects} \section{Introduction} The purpose of this article is to investigate the well-posedness of the following PDE/ODE system: \begin{equation}\label{syst:1} \begin{cases} \displaystyle \partial_t \omega+ \operatorname{div} (v \omega )=0,\\ \displaystyle v=u + \sum_{k=1}^N \frac{\gamma_{k}}{2\pi} \frac{(x-h_{k})^\perp}{|x-h_{k}|^2},\quad u = K\ast \omega, \quad K(x)=\frac{1}{2\pi} \frac{x^\perp}{|x|^2},\\ \displaystyle m_{k}\ddot{h}_{k}=\gamma_{k}\Big(\dot{h}_{k}- u(t,h_{k}) -\sum_{j\neq k} \frac{\gamma_{j}}{2\pi} \frac{(h_{k}-h_{j})^\perp}{|h_{k}-h_{j}|^2}\Big)^\perp \text{ for } k=1,\dots,N, \end{cases} \end{equation} where $$\omega : [0,T]\times {\mathbb R}^2\to {\mathbb R}, \quad h_{k}:[0,T]\to {\mathbb R}^2 \text{ for } k=1,\dots,N,$$ and where $$(m_{k},\gamma_{k}) \in {\mathbb R}^+_{*}\times {\mathbb R} \text{ for } k=1,\dots,N.$$ We supplement \eqref{syst:1} with the initial conditions \begin{equation}\label{syst:2} \begin{split} &\omega(0,\cdot)=\omega_0\in L^\infty({\mathbb R}^2) \text{ compactly supported in some }B(0,R_0),\\ & (h_{k},h'_{k})(0)=(h_{k,0},\ell_{k,0})\text{ for } k=1,\dots,N, \text{ with the $h_{k,0}$ distinct}. \end{split} \end{equation} System \eqref{syst:1} for $N=1$ was derived by Glass, Lacave and Sueur \cite{GLS1} as an asymptotic system for the dynamics of a body immersed in a 2D perfect incompressible fluid, when the size of the body vanishes while the mass is assumed to be constant. The position of the body at time $t$ is given by $h(t)$, and the fluid is described by its divergence-free velocity $ u(t,x)$ and vorticity $\omega(t,x)=\operatorname{curl} u(t,x)$. Under suitable decay assumptions, the divergence-free condition makes it possible to recover the velocity explicitly in terms of the vorticity by the Biot-Savart law \cite{MarPul}: $ u=x^\perp/(2\pi |x|^2)\ast \omega$. 
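Written out, the Biot-Savart law takes the form
$$u(t,x)=(K\ast \omega(t,\cdot))(x)=\frac{1}{2\pi}\int_{{\mathbb R}^2}\frac{(x-y)^\perp}{|x-y|^2}\,\omega(t,y)\,dy,$$
with the convention $x^\perp=(-x_2,x_1)$ for $x=(x_1,x_2)$.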
The quantities $m$ and $\gamma$ are reminiscent of the mass of the body and of the circulation of the velocity around the body, respectively. The second-order differential equation satisfied by $h$ means that the body is accelerated by a force that is orthogonal to the difference between the body speed and the fluid velocity at that point. This gyroscopic force is similar to the well-known Kutta-Joukowski-type lift force revealed in the case of a single body in an irrotational unbounded flow, see for instance \cite{Lamb,MarPul,Thomson}. In particular, a byproduct of \cite{GLS1} is the existence of a global weak solution of \eqref{syst:1} when $N=1$. In the case $N> 1$, it is not known whether the previous convergence result holds. The main goal of this paper is to establish the existence and the uniqueness (under an additional assumption on the initial data, see below) of solutions for any $N\geqslant 1$. In particular, we will prove that the trajectories of the points $h_k$ never collide if all the circulations $\gamma_{k}$ have the same sign. Such a result is important, for example, to justify the 2D inviscid spray model established by Moussa and Sueur \cite{MoussaSueur}, which was derived as a mean-field limit $N\to \infty$ of \eqref{syst:1}. We refer to that article for a comparison of the recent spray models introduced in the literature. Before giving the precise statements of our theorems, we mention that \eqref{syst:1} reduces to the so-called vortex-wave system when setting $m_k=0$: \begin{equation}\label{syst:wv} \begin{cases} \displaystyle \partial_t \omega+ \operatorname{div} (v \omega )=0,\\ \displaystyle v= u + \sum_{k=1}^N \frac{\gamma_{k}}{2\pi} \frac{(x-h_{k})^\perp}{|x-h_{k}|^2},\quad u = K\ast \omega, \quad K(x)=\frac{1}{2\pi} \frac{x^\perp}{|x|^2},\\ \displaystyle \dot{h}_{k} = u(t,h_{k}) +\sum_{j\neq k} \frac{\gamma_{j}}{2\pi} \frac{(h_{k}-h_{j})^\perp}{|h_{k}-h_{j}|^2} \text{ for } k=1,\dots,N. 
\end{cases} \end{equation} Indeed, for $N=1$, Glass, Lacave and Sueur showed in \cite{GLS2} that the asymptotic dynamics of a small solid with vanishing mass evolving in a 2D incompressible fluid is governed by the vortex-wave system. The vortex-wave system was previously derived by Marchioro and Pulvirenti \cite{MarPul1, MarPul} to describe the interaction of a background vorticity $\omega$ with one or several point vortices $h_k$ with circulations $\gamma_k$. Very recently, Nguyen and Nguyen have also justified the vortex-wave system as the inviscid limit of the Navier-Stokes equations \cite{Toan}. For System~\eqref{syst:wv}, existence of a weak solution (according to Definition~\ref{def:1} below) is proved up to the first collision time between the vortex trajectories. Concerning uniqueness, it is open in general, and it holds in the particular case when the vorticity $\omega$ is initially constant near the point vortices (namely the condition appearing in Theorem~\ref{theorem:main-several} below), as suggested in \cite{MarPul1,MarPul} and proved in \cite{LM,MiotThesis}. It is also proved in \cite{MarPul1} that if all the $\gamma_k$ have the same sign then no collision occurs in finite time, and therefore global existence holds. As for the spray model, these results are the first key to obtaining a time of existence that is independent of $N$, which is needed to consider the homogenized (or mean-field) limit $N\to \infty$, as was done for instance by Schochet \cite{Schochet} to justify the vortex method in ${\mathbb R}^2$. The main goal of this paper is to establish the corresponding existence and uniqueness results for the vortex-wave system with gyroscopic effects \eqref{syst:1}. From now on we will refer to the points $h_k$ in \eqref{syst:1} as ``massive'' point vortices. \subsection*{Main results} The first part of our analysis focuses on the existence issue for \eqref{syst:1}. \begin{definition}\label{def:1} Let $T>0$. 
We say that $(\omega,\{h_k\}_{1\leqslant k\leqslant N})$ is a weak solution of \eqref{syst:1} on $[0,T]$, with initial data given by \eqref{syst:2}, if: \begin{itemize} \item $\omega\in L^\infty([0,T],L^1\cap L^\infty({\mathbb R}^2))\cap C([0,T],L^\infty({\mathbb R}^2)-w^\ast)$, $h_k \in C^2([0,T])$ for $k=1,\ldots,N$, \item the PDE in \eqref{syst:1} is satisfied in the sense of distributions, and the ODEs in \eqref{syst:1} are satisfied in the classical sense. \end{itemize} \end{definition} \begin{theorem}\label{theorem:main-several-existence} Let $\omega_0$ and $(\{h_{k,0}\},\{\ell_{k,0}\})$ be as in \eqref{syst:2}. There exists $T_{*}>0$ such that for any $T\in (0,T_{*})$, there exists a weak solution $(\omega,\{h_k\})$ to \eqref{syst:1} on $[0,T]$. Moreover, if we assume that the $\gamma_k$, $k=1,\ldots,N$, all have the same sign, then $T_{*}=+\infty$. \end{theorem} \begin{remark*} The maximal time $T_{*}$ such that Theorem~\ref{theorem:main-several-existence} holds corresponds to the first collision between some of the massive points, and we will prove that no collision occurs in finite time if all the $\gamma_k$ have the same sign. \end{remark*} \begin{remark*} If the initial vorticity $\omega_0$ were only assumed to be in $L^p_c({\mathbb R}^2)$ for some $p>2$, then one could still prove (global if all $\gamma_k$ have the same sign) existence of a weak solution to \eqref{syst:1} such that $\omega\in L^\infty( L^p)$. However, in this case no uniqueness result is known. \end{remark*} As already mentioned, the same existence result is known to hold for the vortex-wave system \eqref{syst:wv}, see \cite{MarPul1}. The proof of Theorem~\ref{theorem:main-several-existence}, given in Section~\ref{sec:existence}, follows the same method as in \cite{MarPul1}, namely passing to the limit in an iterative scheme after establishing uniform estimates on the solution $(\omega_n, \{h_{k,n}\})$. To do so, we introduce a functional $\mathcal{H}_n$ in \eqref{def-Hn}. 
This functional is well-adapted to System~\eqref{syst:1} because it controls both the minimal distance between the vortex trajectories and the velocities; moreover, it can be shown that its time derivative is uniformly bounded. Except for the estimates we perform for this new functional $\mathcal{H}_n$, the proof of Theorem~\ref{theorem:main-several-existence} is quite straightforward and is not the main point of this paper. Our next result is that any weak solution as in Theorem~\ref{theorem:main-several-existence} is actually transported by the regular Lagrangian flow relative to the total velocity field. We refer to the recent papers \cite{ambrosio, ambrosio-survey, crippa-delellis, bresiliens-miot} for the subsequent definition of regular Lagrangian flow: \begin{definition}\label{def:lagrangian-flow} Let $T>0$ and let $v\in L_{\text{loc}}^1([0,T]\times {\mathbb R}^2)$. We say that $X:[0,T]\times {\mathbb R}^2\to{\mathbb R}^2$ is a regular Lagrangian flow relative to $v$ if \begin{itemize} \item For a.e. $x\in {\mathbb R}^2$, the map $t\mapsto X(t,x)$ is an absolutely continuous solution to the ODE $\frac{d}{dt} X(t,x)=v(t,X(t,x))$ with $X(0,x)=x$, i.e. a continuous function verifying $X(t,x)=x+\int_{0}^t v(s,X(s,x)) \, ds$ for all $t\in [0,T]$; \item There exists a constant $L>0$ independent of $t$ such that $$\mathcal{L}^2 (X(t,\cdot)^{-1}(A)) \leqslant L\mathcal{L}^2 (A), \quad \forall t\in [0,T], \forall A \text{ Borel set of }{\mathbb R}^2,$$ where $\mathcal{L}^2$ is the Lebesgue measure on ${\mathbb R}^2$. \end{itemize} \end{definition} Such a definition is intended to generalize the classical notion of flow associated to smooth vector fields. It was proved by Ambrosio \cite{ambrosio} that such a flow exists and is unique when the vector field has BV regularity. In \cite{LM, bresiliens-miot}, a similar result was established for vector fields composed of a smooth part and of an explicit part with a localized singularity.
In the present setting, where the total velocity field in \eqref{syst:1} contains singularities created by the point vortices, we will rely on these last results to establish the following general result. \begin{theorem}\label{theorem:main-several-existence-lagrangian} Let $\{h_{k}\}$ be any given maps belonging to $W^{2,\infty}([0,T];{\mathbb R}^2)$ without collision: $$\min_{k\neq p} \min_{t\in [0,T]} |h_{k}(t)-h_{p}(t)|\geqslant\rho>0.$$ For $\omega_{0} \in L^\infty_{c}({\mathbb R}^2)$, let $\omega$ be a weak solution on $[0,T]$ (in the sense of Definition~\ref{def:1}) to \begin{equation}\label{syst:1-2} \begin{cases} \displaystyle \partial_t \omega+ \operatorname{div} (v \omega )=0,\\ \displaystyle v= u + \sum_{k=1}^N \frac{\gamma_{k}}{2\pi} \frac{(x-h_{k})^\perp}{|x-h_{k}|^2},\quad u = K\ast \omega, \quad K(x)=\frac{1}{2\pi} \frac{x^\perp}{|x|^2},\\ \end{cases} \end{equation} such that $\omega(0,\cdot)=\omega_0$. Then, there exists a unique regular Lagrangian flow $X$ relative to the total velocity field $v$ and $\omega$ is transported by this flow: \[\omega(t,\cdot)=X(t,\cdot)_\#\omega_0.\] Moreover, the vorticity $\omega(t,\cdot)$ is compactly supported in $B(0,R_T)$ for all $t\in [0,T]$, where $R_T$ depends on $T$, on $\|h_k\|_{L^{\infty}([0,T])}$ and on the initial data. Furthermore, we have the additional non-collision information: $$ \text{for a.e.
} x\in {\mathbb R}^2,\quad X(t,x)\neq h_k(t), \quad\forall t\in [0,T], \quad \forall k=1,\ldots,N.$$ Finally, if we assume \begin{equation}\label{constant} \omega_0=\alpha_k\quad \text{on }B(h_{k}(0),\delta_0),\quad \forall k=1,\ldots,N \end{equation} for some $\alpha_k\in {\mathbb R}$ and $\delta_0>0$, there exists a positive $\delta$ depending only on $T$, $\delta_0$, $\|\omega_0\|_{L^\infty}$, $\|h_k\|_{W^{2,\infty}([0,T])}$ and $R_0$, such that $$ \omega(t,\cdot)=\alpha_k\quad \text{on }B(h_k(t),\delta),\quad \forall t\in [0,T].$$ \end{theorem} We emphasize that Theorem~\ref{theorem:main-several-existence-lagrangian} does not rely on the equation verified by the point vortices and thus it holds not only for \eqref{syst:1} but also for any system \eqref{syst:1-2}. We finally turn to the uniqueness issue for \eqref{syst:1}. \begin{theorem}\label{theorem:main-several} Let $\omega_0$ and $(\{h_{k,0}\},\{\ell_{k,0}\})$ be as in \eqref{syst:2}. Assume moreover that $$ \omega_0=\alpha_k\quad \text{on }B(h_{k,0},\delta_0),\quad \forall k=1,\ldots,N$$ for some $\alpha_k\in {\mathbb R}$ and $\delta_0>0$. Then for any $T>0$, there exists at most one weak solution $(\omega,\{h_{k}\})$ to \eqref{syst:1} on $[0,T]$ with this initial condition. \end{theorem} The proof of Theorem~\ref{theorem:main-several} is a straightforward adaptation of the uniqueness proof given for the vortex-wave system in \cite{LM} when the vorticity is constant for all time in the neighborhood of the point vortices. Hence the main difficulty in proving uniqueness under Assumption~\eqref{constant} is to establish the last point of Theorem~\ref{theorem:main-several-existence-lagrangian}. Theorem~\ref{theorem:main-several} together with Theorem~\ref{theorem:main-several-existence} thus implies global existence and uniqueness if all the $\gamma_{k}$ have the same sign, and existence and uniqueness up to the first collision otherwise. The plan of this paper is as follows.
In the next section, we prove Theorem~\ref{theorem:main-several-existence} after collecting a few well-known properties. Then in Section~\ref{sec:lagrangian} we establish Theorem~\ref{theorem:main-several-existence-lagrangian}. Finally, in Section~\ref{sec:final-proof} we show how it implies Theorem~\ref{theorem:main-several} by adapting the arguments of \cite{LM, MiotThesis}. For simplicity we focus on the case of a single point, but the case of $N\geqslant 1$ points is similar. The last section is devoted to some additional properties satisfied by solutions of System~\eqref{syst:1}. With respect to the above-mentioned previous works, the main novelty for the proofs here is the use of a new local energy functional \begin{equation}\label{defi:energy-intro} F_k(t)=\sum_{j=1}^N \frac{\gamma_j}{2\pi} \ln |X(t,x)-h_j(t)|+\varphi(t,X(t,x))+\langle X(t,x),\dot{h}_k^\perp(t)\rangle, \end{equation} defined as long as $X(t,x)\neq h_j(t)$, where $\varphi$ is the stream function associated to $u$ (namely $u=\nabla^\perp \varphi$, see \eqref{def:stream}). It turns out that the last two terms in the definition \eqref{defi:energy-intro} are uniformly bounded. Hence controlling the distances between the fluid particles and the massive point vortices (thus controlling the behavior of $\omega(t,\cdot)$ near those points) is made possible by proving that $F_k(t)$ is bounded.
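Let us make this mechanism explicit in a heuristic form, under the assumptions (to be justified later) that $|F_k(t)|\leqslant C$, that $X(t,x)$ remains in a fixed ball, and that $X(t,x)$ stays at distance at least $\rho/2$ from the other vortices $h_j(t)$, $j\neq k$. Then the last two terms in \eqref{defi:energy-intro} and the logarithmic terms with $j\neq k$ are bounded, so that
\begin{equation*}
\frac{|\gamma_k|}{2\pi}\,\bigl|\ln |X(t,x)-h_k(t)|\bigr|\leqslant C',
\quad\text{hence}\quad
|X(t,x)-h_k(t)|\geqslant e^{-2\pi C'/|\gamma_k|}>0.
\end{equation*}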
In the case of one point vortex, the following formal computation on the derivative of $F(t):=F_1(t)$ shows that the most singular terms cancel, which motivates our definition \eqref{defi:energy-intro}\footnote{We set $X=X(t,x)$ and $h=h(t)$ for clarity.}: \begin{equation*} \begin{split} F'=&\left\langle \frac{\gamma}{2\pi}\frac{X-h}{|X-h|^2},u(t,X)+\gamma K(X-h)-\dot{h}\right\rangle \\ &+\partial_t \varphi(t,X)+\langle \dot{X},\nabla\varphi(t,X)\rangle+\langle \dot{X}, \dot{h}^\perp \rangle+\langle X,\ddot{h}^\perp\rangle\\ =&\left\langle \frac{\gamma}{2\pi}\frac{X-h}{|X-h|^2},u(t,X)-\dot{h}\right\rangle +\partial_t \varphi(t,X)-\langle \dot{X},u^\perp(t,X)\rangle+\langle \dot{X}, \dot{h}^\perp \rangle+\langle X,\ddot{h}^\perp\rangle. \end{split} \end{equation*} Since \begin{equation*} \frac{\gamma}{2\pi}\frac{X-h}{|X-h|^2}=-\dot{X}^\perp+u^\perp(t,X), \end{equation*} we observe that the singular terms in the previous expression actually cancel. Finally, we get \begin{equation*} F'(t)= -\langle u^\perp(t,X),\dot{h}\rangle +\partial_t \varphi(t,X)+\langle X,\ddot{h}^\perp\rangle. \end{equation*} Thus it only remains to notice that this expression only involves bounded terms so that $|F'(t)|\leqslant C$ on $[0,T]$, as wanted. The rigorous proof of this bound for several points will be established in Proposition~\ref{prop:Fi}. \noindent \textbf{Notations. } From now on $C$ will refer to a constant depending only on $T$, on $\rho$, on $\|h_k\|_{W^{2,\infty}([0,T])}$, and on the initial data ($R_0$, $m_k$, $\gamma_k$, $h_{k,0}$, $\ell_{k,0}$ and $\|\omega_0\|_{L^\infty}$), but not on $\delta_0$. Its value may change from one line to another. \noindent {\bf Acknowledgements.} The authors are partially supported by the Agence Nationale de la Recherche, Project SINGFLOWS, grant ANR-18-CE40-0027-01. C.L. was also partially supported by the CNRS, program Tellus, and the ANR project IFSMACS, grant ANR-15-CE40-0010. E.M.
acknowledges French ANR project INFAMIE ANR-15-CE40-01. Both authors warmly thank Olivier Glass and Franck Sueur for interesting discussions. They also warmly thank the anonymous referee for suggesting a simplification of the proof of Proposition~\ref{prop:ineg:flot}, which appears in the present paper. \section{Proof of Theorem~\ref{theorem:main-several-existence}} \label{sec:existence} \subsection{Some general regularity properties} We start with the following well-known property, see \cite[Appendix 2.3]{MarPul} for instance. \begin{proposition}\label{prop:reg-u} Let $\Omega\in L^\infty([0,T],L^1\cap L^\infty({\mathbb R}^2))$. Let $U=K\ast \Omega$. Then we have $$\|U\|_{L^\infty}\leqslant C\|\Omega\|_{L^\infty(L^1)}^{1/2}\|\Omega\|_{L^\infty(L^\infty)}^{1/2}.$$ Moreover, $U$ is log-Lipschitz uniformly in time: \begin{equation*} \|U(\cdot,x)-U(\cdot,y)\|_{L^\infty(0,T)}\leqslant C(\|\Omega\|_{L^\infty(L^1\cap L^\infty)}) |x-y|(1+|\ln|x-y||),\quad \forall (x,y)\in {\mathbb R}^2\times {\mathbb R}^2. \end{equation*} \end{proposition} We also have the Calder\'on-Zygmund inequality \cite[Chapter II, Theorem 3]{stein}: \begin{proposition}\label{prop:calderon} There exists $C$ such that for all $p\geqslant 2$ \begin{equation*} \|\nabla U(t,\cdot)\|_{L^p}\leqslant Cp \|\Omega(t,\cdot)\|_{L^p}\quad \text{for all } t\in [0,T]. \end{equation*} \end{proposition} In particular, it follows that any such velocity field satisfies \begin{equation} \label{regu:field} U\in L^\infty([0,T]\times {\mathbb R}^2)\cap L^\infty([0,T],W^{1,1}_{\text{loc}}({\mathbb R}^2)),\quad \text{div}(U)=0. \end{equation} \subsection{Some basic properties for weak solutions of \eqref{syst:1}-\eqref{syst:2}} \label{subsec:basic} In all this paragraph, $(\omega, \{h_k\})$ denotes a weak solution of \eqref{syst:1}-\eqref{syst:2} on $[0,T]$, so that in particular $u$ satisfies Proposition~\ref{prop:reg-u} and the regularity property \eqref{regu:field}.
We assume \emph{moreover} that $\omega(t,\cdot)$ is compactly supported in some $B(0,R)$ for all $t\in [0,T]$. We introduce the stream function \begin{equation}\label{def:stream} \varphi(t,x)=\frac{1}{2\pi}\int_{{\mathbb R}^2}\ln |x-y|\omega(t,y)\,dy, \end{equation} so that \begin{equation*} u(t,x)=\nabla^\perp \varphi(t,x). \end{equation*} For the subsequent computations, in order to make the arguments rigorous, we introduce a regularized version of the stream function: for $\varepsilon>0$ and $\ln_\varepsilon$ a smooth function coinciding with $\ln$ on $[\varepsilon,+\infty)$ and satisfying $|\ln_\varepsilon'(r)|\leqslant C/\varepsilon$ for all $r>0$, we set \begin{equation}\label{def:stram-2} \varphi_\varepsilon(t,x)=\frac{1}{2\pi}\int_{{\mathbb R}^2} \ln_\varepsilon |x-y|\omega(t,y)\,dy. \end{equation} Note that by assumption on the support of $\omega(t,\cdot)$ the following estimate holds for $\varphi_\varepsilon$: \begin{equation} \label{est:phi} |\varphi_\varepsilon(t,x)|\leqslant C\ln (2+|x|), \end{equation} with $C$ also independent of $\varepsilon$. The following bound will be useful to control the local energies in Proposition~\ref{prop:Fi}: \begin{proposition}\label{prop:ineq:partialt-several} There exists $C_R$ depending only on $R$, $\|\omega\|_{L^\infty}$, $\|h_k\|_{L^\infty}$, and the initial data, such that \begin{equation*} \| \partial_t \varphi_\varepsilon \|_{L^\infty} \leqslant C_R.
\end{equation*} \end{proposition} \begin{proof} Using the weak formulation for $\omega$ in \eqref{syst:1}, we have \begin{multline*} \partial_t \varphi_\varepsilon(t,x)\\ =\frac{1}{2\pi}\int_{{\mathbb R}^2} \ln_\varepsilon'(|x-y|)\frac{y-x}{|y-x|}\cdot\left(u(t,y)+\sum_{k=1}^N \gamma_k K(y-h_k(t))\right)\omega(t,y)\,dy, \end{multline*} therefore \begin{align*} | \partial_t \varphi_\varepsilon(t,x)| \leqslant& C\|u\|_{L^\infty} \int_{{\mathbb R}^2} \frac{|\omega(t,y)|}{|x-y|}\,dy\\ &+C \sum_{k=1}^N \int_{{\mathbb R}^2} \left|\frac{(x-y)\cdot(y-h_k(t))^\perp}{|x-y|^2|y-h_k(t)|^2} \omega(t,y)\right|\,dy. \end{align*} By the estimates (1.39) to (1.43) in \cite{Marchioro}, there exists a constant $C_R$ depending only on $R$, on $\|\omega \|_{L^\infty}$, and on $\|h_k\|_{L^\infty}$, such that \begin{equation*}\label{ineq:partialt} | \partial_t \varphi_\varepsilon(t,x)|\leqslant C_R. \end{equation*} The conclusion follows. \end{proof} In the previous computation, we needed the smoothness of $\ln_{\varepsilon}$ in order to use the weak formulation for $\omega$. This explains why we have to replace $\varphi$ in the definition of $F_{k}$ \eqref{defi:energy-intro} by $\varphi_{\varepsilon}$ (see \eqref{def-Fk}) when we compute the derivative. In the following proposition we state that $\nabla^\perp\varphi_{\varepsilon}$ uniformly approximates $u=\nabla^\perp\varphi$. \begin{proposition}\label{prop:reste-vitesse} We have \begin{equation*}\nabla^\perp \varphi_\varepsilon=u+R_\varepsilon, \end{equation*} where $R_\varepsilon$ satisfies \begin{equation*} \|R_\varepsilon \|_{L^\infty}\leqslant \varepsilon C\|\omega\|_{L^\infty}.
\end{equation*} \end{proposition} \begin{proof} We have \begin{align*} \nabla^\perp \varphi_\varepsilon(t,x)&=u(t,x)+\int_{{\mathbb R}^2}\frac{(x-y)^\perp}{|x-y|^2}\left( |x-y|\ln'_\varepsilon|x-y|-1\right)\omega(t,y)\,dy\\ &=u(t,x)+R_\varepsilon(t,x), \end{align*} where \begin{equation*} |R_\varepsilon(t,x)|\leqslant C\int_{|y-x|\leqslant \varepsilon}\frac{1}{|x-y|}|\omega(t,y)|\,dy\leqslant C\|\omega(t,\cdot)\|_{L^\infty}\varepsilon. \end{equation*} \end{proof} \subsection{Proof of Theorem~\ref{theorem:main-several-existence} } The proof of Theorem~\ref{theorem:main-several-existence} is divided into two steps. \textbf{Step 1: iterative scheme.} Let $\rho\in (0,\min_{k\neq p} |h_{k,0}-h_{p,0}|)$, to be fixed later. We consider the following iterative scheme: for $n\in \mathbb{N}^\ast$, given $$ \omega_{n-1}\in L^\infty([0,T_{n-1}],L^1\cap L^\infty({\mathbb R}^2)), $$ and given $N$ trajectories $h_{k,n-1}$ in $C^2([0,T_{n-1}])$ such that $$\min_{t\in [0,T_{n-1}]}\min_{k\neq p} |h_{k,n-1}(t)-h_{p,n-1}(t)|>0,$$ for some $T_{n-1}>0$, we set $$u_{n-1}=K\ast \omega_{n-1},$$ with the aim of solving the linear PDE \begin{equation}\label{syst:3}\begin{cases} \displaystyle \partial_t \omega_{n}+\operatorname{div} \Bigg[\Big(u_{n-1}+\sum_{k=1}^N\frac{\gamma_k}{2\pi} \frac{(x-h_{k,n-1}(t))^\perp}{|x-h_{k,n-1}(t)|^2}\Big) \omega_{n}\Bigg]=0\\ \omega_{n}(0)=\omega_0,\end{cases}\end{equation} and the nonlinear system of ODEs: for $k=1,\ldots,N$, \begin{equation}\label{syst:3-ODE} \left\{ \begin{aligned} & \displaystyle \ddot{h}_{k,n}(t)=\frac{\gamma_k}{m_{k}}\Big(\dot{h}_{k,n}(t)-u_{n-1}(t,h_{k,n}(t))\\ &\hspace{3cm}-\sum_{j\neq k}\frac{\gamma_j}{2\pi}\frac{(h_{k,n}(t)-h_{j,n}(t))^{\perp}}{|h_{k,n}(t)-h_{j,n}(t)|^2}\Big)^\perp\\ &(h_{k,n}(0),\dot{h}_{k,n}(0))=(h_{k,0},\ell_{k,0}), \end{aligned}\right.
\end{equation} on $[0,T_{n}]$, where $T_{n}\in (0,T_{n-1}]$ will be chosen such that \begin{equation}\label{syst:3-dist} \min_{t\in [0,T_{n}]} \min_{k\neq p} |h_{k,n}(t)-h_{p,n}(t)|\geqslant \rho. \end{equation} For $n=0$ we take $\omega_0$ and $(h_{k,0},\ell_{k,0})$ as data (with $T_{0}=+\infty$). \begin{proposition} \label{prop:n} For all $n\in \mathbb{N}$, there exist $T_{n}\in (0,T_{n-1}]$ and a unique weak solution $\omega_{n}$ to \eqref{syst:3} and $\{h_{k,n}\}$ to \eqref{syst:3-ODE} on $[0,T_{n}]$ such that \eqref{syst:3-dist} is satisfied. Moreover, $$\|\omega_{n}(t,\cdot ) \|_{L^1\cap L^\infty}\leqslant \|\omega_{0}\|_{L^1\cap L^\infty} \quad \forall t\in [0,T_{n}]$$ and there exists $\widetilde{T}$ depending only on $\rho$, $h_{k,0},\ell_{k,0}$, $R_0$ and $\|\omega_{0}\|_{L^\infty}$ such that $T_{n}\geqslant \widetilde{T}$ for all $n$. Finally, if all the $\gamma_k$ have the same sign, then for any $T>0$, one can choose $\rho$ depending on $T$ (and on $h_{k,0}$, $\ell_{k,0}$, $\|\omega_{0}\|_{L^\infty}$ and $R_0$) such that $T_n= T$ for all $n\in \mathbb{N}$. \end{proposition} \begin{proof} Given $(\omega_{n-1},\{h_{k,n-1}\})$ satisfying the bound of Proposition~\ref{prop:n}, we solve the \emph{linear} transport equation \eqref{syst:3} with initial data $\omega_0$ and velocity field given by $$v_{n-1}(t,x)=u_{n-1}(t,x)+\sum_{k=1}^N\frac{\gamma_k}{2\pi} \frac{(x-h_{k,n-1}(t))^\perp}{|x-h_{k,n-1}(t)|^2}.$$ The existence of such a weak solution $\omega_{n}\in L^\infty([0,T_{n-1}],L^1\cap L^\infty({\mathbb R}^2))$ follows from classical arguments for linear transport equations.
For the uniqueness issue, we refer to Lemma~\ref{appendix:1} (derived from \cite[Chapter 1]{MiotThesis}), which proves that any field $v_{n-1}$ given as above, with $u_{n-1}$ satisfying the regularity property \eqref{regu:field} and with the maps $h_{k,n-1}$ Lipschitz continuous and not intersecting on $[0,T_{n-1}]$, has the renormalization property (see \cite[Definition 1.5]{DeLellis} for the definition of renormalization). By the usual arguments for linear transport equations, see \cite{dip-lions}, uniqueness therefore holds in $L^\infty([0,T_{n-1}], L^1\cap L^\infty({\mathbb R}^2))$ for the linear transport equation associated to $v_{n-1}$. Moreover, it follows from Corollary~\ref{coro:appendix} in the Appendix that the norms $\|\omega_{n}(t,\cdot)\|_{L^p}$ are constant in time for all $p$; therefore we get the desired bound for $\|\omega_{n}\|_{L^1\cap L^\infty}$. Recalling Proposition~\ref{prop:reg-u}, it follows that $\|u_n\|_{L^\infty}\leqslant C.$ Furthermore, the weak time continuity for $\omega_n$ established in \cite[Proposition 4.1]{LM} (see also \cite{MiotThesis}) implies that $u_n$ is uniformly continuous in space-time. Next, in view of the log-Lipschitz property and the time regularity for $u_{n-1}$, Osgood's lemma ensures that there exists a unique solution $\{h_{k,n}\}$ to \eqref{syst:3-ODE} on some maximal open interval $I_n\subset [0,T_{n-1}]$ such that \[ \min_{k\neq p} |h_{k,n}(t)-h_{p,n}(t)| > 0, \quad \forall t\in I_n. \] We then consider $T_n\leqslant T_{n-1}$ such that $[0,T_n)\subset I_n$ and $T_n$ is the largest time for which \eqref{syst:3-dist} holds: \[ \min_{k\neq p} |h_{k,n}(t)-h_{p,n}(t)| > \rho,\quad \forall t\in [0,T_n).
\] Taking the scalar product of \eqref{syst:3-ODE} with $\dot{h}_{k,n}(t)$ and using Proposition~\ref{prop:reg-u} and the lower bound \eqref{syst:3-dist}, we get on $[0,T_{n}]$: \[ m_{k} \frac{d |\dot{h}_{k,n}(t)|^2}{dt} \leqslant C |\dot{h}_{k,n}(t)|\leqslant C \big(|\dot{h}_{k,n}(t)|^2+1\big) \] hence we deduce from Gronwall's lemma that \begin{equation}\label{ineg:d-31} |\dot{h}_{k,n}(t)|\leqslant C \quad \text{on } [0,\min (T_{n},1)], \end{equation} (where we emphasize that $C$ depends on $\rho$), so \[ T_{n}\geqslant \widetilde{T}:= \min\Big(1, \frac{\min_{k\neq p} |h_{k,0}-h_{p,0}| -\rho}{2C}\Big). \] It remains to study the case where all $\gamma_k$ have the same sign (say positive), where we have to derive an inequality like \eqref{ineg:d-31} which is independent of $\rho$. We fix $T>0$ and we assume that $T_{n-1}=T$. We want to show that $T_n=T$. In the sequel of this proof, $C$ depends only on the initial data and $T$. We seek a uniform lower bound for the distances $|h_{k,n}-h_{p,n}|$ and a uniform upper bound for $|\dot{h}_{k,n}|$ on $[0,T_n)$. To this end, we introduce the quantity \begin{equation}\label{def-Hn} \mathcal{H}_n(t)=\sum_{j\neq k} \frac{\gamma_j \gamma_k}{2\pi} \ln |h_{j,n}(t)-h_{k,n}(t)|-\sum_{k=1}^N m_k |\dot{h}_{k,n}(t)|^2, \end{equation} defined on $[0,T_n]$. As we shall see below, bounding $|\mathcal{H}_n|$ uniformly with respect to $n$ allows us to obtain the desired bounds on $|\dot{h}_{k,n}|$ and on $|h_{k,n}-h_{p,n}|$.
In order to obtain a suitable estimate on $|\mathcal{H}_n|$, we compute the time derivative: \begin{equation*} \begin{split} \dot{\mathcal{H}}_n&=\sum_{j\neq k}\frac{\gamma_j \gamma_k}{2\pi} (\dot{h}_{k,n}- \dot{h}_{j,n})\cdot \frac{h_{k,n}-h_{j,n}}{|h_{k,n}-h_{j,n}|^2}-2 \sum_{k=1}^N m_k \dot{h}_{k,n}\cdot \ddot{h}_{k,n}\\ &= 2 \sum_{k=1}^N \dot{h}_{k,n}\cdot \left(\gamma_k \sum_{j\neq k} \frac{\gamma_j}{2\pi} \frac{h_{k,n}-h_{j,n}}{|h_{k,n}-h_{j,n}|^2}-m_k \ddot{h}_{k,n}\right), \end{split} \end{equation*} where we have exchanged $j$ and $k$ in order to pass from the first line to the second one. Thus by \eqref{syst:3-ODE}, it only remains: \begin{equation*} \dot{\mathcal{H}}_n(t)=2\sum_{k=1}^N \gamma_k \dot{h}_{k,n}(t)\cdot u_{n-1}(t,h_{k,n}(t))^\perp. \end{equation*} Using the bound $\|u_{n-1}\|_{\infty}\leqslant C$ we get \begin{equation}\label{ineg:H-1} |\dot{\mathcal{H}}_n(t)| \leqslant C \sum_{k=1}^N \gamma_k |\dot{h}_{k,n}(t)|. \end{equation} On the other hand, we notice that for all $k$, for all $t\in [0,T_n]$, using that $\ln |x-y|\leqslant |x|+|y|$ we have \begin{align*} |\dot{h}_{k,n}(t)|\leqslant& C |{\mathcal{H}}_n(t)|^{1/2}+ C\left( \sum_{j=1}^N \gamma_j |h_{j,n}(t)|\right)^{1/2}\\ \leqslant& C |{\mathcal{H}}_n(t)|^{1/2} + C\left( \sum_{j=1}^N \gamma_j |h_{j,n}(0)|+\sum_{j=1}^N \gamma_j \int_0^t |\dot{h}_{j,n}(\tau)|\,d\tau\right)^{1/2}\\ \leqslant& C |{\mathcal{H}}_n(t)|^{1/2} + C\left( \sum_{j=1}^N \gamma_j |h_{j,0}|+\max_k \max_{ [0,T_n]} |\dot{h}_{k,n}|\right)^{1/2}, \end{align*} hence \begin{equation*} \begin{split} \max_k \max_{ [0,T_n]} |\dot{h}_{k,n}|\leqslant &C \max_{ [0,T_n]} |{\mathcal{H}}_n|^{1/2}+C\sqrt{1+\max_k \max_{[0,T_n]} |\dot{h}_{k,n}|}\\ \leqslant&C \max_{ [0,T_n]} |{\mathcal{H}}_n|^{1/2}+C+ \frac{\max_k \max_{[0,T_n]} |\dot{h}_{k,n}|}{2}, \end{split} \end{equation*}where we have used that $C\sqrt{1+a}\leqslant C\sqrt{2}+C^2+a/2$ for $a>0$. 
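For completeness, this elementary inequality follows from $\sqrt{1+a}\leqslant 1+\sqrt{a}$ together with Young's inequality $xy\leqslant \frac{x^2}{2}+\frac{y^2}{2}$, applied with $x=C\sqrt{2}$ and $y=\sqrt{a/2}$:
\begin{equation*}
C\sqrt{1+a}\leqslant C+C\sqrt{a}=C+C\sqrt{2}\,\sqrt{\tfrac{a}{2}}\leqslant C+C^2+\frac{a}{4}\leqslant C\sqrt{2}+C^2+\frac{a}{2}.
\end{equation*}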
Therefore \begin{equation}\label{ineg:H-2} \begin{split} \max_k \max_{ [0,T_n]} |\dot{h}_{k,n}|\leqslant C \max_{ [0,T_n]} |{\mathcal{H}}_n|^{1/2}+C. \end{split} \end{equation} Inserting \eqref{ineg:H-2} in \eqref{ineg:H-1} we also obtain \begin{align*} \max_{[0,T_n]}|\dot{\mathcal{H}}_n| \leqslant& C \left(1+\max_{ [0,T_n]}|{\mathcal{H}}_n|^{1/2}\right), \end{align*} therefore we get \[ \max_{[0,T_n]}|{\mathcal{H}}_n| \leqslant C\quad \text{and} \quad \max_{[0,T_n]}|\dot{\mathcal{H}}_n| \leqslant C. \] Coming back to \eqref{ineg:H-2}, it follows that \begin{equation}\label{ineg:d-2} \max_k \max_{ [0,T_n]} |\dot{h}_{k,n}|\leqslant C , \end{equation} so that \begin{equation*} \max_k \max_{ [0,T_n]} |{h}_{k,n}|\leqslant C . \end{equation*} Finally, by the definition of $\mathcal{H}_n(t)$ and by the previous bounds, using again that $\ln |x-y|\leqslant |x|+|y|$ we have for all $j\neq k$: \begin{align*} \gamma_j \gamma_k\ln |h_{j,n}(t)-h_{k,n}(t)|&\geqslant -2\pi |\mathcal{H}_n(t)| -\sum_{p,\ell =1}^N \gamma_p \gamma_\ell \left(|h_{p,n}(t)|+ |h_{\ell,n}(t)|\right)\\ &\geqslant -C, \end{align*} which means that there exists $\rho>0$ depending only on $T$ and the initial data such that \begin{equation*} \min_{j\neq k} |h_{j,n}(t)-h_{k,n}(t)|> \rho>0, \quad \forall t\in [0,\min (T_{n},T)). \end{equation*} Choosing this $\rho$ from the beginning, we conclude that $T_n= T$, and that the proposition is proved. \end{proof} \textbf{Step 2: Passing to the limit} We only sketch the subsequent arguments. By the previous estimates, extracting if necessary, we find that $\{\omega_n\}_{n\in {\mathbb N}}$ converges to some $\omega$ in $L^\infty$ weak-$\ast$ on $[0,\widetilde{T}]\times {\mathbb R}^2$. Moreover, setting $u=K\ast \omega$, we infer that $\{u_n\}_{n\in {\mathbb N}}$ converges to $u$ locally uniformly on $[0,\widetilde{T}]\times {\mathbb R}^2$ (see for instance \cite[Sect. 6.1]{GLS1}). 
On the other hand, the bounds \eqref{syst:3-dist}-\eqref{ineg:d-31} (or \eqref{ineg:d-2}) imply that each sequence $\{\ddot{h}_{k,n}\}_{n\in {\mathbb N}}$ is uniformly bounded on $[0,\widetilde{T}]$. By Ascoli's theorem, extracting again if necessary, we obtain that each $\{(h_{k,n},\dot{h}_{k,n})\}_{n\in {\mathbb N}}$ converges uniformly to some $(h_k,\dot{h}_k)$ on $[0,\widetilde{T}]$, and passing to the limit in \eqref{syst:3-ODE}, we see that the points $\{h_k\}$ satisfy the desired system of ODEs in \eqref{syst:1}. Note in particular that they satisfy \begin{equation} \label{ineg:d-5} \max_{k}\max_{[0,\widetilde{T}]}|h_k|\leqslant C,\quad \min_{j\neq k} \min_{ [0,\widetilde{T}]} |h_{j}-h_{k}|\geqslant \rho>0 \end{equation} and \begin{equation}\label{ineg:d-6} \max_k \max_{ [0,\widetilde{T}]} |\dot{h}_{k}|\leqslant C. \end{equation} Finally, coming back to \eqref{syst:3}, we can pass to the limit exploiting the previous types of convergence to show that $\omega$ is a weak solution of the first PDE in \eqref{syst:1} on $[0,\widetilde{T}]$. Iterating this construction we reach existence up to the first collision time. If all the circulations have the same sign, we take $T>0$, and we can replace $\widetilde{T}$ by $T$ in all the arguments above since for all $n$ we have a solution $\omega_n$ and $\{h_{k,n}\}$ on $[0,T_n=T]$. This shows that no collision occurs in finite time. \section{Proof of Theorem~\ref{theorem:main-several-existence-lagrangian}} \label{sec:lagrangian} Throughout this section, $\omega$ denotes any weak solution of \eqref{syst:1-2} on $[0,T]$, where $\{h_k\}$ are given trajectories belonging to $W^{2,\infty}([0,T])$. We assume the analogue of the initial condition \eqref{syst:2}: \begin{equation} \label{syst:ini-3} \omega(0,\cdot)=\omega_0\in L^\infty({\mathbb R}^2)\text{ compactly supported in some } B(0,R_0). \end{equation} We assume that no collision occurs, i.e.
\begin{equation}\label{syst:4-dist} \min_{t\in [0,T]} \min_{k\neq p} |h_{k}(t)-h_{p}(t)|\geqslant \rho \end{equation} for some $\rho>0$. The purpose of this section is to show Theorem~\ref{theorem:main-several-existence-lagrangian}. We emphasize that the proof does not use the dynamics of $\{h_k\}$. \subsection{Regular Lagrangian flow}\label{subsec:lagrangian} We show here that there exists a unique regular Lagrangian flow as defined in Definition~\ref{def:lagrangian-flow}. Recall the following general abstract result by Ambrosio \cite[Theorems 3.3 and 3.5]{ambrosio}. Given a vector field $v$ in $L^1_\text{loc}([0,T]\times {\mathbb R}^d)$, if existence and uniqueness for the continuity equation $$\partial_t \omega+\text{div}(v\omega)=0, \quad \omega(0, \cdot)=\omega_0\in L^1\cap L^\infty$$ hold in $L^\infty([0,T],L^1\cap L^\infty)$ then the regular Lagrangian flow $X$ for $v$ exists and is unique, and the unique solution is then given by $\omega(t,\cdot)=X(t,\cdot)_\#\omega_0$. In order to apply this result to the present setting, we introduce the divergence-free field \begin{equation}\label{def:field} v(t,x)=u(t,x)+\sum_{j=1}^N \gamma_j K(x-h_j(t)). \end{equation} By Corollary~\ref{coro:appendix} in the Appendix, the transport equation associated to $v$ admits a unique solution (which is renormalized by Lemma~\ref{appendix:1}: for any continuous function $\beta$ growing not too fast at infinity, the function $\beta(\omega)$ is also a solution). Therefore Ambrosio's result yields the existence and uniqueness of the regular Lagrangian flow $X$ associated to $v$, and we have $\omega(t,\cdot)=X(t,\cdot)_\#\omega_0$. This proves the first claim of Theorem~\ref{theorem:main-several-existence-lagrangian}. Again by Corollary~\ref{coro:appendix}, the renormalization property ensures that \begin{equation}\label{eq:norms}\|\omega(t,\cdot)\|_{L^p}=\|\omega_0\|_{L^p}, \quad 1\leqslant p\leqslant +\infty.
\end{equation} We first derive the following property: \begin{proposition}\label{prop:ineg:flot} There exists $C$ depending only on $T$, on $\|h_k\|_{L^\infty([0,T])}$ and on the initial data such that \begin{equation*}\label{ineg:flot} \sup_{t\in [0,T]}|X(t,x)|\leqslant |x|+ C, \quad \text{ for a.e. } x\in {\mathbb R}^2. \end{equation*} \end{proposition} \begin{proof} Let us introduce the set of tubular neighborhoods $$\Sigma:=\bigcup_{t\in [0,T]}\bigcup_{k=1}^N B(h_{k}(t),1),$$ which is a bounded set, say included in a ball $B(0,C_{1})$. Outside $\Sigma$, the velocity $v$ is uniformly bounded by $C_{2}$ (see Proposition~\ref{prop:reg-u}). Therefore, for any $x\in \mathbb{R}^2$ such that $X(\cdot,x)$ is absolutely continuous on $[0,T]$, the map $t\mapsto X(t,x)$ starts from $x$, has a Lipschitz variation outside $\Sigma$ and may evolve with a diverging velocity inside $\Sigma$, while remaining in the bounded set $B(0,C_{1})$. Thus, setting $C= C_{1}+C_{2}T$ proves the proposition. \end{proof} The following corollary gives the second point in Theorem~\ref{theorem:main-several-existence-lagrangian}. \begin{corollary} \label{coro:support-several} The vorticity $\omega(t,\cdot)$ is compactly supported for all $t\in [0,T]$, with $\operatorname{{supp}}(\omega(t,\cdot))\subset B(0,R_T)$ for some $R_T$ depending only on the initial data, on $\|h_k\|_{L^\infty([0,T])}$ and on $T$. \end{corollary} \begin{proof} We have $\omega(t,\cdot)=X(t,\cdot)_\#\omega_0$ and $\omega_0$ is compactly supported in $B(0,R_0)$, so it follows from Proposition~\ref{prop:ineg:flot} that $\omega(t,\cdot)$ is compactly supported for all $t\in [0,T]$, with $\operatorname{{supp}}(\omega(t,\cdot))\subset B(0,R_T)$ for $R_T= R_0+C$ (with $C$ given in Proposition~\ref{prop:ineg:flot}).
\end{proof} \subsection{Vorticity trajectories} For the third point in Theorem~\ref{theorem:main-several-existence-lagrangian}, we have to show that for almost every $x\in {\mathbb R}^2$, we have $X(t,x)\neq h_k(t)$ for all $t\in [0,T]$ and for all $k=1,\ldots,N$. For almost every $x\in {\mathbb R}^2\setminus \cup_j \{ h_{j,0}\}$ such that $X(\cdot,x)$ is an absolutely continuous solution on $[0,T]$ to the ODE with field $v$ defined in \eqref{def:field}, by time continuity, there exists $T^\ast(x)$ such that $$\min_j \min_{t\in [0,T^\ast(x)]}|X(t,x)-h_j(t)|>0.$$ We may then consider the local microscopic energies near the points $h_k(t)$ on $[0,T^\ast(x)]$: \begin{equation}\label{def-Fk} F_k(t)=\sum_{j=1}^N \frac{\gamma_j}{2\pi} \ln |X(t,x)-h_j(t)|+\varphi_\varepsilon(t,X(t,x))+\langle X(t,x),\dot{h}_k^\perp(t)\rangle, \end{equation} where we recall that $\varphi_\varepsilon$ denotes the regularization of the stream function, see \eqref{def:stram-2}. On the other hand, the result in \cite[Proposition 4.1]{LM} states the continuity of $u$ on $[0,T]\times {\mathbb R}^2$. Therefore, the field $v(\cdot,X(\cdot,x))$ is continuous on $[0,T^\ast(x)]$. So we infer that $X(\cdot,x)$ is differentiable on $[0,T^\ast(x)]$ with $\frac{d}{dt}X(t,x)=v(t,X(t,x))$. This enables us to perform the following estimate on the local energies. \begin{proposition} \label{prop:Fi} We have for $t\in [0,T^\ast(x)]$ and for all $k=1,\ldots, N$, \begin{equation*} |F'_k(t)|\leqslant C\Big(1+|x|+\varepsilon |X(t,x)-h_k(t)|^{-1}+\sum_{j\neq k}|X(t,x)-h_j(t)|^{-1}\Big). \end{equation*} \end{proposition} In the previous statement, $C$ is independent of $\varepsilon$ whereas $F_k$ depends on $\varepsilon$.
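For the reader's convenience, we record the elementary identities satisfied by the rotation $a\mapsto a^\perp$ by $\pi/2$, which will be used repeatedly in the computations below: for all $a,b\in {\mathbb R}^2$,
\begin{equation*}
(a^\perp)^\perp=-a,\qquad \langle a,a^\perp\rangle=0,\qquad \langle a,b^\perp\rangle=-\langle a^\perp,b\rangle.
\end{equation*}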
\begin{proof} In the subsequent proof we set for clarity: $$X=X(t,x),\quad u=u(t,X(t,x)), \quad \varphi_\varepsilon=\varphi_\varepsilon(t,X(t,x)),\quad h_k=h_k(t),$$ and we compute on $[0,T^\ast(x)]$ \begin{align*} F_k'=&\sum_{j=1}^N \left\langle \frac{\gamma_j}{2\pi}\frac{X-h_j}{|X-h_j|^2},\sum_{m=1}^N\gamma_m K(X-h_m)+u-\dot{h}_j\right\rangle \\ &+\partial_t \varphi_\varepsilon+\langle \dot{X},\nabla\varphi_\varepsilon\rangle+\langle \dot{X}, \dot{h}_k^\perp\rangle+\langle X,\ddot{h}_k^\perp\rangle\\ =& \left\langle \sum_{j=1}^N \frac{\gamma_j}{2\pi}\frac{X-h_j}{|X-h_j|^2},\sum_{m=1}^N \frac{\gamma_m}{2\pi}\frac{(X-h_m)^\perp}{|X-h_m|^2} \right\rangle +\sum_{j=1}^N \left\langle \frac{\gamma_j}{2\pi}\frac{X-h_j}{|X-h_j|^2}, u -\dot{h}_j\right\rangle \\ &+\partial_t \varphi_\varepsilon+\langle \dot{X},\nabla\varphi_\varepsilon\rangle+\langle \dot{X}, \dot{h}_k^\perp\rangle+\langle X,\ddot{h}_k^\perp\rangle \\ =& \left\langle \frac{\gamma_k}{2\pi}\frac{X-h_k}{|X-h_k|^2}, u-\dot{h}_k\right\rangle +\sum_{j\neq k} \left\langle \frac{\gamma_j}{2\pi}\frac{X-h_j}{|X-h_j|^2}, u-\dot{h}_j\right\rangle \\ &+\partial_t \varphi_\varepsilon+\langle \dot{X},\nabla\varphi_\varepsilon\rangle+\langle \dot{X}, \dot{h}_k^\perp\rangle+\langle X,\ddot{h}_k^\perp\rangle.
\end{align*} Here, the first scalar product in the second equality vanishes, owing to the antisymmetry relation $\langle a,b^\perp\rangle+\langle b,a^\perp\rangle=0$. Next, using again that $X$ satisfies the ODE with field $v$ defined in \eqref{def:field}, we have \begin{equation}\label{eq:flot} \frac{\gamma_k}{2\pi}\frac{X-h_k}{|X-h_k|^2}=-\dot{X}^\perp+u^\perp-\sum_{j\neq k}\frac{\gamma_j}{2\pi}\frac{X-h_j}{|X-h_j|^2}, \end{equation} hence we get \begin{align*} F'_k=&\langle -\dot{X}^\perp + u^\perp, u-\dot{h}_k\rangle-\sum_{j\neq k}\left\langle \frac{\gamma_j}{2\pi} \frac{X-h_j}{|X-h_j|^2} , u-\dot{h}_k\right\rangle \\ &+\sum_{j\neq k} \left\langle \frac{\gamma_j}{2\pi}\frac{X-h_j}{|X-h_j|^2}, u-\dot{h}_j\right\rangle +\partial_t \varphi_\varepsilon+\langle \dot{X},\nabla\varphi_\varepsilon\rangle+\langle \dot{X}, \dot{h}_k^\perp\rangle+\langle X,\ddot{h}_k^\perp\rangle\\ =&\langle \dot{X}^\perp, - u +\nabla^\perp \varphi_\varepsilon \rangle -\langle u^\perp,\dot{h}_k\rangle \\ &+\sum_{j\neq k}\left\langle \frac{\gamma_j}{2\pi} \frac{X-h_j}{|X-h_j|^2} , \dot{h}_k-\dot{h}_j\right\rangle +\partial_t \varphi_\varepsilon+\langle X,\ddot{h}_k^\perp\rangle. \end{align*} Hence, plugging the equality $\nabla^\perp\varphi_\varepsilon=u+R_\varepsilon$, with $R_\varepsilon$ defined in Proposition~\ref{prop:reste-vitesse}, we have \begin{align*} F'_k=&\langle \dot{X}^\perp,R_\varepsilon \rangle -\langle u^\perp,\dot{h}_k\rangle +\sum_{j\neq k}\left\langle \frac{\gamma_j}{2\pi} \frac{X-h_j}{|X-h_j|^2} , \dot{h}_k-\dot{h}_j\right\rangle +\partial_t \varphi_\varepsilon+\langle X,\ddot{h}_k^\perp\rangle. \end{align*} By Proposition~\ref{prop:reste-vitesse} together with \eqref{eq:norms} and \eqref{eq:flot}, we have on the one hand \begin{equation*} \left|\langle \dot{X}^\perp,R_\varepsilon \rangle\right|\leqslant C \varepsilon \|\omega_0\|_{L^\infty} \left( \sum_{j=1}^N |X-h_j|^{-1}+\|u\|_{L^\infty}\right).
\end{equation*} On the other hand, as $h_k \in W^{2,\infty}$, we obtain by Proposition~\ref{prop:ineg:flot} \begin{multline*} \Big|-\langle u^\perp ,\dot{h}_k\rangle +\sum_{j\neq k}\left\langle \frac{\gamma_j}{2\pi} \frac{X-h_j}{|X-h_j|^2} , \dot{h}_k-\dot{h}_j\right\rangle +\langle X,\ddot{h}_k^\perp\rangle\Big|\\ \leqslant C\Big(\|u\|_{L^\infty}+\sum_{j\neq k}|X-h_j|^{-1}+|x| +1 \Big). \end{multline*} Finally, recalling that $\|\partial_t \varphi_\varepsilon\|_{L^\infty}\leqslant C$ by Proposition~\ref{prop:ineq:partialt-several} and that $\|u\|_{L^\infty}\leqslant C$ by Proposition~\ref{prop:reg-u}, the conclusion follows. \end{proof} \begin{corollary}\label{coro:inf-several}For almost every $x$ in ${\mathbb R}^2$ we can take $T^\ast(x)=T$, more precisely $$\min_j \min_{t\in [0,T]}|X(t,x)-h_j(t)|>0,\quad \text{for a.e. } x\in {\mathbb R}^2.$$ \end{corollary} \begin{proof} We argue by contradiction, assuming that $T^\ast(x)=T$ is impossible for some $x\in {\mathbb R}^2\setminus\{ h_{k,0}\}$ where the flow exists, so that there exist $k\in \{1,\ldots,N\}$ and $\tilde T<T$ such that $\liminf_{t\to \tilde T} |X(t,x)-h_k(t)|=0$ and $\min_j \min_{t\in [0,T^\ast]}|X(t,x)-h_j(t)|>0$ for any $T^\ast<\tilde T$. We further set $X(t)=X(t,x)$. Let $t_n\to \tilde T$ such that $|X(t_n)-h_k(t_n)|\to 0$ as $n\to +\infty$. We recall that $\rho$ is defined by \eqref{syst:4-dist}. For $n$ sufficiently large we have $|X(t_n)-h_k(t_n)|<\rho/K$, with $K>3$ large to be determined later on. We take $t'_n$ maximal such that on $[t_n,t'_n)$ we have $|X(t)-h_k(t)|<\rho/3$. In particular, by \eqref{syst:4-dist}, for $j\neq k$ we have $|X(t)-h_j(t)|\geqslant 2\rho/3$ on $[t_n,t'_n)$. We assume first that $t'_n< \widetilde{T}$: then we have $|X(t'_n)-h_k(t'_n)|=\rho/3$. We fix $n\in {\mathbb N}$. 
For all $\varepsilon>0$, by the definition of $F_k$, we write \begin{equation}\label{esti:Fi} \begin{split} \frac{\gamma_k}{2\pi}& \ln \left(\frac{ |X(t'_n)-h_k(t'_n)|} {|X(t_n)-h_k(t_n)|}\right) =\sum_{j\neq k} \frac{\gamma_j}{2\pi} \ln \left( \frac{|X(t_n)-h_j(t_n)|}{|X(t'_n)-h_j(t'_n)|}\right) +\int_{t_n}^{t'_n}F'_k(\tau)\,d\tau \\ &+\varphi_\varepsilon(t_n,X(t_n))-\varphi_\varepsilon(t'_n,X(t'_n)) +\langle X(t_n),\dot{h}_k^\perp(t_n)\rangle-\langle X(t'_n),\dot{h}_k^\perp(t'_n)\rangle, \end{split} \end{equation} so by \eqref{est:phi}, Proposition~\ref{prop:Fi} and the previous estimates we get \begin{equation}\label{est:aout}\begin{split} \frac{|\gamma_k|}{2\pi} \ln\left(\frac{K}{3}\right)&\leqslant C\Big(1+|x| + \varepsilon\int_{t_n}^{t'_n}|X(\tau)-h_k(\tau)|^{-1}\,d\tau\Big). \end{split} \end{equation} Letting $\varepsilon\to 0$ for fixed $n$, we find \begin{equation*} \ln \left(\frac{K}{3}\right)\leqslant C, \end{equation*} which is a contradiction for $K$ sufficiently large (depending on $x$ and on the initial conditions). So we have $t'_n=\widetilde T$, hence \begin{equation}\label{esti:push} |X(t,x)-h_k(t)|<\frac{\rho}{3}\text{ and } |X(t,x)-h_j(t)|\geqslant \frac{2\rho}{3} \text{ for }j\neq k \text{ on } [t_n,\widetilde{T}). \end{equation} We have therefore localized the fluid trajectory $X(t)$ in the neighborhood of one point vortex trajectory $h_k(t)$, namely we have proved that if the trajectory goes too close to $h_{k}$, it stays in a neighborhood of radius $\rho/3$. We fix $n_0\in \mathbb{N}$ sufficiently large so that $|X(t_{n_0})-h_k(t_{n_0})|<\rho/K$. We come back to \eqref{esti:Fi}, replacing $t'_n$ by any $t\in [t_n, \widetilde{T})$, and we apply again Proposition~\ref{prop:Fi}: \begin{equation*} \ln |X(t)-h_k(t)|\geqslant \ln |X(t_{n_0})-h_k(t_{n_0})|-C\Big(1+ |x| +\varepsilon \int_{t_{n_0}}^t |X(\tau)-h_k(\tau)|^{-1}\,d\tau\Big).
\end{equation*} Letting $\varepsilon \to 0$ we find \begin{equation*} |X(t)-h_k(t)|\geqslant |X(t_{n_0})-h_k(t_{n_0})|e^{-C(1+|x|)}\text{ on } [t_{n_0},\widetilde{T}), \end{equation*} which contradicts the fact that $\liminf_{t\to \widetilde{T}} |X(t,x)-h_k(t)|=0$. Hence we conclude that we may indeed take $T^\ast(x)=T$. \end{proof} \begin{corollary} \label{coro:ODE} For a.e. $x\in {\mathbb R}^2$, the map $ X(\cdot,x)$ is the unique differentiable solution on $[0,T]$ of the ODE $$\frac{d}{dt} \gamma(t)=u(t,\gamma(t))+\sum_{j=1}^N \gamma_j K(\gamma(t)-h_j(t)),\quad \gamma(0)=x,$$ such that $\min_j \min_{[0,T]} |\gamma(t)-h_j(t)|>0.$ \end{corollary} \begin{proof} We gather the already mentioned time continuity of $u$, the log-Lipschitz space regularity for $u$ stated in Proposition~\ref{prop:reg-u}, the no collision property of Corollary~\ref{coro:inf-several}, and the fact that $K$ is Lipschitz away from the origin. Invoking Osgood's Lemma, we can then conclude. \end{proof} We finish this paragraph with an additional estimate on the Lagrangian trajectories, which can be derived easily from the proof of Corollary~\ref{coro:inf-several}. \begin{proposition}\label{prop:dmin-several} Let $\omega$ be any weak solution of \eqref{syst:1-2} on $[0,T]$ with initial datum \eqref{syst:ini-3}, where $\{h_k\}$ are given trajectories in $W^{2,\infty}([0,T])$ satisfying the no collision property \eqref{syst:4-dist}. There exist $0<\delta<\min(\rho/3,1)$, $0<\delta_1<1$ and $0<\delta_2<1$, depending only on $T$, $\|h_k\|_{W^{2,\infty}([0,T])}$, $\rho$, $R_0$ and $\|\omega_0\|_{L^\infty}$, satisfying the following property: Let $x\in \operatorname{{supp}}(\omega_0)$ such that $\min_j \min_{t\in [0,T]} |X(t,x)-h_j(t)|>0$.
If $|X(t_0,x)-h_k(t_0)|<\delta$ for some $t_0\in [0,T]$ and $k\in \{1,\ldots,N\}$, then $$\delta_1|x-h_k(0)|\leqslant |X(t,x)-h_k(t)|<\frac{\rho}{3},\quad \forall t\in [0,T].$$ If $\min_j |X(t_0,x)-h_j(t_0)|>\delta$ for some $t_0\in [0,T]$, then $$\delta_2\leqslant \min_j |X(t,x)-h_j(t)|,\quad \forall t\in [0,T].$$ \end{proposition} \begin{proof} We start with the first estimate. We come back to the proof of Corollary~\ref{coro:inf-several} above, with $t_n$ replaced by $t_0$, $t'_n$ replaced by $T$. With $K>3$ a sufficiently large number to be chosen, we set $\delta= \rho/K$. By \eqref{est:aout} we obtain, using that $|x|\leqslant R_0$ since $x$ belongs to $\operatorname{{supp}}(\omega_0)$: \begin{equation*}\begin{split} \frac{|\gamma_k|}{2\pi} \ln\left(\frac{K}{3}\right)&\leqslant C\Big(1+\varepsilon\int_{t_0}^{T}|X(\tau)-h_k(\tau)|^{-1}\,d\tau\Big). \end{split} \end{equation*} Letting $\varepsilon\to 0$, we find a contradiction if $K$ is sufficiently large (depending only on $T$, $\|h_k\|_{W^{2,\infty}([0,T])}$, $R_0$ and $\|\omega_0\|_{L^\infty}$). Hence by the same arguments as those leading to \eqref{esti:push} we obtain: $$|X(t,x)-h_k(t)|< \frac{\rho}{3},\quad |X(t,x)-h_j(t)|\geqslant \frac{2\rho}{3}\text{ for }j\neq k \text{ on }[t_0,T].$$ We can invoke the same arguments to obtain the estimates above on $[0,t_0]$. Therefore, by Proposition~\ref{prop:Fi}, this yields: $$\lim_{\varepsilon \to 0} \int_0^T|F'_k(\tau)|\,d\tau \leqslant C,$$ so that, using again \eqref{esti:Fi} with $t_n$ replaced by $0$ and $t'_n$ by $t$, we get \begin{equation*} \ln |X(t,x)-h_k(t)|\geqslant \ln |x-h_k(0)|- C \text{ on } [0,T], \end{equation*} for a constant $C$, so the first part is proved by setting $\delta_1=e^{-C}$. We turn now to the second part. Let $\widetilde{K}\geqslant 1$ be a number to be determined later on. Let $(t_1,t_2)\subset (0,T)$ be a maximal interval containing $t_0$ such that $\min_j |X(t,x)-h_j(t)|> \delta/\widetilde{K}$ on $(t_1,t_2)$.
If $(t_1,t_2)\neq (0,T)$, let $k\in \{1,\ldots,N\}$ such that $|X(t_1,x)-h_k(t_1)|=\delta/\widetilde{K}$ (or $|X(t_2,x)-h_k(t_2)|=\delta/\widetilde K$). Repeating the first part of the proof of Corollary~\ref{coro:inf-several} with $t_n=t_0$ and $t'_n=t_1$ (or $t'_n=t_2$), we find $|\ln \widetilde K|\leqslant C$ which is a contradiction provided $\widetilde K$ is sufficiently large (depending only on $T$, $\|h_k\|_{W^{2,\infty}([0,T])}$, $R_0$ and $\|\omega_0\|_{L^\infty}$). So, setting $\delta_2=\delta/\widetilde K$, the conclusion follows. \end{proof} \subsection{Decomposition of the vorticity and reduction to the case of one point vortex.} In all this subsection we assume moreover that $\omega_0$ is constant in a neighborhood of $\{ h_{k}(0) \}$, namely that \eqref{constant} holds. The purpose here is to show the last property of Theorem~\ref{theorem:main-several-existence-lagrangian}: the vorticity remains constant in a neighborhood of each point vortex. To this aim, we will first reduce the problem to the case of one single point vortex. In the next subsection, we will then establish the desired property. Let $\delta$, $\delta_1$ and $\delta_2$ be the constants introduced in Proposition~\ref{prop:dmin-several}. We decompose $\omega_0$ as $$\omega_0=\sum_{k=1}^N \omega_{0,k}+\omega_{0,r},$$ where $$\omega_{0,k}=\omega_0 \mathds{1}_{B(h_{k}(0),\delta)},\quad k=1,\ldots,N$$ and $$\omega_{0,r} \quad \text{is supported in }{\mathbb R}^2\setminus \cup_{j=1}^N B( h_{j}(0),\delta).$$ By uniqueness of the weak solution to the linear transport equation associated to the field \begin{equation*}v(t,x)=u(t,x)+\sum_{j=1}^N \gamma_j K(x-h_j(t)),\end{equation*} (see the Appendix and the beginning of Subsection~\ref{subsec:lagrangian}), $\omega$ may then be decomposed as \begin{equation*} \omega(t,\cdot)=\sum_{k=1}^N X(t,\cdot)_ \#\omega_{0,k}+X(t,\cdot)_\#\omega_{0,r}=\sum_{k=1}^N \omega_k(t,\cdot)+\omega_{r}(t,\cdot). 
\end{equation*} Let $K_{\delta}=1/(2\pi)\nabla^\perp \ln_{\delta}$, where $\ln_\varepsilon$ is defined in Subsection~\ref{subsec:basic}. So $K_\delta$ is a smooth, divergence-free map coinciding with $K$ on ${\mathbb R}^2\setminus B(0,\delta)$ such that $\|K_{\delta}\|_{L^\infty}\leqslant C \delta^{-1}.$ Let $k=1,\ldots,N$. By the first part of Proposition~\ref{prop:dmin-several}, by definition of $\delta$, we have \begin{equation}\begin{split}\label{ineq:gronwall} & |{X}(t,x)-h_k(t)|<\frac{\rho}{3}, \quad \forall t\in [0,T],\quad \text{ for a.e. }x \in \text{supp}(\omega_{0,k}). \end{split} \end{equation} Therefore, \begin{equation*} \min_{j\neq k} |{X}(t,x)-h_j(t)|\geqslant \frac{2\rho}{3}>\delta, \quad \forall t\in [0,T]\quad \text{ for a.e. }x \in \text{supp}(\omega_{0,k}). \end{equation*} So by Corollary~\ref{coro:ODE}, we have \begin{equation}\label{eq:flot-tilde} X(t,x)=\widetilde{X_k}(t,x), \quad \text{ for a.e. }x\in \text{supp}(\omega_{0,k}),\end{equation} where $\widetilde{X}_k$ is the unique regular Lagrangian flow associated to the field \begin{equation}\label{def:drus} \widetilde v_k(t,x)=u(t,x)+\sum_{j\neq k}\gamma_j K_{\delta}(x-h_j(t))+\gamma_k K(x-h_k(t)). \end{equation} In particular, \begin{equation}\label{eq:carre-0} \omega_k(t,\cdot)=X(t,\cdot)_\#\omega_{0,k}=\widetilde{X}_k(t,\cdot)_\#\omega_{0,k}. \end{equation} We observe here for later use that the same argument applied to $\omega^2$ (noting that it is also a distributional solution of \eqref{syst:1-2} with initial datum $\omega_0^2$) yields \begin{equation}\label{eq:carre}\omega_k^2(t,\cdot)=X(t,\cdot)_\#\omega_{0,k}^2=\widetilde{X_k}(t,\cdot)_\#\omega_{0,k}^2,\quad k=1,\ldots,N. 
\end{equation} So we are left with the case of a linear transport equation with field $\widetilde v_k$ given by the superposition \eqref{def:drus} of a regular part \begin{equation}\label{def:push} u_k(t,x)=u(t,x)+\sum_{j\neq k}\gamma_j K_{\delta}(x-h_j(t))\end{equation} and a singular part generated by only one point vortex: $$\gamma_k K(x-h_k(t)).$$ The analysis of this case was performed in \cite{bresiliens-miot}. It was proved in particular that for all $t\in [0,T]$, the regular Lagrangian flow $\widetilde{X}_k$ associated to $\widetilde v_k$ is the limit in $L^1_{\operatorname{{loc}}}({\mathbb R}^2)$ of the sequence $\widetilde{X_{k,n}}(t,\cdot)$, where $\widetilde{X_{k,n}}$ is the flow associated to any regularization of $\widetilde v_k$: $$\widetilde v_{k,n}(t,x)=u_{k,n}(t,x)+\frac{\gamma_k}{2\pi}\frac{(x-h_k(t))^\perp}{|x-h_k(t)|^2+n^{-2}},$$ with $u_{k,n}$ a smooth and divergence-free approximation of $u_k$. By Liouville's theorem, $\widetilde{X_{k,n}}(t,\cdot)$ thus preserves Lebesgue's measure. Moreover, Proposition~\ref{prop:ineg:flot} also applies to $\widetilde{X_{k,n}}$ (with a constant independent of $n$). Therefore, passing to the limit, we conclude that $\widetilde{X_k}(t,\cdot)$ preserves Lebesgue's measure: \begin{equation} \label{prop:lebesgue} \widetilde{X_k}(t,\cdot)_\#dx=dx,\quad \forall t\in [0,T]. \end{equation} We next derive a localization property for $\widetilde{X_k}(t,\cdot)$. \begin{proposition}\label{prop:loc} For all $R>0$, there exists $C_R$ depending only on $R$, $T$, $\|h_k\|_{W^{2,\infty}([0,T])}$, $\rho$, $R_0$ and $\|\omega_0\|_{L^\infty}$, but not on $\delta_0$, such that \begin{multline*} C_R|x-h_k(0)|\leqslant |\widetilde{X_k}(t,x)-h_k(t)|,\\ \forall t\in [0,T],\quad \text{ for all }x\in B(0,R)\setminus\{h_{k}(0)\},\quad k=1,\ldots,N. \end{multline*} \end{proposition} \begin{remark} By \eqref{eq:flot-tilde} and by Proposition~\ref{prop:dmin-several}, we already know that this holds for a.e.
$x$ in $\text{supp}(\omega_{0,k})$. \end{remark} \begin{proof} As long as $\widetilde{X_k}(t,x)\neq h_k(t)$, we introduce the new energy \begin{equation*} \widetilde{F_k}(t)=\frac{\gamma_k}{2\pi}\ln |\widetilde{X_k}(t,x)-h_k(t)|+\varphi_\varepsilon(t,\widetilde{X_k}(t,x))+\psi_{\delta}(t,\widetilde{X_k}(t,x))+\langle \widetilde{X_k}(t,x), \dot{h}_k(t)^\perp\rangle, \end{equation*} where $$ \psi_{\delta}(t,x)=\sum_{j\neq k}\frac{\gamma_j}{2\pi} \ln_{\delta}|x-h_j(t)|,$$ so that $$\nabla^\perp \psi_{\delta}(t,x)=\sum_{j\neq k}\gamma_j K_{\delta}(x-h_j(t)).$$ Exactly as in the proof of Proposition~\ref{prop:Fi}, we compute, recalling the definition \eqref{def:push} of $u_k$, \begin{align*} \widetilde{F_k}'(t)=& -\left\langle u_k^\perp(t,\widetilde{X_k}(t,x)),\dot{h}_k(t)\right\rangle +\partial_t \varphi_\varepsilon(t,\widetilde{X_k}(t,x))+\partial_t \psi_{\delta}(t,\widetilde{X_k}(t,x))\\ &+\left\langle \dot{\widetilde{X_k}}(t,x)^\perp,R_\varepsilon(t,\widetilde{X_k}(t,x))\right\rangle +\left\langle \widetilde{X_k}(t,x),\ddot{h}_k^\perp(t)\right\rangle. \end{align*} Using the uniform bounds on $u_k$, $\partial_t \psi_{\delta}$, $h_j$, $\dot{h}_j$ and $\ddot{h}_j$ for $j=1,\ldots,N$, and using the previous bounds for $\partial_t \varphi_\varepsilon$ and $R_\varepsilon$ we therefore get for all $\varepsilon>0$ \begin{equation*} |\widetilde{F_k}'(t)| \leqslant C\big(|x|+\varepsilon |\widetilde{X_{k}}(t,x)-h_k(t)|^{-1}+1\big). \end{equation*} We may now conclude exactly as in the proof of the first part of Proposition~\ref{prop:dmin-several}: as long as $\widetilde{X_k}(t,x)\neq h_k(t)$, letting $\varepsilon$ tend to zero after integrating the inequality above on $[0,t]$, we get \begin{equation*} \left|\ln \left(\frac{|\widetilde{X_k}(t,x)-h_k(t)|}{|x-h_{k,0}|}\right)\right|\leqslant C(1+|x|),\end{equation*} where $C$ depends on $\delta$, $T$, $\|h_k\|_{W^{2,\infty}([0,T])}$, $\rho$ and on the initial data. So, setting $C_R=e^{-C(1+R)}$, the conclusion follows. 
\end{proof} \begin{proposition}\label{prop:loca-1} We have $$|x-h_{k}(0)|\leqslant C( |\widetilde{X_k}(t,x)-h_k(t)|+1),$$ where $C$ depends only on $T$, $\|h_k\|_{W^{2,\infty}([0,T])}$, $\rho$, $R_0$ and $\|\omega_0\|_{L^\infty}$, but not on $\delta_0$. \end{proposition} \begin{proof} We compute \begin{align*} \frac{d}{dt} |\widetilde{X_k}&(t,x)-h_k(t)|^2\\ &=2\left\langle \widetilde{X_k}(t,x)-h_k(t), u_k(t,\widetilde{X_k}(t,x))+\gamma_k K(\widetilde{X_k}(t,x)-h_k(t))-\dot{h}_k(t)\right\rangle\\ &=2\left\langle \widetilde{X_k}(t,x)-h_k(t), u_k(t,\widetilde{X_k}(t,x))-\dot{h}_k(t)\right\rangle\\ &\geqslant -2|\widetilde{X_k}(t,x)-h_k(t)|(\|\dot{h}_k\|_{L^\infty}+\|u_k\|_{L^\infty}), \end{align*} hence \begin{equation*} \frac{d}{dt} |\widetilde{X_k}(t,x)-h_k(t)| \geqslant -C, \end{equation*} so the conclusion follows. \end{proof} \subsection{The vorticity remains constant in the neighborhood of the point vortices} We finally establish that the vorticity remains constant in a neighborhood of the point vortices. Let $C$ be the constant of Proposition~\ref{prop:loca-1}. We set $$R=2C+\max_{k}|h_{k}(0)|,$$ and we consider the corresponding constant $C_R$ of Proposition~\ref{prop:loc}. We may decrease $\delta_0$ so that $$\max(\delta_0,C_R \delta_0)<\min\left(\delta,\delta_2,1,\frac{2\rho}{3}\right),$$ where we recall that $\delta$ and $\delta_2$ were introduced in Proposition~\ref{prop:dmin-several}. We fix $t\in [0,T]$. We claim that \begin{equation}\label{claim:vanish} \omega_j(t,y)=0,\quad \text{for a.e. } y\in B(h_k(t),C_R\delta_0),\quad \forall j\neq k.\end{equation} Indeed, by \eqref{eq:carre}, considering the $L^1$ function $\varphi_k=\mathds{1}_{B(h_{k}(t),C_R\delta_0)},$ we find \begin{equation*} \int_{{\mathbb R}^2}\omega_j^2(t,y)\varphi_k(y)\,dy =\int_{{\mathbb R}^2}\omega_{0,j}^2(x)\varphi_k({X}(t,x))\,dx.
\end{equation*} On the other hand, for $x\in \text{supp}(\omega_{0,j})$, we have by \eqref{ineq:gronwall} $|{X}(t,x)-h_j(t)|<\rho/3$ and therefore $|{X}(t,x)-h_k(t)|>2\rho /3>C_R\delta_0$. So the right hand side above vanishes, which establishes \eqref{claim:vanish}. Using that $\delta_2>C_R\delta_0$, by the same arguments as above, the second part of Proposition~\ref{prop:dmin-several} yields that \begin{equation}\label{claim:vanish-2} \omega_r(t,y)=0,\quad \text{for a.e. } y\in \bigcup_{k=1}^N B(h_k(t),C_R\delta_0).\end{equation} Finally, we show that \begin{equation}\label{claim:vanish-3} \omega_k(t,y)=\alpha_k,\quad \text{for a.e. } y\in B(h_k(t),C_R\delta_0).\end{equation} Indeed, since $\omega_k(t,\cdot)=\widetilde{X_k}(t,\cdot)_\#\omega_{0,k}$, $\omega_k^2(t,\cdot)=\widetilde{X_k}(t,\cdot)_\#\omega_{0,k}^2$, and $\widetilde{X_k}(t,\cdot)_\#dx=dx$ by \eqref{eq:carre-0}, \eqref{eq:carre} and \eqref{prop:lebesgue}, we compute \begin{align*} \int_{{\mathbb R}^2}(\omega_k(t,y)-\alpha_k)^2\varphi_k(y)\,dy \hspace{-2cm}&\\ =&\int_{{\mathbb R}^2}\omega_k(t,y)^2\varphi_k(y)\,dy-2\alpha_k \int_{{\mathbb R}^2}\omega_k(t,y)\varphi_k(y)\,dy+\alpha_k^2\int_{{\mathbb R}^2}\varphi_k(y)\,dy\\ =&\int_{{\mathbb R}^2}\omega_{0,k}^2(x)\varphi_k(\widetilde{X_k}(t,x))\,dx-2\alpha_k \int_{{\mathbb R}^2}\omega_{0,k}(x)\varphi_k(\widetilde{X_k}(t,x))\,dx\\ &+\alpha_k^2\int_{{\mathbb R}^2}\varphi_{k}(\widetilde{X_k}(t,x))\,dx\\ =&\int_{{\mathbb R}^2}(\omega_{0,k}(x)-\alpha_k)^2\varphi_{k}(\widetilde{X_k}(t,x))\,dx\\ =&\int_{\widetilde{X_k}(t,\cdot)^{-1}(B(h_k(t),C_R\delta_0))}(\omega_{0,k}(x)-\alpha_k)^2\,dx. 
\end{align*} Now we observe that since $C_R\delta_0<1$, by Proposition~\ref{prop:loca-1}, we get $$\widetilde{X_k}(t,\cdot)^{-1}\Big(B(h_k(t),C_R\delta_0) \Big)\subset B(h_{k}(0),2C)\subset B(0,R).$$ Thus, we are allowed to use Proposition~\ref{prop:loc}, and we have for $x\in \widetilde{X_k}(t,\cdot)^{-1}(B(h_k(t),C_R\delta_0))$: $$C_R|x-h_{k}(0)|\leqslant |\widetilde{X_k}(t,x)-h_k(t)|\leqslant C_R\delta_0.$$ We therefore get \begin{equation*} \int_{{\mathbb R}^2}(\omega_k(t,y)-\alpha_k)^2\varphi_k(y)\,dy \leqslant \int_{B(h_{k}(0),\delta_0)}(\omega_0(x)-\alpha_k)^2\,dx=0, \end{equation*} and the conclusion follows. In view of \eqref{claim:vanish}, \eqref{claim:vanish-2} and \eqref{claim:vanish-3}, we finally conclude that \begin{equation} \label{om:constant} \omega(t,\cdot)=\alpha_k,\quad \text{a.e. on } B(h_k(t), C_R\delta_0). \end{equation} \section{Proof of Theorem~\ref{theorem:main-several}} \label{sec:uniqueness-final} \textbf{Step 1: uniqueness in the case of one point vortex.} We start with the case $N=1$. Let $(\omega,h)$ and $(\widetilde{\omega},\widetilde{h})$ be two solutions of \eqref{syst:1} with initial datum $(\omega_0,h_0,\ell_0)$ satisfying the assumption of Theorem~\ref{theorem:main-several}. So Theorem~\ref{theorem:main-several-existence-lagrangian} holds for both solutions: $\omega$ and $\widetilde{\omega}$ remain constant in a neighborhood of the trajectories of $h$ and $\widetilde{h}$. Noting that $u-\widetilde{u}=K\ast(\omega-\widetilde{\omega})$ with $\int (\omega-\widetilde{\omega})=0$ and $\omega,\widetilde{\omega}$ compactly supported, we have $u-\widetilde{u}\in L^2({\mathbb R}^2)$ (see \cite[Proposition 3.3]{majda-bertozzi}) and we may consider the quantity \begin{equation*} D(t)=\|u(t,\cdot)-\widetilde{u}(t,\cdot)\|_{L^2}^2+|h(t)-\widetilde{h}(t)|^2+|\dot{h}(t)-\dot{\widetilde{h}}(t)|^2,\quad t\in [0,T]. \end{equation*} In what follows we establish a Gronwall inequality for $D(t)$.
We remark that the only difference between \eqref{syst:1} and the vortex-wave system \eqref{syst:wv} is the ODE for the point vortex, since the PDE for the vorticity is the same. Thus we may directly use the estimates derived for \eqref{syst:wv} in \cite[Subsection 3.4]{LM} for the quantity $\|u(t,\cdot)-\widetilde{u}(t,\cdot)\|_{L^2}^2$. More precisely, by the estimate (3.9) in \cite{LM} we have for $t\in [0,T^\ast)$ and for all $p\geqslant 2$ \begin{equation*} \|u(t,\cdot)-\widetilde{u}(t,\cdot)\|_{L^2}^2\leqslant C\int_0^t \left( r(\tau)+\sqrt{r(\tau)}f(\sqrt{r(\tau)})+p\:r(\tau)^{1-1/p}\right)\,d\tau, \end{equation*} where $$r(t)=\|u(t,\cdot)-\widetilde{u}(t,\cdot)\|_{L^2}^2+|h(t)-\widetilde{h}(t)|^2,$$ and where $$f(\tau)=\tau |\ln \tau|.$$ Here, $T^\ast\in [0,T]$ is the largest time such that $|h(t)-\widetilde{h}(t)|<\min(1,\delta/2)$ on $[0,T^\ast)$. So using that $r(t)\leqslant D(t)$, and the inequalities $\tau f(\tau)\leqslant f(\tau^2)$, $\tau\leqslant f(\tau)$ for $\tau\leqslant 1$ and $f(\tau)\leqslant p\tau ^{1-1/p}$ (for all $p\geqslant 2$), we get for $t\in [0,T^\ast)$ and for all $p\geqslant 2$ \begin{equation} \label{ineq:vitesse} \|u(t,\cdot)-\widetilde{u}(t,\cdot)\|_{L^2}^2\leqslant C\,p\int_0^t D(\tau)^{1-1/p}\,d\tau. \end{equation} We emphasize that the property obtained in Theorem~\ref{theorem:main-several-existence-lagrangian} is crucial in order to obtain the previous estimate, since it implies in particular that $u-\widetilde{u}$ is harmonic in the neighborhood of $h$ and $\widetilde{h}$. We turn next to the estimate for the point vortices.
We compute \begin{align*} \frac{d}{dt}|h-\widetilde{h}|^2&+\frac{d}{dt}|\dot{h}-\dot{\widetilde{h}}|^2\\ =&2\langle h-\widetilde{h}, \dot{h}-\dot{\widetilde{h}}\rangle -2\frac{\gamma}{m}\langle \dot{h}-\dot{\widetilde{h}},u(t,h)^\perp-\widetilde{u}(t,\widetilde{h})^\perp\rangle\\ \leqslant& D(t)+2\frac{\gamma}{m}\sqrt{D(t)}|u(t,h)-u(t,\widetilde{h})|+2\frac{\gamma}{m}\sqrt{D(t)}|(u-\widetilde{u})(t,\widetilde{h})|. \end{align*} On the one hand, since $u$ is log-Lipschitz we have $|u(t,h)-u(t,\widetilde{h})|\leqslant C f(|h-\widetilde{h}|)\leqslant Cf(\sqrt{D(t)})$. On the other hand, exactly as in Step 2 in the proof of \cite[Proposition 3.10]{LM}, we rely on \cite[Lemma 3.9]{LM}: using the analyticity of $u-\widetilde{u}$ near $h$ and $\widetilde{h}$, that lemma enables us to obtain $$|u(t,h)-\widetilde{u}(t,h)|\leqslant C \|u(t,\cdot)-\widetilde{u}(t,\cdot)\|_{L^2}.$$ Hence we get finally that for all $p\geqslant 2$, \begin{equation}\label{ineq:vortex} \frac{d}{dt}|h-\widetilde{h}|^2+\frac{d}{dt}|\dot{h}-\dot{\widetilde{h}}|^2 \leqslant C f(D(t))\leqslant CpD(t)^{1-1/p},\quad \forall t\in [0,T^\ast). \end{equation} Finally, gathering \eqref{ineq:vitesse} and \eqref{ineq:vortex}, we find $$D(t)\leqslant C\,p \int_0^t D(\tau)^{1-1/p}\,d\tau,\quad \forall p\geqslant 2.$$ So we conclude by usual arguments (see \cite[Chapter 8]{majda-bertozzi}) that $D\equiv 0$ on $[0,T^\ast)$. Thus by definition of $T^\ast$ we get $T^\ast=T$ and uniqueness follows on $[0,T]$. \textbf{Step 2: Proof of Theorem~\ref{theorem:main-several} completed.} Once the case of one point is settled, the conclusion of Theorem~\ref{theorem:main-several} follows easily by adapting the proof above to the case of several points, using \eqref{om:constant}, \eqref{ineg:d-5} and \eqref{ineg:d-6}. We refer also to the proof of uniqueness in \cite[Theorem 2.1, Chapter 2]{MiotThesis} dealing with several points.
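For the reader's convenience, the ``usual arguments'' invoked in Step~1 can be sketched as follows; this is the classical Yudovich-type iteration, and the constant $C$ below is the one appearing in the integral inequality for $D$.

```latex
% Since D(t) \leqslant C\,p \int_0^t D(\tau)^{1-1/p}\,d\tau for every p \geqslant 2,
% a comparison with the maximal solution of z' = C p\, z^{1-1/p}, z(0)=0,
% that is z(t)=(Ct)^p, yields
D(t)\leqslant (Ct)^p,\qquad \forall\, p\geqslant 2.
% On [0,1/(2C)] we have Ct \leqslant 1/2, so letting p\to\infty gives D\equiv 0 there;
% iterating the argument on [1/(2C),2/(2C)], [2/(2C),3/(2C)], \ldots
% covers the whole interval [0,T^\ast).
```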
\section{Some additional properties} \label{sec:final-proof} We prove in this section some additional properties for System~\eqref{syst:1} in the case where the circulations and the vorticity have positive sign. \begin{proposition}\label{prop:energy-cst} Let $\omega_0$ and $(\{h_{k,0}\},\{\ell_{k,0}\})$ be as in \eqref{syst:2} and let $(\omega,\{h_k\})$ be any corresponding weak solution to \eqref{syst:1} on $[0,T]$. The following quantities are conserved: \begin{itemize} \item The energy, \begin{align*} \mathcal{H}_0=&\frac{1}{2\pi}\int_{{\mathbb R}^2} \int_{{\mathbb R}^2}\ln |x-y|\omega(t,y)\omega(t,x)\,dx\,dy+\frac{1}{\pi}\sum_{k=1}^N \gamma_k \int_{{\mathbb R}^2}\ln |x-h_k(t)|\omega(t,x)\,dx\\ &+\sum_{j\neq k}\frac{\gamma_k \gamma_j}{2\pi}\ln |h_k(t)-h_j(t)|-\sum_{k=1}^N m_k |\dot{h}_k(t)|^2. \end{align*} \item The momentum, \begin{equation*} \mathcal{I}_0=\int_{{\mathbb R}^2} |x|^2\omega(t,x)\,dx+\sum_{k=1}^N \gamma_k |h_k(t)|^2-2\sum_{k=1}^N m_k h_k(t)^\perp \cdot \dot{h}_k(t). \end{equation*} \end{itemize} \end{proposition} \begin{proof}(sketch) For $\varepsilon<\frac{1}{3}\min_{j\neq k}\min_{t\in [0,T]}|h_j(t)-h_k(t)|$, we replace $\ln$ by the smooth function $\ln_\varepsilon$ defined in the first section and we set $\varphi_\varepsilon=\frac1{2\pi}\ln_\varepsilon\ast \omega$ as in \eqref{def:stram-2}, so that, setting \begin{align*} \mathcal{H}_\varepsilon=&\int_{{\mathbb R}^2} \varphi_\varepsilon(t,x)\omega(t,x)\,dx+2\sum_{k=1}^N \gamma_k \varphi_\varepsilon(t,h_k(t))\\ &+\sum_{j\neq k}\frac{\gamma_j \gamma_k}{2\pi}\ln_\varepsilon |h_j(t)-h_k(t)|-\sum_{k=1}^N m_k |\dot{h}_k(t)|^2, \end{align*} we have $\sup_{ [0,T]}|\mathcal{H}_0-\mathcal{H}_\varepsilon|\leqslant C\varepsilon$, with the quantity $C$ depending only on $\|\omega\|_{L^\infty}$, $\|h_k\|_{L^\infty}$, $m_k$, $\gamma_k$, etc.
It suffices then to compute the time derivative of $\mathcal{H}_\varepsilon$ using the weak formulation for $\omega$ and the ODE for the $h_k$'s, which yields $\sup_{[0,T]}|\dot{\mathcal{H}}_\varepsilon|\leqslant C\varepsilon$. Letting $\varepsilon$ tend to zero, the conclusion follows. For $\mathcal{I}_0$ we compute directly the time derivative using the weak formulation for $\omega$ and the ODE for the $h_k$'s and we show that it vanishes, which yields the result. \end{proof} With these conserved quantities, we can prove that the massive point vortices are confined if $\omega$ and the $\gamma_{k}$ have the same sign. \begin{corollary}\label{coro:dmin} Assume moreover that $$\omega_0\geqslant 0,\text{ a.e. on }{\mathbb R}^2,\quad \gamma_k>0,\: k=1,\ldots,N.$$ Let $(\omega,\{h_k\})$ be any corresponding weak solution to \eqref{syst:1} on $[0,T]$. Then there exist $C>0$ and $d>0$, depending only on $\mathcal{H}_0$, $\mathcal{I}_0$, $m_k$, $\gamma_k$, $\|\omega_0\|_{L^\infty}$ and $R_0$, but not on $T$, such that \begin{equation*} \sup_{t\in [0,T]}\left(|\dot{h}_k(t)|^2+|{h}_k(t)|^2\right)\leqslant C \end{equation*} and \begin{equation*} \inf_{t\in [0,T]} \min_{j\neq k}|h_j(t)-h_k(t)|\geqslant d. \end{equation*} \end{corollary} \begin{proof} Since $\omega$ is transported by the flow, we have $\omega(t,\cdot)\geqslant 0$ almost everywhere for $t\in [0,T]$.
Similarly to the proof of Proposition~\ref{prop:n}, picking $m\neq n$, we have, using that $\ln(|x-y|)\leqslant |x|+|y|$, \begin{multline*} \frac{\gamma_m \gamma_n}{2\pi}\ln |h_m(t)-h_n(t)|\geqslant \mathcal{H}_0-\frac{1}{2\pi}\int_{{\mathbb R}^2}\int_{{\mathbb R}^2} \big(|x|+|y|\big)\omega(t,y)\omega(t,x)\,dx\,dy\\ -\frac{1}{\pi}\sum_{k=1}^N\gamma_k\int_{{\mathbb R}^2}\big(|x|+|h_k(t)|\big)\omega(t,x)\,dx -\sum_{j\neq k}\frac{\gamma_k \gamma_j}{2\pi}\big( |h_k(t)|+|h_j(t)|\big), \end{multline*} and therefore, by the Cauchy-Schwarz inequality: \begin{equation}\label{ineq:appendix-1} \frac{\gamma_m \gamma_n}{2\pi}\ln |h_m(t)-h_n(t)| \geqslant \mathcal{H}_0-C\left(\int_{{\mathbb R}^2} |x|^2\omega(t,x)\,dx+\sum_{k=1}^N \gamma_k |h_k(t)|^2\right)^{1/2}, \end{equation} where $C$ depends only on $\|\omega(t,\cdot)\|_{L^1}=\|\omega_0\|_{L^1}$ and on $\gamma_k$. By the same estimates we also obtain \begin{equation}\label{ineq:appendix-2} \sum_{k=1}^N m_k |\dot{h}_k(t)|^2\leqslant - \mathcal{H}_0+C\left(\int_{{\mathbb R}^2} |x|^2\omega(t,x)\,dx+\sum_{k=1}^N \gamma_k |h_k(t)|^2\right)^{1/2}. \end{equation} On the other hand, by the Cauchy-Schwarz inequality: \begin{align*} &\int_{{\mathbb R}^2} |x|^2\omega(t,x)\,dx+\sum_{k=1}^N \gamma_k |h_k(t)|^2 \leqslant \mathcal{I}_0+C\left(\sum_{k=1}^N m_k |\dot{h}_k(t)|^2\right)^{1/2}\left(\sum_{k=1}^N \gamma_k |{h}_k(t)|^2\right)^{1/2}\\ \leqslant& \mathcal{I}_0 +C\left(-\mathcal{H}_0+C\left(\int_{{\mathbb R}^2} |x|^2\omega(t,x)\,dx+\sum_{k=1}^N \gamma_k |h_k(t)|^2\right)^{1/2}\right)^{1/2}\left(\sum_{k=1}^N \gamma_k |{h}_k(t)|^2\right)^{1/2}\\ \leqslant& \mathcal{I}_0 +C\left(1+\int_{{\mathbb R}^2} |x|^2\omega(t,x)\,dx+\sum_{k=1}^N \gamma_k |h_k(t)|^2\right)^{3/4} \end{align*} where we have used \eqref{ineq:appendix-2}.
We conclude that \begin{equation*} \int_{{\mathbb R}^2} |x|^2\omega(t,x)\,dx+\sum_{k=1}^N \gamma_k |h_k(t)|^2\leqslant C \end{equation*} with $C$ depending only on $\mathcal{I}_0$, $\mathcal{H}_0$ and $\|\omega_0\|_{L^1}.$ Coming back to \eqref{ineq:appendix-1} and \eqref{ineq:appendix-2}, the conclusion follows. \end{proof} \appendix \section{Some results included in \cite{MiotThesis}} In this Appendix we gather several results from \cite[Chapter 1]{MiotThesis}. Since that reference is in French, we provide here the statements in English and refer to \cite{MiotThesis} for the proofs. Similar results and proofs in the case of one point vortex are also to be found in \cite{LM}. \begin{lemma} \label{appendix:1}Let $\{h_k\}$ be $N$ Lipschitz trajectories on $[0,T]$ without collisions: \begin{equation*} \min_{t\in [0,T]} \min_{k\neq p} |h_{k}(t)-h_{p}(t)|\geqslant \rho \end{equation*} for some $\rho>0$. Let $\omega$ be a weak solution of the PDE \begin{equation} \label{eq:appendix} \partial_t \omega+\operatorname{div}(v \omega)=0\quad \text{on }[0,T], \end{equation}where $v$ is the divergence-free velocity field given by \begin{equation}\label{def:field-appendix} v(t,x)=u(t,x)+\sum_{j=1}^N \gamma_j K(x-h_j(t)), \end{equation} with $u$ a divergence-free vector field satisfying \begin{equation}\label{regu:field-appendix} u\in L^\infty([0,T]\times {\mathbb R}^2)\quad \text{ and }u(t,\cdot)\text{ is log-Lipschitz uniformly in time}. \end{equation} Let $\beta:{\mathbb R} \rightarrow {\mathbb R}$ be $C^1$ such that \begin{equation*} |\beta'(z)|\leqslant C(1+ |z|^p),\qquad \forall z\in {\mathbb R}, \end{equation*} for some $p\geqslant 0$. Then for every test function $\psi \in C_c^\infty([0,T] \times {\mathbb R}^2)$, we have \begin{equation*} \frac{d}{dt}\int_{{\mathbb R}^2} \psi \beta(\omega)\,dx=\int_{{\mathbb R}^2} \beta(\omega) (\partial_t \psi +v\cdot \nabla \psi)\,dx \:\: \text{in }\: L^1([0,T]).
\end{equation*} \end{lemma} This lemma is stated in \cite[Chapter 1, Lemme 1.5]{MiotThesis} in the case where $(\omega, \{h_k\})$ is a weak solution of the vortex-wave system. However, a straightforward adaptation of the proof shows that this holds for the linear transport equation \eqref{eq:appendix} with any vector field $v$ given by the decomposition \eqref{def:field-appendix}, where $u$ satisfies the regularity properties \eqref{regu:field-appendix} and where the $h_j$ are Lipschitz continuous on $[0,T]$ and do not intersect. We emphasize that their precise dynamics is not used to show the renormalization property. As a consequence of Lemma~\ref{appendix:1}, it is observed in \cite[Chapter 1, Remarque 1.3]{MiotThesis} (or in \cite[Lemma 3.2]{LM} for the case of one point) that \begin{corollary}\label{coro:appendix} Under the same assumptions as in Lemma~\ref{appendix:1} for the $\{h_k\}$, let $\omega$ be a weak solution of the PDE \eqref{eq:appendix}. Then for all $1\leqslant p \leqslant +\infty$ we have $\|\omega(t,\cdot)\|_{L^p}=\|\omega(0,\cdot)\|_{L^p}$. In particular, uniqueness of the weak solution holds. \end{corollary} \end{document}
\begin{document} \title{PI regulation control of a 1-D semilinear wave equation} \maketitle \begin{abstract} This paper is concerned with the Proportional Integral (PI) regulation control of the left Neumann trace of a one-dimensional semilinear wave equation. The control input is selected as the right Neumann trace. The control design goes as follows. First, a preliminary (classical) velocity feedback is applied in order to shift all but a finite number of the eigenvalues of the underlying unbounded operator into the open left half-plane. We then leverage the projection of the system trajectories onto an adequate Riesz basis to obtain a truncated model of the system capturing the remaining unstable modes. Local stability of the resulting closed-loop infinite-dimensional system, composed of the semilinear wave equation, the preliminary velocity feedback, and the PI controller, is obtained through the study of an adequate Lyapunov function. Finally, an estimate assessing the set-point tracking performance of the left Neumann trace is derived. \end{abstract} \section{Introduction} Due to its widespread adoption by industry~\cite{aastrom1995pid,astrom2008feedback}, the stabilization and regulation control of finite-dimensional systems by means of Proportional-Integral (PI) controllers has been intensively studied. For this reason, the opportunity of extending PI control strategies to infinite-dimensional systems, and in particular to systems modeled by partial differential equations (PDEs), has attracted much attention in recent years. Efforts in this research direction were originally devoted to the case of bounded control operators~\cite{pohjolainen1982robust,pohjolainen1985robust} and then extended to unbounded control operators~\cite{xu1995robust}. The study of PI control design combined with high-gain conditions was reported in~\cite{logemann1992robust}. 
More recently, the problem of PI boundary control of linear hyperbolic systems has been reported in a number of works~\cite{bastin2015stability,dos2008boundary,lamare2015control,xu2014multivariable}. This research direction has then been extended to the case of nonlinear transport equations~\cite{bastin2019exponential,coron2019pi,martins2014design,hayat2019pi,rodrigues2013lmi,trinh2017design}. The case of the boundary regulation control of the Neumann trace for a linear reaction-diffusion equation in the presence of an input delay was considered in~\cite{lhachemi2019pi}. The case of the boundary regulation control of the boundary velocity for linear damped wave equations, in the presence of a nonlinearity in the boundary conditions, has been considered in~\cite{barreau2019practical,terrand2018regulation}. A general procedure allowing the addition of an integral component for regulation control to open-loop exponentially stable semigroups with unbounded control operators has been proposed in~\cite{terrand2018lyapunov,terrand2019adding}. This paper is concerned with the PI regulation control of the left Neumann trace of a one-dimensional semilinear (undamped) wave equation. The selected control input takes the form of the right Neumann trace. The control design procedure goes as follows. First, inspired by~\cite{coron2006global}, a preliminary (classical) velocity feedback is applied in order to shift all but a finite number of the eigenvalues of the underlying unbounded operator into the open left half-plane. Then, inspired by the early work~\cite{russell1978controllability}, later extended in~\cite{coron2004global,coron2006global,schmidt2006} to semilinear heat and wave PDEs, we leverage the projection of the system trajectories onto a Riesz basis formed by the generalized eigenstructures of the unbounded operator in order to obtain a truncated model capturing the remaining unstable modes. 
Finally, similarly to~\cite{lhachemi2019pi}, this finite-dimensional model is augmented to include the integral component of the PI controller, allowing the computation of a stabilizing feedback. The local stability of the resulting closed-loop infinite-dimensional system, and the subsequent set-point regulation performance, are assessed by a Lyapunov-based argument. The theoretical results are illustrated based on the simulation of an open-loop unstable semilinear wave equation. The paper is organized as follows. The investigated control problem is introduced in Section~\ref{sec: problem setting}. The proposed control design procedure is presented in Section~\ref{sec: control design} in a comprehensive manner. The subsequent stability analysis is carried out in Section~\ref{sec: stability and regulation}, while the theoretical results are numerically illustrated in Section~\ref{sec: numerical illustration}. Finally, concluding remarks are formulated in Section~\ref{sec: conclusion}. \section{Problem setting}\label{sec: problem setting} Let $L>0$ and let $f : \mathbb{R} \rightarrow \mathbb{R}$ be a function of class $\mathcal{C}^2$. We consider the following wave equation on $(0,L)$: \begin{subequations}\label{eq: wave equation} \begin{align} &\dfrac{\partial^2 y}{\partial t^2} = \dfrac{\partial^2 y}{\partial x^2} + f(y) , \\ & y(t,0) = 0 , \qquad \dfrac{\partial y}{ \partial x}(t,L) = u(t) ,\\ & y(0,x) = y_0(x) , \qquad \dfrac{\partial y}{\partial t}(0,x) = y_1(x) , \end{align} \end{subequations} for $t > 0$ and $x \in (0,L)$, where the state is $y(t,\cdot):[0,L]\rightarrow\mathbb{R}$ and where the control input $u(t)$ acts through the right Neumann trace. The control objective is to design a PI controller in order to locally stabilize the closed-loop system and locally regulate the system output selected as the left Neumann trace: \begin{equation}\label{eq: system output} z(t) = \dfrac{\partial y}{ \partial x}(t,0) . 
\end{equation} \begin{definition}\label{def: steady state} A function $y_e \in\mathcal{C}^2([0,L])$ is a steady state of (\ref{eq: wave equation}) with associated constant control input $u_e \in \mathbb{R}$ and constant system output $z_e \in \mathbb{R}$ if \begin{equation*} \begin{split} & \dfrac{\mathrm{d}^2 y_e}{\mathrm{d} x^2}(x) + f(y_e(x)) = 0 , \qquad x \in (0,L), \\ & y_e(0) = 0 , \quad \dfrac{\mathrm{d} y_e}{\mathrm{d} x}(L) = u_e , \\ & z_e = \dfrac{\mathrm{d} y_e}{\mathrm{d} x}(0) . \end{split} \end{equation*} \end{definition} \begin{remark}\label{rem: existence of steady states} Introducing $F(y) = \int_0^y f(s) \,\mathrm{d}s$ for any $y \in \mathbb{R}$, assume that one of the two following properties holds: \begin{itemize} \item $F(y) \rightarrow + \infty$ when $\vert y \vert \rightarrow +\infty$; \item for any $a > 0$, when it makes sense, the integral $\int \frac{\mathrm{d}y}{\sqrt{a-F(y)}}$ diverges at $-\infty$ and $+\infty$. \end{itemize} Then there exists a steady state $y_e \in\mathcal{C}^2([0,L])$ of (\ref{eq: wave equation}) associated with any given value of the system output $z_e \in \mathbb{R}$. Indeed, define $y \in \mathcal{C}^2([0,l))$ with $0 < l \leqslant + \infty$ as the maximal solution of $y''+f(y)=0$ with $y(0)=0$ and $y'(0) = z_e$. We only need to check that $l > L$. Multiplying both sides of the ODE satisfied by $y$ by $y'$ and then integrating over $[0,x]$, we observe that $y$ satisfies the conservation law $y'(x)^2 + 2F(y(x)) = z_e^2$ for all $x \in [0,l)$. Hence any of the two above assumptions implies that $y$ and $y'$ are bounded on $[0,l)$. Thus $l = + \infty$ and the associated steady-state control input is given by $u_e = y'(L)$. 
\end{remark} Given a desired value of the system output $z_e \in\mathbb{R}$ and an associated steady state function $y_e \in \mathcal{C}^2([0,L])$, the control design objective tackled in this paper is to guarantee the local stability of the system (\ref{eq: wave equation}), when augmented with an adequate control strategy, as well as ensuring the regulation performance, i.e., $z(t) = \frac{\partial y}{\partial x}(t,0) \rightarrow z_e$ when $t \rightarrow +\infty$. To achieve this objective, we introduce the following deviations: $y_\delta(t,x) = y(t,x) - y_e(x)$ and $u_\delta(t) = u(t) - u_e$. A Taylor expansion with integral remainder shows that (\ref{eq: wave equation}) can equivalently be rewritten under the form: \begin{subequations}\label{eq: wave equation - variations around equilibrium} \begin{align} &\dfrac{\partial^2 y_\delta}{\partial t^2} = \dfrac{\partial^2 y_\delta}{\partial x^2} + f'(y_e) y_\delta + y_\delta^2 \int_0^1 (1-s) f''(y_e + s y_\delta) \,\mathrm{d}s , \label{eq: wave equation - variations around equilibrium - PDE} \\ & y_\delta(t,0)=0,\qquad \dfrac{\partial y_\delta}{ \partial x}(t,L) = u_\delta(t), \label{eq: wave equation - variations around equilibrium - BC} \\ & y_\delta(0,x) = y_0(x)-y_e(x) , \qquad \dfrac{\partial y_\delta}{\partial t}(0,x) = y_1(x) , \end{align} \end{subequations} for $t > 0$ and $x \in (0,L)$, while the output to be regulated is now expressed as \begin{equation}\label{eq: system output - variations around equilibrium} z_\delta(t) = \dfrac{\partial y_\delta}{ \partial x}(t,0) = z(t) - z_e . \end{equation} Finally, following classical proportional-integral control design schemes, we introduce the following integral component on the tracking error: \begin{equation}\label{eq: integral component zeta} \dot{\zeta}(t) = \dfrac{\partial y_\delta}{ \partial x}(t,0) - z_r(t) = z(t) - ( z_e + z_r(t) ), \end{equation} where $z_r(t) \in \mathbb{R}$ is the reference input signal. 
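The rewriting above rests on the first-order Taylor expansion with integral remainder, $f(y_e + y_\delta) = f(y_e) + f'(y_e)\, y_\delta + y_\delta^2 \int_0^1 (1-s) f''(y_e + s y_\delta) \,\mathrm{d}s$. As a quick numerical sanity check of this identity, the following sketch evaluates both sides for the purely illustrative choices $f = \sin$, $y_e = 0.7$ and $y_\delta = 1.3$ (none of which come from the paper), approximating the remainder integral with a composite midpoint rule.

```python
import math

def remainder_gap(f, fp, fpp, ye, yd, n=20000):
    """Gap between f(ye + yd) and its first-order Taylor expansion at ye
    plus the integral remainder  yd^2 * int_0^1 (1 - s) f''(ye + s*yd) ds,
    the integral being approximated by a composite midpoint rule on n cells."""
    h = 1.0 / n
    integral = h * sum(
        (1.0 - (i + 0.5) * h) * fpp(ye + (i + 0.5) * h * yd) for i in range(n)
    )
    return abs(f(ye + yd) - (f(ye) + fp(ye) * yd + yd ** 2 * integral))

# Illustrative nonlinearity f = sin (so f' = cos, f'' = -sin); values are arbitrary.
gap = remainder_gap(math.sin, math.cos, lambda y: -math.sin(y), ye=0.7, yd=1.3)
```

The gap is of the order of the midpoint-rule quadrature error, $O(n^{-2})$, so it vanishes as the number of subintervals grows.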
\begin{remark} It was shown in~\cite{lhachemi2019pi} for a linear reaction-diffusion equation with Dirichlet boundary control that a simple proportional integral controller can be used to successfully control a Neumann trace. The control design was performed on a finite-dimensional truncated model capturing the unstable modes of the infinite dimensional system while assessing the stability of the full infinite-dimensional system via a Lyapunov-based argument. Such an approach cannot be directly applied to the case of the wave equation studied in this paper due to the fact that, even in the case of a linear function $f$, the open-loop system might exhibit an infinite number of unstable modes. To avoid this pitfall, we borrow the following remark from~\cite{coron2006global}. In the case $f = 0$, the control input $u_\delta(t) = - \alpha \frac{\partial y_{\delta}}{\partial t}(t,L)$, with $\alpha > 0$, ensures the exponential decay of the energy function defined as: \begin{equation*} E(t) = \int_0^L \left( \dfrac{\partial y_\delta}{\partial t}(t,x) \right)^2 + \left( \dfrac{\partial y_\delta}{\partial x}(t,x) \right)^2 \,\mathrm{d}x . \end{equation*} Thus, as suggested in~\cite{coron2006global}, a suitable control input candidate for (\ref{eq: wave equation - variations around equilibrium}) takes the form: \begin{equation}\label{eq: preliminary control input} u_\delta(t) = - \alpha \dfrac{\partial y_\delta}{ \partial t}(t,L) + v(t) , \end{equation} where $\alpha > 0$ is to be selected and $v(t)$ is an auxiliary command input. In particular, it was shown in~\cite{coron2006global} that, in the presence of the nonlinear term $f$, the velocity feedback can be used to locally stabilize all but possibly a finite number of the modes of the system. Then the authors showed that the design of the auxiliary control input $v$ can be performed by pole shifting on a finite dimensional truncated model to achieve the stabilization of the remaining unstable modes. 
The stability of the resulting closed-loop system was assessed via the introduction of a suitable Lyapunov function. In this paper, we propose to take advantage of such a control design strategy in order to achieve the regulation of the left Neumann trace by means of a proportional integral control design scheme via the introduced integral component (\ref{eq: integral component zeta}). \end{remark} \section{Control design}\label{sec: control design} \subsection{Equivalent homogeneous problem} By introducing the change of variable: \begin{equation}\label{eq: change of variable} w^1(t,x) = y_\delta(t,x) , \qquad w^2(t,x) = \dfrac{\partial y_\delta}{\partial t}(t,x) - \dfrac{x}{\alpha L} v(t) , \end{equation} we obtain from the wave equation (\ref{eq: wave equation - variations around equilibrium}), the integral component (\ref{eq: integral component zeta}), and the control strategy (\ref{eq: preliminary control input}) that \begin{subequations}\label{eq: wave equation - equivalent homogeneous problem} \begin{align} & \dfrac{\partial w^1}{\partial t} = w^2 + \dfrac{x}{\alpha L} v(t) , \label{eq: wave equation - equivalent homogeneous problem - PDE 1} \\ & \dfrac{\partial w^2}{\partial t} = \dfrac{\partial^2 w^1}{\partial x^2} + f'(y_e) w^1 + r(t,x) - \dfrac{x}{\alpha L} \dot{v}(t) , \label{eq: wave equation - equivalent homogeneous problem - PDE 2} \\ & \dot{\zeta}(t) = \dfrac{\partial w^1}{ \partial x}(t,0) - z_r(t) , \\ & w^1(t,0) = 0 ,\qquad \dfrac{\partial w^1}{ \partial x}(t,L) + \alpha w^2(t,L) = 0 , \\ & w^1(0,x) = y_0(x)-y_e(x) , \qquad w^2(0,x) = y_1(x) - \dfrac{x}{\alpha L} v(0) , \\ & \zeta(0) = \zeta_0 \end{align} \end{subequations} for $t > 0$ and $x \in (0,L)$, with the residual term \begin{equation}\label{eq: def residual term} r(t,x) = (w^1(t,x))^2 \int_0^1 (1-s) f''(y_e(x) + s w^1(t,x)) \,\mathrm{d}s . 
\end{equation} \begin{remark} A more classical change of variable for (\ref{eq: wave equation - variations around equilibrium}) with control input $u$ given by (\ref{eq: preliminary control input}) is generally obtained by setting \begin{equation}\label{eq: change of variable - classical} w(t,x) = y_\delta(t,x) - \dfrac{x(x-L)}{L} v(t) . \end{equation} In that case, (\ref{eq: wave equation - variations around equilibrium}) with $u$ given by (\ref{eq: preliminary control input}) yields \begin{subequations} \begin{align} & \dfrac{\partial^2 w}{\partial t^2} = \dfrac{\partial^2 w}{\partial x^2} + f'(y_e) w - \dfrac{x(x-L)}{L} \ddot{v}(t) + \left( \dfrac{x(x-L)}{L} f'(y_e) + \dfrac{2}{L} \right) v(t) + r(t,x) , \label{eq: wave equation - equivalent homogeneous problem - classical - PDE} \\ & w(t,0) = 0 , \qquad \dfrac{\partial w}{\partial x}(t,L) + \alpha \dfrac{\partial w}{\partial t}(t,L) = 0 , \\ & w(0,x) = y_0(x) - y_e(x) - \dfrac{x(x-L)}{L} v(0) , \qquad \dfrac{\partial w}{\partial t}(0,x) = y_1(x) - \dfrac{x(x-L)}{L} \dot{v}(0) \end{align} \end{subequations} with \begin{align*} & r(t,x) = \\ & \left( w(t,x) + \dfrac{x(x-L)}{L} v(t) \right)^2 \int_0^1 (1-s) f''\left(y_e(x) + s \left( w(t,x) + \dfrac{x(x-L)}{L} v(t) \right) \right) \,\mathrm{d}s . \end{align*} However, this change of variable (\ref{eq: change of variable - classical}) induces the occurrence of a $\ddot{v}$ term in (\ref{eq: wave equation - equivalent homogeneous problem - classical - PDE}), while only a $\dot{v}$ term appears in (\ref{eq: wave equation - equivalent homogeneous problem - PDE 1}-\ref{eq: wave equation - equivalent homogeneous problem - PDE 2}). Thus, in the subsequent procedure, the consideration of the change of variable (\ref{eq: change of variable}) instead of (\ref{eq: change of variable - classical}) will allow a reduction of the complexity of the controller architecture by avoiding the introduction of an additional integral component. 
\end{remark} We now introduce the Hilbert space \begin{equation*} \mathcal{H} = \left\{ (w^1,w^2) \in H^1(0,L) \times L^2(0,L) \,:\, w^1(0) = 0 \right\} \end{equation*} endowed with the inner product \begin{equation*} \left< (w^1,w^2) , (z^1,z^2) \right> = \int_0^L (w^1)'(x) \overline{(z^1)'(x)} + w^2(x) \overline{z^2(x)} \,\mathrm{d}x . \end{equation*} Defining the following state vector \begin{equation*} W(t) = (w^1(t,\cdot),w^2(t,\cdot)) \in \mathcal{H} , \end{equation*} the wave equation with integral component (\ref{eq: wave equation - equivalent homogeneous problem}) can be rewritten under the abstract form \begin{subequations}\label{eq: wave equation - abstract form} \begin{align} & \dfrac{\mathrm{d} W}{\mathrm{d} t}(t) = \mathcal{A} W(t) + a v(t) + b \dot{v}(t) + R(t,\cdot) , \label{eq: wave equation - abstract form - ODE} \\ & \dot{\zeta}(t) = \dfrac{\partial w^1}{ \partial x}(t,0) - z_r(t) , \label{eq: wave equation - abstract form - zeta} \\ & W(0,x) = \left( y_0(x)-y_e(x) , y_1(x) - \dfrac{x}{\alpha L} v(0) \right) , \\ & \zeta(0) = \zeta_0 , \end{align} \end{subequations} for $t > 0$ and $x \in (0,L)$, where \begin{equation}\label{eq: def operator A} \mathcal{A} = \begin{pmatrix} 0 & \mathrm{Id} \\ \mathcal{A}_0 & 0 \end{pmatrix} \end{equation} with $\mathcal{A}_0 = \Delta + f'(y_e) \,\mathrm{Id}$ on the domain \begin{align*} D(\mathcal{A}) = \{ (w^1,w^2) \in \mathcal{H} \,:\, & w^1 \in H^2(0,L) ,\, w^2 \in H^1(0,L) ,\, \\ & w^2(0) = 0 ,\, (w^1)'(L)+\alpha w^2(L) = 0 \} , \end{align*} and $a,b,R(t,\cdot) \in \mathcal{H}$ are defined by \begin{equation}\label{eq: def a b R} a(x) = ( x/(\alpha L) , 0 ) , \quad b(x) = ( 0 , -x/(\alpha L) ) , \quad R(t,x) = ( 0 , r(t,x) ) . \end{equation} We have $R(t,\cdot) \in \mathcal{H}$ because $r(t,\cdot) \in L^2(0,L)$, which is a direct consequence of the facts that $w^1(t,\cdot) \in H^1(0,L) \subset L^\infty(0,L)$, $f''$ is continuous on $\mathbb{R}$, and $y_e$ is continuous on $[0,L]$. 
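Before turning to the semigroup and spectral properties of $\mathcal{A}$, it may help to note that in the particular linear case $f'(y_e) = 0$ its eigenvalues are available in closed form: the ansatz $w^1 = \sinh(\lambda x)$, $w^2 = \lambda \sinh(\lambda x)$ satisfies the boundary condition $(w^1)'(L) + \alpha w^2(L) = 0$ if and only if $\tanh(\lambda L) = -1/\alpha$, i.e. $\lambda_k = \frac{1}{2L}\ln\left(\frac{\alpha-1}{\alpha+1}\right) + i\frac{k\pi}{L}$, $k \in \mathbb{Z}$. The following minimal numerical sketch (with the purely illustrative values $\alpha = 1.2$ and $L = 1$, not taken from the paper) checks this characteristic equation.

```python
import cmath
import math

# Illustrative values (not from the paper): boundary damping gain and domain length.
alpha, L = 1.2, 1.0

# Common real part of all eigenvalues in the linear case f'(y_e) = 0.
decay = math.log((alpha - 1.0) / (alpha + 1.0)) / (2.0 * L)

def eig(k):
    """k-th eigenvalue lambda_k = decay + i*k*pi/L of the linear operator."""
    return complex(decay, k * math.pi / L)

# Every lambda_k must solve the characteristic equation tanh(lambda*L) = -1/alpha.
residuals = [abs(cmath.tanh(eig(k) * L) + 1.0 / alpha) for k in range(-5, 6)]
```

With these values the common decay rate is about $-1.199 < -1$, which is consistent with the selection of $\alpha$ made below for the semilinear case.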
\begin{remark}\label{rmk: classical solutions} It is well-known that the operator $\mathcal{A}$ generates a $C_0$-semigroup~\cite{tucsnak2009observation}. Moreover, because the Neumann trace is $\mathcal{A}$-admissible, the application of~\cite[Lemma~1]{xu1995robust} shows that the augmentation of $\mathcal{A}$ with the integral component $\zeta$ still generates a $C_0$-semigroup. As $\dot{v}$ is seen as the control input, the state-space vector can further be augmented to include $v$, and the associated augmented operator also generates a $C_0$-semigroup. Now, noting that the residual term (\ref{eq: def residual term}) can be rewritten under the form \begin{equation*} r(t,x) = \int_{y_e(x)}^{w^1(t,x)+y_e(x)} (w^1(t,x)+y_e(x)-s) f''(s) \,\mathrm{d}s , \end{equation*} one can see that $w^1 \mapsto \int_{y_e}^{w^1+y_e} (w^1+y_e-s) f''(s) \,\mathrm{d}s$, when seen as a function from $\{ w^1 \in H^1(0,L) \,:\, w^1(0)=0 \}$ to $L^2(0,L)$, is continuously differentiable. Consequently, the well-posedness of (\ref{eq: wave equation - abstract form}) follows from classical results~\cite{pazy2012semigroups}. In the subsequent developments, we will consider initial conditions $W(0)\in D(\mathcal{A})$, continuously differentiable reference inputs $z_r$, and a control input $\dot{v}$ taking the form of a state feedback; in this setting, we work with the concept of classical solutions of (\ref{eq: wave equation - abstract form}) on the maximal interval of definition $[0,T_{\max})$ with $0 < T_{\max} \leqslant + \infty$, i.e. $W \in \mathcal{C}^0([0,T_{\max});D(\mathcal{A})) \cap \mathcal{C}^1([0,T_{\max});\mathcal{H})$. \end{remark} \subsection{Properties of the operator $\mathcal{A}$} First, we give an explicit expression of the adjoint operator $\mathcal{A}^*$ in the following lemma. 
\begin{lemma}\label{lem: adjoint operator} The adjoint operator of $\mathcal{A}$ is defined on \begin{align*} D(\mathcal{A}^*) = \{ (z^1,z^2) \in \mathcal{H} \,:\, & z^1 \in H^2(0,L) ,\, z^2 \in H^1(0,L) ,\, \\ & z^2(0) = 0 ,\, (z^1)'(L)-\alpha z^2(L) = 0 \} \end{align*} by \begin{equation}\label{eq: adjoint operator} \mathcal{A}^* (z^1,z^2) = ( - z^2 - g , - (z^1)'' ) \end{equation} where $g \in \mathcal{C}^2([0,L])$ is uniquely defined by \begin{align*} & g'' = f'(y_e) z^2 , \\ & g(0) = g'(L) = 0 . \end{align*} \end{lemma} \textbf{Proof.} We write $\mathcal{A} = \mathcal{A}_{tr} + \mathcal{A}_{p}$ with \begin{equation*} \mathcal{A}_{tr} = \begin{pmatrix} 0 & \mathrm{Id} \\ \Delta & 0 \end{pmatrix} ,\qquad \mathcal{A}_{p} = \begin{pmatrix} 0 & 0 \\ f'(y_e) \,\mathrm{Id} & 0 \end{pmatrix} \end{equation*} where $\mathcal{A}_{tr}$ is an unbounded operator defined on the same domain as $\mathcal{A}$ while $\mathcal{A}_{p}$ is defined on $\mathcal{H}$. As $\mathcal{A}_{p}$ is bounded, straightforward computations show that $\mathcal{A}_{p}^* (z^1,z^2) = (-g,0)$ where $g$ is defined as in the statement of the Lemma. It remains to compute $\mathcal{A}_{tr}^*$. To do so, one can observe that $0 \in \rho(\mathcal{A}_{tr})$ with \begin{equation*} \mathcal{A}_{tr}^{-1} w = \left( - \alpha w^1(L) x + \int_0^x\int_L^\xi w^2(s) \,\mathrm{d}s\,\mathrm{d}\xi , w^1 \right) , \quad \forall w = (w^1,w^2) \in \mathcal{H} . \end{equation*} Since $\mathcal{A}_{tr}^{-1}$ is bounded, straightforward computations show that \begin{equation*} \left(\mathcal{A}_{tr}^{-1}\right)^* w = \left( - \alpha w^1(L) x - \int_0^x\int_L^\xi w^2(s) \,\mathrm{d}s\,\mathrm{d}\xi , - w^1 \right) , \quad \forall w = (w^1,w^2) \in \mathcal{H} . \end{equation*} We deduce the claimed result by computing the inverse of the latter operator. \qed The strategy reported in this paper relies on the concept of Riesz bases. This concept is recalled in the following definition. 
\begin{definition}\label{def: Riesz basis} A family of vectors $(e_k)_{k \in \mathbb{Z}}$ of $\mathcal{H}$ is a Riesz basis if this family is maximal and there exist constants $m_R,M_R > 0$ such that for any $N \geqslant 0$ and any $c_{-N},\ldots,c_{N}\in\mathbb{C}$, \begin{equation*} m_R \sum\limits_{\vert k \vert \leqslant N} \vert c_k \vert^2 \leqslant \left\Vert \sum\limits_{\vert k \vert \leqslant N} c_k e_k \right\Vert_\mathcal{H}^2 \leqslant M_R \sum\limits_{\vert k \vert \leqslant N} \vert c_k \vert^2 . \end{equation*} The dual Riesz basis of $(e_k)_{k \in \mathbb{Z}}$ is the unique family of vectors $(f_k)_{k \in \mathbb{Z}}$ of $\mathcal{H}$ such that $\left< e_k , f_l \right>_\mathcal{H} = \delta_{k,l}$, where $\delta_{k,l} = 1$ if $k = l$ and $\delta_{k,l} = 0$ otherwise. \end{definition} We can now introduce the following properties of the operator $\mathcal{A}$. These properties, except the last item, are retrieved from~\cite[Lemmas~2 and~5]{coron2006global}. \begin{lemma}\label{lem: properties A} Let $\alpha > 1$. There exists a Riesz basis $(e_k)_{k \in \mathbb{Z}}$ of $\mathcal{H}$ consisting of generalized eigenfunctions of $\mathcal{A}$, associated to the eigenvalues $(\lambda_k)_{k \in \mathbb{Z}}$ and with dual Riesz basis $(f_k)_{k \in \mathbb{Z}}$, such that: \begin{enumerate} \item $e_k \in D(\mathcal{A})$ and $\Vert e_k \Vert_\mathcal{H} = 1$ for every $k \in \mathbb{Z}$; \item each eigenvalue $\lambda_k$ is geometrically simple; \item there exists $n_0 \geqslant 0$ such that, for any $k \in \mathbb{Z}$ with $\vert k \vert \geqslant n_0+1$, the eigenvalue $\lambda_k$ is algebraically simple and satisfies \begin{equation}\label{eq: lemma properties A - asymptotic behavior eigenvalues} \lambda_k = \dfrac{1}{2L} \log\left(\dfrac{\alpha-1}{\alpha+1}\right) + i \dfrac{k\pi}{L} + O\left( \dfrac{1}{\vert k \vert} \right) \end{equation} as $\vert k \vert \rightarrow + \infty$; \item if $k \geqslant n_0+1$, then $e_k$ (resp. 
$f_k$) is an eigenfunction of $\mathcal{A}$ (resp. $\mathcal{A}^*$) associated with the algebraically simple eigenvalue $\lambda_k$ (resp. $\overline{\lambda_k}$); \item for every $k \geqslant n_0+1$, one has $e_k = \overline{e_{-k}}$ and $f_k = \overline{f_{-k}}$; \item for every $\vert k \vert \leqslant n_0$, there holds \begin{subequations}\label{eq: lemma properties A - vectors for small k} \begin{align} \mathcal{A} e_k & \in \mathrm{span}\{ e_p \,:\, \vert p \vert \leqslant n_0 \} , \\ \mathcal{A}^* f_k & \in \mathrm{span}\{ f_p \,:\, \vert p \vert \leqslant n_0 \} ; \end{align} \end{subequations} \item introducing $e_k = (e_k^1,e_k^2)$, one has $(e_k^1)'(0) = O(1)$ as $\vert k \vert \rightarrow + \infty$. \end{enumerate} \end{lemma} The proof of Lemma~\ref{lem: properties A}, which is essentially extracted from~\cite{coron2006global}, is placed in Annex~\ref{annex: proof lemma} for the self-completeness of the manuscript. In the remainder of the paper, we select the constant $\alpha > 1$ such that \begin{equation*} \frac{1}{2L} \log\left(\frac{\alpha-1}{\alpha+1}\right) < -1 . \end{equation*} Based on the asymptotic behavior (\ref{eq: lemma properties A - asymptotic behavior eigenvalues}), only a finite number of eigenvalues might have a nonnegative real part. Thus, without loss of generality, we also select the integer $n_0 \geqslant 0$ provided by Lemma~\ref{lem: properties A} large enough such that $\operatorname{Re} \lambda_k < -1$ for all $\vert k \vert \geqslant n_0 + 1$. \begin{remark} The state-space can be written as $\mathcal{H} = \mathcal{H}_1 \bigoplus \mathcal{H}_2$ with the subspaces $\mathcal{H}_1 = \mathrm{span}\{ e_p \,:\, \vert p \vert \leqslant n_0 \}$ and $\mathcal{H}_2 = \overline{\mathrm{span}\{ e_p \,:\, \vert p \vert \geqslant n_0 + 1 \}}$. 
Introducing the projectors $\pi_1,\pi_2$ associated with this decomposition, Lemma~\ref{lem: properties A} shows that $\mathcal{A}$ takes the form $\mathcal{A} = \mathcal{A}_1 \pi_1 + \mathcal{A}_2 \pi_2$ where $\mathcal{A}_1 \in \mathcal{L}(\mathcal{H}_1)$ and $\mathcal{A}_2 : D(\mathcal{A}_2) \subset \mathcal{H}_2 \rightarrow \mathcal{H}_2$ with $D(\mathcal{A}_2) = D(\mathcal{A}) \cap \mathcal{H}_2$. Moreover, Lemma~\ref{lem: properties A} shows that $\mathcal{A}_2$ is a Riesz spectral operator. Then, as $D(\mathcal{A}) = \mathcal{H}_1 \bigoplus D(\mathcal{A}_2)$, we obtain that \begin{equation*} D(\mathcal{A}) = \left\{ \sum\limits_{k \in \mathbb{Z}} w_k e_k \,:\, \sum\limits_{k \in \mathbb{Z}} \vert \lambda_k w_k \vert^2 < \infty \right\} \end{equation*} and, for any $w \in D(\mathcal{A})$, \begin{equation}\label{eq: A = A1 + A2} \mathcal{A} w = \mathcal{A}_1 \pi_1 w + \sum\limits_{\vert k \vert \geqslant n_0 + 1} \lambda_k \left< w , f_k \right> e_k , \end{equation} where the equality holds in $\mathcal{H}$-norm. Now, for any $w \in D(\mathcal{A})$, consider the series expansion $w = (w^1,w^2) = \sum_{k \in \mathbb{Z}} \left< w , f_k \right> e_k$. In particular, one has $w^1 = \sum_{k \in \mathbb{Z}} \left< w , f_k \right> e_k^1$ in $H^1$-norm. Moreover, from (\ref{eq: A = A1 + A2}) we have that $\mathcal{A} w = \sum_{k \in \mathbb{Z}} \left< w , f_k \right> \mathcal{A} e_k$ and thus $\mathcal{A}_0 w^1 = \sum_{k \in \mathbb{Z}} \left< w , f_k \right> \mathcal{A}_0 e_k^1$ in $L^2$-norm. Since $f'(y_e) \in L^\infty(0,L)$, the expansion of the latter identity shows that $(w^1)'' = \sum_{k \in \mathbb{Z}} \left< w , f_k \right> (e_k^1)''$ in $L^2$-norm. Consequently, $w^1 = \sum_{k \in \mathbb{Z}} \left< w , f_k \right> e_k^1$ in $H^2$-norm and thus, by the continuous embedding $H^1(0,L) \subset L^\infty(0,L)$, \begin{equation}\label{eq: series expansion Neumann trace} (w^1)'(0) = \sum\limits_{k \in \mathbb{Z}} \left< w , f_k \right> (e_k^1)'(0) . 
\end{equation} The latter series expansion will be used extensively in the remainder of this paper. \end{remark} \subsection{Spectral reduction and truncated model}\label{subsec: spectral reduction and truncated model} Introducing, for every $k \in \mathbb{Z}$, \begin{equation*} w_k(t) = \left< W(t) , f_k \right>_\mathcal{H} , \quad a_k = \left< a , f_k \right>_\mathcal{H} , \quad b_k = \left< b , f_k \right>_\mathcal{H} , \quad r_k(t) = \left< R(t,\cdot) , f_k \right>_\mathcal{H} , \end{equation*} we obtain from (\ref{eq: wave equation - abstract form - ODE}) that \begin{align*} \dot{w}_k(t) & = \left< \mathcal{A}W(t) , f_k \right>_\mathcal{H} + a_k v(t) + b_k \dot{v}(t) + r_k(t) \\ & = \left< W(t) , \mathcal{A}^* f_k \right>_\mathcal{H} + a_k v(t) + b_k \dot{v}(t) + r_k(t) . \end{align*} Recalling that $\mathcal{A}^* f_k = \overline{\lambda_k} f_k$ for $\vert k \vert \geqslant n_0+1$, we obtain that \begin{equation}\label{eq: residual dynamics} \dot{w}_k(t) = \lambda_k w_k(t) + a_k v(t) + b_k \dot{v}(t) + r_k(t) , \qquad \vert k \vert \geqslant n_0+1 . 
\end{equation} Moreover, after a possible linear recombination\footnote{In that case, all the properties stated by Lemma~\ref{lem: properties A} remain true except that $(e_k)_{\vert k \vert \leqslant n_0}$ might not be generalized eigenvectors of $\mathcal{A}$.} of $(e_k)_{\vert k \vert \leqslant n_0}$ and $(f_k)_{\vert k \vert \leqslant n_0}$, which we still denote by $(e_k)_{\vert k \vert \leqslant n_0}$ and $(f_k)_{\vert k \vert \leqslant n_0}$, to obtain matrices with real coefficients, we infer from (\ref{eq: lemma properties A - vectors for small k}) the existence of a matrix $A_0 \in \mathbb{R}^{(2n_0+1)\times(2n_0+1)}$ such that \begin{equation}\label{eq: prel truncated model} \dot{X}_0(t) = A_0 X_0(t) + B_{0,1} v(t) + B_{0,2} \dot{v}(t) + R_0(t) \end{equation} where \begin{equation*} X_0(t) = \begin{bmatrix} w_{-n_0}(t) \\ \vdots \\ w_{n_0}(t) \end{bmatrix} \in \mathbb{R}^{2 n_0 +1} , \quad B_{0,1} = \begin{bmatrix} a_{-n_0} \\ \vdots \\ a_{n_0} \end{bmatrix} \in \mathbb{R}^{2 n_0 +1} , \end{equation*} \begin{equation*} B_{0,2} = \begin{bmatrix} b_{-n_0} \\ \vdots \\ b_{n_0} \end{bmatrix} \in \mathbb{R}^{2 n_0 +1} , \quad R_0(t) = \begin{bmatrix} r_{-n_0}(t) \\ \vdots \\ r_{n_0}(t) \end{bmatrix} \in \mathbb{R}^{2 n_0 +1} . 
\end{equation*} We augment the state-space representation (\ref{eq: prel truncated model}) with the actual control input $v$ as follows: \begin{equation}\label{eq: truncated model} \dot{X}_1(t) = A_1 X_1(t) + B_{1} v_d(t) + R_1(t) \end{equation} where \begin{subequations} \begin{equation}\label{eq: def X1 vd and R1} X_1(t) = \begin{bmatrix} v(t) \\ X_{0}(t) \end{bmatrix} \in \mathbb{R}^{2 n_0 +2} , \quad v_d(t) = \dot{v}(t) , \quad R_1(t) = \begin{bmatrix} 0 \\ R_0(t) \end{bmatrix} \in \mathbb{R}^{2 n_0 +2} \end{equation} \begin{equation}\label{eq: def A1 vd and B1} A_1 = \begin{bmatrix} 0 & 0 \\ B_{0,1} & A_0 \end{bmatrix} \in \mathbb{R}^{(2n_0+2)\times(2n_0+2)} , \quad B_1 = \begin{bmatrix} 1 \\ B_{0,2} \end{bmatrix} \in \mathbb{R}^{2 n_0 +2} . \end{equation} \end{subequations} We now further augment the latter state-space representation to include the integral component of the PI controller. First, we note from (\ref{eq: series expansion Neumann trace}) that the dynamics of the integral component $\zeta$ satisfies \begin{equation*} \dot{\zeta}(t) = \sum\limits_{k \in \mathbb{Z}} w_k(t) (e_k^1)'(0) - z_r(t) . \end{equation*} We observe that the $\zeta$-dynamics involves all the projection coefficients $w_k(t)$, $k \in \mathbb{Z}$, hence cannot be used to augment (\ref{eq: truncated model}). This motivates the introduction of the following change of variable: \begin{align} \xi(t) & = \zeta(t) - \sum\limits_{\vert k \vert \geqslant n_0 + 1} \dfrac{(e_k^1)'(0)}{\lambda_k} w_k(t) \label{eq: def integral conponent xi} \\ & = \zeta(t) - 2 \sum\limits_{k \geqslant n_0 + 1} \operatorname{Re}\left\{ \dfrac{(e_k^1)'(0)}{\lambda_k} w_k(t) \right\} \nonumber \end{align} where, using the Cauchy-Schwarz inequality, the series is convergent because $\vert \lambda_k \vert \sim \frac{\vert k \vert \pi}{L}$ and $(e_k^1)'(0) = O(1)$ as $\vert k \vert \rightarrow + \infty$. 
Moreover, the time derivative of $\xi$ is given by \begin{align*} \dot{\xi}(t) & = \dot{\zeta}(t) - \sum\limits_{\vert k \vert \geqslant n_0 + 1} \dfrac{(e_k^1)'(0)}{\lambda_k} \dot{w}_k(t) \\ & = \sum\limits_{\vert k \vert \leqslant n_0} w_k(t) (e_k^1)'(0) + \alpha_0 v(t) + \beta_0 \dot{v}(t) - \gamma(t) \\ & = L_1 X_1(t) + \beta_0 v_d(t) - \gamma(t), \end{align*} where \begin{subequations} \begin{align} \alpha_0 & = - \sum\limits_{\vert k \vert \geqslant n_0 +1} \dfrac{(e_k^1)'(0)}{\lambda_k} a_k = - 2 \sum\limits_{k \geqslant n_0 +1} \operatorname{Re}\left\{\dfrac{(e_k^1)'(0)}{\lambda_k} a_k\right\} , \\ \beta_0 & = - \sum\limits_{\vert k \vert \geqslant n_0 +1} \dfrac{(e_k^1)'(0)}{\lambda_k} b_k = - 2 \sum\limits_{k \geqslant n_0 +1} \operatorname{Re}\left\{\dfrac{(e_k^1)'(0)}{\lambda_k} b_k\right\} , \\ \gamma(t) & = z_r(t) + \sum\limits_{\vert k \vert \geqslant n_0 +1} \dfrac{(e_k^1)'(0)}{\lambda_k} r_k(t) = z_r(t) + 2 \sum\limits_{k \geqslant n_0 +1} \operatorname{Re}\left\{\dfrac{(e_k^1)'(0)}{\lambda_k} r_k(t)\right\} , \label{eq: def gamma} \end{align} \end{subequations} and \begin{equation*} L_1 = \begin{bmatrix} \alpha_0 & (e_{-n_0}^1)'(0) & \ldots & (e_{n_0}^1)'(0) \end{bmatrix} \in \mathbb{R}^{1 \times (2n_0+2)} . \end{equation*} Thus, with the introduction of \begin{subequations}\label{eq: def X A B and Gamma} \begin{equation}\label{eq: def X and A} X(t) = \begin{bmatrix} X_1(t) \\ \xi(t) \end{bmatrix} \in \mathbb{R}^{2 n_0 + 3} , \quad A = \begin{bmatrix} A_1 & 0 \\ L_1 & 0 \end{bmatrix} \in \mathbb{R}^{(2 n_0 + 3)\times(2 n_0 + 3)} , \end{equation} \begin{equation}\label{eq: def B and Gamma} B = \begin{bmatrix} B_1 \\ \beta_0 \end{bmatrix} \in \mathbb{R}^{2 n_0 + 3} , \quad \Gamma(t) = \begin{bmatrix} R_1(t) \\ -\gamma(t) \end{bmatrix} \in \mathbb{R}^{2 n_0 + 3} , \end{equation} \end{subequations} we obtain the truncated model \begin{equation}\label{eq: final truncated model} \dot{X}(t) = A X(t) + B v_d(t) + \Gamma(t) . 
\end{equation} Putting (\ref{eq: residual dynamics}) and (\ref{eq: final truncated model}) together, we obtain that the wave equation with integral component (\ref{eq: wave equation - abstract form}) admits the following equivalent representation used for both control design and stability analysis: \begin{subequations}\label{eq: wave equation - control design} \begin{align} & \dot{X}(t) = A X(t) + B v_d(t) + \Gamma(t) , \label{eq: wave equation - control design - truncated model} \\ & \dot{w}_k(t) = \lambda_k w_k(t) + a_k v(t) + b_k v_d(t) + r_k(t) , \qquad \vert k \vert \geqslant n_0+1 . \label{eq: wave equation - residual dynamics} \end{align} \end{subequations} \begin{remark} The representation (\ref{eq: wave equation - control design}) shows that the dynamics of the wave equation with integral component (\ref{eq: wave equation - abstract form}) can be split into two parts. The first part, given by (\ref{eq: wave equation - control design - truncated model}), consists of an ODE capturing the unstable dynamics plus a certain number of slow stable modes of the system. The second part, given by (\ref{eq: wave equation - residual dynamics}) and referred to as the residual dynamics, captures the stable dynamics of the system, which are such that $\operatorname{Re}\lambda_k < -1$. The control strategy now consists of the following two steps. First, a state feedback is designed to locally stabilize (\ref{eq: wave equation - control design - truncated model}). Then, a stability analysis is carried out to show that this control strategy achieves both the local stabilization of (\ref{eq: wave equation - control design}) and the output regulation of the Neumann trace (\ref{eq: system output - variations around equilibrium}).
\end{remark} \subsection{Control strategy and closed-loop dynamics}\label{sec_poleshift} The control strategy consists in designing a stabilizing state feedback for (\ref{eq: wave equation - control design - truncated model}). Such a pole shifting is made possible by the following result. \begin{lemma}\label{lem: Kalman condition} The pair $(A,B)$ satisfies the Kalman condition. \end{lemma} \textbf{Proof.} From (\ref{eq: def X A B and Gamma}), the Hautus test easily shows that $(A,B)$ satisfies the Kalman condition if and only if $(A_1,B_1)$ satisfies the Kalman condition and the square matrix \begin{equation*} T = \begin{bmatrix} A_1 & B_1 \\ L_1 & \beta_0 \end{bmatrix} \in \mathbb{R}^{(2 n_0 + 3)\times(2 n_0 + 3)} \end{equation*} is invertible. We first show the following preliminary result: for any $\lambda \in\mathbb{C}$ and $z \in D(\mathcal{A}^*)$, $\left< a + \lambda b , z \right>_\mathcal{H} = 0$ and $\mathcal{A}^* z = \overline{\lambda} z$ imply $z = 0$. Recall from Lemma~\ref{lem: adjoint operator} that $\mathcal{A}^* z = \overline{\lambda} z$ gives \begin{subequations}\label{eq: proof Kalman condition - characterization eigenvector fk} \begin{align} & z^2 + g = - \overline{\lambda} z^1 , \label{eq: proof Kalman condition - characterization eigenvector fk - 1} \\ & (z^1)'' = - \overline{\lambda} z^2 , \label{eq: proof Kalman condition - characterization eigenvector fk - 2} \\ & g'' = f'(y_e) z^2 , \label{eq: proof Kalman condition - characterization eigenvector fk - 3} \\ & z^1(0) = z^2(0) = g(0) = g'(L) = 0 , \label{eq: proof Kalman condition - characterization eigenvector fk - 4} \\ & (z^1)'(L)-\alpha z^2(L) = 0 .
\label{eq: proof Kalman condition - characterization eigenvector fk - 5} \end{align} \end{subequations} From the definition (\ref{eq: def a b R}) of $a,b\in\mathcal{H}$ one has \begin{align*} \left< a + \lambda b , z \right>_\mathcal{H} & = \int_0^L \left( \dfrac{x}{\alpha L} \right)' \overline{(z^1)'(x)} - \lambda \dfrac{x}{\alpha L} \overline{z^2(x)} \,\mathrm{d}x \\ & = \left[ \dfrac{x}{\alpha L} \overline{(z^1)'(x)} \right]_{x=0}^{x=L} - \int_0^L \dfrac{x}{\alpha L} \overline{\left( (z^1)''(x) + \overline{\lambda} z^2(x) \right)} \,\mathrm{d}x \\ & = \dfrac{1}{\alpha} \overline{(z^1)'(L)} \end{align*} where we have used (\ref{eq: proof Kalman condition - characterization eigenvector fk - 2}). Thus we have $(z^1)'(L) = 0$. Then (\ref{eq: proof Kalman condition - characterization eigenvector fk - 5}) shows that $z^2(L) = 0$ and we infer from (\ref{eq: proof Kalman condition - characterization eigenvector fk - 1}) and (\ref{eq: proof Kalman condition - characterization eigenvector fk - 4}) that $(z^2)'(L) = 0$. Moreover, differentiating (\ref{eq: proof Kalman condition - characterization eigenvector fk - 1}) twice and using (\ref{eq: proof Kalman condition - characterization eigenvector fk - 2}-\ref{eq: proof Kalman condition - characterization eigenvector fk - 3}), we obtain that $(z^2)'' + \left( f'(y_e) - (\overline{\lambda})^2 \right) z^2 = 0$. By Cauchy uniqueness, we deduce that $z^2 = 0$. Using (\ref{eq: proof Kalman condition - characterization eigenvector fk - 2}), (\ref{eq: proof Kalman condition - characterization eigenvector fk - 4}) and $(z^1)'(L) = 0$, we reach the conclusion $z = 0$. Assume now that $(A_1,B_1)$ does not satisfy the Kalman condition.
From (\ref{eq: def A1 vd and B1}), the Hautus test shows the existence of $\lambda \in \mathbb{C}$, $x_1 \in \mathbb{C}$, and $x_2 \in \mathbb{C}^{2n_0+1}$, with either $x_1 \neq 0$ or $x_2 \neq 0$, such that \begin{align*} & x_2^* B_{0,1} = \lambda x_1^* , \\ & x_2^* A_0 = \lambda x_2^* , \\ & x_1^* + x_2^* B_{0,2} = 0 . \end{align*} This implies the existence of $x_2 \neq 0$ such that \begin{align*} & A_0^* x_2 = \overline{\lambda} x_2 , \\ & x_2^* ( B_{0,1} + \lambda B_{0,2} ) = 0 \end{align*} where $B_{0,1} + \lambda B_{0,2} = ( \left< a + \lambda b , f_k \right>_\mathcal{H} )_{-n_0 \leqslant k \leqslant n_0}$. Noting that $A_0^*$ is the matrix of $\mathcal{A}^*$ in $(f_k)_{\vert k \vert \leqslant n_0}$, this shows the existence of a nonzero vector $z \in D(\mathcal{A}^*)$ such that $\left< a + \lambda b , z \right>_\mathcal{H} = 0$ and $\mathcal{A}^* z = \overline{\lambda} z$. The result of the previous paragraph leads to the contradiction $z=0$. Hence $(A_1,B_1)$ does satisfy the Kalman condition. It remains to show that the matrix $T$ is invertible. Let a vector \begin{equation*} X_e = \begin{bmatrix} v_e & w_{-n_0,e} & \ldots & w_{n_0,e} & v_{d,e} \end{bmatrix}^\top \in \mathbb{R}^{2n_0+3} \end{equation*} be an element of the kernel of $T$. Then, by expanding $T X_e = 0$, we obtain that \begin{align*} 0 & = v_{d,e} , \\ 0 & = A_0 \begin{bmatrix} w_{-n_0,e} \\ \vdots \\ w_{n_0,e} \end{bmatrix} + B_{0,1} v_e , \\ 0 & = \sum\limits_{\vert k \vert \leqslant n_0} w_{k,e} (e_k^1)'(0) - \left( \sum\limits_{\vert k \vert \geqslant n_0 +1} \dfrac{(e_k^1)'(0)}{\lambda_k} a_k \right) v_e . \end{align*} We define, for $\vert k \vert \geqslant n_0+1$, $w_{k,e} = - \frac{a_k}{\lambda_k} v_e$. Then, as $(w_{k,e})_{k \in \mathbb{Z}}$ is square summable, we can introduce $w_e = \sum_{k \in \mathbb{Z}} w_{k,e} e_k \in \mathcal{H}$.
We obtain that \begin{align*} 0 & = A_0 \begin{bmatrix} w_{-n_0,e} \\ \vdots \\ w_{n_0,e} \end{bmatrix} + B_{0,1} v_e , \\ 0 & = \lambda_k w_{k,e} + a_k v_e , \quad \vert k \vert \geqslant n_0+1 , \\ 0 & = \sum\limits_{k \in \mathbb{Z}} w_{k,e} (e_k^1)'(0) . \end{align*} In particular $(\lambda_k w_{k,e})_{k \in \mathbb{Z}}$ is square summable and thus $w_e \in D(\mathcal{A})$. The developments of Subsection~\ref{subsec: spectral reduction and truncated model} show that the above system is equivalent to \begin{align*} & \mathcal{A}w_e + a v_e = 0 , \\ & (w_e^1)'(0) = 0 \end{align*} with $w_e = (w_e^1,w_e^2)$. By expanding the former identity, we first have that $(w_e^1)''+f'(y_e)w_e^1=0$ with $w_e^1(0)=(w_e^1)'(0)=0$ and thus, by Cauchy uniqueness, $w_e^1 = 0$. We also have $w_e^2 = \frac{-x}{\alpha L} v_e$ with $v_e = - \alpha w_e^2(L) = (w_e^1)'(L) = 0$, and thus $w_e^2 = 0$. This yields $w_e = 0$, which shows that $w_{k,e} = 0$ for every $k \in \mathbb{Z}$. Hence the kernel of $T$ is reduced to $\{0\}$, which concludes the proof. \qed The result of Lemma~\ref{lem: Kalman condition} ensures the existence of a gain $K \in \mathbb{R}^{1 \times (2 n_0 +3)}$ such that $A_K = A + BK$ is Hurwitz. Then, we can set the state feedback \begin{equation}\label{eq: control input vd} v_d(t) = \dot{v}(t) = K X(t) , \end{equation} which yields the following closed-loop system dynamics: \begin{subequations}\label{eq: wave equation - closed-loop} \begin{align} & \dot{X}(t) = A_K X(t) + \Gamma(t) , \\ & \dot{w}_k(t) = \lambda_k w_k(t) + a_k v(t) + b_k v_d(t) + r_k(t) , \qquad \vert k \vert \geqslant n_0+1 . \end{align} \end{subequations} The objective of the remainder of the paper is to assess the local stability of the closed-loop system, as well as to study the tracking performance.
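The gain $K$ whose existence is guaranteed by Lemma~\ref{lem: Kalman condition} can be computed in practice by any standard pole placement routine. The following minimal sketch (a hypothetical two-state controllable pair, not the truncated pair $(A,B)$ of this paper) illustrates the computation via Ackermann's formula:

```python
# Illustrative pole placement via Ackermann's formula for a hypothetical
# controllable single-input pair (A, B) -- a double integrator, NOT the
# truncated pair (A, B) of the paper.  Once the Kalman condition holds,
# a gain K with A + B K Hurwitz exists and can be computed this way.

def mat_mul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def scal(c, X):
    return [[c * x for x in row] for row in X]

# Double integrator: x_dot = A x + B u.
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0], [1.0]]

# Desired closed-loop poles -1 and -2, i.e. p(s) = s^2 + 3 s + 2.
# Ackermann's formula: K = -[0 1] C^{-1} p(A), with C = [B, A B] the
# controllability matrix.  The sign is chosen so that A + B K is Hurwitz,
# matching the paper's feedback convention v_d = K X.
AB = mat_mul(A, B)
C = [[B[0][0], AB[0][0]],
     [B[1][0], AB[1][0]]]
detC = C[0][0] * C[1][1] - C[0][1] * C[1][0]
Cinv = [[ C[1][1] / detC, -C[0][1] / detC],
        [-C[1][0] / detC,  C[0][0] / detC]]
I2 = [[1.0, 0.0], [0.0, 1.0]]
pA = mat_add(mat_mul(A, A), mat_add(scal(3.0, A), scal(2.0, I2)))
last_row = [Cinv[1][0], Cinv[1][1]]  # the row [0 1] C^{-1}
K = [-(last_row[0] * pA[0][j] + last_row[1] * pA[1][j]) for j in range(2)]

# Closed loop A + B K; for a 2x2 matrix the poles are the roots of
# s^2 - (trace) s + (determinant).
Acl = mat_add(A, mat_mul(B, [K]))
tr = Acl[0][0] + Acl[1][1]
det = Acl[0][0] * Acl[1][1] - Acl[0][1] * Acl[1][0]
print(K, tr, det)  # K = [-2.0, -3.0]; char. poly s^2 + 3 s + 2, poles -1, -2
```

For the actual pair $(A,B)$, of dimension $2n_0+3$, a numerically robust placement routine (e.g. `scipy.signal.place_poles`) would be preferable to Ackermann's formula.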
\section{Stability and set point regulation assessment}\label{sec: stability and regulation} \subsection{Stability analysis} The main stability result of this paper is stated in the following theorem. \begin{theorem} \label{thm: stability result} There exist $\kappa \in (0,1)$ and $\overline{C}_1,\delta > 0$ such that, for any $\eta \in [0,1)$, there exists $\overline{C}_2 > 0$ such that, for any initial condition satisfying \begin{equation*} \Vert W(0) \Vert_\mathcal{H}^2 + \vert \xi(0) \vert^2 + \vert v(0) \vert^2 \leqslant \delta \end{equation*} and any continuously differentiable reference input $z_r$ with \begin{equation*} \Vert z_r \Vert_{L^\infty(\mathbb{R}_+)}^2 \leqslant \delta , \end{equation*} the classical solution of (\ref{eq: wave equation - equivalent homogeneous problem}) with the control law (\ref{eq: control input vd}) is well defined on $\mathbb{R}_+$ and satisfies \begin{equation} \label{eq: stability result - w^1 L^inf bounded by one} \Vert w^1(t,\cdot) \Vert_{L^\infty(0,L)} < 1 \end{equation} and \begin{align} \Vert W(t) \Vert_\mathcal{H}^2 + \vert \xi(t) \vert^2 + \vert v(t) \vert^2 & \leqslant \overline{C}_1 e^{- 2 \kappa t} \left( \Vert W(0) \Vert_\mathcal{H}^2 + \vert \xi(0) \vert^2 + \vert v(0) \vert^2 \right) \label{eq: stability result - local exp stab} \\ & \phantom{\leqslant}\, + \overline{C}_2 \sup\limits_{0 \leqslant s \leqslant t} e^{-2\eta\kappa(t-s)} \vert z_r(s) \vert^2 \nonumber \end{align} for all $t \geqslant 0$. \end{theorem} \begin{remark} In the particular case of a linear function $f$, which implies that the residual term (\ref{eq: def residual term}) is identically zero, the exponential stability result (\ref{eq: stability result - local exp stab}) stated by Theorem~\ref{thm: stability result} is global. \end{remark} \begin{remark} The result of Theorem~\ref{thm: stability result} ensures the stability of the closed-loop system in $(w,\xi)$ coordinates.
This immediately implies the stability of the closed-loop system in its original coordinates because, from (\ref{eq: change of variable}), we have \begin{align*} \left\Vert \left( y_\delta(t,\cdot) , \dfrac{\partial y_\delta}{\partial t}(t,\cdot) \right) \right\Vert_\mathcal{H} & \leqslant \Vert W(t) \Vert_\mathcal{H} + \left\Vert \left( 0 , \dfrac{(\cdot)}{\alpha L} v(t) \right) \right\Vert_\mathcal{H} \\ & \leqslant \Vert W(t) \Vert_\mathcal{H} + \dfrac{1}{\alpha} \sqrt{\dfrac{L}{3}} \vert v(t) \vert \end{align*} and, from (\ref{eq: def integral conponent xi}), \begin{align*} \vert \zeta(t) \vert & \leqslant \vert \xi(t) \vert + \sum\limits_{\vert k \vert \geqslant n_0 + 1} \left\vert \dfrac{(e_k^1)'(0)}{\lambda_k} w_k(t) \right\vert \\ & \leqslant \vert \xi(t) \vert + \sqrt{\sum\limits_{\vert k \vert \geqslant n_0 + 1} \left\vert \dfrac{(e_k^1)'(0)}{\lambda_k} \right\vert^2} \sqrt{\sum\limits_{\vert k \vert \geqslant n_0 + 1} \vert w_k(t) \vert^2} \\ & \leqslant \vert \xi(t) \vert + \sqrt{\dfrac{1}{m_R} \sum\limits_{\vert k \vert \geqslant n_0 + 1} \left\vert \dfrac{(e_k^1)'(0)}{\lambda_k} \right\vert^2} \Vert W(t) \Vert_\mathcal{H} \end{align*} where we recall that $(e_k^1)'(0) = O(1)$ and $\vert \lambda_k \vert \sim \vert k \vert \pi/L$ when $\vert k \vert \rightarrow + \infty$. \end{remark} \textbf{Proof of Theorem~\ref{thm: stability result}.} Let $M > 3( \Vert a \Vert_\mathcal{H}^2 + \Vert b \Vert_\mathcal{H}^2 \Vert K \Vert^2 )/m_R$ be given, where we recall that $a,b \in \mathcal{H}$ are defined by (\ref{eq: def a b R}) and the constant $m_R > 0$ is as provided by Definition~\ref{def: Riesz basis}. We introduce for all $t \geqslant 0$ \begin{equation}\label{eq: def Lyapunov func} V(t) = M X(t)^\top P X(t) + \dfrac{1}{2} \sum\limits_{\vert k \vert \geqslant n_0+1} \vert w_k(t) \vert^2 , \end{equation} where $P$ is a symmetric positive definite matrix such that $A_K^\top P + P A_K = -I$.
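For a Hurwitz matrix $A_K$, the Lyapunov equation $A_K^\top P + P A_K = -I$ always admits a unique symmetric positive definite solution $P$. As an illustration of how such a $P$ can be computed, the following sketch solves the equation for a hypothetical $2 \times 2$ Hurwitz matrix (eigenvalues $-1$ and $-2$), not the actual closed-loop matrix $A_K$ of the paper, by reducing it to a linear system in the entries of $P$:

```python
# Solving the 2x2 Lyapunov equation A^T P + P A = -I for a hypothetical
# Hurwitz matrix A (eigenvalues -1 and -2) -- an illustration of the
# construction of P in the Lyapunov functional V, not the actual
# closed-loop matrix A_K of the paper.

A = [[0.0, 1.0], [-2.0, -3.0]]
a00, a01, a10, a11 = A[0][0], A[0][1], A[1][0], A[1][1]

# With P = [[p11, p12], [p12, p22]] symmetric, the entries (0,0), (0,1)
# and (1,1) of A^T P + P A = -I give the 3x3 linear system M x = rhs
# in the unknown x = (p11, p12, p22).
M = [[2 * a00, 2 * a10, 0.0],
     [a01, a00 + a11, a10],
     [0.0, 2 * a01, 2 * a11]]
rhs = [-1.0, 0.0, -1.0]

# Gaussian elimination with partial pivoting, then back substitution.
for col in range(3):
    piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(col + 1, 3):
        f = M[r][col] / M[col][col]
        for c in range(col, 3):
            M[r][c] -= f * M[col][c]
        rhs[r] -= f * rhs[col]
x = [0.0, 0.0, 0.0]
for r in (2, 1, 0):
    x[r] = (rhs[r] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]

p11, p12, p22 = x
# P must be symmetric positive definite (leading principal minors > 0).
assert p11 > 0.0 and p11 * p22 - p12 ** 2 > 0.0
print(p11, p12, p22)  # 1.25 0.25 0.25
```

For larger dimensions one would instead rely on a dedicated solver such as `scipy.linalg.solve_continuous_lyapunov`.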
Then we obtain from (\ref{eq: wave equation - closed-loop}) that \begin{align*} \dot{V}(t) & = M X(t)^\top \left\{ A_K^\top P + P A_K \right\} X(t) + M \left\{ \Gamma(t)^\top P X(t) + X(t)^\top P \Gamma(t) \right\} \\ & \phantom{=}\, + \sum\limits_{\vert k \vert \geqslant n_0+1} \operatorname{Re}\lambda_k \vert w_k(t) \vert^2 + \sum\limits_{\vert k \vert \geqslant n_0+1} \operatorname{Re} \left\{ \overline{w_k(t)} \left( a_k v(t) + b_k v_d(t) + r_k(t) \right) \right\} \\ & \leqslant - M \Vert X(t) \Vert^2 - \sum\limits_{\vert k \vert \geqslant n_0+1} \vert w_k(t) \vert^2 + 2 M \Vert P \Vert \Vert X(t) \Vert \Vert \Gamma(t) \Vert \\ & \phantom{=}\, + \sum\limits_{\vert k \vert \geqslant n_0+1} \vert w_k(t) \vert \left( \vert a_k \vert \vert v(t) \vert + \vert b_k \vert \vert v_d(t) \vert + \vert r_k(t) \vert \right) \end{align*} where we have used that $\operatorname{Re} \lambda_k < -1$ for all $\vert k \vert \geqslant n_0 + 1$. Using now Young's inequality, we infer that \begin{align*} 2 M \Vert P \Vert \Vert X(t) \Vert \Vert \Gamma(t) \Vert & \leqslant \dfrac{M}{2} \Vert X(t) \Vert ^2 + 2 M \Vert P \Vert^2 \Vert \Gamma(t) \Vert^2 \end{align*} and \begin{align*} & \sum\limits_{\vert k \vert \geqslant n_0+1} \vert w_k(t) \vert \left( \vert a_k \vert \vert v(t) \vert + \vert b_k \vert \vert v_d(t) \vert + \vert r_k(t) \vert \right) \\ & \qquad \leqslant \dfrac{1}{2} \sum\limits_{\vert k \vert \geqslant n_0+1} \vert w_k(t) \vert^2 + \dfrac{3}{2} \sum\limits_{\vert k \vert \geqslant n_0+1} \left( \vert a_k \vert^2 \vert v(t) \vert^2 + \vert b_k \vert^2 \vert v_d(t) \vert^2 + \vert r_k(t) \vert^2 \right) \\ & \qquad \leqslant \dfrac{1}{2} \sum\limits_{\vert k \vert \geqslant n_0+1} \vert w_k(t) \vert^2 + \dfrac{3\Vert a \Vert_\mathcal{H}^2}{2m_R} \vert v(t) \vert^2 + \dfrac{3\Vert b \Vert_\mathcal{H}^2}{2m_R} \vert v_d(t) \vert^2 + \dfrac{3}{2m_R} \Vert R(t,\cdot) \Vert_\mathcal{H}^2 \\ & \qquad \leqslant \dfrac{1}{2} \sum\limits_{\vert k \vert \geqslant n_0+1} \vert 
w_k(t) \vert^2 + \dfrac{3 (\Vert a \Vert_\mathcal{H}^2 + \Vert b \Vert_\mathcal{H}^2 \Vert K \Vert^2)}{2m_R} \Vert X(t) \Vert^2 + \dfrac{3}{2m_R} \Vert R(t,\cdot) \Vert_\mathcal{H}^2 \end{align*} where we have used (\ref{eq: control input vd}) and the fact that $v(t)$ is the first component of $X(t)$. Thus, we obtain that \begin{align} \dot{V}(t) & \leqslant - \left\{ \dfrac{M}{2} - \dfrac{3 (\Vert a \Vert_\mathcal{H}^2 + \Vert b \Vert_\mathcal{H}^2 \Vert K \Vert^2)}{2m_R} \right\} \Vert X(t) \Vert^2 - \dfrac{1}{2} \sum\limits_{\vert k \vert \geqslant n_0+1} \vert w_k(t) \vert^2 \label{eq: prel estimate dot_V} \\ & \phantom{=}\, + 2 M \Vert P \Vert^2 \Vert \Gamma(t) \Vert^2 + \dfrac{3}{2m_R} \Vert R(t,\cdot) \Vert_\mathcal{H}^2 . \nonumber \end{align} We now bound the last two terms of the above inequality. Recalling that $R_1(t)$, $\gamma(t)$, and $\Gamma(t)$ are defined by (\ref{eq: def X1 vd and R1}), (\ref{eq: def gamma}), and (\ref{eq: def B and Gamma}), respectively, we have \begin{align} \Vert \Gamma(t) \Vert^2 & = \Vert R_1(t) \Vert^2 + \vert \gamma(t) \vert^2 \nonumber \\ & = \sum\limits_{\vert k \vert \leqslant n_0} \vert r_k(t) \vert^2 + \left\vert z_r(t) + \sum\limits_{\vert k \vert \geqslant n_0 +1} \dfrac{(e_k^1)'(0)}{\lambda_k}r_k(t) \right\vert^2 \nonumber \\ & \leqslant \sum\limits_{\vert k \vert \leqslant n_0} \vert r_k(t) \vert^2 + 2 \vert z_r(t) \vert^2 + 2 \sum\limits_{\vert k \vert \geqslant n_0 +1} \left\vert\dfrac{(e_k^1)'(0)}{\lambda_k}\right\vert^2 \sum\limits_{\vert k \vert \geqslant n_0 +1} \vert r_k(t) \vert^2 \nonumber \\ & \leqslant C_0^2 \Vert R(t,\cdot) \Vert_\mathcal{H}^2 + 2 \vert z_r(t) \vert^2 \label{eq: estimate Gamma} \end{align} with $C_0 > 0$ defined by \begin{equation*} C_0^2 = \frac{1}{m_R} \max\left( 1 , 2 \sum\limits_{\vert k \vert \geqslant n_0 +1} \left\vert\dfrac{(e_k^1)'(0)}{\lambda_k}\right\vert^2 \right) < + \infty .
\end{equation*} We now evaluate $\Vert R(t,\cdot) \Vert_\mathcal{H}^2 = \int_0^L \vert r(t,x) \vert^2 \,\mathrm{d}x $ where we recall that $r(t,x)$ is defined by (\ref{eq: def residual term}). Let $\epsilon \in (0,1)$ be a constant, to be specified later, and let $C_I \geqslant 0$ be the maximum of $\vert f'' \vert$ over the range $\left[ \min y_e -1 , \max y_e +1 \right]$. Then, assuming that \begin{equation}\label{eq: a priori estimate} \Vert w^1(t,\cdot) \Vert_{L^\infty(0,L)} \leqslant \epsilon < 1 , \end{equation} we obtain that $\vert r(t,x) \vert \leqslant C_I \vert w^1(t,x) \vert^2$ and thus \begin{align} \Vert R(t,\cdot) \Vert_\mathcal{H}^2 & = \int_0^L \vert r(t,x) \vert^2 \,\mathrm{d}x \nonumber \\ & \leqslant C_I^2 \int_0^L \vert w^1(t,x) \vert^4 \,\mathrm{d}x \nonumber \\ & \leqslant \epsilon^2 C_I^2 \Vert w^1(t,\cdot) \Vert_{L^2(0,L)}^2 \nonumber \\ & \leqslant \epsilon^2 L^2 C_I^2 \Vert W(t) \Vert_{\mathcal{H}}^2 \label{eq: prel estimate R} \\ & \leqslant \epsilon^2 M_R L^2 C_I^2 \sum\limits_{k \in \mathbb{Z}} \vert w_k(t) \vert^2 \nonumber \\ & \leqslant \epsilon^2 C_1^2 \left\{ \Vert X(t) \Vert^2 + \sum\limits_{\vert k \vert \geqslant n_0+1} \vert w_k(t) \vert^2 \right\} \label{eq: estimate R} \end{align} where the third inequality follows from the Poincar{\'e} inequality, the fourth inequality follows from Definition~\ref{def: Riesz basis}, and with the constant $C_1 \geqslant 0$ defined by $C_1^2 = M_R L^2 C_I^2$.
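For completeness, the Poincar{\'e} inequality invoked in the third step follows from the boundary condition $w^1(t,0) = 0$ together with the Cauchy-Schwarz inequality: for all $x \in [0,L]$,
\begin{equation*}
\vert w^1(t,x) \vert = \left\vert \int_0^x \dfrac{\partial w^1}{\partial x}(t,s) \,\mathrm{d}s \right\vert \leqslant \sqrt{x} \, \Vert (w^1)'(t,\cdot) \Vert_{L^2(0,L)} ,
\end{equation*}
hence $\Vert w^1(t,\cdot) \Vert_{L^2(0,L)}^2 \leqslant \frac{L^2}{2} \Vert (w^1)'(t,\cdot) \Vert_{L^2(0,L)}^2 \leqslant L^2 \Vert W(t) \Vert_\mathcal{H}^2$, which is (up to the harmless factor $1/2$) the bound used above.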
We deduce from (\ref{eq: prel estimate dot_V}-\ref{eq: estimate Gamma}) and, under the \emph{a priori} estimate (\ref{eq: a priori estimate}), (\ref{eq: estimate R}) that \begin{align*} \dot{V}(t) & \leqslant - \left\{ \dfrac{M}{2} - \dfrac{3 (\Vert a \Vert_\mathcal{H}^2 + \Vert b \Vert_\mathcal{H}^2 \Vert K \Vert^2)}{2m_R} - \epsilon^2 C_1^2 \left( 2M\Vert P \Vert^2 C_0^2 + \dfrac{3}{2 m_R} \right) \right\} \Vert X(t) \Vert^2 \\ & \phantom{=}\, - \dfrac{1}{2} \left\{ 1 - 2\epsilon^2 C_1^2 \left( 2M\Vert P \Vert^2 C_0^2 + \dfrac{3}{2 m_R} \right) \right\} \sum\limits_{\vert k \vert \geqslant n_0+1} \vert w_k(t) \vert^2 \\ & \phantom{=}\, + 4 M \Vert P \Vert^2 \vert z_r(t) \vert^2 . \end{align*} With $M > 3( \Vert a \Vert_\mathcal{H}^2 + \Vert b \Vert_\mathcal{H}^2 \Vert K \Vert^2 )/m_R$ and by selecting $\epsilon \in (0,1)$ small enough (independently of the initial conditions and the reference signal $z_r$), we obtain the existence of constants $\kappa,C_2 > 0$ such that, under the \emph{a priori} estimate (\ref{eq: a priori estimate}), \begin{align*} \dot{V}(t) & \leqslant - 2 \kappa V(t) + C_2 \vert z_r(t) \vert^2 . \end{align*} Let $\eta\in[0,1)$ be arbitrary. Assuming that the \emph{a priori} estimate (\ref{eq: a priori estimate}) holds on $[0,t]$ for some $t > 0$, we obtain that \begin{align*} V(t) & \leqslant e^{- 2 \kappa t} V(0) + C_2 \int_0^t e^{-2\kappa(t-s)} \vert z_r(s) \vert^2 \,\mathrm{d}s \\ & \leqslant e^{- 2 \kappa t} V(0) + \dfrac{C_2}{2\kappa(1-\eta)} \sup\limits_{0 \leqslant s \leqslant t} e^{-2\eta\kappa(t-s)} \vert z_r(s) \vert^2 .
\end{align*} Denoting by $\lambda_m(P) > 0$ and $\lambda_M(P) > 0$ the smallest and largest eigenvalues of the symmetric positive definite matrix $P$, we now note from (\ref{eq: def Lyapunov func}) that \begin{align*} V(0) & \leqslant M \lambda_M(P) \Vert X(0) \Vert^2 + \dfrac{1}{2} \sum\limits_{\vert k \vert \geqslant n_0+1} \vert w_k(0) \vert^2 \\ & \leqslant C_3 \left( \Vert W(0) \Vert_\mathcal{H}^2 + \vert \xi(0) \vert^2 + \vert v(0) \vert^2 \right) \end{align*} for some constant $C_3 > 0$ and \begin{align*} V(t) & \geqslant M \lambda_m(P) \Vert X(t) \Vert^2 + \dfrac{1}{2} \sum\limits_{\vert k \vert \geqslant n_0+1} \vert w_k(t) \vert^2 \\ & \geqslant C_4 \left( \Vert W(t) \Vert_\mathcal{H}^2 + \vert \xi(t) \vert^2 + \vert v(t) \vert^2 \right) \end{align*} for some constant $C_4 > 0$. Thus, assuming that the \emph{a priori} estimate (\ref{eq: a priori estimate}) holds over $[0,t]$ for some $t > 0$, we deduce from the three estimates above the existence of constants $\overline{C}_1,\overline{C}_2 > 0$ such that \begin{align} \Vert W(t) \Vert_\mathcal{H}^2 + \vert \xi(t) \vert^2 + \vert v(t) \vert^2 & \leqslant \overline{C}_1 e^{- 2 \kappa t} \left( \Vert W(0) \Vert_\mathcal{H}^2 + \vert \xi(0) \vert^2 + \vert v(0) \vert^2 \right) \label{eq: stab result} \\ & \phantom{\leqslant}\, + \overline{C}_2 \sup\limits_{0 \leqslant s \leqslant t} e^{-2\eta\kappa(t-s)} \vert z_r(s) \vert^2 . \nonumber \end{align} To conclude, let us note that \begin{equation*} \Vert w^1(t,\cdot) \Vert_{L^\infty(0,L)} \leqslant \sqrt{L} \Vert (w^1)'(t,\cdot) \Vert_{L^2(0,L)} \leqslant \sqrt{L} \Vert W(t) \Vert_\mathcal{H} .
\end{equation*} Hence, if the initial condition is selected such that \begin{equation*} \Vert W(0) \Vert_\mathcal{H}^2 + \vert \xi(0) \vert^2 + \vert v(0) \vert^2 \leqslant \dfrac{\epsilon^2}{2L} \min\left( 1 , \frac{1}{2 \overline{C}_1} \right) , \end{equation*} which in particular ensures that $\Vert w^1(0,\cdot) \Vert_{L^\infty(0,L)} \leqslant \epsilon/\sqrt{2} < \epsilon$, and the reference input is chosen such that \begin{equation*} \Vert z_r \Vert_{L^\infty(\mathbb{R}_+)}^2 \leqslant \frac{\epsilon^2}{4 L\overline{C}_2} , \end{equation*} it is readily checked based on (\ref{eq: stab result}) that the \emph{a priori} estimate (\ref{eq: a priori estimate}) holds for all $t \geqslant 0$. In this case, the stability estimate (\ref{eq: stab result}) holds for all $t \geqslant 0$. \qed \subsection{Set point regulation} We are now in a position to assess the set point regulation of the left Neumann trace. \begin{theorem}\label{thm: reference tracking} Let $\kappa \in (0,1)$, $\delta > 0$, and $\eta \in [0,1)$ be as provided by Theorem~\ref{thm: stability result}. There exist constants $\overline{C}_3,\overline{C}_4>0$ such that, for any initial condition satisfying \begin{equation*} \Vert W(0) \Vert_\mathcal{H}^2 + \vert \xi(0) \vert^2 + \vert v(0) \vert^2 \leqslant \delta \end{equation*} and any continuously differentiable reference input $z_r$ with \begin{equation*} \Vert z_r \Vert_{L^\infty(\mathbb{R}_+)}^2 \leqslant \delta , \end{equation*} the classical solution of (\ref{eq: wave equation - equivalent homogeneous problem}) satisfies \begin{align} \left\vert (w^1)'(t,0) - z_r(t) \right\vert & \leqslant \overline{C}_3 e^{-\kappa t} \left\{ \Vert W(0) \Vert_\mathcal{H} + \vert \xi(0) \vert + \vert v(0) \vert + \Vert \mathcal{A} W(0) \Vert_\mathcal{H} \right\} \label{eq: thm ref tracking - estimate} \\ & \phantom{\leqslant}\; + \overline{C}_4 \sup\limits_{0 \leqslant s \leqslant t} e^{-\eta\kappa(t-s)} \vert z_r(s) \vert \nonumber \end{align} for all $t \geqslant 0$.
\end{theorem} \begin{remark} In particular, $z_r(t) \rightarrow 0$ implies $(w^1)'(t,0) \rightarrow 0$, i.e. $z(t) \rightarrow z_e$, which achieves the desired set point reference tracking. \end{remark} Before proceeding with the proof of Theorem~\ref{thm: reference tracking}, we first derive an estimate of $\Vert \frac{\mathrm{d}R}{\mathrm{d}t}(t,\cdot) \Vert_\mathcal{H}$. To do so, we assume that the assumptions and conclusions of Theorem~\ref{thm: stability result} apply. Following up with Remark~\ref{rmk: classical solutions}, we have that $\frac{\mathrm{d}R}{\mathrm{d}t} = (0,\frac{\mathrm{d}r}{\mathrm{d}t})$ with \begin{equation}\label{eq: time derivative r} \dfrac{\mathrm{d}r}{\mathrm{d}t}(t,\cdot) = \left[ w^2(t,\cdot) + \dfrac{(\cdot)}{\alpha L} v(t) \right] \int_{y_e}^{w^1(t,\cdot)+y_e} f''(s) \,\mathrm{d}s . \end{equation} Using (\ref{eq: stability result - w^1 L^inf bounded by one}) and the estimate $\vert f'' \vert \leqslant C_I$ on the range $\left[ \min y_e -1 , \max y_e +1 \right]$, we deduce that \begin{align} \left\Vert \dfrac{\mathrm{d}R}{\mathrm{d}t}(t,\cdot) \right\Vert_\mathcal{H}^2 & = \left\Vert \dfrac{\mathrm{d}r}{\mathrm{d}t}(t,\cdot) \right\Vert_{L^2(0,L)}^2 \nonumber \\ & \leqslant \int_0^L \left\vert w^2(t,x) + \dfrac{x}{\alpha L} v(t) \right\vert^2 \left\vert \int_{y_e(x)}^{w^1(t,x)+y_e(x)} \vert f''(s) \vert \,\mathrm{d}s \right\vert^2 \,\mathrm{d}x \nonumber \\ & \leqslant 2 C_I^2 \left\{ \Vert w^2(t) \Vert_{L^2(0,L)}^2 + \dfrac{L}{3 \alpha^2} \vert v(t) \vert^2 \right\} \nonumber \\ & \leqslant 2 C_I^2 \left\{ \Vert W(t) \Vert_\mathcal{H}^2 + \dfrac{L}{3 \alpha^2} \vert v(t) \vert^2 \right\}. \label{eq: estimate norm R_t} \end{align} We are now in a position to complete the proof of Theorem~\ref{thm: reference tracking}.
\textbf{Proof of Theorem~\ref{thm: reference tracking}.} We fix an integer $N \geqslant n_0$ and a constant $\gamma > 0$ such that\footnote{We recall that $\operatorname{Re}\lambda_k < -1$ for all $\vert k \vert \geqslant n_0 +1$ and that $\kappa \in (0,1)$.} $\mathrm{Re}\lambda_k \leqslant -\gamma < -\kappa < 0$ for all $\vert k \vert \geqslant N+1$. Then we have from (\ref{eq: series expansion Neumann trace}) that \begin{align} & \left\vert (w^1)'(t,0) - z_r(t) \right\vert \nonumber \\ & \qquad\leqslant \sum\limits_{k \in \mathbb{Z}} \vert w_k(t) \vert \vert (e_k^1)'(0) \vert + \vert z_r(t) \vert \nonumber \\ & \qquad\leqslant \sum\limits_{\vert k \vert \leqslant N} \vert w_k(t) \vert \vert (e_k^1)'(0) \vert + \sum\limits_{\vert k \vert \geqslant N+1} \vert \lambda_k w_k(t) \vert \left\vert \dfrac{(e_k^1)'(0)}{\lambda_k} \right\vert + \vert z_r(t) \vert \nonumber \\ & \qquad\leqslant \sqrt{\sum\limits_{\vert k \vert \leqslant N} \vert (e_k^1)'(0) \vert^2} \sqrt{\sum\limits_{\vert k \vert \leqslant N} \vert w_k(t) \vert^2} \nonumber \\ & \qquad\phantom{\leqslant}\; + \sqrt{\sum\limits_{\vert k \vert \geqslant N+1} \left\vert \dfrac{(e_k^1)'(0)}{\lambda_k} \right\vert^2} \sqrt{\sum\limits_{\vert k \vert \geqslant N+1} \vert \lambda_k w_k(t) \vert^2} + \vert z_r(t) \vert \nonumber \\ & \qquad\leqslant \sqrt{\dfrac{1}{m_R} \sum\limits_{\vert k \vert \leqslant N} \vert (e_k^1)'(0) \vert^2} \Vert W(t) \Vert_\mathcal{H} \label{eq: prel estimate tracking performance} \\ & \qquad\phantom{\leqslant}\; + \sqrt{\sum\limits_{\vert k \vert \geqslant N+1} \left\vert \dfrac{(e_k^1)'(0)}{\lambda_k} \right\vert^2} \sqrt{\sum\limits_{\vert k \vert \geqslant N+1} \vert \lambda_k w_k(t) \vert^2} + \vert z_r(t) \vert . \nonumber \end{align} Based on (\ref{eq: stability result - local exp stab}), we only need to evaluate the term $\sum_{\vert k \vert \geqslant N+1} \vert \lambda_k w_k(t) \vert^2$.
We have from (\ref{eq: wave equation - closed-loop}) that, for all $\vert k \vert \geqslant N+1 \geqslant n_0+1$, \begin{align} \lambda_k w_k(t) & = e^{\lambda_k t} \lambda_k w_k(0) + \int_0^t \lambda_k e^{\lambda_k(t-s)} \left\{ a_k v(s) + b_k v_d(s) + r_k(s) \right\} \,\mathrm{d}s \nonumber \\ & = e^{\lambda_k t} \lambda_k w_k(0) - \left\{ a_k v(t) + b_k v_d(t) + r_k(t) \right\} \label{eq: lambda_k w_k}\\ & \phantom{=}\, + e^{\lambda_k t} \left\{ a_k v(0) + b_k v_d(0) + r_k(0) \right\} \nonumber \\ & \phantom{=}\, + \int_0^t e^{\lambda_k(t-s)} \left\{ a_k v_d(s) + b_k \dot{v}_d(s) + \dot{r}_k(s) \right\} \,\mathrm{d}s \nonumber . \end{align} We have from (\ref{eq: control input vd}-\ref{eq: wave equation - closed-loop}) that $v_d(t) = K X(t)$ and $\dot{v}_d(t) = K A_K X(t) + K \Gamma(t)$ with \begin{subequations}\label{eq: set point - intermediate estimates for lamda_k wk} \begin{align} \Vert X(t) \Vert^2 & \leqslant \dfrac{1}{m_R} \Vert W(t) \Vert_\mathcal{H}^2 + \vert \xi(t) \vert^2 + \vert v(t) \vert^2 , \\ \Vert \Gamma(t) \Vert^2 & \leqslant L^2 C_0^2 C_I^2 \Vert W(t) \Vert_\mathcal{H}^2 + 2 \vert z_r(t) \vert^2 \end{align} where the second inequality follows from (\ref{eq: estimate Gamma}) and (\ref{eq: prel estimate R}). For ease of notation, we define $\mathrm{CI} = \sqrt{\Vert W(0) \Vert_\mathcal{H}^2 + \vert \xi(0) \vert^2 + \vert v(0) \vert^2}$. 
Using $\gamma > \kappa > \eta\kappa \geqslant 0$, we have from (\ref{eq: stability result - local exp stab}) that \begin{align} & \left\vert \int_0^t e^{\lambda_k(t-s)} v_d(s) \,\mathrm{d}s \right\vert \nonumber \\ & \leqslant \Vert K \Vert \int_0^t e^{-\gamma(t-s)} \Vert X(s) \Vert \,\mathrm{d}s \nonumber \\ & \leqslant \Vert K \Vert \max(1,m_R^{-1/2}) e^{-\gamma t} \int_0^t e^{\gamma s} \left\{ \sqrt{\overline{C}_1} e^{-\kappa s}\mathrm{CI} + \sqrt{\overline{C}_2} \sup\limits_{0 \leqslant \tau \leqslant s} e^{-\eta\kappa(s-\tau)} \vert z_r(\tau) \vert \right\} \,\mathrm{d}s \nonumber\\ & \leqslant C_5 e^{-\kappa t} \mathrm{CI} + C_5 \sup\limits_{0 \leqslant \tau \leqslant t} e^{-\eta\kappa(t-\tau)} \vert z_r(\tau) \vert \label{eq: set point - intermediate estimates for lamda_k wk - 1} \end{align} for some constant $C_5 > 0$ and, similarly, \begin{align} \left\vert \int_0^t e^{\lambda_k(t-s)} \dot{v}_d(s) \,\mathrm{d}s \right\vert & \leqslant C_6 e^{-\kappa t} \mathrm{CI} + C_6 \sup\limits_{0 \leqslant \tau \leqslant t} e^{-\eta\kappa(t-\tau)} \vert z_r(\tau) \vert \label{eq: set point - intermediate estimates for lamda_k wk - 2} \end{align} for some constant $C_6 > 0$. Finally, we also have, for $-\gamma < -\tilde{\kappa} < - \kappa < 0$, \begin{align} \left\vert \int_0^t e^{\lambda_k(t-s)} \dot{r}_k(s) \,\mathrm{d}s \right\vert & \leqslant \int_0^t e^{-\gamma(t-s)} \vert \dot{r}_k(s) \vert \,\mathrm{d}s \nonumber \\ & \leqslant \int_0^t e^{-(\gamma-\tilde{\kappa})(t-s)} \times e^{-\tilde{\kappa}(t-s)} \vert \dot{r}_k(s) \vert \,\mathrm{d}s \nonumber \\ & \leqslant \sqrt{ \int_0^t e^{-2(\gamma-\tilde{\kappa})(t-s)} \,\mathrm{d}s } \sqrt{ \int_0^t e^{-2\tilde{\kappa}(t-s)} \vert \dot{r}_k(s) \vert^2 \,\mathrm{d}s } \nonumber \\ & \leqslant \sqrt{ \dfrac{1}{2(\gamma-\tilde{\kappa})}} \sqrt{ \int_0^t e^{-2\tilde{\kappa}(t-s)} \vert \dot{r}_k(s) \vert^2 \,\mathrm{d}s } . 
\end{align} \end{subequations} Taking the square on both sides of (\ref{eq: lambda_k w_k}), using Young's inequality, substituting estimates (\ref{eq: stability result - local exp stab}), (\ref{eq: prel estimate R}), and (\ref{eq: set point - intermediate estimates for lamda_k wk}), and using the fact that $\sum_{\vert k \vert \geqslant N+1} \vert \left< z , f_k \right> \vert^2 \leqslant \Vert z \Vert_\mathcal{H}^2/m_R$ for all $z \in \mathcal{H}$, we infer the existence of a constant $C_7 > 0$ such that \begin{align*} \sum\limits_{\vert k \vert \geqslant N+1} \vert \lambda_k w_k(t) \vert^2 & \leqslant C_7 e^{-2 \kappa t} \sum\limits_{\vert k \vert \geqslant N+1} \vert \lambda_k w_k(0) \vert^2 \\ & \phantom{\leqslant}\, + C_7 e^{-2\kappa t} \mathrm{CI}^2 + C_7 \sup\limits_{0 \leqslant \tau \leqslant t} e^{-2\eta\kappa(t-\tau)} \vert z_r(\tau) \vert^2 \\ & \phantom{\leqslant}\, + C_7 \int_0^t e^{-2\tilde{\kappa}(t-s)} \left\Vert \dfrac{\mathrm{d}R}{\mathrm{d}t}(s,\cdot) \right\Vert_\mathcal{H}^2 \,\mathrm{d}s , \end{align*} where we recall that $\mathrm{Re}\lambda_k \leqslant -\gamma < -\kappa < 0$ for all $\vert k \vert \geqslant N+1$. Noting that, for $\vert k \vert \geqslant N+1 \geqslant n_0+1$, $\left< \mathcal{A}W(0) , f_k \right>_\mathcal{H} = \lambda_k w_k(0)$, we infer that $\sum_{\vert k \vert \geqslant N+1} \vert \lambda_k w_k(0) \vert^2 \leqslant \sum_{k \in \mathbb{Z}} \vert \left< \mathcal{A}W(0) , f_k \right>_\mathcal{H} \vert^2 \leqslant \Vert \mathcal{A} W(0) \Vert_\mathcal{H}^2 / m_R$. 
Then, based on (\ref{eq: stability result - local exp stab}) and (\ref{eq: estimate norm R_t}), and as $\tilde{\kappa} > \kappa > 0$, estimates similar to those reported in (\ref{eq: set point - intermediate estimates for lamda_k wk - 1}-\ref{eq: set point - intermediate estimates for lamda_k wk - 2}) show the existence of a constant $C_8 > 0$ such that \begin{align*} \sum\limits_{\vert k \vert \geqslant N+1} \vert \lambda_k w_k(t) \vert^2 & \leqslant C_8 e^{-2\kappa t} \left\{ \mathrm{CI}^2 + \Vert \mathcal{A} W(0) \Vert_\mathcal{H}^2 \right\} + C_8 \sup\limits_{0 \leqslant \tau \leqslant t} e^{-2\eta\kappa(t-\tau)} \vert z_r(\tau) \vert^2 . \end{align*} Substituting this latter estimate and (\ref{eq: stability result - local exp stab}) into (\ref{eq: prel estimate tracking performance}), we obtain the existence of constants $\overline{C}_3, \overline{C}_4 > 0$ such that (\ref{eq: thm ref tracking - estimate}) holds. \qed \section{Numerical illustration}\label{sec: numerical illustration} For numerical illustration, we set $L=1$, $\alpha = 1.1$, and we consider the nonlinear function $f(y) = y^3$, which leads to the boundary control system: \begin{subequations}\label{eq: numerical illustration - wave equation} \begin{align} &\dfrac{\partial^2 y}{\partial t^2} = \dfrac{\partial^2 y}{\partial x^2} + y^3 , \\ & y(t,0) = 0 , \qquad \dfrac{\partial y}{ \partial x}(t,L) = u(t) ,\\ & y(0,x) = y_0(x) , \qquad \dfrac{\partial y}{\partial t}(0,x) = y_1(x) , \end{align} \end{subequations} for $t > 0$ and $x \in (0,1)$. Since $F(y) = \int_0^y f(s) \,\mathrm{d}s = y^4/4 \rightarrow + \infty$ when $\vert y \vert \rightarrow + \infty$, it follows from Remark~\ref{rem: existence of steady states} that there exists a steady state $y_e \in \mathcal{C}^2([0,L])$ associated with any given value $z_e = y_e'(0) \in \mathbb{R}$ of the system output.
We set $z_e = 1.5$ and numerically compute the associated steady-state trajectory $y_e$, giving in particular the equilibrium control input $u_e = y_e'(L) \approx 0.781$. In the absence of nonlinearity, i.e., $f = 0$, it is known (see e.g.~\cite[Section~4]{russell1978controllability}) that the eigenvalues of the operator $\mathcal{A}$ defined by (\ref{eq: def operator A}) are given by \begin{equation*} \lambda_k = \dfrac{1}{2L} \log\left(\dfrac{\alpha-1}{\alpha+1}\right) + i \dfrac{k\pi}{L} . \end{equation*} These values are used as initial guesses to determine the eigenvalues $\lambda_k$ and the associated eigenvectors $e_k$ of the operator $\mathcal{A}$ in the presence of the nonlinearity $f(y)=y^3$ by using a shooting method. We obtain one unstable eigenvalue $\lambda_0 \approx 0.326$, while all other eigenvalues are stable with real part less than $-1$. The feedback gain is computed to place the poles of the truncated model at $-0.5$, $-1$, and $-1.5$. For numerical simulations, we select the initial condition $W(0,x) = (\frac{2\alpha}{5}x,-\frac{2}{5L}x) \in D(\mathcal{A})$ and the signal $z_r$ as depicted in Fig.~\ref{fig: Ref_zr}. The adopted numerical scheme is the modal approximation of the infinite-dimensional system using its first 10 modes. The time domain evolution of the state of the closed-loop system, the regulated output, and the command input are depicted in Fig.~\ref{fig: sim closed-loop system}. The simulation results are consistent with the theoretical predictions. \begin{figure} \caption{Signal $z_r(t)$} \label{fig: Ref_zr} \end{figure} \begin{figure} \caption{Time domain evolution of the state of the closed-loop system} \label{fig: sim closed-loop system} \end{figure} \section{Conclusion and open issues}\label{sec: conclusion} In this paper we have investigated the proportional integral (PI) boundary regulation control of the left Neumann trace of a one-dimensional semilinear wave equation.
Our control strategy combines a traditional velocity feedback and the design of an auxiliary control law performed on a finite-dimensional (spectrally) truncated model. A number of open issues and potential directions for further research emerge from this study, which we list and comment on hereafter. \subsection{Other controls and outputs} \paragraph{Other controls} In this paper, we have considered a Neumann boundary control, on the right of the interval. Of course, we could have taken this control on the left, or even on both sides. While the proposed PI procedure seems to be extendable to the cases of a Robin boundary control, of mixed Dirichlet-Neumann boundary controls, and of a distributed (i.e., internal) control input (note that, in this case, the control operator is bounded), the case of a Dirichlet boundary control seems much more challenging and remains open. \paragraph{Other outputs} We have selected the regulated output as the Neumann trace. However, one might be interested in regulating other types of outputs. This includes, for example, the Dirichlet trace at the left boundary or the value of $y$ either at a specific location or on a subset of the spatial domain. It is not clear whether or not the reported PI control design procedure could be extended to these different settings. \subsection{The general multi-dimensional case}\label{sec_gen_multiD} In this paper we have focused on the one-dimensional wave equation, which is already quite challenging. The multi-dimensional case is completely open, in particular because, except in very particular cases (like the two-dimensional disk for instance), we do not have any Riesz basis formed by generalized eigenvectors of the underlying unbounded operator. Indeed, the Riesz basis property is essentially a one-dimensional property. Hence, in the multi-dimensional case, a first challenging difficulty that must be addressed is to replace the Riesz basis study by more abstract projection operators.
On this issue, the recent work \cite{takahashi} seems to provide a very interesting line of research, although the authors of that reference deal with a parabolic equation with a selfadjoint operator, for which one has a spectral decomposition and spectral projectors. For a wave equation, the underlying operator is not selfadjoint and is not even normal, which creates deep difficulties for the spectral study. At present, we leave this issue completely open. \subsection{Robustness with respect to perturbations} Beyond the set-point regulation control of the system output, PI controllers are well known for their ability to provide, in general, a form of robustness with respect to external disturbances. The case of a continuously differentiable additive distributed disturbance $d(t,\cdot) \in L^2(0,L)$, yielding the dynamics \begin{equation*} \dfrac{\partial^2 y}{\partial t^2} = \dfrac{\partial^2 y}{\partial x^2} + f(y) + d , \end{equation*} can easily be handled in our approach by merely embedding the contribution of the disturbance $d$ into the residual term $r$ given by (\ref{eq: def residual term}). See also~\cite{lhachemi2019pi} for the case of a linear reaction-diffusion equation. A similar robustness issue would be to evaluate the impact of an additive boundary disturbance $p(t)$ acting on the control input: \begin{equation*} \dfrac{\partial y}{ \partial x}(t,L) = u(t) + p(t) . \end{equation*} However, compared to the case of distributed perturbations, the study of the robustness of infinite-dimensional systems with respect to boundary perturbations is generally much more challenging~\cite{mironchenko2019input}. \subsection{Robustness with respect to the domain} Another interesting robustness issue regarding the studied wave equation (\ref{eq: wave equation}) concerns the robustness of the PI procedure with respect to uncertainties on the length $L > 0$ of the domain.
Indeed, in many engineering devices the value of $L$ is known only approximately; it may even happen that, along the process, the domain varies slightly. The robustness of the control strategy with respect to $L$ then becomes a major issue. In our strategy, the value of the parameter $L$ directly impacts the vectors of the Riesz basis $(e_k)_{k \in\mathbb{Z}}$. However, since the control strategy relies on a finite-dimensional truncated model, only the perturbations on a finite number of vectors of the Riesz basis have an impact on the control strategy. Hence, similarly to the study reported in~\cite{coron2006global}, the Riesz basis property remains uniform with respect to small perturbations of $L > 0$. This makes it possible to easily derive a robustness result with respect to small variations of the length of the spatial domain, and thus to give a positive answer to the question of robustness with respect to $L$. This result shows the power and advantage of the strategy developed here, based on spectral finite-dimensional truncations of the problem, which conveys robustness properties. In the multi-dimensional case, the question of robustness with respect to the domain would certainly be much more difficult to address. It would require introducing an appropriate shape topology and then performing a kind of sensitivity analysis on the control design with respect to that topology. \subsection{Robustness with respect to input and/or state delays} In this section, we discuss the impact of possible input or state delays on the reported control strategy. As we explain below, we cannot expect any robustness of our strategy with respect to input delays, while the situation for state delays is expected to be more favourable.
\paragraph{Input delay} It was shown in~\cite{lhachemi2019pi} for a 1-D reaction-diffusion equation that the PI controller design procedure can be augmented with a predictor component~\cite{artstein1982linear,bresch2018new,lhachemi2018feedback} to handle constant control input delays. Such a strategy fails in the context of the wave equation studied in this paper. The reason is that, even for arbitrarily small $h>0$, the preliminary feedback \begin{equation*} u_\delta(t) = - \alpha \dfrac{\partial y_\delta}{ \partial t}(t-h,L) \end{equation*} might fail to stabilize the linear wave equation, yielding an infinite number of unstable modes. To illustrate this remark, let us consider the following linear wave equation: \begin{subequations}\label{eq: wave equation - input delay} \begin{align} & \dfrac{\partial^2 y}{\partial t^2} = \dfrac{\partial^2 y}{\partial x^2} - \beta y , \\ & y(t,0) = 0 , \qquad \dfrac{\partial y}{ \partial x}(t,L) = - \alpha \dfrac{\partial y}{\partial t}(t-h,L) , \end{align} \end{subequations} where $h > 0$ is a constant delay, $\alpha > 0$, and $\beta \in \mathbb{R}$. It was shown in~\cite{datko1997two} in the case $\alpha = 1$ and $\beta = 0$ that (\ref{eq: wave equation - input delay}) admits an infinite number of unstable modes for some arbitrarily small values of the delay $h > 0$. We elaborate on this remark by extending it to the case $\alpha > 0$ and $\beta\in\mathbb{R}$. We proceed similarly to~\cite{datko1997two} by looking for solutions of (\ref{eq: wave equation - input delay}) in the form $w(t,x) = e^{\lambda t} \sinh\left( \sqrt{\lambda^2+\beta} x \right)$ for some $\lambda\in\mathbb{C}$. It is easy to see that such a solution exists if and only if $\lambda\in\mathbb{C}$ satisfies \begin{equation*} \sqrt{\lambda^2+\beta} \cosh\left( \sqrt{\lambda^2 + \beta}L \right) = -\alpha\lambda e^{-\lambda h} \sinh\left( \sqrt{\lambda^2 + \beta}L \right) . \end{equation*} We start by studying the special case $\beta = 0$.
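For completeness, we spell out the short computation behind this characteristic equation (this verification is ours; the text leaves it to the reader). Substituting the ansatz into (\ref{eq: wave equation - input delay}) gives
\begin{equation*} \left( \dfrac{\partial^2}{\partial t^2} - \dfrac{\partial^2}{\partial x^2} + \beta \right) w = \left[ \lambda^2 - (\lambda^2+\beta) + \beta \right] e^{\lambda t} \sinh\left( \sqrt{\lambda^2+\beta}\, x \right) = 0 ,
\end{equation*}
so the PDE and the left boundary condition $w(t,0) = 0$ hold for every $\lambda \in \mathbb{C}$. The delayed right boundary condition reads
\begin{equation*} \sqrt{\lambda^2+\beta} \cosh\left( \sqrt{\lambda^2+\beta} L \right) e^{\lambda t} = - \alpha \lambda e^{\lambda(t-h)} \sinh\left( \sqrt{\lambda^2+\beta} L \right) ,
\end{equation*}
and dividing by $e^{\lambda t} \neq 0$ yields the identity above.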
Then the above identity becomes equivalent to: \begin{equation}\label{eq: wave delay - case beta = 0} e^{\lambda h} = - \alpha \tanh(\lambda L) . \end{equation} Let $k \in \mathbb{Z}$ be arbitrarily fixed and select the input delay \begin{equation}\label{eq: wave delay - def h} h = \dfrac{L}{k+1/2} . \end{equation} Let $\gamma > 0$ be the unique positive number such that \begin{equation}\label{eq: wave delay - def gamma} e^{\gamma h} = \alpha \coth(\gamma L) . \end{equation} It is easily seen that $\lambda^0_n = \gamma + \dfrac{i}{L} \left( k + \dfrac{1}{2} \right) (4n+1)\pi$, $n\in\mathbb{Z}$, are distinct solutions of (\ref{eq: wave delay - case beta = 0}) with $\operatorname{Re}\lambda_n^0 = \gamma > 0$. We now turn our attention to the case $\beta \neq 0$. Let $h,\gamma > 0$ be still given by (\ref{eq: wave delay - def h}-\ref{eq: wave delay - def gamma}). We introduce the open set $A = \left\{ \lambda\in \mathbb{C} \,:\, 0 < \operatorname{Re}\lambda < 2 \gamma \;, \vert\lambda\vert > \sqrt{\vert\beta\vert} \right\}$. We define for $\lambda \in A$ the holomorphic functions \begin{align*} f(\lambda) & = \lambda \cosh\left( \lambda L \right) + \alpha\lambda e^{-\lambda h} \sinh\left( \lambda L \right) , \\ g(\lambda) & = \sqrt{\lambda^2+\beta} \cosh\left( \sqrt{\lambda^2 + \beta}L \right) + \alpha\lambda e^{-\lambda h} \sinh\left( \sqrt{\lambda^2 + \beta}L \right) , \end{align*} with $\sqrt{\lambda^2+\beta} = \lambda \exp\left(\dfrac{1}{2}\operatorname{Log}\left( 1 + \dfrac{\beta}{\lambda^2} \right)\right)$, where $\operatorname{Log}$ denotes the principal branch of the logarithm. We have $f(\lambda_n^0) = 0$ for all $n \in \mathbb{Z}$ and it can be observed that \begin{equation}\label{eq: wave delay - Rouche thm - step 1} g(\lambda) = f(\lambda) + O(1) , \qquad \lambda\in A , \quad \vert \lambda \vert \rightarrow + \infty .
\end{equation} For a given constant $R > 0$ to be defined later, we consider the simple loop $\lambda_n : [0,2\pi] \rightarrow \mathbb{C}$ defined by $\lambda_n(\theta) = \lambda_n^0 + \dfrac{R e^{i\theta}}{n}$, $\theta \in [0,2\pi]$. We consider an integer $N_0 \geqslant 1$ such that $\lambda_n(\theta) \in A$ for all $\vert n \vert \geqslant N_0$ and all $\theta \in [0,2\pi]$. Standard computations show that \begin{equation*} f(\lambda_n(\theta)) = (-1)^k i \lambda_n^0 \dfrac{Re^{i\theta}}{n} \zeta \cosh(\gamma L) + O\left(\dfrac{1}{n}\right) \end{equation*} when $\vert n \vert \rightarrow + \infty$, uniformly with respect to $\theta\in[0,2\pi]$, where $\zeta = L + h\tanh(\gamma L) - L \tanh^2(\gamma L)$. We note that $\zeta \neq 0$. Indeed, the condition $\zeta=0$ would imply that $\tanh(\gamma L) > 0$ is the positive root of $L + h X - L X^2$, yielding the contradiction $\tanh(\gamma L) = (h+\sqrt{h^2+4L^2})/(2L) > 1 + h/(2L) > 1$. We deduce that \begin{equation}\label{eq: wave delay - Rouche thm - step 2} \vert f(\lambda_n(\theta)) \vert \sim C R \end{equation} when $\vert n \vert \rightarrow + \infty$, uniformly with respect to $\theta\in[0,2\pi]$, where the constant $C = \dfrac{2(2k+1)\pi \vert \zeta \vert \cosh\left(\gamma L\right)}{L} \neq 0$ is independent of $R > 0$. Hence, in view of (\ref{eq: wave delay - Rouche thm - step 1}-\ref{eq: wave delay - Rouche thm - step 2}) and by selecting $R > 0$ large enough, we can apply Rouch{\'e}'s theorem for $\vert n \vert \geqslant N_1$ with $N_1 > 0$ large enough. This shows for any $\vert n \vert \geqslant N_1$ the existence of $\lambda_n^\beta = \lambda_n^0 + O\left(\dfrac{1}{n}\right)$ such that $g(\lambda_n^\beta) = 0$. Since $h > 0$ defined by (\ref{eq: wave delay - def h}) can be made arbitrarily small by selecting $k$ arbitrarily large, we have shown the existence of arbitrarily small delays $h > 0$ such that (\ref{eq: wave equation - input delay}) admits an infinite number of unstable modes. 
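The existence of $\gamma$ and the fact that the $\lambda_n^0$ are roots of (\ref{eq: wave delay - case beta = 0}) can also be checked numerically. The following sketch (ours, with the illustrative choice $\alpha = 2$, $L = 1$, $k = 1$, none of which is prescribed by the text) solves $e^{\gamma h} = \alpha \coth(\gamma L)$ by bisection and verifies a few of the unstable roots:

```python
import cmath
import math

# Illustrative parameters (our choice): alpha > 0, L = 1, and the delay
# h = L/(k + 1/2) from (eq: wave delay - def h) with k = 1.
alpha = 2.0
L = 1.0
k = 1
h = L / (k + 0.5)

# Solve e^{gamma h} = alpha * coth(gamma L) for the unique gamma > 0 by
# bisection: the residual is negative near 0 and positive for large gamma.
def residual(g: float) -> float:
    return math.exp(g * h) - alpha / math.tanh(g * L)

lo, hi = 1e-9, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if residual(mid) < 0.0:
        lo = mid
    else:
        hi = mid
gamma = 0.5 * (lo + hi)

# Check that lambda_n^0 = gamma + i (k+1/2)(4n+1) pi / L solves
# e^{lambda h} = -alpha tanh(lambda L) for a few values of n.
for n in (-2, -1, 0, 1, 2):
    lam = gamma + 1j * (k + 0.5) * (4 * n + 1) * math.pi / L
    lhs = cmath.exp(lam * h)
    rhs = -alpha * cmath.tanh(lam * L)
    assert abs(lhs - rhs) < 1e-6, (n, lhs, rhs)
print(f"gamma = {gamma:.6f} > 0: unstable modes with real part gamma")
```

The check exploits exactly the periodicity used in the text: with $h = L/(k+1/2)$, the imaginary part of $\lambda_n^0 h$ is an odd multiple of $\pi$, and $\tanh$ picks up a shift by $i\pi/2$, turning $\tanh$ into $\coth$.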
Such an observation implies that the strategy reported in this paper cannot be successfully applied to the case of a delayed control input. Hence, the PI regulation control of (\ref{eq: wave equation}) in the presence of a delay in the control input remains open. \paragraph{State delay} While, as discussed above, the case of a delay in the boundary control induces many difficulties (in particular, an infinite number of unstable modes), the case of a delay in the nonlinearity $f$ might be more favourable. More specifically, a possible research direction is concerned with the potential extension of the control strategy reported in this paper to the case of the wave equation \begin{equation*} \dfrac{\partial^2 y}{\partial t^2}(t,x) = \dfrac{\partial^2 y}{\partial x^2}(t,x) + f(y(t-h,x)) , \end{equation*} where $h > 0$ is a state delay. Possible approaches include the use of either Lyapunov-Krasovskii functionals~\cite{kolmanovskii2012applied} or small gain arguments~\cite{lhachemi2020boundary}. \subsection{Alternatives for control design} In this paper, we have designed the control strategy, in particular, by using the pole shifting theorem (see in particular equation (\ref{eq: control input vd}) in Subsection \ref{sec_poleshift}). Such pole shifting is very natural given the spectral approach that we have implemented: by adequately placing the poles (of course, the unstable ones; but we can also shift to the left some of the stable ones in order to improve the stabilization properties), we are able to ensure some robustness properties, with respect to disturbances or with respect to the domain, as we have discussed previously. One can wonder whether it is possible to design the control by other methods, for instance by the celebrated Riccati procedure (see, e.g., \cite{Khalil,Sontag,trelatbook,Zabczyk}) applied to the truncated finite-dimensional system.
Considering the classical linear-quadratic Riccati theory, let us denote by $u_n$ the optimal Riccati control (for some given weights) associated with the truncated system in dimension $n$. We do not make all notations precise, but the framework is clear. The main question is: does $u_n$ converge to $u_\infty$ as $n\rightarrow+\infty$, where $u_\infty$ is the Riccati control of the complete system in infinite dimension (i.e., the PDE)? We expect that such a convergence property is true at least in the parabolic case, i.e., for instance, for heat-like equations, or in the hyperbolic case with internal controls, i.e., for instance, for wave-like equations with distributed control. The hyperbolic case with boundary controls is probably much more difficult. Relatedly, this discussion raises the problem of assessing the numerical efficiency of various control design approaches. It would be interesting to compare our approach, developed in the present paper, with other possible approaches and to compare their efficiency. One different approach that comes to mind is backstepping design, which is well known to promote robustness properties (see, e.g., \cite{Krstic}), at least for parabolic equations. It is not clear whether backstepping could be performed for the 1-D wave equation investigated in the present article. \subsection{Controlling the output regulation by quasi-static deformations} In the present work, we have dealt with the stabilization and regulation control in the vicinity of a given steady-state. In this section, we address the following question: considering that the system output has been regulated to a given steady-state, is it now possible to steer the system output to \emph{another} steady-state? More precisely, based on Definition \ref{def: steady state}, we denote by $\mathcal{S} \subset\mathcal{C}^2([0,L])$ the set of steady-states of (\ref{eq: wave equation}) endowed with the $\mathcal{C}^2([0,L])$ topology.
Let $y_{e,1},y_{e,2} \in \mathcal{S}$ belong to the same connected component of $\mathcal{S}$. Introducing for $i \in \{1,2\}$ the system output associated with the steady-state $y_{e,i}$ defined by \begin{equation*} z_{e,i} = \dfrac{\mathrm{d} y_{e,i}}{\mathrm{d} x}(0), \end{equation*} can we design a PI controller able to steer the system output from its initial value $z_{e,1}$ (or close to it) into any neighbourhood of $z_{e,2}$ in finite time? A way to address this issue is to design a PI controller by \emph{quasi-static deformation}, as in \cite{coron2004global,coron2006global,schmidt2006}, along a path of steady outputs connecting $z_{e,1}$ to $z_{e,2}$. More precisely, since $y_{e,1},y_{e,2} \in \mathcal{S}$ are assumed to belong to the same connected component of $\mathcal{S}$, one can define $y_e(\tau,\cdot)$, with $\tau \in [0,1]$, a $\mathcal{C}^2$ path in $\mathcal{S}$ connecting $y_{e,1}$ to $y_{e,2}$, as well as the associated path of boundary inputs $u_e(\tau) = \frac{\partial y_e}{\partial x}(\tau,L)$. Now, a possible attempt would be to extend the approach of~\cite{coron2006global} by defining the path of system outputs $z_e(\tau) = \frac{\partial y_e}{\partial x}(\tau,0)$ associated with $y_e(\tau,\cdot)$. In this setting, the objective would be to study the deviations of the system output $z(t) = \frac{\partial y}{ \partial x}(t,0)$ with respect to the quasi-static path $y_e(\epsilon t , \cdot)$ by introducing $z_\delta(t) = z(t) - z_e(\epsilon t)$. In this configuration, $z_r(t) = z_e(\epsilon t)$ plays the role of a slowly time-varying reference input. It was shown in~\cite{coron2006global} that such a path can be used in order to steer the system (\ref{eq: wave equation}) from the steady-state $y_{e,1}$ to the steady-state $y_{e,2}$ by means of a boundary control input $u$ taking advantage of quasi-static deformations.
Specifically, for $\epsilon > 0$ small enough, the authors studied the deviations of the system trajectory with respect to the quasi-static path $y_e(\epsilon t , \cdot)$ by introducing: \begin{align*} y_\delta(t,x) & = y(t,x) - y_e(\epsilon t , x) , \\ u_\delta(t) & = u(t) - u_e(\epsilon t) \end{align*} for $t \in [0,1/\epsilon]$ and $x \in [0,L]$. The preliminary feedback still takes the form (\ref{eq: preliminary control input}). Due to the quasi-static deformations-based approach and using a Taylor expansion as in (\ref{eq: wave equation - variations around equilibrium - PDE}), the design of the auxiliary control input $v(t)$ requires the introduction of the following family of wave operators parametrized by $\tau \in [0,1]$: \begin{equation*} \mathcal{A}(\tau) = \begin{pmatrix} 0 & \mathrm{Id} \\ \mathcal{A}_0(\tau) & 0 \end{pmatrix} \end{equation*} with $\mathcal{A}_0(\tau) = \Delta + f'(y_e(\tau,\cdot)) \,\mathrm{Id}$ defined on the domain \begin{align*} D(\mathcal{A}(\tau)) = \{ (w^1,w^2) \in \mathcal{H} \,:\, & w^1 \in H^2(0,L) ,\, w^2 \in H^1(0,L) ,\, \\ & w^2(0) = 0 ,\, (w^1)'(L)+\alpha w^2(L) = 0 \} . \end{align*} Following~\cite[Lem.~2]{coron2006global}, this family of operators admits a family $(e_{k}(\tau,\cdot))_{k \in \mathbb{Z}}$ of Riesz bases formed by generalized eigenvectors of $\mathcal{A}(\tau)$, associated with the eigenvalues $(\lambda_k(\tau))_{k \in \mathbb{Z}}$ and with dual Riesz bases $(f_k(\tau,\cdot))_{k \in \mathbb{Z}}$, with properties similar to the ones of (\ref{lem: properties A}) but with an integer $n_0 \geqslant 0$ that is uniform with respect to $\tau \in [0,1]$. Without loss of generality, this latter integer can be selected such that $\vert k \vert \geqslant n_0 + 1$ implies $\operatorname{Re}\lambda_k(\tau) < -1$.
Moreover, the aforementioned family of Riesz bases is uniform with respect to $\tau \in [0,1]$ in the sense that the constants $m_R,M_R$ of Definition \ref{def: Riesz basis} can be selected independently of $\tau \in [0,1]$. These key properties allowed the authors of~\cite{coron2006global} to design a control law of the form $v(t) = K(\epsilon t) X(t)$. The matrix $K(\tau)$, parametrized by $\tau \in [0,1]$, is obtained based on an augmented finite-dimensional LTI system, also parametrized by $\tau \in [0,1]$, which in particular captures the first modes of $\mathcal{A}(\tau)$ characterized by the integers $- n_0 \leqslant k \leqslant n_0$. The vector $X(t)$ captures, in addition to a number of integral components, the projection of the system trajectory $W(t,\cdot)$ onto the vector space spanned by $(e_{k}(\epsilon t,\cdot))_{\vert k \vert \leqslant n_0}$. The stability property of the resulting closed-loop system was assessed through the study of a suitable Lyapunov functional, yielding the following result~\cite[Thm.~1]{coron2006global}. For the wave equation (\ref{eq: wave equation}) with initial condition set as the steady-state $y_{e,1}$ and with the control input selected as above: for every $\delta > 0$, there exists $\epsilon_1 > 0$ such that, for every $\epsilon \in (0,\epsilon_1]$, we have \begin{equation*} \left\Vert \dfrac{\partial y}{\partial x}(1/\epsilon,\cdot) - \dfrac{\mathrm{d} y_{e,2}}{\mathrm{d}x} \right\Vert_{L^2(0,L)} + \left\Vert \dfrac{\partial y}{\partial t}(1/\epsilon,\cdot) \right\Vert_{L^2(0,L)} \leqslant \delta . \end{equation*} In conclusion, it is of interest to evaluate the possible extension of our PI regulation procedure to the case of quasi-static deformations as described above. \subsection{System of one-dimensional partial differential equations} In Section \ref{sec_gen_multiD}, we have mentioned as a completely open issue the general multi-dimensional case, which is very challenging.
As an intermediate case between 1-D and multi-D, we may consider the case of systems of one-dimensional partial differential equations. A line of research that is of great interest is to consider coupled scalar one-dimensional PDEs, for instance of the form \begin{align*} \dfrac{\partial^2 y}{\partial t^2} &= a \dfrac{\partial^2 y}{\partial x^2} + c y + dz \\ \dfrac{\partial^2 z}{\partial t^2} &= b \dfrac{\partial^2 z}{\partial x^2} + e y + fz \end{align*} which are 1-D coupled wave-like equations, with various possible controls and with various outputs (e.g., Neumann boundary control and Neumann regulated output as in this article). It would also be of great interest to replace the above wave equation in $z$ with a heat-like equation in $z$, that is, to consider a 1-D wave-like equation coupled with a 1-D heat-like equation in $z$. In that case, we expect new phenomena, emerging from the interesting coupling between a parabolic and a hyperbolic equation. Actually, even the case of coupled 1-D heat-like equations does not seem to have been considered in the literature concerning PI issues. It would be very interesting to address these problems. They have indeed attracted much attention for controllability issues, and many powerful techniques have been introduced to treat such coupled systems (see, e.g., \cite{Alabau2002,Alabau2015,AlabauLeautaud,AK1,AK2,AK3,LiardLissy}). This open issue is a future line of research. \appendix \section{Annex - Proof of Lemma~\ref{lem: properties A}}\label{annex: proof lemma} From the definition of the operator $\mathcal{A}$ given by (\ref{eq: def operator A}), $\lambda \in\mathbb{C}$ is an eigenvalue of $\mathcal{A}$ associated with the nonzero eigenvector $w = (w^1,w^2) \in D(\mathcal{A})$ if and only if $w^2 = \lambda w^1$ and \begin{align*} & (w^1)'' + f'(y_e) w^1 = \lambda^2 w^1 , \\ & w^1(0)=0 , \quad (w^1)'(L)+\alpha\lambda w^1(L) = 0 \end{align*} for $x \in (0,L)$.
Then, for $\vert \lambda \vert \rightarrow + \infty$, we obtain \begin{equation*} w^1(x) = \sinh\left(\sqrt{\lambda^2+O(1)} x\right) , \quad (w^1)'(x) = \sqrt{\lambda^2+O(1)} \cosh\left(\sqrt{\lambda^2+O(1)} x\right) , \end{equation*} uniformly with respect to $x \in [0,L]$. Using now the right boundary condition, we obtain the existence of an integer $k \in\mathbb{Z}$ such that, as $\vert k \vert \rightarrow +\infty$, \begin{equation*} \lambda_k = \dfrac{1}{2L} \log\left(\dfrac{\alpha-1}{\alpha+1}\right) + i \dfrac{k\pi}{L} + O\left( \dfrac{1}{\vert k \vert} \right) . \end{equation*} Then, an associated unit eigenvector is given by \begin{equation*} e_k = \dfrac{1}{A_k} \left( \sinh\left(\sqrt{\lambda_k^2+O(1)} x\right) , \lambda_k \sinh\left(\sqrt{\lambda_k^2+O(1)} x\right) \right) \end{equation*} where, recalling that $\alpha > 1$ and introducing $\beta = - \frac{1}{2L} \log\left(\frac{\alpha-1}{\alpha+1}\right) > 0$, \begin{equation*} A_k = \vert \lambda_k \vert \sqrt{ \dfrac{\sinh(2\beta L)}{2\beta} + O\left(\dfrac{1}{\vert k \vert}\right) } . \end{equation*} In particular, $(e_k^1)'(0) = O(1)$ as $\vert k \vert \rightarrow + \infty$, showing item 7. We show that the eigenvalues of $\mathcal{A}$ are geometrically simple (item 2). To do so, assume that $w_i = (w_i^1,w_i^2) \in D(\mathcal{A})$, with $i\in\{1,2\}$, are two eigenvectors of $\mathcal{A}$ associated with the same eigenvalue $\lambda \in\mathbb{C}$. We note that $w_i^1(L) \neq 0$ because otherwise $(w_i^1)'(L) = - \alpha \lambda w_i^1(L) = 0$, hence, by Cauchy uniqueness, $w_i^1 = 0$ and thus $w_i^2 = \lambda w_i^1 = 0$, giving the contradiction $w_i = 0$. Then the function $g = w_2^1(L) w_1^1 - w_1^1(L) w_2^1$ satisfies \begin{align*} & g'' + f'(y_e) g = \lambda^2 g , \\ & g(L)=g'(L)=0 , \end{align*} implying, again by Cauchy uniqueness, $g=0$. Recalling that $w_i^2 = \lambda w_i^1$, this shows that $w_1$ and $w_2$ are linearly dependent.
Recalling that $\mathcal{A}$ has compact resolvent, we denote by $(e_k)_{k\in\mathbb{Z}}$ a complete set of unit generalized eigenfunctions of $\mathcal{A}$ associated with the eigenvalues $(\lambda_k)_{k\in\mathbb{Z}}$~\cite{gohberg1978introduction}. We are going to apply Bari's theorem~\cite{gohberg1978introduction} to show that $(e_k)_{k\in\mathbb{Z}}$ is a Riesz basis. To do so, we need a Riesz basis of reference. Based on the definition of the operator $\mathcal{A}$ given by (\ref{eq: def operator A}), we consider the operator below, obtained by removing the contribution of the term $f'(y_e) \,\mathrm{Id}$: \begin{equation*} \mathcal{A}_{tr} = \begin{pmatrix} 0 & \mathrm{Id} \\ \Delta & 0 \end{pmatrix} \end{equation*} defined on the same domain as $\mathcal{A}$. We know from~\cite[Section~4]{russell1978controllability} that $\mathcal{A}_{tr}$ admits a Riesz basis of eigenvectors $(\phi_k)_{k\in\mathbb{Z}}$ associated with the eigenvalues $(\mu_k)_{k\in\mathbb{Z}}$ given for any $k\in\mathbb{Z}$ by \begin{equation*} \mu_k = \dfrac{1}{2L} \log\left(\dfrac{\alpha-1}{\alpha+1}\right) + i \dfrac{k\pi}{L} , \end{equation*} and \begin{equation*} \phi_k = \dfrac{1}{B_k} \left( \sinh(\mu_k x) , \mu_k \sinh(\mu_k x) \right) \end{equation*} where, recalling that $\beta = - \frac{1}{2L} \log\left(\frac{\alpha-1}{\alpha+1}\right) > 0$, \begin{equation*} B_k = \dfrac{1}{L\sqrt{2\beta}} \sqrt{(\beta^2 L^2 + k^2 \pi^2)\sinh(2\beta L)} . \end{equation*} We deduce that \begin{equation*} e_k = \phi_k + O\left(\dfrac{1}{\vert k \vert}\right) , \end{equation*} in $\mathcal{H}$-norm as $\vert k \vert \rightarrow +\infty$. Hence $(e_k)_{k\in\mathbb{Z}}$ is quadratically close to the Riesz basis $(\phi_k)_{k\in\mathbb{Z}}$. Then, by the results of~\cite[Lemma~6.2 and Theorem~6.3]{guo2001riesz} relying on Bari's theorem, we obtain that $(e_k)_{k\in\mathbb{Z}}$ is a Riesz basis.
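As a quick numerical check (ours, not part of the proof), one can verify that the $\mu_k$ above are exactly the roots of the characteristic equation of $\mathcal{A}_{tr}$: with $w^1 = \sinh(\mu_k x)$ and $w^2 = \mu_k w^1$, the boundary condition $(w^1)'(L) + \alpha w^2(L) = 0$ reduces, after dividing by $\mu_k \neq 0$, to $\cosh(\mu_k L) + \alpha \sinh(\mu_k L) = 0$:

```python
import cmath
import math

# Illustrative values (our choice, alpha > 1 as required): same as in the
# numerical illustration section.
alpha = 1.1
L = 1.0

def mu(k: int) -> complex:
    # mu_k = (1/2L) log((alpha-1)/(alpha+1)) + i k pi / L
    return (1.0 / (2.0 * L)) * math.log((alpha - 1.0) / (alpha + 1.0)) \
        + 1j * k * math.pi / L

# Verify the characteristic equation cosh(mu_k L) + alpha sinh(mu_k L) = 0
# and the stability of every mode (common negative real part).
for k in (-3, -1, 0, 2, 5):
    m = mu(k)
    assert abs(cmath.cosh(m * L) + alpha * cmath.sinh(m * L)) < 1e-12
    assert m.real < 0
print("mu_k are roots of cosh + alpha*sinh = 0, all with negative real part")
```

Indeed, $\cosh(\mu L) + \alpha \sinh(\mu L) = 0$ is equivalent to $e^{2\mu L} = \frac{\alpha-1}{\alpha+1}$, whose solutions are precisely the $\mu_k$ listed above.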
Introducing $(f_k)_{k\in\mathbb{Z}}$ as the dual Riesz basis of $(e_k)_{k\in\mathbb{Z}}$, items 1, 3, 4, and 6 hold true. Finally, a homotopy argument using the operator $\mathcal{A}_{tr}$ shows that the algebraic multiplicity of the real eigenvalues of $\mathcal{A}$ is odd, yielding item 5. \end{document}
\begin{document} \begin{abstract} For any cofinite Fuchsian group $\Gamma\subset{\rm PSL}(2, \mathbb{R})$, we show that any set of $N$ points on the hyperbolic surface $\Gamma\backslash\mathbb{H}^2$ determines $\geq C_{\Gamma} \frac{N}{\log N}$ distinct distances for some constant $C_{\Gamma}>0$ depending only on $\Gamma$. In particular, for $\Gamma$ being any finite index subgroup of ${\rm PSL}(2, \mathbb{Z})$ with $\mu=[{\rm PSL}(2, \mathbb{Z}): \Gamma ]<\infty$, any set of $N$ points on $\Gamma\backslash\mathbb{H}^2$ determines $\geq C\frac{N}{\mu\log N}$ distinct distances for some absolute constant $C>0$. \end{abstract} \title{Distinct distances on hyperbolic surfaces} \section{Introduction} Erd\H{o}s \cite{Erdos} in 1946 asked the question of finding the minimal number of distinct distances among any $N$ points in the plane. The breakthrough work of Guth-Katz \cite{Guth-Katz} gave the lower bound $\geq C\frac{N}{\log N}$ for some constant $C>0$ in the Euclidean plane, which is sharp up to a logarithmic factor. Another related and widely studied conjecture is Falconer's conjecture, which asks for a lower bound on the Hausdorff dimension of sets in $\mathbb{R}^d$ for which the distance set has positive Lebesgue measure. Falconer's conjecture can be viewed as a continuous analogue of the distinct distances problem. Interested readers may check Falconer \cite{Falconer}, Guth-Iosevich-Ou-Wang \cite{Guth-Iosevich-Ou-Wang}, Iosevich \cite{Iosevich}, etc. The Erd\H{o}s-Falconer type problems have been generalized to other spaces and applied to certain sum-product estimates; see e.g. Bourgain-Tao \cite{Bourgain-Tao}, Hart-Iosevich-Koh-Rudnev \cite{HIKR}, Roche-Newton and Rudnev \cite{RocheNewton-Rudnev}, Rudnev-Selig \cite{Rudnev-Selig}, Sheffer-Zahl \cite{Sheffer-Zahl}, and the blog of Tao \cite{Tao}, etc.
However, the distinct distances problem had not been considered on hyperbolic surfaces until very recently, by Lu and the author in \cite{Lu-Meng}, where the modular surface and hyperbolic surfaces with cocompact fundamental groups are studied. The problem remained open for more general hyperbolic surfaces arising from non-cocompact Fuchsian groups. In this paper, for all cofinite Fuchsian groups $\Gamma$, we give a complete answer to the distinct distances problem for all hyperbolic surfaces $\Gamma\backslash\mathbb{H}^2$ endowed with the hyperbolic metric from $\mathbb{H}^2$. \begin{theorem}\label{thm-cofinite} For any cofinite Fuchsian group $\Gamma\subset \mathrm{PSL}(2, \mathbb{R})$, any set of $N$ points on the hyperbolic surface $\Gamma\backslash\mathbb{H}^2$ determines $\geq C_{\Gamma}\frac{N}{\log N}$ distinct distances for some constant $C_{\Gamma}$ depending only on $\Gamma$. \end{theorem} In particular, for finite index subgroups of the modular group $\mathrm{PSL}(2, \mathbb{Z})$, we extract the dependence of the implied constants on the index. \begin{theorem}\label{thm-finite-index} For any finite index subgroup $\Gamma$ of $\mathrm{PSL}(2, \mathbb{Z})$ with $[ \mathrm{PSL}(2,\mathbb{Z}):\Gamma ]=\mu$, any set of $N$ points on the hyperbolic surface $\Gamma\backslash\mathbb{H}^2$ determines $\geq C \frac{N}{\mu\log N}$ distinct distances for some absolute constant $C>0$. \end{theorem} Theorem \ref{thm-finite-index} has an application to the equilateral dimension problem. The equilateral dimension of a metric space is the maximal number of points in the space with pairwise equal distances. It has been studied in various spaces; see Alon-Milman \cite{Alon-Milman}, Guy \cite{Guy}, Koolen \cite{Koolen}, etc. For instance, the equilateral dimension of the $n$-dimensional Euclidean space is $n+1$. However, we are not aware of any result in the literature about the equilateral dimension of general hyperbolic surfaces.
We observe that the lower bound in Theorem \ref{thm-finite-index} is nontrivial for any set of size $N\gg \mu^{1+\epsilon}$. Thus the following corollary holds. \begin{corollary} For any subgroup $\Gamma$ of $\mathrm{PSL}(2, \mathbb{Z})$ with finite index $[ \mathrm{PSL}(2,\mathbb{Z}):\Gamma ]=\mu$, the equilateral dimension of the hyperbolic surface $\Gamma\backslash\mathbb{H}^2$ is $\ll \mu^{1+\epsilon}$ for any $ \epsilon>0$. \end{corollary} The isometry group of the hyperbolic plane $\mathbb{H}^2$ is $\mathrm{PSL}(2, \mathbb{R})$, which acts on $\mathbb{H}^2$ by M\"{o}bius transformations: \[z\mapsto \gamma(z):=\frac{az+b}{cz+d}, \text{ for }\gamma=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in\mathrm{PSL}(2, \mathbb{R}), z\in\mathbb{H}^2.\] For any discrete subgroup $\Gamma$ of $\mathrm{PSL}(2, \mathbb{R})$, i.e. a \textit{Fuchsian group}, the distance between any two points $p, q $ on the hyperbolic surface $ Y\cong \Gamma\backslash\mathbb{H}^2$ is $$d_Y(p, q):=\min_{\gamma_1, \gamma_2\in \Gamma} d_{\mathbb{H}^2}(\gamma_1(p), \gamma_2(q))=\min_{\gamma_1, \gamma_2\in\Gamma} d_{\mathbb{H}^2}(p, \gamma_1^{-1}\gamma_2(q) )=\min_{\gamma\in\Gamma}d_{\mathbb{H}^2}(p, \gamma(q)).$$ \begin{figure} \caption{Distances on hyperbolic surface} \end{figure} Instead of calculating distances on the surface directly, we consider representatives of the points in a fundamental domain $F_{\Gamma}$ of $\Gamma$. In \cite{Lu-Meng}, Lu and the author introduced the concept of a ``geodesic cover" $\Gamma'\subset\Gamma$ such that for any $p, q\in F_{\Gamma}$, $$ d_Y(p, q)= d_{\mathbb{H}^2}(p, \gamma' q) ~\text{for some}~\gamma'\in\Gamma'.$$ If there exists a finite geodesic cover, one can derive a lower bound for the distinct distances problem on the hyperbolic surface $\Gamma\backslash\mathbb{H}^2$.
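The minimum above can be explored numerically. The sketch below (an illustration, not part of the paper) computes $d_{\mathbb{H}^2}$ from the standard identity $\cosh d_{\mathbb{H}^2}(z_1,z_2)=1+\frac{|z_1-z_2|^2}{2\,{\rm Im}(z_1)\,{\rm Im}(z_2)}$, which agrees with formula \eqref{eq-hyperbolic-cosh-distance} below, and approximates $d_Y$ on the modular surface by minimizing over all words of bounded length in generators of $\mathrm{PSL}(2,\mathbb{Z})$; the truncation depth is a hypothetical parameter, so the routine only returns an upper bound for $d_Y$ (exact once the depth is large enough for the points at hand).

```python
import math
from itertools import product

def dist_H2(z1, z2):
    """Hyperbolic distance on H^2 via cosh d = 1 + |z1 - z2|^2 / (2 y1 y2)."""
    c = 1 + abs(z1 - z2) ** 2 / (2 * z1.imag * z2.imag)
    return math.acosh(c)

def mobius(g, z):
    """Action of g = (a, b, c, d) in PSL(2, R) on z in the upper half-plane."""
    a, b, c, d = g
    return (a * z + b) / (c * z + d)

def words(gens, depth):
    """All products of at most `depth` generators (a finite truncation of the group)."""
    identity = (1, 0, 0, 1)
    seen, frontier = {identity}, {identity}
    for _ in range(depth):
        new = set()
        for (a, b, c, d) in frontier:
            for (p, q, r, s) in gens:
                g = (a * p + b * r, a * q + b * s, c * p + d * r, c * q + d * s)
                # work in PSL: identify g with -g
                if g not in seen and (-g[0], -g[1], -g[2], -g[3]) not in seen:
                    new.add(g)
        seen |= new
        frontier = new
    return seen

# Generators of PSL(2, Z): T: z -> z + 1 (and its inverse) and S: z -> -1/z.
GENS = [(1, 1, 0, 1), (1, -1, 0, 1), (0, -1, 1, 0)]

def dist_surface(p, q, depth=6):
    """Upper bound for d_Y(p, q) = min_gamma d_H2(p, gamma q), truncated at `depth`."""
    return min(dist_H2(p, mobius(g, q)) for g in words(GENS, depth))
```

For instance, `dist_H2(1j, 2j)` returns $\log 2$, and `dist_surface(1j, 1 + 1j)` is $0$, since $z\mapsto z+1$ identifies the two points on the modular surface.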
For the modular surface $\mathrm{PSL}(2, \mathbb{Z})\backslash\mathbb{H}^2$, one can find a finite geodesic cover by working explicitly with matrices in $\mathrm{PSL}(2, \mathbb{Z})$. However, it is hard to tackle general non-cocompact Fuchsian groups this way, since we cannot write out all the elements explicitly. Another difficulty in finding such a finite geodesic cover is that the number of representatives we need to examine can blow up if the fundamental domain has many inequivalent cusps. This is not an issue for the modular surface, which has only one cusp up to equivalence, and the imaginary parts of points in a fundamental domain of $\mathrm{PSL}(2, \mathbb{Z})$ are all bounded below (or bounded above, for other choices of fundamental domain). Therefore, the representatives we have to examine never have very small imaginary parts, and their number can be bounded. In the general case, however, if a pair of points are close to two inequivalent cusps respectively, the number of representatives we have to examine may get out of control. In order to overcome such difficulties, we propose a more general concept of a geodesic cover, defined on any subset of a fundamental domain $F_{\Gamma}$ and with respect to different base groups; see Definition \ref{defn-geodesic-covering-number}. By building relations between geodesic covers of different subregions of $F_{\Gamma}$ and geodesic covers of certain regions in different groups, we prove lower bounds for distinct distances on hyperbolic surfaces associated with any cofinite Fuchsian group. See Lemmas \ref{lem-geodesic-number-Distinct-distance}, \ref{lem-geodesic-cover-number-finite} and \ref{lem-finite-index-geo-cover-number} for details. \textbf{Acknowledgment.} The author is partially supported by the Humboldt Professorship of Professor Harald Helfgott.
\section{Preliminaries and preparations} First we briefly summarize the properties of Fuchsian groups (see Beardon \cite{Beardon} or Katok \cite{Katok} for more related material). A subgroup $\Gamma$ of $\mathrm{PSL}(2, \mathbb{R})$ is a \textit{Fuchsian group} if and only if $\Gamma$ acts \textit{properly discontinuously} on $\mathbb{H}^2$. Thus the $\Gamma$-orbit of any point $z\in\mathbb{H}^2$ is \textit{locally finite}, which means any compact set $K\subset\mathbb{H}^2$ contains only finitely many orbit points, i.e. the set $\Gamma z\cap K$ is finite for any $z\in\mathbb{H}^2$. A \textit{cofinite} Fuchsian group is a discrete subgroup of $\mathrm{PSL}(2, \mathbb{R})$ of finite covolume, i.e. a fundamental domain of $\Gamma$ has finite hyperbolic area; a cofinite discrete subgroup is also called a \textit{lattice} in some other contexts. Siegel's theorem (see \cite{Katok}, Theorem 4.1.1) says that every cofinite Fuchsian group is \textit{geometrically finite}, i.e. it has a convex fundamental domain with finitely many sides. Cocompact Fuchsian groups were considered in \cite{Lu-Meng}; in this paper, we focus on the non-cocompact case. Then $\Gamma$ has parabolic elements, and its fundamental domain $F_{\Gamma}$ must have a vertex on $\hat{\mathbb{R}}$, which is called a \textit{cusp}. Since we assume $\Gamma$ is cofinite, by Siegel's theorem its fundamental domain $F_{\Gamma}$ has finitely many cusps. We use an idea of Iwaniec (see \cite{Iwaniec}, \S 2.2) to partition the fundamental domain of a Fuchsian group. Define the stability group of $z$ as $$\Gamma_z:=\{ \gamma\in\Gamma: \gamma z=z \}.$$ Given a cusp $\mathfrak{a}\in \hat{\mathbb{R}}$ of $\Gamma$, the stability group $\Gamma_{\mathfrak{a}}$ is a cyclic group generated by a parabolic element, say $\Gamma_{\mathfrak{a}}=\langle \gamma_{\mathfrak{a}}\rangle$.
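To make the stability group concrete in the simplest case, the following brute-force sketch (an illustration, not part of the argument; the entry bound $3$ is an arbitrary choice) checks that every small-entry element of $\mathrm{SL}(2,\mathbb{Z})$ fixing the cusp $\infty$ is an integer translation, i.e., up to sign, a power of the parabolic generator $\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)$.

```python
from itertools import product

def sl2z_elements(bound):
    """All integer tuples (a, b, c, d) with det 1 and entries in [-bound, bound]."""
    return [g for g in product(range(-bound, bound + 1), repeat=4)
            if g[0] * g[3] - g[1] * g[2] == 1]

def fixes_infinity(g):
    """g sends infinity to a/c, so it stabilizes the cusp infinity iff c == 0."""
    return g[2] == 0

def is_translation(g):
    """In SL(2, Z), c == 0 forces a = d = +-1, i.e. z -> z + b up to sign."""
    a, b, c, d = g
    return c == 0 and abs(a) == 1 and d == a

# Every small-entry element stabilizing infinity is a power of the
# parabolic generator gamma_infinity = (1, 1, 0, 1), up to sign.
stabilizer = [g for g in sl2z_elements(3) if fixes_infinity(g)]
```

With bound $3$ there are exactly $14$ such elements: $b\in\{-3,\dots,3\}$ and the two signs $\pm I$ on the diagonal.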
There exists $\sigma_{\mathfrak{a}}\in SL(2, \mathbb{R})$ such that \begin{equation}\label{eq-conjugate-to-translation} \sigma_{\mathfrak{a}}\infty=\mathfrak{a}, \quad \sigma_{\mathfrak{a}}^{-1}\gamma_{\mathfrak{a}}\sigma_{\mathfrak{a}}=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}. \end{equation} Then $\sigma_{\mathfrak{a}}^{-1}$ sends $\mathfrak{a}$ to $\infty$, and $\sigma_{\mathfrak{a}}$ maps the strip \begin{equation}\label{infinite-part-of-fundamental-domain} P(T):=\{ z=x+iy: 0<x<1, y\geq T\} \end{equation} onto the cuspidal zone \begin{equation}\label{eq-F-cusp-to-F-infty} F_{\mathfrak{a}, T}=\sigma_{\mathfrak{a}}P(T). \end{equation} The cuspidal zone $F_{\mathfrak{a}, T}$ is contained in a disc (whose boundary is a horocycle) tangent to $\hat{\mathbb{R}}$ at $\mathfrak{a}$. When there is more than one cusp, we may choose $T$ large enough that the cuspidal zones are disjoint. In this way, we divide the fundamental domain $F_{\Gamma}$ into the cuspidal parts \begin{equation}\label{eq-partition-cuspidal-central} F_{\bowtie,T}:=\bigcup_{\mathfrak{a}}F_{\mathfrak{a}, T} \end{equation} and the central part $F_T:=F_{\Gamma}\setminus F_{\bowtie,T}$ (see Figure \ref{fig-cuspidal-partition}). \begin{figure} \caption{Cuspidal parts and central part} \label{fig-cuspidal-partition} \end{figure} Now we give the definition of a geodesic cover and the geodesic-covering number of any region with respect to a Fuchsian group. \begin{definition}\label{defn-geodesic-covering-number} Let $F_{\Gamma}$ be a fundamental domain of $\Gamma$, and $Y\cong\Gamma\backslash\mathbb{H}^2$ be the hyperbolic surface associated with $\Gamma$. For any subset $F'\subset F_{\Gamma}$, we say $\Gamma'\subset\Gamma$ is a \textbf{geodesic cover of $F'$ in $\Gamma$} if \begin{equation} d_{Y}(p, q)=\min_{\gamma_1, \gamma_2\in\Gamma'} d_{\mathbb{H}^2}(\gamma_1(p), \gamma_2(q)), \quad \forall p, q\in F'.
\end{equation} We call the smallest cardinality of such a geodesic cover $\Gamma'\subset\Gamma$ the \textbf{geodesic-covering number of $F'$ in $\Gamma$}, denoted by $K_{\Gamma}(F')$. \end{definition} \begin{remark} A geodesic cover always contains the identity. If we take $F'=F_{\Gamma}$, this matches the definition of a geodesic cover in \cite{Lu-Meng}. \end{remark} \begin{remark} Note that this definition depends both on the region and on the base group. If $F''\subset F'\subset F_{\Gamma}$, then $K_{\Gamma}(F'')\leq K_{\Gamma}(F')$. But for a subgroup $\Gamma^*$ of $\Gamma$, it is not clear whether $K_{\Gamma^*}(F')\leq K_{\Gamma}(F')$ or vice versa. \end{remark} We now consider the geodesic-covering numbers of the central part $F_T$ and the cuspidal parts $F_{\mathfrak{a},T} $ for every cusp $\mathfrak{a}$. If all of them are finite, we are able to derive a lower bound for distinct distances on hyperbolic surfaces. \begin{lemma}\label{lem-geodesic-number-Distinct-distance} Assume $\Gamma$ is a cofinite Fuchsian group with a fundamental domain $F_{\Gamma}$. If the geodesic-covering numbers of $F_T$ and $F_{\mathfrak{a},T}$ for every cusp $\mathfrak{a}$ are all finite for some $T=T_{\Gamma}$, then any set of $N$ points on the hyperbolic surface $\Gamma\backslash\mathbb{H}^2$ determines $\geq C_{\Gamma}\frac{N}{\log N}$ distinct distances for some constant $C_{\Gamma}>0$ depending on $\Gamma$. \end{lemma} \begin{remark} Throughout the proof, we assume that the set concerned has no points lying on the boundary of $F_{\Gamma}$. If there are points lying on the boundary of $F_{\mathfrak{a}, T}$ for some cusp $\mathfrak{a}$, we may use a parabolic motion to map $F_{\mathfrak{a}, T}$ to a translate of $P(T)$ with no points on the boundary. If there are points lying on the boundary of $F_T$, the same proof of Lemma \ref{lem-geodesic-cover-number-finite} also works for the closure of $F_T$.
\end{remark} \begin{remark}\label{remark-cocompact} If $\Gamma$ is a cocompact Fuchsian group, there is no cusp and $F_{\Gamma}$ is bounded. In this case we simply take the cuspidal part to be empty and the central part $F_{T}=F_{\Gamma}$ (for large enough $T$); the above lemma still holds. \end{remark} \begin{proof}[Proof of Lemma \ref{lem-geodesic-number-Distinct-distance}] Given a set $\mathcal{S}$ of $N$ points on the hyperbolic surface $Y\cong\Gamma\backslash\mathbb{H}^2$, we consider these $N$ points on a fundamental domain $F_{\Gamma}$. According to the partition \eqref{eq-partition-cuspidal-central} of $F_{\Gamma}$, either $F_{\bowtie,T}$ or $F_T$ contains at least $N/2$ of the points. \textbf{Case 1).} If $F_T$ contains at least $N/2$ points, we only need to consider the lower bound for distinct distances among them, since this is also a lower bound for the $N$ points on the whole surface. Denote the set of points on $F_T$ by $\mathcal{S}_1$. Since we assume the geodesic-covering number of $F_T$ is finite, we choose a finite geodesic cover $\Gamma'\subset \Gamma$ with cardinality $|\Gamma'|=K_{\Gamma}(F_T)$. Define the distance set $$d_{Y}(\mathcal{S}_1):=\{ d_{Y}(p, q): p, q\in \mathcal{S}_1 \}\subset \{ d_{\mathbb{H}^2}(p, q): p, q \in \cup_{\gamma\in \Gamma'} \gamma(\mathcal{S}_1)\},$$ and the set of distance quadruples \begin{align}\label{eq-distance-quadruple-to-surface} Q_{Y}(\mathcal{S}_1)&:=\{ (p_1, p_2; p_3, p_4)\in \mathcal{S}^4_1: d_{Y}(p_1, p_2)=d_{Y}(p_3, p_4)\neq 0 \} \nonumber\\ &~\subset Q_{\mathbb{H}^2}\big(\cup_{\gamma\in\Gamma'}\gamma(\mathcal{S}_1) \big), \end{align} where \begin{equation}\label{eq-defn-QH} Q_{\mathbb{H}^2}(\mathcal{P}):=\{ (p_1, p_2; p_3, p_4)\in \mathcal{P}^4: d_{\mathbb{H}^2}(p_1, p_2)=d_{\mathbb{H}^2}(p_3, p_4)\neq 0 \}. \end{equation} For any finite set of points $\mathcal{P}$ on a hyperbolic surface $Y$, the connection between $d_Y(\mathcal{P})$ and $Q_Y(\mathcal{P})$ is as follows.
Suppose the elements of $d_Y(\mathcal{P})$ are $d_1, d_2, \cdots, d_k$ and $n_i$ is the number of ordered pairs $(p_1, p_2)\in\mathcal{P}^2$ with distance $d_i$ ($1\leq i\leq k$), so that $\sum_{i=1}^k n_i=|\mathcal{P}|^2-|\mathcal{P}|$. By the Cauchy-Schwarz inequality, we get \begin{equation} \big(|\mathcal{P}|^2-|\mathcal{P}| \big)^2=\bigg( \sum_{i=1}^k n_i \bigg)^2\leq \bigg(\sum_{i=1}^k n^2_i \bigg)k=|Q_Y(\mathcal{P})| |d_Y(\mathcal{P})|, \end{equation} thus \begin{equation}\label{Quadruple-to-distance} |d_Y(\mathcal{P})|\geq \frac{ (|\mathcal{P}|^2-|\mathcal{P}| )^2 }{ |Q_Y(\mathcal{P})|}. \end{equation} For any set of points $\mathcal{P}$ in $\mathbb{H}^2$, by an argument of Tao in his blog \cite{Tao} (see also \cite{Rudnev-Selig}), one can derive \begin{equation}\label{eq-Tao-quaruple} |Q_{\mathbb{H}^2}(\mathcal{P})|\ll |\mathcal{P} |^3 \log (|\mathcal{P}|). \end{equation} Recently, Lu-Meng \cite{Lu-Meng} also gave a different proof of this estimate by modifying the framework of Guth-Katz and working explicitly with isometries of $\mathbb{H}^2$. Since the geodesic-covering number $K_{\Gamma}(F_T)$ of $F_T$ in $\Gamma$ is finite, the cardinality of $\cup_{\gamma\in\Gamma'}\gamma(\mathcal{S}_1)$ is $\leq K_{\Gamma}(F_T) |\mathcal{S}_1 | \leq K_{\Gamma}(F_T) N$. By \eqref{eq-distance-quadruple-to-surface}, we derive that \begin{equation} |Q_Y(\mathcal{S}_1)|\ll K^3_{\Gamma}(F_T) N^3 (\log (K_{\Gamma}(F_T))+\log N). \end{equation} Thus by \eqref{Quadruple-to-distance}, we get \begin{equation} |d_Y(\mathcal{S})|\geq |d_Y(\mathcal{S}_1)|\gg \frac{N}{ K^3_{\Gamma}(F_T) (\log (K_{\Gamma}(F_T))+\log N) }\geq C'_{\Gamma}\frac{N}{\log N}, \end{equation} for some constant $C'_{\Gamma}>0$ depending on $\Gamma$. \textbf{Case 2).} There are at least $N/2$ points on $F_{\bowtie,T}$. Let $n_c<\infty$ be the number of cusps of the fundamental domain $F_{\Gamma}$. Then there exists one cusp $\mathfrak{b}$ such that $F_{\mathfrak{b},T}$ contains at least $N/(2n_c)$ points. We may assume all these points lie in the interior of $F_{\mathfrak{b}, T}$.
Denote the set of points on $F_{\mathfrak{b},T}$ by $\mathcal{S}_2$. By a similar argument as in Case 1), we deduce that \begin{equation} |d_Y(\mathcal{S})|\geq |d_Y(\mathcal{S}_2)|\gg \frac{N/n_c}{ K^3_{\Gamma}(F_{\mathfrak{b}, T}) (\log(K_{\Gamma}(F_{\mathfrak{b},T}) ) +\log N ) } \geq C''_{\Gamma}\frac{N}{\log N}, \end{equation} for some constant $C''_{\Gamma}>0$ depending on $\Gamma$. Combining Cases 1) and 2), we finish the proof. \end{proof} \section{ Geodesic-covering numbers for cofinite Fuchsian groups } In this section, we give the proof of Theorem \ref{thm-cofinite} based on Lemma \ref{lem-geodesic-number-Distinct-distance}. We only need to bound the geodesic-covering numbers of $F_T$ and $F_{\mathfrak{a}, T} $ for every cusp $ \mathfrak{a}$. \begin{lemma}\label{lem-geodesic-cover-number-finite} Assume $\Gamma$ is a cofinite Fuchsian group with a fundamental domain $F_{\Gamma}$. If we partition $F_{\Gamma}$ as in \eqref{eq-partition-cuspidal-central} for some large enough $T$ depending on $\Gamma$, the geodesic-covering numbers of $F_T$ and $F_{\mathfrak{a}, T}$ for every cusp $\mathfrak{a}$ in $\Gamma$ are all finite. This is also true for the closure of $F_T$, i.e. $K_{\Gamma}(\overline{F}_T)<\infty$. \end{lemma} \begin{proof} We need to know the basic shape of a fundamental domain of an arbitrary Fuchsian group. A convenient choice for us is the \textit{Ford domain}, first introduced by L. R. Ford \cite{Ford}; it is known that the Ford domain is indeed a fundamental domain (see \cite{Katok}, Theorem 3.3.5). There are concrete methods to construct fundamental domains of Fuchsian groups; interested readers may check Voight \cite{Voight} for an algorithmic method, and Kulkarni \cite{Kul} for the construction of special polygons (also fundamental domains) for subgroups of the modular group using Farey symbols. Let $F_{\Gamma}$ be a fundamental domain of the cofinite group $\Gamma$ with a finite number of sides and a finite number of cusps.
We partition $F_{\Gamma}$ as in \eqref{eq-partition-cuspidal-central} for some $T$ to be chosen later, $$F_{\Gamma}=F_T\cup\bigcup_{\mathfrak{a} ~{\rm cusp}}F_{\mathfrak{a}, T}. $$ 1) First we show that, for some $T$, the geodesic-covering number of $F_{\mathfrak{a}, T}$ in $\Gamma$ is finite for every cusp $\mathfrak{a}$. In order to do this, we make use of Ford domains. For any cusp $\mathfrak{a}$, by \eqref{eq-conjugate-to-translation}, there exists $\sigma_{\mathfrak{a}}$ such that the stability group $\Gamma_{\mathfrak{a}}$ is generated by $$\sigma_{\mathfrak{a}}\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix}\sigma^{-1}_{\mathfrak{a}},$$ and a fundamental domain of $\sigma^{-1}_{\mathfrak{a}}\Gamma_{\mathfrak{a}}\sigma_{\mathfrak{a}}$ is \begin{equation} P:=\{ z=x+iy\in\mathbb{H}^2: 0\leq x< 1, y>0\}. \end{equation} Denote $\widetilde{\Gamma}^{\mathfrak{a}}:=\sigma^{-1}_{\mathfrak{a}}\Gamma\sigma_{\mathfrak{a}}$ and $$\widetilde{\Gamma}_{\infty}^{\mathfrak{a}}:=\sigma^{-1}_{\mathfrak{a}}\Gamma_{\mathfrak{a}}\sigma_{\mathfrak{a}}=\left\langle \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} \right\rangle.$$ By \eqref{infinite-part-of-fundamental-domain} and \eqref{eq-F-cusp-to-F-infty}, the geodesic-covering number of $F_{\mathfrak{a}, T}$ in $\Gamma$ is the same as the geodesic-covering number of $\sigma^{-1}_{\mathfrak{a}}(F_{\mathfrak{a}, T} )=P(T)$ in $\sigma^{-1}_{\mathfrak{a}}\Gamma\sigma_{\mathfrak{a}}$, i.e. \begin{equation}\label{eq-geo-cover-number-to-P} K_{\Gamma}(F_{\mathfrak{a}, T})=K_{\widetilde{\Gamma}^{\mathfrak{a}}}(P(T)).
\end{equation} We define a domain associated with the cusp $\mathfrak{a}$ as \begin{align}\label{eq-Ford-domain-cusp} \mathcal{D}_{\mathfrak{a}}:=& \{ z\in P: {\rm Im}(\gamma z) <{\rm Im} (z), \forall \gamma\in \widetilde{\Gamma}^{\mathfrak{a}}\setminus\widetilde{\Gamma}_{\infty}^{\mathfrak{a}} \}\nonumber\\ =&\{ z\in P: |cz+d|>1, \forall \begin{pmatrix} * & *\\ c & d \end{pmatrix}\in \widetilde{\Gamma}^{\mathfrak{a}}\setminus\widetilde{\Gamma}_{\infty}^{\mathfrak{a}} \}, \end{align} which is a \textit{Ford domain} and thus a fundamental domain of $\widetilde{\Gamma}^{\mathfrak{a}}$. Note that $\sigma_{\mathfrak{a}}^{-1}(F_{\Gamma})$ may not be the same as $\mathcal{D}_{\mathfrak{a}}$. We want to choose $T$ large enough that $P(T)\subset \mathcal{D}_{\mathfrak{a}}$ for all cusps $\mathfrak{a}$. Since $\mathcal{D}_{\mathfrak{a}}$ is a fundamental domain of $ \widetilde{\Gamma}^{\mathfrak{a}} $, the boundary of $\mathcal{D}_{\mathfrak{a}}$ consists of a finite number of arcs of \textit{isometric circles} of the form $|z+\frac{d}{c}|=\frac{1}{|c|}$ for some $$c\neq 0, \begin{pmatrix} * & *\\ c & d \end{pmatrix}\in \widetilde{\Gamma}^{\mathfrak{a}}.$$ Thus there is a largest radius among these isometric circles, say $\frac{1}{c_{\mathfrak{a}}}$; in fact (see \cite{Iwaniec}, \S 2.6) \begin{equation}\label{eq-min-c} c_{\mathfrak{a}}=\min\left\{ c>0: \begin{pmatrix} * & *\\ c & * \end{pmatrix}\in\widetilde{\Gamma}^{\mathfrak{a}}\setminus\widetilde{\Gamma}_{\infty}^{\mathfrak{a}} \right\}. \end{equation} Since the fundamental domain $F_{\Gamma}$ has only a finite number of cusps, we may choose any large enough $$T\geq 100+10\max_{\mathfrak{a} ~{\rm cusp}} \tfrac{1}{c_{\mathfrak{a}} },$$ so that $P(T)=\sigma^{-1}_{\mathfrak{a}}(F_{\mathfrak{a}, T} )\subset \mathcal{D}_{\mathfrak{a}}$ for every cusp $\mathfrak{a}$. For the above choice of $T$, we are ready to estimate $K_{\widetilde{\Gamma}^{\mathfrak{a}}}(P(T))$ for any $\mathfrak{a}$.
Consider the set \begin{equation} \mathcal{A}:=\big\{ \gamma\in\widetilde{\Gamma}^{\mathfrak{a}}: d_{\mathbb{H}^2}(z_1, \gamma z_2)\leq d_{\mathbb{H}^2}(z_1, z_2) ~\text{for some}~ z_1, z_2\in P(T) ~\text{with}~ {\rm Im}(z_1)\geq {\rm Im}(z_2) \big\}, \end{equation} which, by Definition \ref{defn-geodesic-covering-number}, is a geodesic cover of $P(T)$ in $\widetilde{\Gamma}^{\mathfrak{a}}$. For any two points $z_1=x_1+iy_1$ and $z_2=x_2+iy_2$ in $P(T)$ with $y_1\geq y_2$, the only possible isometries $\gamma$ from $$ \widetilde{\Gamma}_{\infty}^{\mathfrak{a}}=\left\langle \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} \right\rangle,$$ such that $d_{\mathbb{H}^2}(z_1, \gamma z_2)\leq d_{\mathbb{H}^2}(z_1, z_2)$ are those in \begin{equation} \mathcal{T}:=\left\{ \begin{pmatrix} 1 & -1\\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 &0\\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} \right\}. \end{equation} If $\gamma\in \widetilde{\Gamma}^{\mathfrak{a}}\setminus\widetilde{\Gamma}_{\infty}^{\mathfrak{a}} $, by the construction of $\mathcal{D}_{\mathfrak{a}}$ and \eqref{eq-min-c}, we have \begin{equation} {\rm Im}(\gamma z_2)=\frac{y_2}{ (cx_2+d)^2+c^2 y_2^2}\leq \frac{1}{c^2 y_2}\leq \frac{1}{c^2_{\mathfrak{a}} y_2}. \end{equation} Since $y_2\geq T\geq 100+\frac{10}{c_{\mathfrak{a}}}$, we deduce that \begin{equation} {\rm Im}(\gamma z_2)\leq \frac{1}{100 c^2_{\mathfrak{a}}+10c_{\mathfrak{a}}}<\frac{1}{10c_{\mathfrak{a}}}.
\end{equation} Denote $\gamma z_2=x_0+iy_0$ (so $y_0<\frac{1}{10c_{\mathfrak{a}}}$); then by the hyperbolic distance formula \begin{equation}\label{eq-hyperbolic-cosh-distance} 2\cosh(d_{\mathbb{H}^2}(z_1, z_2))=\frac{(x_1-x_2)^2+y_1^2+y_2^2}{y_1 y_2}, \end{equation} together with $|x_1-x_2|\leq 1$ and $y_1\geq y_2\geq T\geq 100+\frac{10}{c_{\mathfrak{a}}}$, we derive \begin{align} &2\cosh(d_{\mathbb{H}^2}(z_1, \gamma z_2)) -2\cosh(d_{\mathbb{H}^2}(z_1, z_2)) \nonumber\\ &=\frac{(x_1-x_0)^2+y_1^2+y_0^2}{y_1 y_0 }- \frac{(x_1-x_2)^2+y_1^2+y_2^2}{ y_1 y_2} \nonumber\\ &\geq \frac{y_1}{y_0}-\frac{1}{y_1 y_2}-\frac{y_1}{y_2}-\frac{y_2}{y_1} \nonumber\\ &\geq y_1\Big(\frac{1}{y_0}-\frac{1}{y_2}\Big)-\frac{1}{100^2}-1\nonumber\\ &\geq \frac{10}{c_{\mathfrak{a}}} \Big( 10c_{\mathfrak{a}}-\frac{c_{\mathfrak{a}}}{10} \Big)-2=99-2>0. \end{align} Hence we have $\mathcal{A}=\mathcal{T}$, and the geodesic-covering number of $P(T)$ in $\widetilde{\Gamma}^{\mathfrak{a}}$ is $\leq 3$. Since our choice of $T$ works for all cusps, by \eqref{eq-geo-cover-number-to-P} we conclude that the geodesic-covering number of $F_{\mathfrak{a}, T}$ in $\Gamma$ is finite for every cusp $\mathfrak{a}$; precisely, $K_{\Gamma}(F_{\mathfrak{a}, T})\leq 3$. 2) Now we bound the geodesic-covering number of the central part $F_T$ in $\Gamma$. Define the diameter of $F_T$ as $$\diam(F_T):=\max_{p, q\in F_T} d_{\mathbb{H}^2}(p, q).$$ Since $F_T$ is bounded, the diameter $\diam(F_T)$ is finite. Pick any point $O$ inside $F_T$ that is not fixed by any element of $\Gamma$ other than the identity; then the set $$\mathcal{B}:=\big\{ \gamma\in\Gamma: d_{\mathbb{H}^2}(O, \gamma(O))\leq 3\diam(F_T) \big\}$$ is a geodesic cover of $F_T$ in $\Gamma$.
Indeed, for any $\gamma\not\in \mathcal{B}$ and any two points $p, q\in F_T$, by the triangle inequality, we get \begin{align} d_{\mathbb{H}^2}(p, \gamma(q))&\geq d_{\mathbb{H}^2}(O, \gamma(O))-d_{\mathbb{H}^2}(p, O)-d_{\mathbb{H}^2}(\gamma(O), \gamma(q))\nonumber\\ &\geq 3\diam(F_T)-\diam(F_T)-\diam(F_T)=\diam(F_T)\geq d_{\mathbb{H}^2}(p, q). \end{align} Since a Fuchsian group $\Gamma$ acts properly discontinuously on $\mathbb{H}^2$, the $\Gamma$-orbit of any point is locally finite. Thus the set $\mathcal{B}$ is finite. Therefore, the geodesic-covering number of $F_T$ in $\Gamma$ is finite. The same proof also works for the closure of $F_T$. \end{proof} \begin{remark} Explicitly counting the cardinality of a set of the type $\mathcal{B}$ is the so-called \textit{hyperbolic circle problem}; see e.g. Lax-Phillips \cite{Lax-Phillips} and Phillips-Rudnick \cite{Phillips-Rudnick}. \end{remark} \section{Finite index subgroups of the modular group} In this section, we give the proof of Theorem \ref{thm-finite-index}. Let $\Gamma$ be a finite index subgroup of $\mathrm{PSL}(2, \mathbb{Z})$ with $[\mathrm{PSL}(2,\mathbb{Z}):\Gamma]=\mu$. Let $F$ be a fundamental domain of $\mathrm{PSL}(2, \mathbb{Z})$; we may choose $$F=\Big\{z\in\mathbb{H}^2: |\Re(z)|\leq \frac{1}{2}, |z|\geq 1 \Big\}.$$ If we have the right coset decomposition $$\mathrm{PSL}(2,\mathbb{Z})=\bigcup_{i=1}^{\mu} \Gamma \alpha_i,$$ then \begin{equation}\label{finite-index-fundamental-domain} F_{\Gamma}=\bigcup_{i=1}^{\mu} \alpha_i(F) \end{equation} is a fundamental domain of $\Gamma$. One can choose the coset representatives properly to obtain a simply connected fundamental domain of $\Gamma$ (see \cite{Schoen}, Chapter IV, Theorem 3).
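As a quick illustration of the coset decomposition, the index of a principal congruence subgroup can be computed by brute force: the reduction map $\mathrm{SL}(2,\mathbb{Z})\to\mathrm{SL}(2,\mathbb{Z}/N\mathbb{Z})$ is surjective, and since $-I\equiv I \bmod 2$, the index $[\mathrm{PSL}(2,\mathbb{Z}):\Gamma(2)]$ equals $|\mathrm{SL}(2,\mathbb{Z}/2\mathbb{Z})|$. The following sketch (an illustration, not from the paper) enumerates this order directly.

```python
from itertools import product

def sl2_order(n):
    """Order of SL(2, Z/nZ), enumerated by brute force over all matrices mod n."""
    return sum(1 for a, b, c, d in product(range(n), repeat=4)
               if (a * d - b * c) % n == 1)

# Reduction mod 2 identifies -I with I, so the index of Gamma(2) in
# PSL(2, Z) equals |SL(2, Z/2Z)| = 6, matching the six copies of F below.
index_gamma2 = sl2_order(2)
```

The same routine gives $|\mathrm{SL}(2,\mathbb{Z}/3\mathbb{Z})|=24$, consistent with the formula $p(p^2-1)$ for prime $p$.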
For example, for the principal congruence subgroup $$\Gamma(2)=\left\{ \gamma\in\mathrm{PSL}(2, \mathbb{Z}): \gamma\equiv \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} \bmod 2 \right\},$$ with index $[\mathrm{PSL}(2, \mathbb{Z}):\Gamma(2)]=6$, see Figure \ref{figure-Gamma2} (the arrows show the side pairings) for a fundamental domain of $\Gamma(2)$ and Figure \ref{figure-surface-Gamma2} for the shape of the surface $\Gamma(2)\backslash\mathbb{H}^2$. \begin{figure} \caption{Fundamental domain for $\Gamma(2) $} \label{figure-Gamma2} \end{figure} \begin{figure} \caption{Shape of surface $\Gamma(2)\backslash\mathbb{H}^2$} \label{figure-surface-Gamma2} \end{figure} For a set $\mathcal{S}$ of $N$ points on $Y\cong\Gamma\backslash\mathbb{H}^2$, we consider their representatives in a fundamental domain $F_\Gamma$ constructed from the right coset decomposition. Since $F_{\Gamma}$ is a union of $\mu$ copies of $F$, there exists an $\alpha_j$ such that $\alpha_j(F)$ contains $\geq N/\mu$ points of $\mathcal{S}$. Without loss of generality, we may assume $\alpha_j$ is the identity and still denote this copy by $F$. Otherwise, we just take $\alpha_j^{-1}(F_{\Gamma})$ as the fundamental domain, since $\alpha_j$ is an isometry of $\mathbb{H}^2$ and this transformation changes neither the distances nor the angles among the points we are considering. If we have a lower bound for distinct distances among these $\geq N/\mu$ points, it also gives a lower bound for distinct distances among all points of $\mathcal{S}$. We divide $F$ into two parts $F=F_{u}\cup F_o$ (see Figure \ref{figure-partition-fundamental-domain}) with \begin{equation}\label{one-fundamental-domain-parition} F_u:=\Big\{ z=x+iy\in\mathbb{H}^2: |x|\leq \frac{1}{2}, y\geq 2 \Big\} ~\text{and}~ F_o:=F\setminus F_u.
\end{equation} \begin{figure} \caption{Partition of the fundamental domain $F=F_u\cup F_o$} \label{figure-partition-fundamental-domain} \end{figure} We want to bound the geodesic-covering numbers of $F_u$ and $F_o$ in different base groups. We prove the following lemma. \begin{lemma}\label{lem-finite-index-geo-cover-number} For any subgroup $\Gamma$ of $\mathrm{PSL}(2, \mathbb{Z})$, the geodesic-covering numbers $K_{\Gamma}(F_u)$ and $K_{\Gamma}(F_o)$ are both bounded by absolute constants. Precisely: \begin{enumerate}[label=(\roman*)] \item\label{finite-index-cusp-part} The geodesic-covering number of $F_u$ in $\Gamma$ is $K_{\Gamma}(F_u)\leq 3$. \item\label{finite-index-central-part} The geodesic-covering number of $F_o$ in $\Gamma$ is $K_{\Gamma}(F_o)\leq 252$. \end{enumerate} \end{lemma} \begin{remark} The estimate in \ref{finite-index-central-part} could be improved by more careful calculation; we do not aim to optimize the constant here. The key point is that the geodesic-covering number of $F_o$ in any subgroup $\Gamma$ is absolutely bounded and thus independent of the index of $\Gamma$ in $\mathrm{PSL}(2, \mathbb{Z})$. One may also use $y\geq U$ in the definition of $F_u$, for any large enough $U$, to optimize the estimate of $K_{\Gamma}(F_o)$. \end{remark} Before giving the proof of Lemma \ref{lem-finite-index-geo-cover-number}, we first use it to prove Theorem \ref{thm-finite-index}. \begin{proof}[Proof of Theorem \ref{thm-finite-index}] Suppose $\Gamma$ is a subgroup of $\mathrm{PSL}(2, \mathbb{Z})$ of finite index $[\mathrm{PSL}(2,\mathbb{Z}):\Gamma]=\mu$.
Let $\mathcal{S}$ be a set of $N$ points on the hyperbolic surface $Y\cong\Gamma\backslash\mathbb{H}^2$, and define the distance set $$d_Y(\mathcal{S}):=\{ d_Y(p, q): p, q\in \mathcal{S} \}.$$ If $F$ is a fundamental domain of $\mathrm{PSL}(2, \mathbb{Z})$, by the fundamental domain of $\Gamma$ in the form \eqref{finite-index-fundamental-domain}, there exists some $j$ such that $\alpha_j(F)$ contains at least $N/\mu$ points. Since $\alpha_j$ is an isometry of $\mathbb{H}^2$, without loss of generality, we assume $\alpha_j(F)=F$ and let $\mathcal{S}_F$ be the set of these $\geq N/\mu$ points on it. We observe that \begin{equation}\label{eq-distance-surface-to-one-F} |d_Y(\mathcal{S})|\geq |d_Y( \mathcal{S}_F )|. \end{equation} We use Lemma \ref{lem-finite-index-geo-cover-number} to establish a lower bound for $ |d_Y(\mathcal{S}_F )|$ and hence derive a lower bound for $|d_Y(\mathcal{S})|$. We partition the region $F=F_u\cup F_o$ as in \eqref{one-fundamental-domain-parition}. Either $F_u$ or $F_o$ contains at least $\frac{1}{2}|\mathcal{S}_F|\geq N/(2\mu)$ points. \textbf{Case 1).} The region $F_u$ contains at least $\frac{1}{2}|\mathcal{S}_F|$ points. Let $\mathcal{S}_u:=\mathcal{S}_F\cap F_u$ be the set of points on $F_u$, and let $\Gamma_u$ be a geodesic cover of $F_u$ in $\Gamma$ with cardinality $K_{\Gamma}(F_u)$. Then we have \begin{align} Q_Y(\mathcal{S}_u )&:=\big\{(p_1, p_2; p_3, p_4)\in \mathcal{S}_u^4: d_Y(p_1, p_2)=d_Y(p_3, p_4)\neq 0 \big\}\nonumber\\ &~\subset Q_{\mathbb{H}^2}\big(\cup_{\gamma\in\Gamma_u} \gamma(\mathcal{S}_u) \big), \end{align} where $Q_{\mathbb{H}^2}(\mathcal{P})$ is defined in \eqref{eq-defn-QH}. By Lemma \ref{lem-finite-index-geo-cover-number} \ref{finite-index-cusp-part} and \eqref{eq-Tao-quaruple}, we derive \begin{equation} |Q_Y(\mathcal{S}_u)|\ll K_{\Gamma}^3(F_u) |\mathcal{S}_u|^3 \log(K_{\Gamma}(F_u)|\mathcal{S}_u|)\leq 27 |\mathcal{S}_u|^3 \log(3|\mathcal{S}_u|).
\end{equation} Consequently, by \eqref{Quadruple-to-distance} and \eqref{eq-distance-surface-to-one-F}, we get the lower bound \begin{equation} |d_Y(\mathcal{S})|\geq |d_Y(\mathcal{S}_u)|\gg \frac{|\mathcal{S}_u|}{ \log|\mathcal{S}_u| }, \end{equation} where the implied constant is absolute. Therefore, by the assumption $\frac{N}{2\mu}\leq |\mathcal{S}_u|\leq N$, we conclude that \begin{equation}\label{eq-distance-finite-index-cuspidal} |d_Y(\mathcal{S})|\geq C_1\frac{N}{\mu\log N} \end{equation} for some absolute constant $C_1>0$. \textbf{Case 2).} The region $F_o$ contains at least $\frac{1}{2}|\mathcal{S}_F|$ points. Let $\mathcal{S}_o:=\mathcal{S}_F\cap F_o$ and let $\Gamma_o$ be a geodesic cover of $F_o$ in $\Gamma$ with cardinality $K_{\Gamma}(F_o)$. By Lemma \ref{lem-finite-index-geo-cover-number} \ref{finite-index-central-part} and a similar argument as in \textbf{Case 1)}, we derive that \begin{equation} Q_Y(\mathcal{S}_o)\subset Q_{\mathbb{H}^2}\big( \cup_{\gamma\in\Gamma_o}\gamma(\mathcal{S}_o) \big) \end{equation} and thus \begin{equation} |Q_Y(\mathcal{S}_o)|\ll K_{\Gamma}^3(F_o) |\mathcal{S}_o|^3 \log(K_{\Gamma}(F_o)|\mathcal{S}_o|)\leq 252^3 |\mathcal{S}_o|^3 \log( 252 |\mathcal{S}_o|). \end{equation} Again by \eqref{Quadruple-to-distance} and the assumption $\frac{N}{2\mu}\leq |\mathcal{S}_o|\leq N$, we conclude that \begin{equation}\label{eq-distance-finite-index-central} |d_Y(\mathcal{S})|\geq |d_Y(\mathcal{S}_o)|\geq C_2\frac{N}{\mu\log N} \end{equation} for some absolute constant $C_2>0$. Finally, combining \eqref{eq-distance-finite-index-cuspidal} and \eqref{eq-distance-finite-index-central} and taking $C=\min\{C_1, C_2\}$, we get the desired lower bound for distinct distances on hyperbolic surfaces associated with any finite index subgroup of $\mathrm{PSL}(2, \mathbb{Z})$, \begin{equation} |d_Y(\mathcal{S})|\geq C\frac{N}{\mu\log N} \end{equation} for some absolute constant $C>0$.
\end{proof} In the following we prove Lemma \ref{lem-finite-index-geo-cover-number}. \begin{proof}[Proof of \ref{finite-index-cusp-part} in Lemma \ref{lem-finite-index-geo-cover-number}] Recall that $F_u$ is the region $$\Big\{ z=x+iy\in\mathbb{H}^2: |x|\leq \frac{1}{2}, y\geq 2 \Big\}.$$ We consider the set \begin{equation}\label{eq-set-smaller-distance} \mathcal{A}:=\{\gamma\in \mathrm{PSL}(2, \mathbb{Z}): d_{\mathbb{H}^2}(z_1, \gamma z_2)\leq d_{\mathbb{H}^2}(z_1, z_2) ~\text{for some}~ z_1, z_2\in F_u ~\text{with}~ {\rm Im}(z_1)\geq {\rm Im}(z_2) \}, \end{equation} which is a geodesic cover of $F_u$ in $\mathrm{PSL}(2, \mathbb{Z})$ by Definition \ref{defn-geodesic-covering-number}. For any subgroup $\Gamma$ of $\mathrm{PSL}(2, \mathbb{Z})$, the set $\mathcal{A}\cap\Gamma$ is a geodesic cover of $F_u$ in $\Gamma$. For any two points $z_1=x_1+iy_1$ and $z_2=x_2+iy_2$ in $F_u$ with $y_1\geq y_2\geq 2$, and $$\gamma=\begin{pmatrix} a&b\\ c&d \end{pmatrix}\in \mathrm{PSL}(2,\mathbb{Z}),$$ the imaginary part of $\gamma(z_2)$ can be written as $$ \frac{y_2}{|cz_2+d|^2}=\frac{y_2}{(cx_2+d)^2+c^2 y_2^2 }.$$ If $c=0$, then $a=d=1$ and the isometry $\gamma$ is a translation of the form $$\gamma=\begin{pmatrix} 1&b\\ 0&1 \end{pmatrix}\in \mathrm{PSL}(2, \mathbb{Z}),$$ for some $b\in\mathbb{Z}$. The only possible choices of such $\gamma$ for which $d_{\mathbb{H}^2}(z_1, \gamma z_2)\leq d_{\mathbb{H}^2}(z_1, z_2)$ are from the set \begin{equation}\label{eq-translation-cover} \mathcal{T}=\left\{ \begin{pmatrix} 1 & -1\\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 &0\\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} \right\}. \end{equation} If $c\neq 0$, then $|c|\geq 1$ and thus $${\rm Im}(\gamma z_2)\leq \frac{1}{y_2}\leq \frac{1}{2}.$$ Denote $\gamma(z_2)=x_0+iy_0$; then $y_0\leq \frac{1}{2}$.
By the hyperbolic distance formula \eqref{eq-hyperbolic-cosh-distance}, together with the facts that $y_1\geq y_2\geq 2$ and $|x_1-x_2|\leq 1$, we get \begin{align} &2\cosh(d_{\mathbb{H}^2}(z_1, \gamma z_2)) -2\cosh(d_{\mathbb{H}^2}(z_1, z_2)) \nonumber\\ &=\frac{(x_1-x_0)^2+y_1^2+y_0^2}{y_1 y_0 }- \frac{(x_1-x_2)^2+y_1^2+y_2^2}{ y_1 y_2} \nonumber\\ &\geq \frac{y_1}{y_0}-\frac{1}{y_1 y_2}-\frac{y_1}{y_2}-\frac{y_2}{y_1} \nonumber\\ &\geq 2y_1-\frac{1}{4}-\frac{y_1}{2}-1\geq \frac{7}{4}>0. \end{align} Thus for any $\gamma\in \mathrm{PSL}(2, \mathbb{Z})$ with $c\neq 0$, we always have $d_{\mathbb{H}^2}(z_1, \gamma z_2)>d_{\mathbb{H}^2}(z_1, z_2)$. Hence $\mathcal{A}=\mathcal{T}$. For $\Gamma$ being any subgroup of $\mathrm{PSL}(2, \mathbb{Z})$, the elements $\gamma'\in\Gamma$ such that $$d_{\mathbb{H}^2}(z_1, \gamma' z_2)\leq d_{\mathbb{H}^2}(z_1, z_2) ~\text{ with }z_1, z_2\in F_u, {\rm Im}(z_1)\geq {\rm Im}(z_2)$$ are also from the set $\mathcal{A}=\mathcal{T}$ in \eqref{eq-set-smaller-distance} and \eqref{eq-translation-cover}. Therefore, by Definition \ref{defn-geodesic-covering-number}, the set $\mathcal{T}\cap \Gamma$ is a geodesic cover of $F_u$ in $\Gamma$. (Note that $\mathcal{T}\cap\Gamma$ always contains the identity.) We conclude that the geodesic-covering number of $F_u$ in any subgroup $\Gamma$ of $\mathrm{PSL}(2, \mathbb{Z})$ is $K_{\Gamma}(F_u)\leq 3$. 
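As an aside, the case analysis above lends itself to a quick numerical sanity check. The following sketch is an illustration only (the sample points and the matrix range are our own arbitrary choices, not part of the proof); it uses the standard identity $\cosh d_{\mathbb{H}^2}(z,w)=1+|z-w|^{2}/(2\,{\rm Im}(z)\,{\rm Im}(w))$, which is equivalent to the formula employed in the proof, to confirm that every $\gamma$ with $c\neq 0$ strictly increases the distance, and that translations with $|b|\geq 2$ do so as well.

```python
# Numerical sanity check (illustration only) of the case analysis above.
# We use the identity cosh d(z, w) = 1 + |z - w|^2 / (2 Im z Im w).
import itertools

def cosh_dist(z, w):
    return 1.0 + abs(z - w) ** 2 / (2.0 * z.imag * w.imag)

def mobius(a, b, c, d, z):
    return (a * z + b) / (c * z + d)

# sample pairs z1, z2 in F_u = {|x| <= 1/2, y >= 2} with Im z1 >= Im z2
samples = [(-0.5 + 3j, 0.25 + 2j), (0.5 + 10j, -0.3 + 2.5j), (0.4 + 2.2j, -0.4 + 2j)]

# integer matrices in SL(2, Z) with c != 0 and small entries
mats = [(a, b, c, d) for a, b, c, d in itertools.product(range(-3, 4), repeat=4)
        if a * d - b * c == 1 and c != 0]

for z1, z2 in samples:
    base = cosh_dist(z1, z2)
    # Case c != 0: the distance strictly increases (gap >= 7/8 in cosh)
    for a, b, c, d in mats:
        assert cosh_dist(z1, mobius(a, b, c, d, z2)) > base
    # Case c = 0: translations with |b| >= 2 also increase the distance,
    # so only b in {-1, 0, 1} (the set T) can realize a shorter distance
    for b in range(-5, 6):
        if abs(b) >= 2:
            assert cosh_dist(z1, z2 + b) > base
```

All assertions pass on these samples, consistently with the strict gap $\geq 7/4$ obtained for $2\cosh$ in the displayed estimate.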
\end{proof} \begin{proof}[Proof of \ref{finite-index-central-part} in Lemma \ref{lem-finite-index-geo-cover-number}] Now we deal with the bounded part $$F_o=\Big\{z=x+iy\in\mathbb{H}^2: |x|\leq \frac{1}{2},\ |z|\geq 1,\ 0<y<2 \Big\}.$$ We estimate the diameter of $F_o$, \begin{align}\label{eq-Fo-diameter} \cosh(\diam(F_o))&=\cosh\big(\max_{z_1, z_2\in F_o} d_{\mathbb{H}^2}(z_1, z_2)\big)\nonumber\\ &\leq \cosh\bigg(d_{\mathbb{H}^2}\Big(\frac{-1+\sqrt{3}i}{2}, \frac{1}{2}+2i\Big) \bigg)=\frac{23\sqrt{3}}{24}=1.6598\ldots \end{align} Denote $r_0:=\max_{z\in F_o}d_{\mathbb{H}^2}(2i, z)$; then \begin{equation}\label{eq-Fo-r0} \cosh(r_0)=\cosh\bigg(d_{\mathbb{H}^2}\Big( 2i, \frac{1+\sqrt{3}i}{2} \Big) \bigg)=\frac{5\sqrt{3}}{6}=1.4433\ldots \end{equation} The point $2i$ is not fixed by any element in $\mathrm{PSL}(2, \mathbb{Z})$ except the identity. By definition, the set \begin{equation} \Gamma_o:=\{\gamma\in \mathrm{PSL}(2, \mathbb{Z}): d_{\mathbb{H}^2}(2i, \gamma(2i))\leq \diam(F_o)+2r_0 \} \end{equation} is a geodesic cover of $F_o$ in $\mathrm{PSL}(2, \mathbb{Z})$. In fact, for any $\gamma\in\mathrm{PSL}(2, \mathbb{Z})$ not in $\Gamma_o$, we have \begin{equation}\label{eq-Fo-distance-increase} d_{\mathbb{H}^2}(z_1, \gamma z_2)\geq \diam(F_o)\geq d_{\mathbb{H}^2}(z_1, z_2), \quad\forall z_1, z_2\in F_o. \end{equation} Now we estimate the size of $\Gamma_o$. The set $\{\gamma(F_o): \gamma\in \Gamma_o \}$ is contained in the disc $\mathcal{D}(2i, R)$ centered at $2i$ of radius $R=\diam(F_o)+3r_0$. Thus, \begin{equation} |\Gamma_o|\cdot {\rm Area}(F_o)={\rm Area}\Big(\bigcup_{\gamma\in\Gamma_o}\gamma(F_o)\Big)\leq {\rm Area}(\mathcal{D}(2i, R)). 
\end{equation} Since the area of the fundamental domain $F$ is $\pi/3$ and the area of $F_u$ is $1/2$, we derive that $${\rm Area}(F_o)=\frac{\pi}{3}-\frac{1}{2}=0.5471\ldots$$ By the hyperbolic area formula, \eqref{eq-Fo-diameter} and \eqref{eq-Fo-r0}, we get $${\rm Area}(\mathcal{D}(2i, R))=2\pi(\cosh(R)-1)=\frac{\pi}{36}(848 +11\sqrt{4381})=137.5389\ldots $$ Hence, \begin{equation} |\Gamma_o|\leq \frac{{\rm Area}(\mathcal{D}(2i, R))}{{\rm Area}(F_o)} \leq 252. \end{equation} For any subgroup $\Gamma$ of $\mathrm{PSL}(2, \mathbb{Z})$, by \eqref{eq-Fo-distance-increase}, we see that the set $\Gamma_o\cap\Gamma$ is a geodesic cover of $F_o$ in $\Gamma$, and immediately we have $K_{\Gamma}(F_o)\leq |\Gamma_o|\leq 252$. \end{proof} \end{document}
\begin{document} \begin{frontmatter} \title{\textbf{\textsf{Fuzzy ample spectrum contractions in (more general than) non-Archimedean fuzzy metric spaces}}} \author[Antonio]{Antonio Francisco Rold\'{a}n L\'{o}pez de Hierro\corref{cor1}} \ead{aroldan@ugr.es} \author[Erdal,Erdal2]{Erdal Karap{\i}nar} \ead{erdalkarapinar@yahoo.com, karapinar@mail.cmuh.org.tw} \author[Naseer]{Naseer Shahzad} \ead{nshahzad@kau.edu.sa} \cortext[cor1]{Corresponding author: Antonio Francisco Rold\'{a}n L\'{o}pez de Hierro, aroldan@ugr.es\\ \hspace*{5mm} 2010 Mathematics Subject Classification: 47H10, 47H09, 54H25, 46T99.} \address[Antonio]{Department of Statistics and Operations Research, University of Granada, Granada, Spain.} \address[Erdal]{Department of Medical Research, China Medical University, Taichung 40402, Taiwan.} \address[Erdal2]{Department of Mathematics, \c{C}ankaya University, 06790, Etimesgut, Ankara, Turkey.} \address[Naseer]{Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O.B. 80203, Jeddah 21589, Saudi Arabia.} \date{October 22$^{nd}$, 2020} \begin{abstract} Taking into account that Rold\'{a}n \emph{et al.}'s \emph{ample spectrum contractions} have managed to extend and unify more than ten distinct families of contractive mappings in the setting of metric spaces, in this manuscript we present a first study on how such a concept can be implemented in the more general framework of fuzzy metric spaces in the sense of Kramosil and Mich\'{a}lek. We introduce two distinct approaches to the concept of \emph{fuzzy ample spectrum contractions} and we prove general results about existence and uniqueness of fixed points. 
The proposed notions enjoy the following advantages with respect to previous approaches: (1) they make it possible to develop fixed point theory in a very general fuzzy framework (for instance, the underlying fuzzy space is not necessarily complete); (2) the procedures that we employ are able to overcome the technical drawbacks arising in fuzzy metric spaces in the sense of Kramosil and Mich\'{a}lek (which do not appear in fuzzy metric spaces in the sense of George and Veeramani); (3) we introduce a novel property about sequences that are not Cauchy in a fuzzy space in order to consider a more general context than non-Archimedean fuzzy metric spaces; (4) the contractivity condition associated to \emph{fuzzy ample spectrum contractions} does not have to be expressed in separate variables; (5) such fuzzy contractions generalize some well-known families of fuzzy contractive operators such as the class of all Mihe\c{t}'s fuzzy $\psi$-contractions. As a consequence of these properties, this study gives a positive partial answer to a question posed by Mihe\c{t} some years ago. \end{abstract} \begin{keyword} Ample spectrum contraction \sep Fuzzy metric space \sep Fixed point \sep Property $\mathcal{NC}$ \sep $(T,\mathcal{S})$-sequence \end{keyword} \end{frontmatter} \section{\textbf{Introduction}} \emph{Fixed point theory} has become, in recent years, a very flourishing branch of Nonlinear Analysis due, especially, to its ability to find solutions of nonlinear equations. After Banach's pioneering definition, a multitude of results have appeared that generalize his famous contractive mapping principle. Some extensions generalize the contractive condition (see, e.g., \cite{AyKaRo}) and others focus their efforts on the metric characteristics of the underlying space. 
Fixed point theory has been successfully applied in many fields, especially in Nonlinear Analysis, where it constitutes a major instrument for establishing the existence and the uniqueness of solutions of several classes of equations such as functional equations, matrix equations \cite{AsMo,Berzig2012}, integral equations \cite{AyNaSaYa,HaLoSa,SaHuShFaRa}, nonlinear systems \cite{AAKR}, etc. In \cite{RoSh-ample1} Rold\'{a}n L\'{o}pez de Hierro and Shahzad introduced a new family of contractions whose main characteristic was its capacity to extend and unify several classes of well-known previous contractive mappings in the setting of metric spaces: (1) Banach's contractions; (2) manageable contractions \cite{DuKh}; (3) Farshid \emph{et al.}'s $\mathcal{Z}$-contractions involving \emph{simulation functions} \cite{KhShRa,RoKaMa}; (4) Geraghty's contractions \cite{Geraghty}; (5) Meir and Keeler's contractions \cite{MeKe,Lim}; (6) $R$-contractions \cite{RoSh1}; (7) $(R,\mathcal{S})$-contractions \cite{RoSh2}; (8) $(A,\mathcal{S})$-contractions \cite{ShRoFa1}; (9) Samet \emph{et al.}'s contractions \cite{SaVeVe}; (10) Shahzad \emph{et al.}'s contractions \cite{ShKaRo}; (11) Wardowski's $F$-contractions \cite{War}; etc. In a broad (or even philosophical) sense, the conditions that define a Rold\'{a}n \emph{et al.}'s ample spectrum contraction should rather be interpreted as the minimal set of properties that any contraction must satisfy. Therefore, from our point of view, any other approach to contractive mappings in the field of fixed point theory could take them into account. One of the main settings in which most work has been done to extend contractive mappings in metric spaces is the context of fuzzy metric spaces (see, e.g., \cite{AlMi,GV,Grabiec,GrSa,Mi3,RoKaManro,FA-spaces}). 
These abstract spaces are able to provide an appropriate point of view for determining how similar or distinct two imprecise quantities are, which gives added value to the real analytical techniques that are capable of being extended to the fuzzy setting. In fact, the category of fuzzy metric spaces is so rich and broad that, as we shall see, the notion of fuzzy ample spectrum contraction does not directly come from its corresponding real counterpart. As a consequence of this great variety of examples, fuzzy metrics are increasingly appreciated in the scientific field in general due to the enormous popularity and importance of the results that are currently being obtained when working with fuzzy sets instead of classical sets, especially in Computation and Artificial Intelligence. In this work we give a first introduction on how ample spectrum contractions can be extended to fuzzy metric spaces. We will consider fuzzy metric spaces in the sense of Kramosil and Mich\'{a}lek \cite{KrMi}, which are more general than fuzzy metric spaces in the sense of George and Veeramani \cite{GV}. Undoubtedly, the second kind of fuzzy metric spaces is easier to handle than the first one because, in the second case, the fuzzy metric only takes strictly positive values. Kramosil and Mich\'{a}lek's fuzzy spaces are so general that they include the case in which the distance between two points is infinite, that is, its fuzzy metric is constantly zero (see \cite{MRR3}). This fact greatly complicates the proofs of many fixed point theorems that could easily be carried out in the context of George and Veeramani's fuzzy spaces (which has been a significant challenge). In our study, we have tried to be as faithful as possible to the original definition of ample spectrum contraction in metric spaces. This could mislead the reader to the wrong idea that fuzzy ample spectrum contractions are relatively similar to ample spectrum contractions in metric spaces. 
This is a completely false statement: fuzzy ample spectrum contractions are so singular that, for the moment, these authors have not been able to demonstrate that each ample spectrum contraction in a metric space can be seen as a fuzzy ample spectrum contraction in a fuzzy metric space (we have only proved it under one additional condition). The difficulty appears when, in a fuzzy metric space, we try to define a mapping that could extend the mapping $\varrho$ of the ample spectrum contraction. In fact, as we commented above, the wide variety of distinct classes of fuzzy metrics, and the way in which the fuzzy distance between two points is represented by a distance distribution function, greatly complicates the task of studying some possible relationship between ample spectrum contractions in real metric spaces and in fuzzy metric spaces. The presented fuzzy contractivity condition makes use of only two metric terms: the fuzzy distance between two distinct points, $M(x,y,t)$, and the fuzzy distance between their images, $M(Tx,Ty,t)$, under the self-mapping $T$. Traditionally, these two terms have played separate roles in classical fuzzy and non-fuzzy contractivity conditions: for instance, they usually appear on distinct sides of the inequality. However, inspired by Farshid \emph{et al.}'s $\mathcal{Z}$-contractions \cite{KhShRa,RoKaMa} (based on \emph{simulation functions}), the contractivity condition associated to \emph{fuzzy ample spectrum contractions} is not necessarily a constraint on separate variables. As a consequence of this generalization, it is easy to check that the canonical examples motivating fixed point theory, that is, Banach's contractive mappings, are particular cases of \emph{fuzzy ample spectrum contractions}. 
Furthermore, we illustrate how Mihe\c{t}'s fuzzy $\psi$-contractions \cite{Mi3} can also be seen as \emph{fuzzy ample spectrum contractions}, and why this last class of fuzzy contractions is not the best choice to check that Banach's contractions in metric spaces are fuzzy contractions. In this paper we introduce two kinds of fuzzy ample spectrum contractions. The first one is more strict, but it allows us to demonstrate some fixed point results when the contractive condition does not satisfy additional constraints. The second definition is more general but it forces us to handle a concrete subfamily of contractivity conditions. We prove some distinct results about existence and uniqueness of fixed points associated to each class of fuzzy contractions. Our results generalize some previous fixed point theorems introduced, on the one hand, by Mihe\c{t} and, on the other hand, by Altun and Mihe\c{t}. In this way, this study gives a positive partial answer to a question posed by Mihe\c{t} in \cite{Mi3} some years ago. In that paper, the author wondered whether some fixed point theorems involving fuzzy $\psi$-contractions could also hold in fuzzy metric spaces that did not necessarily satisfy the non-Archimedean property. To face this challenge, we have introduced a novel assumption that we have called \emph{property} $\mathcal{NC}$ (\textquotedblleft non-Cauchy\textquotedblright). Such a condition establishes a very concrete behavior for asymptotically regular sequences that do not satisfy the Cauchy condition: in such cases, the fuzzy distance between two partial subsequences and their corresponding predecessors can be controlled in terms of convergence. Obviously, our study is a proper generalization because each non-Archimedean fuzzy metric space satisfies such a property even if the associated triangular norm is only continuous at the boundary of the unit square (but not necessarily in the interior of the unit square). 
\vspace*{3mm} \section{\textbf{Preliminaries}} For an optimal understanding of this paper, we introduce here some basic concepts and notations that could also be found in \cite{G-book,RoSh-ample1,ShRoFa1}. Throughout this manuscript, let $\mathbb{R}$ be the set of all real numbers, let $\mathbb{I}$ be the real compact interval $\left[ 0,1\right] $, let $\mathbb{N}=\{1,2,3,\ldots\}$ denote the set of all positive integers and let $\mathbb{N}_{0}=\mathbb{N} \cup\{0\}$. Henceforth, $X$ will denote a non-empty set. A {\emph{binary relation on }$X$} is a non-empty subset $\mathcal{S}$ of the Cartesian product space $X\times X$. The notation $x\mathcal{S}y$ means that $(x,y)\in \mathcal{S}$. We write $x\mathcal{S}^{\mathcal{\ast}}y$ when $x\mathcal{S}y$ and $x\neq y$. Hence $\mathcal{S}^{\mathcal{\ast}}$ is another binary relation on $X$ (if it is non-empty). Two points $x$ and $y$ are $\mathcal{S} $\emph{-comparable} if $x\mathcal{S}y$ or $y\mathcal{S}x$. We say that $\mathcal{S}$ is \emph{transitive} if we can deduce $x\mathcal{S}z$ from $x\mathcal{S}y$ and $y\mathcal{S}z$. The \emph{trivial binary relation} $\mathcal{S}_{X}$ on $X$ is defined by $x\mathcal{S}_{X}y$ for each $x,y\in X$. From now on, let $T:X\rightarrow X$ be a map from $X$ into itself. We say that $T$ is {$\mathcal{S}$\emph{-nondecreasing}} if $Tx\mathcal{S}Ty$ for each $x,y\in X$ such that $x\mathcal{S}y$. If a point $x\in X$ verifies $Tx=x$, then $x$ is a \emph{fixed point of }$T$. We denote by $\operatorname*{Fix}(T)$ the set of all fixed points of $T$. A sequence $\{x_{n}\}_{n\in\mathbb{N}_{0}}$ is called a {\emph{Picard sequence of }$T$} \emph{based on }$x_{0}\in X$ if $x_{n+1}=Tx_{n}$ for all $n\in\mathbb{N}_{0}$. Notice that, in such a case, $x_{n}=T^{n}x_{0}$ for each $n\in\mathbb{N}_{0}$, where $\{T^{n}:X\rightarrow X\}_{n\in\mathbb{N}_{0}}$ are the \emph{iterates of }$T$ defined by $T^{0}=\,$identity, $T^{1}=T$ and $T^{n+1}=T\circ T^{n}$ for all $n\geq1$. 
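These notions can be illustrated with a minimal sketch (the mapping $T$ below is a hypothetical example of our own, not taken from the paper): the self-mapping $Tx=x/2+1$ of the real line satisfies $|Tx-Ty|=|x-y|/2$, its only fixed point is $x=2$, and every Picard sequence of $T$ converges to it.

```python
# Minimal illustration (hypothetical example): Picard sequences and fixed
# points for the contraction T x = x/2 + 1 on X = R, whose unique fixed
# point is x = 2 (indeed T(2) = 2, and |Tx - Ty| = |x - y| / 2).

def T(x):
    return x / 2.0 + 1.0

def picard_sequence(T, x0, n_terms):
    """Return the first n_terms iterates [x0, T x0, T^2 x0, ...]."""
    seq = [x0]
    for _ in range(n_terms - 1):
        seq.append(T(seq[-1]))
    return seq

assert T(2.0) == 2.0                  # 2 belongs to Fix(T)
seq = picard_sequence(T, x0=10.0, n_terms=40)
assert abs(seq[-1] - 2.0) < 1e-9      # the iterates T^n x0 approach 2
```

Here $x_{n}-2=(x_{0}-2)/2^{n}$, so the convergence is geometric, as expected for a Banach contraction.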
Following \cite{RoShJNSA}, a sequence $\left\{ x_{n}\right\} $ in $X$ is \emph{infinite} if $x_{n}\neq x_{m}$ for all $n\neq m$, and $\left\{ x_{n}\right\} $ is \emph{almost periodic} if there exist $n_{0} ,N\in\mathbb{N}$ such that \[ x_{n_{0}+k+Np}=x_{n_{0}+k}\quad\text{for all }p\in\mathbb{N}\text{ and all }k\in\left\{ 0,1,2,\ldots,N-1\right\} . \] \begin{proposition} \label{K38 - 22 lem infinite or almost periodic}\textrm{(\cite{RoShJNSA}, Proposition 2.3)} Every Picard sequence is either infinite or almost periodic. \end{proposition} \subsection{\textbf{Ample spectrum contractions in metric spaces}} We describe here the notion of \emph{ample spectrum contraction} in the context of a metric space. Such concept involves a key kind of sequences of real numbers that must be highlighted. Let $(X,d)$ be a metric space, let $\mathcal{S}$ be a binary relation on $X$, let $T:X\rightarrow X$ be a self-mapping and let $\varrho:A\times A\rightarrow\mathbb{R}$ be a function where $A\subseteq\mathbb{R}$ is a non-empty subset of real numbers. \begin{definition} \label{K38 - 02 def TS sequence in MS}\textrm{(\cite{RoSh-ample1}, Definition 3)} Let $\{a_{n}\}$ and $\{b_{n}\}$ be two sequences of real numbers. We say that $\{\left( a_{n},b_{n}\right) \}$ is a {$(T,\mathcal{S}^{\ast} )$\emph{-sequence}} if there exist two sequences $\{x_{n}\},\{y_{n}\}\subseteq X$ such that \[ x_{n}\mathcal{S}^{\ast}y_{n},\quad Tx_{n}\mathcal{S}^{\ast}Ty_{n},\quad a_{n}=d(Tx_{n},Ty_{n})\quad\text{and}\quad b_{n}=d(x_{n},y_{n})\quad\text{for all }n\in\mathbb{N}_{0}. \] \end{definition} The previous class of sequences plays a crucial role in the third condition of the following definition. \begin{definition} \label{K38 - 01 def ample spectrum contraction MS}\textrm{(\cite{RoSh-ample1}, Definition 4)} We will say that $T:X\rightarrow X$ is an \emph{ample spectrum contraction} w.r.t. $\mathcal{S}$ and $\varrho$ if the following four conditions are fulfilled. 
\begin{description} \item[$\left( \mathcal{B}_{1}\right) $] $A$ is nonempty and $\left\{ \,d\left( x,y\right) \in\left[ 0,\infty\right) :x,y\in X,~x\mathcal{S} ^{\ast}y\,\right\} \subseteq A$. \item[$\left( \mathcal{B}_{2}\right) $] If $\{x_{n}\}\subseteq X$ is a Picard $\mathcal{S}$-nondecreasing sequence of $T$ such that \[ x_{n}\neq x_{n+1}\quad\text{and}\quad\varrho\left( d\left( x_{n+1} ,x_{n+2}\right) ,d\left( x_{n},x_{n+1}\right) \right) \geq0\quad\text{for all }n\in\mathbb{N}_{0}, \] then $\{d\left( x_{n},x_{n+1}\right) \}\rightarrow0$. \item[$\left( \mathcal{B}_{3}\right) $] If $\{\left( a_{n},b_{n}\right) \}\subseteq A\times A$ is a $(T,\mathcal{S}^{\ast})$-sequence such that $\{a_{n}\}$ and $\{b_{n}\}$ converge to the same limit $L\geq0$ and verifying that $L<a_{n}$ and $\varrho(a_{n},b_{n})\geq0$ for all $n\in\mathbb{N}_{0}$, then $L=0$. \item[$\left( \mathcal{B}_{4}\right) $] {$\varrho\left( d(Tx,Ty),d(x,y)\right) \geq0\quad$for all $x,y\in X\quad$such that $x\mathcal{S}^{\ast}y$ and $Tx\mathcal{S}^{\ast}Ty$.} \end{description} \end{definition} In some cases, we will also consider the following two auxiliary properties. \begin{description} \item[$\left( \mathcal{B}_{2}^{\prime}\right) $] If $x_{1},x_{2}\in X$ are two points such that \[ T^{n}x_{1}\mathcal{S}^{\ast}T^{n}x_{2}\quad\text{and}\quad{\varrho({d}\left( T^{n+1}x_{1},T^{n+1}x_{2}\right) ,{d}\left( T^{n}x_{1},T^{n}x_{2}\right) )\geq0}\quad\text{for all }n\in\mathbb{N}_{0}, \] then $\{{d}\left( T^{n}x_{1},T^{n}x_{2}\right) \}\rightarrow0$. \item[$\left( \mathcal{B}_{5}\right) $] If $\{\left( a_{n},b_{n}\right) \}$ is a $(T,\mathcal{S}^{\ast})$-sequence such that $\{b_{n}\}\rightarrow0$ and $\varrho(a_{n},b_{n})\geq0$ for all $n\in\mathbb{N}_{0}$, then $\{a_{n}\}\rightarrow0$. 
\end{description} Rold\'{a}n L\'{o}pez de Hierro and Shahzad demonstrated that, under very weak conditions, these contractions have a fixed point and, if we assume other constraints, then such fixed point is unique (see \cite{RoSh-ample1}). After that, the same authors and Karap\i nar were able to extend their study to Branciari distance spaces. \subsection{\textbf{Triangular norms}} A \emph{triangular norm} \cite{ScSk}\ (for short, a \emph{t-norm}) is a function $\ast:\mathbb{I}\times\mathbb{I}\rightarrow\mathbb{I}$ satisfying the following properties: it is associative, commutative and non-decreasing in each argument, and it has $1$ as unit (that is, $t\ast1=t$ for all $t\in\mathbb{I}$). It is usual that authors consider continuous t-norms in their studies. A t-norm $\ast$ is \emph{positive} if $t\ast s>0$ for all $t,s\in\left( 0,1\right] $. Given two t-norms $\ast$ and $\ast^{\prime}$, we will write $\ast\leq \ast^{\prime}$ when $t\ast s\leq t\ast^{\prime}s$ for all $t,s\in\mathbb{I}$. Examples of t-norms are the following ones: \[ \begin{tabular} [c]{ll} $\text{Product }\ast_{P}:$ & $t\ast_{P}s=t\,s$\\ $\text{\L ukasiewicz }\ast_{L}:$ & $t\ast_{L}s=\max\{0,t+s-1\}$\\ $\text{Minimum }\ast_{m}:$ & $t\ast_{m}s=\min\{t,s\}$\\ $\text{Drastic }\ast_{D}:$ & $t\ast_{D}s=\left\{ \begin{tabular} [c]{ll} $0,$ & if $t<1$ and $s<1,$\\ $\min\{t,s\},$ & if $t=1$ or $s=1.$ \end{tabular} \right. $ \end{tabular} \] If $\ast$ is an arbitrary t-norm, then $\ast_{D}\leq\ast\leq\ast_{m}$, that is, the drastic t-norm is the absolute minimum and the minimum t-norm is the absolute maximum among the family of all t-norms (see \cite{t-norm}). \begin{definition} We will say that a t-norm $\ast$ is \emph{continuous at the }$1$\emph{-boundary} if it is continuous at each point of the type $\left( 1,s\right) $ where $s\in\mathbb{I}$ (that is, if $\{t_{n}\}\rightarrow1$ and $\{s_{n}\}\rightarrow s$, then $\{t_{n}\ast s_{n}\}\rightarrow1\ast s=s$). 
\end{definition} Obviously, each continuous t-norm is continuous at the $1$-boundary. \begin{proposition} \label{K38 - 18 propo cancellation}Let $\{a_{n}\},\{b_{n}\},\{c_{n}\},\{d_{n}\},\{e_{n}\}\subseteq\mathbb{I}$ be five sequences and let $L\in\mathbb{I}$ be a number such that $\{a_{n}\}\rightarrow L$, $\{b_{n}\}\rightarrow1$, $\{d_{n}\}\rightarrow1$ and $\{e_{n}\}\rightarrow L$. Suppose that $\ast$ is a $t$-norm which is continuous at the $1$-boundary and that \[ a_{n}\geq b_{n}\ast c_{n}\ast d_{n}\geq e_{n}\quad\text{for all } n\in\mathbb{N}. \] Then $\{c_{n}\}$ converges to $L$. \end{proposition} \begin{proof} As $\{c_{n}\}\subseteq\mathbb{I}=\left[ 0,1\right] $ is bounded, it has a convergent partial subsequence. Let $\{c_{\sigma(n)}\}$ be an arbitrary convergent partial subsequence of $\{c_{n}\}$ and let $L^{\prime}=\lim_{n\rightarrow\infty}c_{\sigma(n)}$. Since $\ast$ is continuous at the point $\left( 1,L^{\prime}\right) $, $\{b_{\sigma(n)}\}\rightarrow1$ and $\{c_{\sigma(n)}\}\rightarrow L^{\prime}$, then $\{b_{\sigma(n)}\ast c_{\sigma(n)}\}\rightarrow1\ast L^{\prime}=L^{\prime}$, and as $\{d_{\sigma(n)}\}\rightarrow1$, then $\{b_{\sigma(n)}\ast c_{\sigma(n)}\ast d_{\sigma(n)}\}\rightarrow L^{\prime}\ast1=L^{\prime}$ (notice that, by commutativity, $\ast$ is also continuous at the point $\left( L^{\prime},1\right) $). Furthermore, taking into account that $a_{n}\geq b_{n}\ast c_{n}\ast d_{n}\geq e_{n}$ for all $n\in\mathbb{N}$, we deduce that \[ L=\lim_{n\rightarrow\infty}a_{\sigma(n)}\geq\lim_{n\rightarrow\infty}\left( b_{\sigma(n)}\ast c_{\sigma(n)}\ast d_{\sigma(n)}\right) \geq\lim _{n\rightarrow\infty}e_{\sigma(n)}=L. \] Hence $L^{\prime}=\lim_{n\rightarrow\infty}\left( b_{\sigma(n)}\ast c_{\sigma(n)}\ast d_{\sigma(n)}\right) =L$. This proves that any convergent partial subsequence of $\{c_{n}\}$ converges to $L$. Next we consider the limit inferior and the limit superior of $\{c_{n}\}$. The previous argument shows that \[ L=\liminf_{n\rightarrow\infty}c_{n}\leq\limsup_{n\rightarrow\infty}c_{n}=L. 
\] As the limit inferior and the limit superior of $\{c_{n}\}$ are equal to $L$, the sequence $\{c_{n}\}$ is convergent and its limit is $L$. \end{proof} \begin{example} The cancellation property shown in Proposition \ref{K38 - 18 propo cancellation} is not satisfied by all t-norms. For instance, let $\ast_{D}$ be the drastic t-norm (which is not continuous at any point $(1,s)$ or $\left( s,1\right) $ of the $1$-boundary when $s>0$). Let $L=0$ and let $\{a_{n}\},\{b_{n}\},\{c_{n}\},\{d_{n}\},\{e_{n}\}\subseteq \mathbb{I}$ be the sequences on $\mathbb{I}$ given, for all $n\in\mathbb{N}$, by: \[ a_{n}=e_{n}=0,\quad b_{n}=d_{n}=1-\frac{1}{n},\quad c_{n}=\frac{1}{2}. \] Then $b_{n}\ast_{D}c_{n}=b_{n}\ast_{D}c_{n}\ast_{D}d_{n}=a_{n}=e_{n}=L=0$ for all $n\in\mathbb{N}$, $\{b_{n}\}\rightarrow1$ and $\{d_{n}\}\rightarrow1$. However, $\{c_{n}\}$ does not converge to $L=0$. In fact, it does not even have a partial subsequence converging to $L=0$. \end{example} \subsection{\textbf{Fuzzy metric spaces}} In this subsection we introduce two distinct notions of \emph{fuzzy metric space} that represent natural extensions of the concept of \emph{metric space} to a setting in which some uncertainty or imprecision can be considered when determining the distance between two points. \begin{definition} \label{definition KM-space}\textrm{(cf. 
Kramosil and Mich\'{a}lek \cite{KrMi})} A triplet $(X,M,\ast)$ is called a \emph{fuzzy metric space in the sense of Kramosil and Mich\'{a}lek} (briefly, a \emph{KM-FMS}) if $X$ is an arbitrary non-empty set, $\ast$ is a $t$-norm and $M:X\times X\times\left[ 0,\infty\right) \rightarrow\mathbb{I}$ is a fuzzy set satisfying the following conditions, for each $x,y,z\in X$, and $t,s\geq0$: \begin{description} \item[(KM-1)] $M(x,y,0)=0$; \item[(KM-2)] $M(x,y,t)=1$ for all $t>0$ if, and only if, $x=y$; \item[(KM-3)] $M(x,y,t)=M(y,x,t)$; \item[(KM-4)] $M(x,z,t+s)\geq M(x,y,t)\ast M(y,z,s)$; \item[(KM-5)] $M(x,y,\cdot):\left[ 0,\infty\right) \rightarrow\left[ 0,1\right] $ is left-continuous. \end{description} \end{definition} The value $M(x,y,t)$ can be interpreted as the degree of nearness between $x$ and $y$ compared to $t$. In their original definition, Kramosil and Mich\'{a}lek did not assume the continuity of the t-norm $\ast$. However, in later studies on KM-FMS, it is very usual to suppose that $\ast$ is continuous (see, for instance, \cite{Mi3}). The following one is the canonical way in which a metric space can be seen as a KM-FMS. \begin{example} \label{K38 - 24 ex canonical metric FMS}Each metric space $\left( X,d\right) $ can be seen as a KM-FMS $(X,M^{d},\ast)$, where $\ast$ is any t-norm, by defining $M^{d}:X\times X\times\left[ 0,\infty\right) \rightarrow\mathbb{I}$ as: \[ M^{d}\left( x,y,t\right) =\left\{ \begin{tabular} [c]{ll} $0,$ & if $t=0,$\\ $\dfrac{t}{t+d\left( x,y\right) },$ & if $t>0.$ \end{tabular} \right. \] Notice that $0<M^{d}\left( x,y,t\right) <1$ for all $t>0$ and all $x,y\in X$ such that $x\neq y$. Furthermore, $\lim_{t\rightarrow\infty}M^{d}\left( x,y,t\right) =1$ for all $x,y\in X$. More properties of these spaces are given in Proposition \ref{K38 - 44 propo Md non-Archimedean}. 
\end{example} \begin{remark} \label{K38 - 32 rem Km to GV}Definition \ref{definition KM-space} is so general that such class of fuzzy spaces can verify that \begin{equation} M(x,y,t)=0\text{\quad for all }t>0\text{\quad when }x\neq y. \label{K38 - 26 prop bad condition} \end{equation} In the context of \emph{extended metric spaces} (whose metrics take values in the extended interval $\left[ 0,\infty\right] $, including $\infty$; see \cite{MRR3,FA-spaces}), this property can be interpreted by saying that the \textquotedblleft distance\textquotedblright\ between the points $x$ and $y$ is infinite. Property (\ref{K38 - 26 prop bad condition}) is usually unsuitable in the setting of fixed point theory because it often spoils the arguments given in the proofs. This drawback is sometimes overcome by assuming that the initial condition is a point $x_{0}\in X$ such that $M(x_{0},Tx_{0},t)>0$ for all $t>0$ because the contractivity condition helps to prove that all the points of the Picard sequence based on $x_{0}$ satisfy the same condition. In order to inherit such a characteristic, we would need to assume additional constraints on the auxiliary functions we will employ to introduce the announced fuzzy ample spectrum contractions. Such constraints would not appear if we had decided to restrict our study to fuzzy metric spaces in the sense of George and Veeramani, which were introduced in order to consider a Hausdorff topology on the corresponding fuzzy spaces and to prove a version of the Baire theorem. \end{remark} \begin{definition} \textrm{(cf. 
George and Veeramani \cite{GV})} A triplet $(X,M,\ast)$ is called a \emph{fuzzy metric space in the sense of George and Veeramani} (briefly, a \emph{GV-FMS}) if $X$ is an arbitrary non-empty set, $\ast$ is a continuous $t$-norm and $M:X\times X\times\left( 0,\infty\right) \rightarrow\mathbb{I}$ is a fuzzy set satisfying the following conditions, for each $x,y,z\in X$, and $t,s>0$: \begin{description} \item[(GV-1)] $M(x,y,t)>0$; \item[(GV-2)] $M(x,y,t)=1\text{ for all }t>0$ if, and only if, $x=y$; \item[(GV-3)] $M(x,y,t)=M(y,x,t)$; \item[(GV-4)] $M(x,z,t+s)\geq M(x,y,t)\ast M(y,z,s)$; \item[(GV-5)] $M(x,y,\cdot):\left( 0,\infty\right) \rightarrow\left[ 0,1\right] $ is a continuous function. \end{description} \end{definition} \begin{lemma} Each GV-FMS is, in fact, a KM-FMS by extending $M$ to $t=0$ as $M(x,y,0)=0$ for all $x,y\in X$. \end{lemma} \begin{lemma} \textrm{(cf. Grabiec \cite{Grabiec})} If $\left( X,M,\ast\right) $ is a KM-FMS (respectively, a GV-FMS) and $x,y\in X$, then each function $M(x,y,\cdot)$ is nondecreasing on $\left[ 0,\infty\right) $ (respectively, on $\left( 0,\infty\right) $). \end{lemma} \begin{proposition} \label{K40 21 propo either infinite or almost-constant fuzzy}\textrm{(cf. \cite{RoKaFu}, Proposition 2)} Let $\left\{ x_{n}\right\} $ be a Picard sequence in a KM-FMS $(X,M,\ast)$ such that $\{M(x_{n},x_{n+1},t)\}\rightarrow 1$ for all $t>0$. If there are $n_{0},m_{0}\in\mathbb{N}$ such that $n_{0}<m_{0}$ and $x_{n_{0}}=x_{m_{0}}$, then there is $\ell_{0}\in\mathbb{N}$ and $z\in X$ such that $x_{n}=z$ for all $n\geq\ell_{0}$ (that is, $\left\{ x_{n}\right\} $ is constant from a term onwards). In such a case, $z$ is a fixed point of the self-mapping for which $\left\{ x_{n}\right\} $ is a Picard sequence. \end{proposition} \begin{proof} Let $T:X\rightarrow X$ be a mapping for which $\left\{ x_{n}\right\} $ is a Picard sequence, that is, $x_{n+1}=Tx_{n}$ for all $n\in\mathbb{N}$. 
The set \[ \Omega=\left\{ \,k\in\mathbb{N}:\exists~n_{0}\in\mathbb{N}\text{ such that }x_{n_{0}}=x_{n_{0}+k}\,\right\} \] is non-empty because $m_{0}-n_{0}\in\Omega$, so it has a minimum $k_{0}=\min\Omega$. Then $k_{0}\geq1$ and there is $n_{0}\in\mathbb{N}$ such that $x_{n_{0}}=x_{n_{0}+k_{0}}$. As $\left\{ x_{n}\right\} $ is not infinite, it must be almost periodic by Proposition \ref{K38 - 22 lem infinite or almost periodic}. In fact, it is easy to check, by induction on $p$, that: \begin{equation} x_{n_{0}+r+pk_{0}}=x_{n_{0}+r}\quad\text{for all }p\in\mathbb{N}\text{ and all }r\in\left\{ 0,1,2,\ldots,k_{0}-1\right\} . \label{K40 20 prop} \end{equation} If $k_{0}=1$, then $x_{n_{0}}=x_{n_{0}+1}$. Similarly $x_{n_{0}+2}=Tx_{n_{0}+1}=Tx_{n_{0}}=x_{n_{0}+1}=x_{n_{0}}$. By induction, $x_{n_{0}+r}=x_{n_{0}}$ for all $r\geq0$, which is precisely the conclusion. Next we are going to prove that the case $k_{0}\geq2$ leads to a contradiction. Assume that $k_{0}\geq2$. Then each two terms in the set $\{\,x_{n_{0}},x_{n_{0}+1},x_{n_{0}+2},\ldots,x_{n_{0}+k_{0}-1}\,\}$ are distinct, that is, $x_{n_{0}+i}\neq x_{n_{0}+j}$ for all $0\leq i<j\leq k_{0}-1$ (otherwise, $k_{0}$ would not be the minimum of $\Omega$). Since $x_{n_{0}+i}\neq x_{n_{0}+i+1}$ for all $i\in\left\{ 0,1,2,\ldots,k_{0}-1\right\} $, then there is $s_{i}>0$ such that $M(x_{n_{0}+i},x_{n_{0}+i+1},s_{i})<1$ for all $i\in\left\{ 0,1,2,\ldots,k_{0}-1\right\} $. Since each $M(x_{n_{0}+i},x_{n_{0}+i+1},\cdot)$ is a non-decreasing function, if $t_{0}=\min(\{\,s_{i}:0\leq i\leq k_{0}-1\,\})>0$, then \[ M(x_{n_{0}+i},x_{n_{0}+i+1},t_{0})<1\quad\text{for all }i\in\left\{ 0,1,2,\ldots,k_{0}-1\right\} . \] Define \[ \delta_{0}=\max\left( \left\{ \,M(x_{n_{0}+i},x_{n_{0}+i+1},t_{0}):0\leq i\leq k_{0}-1\,\right\} \right) \in\left[ 0,1\right) . \] Then $\delta_{0}<1$. 
Since $\lim_{n\rightarrow\infty}M(x_{n},x_{n+1},t_{0})=1$, there is $r_{0}\in\mathbb{N}$ such that $r_{0}\geq n_{0}$ and $M(x_{r_{0}},x_{r_{0}+1},t_{0})>\delta_{0}$. Let $i_{0}\in\left\{ 0,1,2,\ldots,k_{0}-1\right\} $ be the unique integer such that the non-negative integers $r_{0}-n_{0}$ and $i_{0}$ are congruent modulo $k_{0}$, that is, $i_{0}$ is the remainder of the integer division of $r_{0}-n_{0}$ by $k_{0}$. Hence there is a unique integer $p\geq0$ such that $\left( r_{0}-n_{0}\right) -i_{0}=pk_{0}$. Since $r_{0}=n_{0}+i_{0}+pk_{0}$, property (\ref{K40 20 prop}) guarantees that \[ x_{r_{0}}=x_{n_{0}+i_{0}+pk_{0}}=x_{n_{0}+i_{0}}, \] where $n_{0}+i_{0}\in\left\{ n_{0},n_{0}+1,n_{0}+2,\ldots,n_{0}+k_{0}-1\right\} $. As a consequence: \[ \delta_{0}=\max\left( \left\{ \,M(x_{n_{0}+i},x_{n_{0}+i+1},t_{0}):0\leq i\leq k_{0}-1\,\right\} \right) \geq M(x_{n_{0}+i_{0}},x_{n_{0}+i_{0}+1},t_{0})=M(x_{r_{0}},x_{r_{0}+1},t_{0})>\delta_{0}, \] which is a contradiction. \end{proof} \subsection{\textbf{Non-Archimedean fuzzy metric spaces}} A KM-FMS $(X,M,\ast)$ is said to be \emph{non-Archimedean} \cite{Istr}\ if \begin{equation} M(x,z,t)\geq M(x,y,t)\ast M(y,z,t)\quad\text{for all }x,y,z\in X\text{ and all }t>0. \label{K38 - 27 prop non-Arquimedean} \end{equation} This property is equivalent to: \[ M(x,z,\max\{t,s\})\geq M(x,y,t)\ast M(y,z,s)\quad\text{for all }x,y,z\in X\text{ and all }t,s>0. \] Notice that this property depends on both the fuzzy metric $M$ and the t-norm $\ast$. Furthermore, the non-Archimedean property (\ref{K38 - 27 prop non-Arquimedean}) implies the triangle inequality (KM-4), so each non-Archimedean fuzzy metric space is a KM-FMS. \begin{proposition} \label{K38 - 44 propo Md non-Archimedean}Given a metric space $\left( X,d\right) $, let $(X,M^{d})$ be the canonical way to see $\left( X,d\right) $ as a KM-FMS (recall Example \ref{K38 - 24 ex canonical metric FMS}). Then the following properties are fulfilled.
\begin{enumerate} \item \label{K38 - 44 propo Md non-Archimedean, item 1}$(X,M^{d})$ is a KM-FMS (and also a GV-FMS) under any t-norm $\ast$. \item \label{K38 - 44 propo Md non-Archimedean, item 2}If $\ast$ is a t-norm such that $\ast\leq\ast_{P}$, then $(X,M^{d},\ast)$ is a non-Archimedean KM-FMS. \item \label{K38 - 44 propo Md non-Archimedean, item 3}The metric space $(X,d)$ satisfies $d\left( x,z\right) \leq\max\{d\left( x,y\right) ,d(y,z)\}$ for all $x,y,z\in X$ if, and only if, $(X,M^{d},\ast_{m})$ is a non-Archimedean KM-FMS. \end{enumerate} \end{proposition} Notice that the \emph{discrete metric} on $X$, defined by $d\left( x,y\right) =0$ if $x=y$ and $d\left( x,y\right) =1$ if $x\neq y$, is an example of a metric satisfying the property involved in item \ref{K38 - 44 propo Md non-Archimedean, item 3} of Proposition \ref{K38 - 44 propo Md non-Archimedean}. \begin{example} \textrm{(Altun and Mihe\c{t} \cite{AlMi}, Example 1.3)} Let $\left( X,d\right) $ be a metric space and let $\vartheta$ be a nondecreasing and continuous function from $\left( 0,\infty\right) $ into $\left( 0,1\right) $ such that $\lim_{t\rightarrow\infty}\vartheta(t)=1$. Let $\ast$ be a t-norm such that $\ast\leq\ast_{P}$. For each $x,y\in X$ and all $t\in\left( 0,\infty\right) $, define \[ M(x,y,t)=\left[ \vartheta(t)\right] ^{d\left( x,y\right) }. \] Then $\left( X,M,\ast\right) $ is a non-Archimedean KM-FMS. \end{example} \begin{example} A KM-FMS is called \emph{stationary} when each function $t\mapsto M(x,y,t)$ does not depend on $t$, that is, it depends only on the points $x$ and $y$. For instance, if $X=\left( 0,\infty\right) $ and $M$ is defined by \[ M(x,y,t)=\frac{\min\{x,y\}}{\max\{x,y\}}\quad\text{for all }x,y\in\left( 0,\infty\right) \text{ and all }t>0, \] then $(X,M,\ast)$ is a stationary KM-FMS. In fact, it is non-Archimedean.
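The last claim admits a short direct verification (under the assumption that the t-norm satisfies $\ast\leq\ast_{P}$): it suffices to check that $M(x,z,t)\geq M(x,y,t)\,M(y,z,t)$ for all $x,y,z\in\left( 0,\infty\right) $. Suppose, without loss of generality, that $x\leq z$. If $x\leq y\leq z$, then \[ M(x,y,t)\,M(y,z,t)=\frac{x}{y}\cdot\frac{y}{z}=\frac{x}{z}=M(x,z,t). \] If $y<x$, then $M(x,y,t)\,M(y,z,t)=\dfrac{y^{2}}{xz}\leq\dfrac{x^{2}}{xz}=M(x,z,t)$, and if $y>z$, then $M(x,y,t)\,M(y,z,t)=\dfrac{xz}{y^{2}}\leq\dfrac{xz}{z^{2}}=M(x,z,t)$. In all cases the non-Archimedean inequality holds.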
\end{example} \section{\textbf{Fuzzy spaces}} There are many properties which are defined for mathematical objects in a fuzzy metric space that, in fact, only depend on the fuzzy metric $M$, but not on the t-norm $\ast$. For instance, the notions of convergence and Cauchy sequence. Therefore, it is worthwhile to introduce such notions when $M$ is an arbitrary function. \begin{definition} A \emph{fuzzy space} is a pair $(X,M)$ where $X$ is a non-empty set and $M$ is a fuzzy set on $X\times X\times\left[ 0,\infty\right) $, that is, a mapping $M:X\times X\times\left[ 0,\infty\right) \rightarrow\mathbb{I}$ (notice that no additional conditions are assumed for $M$). In a fuzzy space $(X,M)$, we say that a sequence $\{x_{n}\}\subseteq X$ is: \begin{itemize} \item $M$\emph{-Cauchy} if for all $\varepsilon\in\left( 0,1\right) $ and all $t>0$ there is $n_{0}\in\mathbb{N}$ such that $M\left( x_{n},x_{m},t\right) >1-\varepsilon$ for all $n,m\geq n_{0}$; \item $M$\emph{-convergent to }$x\in X$ if for all $\varepsilon\in\left( 0,1\right) $ and all $t>0$ there is $n_{0}\in\mathbb{N}$ such that $M\left( x_{n},x,t\right) >1-\varepsilon$ for all $n\geq n_{0}$ (in such a case, we write $\{x_{n}\}\rightarrow x$). \end{itemize} We say that the fuzzy space $(X,M)$ is \emph{complete} (or $X$ is $M$\emph{-complete}) if each $M$-Cauchy sequence in $X$ is $M$-convergent to a point of $X$. \end{definition} \begin{proposition} \label{K38 - 49 propo uniqueness of limit}The limit of an $M$-convergent sequence in a KM-FMS whose t-norm is continuous at the $1$-boundary is unique. \end{proposition} \begin{remark} \begin{enumerate} \item Using the previous definitions, it is possible to prove that a sequence $\{x_{n}\}$ in a metric space $(X,d)$ is Cauchy (respectively, convergent to $x\in X$) if, and only if, $\{x_{n}\}$ is $M^{d}$-Cauchy (respectively, $M^{d}$-convergent to $x\in X$) in $(X,M^{d})$.
\item Notice that a sequence $\{x_{n}\}$ is $M$-convergent to $x\in X$ if, and only if, $\lim_{n\rightarrow\infty}M(x_{n},x,t)=1$ for all $t>0$. \end{enumerate} \end{remark} It is clear that one of the possible drawbacks of the previous definition is the fact that, if $M$ does not satisfy additional properties such as (KM-1)-(KM-5), then the limit of an $M$-convergent sequence need not be unique, and an $M$-convergent sequence need not be $M$-Cauchy. However, it is a good idea to consider general fuzzy spaces because such spaces allow us to reflect on which properties depend on both $M$ and $\ast$, and which properties depend only on $M$. In the second case, the next conditions will be of importance in what follows. Notice that the following notions make sense even if the limits of $M$-convergent sequences are not unique. \begin{definition} Let $\mathcal{S}$ be a binary relation on a fuzzy space $\left( X,M\right) $, let $Y\subseteq X$ be a nonempty subset, let $\{x_{n}\}$ be a sequence in $X$ and let $T:X\rightarrow X$ be a self-mapping.
We say that: \begin{itemize} \item $T$ is $\mathcal{S}$\emph{-nondecreasing-continuous} if $\{Tx_{n}\}\rightarrow Tz$ for every $\mathcal{S}$-nondecreasing sequence $\{x_{n}\}\subseteq X$ such that $\{x_{n}\}\rightarrow z\in X$; \item $T$ is $\mathcal{S}$\emph{-strictly-increasing-continuous} if $\{Tx_{n}\}\rightarrow Tz$ for every $\mathcal{S}$-strictly-increasing sequence $\{x_{n}\}\subseteq X$ such that $\{x_{n}\}\rightarrow z\in X$; \item $Y$ is $(\mathcal{S},M)$\emph{-strictly-increasing-complete} if every $\mathcal{S}$-strictly-increasing and $M$-Cauchy sequence $\{y_{n}\}\subseteq Y$ is $M$-convergent to a point of $Y$; \item $Y$ is $(\mathcal{S},M)$\emph{-strictly-increasing-precomplete} if there exists a set $Z$ such that $Y\subseteq Z\subseteq X$ and $Z$ is $(\mathcal{S},M)$-strictly-increasing-complete; \item $\left( X,M\right) $ is $\mathcal{S}$\emph{-strictly-increasing-regular} if, for every $\mathcal{S}$-strictly-increasing sequence $\{x_{n}\}\subseteq X$ such that $\{x_{n}\}\rightarrow z\in X$, it follows that $x_{n}\mathcal{S}z$ for all $n\in\mathbb{N}$; \item $\left( X,M\right) $ is \emph{metrically-}$\mathcal{S}$\emph{-strictly-increasing-regular} if, for every $\mathcal{S}$-strictly-increasing sequence $\{x_{n}\}\subseteq X$ such that $\{x_{n}\}\rightarrow z\in X$ and \[ M\left( x_{n},x_{n+1},t\right) >0\quad\text{for all }n\in\mathbb{N}\text{ and all }t>0, \] it follows that $x_{n}\mathcal{S}z$ and $M(x_{n},z,t)>0$ for all $n\in\mathbb{N}$ and all $t>0$. \end{itemize} \end{definition} The reader can notice that we will only use the previous notions when working with infinite Picard sequences, so they could be refined in future work.
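The drawback mentioned above can be illustrated with an elementary example, which shows that, in an arbitrary fuzzy space, limits of $M$-convergent sequences need not be unique. \begin{example} Let $X$ be any set with at least two points and let $M(x,y,t)=1$ for all $x,y\in X$ and all $t\geq0$. Then $(X,M)$ is a fuzzy space in which every sequence is $M$-Cauchy and $M$-convergent to every point of $X$: indeed, $M(x_{n},x,t)=1>1-\varepsilon$ for all $n\in\mathbb{N}$, all $t>0$ and all $\varepsilon\in\left( 0,1\right) $. In particular, when $M$ satisfies no additional axioms, the limit of an $M$-convergent sequence need not be unique. \end{example}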
\section{\textbf{The property $\mathcal{NC}$}} In a fuzzy metric space $(X,M,\ast)$, if $\{x_{n}\}$ is not an $M$-Cauchy sequence, then there are $\varepsilon_{0}\in\left( 0,1\right) $ and $t_{0}>0$ such that, for all $k\in\mathbb{N}$, there are natural numbers $m\left( k\right) ,n\left( k\right) \geq k$ such that $M\left( x_{n(k)},x_{m(k)},t_{0}\right) \leq1-\varepsilon_{0}$. Equivalently, there are two partial subsequences $\{x_{n(k)}\}_{k\in\mathbb{N}}$ and $\{x_{m(k)}\}_{k\in\mathbb{N}}$ of $\{x_{n}\}$ such that $k<n\left( k\right) <m\left( k\right) <n\left( k+1\right) $ and \[ 1-\varepsilon_{0}\geq M\left( x_{n(k)},x_{m(k)},t_{0}\right) \quad\text{for all }k\in\mathbb{N}. \] If, associated to each number $n\left( k\right) $, we take $m\left( k\right) $ to be the least integer satisfying the previous property, then we can also suppose that \[ M\left( x_{n\left( k\right) },x_{m\left( k\right) -1},t_{0}\right) >1-\varepsilon_{0}\geq M\left( x_{n\left( k\right) },x_{m\left( k\right) },t_{0}\right) \quad\text{for all }k\in\mathbb{N}. \] From these inequalities, in a general fuzzy metric space, it is difficult to go further, even if we try to use the properties of the t-norm $\ast$. Next, we are going to introduce a new condition on the fuzzy space in order to guarantee that the sequences $\{M\left( x_{n\left( k\right) },x_{m\left( k\right) },t_{0}\right) \}_{k\in\mathbb{N}}$ and $\{M\left( x_{n\left( k\right) -1},x_{m\left( k\right) -1},t_{0}\right) \}_{k\in\mathbb{N}}$ satisfy additional properties. This will be of great help when we handle such sequences in the proofs of fixed point theorems. Immediately after, we show that non-Archimedean KM-fuzzy metric spaces satisfy this new condition (which is not trivial at all).
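To fix ideas, it may help to translate these inequalities into the metric setting. For the canonical fuzzy metric $M^{d}(x,y,t)=t/(t+d(x,y))$ associated to a metric $d$, the condition $M^{d}\left( x_{n\left( k\right) },x_{m\left( k\right) },t_{0}\right) \leq1-\varepsilon_{0}$ is equivalent to \[ d\left( x_{n\left( k\right) },x_{m\left( k\right) }\right) \geq\frac{t_{0}\,\varepsilon_{0}}{1-\varepsilon_{0}}, \] so the construction above recovers the classical description of non-Cauchy sequences in metric spaces: two subsequences whose terms remain at a distance bounded away from zero.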
\begin{definition} We will say that a fuzzy space $(X,M)$ satisfies the \emph{property }$\mathcal{NC}$ (\textquotedblleft\emph{not Cauchy}\textquotedblright) if for each sequence $\{x_{n}\}\subseteq X$ which is not $M$-Cauchy and satisfies $\lim_{n\rightarrow\infty}M\left( x_{n},x_{n+1},t\right) =1$ for all $t>0$, there are $\varepsilon_{0}\in\left( 0,1\right) $, $t_{0}>0$ and two partial subsequences $\{x_{n(k)}\}_{k\in\mathbb{N}}$ and $\{x_{m(k)}\}_{k\in\mathbb{N}}$ of $\{x_{n}\}$ such that, for all $k\in\mathbb{N}$, \begin{align*} & k<n\left( k\right) <m\left( k\right) <n\left( k+1\right) \qquad\text{and}\\ & M\left( x_{n\left( k\right) },x_{m\left( k\right) -1},t_{0}\right) >1-\varepsilon_{0}\geq M\left( x_{n\left( k\right) },x_{m\left( k\right) },t_{0}\right) , \end{align*} and also \[ \lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) },x_{m\left( k\right) },t_{0}\right) =\lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) -1},x_{m\left( k\right) -1},t_{0}\right) =1-\varepsilon_{0}. \] \end{definition} Notice that the previous definition does not depend on any t-norm. However, when a t-norm of a specific class plays a role, then additional properties hold. \begin{theorem} \label{K38 - 24 th Non-Arch impies NC}Each non-Archimedean KM-fuzzy metric space $(X,M,\ast)$ whose t-norm $\ast$ is continuous at the $1$-boundary satisfies the property $\mathcal{NC}$. \end{theorem} \begin{proof} Suppose that $\{x_{n}\}\subseteq X$ is a sequence which is not $M$-Cauchy and satisfies \[ \lim_{n\rightarrow\infty}M\left( x_{n},x_{n+1},t\right) =1\quad\text{for all }t>0.
\] Then there are $\varepsilon_{0}\in\left( 0,1\right) $, $t_{0}>0$ and two partial subsequences $\{x_{n(k)}\}_{k\in\mathbb{N}}$ and $\{x_{m(k)}\}_{k\in\mathbb{N}}$ of $\{x_{n}\}$ such that $k<n\left( k\right) <m\left( k\right) <n\left( k+1\right) $ and \[ M\left( x_{n\left( k\right) },x_{m\left( k\right) -1},t_{0}\right) >1-\varepsilon_{0}\geq M\left( x_{n\left( k\right) },x_{m\left( k\right) },t_{0}\right) \quad\text{for all }k\in\mathbb{N}. \] Since $(X,M,\ast)$ is a non-Archimedean KM-FMS, for all $k\in\mathbb{N}$, \begin{align} 1-\varepsilon_{0} & \geq M\left( x_{n\left( k\right) },x_{m\left( k\right) },t_{0}\right) \geq M\left( x_{n\left( k\right) },x_{m\left( k\right) -1},t_{0}\right) \ast M\left( x_{m\left( k\right) -1},x_{m\left( k\right) },t_{0}\right) \nonumber\\ & \geq\left( 1-\varepsilon_{0}\right) \ast M\left( x_{m\left( k\right) -1},x_{m\left( k\right) },t_{0}\right) .\label{K38 - 29 prop} \end{align} Since $\lim_{k\rightarrow\infty}M\left( x_{m\left( k\right) -1},x_{m\left( k\right) },t_{0}\right) =1$ and $\ast$ is continuous at the $1$-boundary, \[ \lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) },x_{m\left( k\right) },t_{0}\right) =1-\varepsilon_{0}.
\] Taking into account that, by (\ref{K38 - 29 prop}), \[ \overset{a_{k}}{\overbrace{\,1-\varepsilon_{0}\,}}~\geq~\overset{c_{k} }{\overbrace{\,M\left( x_{n\left( k\right) },x_{m\left( k\right) -1},t_{0}\right) \,}}\ast\overset{d_{k}}{\overbrace{\,M\left( x_{m\left( k\right) -1},x_{m\left( k\right) },t_{0}\right) \,}}~\geq~\overset{e_{k} }{\overbrace{\,\left( 1-\varepsilon_{0}\right) \ast M\left( x_{m\left( k\right) -1},x_{m\left( k\right) },t_{0}\right) \,}}, \] for all $k\in\mathbb{N}$ and $\ast$ is continuous, Proposition \ref{K38 - 18 propo cancellation} (applied with $b_{k}=1$ for all $k\in\mathbb{N}$) guarantees that \[ \lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) },x_{m\left( k\right) -1},t_{0}\right) =1-\varepsilon_{0}, \] which means that \[ \lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) },x_{m\left( k\right) },t_{0}\right) =\lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) },x_{m\left( k\right) -1},t_{0}\right) =1-\varepsilon_{0}. \] Next, observe that, for all $k\in\mathbb{N}$, \begin{align*} \overset{a_{k}}{\overbrace{\,M\left( x_{n\left( k\right) },x_{m\left( k\right) },t_{0}\right) \,}~} & \geq~\overset{b_{k}}{\overbrace{\,M\left( x_{n\left( k\right) },x_{n\left( k\right) -1},t_{0}\right) \,}} \ast\,\overset{c_{k}}{\overbrace{\,M\left( x_{n\left( k\right) -1},x_{m\left( k\right) -1},t_{0}\right) \,}}\ast\overset{d_{k}} {\overbrace{\,M\left( x_{m\left( k\right) -1},x_{m\left( k\right) } ,t_{0}\right) \,}}\\ & \geq~M\left( x_{n\left( k\right) },x_{n\left( k\right) -1} ,t_{0}\right) \ast M\left( x_{n\left( k\right) -1},x_{n(k)},t_{0}\right) \ast M\left( x_{n\left( k\right) },x_{m\left( k\right) },t_{0}\right) \\ & \qquad\underset{e_{k}}{\underbrace{\qquad\qquad\ast M\left( x_{m\left( k\right) },x_{m\left( k\right) -1},t_{0}\right) \ast M\left( x_{m\left( k\right) -1},x_{m\left( k\right) },t_{0}\right) \qquad\qquad}}~. 
\end{align*} Clearly $\{a_{k}\}\rightarrow1-\varepsilon_{0}$, $\{b_{k}\}\rightarrow1$, $\{d_{k}\}\rightarrow1$ and $\{e_{k}\}\rightarrow1-\varepsilon_{0}$. Thus, Proposition \ref{K38 - 18 propo cancellation} guarantees again that \[ \lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) -1},x_{m\left( k\right) -1},t_{0}\right) =1-\varepsilon_{0}. \] As a consequence, \[ \lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) },x_{m\left( k\right) },t_{0}\right) =\lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) -1},x_{m\left( k\right) -1},t_{0}\right) =1-\varepsilon_{0}. \] This completes the proof. \end{proof} \begin{corollary} \label{K38 - 43 coro metric implies NC}If $(X,d)$ is a metric space, then $(X,M^{d})$ satisfies the property $\mathcal{NC}$. \end{corollary} \begin{proof} By item \ref{K38 - 44 propo Md non-Archimedean, item 2} of Proposition \ref{K38 - 44 propo Md non-Archimedean}, $(X,M^{d},\ast_{P})$ is a non-Archimedean KM-FMS and, as $\ast_{P}$ is continuous, Theorem \ref{K38 - 24 th Non-Arch impies NC} ensures that $(X,M^{d})$ satisfies the property $\mathcal{NC}$. \end{proof} \begin{example} Let $(X,d)$ be a metric space for which there are $x_{0},y_{0},z_{0}\in X$ such that $d(x_{0},z_{0})>\max\{d(x_{0},y_{0}),d(y_{0},z_{0})\}$. By Corollary \ref{K38 - 43 coro metric implies NC}, the fuzzy space $(X,M^{d})$ satisfies the property $\mathcal{NC}$. Furthermore, item \ref{K38 - 44 propo Md non-Archimedean, item 1} of Proposition \ref{K38 - 44 propo Md non-Archimedean} ensures that $(X,M^{d},\ast_{m})$ is a KM-FMS. However, taking into account the third item of the same proposition, $(X,M^{d},\ast_{m})$ is not a non-Archimedean KM-FMS because it does not satisfy the non-Archimedean property for $x_{0}$, $y_{0}$ and $z_{0}$. \end{example} \section{\textbf{Fuzzy ample spectrum contractions}} In this section we introduce two distinct notions of \emph{fuzzy ample spectrum contraction} in the setting of KM-FMS.
At first sight, one might believe that they are the natural counterparts in FMS of the properties that define an ample spectrum contraction in metric spaces. However, FMS are distinct in nature from metric spaces: for instance, they are much more varied than metric spaces. As we announced in the introduction, we will work on a KM-FMS. Their main advantage is that they are more general than GV-FMS, but they also have a great drawback: the statements of the fixed point theorems need more technical hypotheses and their corresponding proofs are more difficult. The reader can easily deduce how the proofs would simplify in a GV-FMS. Furthermore, rather than working in a non-Archimedean FMS, we will employ KM-FMS that only satisfy the property $\mathcal{NC}$ and whose t-norms are continuous at the $1$-boundary (which are more general). We give the following definitions in the context of fuzzy spaces. Throughout this section, let $\left( X,M\right) $ be a fuzzy space, let $T:X\rightarrow X$ be a self-mapping, let $\mathcal{S}$ be a binary relation on $X$, let $B\subseteq\mathbb{R}^{2}$ be a subset and let $\theta:B\rightarrow\mathbb{R}$ be a function. Let \[ \mathcal{M}_{T}=\left\{ \,(M\left( Tx,Ty,t\right) ,M\left( x,y,t\right) )\in\mathbb{I}\times\mathbb{I}:x,y\in X,~x\mathcal{S}^{\ast}y,~Tx\mathcal{S}^{\ast}Ty,~t\in\left[ 0,\infty\right) \,\right\} . \] \subsection{(\textbf{Type-1) Fuzzy ample spectrum contractions}} The notion of fuzzy ample spectrum contraction directly depends on a very particular kind of sequence of pairs of distance distribution functions. \begin{definition} \label{K38 - 55 def T,S,M sequence}Let $\{\phi_{n}\}$ and $\{\psi_{n}\}$ be two sequences of functions $\phi_{n},\psi_{n}:\left[ 0,\infty\right) \rightarrow\mathbb{I}$.
We say that $\{\left( \phi_{n},\psi_{n}\right) \}$ is a $(T,\mathcal{S}^{\ast},M)$\emph{-sequence} if there exist two sequences $\{x_{n}\},\{y_{n}\}\subseteq X$ such that \[ x_{n}\mathcal{S}^{\ast}y_{n},\quad Tx_{n}\mathcal{S}^{\ast}Ty_{n},\quad\phi_{n}\left( t\right) =M(Tx_{n},Ty_{n},t)\quad\text{and}\quad\psi_{n}\left( t\right) =M(x_{n},y_{n},t) \] for all $n\in\mathbb{N}$ and all $t>0$. \end{definition} \begin{definition} \label{K38 - 54 def FASC}A mapping $T:X\rightarrow X$ is said to be a \emph{fuzzy ample spectrum contraction w.r.t. }$\left( M,\mathcal{S},\theta\right) $ if the following four conditions are fulfilled. \begin{description} \item[$(\mathcal{F}_{1})$] $B$ is nonempty and $\mathcal{M}_{T}\subseteq B$. \item[$(\mathcal{F}_{2})$] If $\{x_{n}\}\subseteq X$ is a Picard $\mathcal{S}$-strictly-increasing sequence of $T$ such that \[ \theta\left( M\left( x_{n+1},x_{n+2},t\right) ,M\left( x_{n},x_{n+1},t\right) \right) \geq0\quad\text{for all }n\in\mathbb{N}\text{ and all }t>0, \] then $\lim_{n\rightarrow\infty}M\left( x_{n},x_{n+1},t\right) =1$ for all $t>0$. \item[$(\mathcal{F}_{3})$] If $\{\left( \phi_{n},\psi_{n}\right) \}$ is a $(T,\mathcal{S}^{\ast},M)$-sequence and $t_{0}>0$ are such that $\{\phi_{n}\left( t_{0}\right) \}$ and $\{\psi_{n}\left( t_{0}\right) \}$ converge to the same limit $L\in\mathbb{I}$, and such that $L>\phi_{n}\left( t_{0}\right) $ and $\theta(\phi_{n}\left( t_{0}\right) ,\psi_{n}\left( t_{0}\right) )\geq0$ for all $n\in\mathbb{N}_{0}$, then $L=1$. \item[$(\mathcal{F}_{4})$] $\theta\left( M(Tx,Ty,t),M(x,y,t)\right) \geq0$ for all $t>0$ and all $x,y\in X$ such that $x\mathcal{S}^{\ast}y$ and $Tx\mathcal{S}^{\ast}Ty$. \end{description} \end{definition} In some cases, we will also consider the following properties.
\begin{description} \item[$(\mathcal{F}_{2}^{\prime})$] If $\{x_{n}\},\{y_{n}\}\subseteq X$ are two $T$-Picard sequences such that \[ x_{n}\mathcal{S}^{\ast}y_{n}\quad\text{and}\quad\theta(M\left( x_{n+1},y_{n+1},t\right) ,M\left( x_{n},y_{n},t\right) )\geq0\quad\text{for all }n\in\mathbb{N}\text{ and all }t>0, \] then $\lim_{n\rightarrow\infty}M\left( x_{n},y_{n},t\right) =1$ for all $t>0$. \item[$(\mathcal{F}_{5})$] If $\{\left( \phi_{n},\psi_{n}\right) \}$ is a $(T,\mathcal{S}^{\ast},M)$-sequence such that $\{\psi_{n}\left( t\right) \}\rightarrow1$ for all $t>0$ and $\theta(\phi_{n}(t),\psi_{n}(t))\geq0$ for all $n\in\mathbb{N}$ and all $t>0$, then $\{\phi_{n}\left( t\right) \}\rightarrow1$ for all $t>0$. \end{description} Many of the remarks that were given in the context of metric spaces for ample spectrum contractions can now be repeated. In particular, we highlight the following ones. \begin{remark} \label{K38 - 30 rem def FAEC} \begin{enumerate} \item \label{K38 - 30 rem def FAEC, item 1}Although the set $B$ on which the function $\theta:B\rightarrow\mathbb{R}$ is defined can be larger than $\mathbb{I}\times\mathbb{I}$, for our purposes, we will only be interested in the values of $\theta$ when its arguments belong to $\mathbb{I}\times\mathbb{I}$. Hence it is sufficient to assume that $B\subseteq\mathbb{I}\times\mathbb{I}$. \item \label{K38 - 30 rem def FAEC, item 2}If the function $\theta$ satisfies $\theta\left( t,s\right) \leq t-s$ for all $\left( t,s\right) \in B\cap(\mathbb{I}\times\mathbb{I})$, then property $(\mathcal{F}_{5})$ holds. This follows from the fact that \[ 0\leq\theta(\phi_{n}(t),\psi_{n}(t))\leq\phi_{n}(t)-\psi_{n}(t)\quad\Rightarrow\quad\psi_{n}(t)\leq\phi_{n}(t)\leq1. \] \item By choosing $y_{n}=x_{n+1}$ for all $n\in\mathbb{N}$, it can be proved that $(\mathcal{F}_{2}^{\prime})$ implies $(\mathcal{F}_{2})$.
\end{enumerate} \end{remark} In the following result we check that a large family of ample spectrum contractions are, in fact, fuzzy ample spectrum contractions. \begin{theorem} \label{K38 - 56 th ASC implies FASC}Let $(X,d)$ be a metric space and let $T:X\rightarrow X$ be an ample spectrum contraction w.r.t. $\mathcal{S}$ and $\varrho$, where $\varrho:\left[ 0,\infty\right) \times\left[ 0,\infty\right) \rightarrow\mathbb{R}$. Suppose that the function $\varrho$ satisfies the following property: \begin{equation} t,s>0,\quad\varrho\left( t,s\right) \geq0\quad\Rightarrow\quad\left[ ~\varrho\left( \frac{t}{r},\frac{s}{r}\right) \geq0~~\text{for all }r>0~\right] . \label{K38 - 51 prop} \end{equation} Define: \[ \theta_{\varrho}:\left( 0,1\right] \times\left( 0,1\right] \rightarrow\mathbb{R},\qquad\theta_{\varrho}\left( t,s\right) =\varrho\left( \frac{1-t}{t},\frac{1-s}{s}\right) \quad\text{for all }t,s\in\left( 0,1\right] . \] Then $T$ is a fuzzy ample spectrum contraction w.r.t. $(M^{d},\mathcal{S},\theta_{\varrho})$. Furthermore, if property $(\mathcal{B}_{5})$ (respectively, $(\mathcal{B}_{2}^{\prime})$) holds, then property $(\mathcal{F}_{5})$ (respectively, $(\mathcal{F}_{2}^{\prime})$) also holds. \end{theorem} \begin{proof} First of all, we observe that, for all $x,y\in X$ and all $t>0$, \begin{align} & \theta_{\varrho}(M^{d}(Tx,Ty,t),M^{d}(x,y,t))=\theta_{\varrho}\left( \frac{t}{t+d(Tx,Ty)},\frac{t}{t+d(x,y)}\right) \nonumber\\ & \qquad=\varrho\left( \frac{\,1-\dfrac{t}{t+d(Tx,Ty)}\,}{\dfrac{t}{t+d(Tx,Ty)}},\frac{\,1-\dfrac{t}{t+d(x,y)}\,}{\dfrac{t}{t+d(x,y)}}\right) =\varrho\left( \frac{d(Tx,Ty)}{t},\frac{d(x,y)}{t}\right) . \label{K38 - 50 prop} \end{align} Next we check all properties that define a fuzzy ample spectrum contraction. $(\mathcal{F}_{1})$ $B=\left( 0,1\right] \times\left( 0,1\right] $ is nonempty and $\mathcal{M}_{T}\subseteq B$.
$(\mathcal{F}_{2})$ Let $\{x_{n}\}\subseteq X$ be a Picard $\mathcal{S}$-strictly-increasing sequence of $T$ such that \[ \theta_{\varrho}(M^{d}\left( x_{n+1},x_{n+2},t\right) ,M^{d}\left( x_{n},x_{n+1},t\right) )\geq0\quad\text{for all }n\in\mathbb{N}\text{ and all }t>0. \] In particular, for $t=1$, using (\ref{K38 - 50 prop}), for each $n\in\mathbb{N}$, \[ 0\leq\theta_{\varrho}(M^{d}\left( x_{n+1},x_{n+2},1\right) ,M^{d}\left( x_{n},x_{n+1},1\right) )=\varrho(d(x_{n+1},x_{n+2}),d(x_{n},x_{n+1})). \] As $T$ is an ample spectrum contraction w.r.t. $\mathcal{S}$ and $\varrho$, axiom $(\mathcal{B}_{2})$ implies that $\{d(x_{n},x_{n+1})\}\rightarrow0$, so $\lim_{n\rightarrow\infty}M^{d}\left( x_{n},x_{n+1},t\right) =1$ for all $t>0$. $(\mathcal{F}_{3})$~Let $\{\left( \phi_{n},\psi_{n}\right) \}$ be a $(T,\mathcal{S}^{\ast},M^{d})$-sequence and let $t_{0}>0$ be such that $\{\phi_{n}\left( t_{0}\right) \}$ and $\{\psi_{n}\left( t_{0}\right) \}$ converge to the same limit $L\in\mathbb{I}$, and such that $L>\phi_{n}\left( t_{0}\right) $ and $\theta_{\varrho}(\phi_{n}\left( t_{0}\right) ,\psi_{n}\left( t_{0}\right) )\geq0$ for all $n\in\mathbb{N}_{0}$. Let $\{x_{n}\},\{y_{n}\}\subseteq X$ be some sequences such that, for all $n\in\mathbb{N}$ and all $t>0$, \[ x_{n}\mathcal{S}^{\ast}y_{n},\quad Tx_{n}\mathcal{S}^{\ast}Ty_{n},\quad\phi_{n}\left( t\right) =M^{d}(Tx_{n},Ty_{n},t)\quad\text{and}\quad\psi_{n}\left( t\right) =M^{d}(x_{n},y_{n},t). \] Define $a_{n}=d(Tx_{n},Ty_{n})$ and $b_{n}=d(x_{n},y_{n})$ for all $n\in\mathbb{N}$. Then $\{\left( a_{n},b_{n}\right) \}\subseteq\left[ 0,\infty\right) \times\left[ 0,\infty\right) $ is a $(T,\mathcal{S}^{\ast})$-sequence.
Furthermore, for all $n\in\mathbb{N}$, \begin{align*} 0 & \leq\theta_{\varrho}(\phi_{n}\left( t_{0}\right) ,\psi_{n}\left( t_{0}\right) )=\theta_{\varrho}(M^{d}(Tx_{n},Ty_{n},t_{0}),M^{d}(x_{n},y_{n},t_{0}))\\[0.2cm] & =\varrho\left( \frac{d(Tx_{n},Ty_{n})}{t_{0}},\frac{d(x_{n},y_{n})}{t_{0}}\right) =\varrho\left( \frac{a_{n}}{t_{0}},\frac{b_{n}}{t_{0}}\right) . \end{align*} By (\ref{K38 - 51 prop}), using $r=1/t_{0}$, we deduce that, for all $n\in\mathbb{N}$, \[ 0\leq\varrho\left( \frac{\,\frac{a_{n}}{t_{0}}\,}{\frac{1}{t_{0}}},\frac{\,\frac{b_{n}}{t_{0}}\,}{\frac{1}{t_{0}}}\right) =\varrho\left( a_{n},b_{n}\right) . \] Since $L>\phi_{n}\left( t_{0}\right) $, we have $L>0$. Let $\varepsilon_{0}=t_{0}\frac{1-L}{L}\geq0$. Thus, as $\{\phi_{n}\left( t_{0}\right) \}\rightarrow L$ and $\{\psi_{n}\left( t_{0}\right) \}\rightarrow L$, \[ \left\{ \frac{t_{0}}{t_{0}+d(Tx_{n},Ty_{n})}\right\} _{n\in\mathbb{N}}\rightarrow L\quad\text{and}\quad\left\{ \frac{t_{0}}{t_{0}+d(x_{n},y_{n})}\right\} _{n\in\mathbb{N}}\rightarrow L. \] This is equivalent to \[ \{a_{n}\}=\left\{ d(Tx_{n},Ty_{n})\right\} _{n\in\mathbb{N}}\rightarrow t_{0}\frac{1-L}{L}=\varepsilon_{0}\quad\text{and}\quad\{b_{n}\}=\left\{ d(x_{n},y_{n})\right\} _{n\in\mathbb{N}}\rightarrow t_{0}\frac{1-L}{L}=\varepsilon_{0}. \] Notice that \begin{align*} \frac{t_{0}}{t_{0}+d(Tx_{n},Ty_{n})}=\phi_{n}\left( t_{0}\right) <L\quad & \Leftrightarrow\quad\frac{1}{t_{0}+a_{n}}<\frac{L}{t_{0}}\quad\Leftrightarrow\quad\frac{t_{0}}{L}<t_{0}+a_{n}\\[0.2cm] & \Leftrightarrow\quad\frac{t_{0}}{L}-t_{0}<a_{n}\quad\Leftrightarrow\quad\varepsilon_{0}=t_{0}\frac{1-L}{L}<a_{n}. \end{align*} Taking into account that $T$ is an ample spectrum contraction w.r.t. $\mathcal{S}$ and $\varrho$, condition $(\mathcal{B}_{3})$ ensures that $\varepsilon_{0}=0$, which leads to $L=1$. $(\mathcal{F}_{4})$ Let $t>0$ and let $x,y\in X$ be such that $x\mathcal{S}^{\ast}y$ and $Tx\mathcal{S}^{\ast}Ty$. Therefore, $d(x,y)>0$ and $d(Tx,Ty)>0$. As $T$ is an ample spectrum contraction w.r.t. $\mathcal{S}$ and $\varrho$, property $(\mathcal{B}_{4})$ guarantees that $\varrho\left( d(Tx,Ty),d(x,y)\right) \geq0$. By (\ref{K38 - 51 prop}), it follows that \[ \varrho\left( \frac{d(Tx,Ty)}{r},\frac{d(x,y)}{r}\right) \geq0~~\text{for all }r>0. \] As a consequence, by (\ref{K38 - 50 prop}), \[ \theta_{\varrho}(M^{d}(Tx,Ty,t),M^{d}(x,y,t))=\varrho\left( \frac{d(Tx,Ty)}{t},\frac{d(x,y)}{t}\right) \geq0. \] $(\mathcal{B}_{5})\Rightarrow(\mathcal{F}_{5})$. Suppose that the property $(\mathcal{B}_{5})$ holds. Let $\{\left( \phi_{n},\psi_{n}\right) \}$ be a $(T,\mathcal{S}^{\ast},M)$-sequence such that $\{\psi_{n}\left( t\right) \}\rightarrow1$ for all $t>0$ and $\theta_{\varrho}(\phi_{n}(t),\psi_{n}(t))\geq0$ for all $n\in\mathbb{N}$ and all $t>0$. Let $\{x_{n}\},\{y_{n}\}\subseteq X$ be some sequences such that, for all $n\in\mathbb{N}$ and all $t>0$, \[ x_{n}\mathcal{S}^{\ast}y_{n},\quad Tx_{n}\mathcal{S}^{\ast}Ty_{n},\quad\phi_{n}\left( t\right) =M^{d}(Tx_{n},Ty_{n},t)\quad\text{and}\quad\psi_{n}\left( t\right) =M^{d}(x_{n},y_{n},t). \] Using $t=1$, for all $n\in\mathbb{N}$, \[ 0\leq\theta_{\varrho}(\phi_{n}(1),\psi_{n}(1))=\theta_{\varrho}(M^{d}(Tx_{n},Ty_{n},1),M^{d}(x_{n},y_{n},1))=\varrho\left( d(Tx_{n},Ty_{n}),d(x_{n},y_{n})\right) . \] Notice that $\{M^{d}(x_{n},y_{n},t)\}=\{\psi_{n}\left( t\right) \}\rightarrow1$ for all $t>0$ is equivalent to saying that $\{d(x_{n},y_{n})\}\rightarrow0$. Since property $(\mathcal{B}_{5})$ holds, $\{d(Tx_{n},Ty_{n})\}\rightarrow0$, so $\{M^{d}(Tx_{n},Ty_{n},t)\}=\{\phi_{n}\left( t\right) \}\rightarrow1$ for all $t>0$. $(\mathcal{B}_{2}^{\prime})\Rightarrow(\mathcal{F}_{2}^{\prime})$.
Let $\{x_{n}\},\{y_{n}\}\subseteq X$ be two $T$-Picard sequences such that \[ x_{n}\mathcal{S}^{\ast}y_{n}\quad\text{and}\quad\theta_{\varrho}(M^{d}\left( x_{n+1},y_{n+1},t\right) ,M^{d}\left( x_{n},y_{n},t\right) )\geq0\quad\text{for all }n\in\mathbb{N}\text{ and all }t>0. \] Using $t=1$, for all $n\in\mathbb{N}$, \[ 0\leq\theta_{\varrho}(M^{d}\left( x_{n+1},y_{n+1},1\right) ,M^{d}\left( x_{n},y_{n},1\right) )=\varrho\left( d(Tx_{n},Ty_{n}),d(x_{n},y_{n})\right) . \] Since property $(\mathcal{B}_{2}^{\prime})$ holds, $\{d(x_{n},y_{n})\}\rightarrow0$, which means that $\{M^{d}\left( x_{n},y_{n},t\right) \}\rightarrow1$ for all $t>0$. \end{proof} \begin{example} If $T$ is a Banach contraction, then there is $\lambda\in\left[ 0,1\right) $ such that $d(Tx,Ty)\leq\lambda d(x,y)$ for all $x,y\in X$. In this case, $T$ is an ample spectrum contraction associated to the function $\varrho_{\lambda}(t,s)=\lambda s-t$ for all $t,s\geq0$. In such a case, for all $t,s,r>0$, \[ \varrho_{\lambda}\left( \frac{t}{r},\frac{s}{r}\right) =\lambda\frac{s}{r}-\frac{t}{r}=\frac{\lambda s-t}{r}=\frac{\varrho_{\lambda}(t,s)}{r}, \] which means that property (\ref{K38 - 51 prop}) holds. \end{example} In this general framework we introduce one of our main results. Notice that in the following result we assume that the t-norm $\ast$ is continuous at the $1$-boundary, but it need not be continuous on the whole space $\mathbb{I}\times\mathbb{I}$. \begin{theorem} \label{K38 - 31 th main FAEC}Let $\left( X,M,\ast\right) $ be a KM-FMS endowed with a transitive binary relation $\mathcal{S}$ and let $T:X\rightarrow X$ be an $\mathcal{S}$-nondecreasing fuzzy ample spectrum contraction with respect to $\left( M,\mathcal{S},\theta\right) $. Suppose that $\ast$ is continuous at the $1$-boundary, $T(X)$ is $(\mathcal{S},M)$-strictly-increasing-precomplete and there exists a point $x_{0}\in X$ such that $x_{0}\mathcal{S}Tx_{0}$.
Also assume that, at least, one of the following conditions is fulfilled: \begin{description} \item[$\left( a\right) $] $T$ is $\mathcal{S}$-strictly-increasing-continuous. \item[$\left( b\right) $] $\left( X,M\right) $ is $\mathcal{S}$-strictly-increasing-regular and condition $(\mathcal{F}_{5})$ holds. \item[$\left( c\right) $] $\left( X,M\right) $ is $\mathcal{S}$-strictly-increasing-regular and $\theta\left( t,s\right) \leq t-s$ for all $\left( t,s\right) \in B\cap(\mathbb{I}\times\mathbb{I})$. \end{description} If $\left( X,M\right) $ satisfies the property $\mathcal{NC}$, then the Picard sequence of $T$ based on $x_{0}$ converges to a fixed point of $T$. In particular, $T$ has at least one fixed point. \end{theorem} \begin{proof} Let $x_{0}\in X$ be the point such that $x_{0}\mathcal{S}Tx_{0}$ and let $\{x_{n}\}$ be the Picard sequence of $T$ starting from $x_{0}$. If there is some $n_{0}\in\mathbb{N}_{0}$ such that $x_{n_{0}}=x_{n_{0}+1}$, then $x_{n_{0}}$ is a fixed point of $T$, and the proof is finished. Otherwise, suppose that $x_{n}\neq x_{n+1}$ for all $n\in\mathbb{N}_{0}$. Since $x_{0}\mathcal{S}Tx_{0}=x_{1}$ and $T$ is $\mathcal{S}$-nondecreasing, $x_{n}\mathcal{S}x_{n+1}$ for all $n\in\mathbb{N}_{0}$. In fact, as $\mathcal{S}$ is transitive, \[ x_{n}\mathcal{S}x_{m}\quad\text{for all }n,m\in\mathbb{N}_{0}\text{ such that }n<m, \] which means that $\{x_{n}\}$ is an $\mathcal{S}$-nondecreasing sequence. Since $T$ is a fuzzy ample spectrum contraction with respect to $\left( M,\mathcal{S},\theta\right) $ and $x_{n}\mathcal{S}^{\ast}x_{n+1}$ and $Tx_{n}=x_{n+1}\mathcal{S}^{\ast}x_{n+2}=Tx_{n+1}$, then \[ \theta\left( M(x_{n+1},x_{n+2},t),M(x_{n},x_{n+1},t)\right) =\theta\left( M(Tx_{n},Tx_{n+1},t),M(x_{n},x_{n+1},t)\right) \geq0 \] for all $t>0$ and all $n\in\mathbb{N}_{0}$. Axiom $(\mathcal{F}_{2})$ guarantees that \begin{equation} \lim_{n\rightarrow\infty}M\left( x_{n},x_{n+1},t\right) =1\quad\text{for all }t>0.
\label{K38 - 17 prop}
\end{equation}
By Proposition \ref{K38 - 22 lem infinite or almost periodic}, the $T$-Picard sequence $\{x_{n}\}$ is infinite or almost periodic. If $\{x_{n}\}$ is almost periodic, then there are $n_{0},m_{0}\in\mathbb{N}$ such that $n_{0}<m_{0}$ and $x_{n_{0}}=x_{m_{0}}$. In such a case, Proposition \ref{K40 21 propo either infinite or almost-constant fuzzy} guarantees that there are $\ell_{0}\in\mathbb{N}$ and $z\in X$ such that $x_{n}=z$ for all $n\geq\ell_{0}$, so $z$ is a fixed point of $T$, and the proof is finished. Otherwise, suppose that $\{x_{n}\}$ is infinite, that is, $x_{n}\neq x_{m}$ for all $n\neq m$. This means that $\{x_{n}\}$ is an $\mathcal{S}$-strictly-increasing sequence because $x_{n}\mathcal{S}x_{m}$ and $x_{n}\neq x_{m}$, that is, $x_{n}\mathcal{S}^{\ast}x_{m}$ for all $n<m$.

Next we prove by contradiction that $\{x_{n}\}$ is an $M$-Cauchy sequence. Suppose that it is not. Since $\left( X,M\right) $ satisfies the property $\mathcal{NC}$ and $\{x_{n}\}$ is not $M$-Cauchy but satisfies (\ref{K38 - 17 prop}), there are $\varepsilon_{0}\in\left( 0,1\right) $ and $t_{0}>0$ and two subsequences $\{x_{n(k)}\}_{k\in\mathbb{N}}$ and $\{x_{m(k)}\}_{k\in\mathbb{N}}$ of $\{x_{n}\}$ such that, for all $k\in\mathbb{N}$, $k<n\left( k\right) <m\left( k\right) <n\left( k+1\right) $,
\[
M\left( x_{n\left( k\right) },x_{m\left( k\right) -1},t_{0}\right) \geq1-\varepsilon_{0}>M\left( x_{n\left( k\right) },x_{m\left( k\right) },t_{0}\right)
\]
and also
\begin{equation}
\lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) },x_{m\left( k\right) },t_{0}\right) =\lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) -1},x_{m\left( k\right) -1},t_{0}\right) =1-\varepsilon_{0}.
\label{K38 - 21 prop}
\end{equation}
Define $L=1-\varepsilon_{0}\in\left( 0,1\right) $ and, for all $t>0$ and all $k\in\mathbb{N}$,
\[
\phi_{k}\left( t\right) =M(x_{n\left( k\right) },x_{m\left( k\right) },t)\quad\text{and}\quad\psi_{k}\left( t\right) =M(x_{n\left( k\right) -1},x_{m\left( k\right) -1},t).
\]
We claim that $\{(\phi_{k},\psi_{k})\}_{k\in\mathbb{N}}$, $t_{0}$ and $L$ satisfy all hypotheses in condition $(\mathcal{F}_{3})$. First, $\{(\phi_{k},\psi_{k})\}$ is a {$(T,\mathcal{S}^{\ast},M)$}-sequence because $\phi_{k}=M(x_{n\left( k\right) },x_{m\left( k\right) },\cdot)=M(Tx_{n\left( k\right) -1},Tx_{m\left( k\right) -1},\cdot)$, $\psi_{k}=M(x_{n\left( k\right) -1},x_{m\left( k\right) -1},\cdot)$, $x_{n\left( k\right) -1}\mathcal{S}^{\ast}x_{m\left( k\right) -1}$ and $Tx_{n\left( k\right) -1}=x_{n\left( k\right) }\mathcal{S}^{\ast}x_{m\left( k\right) }=Tx_{m\left( k\right) -1}$ for all $k\in\mathbb{N}$. Furthermore, $L=1-\varepsilon_{0}>M(x_{n\left( k\right) },x_{m\left( k\right) },t_{0})=\phi_{k}\left( t_{0}\right) $ for all $k\in\mathbb{N}$. Also (\ref{K38 - 21 prop}) means that
\[
\lim_{k\rightarrow\infty}\phi_{k}\left( t_{0}\right) =\lim_{k\rightarrow\infty}\psi_{k}\left( t_{0}\right) =1-\varepsilon_{0}=L.
\]
As $T$ is a fuzzy ample spectrum contraction with respect to $\left( M,\mathcal{S},\theta\right) $, condition $(\mathcal{F}_{3})$ guarantees that $1-\varepsilon_{0}=L=1$, which is a contradiction because $\varepsilon_{0}>0$. This contradiction shows that $\{x_{n}\}$ is an $M$-Cauchy sequence in $(X,M)$.

Since $\{x_{n+1}=Tx_{n}\}\subseteq T(X)$ and $T(X)$ is $(\mathcal{S},M)$-strictly-increasing-precomplete, then there exists a set $Z$ such that $T(X)\subseteq Z\subseteq X$ and $Z$ is $(\mathcal{S},M)$-strictly-increasing-complete.
Since $\{x_{n+1}=Tx_{n}\}\subseteq T(X)\subseteq Z$ and $\{x_{n}\}$ is an $\mathcal{S}$-strictly-increasing $M$-Cauchy sequence, then there is $z\in Z\subseteq X$ such that $\{x_{n}\}$ $M$-converges to $z$. It only remains to prove that, under any of the conditions $(a)$, $(b)$ or $(c)$, $z$ is a fixed point of $T$.

\begin{description}
\item[$(a)$] Suppose that $T$ is $\mathcal{S}$-strictly-increasing-continuous. As $\{x_{n}\}$ is $\mathcal{S}$-strictly-increasing and $\{x_{n}\}$ $M$-converges to $z$, then $\{Tx_{n}\}$ $M$-converges to $Tz$. Moreover, as $Tx_{n}=x_{n+1}$ for all $n\in\mathbb{N}$ and the $M$-limit in a KM-FMS is unique, then $Tz=z$, that is, $z$ is a fixed point of $T$.

\item[$(b)$] Suppose that $\left( X,M\right) $ is $\mathcal{S}$-strictly-increasing-regular and condition $(\mathcal{F}_{5})$ holds. In this case, since $\{x_{n}\}$ is an $\mathcal{S}$-strictly-increasing sequence such that $\{x_{n}\}\rightarrow z\in X$, it follows that $x_{n}\mathcal{S}z$ for all $n\in\mathbb{N}$. Since the sequence $\{x_{n}\}$ is infinite, there is $n_{0}\in\mathbb{N}$ such that $x_{n}\neq z$ and $x_{n}\neq Tz$ for all $n\geq n_{0}-1$. Moreover, as $T$ is $\mathcal{S}$-nondecreasing, then $x_{n+1}=Tx_{n}\mathcal{S}Tz$, which means that $x_{n}\mathcal{S}^{\ast}z$ and $x_{n}\mathcal{S}^{\ast}Tz$ for all $n\geq n_{0}$. Using that $T$ is a fuzzy ample spectrum contraction with respect to $\left( M,\mathcal{S},\theta\right) $, condition $(\mathcal{F}_{4})$ implies that
\[
\theta\left( M(x_{n+1},Tz,t),M(x_{n},z,t)\right) =\theta\left( M(Tx_{n},Tz,t),M(x_{n},z,t)\right) \geq0
\]
for all $n\geq n_{0}$ and all $t>0$.
Taking into account that $\{M(x_{n},z,t)\}_{n\geq n_{0}}\rightarrow1$ for all $t>0$, assumption $(\mathcal{F}_{5})$ applied to the $(T,\mathcal{S}^{\ast},M)$-sequence
\[
\left\{ \,\left( \,\phi_{n}=M(Tx_{n},Tz,\cdot),\,\psi_{n}=M(x_{n},z,\cdot)\,\right) \,\right\} _{n\geq n_{0}}
\]
leads to $\{M(x_{n+1},Tz,t)\}_{n\geq n_{0}}\rightarrow1$ for all $t>0$, that is, $\{x_{n}\}_{n\geq n_{0}}$ $M$-converges to $Tz$. As the $M$-limit in a KM-FMS is unique, then $Tz=z$, so $z$ is a fixed point of $T$.

\item[$(c)$] Suppose that $\left( X,M\right) $ is $\mathcal{S}$-strictly-increasing-regular and $\theta\left( t,s\right) \leq t-s$ for all $\left( t,s\right) \in B\cap(\mathbb{I}\times\mathbb{I})$. This case follows from $(b)$ taking into account item \ref{K38 - 30 rem def FAEC, item 2} of Remark \ref{K38 - 30 rem def FAEC}.
\end{description}
\end{proof}

Theorem \ref{K38 - 31 th main FAEC} guarantees the existence of fixed points of $T$. In the following result we describe some additional assumptions in order to ensure that such a fixed point is unique.

\begin{theorem}
\label{K38 - 31 th main FAEC uniqueness}Under the hypotheses of Theorem \ref{K38 - 31 th main FAEC}, assume that the property $(\mathcal{F}_{2}^{\prime})$ is fulfilled and that each pair of fixed points $x,y\in\operatorname*{Fix}(T)$ is associated to another point $z\in X$ which is simultaneously $\mathcal{S}$-comparable to $x$ and to $y$. Then $T$ has a unique fixed point.
\end{theorem}

\begin{proof}
Let $x,y\in\operatorname*{Fix}(T)$ be two fixed points of $T$. By hypothesis, there exists $z_{0}\in X$ such that $z_{0}$ is simultaneously $\mathcal{S}$-comparable to $x$ and to $y$. We claim that the $T$-Picard sequence $\{z_{n}\}$ of $T$ starting from $z_{0}$ $M$-converges both to $x$ and to $y$ (from which we will deduce that $x=y$). We check the first claim by considering two possibilities.
\begin{itemize}
\item Suppose that there is $n_{0}\in\mathbb{N}$ such that $z_{n_{0}}=x$. In this case, $z_{n_{0}+1}=Tz_{n_{0}}=Tx=x$. Repeating this argument, $z_{n}=x$ for all $n\geq n_{0}$, so $\{z_{n}\}$ $M$-converges to $x$.

\item Suppose that $z_{n}\neq x$ for all $n\in\mathbb{N}$. Since $z_{0}$ is $\mathcal{S}$-comparable to $x$, assume, for instance, that $z_{0}\mathcal{S}x$ (the case $x\mathcal{S}z_{0}$ is similar). As $z_{0}\mathcal{S}x$, $T$ is $\mathcal{S}$-nondecreasing and $Tx=x$, then $z_{n}\mathcal{S}x$ for all $n\in\mathbb{N}$. Therefore $z_{n}\mathcal{S}^{\ast}x$ and $Tz_{n}\mathcal{S}^{\ast}Tx$ for all $n\in\mathbb{N}$. Using the contractivity condition $(\mathcal{F}_{4})$, for all $n\in\mathbb{N}$ and all $t>0$,
\[
0\leq\theta(M(Tz_{n},Tx,t),M(z_{n},x,t))=\theta(M(T^{n+1}z_{0},T^{n+1}x,t),M(T^{n}z_{0},T^{n}x,t)).
\]
It follows from $(\mathcal{F}_{2}^{\prime})$ that $\{M(z_{n},x,t)\}=\{M(T^{n}z_{0},T^{n}x,t)\}\rightarrow1$ for all $t>0$, that is, $\{z_{n}\}$ $M$-converges to $x$.
\end{itemize}

In any case, $\{z_{n}\}\rightarrow x$ and, similarly, $\{z_{n}\}\rightarrow y$, so $x=y$ and $T$ has a unique fixed point.
\end{proof}

\begin{corollary}
Under the hypotheses of Theorem \ref{K38 - 31 th main FAEC}, assume that condition $(\mathcal{F}_{2}^{\prime})$ holds and that any two fixed points of $T$ are $\mathcal{S}$-comparable. Then $T$ has a unique fixed point.
\end{corollary}

We can immediately deduce that Theorems \ref{K38 - 31 th main FAEC} and \ref{K38 - 31 th main FAEC uniqueness} remain true if we replace the hypothesis that $(X,M)$ satisfies the property $\mathcal{NC}$ by the fact that $(X,M,\ast)$ is a non-Archimedean KM-FMS (recall Theorem \ref{K38 - 24 th Non-Arch impies NC}). Given its great importance in this manuscript, we state the complete result here.
\begin{corollary}
Let $\left( X,M,\ast\right) $ be a non-Archimedean KM-FMS endowed with a transitive binary relation $\mathcal{S}$ and let $T:X\rightarrow X$ be an $\mathcal{S}$-nondecreasing fuzzy ample spectrum contraction with respect to $\left( M,\mathcal{S},\theta\right) $. Suppose that $\ast$ is continuous at the $1$-boundary, $T(X)$ is $(\mathcal{S},M)$-strictly-increasing-precomplete and there exists a point $x_{0}\in X$ such that $x_{0}\mathcal{S}Tx_{0}$. Also assume that, at least, one of the following conditions is fulfilled:

\begin{description}
\item[$(a)$] $T$ is $\mathcal{S}$-strictly-increasing-continuous.

\item[$(b)$] $\left( X,M\right) $ is $\mathcal{S}$-strictly-increasing-regular and condition $(\mathcal{F}_{5})$ holds.

\item[$(c)$] $\left( X,M\right) $ is $\mathcal{S}$-strictly-increasing-regular and $\theta\left( t,s\right) \leq t-s$ for all $\left( t,s\right) \in B\cap(\mathbb{I}\times\mathbb{I})$.
\end{description}

Then the Picard sequence of $T$ based on $x_{0}$ converges to a fixed point of $T$ (in particular, $T$ has at least one fixed point). In addition, assume that property $(\mathcal{F}_{2}^{\prime})$ is fulfilled and that for all $x,y\in\operatorname*{Fix}(T)$, there exists $z\in X$ which is simultaneously $\mathcal{S}$-comparable to $x$ and to $y$. Then $T$ has a unique fixed point.
\end{corollary}

We highlight that the previous results also hold in GV-FMS (but we omit the corresponding statements).

\subsection{\textbf{Type-2 fuzzy ample spectrum contractions}}

As we commented in Remark \ref{K38 - 32 rem Km to GV}, Definition \ref{definition KM-space} is so general that a fuzzy space in this class may satisfy
\[
M(x,y,t)=0\text{\quad for all }t>0\text{\quad when }x\neq y,
\]
which corresponds to an infinite distance.
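For instance, the following simple construction (which we sketch here only for illustration) produces such a space: take $X=\{a,b\}$ with $a\neq b$ and define $M(x,x,t)=1$ and $M(a,b,t)=M(b,a,t)=0$ for all $t>0$ (with $M(x,y,0)=0$ for all $x,y\in X$). The triangle inequality reduces to $M(a,b,t+s)=0\geq M(a,a,t)\ast M(a,b,s)=0$, so $\left( X,M,\ast\right) $ is a KM-FMS for any t-norm $\ast$, and it can be interpreted as the fuzzy version of the extended metric given by $d(a,b)=\infty$ under the correspondence $M^{d}(x,y,t)=t/(t+d(x,y))$.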
In such cases, although a sequence $\{x_{n}\}$ satisfies
\[
\theta\left( M\left( x_{n+1},x_{n+2},t\right) ,M\left( x_{n},x_{n+1},t\right) \right) \geq0\quad\text{for all }n\in\mathbb{N}\text{ and all }t>0,
\]
it is impossible to deduce that $\lim_{n\rightarrow\infty}M\left( x_{n},x_{n+1},t\right) =1$ for all $t>0$. Therefore, in order to cover the fixed point theorems that were proved under this assumption, we must slightly modify the conditions that a fuzzy ample spectrum contraction satisfies. In this case, the following new type of contractions may be considered.

\begin{definition}
\label{K38 - 53 def type-2 FASC}Let $\left( X,M\right) $ be a fuzzy space, let $T:X\rightarrow X$ be a self-mapping, let $\mathcal{S}$ be a binary relation on $X$, let $B\subseteq\mathbb{R}^{2}$ be a subset and let $\theta:B\rightarrow\mathbb{R}$ be a function. We will say that $T:X\rightarrow X$ is a \emph{type-2 fuzzy ample spectrum contraction w.r.t. }$\left( M,\mathcal{S},\theta\right) $ if it satisfies properties $(\mathcal{F}_{1})$, $(\mathcal{F}_{3})$ and the following ones:

\begin{description}
\item[$(\widetilde{\mathcal{F}}_{2})$] If $\{x_{n}\}\subseteq X$ is an $\mathcal{S}$-strictly-increasing Picard sequence of $T$ such that, for all $n\in\mathbb{N}$ and all $t>0$,
\[
M\left( x_{n},x_{n+1},t\right) >0\quad\text{and}\quad\theta\left( M\left( x_{n+1},x_{n+2},t\right) ,M\left( x_{n},x_{n+1},t\right) \right) \geq0,
\]
then $\lim_{n\rightarrow\infty}M\left( x_{n},x_{n+1},t\right) =1$ for all $t>0$.

\item[$(\widetilde{\mathcal{F}}_{4})$] If {$x,y\in X$ are such that $x\mathcal{S}^{\ast}y$ and $Tx\mathcal{S}^{\ast}Ty$ and }$t_{0}>0$ is such that $M(x,y,t_{0})>0$, then
\[
{\theta\left( M(Tx,Ty,t_{0}),M(x,y,t_{0})\right) \geq0.}
\]
\end{description}
\end{definition}

In some cases, we will also consider the following properties.
\begin{description}
\item[$(\widetilde{\mathcal{F}}_{2}^{\prime})$] If $\{x_{n}\},\{y_{n}\}\subseteq X$ are two $T$-Picard sequences such that, for all $n\in\mathbb{N}$ and all $t>0$,
\[
x_{n}\mathcal{S}^{\ast}y_{n},\quad{M(x_{n},y_{n},t)>0}\quad\text{and}\quad{\theta(M\left( x_{n+1},y_{n+1},t\right) ,M\left( x_{n},y_{n},t\right) )\geq0,}
\]
then $\lim_{n\rightarrow\infty}{M}\left( x_{n},y_{n},t\right) =1$ for all $t>0$.

\item[$(\widetilde{\mathcal{F}}_{5})$] If $\{\left( \phi_{n},\psi_{n}\right) \}$ is a $(T,\mathcal{S}^{\ast},M)$-sequence such that $\phi_{n}(t)>0$, $\psi_{n}(t)>0$ and $\{\psi_{n}\left( t\right) \}\rightarrow1$ for all $t>0$, and also $\theta(\phi_{n}(t),\psi_{n}(t))\geq0$ for all $n\in\mathbb{N}$ and all $t>0$, then $\{\phi_{n}\left( t\right) \}\rightarrow1$ for all $t>0$.
\end{description}

It is clear that $(\mathcal{F}_{i})\Rightarrow(\widetilde{\mathcal{F}}_{i})$ for all $i\in\{2,4,5\}$ and also $(\mathcal{F}_{2}^{\prime})\Rightarrow(\widetilde{\mathcal{F}}_{2}^{\prime})$, so each type-1 fuzzy ample spectrum contraction is also a type-2 fuzzy ample spectrum contraction; that is, the type-2 notion is more general than the type-1 notion.

\begin{lemma}
\label{K38 - 52 lem equal notion in GV-FMS}In a GV-FMS, the notions of type-1 and type-2 fuzzy ample spectrum contractions coincide.
\end{lemma}

\begin{proof}
It follows from the fact that, in a GV-FMS, $M(x,y,t)>0$ for all $x,y\in X$ and all $t>0$. Hence the respective properties $(\mathcal{F}_{i})$ and $(\widetilde{\mathcal{F}}_{i})$ coincide.
\end{proof}

However, this generality forces us to assume additional constraints in order to guarantee existence and uniqueness of fixed points, as we show in the next result.
\begin{theorem}
\label{K38 - 33 th main FAEC type-2}Let $\left( X,M,\ast\right) $ be a KM-FMS endowed with a transitive binary relation $\mathcal{S}$ and let $T:X\rightarrow X$ be an $\mathcal{S}$-nondecreasing type-2 fuzzy ample spectrum contraction with respect to $\left( M,\mathcal{S},\theta\right) $. Suppose that $\ast$ is continuous at the $1$-boundary, $T(X)$ is $(\mathcal{S},M)$-strictly-increasing-precomplete and there exists a point $x_{0}\in X$ such that $x_{0}\mathcal{S}Tx_{0}$ and $M(x_{0},Tx_{0},t)>0$ for all $t>0$. Also suppose that the function $\theta$ satisfies:
\begin{equation}
(t,s)\in B,\quad\theta\left( t,s\right) \geq0,\quad s>0\quad\Rightarrow\quad t>0. \label{K38 - 34 prop theta}
\end{equation}
Assume that, at least, one of the following conditions is fulfilled:

\begin{description}
\item[$\left( a\right) $] $T$ is $\mathcal{S}$-strictly-increasing-continuous.

\item[$\left( b\right) $] $\left( X,M\right) $ is metrically-$\mathcal{S}$-strictly-increasing-regular and condition $(\widetilde{\mathcal{F}}_{5})$ holds.

\item[$\left( c\right) $] $\left( X,M\right) $ is metrically-$\mathcal{S}$-strictly-increasing-regular and $\theta\left( t,s\right) \leq t-s$ for all $\left( t,s\right) \in B\cap(\mathbb{I}\times\mathbb{I})$.
\end{description}

If $\left( X,M\right) $ satisfies the property $\mathcal{NC}$, then the Picard sequence of $T$ based on $x_{0}$ converges to a fixed point of $T$. In particular, $T$ has at least one fixed point.
\end{theorem}

\begin{proof}
We can repeat many of the arguments shown in the proof of Theorem \ref{K38 - 31 th main FAEC}, but we must refine them. Let $x_{0}\in X$ be the point such that $x_{0}\mathcal{S}Tx_{0}$ and $M(x_{0},Tx_{0},t)>0$ for all $t>0$. Let $\{x_{n}\}$ be the Picard sequence of $T$ starting from $x_{0}$. Assume that $x_{n}\neq x_{n+1}$ for all $n\in\mathbb{N}_{0}$ and $x_{n}\mathcal{S}x_{m}$ for all $n,m\in\mathbb{N}_{0}$ such that $n<m$.
Since $T$ is a type-2 fuzzy ample spectrum contraction with respect to $\left( M,\mathcal{S},\theta\right) $ and $x_{0}\mathcal{S}^{\ast}Tx_{0}=x_{1}$, $Tx_{0}=x_{1}\mathcal{S}^{\ast}x_{2}=Tx_{1}$ and $M(x_{0},x_{1},t)=M(x_{0},Tx_{0},t)>0$, then
\[
{\theta\left( M(x_{1},x_{2},t),M(x_{0},x_{1},t)\right) ={\theta\left( M(Tx_{0},Tx_{1},t),M(x_{0},x_{1},t)\right) }\geq0}
\]
{for all }$t>0${. Using property (\ref{K38 - 34 prop theta}), it follows that, for all }$t>0$,
\[
{\theta\left( M(x_{1},x_{2},t),M(x_{0},x_{1},t)\right) \geq0,}\quad M(x_{0},x_{1},t)>0\quad\Rightarrow\quad M(x_{1},x_{2},t)>0.
\]
By induction, it can be proved that
\begin{equation}
M(x_{n},x_{n+1},t)>0\quad\text{for all }t>0\text{ and all }n\in\mathbb{N}_{0} \label{K38 - 35 prop}
\end{equation}
and
\[
{\theta\left( M(x_{n+1},x_{n+2},t),M(x_{n},x_{n+1},t)\right) \geq0}\quad\text{for all }t>0\text{ and all }n\in\mathbb{N}_{0}.
\]
Hence we can apply property $(\widetilde{\mathcal{F}}_{2})$ and we deduce that
\[
\lim_{n\rightarrow\infty}M\left( x_{n},x_{n+1},t\right) =1\quad\text{for all }t>0.
\]
Following the same arguments given in the proof of Theorem \ref{K38 - 31 th main FAEC}, we can restrict ourselves to the case in which $\{x_{n}\}$ is infinite, where we know that there is $z\in X$ such that $\{x_{n}\}$ $M$-converges to $z$. It only remains to prove that, under any of conditions $(a)$, $(b)$ or $(c)$, $z$ is a fixed point of $T$. In case $(a)$, the proof of Theorem \ref{K38 - 31 th main FAEC} can be repeated.

\begin{description}
\item[$(b)$] Suppose that $\left( X,M\right) $ is metrically-$\mathcal{S}$-strictly-increasing-regular and condition $(\widetilde{\mathcal{F}}_{5})$ holds. In this case, since $\{x_{n}\}$ is an $\mathcal{S}$-strictly-increasing sequence such that $\{x_{n}\}\rightarrow z\in X$ and (\ref{K38 - 35 prop}) holds, it follows that
\[
x_{n}\mathcal{S}z\quad\text{and}\quad M(x_{n},z,t)>0\quad\text{for all }n\in\mathbb{N}\text{ and all }t>0.
\]
Since the sequence $\{x_{n}\}$ is infinite, there is $n_{0}\in\mathbb{N}$ such that $x_{n}\neq z$ and $x_{n}\neq Tz$ for all $n\geq n_{0}-1$. Moreover, as $T$ is $\mathcal{S}$-nondecreasing, then $x_{n+1}=Tx_{n}\mathcal{S}Tz$, which means that $x_{n}\mathcal{S}^{\ast}z$ and $x_{n}\mathcal{S}^{\ast}Tz$ for all $n\geq n_{0}$. Using that $T$ is a type-2 fuzzy ample spectrum contraction with respect to $\left( M,\mathcal{S},\theta\right) $ and $M(x_{n},z,t)>0$ for all $n\in\mathbb{N}$ and all $t>0$, condition $(\widetilde{\mathcal{F}}_{4})$ implies that
\[
\theta\left( M(x_{n+1},Tz,t),M(x_{n},z,t)\right) =\theta\left( M(Tx_{n},Tz,t),M(x_{n},z,t)\right) \geq0
\]
for all $n\geq n_{0}$ and all $t>0$. Furthermore, property (\ref{K38 - 34 prop theta}) ensures that
\[
\theta\left( M(x_{n+1},Tz,t),M(x_{n},z,t)\right) \geq0,\quad M(x_{n},z,t)>0\quad\Rightarrow\quad M(x_{n+1},Tz,t)>0
\]
for all $n\geq n_{0}$ and all $t>0$. Taking into account that $\{M(x_{n},z,t)\}_{n\geq n_{0}}\rightarrow1$ for all $t>0$, assumption $(\widetilde{\mathcal{F}}_{5})$ applied to the $(T,\mathcal{S}^{\ast},M)$-sequence
\[
\left\{ \,\left( \,\phi_{n}=M(Tx_{n},Tz,\cdot),\,\psi_{n}=M(x_{n},z,\cdot)\,\right) \,\right\} _{n\geq n_{0}}
\]
leads to $\{M(x_{n+1},Tz,t)\}_{n\geq n_{0}}\rightarrow1$ for all $t>0$, that is, $\{x_{n}\}_{n\geq n_{0}}$ $M$-converges to $Tz$. As the $M$-limit in a KM-FMS is unique, then $Tz=z$, so $z$ is a fixed point of $T$.
\end{description}
\end{proof}

In this context, it is also possible to prove a similar uniqueness result.

\begin{theorem}
\label{K38 - 42 th main FAEC type-2 uniqueness}Under the hypotheses of Theorem \ref{K38 - 33 th main FAEC type-2}, assume that property $(\mathcal{F}_{2}^{\prime})$ is fulfilled and that for all $x,y\in\operatorname*{Fix}(T)$, there exists $z\in X$ which is simultaneously $\mathcal{S}$-comparable to $x$ and to $y$. Then $T$ has a unique fixed point.
\end{theorem}

\begin{proof}
All arguments of the proof of Theorem \ref{K38 - 31 th main FAEC uniqueness} can be repeated in this context.
\end{proof}

\begin{corollary}
Theorems \ref{K38 - 33 th main FAEC type-2} and \ref{K38 - 42 th main FAEC type-2 uniqueness} remain true if we replace the hypothesis that $(X,M)$ satisfies the property $\mathcal{NC}$ by the fact that $(X,M,\ast)$ is a non-Archimedean KM-FMS.
\end{corollary}

\begin{proof}
It follows from Theorem \ref{K38 - 24 th Non-Arch impies NC}.
\end{proof}

\section{\textbf{Consequences}}

In this section we show some direct consequences of our main results.

\subsection{\textbf{Mihe\textrm{\c{t}}'s fuzzy }$\psi$\textbf{-contractions}}

In \cite{Mi3} Mihe\c{t} introduced a class of contractions in the setting of KM-fuzzy metric spaces that attracted much attention. It was defined by considering the following family of auxiliary functions. Let $\Psi$ be the family of all continuous and nondecreasing functions $\psi:\mathbb{I}\rightarrow\mathbb{I}$ satisfying $\psi\left( t\right) >t$ for all $t\in\left( 0,1\right) $. Notice that if $\psi\in\Psi$, then $\psi\left( 0\right) \geq0$ and $\psi\left( 1\right) =1$, so $\psi\left( t\right) \geq t$ for all $t\in\mathbb{I}$.

\begin{definition}
\label{K38 - 37 def Mihet}\textrm{(Mihe\c{t} \cite{Mi3}, Definition 3.1)} Given a KM-FMS $\left( X,M,\ast\right) $ (it is assumed that $\ast$ is continuous), a mapping $T:X\rightarrow X$ is a \emph{fuzzy }$\psi$\emph{-contraction} if there is $\psi\in\Psi$ such that, for all $x,y\in X$ and all $t>0$,
\begin{equation}
M\left( x,y,t\right) >0\quad\Rightarrow\quad M\left( Tx,Ty,t\right) \geq\psi\left( M\left( x,y,t\right) \right) .
\label{K38 - 37 def Mihet, prop}
\end{equation}
\end{definition}

\begin{theorem}
\label{K38 - 38 th Mihet}\textrm{(Mihe\c{t} \cite{Mi3}, Theorem 3.1)} Let $(X,M,\ast)$ be an $M$-complete non-Archimedean KM-FMS (it is assumed that $\ast$ is continuous) and let $T:X\rightarrow X$ be a fuzzy $\psi$-contractive mapping. If there exists $x\in X$ such that $M(x,Tx,t)>0$ for all $t>0$, then $T$ has a fixed point.
\end{theorem}

We show that the class of fuzzy ample spectrum contractions properly contains the class of Mihe\c{t}'s $\psi$-contractions.

\begin{theorem}
\label{K38 - 39 th Mihet implies type-2}Given a Mihe\c{t}'s fuzzy $\psi$-contraction $T:X\rightarrow X$ in a KM-FMS $(X,M,\ast)$, define $\theta_{\psi}:\mathbb{I}\times\mathbb{I}\rightarrow\mathbb{R}$ by $\theta_{\psi}\left( t,s\right) =t-\psi\left( s\right) $ for all $t,s\in\mathbb{I}$. Then $T$ is a type-2 fuzzy ample spectrum contraction w.r.t. $(M,\mathcal{S}_{X},\theta_{\psi})$ that also satisfies properties $(\widetilde{\mathcal{F}}_{2}^{\prime})$, $(\widetilde{\mathcal{F}}_{5})$, (\ref{K38 - 34 prop theta}) and $\theta_{\psi}\left( t,s\right) \leq t-s$ for all $t,s\in\mathbb{I}$.
\end{theorem}

\begin{proof}
Let $T:X\rightarrow X$ be a $\psi$-contraction in a KM-FMS $\left( X,M,\ast\right) $. Consider on $X$ the trivial binary relation $\mathcal{S}_{X}$ defined by $x\mathcal{S}_{X}y$ for all $x,y\in X$. Let $B=\mathbb{I}\times\mathbb{I}$ and define $\theta_{\psi}:\mathbb{I}\times\mathbb{I}\rightarrow\mathbb{R}$ by $\theta_{\psi}\left( t,s\right) =t-\psi\left( s\right) $ for all $t,s\in\mathbb{I}$. We claim that $T$ is a type-2 fuzzy ample spectrum contraction w.r.t. $\left( M,\mathcal{S}_{X},\theta_{\psi}\right) $ that also satisfies properties $(\widetilde{\mathcal{F}}_{2}^{\prime})$, $(\widetilde{\mathcal{F}}_{5})$, (\ref{K38 - 34 prop theta}) and $\theta_{\psi}\left( t,s\right) \leq t-s$ for all $t,s\in\mathbb{I}$. We check all conditions.
(\ref{K38 - 34 prop theta}) If $t,s\in\mathbb{I}$ are such that $\theta_{\psi}\left( t,s\right) \geq0$ and $s>0$, then $t-\psi\left( s\right) \geq0$, so $t\geq\psi\left( s\right) \geq s>0$. In fact, since $\psi\left( s\right) \geq s$ for all $s\in\mathbb{I}$,
\[
\theta_{\psi}\left( t,s\right) =t-\psi\left( s\right) \leq t-s\quad\text{for all }t,s\in\mathbb{I}.
\]

$(\mathcal{F}_{1})$. It is obvious because $B=\mathbb{I}\times\mathbb{I}$.

$(\widetilde{\mathcal{F}}_{4})$. Let{\ $x,y\in X$ be two points such that $x\mathcal{S}_{X}^{\ast}y$ and $Tx\mathcal{S}_{X}^{\ast}Ty$ and let }$t>0$ be such that{\ }$M(x,y,t)>0${. Since }$T$ is a fuzzy $\psi$-contraction, {by (\ref{K38 - 37 def Mihet, prop}),}
\[
M\left( x,y,t\right) >0\quad\Rightarrow\quad M\left( Tx,Ty,t\right) \geq\psi\left( M\left( x,y,t\right) \right) .
\]
Therefore
\[
\theta_{\psi}\left( M\left( Tx,Ty,t\right) ,M\left( x,y,t\right) \right) =M\left( Tx,Ty,t\right) -\psi\left( M\left( x,y,t\right) \right) \geq0.
\]

$(\widetilde{\mathcal{F}}_{2}^{\prime})$. Let $x_{1},x_{2}\in X$ be two points such that, for all $n\in\mathbb{N}$ and all $t>0$,
\[
T^{n}x_{1}\mathcal{S}_{X}^{\ast}T^{n}x_{2},\quad{M(T^{n}x_{1},T^{n}x_{2},t)>0}\quad\text{and}\quad{\theta}_{\psi}{(M(T^{n+1}x_{1},T^{n+1}x_{2},t),M(T^{n}x_{1},T^{n}x_{2},t))\geq0.}
\]
Therefore
\begin{align*}
0 & \leq{\theta}_{\psi}{(M(T^{n+1}x_{1},T^{n+1}x_{2},t),M(T^{n}x_{1},T^{n}x_{2},t))}\\
& ={M(T^{n+1}x_{1},T^{n+1}x_{2},t)-\psi(M(T^{n}x_{1},T^{n}x_{2},t)).}
\end{align*}
Hence ${\psi(M(T^{n}x_{1},T^{n}x_{2},t))\leq M(T^{n+1}x_{1},T^{n+1}x_{2},t)}$, which means that, for all $n\in\mathbb{N}$ and all $t>0$,
\begin{equation}
{0<M(T^{n}x_{1},T^{n}x_{2},t)\leq\psi(M(T^{n}x_{1},T^{n}x_{2},t))\leq M(T^{n+1}x_{1},T^{n+1}x_{2},t)\leq1.} \label{K38 - 40 prop}
\end{equation}
As a consequence, for each $t>0$, the sequence $\{{M(T^{n}x_{1},T^{n}x_{2},t)}\}_{n\in\mathbb{N}}$ is nondecreasing and bounded above. Hence, it is convergent.
If $L(t)=\lim_{n\rightarrow\infty}{M(T^{n}x_{1},T^{n}x_{2},t)}$, letting $n\rightarrow\infty$ in (\ref{K38 - 40 prop}) and taking into account that $\psi$ is continuous, we deduce that
\[
L(t)\leq\psi\left( L\left( t\right) \right) \leq L(t).
\]
This means that $\psi\left( L\left( t\right) \right) =L\left( t\right) $; since $\psi(r)>r$ for all $r\in\left( 0,1\right) $, necessarily $L\left( t\right) \in\{0,1\}$. However, as $L(t)\geq{M(T^{n}x_{1},T^{n}x_{2},t)}>0$, then $L(t)=1$. This proves that $\lim_{n\rightarrow\infty}{M(T^{n}x_{1},T^{n}x_{2},t)}=1$ for all $t>0$.

$(\widetilde{\mathcal{F}}_{2})$. It follows from $(\widetilde{\mathcal{F}}_{2}^{\prime})$ using $x_{2}=Tx_{1}$.

$(\mathcal{F}_{3})$. Let $\{\left( \phi_{n},\psi_{n}\right) \}$ be a $(T,\mathcal{S}^{\ast},M)$-sequence and let $t_{0}>0$ be such that $\{\phi_{n}\left( t_{0}\right) \}$ and $\{\psi_{n}\left( t_{0}\right) \}$ converge to the same limit $L\in\mathbb{I}$, which satisfies $L>\phi_{n}\left( t_{0}\right) $ and $\theta_{\psi}(\phi_{n}\left( t_{0}\right) ,\psi_{n}\left( t_{0}\right) )\geq0$ for all $n\in\mathbb{N}$. Therefore, for all $n\in\mathbb{N}$,
\[
0\leq\theta_{\psi}(\phi_{n}\left( t_{0}\right) ,\psi_{n}\left( t_{0}\right) )=\phi_{n}\left( t_{0}\right) -\psi\left( \psi_{n}\left( t_{0}\right) \right) ,
\]
so
\[
\psi\left( \psi_{n}\left( t_{0}\right) \right) \leq\phi_{n}\left( t_{0}\right) \quad\text{for all }n\in\mathbb{N}.
\]
Letting $n\rightarrow\infty$, as $\psi$ is continuous, we deduce that $\psi\left( L\right) \leq L$, so $L\in\{0,1\}$. However, as $L>\phi_{n}\left( t_{0}\right) \geq0$, then $L=1$.

$(\widetilde{\mathcal{F}}_{5})$. It follows from the fact that $\theta_{\psi}\left( t,s\right) =t-\psi\left( s\right) \leq t-s$ for all $t,s\in\mathbb{I}$.
\end{proof}

The previous result means that each Mihe\c{t}'s fuzzy $\psi$-contraction in a KM-FMS is a type-2 fuzzy ample spectrum contraction that also satisfies properties $(\widetilde{\mathcal{F}}_{2}^{\prime})$, $(\widetilde{\mathcal{F}}_{5})$, (\ref{K38 - 34 prop theta}) and $\theta_{\psi}\left( t,s\right) \leq t-s$ for all $t,s\in\mathbb{I}$. Furthermore, we are going to show that it is also $\mathcal{S}_{X}$-strictly-increasing-continuous.

\begin{lemma}
\label{K38 - 48 lem Mihet contraction is nondecreasing}Each Mihe\c{t}'s fuzzy $\psi$-contraction $T:X\rightarrow X$ in a KM-FMS $(X,M,\ast)$ is $\mathcal{S}_{X}$-strictly-increasing-continuous.
\end{lemma}

\begin{proof}
Let $\{x_{n}\}\subseteq X$ be an $\mathcal{S}_{X}$-strictly-increasing sequence such that $\{x_{n}\}$ $M$-converges to $z\in X$. Let $t_{0}>0$ be arbitrary. Since $\lim_{n\rightarrow\infty}M(x_{n},z,t_{0})=1$, then there is $n_{0}\in\mathbb{N}$ such that $M(x_{n},z,t_{0})>0$ for all $n\geq n_{0}$. Theorem \ref{K38 - 39 th Mihet implies type-2} ensures that $T$ is a type-2 fuzzy ample spectrum contraction w.r.t. $(M,\mathcal{S}_{X},\theta_{\psi})$, where $\theta_{\psi}:\mathbb{I}\times\mathbb{I}\rightarrow\mathbb{R}$ is given by $\theta_{\psi}\left( t,s\right) =t-\psi\left( s\right) $ for all $t,s\in\mathbb{I}$. Applying property $(\widetilde{\mathcal{F}}_{4})$, we deduce that, for all $n\geq n_{0}$,
\[
0\leq\theta_{\psi}(M(Tx_{n},Tz,t_{0}),M(x_{n},z,t_{0}))=M(Tx_{n},Tz,t_{0})-\psi(M(x_{n},z,t_{0})).
\]
As a result, $\psi(M(x_{n},z,t_{0}))\leq M(Tx_{n},Tz,t_{0})\leq1$ for all $n\geq n_{0}$. Letting $n\rightarrow\infty$, we deduce that
\[
1=\psi(1)=\lim_{n\rightarrow\infty}\psi(M(x_{n},z,t_{0}))\leq\lim_{n\rightarrow\infty}M(Tx_{n},Tz,t_{0})\leq1,
\]
so $\lim_{n\rightarrow\infty}M(Tx_{n},Tz,t_{0})=1$. Since $t_{0}>0$ was arbitrary, we deduce that $\{Tx_{n}\}$ $M$-converges to $Tz$, so $T$ is $\mathcal{S}_{X}$-strictly-increasing-continuous.
\end{proof}

We can conclude that Mihe\c{t}'s theorem (Theorem \ref{K38 - 38 th Mihet}) is a particular case of our main results.

\begin{corollary}
Theorem \ref{K38 - 38 th Mihet} follows from Theorems \ref{K38 - 33 th main FAEC type-2} and \ref{K38 - 42 th main FAEC type-2 uniqueness}.
\end{corollary}

\begin{proof}
Let $(X,M,\ast)$ be an $M$-complete non-Archimedean KM-FMS (it is assumed that $\ast$ is continuous) and let $T:X\rightarrow X$ be a fuzzy $\psi$-contractive mapping. Suppose that there exists $x_{0}\in X$ such that $M(x_{0},Tx_{0},t)>0$ for all $t>0$. Given $\psi\in\Psi$, define $\theta_{\psi}\left( t,s\right) =t-\psi\left( s\right) $ for all $t,s\in\mathbb{I}$. Theorem \ref{K38 - 39 th Mihet implies type-2} guarantees that $T$ is a type-2 fuzzy ample spectrum contraction w.r.t. $(M,\mathcal{S}_{X},\theta_{\psi})$ that also satisfies properties $(\widetilde{\mathcal{F}}_{2}^{\prime})$, $(\widetilde{\mathcal{F}}_{5})$, (\ref{K38 - 34 prop theta}) and $\theta_{\psi}\left( t,s\right) \leq t-s$ for all $t,s\in\mathbb{I}$. Notice that $T(X)$ is $(\mathcal{S}_{X},M)$-strictly-increasing-precomplete because $X$ is $M$-complete. By Lemma \ref{K38 - 48 lem Mihet contraction is nondecreasing}, $T$ is $\mathcal{S}_{X}$-strictly-increasing-continuous. Item $(a)$ of Theorem \ref{K38 - 33 th main FAEC type-2} demonstrates that $T$ has, at least, one fixed point. Furthermore, as $(\widetilde{\mathcal{F}}_{2}^{\prime})$ holds and any two fixed points of $T$ are $\mathcal{S}_{X}$-comparable, then Theorem \ref{K38 - 42 th main FAEC type-2 uniqueness} implies that $T$ has a unique fixed point.
\end{proof}

\begin{example}
Mihe\c{t}'s $\psi$-contractions include Radu's contractions \cite{Radu} (which, in turn, generalize Gregori and Sapena's contractions \cite{GrSa}) satisfying:
\[
M(Tx,Ty,t)\geq\frac{M(x,y,t)}{M(x,y,t)+k(1-M(x,y,t))}\quad\text{for all }x,y\in X\text{ and all }t>0,
\]
where $k\in\left( 0,1\right) $.
Therefore, if we consider the function $\theta:\mathbb{I}\times\mathbb{I}\rightarrow\mathbb{R}$ given, for all $t,s\in\mathbb{I}$, by
\[
\theta\left( t,s\right) =t-\frac{s}{s+k\left( 1-s\right) },
\]
then each Radu's contraction is also a fuzzy ample spectrum contraction associated to $\theta$.
\end{example}

\begin{remark}
In \cite{Mi3} Mihe\c{t} posed the following question: \emph{Does Theorem 3.1 remain true if \textquotedblleft non-Archimedean fuzzy metric space\textquotedblright\ is replaced by \textquotedblleft fuzzy metric space\textquotedblright?} Here we cannot give a general answer, but we can say that if $(X,M)$ satisfies the property $\mathcal{NC}$ and $\ast$ is continuous at the $1$-boundary, then his Theorem 3.1 remains true.
\end{remark}

\begin{remark}
In order to conclude the current subsection, we wish to highlight one of the main advantages of the contractivity condition described in $(\mathcal{F}_{4})$, that is,
\[
{\theta\left( M(Tx,Ty,t),M(x,y,t)\right) \geq0}
\]
\emph{versus} Mihe\c{t}'s inequality
\[
M\left( Tx,Ty,t\right) \geq\psi\left( M\left( x,y,t\right) \right) .
\]
To explain it, let us call $u=M\left( x,y,t\right) $ and $v=M\left( Tx,Ty,t\right) $. Then the first inequality is equivalent to
\[
{\theta\left( v,u\right) \geq0,}
\]
and the second one can be expressed as
\[
v\geq\psi(u).
\]
In this sense, Mihe\c{t}'s inequality can be interpreted as a condition in separate variables, that is, $u$ and $v$ are placed on distinct sides of the inequality. However, from the inequality ${\theta\left( v,u\right) \geq0}$, in general, it is impossible to deduce a relationship in separate variables such as $v\geq\psi(u)$. As a consequence, it is often easier to check that a self-mapping $T:X\rightarrow X$ satisfies a general condition such as ${\theta\left( v,u\right) \geq0}$ than a more restrictive contractivity condition such as $v\geq\psi(u)$.
To illustrate this advantage, we show how the canonical examples of contractions in the setting of fixed point theory, that is, Banach contractions in metric spaces, can easily be seen as fuzzy ample spectrum contractions but, in general, it is complex to prove that they are also Mihe\c{t}'s fuzzy $\psi$-contractions. \end{remark} \begin{example} Let us consider the fuzzy metric $M^{d}$ on $X=\left[ 0,\infty\right) $ associated to the Euclidean metric $d(x,y)=\left\vert x-y\right\vert $ for all $x,y\in X$. Given $\lambda\in\left( 0,1\right) $, let $T:X\rightarrow X$ be the self-mapping given by \[ Tx=\lambda\ln\left( 1+x\right) \qquad\text{for all }x\in X\text{.} \] Although $T$ is a Banach contraction associated to the constant $\lambda$, that is, $d(Tx,Ty)\leq\lambda d(x,y)$ for all $x,y\in X$, in general, it is not easy to prove that $T$ is a Mihe\c{t} $\psi$-contraction because we must determine a function $\psi:\mathbb{I}\rightarrow\mathbb{I}$, satisfying certain properties, and also such that \[ \frac{t}{t+\lambda\left\vert \, \ln\dfrac{1+x}{1+y} \, \right\vert }\geq\psi\left( \frac{t}{t+\left\vert x-y\right\vert }\right) \qquad\text{for all }x,y\in X\text{ and all }t>0\text{.} \] To show that $T$ is a fuzzy contraction in $(X,M^{d})$ it could be better to employ other methodologies, involving terms such as $M^{d}(x,y,\lambda t)$, rather than Mihe\c{t}'s procedure. However, by handling the function $\theta$ defined as \[ \theta\left( t,s\right) =\lambda\frac{1-s}{s}-\frac{1-t}{t}\qquad\text{for all }t,s\in\left( 0,1\right] , \] it can be directly checked that $T$ is a fuzzy ample spectrum contraction because, for all $x,y\in X$ and all $t>0$: \begin{align*} \theta(M^{d}\left( Tx,Ty,t\right) ,M^{d}\left( x,y,t\right) ) = \lambda \, \frac{~1-\dfrac{t}{t+d(x,y)}~}{\dfrac{t}{t+d(x,y)}}-\frac{~1-\dfrac {t}{t+d(Tx,Ty)}~}{\dfrac{t}{t+d(Tx,Ty)}} =\frac{\lambda d(x,y)-d(Tx,Ty)}{t}\geq0.
\end{align*} \end{example} \subsection{\textbf{Altun and Mihe\textrm{\c{t}}'s fuzzy contractions}} In \cite{AlMi} Altun and Mihe\c{t} introduced the following kind of fuzzy contractions in the setting of ordered KM-FMS. Recall that a \emph{partial order on a set }$X$ is a binary relation $\mathcal{S}$ on $X$ which is reflexive, antisymmetric and transitive. \begin{theorem} \textrm{(Altun and Mihe\c{t} \cite{AlMi}, Theorem 2.4)} Let $(X,M,\ast)$ be a complete non-Archimedean KM-FMS (it is assumed that $\ast$ is continuous) and let $\preceq$ be a partial order on $X$. Let $\psi:\mathbb{I}\rightarrow \mathbb{I}$ be a continuous mapping such that $\psi(t)>t$ for all $t\in\left( 0,1\right) $. Also let $T:X\rightarrow X$ be a nondecreasing mapping w.r.t. $\preceq$ with the property \[ M(Tx,Ty,t)\geq\psi(M(x,y,t))\quad\text{for all }t>0\text{ and all }x,y\in X\text{ such that }x\preceq y. \] Suppose that at least one of the following conditions holds: \begin{description} \item[$(a)$] $T$ is continuous; \item[$(b)$] $x_{n}\preceq x$ for all $n\in\mathbb{N}$ whenever $\{x_{n}\}\subseteq X$ is a nondecreasing sequence with $\{x_{n}\}\rightarrow x\in X$. \end{description} If there exists $x_{0}\in X$ such that \[ x_{0}\preceq Tx_{0}\quad\text{and}\quad\psi(M(x_{0},Tx_{0},t))>0\quad \text{for each }t>0, \] then $T$ has a fixed point. \end{theorem} Obviously, the previous theorem can be interpreted as a version of Theorem \ref{K38 - 38 th Mihet} in which a partial order (which is a transitive binary relation) is employed. Notice that here the function $\psi$ is not necessarily nondecreasing, but that property has not been used in the arguments of the proofs of the previous subsection.
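The closing computation of the example of the Banach contraction $Tx=\lambda\ln(1+x)$ above can also be confirmed numerically. The following is a minimal sketch; the fuzzy metric $M^{d}(x,y,t)=t/(t+d(x,y))$ is the one used in the displayed fractions, while the sample grid is an arbitrary illustrative choice:

```python
import math

# Standard fuzzy metric induced by d(x, y) = |x - y|, as in the example:
# M^d(x, y, t) = t / (t + |x - y|).
def M_d(x, y, t):
    return t / (t + abs(x - y))

# theta(t, s) = lam*(1 - s)/s - (1 - t)/t for t, s in (0, 1].
def theta(lam, t, s):
    return lam * (1.0 - s) / s - (1.0 - t) / t

def check_contraction(lam):
    """Verify theta(M^d(Tx,Ty,t), M^d(x,y,t)) = (lam*d(x,y) - d(Tx,Ty))/t >= 0
    on a grid of sample points, where Tx = lam*log(1 + x)."""
    T = lambda x: lam * math.log(1.0 + x)
    for x in [0.0, 0.3, 1.0, 4.7, 10.0]:
        for y in [0.1, 2.0, 7.5]:
            for t in [0.01, 1.0, 50.0]:
                lhs = theta(lam, M_d(T(x), T(y), t), M_d(x, y, t))
                rhs = (lam * abs(x - y) - abs(T(x) - T(y))) / t
                if abs(lhs - rhs) > 1e-9 * (1.0 + abs(rhs)) or lhs < -1e-9:
                    return False
    return True
```

The inequality holds because $|\ln(1+x)-\ln(1+y)|\leq|x-y|$ on $[0,\infty)$, so $T$ is indeed a Banach contraction with constant $\lambda$.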
\subsection{\textbf{Fuzzy ample spectrum contractions by using admissible mappings}} {Following \cite{SaVeVe}, }when a fuzzy space $(X,M)$ is endowed with a function $\alpha:X\times X\rightarrow\left[ 0,\infty\right) $, the following notions can be introduced: \begin{itemize} \item the function $\alpha$ is \emph{transitive} if $\alpha(x,z)\geq1$ for each pair of points $x,z\in X$ for which there is $y\in X$ such that $\alpha(x,y)\geq1$ and $\alpha(y,z)\geq1$; \item a mapping $T:X\rightarrow X$ is $\alpha$\emph{-admissible} whenever $\alpha\left( x,y\right) \geq1$ implies $\alpha\left( Tx,Ty\right) \geq1$, where $x,y\in X$; \item a sequence $\{x_{n}\}\subseteq X$ is $\alpha$\emph{-nondecreasing} if $\alpha(x_{n},x_{n+1})\geq1$ for all $n\in\mathbb{N}$; \item a sequence $\{x_{n}\}\subseteq X$ is $\alpha$\emph{-strictly-increasing} if $\alpha(x_{n},x_{n+1})\geq1$ and $x_{n}\neq x_{n+1}$ for all $n\in \mathbb{N}$; \item a mapping $T:X\rightarrow X$ is $\alpha$\emph{-nondecreasing-continuous} if $\{Tx_{n}\}\rightarrow Tz$ for every $\alpha$-nondecreasing sequence $\{x_{n}\}\subseteq X$ such that $\{x_{n}\}\rightarrow z\in X$; \item a mapping $T:X\rightarrow X$ is $\alpha$\emph{-strictly-increasing-continuous} if $\{Tx_{n}\}\rightarrow Tz$ for every $\alpha$-strictly-increasing sequence $\{x_{n}\}\subseteq X$ such that $\{x_{n}\}\rightarrow z\in X$; \item a subset $Y\subseteq X$ is $(\alpha,M)$\emph{-strictly-increasing-complete} if every $\alpha$-strictly-increasing and $M$-Cauchy sequence $\{y_{n}\}\subseteq Y$ is $M$-convergent to a point of $Y$; \item a subset $Y\subseteq X$ is $(\alpha,M)$\emph{-strictly-increasing-precomplete} if there exists a set $Z$ such that $Y\subseteq Z\subseteq X$ and $Z$ is $(\alpha,M)$-strictly-increasing-complete; \item $\left( X,M\right) $ is $\alpha$\emph{-strictly-increasing-regular} if, for every $\alpha$-strictly-increasing sequence $\{x_{n}\}\subseteq X$ such that $\{x_{n}\}\rightarrow z\in X$, it follows that $\alpha(x_{n},z)\geq1$ for all
$n\in\mathbb{N}$. \end{itemize} In this setting, it is possible to introduce the notion of \emph{fuzzy ample spectrum contraction w.r.t. }$(M,\alpha,\theta)$ by replacing any condition of the type $x\mathcal{S}y$ by $\alpha(x,y)\geq1$ in Definition \ref{K38 - 54 def FASC} (or Definition \ref{K38 - 53 def type-2 FASC}). In this general framework, it is possible to obtain some consequences such as the following ones. \begin{corollary} Let $\left( X,M,\ast\right) $ be a KM-FMS endowed with a transitive function $\alpha:X\times X\rightarrow\left[ 0,\infty\right) $ and let $T:X\rightarrow X$ be an $\alpha$-nondecreasing fuzzy ample spectrum contraction with respect to $\left( M,\alpha,\theta\right) $. Suppose that $\ast$ is continuous at the $1$-boundary, $T(X)$ is $(\alpha,M)$-strictly-increasing-precomplete and there exists a point $x_{0}\in X$ such that $\alpha(x_{0},Tx_{0})\geq1$. Also assume that, at least, one of the following conditions is fulfilled: \begin{description} \item[$(a)$] $T$ is $\alpha$-strictly-increasing-continuous. \item[$(b)$] $\left( X,M\right) $ is $\alpha$-strictly-increasing-regular and condition $(\mathcal{F}_{5})$ holds. \item[$(c)$] $\left( X,M\right) $ is $\alpha$-strictly-increasing-regular and $\theta\left( t,s\right) \leq t-s$ for all $\left( t,s\right) \in B\cap(\mathbb{I}\times\mathbb{I})$. \end{description} If $\left( X,M\right) $\ satisfies the property $\mathcal{NC}$, then the Picard sequence of $T$ based on $x_{0}$ converges to a fixed point of $T$. In particular, $T$ has at least one fixed point. \end{corollary} \begin{proof} It follows from Theorem \ref{K38 - 31 th main FAEC} by considering on $X$ the binary relation $\mathcal{S}_{\alpha}$ given, for each $x,y\in X$, by $x\mathcal{S}_{\alpha}y$ if $\alpha(x,y)\geq1$.
\end{proof} \vspace*{-3mm} \section{\textbf{Conclusions and future work}} In this paper we have introduced the notion of \emph{fuzzy ample spectrum contraction} in the setting of fuzzy metric spaces in the sense of Kramosil and Mich\'{a}lek in two distinct ways. After that, we have proved some very general theorems in order to ensure the existence and uniqueness of fixed points for such families of fuzzy contractions. We have also illustrated that these novel classes of fuzzy contractions extend and generalize some well-known families of previous fuzzy contractions. In order to attract the attention of many researchers in the field of fixed point theory, in the title we announced that our results were going to be developed in the setting of non-Archimedean fuzzy metric spaces. However, we have presented a new property (that we have called $\mathcal{NC}$) in order to consider more general families of fuzzy metric spaces. By working with property $\mathcal{NC}$ we have given a positive partial answer to an open problem posed by Mihe\c{t} in \cite{Mi3}. In this line of research there is much future work to carry out. For instance, we have shown that these novel fuzzy contractions generalize the notion of \emph{ample spectrum contraction} in the setting of metric spaces under an additional assumption. Then, the following questions naturally appear: \emph{Open question 1:} Is any ample spectrum contraction in a metric space $(X,d)$ a fuzzy ample spectrum contraction in the fuzzy metric space $(X,M^{d})$? \emph{Open question 2:} In order to cover other kinds of fuzzy contractions, is it possible to extend the notion of ample spectrum contraction to the fuzzy setting by introducing a function in the argument $t$ of $M(x,y,t)$? \vspace*{3mm} \section*{Acknowledgments} A.F.
Rold\'{a}n L\'{o}pez de Hierro is grateful to Project TIN2017-89517-P of Ministerio de Econom\'{\i}a, Industria y Competitividad and also to Junta de Andaluc\'{\i}a through project FQM-365 of the Andalusian CICYE. This article was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. N. Shahzad acknowledges with thanks DSR for financial support. \end{document}
\begin{document} \begin{center} \title[Families of Integral Cographs within a Triangular Array] {Families of Integral Cographs within a Triangular Array} \author{Hsin-Yun Ching} \address{The Citadel } \email{hching@citadel.edu} \author{Rigoberto Fl\'orez} \address{The Citadel } \email{rigo.florez@citadel.edu} \author{Antara Mukherjee} \address{The Citadel} \email{antara.mukherjee@citadel.edu} \thanks{Several of the results in this paper were found by the first author while working on his undergraduate research project under the guidance of the second and third authors (who followed the guidelines given in \cite{FlorezMukherjeeED}).} \end{center} \maketitle \begin{abstract} The \emph{determinant Hosoya triangle} is a triangular array where the entries are the determinants of two-by-two Fibonacci matrices. The determinant Hosoya triangle $\bmod \,2$ gives rise to three infinite families of graphs that are formed by the complete product (join) of (the union of) two complete graphs with an empty graph. We give a necessary and sufficient condition for a graph from these families to be integral. Some features of these graphs are: they are integral cographs, all graphs have at most five distinct eigenvalues, all graphs are either $d$-regular graphs with $d=2,4,6,\dots $ or almost-regular graphs, and some of them are Laplacian integral. Finally, we extend some of these results to the Hosoya triangle. \end{abstract} \section {Introduction}\label{intro} A graph is integral if the eigenvalues of its adjacency matrix are integers. These graphs are rare and the techniques used to find them are quite complicated. The notion of integral graphs was first introduced in 1974 by Harary and Schwenk \cite{harary}. A cograph is a graph that avoids the path on four vertices as an induced subgraph \cite{Corneil}. In this paper we study infinite families of integral cographs associated with a combinatorial triangle. These families have at most five distinct eigenvalues.
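The induced-path characterization of cographs mentioned above is easy to test by brute force on small graphs. The following is a minimal illustrative sketch; the example graphs $P_4$ and $C_4$ are our own choices, not taken from the paper:

```python
from itertools import combinations, permutations

def has_induced_p4(adj):
    """Return True if the graph (0/1 adjacency matrix) contains the path
    on four vertices as an induced subgraph."""
    n = len(adj)
    for quad in combinations(range(n), 4):
        for a, b, c, d in permutations(quad):
            # induced path a-b-c-d: three edges present, the other
            # three pairs must be non-adjacent
            if (adj[a][b] and adj[b][c] and adj[c][d]
                    and not adj[a][c] and not adj[a][d] and not adj[b][d]):
                return True
    return False

# P4 itself is not a cograph, while the 4-cycle C4 (= K_{2,2}) is.
P4 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
```

This exhaustive check is only practical for small graphs, but it suffices to experiment with the families studied below.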
The coefficients of many recurrence relations are represented using triangular arrangements. These representations give geometric tools to study properties of the recurrence relation. A natural connection between graph theory and recurrence relations arises through adjacency matrices taken from such triangles. A classic example is the relation between the Pascal triangle and the Sierpi\'nski graph or the Hanoi graph, which have fractal properties. Recently, some authors have been interested in graphs associated with Riordan arrays. Examples of graphs associated with combinatorial triangles can be found in \cite{Baker,Blair,Cheon,CheonJung,deo,koshy2}. The aim of this paper is to give another example of a triangular array that gives rise to families of well-behaved graphs. We use $\mathbb{Z}_{>0}$ to denote the set of all positive integers. The \emph{determinant Hosoya triangle}, shown in Figure \ref{Symmetric_matrix}, is a triangular array where the entry $H_{r,k}$ with $r,k\in \mathbb{Z}_{>0}$ (from left to right) is given by \[H_{r,k}=\begin{vmatrix} F_{k+1} & F_{k}\\ F_{r-k+1} & F_{r-k+2}\\ \end{vmatrix}.\] For example, the entry $H_{7,5}$ of $\mathcal{H}$ is given by \[H_{7,5}=\begin{vmatrix} F_{6} & F_{5}\\ F_{3} & F_{4}\\ \end{vmatrix}=\begin{vmatrix} 8 & 5\\ 2 & 3\\ \end{vmatrix}=24-10=14.\] \begin{figure} \caption{Determinant Hosoya triangle $\mathcal{H}$.} \label{symmetric:matrix} \label{Symmetric_matrix} \end{figure} The determinant Hosoya triangle is symmetric with respect to its median. Therefore, the families of symmetric matrices that are naturally embedded in this triangle give rise to three infinite families of graphs (the adjacency matrices are the symmetric matrices $\bmod\; 2$). These graphs are either $d$-regular with $d=2,4,6,\dots $ or almost-regular graphs. All three families of graphs have at most five distinct eigenvalues and one of the families is formed by integral graphs.
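The determinant definition of the entries is straightforward to implement. A minimal sketch, using the Fibonacci convention $F_0=0$, $F_1=F_2=1$:

```python
def fib(n):
    # F_0 = 0, F_1 = F_2 = 1, ...
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def H(r, k):
    """Entry of the determinant Hosoya triangle:
    det [[F_{k+1}, F_k], [F_{r-k+1}, F_{r-k+2}]]."""
    return fib(k + 1) * fib(r - k + 2) - fib(k) * fib(r - k + 1)
```

For instance, `H(7, 5)` returns `14`, matching the example above, and `H(r, k) == H(r, r - k + 1)` reflects the symmetry of the triangle about its median.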
We give a necessary and sufficient condition to determine whether a family is integral. The square matrices in the determinant Hosoya triangle have rank two, so they are the sum of two rank-one matrices. This allows us to classify our matrices into three families depending on their size ($n=3t+r$, with $0\le r\le 2$). Their graphs are the complete product of two complete graphs with an empty graph. A graph (associated to the determinant Hosoya triangle) is integral if and only if its adjacency matrix is of size $n=3t+1$. For example, from this triangle we obtain a rank two matrix $\mathcal{S}_{7}$, depicted on the left side of \eqref{Example:Marices:Mod}. Its rows are restrictions of the diagonals of the triangle. This matrix has several interesting properties. For instance, evaluating the entries of this matrix $\bmod\; 2$ we obtain an adjacency matrix that gives rise to a regular subgraph --with five distinct eigenvalues-- that is an integral cograph. The left side in Figure \ref{symmetric:graph:Intro} depicts the adjacency graph from $\mathcal{S}_{7} \bmod 2$. Deleting its loops, we obtain the subgraph depicted on the right side in Figure \ref{symmetric:graph:Intro}.
\begin{equation}\label{Example:Marices:Mod} \mathcal{S}_{7}= \left[ \begin{array}{ccccccc} 0 & 1 & 1 & 2 & 3 & 5 & 8 \\ 1 & 3 & 4 & 7 & 11 & 18 & 29 \\ 1 & 4 & 5 & 9 & 14 & 23 & 37 \\ 2 & 7 & 9 & 16 & 25 & 41 & 66 \\ 3 & 11 & 14 & 25 & 39 & 64 & 103 \\ 5 & 18 & 23 & 41 & 64 & 105 & 169 \\ 8 & 29 & 37 & 66 & 103 & 169 & 272 \\ \end{array} \right] \bmod 2 = \left[ \begin{array}{ccccccc} 0 & 1 & 1 & 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 & 1 & 1 & 0 \\ \end{array} \right] \end{equation} \begin{figure} \caption{$\mathcal{G}_{7}^{*}=(K_{2}^{*} \sqcup K_{2}^{*}) \nabla \overline{K}_{3}$ \hspace{2cm} $\mathcal{G}_{7}=(K_{2} \sqcup K_{2}) \nabla \overline{K}_{3}$} \label{symmetric:graph:Intro} \end{figure} \textbf{Proposition.} \emph{The graph $\mathcal{G}_{w}$ is an integral cograph if and only if $w=3t+1$.} A \emph{join} or \emph{the complete product of two graphs} $G_1$ and $G_2$, denoted by $G_1\nabla G_2$, is defined as the graph obtained by joining each vertex of $G_1$ with all vertices of $G_2$. The main results of this paper show that the graphs that are generated from the matrices in the determinant Hosoya triangle $\bmod\, 2$ are cographs of the form $(K_{n} \sqcup K_{m}) \nabla \overline{K}_{r}$. We use $G_1\sqcup G_2$ to denote the disjoint union of $G_1$ and $G_2$. We give complete criteria --depending on the embedding of the adjacency matrices within the determinant Hosoya triangle $\bmod\, 2$-- for these graphs to be integral. Thus, one of the main results of this paper states that a graph $(K_{n}\sqcup K_{n})\nabla \overline{K}_{r}$ is integral if and only if $2nr=pq$ and $n-1=p-q$ for some $p,q\in \mathbb{Z}_{>0}$. Finally, we present a discussion on graphs associated to the \emph{Hosoya triangle}, a triangular array where the entries are products of Fibonacci numbers.
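For the graph $\mathcal{G}_{7}=(K_{2}\sqcup K_{2})\nabla\overline{K}_{3}$, integrality can be checked numerically from the right-hand matrix of \eqref{Example:Marices:Mod} with the diagonal (loops) removed. A minimal sketch:

```python
import numpy as np

# S_7 mod 2 with the diagonal (loops) removed: the adjacency matrix of
# G_7 = (K_2 disjoint-union K_2) join the empty graph on 3 vertices.
A = np.array([
    [0, 1, 1, 0, 1, 1, 0],
    [1, 0, 0, 1, 1, 0, 1],
    [1, 0, 0, 1, 0, 1, 1],
    [0, 1, 1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
    [1, 0, 1, 1, 0, 0, 1],
    [0, 1, 1, 0, 1, 1, 0],
], dtype=float)

eigenvalues = np.linalg.eigvalsh(A)            # ascending order
rounded = [int(round(v)) for v in eigenvalues]
is_integral = bool(np.allclose(eigenvalues, rounded))
```

Here the spectrum is $\{-3,-1,-1,0,0,1,4\}$, with at most five distinct eigenvalues, and the criterion for $n=2$, $r=3$ holds with $p=4$, $q=3$: $2nr=12=pq$ and $n-1=1=p-q$.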
\section{Graphs from matrices in combinatorial triangles }\label{triangles} In this section we discuss three families of graphs originating from matrices embedded in the determinant Hosoya triangle. \subsection{Determinant Hosoya triangle} \label{triangles:1} The \emph{determinant Hosoya triangle} is a triangular array with entries $\left\{H_{r, k}\right\}_{r, k> 0}$ (taken from left to right) defined recursively by \begin{equation}\label{Hosoya:Seq} H_{r,k}= H_{r-1,k}+H_{r-2,k} \; \text{ and } \; H_{r,k}= H_{r-1,k-1}+H_{r-2,k-2}, \end{equation} with initial conditions $H_{1,1}=0, H_{2,1}= H_{2,2}=1,$ and $ H_{3,2}=3$ where $ r>1 $ and $1\le k \le r$ (see Figure \ref{Symmetric_matrix} on Page \pageref{Symmetric_matrix}). The triangle can also be obtained from the generating function $(x+y+xy)/((1-x-x^2)(1-y-y^2))$ (originally discovered by Sloane \cite{sloane}, see \seqnum{A108038}). An equivalent definition for the entries of this triangle is $$H_{r,k}:=F_{k-1}F_{r-k+2}+F_{k}F_{r-k},$$ which is a variation of $F_{k+n}=F_{k-1}F_{n}+F_{k}F_{n+1}$ (this identity gives rise to the Fibonomial triangle, see Vajda \cite{Vajda}). Each entry is also a determinant (a formal proof of these two facts is in \cite{BlairRigoAntara}) \[H_{r,k}=\begin{vmatrix} F_{k+1} & F_{k}\\ F_{r-k+1} & F_{r-k+2}\\ \end{vmatrix}.\] We now present a result on the divisibility properties of the entries of the determinant Hosoya triangle. Note that for every positive $m$, the entries of the $(3m+1)$-th row of $\mathcal{H}$ are always even numbers.
\begin{proposition}\label{gcd:property} If $r, k \in \mathbb{Z}_{>0}$, then the following hold: \begin{enumerate} \item $F_{\gcd(k+1,r+2)}$ and $ F_{\gcd(k,r+2)}$ divide $$H_{r,k}=\begin{vmatrix} F_{k+1} & F_{k}\\ F_{r-k+1} & F_{r-k+2}\\ \end{vmatrix},$$ \item if $r=3t+1$ for $t>1$ and $1\le k\le {\lfloor(r+1)/2\rfloor}$, then $$H_{r,k}=\begin{vmatrix} F_{k+1} & F_{k}\\ F_{r-k+1} & F_{r-k+2}\\ \end{vmatrix} \text{ is even.}$$ \end{enumerate} \end{proposition} \begin{proof} Part (1). Since $\gcd(F_{k},F_{k+1})=\gcd(F_{r-k+2},F_{r-k+1})=1$, \begin{align*} \gcd(F_{k+1}F_{r-k+2}, F_{k}F_{r-k+1}) &= \gcd(F_{k+1},F_{r-k+1}) \gcd(F_{k},F_{r-k+2}) \\ &= F_{\gcd(k+1,r-k+1)}F_{\gcd(k,r-k+2)}\\ &= F_{\gcd(k+1,(r+2)-(k+1))}F_{\gcd(k,(r+2)-k)}\\ &= F_{\gcd(k+1,r+2)}F_{\gcd(k,r+2)}. \end{align*} Since $F_{\gcd(k+1,r+2)}$ and $F_{\gcd(k,r+2)}$ divide both products, they also divide the difference $H_{r,k}=F_{k+1}F_{r-k+2}-F_{k}F_{r-k+1}$. Part (2). We prove this part using three cases, namely $k=3m, k=3m+1$, and $k=3m+2$. If $k=3m$ or $k=3m+2$, then using Part (1) with $r=3t+1$, we see that $F_{3}=2$ is a factor of $H_{r,k}$. Thus, $H_{r,k}$ is even. We now prove the case for $k=3m+1$ and $r=3t+1$. Note that $F_{k+1}F_{r-k+2}=F_{3m+2}F_{3t-3m+2}$. Rewriting $F_{3m+2}F_{3t-3m+2}$ as $F_{3m+2}F_{3(t-m)+2}$ we see that $F_{k+1}F_{r-k+2}$ is always odd. Similarly, we can see that $ F_{k}F_{r-k+1}=F_{3m+1}F_{3(t-m)+1}$, which is always odd. Hence, $H_{r,k}=F_{k+1}F_{r-k+2}-F_{k}F_{r-k+1}$ is even. \end{proof} \subsection{Graphs from symmetric matrices} \label{adjacency:graph} In this section we explore the properties of the graphs from symmetric matrices embedded in $\mathcal{H}$ $\bmod \; 2$. Note that we are going to analyze the graphs that arise from the triangle in Figure \ref{Symmetric_matrix}. Replacing the median of this triangle, namely $(0,3,5,16,\ldots)$, by $(0,0,0,0,\ldots)$, the new triangle $\bmod\; 2$ gives rise to a graph without loops.
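Part (2) of Proposition \ref{gcd:property} can be checked computationally. The following is a minimal sketch; by the symmetry of the triangle, evenness is tested along the whole row rather than only up to the median:

```python
def fib(n):
    # F_0 = 0, F_1 = F_2 = 1, ...
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def H(r, k):
    # det [[F_{k+1}, F_k], [F_{r-k+1}, F_{r-k+2}]]
    return fib(k + 1) * fib(r - k + 2) - fib(k) * fib(r - k + 1)

def row_parities(r):
    return [H(r, k) % 2 for k in range(1, r + 1)]

# Rows r = 3t + 1 consist entirely of even entries.
all_even_rows = all(all(p == 0 for p in row_parities(3 * t + 1))
                    for t in range(1, 12))
```

By contrast, rows with $r\not\equiv 1 \pmod 3$ contain odd entries, so the hypothesis $r=3t+1$ cannot be dropped.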
We start with the definition of a symmetric matrix $\mathcal{S}_{w}$, $w\in \mathbb{Z}_{>0}$ within $\mathcal{H}$, \begin{equation} \label{eq3} \mathcal{S}_{w} = \left[ {\begin{array}{lllll} H_{1,1} & H_{2,1} & H_{3,1} & \cdots& H_{w,1} \\ H_{2,2} & H_{3,2}& H_{4,2} & \cdots & H_{w+1,2}\\ \vdots &\vdots&\vdots&\ddots &\vdots\\ H_{w,w} & H_{w+1,w} & H_{w+2,w} & \cdots & H_{2w-1,w} \end{array} } \right]_{w\times w}. \end{equation} The matrix $\mathcal{S}_{w}$ can be written as the sum of two rank-one matrices. Therefore, the rank of $\mathcal{S}_{w}$ is at most 2. Thus, $\mathcal{S}_{w}$ has the form $\mathbf{u_1}^{T}\mathbf{v_1}+\mathbf{u_2}^{T}\mathbf{v_2}$, where $\mathbf{u_1},\mathbf{v_1},\mathbf{u_2},\mathbf{v_2}$ are column vectors with consecutive Fibonacci numbers. These vectors can be seen along the sides of $\mathcal{H}$ in Figure \ref{Symmetric_matrix}. Note that the entries of the matrix $\mathcal{S}_{w}\bmod 2$ are $s_{ij}=H_{i+j-1,i} \bmod 2$ for $1\le i,j\le w$. For example, the matrix $\mathcal{S}_{7}$ given in \eqref{Example:Marices:Mod} on Page \pageref{Example:Marices:Mod} decomposes as follows. \[ \mathcal{S}_{7}= \left[ \begin{array}{c} 0 \\ 1 \\ 1 \\ 2 \\ 3 \\ 5 \\ 8 \\ \end{array} \right] \left[ \begin{array}{ccccccc} 1 & 1 & 2 & 3 & 5 & 8 & 13 \\ \end{array} \right] +\left[ \begin{array}{c} 1 \\ 2 \\ 3 \\ 5 \\ 8 \\ 13 \\ 21 \\ \end{array} \right] \left[ \begin{array}{ccccccc} 0 & 1 & 1 & 2 & 3 & 5 & 8 \\ \end{array} \right]. \] Let $K_{t}^{*}$ be the complete graph on $t$ vertices with loops at each vertex and recall that $\overline{K}_{t}$ is the empty graph on $t$ vertices.
We show that the graph obtained from the symmetric matrix $\mathcal{S}_{w}\bmod 2$ is given by \begin{equation}\label{graph:definition} \mathcal{G}_{w}^{*}= \begin{cases} (K_{t}^{*}\sqcup K_{t}^{*})\nabla\overline{K}_{t}, & \mbox{ if }\; w=3t;\\ (K_{t}^{*}\sqcup K_{t}^{*})\nabla\overline{K}_{t+1}, & \mbox{ if }\; w=3t+1;\\ (K_{t}^{*}\sqcup K_{t+1}^{*})\nabla\overline{K}_{t+1}, & \mbox{ if }\; w=3t+2. \end{cases} \end{equation} For simplicity, we use $\mathcal{G}_{nmr}^{*}$ to denote the graph $(K_{n}\sqcup K_{m})\nabla \overline{K}_{r}$ (again for simplicity, we do not use $\mathcal{G}_{n,m,r}^{*}$, which would be more natural). Therefore, $\mathcal{G}_{ttt}^{*}$ denotes the graph when $w=3t$, $\mathcal{G}_{tt(t+1)}^{*}$ is the graph when $w=3t+1$, and $\mathcal{G}_{t(t+1)(t+1)}^{*}$ is the graph when $w=3t+2$ (see Figure \ref{symmetric:graph}). \begin{proposition}\label{graph:structure} Let $\mathcal{G}_{w}^{*}$ be as given in \eqref{graph:definition}. Then the graph of $\mathcal{S}_{w}\bmod 2$ is $\mathcal{G}_{w}^{*}$. \end{proposition} \begin{proof} We prove that for $w=3t+1$, the graph from the matrix is given by $\mathcal{G}_{tt(t+1)}^{*}$; the proofs for the cases $w=3t$ and $w=3t+2$ are similar, so we omit them. Let us consider the matrix $\mathcal{S}_{w}\bmod 2$ for $w=3t+1$. First we establish some notation: we denote by $u_{ij}$ those vertices of the graph $\mathcal{G}_{tt(t+1)}^{*}$ that correspond to the entries of the $i$-th row and $j$-th column of $\mathcal{S}_{w}\bmod 2$ for $i,j=3k$, $1\le k\le t$ (see Figure \ref{symmetric:graph}). Similarly, we denote by $v_{ij}$ the vertices corresponding to entries in the $i$-th row and $j$-th column for $i,j=3k-1$, $1\le k\le t$ and by $z_{ij}$ we denote the vertices corresponding to entries in the $i$-th row and $j$-th column for $i,j=3k+1$, $0\le k\le t$. 
Using this notation, it is clear that $\mathcal{G}_{tt(t+1)}^{*}=G_{1}\nabla G_{2}$, where $G_{1}$ is the union of two complete graphs, each of order $t$, on the vertices $\{u_{i}\}_{1\le i\le t}$ and $ \{v_{i}\}_{1\le i\le t}$, while $G_{2}=\overline{K}_{t+1}$ is the empty graph on the vertices $\{z_{j}\}_{1\le j\le t+1}$. \end{proof} \begin{figure} \caption{Matrix $\mathcal{S}_{7} \bmod 2$ and its graph $\mathcal{G}_{7}^{*}$.} \label{symmetric:graph} \end{figure} Note that, for some results in this paper, we also use the graph $\mathcal{G}_{w}=\mathcal{G}_{w}^{*}\setminus \{ \text{all loops} \}$, that is, the graph obtained in Proposition \ref{graph:structure} but without loops. If $K_t$ represents the complete graph (with no loops at any vertex), then the graph $\mathcal{G}_{w}$ is defined as: \begin{equation}\label{graph:definition_noloops} \mathcal{G}_{w}= \begin{cases} (K_{t}\sqcup K_{t})\nabla\overline{K}_{t}, & \mbox{ if }\; w=3t;\\ (K_{t}\sqcup K_{t})\nabla\overline{K}_{t+1}, & \mbox{ if }\; w=3t+1;\\ (K_{t}\sqcup K_{t+1})\nabla\overline{K}_{t+1}, & \mbox{ if }\; w=3t+2. \end{cases} \end{equation} Once again for simplicity we use the notation $\mathcal{G}_{ttt}$ for the graph without loops when $w=3t$; similarly, for $w=3t+1$ we use $\mathcal{G}_{tt(t+1)}$, while for $w=3t+2$ we use the notation $\mathcal{G}_{t(t+1)(t+1)}$. See Table \ref{tabla2} for some examples of this type of graphs. Note that the vertices of the empty graph form a minimal dominating set of the graph $\mathcal{G}_{w}$.
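Proposition \ref{graph:structure} can also be verified computationally for $w=3t+1$, building $\mathcal{S}_{w}$ directly from the Fibonacci determinants; by \eqref{eq3}, the entry in row $i$ and column $j$ of $\mathcal{S}_{w}$ is $H_{i+j-1,i}$. A minimal sketch:

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def H(r, k):
    return fib(k + 1) * fib(r - k + 2) - fib(k) * fib(r - k + 1)

def adjacency(w):
    """S_w mod 2 with loops removed (the graph G_w)."""
    A = [[H(i + j - 1, i) % 2 for j in range(1, w + 1)]
         for i in range(1, w + 1)]
    for i in range(w):
        A[i][i] = 0
    return A

def matches_join_structure(t):
    """For w = 3t + 1: G_w is 2t-regular and the vertices with index
    i = 1 (mod 3) form the independent part, the empty graph on t+1 vertices."""
    w = 3 * t + 1
    A = adjacency(w)
    if any(sum(row) != 2 * t for row in A):
        return False
    empty_part = [i for i in range(1, w + 1) if i % 3 == 1]
    if len(empty_part) != t + 1:
        return False
    return all(A[i - 1][j - 1] == 0
               for i in empty_part for j in empty_part if i != j)
```

For $t=2$ this reproduces exactly the right-hand matrix of \eqref{Example:Marices:Mod} with the loops deleted.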
\begin{table}[!ht] \begin{tabular}{|l|c|c|c|} \hline $t$ & $3t$ & $3t+1$ & $3t+2$ \\ \hline \hline 2 &$\vcenter{\hbox{\includegraphics[scale=.14]{Graph3NormalNL.eps}}}$ &$\vcenter{\hbox{\includegraphics[scale=.14]{Graph1NormalNL.eps}}}$ &$\vcenter{\hbox{\includegraphics[scale=.14]{Graph7NormalNL.eps}}}$\\ \hline 3 &$\vcenter{\hbox{\includegraphics[scale=.14]{Graph4NormalNL.eps}}}$ &$\vcenter{\hbox{\includegraphics[scale=.14]{Graph2NormalNL.eps}}}$ &$\vcenter{\hbox{\includegraphics[scale=.14]{Graph8NormalNL.eps}}}$\\ \hline 4 &$\vcenter{\hbox{\includegraphics[scale=.14]{Graph5NormalNL.eps}}}$ &$\vcenter{\hbox{\includegraphics[scale=.14]{Graph6NormalNL.eps}}}$ &$\vcenter{\hbox{\includegraphics[scale=.14]{Graph9NormalNL.eps}}}$\\ \hline \end{tabular} \caption{Graphs of $\mathcal{G}_{w}$.} \label{tabla2} \end{table} It is clear from the definition of the join of graphs that $\mathcal{G}_{nmr}=(K_{n}\sqcup K_{m})\nabla \overline{K}_{r}$ is connected for $n,m,r>0$. It is easy to verify that a connected graph is $d$-regular if and only if $d$ is its largest eigenvalue and $[1,1,\ldots,1]$ is a corresponding eigenvector. We use this result to show that the graph $\mathcal{G}_{tt(t+1)}$ is regular. In many papers, $\Delta$ and $\delta$ are used to denote the maximum and minimum degree of a graph; we use the same notation in the following results. A \emph{cograph} is defined recursively as follows: any single vertex graph is a cograph; if $G$ is a cograph, then so is its complement graph $\overline{G}$; and if $G_1$ and $G_2$ are cographs, then so is their disjoint union $G_1\sqcup G_2$. Equivalently, a cograph is a graph which does not contain the path on 4 vertices as an induced subgraph \cite{Corneil}. The proof of the following proposition is straightforward, so we omit it. \begin{proposition}\label{Propositon:cograph} The graphs $\mathcal{G}_{w}^{*}$ and $\mathcal{G}_{w}$, as defined in \eqref{graph:definition} and \eqref{graph:definition_noloops}, are cographs.
\end{proposition} Now we give some consequences of this proposition. Royle \cite{Royle} proves that the rank of a cograph $X$ is equal to the number of distinct non-zero rows of its adjacency matrix. Using this result, we obtain the following corollaries of Proposition \ref{Propositon:cograph}. The rank of the adjacency matrix of $\mathcal{G}_{t(t+1)(t+1)}$ is $2 (t + 1)$; the rank of the adjacency matrix of $\mathcal{G}_{ttt}$ is $2 t + 1$; and the rank of the adjacency matrix of $\mathcal{G}_{tt(t+1)}$ is $2 t + 1$. Two vertices are \emph{duplicate} if their open neighborhoods are the same \cite{Biyikoglu}. Two vertices in $\overline{K}_{t}$ or $\overline{K}_{t+1}$ with $t>1$ are duplicate vertices of a graph $\mathcal{G}_{w}$ as in \eqref{graph:definition_noloops}. Therefore, by \cite{Biyikoglu,Royle} we know that $0$ is an eigenvalue of $\mathcal{G}_{w}$ (similarly, $-1$ is an eigenvalue of $\mathcal{G}_{w}$); see Proposition \ref{Corollary:1} for the whole set of eigenvalues. In what follows we present families of regular and almost-regular connected graphs and their characteristics. \begin{theorem}\label{theorem:regular} The graph $\mathcal{G}_{w}$ is regular if and only if $w=3t+1$. Moreover, $\mathcal{G}_{tt(t+1)}$ is $2t$-regular. \end{theorem} \begin{proof} First we show that the graph $\mathcal{G}_{tt(t+1)}$ is regular. The adjacency matrix of $\mathcal{G}_{tt(t+1)}$ is $A=[a_{ij}]$, where $a_{ii}=0$ and $a_{ij}=\begin{vmatrix} F_{i+1} & F_{i}\\ F_{j} & F_{j+1} \\ \end{vmatrix} \bmod 2$ for $i\ne j$. Therefore, \[a_{ij}= \begin{cases} (F_{i+1}F_{j+1}-F_{i}F_{j}) \bmod 2, & \text{ for } i\ne j;\\ 0, & \text{ for } i= j. \end{cases} \] {\bf Claim}. The entry $a_{ij}$ of $A$, where $a_{ij}\in \xi=\{a_{1,3k+1}, a_{2,3k},a_{3,3k-1},\ldots, a_{3k+1,1}\}$, is equal to $0 \bmod 2$ for $1\le k\le t$ and 1 otherwise. (Sometimes we use $a_{i,j}$ instead of $a_{ij}$ to avoid ambiguities.) Proof of Claim.
Consider the entry $a_{i,3k+2-i}=F_{i+1}F_{3k+2-i+1}-F_{i}F_{3k+2-i}$. We prove the claim by three cases, $i=3s-1$, $i=3s,$ and $i=3s+1$ for $s>1$. Indeed, if $i=3s-1$ it is easy to see that $F_{\gcd(i+1,3k+2-i)}$ and $F_{\gcd(i,3k+2-i+1)}$ divide $a_{i,3k+2-i}$, and $F_{\gcd(i+1,3k+2-i)}$ is even (every third Fibonacci number is even). If $i=3s$, then in this case also $F_{\gcd(i+1,3k+2-i)}$ and $F_{\gcd(i,3k+2-i+1)}$ divide $a_{i,3k+2-i}$ and $F_{\gcd(i,3k+2-i+1)}$ is even. Finally, if $i=3s+1$, by Proposition \ref{gcd:property}, $a_{i,3k+2-i}$ is even. However, the other entries $a_{ij}$ of $A$ for $a_{ij}\notin \xi$ are all odd. The matrix $A$ is therefore of the form \begin{equation} \label{eq6} A = \left[ {\begin{array}{lll|l} A(K_{t}) & &\;\bigzero & \\ &&& \\ &&&\;\;\;\; J^{T}\\ \;\;\;\; \bigzero&& A(K_{t}) & \\ &&&\\ \hline &&&\\ & J & &A(\overline{K}_{t+1})\\ \end{array} } \right]_{(3t+1)\times(3t+1)} \end{equation} where $A(K_{t})$ and $A(\overline{K}_{t+1})$ are the adjacency matrices of the complete graph $K_{t}$ and the empty graph $\overline{K}_{t+1}$, and $J=[j_{rl}]_{(t+1)\times 2t}$ with $j_{rl}=1$ for $1\le r\le (t+1)$ and $1\le l\le 2t$ ($J$ is the matrix of $1$'s). From $A$ we see that the graph is connected. It is easy to verify that $\lambda=2t$ is the largest eigenvalue of $A$ and that $[1,1,\ldots,1]_{3t+1}$ is a corresponding eigenvector of $A$. This proves that $\mathcal{G}_{tt(t+1)}$ is a $2t$-regular graph. Conversely, we show that if $w\ne 3t+1$, then $\mathcal{G}_{w}$ is not regular. We prove this using cases. {\bf Case $w=3t+2$}. By the definition of the join of graphs we know that $\mathcal{G}_{t(t+1)(t+1)}$ is connected with maximum degree $\Delta=2t+1$ and minimum degree $\delta= 2t$. The number of vertices of degree $\Delta$ is $2t+2$ while the number of vertices of degree $\delta$ is $t$. Since $\Delta-\delta=1$, $\mathcal{G}_{t(t+1)(t+1)}$ is almost-regular. {\bf Case $w=3t$}. Recall that $\mathcal{G}_{ttt}$ is connected and $\Delta=2t$ and $\delta= 2t-1$.
The number of vertices of degree $\Delta$ is $t$ while the number of vertices of degree $\delta$ is $2t$. Once again, since $\Delta-\delta=1$, $\mathcal{G}_{ttt}$ is almost-regular. This completes the proof. \end{proof} We observe that the graph $\mathcal{G}_{ttt}^{*}$ (defined in \eqref{graph:definition}) has loops at all vertices that have degree $\delta=2t-1$. Similarly, the graph $\mathcal{G}_{t(t+1)(t+1)}^{*}$ has loops at all vertices that have degree $\delta=2t$. In the case of $\mathcal{G}_{tt(t+1)}^{*}$, the diagonal entries $a_{ii}$ are even if $i=3k+1$ for $0\le k\le t$. This implies that the number of loops in $\mathcal{G}_{tt(t+1)}^{*}$ is $2t$. Note. The graph $\mathcal{G}_{tt(t+1)}$ (defined in \eqref{graph:definition_noloops}) is not vertex transitive for $t>1$ and vertex transitive when $t=1$. The graphs $\mathcal{G}_{tt(t+1)}$ with $t>1$ and $\mathcal{G}_{ttt}$ with $t\ge 1$ have two orbits each. In fact, in both cases the vertices of the graph that are part of $(K_{t}\sqcup K_{t})$ all belong to one orbit, while the vertices of the empty graph are in a second orbit. In the case of the graph $\mathcal{G}_{t(t+1)(t+1)}$ with $t\ge 1$, the vertices are split up into three orbits. The vertices of $(K_{t}\sqcup K_{t+1})$ that have degree $2t$ are in the first orbit, the vertices of $(K_{t}\sqcup K_{t+1})$ that have degree $2t+1$ are all in the second orbit and, finally, the vertices of the empty graph $\overline{K}_{t+1}$ are in the third orbit. This shows that these graphs are not asymmetric. \section{Integral graphs}\label{Background on IntegralGraphs} A graph is called \emph{integral} if all its eigenvalues are integers. Now we summarize some results of Bali\'{n}ska et al. \cite{balinska}. Let $G_i$ be an $r_{i}$-regular graph with $n_i$ vertices for $i=1,2$. 
\begin{equation}\label{perfect_square} G_{1}\nabla G_{2} \text{ is integral $\iff$ $G_{1}$ and $G_{2}$ are integral and} (r_{1}-r_{2})^{2}+4n_{1}n_{2} \text{ is a perfect square.} \end{equation} In 1980 Cvetkovi\'{c} \textit{et al.} \cite{cvetkovi} (see also \cite{hwang}) proved that the characteristic polynomial $P_{G_{1}\nabla G_{2}}(\lambda)$ of $G_{1}\nabla G_{2}$ equals \begin{equation}\label{eqn:1} (-1)^{n_{2}}P_{G_1}(\lambda)P_{\overline{G}_{2}}(-\lambda-1)+(-1)^{n_{1}}P_{G_2}(\lambda)P_{\overline{G}_{1}}(-\lambda-1)-(-1)^{n_1+n_2}P_{\overline{G}_{1}}(-\lambda-1)P_{\overline{G}_{2}}(-\lambda-1). \end{equation} In particular, if $G_{i}$ is $r_{i}$-regular in the result above, then the characteristic polynomial of $G_{1}\nabla G_{2}$ is given by \begin{equation*} P_{G_{1}\nabla G_{2}}(\lambda) =\frac{P_{G_{1}}(\lambda)P_{G_{2}}(\lambda)}{(\lambda-r_{1})(\lambda-r_{2})}((\lambda-r_{1})(\lambda-r_{2})-n_{1}n_{2}). \end{equation*} Let $A(G_1)_{m\times m}$ and $A(G_2)_{n\times n}$ be the adjacency matrices of $G_1$ and $G_2$ and let $a,c\in \mathbb{R}^{m}$ and $b,d\in \mathbb{R}^{n}$, where $ad^{T}$ and $bc^T$ are matrices with $1$'s as their entries. From Zhang \textit{et al.} \cite{Zhang} (see also \cite{hwang}), we know that the adjacency matrix of $G_1\nabla G_2$ is \begin{equation} \label{eqn:3} M = \left[ {\begin{array}{ll} A(G_{1}) & ad^{T} \\ bc^T & A(G_{2}) \end{array} } \right]. \end{equation} In addition, if $\widetilde{A_{1}}=ac^{T}-A(G_{1})$ and $\widetilde{A_{2}}=bd^{T}-A(G_{2})$, then the characteristic polynomial of $M$ is given by \begin{equation}\label{eqn:4} P_{M}(\lambda)=(-1)^{m}P_{\widetilde{A_{1}}}(-\lambda)P_{A(G_{2})}(\lambda)+(-1)^{n}P_{A(G_{1})}(\lambda)P_{\widetilde{A_{2}}}(-\lambda)-(-1)^{m+n}P_{\widetilde{A_{1}}}(-\lambda)P_{\widetilde{A_{2}}}(-\lambda). 
\end{equation} \subsection{An infinite family of integral graphs}\label{main_result} In this section we give a more general result that will be important for proving results about the graphs embedded in the combinatorial triangles in this paper. In particular, we present a necessary and sufficient condition for the graph $\mathcal{G}_{nmr}:=(K_{n}\sqcup K_{m})\nabla \overline{K}_{r}$ to be integral. The first three parts of the following lemma are well known. Since the characteristic polynomial is multiplicative under disjoint union of graphs, the last part of the lemma follows by multiplying the characteristic polynomials of $K_{m}$ and $K_{n}$. So, we omit the proof of the lemma. \begin{lemma}\label{BasicCharacteristicPoly} If $n$ is a positive integer, then \begin{enumerate} \item \label{CompleteN} the characteristic polynomial of $K_{n}$ is given by $p(\lambda)=(-1)^n(\lambda-(n-1))(\lambda+1)^{n-1}$; \item \label{BipartiteN} the characteristic polynomial of $K_{m,n}$ is given by $p(\lambda)=(-1)^{m+n}\lambda^{m+n-2}(\lambda^2-nm)$; \item \label{EmptyN} the characteristic polynomial of $\overline{K}_{n}$ is given by $p(\lambda)=(-1)^n\lambda^n$; \item \label{DisjointUnionN} the characteristic polynomial of $K_{m}\sqcup K_{n}$ is given by $$p(\lambda)=(-1)^{m+n}(\lambda-(n-1))(\lambda-(m-1))(\lambda+1)^{m+n-2}.$$ \end{enumerate} \end{lemma} \begin{proposition} \label{Main:thm1} Let $n,m,r\in \mathbb{Z}_{>0}$ and let $P(\lambda)$ be the characteristic polynomial of $\mathcal{G}_{nmr}$. \begin{enumerate} \item \label{Main:thm1:part1} If $n\ne m$, then $P(\lambda)$ is equal to $$(-1)^{m+n+r}\lambda^{r-1}(\lambda+1)^{m+n-2}[\lambda^{3}-(m+n-2)\lambda^{2}-((m+n)(r+1)-mn-1)\lambda-((m+n)r-2mnr)].$$ \item \label{Main:thm1:part2} If $n=m>1$, then $$P(\lambda)=(-1)^r\lambda^{r-1} (\lambda+1)^{2(n-1)}(\lambda-(n-1))(\lambda^2-(n-1)\lambda-2nr).$$ Moreover, $\mathcal{G}_{nnr}$ is integral if and only if $2nr=pq$ and $n-1=p-q$ for some $p, q\in \mathbb{Z}_{>0}$.
\end{enumerate} \end{proposition} \begin{proof} The proof of Part \eqref{Main:thm1:part2} is a direct application of Part \eqref{Main:thm1:part1} and is therefore omitted. So, we prove Part \eqref{Main:thm1:part1}. We start by computing the characteristic polynomial $P(\lambda)$ of $\mathcal{G}_{nmr}$. In fact, to obtain $P(\lambda)$, we substitute the appropriate characteristic polynomials from Lemma \ref{BasicCharacteristicPoly} into \eqref{eqn:1}. Since $\mathcal{G}_{nmr}:=(K_{n}\sqcup K_{m})\nabla \overline{K}_{r}$, we set $G_1:= K_{n}\sqcup K_{m}$ and $G_2:=\overline{K}_{r}$ in \eqref{eqn:1}. Their complement graphs are $\overline{G_1}=K_{m,n}$ and $\overline{G_2}=K_r$. This and Lemma \ref{BasicCharacteristicPoly} imply that $$P_{\;\overline{G_1}}(-\lambda-1)=(\lambda+1)^{m+n-2}((\lambda+1)^2-mn)\quad \text{ and } \quad P_{\;\overline{G_2}}(-\lambda-1)=(\lambda+r)(\lambda)^{r-1}.$$ Substituting this and $P_{G_1}(\lambda)$, $P_{G_2}(\lambda)$ (after applying Lemma \ref{BasicCharacteristicPoly}) into \eqref{eqn:1} gives that $P(\lambda)$ is equal to \begin{multline*} (-1)^{m+n+r}(\lambda+1)^{m+n-2}\lambda^{r-1}[(\lambda-n+1)(\lambda-m+1)(\lambda+r)+ \lambda((\lambda+1)^2-mn)\\ -(\lambda+r)((\lambda+1)^2-mn)]. \end{multline*} Simplifying the polynomial further we obtain that $P(\lambda)$ is equal to \[ (-1)^{m+n+r}\lambda^{r-1}(\lambda+1)^{m+n-2}[\lambda^{3}-(m+n-2)\lambda^{2}-((m+n)(r+1)-mn-1)\lambda-r(m+n-2mn)]. \] This completes the proof. \end{proof} \begin{proposition} \label{Main:thm1:Moreover} Let $n,r\in \mathbb{Z}_{>0}$. The graph $\mathcal{G}_{nnr}$ is integral if and only if $2nr=pq$ and $n-1=p-q$ for some $p, q\in \mathbb{Z}_{>0}$. \end{proposition} \begin{proof} Let $p,q\in \mathbb{Z}_{>0}$ be such that $pq=2nr$ and $p-q=n-1$. Then the quadratic factor of the characteristic polynomial $P(\lambda)$ in Proposition \ref{Main:thm1} Part \eqref{Main:thm1:part2} factors as $\lambda^{2}-(n-1)\lambda-2nr=(\lambda-p)(\lambda+q)$, so all roots of $P(\lambda)$ are integers.
Conversely, if $\mathcal{G}_{nnr}=(K_{n}\sqcup K_{n})\nabla \overline{K}_{r}$ with either $2nr\ne pq$ or $n-1\ne p-q$ for every $p,q\in \mathbb{Z}_{>0}$, then $(\lambda^2-(n-1)\lambda-2nr)$ does not factor into linear factors over $\mathbb{Z}$, and hence $\mathcal{G}_{nnr}$ is not integral. \end{proof} Another way to see why $\mathcal{G}_{nnr}$ is integral is as follows. First note that $(K_{n}\sqcup K_{n})$ is $(n-1)$-regular with $2n$ vertices and $\overline{K}_{r}$ is $0$-regular with $r$ vertices. In addition, from \cite{balinska,barik} we have that $(K_{n}\sqcup K_{n})$ and $\overline{K}_{r}$ are both integral. Substituting $r_{1}=n-1$, $r_{2}=0$, $n_{1}=2n$, and $n_{2}=r$ into the expression $(r_{1}-r_{2})^{2}+4n_{1}n_{2}$ from \eqref{perfect_square} gives $(r_{1}-r_{2})^{2}+4n_{1}n_{2} =(n-1)^2+8nr$. Since $pq=2nr$ and $p-q=n-1$ (for $p,q\in \mathbb{Z}_{>0}$), we have $(n-1)^2+8nr=(p-q)^{2}+4pq=(p+q)^2$, a perfect square. By \eqref{perfect_square}, this proves that $\mathcal{G}_{nnr}$ is integral. \subsection{Integral Graphs from Combinatorial Triangles} In this section we present several results that provide the criteria for the graphs given in \eqref{graph:definition} and \eqref{graph:definition_noloops} to be integral. In particular, we present families of integral and non-integral graphs with at most five distinct eigenvalues. \subsection{Graphs without loops} In this section we give characteristic polynomials of graphs with only linear factors over the set of integers. We also give characteristic polynomials of certain graphs that do not factor completely over the set of integers. Finally, we provide a necessary and sufficient condition for the graphs to be integral. Note that these graphs are defined in \eqref{graph:definition_noloops} on Page \pageref{graph:definition_noloops}.
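To fix ideas, consider the smallest case $t=1$. Here $\mathcal{G}_{112}=(K_{1}\sqcup K_{1})\nabla \overline{K}_{2}\cong K_{2,2}$, the $4$-cycle, and Lemma \ref{BasicCharacteristicPoly} Part \eqref{BipartiteN} gives
$$P(\lambda)=\lambda^{2}(\lambda^{2}-4)=\lambda^{2}(\lambda+2)(\lambda-2),$$
so the spectrum is $\{2,0,0,-2\}$ and $\mathcal{G}_{112}$ is integral.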
\begin{proposition}\label{Corollary:1} For $t>0$, the following hold \begin{enumerate}[(a)] \item \label{poly:1} the characteristic polynomial of $\mathcal{G}_{tt(t+1)}$ is given by \[P(\lambda)= \begin{cases} \lambda^2(\lambda+2)(\lambda-2), & \mbox{ if }\; t=1;\\ \lambda^{t}(\lambda-2t)\left((\lambda+1)^2-t^{2}\right)(\lambda+1)^{2(t-1)}, & \text{ if } t>1. \end{cases} \] \item \label{poly:2} The characteristic polynomial of $\mathcal{G}_{ttt}$ is given by \[P(\lambda)= \begin{cases} \lambda(\lambda^{2}-2), & \mbox{ if }\; t=1;\\ \lambda^{t-1}(\lambda+1)^{2(t-1)}(\lambda-t+1)(\lambda^{2}-(t-1)\lambda-2t^{2}),& \text{ if } t>1. \end{cases} \] \item \label{poly:3} The characteristic polynomial of $\mathcal{G}_{t(t+1)(t+1)}$ is given by \[P(\lambda)= \begin{cases} \lambda(\lambda+1)(\lambda^{3}-\lambda^{2}-6\lambda+2), & \mbox{ if }\; t=1;\\ \lambda^{t}(\lambda+1)^{2t-1}(\lambda^{3}+(1-2t)\lambda^{2}-(t^{2}+4t+1)\lambda+(2t^{2}-1)(t+1)), & \text{ if } t>1. \end{cases} \] \end{enumerate} \end{proposition} The proof of Proposition \ref{Corollary:1} is straightforward from Proposition \ref{Main:thm1}. The \emph{energy} of a graph $G$ is the sum of the absolute values of the eigenvalues of $G$. A direct consequence of Proposition \ref{Corollary:1} Part \eqref{poly:1} is that the energy of $\mathcal{G}_{tt(t+1)}$ is $6t-2$, for $t>0$. We now prove the necessary and sufficient condition for the graphs arising from the symmetric matrices in $\mathcal{H}\bmod 2$ to be integral. We start with a lemma that will be important for showing that $\mathcal{G}_{ttt}$ and $\mathcal{G}_{t(t+1)(t+1)}$ are not integral graphs.
\begin{lemma}\label{not:integral:new} If $t\in \mathbb{Z}_{>0}$, then the following polynomials have no linear factors in $\mathbb{Z}[x]$: \begin{enumerate}[(a)] \item \label{part:a} $p_{1}(x)=x^{2}+(1-t)x-2t^{2}$, \item \label{part:b} $p_{2}(x)=x^{3}+(1-2t)x^{2}-(t^{2}+4t+1)x+(2t^{3}+2t^{2}-t-1)$, \item \label{irr:polyn1} $p_{3}(x)=x^{2}-tx-2t(t+1)$, \item \label{irr:polyn2} $p_{4}(x)=x^{3}-(2t+1)x^{2}-(t+1)^{2}x+2t(t+1)^{2}$. \end{enumerate} \end{lemma} \begin{proof} Proof of Part \eqref{part:a}. The discriminant of $p_{1}(x)$ is $(1-t)^{2}+8t^{2}=9t^{2}-2t+1$, and $(3t-1)^{2}<9t^{2}-2t+1<(3t)^{2}$ for $t>0$. Thus the discriminant lies strictly between two consecutive perfect squares, so it is not a perfect square and, by the quadratic formula, $p_{1}(x)$ has no rational roots. Proof of Part \eqref{part:b}. We prove this part by contradiction. Let us assume that $q\in \mathbb{Z}$ is a root of $p_{2}(x)$. So, $q$ must divide the independent coefficient $(2t^{3}+2t^{2}-t-1)=(2t^2-1)(t+1)$. Clearly, $q\ne 0$ and $1\le q\le (2t^2-1)(t+1)$. Since $p_{2}(q)=0$, we have $q^{3}+(1-2t)q^{2}-(t^{2}+4t+1)q+(2t^{3}+2t^{2}-t-1)=0$. Rewriting the cubic expression we have \begin{equation}\label{eqn:roots} q^{3}+(1-2t)q^{2}-(t^{2}+4t+1)q = (1-2t^2)(t+1). \end{equation} Now we consider the following five cases: {\bf Case $q=1$.} Substituting $q=1$ in \eqref{eqn:roots}, the left hand side becomes $1-6t-t^2$, which is greater than the right hand side of \eqref{eqn:roots} for $t>1$. {\bf Case $1<q<t+1$.} Since both $q$ and $t$ are integers, there is an integer $h\ge 2$ such that $t+1=q+h$. We now prove that, after substituting $q=t+1-h$ in \eqref{eqn:roots}, the left hand side of \eqref{eqn:roots} is greater than its right hand side. Indeed, $q^{3}+(1-2t)q^{2}-(t^{2}+4t+1)q=(1-2t^3-2t-5t^2)-h^3+4h^2-4h+h^2t+2ht^2$. We observe that for $h\ge 2$, $(1-2t^3-2t-5t^2)-h^3+4h^2-4h+h^2t+2ht^2\ge (1-2t^3-2t-5t^2)+4t+4t^2$.
It is easy to check that \[(1-2t^3-2t-5t^2)+4t+4t^2 =1-2t^3-t^2+2t > 1-2t^3-2t^2+t \text{ for } t>1.\] {\bf Case $q=t+1$.} Substituting $q=t+1$ in the left hand side of \eqref{eqn:roots} we obtain $1-2t^3-2t-5t^2$. It is once again easy to see that $1-2t^3-5t^2-2t<1-2t^3-2t^2+t$ for $t>1$. {\bf Case $(t+1)<q\le (2t-1)$.} Here we claim that the left side of \eqref{eqn:roots} is less than the right side. To prove this we observe that since $q\le 2t-1$, \[q^{3}+(1-2t)q^{2}-(t^{2}+4t+1)q\le 1+2t-7t^2-2t^3.\] Next we observe that $1+2t-7t^2-2t^3<1-2t^3-2t^2+t$ for $t>1$. {\bf Case $q>(2t-1)$.} Let $q=2t-1+h$ where $h>1$ is an integer. We prove that the left hand side of \eqref{eqn:roots} is greater than its right hand side. Indeed, substituting $q=2t-1+h$ in the left hand side of \eqref{eqn:roots} and simplifying we obtain $$q^{3}+(1-2t)q^{2}-(t^{2}+4t+1)q=(1+2t-7t^2-2t^3)+h^3-2h^2+4h^2t-8ht+3ht^2.$$ Next we observe that for $h\ge 2$, $$(1+2t-7t^2-2t^3)+h^3-2h^2+4h^2t-8ht+3ht^2\ge (1+2t-7t^2-2t^3)+6t^2.$$ Finally, it is easy to check that \[(1+2t-7t^2-2t^3)+6t^2= 1-2t^3-t^2+2t >1-2t^3-2t^2+t \text { for } t>1.\] This completes the proof. Proof of Part \eqref{irr:polyn1}. It is straightforward using the quadratic formula: the discriminant of $p_{3}(x)$ is $9t^{2}+8t$, which lies strictly between two consecutive perfect squares for $t>0$, since $(3 t+1)^2<9 t^2+8 t<(3 t+2)^2$. Proof of Part \eqref{irr:polyn2}. Suppose, by contradiction, that $q\in \mathbb{Z}$ is a root of $p_{4}(x)$. Therefore, $q$ divides the independent coefficient of $p_{4}(x)$. Thus $q\mid 2t(t+1)^{2}$. (Clearly, $q\ne 0$ and $1\le |q|\le 2t(t+1)^2$.) This implies that $q^{3}-(2t+1)q^{2}-(t+1)^{2}q+2t(t+1)^{2}=0$. Adding $(t+1)^2$ to both sides of this equation and factoring the left hand side we have \begin{equation}\label{eq10} (q-(2t+1))(q^{2}-(t+1)^2)=(t+1)^2. \end{equation} We now consider the following four cases.
{\bf Case $q=\pm (t+1)$ or $q=\pm t$.} Substituting $q=\pm (t+1)$ in \eqref{eq10} gives that the left side equals zero. That is a contradiction, because the right side is non-zero. Substituting $q=t$ or $q=-t$ in \eqref{eq10} gives $(t+1)(2t+1)$ or $(3t+1)(2t+1)$ on the left side, and neither equals $(t+1)^2$; that is again a contradiction. {\bf Case $(t+1)< |q|\le (2t+1)$.} Since $[q-(2t+1)]\le 0$ and $[q^{2}-(t+1)^2]> 0$, the left side of \eqref{eq10} is less than or equal to zero. That is a contradiction. {\bf Case $ |q|> (2t+1)$.} There is an integer $h\ge 1$ such that $|q|=2t+1+h$. Substituting this value of $q$ in $(q^{2}-(t+1)^2)$, we have that $[2(t+1)(t+h)+(t+h)^{2}]>(t+1)^2$. This proves that when we substitute the value of $q$ in the left hand side of \eqref{eq10} we have that $H[2(t+1)(t+h)+(t+h)^{2}]\ne(t+1)^2$, where $H=(\pm(2t+1+h)-(2t+1))$. {\bf Case $ 0<|q|<t$.} We analyze the case in which $1\le q<t$; the case $-t<q\le-1$ is similar and is omitted. There is an integer $h\ge 1$ such that $q=t-h$. Substituting this value of $q$ in the left hand side of \eqref{eq10} and simplifying we obtain $(t+h+1)(t+1+h(2t-h)+t)$. Since $h<t$, we have $2t-h>0$ and hence $h(2t-h)+t\ge 1$. This implies that $(t+1+h(2t-h)+t)>(t+1)$. Therefore, $(t+h+1)(t+1+h(2t-h)+t)>(t+1)^{2}$. This completes the proof. \end{proof} \begin{proposition} \label{Main:thm3} Let $\mathcal{G}_{w}$ be as in \eqref{graph:definition_noloops} on Page \pageref{graph:definition_noloops}. The graph $\mathcal{G}_{w}$ is integral if and only if $w=3t+1$. \end{proposition} \begin{proof} We first prove, by contradiction, that if $\mathcal{G}_{w}$ is integral then $w=3t+1$. Suppose that $\mathcal{G}_{w}$ is integral and that $w \ne 3t+1$. Therefore, $w=3t$ or $w=3t+2$. However, by Proposition \ref{Corollary:1} parts (b) and (c), and Lemma \ref{not:integral:new}, we know that the characteristic polynomials in both cases do not have all integer roots. Therefore, $\mathcal{G}_{ttt}$ and $\mathcal{G}_{t(t+1)(t+1)}$ are not integral. That is a contradiction. Thus, $w=3t+1$.
We now use cases to prove the sufficiency of $w$ for $\mathcal{G}_{w}$. {\bf Case $w=3t+2$.} Using Proposition \ref{Corollary:1} Part (c) and Lemma \ref{not:integral:new} Part (b) it is clear that $\mathcal{G}_{t(t+1)(t+1)}$ is not an integral graph. {\bf Case $w=3t$.} We analyze the first line in \eqref{graph:definition_noloops}. That is, the graph $\mathcal{G}_{ttt}=(K_{t}\sqcup K_{t})\nabla \overline{K}_{t}$. We know that the complete graph $K_{t}$ is integral and regular, therefore $(K_{t}\sqcup K_{t})$ is integral and regular. In addition, the empty graph $ \overline{K}_{t}$ is integral and regular. Substituting $r_{1}:=(t-1)$ and $r_{2}:=0$ (the regularities) and $n_{1}:=2t$ and $n_{2}:=t$ (the numbers of vertices of $(K_{t}\sqcup K_{t})$ and $\overline{K}_{t}$) into \eqref{perfect_square}, we have that $(r_{1}-r_{2})^{2}+4n_{1}n_{2}=9t^{2}-2t+1$. Note that $(3t-1)^2<9t^{2}-2t+1<(3t)^2$ for $t>0$. Thus, $9t^{2}-2t+1$ lies strictly between two consecutive squares, so it is not a perfect square for $t>0$. This proves that $\mathcal{G}_{ttt}$ is not an integral graph. {\bf Case $w=3t+1$.} We analyze the second line in \eqref{graph:definition_noloops}. That is, the graph $\mathcal{G}_{tt(t+1)}=(K_{t}\sqcup K_{t})\nabla \overline{K}_{t+1}$. Since $(K_{t}\sqcup K_{t})$ and $\overline{K}_{t+1}$ are integral and regular, substituting $r_{1}:=(t-1)$ and $r_{2}:=0$ (the regularities) and $n_{1}:=2t$ and $n_{2}:=t+1$ (the numbers of vertices of $(K_{t}\sqcup K_{t})$ and $\overline{K}_{t+1}$) into \eqref{perfect_square} we obtain $(r_{1}-r_{2})^{2}+4n_{1}n_{2}=(3t+1)^2$, a perfect square. This implies that $\mathcal{G}_{tt(t+1)}$ is an integral graph. \end{proof} Note. Since $G_1:=K_{t}\sqcup K_{t}$ and $G_2:=\overline{K}_{r}$ are both regular, using \cite[Table 2]{barik} we have that $\mathcal{G}_{tt(t+1)}$ is integral and that $\mathcal{G}_{ttt}$ is not integral. These give alternative proofs for Case $w=3t+1$ and Case $w=3t$ in the previous proof.
However, for $\mathcal{G}_{t(t+1)(t+1)}$ we cannot use the result from \cite[Table 2]{barik}. \subsection{Graphs with loops} Let us recall that in Proposition \ref{graph:structure} on Page \pageref{graph:structure} we showed that the graphs from $\mathcal{S}_{w} \bmod 2$ for $w=3t+r$, $0\le r\le 2$, are given by $\mathcal{G}_{ttt}^{*}$, $\mathcal{G}_{tt(t+1)}^{*}$ and $\mathcal{G}_{t(t+1)(t+1)}^{*}$ (defined in \eqref{graph:definition} on Page \pageref{graph:definition}). In this section we provide the characteristic polynomials of these graphs. We also provide a necessary and sufficient condition for the graphs to be integral; this is given in Theorem \ref{Main:thm4}. We start with the following proposition, which will be important for showing that the characteristic polynomials of $\mathcal{G}_{tt(t+1)}^{*}$ and $\mathcal{G}_{t(t+1)(t+1)}^{*}$ do not have all integer roots. \begin{proposition}\label{Propo:2:loops} If $t>0$, then \begin{enumerate} \item \label{part1} the characteristic polynomial of $\mathcal{G}_{ttt}^{*}$ is given by $$P^{*}(\lambda)=(-1)^{3t} \lambda^{3(t-1)}(\lambda-2t)(\lambda^{2}-t^{2}).$$ \item \label{part2} The characteristic polynomial of $\mathcal{G}_{tt(t+1)}^{*}$ is given by $$P^{*}(\lambda)=(-1)^{3t+1}\lambda^{(3t-2)}(\lambda-t)(\lambda^{2}-t\lambda-2t(t+1)).$$ \item \label{part3} The characteristic polynomial of $\mathcal{G}_{t(t+1)(t+1)}^{*}$ is given by $$P^{*}(\lambda)=(-1)^{3t+2}\lambda^{(3t-1)}(\lambda^{3}-(2t+1)\lambda^{2}-(t+1)^{2}\lambda+2t(t+1)^{2}).$$ \end{enumerate} \end{proposition} \begin{proof} We prove Part (1); the proofs of Parts (2) and (3) are similar (using Lemma \ref{not:integral:new}), so we omit them.
We start by observing that the adjacency matrix of $\mathcal{G}_{ttt}^{*}$, (using \eqref{eqn:3} on Page \pageref{eqn:3}) is given by \begin{equation} \label{eqn:5} A(\mathcal{G}_{ttt}^{*}) = \left[ {\begin{array}{ll} A(K_{t}^{*}\sqcup K_{t}^{*}) & ad\,^{T} \\ bc\,^T & A(\overline{K_{t}}) \end{array} } \right], \end{equation} where $A(K_{t}^{*}\sqcup K_{t}^{*})$ is the adjacency matrix of $(K_{t}^{*}\sqcup K_{t}^{*})$, $A(\overline{K_{t}})$ is the adjacency matrix of $\overline{K_{t}}$, $a^{T}=[1,1,\ldots,1]_{1\times 2t}$, $b\,^{T}=[1,1,\ldots,1]_{1\times t}$, $c=[1,1,\ldots,1]_{2t\times 1}$, and $b=d$. Therefore, $ad\,^{T}$ and $bc\,^{T}$ are both rectangular matrices of orders $2t\times t$ and $t\times 2t$, respectively where all their entries are 1. Next we observe that the characteristic polynomial of $A(\mathcal{G}_{ttt}^{*})$ is given by \eqref{eqn:4} on Page \pageref{eqn:4}. Note that $\widetilde{A}(K_{t}^{*}\sqcup K_{t}^{*})=ac^{T}-A(K_{t}^{*}\sqcup K_{t}^{*})$ is the adjacency matrix of the complete bipartite graph $K_{t,t}$. From Lemma \ref{BasicCharacteristicPoly} Part \eqref{BipartiteN} we have that the characteristic polynomial of $\widetilde{A}(K_{t}^{*}\sqcup K_{t}^{*})$ is given by $(-1)^{2t}\lambda^{2t-2}(\lambda^2-t^2)$. Similarly, we obtain that the characteristic polynomial of $\widetilde{A}(\overline{K_{t}})=bd\,^{T}-A(\overline{K_{t}})$ is given by $(-1)^{t}\lambda^{t-1}(\lambda-t)$. Finally, using similar techniques as described above, we see that the characteristic polynomials of $A(K_{t}^{*}\sqcup K_{t}^{*})$ and $A(\overline{K_{t}})$ are given by $(-1)^{2t}\lambda^{2(t-1)}(\lambda-t)^{2}$ and $(-1)^{t}\lambda^{t}$, respectively. 
Substituting all these characteristic polynomials in \eqref{eqn:4} on Page \pageref{eqn:4} and simplifying, we have that the characteristic polynomial $P^{*}(\lambda)$ of $A(\mathcal{G}_{ttt}^{*})$ is given by \begin{multline*} (-1)^{3t}[(\lambda)^{2t-2}(\lambda^{2}-t^{2})\lambda^{t}+(-1)^{t}\lambda^{2t-2}(\lambda-t)^{2}(-\lambda)^{t-1}(-\lambda-t)-\\ (-1)^{3t}(\lambda)^{2t-2}(\lambda^{2}-t^{2})(-\lambda)^{t-1}(-\lambda-t)]. \end{multline*} Simplifying further we obtain $P^{*}(\lambda)=(-1)^{3t}\lambda^{3(t-1)}(\lambda-2t)(\lambda^{2}-t^{2})$. \end{proof} It is clear from Proposition \ref{Propo:2:loops} Part \eqref{part1} that the characteristic polynomial of $\mathcal{G}_{ttt}^{*}$ has only integral roots. It is also clear from Proposition \ref{Propo:2:loops} Parts \eqref{part2} and \eqref{part3} that the characteristic polynomials of $\mathcal{G}_{tt(t+1)}^{*}$ and $\mathcal{G}_{t(t+1)(t+1)}^{*}$ do not have all integral roots. \begin{theorem} \label{Main:thm4} The graph $\mathcal{G}_{w}^{*}$ is integral if and only if $w=3t$ with $t\ge 1$. \end{theorem} \begin{proof} The proof that $\mathcal{G}_{w}^{*}$ is integral when $w=3t$ is straightforward from Proposition \ref{Propo:2:loops} Part \eqref{part1}. The fact that $\mathcal{G}_{w}^{*}$ is not integral when $w=3t+1$ and $w=3t+2$ follows from Proposition \ref{Propo:2:loops} Parts \eqref{part2} and \eqref{part3}. \end{proof} \subsection{Laplacian Characteristic Polynomials} The \emph{Laplacian} of a graph $G$ is the difference of the diagonal matrix of the vertex degrees of $G$ and the adjacency matrix of $G$. The graph $G$ is called \emph{Laplacian integral} if all eigenvalues of its Laplacian matrix are integers. Merris \cite{Merris} proves that a cograph is Laplacian integral. Therefore, the graphs $\mathcal{G}_{w}^{*}$ and $\mathcal{G}_{w}$, as defined in \eqref{graph:definition} and \eqref{graph:definition_noloops} on Page \pageref{graph:definition}, are Laplacian integral.
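As a minimal illustration of Laplacian integrality, take $t=1$: the graph $\mathcal{G}_{111}=(K_{1}\sqcup K_{1})\nabla \overline{K}_{1}$ is the path on three vertices, its Laplacian matrix is
\[ \left[ {\begin{array}{rrr} 1 & 0 & -1\\ 0 & 1 & -1\\ -1 & -1 & 2 \end{array} } \right], \]
and its Laplacian eigenvalues are $0$, $1$, and $3$, all integers.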
The following proposition presents the Laplacian characteristic polynomials of these graphs. \begin{proposition}\label{CorollaryLaplacian} For $t>0$ the following hold \begin{enumerate} \item the Laplacian characteristic polynomial of $\mathcal{G}_{ttt}$ is given by \[\mathcal{L}_{w}(\lambda)=(-1)^t\lambda(\lambda-t)(\lambda-3t)(\lambda-2t)^{3(t-1)}. \] \item The Laplacian characteristic polynomial of $\mathcal{G}_{tt(t+1)}$ is given by \[\mathcal{L}_{w}(\lambda)= (-1)^{t+1}\lambda(\lambda-(t+1))(\lambda-(3t+1))(\lambda-2t)^t(\lambda-(2t+1))^{2(t-1)}. \] \item The Laplacian characteristic polynomial of $\mathcal{G}_{t(t+1)(t+1)}$ is given by \[\mathcal{L}_{w}(\lambda)= (-1)^{t} \lambda(\lambda-(t+1))(\lambda-(3t+2))(\lambda-2(t+1))^{t}(\lambda-(2t+1))^{2t-1}. \] \end{enumerate} \end{proposition} \section{Complement Graphs and their characteristics}\label{complement:graphs} In this section we study the nature of the complements $\overline{\mathcal{G}}_{w}$ of the graphs $\mathcal{G}_{w}$, for $w=3t+r$ where $0\le r\le 2$, given in \eqref{graph:definition_noloops} on Page \pageref{graph:definition_noloops}. Specifically, we prove that the complement graph $\overline{\mathcal{G}}_{w}$ of $\mathcal{G}_{w}$ is the disjoint union of two graphs, namely a complete bipartite graph and a complete graph. We also show that $\overline{\mathcal{G}}_{ttt}$ is integral with five distinct eigenvalues while $\overline{\mathcal{G}}_{tt(t+1)}$ is integral with four distinct eigenvalues. The graph $\overline{\mathcal{G}}_{t(t+1)(t+1)}$ is not integral but has five distinct eigenvalues. We use the classic notation $K_i$ for the complete graph and $K_{i,j}$ for the complete bipartite graph. If $w=3t+r$, where $0\le r\le 2$, then \begin{equation}\label{complement:graph} \overline{\mathcal{G}}_{w}= \begin{cases} K_{t,t}\sqcup K_{t}, & \mbox{ if }\; r=0;\\ K_{t,t}\sqcup K_{t+1}, & \text{ if }\; r=1;\\ K_{t,t+1}\sqcup K_{t+1},& \text{ if }\; r=2.
\end{cases} \end{equation} Note that \eqref{complement:graph} is in fact the complement of \eqref{graph:definition_noloops}. That is, if $r=0$ (the cases $r\in \{1,2\}$ are similar), the complement of ${\mathcal{G}}_{w}$ is given by the disjoint union of the complements of $(K_{t}\sqcup K_{t})$ and $\overline{K}_{t}$ (see \cite{barik}). Therefore, $\overline{\mathcal{G}}_{w}=K_{t,t}\sqcup K_{t}$, since the complement of $(K_{t}\sqcup K_{t})$ is $K_{t,t}$ and the complement of $\overline{K}_{t}$ is $K_{t}$. \begin{theorem} Let $w=3t+r$ with $r\in \{0,1,2\}$ and $t>0$, and let $\overline{\mathcal{G}}_{w}$ be as defined in \eqref{complement:graph}. Then the following hold \begin{enumerate} \item if $r=0$, then $\overline{\mathcal{G}}_{w}$ is an integral graph with five distinct eigenvalues and characteristic polynomial $$\overline{P}(\lambda)=\lambda^{2(t-1)}(\lambda^{2}-t^{2})(\lambda-t+1)(\lambda+1)^{t-1}.$$ \item If $r=1$, then $\overline{\mathcal{G}}_{w}$ is an integral graph with four distinct eigenvalues and characteristic polynomial $$\overline{P}(\lambda)=\lambda^{2(t-1)}(\lambda-t)^{2}(\lambda+t)(\lambda+1)^{t}.$$ \item If $r=2$, then $\overline{\mathcal{G}}_{w}$ is not an integral graph but it does have five distinct eigenvalues, with characteristic polynomial $$\overline{P}(\lambda)=\lambda^{2t-1}(\lambda-t)(\lambda+1)^{t}(\lambda^{2}-t(t+1)).$$ \end{enumerate} \end{theorem} \begin{proof} We prove Part (1); the proofs of Parts (2) and (3) are similar, so we omit them. Since $\overline{\mathcal{G}}_{w}=K_{t,t}\sqcup K_{t}$ for $w=3t$, the result is straightforward using Lemma \ref{BasicCharacteristicPoly} Parts \eqref{CompleteN} and \eqref{BipartiteN}, together with the multiplicativity of the characteristic polynomial under disjoint union.
\end{proof} \section {Hosoya graphs and their complements}\label{Hosoya:graphs} The \emph{Hosoya triangle}, denoted by $\mathcal{H}_{F}$, is a classical triangular array where the entry in position $k$ (taken from left to right) of the $r$th row is equal to $H_{F(r,k)}:= F_{k}F_{r-k+1}$, where $1\le k \le r$ (see Table \ref{TablaHoyaF}). The formal definition, tables, and other results related to this triangle can be found in \cite{florezHiguitaJunesGCD, florezjunes, hosoya, koshy,koshy2} and \cite{sloane} at \seqnum{A058071}. Note that the recursive definition of the Hosoya triangle $\mathcal{H}_{F}$ is given by \eqref{Hosoya:Seq} on Page \pageref{Hosoya:Seq} with initial conditions $H_{F(1,1)}=$ $H_{F(2,1)}=$ $H_{F(2,2)}=$ $H_{F(3,2)}=1$. \begin{table} [!ht] \small \begin{center} \addtolength{\tabcolsep}{-1pt} \scalebox{1}{ \begin{tabular}{cccccccccccccccccc} &&&&&& 1 &&&&&&&\\ &&&&& 1 && 1 &&&&&&\\ &&&& 2 && 1 && 2 &&&&&\\ &&& 3 && 2 && 2 && 3 &&&&\\ && 5 && 3 && 4 && 3 && 5 &&\\ & 8 && 5 && 6 && 6 && 5 && 8 &\\ 13 && 8 && 10 && 9 && 10 && 8 && 13 \\ \end{tabular}} \end{center} \caption{ Hosoya's triangle.} \label{TablaHoyaF} \end{table} In this section we characterize the graphs associated with certain symmetric matrices in the Hosoya triangle $\mathcal{H}_{F}\bmod 2$ --called Hosoya graphs-- and the complements of these graphs. In particular, we show that the Hosoya graphs are integral and we present a necessary and sufficient condition for the complement Hosoya graphs to be integral. We would like to observe that the Hosoya graphs associated with these special symmetric matrices were presented for the first time in an article by Blair et al. \cite{Blair}. \subsection{Graphs from symmetric matrices in the Hosoya triangle} In this section we discuss the graphs associated with symmetric matrices in the Hosoya triangle $\mathcal{H}_{F}\bmod 2$. 
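To make the definition of $\mathcal{H}_{F}$ concrete: the third entry of the seventh row of Table \ref{TablaHoyaF} is
$$H_{F(7,3)}=F_{3}F_{7-3+1}=F_{3}F_{5}=2\cdot 5=10,$$
which is the value shown in the table.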
Let us recall that the entries of the triangle $\mathcal{H}_{F}\bmod 2$ are given by $H_{F(r,k)}:= F_{k}F_{r-k+1}$, where $1\le k \le r$ (see Page \pageref{triangles:1}). If $w= 3t+r$ with $t>0$ and $0\le r\le 2$, then we define the symmetric matrix $\mathcal{T}_{w}$ in the Hosoya triangle $\mathcal{H}_{F}$ in the following way: \begin{equation} \label{eq5} \mathcal{T}_{w} = \left[ {\begin{array}{lllll} H_{F(1,1)} & H_{F(2,1)} & H_{F(3,1)} & \cdots& H_{F(w,1)} \\ H_{F(2,2)} & H_{F(3,2)}& H_{F(4,2)} & \cdots & H_{F(w+1,2)}\\ \vdots &\vdots&\vdots&\ddots &\vdots\\ H_{F(w,w)} & H_{F(w+1,w)} & H_{F(w+2,w)} & \cdots & H_{F(2w-1,w)} \end{array} } \right]_{w\times w}. \end{equation} Let $T_{w}$ be the matrix $\mathcal{T}_{w} \bmod 2$. That is, $T_{w}$ is the matrix $[t_{ij}]_{w\times w}$ where $t_{ij}=H_{F(i,j)} \bmod 2$. Let $\Theta_{w}$ be the graph of ${T}_{w}$; see Table \ref{tabla3}. In Proposition \ref{Hosoya:graph} we show that $\Theta_{w}$ is integral. Note that if $w=3t+r$ with $t> 0$ and $0\le r \le 2$, then $T_{w}=uu^{T}$, where $u\in\{0,1\}^{w}$ has $u_{i}=1$ exactly when $3\nmid i$; in particular, $T_{w}$ has rank one. It follows that $\Theta_{w}$ is an integral graph with exactly one non-zero eigenvalue, $\lambda=u^{T}u=2t+r$. \begin{proposition}\label{Hosoya:graph} Let $\Theta_{w}$ be the graph from $T_{w}$. If $w=3t+r$ with $t> 0$ and $0\le r \le 2 $, then $\Theta_{w}$ is the complete graph $K_{2t+r}$ with loops at every vertex together with $t$ isolated vertices. Furthermore, $\Theta_{w}$ is an integral graph. \end{proposition} \begin{proof} It is known that the Fibonacci number $F_{n}\equiv 0 \pmod 2 \iff 3\mid n$. This and the definition of $\mathcal{T}_{w}$ (see \eqref{eq5}) imply that every third row and every third column of $\mathcal{T}_{w}$ are formed by even numbers and that the remaining rows and columns are formed by odd numbers only. Thus, if $t_{ij}$ is an entry of $\mathcal{T}_{w}$, then $t_{ij}\equiv 0 \pmod 2 \iff i \equiv 0 \pmod 3 \text{ or } j \equiv 0 \pmod 3$.
This and $w=3t+r$ imply that $T_{w}$ contains $t$ columns and $t$ rows with zeros as entries. The remaining $2t+r$ rows and columns have ones as entries. These two features of $T_{w}$ give $(K_{2t+r}^{*}\sqcup \overline{K_{t}})$ (the disjoint union of the complete graph on $2t+r$ vertices with loops at every vertex and $t$ isolated vertices). Finally, from the discussion above we have that the only non-zero eigenvalue of $\Theta_{w}$ is the integer $\lambda=2t+r$, and the remaining eigenvalues are all $0$. Therefore, $\Theta_{w}$ is an integral graph. This completes the proof. \end{proof} \begin{table}[!ht] \begin{tabular}{|l|c|c|c|} \hline $k$ & $3t$ & $3t+1$ & $3t+2$ \\ \hline \hline 1 & $\vcenter{\hbox{\includegraphics[scale=.3]{Graph1Symm.eps}}}$ & $\vcenter{\hbox{\includegraphics[scale=.085]{Graph5Symm.eps}}}$& $\vcenter{\hbox{\includegraphics[scale=.3]{Graph2.eps}}}$\\ \hline 2 & $\vcenter{\hbox{\includegraphics[scale=.3]{Graph2Symm.eps}}}$ & $\vcenter{\hbox{\includegraphics[scale=.11]{Graph6Symm.eps}}}$& $\vcenter{\hbox{\includegraphics[scale=.3]{Graph3.eps}}}$\\ \hline 3 & $\vcenter{\hbox{\includegraphics[scale=.33]{Graph3Symm.eps}}}$ & $\vcenter{\hbox{\includegraphics[scale=.3]{Graph7Symm.eps}}}$ &$\vcenter{\hbox{\includegraphics[scale=.3]{Graph4.eps}}}$\\ \hline 4 & $\vcenter{\hbox{\includegraphics[scale=.3]{Graph4Symm.eps}}}$ & $\vcenter{\hbox{\includegraphics[scale=.34]{Graph8Symm.eps}}}$ & $\vcenter{\hbox{\includegraphics[scale=.33]{Graph9Symm.eps}}}$\\ \hline \end{tabular} \caption{Graphs $\Theta_{w}$.} \label{tabla3} \end{table} \subsection{Complements of the Hosoya Graph} In this section we first discuss the necessary and sufficient condition for graphs of the form $G:= K_{n}\nabla \overline{K}_{r}$, for $n,r\in \mathbb{Z}_{>0}$, to be integral. These graphs generalize the complements of the Hosoya graphs.
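A small worked instance of the criterion discussed next: take $n=2$ and $r=1$. Then $p=2$ and $q=1$ satisfy $nr=pq=2$ and $n-1=p-q=1$, and indeed $K_{2}\nabla \overline{K}_{1}=K_{3}$ is integral; its characteristic polynomial is
$$P(\lambda)=(-1)^{1}\lambda^{0}(\lambda+1)^{1}(\lambda^{2}-\lambda-2)=-(\lambda-2)(\lambda+1)^{2},$$
in agreement with Lemma \ref{BasicCharacteristicPoly} Part \eqref{CompleteN} for $K_{3}$.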
It is easy to see (using \cite[Table 2]{barik}) that the graph $G:=K_{n} \nabla \overline{K}_{r}$ is integral if and only if there are $p, q\in \mathbb{Z}_{>0}$ such that $nr=pq$ with $n-1=p-q$. The characteristic polynomial of $G$ is given by $P(\lambda)=(-1)^r\lambda^{r-1} (\lambda+1)^{(n-1)}(\lambda^2-(n-1)\lambda-nr)$. We observe that the complements of the Hosoya graphs, denoted by $\overline{\Theta}_{w}$, can be described as the join of a complete graph and an empty graph. In fact, $\overline{\Theta}_{w}= K_{t}\nabla\overline{K}_{w-t}$ with $w=3t+r$ for $0\le r\le 2$. These graphs are special cases of the graphs $K_{n}\nabla \overline{K}_{r}$ discussed in the previous paragraph. Thus, considering the graphs $\overline{\Theta}_{w}$, it is easy to check that $P(\lambda)$ has integral roots if and only if $w=3t+2$. In other words, the graph $\overline{\Theta}_{w}$ is integral if and only if $w=3t+2$. Recently, Be\'{a}ta et al. \cite{beata} have shown a combinatorial connection with graphs of the form $K_{n} \nabla \overline{K}_{r}$. \hrule \noindent 2010 {\it Mathematics Subject Classification}: Primary 68R11; Secondary 11B39. \noindent \emph{Keywords: } Fibonacci number, determinant Hosoya triangle, adjacency matrix, eigenvalue, integral graph, cograph. \end{document}
\begin{document} \title[Topos-Based Logic for Quantum Systems and Bi-Heyting Algebras]{Topos-Based Logic for Quantum Systems\\and Bi-Heyting Algebras} \author{Andreas D\"oring} \address{Andreas D\"oring\newline \indent Clarendon Laboratory\newline \indent Department of Physics\newline \indent University of Oxford\newline \indent Parks Road\newline \indent Oxford OX1 3PU, UK} \email{doering@atm.ox.ac.uk} \date{December 5, 2013} \subjclass{Primary 81P10, 03G12; Secondary 06D20, 06C99} \keywords{Bi-Heyting algebra, Topos, Quantum, Logic} \begin{abstract} To each quantum system, described by a von Neumann algebra of physical quantities, we associate a complete bi-Heyting algebra. The elements of this algebra represent contextualised propositions about the values of the physical quantities of the quantum system. \end{abstract} \maketitle \section{Introduction} \label{Sec_Introd} Quantum logic started with Birkhoff and von Neumann's seminal article \cite{BvN36}. Since then, non-distributive lattices with an orthocomplement (and generalisations thereof) have been used as representatives of the algebra of propositions about the quantum system at hand. There are a number of well-known conceptual and interpretational problems with this kind of `logic'. For a review of standard quantum logic(s), see the article \cite{DCG02}. In the last few years, a different form of logic for quantum systems based on generalised spaces in the form of presheaves and topos theory has been developed by Chris Isham and this author \cite{DI(1),DI(2),DI(3),DI(4),Doe07b,DI(Coecke)08,Doe09,Doe11}. This new form of logic for quantum systems is based on a certain Heyting algebra $\Subcl{\Sig}$ of clopen, i.e., closed and open subobjects of the spectral presheaf $\Sig$. This generalised space takes the r\^ole of a state space for the quantum system. (All technical notions are defined in the main text.) 
In this way, one obtains a well-behaved intuitionistic form of logic for quantum systems which moreover has a topological underpinning. In this article, we will continue the development of the topos-based form of logic for quantum systems. The main new observation is that the complete Heyting algebra $\Subcl{\Sig}$ of clopen subobjects representing propositions is also a complete co-Heyting algebra. Hence, we relate quantum systems to complete bi-Heyting algebras in a systematic way. This includes two notions of implication and two kinds of negation, as discussed in the following sections. The plan of the paper is as follows: in section \ref{Sec_Background}, we briefly give some background on standard quantum logic and the main ideas behind the new topos-based form of logic for quantum systems. Section \ref{Sec_BiHeytAlgs} recalls the definitions and main properties of Heyting, co-Heyting and bi-Heyting algebras, section \ref{Sec_SigEtc} introduces the spectral presheaf $\Sig$ and the algebra $\Subcl{\Sig}$ of its clopen subobjects. In section \ref{Sec_RepOfPropsAndBiHeyting}, the link between standard quantum logic and the topos-based form of quantum logic is established and it is shown that $\Subcl{\Sig}$ is a complete bi-Heyting algebra. In section \ref{Sec_NegsAndRegs}, the two kinds of negations associated with the Heyting resp. co-Heyting structure are considered. Heyting-regular and co-Heyting regular elements are characterised and a tentative physical interpretation of the two kinds of negation is given. Section \ref{Sec_Conclusion} concludes. Throughout, we assume some familiarity with the most basic aspects of the theory of von Neumann algebras and with basics of category and topos theory. The text is interspersed with some physical interpretations of the mathematical constructions. \section{Background} \label{Sec_Background} \textbf{Von Neumann algebras.} In this article, we will discuss structures associated with von Neumann algebras, see e.g. 
\cite{KR83}. This class of algebras is general enough to describe a large variety of quantum mechanical systems, including systems with symmetries and/or superselection rules. The fact that each von Neumann algebra has `sufficiently many' projections makes it attractive for quantum logic. More specifically, each von Neumann algebra is generated by its projections, and the spectral theorem holds in a von Neumann algebra, providing the link between self-adjoint operators (representing physical quantities) and projections (representing propositions). The reader not familiar with von Neumann algebras can always take the algebra $\BH$ of all bounded operators on a separable, complex Hilbert space $\cH$ as an example of a von Neumann algebra. If the Hilbert space $\cH$ is finite-dimensional, $\dim\cH=n$, then $\BH$ is nothing but the algebra of complex $n\times n$-matrices. \textbf{Standard quantum logic.} From the perspective of quantum logic, the key fact is that the projection operators in a von Neumann algebra $\N$ form a complete orthomodular lattice $\PN$. Starting from Birkhoff and von Neumann \cite{BvN36}, such lattices (and various kinds of generalisations, which we do not consider here) have been considered as quantum logics, or more precisely as algebras representing propositions about quantum systems. The kind of propositions that we are concerned with (at least in standard quantum logic) are of the form ``the physical quantity $A$ has a value in the Borel set $\De$ of real numbers'', which is abbreviated as ``$\Ain\De$''. These propositions are pre-mathematical entities that refer to the `world out there'. In standard quantum logic, propositions of the form ``$\Ain\De$'' are represented by projection operators via the spectral theorem. 
If, as we always assume, the physical quantity $A$ is described by a self-adjoint operator $\hA$ in a given von Neumann algebra $\N$, or is affiliated with $\N$ in the case that $\hA$ is unbounded, then the projection corresponding to ``$\Ain\De$'' lies in $\PN$. (For details on the spectral theorem see any book on functional analysis, e.g. \cite{KR83}.) Following Birkhoff and von Neumann, one then interprets the lattice operations $\meet,\join$ in the projection lattice $\PN$ as logical connectives between the propositions represented by the projections. In this way, the meet $\meet$ becomes a conjunction and the join $\join$ a disjunction. Moreover, the orthogonal complement of a projection, $\hP':=\hat 1-\hP$, is interpreted as negation. Crucially, meets and joins do not distribute over each other. In fact, $\PN$ is a distributive lattice if and only if $\N$ is abelian if and only if all physical quantities considered are mutually compatible, i.e., co-measurable. Quantum systems always have some incompatible physical quantities, so $\N$ is never abelian and $\PN$ is non-distributive. This makes the interpretation of $\PN$ as an algebra of propositions somewhat dubious. There are many other conceptual difficulties with quantum logics based on orthomodular lattices, see e.g. \cite{DCG02}. \textbf{Contexts and coarse-graining.} The topos-based form of quantum logic that was established in \cite{DI(2)} and developed further in \cite{Doe07b,DI(Coecke)08,Doe09,Doe11} is fundamentally different from standard quantum logic. For some conceptual discussion, see in particular \cite{Doe09}. Two key ideas are \emph{contextuality} and \emph{coarse-graining} of propositions. Contextuality has of course been considered widely in foundations of quantum theory, in particular since Kochen and Specker's seminal paper \cite{KS67}. Yet, the systematic implementation of contextuality in the language of presheaves is comparatively new. 
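The failure of distributivity in $\PN$ noted above can already be seen concretely for $\N=\mc B(\bbC^2)$; the following is a minimal numerical sketch (assuming numpy is available; the helper names are ours) with three incompatible rank-$1$ projections, where the meet is the projection onto the intersection of ranges and the join is the projection onto the span:

```python
import numpy as np

def proj(v):
    """Orthogonal projection onto the line spanned by v."""
    v = np.asarray(v, dtype=float).reshape(-1, 1)
    v = v / np.linalg.norm(v)
    return v @ v.T

def join(P, Q):
    """Projection onto ran P + ran Q (orthonormal basis of the column span via SVD)."""
    U, s, _ = np.linalg.svd(np.hstack([P, Q]))
    B = U[:, : int(np.sum(s > 1e-10))]
    return B @ B.T

def meet(P, Q):
    """P ∧ Q = (P' ∨ Q')' by De Morgan, with ' the orthocomplement."""
    I = np.eye(P.shape[0])
    return I - join(I - P, I - Q)

P, Q, R = proj([1, 0]), proj([1, 1]), proj([1, -1])  # three distinct rank-1 projections
lhs = meet(R, join(P, Q))            # R ∧ (P ∨ Q) = R ∧ 1 = R
rhs = join(meet(R, P), meet(R, Q))   # (R ∧ P) ∨ (R ∧ Q) = 0
assert np.allclose(lhs, R) and np.allclose(rhs, 0.0)  # distributivity fails
```

Here $R\meet(P\join Q)=R$ while $(R\meet P)\join(R\meet Q)=0$, so the projection lattice of $\mc B(\bbC^2)$ is not distributive.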
It first showed up in work by Chris Isham and Jeremy Butterfield \cite{Ish97,IB98,IB99,IB00,IB00b,IB02,Ish05} and was substantially developed by this author and Isham. For recent, related work see also \cite{HLS09,CHLS09,HLS09b,HLS11} and \cite{AB11,AMS11}. Physically, a context is nothing but a set of compatible, i.e., co-measurable physical quantities $(A_i)_{i\in I}$. Such a set determines and is determined by an abelian von Neumann subalgebra $V$ of the non-abelian von Neumann algebra $\N$ of (all) physical quantities. Each physical quantity $A_i$ in the set is represented by some self-adjoint operator $\hA_i$ in $V$.\footnote{From here on, we assume that all the physical quantities $A_i$ correspond to \emph{bounded} self-adjoint operators that lie in $\N$. Unbounded self-adjoint operators affiliated with $\N$ can be treated in a straightforward manner.} In fact, $V$ is generated by the operators $(\hA_i)_{i\in I}$ and the identity $\hat 1$, in the sense that $V=\{\hat 1, \hA_i \mid i\in I\}''$, where $\{S\}''$ denotes the double commutant of a set $S$ of operators (see e.g. \cite{KR83}).\footnote{We will often use the notation $V'$ for a subalgebra of $V$, which does \emph{not} mean the commutant of $V$. We trust that this will not lead to confusion.} Each abelian von Neumann subalgebra $V$ of $\N$ will be called a context, thus identifying the mathematical notion and its physical interpretation. The set of all contexts will be denoted $\VN$. Each context provides one of many `classical perspectives' on a quantum system. We partially order the set of contexts $\VN$ by inclusion. A smaller context $V'\subset V$ represents a `poorer', more limited classical perspective containing fewer physical quantities than $V$. Each context $V\in\VN$ has a complete Boolean algebra $\PV$ of projections, and $\PV$ clearly is a sublattice of $\PN$. 
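The double commutant $\{\hat 1, \hA_i \mid i\in I\}''$ can be computed explicitly in small matrix algebras; the following is a minimal sketch (assuming numpy is available; the helper name `commutant` is ours, and for simplicity we work over real matrices and with the row-major vectorisation used by numpy). It checks that a single diagonal self-adjoint generator in $M_2$ generates the $2$-dimensional diagonal algebra:

```python
import numpy as np

def commutant(gens, n):
    """Basis of {X in M_n(R) : Xs = sX for all s in gens}.  With row-major
    vectorisation, vec(sX - Xs) = (s ⊗ I - I ⊗ sᵀ) vec(X), so the commutant
    is the null space of the stacked constraint matrix."""
    I = np.eye(n)
    M = np.vstack([np.kron(s, I) - np.kron(I, s.T) for s in gens])
    _, sv, Vh = np.linalg.svd(M)
    rank = int(np.sum(sv > 1e-10))
    return [row.reshape(n, n) for row in Vh[rank:]]   # null-space basis as matrices

sz = np.diag([1.0, -1.0])        # one self-adjoint generator
V = commutant([sz], 2)           # the commutant {sz}'
VV = commutant(V, 2)             # the double commutant {sz}''
assert len(V) == 2 and len(VV) == 2   # both are the 2-dim algebra of diagonal matrices
```

Adding the identity $\hat 1$ to the generating set does not change the result, since $\hat 1$ commutes with everything.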
Propositions ``$\Ain\De$'' about the values of physical quantities $A$ in a (physical) context correspond to projections in the (mathematical) context $V$. Since $\PV$ is a Boolean algebra, there are Boolean algebra homomorphisms $\ld:\PV\ra\{0,1\}\simeq\{\false,\true\}$, which can be seen as truth-value assignments as usual. Hence, there are consistent truth-value assignments for all propositions ``$\Ain\De$'' about physical quantities \emph{within} a context. The key result by Kochen and Specker \cite{KS67} shows that for $\N=\BH$, $\dim\cH\geq 3$, there are no truth-value assignments for \emph{all} contexts simultaneously in the following sense: there is no family of Boolean algebra homomorphisms $(\ld_V:\PV\ra\{0,1\})_{V\in\VN}$ such that if $V'=V\cap\tilde V$ is a subcontext of both $V$ and $\tilde V$, then $\ld_{V'}=\ld_V|_{V'}=\ld_{\tilde V}|_{V'}$, where $\ld_V|_{V'}$ is the restriction of $\ld_V$ to the subcontext $V'$, and analogously $\ld_{\tilde V}|_{V'}$. As Isham and Butterfield realised \cite{IB98,IB00}, this means that a certain presheaf has no global elements. In \cite{Doe05}, it is shown that this result generalises to all von Neumann algebras without a type $I_2$-summand. In the topos approach to quantum theory, propositions are represented not by projections, but by suitable subobjects of a quantum state space. An obstacle arises since the Kochen-Specker theorem seems to show that such a quantum state space cannot exist. Yet, if one considers presheaves instead of sets, this problem can be overcome. The presheaves we consider are `varying sets' $(\ps S_V)_{V\in\VN}$, indexed by contexts. Whenever $V'\subset V$, there is a function defined from $\ps S_V$, the set associated with the context $V$, to $\ps S_{V'}$, the set associated with the smaller context $V'$. This makes $\ps S=(\ps S_V)_{V\in\VN}$ into a contravariant, $\Set$-valued functor. 
Since by contravariance we go from $\ps S_V$ to $\ps S_{V'}$, there is a built-in idea of \emph{coarse-graining}: $V$ is the bigger context, containing more self-adjoint operators and more projections than the smaller context $V'$, so we can describe more physics from the perspective of $V$ than from $V'$. Typically, the presheaves defined over contexts will mirror this fact: the component $\ps S_V$ at $V$ contains more information (in a suitable sense, to be made precise in the examples in section \ref{Sec_SigEtc}) than $\ps S_{V'}$, the component at $V'$. Hence, the presheaf map $\ps S(i_{V'V}):\ps S_V\ra\ps S_{V'}$ will implement a form of coarse-graining of the information available at $V$ to that available at $V'$. The quantum state space will be called the spectral presheaf $\Sig$; its subobjects form a (complete) Heyting algebra. This is typical, since the subobjects of any object in a topos form a Heyting algebra. Heyting algebras are the algebraic representatives of (propositional) intuitionistic logics. In fact, we will not consider \emph{all} subobjects of the spectral presheaf, but rather the so-called clopen subobjects. The latter also form a complete Heyting algebra, as was first shown in \cite{DI(2)} and is proven here in a different way, using Galois connections, in section \ref{Sec_RepOfPropsAndBiHeyting}. The difference between the set of all subobjects of the spectral presheaf and the set of clopen subobjects is analogous to the difference between all subsets of a classical state space and (equivalence classes modulo null subsets of) measurable subsets. Together with the representation of states (which we will not discuss here, but see \cite{DI(2),Doe08,Doe09,DI12}), these constructions provide an intuitionistic form of logic for quantum systems. Moreover, there is a clear topological underpinning, since the quantum state space $\Sig$ is a generalised space associated with the nonabelian algebra $\N$. 
The construction of the presheaf $\Sig$ and its algebra of subobjects incorporates the concepts of contextuality and coarse-graining in a direct way, see sections \ref{Sec_SigEtc} and \ref{Sec_RepOfPropsAndBiHeyting}. \section{Bi-Heyting algebras} \label{Sec_BiHeytAlgs} The use of bi-Heyting algebras in superintuitionistic logic was developed by Rauszer \cite{Rau73,Rau77}. Lawvere emphasised the importance of co-Heyting and bi-Heyting algebras in category and topos theory, in particular in connection with continuum physics \cite{Law86,Law91}. Reyes, with Makkai \cite{MR95} and Zolfaghari \cite{RZ96}, connected bi-Heyting algebras with modal logic. In a recent paper, Bezhanishvili et al. \cite{BBGK10} prove (among other things) new duality theorems for bi-Heyting algebras based on bitopological spaces. Majid has suggested using Heyting and co-Heyting algebras within a tentative representation-theoretic approach to the formulation of quantum gravity \cite{Maj95,Maj08}. As far as we are aware, nobody has connected quantum systems and their logic with bi-Heyting algebras before. The following definitions are standard and can be found in various places in the literature; see e.g. \cite{RZ96}. A \textbf{Heyting algebra $H$} is a lattice with bottom element $0$ and top element $1$ which, regarded as a poset category, is cartesian closed. In other words, $H$ is a lattice such that for any two elements $A,B\in H$, there exists an exponential $A\Rightarrow B$, called the \textbf{Heyting implication (from $A$ to $B$)}, which is characterised by the adjunction \begin{equation} C\leq(A\Rightarrow B)\quad\text{if and only if}\quad C\wedge A\leq B. \end{equation} This means that the product (meet) functor $A\meet\_:H\ra H$ has a right adjoint $A\Rightarrow\_:H\ra H$ for all $A\in H$. It is straightforward to show that the underlying lattice of a Heyting algebra is distributive. 
If the underlying lattice is complete, then the adjoint functor theorem for posets shows that for all $A\in H$ and all families $(A_i)_{i\in I}\subseteq H$, the following infinite distributivity law holds: \begin{equation} A\meet\bjoin_{i\in I}A_i = \bjoin_{i\in I}(A\meet A_i). \end{equation} The \textbf{Heyting negation} is defined as \begin{align} \neg: H &\lra H^{\op}\\ \nonumber A &\lmt (A\Rightarrow 0). \end{align} The defining adjunction shows that $\neg A=\bjoin\{B\in H \mid A\meet B=0\}$, i.e., $\neg A$ is the largest element in $H$ such that $A\meet\neg A=0$. Some standard properties of the Heyting negation are: \begin{align} & A\leq B\text{ implies }\neg A\geq\neg B,\\ & \neg\neg A\geq A,\\ & \neg\neg\neg A=\neg A,\\ & \neg A\join A\leq 1. \end{align} Interpreted in logical terms, the last property on this list means that in a Heyting algebra the law of excluded middle need not hold: in general, the disjunction between a proposition represented by $A\in H$ and its Heyting negation (also called Heyting complement, or pseudo-complement) $\neg A$ can be smaller than $1$, which represents the trivially true proposition. Heyting algebras are algebraic representatives of (propositional) intuitionistic logics. A canonical example of a Heyting algebra is the topology $\mc T$ of a topological space $(X,\mc T)$, with unions of open sets as joins and intersections as meets. A \textbf{co-Heyting algebra $J$} (also called a \textbf{Brouwer algebra}) is a lattice with bottom element $0$ and top element $1$ such that the coproduct (join) functor $A\join\_:J\ra J$ has a left adjoint $A\Leftarrow\_:J\ra J$, called the \textbf{co-Heyting implication (from $A$)}. It is characterised by the adjunction \begin{equation} (A\Leftarrow B)\leq C\text{ iff }A\leq B\join C. \end{equation} It is straightforward to show that the underlying lattice of a co-Heyting algebra is distributive. 
If the underlying lattice is complete, then the adjoint functor theorem for posets shows that for all $A\in J$ and all families $(A_i)_{i\in I}\subseteq J$, the following infinite distributivity law holds: \begin{equation} A\join\bmeet_{i\in I}A_i = \bmeet_{i\in I}(A\join A_i). \end{equation} The \textbf{co-Heyting negation} is defined as \begin{align} \sim: J &\lra J^{\op}\\ A &\lmt (1\Leftarrow A). \end{align} The defining adjunction shows that $\sim A=\bmeet\{B\in J \mid A\join B=1\}$, i.e., $\sim A$ is the smallest element in $J$ such that $A\join \sim A=1$. Some properties of the co-Heyting negation are: \begin{align} & A\leq B\text{ implies }\sim A\geq\sim B,\\ & \sim\sim A\leq A,\\ &\sim\sim\sim A=\sim A,\\ &\sim A\meet A\geq 0. \end{align} Interpreted in logical terms, the last property on this list means that in a co-Heyting algebra the law of noncontradiction need not hold: in general, the conjunction between a proposition represented by $A\in J$ and its co-Heyting negation $\sim A$ can be larger than $0$, which represents the trivially false proposition. Co-Heyting algebras are algebraic representatives of (propositional) paraconsistent logics. We will not discuss paraconsistent logic in general, but in the final section \ref{Sec_NegsAndRegs}, we will give an interpretation of the co-Heyting negation showing up in the form of quantum logic to be presented in this article. A canonical example of a co-Heyting algebra is given by the closed sets $\mc C$ of a topological space, with unions of closed sets as joins and intersections as meets. Of course, Heyting algebras and co-Heyting algebras are dual notions. The opposite $H^{\op}$ of a Heyting algebra is a co-Heyting algebra and vice versa. A \textbf{bi-Heyting algebra $K$} is a lattice which is a Heyting algebra and a co-Heyting algebra. 
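Both negations, and the failure of excluded middle resp. noncontradiction, can be computed explicitly for a small topological space; the following is a minimal Python sketch (the chosen topology and the helper names are ours), with the open sets as Heyting algebra and the closed sets as co-Heyting algebra:

```python
from functools import reduce

X = frozenset({0, 1, 2})
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), X]   # a topology on X
closeds = [X - U for U in opens]                              # its closed sets

def heyting_neg(A):
    """¬A: the largest open set disjoint from A (the join of all such opens)."""
    return reduce(frozenset.union, (U for U in opens if not (U & A)), frozenset())

def coheyting_neg(A):
    """~A: the smallest closed set B with A ∪ B = X (the closure of the complement)."""
    return reduce(frozenset.intersection, (C for C in closeds if (C | A) == X), X)

A = frozenset({0})                              # open: ¬A = ∅, so
assert heyting_neg(A) | A != X                  # A ∨ ¬A < 1 (no excluded middle)

B = frozenset({1, 2})                           # closed: ~B = X, so
assert coheyting_neg(B) & B != frozenset()      # B ∧ ~B > 0 (no noncontradiction)
```

The example also illustrates the duality: the co-Heyting computation on closed sets mirrors the Heyting computation on open sets with meets and joins interchanged.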
For each $A\in K$, the functor $A\meet\_:K \ra K$ has a right adjoint $A\Rightarrow\_:K\ra K$, and the functor $A\join\_:K\ra K$ has a left adjoint $A\Leftarrow\_:K\ra K$. A bi-Heyting algebra $K$ is called complete if it is complete as a Heyting algebra and complete as a co-Heyting algebra. A canonical example of a bi-Heyting algebra is a Boolean algebra $\mc B$. (Note that by Stone's representation theorem, each Boolean algebra is isomorphic to the algebra of clopen, i.e., closed and open, subsets of its Stone space. This gives the connection with the topological examples.) In a Boolean algebra, we have for the Heyting negation that, for all $A\in\mc B$, \begin{equation} A\join\neg A=1, \end{equation} which is the characterising property of the co-Heyting negation. In fact, in a Boolean algebra, $\neg =\sim$. \section{The spectral presheaf of a von Neumann algebra and clopen subobjects} \label{Sec_SigEtc} With each von Neumann algebra $\N$, we associate a particular presheaf, the so-called spectral presheaf. A distinguished family of subobjects, the so-called clopen subobjects, is defined and its interpretation is given: clopen subobjects can be seen as families of local propositions, compatible with respect to coarse-graining. The constructions presented here summarise those discussed in \cite{DI(1),DI(2),DI(Coecke)08,DI12}. Let $\N$ be a von Neumann algebra, and let $\VN$ be the set of its abelian von Neumann subalgebras, partially ordered under inclusion. We only consider subalgebras $V\subset\N$ which have the same unit element as $\N$, given by the identity operator $\hat 1$ on the Hilbert space on which $\N$ is represented. By convention, we exclude the trivial subalgebra $V_0=\bbC\hat 1$ from $\VN$. (This will play an important r\^ole in the discussion of the Heyting negation in section \ref{Sec_NegsAndRegs}.) The poset $\VN$ is called the \textbf{context category of the von Neumann algebra $\N$}. 
For $V',V\in\VN$ such that $V'\subset V$, the inclusion $i_{V'V}:V'\hookrightarrow V$ restricts to a morphism $i_{V'V}|_{\mc P(V')}:\mc P(V')\ra\PV$ of complete Boolean algebras. In particular, $i_{V'V}$ preserves all meets, hence it has a left adjoint \begin{align} \deo_{V,V'}: \PV &\lra \mc P(V')\\ \nonumber \hP &\lmt \deo_{V,V'}(\hP)=\bmeet\{\hQ\in\mc P(V') \mid \hQ\geq \hP\} \end{align} that preserves all joins, i.e., for all families $(\hP_i)_{i\in I}\subseteq\PV$, it holds that \begin{equation} \deo_{V,V'}(\bjoin_{i\in I}\hP_i)=\bjoin_{i\in I}\deo_{V,V'}(\hP_i), \end{equation} where the join on the left hand side is taken in $\PV$ and the join on the right hand side is in $\mc P(V')$. If $W\subset V'\subset V$, then $\deo_{V,W}=\deo_{V',W}\circ\deo_{V,V'}$, obviously. We note that distributivity of the lattices $\PV$ and $\mc P(V')$ plays no r\^ole here. If $\N$ is a von Neumann algebra and $\cM$ is any von Neumann subalgebra such that their unit elements coincide, $\hat 1_{\cM}=\hat 1_{\N}$, then there is a join-preserving map \begin{align} \deo_{\N,\cM}: \PN &\lra \mc P(\cM)\\ \nonumber \hP &\lmt \deo_{\N,\cM}(\hP)=\bmeet\{\hQ\in \mc P(\cM) \mid \hQ\geq \hP\}. \end{align} Recall that the Gel'fand spectrum $\Sigma(A)$ of an abelian $C^*$-algebra $A$ is the set of algebra homomorphisms $\ld:A\ra\bbC$. Equivalently, the elements of the Gel'fand spectrum $\Sigma(A)$ are the pure states of $A$. The set $\Sigma(A)$ is given the relative weak*-topology (as a subset of the dual space of $A$), which makes it into a compact Hausdorff space. By Gel'fand-Naimark duality, $A\simeq C(\Sigma(A))$, that is, $A$ is isometrically $*$-isomorphic to the abelian $C^*$-algebra $C(\Sigma(A))$ of continuous, complex-valued functions on $\Sigma(A)$, equipped with the supremum norm. If $A$ is an abelian von Neumann algebra, then $\Sigma(A)$ is extremely disconnected. We now define the main object of interest: \begin{definition} Let $\N$ be a von Neumann algebra. 
The \textbf{spectral presheaf $\Sig$ of $\N$} is the presheaf over $\VN$ given \begin{itemize} \item [(a)] on objects: for all $V\in\VN$, $\Sig_V:=\Sigma(V)$, the Gel'fand spectrum of $V$, \item [(b)] on arrows: for all inclusions $i_{V'V}:V'\hookrightarrow V$, \begin{align} \Sig(i_{V'V}): \Sig_V &\lra \Sig_{V'}\\ \nonumber \ld &\lmt \ld|_{V'}. \end{align} \end{itemize} \end{definition} The restriction maps $\Sig(i_{V'V})$ are well-known to be continuous, surjective maps with respect to the Gel'fand topologies on $\Sig_V$ and $\Sig_{V'}$, respectively. They are also open and closed, see e.g. \cite{DI(2)}. We equip the spectral presheaf with a distinguished family of subobjects (which are subpresheaves): \begin{definition} A subobject $\ps S$ of $\Sig$ is called \textbf{clopen} if for each $V\in\VN$, the set $\ps S_V$ is a clopen subset of the Gel'fand spectrum $\Sig_V$. The set of all clopen subobjects of $\Sig$ is denoted as $\Subcl{\Sig}$. \end{definition} The set $\Subcl{\Sig}$, together with the lattice operations and bi-Heyting algebra structure defined below, is the algebraic implementation of the new topos-based form of quantum logic. The elements $\ps S\in\Subcl{\Sig}$ represent propositions about the values of the physical quantities of the quantum system. The most direct connection with propositions of the form ``$\Ain\De$'' is given by the map called daseinisation, see Def. \ref{Def_OuterDas} below. We note that the concept of contextuality (cf. section \ref{Sec_Background}) is implemented by this construction, since $\Sig$ is a presheaf over the context category $\VN$. Moreover, coarse-graining is mathematically realised by the fact that we use subobjects of presheaves. In the case of $\Sig$ and its clopen subobjects, this means the following: for each context $V\in\VN$, the component $\ps S_V\subseteq\Sig_V$ represents a \emph{local proposition} about the value of some physical quantity. 
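The coarse-graining maps $\deo_{V,V'}$ from the beginning of this section can be made concrete in a toy model where projections of a finite abelian algebra are identified with subsets of indices and a context is a partition of those indices; a minimal Python sketch (the model and the helper names `projections` and `das` are ours):

```python
from itertools import chain, combinations

def projections(partition):
    """All projections of the abelian algebra with the given block structure:
    unions of blocks, encoded as frozensets of indices."""
    blocks = [frozenset(b) for b in partition]
    return {frozenset(chain.from_iterable(combo))
            for k in range(len(blocks) + 1)
            for combo in combinations(blocks, k)}

def das(P, partition):
    """delta_{V,V'}(P): the smallest projection of the coarser context
    dominating P (the meet of all projections above P)."""
    return min((Q for Q in projections(partition) if P <= Q), key=len)

V  = [{1}, {2}, {3}, {4}]     # the finest context: all diagonal projections
Vp = [{1, 2}, {3, 4}]         # a coarser subcontext
P  = frozenset({1})           # a projection of V that does not lie in V'
assert das(P, Vp) == frozenset({1, 2})   # coarse-graining: {1} |-> {1, 2}
```

As expected, passing to the smaller context replaces a proposition by the weakest proposition of that context implied by it.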
If $V'\subset V$, then $\ps S_{V'}\supseteq\Sig(i_{V'V})(\ps S_V)$ (since $\ps S$ is a subobject), so $\ps S_{V'}$ represents a local proposition at the smaller context $V'\subset V$ that is \emph{coarser} than (i.e., a consequence of) the local proposition represented by $\ps S_V$. A clopen subobject $\ps S\in\Subcl{\Sig}$ can hence be interpreted as a collection of \emph{local propositions}, one for each context, such that smaller contexts are assigned coarser propositions. Clearly, the definition of clopen subobjects makes use of the Gel'fand topologies on the components $\Sig_V$, $V\in\VN$. We note that for each abelian von Neumann algebra $V$ (and hence for each context $V\in\VN$), there is an isomorphism of complete Boolean algebras \begin{align} \label{Eq_alphaV} \alpha_V:\PV &\lra \Cp(\Sig_V)\\ \nonumber \hP &\lmt \{\ld\in\Sig_V \mid \ld(\hP)=1\}. \end{align} Here, $\Cp(\Sig_V)$ denotes the clopen subsets of $\Sig_V$. There is a purely order-theoretic description of $\Subcl\Sig$: let \begin{equation} \mc P:=\prod_{V\in\VN}\PV \end{equation} be the set of choice functions $f:\VN\ra\coprod_{V\in\VN}\PV$, where $f(V)\in\PV$ for all $V\in\VN$. Equipped with pointwise operations, $\mc P$ is a complete Boolean algebra, since each $\PV$ is a complete Boolean algebra. Consider the subset $\mc S$ of $\mc P$ consisting of those functions for which $V'\subset V$ implies $f(V')\geq f(V)$. The subset $\mc S$ is closed under all meets and joins (in $\mc P$), and clearly, $\mc S\simeq\Subcl\Sig$. We define a partial order on $\Subcl{\Sig}$ in the obvious way: \begin{equation} \forall \ps S,\ps T\in\Subcl{\Sig} :\quad \ps S \leq \ps T :\Longleftrightarrow (\forall V\in\VN: \ps S_V\subseteq \ps T_V). 
\end{equation} We define the corresponding (complete) lattice operations in a stagewise manner, i.e., at each context $V\in\VN$ separately: for any family $(\ps S_i)_{i\in I}$, \begin{equation} \forall V\in\VN : (\bmeet_{i\in I}\ps S_i)_V:=\on{int}(\bigcap_{i\in I}\ps S_{i;V}), \end{equation} where $\ps S_{i;V}\subseteq\Sig_V$ is the component at $V$ of the clopen subobject $\ps S_i$. Note that the lattice operation is not just componentwise set-theoretic intersection, but rather the interior (with respect to the Gel'fand topology) of the intersection. This guarantees that one obtains clopen subsets at each stage $V$, not just closed ones. Analogously, \begin{equation} \forall V\in\VN : (\bjoin_{i\in I}\ps S_i)_V:=\on{cl}(\bigcup_{i\in I}\ps S_{i;V}), \end{equation} where the closure of the union is necessary in order to obtain clopen sets, not just open ones. The fact that meets and joins are not given by set-theoretic intersections and unions also means that $\Subcl{\Sig}$ is not a sub-Heyting algebra of the Heyting algebra $\Sub{\Sig}$ of all subobjects of the spectral presheaf. The difference between $\Sub{\Sig}$ and $\Subcl{\Sig}$ is analogous to the difference between the power set $PX$ of a set $X$ and the complete Boolean algebra $BX$ of measurable subsets (with respect to some measure) modulo null sets. For results on measures and quantum states from the perspective of the topos approach, see \cite{Doe08,DI12}. In section \ref{Sec_RepOfPropsAndBiHeyting}, we will show that $\Subcl{\Sig}$ is a complete bi-Heyting algebra. \begin{example} For illustration, we consider a simple example: let $\N$ be the \emph{abelian} von Neumann algebra of diagonal matrices in $3$ dimensions. This is given by \begin{equation} \N:=\bbC\hP_1+\bbC\hP_2+\bbC\hP_3, \end{equation} where $\hP_1,\hP_2,\hP_3$ are pairwise orthogonal rank-$1$ projections on a $3$-dimensional Hilbert space. 
The projection lattice $\PN$ of $\N$ has $8$ elements, \begin{equation} \PN=\{\hat 0,\hP_1,\hP_2,\hP_3,\hP_1+\hP_2,\hP_1+\hP_3,\hP_2+\hP_3,\hat 1\}. \end{equation} Of course, $\PN$ is a Boolean algebra. The algebra $\N$ has three non-trivial abelian subalgebras, \begin{equation} V_i:=\bbC\hP_i+\bbC(\hat 1-\hP_i),\quad i=1,2,3. \end{equation} Hence, the context category $\VN$ is the $4$-element poset with $\N$ as top element and $V_i\subset\N$ for $i=1,2,3$. The Gel'fand spectrum $\Sig_{\N}$ of $\N$ has three elements $\ld_1,\ld_2,\ld_3$ such that \begin{equation} \ld_i(\hP_j)=\delta_{ij}. \end{equation} The Gel'fand spectrum $\Sig_{V_1}$ of $V_1$ has two elements $\ld'_1,\ld'_{2+3}$ such that \begin{equation} \ld'_1(\hP_1)=1,\quad\ld'_1(\hat 1-\hP_1)=0,\quad\ld'_{2+3}(\hP_1)=0,\quad\ld'_{2+3}(\hat 1-\hP_1)=1. \end{equation} (Note that $\hat 1-\hP_1=\hP_2+\hP_3$.) Analogously, the spectrum $\Sig_{V_2}$ has two elements $\ld'_{1+3},\ld'_2$, and the spectrum $\Sig_{V_3}$ has two elements $\ld'_{1+2},\ld'_3$. Consider the restriction map of the spectral presheaf from $\Sig_{\N}$ to $\Sig_{V_1}$: \begin{equation} \Sig(i_{V_1,\N})(\ld_1)=\ld'_1,\quad\Sig(i_{V_1,\N})(\ld_2)=\Sig(i_{V_1,\N})(\ld_3)=\ld'_{2+3}. \end{equation} The restriction maps from $\Sig_{\N}$ to $\Sig_{V_2}$ resp. $\Sig_{V_3}$ are defined analogously. This completes the description of the spectral presheaf $\Sig$ of the algebra $\N$. We will now determine all clopen subobjects of $\Sig$. First, note that the Gel'fand spectra all are discrete sets, so topological questions are trivial here. We simply have to determine all subobjects of $\Sig$. We distinguish a number of cases: \begin{itemize} \item [(a)] Let $\ps S\in\Subcl\Sig$ be a subobject such that $\ps S_{\N}=\Sig_{\N}=\{\ld_1,\ld_2,\ld_3\}$. Then the restriction maps of $\Sig$ dictate that for each $V_i$, $i=1,2,3$, we have $\ps S_{V_i}\supseteq\Sig(i_{V_i,\N})(\ps S_{\N})=\Sig_{V_i}$, so $\ps S$ must be $\Sig$ itself. 
\item [(b)] Let $\ps S$ be a subobject such that $\ps S_{\N}$ contains two elements, e.g. $\ps S_{\N}=\{\ld_1,\ld_2\}$. Then $\ps S_{V_1}=\Sig_{V_1}$ and $\ps S_{V_2}=\Sig_{V_2}$, but $\ps S_{V_3}$ can either be $\{\ld'_{1+2}\}$ or $\{\ld'_{1+2},\ld'_3\}$, so there are $2$ options. Moreover, there are three ways of picking two elements from the three-element set $\Sig_{\N}$, so we have $3\cdot 2=6$ subobjects $\ps S$ with two elements in $\ps S_{\N}$. \item [(c)] Let $\ps S$ be such that $\ps S_{\N}$ contains one element, e.g. $\ps S_{\N}=\{\ld_1\}$. Then $\ps S_{V_1}$ can either be $\{\ld'_1\}$ or $\{\ld'_1,\ld'_{2+3}\}$; $\ps S_{V_2}$ can either be $\{\ld'_{1+3}\}$ or $\{\ld'_{1+3},\ld'_2\}$; and $\ps S_{V_3}$ can either be $\{\ld'_{1+2}\}$ or $\{\ld'_{1+2},\ld'_3\}$. Hence, there are $2^3$ options. Moreover, there are three ways of picking one element from $\Sig_{\N}$, so there are $3\cdot 2^3=24$ subobjects $\ps S$ with one element in $\ps S_{\N}$. \item [(d)] Finally, consider a subobject $\ps S$ such that $\ps S_{\N}=\emptyset$. Since the $V_i$ are not contained in one another, there are no conditions arising from restriction maps of the spectral presheaf $\Sig$. Hence, we can pick an arbitrary subset of $\Sig_{V_i}$ for $i=1,2,3$. Since each $\Sig_{V_i}$ has $2$ elements, there are $4$ subsets of each, so we have $4^3=64$ subobjects $\ps S$ with $\ps S_{\N}=\emptyset$. \end{itemize} In all, $\Subcl\Sig$ has $64+24+6+1=95$ elements. \end{example} We conclude this section with the remark that the pertinent topos in which the spectral presheaf (and the other presheaves discussed in this section) lives is, of course, the topos $\SetVNop$ of presheaves over the context category $\VN$. \section{Representation of propositions and bi-Heyting algebra structure} \label{Sec_RepOfPropsAndBiHeyting} \begin{definition} \label{Def_OuterDas} Let $\N$ be a von Neumann algebra, and let $\PN$ be its lattice of projections. 
The map \begin{align} \ps\deo: \PN &\lra \Subcl{\Sig}\\ \nonumber \hP &\lmt \ps\deo(\hP):=(\alpha_V(\deo_{\N,V}(\hP)))_{V\in\VN} \end{align} is called \textbf{outer daseinisation of projections}. \end{definition} This map was introduced in \cite{DI(2)} and discussed in detail in \cite{Doe09,Doe11}. It can be seen as a `translation' map from standard quantum logic, encoded by the complete orthomodular lattice $\PN$ of projections, to a form of (super)intuitionistic logic for quantum systems, based on the clopen subobjects of the spectral presheaf $\Sig$, which conceptually plays the r\^ole of a quantum state space. In standard quantum logic, the projections $\hP\in\PN$ represent propositions of the form ``$\Ain\De$'', that is, ``the physical quantity $A$ has a value in the Borel set $\De$ of real numbers''. The connection between propositions and projections is given by the spectral theorem. Outer daseinisation can hence be seen as a map from propositions of the form ``$\Ain\De$'' into the bi-Heyting algebra $\Subcl{\Sig}$ of clopen subobjects of the spectral presheaf. A projection $\hP$, representing a proposition ``$\Ain\De$'', is mapped to a collection $(\deo_{\N,V}(\hP))_{V\in\VN}$, consisting of one projection $\deo_{\N,V}(\hP)$ for each context $V\in\VN$. (Each isomorphism $\alpha_V$, $V\in\VN$, just maps the projection $\deo_{\N,V}(\hP)$ to the corresponding clopen subset of $\Sig_V$, which does not affect the interpretation.) Since we have $\deo_{\N,V}(\hP)\geq\hP$ for all $V$, the projection $\deo_{\N,V}(\hP)$ represents a coarser (local) proposition than ``$\Ain\De$'' in general. For example, if $\hP$ represents ``$\Ain\De$'', then $\deo_{\N,V}(\hP)$ may represent ``$\Ain\Ga$'' where $\Ga\supset\De$. The map $\ps\deo$ preserves all joins, as shown in section 2.D of \cite{DI(2)} and in \cite{Doe09}. Here is a direct argument: being left adjoint to the inclusion of $\PV$ into $\PN$, the map $\deo_{\N,V}$ preserves all colimits, which are joins. 
Moreover, $\alpha_V$ is an isomorphism of complete Boolean algebras, so $\alpha_V\circ\deo_{\N,V}$ preserves all joins. This holds for all $V\in\VN$, and joins in $\Subcl{\Sig}$ are defined stagewise, so $\ps\deo$ preserves all joins. Moreover, $\ps\deo$ is order-preserving and injective, but not surjective. Clearly, $\ps\deo(\hat 0)=\ps 0$, the empty subobject, and $\ps\deo(\hat 1)=\Sig$. For meets, we have \begin{equation} \forall \hP,\hQ\in\PN : \ps\deo(\hP\meet\hQ)\leq\ps\deo(\hP)\meet\ps\deo(\hQ). \end{equation} In general, $\ps\deo(\hP)\meet\ps\deo(\hQ)$ is not of the form $\ps\deo(\hat R)$ for any projection $\hat R\in\PN$. See \cite{DI(2),Doe09} for proof of these statements. Let $(\ps S_i)_{i\in I}\subseteq\Subcl{\Sig}$ be a family of clopen subobjects of $\Sig$, and let $\ps S\in\Subcl{\Sig}$. Then \begin{equation} \forall V\in\VN : (\ps S\meet\bjoin_{i\in I}\ps S_i)_V=\bjoin_{i\in I}(\ps S_V\meet\ps S_{i;V}), \end{equation} since $\Cp(\Sig_V)$ is a distributive lattice (in fact, a complete Boolean algebra) in which finite meets distribute over arbitrary joins. Hence, for each $\ps S\in\Subcl{\Sig}$, the functor \begin{equation} \ps S\meet\_:\Subcl{\Sig} \lra \Subcl{\Sig} \end{equation} preserves all joins, so by the adjoint functor theorem for posets, it has a right adjoint \begin{equation} \ps S\Rightarrow\_:\Subcl{\Sig} \lra \Subcl{\Sig}. \end{equation} This map, the \textbf{Heyting implication from $\ps S$}, makes $\Subcl{\Sig}$ into a complete Heyting algebra. This was shown before in \cite{DI(2)}. The Heyting implication is given by the adjunction \begin{equation} \ps R\meet\ps S\leq\ps T\quad\text{if and only if}\quad\ps R\leq (\ps S\Rightarrow\ps T). \end{equation} (Note that $\ps S\meet\_=\_\meet\ps S$.) This implies that \begin{equation} (\ps S\Rightarrow\ps T)=\bjoin\{\ps R\in\Subcl{\Sig} \mid \ps R\meet\ps S\leq\ps T\}. 
\end{equation} The stagewise definition is: for all $V\in\VN$, \begin{equation} (\ps S\Rightarrow\ps T)_V=\{\ld\in\Sig_V \mid \forall V'\subseteq V:\text{ if } \ld|_{V'}\in \ps S_{V'}\text{, then }\ld|_{V'}\in\ps T_{V'}\}. \end{equation} As usual, the \textbf{Heyting negation $\neg$} is defined for all $\ps S\in\Subcl{\Sig}$ by \begin{equation} \neg\ps S:=(\ps S\Rightarrow\ps 0). \end{equation} That is, $\neg\ps S$ is the largest element of $\Subcl{\Sig}$ such that \begin{equation} \ps S\meet\neg\ps S=\ps 0. \end{equation} The stagewise expression for $\neg\ps S$ is \begin{equation} \label{Eq_HeytNegStagew} (\neg\ps S)_V=\{\ld\in\Sig_V \mid \forall V'\subseteq V:\ld|_{V'}\notin \ps S_{V'}\}. \end{equation} In $\Subcl{\Sig}$, we also have, for all families $(\ps S_i)_{i\in I}\subseteq\Subcl{\Sig}$ and all $\ps S\in\Subcl{\Sig}$, \begin{equation} \forall V\in\VN: (\ps S\join\bmeet_{i\in I}\ps S_i)_V=\bmeet_{i\in I}(\ps S_V\join\ps S_{i;V}), \end{equation} since finite joins distribute over arbitrary meets in $\Cp(\Sig_V)$. Hence, for each $\ps S$ the functor \begin{equation} \ps S\join\_:\Subcl{\Sig} \lra \Subcl{\Sig} \end{equation} preserves all meets, so it has a left adjoint \begin{equation} \ps S\Leftarrow\_: \Subcl{\Sig} \lra \Subcl{\Sig} \end{equation} which we call \textbf{co-Heyting implication}. This map makes $\Subcl{\Sig}$ into a complete co-Heyting algebra. It is characterised by the adjunction \begin{equation} (\ps S\Leftarrow\ps T)\leq\ps R\quad\text{iff}\quad\ps S\leq\ps T\join\ps R, \end{equation} so \begin{equation} \label{Eq_coImp} (\ps S\Leftarrow\ps T)=\bmeet\{\ps R\in\Subcl{\Sig} \mid \ps S\leq\ps T\join\ps R\}. \end{equation} One can think of $\ps S\Leftarrow\_$ as a kind of `subtraction' (see e.g. \cite{RZ96}): $\ps S\Leftarrow\ps T$ is the smallest clopen subobject $\ps R$ for which $\ps T\join\ps R$ is at least as big as $\ps S$, so it encodes how much is `missing' from $\ps T$ to cover $\ps S$.
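Both adjunctions can be seen at work in the simplest case: on the powerset of a finite set (a Boolean algebra, hence in particular bi-Heyting), the join and meet formulas above reduce to familiar Boolean operations. A minimal sketch in Python (the three-element universe is an arbitrary illustrative choice):

```python
from itertools import combinations

U = frozenset({1, 2, 3})
subsets = [frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)]

def heyting_imp(S, T):
    # (S => T) = join of all R with R meet S <= T
    out = frozenset()
    for R in subsets:
        if (R & S) <= T:
            out |= R
    return out

def coheyting_imp(S, T):
    # the co-Heyting implication (S <= T): meet of all R with S <= T join R
    out = U
    for R in subsets:
        if S <= (T | R):
            out &= R
    return out

# In a Boolean algebra the two implications collapse to familiar operations.
for S in subsets:
    for T in subsets:
        assert heyting_imp(S, T) == (U - S) | T   # "not S, or T"
        assert coheyting_imp(S, T) == S - T       # "S minus T"
print("both adjunction formulas verified on the powerset of", set(U))
```

In $\Subcl{\Sig}$ the lattice is distributive but not Boolean, so the two implications, and hence the two negations derived from them, genuinely differ.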
We define a \textbf{co-Heyting negation} for each $\ps S\in\Subcl{\Sig}$ by \begin{equation} \sim\ps S:=(\Sig\Leftarrow\ps S). \end{equation} (Note that $\Sig$ is the top element in $\Subcl{\Sig}$.) Hence, $\sim\ps S$ is the smallest clopen subobject such that \begin{equation} \sim\ps S\join\ps S=\Sig \end{equation} holds. We have shown in a direct manner, without use of topos theory as in section \ref{Sec_SigEtc}: \begin{proposition} $(\Subcl{\Sig},\meet,\join,\ps 0,\Sig,\Rightarrow,\neg,\Leftarrow,\sim)$ is a complete bi-Heyting algebra. \end{proposition} We give direct arguments for the following two facts (which also follow from the general theory of bi-Heyting algebras): \begin{lemma} For all $\ps S\in\Subcl{\Sig}$, we have $\neg\ps S\leq\;\sim\ps S$. \end{lemma} \begin{proof} For all $V\in\VN$, it holds that $(\neg\ps S)_V\subseteq\Sig_V\backslash\ps S_V$, since $(\neg\ps S\meet\ps S)_V=(\neg\ps S)_V\cap\ps S_V=\emptyset$, while $(\sim\ps S)_V\supseteq\Sig_V\backslash\ps S_V$ since $(\sim\ps S\join\ps S)_V=(\sim\ps S)_V\cup\ps S_V=\Sig_V$. \end{proof} The above lemma and the fact that $\neg\ps S$ is the largest subobject such that $\neg\ps S\meet\ps S=\ps 0$ imply \begin{corollary} In general, $\sim\ps S\meet\ps S\geq\ps 0$. \end{corollary} This means that the co-Heyting negation does not give a system in which a central axiom of most logical systems, viz. freedom from contradiction, holds. We have a glimpse of \emph{paraconsistent logic}. In fact, a somewhat stronger result holds: for any von Neumann algebra except for $\bbC\hat 1=M_1(\bbC)$ and $M_2(\bbC)$, we have $\sim\ps S>\neg\ps S$ and $\sim\ps S\meet\ps S>\ps 0$ for all clopen subobjects except $\ps 0$ and $\Sig$. This follows easily from the representation of clopen subobjects as families of projections, see beginning of next section. 
\section{Negations and regular elements} \label{Sec_NegsAndRegs} In this section, we will examine the Heyting negation $\neg$ and the co-Heyting negation $\sim$ more closely. We will determine the regular elements with respect to the Heyting and the co-Heyting algebra structure. Throughout, we will make use of the isomorphism $\alpha_V:\PV\ra\Cp(\Sig_V)$ (defined in (\ref{Eq_alphaV})) between the complete Boolean algebras of projections in an abelian von Neumann algebra $V$ and the clopen subsets of its spectrum $\Sig_V$. Given a projection $\hP\in\PV$, we will use the notation $S_{\hP}:=\alpha_V(\hP)$. Conversely, for $S\in\Cp(\Sig_V)$, we write $\hP_S:=\alpha_V^{-1}(S)$. Given a clopen subobject $\ps S\in\Subcl{\Sig}$, it is useful to think of it as a collection of projections: consider \begin{equation} (\hP_{\ps S_V})_{V\in\VN} = (\alpha_V^{-1}(\ps S_V))_{V\in\VN}, \end{equation} which consists of one projection for each context $V$. The fact that $\ps S$ is a subobject then translates to the fact that if $V'\subset V$, then $\hP_{\ps S_{V'}}\geq\hP_{\ps S_V}$. (This is another instance of coarse-graining.) If $\ld\in\Sig_V$ and $\hP\in\PV$, then \begin{equation} \ld(\hP)=\ld(\hP^2)=\ld(\hP)^2\in\{0,1\}, \end{equation} where we used that $\hP$ is idempotent and that $\ld$ is multiplicative. \textbf{Heyting negation and Heyting-regular elements.} We consider the stagewise expression (see eq. (\ref{Eq_HeytNegStagew})) for the Heyting negation: \begin{align} (\neg\ps S)_V &=\{\ld\in\Sig_V \mid \forall V'\subseteq V:\ld|_{V'}\notin \ps S_{V'}\}\\ &= \{\ld\in\Sig_V \mid \forall V'\subseteq V:\ld|_{V'}(\hP_{\ps S_{V'}})=0 \}\\ &= \{\ld\in\Sig_V \mid \forall V'\subseteq V:\ld(\hP_{\ps S_{V'}})=0 \}\\ &= \{\ld\in\Sig_V \mid \ld(\bjoin_{V'\subseteq V}\hP_{\ps S_{V'}})=0 \}. \end{align} As we saw above, the smaller the context $V'$, the larger the associated projection $\hP_{\ps S_{V'}}$.
Hence, for the join in the above expression, only the \emph{minimal} contexts $V'$ contained in $V$ are relevant. A minimal context is generated by a single projection $\hQ$ and the identity, \begin{equation} V_{\hQ}:=\{\hQ,\hat 1\}''=\bbC\hQ+\bbC\hat 1. \end{equation} Here, it becomes important that we excluded the trivial context $V_0=\{\hat 1\}''=\bbC\hat 1$. Let \begin{equation} m_V:=\{V'\subseteq V \mid V'\text{ minimal}\}=\{V_{\hQ} \mid \hQ\in\PV\backslash\{\hat 0,\hat 1\}\}. \end{equation} We obtain \begin{align} (\neg\ps S)_V &= \{\ld\in\Sig_V \mid \ld(\bjoin_{V'\in m_V}\hP_{\ps S_{V'}})=0 \}\\ &= \{\ld\in\Sig_V \mid \ld(\hat 1-\bjoin_{V'\in m_V}\hP_{\ps S_{V'}})=1 \}\\ &= S_{\hat 1-\bjoin_{V'\in m_V}\hP_{\ps S_{V'}}}. \end{align} This shows: \begin{proposition} \label{Prop_HeytingNegLocal} Let $\ps S\in\Subcl{\Sig}$, and let $V\in\VN$. Then \begin{equation} \hP_{(\neg\ps S)_V}=\hat 1-\bjoin_{V'\in m_V}\hP_{\ps S_{V'}}, \end{equation} where $m_V=\{V'\subseteq V \mid V'\text{ minimal}\}$. \end{proposition} We can now consider double negation: $(\neg\neg\ps S)_V=S_{\hat 1-\bjoin_{V'\in m_V}\hP_{(\neg\ps S)_{V'}}}$, so \begin{equation} \hP_{(\neg\neg\ps S)_V}=\hat 1-\bjoin_{V'\in m_V}\hP_{(\neg\ps S)_{V'}}. \end{equation} For $V'\in m_V$, we have $\hP_{(\neg\ps S)_{V'}}=\hat 1-\bjoin_{W\in m_{V'}}\hP_{\ps S_{W}}$, but $m_{V'}=\{V'\}$, since $V'$ is minimal, so $\hP_{(\neg\ps S)_{V'}}=\hat 1-\hP_{\ps S_{V'}}$. Thus, \begin{equation} \label{Eq_DoubleHeytNegStagew} \hP_{(\neg\neg\ps S)_V}=\hat 1-\bjoin_{V'\in m_V}(\hat 1-\hP_{\ps S_{V'}})=\bmeet_{V'\in m_V}\hP_{\ps S_{V'}}. \end{equation} Since $\hP_{\ps S_{V'}}\geq\hP_{\ps S_V}$ for all $V'\in m_V$ (because $\ps S$ is a subobject), we have \begin{equation} \label{Eq_DoubleHeytNegBigger} \hP_{(\neg\neg\ps S)_V}=\bmeet_{V'\in m_V}\hP_{\ps S_{V'}}\geq\hP_{\ps S_V} \end{equation} for all $V\in\VN$, so $\neg\neg\ps S\geq\ps S$ as expected.
We have shown: \begin{proposition} \label{Prop_HeytingReg} An element $\ps S$ of $\Subcl{\Sig}$ is Heyting-regular, i.e., $\neg\neg\ps S=\ps S$, if and only if for all $V\in\VN$, it holds that \begin{equation} \hP_{\ps S_V}=\bmeet_{V'\in m_V}\hP_{\ps S_{V'}}, \end{equation} where $m_V=\{V'\subseteq V \mid V'\text{ minimal}\}$. \end{proposition} \begin{definition} A clopen subobject $\ps S\in\Subcl{\Sig}$ is called \textbf{tight} if \begin{equation} \Sig(i_{V'V})(\ps S_V)=\ps S_{V'} \end{equation} for all $V',V\in\VN$ such that $V'\subseteq V$. \end{definition} For arbitrary subobjects, we only have $\Sig(i_{V'V})(\ps S_V)\subseteq\ps S_{V'}$. Let $\ps S\in\Subcl{\Sig}$ be an arbitrary clopen subobject, and let $V,V'\in\VN$ such that $V'\subset V$. Then $\Sig(i_{V'V})(\ps S_V)\subseteq\ps S_{V'}\subseteq\Sig_{V'}$, so $\hP_{\Sig(i_{V'V})(\ps S_V)}\in\mc P(V')$. Thm. 3.1 in \cite{DI(2)} shows that \begin{equation} \hP_{\Sig(i_{V'V})(\ps S_V)}=\deo_{V,V'}(\hP_{\ps S_V}). \end{equation} This key formula relates the restriction maps $\Sig(i_{V'V}):\Sig_V\ra\Sig_{V'}$ of the spectral presheaf to the maps $\deo_{V,V'}:\PV\ra\mc P(V')$. Using this, we see that \begin{proposition} A clopen subobject $\ps S\in\Subcl{\Sig}$ is tight if and only if $\hP_{\ps S_{V'}}=\deo_{V,V'}(\hP_{\ps S_V})$ for all $V',V\in\VN$ such that $V'\subseteq V$. \end{proposition} It is clear that all clopen subobjects of the form $\ps\deo(\hP)$, $\hP\in\PN$, are tight (see Def. \ref{Def_OuterDas}). \begin{proposition} For a tight subobject $\ps S\in\Subcl{\Sig}$, it holds that $\neg\neg\ps S=\ps S$, i.e., tight subobjects are Heyting-regular. \end{proposition} \begin{proof} We saw in equation (\ref{Eq_DoubleHeytNegStagew}) that $\hP_{(\neg\neg\ps S)_V}=\bmeet_{V'\in m_V}\hP_{\ps S_{V'}}$ for all $V\in\VN$. Moreover, $\hP_{(\neg\neg\ps S)_V}\geq\hP_{\ps S_V}$ from equation (\ref{Eq_DoubleHeytNegBigger}). Consider the minimal subalgebra $V_{\hP_{\ps S_V}}=\{\hP_{\ps S_V},\hat 1\}''$ of $V$. 
Then, since $\ps S$ is tight, we have \begin{equation} \deo_{V,V_{\hP_{\ps S_V}}}(\hP_{\ps S_V})=\bmeet\{\hQ\in\mc P(V_{\hP_{\ps S_V}}) \mid \hQ\geq\hP_{\ps S_V}\}=\hP_{\ps S_V}, \end{equation} so, for all $V\in\VN$, \begin{equation} \hP_{(\neg\neg\ps S)_V}=\bmeet_{V'\in m_V}\hP_{\ps S_{V'}}=\hP_{\ps S_V}. \end{equation} \end{proof} \begin{corollary} Outer daseinisation $\ps\deo:\PN\ra\Subcl{\Sig}$ maps projections into the Heyting-regular elements of $\Subcl{\Sig}$. \end{corollary} We remark that in order to be Heyting-regular, an element $\ps S\in\Subcl{\Sig}$ need not be tight. \textbf{Co-Heyting negation and co-Heyting regular elements.} For any $\ps S\in\Subcl{\Sig}$, by its defining property $\sim\ps S$ is the smallest element of $\Subcl{\Sig}$ such that $\ps S\join\sim\ps S=\Sig$. Let $V$ be a maximal context, i.e., a maximal abelian subalgebra (masa) of the non-abelian von Neumann algebra $\N$. Then clearly \begin{equation} (\sim\ps S)_V=\Sig_V\backslash\ps S_V. \end{equation} Let $V\in\VN$, not necessarily maximal. We define \begin{equation} M_V:=\{\tilde V\supseteq V \mid \tilde V\text{ maximal}\}. \end{equation} \begin{proposition} \label{Prop_CoHeytingNegLocal} Let $\ps S\in\Subcl{\Sig}$, and let $V\in\VN$. Then \begin{equation} \hP_{(\sim\ps S)_V}=\bjoin_{\tilde V\in M_V}(\deo_{\tilde V,V}(\hat 1-\hat P_{\ps S_{\tilde V}})), \end{equation} where $M_V=\{\tilde V\supseteq V \mid \tilde V\text{ maximal}\}$. \end{proposition} \begin{proof} $\sim\ps S$ is a (clopen) subobject, so we must have \begin{equation} \hP_{(\sim\ps S)_V}\geq\bjoin_{\tilde V\in M_V}(\deo_{\tilde V,V}(\hat 1-\hat P_{\ps S_{\tilde V}})), \end{equation} since $(\sim\ps S)_V$, the component at $V$, must contain all the restrictions of the components $(\sim\ps S)_{\tilde V}$ for $\tilde V\in M_V$ (and the above inequality expresses this using the corresponding projections). 
On the other hand, $\sim\ps S$ is the \emph{smallest} clopen subobject such that $\ps S\join\sim\ps S=\Sig$. So it suffices to show that for $\hP_{(\sim\ps S)_V}=\bjoin_{\tilde V\in M_V}(\deo_{\tilde V,V}(\hat 1-\hP_{\ps S_{\tilde V}}))$, we have $\hP_{(\sim\ps S)_V}\join\hP_{\ps S_V}=\hat 1$ for all $V\in\VN$, and hence $\sim\ps S\join\ps S=\Sig$. If $V$ is maximal, then $\hP_{(\sim\ps S)_V}=\deo_{V,V}(\hat 1-\hP_{\ps S_V})=\hat 1-\hP_{\ps S_V}$ and hence $\hP_{(\sim\ps S)_V}\join\hP_{\ps S_V}=\hat 1$. If $V$ is non-maximal and $\tilde V$ is any maximal context containing $V$, then $\hP_{(\sim\ps S)_V}\geq\hP_{(\sim\ps S)_{\tilde V}}$ and $\hP_{\ps S_V}\geq\hP_{\ps S_{\tilde V}}$, so $\hP_{(\sim\ps S)_V}\join\hP_{\ps S_V}\geq\hP_{(\sim\ps S)_{\tilde V}}\join\hP_{\ps S_{\tilde V}}=\hat 1$. \end{proof} For the double co-Heyting negation, we obtain \begin{align} \hP_{(\sim\sim\ps S)_V} &=\bjoin_{\tilde V\in M_V}\deo_{\tilde V,V}(\hat 1-\hP_{(\sim\ps S)_{\tilde V}})\\ &= \bjoin_{\tilde V\in M_V}\deo_{\tilde V,V}(\hat 1-\bjoin_{W\in M_{\tilde V}}\deo_{W,\tilde V}(\hat 1-\hP_{\ps S_W})). \end{align} Since $\tilde V$ is maximal, we have $M_{\tilde V}=\{\tilde V\}$, and the above expression simplifies to \begin{align} \hP_{(\sim\sim\ps S)_V} &=\bjoin_{\tilde V\in M_V}\deo_{\tilde V,V}(\hat 1-(\hat 1-\hP_{\ps S_{\tilde V}}))\\ &= \bjoin_{\tilde V\in M_{V}}\deo_{\tilde V,V}(\hP_{\ps S_{\tilde V}}). \end{align} Note that the fact that $\ps S$ is a subobject implies that \begin{equation} \label{Eq_DoubleCoHeytNegBigger} \hP_{(\sim\sim\ps S)_V}\leq\hP_{\ps S_V} \end{equation} for all $V\in\VN$, so $\sim\sim\ps S\leq\ps S$ as expected.
We have shown: \begin{proposition} \label{Prop_CoHeytReg} An element $\ps S$ of $\Subcl{\Sig}$ is co-Heyting-regular, i.e., $\sim\sim\ps S=\ps S$, if and only if for all $V\in\VN$ it holds that \begin{equation} \hP_{\ps S_V}=\bjoin_{\tilde V\in M_{V}}\deo_{\tilde V,V}(\hP_{\ps S_{\tilde V}}), \end{equation} where $M_V=\{\tilde V\supseteq V \mid \tilde V\text{ maximal}\}$. \end{proposition} \begin{proposition} If $\ps S\in\Subcl{\Sig}$ is tight, then $\sim\sim\ps S=\ps S$, i.e., tight subobjects are co-Heyting regular. \end{proposition} \begin{proof} If $\ps S$ is tight, then for all $V\in\VN$ and for all $\tilde V\in M_V$, we have $\hP_{\ps S_V}=\deo_{\tilde V,V}(\hP_{\ps S_{\tilde V}})$, so $\bjoin_{\tilde V\in M_V}\deo_{\tilde V,V}(\hP_{\ps S_{\tilde V}})=\hP_{\ps S_V}$. By Prop. \ref{Prop_CoHeytReg}, the result follows. \end{proof} \begin{corollary} Outer daseinisation $\ps\deo:\PN\ra\Subcl{\Sig}$ maps projections into the co-Heyting-regular elements of $\Subcl{\Sig}$. \end{corollary} \textbf{Physical interpretation.} We conclude this section by giving a tentative physical interpretation of the two kinds of negation. For this interpretation, it is important to think of an element $\ps S\in\Subcl{\Sig}$ as a collection of local propositions $\ps S_V$ (resp. $\hP_{\ps S_V}$), one for each context $V$. Moreover, if $V'\subset V$, then the local proposition represented by $\ps S_{V'}$ is coarser than the local proposition represented by $\ps S_V$. Let $\ps S\in\Subcl{\Sig}$ be a clopen subobject, and let $\neg\ps S$ be its Heyting complement. As shown in Prop. \ref{Prop_HeytingNegLocal}, the local expression for components of $\neg\ps S$ is given by \begin{equation} \hP_{(\neg\ps S)_V}=\hat 1-\bjoin_{V'\in m_V}\hP_{\ps S_{V'}}, \end{equation} where $m_V$ is the set of all minimal contexts contained in $V$.
The projection $\hP_{(\neg\ps S)_V}$ is always smaller than or equal to $\hat 1-\hP_{\ps S_V}$, since $\hP_{\ps S_{V'}}\geq\hP_{\ps S_V}$ for all $V'\in m_V$. For the Heyting negation of the local proposition in the context $V$, represented by $\ps S_V$ or equivalently by the projection $\hP_{\ps S_V}$, one has to consider all the coarse-grainings of this proposition to minimal contexts (which are the `maximal' coarse-grainings). The Heyting complement $\neg\ps S$ is determined at each stage $V$ as the complement of the join of all the coarse-grainings $\hP_{\ps S_{V'}}$ of $\hP_{\ps S_V}$. In other words, the component of the Heyting complement $\neg\ps S$ at $V$ is not simply the complement of $\ps S_V$, but the complement of the disjunction of all the coarse-grainings of this local proposition to all smaller contexts. The coarse-grainings of $\ps S_V$ are specified by the clopen subobject $\ps S$ itself. The component of the co-Heyting complement $\sim\ps S$ at a context $V$ is given by \begin{equation} \hP_{(\sim\ps S)_V}=\bjoin_{\tilde V\in M_V}(\deo_{\tilde V,V}(\hat 1-\hat P_{\ps S_{\tilde V}})), \end{equation} where $M_V$ is the set of maximal contexts containing $V$. The projection $\hP_{(\sim\ps S)_V}$ is always larger than or equal to $\hat 1-\hP_{\ps S_V}$, as was argued in the proof of Prop. \ref{Prop_CoHeytingNegLocal}. This means that the co-Heyting complement $\sim\ps S$ has a component $(\sim\ps S)_V$ at $V$ that may overlap with the component $\ps S_V$, hence the corresponding local propositions are not mutually exclusive in general. Instead, $\hP_{(\sim\ps S)_V}$ is the disjunction of all the coarse-grainings of complements of (finer, i.e., stronger) local propositions at contexts $\tilde V\supset V$. The co-Heyting negation hence gives local propositions that for each context $V$ take into account all those contexts $\tilde V$ from which one can coarse-grain to $V$. 
The component $(\sim\ps S)_V$ is defined in such a way that all the stronger local propositions at maximal contexts $\tilde V\supset V$ are complemented in the usual sense, i.e., $\hP_{(\sim\ps S)_{\tilde V}}=\hat 1-\hP_{\ps S_{\tilde V}}$ for all maximal contexts $\tilde V$. At smaller contexts $V$, we have some coarse-grained local proposition, represented by $\hP_{(\sim\ps S)_V}$, that will in general not be disjoint from (i.e., mutually exclusive with) the local proposition represented by $\hP_{\ps S_V}$. \section{Conclusion and outlook} \label{Sec_Conclusion} Summing up, we have shown that to each quantum system described by a von Neumann algebra $\N$ of physical quantities one can associate a (generalised) quantum state space, the spectral presheaf $\Sig$, together with a complete bi-Heyting algebra $\Subcl{\Sig}$ of clopen subobjects. Elements $\ps S$ can be interpreted as families of local propositions, where `local' refers to contextuality: each component $\ps S_V$ of a clopen subobject represents a proposition about the value of a physical quantity in the context (i.e., abelian von Neumann subalgebra) $V$ of $\N$. Since $\ps S$ is a subobject, there is a built-in form of coarse-graining which guarantees that if $V'\subset V$ is a smaller context, then the local proposition represented by $\ps S_{V'}$ is coarser than the proposition represented by $\ps S_V$. The map called outer daseinisation of projections (see Def. \ref{Def_OuterDas}) is a convenient bridge between the usual Hilbert space formalism and the new topos-based form of quantum logic. Daseinisation maps a proposition of the form ``$\Ain\De$'', represented by a projection $\hP$ in the complete orthomodular lattice $\PN$ of projections in the von Neumann algebra $\N$, to an element $\ps\deo(\hP)$ of the bi-Heyting algebra $\Subcl{\Sig}$.
We characterised the two forms of negation arising from the Heyting and the co-Heyting structure on $\Subcl{\Sig}$ by giving concrete stagewise expressions (see Props. \ref{Prop_HeytingNegLocal} and \ref{Prop_CoHeytingNegLocal}), considered double negation and characterised Heyting regular elements of $\Subcl{\Sig}$ (Prop. \ref{Prop_HeytingReg}) as well as co-Heyting regular elements (Prop. \ref{Prop_CoHeytReg}). It turns out that daseinisation maps projections into Heyting regular and co-Heyting regular elements of the bi-Heyting algebra of clopen subobjects. The main thrust of this article is to replace the standard algebraic representation of quantum logic in projection lattices of von Neumann algebras by a better behaved form based on bi-Heyting algebras. Instead of having a non-distributive orthomodular lattice of projections, which comes with a host of well-known conceptual and interpretational problems, one can consider a complete bi-Heyting algebra of propositions. In particular, this provides a distributive form of quantum logic. Roughly speaking, a non-distributive lattice with an orthocomplement has been traded for a distributive one with two different negations. We conclude by giving some open problems for further study: \begin{itemize} \item [(a)] It will be interesting to see how far the constructions presented in this article can be generalised beyond the case of von Neumann algebras. A generalisation to complete orthomodular lattices is immediate, but more general structures used in the study of quantum logic(s) remain to be considered. \item [(b)] Bi-Heyting algebras are related to bitopological spaces, see \cite{BBGK10} and references therein. But the spectral presheaf $\Sig$ is not a topological (or bitopological) space in the usual sense. Rather, it is a presheaf which has no global elements. Hence, there is no direct notion of points available, which makes it impossible to define a set underlying the topology (or topologies).
Generalised notions of topology like frames will be useful to study the connections with bitopological spaces. \item [(c)] All the arguments given in this article are topos-external. There is an internal analogue of the bi-Heyting algebra $\Subcl{\Sig}$ in the form of the power object $P\ps O$ of the so-called outer presheaf, see \cite{DI12}, so one can study many aspects internally in the topos $\SetVNop$ associated with the quantum system. This also provides the means to go beyond propositional logic to predicate logic, since each topos possesses an internal higher-order intuitionistic logic. \end{itemize} \textbf{Acknowledgements.} I am very grateful to the ASL, and to Reed Solomon, Valentina Harizanov and Jennifer Chubb personally, for giving me the opportunity to organise a Special Session on ``Logic and Foundations of Physics'' for the 2010 North American Meeting of the ASL, Washington D.C., March 17--20, 2010. I would like to thank Chris Isham and Rui Soares Barbosa for discussions and support. Many thanks to Dan Marsden, who read the manuscript at an early stage and made some valuable comments and suggestions. The anonymous referee also provided some very useful suggestions, which I incorporated. Finally, Dominique Lambert's recent talk at \textsl{Categories and Physics 2011} at Paris 7 served as an eye-opener on paraconsistent logic (and made me lose my fear of contradictions ;-) ). \end{document}
\begin{document} \title[MENDART in cavity QED under Lindbladian dephasing]{Mean excitation numbers due to anti-rotating term (MENDART) in cavity QED under Lindbladian dephasing} \author{A. V. Dodonov} \affiliation{Instituto de F\'{\i}sica, Universidade de Bras\'{\i}lia, PO Box 04455, 70910-900, Bras\'{\i}lia, Distrito Federal, Brazil} \begin{abstract} We study the photon generation from an arbitrary initial state in cavity QED due to the combined action of the anti-rotating term present in the Rabi Hamiltonian and Lindblad-type dephasing. We obtain a simple set of differential equations describing this process and deduce useful formulae for the moments of the photon number operator, demonstrating analytically that the average photon number increases linearly with time in the asymptotic limit. \end{abstract} \pacs{42.50.Pq, 32.80.-t, 42.50.Ct, 42.50.Hz} \maketitle In a 2008 paper by Werlang \textit{et al.} \cite{Werlang} a puzzling quantum effect was noticed in numerical simulations: when a two-level atom interacts with a single mode of the radiation field in a cavity by means of the Rabi Hamiltonian, while subject to a standard Markovian dephasing mechanism, the average intracavity photon number exhibits a linear growth with time. Such asymptotic photon generation due to decoherence occurs because for pure dephasing processes the environment may be viewed as an unmonitored detector making random nondemolition measurements of the number of quanta in the atom-field system \cite{Carm,WM}, whilst in \cite{PRL,Nature,Zeno} it was shown that nondemolition measurements can pump energy into the system via the destruction of quantum coherence provided the anti-rotating term is kept in the light-matter interaction Hamiltonian (i.e., without performing the Rotating Wave Approximation \cite{JC}). Besides, pure dephasing reservoirs always possess a finite temperature (see, e.g.
\cite{Carm,WM} for a microscopic derivation) and hence store an infinite amount of energy, so the additional system energy is continuously supplied by the environment and the First Law of Thermodynamics is not violated (for the discussion concerning the Second Law of Thermodynamics in systems subject to frequent quantum measurements see \cite{Nature}). Although the phenomenon of photon generation due to decoherence was explained qualitatively in \cite{Werlang,CAMOP}, no satisfactory analysis was carried out to establish analytically whether, for pure Markovian dephasing, the average photon number de facto increases linearly with time or whether this growth saturates for large times. The aim of this paper is therefore to investigate analytically the behavior of the Mean Excitation Numbers due to the Anti-Rotating Term (MENDART), such as the mean photon number and its variance or the atomic excitation probability, and to determine their asymptotic characteristics in the simplest case of Markovian dephasing. We shall show that for any initial state, in the asymptotic limit, the mean photon number $ \left\langle n\right\rangle $ indeed increases linearly with time, the second moment $\left\langle n^{2}\right\rangle $ of the photon number grows quadratically with time, and the atomic excitation probability $P_{e}$ attains a constant value. This paper thus provides the missing mathematical explanation for the phenomenon of steady photon generation due to Lindblad-type decoherence in the presence of the anti-rotating term.
Our starting point is the Markovian master equation for the density matrix $ \rho $ that takes into account both the atomic and cavity field phase-damping (dephasing) \cite{Carm,WM,CAMOP,amendart} \begin{equation} \dot{\rho}=-i[H,\rho ]+\frac{\gamma _{a}}{2}(\sigma _{z}\rho \sigma _{z}-\rho )+\gamma _{c}\left( 2n\rho n-n^{2}\rho -\rho n^{2}\right) \, \label{111}, \end{equation} where $\gamma _{a}$ ($\gamma _{c}$) is the atomic (cavity) dephasing rate and $H$ is the Rabi Hamiltonian \cite{Rabi,Ra1} (we set $\hbar =1$) \begin{equation} H=\omega n+\frac{\Omega }{2}\sigma _{z}+g(a+a^{\dagger })(\sigma _{+}+\sigma _{-}) \label{Rabi} \end{equation} that includes the anti-rotating term $(a\sigma _{-}+a^{\dagger }\sigma _{+})$ . Here $a$ and $a^{\dagger }$ are the cavity annihilation and creation operators, $n\equiv a^{\dagger }a$ is the photon number operator, and $ \omega $, $\Omega $ and $g$ are the cavity frequency, the atomic transition frequency and the atom-field coupling constant, respectively. The Pauli operators are defined as $\sigma _{-}=|g\rangle \langle e|$, $\sigma _{+}=|e\rangle \langle g|$ and $\sigma _{z}=|e\rangle \langle e|-|g\rangle \langle g|$, so that kets $|g\rangle $ and $|e\rangle $ can be interpreted as atomic\ ground and excited states, respectively. 
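The role of the anti-rotating term can be made explicit numerically: the Jaynes-Cummings part of the Hamiltonian (\ref{Rabi}) conserves the excitation number $N_{exc}=n+\sigma _{+}\sigma _{-}$, while the anti-rotating term changes it by two. A minimal sketch in a truncated Fock basis (plain Python with dense matrices; the truncation `nmax` and the parameter values are illustrative choices, not taken from any experiment):

```python
import math

nmax = 5        # Fock-space truncation (illustrative)
dim = 2 * nmax  # basis |s, n>, s = 0 (ground) or 1 (excited), n = 0..nmax-1

def idx(s, n):
    return 2 * n + s

def zeros():
    return [[0.0] * dim for _ in range(dim)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(dim)) for j in range(dim)]
            for i in range(dim)]

omega, Omega, g = 1.0, 0.9, 0.1  # illustrative parameter values

H_jc = zeros()   # omega*n + (Omega/2)*sigma_z + g*(a*sigma_+ + a^dag*sigma_-)
H_ar = zeros()   # anti-rotating term g*(a*sigma_- + a^dag*sigma_+)
N_exc = zeros()  # excitation number n + |e><e|
for n in range(nmax):
    for s in (0, 1):
        H_jc[idx(s, n)][idx(s, n)] = omega * n + 0.5 * Omega * (1 if s else -1)
        N_exc[idx(s, n)][idx(s, n)] = n + s
    if n + 1 < nmax:
        # a*sigma_+ : |g,n+1> -> sqrt(n+1)|e,n>, plus Hermitian conjugate
        H_jc[idx(1, n)][idx(0, n + 1)] = g * math.sqrt(n + 1)
        H_jc[idx(0, n + 1)][idx(1, n)] = g * math.sqrt(n + 1)
        # a^dag*sigma_+ : |g,n> -> sqrt(n+1)|e,n+1>, plus Hermitian conjugate
        H_ar[idx(1, n + 1)][idx(0, n)] = g * math.sqrt(n + 1)
        H_ar[idx(0, n)][idx(1, n + 1)] = g * math.sqrt(n + 1)

H_rabi = [[H_jc[i][j] + H_ar[i][j] for j in range(dim)] for i in range(dim)]

def comm_norm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return max(abs(AB[i][j] - BA[i][j]) for i in range(dim) for j in range(dim))

print(comm_norm(H_jc, N_exc))    # 0.0: the RWA part conserves N_exc
print(comm_norm(H_rabi, N_exc))  # nonzero: the anti-rotating term changes N_exc
```

It is precisely because the full Rabi Hamiltonian fails to commute with $N_{exc}$ that nondemolition monitoring of the excitation number by a dephasing environment can feed energy into the system.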
Expanding the density matrix in the Fock basis as \begin{eqnarray} \rho &=&\sum_{n,m=0}^{\infty }(a_{n,m}|g,n\rangle \langle g,m|+b_{n,m}|e,n\rangle \langle e,m| \nonumber \\ &&+c_{n,m}|g,n\rangle \langle e,m|+c_{m,n}^{\ast }|e,n\rangle \langle g,m|)~, \end{eqnarray} where $a_{n,m}$, $b_{n,m}$ and $c_{n,m}$ are time-dependent coefficients, we obtain the exact set of coupled differential equations (the prime stands for the time derivative) \begin{eqnarray} a_{n,m}^{\prime } &=&i[\omega (m-n)+i\gamma _{c}\left( n-m\right) ^{2}]a_{n,m}+ig(\sqrt{m}c_{n,m-1} \nonumber \\ &-&\sqrt{n}c_{m,n-1}^{\ast }+\sqrt{m+1}c_{n,m+1}-\sqrt{n+1}c_{m,n+1}^{\ast }) \label{a1} \end{eqnarray} \begin{eqnarray} b_{n,m}^{\prime } &=&i[\omega (m-n)+i\gamma _{c}\left( n-m\right) ^{2}]b_{n,m}+ig(\sqrt{m}c_{m-1,n}^{\ast } \nonumber \\ &-&\sqrt{n}c_{n-1,m}+\sqrt{m+1}c_{m+1,n}^{\ast }-\sqrt{n+1}c_{n+1,m}) \label{a2} \end{eqnarray} \begin{eqnarray} c_{n,m}^{\prime } &=&if_{n,m}c_{n,m}+ig(\sqrt{m}a_{n,m-1} \label{a3} \\ &&-\sqrt{n+1}b_{n+1,m}+\sqrt{m+1}a_{n,m+1}-\sqrt{n}b_{n-1,m}), \nonumber \end{eqnarray} where $f_{n,m}\equiv \omega \left( m-n\right) +\Omega +i[\gamma _{a}+\gamma _{c}(n-m)^{2}]$. In the strong dephasing limit, $(\gamma _{a}+\gamma _{c})\gtrsim |g|$, one expects on physical grounds that due to the decoherence the terms $c_{n,m}$ rapidly attain some constant values, so assuming that $c_{n,m}^{\prime }=0$ we get \begin{eqnarray} c_{n,m} &=&\frac{g}{f_{n,m}}(\sqrt{n+1}b_{n+1,m}-\sqrt{m}a_{n,m-1} \nonumber \\ &&+\sqrt{n}b_{n-1,m}-\sqrt{m+1}a_{n,m+1})\,.
\label{cc} \end{eqnarray} Now we substitute the expression for $c_{n,m}$ back into equations (\ref{a1})--(\ref{a2})\footnote{Actually, one must keep in mind that equation (\ref{cc}) only holds after a sufficient amount of time, but such nuances are not relevant when one is interested in the asymptotic behavior.} and define new coefficients $\tilde{a}_{n,m}=e^{-i\omega t(m-n)}a_{n,m}$ and $\tilde{b}_{n,m}=e^{-i\omega t(m-n)}b_{n,m}$ that are slowly varying functions of time. Assuming that $\left\vert g\right\vert \ll \omega ,\Omega $ (a condition that holds in cavity QED experiments unless the so-called `ultra-strong coupling regime' \cite{Hows1,Bla} is achieved) we neglect the rapidly oscillating terms and obtain the following effective differential equations for the diagonal probability amplitudes \begin{equation} \tilde{a}_{n}^{\prime } =-[(v_{1}+v_{2})n+v_{2}]\tilde{a}_{n}+[v_{1}n\tilde{b}_{n-1}+v_{2}(n+1)\tilde{b}_{n+1}] \label{d1} \end{equation} \begin{equation} \tilde{b}_{n}^{\prime } =-[(v_{1}+v_{2})n+v_{1}]\tilde{b}_{n}+[v_{2}n\tilde{a}_{n-1}+v_{1}(n+1) \tilde{a}_{n+1}], \label{d2} \end{equation} where $\tilde{a}_{n}\equiv \tilde{a}_{n,n}$, $\tilde{b}_{n}\equiv \tilde{b}_{n,n}$ and we defined the coefficients \begin{equation} v_{1}=\frac{2\gamma g^{2}}{\left( \omega -\Omega \right) ^{2}+\gamma ^{2}} ,~v_{2}=\frac{2\gamma g^{2}}{\left( \omega +\Omega \right) ^{2}+\gamma ^{2}}~ \end{equation} with $\gamma \equiv \gamma _{c}+\gamma _{a}$ standing for the total dephasing rate. 
One can easily verify that the normalization condition is maintained, $ \sum_{n=0}^{\infty }(\tilde{a}_{n}^{\prime }+\tilde{b}_{n}^{\prime })=0$, so the equations (\ref{d1})-(\ref{d2}) are consistent and lead to the following coupled differential equations for the low-order MENDART \begin{equation} \left\langle n(t)\right\rangle ^{\prime }=v_{2}+\left( v_{1}-v_{2}\right) \left[ P_{e}(t)+\left\langle n\sigma _{z}(t)\right\rangle \right] \label{n} \end{equation} \begin{equation} P_{e}(t)^{\prime }=v_{2}-\left( v_{1}+v_{2}\right) \left[ P_{e}(t)+\left \langle n\sigma _{z}(t)\right\rangle \right] \label{Pe} \end{equation} \begin{eqnarray} \left\langle n\sigma _{z}(t)\right\rangle ^{\prime } &=&v_{2}-2\left( v_{1}-v_{2}\right) \left\langle n(t)\right\rangle \label{ns} \\ &&-\left( v_{1}+v_{2}\right) \left[ P_{e}(t)+\left\langle n\sigma _{z}(t)\right\rangle +2\left\langle n^{2}\sigma _{z}(t)\right\rangle \right] \nonumber \end{eqnarray} \begin{eqnarray} \left\langle n^{2}(t)\right\rangle ^{\prime } &=&2\left( \omega ^{2}+\Omega ^{2}+\gamma ^{2}\right) ^{-1} \label{n2} \\ &&\times \left[ \gamma g^{2}\left( 1+4\left\langle n(t)\right\rangle \right) -\omega \Omega \left\langle n\sigma _{z}(t)\right\rangle ^{\prime }\,\right] . \nonumber \end{eqnarray} These equations cannot be solved analytically due to the coupling to the dynamical variable $\left\langle n^{2}\sigma _{z}(t)\right\rangle $ which obeys another differential equation. However, one can deduce the \textit{general} formula for the average photon number $\left\langle n(t)\right\rangle $ by noticing the similarity in the last terms of equations (\ref{n}) and (\ref{Pe}). 
One gets \begin{equation} \left\langle n(t)\right\rangle -\left\langle n(0)\right\rangle =\frac{2}{ \omega ^{2}+\Omega ^{2}+\gamma ^{2}}\left\{ g^{2}\gamma t-\omega \Omega \lbrack P_{e}(t)-P_{e}(0)]\right\}\,, \label{nn} \end{equation} where $P_{e}(t)\leq 1$ is still an unknown function of time\footnote{ Notice that the obtained asymptotic photon generation rate $2\gamma g^{2}(\omega ^{2}+\Omega ^{2}+\gamma ^{2})^{-1}$ resembles the approximate formula obtained in \cite{We} [namely $2\gamma g^{2}((\omega +\Omega )^{2}+\gamma ^{2})^{-1}$], although in that paper the mathematical approach was oversimplified.}. Furthermore, in the asymptotic regime $t\rightarrow \infty $ we expect from the equation (\ref{Pe}) that $P_{e}$ attains a constant value. Imposing $\lim_{t\rightarrow \infty }P_{e}(t)^{\prime }=0$ we obtain from (\ref{Pe})--(\ref{ns}) \begin{equation} \lim_{t\rightarrow \infty }[P_{e}(t)+\left\langle n\sigma _{z}(t)\right\rangle ]=\frac{1}{2}-\frac{\omega \Omega }{\omega ^{2}+\Omega ^{2}+\gamma ^{2}} \label{x1} \end{equation} \begin{equation} \lim_{t\rightarrow \infty }\left\langle n\sigma _{z}(t)\right\rangle ^{\prime }=0\mathrm{\,} \end{equation} \begin{equation} \lim_{t\rightarrow \infty }\left\langle n^{2}\sigma _{z}(t)\right\rangle =- \frac{2\omega \Omega }{\omega ^{2}+\Omega ^{2}+\gamma ^{2}} \lim_{t\rightarrow \infty }\left\langle n(t)\right\rangle \end{equation} and from equation (\ref{n2}) we get \begin{equation} \lim_{t\rightarrow \infty }\left\langle n^{2}(t)\right\rangle ^{\prime }= \frac{2\gamma g^{2}}{\omega ^{2}+\Omega ^{2}+\gamma ^{2}}\left[ 1+4\lim_{t\rightarrow \infty }\left\langle n(t)\right\rangle \right] ~. 
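The claim that formula \eqref{nn} holds for all times can be checked directly by integrating the effective equations (\ref{d1})--(\ref{d2}). A minimal NumPy sketch, assuming the resonant parameters quoted later in the text ($\omega=\Omega=1$, $g=4\times10^{-2}$, $\gamma=\gamma_a=2g$, $\gamma_c=0$), an assumed Fock truncation, and the initial state $|g,0\rangle$:

```python
import numpy as np

w = W = 1.0
g = 4e-2
gam = 2 * g                       # total dephasing rate gamma
v1 = 2 * gam * g**2 / ((w - W)**2 + gam**2)
v2 = 2 * gam * g**2 / ((w + W)**2 + gam**2)

N = 30                            # Fock truncation (assumed)
ns = np.arange(N + 1, dtype=float)

def rhs(a, b):
    """Right-hand side of the effective rate equations (d1)-(d2)."""
    bp = np.append(b[1:], 0.0)    # b_{n+1}
    bm = np.append(0.0, b[:-1])   # b_{n-1}
    ap = np.append(a[1:], 0.0)
    am = np.append(0.0, a[:-1])
    da = -((v1 + v2) * ns + v2) * a + v1 * ns * bm + v2 * (ns + 1) * bp
    db = -((v1 + v2) * ns + v1) * b + v2 * ns * am + v1 * (ns + 1) * ap
    return da, db

# Initial state |g,0>
a = np.zeros(N + 1); a[0] = 1.0
b = np.zeros(N + 1)
Pe0, n0 = 0.0, 0.0

dt, steps = 0.1, 5000             # RK4 up to t = 500
for _ in range(steps):
    k1a, k1b = rhs(a, b)
    k2a, k2b = rhs(a + 0.5*dt*k1a, b + 0.5*dt*k1b)
    k3a, k3b = rhs(a + 0.5*dt*k2a, b + 0.5*dt*k2b)
    k4a, k4b = rhs(a + dt*k3a, b + dt*k3b)
    a = a + dt*(k1a + 2*k2a + 2*k3a + k4a)/6
    b = b + dt*(k1b + 2*k2b + 2*k3b + k4b)/6

t = dt * steps
n_avg = np.sum(ns * (a + b))      # <n(t)>
Pe = np.sum(b)                    # P_e(t)

# Formula (nn): <n(t)> - <n(0)> = 2/(w^2+W^2+gam^2)*(g^2*gam*t - w*W*(Pe - Pe0))
rhs_nn = 2.0 / (w**2 + W**2 + gam**2) * (g**2 * gam * t - w * W * (Pe - Pe0))
print(abs(np.sum(a + b) - 1.0))   # normalization drift (tiny)
print(abs((n_avg - n0) - rhs_nn)) # agreement with formula (nn)
```

Both printed residuals are at the level of rounding error: the normalization is a linear invariant (conserved exactly by Runge--Kutta schemes up to the negligible truncation leak), and \eqref{nn} follows algebraically from equations \eqref{n} and \eqref{Pe}.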
\label{x4} \end{equation} Hence, in the asymptotic regime $t\rightarrow \infty $ we have the following rules for the Asymptotic MENDART (AMENDART, as coined in \cite{amendart}) for any initial state: \textbf{a)} $\left\langle n(t)\right\rangle $ and $ -\left\langle n^{2}\sigma _{z}(t)\right\rangle $ increase \emph{linearly} with time; \textbf{b)} $\left\langle n^{2}(t)\right\rangle $ increases \emph{quadratically} with time; \textbf{c)} $P_{e}(t)$ and $\left\langle n\sigma _{z}(t)\right\rangle $ attain \emph{constant} values. We solved numerically the effective differential equations (\ref{d1})--(\ref{d2}) and verified that the formula (\ref{nn}) is correct for all times, thereby accounting for the linear growth of $\left\langle n(t)\right\rangle $ noticed in \cite{Werlang} from numerical data, while the equations (\ref{x1})-(\ref{x4}) agree with the numerical results in the asymptotic regime. In the figures \ref{fig1}--\ref{fig3} we compare the exact dynamics resulting from the original differential equations (\ref{a1})--(\ref{a3}) to the effective dynamics governed by the simplified equations (\ref{d1})--(\ref{d2}) for the parameters $\omega =1$, $g=4\times 10^{-2}$, $\gamma _{a}=2g$ and $\gamma _{c}=0$. In the figure \ref{fig1} (\ref{fig2}) we consider the resonant regime $\Omega =\omega $ (dispersive regime $\Omega =\omega -20g$) for the initial zero-excitation state $|g,0\rangle $. We plot the dynamical behavior of the observables $\left\langle n\right\rangle $, $\left\langle n^{2}\right\rangle $, $\left\langle n^{2}\sigma _{z}\right\rangle $, $ \left\langle n\sigma _{z}\right\rangle $ and $P_{e}$ calculated from the original differential equations (\ref{a1})--(\ref{a3}). Within the thickness of the lines these curves are indistinguishable from the graphs obtained from the effective equations (\ref{d1})--(\ref{d2}). 
To exemplify the difference between the original and effective differential equations, we show the zoom for the behavior of $\left\langle n^{2}\right\rangle $ at initial times: the solid line depicts the exact dynamics, and the dashed line -- the effective one. The observed discrepancies are quite small and appear because $c_{n,m}^{\prime }$ does not become zero instantly, as was assumed in our analysis; nevertheless, these minor differences are irrelevant regarding the asymptotic behavior. In the figures we also show the photon number distributions calculated numerically at the time $gt=300$ according to the original differential equations (bars) and the effective ones (dots). Once again, the agreement is excellent. In the figure \ref{fig3} we consider the initial state $\rho (0)=\rho _{therm}\otimes |e\rangle \langle e|$, where $\rho _{therm}$ is the thermal state of the electromagnetic field whose photon number distribution is $ p_{n}=\bar{n}^{n}/(\bar{n}+1)^{n+1}$, where $\bar{n}$ is the average photon number. We set $\Omega =\omega $, $\bar{n}=0.3$ and show the asymptotic behavior of $\left\langle n\right\rangle $, $\left\langle n^{2}\right\rangle $ and $\left\langle n^{2}\sigma _{z}\right\rangle $ obtained from the original differential equations (figure \ref{fig3}a) and the zoom of $ \left\langle n\right\rangle $ and $P_{e}$ for initial times (figure \ref{fig3}b, where the dashed lines represent the solutions of the effective differential equations). We see that asymptotically the behavior agrees with equations (\ref{nn})--(\ref{x4}), although the transient dynamics cannot be reliably described by the equations (\ref{d1})--(\ref{d2}). 
\begin{figure} \caption{Exact and effective dynamical behavior of principal observables and the photon statistics for the time $gt=300$ in the resonant regime, $\Omega = \protect\omega $.} \label{fig1} \end{figure} \begin{figure} \caption{Same as figure \protect\ref{fig1} in the dispersive regime, $\Omega =\protect\omega -20g$.} \label{fig2} \end{figure} \begin{figure} \caption{Behavior of $\left\langle n\right\rangle $, $\left\langle n^{2}\right\rangle $, $\left\langle n^{2}\protect\sigma _{z}\right\rangle $ and $P_{e}$ in \textbf{(a)} the asymptotic regime and \textbf{(b)} during the transient regime for small times. The initial state is $\protect\rho _{therm}\otimes |e\rangle \langle e|$ with average photon number $ \left\langle n(0)\right\rangle =0.3$.} \label{fig3} \end{figure} Regarding the practical observation of the asymptotic linear photon growth inside the cavity due to decoherence, it seems quite unlikely in current cavity or circuit QED implementations because the photons (and atomic excitations) would be lost due to radiative and nonradiative relaxation processes. As an example, let us consider current state-of-the-art circuit QED implementations. The typical parameters are \cite{Fedor}: $\omega \sim \Omega \sim 8\,$\textrm{GHz} and $g\sim 0.3\,$\textrm{GHz}, while the dephasing rate is of the order of $\gamma _{a}\sim 1\,\mathrm{MHz}$, although it can be made large at will (usually one desires to decrease $\gamma $ and not to increase it). 
Considering a high value for the total dephasing rate, $\gamma \sim 1\,\mathrm{GHz}$, the resulting asymptotic photon growth rate due to dephasing would be $\sim 1\,\mathrm{MHz}.$ This value is of the same order of magnitude as the cavity relaxation rate for a rather high cavity quality factor $Q\sim 10^{4}$, so $\left\langle n(t)\right\rangle $ would saturate at some (small) value instead of showing an asymptotic growth, as calculated explicitly in \cite{CAMOP,2atoms,amendart} for the standard quantum optical master equation. Some photons escape to the outside world via the radiative dissipation channel, so they could ultimately be detected outside the cavity, but in this case different models predict different photon emission rates depending on assumptions made about the reservoir \cite{Libe,W3} (in particular whether it is Markovian or not). Recently a more sophisticated microscopic model was developed for deducing the master equation in the presence of the anti-rotating term, valid in a specific regime of parameters \cite{Bla}. According to that model, the phenomenon of dephasing-induced generation of photons is greatly exaggerated by the Lindblad-type master equation (\ref{111}), and instead of the linear asymptotic growth the average photon number saturates at some value that strongly depends on the reservoir spectral density \cite{Bla}. Nevertheless, the very phenomenon of photon generation due to decoherence persists and our formulae provide an upper bound for the photon generation rate. From the qualitative viewpoint, in realistic (lossy) cavity QED architectures this phenomenon would lead to a parameter-dependent heating of the system slightly above the thermal equilibrium values \cite{2atoms,amendart}, depending on the atom-field detuning, coupling strength and the dephasing rate. 
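The order-of-magnitude estimate above can be reproduced directly from the asymptotic rate formula, using the circuit QED figures quoted in the text (a back-of-the-envelope check):

```python
# Asymptotic photon growth rate 2*gamma*g^2/(omega^2 + Omega^2 + gamma^2)
# with the circuit-QED figures quoted above (all in GHz)
omega = Omega = 8.0
g = 0.3
gamma = 1.0   # deliberately enhanced total dephasing rate

rate_GHz = 2 * gamma * g**2 / (omega**2 + Omega**2 + gamma**2)
print(rate_GHz * 1e3)  # in MHz: about 1.4, i.e. of order 1 MHz
```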
In summary, we obtained simplified differential equations describing the process of photon generation (from vacuum or any other state) due to the combined action of the anti-rotating term and the standard Lindbladian dephasing in Markovian cavity QED, whose validity was confirmed by extensive numerical simulations. From these equations we deduced analytical formulae describing the overall behavior of MENDART for an arbitrary initial state, demonstrating that asymptotically the mean photon number $\left\langle n\right\rangle $ increases linearly with time at the rate $2\gamma g^{2}(\omega ^{2}+\Omega ^{2}+\gamma ^{2})^{-1}$, $\left\langle n^{2}\right\rangle $ grows quadratically with time and the atomic excitation probability attains a constant value. \begin{acknowledgments} The author acknowledges partial support from the Decanato de Pesquisa e P\'{o}s-Gradua\c{c}\~{a}o (Universidade de Bras\'{\i}lia, Brazil). \end{acknowledgments} \end{document}
\begin{document} \title [Location of concentrated vortices]{Location of concentrated vortices in planar steady Euler flows} \author{Guodong Wang, Bijun Zuo} \address{Institute for Advanced Study in Mathematics, Harbin Institute of Technology, Harbin 150001, P.R. China} \email{wangguodong@hit.edu.cn} \address{College of Mathematical Sciences, Harbin Engineering University, Harbin {\rm150001}, PR China} \email{bjzuo@amss.ac.cn} \begin{abstract} In this paper, we study two-dimensional steady incompressible Euler flows in which the vorticity is sharply concentrated in a finite number of regions of small diameter in a bounded domain. Mathematical analysis of such flows is an interesting and physically important research topic in fluid mechanics. The main purpose of this paper is to prove that in such flows the locations of these concentrated blobs of vorticity must be in the vicinity of some critical point of the Kirchhoff-Routh function, which is determined by the geometry of the domain. The vorticity is assumed to be only in $L^{4/3},$ which is the optimal regularity for weak solutions to make sense. As a by-product, we prove a nonexistence result for concentrated multiple vortex flows in convex domains. \end{abstract} \maketitle \section{Introduction} Let $D\subset\mathbb R^2$ be a simply-connected bounded domain with smooth boundary $\partial D$. 
Consider in $D$ an ideal fluid in steady state, the motion of which is described by the famous Euler equations \begin{equation}\label{euler} \begin{cases} (\mathbf v\cdot\nabla)\mathbf v=-\nabla P&\mathbf x=(x_1,x_2)\in D,\\ \nabla\cdot\mathbf v=0&\mathbf x\in D,\\ \mathbf v\cdot\mathbf n =g&\mathbf x\in\partial D, \end{cases} \end{equation} where $\mathbf v=(v_1,v_2)$ is the velocity field, $P$ is a scalar function that represents the pressure, $\mathbf n$ is the unit outward normal on $\partial D,$ and $g$ is a given function satisfying the following compatibility condition \begin{equation}\label{g} \int_{\partial D}gdS=0. \end{equation} Here we assume that the fluid is of unit density. The first two equations in \eqref{euler} are the momentum conservation and mass conservation respectively, and the boundary condition in \eqref{euler} means that the rate of mass flow across the boundary per unit area is $g$. In particular, if $g\equiv0,$ then there is no matter flow through the boundary. The scalar vorticity $\omega$, defined as the signed magnitude of curl$\mathbf v,$ that is, \[\omega=\partial_{x_1} v_2-\partial_{x_2}v_1,\] is one of the fundamental physical quantities and plays an important role in the study of two-dimensional flows. Below we reformulate the Euler equations \eqref{euler} as a single equation of $\omega$, which is much easier to handle mathematically. First we show that $\mathbf v$ can be recovered from $\omega$. In fact, since $\mathbf v$ is divergence-free and $D$ is simply-connected, we can apply the Green's theorem to show that there is a scalar function $\psi,$ called the \emph{stream function}, such that \begin{equation}\label{psi} \mathbf v=(\partial_{x_2}\psi,-\partial_{x_1}\psi). 
\end{equation} For convenience, throughout this paper we will use the symbol $\mathbf b^\perp$ to denote the clockwise rotation through $\pi/2$ of any planar vector $\mathbf b=(b_1,b_2)$, that is, $\mathbf b^\perp=(b_2,-b_1)$, and $\nabla^\perp f$ to denote $(\nabla f)^\perp$ for any scalar function $f$, that is, $\nabla^\perp f=(\partial_{x_2}f,-\partial_{x_1}f)$. Thus \eqref{psi} can also be written as \begin{equation}\label{psi2} \mathbf v=\nabla^\perp \psi. \end{equation} It is easy to check that $\psi$ and $\omega$ satisfy \begin{equation}\label{poisson} \begin{cases} -\Delta\psi=\omega&\text{in }D,\\ \nabla^\perp\psi\cdot\mathbf n=g&\text{on }\partial D. \end{cases} \end{equation} To deal with the boundary condition in \eqref{poisson}, we consider the following elliptic problem \begin{equation}\label{q} \begin{cases} -\Delta \psi_0=0&\text{in }D,\\ \nabla^\perp \psi_0\cdot\mathbf n=g&\text{on }\partial D. \end{cases} \end{equation} To solve \eqref{q}, we first solve the following Laplace equation with standard Neumann boundary condition \begin{equation*} \begin{cases} -\Delta \psi_1=0&\text{in }D,\\ \frac{\partial \psi_1}{\partial\mathbf n}=g&\text{on }\partial D, \end{cases} \end{equation*} then the harmonic conjugate of $\psi_1$ solves \eqref{q}. Note that by the maximum principle the solution to \eqref{q} is unique up to a constant. Now it is easy to see that $\psi-\psi_0$ satisfies \begin{equation}\label{gw} \begin{cases} -\Delta(\psi-\psi_0)=\omega&\text{in }D,\\ \nabla^\perp (\psi-\psi_0)\cdot\mathbf n=0&\text{on }\partial D. \end{cases} \end{equation} The boundary condition in \eqref{gw} implies that $\psi-\psi_0$ is a constant on $\partial D$ (recall that $D$ is simply-connected). 
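To make the harmonic-conjugate construction of $\psi_0$ concrete, here is a small sanity check on the unit disk with the assumed Neumann data $g(\theta)=\cos\theta$: then $\psi_1(x_1,x_2)=x_1$ is harmonic with $\partial\psi_1/\partial\mathbf n=\cos\theta$ on the boundary, and its harmonic conjugate $\psi_0(x_1,x_2)=x_2$ indeed satisfies $\nabla^\perp\psi_0\cdot\mathbf n=g$.

```python
import numpy as np

# Unit disk, assumed boundary data g(theta) = cos(theta).
# psi1 = x1 solves the Neumann problem; its harmonic conjugate is psi0 = x2.
theta = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
nvec = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # outward normal

g = np.cos(theta)

# grad^perp psi0 = (d(psi0)/dx2, -d(psi0)/dx1) = (1, 0) for psi0 = x2
grad_perp_psi0 = np.array([1.0, 0.0])

bc = nvec @ grad_perp_psi0        # grad^perp(psi0) . n on the boundary
print(np.max(np.abs(bc - g)))     # 0: the boundary condition holds exactly
```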
Without loss of generality by adding a suitable constant we assume that $\psi-\psi_0=0$ on $\partial D,$ thus $\psi-\psi_0$ can be expressed in terms of the Green's operator as follows \begin{equation}\label{exp} \psi-\psi_0=\mathcal G\omega:=\int_DG(\cdot,\mathbf y)\omega(\mathbf y)d\mathbf y, \end{equation} where $G(\cdot,\cdot)$ is the Green's function for $-\Delta$ in $D$ with zero boundary condition. Combining \eqref{psi2} and \eqref{exp}, we have recovered $\psi$ from $\omega$ in the following \begin{equation}\label{bs} \mathbf v=\nabla^\perp(\mathcal G\omega+\psi_0), \end{equation} which is usually called the Biot-Savart law in fluid mechanics. On the other hand, taking the curl on both sides of the momentum equation in \eqref{euler} we get \begin{equation}\label{ve1} \mathbf v\cdot\nabla \omega=0. \end{equation} From \eqref{bs} and \eqref{ve1}, the Euler equations \eqref{euler} are reduced to a single equation of $\omega$ \begin{equation}\label{ve} \nabla^\perp(\mathcal G\omega+\psi_0)\cdot\nabla \omega=0\quad\text{ in }D, \end{equation} which is usually called the \emph{vorticity equation}. \begin{remark} When $D$ is multiply-connected, the above discussion is still valid. The only difference is that one needs to replace the usual Green's function $G$ by the hydrodynamic Green's function (see \cite{Flu}, Definition 15.1), which does not cause any essential difficulty for the problem discussed in this paper. \end{remark} In the rest of this paper, we will be focused on the study of \eqref{ve}. 
Note that once we have obtained a solution $\omega$ to \eqref{ve}, we immediately get a solution to \eqref{euler} with \[\mathbf v=\nabla^\perp(\mathcal G\omega+\psi_0),\quad P(\mathbf x)=\int_{L_{\mathbf x_0,\mathbf x}}\omega(\mathbf y)\mathbf v^\perp(\mathbf y)\cdot d\mathbf y-\frac{1}{2}|\mathbf v(\mathbf x)|^2,\] where $\mathbf x_0$ is a fixed point in $D$ and $L_{\mathbf x_0,\mathbf x}$ is any $C^1$ curve joining $\mathbf x_0$ and $\mathbf x$ (one can easily check that the above line integral is well defined by using Green's theorem and the fact that $\omega$ is a solution). Since in many physical problems the vorticity is of low regularity, not even continuous, it is necessary to define the notion of weak solutions to \eqref{ve}. In the rest of this paper, we regard $\psi_0$ as a given function. \begin{definition}\label{wsve} Let $\omega\in L^{4/3}(D)$. If for any $\phi\in C_c^\infty(D)$ it holds that \begin{equation}\label{int} \int_D\omega\nabla^\perp(\mathcal G\omega+\psi_0)\cdot\nabla\phi d\mathbf x=0,\end{equation} then $\omega$ is called a weak solution to the vorticity equation \eqref{ve}. \end{definition} The above definition is natural, since one can multiply both sides of \eqref{ve} by any test function $\phi$ and integrate by parts formally to get \eqref{int}. \begin{remark} Since $\psi_0$ is harmonic (thus smooth) and $\phi$ has compact support in $D$, we see that the integral $\int_D\omega \nabla^\perp \psi_0\cdot\nabla\phi d\mathbf x$ in \eqref{int} makes sense. Note that throughout this paper we do not impose any condition on the boundary value of $\psi_0$. \end{remark} \begin{remark} By the Calderon-Zygmund inequality and the Sobolev inequality, $\omega\in L^{4/3}(D)$ is the optimal regularity for the integral $\int_D\omega\nabla^\perp\mathcal G\omega\cdot\nabla\phi d\mathbf x$ in \eqref{int} to be well-defined. \end{remark} In the literature, there has been extensive study on the existence of weak solutions to \eqref{ve}. 
See \cite{B1,B2,CLW,CPY1,CPY2,CW1,CWZ2,CWZu,EM,LP,LYY,SV,T,W,WZ} for example. The solutions obtained in these papers have one common feature, that is, the vorticity is a function of the stream function ``locally''. In this regard, Cao and Wang \cite{CW2} proved a general criterion for an $L^{4/3}$ function to be a weak solution. \begin{theorem}[Cao--Wang, \cite{CW2}]\label{cwthm} Let $k$ be a positive integer and $\psi_0\in C^1(\bar D)$. Suppose that $\omega\in L^{4/3}(D)$ satisfies \begin{equation}\label{fo} \omega=\sum_{i=1}^k\omega_i, \,\,\min_{1\leq i< j\leq k}\{\text{dist}(\text{supp}\omega_i,\text{supp}\omega_j)\}>0,\,\,\omega_i=f^i(\mathcal G\omega+\psi_0), \text{a.e. in (supp}\omega_i)_\delta, \end{equation} where $\delta$ is a positive number, \[\text{(supp}\omega_i)_\delta:=\{\mathbf x\in D\mid \text{dist}(\mathbf x,\text{supp}\omega_i)<\delta\},\] and each $f^i$ is either monotone from $\mathbb R$ to $\mathbb R\cup\{\pm\infty\}$ or Lipschitz from $\mathbb R$ to $\mathbb R$. Then $\omega$ is a weak solution to the vorticity equation \eqref{ve}. \end{theorem} Some examples of such flows are as follows. When $f^i$ in Theorem \ref{cwthm} is a Heaviside function, the solutions are called vortex patches, and related existence results can be found in \cite{CPY1,CW1,CWZu,T,WZ}. When $f^i$ is a power function, related papers are \cite{CLW,CPY2,LP,LYY,SV}. In \cite{B1,B2,EM}, the authors obtained some steady vortex flows by maximizing or minimizing the kinetic energy of the fluid on the rearrangement class of some given function. The solutions obtained in \cite{B1,B2,EM} still have the form \eqref{fo}, where each $f^i$ is a monotone function, but the precise expression of $f^i$ is unknown. Recently Cao, Wang and Zhan \cite{CWZ2,W} modified Turkington's method \cite{T} and proved the existence of a large class of solutions of the form \eqref{fo}, where each $f^i$ is a given function with few restrictions. 
Among the flows mentioned above, some are of particular interest and attract more attention, that is, flows in which the vorticity is sharply concentrated in a finite number of small regions and vanishes elsewhere, just like a finite sum of Dirac measures. Mathematically, the vorticity in such flows has the form \begin{equation}\label{cv} \omega_\varepsilon=\sum_{i=1}^k\omega_{\varepsilon,i},\quad {\rm supp}(\omega_{\varepsilon,i})\subset B_{o(1)}(\bar {\mathbf x}_i),\quad\int_D\omega_{\varepsilon,i} d\mathbf x=\kappa_i+o(1),\quad i=1,\cdot\cdot\cdot,k, \end{equation} where $\varepsilon$ is a small positive parameter, $k$ is a positive integer, $\bar {\mathbf x}_i\in D$, $\kappa_i$ is a fixed non-zero real number, $i=1,\cdot\cdot\cdot,k$, and $o(1)\to0$ as $\varepsilon \to0^+$. Papers concerning the existence of such solutions include \cite{CLW,CPY1,CPY2,CW1,CWZ2,CWZu,SV,T,W}. Note that all the flows constructed in these papers have bounded vorticity. Euler flows with vorticity of the form \eqref{cv} are closely related to a very famous Hamiltonian system in $\mathbb R^2$, the point vortex model (see \cite{L}), which describes the evolution of a finite number of point vortices with their locations being the canonical variables. The point vortex model is only an approximate model, and its precise connection with the 2D Euler equations with concentrated vorticity in the evolutionary case is a tough and unsolved problem. For a detailed discussion, we refer the interested readers to \cite{MP1,MP2,MP3,MPa,T2}. 
According to the point vortex model, the locations of concentrated blobs of vorticity in steady Euler flows are not arbitrary, but \emph{should} be near a critical point of the following Kirchhoff-Routh function \begin{equation}\label{krf} W(\mathbf x_1,\cdot\cdot\cdot,\mathbf x_k)=-\sum_{1\leq i<j\leq k}\kappa_i\kappa_jG(\mathbf x_i,\mathbf x_j)+\frac{1}{2}\sum_{i=1}^k\kappa_i^2H(\mathbf x_i)+\sum_{i=1}^k\kappa_i\psi_0(\mathbf x_i), \end{equation} where \[(\mathbf x_1,\cdot\cdot\cdot,\mathbf x_k)\in \underbrace{D\times\cdot\cdot\cdot\times D}_{k\text { times}}\setminus \{(\mathbf x_1,\cdot\cdot\cdot,\mathbf x_k)\mid \mathbf x_i\in D, \mathbf x_i=\mathbf x_j \text{ for some }i\neq j\}\] and $H(\mathbf x)=h(\mathbf x,\mathbf x)$ with $h$ being the regular part of Green's function, that is, \[h(\mathbf x,\mathbf y):=-\frac{1}{2\pi}\ln|\mathbf x-\mathbf y|-G(\mathbf x,\mathbf y),\quad \mathbf x,\mathbf y\in D.\] However, to our knowledge there is no complete and rigorous proof on this issue in the literature, although the solutions of the form \eqref{cv} constructed in \cite{CLW,CPY1,CPY2,CW1,CWZ2,EM,SV,T} are all based on the hypothesis that $(\bar {\mathbf x}_1,\cdot\cdot\cdot,\bar {\mathbf x}_k)$ is a critical point of $W$. The aim of this paper is to prove that such a hypothesis is necessary. This paper is organized as follows. In Section 2, we state our main results (Theorems \ref{mthm} and \ref{none}) and give some comments. In Sections 3 and 4 we provide their proofs. \section{Main results} In this section, we present our two main results. The first result gives the necessary condition on the locations of concentrated vortices. \begin{theorem}\label{mthm} Let $k$ be a positive integer, $\bar {\mathbf x}_1,\cdot\cdot\cdot,\bar {\mathbf x}_k\in D$ be $k$ different points and $\kappa_1,\cdot\cdot\cdot,\kappa_k$ be $k$ non-zero real numbers. 
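For the unit disk the Green's function is explicit, and with the sign convention for $h$ used above one finds $H(\mathbf x)=-\frac{1}{2\pi}\ln(1-|\mathbf x|^2)$. For a single vortex ($k=1$, $\psi_0\equiv0$) the Kirchhoff-Routh function reduces to $W(\mathbf x)=\frac{1}{2}\kappa^2 H(\mathbf x)$, whose only critical point is the center. A small gradient-descent sketch confirms this (the step size and starting point are arbitrary illustrative choices):

```python
import numpy as np

# Unit disk: H(x) = -(1/(2*pi)) * log(1 - |x|^2)  (explicit formula).
# Single vortex, psi_0 = 0:  W(x) = 0.5*kappa^2*H(x),
# grad W(x) = 0.5*kappa^2 * x / (pi*(1 - |x|^2)).
kappa = 1.0

def grad_W(x):
    r2 = np.dot(x, x)
    return 0.5 * kappa**2 * x / (np.pi * (1.0 - r2))

x = np.array([0.5, 0.3])          # arbitrary interior starting point
for _ in range(500):              # plain gradient descent
    x = x - grad_W(x)

print(np.linalg.norm(x))          # converges to the center of the disk
```

Since $H\to+\infty$ at the boundary and is radially increasing, the descent contracts $|x|$ geometrically toward the origin.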
Assume that there exists a sequence of weak solutions $\{\omega_n\}_{n=1}^{+\infty}$ to the vorticity equation \eqref{ve}, satisfying $\omega_n=\sum_{i=1}^k\omega_{n,i}$ with $\omega_{n,i}\in L^{4/3}(D)$ and \[{\rm supp}(\omega_{n,i})\subset B_{o(1)}(\bar {\mathbf x}_i),\quad \int_D\omega_{n,i} d\mathbf x=\kappa_i+o(1),\quad i=1,\cdot\cdot\cdot,k,\] where $o(1)\to0$ as $n\to+\infty$. Then $(\bar {\mathbf x}_1,\cdot\cdot\cdot,\bar {\mathbf x}_k)$ must be a critical point of $W$ defined by \eqref{krf}. \end{theorem} Here we compare Theorem \ref{mthm} with two related results in \cite{CGPY} and \cite{CM}. In \cite{CGPY}, Cao, Guo, Peng and Yan studied planar Euler flows with vorticity of the following patch form \begin{equation}\label{ceq} \omega^\lambda=\sum_{i=1}^k\omega^\lambda_i,\quad \omega^\lambda_i=\lambda\chi_{\{\mathbf x\in D\mid \mathcal G\omega^\lambda(\mathbf x)>\mu_i^\lambda\}\cap B_{\delta}(\bar {\mathbf x}_i)},\quad \int_D\omega_i^\lambda d\mathbf x=\kappa_i, \quad i=1,\cdot\cdot\cdot,k, \end{equation} where $\lambda$ is a large positive parameter, $\chi$ denotes the characteristic function, each $\mu^\lambda_i$ is a real number depending on $\lambda$ and each $\kappa_i$ is a given non-zero number. They proved that if supp$\omega^\lambda_i$ ``shrinks'' to $\bar {\mathbf x}_i$ as $\lambda\to+\infty$, then $\bar {\mathbf x}_1,\cdot\cdot\cdot,\bar {\mathbf x}_k$ must necessarily constitute a critical point of $W$ (see Theorem 1.1 in \cite{CGPY} for the precise statement). Compared with their result, we consider more general flows and only impose very weak regularity on the vorticity in Theorem \ref{mthm}. Moreover, as we will see in the next section, the proof we provide is shorter and more elementary. The other relevant work is \cite{CM}. 
In \cite{CM}, Caprini and Marchioro studied the evolution of a finite number of blobs of vorticity in $\mathbb R^2$ and proved the finite-time localization property (see Theorem 1.2 in \cite{CM} for the precise statement). In their result, each $\omega_{n,i}$ is required additionally to have a definite sign and satisfy the growth condition \begin{equation}\label{gwth} \|\omega_{n,i}\|_{L^\infty}\leq M(\text{diam(supp}\omega_{n,i}))^{-\delta}, \end{equation} where $M$ and $\delta$ are both fixed positive numbers. As a consequence of their result, Theorem \ref{mthm} holds true if the additional growth condition \eqref{gwth} is satisfied (although they only considered the whole plane case, a similar result for a bounded domain can also be proved without any difficulty). In this sense, our result can be regarded as a strengthened version of Caprini and Marchioro's result in the steady case. \begin{remark} In Theorem 1.1 in \cite{CGPY}, for vorticity of the form \eqref{ceq}, $\bar{\mathbf x}_i\in D$ and $\bar {\mathbf x}_i\neq \bar {\mathbf x}_j$ for $ i\neq j$ are not assumptions but can be proved as conclusions. However, in the very general setting of this paper, these two conclusions may be false. For example, one can artificially regard a single blob of vorticity as two, so that they concentrate at the same point. Also, Cao, Wang and Zuo \cite{CWZu} constructed a pair of steady vortex patches with opposite rotation directions in the unit disk (Theorem 5.1, \cite{CWZu}), and it can be checked that as the ratio of the circulations of the two patches goes to infinity, the patch with smaller circulation will approach the boundary of the disk. \end{remark} Our second result is about the nonexistence of concentrated multiple vortex flows in convex domains, which can be seen as a by-product of Theorem \ref{mthm}. 
\begin{theorem}\label{none} Let $\delta_0>0$ be fixed, $D$ be a smooth convex domain, $k\geq 2$ be a positive integer, $ \kappa_1,\cdot\cdot\cdot,\kappa_k$ be $k$ positive numbers and $f_1,\cdot\cdot\cdot,f_k$ be $k$ real functions satisfying \[\lim_{t\to0^+}f_i(t)=0,\quad i=1,\cdot\cdot\cdot,k.\] If $\psi_0\equiv0,$ then there exists $\varepsilon_0>0$, such that for any $\varepsilon\in(0,\varepsilon_0),$ there is no weak solution $\omega_\varepsilon$ to the vorticity equation \eqref{ve} satisfying \begin{itemize} \item[(1)] $\omega_\varepsilon=\sum_{i=1}^k\omega_{\varepsilon,i},$ $\omega_{\varepsilon,i}\in L^{4/3}(D), i=1,\cdot\cdot\cdot,k;$ \item[(2)] $\text{dist(supp}\omega_{\varepsilon,i},\text{supp}\omega_{\varepsilon,j}) >\delta_0 \,\, \forall\,1\leq i<j\leq k$ and $\text{dist(supp}\omega_{\varepsilon,i},\partial D)>\delta_0\,\, \forall\,1\leq i\leq k;$ \item[(3)] $\text{diam(supp}\omega_{\varepsilon,i})<\varepsilon,\,\,i=1,\cdot\cdot\cdot,k;$ \item[(4)] $\int_D\omega_{\varepsilon,i}d\mathbf x=\kappa_i+f_i(\varepsilon),\,\,i=1,\cdot\cdot\cdot,k.$ \end{itemize} \end{theorem} \section{Proof of Theorem \ref{mthm}} First we need the following lemma. \begin{lemma}\label{lem} Let $\omega\in L^{4/3}(\mathbb R^2)$ with compact support. Define \begin{equation*} f(\mathbf x)=\int_{\mathbb R^2}\ln|\mathbf x-\mathbf y|\omega(\mathbf y)d\mathbf y. \end{equation*} Then $f\in W^{2,4/3}_{\rm loc}(\mathbb R^2)$ and the distributional partial derivatives of $f$ can be expressed as \begin{equation}\label{deri} \partial_{x_i} f(\mathbf x)=\int_{\mathbb R^2}\frac{x_i-y_i}{|\mathbf x-\mathbf y|^2}\omega(\mathbf y)d\mathbf y\quad \text{a.e. }\,\mathbf x\in\mathbb R^2, \,\,i=1,2. \end{equation} \end{lemma} \begin{proof} By the Calderon-Zygmund estimate we have $f\in W^{2,4/3}_{\rm loc}(\mathbb R^2)$. The expression \eqref{deri} follows from Theorem 6.21 on page 157 of \cite{LL}. \end{proof} Now we are ready to prove Theorem \ref{mthm}. 
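As an aside, the representation \eqref{deri} can be spot-checked numerically: for the assumed test vorticity $\omega=\chi_{B_1(0)}$ and $|\mathbf x|>1$, the mean value property of harmonic functions gives $\nabla f(\mathbf x)=\pi\,\mathbf x/|\mathbf x|^2$, which a direct quadrature of \eqref{deri} reproduces.

```python
import numpy as np

# Spot check of the derivative formula for omega = 1 on the unit disk:
# for |x| > 1, grad f(x) = pi * x/|x|^2 by the mean value property.
R = 1.0
x = np.array([2.0, 1.0])

# midpoint polar quadrature over the disk
Nr, Nt = 200, 400
r = (np.arange(Nr) + 0.5) * (R / Nr)
t = (np.arange(Nt) + 0.5) * (2 * np.pi / Nt)
rr, tt = np.meshgrid(r, t, indexing="ij")
y1, y2 = rr * np.cos(tt), rr * np.sin(tt)
wgt = rr * (R / Nr) * (2 * np.pi / Nt)            # area elements

d1, d2 = x[0] - y1, x[1] - y2
den = d1**2 + d2**2
grad_num = np.array([np.sum(d1 / den * wgt), np.sum(d2 / den * wgt)])
grad_exact = np.pi * x / np.dot(x, x)

print(np.max(np.abs(grad_num - grad_exact)))      # small quadrature error
```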
The key point of the proof is to use the anti-symmetry of the singular part of the Biot-Savart kernel. \begin{proof}[Proof of Theorem \ref{mthm}] Fix $l\in\{1,\cdot\cdot\cdot,k\}$. It is sufficient to show that \[\nabla_{{\mathbf x}_l}W(\bar {\mathbf x}_1,\cdot\cdot\cdot,\bar {\mathbf x}_k)=\mathbf 0.\] Let $r_0$ be a small positive number such that \[r_0<\text{dist}(\bar{\mathbf x}_i,\partial D)\quad \forall\,1\leq i\leq k,\quad r_0<\frac{1}{2}\text{dist}(\bar{\mathbf x}_i,\bar{\mathbf x}_j)\quad \forall\,1\leq i<j\leq k.\] Choose $\phi(\mathbf x)=\rho(\mathbf x)\mathbf b\cdot \mathbf x$ in Definition \ref{wsve}, where $\mathbf b$ is a constant planar vector and $\rho$ satisfies \[\rho\in C_c^\infty(D),\,\,\rho\equiv 1\text{ in } B_{r_0}(\bar {\mathbf x}_l),\,\,\rho\equiv 0\text{ in } B_{r_0}(\bar {\mathbf x}_i)\,\,\forall\,i\neq l.\] Such a $\rho$ can easily be obtained by mollifying a suitable patch function. Then we have \[\int_D\omega_n\nabla^\perp\left(\mathcal G\omega_n+\psi_0\right)\cdot\nabla\phi d\mathbf x =0.\] Denote \[A_n=\int_D\omega_n\nabla^\perp\mathcal G\omega_n\cdot\nabla\phi d\mathbf x,\quad B_n=\int_D\omega_n\nabla^\perp\psi_0\cdot\nabla\phi d\mathbf x.\] Then \begin{equation}\label{ab} A_n+B_n=0,\quad n=1,2,\cdot\cdot\cdot. \end{equation} Below we analyze $A_n$ and $B_n$ separately. For $A_n$, we have \begin{align*} A_n=&\int_D\omega_n(\mathbf x)\nabla^\perp_{\mathbf x}\int_D\left(-\frac{1}{2\pi}\ln|\mathbf x-\mathbf y|-h(\mathbf x,\mathbf y)\right)\omega_n(\mathbf y)d\mathbf y\cdot\nabla\phi d\mathbf x\\ =&-\frac{1}{2\pi}\int_D\omega_n(\mathbf x)\int_D\frac{(\mathbf x-\mathbf y)^\perp}{|\mathbf x-\mathbf y|^2}\omega_n(\mathbf y)d\mathbf y\cdot\nabla\phi d\mathbf x-\int_D\omega_n(\mathbf x)\int_D\nabla^\perp_{\mathbf x}h(\mathbf x,\mathbf y)\omega_n(\mathbf y)d\mathbf y\cdot\nabla\phi d\mathbf x. \end{align*} Here we used Lemma \ref{lem} and the facts that $h\in C^\infty(D\times D)$ and $\omega_n$ has compact support in $D$. 
Since $\omega_n\in L^{4/3}(D)$, by the Hardy--Littlewood--Sobolev inequality (see Theorem 0.3.2 in \cite{SO}) we have \[\int_D\frac{|\omega_n(\mathbf y)|}{|\mathbf x-\mathbf y|}d\mathbf y\in L^4(D).\] Combining this with H\"older's inequality we obtain \[\frac{(\mathbf x-\mathbf y)^\perp}{|\mathbf x-\mathbf y|^2}\cdot\nabla\phi\,\omega_n(\mathbf x)\omega_n(\mathbf y)\in L^1(D\times D),\] so Fubini's theorem (see \cite{ru}, page 164) yields \[\int_D\omega_n(\mathbf x)\int_D\frac{(\mathbf x-\mathbf y)^\perp}{|\mathbf x-\mathbf y|^2}\omega_n(\mathbf y)d\mathbf y\cdot\nabla\phi\, d\mathbf x=\int_D\int_D\frac{(\mathbf x-\mathbf y)^\perp\cdot\nabla\phi}{|\mathbf x-\mathbf y|^2}\omega_n(\mathbf x)\omega_n(\mathbf y) d\mathbf xd\mathbf y.\] Thus we have obtained \begin{align*} A_n=-\frac{1}{2\pi}\int_D\int_D\frac{(\mathbf x-\mathbf y)^\perp\cdot\nabla\phi}{|\mathbf x-\mathbf y|^2}\omega_n(\mathbf x)\omega_n(\mathbf y) d\mathbf xd\mathbf y-\int_D\int_D\nabla^\perp_{\mathbf x}h(\mathbf x,\mathbf y)\cdot\nabla\phi\,\omega_n(\mathbf x) \omega_n(\mathbf y)d\mathbf xd\mathbf y. 
\end{align*} Substituting $\phi(\mathbf x)=\rho(\mathbf x)\mathbf b\cdot\mathbf x$ in $A_n$, for sufficiently large $n$ we have \begin{align*} A_n=&-\frac{1}{2\pi}\int_D\int_{D}\frac{(\mathbf x-\mathbf y)^\perp\cdot\mathbf b}{|\mathbf x-\mathbf y|^2}\omega_{n,l}(\mathbf x)\omega_n(\mathbf y) d\mathbf xd\mathbf y-\int_D\int_D\nabla^\perp_{\mathbf x}h(\mathbf x,\mathbf y)\cdot\mathbf b\,\omega_{n,l}(\mathbf x) \omega_n(\mathbf y)d\mathbf xd\mathbf y\\ =&-\frac{1}{2\pi}\sum_{j=1}^k\int_D\int_D\frac{(\mathbf x-\mathbf y)^\perp \cdot\mathbf b}{|\mathbf x-\mathbf y|^2}\omega_{n,l}(\mathbf x)\omega_{n,j}(\mathbf y)d\mathbf xd\mathbf y -\sum_{j=1}^k\int_D\int_D\nabla_{\mathbf x}^\perp h(\mathbf x,\mathbf y)\cdot\mathbf b\,\omega_{n,l}(\mathbf x)\omega_{n,j}(\mathbf y)d\mathbf xd\mathbf y\\ =&-\frac{1}{2\pi}\int_D\int_D\frac{(\mathbf x-\mathbf y)^\perp \cdot\mathbf b}{|\mathbf x-\mathbf y|^2}\omega_{n,l}(\mathbf x)\omega_{n,l}(\mathbf y)d\mathbf xd\mathbf y -\frac{1}{2\pi}\sum_{j=1,j\neq l}^k\int_D\int_D\frac{(\mathbf x-\mathbf y)^\perp \cdot\mathbf b}{|\mathbf x-\mathbf y|^2}\omega_{n,l}(\mathbf x)\omega_{n,j}(\mathbf y)d\mathbf xd\mathbf y\\ &-\sum_{j=1}^k\int_D\int_D\nabla_{\mathbf x}^\perp h(\mathbf x,\mathbf y)\cdot\mathbf b\,\omega_{n,l}(\mathbf x)\omega_{n,j}(\mathbf y)d\mathbf xd\mathbf y\\ =&-\frac{1}{2\pi}\int_D\int_D\frac{(\mathbf x-\mathbf y)^\perp \cdot\mathbf b}{|\mathbf x-\mathbf y|^2}\omega_{n,l}(\mathbf x)\omega_{n,l}(\mathbf y)d\mathbf xd\mathbf y +\sum_{j=1,j\neq l}^k\int_D\int_D\nabla_{\mathbf x}^\perp G(\mathbf x,\mathbf y)\cdot\mathbf b\,\omega_{n,l}(\mathbf x)\omega_{n,j}(\mathbf y)d\mathbf xd\mathbf y\\ &-\int_D\int_D\nabla_{\mathbf x}^\perp h(\mathbf x,\mathbf y)\cdot\mathbf b\,\omega_{n,l}(\mathbf x)\omega_{n,l}(\mathbf y)d\mathbf xd\mathbf y\\ =:&\,C_n+D_n, \end{align*} where \[C_n=-\frac{1}{2\pi}\int_D\int_D\frac{(\mathbf x-\mathbf y)^\perp \cdot\mathbf b}{|\mathbf x-\mathbf y|^2}\omega_{n,l}(\mathbf x)\omega_{n,l}(\mathbf y)d\mathbf xd\mathbf y,\] 
\[D_n=\left(\sum_{j=1,j\neq l}^k\int_D\int_D\nabla_{\mathbf x}^\perp G(\mathbf x,\mathbf y)\omega_{n,l}(\mathbf x)\omega_{n,j}(\mathbf y)d\mathbf xd\mathbf y -\int_D\int_D\nabla_{\mathbf x}^\perp h(\mathbf x,\mathbf y)\omega_{n,l}(\mathbf x)\omega_{n,l}(\mathbf y)d\mathbf xd\mathbf y\right)\cdot\mathbf b.\] By the anti-symmetry of the integrand in $C_n$ (interchanging $\mathbf x$ and $\mathbf y$ leaves $\omega_{n,l}(\mathbf x)\omega_{n,l}(\mathbf y)$ unchanged but reverses the sign of the kernel), we see that \[C_n=0 \quad \text{for sufficiently large $n$}.\] For $D_n,$ it is clear that \[\lim_{n\to+\infty}D_n=\left(\sum_{j=1,j\neq l}^k \kappa_l\kappa_j\nabla_{\mathbf x}^\perp G(\bar {\mathbf x}_l,\bar {\mathbf x}_j) -\kappa_l^2\nabla_{\mathbf x}^\perp h(\bar {\mathbf x}_l,\bar {\mathbf x}_l)\right)\cdot\mathbf b.\] To conclude, we have obtained \begin{equation}\label{a} \lim_{n\to+\infty}A_n=\left(\sum_{j=1,j\neq l}^k \kappa_l\kappa_j\nabla_{\mathbf x}^\perp G(\bar {\mathbf x}_l,\bar {\mathbf x}_j) -\kappa_l^2\nabla_{\mathbf x}^\perp h(\bar {\mathbf x}_l,\bar {\mathbf x}_l)\right)\cdot\mathbf b. \end{equation} For $B_n,$ it is also clear that \begin{equation}\label{b} \lim_{n\to+\infty}B_n=\kappa_l\nabla^\perp\psi_0(\bar{\mathbf x}_l)\cdot\mathbf b. \end{equation} Combining \eqref{ab}, \eqref{a} and \eqref{b} we immediately get \[\left(\sum_{j=1,j\neq l}^k \kappa_l\kappa_j\nabla_{\mathbf x}^\perp G(\bar {\mathbf x}_l,\bar {\mathbf x}_j) -\kappa_l^2\nabla_{\mathbf x}^\perp h(\bar {\mathbf x}_l,\bar {\mathbf x}_l)+\kappa_l\nabla^\perp\psi_0(\bar{\mathbf x}_l)\right)\cdot \mathbf b= 0.\] 
Since $\mathbf b$ can be any constant vector, we deduce that \[\sum_{j=1,j\neq l}^k \kappa_l\kappa_j\nabla_{\mathbf x}^\perp G(\bar {\mathbf x}_l,\bar {\mathbf x}_j) -\kappa_l^2\nabla_{\mathbf x}^\perp h(\bar {\mathbf x}_l,\bar {\mathbf x}_l)+\kappa_l\nabla^\perp\psi_0(\bar{\mathbf x}_l)= \mathbf 0,\] which is exactly \[\nabla_{{\mathbf x}_l}W(\bar {\mathbf x}_1,\cdots,\bar {\mathbf x}_k)=\mathbf 0.\] \end{proof} \section{Proof of Theorem \ref{none}} In this section we give the proof of Theorem \ref{none}. To begin with, we need an important property of the Kirchhoff--Routh function in a convex domain proved by Grossi and Takahashi. We state only the following simple version of their result, which is enough for our purposes. \begin{theorem}[Grossi--Takahashi, Theorem 3.2, \cite{GT}]\label{gtt} Let $D$ be a smooth convex domain, $k\geq 2$ be a positive integer and $\kappa_1,\cdots,\kappa_k$ be $k$ positive numbers. If $\psi_0\equiv0,$ then the Kirchhoff--Routh function $W$ defined by \eqref{krf} has no critical point in \[\underbrace{D\times\cdots\times D}_{k\text{ times}}\setminus \{(\mathbf x_1,\cdots,\mathbf x_k)\mid \mathbf x_i\in D,\ \mathbf x_i=\mathbf x_j \text{ for some }i\neq j\}.\] \end{theorem} \begin{proof}[Proof of Theorem \ref{none}] Suppose, by contradiction, that there exist a sequence of positive numbers $\{\varepsilon_{n}\}_{n=1}^{+\infty}$ with $\varepsilon_n\to0^+$ as $n\to+\infty$, and a sequence of weak solutions $\{\omega_n\}_{n=1}^{+\infty}$ to the vorticity equation \eqref{ve} such that \begin{itemize} \item[(i)] $\omega_n=\sum_{i=1}^k\omega_{n,i},$ $\omega_{n,i}\in L^{4/3}(D),\ i=1,\cdots,k;$ \item[(ii)] $\mathrm{dist}(\mathrm{supp}\,\omega_{n,i},\mathrm{supp}\,\omega_{n,j})>\delta_0 \,\,\,\forall\,1\leq i<j\leq k$ and $\mathrm{dist}(\mathrm{supp}\,\omega_{n,i},\partial D)>\delta_0\,\,\, \forall\,1\leq i\leq k;$ \item[(iii)] $\mathrm{diam}(\mathrm{supp}\,\omega_{n,i})<\varepsilon_n,\,\,i=1,\cdots,k;$ \item[(iv)] $\int_D\omega_{n,i}d\mathbf x=\kappa_i+f_i(\varepsilon_n),\,\,i=1,\cdots,k.$ \end{itemize} Define \[{\mathbf x}_{n,i}=\left(\int_{D}\omega_{n,i}d\mathbf x\right)^{-1}\int_{D}\mathbf x\,\omega_{n,i}d\mathbf x,\quad i=1,\cdots,k.\] By (ii) and (iii), and since each $\mathbf x_{n,i}$ lies in the convex hull of $\mathrm{supp}\,\omega_{n,i}$, whose diameter is at most $\varepsilon_n$, we see that \[\text{dist}(\mathbf x_{n,i},\mathbf x_{n,j}) \geq\frac{\delta_0}{2} \,\,\, \forall\,1\leq i<j\leq k,\quad \text{dist}(\mathbf x_{n,i},\partial D)\geq\frac{\delta_0}{2}\,\,\, \forall\,1\leq i\leq k\] if $n$ is large enough. Thus we can choose a subsequence $\{\mathbf x_{n_m,i}\}$ such that $\mathbf x_{n_m,i}\to\bar {\mathbf x}_i$ as $m\to+\infty,\ i=1,\cdots,k,$ where $\bar{\mathbf x}_1,\cdots,\bar{\mathbf x}_k$ satisfy \[\text{dist}(\bar{\mathbf x}_{i},\bar {\mathbf x}_{j}) \geq\frac{\delta_0}{2} \,\,\, \forall\,1\leq i<j\leq k,\quad \text{dist}(\bar{\mathbf x}_{i},\partial D)\geq\frac{\delta_0}{2}\,\,\, \forall\,1\leq i\leq k.\] Now we can see that the sequence of solutions $\{\omega_{n_m}\}$ satisfies the assumptions in Theorem \ref{mthm}, and therefore $(\bar{\mathbf x}_1,\cdots,\bar{\mathbf x}_k)$ must be a critical point of $W$ (with $\psi_0\equiv 0$). This contradicts Theorem \ref{gtt}. \end{proof} {\bf Acknowledgements:} {G. Wang was supported by National Natural Science Foundation of China (12001135, 12071098) and China Postdoctoral Science Foundation (2019M661261).} \thispagestyle{empty} \end{document}
\begin{document} \title{Exploiting Low-Rank Structure in Semidefinite Programming by Approximate Operator Splitting} \maketitle \begin{abstract} In contrast to many other classes of convex optimization, state-of-the-art semidefinite programming solvers are still unable to efficiently solve large-scale instances. This work aims to reduce this scalability gap by proposing a novel proximal algorithm for solving general semidefinite programming problems. The proposed methodology, based on the primal-dual hybrid gradient method, allows for the presence of linear inequalities without the need to add extra slack variables, and avoids solving a linear system at each iteration. More importantly, it simultaneously computes the dual variables associated with the linear constraints. The main contribution of this work is a substantial speedup obtained by adjusting the proposed algorithm to exploit the low-rank property inherent to several semidefinite programming problems. This modification is the key element that allows the operator splitting method to scale efficiently to larger instances. Convergence guarantees are presented along with an intuitive interpretation of the algorithm. Additionally, an open-source semidefinite programming solver called \texttt{ProxSDP} is made available and its implementation details are discussed. Case studies are presented in order to evaluate the performance of the proposed methodology. \noindent \textbf{Keywords:} Semidefinite Programming, Operator Splitting Methods, Inexact Fixed Point Iteration, Approximate Proximal Point, Low-Rank Matrix Approximation, Convex Optimization. \end{abstract} \section{Introduction} \subsection{Motivation and contributions} Semidefinite programming (SDP) plays an important role in the field of convex optimization and subsumes several classes of optimization problems such as linear programming (LP), quadratic programming (QP) and second-order cone programming (SOCP). 
As a consequence, the range of problems to which SDP can be applied is wide and constantly expanding. In addition to being a general framework for convex problems, SDP is also a powerful tool for building tight convex relaxations of NP-hard problems. This property has significant practical consequences for the approximation of a wide range of combinatorial optimization problems, and potentially of all constraint satisfaction problems \cite{raghavendra2008optimal}. In practice, if one is interested in solving an SDP problem, it is crucial to have access to fast, reliable and memory-efficient software. Unfortunately, in comparison to other convex optimization classes, the currently available SDP solvers are not as efficient as their counterparts. All these elements suggest that the development of an efficient algorithm and software for solving SDP problems would provide a noteworthy contribution. In this sense, the contributions of this paper are the following. \begin{itemize} \item A first-order proximal algorithm for solving general SDP problems based on the \textit{primal-dual hybrid gradient} (PDHG) \cite{chambolle2011first} is proposed. The main advantage of this methodology, in comparison to other operator splitting techniques, is that it computes the optimal dual variables along with the optimal primal solution. Additionally, the algorithm does not require solving a linear system at every iteration and allows for the presence of linear inequalities without the need to introduce additional variables into the problem. \item Inspired by the approximate proximal point algorithm, a modified version of the PDHG that can exploit the low-rank property of SDP is proposed. For several problems of interest, this modification makes PDHG competitive with interior-point methods, in some cases providing a speed improvement of an order of magnitude. 
For problems with low-rank structure, the proposed algorithm is able to solve, in less than ten minutes, instances with dimensions that were previously unattainable for interior-point methods, up to a $5{,}000 \times 5{,}000$ semidefinite matrix. \item An open-source SDP solver, called \texttt{ProxSDP}, is made publicly available. The goal of developing and providing this software is both to make the results of this paper reproducible and to foster the use of semidefinite programming in different fields. \end{itemize} The remainder of this section covers some historical background on semidefinite programming and the current solution methodologies, and introduces some notation. Section 2 introduces the PDHG algorithm in the context of semidefinite programming. Section 3 shows how to modify the PDHG method in order to exploit the low-rank structure of the problem. In Section 4, three case studies from different domains are considered in order to validate the proposed methodology. \subsection{Notation} In this work, we make use of the following notation. The symbol $X \succeq 0$ means that the matrix $X$ lies in the positive semidefinite cone, i.e. $y^T X y \geq 0 \hspace{0.2cm} \forall \hspace{0.1cm} y \in \mathbb{R}^n$. The symbol $\mathbb{S}^n$ represents the set of all $n \times n$ symmetric matrices and $\mathbb{S}_+^n$ is the set of all $n \times n$ symmetric positive semidefinite matrices. The symbol $\left\Vert \cdot \right\Vert_2$ denotes both the Euclidean norm for vectors and the \textit{spectral} norm for matrices. Additionally, $\left\Vert \cdot \right\Vert_F$ represents the Frobenius norm for matrices. Given a function $f : \mathbb{R}^n \mapsto \mathbb{R} \cup \{\infty\}$, the associated subdifferential operator is defined as follows: \begin{equation*} \begin{aligned} & \partial f = \{(x, g) : x \in \mathbb{R}^n, \hspace{0.1cm} f(y) \geq f(x) + g^T (y-x) \hspace{0.1cm} \forall \hspace{0.1cm} y \in \textbf{dom} \hspace{0.05cm} f \}. 
\end{aligned} \end{equation*} The subdifferential operator evaluated at a point $x \in \mathbb{R}^n$ gives a set denoted by $\partial f (x)$, which is called the subdifferential of $f$ at $x$ and is given by \begin{equation*} \begin{aligned} & \partial f(x) = \{g : g^T (y-x) \leq f(y) - f(x) \hspace{0.1cm} \forall \hspace{0.1cm} y \in \textbf{dom} \hspace{0.05cm} f\}. \end{aligned} \end{equation*} A subgradient of $f$ at $x$ is any point in the subdifferential of $f$ at $x$, i.e. $g \in \partial f(x)$. The inverse of the subdifferential operator, denoted by $(\partial f)^{-1}$, is defined as follows: \begin{equation*} \begin{aligned} & (\partial f)^{-1} = \{(g, x) : (x, g) \in \partial f \}. \end{aligned} \end{equation*} \subsection{Semidefinite programming formulation} In this work we consider a formulation of semidefinite programming in which the inequalities are explicitly stated, thereby avoiding the use of slack variables. This formulation, referred to as the \textit{general} SDP form, is defined as follows: \begin{equation} \begin{aligned} & \underset{X \in \mathbb{S}^n}{\text{minimize}} & & \textbf{tr}(CX)\\ & \text{subject to} & & \mathcal{A}(X) = b,\\ &&& \mathcal{G}(X) \leq h,\\ &&& X \succeq 0, \end{aligned}\label{eq_sdp_general} \end{equation} where the operators $\mathcal{A}: \mathbb{S}^n_+ \rightarrow \mathbb{R}^m$ and $\mathcal{G}: \mathbb{S}^n_+ \rightarrow \mathbb{R}^p$ are given by \begin{equation*} \begin{aligned} & \mathcal{A}(X) = \begin{bmatrix} \textbf{tr}(A_1 X) \\ \textbf{tr}(A_2 X) \\ \vdots \\ \textbf{tr}(A_m X) \end{bmatrix}, \hspace{0.3cm} \mathcal{G}(X) = \begin{bmatrix} \textbf{tr}(G_1 X) \\ \textbf{tr}(G_2 X) \\ \vdots \\ \textbf{tr}(G_p X) \end{bmatrix} \end{aligned} \end{equation*} and the problem data are the symmetric matrices $A_1, \dots, A_m, G_1, \dots, G_p, C \in \mathbb{S}^n$ and the vectors $b \in \mathbb{R}^m$ and $h \in \mathbb{R}^p$. 
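As a small numerical illustration of these linear maps (with arbitrary data, not tied to any instance in the paper), the operator $\mathcal{A}$ and the adjoint map $y \mapsto \sum_{i=1}^m y_i A_i$ satisfy $\langle \mathcal{A}(X), y\rangle = \langle X, \sum_i y_i A_i\rangle_F$, which can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3

# Arbitrary symmetric data matrices A_1, ..., A_m and a symmetric X (illustrative only)
def sym(M):
    return (M + M.T) / 2

As = [sym(rng.standard_normal((n, n))) for _ in range(m)]
X = sym(rng.standard_normal((n, n)))
y = rng.standard_normal(m)

def A_op(X):
    """A(X) = (tr(A_1 X), ..., tr(A_m X))."""
    return np.array([np.trace(Ai @ X) for Ai in As])

def A_adj(y):
    """Adjoint map: A*(y) = sum_i y_i A_i."""
    return sum(yi * Ai for yi, Ai in zip(y, As))

# Adjoint identity w.r.t. the Frobenius inner product: <A(X), y> = <X, A*(y)>_F
assert abs(A_op(X) @ y - np.trace(X @ A_adj(y))) < 1e-9
```

The same construction applies verbatim to $\mathcal{G}$ with the matrices $G_1, \dots, G_p$.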
In this semidefinite programming formulation, one wants to minimize a linear function subject to a set of $m$ linear equality constraints and $p$ linear inequality constraints, where the decision variable is an $n \times n$ symmetric matrix constrained to lie in the positive semidefinite (p.s.d.) cone. The dual problem takes the form \begin{equation*} \begin{aligned} & \underset{y \in \mathbb{R}^{m + p}}{\text{maximize}} & & [b^T \hspace{0.1cm} h^T] \hspace{0.1cm} y\\ & \text{subject to} & & \mathcal{A}^*(y) + \mathcal{G}^*(y) \preceq C,\\ &&& y_j \leq 0, \hspace{0.2cm} \forall \hspace{0.1cm} j = m+1, \dots, m+p, \end{aligned} \end{equation*} where the conjugate operators $\mathcal{A}^*: \mathbb{R}^{m+p} \rightarrow \mathbb{S}^n$ and $\mathcal{G}^*: \mathbb{R}^{m+p} \rightarrow \mathbb{S}^n$ are given by $\mathcal{A}^*(y) = \sum_{i=1}^{m} y_{i} A_i$ and $\mathcal{G}^*(y) =\sum_{j=1}^{p} y_{j+m} G_j$, respectively. Despite being very similar to linear programming, strong duality does not always hold for SDP. For a comprehensive analysis of the duality of semidefinite programming the reader should refer to the work of Vandenberghe and Boyd \cite{vandenberghe1996semidefinite}. \subsection{Applications} In addition to the theoretical motivation, several problems of practical interest lie precisely in the SDP class. As more applications are found, an efficient method for solving large-scale SDP problems becomes increasingly necessary. This section briefly covers some representative applications that have been successfully solved by SDP. For a more complete list of applications and SDP problems, the reader can refer to \cite{wolkowicz2012handbook, vandenberghe1999applications}. Traditionally, semidefinite programming has been widely used in control theory. Classic applications such as the stability of dynamic systems and stochastic control problems \cite{lur1957some} have motivated the development of SDP over decades. 
Modern applications such as motion planning for humanoid robots \cite{kuindersma2016optimization} use SDP as a core element for control. In several cases, SDP problems in control can be formulated as \textit{linear matrix inequalities} (LMIs) \cite{bellman1963systems}. In this sense, for some particular applications in control there is no need to use general SDP algorithms, since some LMI problems have closed-form solutions, although adding a little complexity to the problem might render the closed-form solutions useless. As a consequence, interior-point methods specifically developed to solve LMI problems were proposed \cite{vandenberghe2005interior}. An extensive literature on LMIs can be found in \cite{boyd1994linear, scherer2000linear}. Subsequently, several fields of study have found SDP to be a powerful tool for solving complex problems. For instance, the use of SDP in power systems allowed for deriving tight bounds and solutions for more realistic optimal power flow models with alternating current networks \cite{lavaei2012zero}. In chip design, transistor sizing was also optimized with the use of SDP \cite{vandenberghe1998optimizing, vandenberghe1997optimal}. In the field of structural truss layout, the use of SDP was popularized by the seminal work of Ben-Tal and Nemirovski \cite{ben1997robust}. This latter work also presented the first ideas that subsequently led to the expanding field of optimization under uncertainty \cite{ben1998robust}, where SDP is also used to approximate chance constraints \cite{zymler2013distributionally}. A remarkable property of SDP is the ability to build tight convex relaxations of NP-hard problems. This technique, also known as semidefinite relaxation (SDR), has been a powerful tool bridging convex and combinatorial optimization. In the early nineties, Lov\'asz and Schrijver developed an SDR for optimization problems with boolean variables \cite{lovasz1991cones}. 
Soon after, SDR gained momentum with the celebrated Goemans--Williamson randomized rounding method for the max-cut problem \cite{goemans1995improved}. Eventually, similar SDR approaches were proposed for other combinatorial problems, such as max-3-sat \cite{karloff19977} and the traveling salesman problem \cite{cvetkovic1999semidefinite}. Recently, Cand\`es et al. proposed an SDR approach to the phase retrieval problem \cite{candes2015phase}. Such relaxations generally square the original number of decision variables through a technique called \textit{lifting} \cite{lovasz2003semidefinite}. From the algorithmic perspective, this can quickly make moderate-size instances computationally challenging. In the last decade, applications in machine learning have challenged traditional SDP methods with remarkably large-scale instances \cite{de2007deploying}. The problem size $n$ is usually associated with the sample size, which can easily reach millions. One of the most celebrated applications is the \textit{matrix completion} problem \cite{candes2012exact}, which became very popular with the \textit{Netflix prize} \cite{bennett2007netflix}. In graphical models, the problem of covariance selection, which is a powerful tool for modeling dependence between random variables, can also be formulated as an SDP problem \cite{wainwright2006log}. In statistical learning, finding the best model that combines different positive definite kernels, which is also known as kernel learning, can be achieved by solving an SDP problem \cite{lanckriet2004learning, lanckriet2004statistical}. For \textit{constraint satisfaction problems} (CSP), problems where one tries to satisfy as many constraints as possible, semidefinite programming also plays an essential role. In the work of Raghavendra \cite{raghavendra2008optimal}, it was shown that if the \textit{Unique Games} conjecture \cite{khot2002power} is true, then semidefinite programming achieves the best approximation for every CSP. 
Even though there is no consensus on the legitimacy of the Unique Games conjecture, the recent work by Khot et al. \cite{khot2018pseudorandom, dinur2018towards} provides new results suggesting the veracity of the conjecture. Those recent developments bring semidefinite programming back into the spotlight and imply that this class of convex optimization problems does have singular properties worth exploiting. \subsection{SDP solution methods} In the early days, general SDP problems were solved by the ellipsoid method \cite{shor1977cut, iudin1977informational} and subsequently by bundle methods \cite{hiriart2013convex}. After the advent of the first polynomial-time interior-point algorithm for linear programming by Karmarkar \cite{karmarkar1984new, adler1989implementation}, Nesterov and Nemirovski extended interior-point methods to other classes of convex optimization problems \cite{nesterov1994interior}. Shortly afterwards, a range of interior-point methods to solve SDPs were proposed \cite{alizadeh1992optimization, helmberg1996interior, alizadeh1998primal}. The solvers CSDP \cite{borchers1999csdp} and MOSEK \cite{mosek2010mosek} use state-of-the-art interior-point methods for solving general SDP problems. Up to medium-size problems, this class of methods is preferable due to its fast convergence and high precision. However, as is inherent to all second-order methods, the use of interior-point algorithms may be prohibitive for solving large-scale instances. The main bottleneck is the cost of computing and storing the Hessian at each iteration. More recently, first-order methods have been widely used for applications in machine learning and signal processing. Even though first-order methods generally have slower convergence rates, their cost per iteration is usually small and they require less memory allocation \cite{boyd2011distributed}. 
These characteristics make first-order methods very appealing for large-scale problems and, consequently, they have been an intense area of research in several fields, such as image processing \cite{heide2016proximal}. A great example of the use of first-order methods is the conic solver SCS developed by O'Donoghue et al. \cite{o2016conic}, which can efficiently solve general conic optimization problems to modest accuracy. More recently, the use of the \textit{alternating direction method of multipliers} (ADMM) for specifically solving SDP has been proposed in \cite{madani2015admm}. For most SDP algorithms, exploiting sparsity patterns in the decision variables is not as straightforward as it is for other classes of convex optimization problems. In this sense, a major recent contribution has been made by showing that sparsity can be exploited by means of chordal decomposition techniques \cite{fukuda2001exploiting, nakata2003exploiting, vandenberghe2015chordal, pakazad2018distributed}. This approach has enabled parallel implementations that can solve larger instances with the use of supercomputers \cite{fujisawa2012high}. In a series of works, Zhang and Lavaei presented SDP algorithms that can properly take advantage of the problem's sparsity \cite{zhang2017sparse, zhang2017modified}. \section{A primal-dual operator splitting for SDP}\label{sec_fixed_point} In this section, the proposed operator splitting method is built from the first-order optimality conditions of the SDP in its general form (\ref{eq_sdp_general}). The strategy adopted to derive the algorithm translates the problem of finding a solution that satisfies the optimality conditions into the problem of finding a fixed point of a related monotone operator. This approach has been previously adopted with the purpose of designing new algorithms and developing alternative proofs for existing ones \cite{ryu2016primer}. 
Further information on the use of monotone operators in the context of convex optimization can be found in \cite{eckstein1989splitting, combettes2004solving, combettes2005signal, combettes2011proximal}. Consider the general SDP form (\ref{eq_sdp_general}) where the problem constraints are encoded by indicator functions: \begin{equation} \begin{aligned} & \underset{X \in \mathbb{S}^n}{\text{minimize}} & & \textbf{tr}(C X) + I_{\mathbb{S}^n_+}(X) + I_{{=b \atop \leq h}}(\mathcal{M}(X)), \hspace{0.3cm} \mathcal{M} = \begin{bmatrix} \mathcal{A} \\ \mathcal{G}\end{bmatrix}, \end{aligned}\label{eq_sdp} \end{equation} where the indicator function \begin{equation*} \begin{aligned} &I_{\mathbb{S}^n_+}(X) = \left\{ \begin{array}{ll} 0, \hspace{0.3cm} \text{if} \hspace{0.1cm} X \succeq 0, \\ \infty, \hspace{0.15cm} \text{otherwise,} \end{array} \right. \end{aligned} \end{equation*} encodes the positive semidefinite cone constraint and \begin{equation*} \begin{aligned} & I_{{=b \atop \leq h}}(u) = I_{=b}(u_1) + I_{\leq h}(u_2), \hspace{0.5cm} u = [u_1 \hspace{0.2cm} u_2]^T, \end{aligned} \end{equation*} encodes the right-hand sides of the linear constraints for any $u \in \mathbb{R}^{m + p}$, with \begin{equation*} \begin{aligned} & I_{=b}(u_1) = \left\{ \begin{array}{ll} 0, \hspace{0.3cm} \text{if} \hspace{0.1cm} u_1 = b, \\ \infty, \hspace{0.15cm} \text{otherwise,} \end{array} \right. \hspace{1cm} I_{\leq h}(u_2) = \left\{ \begin{array}{ll} 0, \hspace{0.3cm} \text{if} \hspace{0.1cm} u_2 \leq h, \\ \infty, \hspace{0.15cm} \text{otherwise,} \end{array} \right. \end{aligned} \end{equation*} for any $u_1 \in \mathbb{R}^{m}$ and $u_2 \in \mathbb{R}^{p}$. 
\subsection{Optimality condition} The first-order optimality condition for the optimization problem (\ref{eq_sdp}) can be expressed as follows: \begin{equation} \begin{aligned} & 0 \in \partial \hspace{0.1cm} \textbf{tr}(C X) + \partial \hspace{0.1cm} I_{\mathbb{S}^n_+}(X) + \mathcal{M}^* (\partial \hspace{0.1cm} I_{{=b \atop \leq h}}(\mathcal{M}(X))). \end{aligned}\label{eq_opt_system} \end{equation} By introducing an auxiliary variable $y \in \mathbb{R}^{m + p}$, the optimality condition can be recast as the following system of inclusions: \begin{equation*} \begin{aligned} & 0 \in \partial \hspace{0.1cm} \textbf{tr}(C X) + \partial \hspace{0.1cm} I_{\mathbb{S}^n_+}(X) + \mathcal{M}^*(y),\\ & y \in \partial \hspace{0.1cm} I_{{=b \atop \leq h}} (\mathcal{M}(X)). \end{aligned} \end{equation*} By definition, the auxiliary variable $y$ represents the dual variable associated with the problem constraints. This is easily verified since $y \in \partial \hspace{0.1cm} I_{{=b \atop \leq h}}(\mathcal{M}(X))$, i.e. $y$ is a subgradient of $I_{{=b \atop \leq h}}$ at $\mathcal{M}(X)$. Since problem (\ref{eq_sdp}) is convex, finding a pair $(X^*, y^*)$ satisfying (\ref{eq_opt_system}) is equivalent to finding an optimal primal-dual pair for (\ref{eq_sdp}) as long as strong duality holds \cite{bauschke2011convex}. Using the fact that $(\partial f)^{-1} = \partial f^*$ for a closed, proper and convex $f$ \cite{rockafellar2015convex}, one can manipulate the second inclusion as follows: \begin{equation*} \begin{aligned} & y \in \partial \hspace{0.1cm} I_{{=b \atop \leq h}} (\mathcal{M}(X)) \iff \partial \hspace{0.1cm} I^*_{{=b \atop \leq h}} (y) \ni \mathcal{M}(X),\\ & \hspace{2.85cm} \iff 0 \in \partial \hspace{0.1cm} I^*_{{=b \atop \leq h}} (y) - \mathcal{M}(X). 
\end{aligned} \end{equation*} Using this new expression, the system (\ref{eq_opt_system}) can be recast as $0 \in F(X, y)$, where $F$ is given by the following monotone operator: \begin{equation} \begin{aligned} F(X, y) = \big( \partial \hspace{0.1cm} \textbf{tr}(C X) + \partial \hspace{0.1cm} I_{\mathbb{S}^n_+}(X) + \mathcal{M}^*(y) \hspace{0.05cm}, \hspace{0.1cm} \partial \hspace{0.1cm} I_{{=b \atop \leq h}}^*(y) - \mathcal{M}(X) \big). \end{aligned}\label{eq_F} \end{equation} One can verify that $F$ is a monotone operator by noticing that $F$ is the sum of a subdifferential operator and a monotone affine operator \cite{bauschke2017convex}. This formulation of the inclusion (\ref{eq_opt_system}) implies that finding a zero of the underlying monotone operator $F$ is equivalent to finding an optimal primal-dual pair for the semidefinite programming problem (\ref{eq_sdp}). In the remainder of this section, a method for finding a zero of the operator $F$ will be established. \subsection{Fixed point iteration} Finding a zero of the monotone operator $F$ can be translated into finding a fixed point for the system $P(X, y) \in \alpha F(X, y) + P(X, y)$, where $P$ is a positive definite operator \cite{bauschke2011convex}. This formulation induces the following fixed point iteration: \begin{equation} \begin{aligned} & \big(X^{k}, y^{k}\big) \leftarrow \big(P + \alpha F\big)^{-1} P(X^{k-1}, y^{k-1}). \end{aligned}\label{eq_fixed_point} \end{equation} This iterative process is called the \textit{generalized proximal point method} and it is guaranteed to converge to a zero of $F$ if the operator $P$ is positive definite and a fixed point for (\ref{eq_fixed_point}) exists \cite{ryu2016primer}. 
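The reformulation behind this fixed point scheme rests on an elementary equivalence: for a positive definite $P$ and any $\alpha>0$, \[\mathbf 0 \in F(X, y) \iff P(X, y) \in \alpha F(X, y) + P(X, y) \iff (X, y) = \big(P + \alpha F\big)^{-1} P(X, y),\] so the zeros of $F$ coincide exactly with the fixed points of the map $(P + \alpha F)^{-1} P$.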
By choosing $P$ as \begin{equation} \begin{aligned} & P = \begin{bmatrix} I & - \alpha \mathcal{M}^* \\[0.3cm] - \alpha \mathcal{M} & I \end{bmatrix}, \end{aligned}\label{eq_P} \end{equation} the fixed point inclusions can be expressed as \begin{equation*} \begin{aligned} & \left(X^{k-1} - \alpha \mathcal{M}^*(y^{k-1}), \hspace{0.1cm} y^{k-1} - \alpha \mathcal{M}(X^{k-1})\right) \in \alpha F(X^k, y^k) \hspace{-0.05cm} + \left(X^k - \alpha \mathcal{M}^*(y^k), \hspace{0.1cm} y^k - \alpha \mathcal{M}(X^k)\right), \end{aligned} \end{equation*} where further manipulation leads to the system \begin{equation*} \begin{aligned} & \hspace{-0.1cm} \big( X^{k-1} - \alpha \mathcal{M}^*(y^{k-1}), \hspace{0.1cm} y^{k-1} + \alpha \mathcal{M}(2X^{k} - X^{k-1}) \big) \in \alpha \big(\partial \hspace{0.1cm} \textbf{tr}(C X^{k}) + \partial \hspace{0.1cm} I_{\mathbb{S}^n_+}(X^{k}), \hspace{0.1cm}\partial \hspace{0.1cm} I^*_{{=b \atop \leq h}} (y^{k}) \big) \hspace{-0.05cm} + \big(X^k, \hspace{0.1cm} y^k \big), \end{aligned} \end{equation*} which induces the following fixed point iteration: \begin{equation*} \begin{aligned} & X^{k} \leftarrow \big(I + \alpha \partial \hspace{0.1cm} (\textbf{tr}(C \cdot) + I_{\mathbb{S}^n_+})\big)^{-1} \big(X^{k-1} - \alpha \mathcal{M}^{*}(y^{k -1}) \big) \\[0.1cm] & y^{k} \leftarrow \big(I + \alpha \partial \hspace{0.1cm} I^*_{{=b \atop \leq h}}\big)^{-1} \big(y^{k-1} + \alpha \mathcal{M}(2X^{k} - X^{k-1}) \big). \end{aligned}\label{eq_cp_fixed_point} \end{equation*} This scheme is a particular case of the primal-dual hybrid gradient (PDHG) proposed by Chambolle and Pock \cite{chambolle2011first, pock2009algorithm}, which has been successfully applied to a wide range of image processing problems, such as image denoising and deconvolution \cite{sidky2012convex, vaiter2013robust, heide2016proximal}. Convergence is guaranteed as long as a solution to (\ref{eq_sdp}) exists and $0 < \alpha < 1 / \left\Vert \mathcal{M} \right\Vert_2$. 
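To make the two resolvent steps concrete, the following self-contained sketch runs them on a hypothetical toy instance with a single equality constraint $\textbf{tr}(X)=1$ (the data and step size are arbitrary illustrative choices, not the \texttt{ProxSDP} implementation). Here the primal resolvent reduces to a p.s.d. projection of $X^{k-1}-\alpha(C+\mathcal{M}^*(y^{k-1}))$, and the dual resolvent, for equality constraints only, to subtracting $\alpha b$:

```python
import numpy as np

def proj_psd(V):
    # Euclidean projection onto the p.s.d. cone: clip negative eigenvalues
    w, Q = np.linalg.eigh((V + V.T) / 2)
    return (Q * np.clip(w, 0.0, None)) @ Q.T

# Toy instance (illustrative): minimize tr(CX) s.t. tr(X) = 1, X >= 0.
# The optimal value is the smallest eigenvalue of C, here 1.
C = np.diag([1.0, 2.0])
A = [np.eye(2)]                 # single equality constraint tr(X) = b
b = np.array([1.0])

def M(X):                       # M(X)
    return np.array([np.trace(Ai @ X) for Ai in A])

def Mt(y):                      # M*(y)
    return sum(yi * Ai for yi, Ai in zip(y, A))

alpha = 0.5                     # step size; here ||M||_2 = sqrt(2), so alpha < 1/sqrt(2)
X, y = np.zeros((2, 2)), np.zeros(1)
for _ in range(500):
    X_new = proj_psd(X - alpha * (C + Mt(y)))   # primal resolvent step
    y = y + alpha * (M(2 * X_new - X) - b)      # dual resolvent step (equalities only)
    X = X_new

assert abs(np.trace(C @ X) - 1.0) < 1e-6        # converged to the optimal value
```

On this tiny problem the iterates settle on $X \approx e_1 e_1^T$ with dual variable $y \approx -1$, matching the first-order optimality conditions.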
The progress of the algorithm can be measured by the primal, dual and combined residuals as \begin{equation*} \begin{aligned} & \epsilon_{\text{primal}}^k = \left\Vert \tfrac{1}{\alpha}(X^{k} - X^{k - 1}) - \mathcal{M}^*(y^k - y^{k - 1}) \right\Vert_F,\\ & \epsilon_{\text{dual}}^k = \left\Vert \tfrac{1}{\alpha} (y^{k} - y^{k - 1}) - \mathcal{M}(X^k - X^{k - 1}) \right\Vert_2,\\ & \epsilon_{\text{comb}}^k = \epsilon_{\text{primal}}^k + \epsilon_{\text{dual}}^k. \end{aligned}\label{eq_comb_residual} \end{equation*} \subsection{Resolvents and proximal operators in SDP}\label{subsec_prox} To employ the fixed point iteration (\ref{eq_fixed_point}) one needs to compute both resolvent operators $(I + \alpha \partial \hspace{0.1cm} (\textbf{tr}(C \cdot) + I_{\mathbb{S}^n_+}))^{-1}$ and $(I + \alpha \partial \hspace{0.1cm} I^*_{{=b \atop \leq h}})^{-1}$. For any convex function $f:\mathbb{R}^{m \times n} \rightarrow \mathbb{R} \cup \{+ \infty\}$ and $\alpha > 0$, the resolvent operator associated with the subdifferential operator $\partial f$ is given by \begin{equation*} \begin{aligned} & z = (I + \alpha \partial f)^{-1} (v) \iff 0 \in \partial f (z) + \tfrac{1}{\alpha} (z-v) \iff z = \underset{x}{\text{argmin}}\Big\{f(x) + \tfrac{1}{2 \alpha}\left\Vert x - v \right\Vert_F^2\Big\}. \end{aligned} \end{equation*} Therefore, for a general convex function $f$, the resolvent of $\partial f$ can be expressed as the solution of an associated convex optimization problem. This mapping is usually referred to as the \textit{proximal operator} of $f$. The constant $\alpha > 0$ is a parameter that controls the trade-off between moving towards the minimizer of $f$ and staying close to $v$. Methods based on proximal operators, such as the proximal gradient descent \cite{combettes2011proximal}, are particularly interesting when the proximal operator has a known closed form. 
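For instance, for $f(x) = |x|$ the proximal operator has the well-known soft-thresholding closed form; the sketch below (assuming numpy) checks it against the argmin characterization by brute force on a fine grid.

```python
import numpy as np

alpha, v = 0.7, 2.5
# Closed-form proximal operator of f(x) = |x|: soft-thresholding.
prox = np.sign(v) * max(abs(v) - alpha, 0.0)

# Check against the argmin characterization on a fine grid.
xs = np.linspace(-5.0, 5.0, 200001)
obj = np.abs(xs) + (xs - v) ** 2 / (2 * alpha)
grid_min = xs[np.argmin(obj)]

print(abs(prox - grid_min) < 1e-4)  # True: both sit at v - alpha = 1.8
```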
When the objective function is not differentiable, proximal algorithms, such as ISTA for linear inverse problems \cite{beck2009fast}, are generally a good alternative to subgradient-based methods. For a deeper review of proximal algorithms and more details on the resolvent calculus employed in the forthcoming sections, the reader should refer to \cite{parikh2014proximal}. In the following, the proximal operators associated with (\ref{eq_fixed_point}) are analyzed in more detail. \subsubsection{Box constraints} The resolvent associated with $\partial I_{{=b \atop \leq h}}$ is simply given by the projection onto the box constraints \begin{equation*} \begin{aligned} & \big(I + \alpha \partial I_{{=b \atop \leq h}}\big)^{-1} (u) = \textbf{proj}_{{=b \atop \leq h}}(u) = \begin{bmatrix} \textbf{proj}_{=b}(u_1) \\[0.3cm] \textbf{proj}_{\leq h}(u_2) \end{bmatrix} = \begin{bmatrix} b \\[0.3cm] \min{\{u_2,h\}} \end{bmatrix}, \end{aligned} \end{equation*} where $u_1 \in \mathbb{R}^p$, $u_2 \in \mathbb{R}^m$ and $\min{\{u_2,h\}}$ is the point-wise minimum. Additionally, for a convex function $f$, the \textit{extended Moreau decomposition} \cite{moreau1962decomposition} gives the identity \begin{equation*} \begin{aligned} & u = \big(I + \alpha \partial f^*\big)^{-1} (u) + \alpha \hspace{0.05cm} \big(I + \partial f / \alpha \big)^{-1} (u / \alpha). \end{aligned} \end{equation*} Therefore one concludes that \begin{equation} \begin{aligned} & \big(I + \alpha \partial I_{{=b \atop \leq h}}^*\big)^{-1} (u) = u - \alpha \hspace{0.05cm} \textbf{proj}_{{=b \atop \leq h}}(u / \alpha). \end{aligned}\label{eq_resolvent_1} \end{equation} \subsubsection{Positive semidefinite cone} Similarly, the resolvent associated with the positive semidefinite constraint is given by the Euclidean projection onto the positive semidefinite cone. 
Let $S \in \mathbb{S}^n$; the projection onto the set $\{X : X \succeq 0 \}$ has the closed form \begin{equation*} \begin{aligned} & \big(I + \alpha \partial I_{\mathbb{S}_+^n}\big)^{-1} (S) = \textbf{proj}_{\mathbb{S}_+^n}(S) = \sum_{i=1}^n \text{max}\{0, \lambda_i\} u_i u_i^{T}, \end{aligned} \end{equation*} where $S = \sum_{i=1}^n \lambda_i u_i u_i^{T}$ is the eigenvalue decomposition of the symmetric matrix $S$. \subsubsection{Trace} Given the symmetric matrices $C$ and $S$, the resolvent associated with the trace function is given by the formula \begin{equation*} \begin{aligned} & \big(I + \alpha \partial \textbf{tr}(C \cdot)\big)^{-1} (S) = S - \alpha C. \end{aligned} \end{equation*} Unlike most cases, the resolvent associated with the sum of the trace function and any convex function $g$ splits as the left composition \begin{equation*} \begin{aligned} & \big(I + \alpha \partial (g + \textbf{tr}(C \cdot)\big)^{-1} (S) = \big(I + \alpha \partial g \big)^{-1} \circ \big( I + \alpha \partial \textbf{tr}(C \cdot)\big)^{-1} (S) = \big(I + \alpha \partial g \big)^{-1} \big(S - \alpha C \big). \end{aligned} \end{equation*} Consequently, \begin{equation} \begin{aligned} & \big(I + \alpha \partial \hspace{0.1cm} (\textbf{tr}(C \cdot) + I_{\mathbb{S}^n_+})\big)^{-1} (S) = \textbf{proj}_{\mathbb{S}_+^n} (S - \alpha C). \end{aligned}\label{eq_resolvent_2} \end{equation} \subsection{PD-SDP} Algorithm 1, referred to as \texttt{PD-SDP} or \texttt{P}rimal-\texttt{D}ual \texttt{S}emi\texttt{D}efinite \texttt{P}rogramming, combines the fixed point iteration (\ref{eq_fixed_point}) with the resolvents in their closed forms (\ref{eq_resolvent_1}, \ref{eq_resolvent_2}). In this particular setting, the primal-dual method turns out to be a very simple routine. As illustrated in Algorithm 1, the method avoids explicitly solving a linear system or a convex optimization problem at each iteration. 
One only needs a subroutine to evaluate the resolvents (\ref{eq_resolvent_1}, \ref{eq_resolvent_2}) and access to the abstract linear operator $\mathcal{M}$ and its adjoint. \def\smath#1{\text{\scalebox{.9}{$#1$}}} \begin{algorithm}[H] \caption{\texttt{PD-SDP}} \label{Algorithm} \begin{algorithmic} \STATE{\textbf{Given:} $\mathcal{M}$, $b \in \mathbb{R}^p$, $h \in \mathbb{R}^m$ and $C \in \mathbb{S}^n$. } \WHILE{$\epsilon_{\text{comb}}^k > \epsilon_{\text{tol}}$ } \STATE{$X^{k + \smath{1}} \hspace{0.1cm} \leftarrow \textbf{proj}_{\mathbb{S}_+^n} (X^k - \alpha ( \mathcal{M}^* (y^k) + C))$ }\COMMENT{Primal step} \STATE{$y^{k + \smath{1/2}} \hspace{-0.05cm} \leftarrow y^{k} + \alpha \mathcal{M}(2 X^{k + \smath{1}} - X^k)$ }\COMMENT{Dual step part 1} \STATE{$y^{k + \smath{1}} \hspace{0.2cm} \leftarrow y^{k + \smath{1/2}} - \alpha \hspace{0.05cm} \textbf{proj}_{{=b \atop \leq h}}(y^{k + \smath{1/2}} / \alpha)$}\COMMENT{Dual step part 2 } \ENDWHILE{ } \RETURN{$\big(X^{k + 1}, y^{k + 1}\big)$} \end{algorithmic} \end{algorithm} In \cite{chambolle2011first}, it was shown that PDHG achieves an $\mathcal{O}(1/k)$ convergence rate for non-smooth problems, where $k$ is the number of iterations. Similar convergence rates can be achieved by other operator splitting methods such as Tseng's ADM \cite{tseng1991applications} or ADMM \cite{glowinski1975approximation, eckstein1992douglas}. However, \texttt{PD-SDP} has the advantage of offering the optimal dual variable as a by-product of the algorithm. The computational complexity of each loop is dominated by the projection onto the positive semidefinite cone. In the most naive implementation, each iteration costs $\mathcal{O}(n^3)$ operations, where $n$ is the dimension of the p.s.d. matrix. If the number of positive eigenvalues $r$ were known at each iteration, the computational cost could be reduced to $\mathcal{O}(n^2 r)$. 
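To make the routine concrete, the sketch below (assuming numpy) instantiates Algorithm 1 for the special case $\mathcal{M}(X) = \textbf{diag}(X)$ with only equality constraints $\textbf{diag}(X) = b$, so that $\textbf{proj}_{=b}$ is the constant map onto $b$ and $\left\Vert \mathcal{M} \right\Vert_2 = 1$. This is an illustrative toy, not a production implementation.

```python
import numpy as np

def proj_psd(S):
    # Euclidean projection onto the p.s.d. cone via eigendecomposition.
    lam, U = np.linalg.eigh(S)
    return (U * np.maximum(lam, 0.0)) @ U.T

def pd_sdp(C, b, alpha=0.9, tol=1e-8, max_iter=5000):
    # Algorithm 1 for: minimize tr(CX) s.t. diag(X) = b, X >= 0,
    # where M(X) = diag(X), M*(y) = Diag(y), and ||M||_2 = 1.
    n = len(b)
    X, y = np.zeros((n, n)), np.zeros(n)
    for _ in range(max_iter):
        X_new = proj_psd(X - alpha * (np.diag(y) + C))   # primal step
        y_half = y + alpha * np.diag(2.0 * X_new - X)    # dual step, part 1
        y_new = y_half - alpha * b                       # dual step, part 2: proj_{=b} = b
        r_p = np.linalg.norm((X_new - X) / alpha - np.diag(y_new - y))
        r_d = np.linalg.norm((y_new - y) / alpha - np.diag(X_new - X))
        X, y = X_new, y_new
        if r_p + r_d < tol:                              # combined residual
            break
    return X, y

# Toy instance: the minimizer is the rank-one matrix of all ones, value 0.
C = np.array([[1.0, -1.0], [-1.0, 1.0]])
X, y = pd_sdp(C, b=np.ones(2))
print(abs(np.trace(C @ X)) < 1e-6)  # True
```

Note that the two resolvent evaluations reduce to an eigendecomposition and a vector shift, mirroring the structure of the algorithm.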
In this work we refer to $r$ as the \textit{target-rank} of a particular iteration. Unfortunately, in practice, one does not have access to the \textit{target-rank}. However, as will be shown in the next section, there is no need to know the \textit{target-rank} in advance. Even more surprisingly, faster running times can be achieved by underestimating the target-rank to some extent. \section{Speeding up with inexact solves} So far, we have proposed a first-order method for solving general SDP problems. However, the projection onto the positive semidefinite cone is an obstacle to making the algorithm scalable to larger instances. In this section, we explore how to take advantage of a low-rank structure, even if the target-rank is unknown. \subsection{Low-rank approximation} It is well known that SDP solutions very often exhibit a low-rank structure. More precisely, as shown by Barvinok \cite{barvinok1995problems} and Pataki \cite{pataki1998rank}, any SDP with $m$ equality constraints has an optimal solution with rank at most $\sqrt{2m}$. In practice, for several SDP problems, it is frequently observed that the optimal solution has an even smaller rank. This phenomenon is notably present in SDPs generated by a semidefinite relaxation, where the solution ideally has low rank. In several cases, even if the relaxation is inexact, the rank of the solution is usually substantially small. This property has motivated a series of nonconvex methods aiming to exploit the low-rank structure of the problem \cite{zhang2011penalty, zhao2012approximation, yuan2016proximal}. For instance, one can encode the positive semidefinite constraint as a matrix factorization of the type $X=V^{T} V$, where $V \in \mathbb{R}^{r \times n}$ and $r$ is the target-rank. This technique was proposed a decade ago by Burer and Monteiro \cite{burer2003nonlinear} and since then it has been one of the main tools for tackling the scalability of low-rank SDPs. 
This matrix factorization approach has been successfully applied to large-scale computer vision \cite{shah2016biconvex} and combinatorial optimization problems \cite{wang2017mixing}. Unfortunately, by resorting to this approach, one loses convexity and all the associated guarantees. The main bottleneck of \texttt{PD-SDP} and any other convex optimization method for solving SDPs is computing the eigenvalue decomposition. A natural approach to overcome this issue is to make use of low-rank matrix approximation techniques in place of computing the full matrix decomposition. Recent work by Udell, Tropp et al. \cite{yurtsever2017sketchy} uses matrix sketching methods \cite{halko2011finding, tropp2017practical} to successfully find approximate solutions to low-rank convex problems. While their methodology possesses several advantages, such as optimal storage, it does require all solutions to be low-rank in order to guarantee convergence to an optimal solution. In contrast, the methodology proposed in this paper exploits the low-rank structure of the problem whenever possible, but it also converges to an optimal solution even in the presence of full-rank solutions. As shown by Eckart and Young \cite{eckart1936approximation}, the best rank-$r$ approximation of a symmetric matrix, for both the Frobenius and the spectral norms, is given by the truncated eigenvalue decomposition. Inspired by this result, the \textit{approximate} projection onto the positive semidefinite cone is given by \begin{equation} \begin{aligned} & \textbf{aproj}_{\mathbb{S}^n_+}(X, r) = \sum_{i=1}^r \text{max}\{0, \lambda_i\} u_i u_i^{T}, \end{aligned}\label{eq_trunc_eig} \end{equation} where $X$ is a symmetric matrix, $r$ is its \textit{target-rank} and $\lambda_1 \geq \cdots \geq \lambda_r$ are its $r$ largest eigenvalues. 
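A minimal numpy sketch of (\ref{eq_trunc_eig}) follows; for clarity a full eigendecomposition is truncated here, whereas in practice an iterative eigensolver (e.g. ARPACK) would compute only the $r$ leading eigenpairs.

```python
import numpy as np

def aproj_psd(X, r):
    # Approximate projection: keep positive parts of the r largest eigenvalues.
    lam, U = np.linalg.eigh(X)                     # ascending order
    lam, U = lam[::-1][:r], U[:, ::-1][:, :r]      # r largest pairs
    return (U * np.maximum(lam, 0.0)) @ U.T

S = np.diag([3.0, 1.0, -1.0, -2.0])                # two positive eigenvalues
full = aproj_psd(S, 4)                             # r = n recovers the exact projection
print(np.allclose(full, np.diag([3.0, 1.0, 0.0, 0.0])))  # True
print(np.allclose(aproj_psd(S, 2), full))          # True: r >= number of positive eigenvalues
print(np.allclose(aproj_psd(S, 1), full))          # False: target-rank too small
```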
It is important to notice that, despite being different from the Euclidean projection, $\textbf{aproj}_{\mathbb{S}^n_+}(X, r)$ does map the matrix $X$ onto the p.s.d. cone. In other words, the truncated projection maps onto the p.s.d. cone, but not necessarily onto the closest point in the Frobenius norm, as illustrated in Figure 1. If the target-rank $r$ is at least the number of positive eigenvalues, the truncated and the full projections are equivalent. Otherwise, if the target-rank $r$ is smaller than the number of positive eigenvalues, the truncated projection is only an approximation of the exact projection. In this case, by the Eckart–Young–Mirsky theorem \cite{eckart1936approximation}, the squared approximation error equals the sum of the squares of the positive eigenvalues that were left out by the truncated projection, \begin{equation*} \begin{aligned} & \left\Vert \textbf{proj}_{\mathbb{S}^n_+}(X) - \textbf{aproj}_{\mathbb{S}^n_+}(X, r) \right\Vert_F^2 = \sum_{i=r + 1}^{n} \max\{\lambda_i, 0\}^2. \end{aligned} \end{equation*} For practical purposes, the approximation error can be bounded in terms of the smallest eigenvalue computed by the truncated projection as follows \begin{equation} \begin{aligned} & \left\Vert \textbf{proj}_{\mathbb{S}^n_+}(X) - \textbf{aproj}_{\mathbb{S}^n_+}(X, r) \right\Vert_F^2 \leq (n - r) \max\{\lambda_r, 0\}^2. \end{aligned}\label{eq_error_approx} \end{equation} The partial eigenvalue decomposition (\ref{eq_trunc_eig}) can be efficiently computed via power iteration algorithms or Krylov subspace methods \cite{golub2012matrix, higham1988matrix}. Computational routines are freely available in almost every programming language for numerical computing \cite{lehoucq1998arpack, anderson1999lapack}. \subsection{Convergence checking and target rank update} As previously noted, the \texttt{PD-SDP} method can be seen as a fixed point iteration of a monotone operator \cite{he2010convergence}. 
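The approximation error of the truncated projection can be checked numerically; the sketch below (assuming numpy) verifies that the squared Frobenius error equals the sum of squares of the omitted positive eigenvalues and stays below $(n - r)$ times the square of the smallest computed eigenvalue.

```python
import numpy as np

def proj_psd(S):
    lam, U = np.linalg.eigh(S)
    return (U * np.maximum(lam, 0.0)) @ U.T

def aproj_psd(S, r):
    lam, U = np.linalg.eigh(S)
    lam, U = lam[::-1][:r], U[:, ::-1][:, :r]
    return (U * np.maximum(lam, 0.0)) @ U.T

lam_desc = np.array([4.0, 2.0, 1.0, -3.0])   # eigenvalues in descending order
S = np.diag(lam_desc)
n, r = 4, 1

err = np.linalg.norm(proj_psd(S) - aproj_psd(S, r)) ** 2
exact = np.sum(np.maximum(lam_desc[r:], 0.0) ** 2)   # omitted positive eigenvalues
bound = (n - r) * max(lam_desc[r - 1], 0.0) ** 2     # lambda_r: smallest computed

print(abs(err - exact) < 1e-9)  # True: err = 2^2 + 1^2 = 5
print(err <= bound)             # True: bound = 3 * 4^2 = 48
```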
In this sense, replacing the exact projection onto the positive semidefinite cone by its approximation (\ref{eq_trunc_eig}) results in the inexact iteration \begin{equation} \begin{aligned} & \big(X^{k + 1}, y^{k + 1}\big) \leftarrow \big(P + \alpha F \big)^{-1} P(X^k, y^k) + \varepsilon^k, \end{aligned}\label{eq_approx_fixed_point} \end{equation} where $F$ and $P$ are the operators defined in (\ref{eq_F}) and (\ref{eq_P}), respectively, and $\varepsilon^k$ is an error term. In the literature, this methodology can be found under the name of \textit{inexact solves} or \textit{approximate proximal point} \cite{rockafellar1976monotone}. Eckstein and Bertsekas \cite{eckstein1992douglas} have shown that the approximate scheme (\ref{eq_approx_fixed_point}) converges as long as the error terms are summable, i.e. \begin{equation} \begin{aligned} & \sum_{k=1}^{\infty} \left\Vert \varepsilon^k \right\Vert_2 < \infty. \end{aligned}\label{eq_summable} \end{equation} In the context of the approximate projection onto the p.s.d. cone, condition (\ref{eq_summable}) can be expressed in terms of the smallest eigenvalue of the truncated decomposition for each iterate. Let $\lambda_r^k$ denote the smallest eigenvalue computed at the $k^{\text{th}}$ iteration of the algorithm (\ref{eq_approx_fixed_point}), which corresponds to the $r^{\text{th}}$ largest eigenvalue at that iteration. Analogous to \cite{eckstein1992douglas}, given a target-rank $r$, the fixed point iteration (\ref{eq_approx_fixed_point}) will converge to a fixed point as long as \begin{equation} \begin{aligned} & (n - r) \sum_{k=1}^{\infty} \max\{\lambda_r^k, 0\} < \infty \end{aligned}\label{eq_conv_condition} \end{equation} and a fixed point exists. Note, however, that for an arbitrarily fixed target-rank $r$, the iteration (\ref{eq_approx_fixed_point}) may fail to converge. 
For instance, if one fixes the target-rank to a value smaller than the rank of the optimal solution, the error term will remain above a threshold and the sequence of errors will not be summable. We refer to the target-rank as \textit{sufficient} if it satisfies the condition (\ref{eq_conv_condition}). Since the minimal sufficient target-rank is not known \textit{a priori}, it is necessary to use an update mechanism that can guarantee the convergence of (\ref{eq_approx_fixed_point}). The strategy adopted in this paper starts the algorithm with a small target-rank and increases its value whenever necessary. The combined residual (\ref{eq_comb_residual}) is used to describe the \textit{state} of the algorithm and to trigger the update of the target-rank. Given an initial target-rank, the sequence of combined residuals can either converge or diverge. Even if the sequence converges, it will not necessarily be monotonic. In this regard, instead of checking the convergence of successive iterates, we evaluate the residuals (\ref{eq_comb_residual}) within a window of size $\ell$. In case the combined residuals converge according to a given tolerance, we need to examine the approximation error (\ref{eq_error_approx}). It follows from (\ref{eq_error_approx}) that if the approximation error is zero, every eigenvalue left out by the truncated projection is nonpositive and the truncated projection coincides with the exact one. Therefore, the inexact iteration (\ref{eq_approx_fixed_point}) has also converged to a fixed point of (\ref{eq_fixed_point}), and consequently an optimal primal-dual solution for the SDP problem of interest has been found. If the combined residual has converged with a target-rank $r$ but the approximation error is greater than the tolerance, the target-rank needs to be updated. 
In this case, it is interesting to notice that even though the current iterate pair is not an optimal point, it does give a feasible primal-dual solution under the assumption of strong duality. By characterizing a fixed point of (\ref{eq_approx_fixed_point}), the current iterate satisfies the linear constraints of the original SDP problem. Additionally, as was previously pointed out, the truncated projection maps onto the positive semidefinite cone. Therefore, given a target-rank $r$, if a fixed point of (\ref{eq_approx_fixed_point}) is found and strong duality holds, one has a feasible point designated by $(X_{[r]}, y_{[r]})$. The last possible case occurs when the combined residuals either stay stationary or diverge within the last $\ell$ iterates. In this case, the target-rank also needs to be updated. After updating the target-rank, the process is repeated. The combination of \texttt{PD-SDP} and the target-rank updating scheme is described in Algorithm 2 and will be referred to as \texttt{LR-PD-SDP}. In the worst-case scenario, the target-rank will be updated until $r$ equals $n$, and the subsequent iterations of the algorithm will be equivalent to the ones in \texttt{PD-SDP}. Consequently, in this setting, \texttt{LR-PD-SDP} will converge to a zero of the operator (\ref{eq_F}) under the same conditions as \texttt{PD-SDP}. \begin{algorithm}[H] \caption{\texttt{LR-PD-SDP}} \label{algorithm2} \begin{algorithmic} \STATE{\textbf{Given:} $\mathcal{M}$, $b \in \mathbb{R}^p$, $h \in \mathbb{R}^m$, $C \in \mathbb{S}^n$ and $r=1$. 
} \WHILE{$(n - r) \lambda_r > \varepsilon_{\lambda}$ } \WHILE{$\epsilon_{\text{comb}}^k > \epsilon_{\text{tol}}$ \textbf{and} $\epsilon_{\text{comb}}^{k} < \epsilon_{\text{comb}}^{k - \ell}$ } \STATE{$X^{k + \smath{1}} \hspace{0.1cm} \leftarrow \textbf{aproj}_{\mathbb{S}_+^n} (X^k - \alpha (\mathcal{M}^*(y^k) + C), \hspace{0.05cm} r)$ } \COMMENT{Approximate primal step} \STATE{$y^{k + \smath{1/2}} \hspace{-0.05cm} \leftarrow y^{k} + \alpha \mathcal{M}(2 X^{k + \smath{1}} - X^k)$ }\COMMENT{Dual step part 1} \STATE{$y^{k + \smath{1}} \hspace{0.2cm} \leftarrow y^{k + \smath{1/2}} - \alpha \hspace{0.05cm} \textbf{proj}_{{=b \atop \leq h}}(y^{k + \smath{1/2}} / \alpha)$ }\COMMENT{Dual step part 2 } \ENDWHILE{ } \IF{$\epsilon_{\text{comb}}^k < \epsilon_{\text{tol}}$} \STATE{$(X_{[r]}, y_{[r]}) \leftarrow (X^{k + 1}, y^{k + 1})$}\COMMENT{Save feasible solution} \ENDIF{} \STATE{$r \leftarrow 2 \hspace{0.05cm} r $ }\COMMENT{\textit{Target-rank} update} \ENDWHILE{ } \RETURN{$(X^{k + 1}, y^{k + 1})$} \end{algorithmic} \end{algorithm} Each iteration of \texttt{LR-PD-SDP} has a computational complexity of $\mathcal{O}(n^2 r)$, as opposed to the $\mathcal{O}(n^3)$ of \texttt{PD-SDP}. Additionally, if one doubles the target-rank whenever necessary, the updating procedure is carried out at most $\mathcal{O}(\log(n))$ times. Usually, \texttt{LR-PD-SDP} will require more iterations to reach convergence than \texttt{PD-SDP}. On the other hand, \texttt{LR-PD-SDP} induces the rank of the iterates $X^k$ to remain small. As illustrated in Figure 2, \texttt{LR-PD-SDP} avoids the high-rank iterates that occur with \texttt{PD-SDP}. Consequently, if the problem of interest has a low-rank solution, \texttt{LR-PD-SDP} will terminate much faster than \texttt{PD-SDP}. \begin{figure} \caption{Comparison of the rank path of the iterates $X^k$ for both the \texttt{PD-SDP} and \texttt{LR-PD-SDP} methods. 
Additionally, the sequence of intermediate primal feasible solutions found by \texttt{LR-PD-SDP} is represented by the points $X_{[1]}, X_{[2]}$ and $X_{[4]}$.} \end{figure} \section{\texttt{ProxSDP} solver} The complete implementation of the \texttt{LR-PD-SDP} algorithm is available online at \centerline{\href{https://github.com/mariohsouto/ProxSDP.jl}{\texttt{https://github.com/mariohsouto/ProxSDP.jl}}} \noindent This project includes usage examples and all the data and scripts needed for the benchmarks in the following sections. The solver was written entirely in the Julia language \cite{bezanson2017julia}, making extensive use of its linear algebra capabilities. The use of sparse matrix operations was crucial to achieve good performance on manipulations involving the linear constraints. Additionally, dense linear algebra routines relying on BLAS \cite{lawson1979basic} were heavily used, along with in-place operations to avoid unnecessary memory allocations. The built-in wrappers over LAPACK \cite{anderson1990lapack} and BLAS made it very easy to write high-performance code. In particular, the ARPACK wrapper, used to efficiently compute the largest eigenvalues, was modified to maximize in-place operations and avoid unnecessary allocations. Instead of writing a solver interface from scratch, we used the package MathOptInterface.jl (MOI), which abstracts solver interfaces. In doing so, we were able to write problems only once and test them with all available solvers. Moreover, having an MOI-based interface means that the \texttt{ProxSDP} solver is available through the modeling language JuMP \cite{dunning2017jump}. \section{Case studies} In this section, we present three SDP problems that serve as a basis for comparison between SDP solvers. The main goal of these experiments is to show how \texttt{LR-PD-SDP} outperforms state-of-the-art solvers in the low-rank setting. In this sense, the numerical experiments are focused on semidefinite relaxation problems. 
On these SDP relaxations, the original problem one is interested in solving is nonconvex and can be formulated as an SDP plus a rank constraint of the form $\textbf{rank}(X)=d$, where $d$ usually assumes a small value. Unfortunately, the rank constraint makes the problem extremely hard to solve, and any exact algorithm has doubly exponential complexity \cite{chistov1984complexity}. The semidefinite relaxation (SDR) avoids this difficulty by simply dropping the rank constraint and solving the remaining problem via semidefinite programming. Usually SDRs admit low-rank solutions, even without the presence of the rank constraint, making this the ideal case for testing the \texttt{LR-PD-SDP} algorithm. In the presented experiments, we consider a default numerical tolerance of $\epsilon_{\text{tol}}=10^{-3}$. As with any first-order method, both \texttt{PD-SDP} and \texttt{LR-PD-SDP} may require a large number of iterations to converge to higher accuracy \cite{eckstein1998operator, boyd2011distributed}. All the following tests were run on an Intel(R) Core(TM) i7-5820K CPU 3.30GHz (12 cores) Linux workstation with 62 GB of RAM. In the following benchmarks for \texttt{PD-SDP} and \texttt{LR-PD-SDP}, the Julia version used was compiled with Intel's MKL. The maximum running time for all experiments was set to 1200s. \subsection{Graph equipartition} Consider the undirected graph $G = (V, E)$, where $V$ is the set of vertices, $E$ is the set of edges, $n$ is the total number of vertices and a cut $(S, S')$ is a disjoint partition of $V$. Let $x \in \{-1, +1\}^n$ be such that \begin{equation*} \begin{aligned} & x_i = \left\{ \begin{array}{ll} +1, \hspace{0.3cm} \text{if} \hspace{0.1cm} i \in S, \\ -1, \hspace{0.3cm} \text{if} \hspace{0.1cm} i \in S', \end{array} \right. \hspace{0.2cm} \forall \hspace{0.1cm} i=1, \cdots, n. \end{aligned} \end{equation*} Given a set of weights $w$, the quantity $\tfrac{1}{4} \sum_{(i, j) \in E} w_{i j} (1 - x_i x_j)$ is called the weight of the cut $(S, S')$. 
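This identity can be checked on a toy graph; the sketch below (assuming numpy, and treating $E$ as containing each edge in both orientations so that the $\tfrac{1}{4}$ factor yields the undirected cut weight) compares the quadratic formula with a direct sum over crossing edges.

```python
import numpy as np

# Toy weighted graph on 4 vertices; each undirected edge listed once.
und_edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 3, 1.5)]
E = und_edges + [(j, i, w) for i, j, w in und_edges]  # both orientations

x = np.array([1, 1, -1, -1])      # partition S = {0, 1}, S' = {2, 3}
print(x.sum() == 0)               # True: an equipartition

formula = 0.25 * sum(w * (1 - x[i] * x[j]) for i, j, w in E)
crossing = sum(w for i, j, w in und_edges if x[i] != x[j])
print(formula == crossing)        # True: both equal 2.5
```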
The \textit{graph equipartition} problem aims to find the cut with maximum weight on a given graph such that both partitions of the graph have the same cardinality. This problem can be formulated as the following combinatorial optimization problem \begin{equation*} \begin{aligned} & \underset{x}{\text{maximize}} & & \tfrac{1}{4} \sum_{(i, j) \in E} w_{i j} (1 - x_i x_j)\\ & \text{subject to} & & x_i \in \{-1, +1\}, \hspace{0.4cm} \forall \hspace{0.1cm} i = 1,\cdots, n,\\ &&& \sum_{i=1}^n x_i=0. \end{aligned}\label{eq_max_cut_binary} \end{equation*} The binary constraints $x \in \{-1, +1\}^n$ can be expressed as nonconvex equality constraints of the form $x_i^2=1 \hspace{0.1cm} \forall \hspace{0.1cm} i = 1, \cdots, n$. By lifting the decision variables to the space of the symmetric matrices $X \in \mathbb{S}_+^n$ and introducing a rank-one constraint, the graph equipartition problem can be formulated as follows \begin{equation*} \begin{aligned} & \underset{X \in \mathbb{S}_+^n}{\text{minimize}} & & \textbf{tr}(W X)\\ & \text{subject to} & & \textbf{tr}(\mathds{1}_{n \times n} X) = 0, \\ &&& \textbf{diag}(X) = 1,\\ &&& X \succeq 0,\\ &&& \textbf{rank}(X)=1, \end{aligned}\label{eq_mimo_rank} \end{equation*} where the symmetric matrix $W$ is composed of the original weights $w$ and $\mathds{1}_{n \times n}$ denotes an $n \times n$ matrix filled with ones. By dropping the rank constraint, one obtains an SDP relaxation. For more details on graph partition problems, the reader should refer to \cite{karisch2000solving}. \begin{table}[h!] 
\centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{n} & \textbf{sdplib} & \textbf{SCS} & \textbf{CSDP} & \textbf{MOSEK} & \textbf{PD-SDP} & \textbf{LR-PD-SDP} \\ \hline 124 & gpp124-1 & 1.6 & 0.4 & \textbf{0.2} & 0.7 & 0.9 \\ \hline 124 & gpp124-2 & 1.5 & 0.4 & 0.3 & 0.5 & \textbf{0.2} \\ \hline 124 & gpp124-3 & 1.6 & 0.3 & \textbf{0.2} & 0.6 & \textbf{0.2} \\ \hline 124 & gpp124-4 & 1.7 & 0.5 & 0.3 & 0.6 & \textbf{0.2} \\ \hline 250 & gpp250-1 & 21.4 & 2.9 & \textbf{0.9} & 3.7 & 1.4 \\ \hline 250 & gpp250-2 & 7.8 & 2.2 & \textbf{1.1} & 4.1 & 1.2 \\ \hline 250 & gpp250-3 & 12.6 & 2.1 & \textbf{0.9} & 3.4 & \textbf{0.9} \\ \hline 250 & gpp250-4 & 16.4 & 2.2 & 0.9 & 3.8 & \textbf{0.6} \\ \hline 500 & gpp500-1 & 134.2 & 59.1 & 8.2 & 22.7 & \textbf{5.6} \\ \hline 500 & gpp500-2 & 97.4 & 12.2 & 8.6 & 21.5 & \textbf{6.1} \\ \hline 500 & gpp500-3 & 64.4 & 12.1 & 8.9 & 15.5 & \textbf{4.4} \\ \hline 500 & gpp500-4 & 71.4 & 13.4 & 8.7 & 15.4 & \textbf{6.5} \\ \hline 801 & equalG11 & 324.2 & 47.3 & 32.4 & 84.3 & \textbf{11.3} \\ \hline 1001 & equalG51 & 425.1 & 98.7 & 83.4 & 113.5 & \textbf{22.5} \\ \hline \end{tabular} \caption{Comparison of running times (seconds) for the SDPLIB's graph equipartition problem instances.} \end{table} \textit{Problem instances:} \hspace{0.3cm} Graph equipartition instances from the SDPLIB \cite{borchers1999sdplib} problem set were used to evaluate the performance of the proposed methods. As Table 1 shows, for smaller instances the MOSEK solver is slightly faster. For larger instances such as equalG11 and equalG51, \texttt{LR-PD-SDP} outperforms all other considered methods by a considerable margin. Furthermore, without exploiting the low-rank structure of the problem, \texttt{PD-SDP} fails to scale as the number of edges increases. \subsection{Sensor network localization} Now consider the problem of estimating the positions of a set of sensors in $d$-dimensional Euclidean space \cite{alfakih1999solving}. 
Let $a_1, \dots, a_m \in \mathbb{R}^d$ be a set of anchor points whose positions are known, and let $x_1, \dots, x_n \in \mathbb{R}^d$ be a set of sensor points with unknown positions. Given an incomplete set of Euclidean distances between sensors and between sensors and anchors, the goal is to find the true position of each sensor. This problem, known as \textit{sensor network localization}, is originally formulated as the following quadratically constrained program: \begin{equation} \begin{aligned} & \underset{x_1, \cdots, x_n \in \mathbb{R}^{d}}{\text{find}} & & x_1, \cdots, x_n\\ & \text{subject to} & & \left\Vert x_i - x_j \right\Vert_2^2 = w_{ij}^2, \hspace{0.3cm} \forall \hspace{0.1cm} (i, j) \in \Omega_s,\\ &&& \left\Vert a_k - x_j \right\Vert_2^2 = \tilde{w}_{kj}^2, \hspace{0.3cm} \forall \hspace{0.1cm} (k, j) \in \Omega_a,\\ \end{aligned} \label{ed_quad_const} \end{equation} where the distance between sensor $i$ and sensor $j$ is denoted by $w_{ij}$ and the distance between anchor $k$ and sensor $j$ is denoted by $\tilde{w}_{kj}$. The indices of the known distances are collected in the sets $\Omega_s$ and $\Omega_a$. Unfortunately, solving (\ref{ed_quad_const}) is NP-hard \cite{boyd2004convex}. We can formulate the nonconvex problem (\ref{ed_quad_const}) as a rank constrained semidefinite problem \cite{so2007theory}. In order to start building this alternative formulation, consider the matrices $X \in \mathbb{R}^{d \times n}$ and $Y \in \mathbb{S}^n$ given by \begin{equation*} \begin{aligned} & X = \left[ \begin{array}{ccc} \vertbar & & \vertbar \\ x_{1} & \cdots & x_{n} \\ \vertbar & & \vertbar \end{array} \right] \hspace{0.1cm} \text{and} \hspace{0.1cm} Y = X^T X = \begin{bmatrix} x_{1}^T x_{1} & x_{1}^T x_2 & \dots & x_{1}^T x_n \\ x_{2}^T x_1 & x_{2}^T x_{2} & \dots & x_{2}^T x_n \\ \vdots & \vdots & \ddots & \vdots \\ x_{n}^T x_1 & x_{n}^T x_2 & \dots & x_{n}^T x_{n} \end{bmatrix} . 
\end{aligned} \end{equation*} Now let $E^{(i, j)} \in \mathbb{S}^n$ be filled with zeros except for the following entries: $E^{(i, j)}_{i,i}=1$, $E^{(i, j)}_{j,j}=1$, $E^{(i, j)}_{i,j}=-1$ and $E^{(i, j)}_{j,i}=-1$. With this setting, the constraints that represent the distances between sensors can be formulated as \begin{equation} \begin{aligned} & \textbf{tr}(E^{(i, j)} Y) = w_{ij}^2, \hspace{0.4cm} \forall \hspace{0.1cm} (i, j) \in \Omega_s. \end{aligned}\label{eq_dist_sensors} \end{equation} Similarly, let $Z \in \mathbb{S}^{d + n}$ be the matrix \begin{equation*} \begin{aligned} & Z = \begin{bmatrix} I_{d \times d} & X \\ X^T & Y \end{bmatrix}. \end{aligned} \end{equation*} Additionally, let $U^{(k, j)} \in \mathbb{S}^{d + n}$ be filled with zeros except for the entries: $U^{(k, j)}_{1,1}=a_k^T a_k$, $U^{(k, j)}_{d+j,d +j}=1$, $U^{(k, j)}_{1:d,d + j}=- a_k$ and $U^{(k, j)}_{d + j, 1:d}=- a_k^T$. The constraints regarding the distances between sensors and anchors can be formulated as \begin{equation} \begin{aligned} & \textbf{tr}(U^{(k, j)} Z) = \tilde{w}_{kj}^2, \hspace{0.4cm} \forall \hspace{0.1cm} (k, j) \in \Omega_a. \end{aligned}\label{eq_dist_anchors} \end{equation} Using (\ref{eq_dist_sensors}), (\ref{eq_dist_anchors}) and the Schur complement condition $Y \succeq X^T X$ \cite{so2007theory}, the network localization problem can be formulated as \begin{equation} \begin{aligned} & \underset{Z \in \mathbb{S}_+^{d + n}}{\text{find}} & & Z\\ & \text{subject to} & & Z = \begin{bmatrix} I_{d \times d} & X \\ X^T & Y \end{bmatrix} \succeq 0,\\ &&& \textbf{tr}(E^{(i, j)} Y) = w_{ij}^2, \hspace{0.4cm} \forall \hspace{0.1cm} (i, j) \in \Omega_s,\\ &&& \textbf{tr}(U^{(k, j)} Z) = \tilde{w}_{kj}^2, \hspace{0.4cm} \forall \hspace{0.1cm} (k, j) \in \Omega_a,\\ &&& \textbf{rank}(Y) = d. 
\end{aligned} \label{eq_network_loc} \end{equation} If a unique solution for the given set of distances exists, the SDR obtained by dropping the rank constraint in (\ref{eq_network_loc}) is exact \cite{so2007theory}. \textit{Problem instances:} \hspace{0.3cm} In a set of numerical simulations, we randomly generated anchor points and distance measurements. Each anchor and sensor has its position in the two-dimensional Euclidean plane, i.e. $d=2$. In this sense, if the relaxation is exact, the optimal solution $Y^*$ must have rank two. This property explains \texttt{LR-PD-SDP} outperforming the other solvers as the number of sensors grows, as can be seen in Table 2. The importance of exploiting the low-rank structure can be verified by observing that \texttt{PD-SDP} does not scale efficiently as $n$ increases. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{n} & \textbf{SCS} & \textbf{CSDP} & \textbf{MOSEK} & \textbf{PD-SDP} & \textbf{LR-PD-SDP} \\ \hline 50 & 0.2 & 0.2 & \textbf{0.1} & 0.5 & 0.6 \\ \hline 100 & \textbf{0.8} & 4.5 & 0.9 & 6.1 & 1.6 \\ \hline 150 & \textbf{2.6} & 28.1 & 3.2 & 14.4 & 3.6 \\ \hline 200 & 6.4 & 89.8 & 11.2 & 32.3 & \textbf{6.1} \\ \hline 250 & 12.1 & 239.2 & 36.4 & 52.9 & \textbf{7.9} \\ \hline 300 & 28.7 & timeout & 85.2 & 96.6 & \textbf{13.5} \\ \hline \end{tabular} \caption{Comparison of running times (seconds) for randomized network localization problem instances.} \end{table} \subsection{MIMO detection} Consider an application in the field of wireless communication known in the literature as binary multiple-input multiple-output (MIMO) \cite{jalden2003semidefinite, jalden2008diversity}. As in several MIMO applications, one needs to send and receive multiple data signals over the same channel in the presence of additive noise. 
The binary MIMO channel can be modeled as: \begin{equation*} \begin{aligned} & y = H x + \varepsilon, \end{aligned} \end{equation*} where $y \in \mathbb{R}^m$ is the received signal, $H \in \mathbb{R}^{m \times n}$ is the channel and $\varepsilon \in \mathbb{R}^m$ is additive noise with i.i.d. Gaussian entries of variance $\sigma^2$. The signal, which is unknown to the receiver, is represented by $x \in \{-1, +1\}^n$. Assuming the noise distribution is known to be Gaussian, a natural approach is to compute the maximum likelihood estimate of the signal by solving the optimization problem: \begin{equation} \begin{aligned} & \underset{x}{\text{minimize}} & & \left\Vert H x - y \right\Vert_2^2\\ & \text{subject to} & & x \in \{-1, +1\}^n. \end{aligned}\label{eq_binary_least_squares} \end{equation} At first sight, the structure of the problem is very similar to a standard least squares problem. However, the unknown signal is constrained to be binary, which makes the problem nonconvex and dramatically changes its complexity. More precisely, solving \eqref{eq_binary_least_squares} is known to be NP-hard \cite{verdu1989computational}. By using a technique similar to the one employed in the graph equipartition problem, one can reformulate \eqref{eq_binary_least_squares} as the following rank-constrained semidefinite problem: \begin{equation*} \begin{aligned} & \underset{X \in \mathbb{S}_+^{n+1}}{\text{minimize}} & & \textbf{tr}(L X)\\ & \text{subject to} & & \textbf{diag}(X) = 1,\\ &&& X_{n+1, n+1} = 1,\\ &&& -1 \leq X \leq 1,\\ &&& X \succeq 0,\\ &&& \textbf{rank}(X) = 1.
\end{aligned} \end{equation*} where the decision variable $X$ is an $(n + 1) \times (n + 1)$ symmetric matrix and $L$ is given by: \begin{equation*} \begin{aligned} L= \begin{bmatrix} H^{*}H & -H^{*}y \\ -y^{*} H & y^{*} y \end{bmatrix}. \end{aligned}\label{eq_matrix_completion} \end{equation*} Given the optimal solution $X^*$ of the relaxation, the solution of the original binary MIMO problem is obtained by slicing the last column as $x^* = X^*_{1:n, n + 1}$. For this particular problem, the SDR is known to be exact if the signal-to-noise ratio, $\sigma^{-1}$, is sufficiently large \cite{jalden2003semidefinite}. This implies that the rank of the optimal solution $X^*$ is guaranteed to be equal to one even without the rank constraint. This low-rank structure makes it an ideal case study for the techniques proposed in this paper. \textit{Problem instances:} \hspace{0.3cm} In order to measure the performance of the different methods, problem instances with a large signal-to-noise ratio were randomly generated. For each instance, the channel matrix $H$ is drawn as an $n \times n$ matrix with i.i.d. standard Gaussian entries. The true signal $x^*$ was drawn uniformly at random from $\{-1, +1\}^n$. Since a high signal-to-noise ratio was used to build the instances, all recovered optimal solutions have rank one. In this setting, as illustrated in Table 3, \texttt{LR-PD-SDP} outperforms all other methods as the signal length increases. More surprisingly, \texttt{LR-PD-SDP} was able to solve large-scale instances with $5000 \times 5000$ p.s.d. matrices. The bottleneck found while trying to optimize even larger instances was the amount of memory required by the \texttt{ProxSDP} solver. \begin{table}[h!]
\centering \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{n} & \textbf{SCS} & \textbf{CSDP} & \textbf{MOSEK} & \textbf{PD-SDP} & \textbf{LR-PD-SDP} \\ \hline 100 & 1.5 & 1.2 & \textbf{0.1} & \textbf{0.1} & \textbf{0.1} \\ \hline 500 & 277.8 & 27.4 & 2.3 & 3.1 & \textbf{1.1} \\ \hline 1000 & timeout & 97.2 & 15.6 & 16.5 & \textbf{4.7} \\ \hline 2000 & timeout & 473.6 & 117.5 & 115.9 & \textbf{38.9} \\ \hline 3000 & timeout & timeout & 418.2 & 350.6 & \textbf{122.1} \\ \hline 4000 & timeout & timeout & 976.8 & 906.5 & \textbf{258.3} \\ \hline 5000 & timeout & timeout & timeout & timeout & \textbf{472.4} \\ \hline \end{tabular} \caption{Running times (seconds) for MIMO detection with high SNR.} \end{table} \section{Conclusions and future work} As a concluding remark, this work has proposed a novel primal-dual method that can efficiently exploit the low-rank structure of semidefinite programming problems. As illustrated by the case studies, the proposed technique can achieve solve times up to one order of magnitude faster than existing algorithms. Additionally, an open source solver, \texttt{ProxSDP}, for general SDP problems was made available. We hope that the results and tools presented in this work foster the use of semidefinite programming in new applications and fields of study. One aspect of the proposed methodology not fully explored in this paper is the value of the intermediate solutions found by \texttt{LR-PD-SDP}. For several applications, a suboptimal feasible solution may be useful. Particularly if one is interested in solving a semidefinite relaxation, a suboptimal solution can be almost as useful as the optimal solution, with the advantage of requiring less computing time to obtain. For instance, a branch-and-bound search method can benefit from the lower bounds that a feasible semidefinite relaxation provides \cite{dong2016relaxing}.
This ability to quickly generate high-quality lower bounds via intermediate feasible solutions can enhance the already well-known capacity of semidefinite programming to approximate hard problems. Another promising line of future work is the combination of chordal decomposition methods with the low-rank approximation presented in this work. If successful, this combination would allow the exploitation of both sparsity and low-rank structure simultaneously. \section*{Acknowledgments} Firstly, we would like to thank the Brazilian agencies CNPq and CAPES for financial support. We extend many thanks to all members of LAMPS (Laboratory of Applied Mathematical Programming and Statistics), in particular Thuener Silva and Raphael Saavedra, for the daily support and fruitful discussions. We would also like to thank the developers of MOI and JuMP for making comparisons between solvers much easier, and especially Benoît Legat for helping with \texttt{ProxSDP}'s MOI interface. \end{document}
\begin{document} \title{1- and 3-photon dynamical Casimir effects using nonstationary cyclic qutrit} \author{H. Dessano} \affiliation{Institute of Physics, University of Brasilia, 70910-900, Brasilia, Federal District, Brazil} \affiliation{Instituto Federal de Bras\'{\i}lia, Campus Recanto das Emas, 72620-100, Brasilia, Federal District, Brazil} \author{A. V. Dodonov } \email{adodonov@fis.unb.br} \affiliation{Institute of Physics, University of Brasilia, 70910-900, Brasilia, Federal District, Brazil} \affiliation{International Centre for Condensed Matter Physics, University of Brasilia, 70910-900, Brasilia, Federal District, Brazil} \begin{abstract} We consider the nonstationary circuit QED setup in which a 3-level artificial atom in the $\Delta$-configuration interacts with a single-mode cavity field of natural frequency $\omega $. It is demonstrated that when some atomic energy level(s) undergoes a weak harmonic modulation, photons can be generated from vacuum via effective 1- and 3-photon transitions, while the atom remains approximately in the ground state. These phenomena occur in the dispersive regime when the modulation frequency is accurately tuned near $\omega $ and $3\omega $, respectively, and the generated field states exhibit strikingly different statistics from the squeezed vacuum state attained in standard cavity dynamical Casimir effect. \end{abstract} \maketitle \section{Introduction} The term \emph{cavity dynamical Casimir effect} (DCE) can be used to denote the class of phenomena that feature the generation of photons from vacuum in some cavity due to the resonant external perturbation of the system parameters, where the cavity serves to produce a resonant enhancement of the DCE \cite{rev0,rev1,nor,rev2,nation,macri}. 
These phenomena were originally studied in the context of electromagnetic resonators with oscillating walls or containing a macroscopic dielectric medium with time-modulated internal properties \cite{zer,nikonov,law,lambrecht,soh,maianeto}, but were later generalized to other bosonic fields, e.g., phononic excitations of ion chains \cite{ions}, optomechanical systems \cite{opto}, cold atoms \cite{tito} and Bose-Einstein condensates \cite{bec1,bec2}. For single-mode cavities the main resonance occurs near the modulation frequency $2\omega $, where $\omega $ is the bare cavity frequency, and in the absence of dissipation the average photon number grows exponentially with time \cite{pla,evd}, resulting in the squeezed vacuum state with even photon numbers, analogously to the phenomenon of parametric amplification \cite{rev0,rev2,law}. The cavity DCE was recently implemented experimentally using a Josephson metamaterial consisting of an array of 250 superconducting quantum interference devices (SQUIDs) embedded in a microwave cavity whose electrical length was modulated by an external magnetic flux \cite{meta}. The concept of cavity DCE has been successfully extended to the area of circuit Quantum Electrodynamics (circuit QED) \cite{jpcs,liberato,zeilinger,JPA}, in which one or several artificial Josephson atoms strongly interact with a microwave field confined in superconducting resonators and waveguides \cite{cir1,cir2,cir3,cir4}. The exquisite \emph{in situ} control over the atomic parameters allows one to rapidly modulate the atomic energy levels and the atom-field coupling strength \cite{majer,ge,ger,ger1,v1,v2,v3}, enabling the use of artificial atoms as substitutes for the dielectric medium with time-dependent properties. From the viewpoint of a toy model \cite{igor}, a modulated or oscillating dielectric slab can be imagined as a set of atoms with varying parameters, so ultimately DCE must emerge for a single nonstationary $2$-level atom.
Indeed, it was shown that for off-resonant qubit(s) undergoing a weak external perturbation, pairs of photons are generated from vacuum for the modulation frequency $\sim 2\omega $ while the atom(s) remains approximately in the initial state \cite{jpcs,JPA,igor,tom}. In this scenario the atom plays the role of both the source and real-time detector of DCE, since the (small) atomic transition probability depends on the photon number and in turn affects the photon generation pattern \cite{PLAI,jpcs,diego}. Moreover, the rich nonharmonic spectrum of the composite atom--field system permits the implementation of other phenomena in the nonstationary regime, such as: sideband transitions \cite{blais-exp,schuster,side2}, anti-dynamical Casimir effect \cite{igor,diego,lucas,juan,werlang}, $n$-photon Rabi model \cite{nr}, generation of entanglement \cite{entan,etc3}, quantum simulations \cite{relativistic,sim,ger1} and dynamical Lamb effect \cite{lamb,etc1}. Here we explore theoretically the prospects of implementing nontraditional versions of cavity DCE using 3-level atoms (qutrits) in the cyclic (also known as $\Delta $-) configuration subject to parametric modulation. In this case all the transitions between the atomic levels can occur simultaneously via the cavity field \cite{cycl1,cycl2,cycl3,cycl4}, so the total number of excitations is not conserved even upon neglecting the counter-rotating terms (rotating wave approximation). Although prohibited by the electric-dipole selection rules for natural atoms, the $\Delta $-configuration can be implemented for certain artificial atoms in circuit QED \cite{cir4} by breaking the inversion symmetry of the potential energy. Our goal is to find new modulation frequencies, exclusive to cyclic qutrits, that induce photon generation from vacuum without appreciably changing the atomic state.
We find that for the harmonic modulation of some energy level(s) of a dispersive cyclic qutrit, photons can be generated from vacuum for the modulation frequencies $\eta \approx \omega $ and $\eta \approx 3\omega $ while the atom predominantly remains in the ground state. We call these processes 1- and 3-photon DCE because the photons are generated via effective 1- and 3-photon transitions between the system dressed-states, whose rates depend on the product of all three coupling strengths. We derive an approximate analytical description of the unitary dynamics and illustrate the typical system behavior by solving numerically the Schr\"{o}dinger equation. In particular, we show that the average photon number and atomic populations display a collapse-revival behavior, and the photon number distributions are completely different from the standard (2-photon) cavity DCE case. Moreover, we solve numerically the Markovian master equation and demonstrate that in the presence of weak dissipation the dissipative dynamics resembles the unitary one for initial times, confirming that our proposal is experimentally feasible. \section{Physical system} We consider a single cavity mode of constant frequency $\omega $ that interacts with a qutrit in the cyclic configuration \cite{cir4,cycl1,cycl2,cycl3,cycl4}, so that all the atomic transitions are allowed via one-photon transitions. The Hamiltonian reads \begin{equation} \hat{H}/\hbar =\omega \hat{n}+\sum_{k=1}^{2}E_{k}(t)\hat{\sigma}_{k,k}+\sum_{k=0}^{1}\sum_{l>k}^{2}g_{k,l}(\hat{a}+\hat{a}^{\dagger })(\hat{\sigma}_{l,k}+\hat{\sigma}_{k,l}). \label{H2} \end{equation} $\hat{a}$ ($\hat{a}^{\dagger }$) is the cavity annihilation (creation) operator and $\hat{n}=\hat{a}^{\dagger }\hat{a}$ is the photon number operator. The atomic eigenenergies are $E_{0}\equiv 0,E_{1}$ and $E_{2}$, the corresponding states are $|\mathbf{k}\rangle $ and we defined $\hat{\sigma}_{k,j}\equiv |\mathbf{k}\rangle \langle \mathbf{j}|$.
The constant parameters $g_{k,l}$ denote the coupling strengths between the atomic states $|\mathbf{k}\rangle $ and $|\mathbf{l}\rangle $ mediated by the cavity field. To emphasize the role of the counter-rotating terms (CRT) we rewrite (for $l>k$) \begin{equation*} g_{k,l}(\hat{a}+\hat{a}^{\dagger })(\hat{\sigma}_{l,k}+\hat{\sigma} _{k,l})\rightarrow g_{k,l}(\hat{a}\hat{\sigma}_{l,k}+c_{k,l}\hat{a}\hat{ \sigma}_{k,l}+h.c.), \end{equation*} where $c_{k,l}=1$ when the corresponding CRT is taken into account and is zero otherwise. Utilizing the tunability of Josephson atoms \cite{majer,ge,ger,ger1,v1,v2,v3}, we assume that the atomic energy levels can be modulated externally as \begin{equation*} E_{k}(t)\equiv E_{k}^{(0)}+\varepsilon _{k}\sin (\eta t+\phi _{k})\quad \text{ for} ~k=1,2~, \end{equation*} where $\varepsilon _{k}\ll E_{k}^{(0)}$ is the modulation amplitude, $\phi _{k}$ is the associated phase, $E_{k}^{(0)}$ is the bare energy value and $ \eta \gtrsim \omega $ is the modulation frequency. We would like to stress that for weak perturbations our approach can be easily generalized to multi-tone modulations or simultaneous perturbation of all the parameters in Hamiltonian (\ref{H2}). We expand the wavefunction as \begin{equation} |\psi (t)\rangle =\sum_{n=0}^{\infty }e^{-it\lambda _{n}}b_{n}(t)\mathcal{F} _{n}(t)|\varphi _{n}\rangle \label{state} \end{equation} \begin{equation*} \mathcal{F}_{n}(t)=\exp \left\{ \sum_{k=1}^{2}\frac{i\varepsilon _{k}}{\eta } \left[ \cos (\eta t+\phi _{k})-1\right] \langle \varphi _{n}|\hat{\sigma} _{k,k}|\varphi _{n}\rangle \right\} . \end{equation*} Here $\lambda _{n}$ are the eigenfrequencies of the bare Hamiltonian $\hat{H} _{0}\equiv \hat{H}[\varepsilon _{1}=\varepsilon _{2}=0]$ ($n$ increasing with energy) and $|\varphi _{n}\rangle $ are the corresponding eigenstates (dressed-states). 
$b_{n}(t)$ denotes the slowly-varying probability amplitude of the state $|\varphi _{n}\rangle $ and $\mathcal{F} _{n}(t)\approx 1$ is a rapidly oscillating function with a small amplitude. After substituting Eq. (\ref{state}) into the Schr\"{o}dinger equation, to the first order in $\varepsilon _{1}$ and $\varepsilon _{2}$ we obtain the differential equation \begin{equation} \dot{b}_{n}=\sum\limits_{m\neq n}b_{m}\left[ \Theta _{m;n}^{\ast }e^{it\left( \lambda _{n}-\lambda _{m}-\eta \right) }-\Theta _{n;m}e^{-it\left( \lambda _{m}-\lambda _{n}-\eta \right) }\right] \label{dif} \end{equation} that describes transitions between the dressed-states $|\varphi _{n}\rangle $ and $|\varphi _{m}\rangle $ with the transition rate $|\Theta _{n;m}|$, where \begin{equation} \Theta _{n;m}\equiv \frac{1}{2}\sum_{k=1}^{2}\varepsilon _{k}e^{i\phi _{k}}\langle \varphi _{n}|\hat{\sigma}_{k,k}|\varphi _{m}\rangle . \label{rate} \end{equation} The transition $|\varphi _{n}\rangle \leftrightarrow |\varphi _{m}\rangle $ occurs when the modulation frequency is resonantly tuned to $\eta _{r}=|\lambda _{m}-\lambda _{n}|+\Delta \nu $, where $\Delta \nu $ denotes a small shift \cite{JPA} dependent on $\varepsilon _{1},\varepsilon _{2}$ due to the rapidly-oscillating terms that were neglected in Eq. (\ref{dif}) (in this paper we adjust $\Delta \nu $ numerically). By writing the interaction-picture wavefunction as $|\psi _{I}(t)\rangle =\sum_{n}b_{n}(t)|\varphi _{n}\rangle $ one can cast Eq. (\ref{dif}) as a dressed-picture \emph{effective Hamiltonian} \begin{equation*} \hat{H}_{ef}(t)=-i\sum_{n,m\neq n}\Theta _{m;n}|\varphi _{m}\rangle \langle \varphi _{n}|e^{-it\left( \lambda _{n}-\lambda _{m}-\eta \right) }+h.c. 
\end{equation*} Since we focus on transitions in which the atom is minimally disturbed, we consider the dispersive regime \begin{equation*} |\Delta _{1}|,|\Delta _{2}|,|\Delta _{1}+\Delta _{2}|\gg \sqrt{n_{\max }} \max (g_{k,l})~, \end{equation*} where $n_{\max }$ is the maximum number of the system excitations and the bare detunings are defined as \begin{equation*} \Delta _{1}\equiv \omega -E_{1}^{(0)},~\Delta _{2}\equiv \omega -(E_{2}^{(0)}-E_{1}^{(0)}),~\Delta _{3}\equiv \Delta _{1}+\Delta _{2}. \end{equation*} Denoting by $|\zeta _{k}\rangle $ the dressed-states in which the atom is predominantly in the ground state, from the standard perturbation theory we find \begin{eqnarray} |\zeta _{k}\rangle &\approx &|\mathbf{0},k\rangle +\frac{c_{0,1}g_{0,1}^{2} \sqrt{k(k-1)}}{2\Delta _{1}\omega }|\mathbf{0},k-2\rangle \label{ds} \\ &&+\frac{g_{0,1}\sqrt{k}}{\Delta _{1}}|\mathbf{1},k-1\rangle -\frac{ c_{0,1}g_{0,1}\sqrt{k+1}}{2\omega -\Delta _{1}}|\mathbf{1},k+1\rangle \notag \\ &&-\frac{c_{1,2}g_{0,1}g_{1,2}k}{\Delta _{1}(2\omega -\Delta _{3})}|\mathbf{2 },k\rangle +\frac{g_{0,1}g_{1,2}\sqrt{k(k-1)}}{\Delta _{1}\Delta _{3}}| \mathbf{2},k-2\rangle \notag \\ &&-\frac{g_{0,2}\sqrt{k}}{\omega -\Delta _{3}}|\mathbf{2},k-1\rangle -\frac{ c_{0,2}g_{0,2}\sqrt{k+1}}{3\omega -\Delta _{3}}|\mathbf{2},k+1\rangle \notag \end{eqnarray} where $|\mathbf{j},k\rangle \equiv |\mathbf{j}\rangle _{atom}\otimes |k\rangle _{field}$ and $k\geq 0$. 
The corresponding eigenfrequencies are (neglecting constant shifts) \begin{equation} \Lambda _{k}\approx \omega _{ef}k+\alpha k^{2} \label{d} \end{equation} with the effective cavity frequency and the one-photon Kerr nonlinearity, respectively, \begin{eqnarray*} \omega _{ef} &\equiv &\omega +\frac{g_{0,1}^{2}}{\Delta _{1}}\left( 1-\frac{g_{1,2}^{2}}{\Delta _{1}\Delta _{3}}\right) -\frac{g_{0,2}^{2}}{\omega -\Delta _{3}} \\ &&-\frac{c_{0,1}g_{0,1}^{2}}{2\omega -\Delta _{1}}-\frac{c_{0,2}g_{0,2}^{2}}{3\omega -\Delta _{3}} \end{eqnarray*} \begin{eqnarray*} \alpha &\equiv &\frac{g_{0,1}^{2}}{\Delta _{1}^{2}}\left( \frac{g_{1,2}^{2}}{\Delta _{3}}-\frac{g_{0,1}^{2}}{\Delta _{1}}+\frac{c_{0,1}g_{0,1}^{2}}{2\omega }-\frac{c_{1,2}g_{1,2}^{2}}{2\omega -\Delta _{3}}\right. \\ &&\left. +\frac{g_{0,2}^{2}}{\omega -\Delta _{3}}+\frac{c_{0,1}g_{0,1}^{2}}{2\omega -\Delta _{1}}+\frac{c_{0,2}g_{0,2}^{2}}{3\omega -\Delta _{3}}\right) . \end{eqnarray*} In Appendix \ref{apen} we present the complete expressions for the eigenstates and eigenvalues obtained from second- and fourth-order perturbation theory, respectively. \section{1- and 3-photon DCE} The lowest-order phenomena that occur exclusively for cyclic qutrits depend on the combination $g_{0,1}g_{1,2}g_{0,2}$, so we define $G^{3}\equiv g_{0,1}g_{1,2}g_{0,2}/2$. Indeed, for $g_{0,2}=0$ we recover the ladder configuration, for $g_{0,1}=0$ the $\Lambda $-configuration and for $g_{1,2}=0$ the V-configuration. After substituting the dressed-states (\ref{ape}) into Eq. (\ref{rate}) we find that one such effect is the three-photon transition between the states $|\zeta _{k}\rangle $ and $|\zeta _{k+3}\rangle $.
To the lowest order the respective transition rate reads \begin{equation} \Theta _{k;k+3}^{(\zeta )}=G^{3}\sqrt{\frac{(k+3)!}{k!}}\left[ \varepsilon _{1}q_{1}e^{i\phi _{1}}-\varepsilon _{2}q_{2}e^{i\phi _{2}}\right] , \label{x1} \end{equation} where the $k$-independent parameters are \begin{eqnarray*} q_{1} &=&\frac{c_{0,2}}{\Delta _{1}\left( 3\omega -\Delta _{3}\right) \left( 3\omega -\Delta _{1}\right) } \\ &&+\frac{c_{0,1}c_{1,2}}{\left( 2\omega -\Delta _{1}\right) \left( \omega -\Delta _{3}\right) \left( \omega +\Delta _{1}\right) } \end{eqnarray*} \begin{eqnarray*} q_{2} &=&\frac{c_{0,2}}{\Delta _{1}\Delta _{3}\left( 3\omega -\Delta _{3}\right) } \\ &&+\frac{c_{0,1}c_{1,2}}{\left( 2\omega -\Delta _{1}\right) \left( \omega -\Delta _{3}\right) \left( 4\omega -\Delta _{3}\right) }~. \end{eqnarray*} We see that this effect, corresponding roughly to the transitions $|\mathbf{0 },k\rangle \leftrightarrow |\mathbf{0},k+3\rangle \leftrightarrow |\mathbf{0} ,k+6\rangle \leftrightarrow \cdots $, relies on the CRT: either $c_{0,2}$ must be nonzero, or the product $c_{0,1}c_{1,2}$. The second effect allowed by the cyclic configuration is the one-photon transition between the states $|\zeta _{k}\rangle $ and $|\zeta _{k+1}\rangle $, or roughly $|\mathbf{0},k\rangle \leftrightarrow |\mathbf{0} ,k+1\rangle \leftrightarrow |\mathbf{0},k+2\rangle \leftrightarrow \cdots $. 
We obtain to the lowest order \begin{equation} \Theta _{k;k+1}^{(\zeta )}=G^{3}\sqrt{k+1}\left[ \varepsilon _{1}Q_{1}(k)e^{i\phi _{1}}-\varepsilon _{2}Q_{2}(k)e^{i\phi _{2}}\right] , \label{x2} \end{equation} where we defined $k$-dependent functions \begin{eqnarray*} Q_{1}(k) &=&\frac{1}{\Delta _{1}(\omega -\Delta _{1})}\left( \frac{ c_{1,2}c_{0,2}(k+1)}{3\omega -\Delta _{3}}+\frac{k}{\omega -\Delta _{3}} \right) \\ &&-\frac{c_{0,1}c_{0,2}(k+2)}{(2\omega -\Delta _{1})(3\omega -\Delta _{3})(3\omega -\Delta _{1})} \\ &&-\frac{c_{1,2}k}{\Delta _{1}(\omega -\Delta _{3})(\omega +\Delta _{1})}- \frac{c_{0,1}}{(\omega -\Delta _{1})(2\omega -\Delta _{1})} \\ &&\times \left( \frac{c_{1,2}c_{0,2}(k+2)}{3\omega -\Delta _{3}}+\frac{k+1}{ \omega -\Delta _{3}}\right) \end{eqnarray*} \begin{eqnarray*} Q_{2}(k) &=&\frac{1}{(2\omega -\Delta _{3})(\omega -\Delta _{3})}\left( \frac{c_{0,1}(k+1)}{2\omega -\Delta _{1}}-\frac{c_{1,2}k}{\Delta _{1}}\right) \\ &&+\frac{c_{0,1}c_{1,2}c_{0,2}(k+2)}{(2\omega -\Delta _{1})(4\omega -\Delta _{3})(3\omega -\Delta _{3})} \\ &&+\frac{k}{\Delta _{1}\Delta _{3}(\omega -\Delta _{3})}+\frac{c_{0,2}}{ (2\omega -\Delta _{3})(3\omega -\Delta _{3})} \\ &&\times \left( \frac{c_{0,1}(k+2)}{2\omega -\Delta _{1}}-\frac{c_{1,2}(k+1) }{\Delta _{1}}\right) . \end{eqnarray*} We see that for $k>0$ (nonvacuum field states) the CRT are not required for this effect, but for the photon generation from vacuum either $c_{0,1}$ or the product $c_{1,2}c_{0,2}$ must be nonzero. In analogy to the generation of photon pairs in the standard DCE, we call the above effects 3- and 1-photon DCE, respectively. As seen from Eqs. (\ref{x1}) and (\ref{x2}), to induce the 1- and 3-photon DCE it is sufficient to modulate just one of the energy levels, yet the simultaneous modulation of both $E_{1}$ and $E_{2}$ can increase the transition rate provided the phase difference $(\phi _{1}-\phi _{2})$ is properly adjusted. 
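The dispersive-regime expressions above can be cross-checked by exact diagonalization of the bare Hamiltonian $\hat{H}_{0}$ in a truncated Fock basis. The following numpy sketch is purely illustrative: the truncation $N$ is an arbitrary choice, and the coupling and detuning values are of the order used in the numerical examples of this paper.

```python
import numpy as np

# Exact diagonalization of H0 (eps_1 = eps_2 = 0) on C^3 (x) Fock(N).
# Truncation N and parameter values are illustrative assumptions.
N = 25                                        # Fock-space truncation
w = 1.0                                       # cavity frequency (energy unit)
D1, D2 = 0.464, 0.106                         # bare detunings Delta_1, Delta_2
E = [0.0, w - D1, 2 * w - D1 - D2]            # atomic levels E_0, E_1^(0), E_2^(0)
g = {(0, 1): 5e-2, (1, 2): 6e-2, (0, 2): 3e-2}

a = np.diag(np.sqrt(np.arange(1, N)), 1)      # annihilation operator
num = a.T @ a                                 # photon number operator

def sig(k, l):                                # |k><l| on the qutrit
    s = np.zeros((3, 3))
    s[k, l] = 1.0
    return s

H0 = np.kron(np.eye(3), w * num)
for k in range(3):
    H0 += E[k] * np.kron(sig(k, k), np.eye(N))
for (k, l), gkl in g.items():                 # all CRT included (c_{k,l} = 1)
    H0 += gkl * np.kron(sig(l, k) + sig(k, l), a + a.T)

assert np.allclose(H0, H0.T)                  # Hermitian by construction
lam = np.linalg.eigvalsh(H0)                  # dressed eigenfrequencies, ascending
lam -= lam[0]                                 # measure from the ground state
```

The differences between consecutive eigenvalues of the $|\zeta_{k}\rangle$ branch of `lam` can then be compared with the perturbative $\Lambda_{k}$ of Eq. (\ref{d}), and the exact resonance $\eta_{r}$ read off directly instead of adjusting $\Delta\nu$ by trial.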
However, for constant modulation frequency the photon generation from vacuum is limited due to the resonance mismatch for multiphoton dressed states. Indeed, from Eq. (\ref{d}) we have \begin{equation*} \Lambda _{k+J}-\Lambda _{k}=\left( \omega _{ef}+J\alpha \right) J+(2\alpha J)k\,, \end{equation*} where $J=1,3$. Assuming realistically that $g_{l,k}$ and $\varepsilon _{j}$ are all of the same order of magnitude, we note that $|\alpha |\gtrsim |\Theta _{k;k+J}^{(\zeta )}|$ for $k\sim 1$. Hence for constant $\eta _{J}\simeq \Lambda _{J}-\Lambda _{0}$ (adjusted to generate photons from vacuum) the coupling $|\zeta _{k}\rangle \rightarrow |\zeta _{k+J}\rangle $ goes off resonance as $k$ increases and we expect a limited photon production. We note that several methods to enhance the photon generation were proposed in similar setups, e.g., multi-tone modulations \cite{jpcs,diego}, time-varying modulation frequency including effective Landau-Zener transitions \cite{palermo} and optimum control strategies \cite{optcontrol}. \section{Discussion and conclusions} \begin{figure} \caption{(color online) \textbf{System behavior for 3-photon DCE}. a) Dynamics of the average photon number $n_{ph}$ and the Mandel $Q$-factor. b) Dynamics of the atomic populations: the probability that the atom leaves the initial state is $\lesssim 12\%$. c) Photon statistics $P(n)=\mathrm{Tr}(\hat{\protect\rho}|n\rangle \langle n|)$ for the time instant $\protect\omega t_{\ast }=0.91\times 10^{5}$ [marked by the green arrow in (a)], where $\hat{\protect\rho}$ is the total density operator. Notice the local peaks at $n=3k$, attesting to the effective 3-photon nature of the process.} \label{f1} \end{figure} \begin{figure} \caption{(color online) \textbf{System behavior for 1-photon DCE}. Similar to Fig. \protect\ref{f1}. The probability that the atom leaves the initial state is now $\lesssim 30\%$.
For $\protect\omega t_{\ast }=1.61\times 10^{5}$ (panel c) the photon statistics lacks local peaks, indicating that the photons are generated via effective 1-photon transitions.} \label{f2} \end{figure} To confirm our analytic predictions we solve numerically the Schr\"{o}dinger equation for the Hamiltonian (\ref{H2}) considering the initial state $|\mathbf{0},0\rangle $ (which is approximately equal to the system ground state in our regime of parameters) and feasible coupling constants $g_{0,1}/\omega =5\times 10^{-2}$, $g_{1,2}/\omega =6\times 10^{-2}$ and $g_{0,2}/\omega =3\times 10^{-2}$ (including all the CRT, $c_{l,k}=1$). For the sake of illustration we consider the sole modulation of $E_{2}$, setting $\varepsilon _{1}=0$ and $\varepsilon _{2}=7\times 10^{-2}E_{2}^{(0)}$. In Fig. \ref{f1} we illustrate the 3-photon DCE for the detunings $\Delta _{1}/\omega =0.464$, $\Delta _{2}/\omega =0.106$ and modulation frequency $\eta /\omega =3.0037$. We show the average photon number $n_{ph}=\langle \hat{a}^{\dagger }\hat{a}\rangle $, the Mandel factor $Q=[\langle (\Delta \hat{n})^{2}\rangle -n_{ph}]/n_{ph}$ (that quantifies the spread of the photon number distribution, being $Q=1+2n_{ph}$ for the squeezed vacuum state) and the atomic populations $P_{k}=\langle \hat{\sigma}_{k,k}\rangle $. We also show the photon number distribution at the time instant $\omega t_{\ast }=0.91\times 10^{5}$ (when $n_{ph}$ is maximum), confirming that the photon generation occurs via effective 3-photon processes. We observe that for $t=t_{\ast }$ the photon statistics does not show special behavior around $n\approx n_{ph}$. The average photon number and the atomic populations exhibit a collapse-revival behavior due to increasingly off-resonant couplings between the probability amplitudes $b_{m}$ in Eq. (\ref{dif}).
Moreover, during the collapses [$n_{ph},(1-P_{0})\approx 0$] the Mandel factor is very large, $Q\gg 1,n_{ph}$, which is typical of \emph{hyper-Poissonian} states that have long distribution tails with very low (but not negligible) probabilities \cite{PLAI}. In Fig. \ref{f2} we perform a similar analysis for the 1-photon DCE, setting the parameters $\Delta _{1}/\omega =0.362$, $\Delta _{2}/\omega =0.51$ and $\eta /\omega =0.9978$. The qualitative behavior of $n_{ph}$, $Q$ and the atomic populations is similar to the previous case, but the photon number distribution is completely different, as illustrated in panel (c) for $\omega t_{\ast }=1.61\times 10^{5}$. Now all the photon states are populated (as expected for an effective 1-photon process), and the $Q$-factor is always larger than $n_{ph}$ due to the larger spread of the distribution. As in the previous example, there are no special features in the photon statistics for $n\approx n_{ph}$, and one has similar probabilities of detecting any value ranging from 3 to 20 photons. To assess the experimental feasibility of our proposal we solve numerically the phenomenological Markovian master equation for the density operator $\hat{\rho}$ \cite{cycl2,cycl4} \begin{equation*} \dot{\hat{\rho}}=\frac{1}{i\hbar }[\hat{H},\hat{\rho}]+\kappa \mathcal{L}[\hat{a}]+\sum_{k=0}^{1}\sum_{l>k}^{2}\gamma _{k,l}\mathcal{L}[\hat{\sigma}_{k,l}]+\sum_{k=1}^{2}\gamma _{k}^{(\phi )}\mathcal{L}[\hat{\sigma}_{k,k}]\,, \end{equation*} where $\mathcal{L}[\hat{O}]\equiv \hat{O}\hat{\rho}\hat{O}^{\dagger }-\hat{O}^{\dagger }\hat{O}\hat{\rho}/2-\hat{\rho}\hat{O}^{\dagger }\hat{O}/2$ is the Lindblad superoperator, $\kappa $ is the cavity relaxation rate and $\gamma _{k,l}$ ($\gamma _{k}^{(\phi )}$) are the atomic relaxation (pure dephasing) rates.
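A basic structural property of this master equation is that each dissipator $\mathcal{L}[\hat{O}]$ is trace-preserving and Hermiticity-preserving, so $\mathrm{Tr}\,\hat{\rho}=1$ is maintained throughout the evolution. A minimal numpy sketch of the superoperator defined above, checked on a random density matrix (the dimension and the jump operator are purely illustrative; in the actual simulation $\hat{O}$ would run over $\hat{a}$, $\hat{\sigma}_{k,l}$ and $\hat{\sigma}_{k,k}$ weighted by $\kappa$, $\gamma_{k,l}$ and $\gamma_{k}^{(\phi)}$):

```python
import numpy as np

def lindblad(O, rho):
    """L[O] rho = O rho O^dag - (O^dag O rho + rho O^dag O)/2."""
    OdO = O.conj().T @ O
    return O @ rho @ O.conj().T - 0.5 * (OdO @ rho + rho @ OdO)

rng = np.random.default_rng(0)
dim = 6                                     # illustrative Hilbert-space dimension
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = M @ M.conj().T
rho /= np.trace(rho)                        # random density matrix, tr(rho) = 1

O = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
drho = lindblad(O, rho)
assert abs(np.trace(drho)) < 1e-12          # each dissipator preserves the trace
assert np.allclose(drho, drho.conj().T)     # and keeps rho Hermitian
```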
Notice that related works demonstrated that for $ g_{k,l}/\omega < 10^{-1}$ and initial times this approach is a good approximation to a more rigorous microscopic model of dissipation \cite {diego,werlang,palermo}. Typical behavior of 3-photon DCE under unitary and dissipative evolutions is illustrated in Fig. \ref{f3}, where we set $\Delta _{1}/\omega =0.24$, $\Delta _{2}/\omega =-0.132$, $\eta /\omega =3.0269$ \cite{solv} and feasible dissipative parameters $ \gamma _{k,l}=\gamma _{k}^{(\phi )}=g_{0,1}\times 10^{-3}$ and $\kappa =g_{0,1}\times 10^{-4}$ (other parameters are as in Fig. \ref{f1}). It is seen that for initial times the dissipative dynamics resembles the unitary one, indicating that our predictions could be verified in realistic circuit QED systems. \begin{figure} \caption{(color online) \textbf{Dissipative 3-photon DCE}. Behavior of $ n_{ph}$, $Q$ and $P_{k}$ under unitary (black thin lines) and dissipative (red thick lines) evolutions. For initial times (till the first maximum of $ n_{ph}$) the effects of dissipation are small; for larger times the dissipation strongly affects the dynamics, but the main qualitative features persist.} \label{f3} \end{figure} In conclusion, we showed that for an artificial cyclic qutrit coupled to a single-mode cavity one can induce effective 1- and 3-photon transitions between the system dressed-states in which the atom remains approximately in the ground state. These effects occur in the dispersive regime of light-matter interaction for external modulation of some system parameter(s) with frequencies $\eta \approx \omega $ and $\eta \approx 3\omega $, respectively. We evaluated the associated transition rates assuming the modulation of one or both excited energy-levels of the atom, and our method can be easily extended to the perturbation of all the parameters in the Hamiltonian. 
For constant modulation frequency the average photon number and the atomic populations exhibit a collapse-revival behavior with a limited photon generation due to effective Kerr nonlinearities. The photon statistics is strikingly different from the standard (2-photon) DCE case, for which a squeezed vacuum state would be generated. Although we focused on transitions that avoid exciting the atom, our approach can be applied to study other uncommon transitions allowed by $\Delta $-atoms. Hence this study indicates viable alternatives to engineer effective interactions in nonstationary circuit QED using cyclic qutrits. \begin{acknowledgments} A. V. D. acknowledges a partial support of the Brazilian agency CNPq (Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico). \end{acknowledgments} \appendix \section{Full expressions for the dressed states}\label{apen} \begin{widetext} For the purpose of this paper it is sufficient to calculate the eigenstates of the Hamiltonian $\hat{H}_{0}$ using the second-order perturbation theory. In the dispersive regime we obtain \begin{eqnarray} |\zeta _{k}\rangle &=&\mathcal{N}_{k}\left[ |\mathbf{0},k\rangle +\frac{ g_{0,1}\sqrt{k}}{\Delta _{1}}|\mathbf{1},k-1\rangle -\frac{c_{0,1}g_{0,1} \sqrt{k+1}}{2\omega -\Delta _{1}}|\mathbf{1},k+1\rangle -\frac{g_{0,2} \sqrt{k}}{\omega -\Delta _{3}}|\mathbf{2},k-1\rangle -\frac{ c_{0,2}g_{0,2}\sqrt{k+1}}{3\omega -\Delta _{3}}|\mathbf{2},k+1\rangle \right. 
\nonumber\\ &&+\left( \frac{c_{0,1}g_{0,1}^{2}}{2\omega -\Delta _{1}}+\frac{ c_{0,2}g_{0,2}^{2}}{3\omega -\Delta _{3}}\right) \frac{\sqrt{(k+1)(k+2)} }{2\omega }|\mathbf{0},k+2\rangle +\left( \frac{c_{0,1}g_{0,1}^{2}}{ \Delta _{1}}-\frac{c_{0,2}g_{0,2}^{2}}{\omega -\Delta _{3}}\right) \frac{ \sqrt{k(k-1)}}{2\omega }|\mathbf{0},k-2\rangle \nonumber\\ &&+\left( \frac{c_{1,2}c_{0,2}(k+1)}{3\omega -\Delta _{3}}+\frac{k}{ \omega -\Delta _{3}}\right) \frac{g_{1,2}g_{0,2}}{\omega -\Delta _{1} }|\mathbf{1},k\rangle +\left( \frac{c_{0,1}(k+1)}{2\omega -\Delta _{1}}- \frac{c_{1,2}k}{\Delta _{1}}\right) \frac{g_{0,1}g_{1,2}}{2\omega _{0}-\Delta _{3}}|\mathbf{2},k\rangle \nonumber\\ &&+\frac{c_{0,2}g_{1,2}g_{0,2}\sqrt{(k+1)(k+2)}}{(3\omega -\Delta _{3})(3\omega -\Delta _{1})}|\mathbf{1},k+2\rangle -\frac{ c_{1,2}g_{1,2}g_{0,2}\sqrt{k(k-1)}}{(\omega -\Delta _{3})(\omega _{0}+\Delta _{1})}|\mathbf{1},k-2\rangle \nonumber\\ &&\left. +\frac{c_{0,1}c_{1,2}g_{0,1}g_{1,2}\sqrt{(k+1)(k+2)}}{(2\omega _{0}-\Delta _{1})(4\omega -\Delta _{3})}|\mathbf{2},k+2\rangle +\frac{ g_{0,1}g_{1,2}\sqrt{k(k-1)}}{\Delta _{1}\Delta _{3}}|\mathbf{2},k-2\rangle \right] \label{ape}, \end{eqnarray} where $\mathcal{N}_{k}=1+O[(g_{0}/\Delta _{1})^{2}]$ is the normalization constant whose value does not appear in our final (lowest-order) expressions. For the eigenenergy corresponding to the state $|\zeta _{k}\rangle $ we need to use the fourth-order perturbation theory to account for the effective Kerr-nonlinearity. We get \begin{equation*} \Lambda _{k}=\omega k+L_{1}(k)+L_{2}(k) \end{equation*} \begin{equation*} L_{1}(k)\equiv \left( \delta _{1}-\delta _{2}-c_{0,1}\delta _{3}-c_{0,2}\delta _{4}\right) k-\left( c_{0,1}\delta _{3}+c_{0,2}\delta _{4}\right) \end{equation*} \begin{equation*} L_{2}(k)\equiv \left[ \delta _{1}\beta _{1}(k)-\delta _{2}\beta _{2}(k) \right] k-\left[ c_{0,1}\delta _{3}\beta _{3}(k)+c_{0,2}\delta _{4}\beta _{4}(k)\right] \left( k+1\right) . 
\end{equation*} We defined the shifts $\delta _{1}=g_{0,1}^{2}/\Delta _{1}$,~$\delta _{2}=g_{0,2}^{2}/(\omega -\Delta _{3})$,~$\delta _{3}=g_{0,1}^{2}/(2\omega -\Delta _{1})$,~$\delta _{4}=g_{0,2}^{2}/(3\omega -\Delta _{3})$,~$\delta _{5}=g_{1,2}^{2}/(2\omega -\Delta _{3})$,~$\delta _{6}=g_{1,2}^{2}/(\omega -\Delta _{1})$. Other dimensionless functions of $k$ are defined as \begin{equation*} \beta _{1}(k)\equiv \left( \delta _{1}-c_{0,2}\delta _{2}\right) \frac{ c_{0,1}\left( k-1\right) }{2\omega }+\frac{g_{1,2}^{2}\left( k-1\right) }{\Delta _{1}\Delta _{3}}+c_{1,2}\delta _{5}\left( \frac{c_{0,1}\left( k+1\right) }{2\omega -\Delta _{1}}-\frac{k}{\Delta _{1}}\right) -\frac{ L_{1}(k)}{\Delta _{1}} \end{equation*} \begin{equation*} \beta _{2}(k)\equiv \left( c_{0,1}\delta _{1}-\delta _{2}\right) \frac{ c_{0,2}\left( k-1\right) }{2\omega }-\frac{c_{1,2}g_{1,2}^{2}\left( k-1\right) }{\left( \omega -\Delta _{3}\right) \left( \omega +\Delta _{1}\right) }+\delta _{6}\left( \frac{c_{1,2}c_{0,2}\left( k+1\right) }{ 3\omega -\Delta _{3}}+\frac{k}{\omega -\Delta _{3}}\right) +\frac{ L_{1}(k)}{\omega -\Delta _{3}} \end{equation*} \begin{equation*} \beta _{3}(k)\equiv \left( \delta _{3}+c_{0,2}\delta _{4}\right) \frac{k+2}{ 2\omega }+\delta _{5}\left( \frac{\left( k+1\right) }{2\omega _{0}-\Delta _{1}}-\frac{c_{1,2}k}{\Delta _{1}}\right) +\frac{ c_{1,2}g_{1,2}^{2}\left( k+2\right) }{\left( 2\omega -\Delta _{1}\right) \left( 4\omega -\Delta _{3}\right) }+\frac{L_{1}(k)}{2\omega -\Delta _{1}} \end{equation*} \begin{equation*} \beta _{4}(k)\equiv \left( c_{0,1}\delta _{3}+\delta _{4}\right) \frac{k+2}{ 2\omega }+c_{1,2}\delta _{6}\left( \frac{c_{1,2}\left( k+1\right) }{ 3\omega -\Delta _{3}}+\frac{k}{\omega -\Delta _{3}}\right) +\frac{ g_{1,2}^{2}\left( k+2\right) }{\left( 3\omega -\Delta _{3}\right) \left( 3\omega -\Delta _{1}\right) }+\frac{L_{1}(k)}{3\omega -\Delta _{3}}. \end{equation*} \end{widetext} \end{document}
\begin{document} \maketitle \centerline{\scshape Claudio Pessoa and Ronisio Ribeiro} {\footnotesize \centerline{Universidade Estadual Paulista (UNESP),} \centerline{Instituto de Bioci\^encias, Letras e Ci\^encias Exatas,} \centerline{R. Cristov\~ao Colombo, 2265, 15.054-000, S. J. Rio Preto, SP, Brazil } \centerline{\email{c.pessoa@unesp.br} and \email{ronisio.ribeiro@unesp.br}}} \begin{quote}{\normalfont\fontsize{8}{10}\selectfont {\bfseries Abstract.} In this paper, we study the number of limit cycles that can bifurcate from a periodic annulus in discontinuous planar piecewise linear Hamiltonian differential systems with three zones separated by two parallel straight lines. We prove that if the central subsystem, i.e. the system defined between the two parallel lines, has a real center and the other subsystems have centers or saddles, then at least three limit cycles appear after perturbations of the periodic annulus. For this, we study the number of zeros of a Melnikov function for piecewise Hamiltonian systems and present a normal form for these systems in order to simplify the computations. \par} \end{quote} \section{Introduction and Main Results} The first works on piecewise differential systems appeared in the 1930s, see \cite{And66}. This class of systems has great applicability, mainly in mechanics, electrical circuits, control theory, etc (see for instance the book \cite{diB08} and the papers \cite{Chu90, Fit61, McK70, Nag62}). This subject has piqued the attention of researchers in the qualitative theory of differential equations, and numerous studies about this topic have arisen in the literature recently. Piecewise differential systems with two zones are the most studied, either for their applications in modeling phenomena in general or for their apparent simplicity (see \cite{Fre12, LiS19b}). As in the smooth case, research is mainly concentrated on the determination of the number and position of the limit cycles of these systems.
In 1998, Freire, Ponce, Rodrigo and Torres proved in \cite{Fre98} that a continuous piecewise linear differential system in the plane with two zones has at most one limit cycle. In the discontinuous case, the maximum number of limit cycles is not known, but important partial results about this problem have been obtained, see for example \cite{Buz13, Bra13, Fre14b, Lli18b}. The problem becomes more complicated when we have more than two zones, and there are few works that deal with the discontinuous case (see \cite{Don17, Hu13, Lli15b, Wan16}). However, when restrictive hypotheses such as symmetry and linearity are imposed, the issue of limit cycles is well explored. More precisely, for symmetric continuous piecewise linear differential systems with three zones, conditions for nonexistence and existence of one, two or three limit cycles have been obtained (see for instance the book \cite{Lli14}). For the nonsymmetric case, examples with two limit cycles surrounding the only singular point at the origin were found in \cite{Lim17, Lli15}. Recently, some researchers have been trying to estimate the number of limit cycles in discontinuous piecewise Hamiltonian differential systems with three zones. In this direction, we have papers with one limit cycle, see \cite{Fon20, Lli18a}, and with more than two limit cycles, see \cite{Xio20, Xio21, Yan20}. In this work, we contribute along these lines. Our goal is to study the number of limit cycles that can bifurcate from the periodic annulus of families of discontinuous planar piecewise linear Hamiltonian differential systems with three zones separated by two parallel straight lines. We prove that if the central subsystem, i.e. the system between the two parallel lines, has a real center and the other subsystems have centers or saddles, then at least three limit cycles, visiting the three zones, bifurcate from a periodic annulus.
Our results are obtained by studying the number of zeros of the Melnikov function for piecewise Hamiltonian systems, see the papers \cite{Xio20, Xio21} for more details about the Melnikov function. In order to set the problem, let $h_i:\mathbb{R}^2\rightarrow\mathbb{R}$, $i=L,R$, be the functions $h_{\scriptscriptstyle L}(x,y)=x+1$ and $h_{\scriptscriptstyle R}(x,y)=x-1$. Denote by $\Sigma_{\scriptscriptstyle L}=h_{\scriptscriptstyle L}^{-1}(0)$ and $\Sigma_{\scriptscriptstyle R}=h_{\scriptscriptstyle R}^{-1}(0)$ the {\it switching curves}. These straight lines decompose the plane into three regions $$R_{\scriptscriptstyle L}=\{(x,y)\in\mathbb{R}^2:x<-1\},\quad R_{{\scriptscriptstyle C}}=\{(x,y)\in\mathbb{R}^2:-1<x<1\},$$ and $$R_{{\scriptscriptstyle R}}=\{(x,y)\in\mathbb{R}^2:x>1\}.$$ Consider the discontinuous planar piecewise linear near--Hamiltonian system with three zones, given by \begin{equation}\label{eq:01} \left\{\begin{array}{ll} \dot{x}= H_y(x,y)+\epsilon f(x,y), \\ \dot{y}= -H_x(x,y)+\epsilon g(x,y), \end{array} \right. \end{equation} with \begin{equation*} H(x,y)=\left\{\begin{array}{ll} \hspace{-0.3cm}H^{\scriptscriptstyle L}(x,y)=\dfrac{b_{\scriptscriptstyle L}}{2}y^2-\dfrac{c_{\scriptscriptstyle L}}{2}x^2+a_{\scriptscriptstyle L}xy+\alpha_{\scriptscriptstyle L}y-\beta_{\scriptscriptstyle L}x, \quad x\leq -1, \\ \hspace{-0.3cm}H^{\scriptscriptstyle C}(x,y)=\dfrac{b_{\scriptscriptstyle C}}{2}y^2-\dfrac{c_{\scriptscriptstyle C}}{2}x^2+a_{\scriptscriptstyle C}xy+\alpha_{\scriptscriptstyle C}y-\beta_{\scriptscriptstyle C}x, \quad -1\leq x\leq 1, \\ \hspace{-0.3cm}H^{\scriptscriptstyle R}(x,y)=\dfrac{b_{\scriptscriptstyle R}}{2}y^2-\dfrac{c_{\scriptscriptstyle R}}{2}x^2+a_{\scriptscriptstyle R}xy+\alpha_{\scriptscriptstyle R}y-\beta_{\scriptscriptstyle R}x, \quad x \geq 1, \\ \end{array} \right.
\end{equation*} \begin{equation}\label{eq:02} f(x,y)=\left\{\begin{array}{ll} f_{\scriptscriptstyle L}(x,y)=p_{\scriptscriptstyle L}x+q_{\scriptscriptstyle L}y+r_{\scriptscriptstyle L}, \quad x\leq -1, \\ f_{\scriptscriptstyle C}(x,y)=p_{\scriptscriptstyle C}x+q_{\scriptscriptstyle C}y+r_{\scriptscriptstyle C}, \quad -1\leq x\leq 1, \\ f_{\scriptscriptstyle R}(x,y)=p_{\scriptscriptstyle R}x+q_{\scriptscriptstyle R}y+r_{\scriptscriptstyle R}, \quad x \geq 1, \\ \end{array} \right. \end{equation} \begin{equation}\label{eq:03} g(x,y)=\left\{\begin{array}{ll} g_{\scriptscriptstyle L}(x,y)=s_{\scriptscriptstyle L}x+u_{\scriptscriptstyle L}y+v_{\scriptscriptstyle L}, \quad x\leq -1, \\ g_{\scriptscriptstyle C}(x,y)=s_{\scriptscriptstyle C}x+u_{\scriptscriptstyle C}y+v_{\scriptscriptstyle C}, \quad -1\leq x\leq 1, \\ g_{\scriptscriptstyle R}(x,y)=s_{\scriptscriptstyle R}x+u_{\scriptscriptstyle R}y+v_{\scriptscriptstyle R}, \quad x \geq 1, \\ \end{array} \right. \end{equation} where the dot denotes the derivative with respect to the independent variable $t$, here called the time, and $0\leq\epsilon<<1$. We call system \eqref{eq:01} the {\it left subsystem} when $x\leq -1$, the {\it right subsystem} when $x\geq 1$ and the {\it central subsystem} when $-1\leq x\leq 1$. Denote by $X_{\scriptscriptstyle L}(x,y)$, $X_{\scriptscriptstyle C}(x,y)$ and $X_{\scriptscriptstyle R}(x,y)$ the planar piecewise linear vector fields associated with the left, central and right subsystems of $\eqref{eq:01}|_{\epsilon=0}$, respectively. We will use the vector field $X_{\scriptscriptstyle L}$ and the switching curve $\Sigma_{\scriptscriptstyle L}$ in the next definitions. However, we can easily adapt the definitions to the vector fields $X_{\scriptscriptstyle C}$ and $X_{\scriptscriptstyle R}$ and the switching curve $\Sigma_{\scriptscriptstyle R}$.
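To make the three-zone structure concrete, the short Python sketch below evaluates the unperturbed Hamiltonian vector field $(H_y,-H_x)$ of $\eqref{eq:01}|_{\epsilon=0}$, selecting the coefficients of the zone that contains the point. The coefficient values are purely hypothetical, chosen only for illustration.

```python
# Sketch of the piecewise structure of the unperturbed system.
# All coefficient values in `par` are hypothetical, for illustration only.

def zone(x):
    """Zone label of a point with abscissa x: R_L (x < -1), R_C, or R_R (x > 1)."""
    if x < -1:
        return "L"
    if x > 1:
        return "R"
    return "C"

def hamiltonian_field(x, y, par):
    """Unperturbed field (H_y, -H_x) for H = (b/2)y^2 - (c/2)x^2 + a*x*y + alpha*y - beta*x."""
    a, b, c, alpha, beta = par[zone(x)]
    return (b * y + a * x + alpha,   # H_y
            c * x - a * y + beta)    # -H_x

# One tuple (a, b, c, alpha, beta) per zone; with these values every zone
# carries a linear center and the central field is (y, -x), i.e. clockwise.
par = {"L": (0.0, 1.0, -1.0, 0.0, 0.0),
       "C": (0.0, 1.0, -1.0, 0.0, 0.0),
       "R": (0.0, 1.0, -1.0, 0.0, 0.0)}

print(zone(-2.0), zone(0.0), zone(1.5))   # -> L C R
print(hamiltonian_field(0.5, 0.0, par))   # -> (0.0, -0.5)
```

The clockwise orientation of the central field $(y,-x)$ is the one required by hypothesis (H3) below.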
We say that the vector field $X_{\scriptscriptstyle L}$ has a real equilibrium $p$ if $p$ is an equilibrium of $X_{\scriptscriptstyle L}$ and $p\in R_{\scriptscriptstyle L}$. Otherwise, we say that $X_{\scriptscriptstyle L}$ has a virtual equilibrium $p$ if $p\in (R_{\scriptscriptstyle L})^c$, where $(R_{\scriptscriptstyle L})^c$ denotes the complement of $R_{\scriptscriptstyle L}$ in $\mathbb{R}^2$. The derivative of the function $h_{\scriptscriptstyle L}$ in the direction of the vector field $X_{\scriptscriptstyle L}$, i.e., the expression $X_{\scriptscriptstyle L} h_{\scriptscriptstyle L}(p)=\langle X_{\scriptscriptstyle L}(p),\nabla h_{\scriptscriptstyle L}(p)\rangle,$ where $\left\langle \cdot,\cdot\right\rangle$ is the usual inner product in $\mathbb{R}^2$, characterizes the contact between the vector field $X_{\scriptscriptstyle L}$ and the switching curve $\Sigma_{\scriptscriptstyle L}$. When $p\in\Sigma_{\scriptscriptstyle L}$ and $X_{\scriptscriptstyle L} h_{\scriptscriptstyle L}(p)=0$ we say that $p$ is a {\it tangent point} of $X_{\scriptscriptstyle L}$. We distinguish the following subsets of $\Sigma_{\scriptscriptstyle L}$ (and similarly of $\Sigma_{\scriptscriptstyle R}$).
\noindent Crossing set: $$\Sigma_{\scriptscriptstyle L}^{c}=\{p\in \Sigma_{\scriptscriptstyle L}:X_{\scriptscriptstyle L} h_{\scriptscriptstyle L}(p) \cdot X_{\scriptscriptstyle C} h_{\scriptscriptstyle L}(p)>0\};$$ \noindent Sliding set: $$\Sigma_{\scriptscriptstyle L}^{s}=\{p\in \Sigma_{\scriptscriptstyle L}:X_{\scriptscriptstyle L} h_{\scriptscriptstyle L}(p)>0, X_{\scriptscriptstyle C} h_{\scriptscriptstyle L}(p)<0\};$$ \noindent Escaping set: $$\Sigma_{\scriptscriptstyle L}^{e}=\{p\in \Sigma_{\scriptscriptstyle L}:X_{\scriptscriptstyle L} h_{\scriptscriptstyle L}(p)<0, X_{\scriptscriptstyle C} h_{\scriptscriptstyle L}(p)>0\}.$$ Suppose that system $\eqref{eq:01}|_{\epsilon=0}$ satisfies the following hypotheses: \begin{itemize} \item[(H1)] The unperturbed central subsystem from $\eqref{eq:01}|_{\epsilon=0}$ has a real center and the other unperturbed subsystems from $\eqref{eq:01}|_{\epsilon=0}$ have centers or saddles. \item[(H2)] The unperturbed system $\eqref{eq:01}|_{\epsilon=0}$ has only crossing points on the straight lines $x=\pm 1$, except for some tangent points. \item[(H3)] The unperturbed system $\eqref{eq:01}|_{\epsilon=0}$ has a periodic annulus consisting of a family of crossing periodic orbits around the origin such that each orbit of this family passes through the three zones with clockwise orientation. \end{itemize} The main result of this paper is the following. \begin{theorem}\label{the:01} The number of limit cycles of system \eqref{eq:01}, satisfying hypotheses {\rm (Hi)} for $i=1,2,3$, which can bifurcate from the periodic annulus of the unperturbed system $\eqref{eq:01}|_{\epsilon=0}$ is at least three. \end{theorem} The paper is organized as follows. In Section \ref{sec:Mel} we introduce the first order Melnikov function associated to system $\eqref{eq:01}$.
In Section \ref{sec:NF} we obtain a normal form for system $\eqref{eq:01}|_{\epsilon=0}$ that simplifies the computations, and in Section \ref{sec:Teo} we prove Theorem \ref{the:01}. \section{Melnikov Function}\label{sec:Mel} In this section, we present the first order Melnikov function associated to system $\eqref{eq:01}$, which we will use to prove the main result of this paper. Suppose that $\eqref{eq:01}|_{\epsilon=0}$ satisfies hypothesis (H3), i.e. there exists an open interval $J=(\alpha,\beta)$ such that for each $h\in J$ we have four points, $A(h)=(1,h)$, $A_1(h)=(1,a_1(h))\in \Sigma_{\scriptscriptstyle R}$, with $a_1(h)<h$, and $A_2(h)=(-1,a_2(h))$, $A_3(h)=(-1,a_3(h))\in \Sigma_{\scriptscriptstyle L}$, with $a_2(h)<a_3(h)$, which are determined by the following equations \begin{equation}\label{eq:05} \begin{aligned} & H^{\scriptscriptstyle R}(A(h))=H^{\scriptscriptstyle R}(A_1(h)), \\ & H^{\scriptscriptstyle C}(A_1(h))=H^{\scriptscriptstyle C}(A_2(h)), \\ & H^{\scriptscriptstyle L}(A_2(h))=H^{\scriptscriptstyle L}(A_3(h)), \\ & H^{\scriptscriptstyle C}(A_3(h))=H^{\scriptscriptstyle C}(A(h)), \end{aligned} \end{equation} satisfying, for $h\in J$, $$H^{\scriptscriptstyle R}_y(A(h))\,H^{\scriptscriptstyle R}_y(A_1(h))\,H^{\scriptscriptstyle L}_y(A_2(h))\,H^{\scriptscriptstyle L}_y(A_3(h))\ne 0,$$ $$H^{\scriptscriptstyle C}_y(A(h))\,H^{\scriptscriptstyle C}_y(A_1(h))\,H^{\scriptscriptstyle C}_y(A_2(h))\,H^{\scriptscriptstyle C}_y(A_3(h))\ne 0.$$ Moreover, system $\eqref{eq:01}|_{\epsilon=0}$ has a crossing periodic orbit $L_h=L_h^{\scriptscriptstyle R}\cup\bar{L}_h^{\scriptscriptstyle C}\cup L_h^{\scriptscriptstyle L}\cup L_h^{\scriptscriptstyle C}$ passing through these points (see Fig.
\ref{fig:01}), where \begin{equation*} \begin{aligned} L_h^{\scriptscriptstyle R}=\,&\Big\{(x,y)\in\mathbb{R}^2:H^{\scriptscriptstyle R}(x,y)=H^{\scriptscriptstyle R}(A(h))=\dfrac{b_{\scriptscriptstyle R}}{2}h^2+(a_{\scriptscriptstyle R}+\alpha_{\scriptscriptstyle R})h\\ &\,\,\,\,-\Big(\dfrac{c_{\scriptscriptstyle R}}{2}+\beta_{\scriptscriptstyle R}\Big),x>1\Big\}, \\ \bar{L}_h^{\scriptscriptstyle C}=\,&\{(x,y)\in\mathbb{R}^2:H^{\scriptscriptstyle C}(x,y)=H^{\scriptscriptstyle C}(A_1(h)),-1\leq x\leq1 \quad\text{and}\quad y<0\}, \\ L_h^{\scriptscriptstyle L}=\,&\{(x,y)\in\mathbb{R}^2:H^{\scriptscriptstyle L}(x,y)=H^{\scriptscriptstyle L}(A_2(h)),x<-1\}, \\ L_h^{\scriptscriptstyle C}=\,&\{(x,y)\in\mathbb{R}^2:H^{\scriptscriptstyle C}(x,y)=H^{\scriptscriptstyle C}(A_3(h)),-1\leq x\leq1 \quad\text{and}\quad y>0\}. \end{aligned} \end{equation*} \begin{figure} \caption{The crossing periodic orbit of system $\eqref{eq:01}|_{\epsilon=0}$.} \label{fig:01} \end{figure} Consider the solution of the right subsystem from \eqref{eq:01} starting from the point $A(h)$. Let $A_{\epsilon}(h)=(1,a_{\epsilon}(h))$ be the first intersection point of this orbit with the straight line $x=1$. Denote by $B_{\epsilon}(h)=(-1,b_{\epsilon}(h))$ the first intersection point with the straight line $x=-1$ of the orbit of the central subsystem from \eqref{eq:01} starting at $A_{\epsilon}(h)$, by $C_{\epsilon}(h)=(-1,c_{\epsilon}(h))$ the first intersection point with the straight line $x=-1$ of the orbit of the left subsystem from \eqref{eq:01} starting at $B_{\epsilon}(h)$, and by $D_{\epsilon}(h)=(1,d_{\epsilon}(h))$ the first intersection point with the straight line $x=1$ of the orbit of the central subsystem from \eqref{eq:01} starting at $C_{\epsilon}(h)$ (see Fig. \ref{fig:02}).
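For $\epsilon=0$, the points $A_1(h)$, $A_2(h)$ and $A_3(h)$ can be obtained directly from the level-matching equations \eqref{eq:05}, since each $H^i$ restricted to a vertical line is quadratic in $y$. A minimal Python sketch of this construction follows; the coefficient tuples are hypothetical center-type values chosen only for illustration, not tied to any result of the paper.

```python
import math

# Sketch: locate A_1, A_2, A_3 for the unperturbed system by matching
# Hamiltonian levels across the switching lines.  Each coefficient tuple
# (a, b, c, alpha, beta) below is hypothetical, for illustration only.

def H(par, x, y):
    a, b, c, alpha, beta = par
    return (b / 2) * y**2 - (c / 2) * x**2 + a * x * y + alpha * y - beta * x

def level_roots(par, x0, E):
    """Both ordinates y with H(x0, y) = E, sorted (assumes b > 0, real roots)."""
    a, b, c, alpha, beta = par
    p = a * x0 + alpha
    s = math.sqrt(p * p + 2 * b * ((c / 2) * x0**2 + beta * x0 + E))
    return (-p - s) / b, (-p + s) / b

def crossing_points(h, right, center, left):
    a1 = level_roots(right, 1, H(right, 1, h))[0]      # A_1: lower point on x = 1
    a2 = level_roots(center, -1, H(center, 1, a1))[0]  # A_2: lower point on x = -1
    a3 = level_roots(left, -1, H(left, -1, a2))[1]     # A_3: upper point on x = -1
    d = level_roots(center, 1, H(center, -1, a3))[1]   # closure: d = h on a periodic orbit
    return a1, a2, a3, d

right = center = left = (0.0, 1.0, -1.0, 0.0, 0.0)     # linear centers at the origin
print(crossing_points(0.5, right, center, left))       # -> (-0.5, -0.5, 0.5, 0.5)
```

With these symmetric hypothetical coefficients the orbit closes exactly ($d=h$), in agreement with the crossing periodic orbit $L_h$ described above.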
\begin{figure} \caption{Poincaré map of system \eqref{eq:01}.} \label{fig:02} \end{figure} We define the Poincaré map of piecewise system \eqref{eq:01} as follows, $$H^{\scriptscriptstyle R}(D_{\epsilon}(h))-H^{\scriptscriptstyle R}(A(h))=\epsilon M(h)+\mathcal{O}(\epsilon^2),$$ where $M(h)$ is called the {\it first order Melnikov function} associated to piecewise system \eqref{eq:01}. Then, using the same idea of Theorem 1.1 in \cite{Liu10}, it is easy to obtain the following theorem. \begin{theorem} Consider system \eqref{eq:01} with $0\leq \epsilon <<1$ and suppose that the unperturbed system $\eqref{eq:01}|_{\epsilon=0}$ has a family of crossing periodic orbits around the origin. Then the first order Melnikov function can be expressed as \begin{equation}\label{eq:mel} \begin{aligned} M(h) & = \frac{H_y^{\scriptscriptstyle R}(A)}{H_y^{\scriptscriptstyle C}(A)} I_{\scriptscriptstyle C} + \frac{H_y^{\scriptscriptstyle R}(A)H_y^{\scriptscriptstyle C}(A_3)}{H_y^{\scriptscriptstyle C}(A)H_y^{\scriptscriptstyle L}(A_3)} I_{\scriptscriptstyle L}\\ & \quad + \frac{H_y^{\scriptscriptstyle R}(A)H_y^{\scriptscriptstyle C}(A_3)H_y^{\scriptscriptstyle L}(A_2)}{H_y^{\scriptscriptstyle C}(A)H_y^{\scriptscriptstyle L}(A_3)H_y^{\scriptscriptstyle C}(A_2)} \bar{I}_{\scriptscriptstyle C} \\ &\quad + \frac{H_y^{\scriptscriptstyle R}(A)H_y^{\scriptscriptstyle C}(A_3)H_y^{\scriptscriptstyle L}(A_2)H_y^{\scriptscriptstyle C}(A_1)}{H_y^{\scriptscriptstyle C}(A)H_y^{\scriptscriptstyle L}(A_3)H_y^{\scriptscriptstyle C}(A_2)H_y^{\scriptscriptstyle R}(A_1)} I_{\scriptscriptstyle R}, \end{aligned} \end{equation} where $$I_{\scriptscriptstyle C}=\int_{\widehat{A_3A}}g_{\scriptscriptstyle C}dx-f_{\scriptscriptstyle C}dy, \,\, I_{\scriptscriptstyle L}=\int_{\widehat{A_2A_3}}g_{\scriptscriptstyle L}dx-f_{\scriptscriptstyle L}dy,\,\, \bar{I}_{\scriptscriptstyle C}=\int_{\widehat{A_1A_2}}g_{\scriptscriptstyle C}dx-f_{\scriptscriptstyle C}dy$$ and $$\quad I_{\scriptscriptstyle 
R}=\int_{\widehat{AA_1}}g_{\scriptscriptstyle R}dx-f_{\scriptscriptstyle R}dy.$$ Furthermore, if $M(h)$ has a simple zero at $h^{*}$, then for $0< \epsilon <<1$, system \eqref{eq:01} has a unique limit cycle near $L_{h^{*}}$. \end{theorem} \section{Normal Form}\label{sec:NF} In order to simplify the computations needed to prove Theorem \ref{the:01}, it is convenient to perform a continuous linear change of variables which transforms system $\eqref{eq:01}|_{\epsilon=0}$ into a simpler form. This change of variables is a homeomorphism which keeps the straight lines $x=\pm 1$ invariant. Furthermore, this homeomorphism is a topological equivalence between the systems. More precisely, we have the following result. \begin{proposition}\label{fn:01} The discontinuous piecewise linear differential system $\eqref{eq:01}|_{\epsilon=0}$ satisfying assumptions (Hi), $i=1,2,3$, after a change of variables, can be written as \begin{equation}\label{eq:04} \left\{\begin{array}{ll} \dot{x}= H_y(x,y), \\ \dot{y}= -H_x(x,y), \end{array} \right. \end{equation} where \begin{equation}\label{eq:fnh} H(x,y)=\left\{\begin{array}{ll} \begin{aligned} H^{\scriptscriptstyle L}(x,y)=&\,\dfrac{b_{\scriptscriptstyle L}}{2}y^2-\dfrac{c_{\scriptscriptstyle L}}{2}x^2+a_{\scriptscriptstyle L}xy\\ &+a_{\scriptscriptstyle L}y-\beta_{\scriptscriptstyle L}x,\quad\quad\quad\quad\quad x\leq -1, \\ H^{\scriptscriptstyle C}(x,y)=&\,\dfrac{1}{2}x^2+\dfrac{1}{2}y^2, \quad\quad\quad\quad -1\leq x\leq 1, \\ H^{\scriptscriptstyle R}(x,y)=&\,\dfrac{b_{\scriptscriptstyle R}}{2}y^2-\dfrac{c_{\scriptscriptstyle R}}{2}x^2+a_{\scriptscriptstyle R}xy\\ &-a_{\scriptscriptstyle R}y-\beta_{\scriptscriptstyle R}x,\,\, \quad\quad\quad\quad\quad x \geq 1. \\ \end{aligned} \end{array} \right. \end{equation} \end{proposition} \begin{proof} Through a translation, we can assume that the singularity of the central subsystem from $\eqref{eq:01}|_{\epsilon=0}$ is the origin, i.e. $\alpha_{\scriptscriptstyle C}=\beta_{\scriptscriptstyle C}=0$.
By hypotheses (H1) and (H3), the central subsystem from $\eqref{eq:01}|_{\epsilon=0}$ satisfies $a_{\scriptscriptstyle C}^2+b_{\scriptscriptstyle C}c_{\scriptscriptstyle C}<0$ and $b_{\scriptscriptstyle C}>0$. Note that $b_i\ne 0$, for $i=L,R$. In fact, if the singular points of the subsystems from $\eqref{eq:01}|_{\epsilon=0}$ are centers, this is true due to the clockwise orientation of the orbits. Now, if the singular points are saddles and $b_i=0$, we have separatrices parallel to the switching straight lines $x=\pm 1$. System $\eqref{eq:01}|_{\epsilon=0}$ has four tangent points given by $P_1=(1,-a_{\scriptscriptstyle C}/b_{\scriptscriptstyle C})$, $P_2=(1,-(a_{\scriptscriptstyle R}+\alpha_{\scriptscriptstyle R})/b_{\scriptscriptstyle R})$, $P_3=(-1,a_{\scriptscriptstyle C}/b_{\scriptscriptstyle C})$ and $P_4=(-1,(a_{\scriptscriptstyle L}-\alpha_{\scriptscriptstyle L})/b_{\scriptscriptstyle L})$. By hypothesis (H2), system $\eqref{eq:01}|_{\epsilon=0}$ has only crossing points on the straight lines $x=\pm 1$, except at the tangent points. Hence, for all $y\in\mathbb{R}\setminus\{\pm a_{\scriptscriptstyle C}/b_{\scriptscriptstyle C},-(a_{\scriptscriptstyle R}+\alpha_{\scriptscriptstyle R})/b_{\scriptscriptstyle R},(a_{\scriptscriptstyle L}-\alpha_{\scriptscriptstyle L})/b_{\scriptscriptstyle L}\}$, we must have $$\left\langle X_{\scriptscriptstyle L}(-1,y),(1,0)\right\rangle \left\langle X_{\scriptscriptstyle C}(-1,y),(1,0)\right\rangle>0$$ and $$\left\langle X_{\scriptscriptstyle R}(1,y),(1,0)\right\rangle \left\langle X_{\scriptscriptstyle C}(1,y),(1,0)\right\rangle>0.$$ But this implies that $b_{\scriptscriptstyle L}b_{\scriptscriptstyle C}>0$, $b_{\scriptscriptstyle R}b_{\scriptscriptstyle C}>0$, $P_1=P_2$ and $P_3=P_4$.
Therefore, as $b_{\scriptscriptstyle C}>0$, we have that \begin{equation}\label{ch:01} \alpha_{\scriptscriptstyle L}=\frac{a_{\scriptscriptstyle L}b_{\scriptscriptstyle C}-a_{\scriptscriptstyle C}b_{\scriptscriptstyle L}}{b_{\scriptscriptstyle C}},\quad b_{\scriptscriptstyle L}>0,\quad \alpha_{\scriptscriptstyle R}=\frac{-a_{\scriptscriptstyle R}b_{\scriptscriptstyle C}+a_{\scriptscriptstyle C}b_{\scriptscriptstyle R}}{b_{\scriptscriptstyle C}}\quad\text{and}\quad b_{\scriptscriptstyle R}>0. \end{equation} Assuming conditions \eqref{ch:01}, consider the change of variables \begin{displaymath} \left(\begin{array}{c} x\\ y \end{array}\right)=\left(\begin{array}{cc} 1 & 0\\ -\dfrac{a_{\scriptscriptstyle C}}{b_{\scriptscriptstyle C}} & \dfrac{\sqrt{-a^2_{\scriptscriptstyle C}-b_{\scriptscriptstyle C}c_{\scriptscriptstyle C}}}{b_{\scriptscriptstyle C}} \end{array}\right)\left(\begin{array}{c} u\\ v \end{array}\right) \end{displaymath} and the time rescaling $\tilde{t}=\sqrt{-a^2_{\scriptscriptstyle C}-b_{\scriptscriptstyle C}c_{\scriptscriptstyle C}}\,t$. Applying the change of variables and the time rescaling above, and rewriting the parameters conveniently, system $\eqref{eq:01}|_{\epsilon=0}$ becomes system \eqref{eq:04}. \end{proof} In what follows, we will consider the discontinuous planar piecewise linear near--Hamiltonian system \eqref{eq:01} with $f(x,y)$, $g(x,y)$ and $H(x,y)$ given by \eqref{eq:02}, \eqref{eq:03} and \eqref{eq:fnh}, respectively. \section{Proof of Theorem \ref{the:01}}\label{sec:Teo} The proof of Theorem \ref{the:01} is a straightforward consequence of Corollaries \ref{scs-a}, \ref{ccs-c}-\ref{ccc-c}. We can classify the systems that satisfy hypothesis (H1) according to the configuration of their singular points. Thus, denoting the centers by the capital letter C and the saddles by S, in the case of three zones, we have the following three classes of piecewise linear Hamiltonian systems: SCS, CCS and CCC.
That is, CCC indicates that the singular points of the linear systems that define the piecewise differential system are centers, and so on. In order to compute the zeros of the first order Melnikov function, it is necessary to find the open interval $J$ where it is defined. For this, consider the following proposition. \begin{proposition} Consider system \eqref{eq:01} with hypotheses (Hi), $i=1,2,3$. \begin{itemize} \item[(a)] If system $\eqref{eq:01}|_{\epsilon=0}$ is of type SCS or CCS, then $J=(0,\tau)$, where $\tau=(a_{\scriptscriptstyle R}^2-b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R}-\omega_{\scriptscriptstyle RS}^2)/b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}$ with $\omega_{\scriptscriptstyle RS}=\sqrt{a^2_{\scriptscriptstyle R} + b_{\scriptscriptstyle R}c_{\scriptscriptstyle R}}$, and the periodic annulus is equivalent to one of those in Fig. \ref{fig:03}. \item[(b)] If system $\eqref{eq:01}|_{\epsilon=0}$ is of type CCC, then $J=(0,\infty)$, and the periodic annulus is equivalent to one of those in Fig. \ref{fig:04}. \end{itemize} \end{proposition} \begin{proof} Suppose that system $\eqref{eq:01}|_{\epsilon=0}$ is of type SCS or CCS. Note that if the saddles are virtual or if they are on the straight lines $x=\pm 1$, then there are no periodic orbits passing through the three zones. Denote by $W^u_{\scriptscriptstyle R}$ and $W^s_{\scriptscriptstyle R}$ (resp. $W^u_{\scriptscriptstyle L}$ and $W^s_{\scriptscriptstyle L}$) the unstable and stable separatrices of the saddles of the right (resp. left) subsystem from $\eqref{eq:01}|_{\epsilon=0}$, respectively. Denote by $P_{\scriptscriptstyle L}^{i}=W^i_{\scriptscriptstyle L}\cap \Sigma_{\scriptscriptstyle L}$ and $P_{\scriptscriptstyle R}^{i}=W^i_{\scriptscriptstyle R}\cap \Sigma_{\scriptscriptstyle R}$, for $i=u,s$.
After some computations, it is possible to show that $$P_{\scriptscriptstyle L}^{u}=\Bigg(-1,\frac{a_{\scriptscriptstyle L}^2+b_{\scriptscriptstyle L} \beta_{\scriptscriptstyle L}-\omega_{\scriptscriptstyle LS}^2}{b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}}\Bigg),\quad P_{\scriptscriptstyle L}^{s}=\Bigg(-1,-\frac{a_{\scriptscriptstyle L}^2+b_{\scriptscriptstyle L} \beta_{\scriptscriptstyle L}-\omega_{\scriptscriptstyle LS}^2}{b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}}\Bigg),$$ $$P_{\scriptscriptstyle R}^{u}=\Bigg(1,-\frac{a_{\scriptscriptstyle R}^2-b_{\scriptscriptstyle R} \beta_{\scriptscriptstyle R}-\omega_{\scriptscriptstyle RS}^2}{b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}}\Bigg)\quad\text{and}\quad P_{\scriptscriptstyle R}^{s}=\Bigg(1,\frac{a_{\scriptscriptstyle R}^2-b_{\scriptscriptstyle R} \beta_{\scriptscriptstyle R}-\omega_{\scriptscriptstyle RS}^2}{b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}}\Bigg),$$ where $\omega_{\scriptscriptstyle LS}=\sqrt{a^2_{\scriptscriptstyle L} + b_{\scriptscriptstyle L}c_{\scriptscriptstyle L}}$ and $\omega_{\scriptscriptstyle RS}=\sqrt{a^2_{\scriptscriptstyle R} + b_{\scriptscriptstyle R}c_{\scriptscriptstyle R}}$. Note that we have a symmetry between the points $P_{\scriptscriptstyle L}^{u}$ and $P_{\scriptscriptstyle L}^{s}$ (resp. $P_{\scriptscriptstyle R}^{u}$ and $P_{\scriptscriptstyle R}^{s}$) with respect to the $x$-axis. Define $\tau$ as the smallest ordinate value between the points $P_{\scriptscriptstyle R}^{s}$ and $P_{\scriptscriptstyle L}^{u}$, i.e. $\tau= \min\{(a_{\scriptscriptstyle R}^2-b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R}-\omega_{\scriptscriptstyle RS}^2)/b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS},(a_{\scriptscriptstyle L}^2+b_{\scriptscriptstyle L}\beta_{\scriptscriptstyle L}-\omega_{\scriptscriptstyle LS}^2)/b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}\}$.
Then, up to a reflection around the $y$-axis, we can assume that $\tau=(a_{\scriptscriptstyle R}^2-b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R}-\omega_{\scriptscriptstyle RS}^2)/b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}$. As the vector field $X_{\scriptscriptstyle C}$ associated with the central subsystem from $\eqref{eq:01}|_{\epsilon=0}$ is $X_{\scriptscriptstyle C}(x,y)=(y,-x)$, if system $\eqref{eq:01}|_{\epsilon=0}$ is of type SCS and the ordinates of the points $P_{\scriptscriptstyle R}^{s}$ and $P_{\scriptscriptstyle L}^{u}$ are distinct (see Fig. \ref{fig:03} (a)) or if system $\eqref{eq:01}|_{\epsilon=0}$ is of type CCS (see Fig. \ref{fig:03} (c) or (d)), then we have a homoclinic loop passing through the points $P_{\scriptscriptstyle R}^{s}$ and $P_{\scriptscriptstyle R}^{u}$. Otherwise, if system $\eqref{eq:01}|_{\epsilon=0}$ is of type SCS and the ordinates of the points $P_{\scriptscriptstyle R}^{s}$ and $P_{\scriptscriptstyle L}^{u}$ are the same (see Fig. \ref{fig:03} (b)), then we have a heteroclinic orbit passing through the points $P_{\scriptscriptstyle R}^{s}$, $P_{\scriptscriptstyle R}^{u}$, $P_{\scriptscriptstyle L}^{s}$ and $P_{\scriptscriptstyle L}^{u}$. Moreover, the central subsystem from $\eqref{eq:01}|_{\epsilon=0}$ has a periodic orbit tangent to the straight lines $x=\pm 1$ at the points $P_{\scriptscriptstyle R}=(1,0)$ and $P_{\scriptscriptstyle L}=(-1,0)$. Fig. \ref{fig:03} shows the possible phase portraits of system $\eqref{eq:01}|_{\epsilon=0}$ of types SCS and CCS.
\begin{figure} \caption{Phase portrait of system $\eqref{eq:01}|_{\epsilon=0}$ of type: (a) SCS with the ordinates of the points $P_{\scriptscriptstyle R}^{s}$ and $P_{\scriptscriptstyle L}^{u}$ distinct; (b) SCS with the ordinates of the points $P_{\scriptscriptstyle R}^{s}$ and $P_{\scriptscriptstyle L}^{u}$ equal; (c) CCS when the left subsystem has a virtual center; (d) CCS when the left subsystem has a real center.} \label{fig:03} \end{figure} Consider an initial point of the form $A(h)=(1,h)$, with $h\in (0,\tau)$. By hypothesis (H3), system $\eqref{eq:01}|_{\epsilon=0}$ has a family of crossing periodic orbits that intersects the straight lines $x=\pm1$ at four points, $A(h)$, $A_1(h)=(1,a_1(h))$, with $a_1(h)<h$, and $A_2(h)=(-1,a_2(h))$, $A_3(h)=(-1,a_3(h))$, with $a_2(h)<a_3(h)$, satisfying \begin{equation*} \begin{aligned} & H^{\scriptscriptstyle R}(A(h))=H^{\scriptscriptstyle R}(A_1(h)), \\ & H^{\scriptscriptstyle C}(A_1(h))=H^{\scriptscriptstyle C}(A_2(h)), \\ & H^{\scriptscriptstyle L}(A_2(h))=H^{\scriptscriptstyle L}(A_3(h)), \\ & H^{\scriptscriptstyle C}(A_3(h))=H^{\scriptscriptstyle C}(A(h)), \end{aligned} \end{equation*} where $H^{\scriptscriptstyle R}$, $H^{\scriptscriptstyle C}$ and $H^{\scriptscriptstyle L}$ are given by \eqref{eq:fnh}. More precisely, we have the equations \begin{equation*} \begin{aligned} & \frac{b_{\scriptscriptstyle R}}{2}(h-a_1(h))(h+a_1(h))=0, \\ & \frac{1}{2}(a_1(h)-a_2(h))(a_1(h)+a_2(h))=0, \\ & \frac{b_{\scriptscriptstyle L}}{2}(a_2(h)-a_3(h))(a_2(h)+a_3(h))=0, \\ & \frac{1}{2}(a_3(h)-h)(a_3(h)+h)=0. \end{aligned} \end{equation*} As $a_1(h)<h$, $a_2(h)<a_3(h)$, $b_{\scriptscriptstyle R}>0$ and $b_{\scriptscriptstyle L}>0$, the only solution of the system above is $a_1(h)=-h$, $a_2(h)=-h$ and $a_3(h)=h$, i.e. we have the four points $A(h)=(1,h)$, $A_1(h)=(1,-h)$, $A_2(h)=(-1,-h)$ and $A_3(h)=(-1,h)$. Moreover, system $\eqref{eq:01}|_{\epsilon=0}$ has a periodic orbit $L_h$ passing through these points, for all $h\in(0,\tau)$.
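The cancellations behind these factorized equations can also be checked numerically: with the normal form \eqref{eq:fnh} (in which $\alpha_{\scriptscriptstyle L}=a_{\scriptscriptstyle L}$ and $\alpha_{\scriptscriptstyle R}=-a_{\scriptscriptstyle R}$), the four level-matching equations hold at $A=(1,h)$, $A_1=(1,-h)$, $A_2=(-1,-h)$ and $A_3=(-1,h)$ for randomly drawn parameters with $b_{\scriptscriptstyle L},b_{\scriptscriptstyle R}>0$. The Python sketch below does exactly this; the parameter ranges are arbitrary.

```python
import random

# Numerical check of the solution a_1 = a_2 = -h, a_3 = h under the paper's
# normal form: outer Hamiltonians with alpha_L = a_L and alpha_R = -a_R, and
# central Hamiltonian (x^2 + y^2)/2.  Parameter ranges are arbitrary.

def H_outer(a, b, c, alpha, beta, x, y):
    return (b / 2) * y**2 - (c / 2) * x**2 + a * x * y + alpha * y - beta * x

def H_center(x, y):
    return (x**2 + y**2) / 2

random.seed(1)
for _ in range(100):
    aL, aR = random.uniform(-2, 2), random.uniform(-2, 2)
    bL, bR = random.uniform(0.1, 2), random.uniform(0.1, 2)   # b_L, b_R > 0
    cL, cR = random.uniform(-2, 2), random.uniform(-2, 2)
    betaL, betaR = random.uniform(-2, 2), random.uniform(-2, 2)
    h = random.uniform(0.1, 3)
    # H^R(A) = H^R(A_1): the terms linear in h cancel because alpha_R = -a_R
    assert abs(H_outer(aR, bR, cR, -aR, betaR, 1, h)
               - H_outer(aR, bR, cR, -aR, betaR, 1, -h)) < 1e-9
    # H^C(A_1) = H^C(A_2) and H^C(A_3) = H^C(A)
    assert abs(H_center(1, -h) - H_center(-1, -h)) < 1e-12
    assert abs(H_center(-1, h) - H_center(1, h)) < 1e-12
    # H^L(A_2) = H^L(A_3): the terms linear in h cancel because alpha_L = a_L
    assert abs(H_outer(aL, bL, cL, aL, betaL, -1, -h)
               - H_outer(aL, bL, cL, aL, betaL, -1, h)) < 1e-9
print("all level-matching identities verified")
```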
If $h\in[\tau,\infty)$ then the orbit of the system $\eqref{eq:01}|_{\epsilon=0}$ with initial condition at $A(h)$ does not return to the straight line $x=1$ for positive times, i.e. the system $\eqref{eq:01}|_{\epsilon=0}$ has no periodic orbit passing through the point $A(h)$. Therefore, if $h\in(0,\tau)$ the system $\eqref{eq:01}|_{\epsilon=0}$ has a periodic annulus, formed by the periodic orbits $L_h$, limited by one (see Fig. \ref{fig:03} (a)--(c)) or two (see Fig. \ref{fig:03} (d)) periodic orbits tangent to the straight lines $x=\pm1$, when $h=0$, and a homoclinic loop (see Fig. \ref{fig:03} (a), (c) and (d)) or a heteroclinic orbit (see Fig. \ref{fig:03} (b)), when $h=\tau$. Therefore, item (a) is proven. To prove item (b), suppose that the system $\eqref{eq:01}|_{\epsilon=0}$ is of type CCC. The central subsystem from $\eqref{eq:01}|_{\epsilon=0}$ has a periodic orbit tangent to the straight lines $x=\pm 1$ at the points $P_{\scriptscriptstyle R}=(1,0)$ and $P_{\scriptscriptstyle L}=(-1,0)$. Moreover, as in the previous case, for each $h\in(0,\infty)$ we have a periodic orbit $L_h$ passing through the points $A(h)=(1,h)$, $A_1(h)=(1,-h)$, $A_2(h)=(-1,-h)$ and $A_3(h)=(-1,h)$. Therefore, the system $\eqref{eq:01}|_{\epsilon=0}$ has a continuum of periodic orbits formed by the periodic orbits $L_h$, with $h\in(0,\infty)$, limited by one (see Fig. \ref{fig:04} (a)), two (see Fig. \ref{fig:04} (b)) or three (see Fig. \ref{fig:04} (c)) periodic orbits tangent to the straight lines $x=\pm1$, when $h=0$. Fig. \ref{fig:04} shows the possible phase portraits of system $\eqref{eq:01}|_{\epsilon=0}$ of type CCC. 
\begin{figure} \caption{Phase portrait of system $\eqref{eq:01}|_{\epsilon=0}$ of type CCC when: (a) the left and right subsystems have virtual centers; (b) the left subsystem has a real center and the right subsystem has a virtual center; (c) the left and right subsystems have real centers.} \label{fig:04} \end{figure} \end{proof} The coefficients that multiply the integrals of the first order Melnikov function \eqref{eq:mel} associated to system \eqref{eq:01} can be easily calculated. More precisely, we have the following immediate corollary. \begin{corollary} Let $J$ be the interval of definition of the Melnikov function \eqref{eq:mel}. For $h\in J$, $$ \frac{H_y^{\scriptscriptstyle R}(A)}{H_y^{\scriptscriptstyle C}(A)}=b_{\scriptscriptstyle R},\quad \frac{H_y^{\scriptscriptstyle R}(A)H_y^{\scriptscriptstyle C}(A_3)}{H_y^{\scriptscriptstyle C}(A)H_y^{\scriptscriptstyle L}(A_3)}=\frac{b_{\scriptscriptstyle R}}{b_{\scriptscriptstyle L}},\quad \frac{H_y^{\scriptscriptstyle R}(A)H_y^{\scriptscriptstyle C}(A_3)H_y^{\scriptscriptstyle L}(A_2)}{H_y^{\scriptscriptstyle C}(A)H_y^{\scriptscriptstyle L}(A_3)H_y^{\scriptscriptstyle C}(A_2)}=b_{\scriptscriptstyle R}$$ and $$\frac{H_y^{\scriptscriptstyle R}(A)H_y^{\scriptscriptstyle C}(A_3)H_y^{\scriptscriptstyle L}(A_2)H_y^{\scriptscriptstyle C}(A_1)}{H_y^{\scriptscriptstyle C}(A)H_y^{\scriptscriptstyle L}(A_3)H_y^{\scriptscriptstyle C}(A_2)H_y^{\scriptscriptstyle R}(A_1)}=1. $$ Then, the first order Melnikov function associated to system \eqref{eq:01} can be written as \begin{equation}\label{eq:mel01} \begin{aligned} M(h) = &\,\, b_{\scriptscriptstyle R}\int_{\widehat{A_3A}}g_{\scriptscriptstyle C}dx-f_{\scriptscriptstyle C}dy+\frac{b_{\scriptscriptstyle R}}{b_{\scriptscriptstyle L}} \int_{\widehat{A_2A_3}}g_{\scriptscriptstyle L}dx-f_{\scriptscriptstyle L}dy \\ &\, + b_{\scriptscriptstyle R}\int_{\widehat{A_1A_2}}g_{\scriptscriptstyle C}dx-f_{\scriptscriptstyle C}dy+\int_{\widehat{AA_1}}g_{\scriptscriptstyle R}dx-f_{\scriptscriptstyle R}dy. 
\end{aligned} \end{equation} \end{corollary} In what follows, we will determine the first order Melnikov function associated to system \eqref{eq:01} when the system $\eqref{eq:01}|_{\epsilon=0}$ is of type SCS, CCS or CCC. For this, we define the functions: \begin{equation}\label{eq:func} \begin{aligned} f_0(h) = \hspace{0.1cm} & h, \quad h\in(0,\infty), \\ f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h) =\hspace{0.1cm} & (h^2+1)\arccos\bigg(\frac{h^2-1}{h^2+1}\bigg),\quad h\in(0,\infty), \\ f_{\scriptscriptstyle R}^{\scriptscriptstyle C}(h) =\hspace{0.1cm} & ((a_{\scriptscriptstyle R}^2 - b_{\scriptscriptstyle R} \beta_{\scriptscriptstyle R})^2 + (2 a_{\scriptscriptstyle R}^2 + b_{\scriptscriptstyle R}^2 h^2 - 2 b_{\scriptscriptstyle R} \beta_{\scriptscriptstyle R}) \omega_{\scriptscriptstyle RC}^2 ) F_{\scriptscriptstyle R}^{\scriptscriptstyle C}(h)\\ &+ \omega_{\scriptscriptstyle RC}^4 F_{\scriptscriptstyle R}^{\scriptscriptstyle C}(h), \quad h\in(0,\infty), \\ f_{\scriptscriptstyle L}^{\scriptscriptstyle C}(h) =\hspace{0.1cm} & ((a_{\scriptscriptstyle L}^2 + b_{\scriptscriptstyle L} \beta_{\scriptscriptstyle L})^2 + (2 a_{\scriptscriptstyle L}^2 + b_{\scriptscriptstyle L}^2 h^2 + 2 b_{\scriptscriptstyle L}\beta_{\scriptscriptstyle L}) \omega_{\scriptscriptstyle LC}^2 ) F_{\scriptscriptstyle L}^{\scriptscriptstyle C}(h)\\ & + \omega_{\scriptscriptstyle LC}^4 F_{\scriptscriptstyle L}^{\scriptscriptstyle C}(h), \quad h\in(0,\infty),\\ f_{\scriptscriptstyle R}^{\scriptscriptstyle S}(h) =\hspace{0.1cm} & (a_{\scriptscriptstyle R}^2 - b_{\scriptscriptstyle R} \beta_{\scriptscriptstyle R} + b_{\scriptscriptstyle R} \omega_{\scriptscriptstyle RS} h - \omega_{\scriptscriptstyle RS}^2) \\ & \times(-a_{\scriptscriptstyle R}^2 + b_{\scriptscriptstyle R} \beta_{\scriptscriptstyle R} + b_{\scriptscriptstyle R} \omega_{\scriptscriptstyle RS} h + \omega_{\scriptscriptstyle RS}^2)F_{\scriptscriptstyle R}^{\scriptscriptstyle S}(h), \quad h\in(0,\tau), \\ 
f_{\scriptscriptstyle L}^{\scriptscriptstyle S}(h) =\hspace{0.1cm} & (-a_{\scriptscriptstyle L}^2 - b_{\scriptscriptstyle L} \beta_{\scriptscriptstyle L} + b_{\scriptscriptstyle L} \omega_{\scriptscriptstyle LS} h + \omega_{\scriptscriptstyle LS}^2)\\ & \times (a_{\scriptscriptstyle L}^2 + b_{\scriptscriptstyle L} \beta_{\scriptscriptstyle L} + b_{\scriptscriptstyle L} \omega_{\scriptscriptstyle LS} h - \omega_{\scriptscriptstyle LS}^2 ) F_{\scriptscriptstyle L}^{\scriptscriptstyle S}(h), \quad h\in(0,\tau),\\ \end{aligned} \end{equation} with \begin{equation*} \begin{aligned} F_{\scriptscriptstyle R}^{\scriptscriptstyle C}(h) =& \arccos\bigg(1-\frac{2 b_{\scriptscriptstyle R}^2 \omega_{\scriptscriptstyle RC}^2 h^2}{(a_{\scriptscriptstyle R}^2 - b_{\scriptscriptstyle R} \beta_{\scriptscriptstyle R})^2 + (2 a_{\scriptscriptstyle R}^2 + b_{\scriptscriptstyle R}^2 h^2 - 2 b_{\scriptscriptstyle R} \beta_{\scriptscriptstyle R}) \omega_{\scriptscriptstyle RC}^2 + \omega_{\scriptscriptstyle RC}^4}\bigg), \\ F_{\scriptscriptstyle L}^{\scriptscriptstyle C}(h) =& \arccos\bigg(1-\frac{2 b_{\scriptscriptstyle L}^2 \omega_{\scriptscriptstyle LC}^2 h^2 }{(a_{\scriptscriptstyle L}^2 + b_{\scriptscriptstyle L} \beta_{\scriptscriptstyle L})^2 + (2 a_{\scriptscriptstyle L}^2 + b_{\scriptscriptstyle L}^2 h^2 + 2 b_{\scriptscriptstyle L} \beta_{\scriptscriptstyle L}) \omega_{\scriptscriptstyle LC}^2 + \omega_{\scriptscriptstyle LC}^4}\bigg), \\ F_{\scriptscriptstyle R}^{\scriptscriptstyle S}(h) =& \log\bigg(1-\frac{2 b_{\scriptscriptstyle R} \omega_{\scriptscriptstyle RS} h}{-a_{\scriptscriptstyle R}^2 + b_{\scriptscriptstyle R} \beta_{\scriptscriptstyle R} + b_{\scriptscriptstyle R} \omega_{\scriptscriptstyle RS} h + \omega_{\scriptscriptstyle RS}^2}\bigg), \\ F_{\scriptscriptstyle L}^{\scriptscriptstyle S}(h) =& \log\bigg(1+\frac{2 b_{\scriptscriptstyle L} \omega_{\scriptscriptstyle LS} h}{a_{\scriptscriptstyle L}^2 + b_{\scriptscriptstyle L} \beta_{\scriptscriptstyle L} - 
b_{\scriptscriptstyle L} \omega_{\scriptscriptstyle LS} h - \omega_{\scriptscriptstyle LS}^2}\bigg), \\ \end{aligned} \end{equation*} where $\omega_{i{\scriptscriptstyle S}}=\sqrt{a^2_i + b_ic_i}$ and $\omega_{i{\scriptscriptstyle C}}=\sqrt{-a^2_i - b_ic_i}$, for $i=L,R$. \begin{theorem}\label{theo:scs} Suppose that system $\eqref{eq:01}|_{\epsilon=0}$ is of the type SCS. Then the first order Melnikov function $M(h)$ associated with system \eqref{eq:01} can be expressed as \begin{equation}\label{eq:melscs} M(h)=k_0f_0(h)+k_{\scriptscriptstyle C}^{\scriptscriptstyle C}f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle R}^{\scriptscriptstyle S}f_{\scriptscriptstyle R}^{\scriptscriptstyle S}(h)+k_{\scriptscriptstyle L}^{\scriptscriptstyle S}f_{\scriptscriptstyle L}^{\scriptscriptstyle S}(h), \end{equation} for $h\in(0,\tau)$, where the functions $f_0,f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle S},f_{\scriptscriptstyle L}^{\scriptscriptstyle S}$ are the ones defined in \eqref{eq:func}. Here the coefficients $k_0$ and $k_i^j$, for $i=L,C,R$ and $j=C,S$, depend on the parameters of system \eqref{eq:01}. 
\end{theorem} \begin{proof} The orbit $(x_{\scriptscriptstyle R}(t),y_{\scriptscriptstyle R}(t))$ of the system $\eqref{eq:01}|_{\epsilon=0}$, such that $(x_{\scriptscriptstyle R}(0),y_{\scriptscriptstyle R}(0))=(1,h)$, is given by \begin{equation*} \begin{aligned} x_{\scriptscriptstyle R}(t)=\,&\frac{e^{-t \omega_{\scriptscriptstyle RS}}}{2\omega_{\scriptscriptstyle RS}^2}(b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R}-a_{\scriptscriptstyle R}^2+\omega_{\scriptscriptstyle RS}^2-b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}h)+\frac{1}{2\omega_{\scriptscriptstyle RS}^2}(2a_{\scriptscriptstyle R}^2-2b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R})\\ & +\frac{e^{t \omega_{\scriptscriptstyle RS}}}{2\omega_{\scriptscriptstyle RS}^2}(b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R}-a_{\scriptscriptstyle R}^2+b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}h+\omega_{\scriptscriptstyle RS}^2), \\ y_{\scriptscriptstyle R}(t)=\,&\frac{e^{-t \omega_{\scriptscriptstyle RS}}}{2b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}^2}(a_{\scriptscriptstyle R}^3-a_{\scriptscriptstyle R}b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R}+a_{\scriptscriptstyle R}^2\omega_{\scriptscriptstyle RS}+a_{\scriptscriptstyle R}b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}h-b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}-a_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}^2)\\ & +\frac{e^{-t \omega_{\scriptscriptstyle RS}}}{2b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}^2}(b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}^2h-\omega_{\scriptscriptstyle RS}^3) +\frac{1}{2b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}^2}(-2a_{\scriptscriptstyle R}^3+2a_{\scriptscriptstyle R}b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R}+2a_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}^2)\\ & +\frac{e^{t \omega_{\scriptscriptstyle RS}}}{2b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle 
RS}^2}(a_{\scriptscriptstyle R}^3-a_{\scriptscriptstyle R}b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R}-a_{\scriptscriptstyle R}^2\omega_{\scriptscriptstyle RS}-a_{\scriptscriptstyle R}b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}h+b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS})\\ & +\frac{e^{t \omega_{\scriptscriptstyle RS}}}{2b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}^2}(-a_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}^2+b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}^2h+\omega_{\scriptscriptstyle RS}^3). \end{aligned} \end{equation*} The flight time of the orbit $(x_{\scriptscriptstyle R}(t),y_{\scriptscriptstyle R}(t))$, from $A(h)=(1,h)$ to $A_1(h)=(1,-h)$, is $$t_{\scriptscriptstyle R}=\frac{1}{\omega_{\scriptscriptstyle RS}}\log\Bigg(1-\frac{2b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}h}{-a_{\scriptscriptstyle R}^2+b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R}+b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}h+\omega_{\scriptscriptstyle RS}^2}\Bigg).$$ Now, for $g_{\scriptscriptstyle R}$ and $f_{\scriptscriptstyle R}$ defined in \eqref{eq:02} and \eqref{eq:03}, respectively, we have \begin{equation}\label{sys:r} \begin{aligned} &\int_{\widehat{AA_1}}g_{\scriptscriptstyle R}dx-f_{\scriptscriptstyle R}dy=\\ &=\int_{0}^{t_{\scriptscriptstyle R}}g_{\scriptscriptstyle R}(x_{\scriptscriptstyle R}(t), y_{\scriptscriptstyle R}(t)) \dfrac{d}{dt}x_{\scriptscriptstyle R}(t) - f_{\scriptscriptstyle R}(x_{\scriptscriptstyle R}(t), y_{\scriptscriptstyle R}(t)) \dfrac{d}{dt}y_{\scriptscriptstyle R}(t) \, dt \\ & = \alpha_1h+\alpha_2f_{\scriptscriptstyle R}^{\scriptscriptstyle S}(h), \end{aligned} \end{equation} with $$ \alpha_1 = p_{\scriptscriptstyle R}+2r_{\scriptscriptstyle R}-u_{\scriptscriptstyle R}+\frac{(p_{\scriptscriptstyle R}+u_{\scriptscriptstyle R})(a_{\scriptscriptstyle R}^2-b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R})}{\omega_{\scriptscriptstyle 
RS}^2}\quad\text{and}\quad \alpha_2 = \frac{p_{\scriptscriptstyle R}+u_{\scriptscriptstyle R}}{2b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}^3}. $$ The orbit $(x_{\scriptscriptstyle C1}(t),y_{\scriptscriptstyle C1}(t))$ of the system $\eqref{eq:01}|_{\epsilon=0}$, such that $(x_{\scriptscriptstyle C1}(0),$ $y_{\scriptscriptstyle C1}(0))=(1,-h)$, is given by \begin{equation*} \begin{aligned} x_{\scriptscriptstyle C1}(t)&=\cos(t) - h \sin(t), \\ y_{\scriptscriptstyle C1}(t)&=-h \cos(t) - \sin(t). \end{aligned} \end{equation*} The flight time of the orbit $(x_{\scriptscriptstyle C1}(t),y_{\scriptscriptstyle C1}(t))$, from $A_1(h)=(1,-h)$ to $A_2(h)=(-1,-h)$, is $$t_{\scriptscriptstyle C1}=\arccos\Bigg(\frac{h^2-1}{h^2+1}\Bigg).$$ Now, for $g_{\scriptscriptstyle C}$ and $f_{\scriptscriptstyle C}$ defined in \eqref{eq:02} and \eqref{eq:03}, respectively, we obtain \begin{equation}\label{sys:c1} \begin{aligned} &\int_{\widehat{A_1A_2}}g_{\scriptscriptstyle C}dx-f_{\scriptscriptstyle C}dy=\\ &=\int_{0}^{t_{\scriptscriptstyle C1}}g_{\scriptscriptstyle C}(x_{\scriptscriptstyle C1}(t), y_{\scriptscriptstyle C1}(t)) \dfrac{d}{dt}x_{\scriptscriptstyle C1}(t) - f_{\scriptscriptstyle C}(x_{\scriptscriptstyle C1}(t), y_{\scriptscriptstyle C1}(t)) \dfrac{d}{dt}y_{\scriptscriptstyle C1}(t) \, dt \\ & = -2v_{\scriptscriptstyle C}+\alpha_3h+\alpha_4f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h), \end{aligned} \end{equation} with $$ \alpha_3 = u_{\scriptscriptstyle C}-p_{\scriptscriptstyle C}\quad\text{and}\quad \alpha_4 = \frac{p_{\scriptscriptstyle C}+u_{\scriptscriptstyle C}}{2}. 
$$ The orbit $(x_{\scriptscriptstyle L}(t),y_{\scriptscriptstyle L}(t))$ of the system $\eqref{eq:01}|_{\epsilon=0}$, such that $(x_{\scriptscriptstyle L}(0),$ $y_{\scriptscriptstyle L}(0))=(-1,-h)$, is given by \begin{equation*} \begin{aligned} x_{\scriptscriptstyle L}(t)=&-\frac{e^{-t\omega_{\scriptscriptstyle LS}}}{2\omega_{\scriptscriptstyle LS}^2}(-a_{\scriptscriptstyle L}^2-b_{\scriptscriptstyle L}\beta_{\scriptscriptstyle L}-b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}h+\omega_{\scriptscriptstyle LS}^2)-\frac{1}{2\omega_{\scriptscriptstyle LS}^2}(2a_{\scriptscriptstyle L}^2+2b_{\scriptscriptstyle L}\beta_{\scriptscriptstyle L})\\ &-\frac{e^{t\omega_{\scriptscriptstyle LS}}}{2\omega_{\scriptscriptstyle LS}^2}(-a_{\scriptscriptstyle L}^2-b_{\scriptscriptstyle L}\beta_{\scriptscriptstyle L}+b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}h+\omega_{\scriptscriptstyle LS}^2), \\ y_{\scriptscriptstyle L}(t)=&-\frac{e^{-t\omega_{\scriptscriptstyle LS}}}{2b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}^2}(a_{\scriptscriptstyle L}^3+a_{\scriptscriptstyle L}b_{\scriptscriptstyle L}\beta_{\scriptscriptstyle L}+a_{\scriptscriptstyle L}^2\omega_{\scriptscriptstyle LS}+a_{\scriptscriptstyle L}b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}h+b_{\scriptscriptstyle L}\beta_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}-a_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}^2) \\ &-\frac{e^{-t\omega_{\scriptscriptstyle LS}}}{2b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}^2}(b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}^2h-\omega_{\scriptscriptstyle LS}^3) -\frac{1}{2b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}^2}(-2a_{\scriptscriptstyle L}^3-2a_{\scriptscriptstyle L}b_{\scriptscriptstyle L}\beta_{\scriptscriptstyle L}+2a_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}^2)\\ & -\frac{e^{t\omega_{\scriptscriptstyle LS}}}{2b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}^2}(a_{\scriptscriptstyle 
L}^3+a_{\scriptscriptstyle L}b_{\scriptscriptstyle L}\beta_{\scriptscriptstyle L}-a_{\scriptscriptstyle L}^2\omega_{\scriptscriptstyle LS}-a_{\scriptscriptstyle L}b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}h-b_{\scriptscriptstyle L}\beta_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}) \\ &-\frac{e^{t\omega_{\scriptscriptstyle LS}}}{2b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}^2}(-a_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}^2+b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}^2h+\omega_{\scriptscriptstyle LS}^3). \end{aligned} \end{equation*} The flight time of the orbit $(x_{\scriptscriptstyle L}(t),y_{\scriptscriptstyle L}(t))$, from $A_2(h)=(-1,-h)$ to $A_3(h)=(-1,h)$, is $$t_{\scriptscriptstyle L}=\frac{1}{\omega_{\scriptscriptstyle LS}}\log\Bigg(1+\frac{2b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}h}{a_{\scriptscriptstyle L}^2+b_{\scriptscriptstyle L}\beta_{\scriptscriptstyle L}-b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}h-\omega_{\scriptscriptstyle LS}^2}\Bigg).$$ Now, for $g_{\scriptscriptstyle L}$ and $f_{\scriptscriptstyle L}$ defined in \eqref{eq:02} and \eqref{eq:03}, respectively, we obtain \begin{equation}\label{sys:l} \begin{aligned} &\int_{\widehat{A_2A_3}}g_{\scriptscriptstyle L}dx-f_{\scriptscriptstyle L}dy=\\ &=\int_{0}^{t_{\scriptscriptstyle L}}g_{\scriptscriptstyle L}(x_{\scriptscriptstyle L}(t), y_{\scriptscriptstyle L}(t)) \dfrac{d}{dt}x_{\scriptscriptstyle L}(t) - f_{\scriptscriptstyle L}(x_{\scriptscriptstyle L}(t), y_{\scriptscriptstyle L}(t)) \dfrac{d}{dt}y_{\scriptscriptstyle L}(t) \, dt \\ & =\alpha_5h+\alpha_6f_{\scriptscriptstyle L}^{\scriptscriptstyle S}(h), \end{aligned} \end{equation} with $$ \alpha_5 = p_{\scriptscriptstyle L}-2r_{\scriptscriptstyle L}-u_{\scriptscriptstyle L}+\frac{(p_{\scriptscriptstyle L}+u_{\scriptscriptstyle L})(a_{\scriptscriptstyle L}^2+b_{\scriptscriptstyle L}\beta_{\scriptscriptstyle L})}{\omega_{\scriptscriptstyle LS}^2}\quad\text{and}\quad 
\alpha_6 = \frac{p_{\scriptscriptstyle L}+u_{\scriptscriptstyle L}}{2b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}^3}. $$ Finally, the orbit $(x_{\scriptscriptstyle C2}(t),y_{\scriptscriptstyle C2}(t))$ of the system $\eqref{eq:01}|_{\epsilon=0}$, such that $(x_{\scriptscriptstyle C2}(0),y_{\scriptscriptstyle C2}(0))=(-1,h)$, is given by \begin{equation*} \begin{aligned} x_{\scriptscriptstyle C2}(t)&=-\cos(t) + h \sin(t), \\ y_{\scriptscriptstyle C2}(t)&=h \cos(t) + \sin(t). \end{aligned} \end{equation*} The flight time of the orbit $(x_{\scriptscriptstyle C2}(t),y_{\scriptscriptstyle C2}(t))$, from $A_3(h)=(-1,h)$ to $A(h)=(1,h)$, is $$t_{\scriptscriptstyle C2}=\arccos\Bigg(\frac{h^2-1}{h^2+1}\Bigg).$$ Now, for $g_{\scriptscriptstyle C}$ and $f_{\scriptscriptstyle C}$ defined in \eqref{eq:02} and \eqref{eq:03}, respectively, we obtain \begin{equation}\label{sys:c2} \begin{aligned} &\int_{\widehat{A_3A}}g_{\scriptscriptstyle C}dx-f_{\scriptscriptstyle C}dy=\\ & =\int_{0}^{t_{\scriptscriptstyle C2}}g_{\scriptscriptstyle C}(x_{\scriptscriptstyle C2}(t), y_{\scriptscriptstyle C2}(t)) \dfrac{d}{dt}x_{\scriptscriptstyle C2}(t) - f_{\scriptscriptstyle C}(x_{\scriptscriptstyle C2}(t), y_{\scriptscriptstyle C2}(t)) \dfrac{d}{dt}y_{\scriptscriptstyle C2}(t) \, dt \\ & = 2v_{\scriptscriptstyle C}+\alpha_3h+\alpha_4f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h). 
\end{aligned} \end{equation} Therefore, substituting \eqref{sys:r}, \eqref{sys:c1}, \eqref{sys:l} and \eqref{sys:c2} into \eqref{eq:mel01}, we obtain \begin{equation*} M(h)=k_0f_0(h)+k_{\scriptscriptstyle C}^{\scriptscriptstyle C}f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle R}^{\scriptscriptstyle S}f_{\scriptscriptstyle R}^{\scriptscriptstyle S}(h)+k_{\scriptscriptstyle L}^{\scriptscriptstyle S}f_{\scriptscriptstyle L}^{\scriptscriptstyle S}(h) \end{equation*} with $$k_0=\alpha_1+2b_{\scriptscriptstyle R}\alpha_3+\frac{b_{\scriptscriptstyle R}}{b_{\scriptscriptstyle L}}\alpha_5,\quad k_{\scriptscriptstyle C}^{\scriptscriptstyle C}=2b_{\scriptscriptstyle R}\alpha_4,\quad k_{\scriptscriptstyle R}^{\scriptscriptstyle S}=\alpha_2\quad\text{and}\quad k_{\scriptscriptstyle L}^{\scriptscriptstyle S}=\frac{b_{\scriptscriptstyle R}}{b_{\scriptscriptstyle L}}\alpha_6.$$ \end{proof} \begin{remark} Suppose that the system $\eqref{eq:01}|_{\epsilon=0}$ is of type SCS and that the ordinates of the points $P_{\scriptscriptstyle L}^{\scriptscriptstyle u}$ and $P_{\scriptscriptstyle R}^{\scriptscriptstyle s}$ are equal, see Fig. \ref{fig:03} (b). 
Then the first order Melnikov function \eqref{eq:melscs} is given by \begin{equation}\label{eq:scs1} M(h)=k_0f_0(h)+k_{\scriptscriptstyle C}^{\scriptscriptstyle C}f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle R}^{\scriptscriptstyle S} f_{\scriptscriptstyle R}^{\scriptscriptstyle S}(h), \end{equation} with \begin{equation*} \begin{aligned} k_0=\,\,& p_{\scriptscriptstyle R}-u_{\scriptscriptstyle R}+2r_{\scriptscriptstyle R}+2b_{\scriptscriptstyle R}\Big(\frac{p_{\scriptscriptstyle L}-r_{\scriptscriptstyle L}}{b_{\scriptscriptstyle L}}+u_{\scriptscriptstyle C}-p_{\scriptscriptstyle C}\Big)\\ &+(p_{\scriptscriptstyle R}+u_{\scriptscriptstyle R})\Bigg(\frac{a_{\scriptscriptstyle R}^2-b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R}}{\omega_{\scriptscriptstyle RS}^2}\Bigg) +(p_{\scriptscriptstyle L}+u_{\scriptscriptstyle L})\Bigg(\frac{a_{\scriptscriptstyle R}^2-b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R}-\omega_{\scriptscriptstyle RS}^2}{\omega_{\scriptscriptstyle LS}\omega_{\scriptscriptstyle RS}}\Bigg), \end{aligned} \end{equation*} $$k_{\scriptscriptstyle C}^{\scriptscriptstyle C}=b_{\scriptscriptstyle R}(p_{\scriptscriptstyle C}+u_{\scriptscriptstyle C})\quad\text{and}\quad k_{\scriptscriptstyle R}^{\scriptscriptstyle S}=\frac{\omega_{\scriptscriptstyle LS}(p_{\scriptscriptstyle R}+u_{\scriptscriptstyle R})+\omega_{\scriptscriptstyle RS}(p_{\scriptscriptstyle L}+{u_{\scriptscriptstyle L}})}{2b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle LS}\omega_{\scriptscriptstyle RS}^3},$$ where the functions $f_0,f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle S}$ are the ones defined in \eqref{eq:func}. 
In fact, if the ordinates of the points $P_{\scriptscriptstyle L}^{\scriptscriptstyle u}$ and $P_{\scriptscriptstyle R}^{\scriptscriptstyle s}$ are equal, then $$\frac{a_{\scriptscriptstyle L}^2+b_{\scriptscriptstyle L} \beta_{\scriptscriptstyle L}-\omega_{\scriptscriptstyle LS}^2}{b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}}=\frac{a_{\scriptscriptstyle R}^2-b_{\scriptscriptstyle R} \beta_{\scriptscriptstyle R}-\omega_{\scriptscriptstyle RS}^2}{b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}}.$$ Isolating the parameter $\beta_{\scriptscriptstyle L}$ in the equality above, i.e. $$\beta_{\scriptscriptstyle L}=\frac{a_{\scriptscriptstyle R}^2 b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}-b_{\scriptscriptstyle L}b_{\scriptscriptstyle R}\beta_{\scriptscriptstyle R}\omega_{\scriptscriptstyle LS}-a_{\scriptscriptstyle L}^2b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}+b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle LS}^2\omega_{\scriptscriptstyle RS}-b_{\scriptscriptstyle L}\omega_{\scriptscriptstyle LS}\omega_{\scriptscriptstyle RS}^2}{b_{\scriptscriptstyle L}b_{\scriptscriptstyle R}\omega_{\scriptscriptstyle RS}},$$ and replacing it in the function $M(h)$ given by \eqref{eq:melscs}, we obtain the expression \eqref{eq:scs1}. \end{remark} The next two theorems provide expressions for the Melnikov function in the cases CCS and CCC. The proof of these results is analogous to the proof of Theorem \ref{theo:scs}. \begin{theorem}\label{theo:ccs} Suppose that system $\eqref{eq:01}|_{\epsilon=0}$ is of type CCS. 
Then the first order Melnikov function $M(h)$ associated to system \eqref{eq:01} can be expressed as \begin{equation}\label{eq:melccs} M(h)=k_0f_0(h)+k_{\scriptscriptstyle C}^{\scriptscriptstyle C}f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle R}^{\scriptscriptstyle C}f_{\scriptscriptstyle R}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle L}^{\scriptscriptstyle S}f_{\scriptscriptstyle L}^{\scriptscriptstyle S}(h), \end{equation} for $h\in(0,\tau)$, where the functions $f_0,f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle C},f_{\scriptscriptstyle L}^{\scriptscriptstyle S}$ are the ones defined in \eqref{eq:func}. Here the coefficients $k_0$ and $k_i^j$, for $i=L,C,R$ and $j=C,S$, depend on the parameters of system \eqref{eq:01}. \end{theorem} \begin{theorem}\label{theo:ccc} Suppose that system $\eqref{eq:01}|_{\epsilon=0}$ is of type CCC. Then the first order Melnikov function $M(h)$ associated to system \eqref{eq:01} can be expressed as \begin{equation}\label{eq:melccc} M(h)=k_0f_0(h)+k_{\scriptscriptstyle C}^{\scriptscriptstyle C}f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle R}^{\scriptscriptstyle C}f_{\scriptscriptstyle R}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle L}^{\scriptscriptstyle C}f_{\scriptscriptstyle L}^{\scriptscriptstyle C}(h), \end{equation} for $h\in(0,\infty)$, where the functions $f_0,f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle C},f_{\scriptscriptstyle L}^{\scriptscriptstyle C}$ are the ones defined in \eqref{eq:func}. Here the coefficients $k_0$ and $k_i^{\scriptscriptstyle C}$, for $i=L,C,R$, depend on the parameters of system \eqref{eq:01}. \end{theorem} The next corollaries provide lower bounds for the number of limit cycles of system $\eqref{eq:01}$ that can bifurcate from a periodic annulus of system $\eqref{eq:01}|_{\epsilon=0}$ in the cases SCS, CCS and CCC. 
But first, we recall some basic results from linear algebra. Let $\{f_0, f_1,\dots, f_n\}$ be a set of real functions defined on a proper interval $I\subset \mathbb{R}$. We say that $\{f_0, f_1,\dots, f_n\}$ is {\it linearly independent} if the unique solution of the equation $$\sum_{i=0}^{n}\alpha_i f_i(t)=0,$$ for all $t\in I$, is $\alpha_0=\alpha_1=\dots=\alpha_n=0$. \begin{proposition}\label{prop:li1} If $\{f_0, f_1,\dots, f_n\}$ is linearly independent then there exist $t_1, t_2,\dots, t_n\in I$, with $t_i\ne t_j$ for $i\ne j$, and $\alpha_0, \alpha_1, \dots,\alpha_n\in\mathbb{R}$, not all null, such that for every $j\in \{1, 2, \dots, n\}$ $$\sum_{i=0}^{n}\alpha_i f_i(t_j)=0.$$ \end{proposition} For a proof of Proposition \ref{prop:li1}, see for instance \cite{Lli11}. Recall that if the Wronskian, \begin{displaymath} W(f_0,f_1,\ldots,f_n)(x)=\left|\begin{array}{cccc} f_0(x) & f_1(x) & \ldots & f_n(x)\\ f'_0(x) & f'_1(x) & \ldots & f'_n(x)\\ \vdots & \vdots & \ddots & \vdots\\ f^{(n)}_0(x) & f^{(n)}_1(x) & \ldots & f^{(n)}_n(x) \end{array}\right|, \end{displaymath} where $\{f_0, f_1,\dots, f_n\}$ is a set of functions with derivatives up to order $n$ on $I$, is different from zero for some $x\in I$, then $\{f_0, f_1, \dots , f_n\}$ is linearly independent on $I$. \begin{corollary}[Case SCS -- Fig. \ref{fig:03} (a)]\label{scs-a} Consider the system \eqref{eq:01} with $a_{\scriptscriptstyle L}=b_{\scriptscriptstyle L}=b_{\scriptscriptstyle R}=c_{\scriptscriptstyle R}=1$, $a_{\scriptscriptstyle R}=c_{\scriptscriptstyle L}=0$, $\beta_{\scriptscriptstyle L}=2$ and $\beta_{\scriptscriptstyle R}=-2$. Then, for $0<\epsilon\ll1$, the system \eqref{eq:01} has at least three limit cycles. 
\end{corollary} \begin{proof} For $a_{\scriptscriptstyle L}=b_{\scriptscriptstyle L}=b_{\scriptscriptstyle R}=c_{\scriptscriptstyle R}=1$, $a_{\scriptscriptstyle R}=c_{\scriptscriptstyle L}=0$, $\beta_{\scriptscriptstyle L}=2$ and $\beta_{\scriptscriptstyle R}=-2$, the eigenvalues of the linear part of the left, central and right subsystem from $\eqref{eq:01}|_{\epsilon=0}$ are $\pm 1$, $\pm i$ and $\pm 1$, respectively, i.e. we have one center and two saddles. Moreover, the coordinates of the equilibrium points on the left and right subsystem from $\eqref{eq:01}|_{\epsilon=0}$ are $(-3,2)$ and $(2,0)$, respectively, and so the saddles are real. Note that $P_{\scriptscriptstyle L}^{\scriptscriptstyle u}=(-1,2)$ and $P_{\scriptscriptstyle R}^{\scriptscriptstyle s}=(1,1)$, i.e. the ordinates of these points are different and $\tau=1$. Therefore, the first order Melnikov function from Theorem \ref{theo:scs} becomes $$M(h)=k_0f_0(h)+k_{\scriptscriptstyle C}^{\scriptscriptstyle C}f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle R}^{\scriptscriptstyle S}f_{\scriptscriptstyle R}^{\scriptscriptstyle S}(h)+k_{\scriptscriptstyle L}^{\scriptscriptstyle S}f_{\scriptscriptstyle L}^{\scriptscriptstyle S}(h),\quad \forall\, h\in(0,1),$$ with \begin{equation*} \begin{aligned} f_0(h)&= h, \\ f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)&=(h^2+1)\arccos\bigg(\frac{h^2-1}{h^2+1}\bigg),\\ f_{\scriptscriptstyle R}^{\scriptscriptstyle S}(h)&=(h^2-1)\log\bigg(-\frac{h+1}{h-1}\bigg),\\ f_{\scriptscriptstyle L}^{\scriptscriptstyle S}(h)&=(h^2-4)\log\bigg(-\frac{h+2}{h-2}\bigg), \end{aligned} \end{equation*} $$k_0=2(2p_{\scriptscriptstyle L}-p_{\scriptscriptstyle C}-r_{\scriptscriptstyle L}+r_{\scriptscriptstyle R}+u_{\scriptscriptstyle C}+u_{\scriptscriptstyle L})+3p_{\scriptscriptstyle R}+u_{\scriptscriptstyle R},\quad k_{\scriptscriptstyle C}^{\scriptscriptstyle C}=p_{\scriptscriptstyle C}+u_{\scriptscriptstyle C},$$ $$ k_{\scriptscriptstyle 
R}^{\scriptscriptstyle S}=\frac{p_{\scriptscriptstyle R}+u_{\scriptscriptstyle R}}{2}\quad\text{and}\quad k_{\scriptscriptstyle L}^{\scriptscriptstyle S}=\frac{p_{\scriptscriptstyle L}+u_{\scriptscriptstyle L}}{2}.$$ Consider the set of functions $\mathcal{F}_{\scriptscriptstyle SCS}=\{f_0,f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle S},f_{\scriptscriptstyle L}^{\scriptscriptstyle S}\}$. Using the algebraic manipulator Mathematica (see \cite{Wol20}), we compute the Wronskian $W(f_0, f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle S},f_{\scriptscriptstyle L}^{\scriptscriptstyle S})(h)$ and evaluate it at a point of the interval $(0,1)$. More precisely, $W(f_0, f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle S},f_{\scriptscriptstyle L}^{\scriptscriptstyle S})(0.4)=9.16568$. Then the set of functions $\mathcal{F}_{\scriptscriptstyle SCS}$ is linearly independent on the interval $(0,1)$ and, by Proposition \ref{prop:li1}, there are $h_i\in(0,1)$, with $i=1,2,3$, such that $M(h_i)=0$. Therefore, for $0< \epsilon \ll1$, the system \eqref{eq:01} has a unique limit cycle near $L_{h_i}$, for each $i=1,2,3$, i.e. the system \eqref{eq:01} has at least three limit cycles. \end{proof} \begin{corollary}[Case SCS -- Fig. \ref{fig:03} (b)]\label{scs-b} Consider the system \eqref{eq:01} with $a_{\scriptscriptstyle L}=b_{\scriptscriptstyle L}=b_{\scriptscriptstyle R}=c_{\scriptscriptstyle R}=\beta_{\scriptscriptstyle L}=1$, $a_{\scriptscriptstyle R}=c_{\scriptscriptstyle L}=0$ and $\beta_{\scriptscriptstyle R}=-2$. Then, for $0<\epsilon\ll1$, the system \eqref{eq:01} has at least two limit cycles. 
\end{corollary} \begin{proof} For $a_{\scriptscriptstyle L}=b_{\scriptscriptstyle L}=b_{\scriptscriptstyle R}=c_{\scriptscriptstyle R}=\beta_{\scriptscriptstyle L}=1$, $a_{\scriptscriptstyle R}=c_{\scriptscriptstyle L}=0$ and $\beta_{\scriptscriptstyle R}=-2$, the eigenvalues of the linear part of the left, central and right subsystem from $\eqref{eq:01}|_{\epsilon=0}$ are $\pm 1$, $\pm i$ and $\pm 1$, respectively, i.e. we have one center and two saddles. Moreover, the coordinates of the equilibrium points on the left and right subsystem from $\eqref{eq:01}|_{\epsilon=0}$ are $(-2,1)$ and $(2,0)$, respectively, and so the saddles are real. Note that $P_{\scriptscriptstyle L}^{\scriptscriptstyle u}=(-1,1)$ and $P_{\scriptscriptstyle R}^{\scriptscriptstyle s}=(1,1)$, i.e. the ordinates of these points are the same and $\tau=1$. Therefore, the first order Melnikov function from Theorem \ref{theo:scs} becomes $$M(h)=k_0f_0(h)+k_{\scriptscriptstyle C}^{\scriptscriptstyle C}f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle R}^{\scriptscriptstyle S}f_{\scriptscriptstyle R}^{\scriptscriptstyle S}(h),\quad \forall \,h\in(0,1),$$ with \begin{equation*} \begin{aligned} f_0(h)&= h, \\ f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)&=(h^2+1)\arccos\bigg(\frac{h^2-1}{h^2+1}\bigg),\\ f_{\scriptscriptstyle R}^{\scriptscriptstyle S}(h)&=(h^2-1)\log\bigg(-\frac{h+1}{h-1}\bigg), \end{aligned} \end{equation*} $$k_0=2(u_{\scriptscriptstyle C}-p_{\scriptscriptstyle C}-r_{\scriptscriptstyle L}+r_{\scriptscriptstyle R})+3(p_{\scriptscriptstyle R}+p_{\scriptscriptstyle L})+u_{\scriptscriptstyle R}+u_{\scriptscriptstyle L},\quad k_{\scriptscriptstyle C}^{\scriptscriptstyle C}=p_{\scriptscriptstyle C}+u_{\scriptscriptstyle C}$$ and $$ k_{\scriptscriptstyle R}^{\scriptscriptstyle S}=\frac{p_{\scriptscriptstyle R}+p_{\scriptscriptstyle L}+u_{\scriptscriptstyle R}+u_{\scriptscriptstyle L}}{2}.$$ Consider the set of functions $\mathcal{F}_{\scriptscriptstyle 
SCS}=\{f_0,f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle S}\}$. Using the algebraic manipulator Mathematica, we compute the Wronskian $W(f_0, f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle S})(h)$ and evaluate in some point on the interval $(0,1)$. More precisely, $W(f_0, f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle S})(0.4)=-10.6955$. Then, set of functions $\mathcal{F}_{\scriptscriptstyle SCS}$ is linearly independent on the interval $(0,1)$ and, by Proposition \ref{prop:li1}, there are $h_i\in(0,1)$, with $i=1,2$, such that $M(h_i)=0$. Therefore, for $0< \epsilon <<1$, the system \eqref{eq:01} has a unique limit cycle near $L_{h_i}$, for each $i=1,2$, i.e. the system \eqref{eq:01} has at least two limit cycles. \end{proof} \begin{corollary}[Case CCS -- Fig. \ref{fig:03} (c)] \label{ccs-c} Consider the system \eqref{eq:01} with $a_{\scriptscriptstyle L}=\beta_{\scriptscriptstyle L}=b_{\scriptscriptstyle R}=c_{\scriptscriptstyle R}=1$, $b_{\scriptscriptstyle L}=2$, $c_{\scriptscriptstyle L}=-1$, $a_{\scriptscriptstyle R}=0$ and $\beta_{\scriptscriptstyle R}=-2$. Then for $0<\epsilon<<1$, the system \eqref{eq:01} has at least three limit cycles. \end{corollary} \begin{proof} For $a_{\scriptscriptstyle L}=\beta_{\scriptscriptstyle L}=b_{\scriptscriptstyle R}=c_{\scriptscriptstyle R}=1$, $b_{\scriptscriptstyle L}=2$, $c_{\scriptscriptstyle L}=-1$, $a_{\scriptscriptstyle R}=0$ and $\beta_{\scriptscriptstyle R}=-2$, the eigenvalues of the linear part of left, central and right subsystem from $\eqref{eq:01}|_{\epsilon=0}$ are $\pm i$, $\pm i$ and $\pm 1$, respectively, i.e. we have two centers and one saddle. Moreover, the coordinates of the equilibrium point on the left and right subsystem from $\eqref{eq:01}|_{\epsilon=0}$ are $(3,-2)$ and $(2,0)$, respectively, and so we have a virtual center and a real saddle. 
Note that $P_{\scriptscriptstyle R}^{\scriptscriptstyle s}=(1,1)$ and $\tau=1$. Therefore, the first-order Melnikov function from Theorem \ref{theo:ccs} becomes $$M(h)=k_0f_0(h)+k_{\scriptscriptstyle C}^{\scriptscriptstyle C}f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle R}^{\scriptscriptstyle S}f_{\scriptscriptstyle R}^{\scriptscriptstyle S}(h)+k_{\scriptscriptstyle L}^{\scriptscriptstyle C}f_{\scriptscriptstyle L}^{\scriptscriptstyle C}(h),\quad \forall \,h\in(0,1),$$ with \begin{equation*} \begin{aligned} f_0(h)&= h, \\ f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)&=(h^2+1)\arccos\bigg(\frac{h^2-1}{h^2+1}\bigg),\\ f_{\scriptscriptstyle R}^{\scriptscriptstyle S}(h)&=(h^2-1)\log\bigg(-\frac{h+1}{h-1}\bigg),\\ f_{\scriptscriptstyle L}^{\scriptscriptstyle C}(h)&=(h^2+4)\arccos\bigg(\frac{4-h^2}{h^2+4}\bigg), \end{aligned} \end{equation*} $$k_0=2(u_{\scriptscriptstyle C}-p_{\scriptscriptstyle C}+r_{\scriptscriptstyle R}-u_{\scriptscriptstyle L})-r_{\scriptscriptstyle L}-p_{\scriptscriptstyle L}+u_{\scriptscriptstyle R}+3p_{\scriptscriptstyle R},\quad k_{\scriptscriptstyle C}^{\scriptscriptstyle C}=p_{\scriptscriptstyle C}+u_{\scriptscriptstyle C},$$ $$ k_{\scriptscriptstyle R}^{\scriptscriptstyle S}=\frac{p_{\scriptscriptstyle R}+u_{\scriptscriptstyle R}}{2}\quad\text{and}\quad k_{\scriptscriptstyle L}^{\scriptscriptstyle C}=\frac{p_{\scriptscriptstyle L}+u_{\scriptscriptstyle L}}{2}.$$ Consider the set of functions $\mathcal{F}_{\scriptscriptstyle CCS}=\{f_0,f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle S},f_{\scriptscriptstyle L}^{\scriptscriptstyle C}\}$. Using the algebraic manipulator Mathematica, we compute the Wronskian $W(f_0, f_{\scriptscriptstyle C}^{\scriptscriptstyle C},$ $f_{\scriptscriptstyle R}^{\scriptscriptstyle S},f_{\scriptscriptstyle L}^{\scriptscriptstyle C})(h)$ and evaluate it at a point of the interval $(0,1)$.
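These linear-independence checks are easy to reproduce without symbolic algebra. The sketch below is an illustrative Python verification (not the Mathematica computation used in the text): it approximates derivatives by central finite differences and evaluates the Wronskians of the SCS triple and the CCS quadruple of functions at $h=0.4$, recovering the values reported in these proofs.

```python
# Numerical cross-check of the Wronskian values quoted in the SCS and CCS
# proofs, using central finite differences instead of symbolic algebra.
import math

def f0(h):   # f_0
    return h

def fC(h):   # f_C^C
    return (h**2 + 1) * math.acos((h**2 - 1) / (h**2 + 1))

def fR(h):   # f_R^S
    return (h**2 - 1) * math.log(-(h + 1) / (h - 1))

def fL(h):   # f_L^C
    return (h**2 + 4) * math.acos((4 - h**2) / (h**2 + 4))

def derivative(f, x, k, d=1e-3):
    """Central-difference derivative of order k = 0..3 at x."""
    if k == 0:
        return f(x)
    if k == 1:
        return (f(x + d) - f(x - d)) / (2 * d)
    if k == 2:
        return (f(x + d) - 2 * f(x) + f(x - d)) / d**2
    return (f(x + 2*d) - 2*f(x + d) + 2*f(x - d) - f(x - 2*d)) / (2 * d**3)

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

def wronskian(funcs, x):
    n = len(funcs)
    return det([[derivative(f, x, k) for f in funcs] for k in range(n)])

# Case SCS (b): W(f0, f_C^C, f_R^S)(0.4), reported as -10.6955
print(wronskian([f0, fC, fR], 0.4))
# Case CCS (c): W(f0, f_C^C, f_R^S, f_L^C)(0.4), reported as 13.25
print(wronskian([f0, fC, fR, fL], 0.4))
```

Since a single nonzero evaluation suffices, any point of the interval where the finite-difference stencil stays inside $(0,1)$ works equally well.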
More precisely, $W(f_0, f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle S},f_{\scriptscriptstyle L}^{\scriptscriptstyle C})(0.4)=13.25$. Then, the set of functions $\mathcal{F}_{\scriptscriptstyle CCS}$ is linearly independent on the interval $(0,1)$ and, by Proposition \ref{prop:li1}, there are $h_i\in(0,1)$, with $i=1,2,3$, such that $M(h_i)=0$. Therefore, for $0<\epsilon\ll1$, the system \eqref{eq:01} has a unique limit cycle near $L_{h_i}$, for each $i=1,2,3$, i.e. the system \eqref{eq:01} has at least three limit cycles. \end{proof} \begin{corollary}[Case CCS -- Fig. \ref{fig:03} (d)]\label{ccs-d} Consider the system \eqref{eq:01} with $a_{\scriptscriptstyle L}=b_{\scriptscriptstyle R}=c_{\scriptscriptstyle R}=1$, $b_{\scriptscriptstyle L}=2$, $c_{\scriptscriptstyle L}=-1$, $a_{\scriptscriptstyle R}=0$ and $\beta_{\scriptscriptstyle R}=\beta_{\scriptscriptstyle L}=-2$. Then, for $0<\epsilon\ll1$, the system \eqref{eq:01} has at least three limit cycles. \end{corollary} \begin{proof} For $a_{\scriptscriptstyle L}=b_{\scriptscriptstyle R}=c_{\scriptscriptstyle R}=1$, $b_{\scriptscriptstyle L}=2$, $c_{\scriptscriptstyle L}=-1$, $a_{\scriptscriptstyle R}=0$ and $\beta_{\scriptscriptstyle R}=\beta_{\scriptscriptstyle L}=-2$, the eigenvalues of the linear part of the left, central and right subsystems from $\eqref{eq:01}|_{\epsilon=0}$ are $\pm i$, $\pm i$ and $\pm 1$, respectively, i.e. we have two centers and one saddle. Moreover, the coordinates of the equilibrium points of the left and right subsystems from $\eqref{eq:01}|_{\epsilon=0}$ are $(-3,1)$ and $(2,0)$, respectively, and so we have a real center and a real saddle. Note that $P_{\scriptscriptstyle R}^{\scriptscriptstyle s}=(1,1)$ and $\tau=1$.
Therefore, the first-order Melnikov function from Theorem \ref{theo:ccs} becomes $$M(h)=k_0f_0(h)+k_{\scriptscriptstyle C}^{\scriptscriptstyle C}f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle R}^{\scriptscriptstyle S}f_{\scriptscriptstyle R}^{\scriptscriptstyle S}(h)+k_{\scriptscriptstyle L}^{\scriptscriptstyle C}f_{\scriptscriptstyle L}^{\scriptscriptstyle C}(h),\quad \forall \,h\in(0,1),$$ with \begin{equation*} \begin{aligned} f_0(h)&= h, \\ f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)&=(h^2+1)\arccos\bigg(\frac{h^2-1}{h^2+1}\bigg),\\ f_{\scriptscriptstyle R}^{\scriptscriptstyle S}(h)&=(h^2-1)\log\bigg(-\frac{h+1}{h-1}\bigg),\\ f_{\scriptscriptstyle L}^{\scriptscriptstyle C}(h)&=(h^2+4)\arccos\bigg(\frac{4-h^2}{h^2+4}\bigg), \end{aligned} \end{equation*} $$k_0=2(u_{\scriptscriptstyle C}-p_{\scriptscriptstyle C}+r_{\scriptscriptstyle R}+p_{\scriptscriptstyle L})-r_{\scriptscriptstyle L}+u_{\scriptscriptstyle L}+u_{\scriptscriptstyle R}+3p_{\scriptscriptstyle R},\quad k_{\scriptscriptstyle C}^{\scriptscriptstyle C}=p_{\scriptscriptstyle C}+u_{\scriptscriptstyle C},$$ $$k_{\scriptscriptstyle R}^{\scriptscriptstyle S}=\frac{p_{\scriptscriptstyle R}+u_{\scriptscriptstyle R}}{2}\quad\text{and}\quad k_{\scriptscriptstyle L}^{\scriptscriptstyle C}=\frac{p_{\scriptscriptstyle L}+u_{\scriptscriptstyle L}}{2}.$$ Consider the set of functions $\mathcal{F}_{\scriptscriptstyle CCS}=\{f_0,f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle S},f_{\scriptscriptstyle L}^{\scriptscriptstyle C}\}$. Using the algebraic manipulator Mathematica, we compute the Wronskian $W(f_0, f_{\scriptscriptstyle C}^{\scriptscriptstyle C},$ $f_{\scriptscriptstyle R}^{\scriptscriptstyle S},f_{\scriptscriptstyle L}^{\scriptscriptstyle C})(h)$ and evaluate it at a point of the interval $(0,1)$.
More precisely, $W(f_0, f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle S},f_{\scriptscriptstyle L}^{\scriptscriptstyle C})(0.2)=-4.26846$. Then, the set of functions $\mathcal{F}_{\scriptscriptstyle CCS}$ is linearly independent on the interval $(0,1)$ and, by Proposition \ref{prop:li1}, there are $h_i\in(0,1)$, with $i=1,2,3$, such that $M(h_i)=0$. Therefore, for $0<\epsilon\ll1$, the system \eqref{eq:01} has a unique limit cycle near $L_{h_i}$, for each $i=1,2,3$, i.e. the system \eqref{eq:01} has at least three limit cycles. \end{proof} \begin{corollary}[Case CCC -- Fig. \ref{fig:04} (a)]\label{ccc-a} Consider the system \eqref{eq:01} with $a_{\scriptscriptstyle L}=b_{\scriptscriptstyle R}=\beta_{\scriptscriptstyle L}=1$, $b_{\scriptscriptstyle L}=2$, $c_{\scriptscriptstyle L}=c_{\scriptscriptstyle R}=-1$ and $a_{\scriptscriptstyle R}=\beta_{\scriptscriptstyle R}=0$. Then, for $0<\epsilon\ll1$, the system \eqref{eq:01} has at least three limit cycles. \end{corollary} \begin{proof} For $a_{\scriptscriptstyle L}=b_{\scriptscriptstyle R}=\beta_{\scriptscriptstyle L}=1$, $b_{\scriptscriptstyle L}=2$, $c_{\scriptscriptstyle L}=c_{\scriptscriptstyle R}=-1$ and $a_{\scriptscriptstyle R}=\beta_{\scriptscriptstyle R}=0$, the eigenvalues of the linear part of the left, central and right subsystems from $\eqref{eq:01}|_{\epsilon=0}$ are $\pm i$, $\pm i$ and $\pm i$, respectively, i.e. we have three centers. Moreover, the coordinates of the equilibrium points of the left and right subsystems from $\eqref{eq:01}|_{\epsilon=0}$ are $(3,-2)$ and $(0,0)$, respectively, and so the centers are virtual.
Therefore, the first-order Melnikov function from Theorem \ref{theo:ccc} becomes $$M(h)=k_0f_0(h)+k_{\scriptscriptstyle C}^{\scriptscriptstyle C}f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle R}^{\scriptscriptstyle C}f_{\scriptscriptstyle R}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle L}^{\scriptscriptstyle C}f_{\scriptscriptstyle L}^{\scriptscriptstyle C}(h),\quad \forall \,h\in(0,\infty),$$ with \begin{equation*} \begin{aligned} f_0(h)&= h, \\ f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)&=(h^2+1)\arccos\bigg(\frac{h^2-1}{h^2+1}\bigg),\\ f_{\scriptscriptstyle R}^{\scriptscriptstyle C}(h)&=(h^2-1)\arccos\bigg(\frac{1-h^2}{h^2+1}\bigg),\\ f_{\scriptscriptstyle L}^{\scriptscriptstyle C}(h)&=(h^2+4)\arccos\bigg(\frac{4-h^2}{h^2+4}\bigg), \end{aligned} \end{equation*} $$k_0=2(u_{\scriptscriptstyle C}-p_{\scriptscriptstyle C}+r_{\scriptscriptstyle R}-u_{\scriptscriptstyle L})-r_{\scriptscriptstyle L}-p_{\scriptscriptstyle L}-u_{\scriptscriptstyle R}+p_{\scriptscriptstyle R},\quad k_{\scriptscriptstyle C}^{\scriptscriptstyle C}=p_{\scriptscriptstyle C}+u_{\scriptscriptstyle C},$$ $$ k_{\scriptscriptstyle R}^{\scriptscriptstyle C}=\frac{p_{\scriptscriptstyle R}+u_{\scriptscriptstyle R}}{2}\quad\text{and}\quad k_{\scriptscriptstyle L}^{\scriptscriptstyle C}=\frac{p_{\scriptscriptstyle L}+u_{\scriptscriptstyle L}}{2}.$$ Consider the set of functions $\mathcal{F}_{\scriptscriptstyle CCC}=\{f_0,f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle C},f_{\scriptscriptstyle L}^{\scriptscriptstyle C}\}$. Using the algebraic manipulator Mathematica, we compute the Wronskian $W(f_0, f_{\scriptscriptstyle C}^{\scriptscriptstyle C},$ $f_{\scriptscriptstyle R}^{\scriptscriptstyle C},f_{\scriptscriptstyle L}^{\scriptscriptstyle C})(h)$ and evaluate it at a point of the interval $(0,\infty)$.
More precisely, $W(f_0, f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle C},f_{\scriptscriptstyle L}^{\scriptscriptstyle C})(0.2)=-2.92151$. Then, the set of functions $\mathcal{F}_{\scriptscriptstyle CCC}$ is linearly independent on the interval $(0,\infty)$ and, by Proposition \ref{prop:li1}, there are $h_i\in(0,\infty)$, with $i=1,2,3$, such that $M(h_i)=0$. Therefore, for $0<\epsilon\ll1$, the system \eqref{eq:01} has a unique limit cycle near $L_{h_i}$, for each $i=1,2,3$, i.e. the system \eqref{eq:01} has at least three limit cycles. \end{proof} \begin{corollary}[Case CCC -- Fig. \ref{fig:04} (b)]\label{ccc-b} Consider the system \eqref{eq:01} with $a_{\scriptscriptstyle L}=b_{\scriptscriptstyle R}=1$, $\beta_{\scriptscriptstyle L}=-3$, $b_{\scriptscriptstyle L}=2$, $c_{\scriptscriptstyle L}=c_{\scriptscriptstyle R}=-1$ and $a_{\scriptscriptstyle R}=\beta_{\scriptscriptstyle R}=0$. Then, for $0<\epsilon\ll1$, the system \eqref{eq:01} has at least three limit cycles. \end{corollary} \begin{proof} For $a_{\scriptscriptstyle L}=b_{\scriptscriptstyle R}=1$, $\beta_{\scriptscriptstyle L}=-3$, $b_{\scriptscriptstyle L}=2$, $c_{\scriptscriptstyle L}=c_{\scriptscriptstyle R}=-1$ and $a_{\scriptscriptstyle R}=\beta_{\scriptscriptstyle R}=0$, the eigenvalues of the linear part of the left, central and right subsystems from $\eqref{eq:01}|_{\epsilon=0}$ are $\pm i$, $\pm i$ and $\pm i$, respectively, i.e. we have three centers. Moreover, the coordinates of the equilibrium points of the left and right subsystems from $\eqref{eq:01}|_{\epsilon=0}$ are $(-5,2)$ and $(0,0)$, respectively, and so we have a real center and a virtual center.
Therefore, the first-order Melnikov function from Theorem \ref{theo:ccc} becomes $$M(h)=k_0f_0(h)+k_{\scriptscriptstyle C}^{\scriptscriptstyle C}f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle R}^{\scriptscriptstyle C}f_{\scriptscriptstyle R}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle L}^{\scriptscriptstyle C}f_{\scriptscriptstyle L}^{\scriptscriptstyle C}(h),\quad \forall \,h\in(0,\infty),$$ with \begin{equation*} \begin{aligned} f_0(h)&= h, \\ f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)&=(h^2+1)\arccos\bigg(\frac{h^2-1}{h^2+1}\bigg),\\ f_{\scriptscriptstyle R}^{\scriptscriptstyle C}(h)&=(h^2-1)\arccos\bigg(\frac{1-h^2}{h^2+1}\bigg),\\ f_{\scriptscriptstyle L}^{\scriptscriptstyle C}(h)&=(h^2+4)\arccos\bigg(\frac{4-h^2}{h^2+4}\bigg), \end{aligned} \end{equation*} $$k_0=2(u_{\scriptscriptstyle C}-p_{\scriptscriptstyle C}+r_{\scriptscriptstyle R}+u_{\scriptscriptstyle L})-r_{\scriptscriptstyle L}-u_{\scriptscriptstyle R}+p_{\scriptscriptstyle R}+3p_{\scriptscriptstyle L},\quad k_{\scriptscriptstyle C}^{\scriptscriptstyle C}=p_{\scriptscriptstyle C}+u_{\scriptscriptstyle C},$$ $$ k_{\scriptscriptstyle R}^{\scriptscriptstyle C}=\frac{p_{\scriptscriptstyle R}+u_{\scriptscriptstyle R}}{2}\quad\text{and}\quad k_{\scriptscriptstyle L}^{\scriptscriptstyle C}=\frac{p_{\scriptscriptstyle L}+u_{\scriptscriptstyle L}}{2}.$$ Consider the set of functions $\mathcal{F}_{\scriptscriptstyle CCC}=\{f_0,f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle C},f_{\scriptscriptstyle L}^{\scriptscriptstyle C}\}$. Using the algebraic manipulator Mathematica, we compute the Wronskian $W(f_0, f_{\scriptscriptstyle C}^{\scriptscriptstyle C},$ $f_{\scriptscriptstyle R}^{\scriptscriptstyle C},f_{\scriptscriptstyle L}^{\scriptscriptstyle C})(h)$ and evaluate it at a point of the interval $(0,\infty)$.
More precisely, $W(f_0, f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle C},f_{\scriptscriptstyle L}^{\scriptscriptstyle C})(0.5)=7.2124$. Then, the set of functions $\mathcal{F}_{\scriptscriptstyle CCC}$ is linearly independent on the interval $(0,\infty)$ and, by Proposition \ref{prop:li1}, there are $h_i\in(0,\infty)$, with $i=1,2,3$, such that $M(h_i)=0$. Therefore, for $0<\epsilon\ll1$, the system \eqref{eq:01} has a unique limit cycle near $L_{h_i}$, for each $i=1,2,3$, i.e. the system \eqref{eq:01} has at least three limit cycles. \end{proof} \begin{corollary}[Case CCC -- Fig. \ref{fig:04} (c)]\label{ccc-c} Consider the system \eqref{eq:01} with $a_{\scriptscriptstyle L}=b_{\scriptscriptstyle R}=1$, $\beta_{\scriptscriptstyle L}=-3$, $b_{\scriptscriptstyle L}=\beta_{\scriptscriptstyle R}=2$, $c_{\scriptscriptstyle L}=c_{\scriptscriptstyle R}=-1$ and $a_{\scriptscriptstyle R}=0$. Then, for $0<\epsilon\ll1$, the system \eqref{eq:01} has at least three limit cycles. \end{corollary} \begin{proof} For $a_{\scriptscriptstyle L}=b_{\scriptscriptstyle R}=1$, $\beta_{\scriptscriptstyle L}=-3$, $b_{\scriptscriptstyle L}=\beta_{\scriptscriptstyle R}=2$, $c_{\scriptscriptstyle L}=c_{\scriptscriptstyle R}=-1$ and $a_{\scriptscriptstyle R}=0$, the eigenvalues of the linear part of the left, central and right subsystems from $\eqref{eq:01}|_{\epsilon=0}$ are $\pm i$, $\pm i$ and $\pm i$, respectively, i.e. we have three centers. Moreover, the coordinates of the equilibrium points of the left and right subsystems from $\eqref{eq:01}|_{\epsilon=0}$ are $(-5,2)$ and $(2,0)$, respectively, and so the centers are real.
Therefore, the first-order Melnikov function from Theorem \ref{theo:ccc} becomes $$M(h)=k_0f_0(h)+k_{\scriptscriptstyle C}^{\scriptscriptstyle C}f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle R}^{\scriptscriptstyle C}f_{\scriptscriptstyle R}^{\scriptscriptstyle C}(h)+k_{\scriptscriptstyle L}^{\scriptscriptstyle C}f_{\scriptscriptstyle L}^{\scriptscriptstyle C}(h),\quad \forall \,h\in(0,\infty),$$ with \begin{equation*} \begin{aligned} f_0(h)&= h, \\ f_{\scriptscriptstyle C}^{\scriptscriptstyle C}(h)&=(h^2+1)\arccos\bigg(\frac{h^2-1}{h^2+1}\bigg),\\ f_{\scriptscriptstyle R}^{\scriptscriptstyle C}(h)&=(h^2-1)\arccos\bigg(\frac{1-h^2}{h^2+1}\bigg),\\ f_{\scriptscriptstyle L}^{\scriptscriptstyle C}(h)&=(h^2+4)\arccos\bigg(\frac{4-h^2}{h^2+4}\bigg), \end{aligned} \end{equation*} $$k_0=2(u_{\scriptscriptstyle C}-p_{\scriptscriptstyle C}+r_{\scriptscriptstyle R}+u_{\scriptscriptstyle L})-r_{\scriptscriptstyle L}+u_{\scriptscriptstyle R}+3(p_{\scriptscriptstyle L}+p_{\scriptscriptstyle R}),\quad k_{\scriptscriptstyle C}^{\scriptscriptstyle C}=p_{\scriptscriptstyle C}+u_{\scriptscriptstyle C},$$ $$ k_{\scriptscriptstyle R}^{\scriptscriptstyle C}=\frac{p_{\scriptscriptstyle R}+u_{\scriptscriptstyle R}}{2}\quad\text{and}\quad k_{\scriptscriptstyle L}^{\scriptscriptstyle C}=\frac{p_{\scriptscriptstyle L}+u_{\scriptscriptstyle L}}{2}.$$ Consider the set of functions $\mathcal{F}_{\scriptscriptstyle CCC}=\{f_0,f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle C},f_{\scriptscriptstyle L}^{\scriptscriptstyle C}\}$. Using the algebraic manipulator Mathematica, we compute the Wronskian $W(f_0, f_{\scriptscriptstyle C}^{\scriptscriptstyle C},$ $f_{\scriptscriptstyle R}^{\scriptscriptstyle C},f_{\scriptscriptstyle L}^{\scriptscriptstyle C})(h)$ and evaluate it at a point of the interval $(0,\infty)$.
More precisely, $W(f_0, f_{\scriptscriptstyle C}^{\scriptscriptstyle C},f_{\scriptscriptstyle R}^{\scriptscriptstyle C},f_{\scriptscriptstyle L}^{\scriptscriptstyle C})(0.5)=7.2124$. Then, the set of functions $\mathcal{F}_{\scriptscriptstyle CCC}$ is linearly independent on the interval $(0,\infty)$ and, by Proposition \ref{prop:li1}, there are $h_i\in(0,\infty)$, with $i=1,2,3$, such that $M(h_i)=0$. Therefore, for $0<\epsilon\ll1$, the system \eqref{eq:01} has a unique limit cycle near $L_{h_i}$, for each $i=1,2,3$, i.e. the system \eqref{eq:01} has at least three limit cycles. \end{proof} \section{Acknowledgments} The first author is partially supported by S\~ao Paulo Research Foundation (FAPESP) grants 19/10269-3 and 18/19726-5. The second author is supported by CAPES grant 88882.434343/2019-01. \addcontentsline{toc}{chapter}{Bibliography} \end{document}
\begin{document} \title{Discrete, Tunable Color Entanglement} \author{S.~Ramelow$^{\dagger,1,2}$, L.~Ratschbacher$^{\dagger,1}$, A.~Fedrizzi$^{1,2,3}$, N.~K.~Langford$^{1,2}$ and A.~Zeilinger$^{1,2}$} \affiliation{ $^1$Institute for Quantum Optics and Quantum Information, Austrian Academy of Sciences, Boltzmanngasse 3, A-1090 Vienna, Austria \\ $^2$Faculty of Physics, University of Vienna, Boltzmanngasse 5, A-1090 Vienna, Austria \\ $^3$University of Queensland, Brisbane 4072, Australia } \begin{abstract} Although frequency multiplexing of information has revolutionized the field of classical communications, the color degree of freedom (DOF) has been used relatively little for quantum applications. We experimentally demonstrate a new hybrid quantum gate that transfers polarization entanglement of non-degenerate photons onto the color DOF. We create, for the first time, high quality, discretely color-entangled states (with energy bandgap up to 8.4 THz) without any filtering or postselection, and unambiguously verify and quantify the amount of entanglement (tangle, $0.611{\pm}0.009$) by reconstructing a restricted density matrix; we generate a range of maximally entangled states, including a set of mutually unbiased bases for an encoded qubit space. The technique can be generalized to transfer polarization entanglement onto other photonic DOFs, like orbital angular momentum. \end{abstract} \pacs{42.50Dv} \maketitle Color, or frequency, is one of the most familiar degrees of freedom (DOFs) of light and has been routinely analyzed in spectroscopy for centuries. However, although frequency multiplexing of information has had a profound impact on classical telecommunications, little work has aimed at exploiting the frequency DOF for quantum-based information technologies. 
A key ingredient in many such technologies is discretely encoded entanglement, which has been extensively investigated for other optical degrees of freedom (e.g., \cite{kwiat_new_1995, KwiatPG1999a, KimT2006a, AlessandroSource, KwiatPG1993a, timeEntanglement, ThewRT2004a, RarityJG1990a, OAMentanglement, LangfordNK2004a}). In contrast, discrete frequency entanglement has not yet been unambiguously demonstrated, despite potentially interesting applications such as enhanced clock synchronization beyond the classical limit~\cite{giovannetti_quantum-enhanced_2001,de_burgh_quantum_2005} and improved quantum communication in noisy channels~\cite{xiao_efficient_2008}. Flying qubits encoded in tunable frequency bins would also be an ideal mediator between stationary qubits with different energy levels; e.g., very recently the state of two photons emitted by two separate Yb ions was projected onto a discrete frequency-entangled state, allowing the creation of entanglement and realization of teleportation between the ions~\cite{olmschenk_quantum_2009}. Finally, the higher-dimensional Hilbert space accessible with the color DOF has known benefits for quantum communication~\cite{FujiwaraM2003a,WangC2005a} and quantum cryptography~\cite{BrussD2002a,CerfNJ2002a, SpekkensRW2001a}, and would also allow the exploration of fundamental questions about quantum mechanics~\cite{kochen-specker}. Continuous frequency entanglement between photon pairs arises naturally in spontaneous parametric down-conversion (SPDC) experiments as a consequence of energy conservation \cite{oumandel, raritybeating, KwiatPG1993a, alessandroarxive}. It is often, however, much simpler to control and use entanglement between systems with discrete, well-separated basis states (cf.\ time-bin entanglement~\cite{timeEntanglement, ThewRT2004a}). 
A simple discrete color-entangled state would be $(\ket{\omega_1}\ket{\omega_2}{+}\ket{\omega_2}\ket{\omega_1})/\sqrt{2}$, where $\ket{\omega_j}$ represent single-photon states occupying discrete, well-separated frequency bins. Although such a state can be \emph{postselected} from broadband, continuous frequency entanglement (e.g., as in \cite{oumandel, raritybeating}), for most quantum applications it is necessary to explicitly create such a state without postselection. There have been some proposals and attempts to create discrete color entanglement in nonlinear waveguides~\cite{booth_counterpropagating_2002,ravaro_nonlinear_2005}. To date, however, no experiment has been able to conclusively show the creation or quantitative measurement of discretely color-entangled photons. Here we report the first experimental demonstration of genuine discretely color-entangled states, created without any spectral filtering or postselection. We used a hybrid quantum gate (HQG), a gate that acts simultaneously on different DOFs, which can deterministically transfer polarization entanglement onto color entanglement, and we unambiguously verified and quantified this entanglement using non-classical interference. We also demonstrated full control over the frequency separation and phase of the created states, while maintaining a high fidelity. \begin{figure} \caption{(Color online) Schematic of the experimental setup. (a) Source of polarization-entangled photon pairs with tunable central frequencies. (b) The hybrid quantum gate's polarizing beam splitter (PBS) maps the polarization entanglement onto the color degree of freedom. Subsequently projecting onto diagonal (D) linear polarization with polarizers (POL) at 45$^{\circ}$ generates the discretely color-entangled state.
(c) The state is analyzed by two-photon interference at a fiber beamsplitter (FBS); Si-APD single-photon detectors and coincidence counting (CC) logic measure the coincidence rate as a function of temporal delay between modes.} \label{fig:figure1} \end{figure} In our experiment (Fig.~\ref{fig:figure1}), a tunable source of polarization entanglement based on continuous-wave SPDC~\cite{AlessandroSource} generates fibre-coupled photon pairs close to a pure state: \begin{equation}\label{eq:polentangledstate} \ket{\psi_{\rm in}}=(\alpha\ket{H}_1\ket{H}_2+e^{i\phi}\beta\ket{V}_1\ket{V}_2)\otimes\ket{\omega_1}_1\ket{\omega_2}_2, \end{equation} where $\alpha^2+\beta^2=1$, $H$ and $V$ denote horizontal and vertical polarization, and $\omega_j$ is the central frequency of mode $j$. This notation neglects the spectral entanglement within the single-photon bandwidth, which was much less than the photons' frequency separation, $\mu = \omega_1 {-} \omega_2$. By varying the temperature of the source's nonlinear (ppKTP) crystal, we continuously tuned the photon frequencies from degeneracy (809.6 nm at 25.1$^\circ$C) to a maximum separation of 8.4 THz (18.3 nm) at 68.1$^\circ$C while maintaining high-quality polarization entanglement~\cite{AlessandroSource}. We controlled the polarization state with wave plates. Single-mode fibers connect the source to the inputs of the hybrid gate depicted in Fig.~\ref{fig:figure1}(b). The PBS maps the state $\ket{\omega_1}_1$, depending on its polarization, to $\ket{\omega_1}_3$ ($H$) or $\ket{\omega_1}_4$ ($V$), and similarly for the state $\ket{\omega_2}_2$. This transfers the existing polarization entanglement onto color with the resulting \emph{hypoentangled}~\cite{nathanphd,hypoentangled} multi-DOF state: \begin{equation}\label{eq:hypocolorstate} \ket{\psi_{\rm hypo}}=\alpha\ket{H\omega_1}_3\ket{H\omega_2}_4+e^{i\phi}\beta\ket{V\omega_2}_3\ket{V\omega_1}_4.
\end{equation} To create the desired state, the frequency entanglement must then be decoupled from the polarization DOF. This can be achieved deterministically by selectively rotating the polarization of one of the two frequencies (e.g., using dual-wavelength wave plates). For simplicity, we instead chose to erase the polarization information probabilistically by projecting both photons onto diagonal polarization using polarizers at 45$^\circ$. We erased temporal distinguishability between input photons by translating fibre coupler 2 to maximise the non-classical interference visibility at the PBS for degenerate photons. Finally, we compensated for unwanted birefringent effects of the PBS using wave plates in one arm. The gate output is then: \begin{equation}\label{eq:purecolorstate} \ket{\psi_{\rm out}}=\alpha\ket{\omega_1}_3\ket{\omega_2}_4+e^{i\phi}\beta\ket{\omega_2}_3\ket{\omega_1}_4. \end{equation} The parameters defining this state can be set by preparing an appropriate polarization input state (Eq.~\ref{eq:polentangledstate}). \begin{figure} \caption{(Color online) Analysis of the discretely color-entangled state. a) Single-photon spectra for modes 3 and 4; frequency separation is 2.1 THz (4.6 nm). The observed width of each bin is limited by the single-photon spectrometer. b) Normalized (i) coincidence and (ii) singles count rates as a function of delay in mode 4. The solid line in (i) is a fit of Eq.~\ref{eq:beatingfunction} to determine $V$ and the phase $\phi$. c) The estimated restricted density matrix: target-state fidelity, $0.891{\pm}0.003$; tangle, $0.611{\pm}0.009$; and purity, $0.801{\pm}0.004$.} \label{fig:figure2} \end{figure} To explore the performance of the hybrid gate, we first injected photon pairs close to the polarization state $(\ket{H}_1\ket{H}_2{-}\ket{V}_1\ket{V}_2)/\sqrt{2}$ with individual wavelengths 811.9 nm and 807.3 nm. 
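The frequency separations quoted throughout follow directly from the wavelengths via $f=c/\lambda$. A minimal Python sketch of this bookkeeping (the symmetric placement of the extreme bins about the 809.6 nm degeneracy point is an assumption for illustration):

```python
# Cross-check of the quoted frequency separations from the wavelengths
# given in the text: 811.9 nm / 807.3 nm for the 2.1 THz state, and an
# 18.3 nm spread (assumed symmetric about 809.6 nm) for the 8.4 THz maximum.
c = 299_792_458.0  # speed of light, m/s

def detuning_thz(lam1_nm, lam2_nm):
    """Frequency separation |c/lam1 - c/lam2| in THz."""
    return abs(c / (lam1_nm * 1e-9) - c / (lam2_nm * 1e-9)) / 1e12

print(detuning_thz(807.3, 811.9))                     # state of Fig. 2
print(detuning_thz(809.6 - 18.3 / 2, 809.6 + 18.3 / 2))  # maximum separation
```

Both values land on the 2.1 THz and 8.4 THz figures quoted in the text to within rounding.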
The gate should then ideally produce the discrete, anticorrelated color-entangled state: $\ket{\psi}=(\ket{\omega_1}_3\ket{\omega_2}_4{-}\ket{\omega_2}_3\ket{\omega_1}_4)/\sqrt{2}$. Figure~\ref{fig:figure2}a) shows the unfiltered single-photon spectra of the two output modes, illustrating that each photon is measured at either $\omega_1$ or $\omega_2$. This reflects a curious feature of discretely color-entangled states: individual photons have no well-defined color, and no photon is ever observed at the ``mean-value'' frequency. This feature clearly distinguishes our experiment from the continuous frequency entanglement studied in earlier work~\cite{oumandel, raritybeating, KwiatPG1993a}. Because the detuning, $\mu$ (2.1 THz, or 4.6 nm), is much larger than the FWHM bandwidth of the individual color modes of 0.66 nm (0.30 THz; defined by the 10 mm nonlinear crystal), the two modes are truly orthogonal, making them good logical states for a frequency-bin qubit. This orthogonality also means that color anticorrelations are strictly enforced by energy conservation, because a single down-conversion event cannot produce two photons in the same frequency bin. We confirmed this by directly measuring the gate output in the frequency-bin computational basis (i.e.\ with bandpass filters in each arm tuned to $\omega_1$ or $\omega_2$). We observed strong, comparable coincidence rates for the two ``anticorrelated'' basis states ($10882\pm104$ and $9068\pm95$ in 30 s for $\ket{\omega_1}_3\ket{\omega_2}_4$ and $\ket{\omega_2}_3\ket{\omega_1}_4$, respectively), and no coincidences for the same-frequency states ($\ket{\omega_1}_3\ket{\omega_1}_4$ and $\ket{\omega_2}_3\ket{\omega_2}_4$) to within error bars determined by the filters' finite extinction ratios.
To demonstrate that the color state was not only anticorrelated but genuinely entangled, we used nonclassical two-photon interference~\cite{HongCK1987a}, overlapping the photons at a 50:50 fibre beam splitter (FBS) (Fig.~\ref{fig:figure1}c) and varying their relative arrival time by translating fibre coupler 4 while observing the output coincidences. The results in Fig.~\ref{fig:figure2}b) show high-visibility sinusoidal oscillations (frequency $\mu$) within a triangular envelope caused by the \emph{unfiltered} ``sinc-squared'' spectral distribution of the source~\cite{AlessandroSource,triangledip}. At the central delay, the normalized coincidence probability reaches up to $0.881\pm0.007$, far above ($>50\sigma$) the baseline level of $0.5$. This antibunching is an unambiguous signature of antisymmetric entanglement \cite{MattleK1996a,BStheory,alessandroarxive} and, in conjunction with the previous measurements, conclusively demonstrates that our discrete color state is strongly entangled. As expected, the single-photon detection rates (Fig.~\ref{fig:figure2}b) exhibit negligible interference effects. It is important to note that this signature is similar to those observed in earlier ``spatial quantum beating'' experiments~\cite{oumandel, raritybeating}. As demonstrated by Kaltenbaek et al.~\cite{KaltenbaekR2009a}, however, observing this signature in different contexts does not necessarily lead to the same conclusions. In previous experiments, the observed signal was only \emph{postselected} from broadband continuous frequency entanglement, and at no point could the quantum state of the photons emitted by the source be described solely as a discretely color-entangled state, uncoupled from other DOFs. By contrast, our measurements do not rely on any spectral postselection but, as supported by the single-photon spectra, result directly from unfiltered discrete color entanglement.
We now show how we can combine the above measurements to estimate a restricted density matrix in color space. We first recall that energy conservation in the SPDC pair source and during photon propagation constrains the state to the two-dimensional anticorrelated subspace of the two-qubit color space (before and after the gate). This is a physical constraint, validated by the measurements in the computational basis. The complete density matrix within this subspace can be written (in the computational basis, $\{ \ket{\omega_1}_3\ket{\omega_1}_4, \ket{\omega_1}_3\ket{\omega_2}_4, \ket{\omega_2}_3\ket{\omega_1}_4, \ket{\omega_2}_3\ket{\omega_2}_4\}$): \begin{equation}\label{eq:colorentangledstate} \rho= \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & p & \frac{V}{2}\,e^{-i\phi} & 0 \\ 0 & \frac{V}{2}\,e^{i\phi} & 1-p & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \end{equation} with real parameters that obey the physicality constraints: $0\leq p\leq 1$ and $0\leq \frac{V}{2}\leq \sqrt{p(1-p)}$. Any detection events outside this subspace arise from higher-order emissions and accidental coincidences, and also lie outside the full two-qubit space. Our computational-basis measurements showed that these vanished to within error bars, and we directly calculated the balance parameter, $p=0.546{\pm}0.004$ (using Poissonian errors). We estimated the remaining parameters by fitting them to the nonclassical interference signal.
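For illustration, the balance parameter and its Poissonian error follow directly from the computational-basis coincidence counts quoted earlier; a minimal sketch:

```python
# Balance parameter p of the restricted density matrix from the
# computational-basis coincidence counts (10882 and 9068 events in 30 s),
# with a Poissonian error propagated through p = N1/(N1 + N2).
import math

n1, n2 = 10882, 9068          # |w1>|w2> and |w2>|w1> coincidences
total = n1 + n2
p = n1 / total
# sigma_Ni = sqrt(Ni) propagated through the ratio:
sigma_p = math.sqrt(n1 * n2 / total**3)
print(f"p = {p:.3f} +/- {sigma_p:.3f}")
```

The raw counts give $p\approx0.545{\pm}0.004$, consistent with the quoted $0.546{\pm}0.004$; the last digit is sensitive to details such as accidental-count subtraction.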
For the above density matrix, given the source's spectral properties, we analytically calculated the expected interference probability, $p_c$, to be (following~\cite{alessandroarxive}): \begin{equation} p_c(\tau) = \frac{1}{2}-\frac{V}{2}\cos(\mu\tau{+}\phi)\left(1-\left|\frac{2\tau}{\tau_c}\right|\right) \quad \text{for }|\tau|<\frac{\tau_c}{2}, \label{eq:beatingfunction} \end{equation} where the coherence time $\tau_c$ is the base-to-base envelope width, related to the single-photon frequency bandwidth via $\Delta f_{FWHM} = 0.885/\tau_c \sim 0.3$~THz. The remaining elements $V$ and $\phi$ can be identified as the visibility and phase of the oscillating signal and can therefore be estimated using curve fitting (for this state, $V= 0.782 {\pm} 0.006$ and $\phi = 179.2 {\pm} 0.4^\circ$). The resulting density matrix (Fig.~\ref{fig:figure2}c) is strongly entangled, with a target-state fidelity of $0.891{\pm}0.003$, tangle~\cite{CoffmanV2000a} of $0.611{\pm}0.009$, and purity of $0.801{\pm}0.004$ (error bars include Poissonian and fitting errors). This is the first quantitative measurement of the entanglement of any color-entangled state. \begin{figure} \caption{(Color online) Two-photon interference for color-entangled states with three different frequency separations (and corresponding crystal temperatures): a) 1.7 THz (3.8 nm), 33.7$^\circ$C; b) 3.6 THz (7.9 nm), 43.7$^\circ$C; and c) 8.4 THz (18.3 nm), 68.1$^\circ$C. Solid lines show the curve fit according to Eq.~(\ref{eq:beatingfunction}) with $V$, $\mu$ and $\phi$ as fitting parameters. 
The insets show the measured single-photon spectra for both modes of each state.} \label{fig:figure3} \end{figure} Several error sources in our experiment contributed cumulatively to unwanted photon distinguishability in the final colour state and reduced the measured entanglement, including: imperfect input polarization states, imperfect mode matching and residual polarization misalignment at the PBS, the finite PBS extinction ratio, and a slightly asymmetric FBS splitting. Accidental coincidence counts caused by detector dark counts and higher-order SPDC contributions were negligible. To illustrate the flexibility of the hybrid gate, we analysed a series of output states for different frequency detunings $\mu$ and phases $\phi$. We first tuned $\mu$ by varying the crystal temperature in the source, and the results (Fig.~\ref{fig:figure3}) agree well with Eq.~(\ref{eq:beatingfunction}). The source enabled us to reach a detuning of 18.3 nm (8.4 THz), about 30 times the individual color-bin bandwidths. The detunings estimated from curve fitting matched the single-photon spectra. \begin{figure} \caption{(Color online) (a) Coincidence probabilities after the FBS as a function of the delay for 13 different, close to maximally entangled discrete color states. The phase of the oscillation pattern is proportional to the phase of the original polarization-entangled state. (c) Four close to maximally entangled discrete color states that represent two unbiased bases.} \label{fig:figure4} \end{figure} We next prepared discrete color states of the form $(\ket{\omega_1}_3\ket{\omega_2}_4{+}e^{i\phi}\ket{\omega_2}_3\ket{\omega_1}_4)/\sqrt{2}$ with varying phase ($\phi=0^\circ,30^\circ,...,360^\circ$) (Fig.~\ref{fig:figure4}). The measured states display an average target-state fidelity of $0.90\pm0.01$, a tangle of $0.63\pm0.03$, and a purity of $0.82\pm0.02$, demonstrating that the hybrid gate accurately preserves quantum information stored in the original polarisation state. 
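For concreteness, the fitted parameters can be turned back into the quoted figures of merit. The following sketch is our own illustration (not the authors' analysis code): it evaluates the beating signal of Eq.~(\ref{eq:beatingfunction}) and, assuming the target state is the antisymmetric state $(\ket{\omega_1}_3\ket{\omega_2}_4{-}\ket{\omega_2}_3\ket{\omega_1}_4)/\sqrt{2}$, the fidelity and tangle implied by the restricted density matrix of Eq.~(\ref{eq:colorentangledstate}).

```python
import numpy as np

# Illustrative sketch (ours): the beating signal of Eq. (2) -- an
# oscillation at frequency mu inside a triangular envelope of
# base-to-base width tau_c -- and the entanglement measures implied
# by the fitted parameters V and phi.
def p_coinc(tau, V, mu, phi, tau_c):
    """Normalised coincidence probability as a function of delay tau."""
    envelope = np.where(np.abs(tau) < tau_c / 2,
                        1.0 - np.abs(2 * tau / tau_c), 0.0)
    return 0.5 - 0.5 * V * np.cos(mu * tau + phi) * envelope

def fidelity(V, phi):
    """Overlap of Eq. (1) with the antisymmetric target state."""
    return 0.5 - 0.5 * V * np.cos(phi)

def tangle(V):
    """Squared concurrence; for Eq. (1) the concurrence is simply V."""
    return V ** 2

V, phi = 0.782, np.deg2rad(179.2)
print(round(float(fidelity(V, phi)), 3))  # 0.891, the quoted fidelity
print(round(tangle(V), 3))                # 0.612, consistent with 0.611(9)
```

Note that the zero-delay value of Eq.~(\ref{eq:beatingfunction}) coincides with this fidelity, which is why the height of the central interference peak directly certifies the entanglement.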
Note that, together with the product states $\ket{\omega_1}_3\ket{\omega_2}_4$ and $\ket{\omega_2}_3\ket{\omega_1}_4$, the entangled states with phase $0^\circ$, $90^\circ$, $180^\circ$ and $270^\circ$ constitute a full set of qubit mutually unbiased bases. This illustrates the states' potential usefulness for quantum protocols such as quantum cryptography. In this paper, we have for the first time conclusively demonstrated the creation, control and characterisation of high-quality, discretely color-entangled states, prepared without any spectral filtering or postselection using a hybrid quantum gate. We performed the first quantitative measurement of color entanglement using a novel technique for characterising the two-qubit color state within a restricted, antisymmetric subspace defined by energy conservation. Our hybrid gate can in fact be used to transfer polarization entanglement onto any desired photonic DOF ($\xi$), by preparing the input $\ket{\psi}_{\rm pol}\otimes\ket{\xi_1,\xi_2}$ and by appropriately erasing the polarization information after the PBS. Because the preparation of high-quality polarization states can be much easier than in other photonic DOFs, this gate represents a valuable tool for quantum information processing tasks in those DOFs. Our work also has important implications for the development of quantum memories and repeaters, because color-encoded information could provide a natural interface between flying and stationary qubits (such as single ions, atoms or atom ensembles) where information is encoded in different energy levels. Indeed, by inverting the procedure from~\cite{olmschenk_quantum_2009}, one could potentially entangle distant ions directly by letting them absorb a photon pair with the appropriate discrete color entanglement. 
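The mutual unbiasedness claimed above is easy to verify numerically: within the qubit subspace spanned by $\ket{\omega_1}_3\ket{\omega_2}_4$ and $\ket{\omega_2}_3\ket{\omega_1}_4$, the six states split into three bases, and any two states drawn from different bases overlap with probability exactly $1/2$. A minimal sketch of our own:

```python
import numpy as np

# Illustrative check (ours): represent each state as a 2-vector in the
# subspace {|w1 w2>, |w2 w1>} and test mutual unbiasedness.
def ent(phi_deg):
    """(|w1 w2> + e^{i phi} |w2 w1>)/sqrt(2) restricted to the subspace."""
    phi = np.deg2rad(phi_deg)
    return np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)

bases = [
    [np.array([1.0 + 0j, 0.0]), np.array([0.0 + 0j, 1.0])],  # product states
    [ent(0), ent(180)],    # entangled states, phases 0 and 180 degrees
    [ent(90), ent(270)],   # entangled states, phases 90 and 270 degrees
]

# States from different bases must overlap with probability 1/2.
for i in range(3):
    for j in range(3):
        for u in bases[i]:
            for v in bases[j]:
                if i != j:
                    assert np.isclose(abs(np.vdot(u, v)) ** 2, 0.5)
```

The three bases are simply the eigenbases of the three Pauli operators on this effective qubit, which is the standard qubit MUB construction.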
Finally, we note that non-postselected, discretely color-entangled states could also be extracted from sources of continuous spectral entanglement (such as traditional SPDC) using custom-designed multi-band-pass filters. Although this approach would not be as easily tunable and efficient as ours, it would allow access to higher-dimensional entangled states in the color DOF. During preparation of this paper, some related work was published by X.\ Li \emph{et al.}~\cite{LiX2009a}. The authors report the creation of frequency-entangled photons using four-wave mixing in nonlinear fibres, but, as in \cite{oumandel, raritybeating}, used two narrow-band filters to postselect the desired state from a broader spectral distribution, and color entanglement could not be demonstrated unambiguously. We would like to thank Thomas Jennewein and Bibiane Blauensteiner for useful ideas and support. This work has been supported by the FWF within SFB 015 P06, P20 and CoQuS (W1210), the European Commission Project QAP (No.\ 015846), and the DTO-funded U.S.\ Army Research Office QCCM program. \begin{thebibliography}{50} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \expandafter\ifx\csname bibnamefont\endcsname\relax \def\bibnamefont#1{#1}\fi \expandafter\ifx\csname bibfnamefont\endcsname\relax \def\bibfnamefont#1{#1}\fi \expandafter\ifx\csname citenamefont\endcsname\relax \def\citenamefont#1{#1}\fi \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}} \item[{$\dagger$\hphantom{\rbrack}}] These authors contributed equally to this work. \bibitem[{\citenamefont{Kwiat et~al.}(1995)}]{kwiat_new_1995} P.~G.~Kwiat \emph{et al.}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{75}}, \bibinfo{pages}{4337} (\bibinfo{year}{1995}). 
\bibitem[{\citenamefont{Kwiat et~al.}(1999)\citenamefont{Kwiat, Waks, White, Appelbaum, and Eberhard}}]{KwiatPG1999a} P.~G.~Kwiat \emph{et al.}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{60}}, \bibinfo{pages}{R773} (\bibinfo{year}{1999}). \bibitem[{\citenamefont{Kim et~al.}(2006)\citenamefont{Kim, Fiorentino, and Wong}}]{KimT2006a} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Kim}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Fiorentino}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{F.~N.~C.} \bibnamefont{Wong}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{73}}, \bibinfo{pages}{012316} (\bibinfo{year}{2006}). \bibitem[{\citenamefont{Fedrizzi et~al.}(2007)\citenamefont{Fedrizzi, Herbst, Poppe, Jennewein, and Zeilinger}}]{AlessandroSource} A.~Fedrizzi \emph{et al.}, \bibinfo{journal}{Opt. Express} \textbf{\bibinfo{volume}{15}}, \bibinfo{pages}{15377} (\bibinfo{year}{2007}). \bibitem[{\citenamefont{Kwiat et~al.}(1993)\citenamefont{Kwiat, Steinberg, and Chiao}}]{KwiatPG1993a} \bibinfo{author}{\bibfnamefont{P.~G.} \bibnamefont{Kwiat}}, \bibinfo{author}{\bibfnamefont{A.~M.} \bibnamefont{Steinberg}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.~Y.} \bibnamefont{Chiao}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{47}}, \bibinfo{pages}{R2472} (\bibinfo{year}{1993}). \bibitem[{\citenamefont{Brendel et~al.}(1999)\citenamefont{Brendel, Gisin, Tittel, and Zbinden}}]{timeEntanglement} J.~Brendel \emph{et al.}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{82}}, \bibinfo{pages}{2594} (\bibinfo{year}{1999}). \bibitem[{\citenamefont{Thew et~al.}(2004)\citenamefont{Thew, Ac\'{i}n, Zbinden, and Gisin}}]{ThewRT2004a} R.~T.~Thew \emph{et al.}, \bibinfo{journal}{Quantum Inf. Comput.} \textbf{\bibinfo{volume}{4}}, \bibinfo{pages}{93} (\bibinfo{year}{2004}). 
\bibitem[{\citenamefont{Rarity and Tapster}(1990{\natexlab{a}})}]{RarityJG1990a} \bibinfo{author}{\bibfnamefont{J.~G.} \bibnamefont{Rarity}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.~R.} \bibnamefont{Tapster}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{64}}, \bibinfo{pages}{2495} (\bibinfo{year}{1990}{\natexlab{a}}). \bibitem[{\citenamefont{Mair et~al.}(2001)}]{OAMentanglement} A.~Mair \emph{et al.}, \bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{412}}, \bibinfo{pages}{313} (\bibinfo{year}{2001}). \bibitem[{\citenamefont{Langford et~al.}(2004)\citenamefont{Langford, Dalton, Harvey, O'Brien, Pryde, Gilchrist, Bartlett, and White}}]{LangfordNK2004a} N.~K.~Langford \emph{et al.}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{93}}, \bibinfo{pages}{053601} (\bibinfo{year}{2004}). \bibitem[{\citenamefont{de~Burgh and Bartlett}(2005)}]{de_burgh_quantum_2005} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{de~Burgh}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.~D.} \bibnamefont{Bartlett}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{72}}, \bibinfo{pages}{042301} (\bibinfo{year}{2005}). \bibitem[{\citenamefont{Giovannetti et~al.}(2001)\citenamefont{Giovannetti, Lloyd, and Maccone}}]{giovannetti_quantum-enhanced_2001} \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Giovannetti}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Lloyd}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Maccone}}, \bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{412}}, \bibinfo{pages}{417} (\bibinfo{year}{2001}). \bibitem[{\citenamefont{Xiao et~al.}(2008)\citenamefont{Xiao, Wang, Zhang, Huang, Peng, and Long}}]{xiao_efficient_2008} L.~Xiao \emph{et al.}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{77}}, \bibinfo{pages}{042315} (\bibinfo{year}{2008}). 
\bibitem[{\citenamefont{Olmschenk et~al.}(2009)\citenamefont{Olmschenk, Matsukevich, Maunz, Hayes, Duan, and Monroe}}]{olmschenk_quantum_2009} S.~Olmschenk \emph{et al.}, \bibinfo{journal}{Science} \textbf{\bibinfo{volume}{323}}, \bibinfo{pages}{486} (\bibinfo{year}{2009}). \bibitem[{\citenamefont{Fujiwara et~al.}(2003)\citenamefont{Fujiwara, Takeoka, Mizuno, and Sasaki}}]{FujiwaraM2003a} M.~Fujiwara \emph{et al.}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{90}}, \bibinfo{pages}{167906} (\bibinfo{year}{2003}). \bibitem[{\citenamefont{Wang et~al.}(2005)\citenamefont{Wang, Deng, Li, Liu, and Long}}]{WangC2005a} C.~Wang \emph{et al.}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{71}}, \bibinfo{pages}{044305} (\bibinfo{year}{2005}). \bibitem[{\citenamefont{Bruss and Macchiavello}(2002)}]{BrussD2002a} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Bruss}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Macchiavello}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{88}}, \bibinfo{pages}{127901} (\bibinfo{year}{2002}). \bibitem[{\citenamefont{Cerf et~al.}(2002)\citenamefont{Cerf, Bourennane, Karlsson, and Gisin}}]{CerfNJ2002a} N.~Cerf \emph{et al.}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{88}}, \bibinfo{pages}{127902} (\bibinfo{year}{2002}). \bibitem[{\citenamefont{Spekkens and Rudolph}(2001)}]{SpekkensRW2001a} \bibinfo{author}{\bibfnamefont{R.~W.} \bibnamefont{Spekkens}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Rudolph}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{65}}, \bibinfo{pages}{012310} (\bibinfo{year}{2001}). \bibitem{kochen-specker} S.~Kochen and E.~Specker, J. Math. Mech. \textbf{17}, 59 (1967). \bibitem[{\citenamefont{Rarity and Tapster}(1990)}]{raritybeating} \bibinfo{author}{\bibfnamefont{J.~G.} \bibnamefont{Rarity}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.~R.} \bibnamefont{Tapster}}, \bibinfo{journal}{Phys. Rev. 
A} \textbf{\bibinfo{volume}{41}}, \bibinfo{pages}{5139} (\bibinfo{year}{1990}). \bibitem[{\citenamefont{Ou and Mandel}(1988)}]{oumandel} \bibinfo{author}{\bibfnamefont{Z.~Y.} \bibnamefont{Ou}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Mandel}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{61}}, \bibinfo{pages}{54+} (\bibinfo{year}{1988}). \bibitem[{\citenamefont{Fedrizzi et~al.}(2008)\citenamefont{Fedrizzi, Herbst, Aspelmeyer, Barbieri, Jennewein, and Zeilinger}}]{alessandroarxive} A.~Fedrizzi \emph{et al.}, \bibinfo{journal}{arXiv:0807.4437} (\bibinfo{year}{2008}). \bibitem[{\citenamefont{Booth et~al.}(2002)\citenamefont{Booth, Atat\"{u}re, Giuseppe, Saleh, Sergienko, and Teich}}]{booth_counterpropagating_2002} M.~C.~Booth \emph{et al.} \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{66}}, \bibinfo{pages}{023815} (\bibinfo{year}{2002}). \bibitem[{\citenamefont{Ravaro et~al.}(2005)\citenamefont{Ravaro, Seurin, Ducci, Leo, Berger, Rossi, and Assanto}}]{ravaro_nonlinear_2005} M.~Ravaro \emph{et al.}, \bibinfo{journal}{J. Appl. Phys.} \textbf{\bibinfo{volume}{98}}, \bibinfo{pages}{063103} (\bibinfo{year}{2005}). \bibitem[{\citenamefont{Langford}(2007)}]{nathanphd} \bibinfo{author}{\bibfnamefont{N.~K.} \bibnamefont{Langford}}, Ph.D. thesis, \bibinfo{school}{University of Queensland} (\bibinfo{year}{2007}). \bibitem[{\citenamefont{Xi-Feng et~al.}(2006)\citenamefont{Xi-Feng, Guo-Ping, Jian, Chuan-Feng, and Guang-Can}}]{hypoentangled} R.~Xi-Feng \emph{et al.}, \bibinfo{journal}{Chinese Phys. Lett.} \textbf{\bibinfo{volume}{23}}, \bibinfo{pages}{552} (\bibinfo{year}{2006}). \bibitem[{\citenamefont{Hong et~al.}(1987)\citenamefont{Hong, Ou, and Mandel}}]{HongCK1987a} \bibinfo{author}{\bibfnamefont{C.~K.} \bibnamefont{Hong}}, \bibinfo{author}{\bibfnamefont{Z.~Y.} \bibnamefont{Ou}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Mandel}}, \bibinfo{journal}{Phys. Rev. 
Lett.} \textbf{\bibinfo{volume}{59}}, \bibinfo{pages}{2044} (\bibinfo{year}{1987}). \bibitem[{\citenamefont{Sergienko et~al.}(1995)\citenamefont{Sergienko, Shih, and Rubin}}]{triangledip} \bibinfo{author}{\bibfnamefont{A.~V.} \bibnamefont{Sergienko}}, \bibinfo{author}{\bibfnamefont{Y.~H.} \bibnamefont{Shih}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.~H.} \bibnamefont{Rubin}}, \bibinfo{journal}{J. Opt. Soc. Am. B} \textbf{\bibinfo{volume}{12}}, \bibinfo{pages}{859} (\bibinfo{year}{1995}). \bibitem[{\citenamefont{Mattle et~al.}(1996)\citenamefont{Mattle, Weinfurter, Kwiat, and Zeilinger}}]{MattleK1996a} K.~Mattle \emph{et al.}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{76}}, \bibinfo{pages}{4656} (\bibinfo{year}{1996}). \bibitem[{\citenamefont{Wang and Kaige}(2006)}]{BStheory} K.~Wang, \bibinfo{journal}{J. Phys. B: At. Mol. Opt. Phys.} \textbf{\bibinfo{volume}{39}}, \bibinfo{pages}{R293} (\bibinfo{year}{2006}). \bibitem{KaltenbaekR2009a} R.~Kaltenbaek, J.~Lavoie and K.~J.~Resch, \bibinfo{note}{unpublished preprint} (2009). \bibitem[{\citenamefont{Coffman et~al.}(2000)\citenamefont{Coffman, Kundu, and Wootters}}]{CoffmanV2000a} \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Coffman}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Kundu}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{W.~K.} \bibnamefont{Wootters}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{61}}, \bibinfo{pages}{052306} (\bibinfo{year}{2000}). \bibitem[{\citenamefont{Li et~al.}(2009)\citenamefont{Li, Yang, Ma, Cui, Ou, and Yu}}]{LiX2009a} X.~Li \emph{et al.}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{79}}, \bibinfo{eid}{033817} (\bibinfo{year}{2009}). \end{thebibliography} \end{document}
\begin{document} \baselineskip 16pt \title{On one generalization of finite nilpotent groups} \author{Zhang Chi \thanks{Research of the first author is supported by China Scholarship Council and NNSF of China (11771409)}\\ {\small Department of Mathematics, University of Science and Technology of China,}\\ {\small Hefei 230026, P. R. China}\\ {\small E-mail: zcqxj32@mail.ustc.edu.cn}\\ \\ { Alexander N. Skiba}\\ {\small Department of Mathematics and Technologies of Programming, Francisk Skorina Gomel State University,}\\ {\small Gomel 246019, Belarus}\\ {\small E-mail: alexander.skiba49@gmail.com}} \date{} \maketitle \begin{abstract} Let $\sigma =\{\sigma_{i} | i\in I\}$ be a partition of the set $\Bbb{P}$ of all primes and $G$ a finite group. A chief factor $H/K$ of $G$ is said to be \emph{$\sigma$-central} if the semidirect product $(H/K)\rtimes (G/C_{G}(H/K))$ is a $\sigma_{i}$-group for some $i=i(H/K)$. $G$ is called \emph{$\sigma$-nilpotent} if every chief factor of $G$ is $\sigma$-central. We say that $G$ is \emph{semi-${\sigma}$-nilpotent} (respectively \emph{weakly semi-${\sigma}$-nilpotent}) if the normalizer $N_{G}(A)$ of every non-normal (respectively every non-subnormal) $\sigma$-nilpotent subgroup $A$ of $G$ is $\sigma$-nilpotent. In this paper we determine the structure of finite semi-${\sigma}$-nilpotent and weakly semi-${\sigma}$-nilpotent groups. \end{abstract} \footnotetext{Keywords: finite group, ${\sigma}$-soluble group, ${\sigma}$-nilpotent group, semi-${\sigma}$-nilpotent group, weakly semi-${\sigma}$-nilpotent group.} \footnotetext{Mathematics Subject Classification (2010): 20D10, 20D15, 20D30} \let\thefootnote\thefootnoteorig \section{Introduction} Throughout this paper, all groups are finite and $G$ always denotes a finite group. Moreover, $\mathbb{P}$ is the set of all primes, $\pi \subseteq \Bbb{P}$ and $\pi' = \Bbb{P} \setminus \pi$. 
If $n$ is an integer, the symbol $\pi (n)$ denotes the set of all primes dividing $n$; as usual, $\pi (G)=\pi (|G|)$, the set of all primes dividing the order of $G$. In what follows, $\sigma =\{\sigma_{i} | i\in I\}$ is some partition of $\Bbb{P}$, that is, $\Bbb{P}=\bigcup_{i\in I} \sigma_{i}$ and $\sigma_{i}\cap \sigma_{j}= \emptyset $ for all $i\ne j$. By analogy with the notation $\pi (n)$, we write $\sigma (n)$ to denote the set $\{\sigma_{i} |\sigma_{i}\cap \pi (n)\ne \emptyset \}$; $\sigma (G)=\sigma (|G|)$. A group is said to be \emph{$\sigma$-primary} \cite{1} if it is a $\sigma _{i}$-group for some $i$. A chief factor $H/K$ of $G$ is said to be \emph{$\sigma$-central} (in $G$) \cite{1} if the semidirect product $(H/K)\rtimes (G/C_{G}(H/K))$ is $\sigma$-primary. A normal subgroup $E$ of $G$ is called \emph{$\sigma$-hypercentral} in $G$ if either $E=1$ or every chief factor of $G$ below $E$ is $\sigma$-central. Recall also that $G$ is called \emph{$\sigma$-nilpotent} \cite{1} if every chief factor of $G$ is $\sigma$-central. An arbitrary group $G$ has two canonical $\sigma$-nilpotent subgroups of particular importance in this context. The first of these is the \emph{$\sigma$-Fitting subgroup} $F_{\sigma}(G)$, that is, the product of all normal $\sigma$-nilpotent subgroups of $G$. The other useful subgroup is the \emph{$\sigma$-hypercentre $Z_{\sigma}(G)$ of $G$}, that is, the product of all $\sigma$-hypercentral subgroups of $G$. Note that in the classical case, when $\sigma = \sigma ^{1}=\{\{2\}, \{3\}, \ldots \}$ (we use here the notation in \cite{alg12}), $F_{\sigma}(G)=F(G)$ is the Fitting subgroup and $Z_{\sigma}(G)=Z_{\infty}(G)$ is the hypercentre of $G$. 
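To fix ideas, here is a small worked example of our own (not from the original text) showing that $\sigma$-nilpotency is strictly weaker than classical nilpotency for a coarse partition:

```latex
% A small illustrative example (ours). Let sigma = {{2,3}, {2,3}'}.
Let $\sigma=\{\{2,3\},\{2,3\}'\}$. Then $S_{4}$ is $\sigma$-nilpotent:
since $|S_{4}|=24=2^{3}\cdot 3$, the group $S_{4}$ is a $\{2,3\}$-group
and hence $\sigma$-primary, although $S_{4}$ is of course not nilpotent
in the classical case $\sigma=\sigma^{1}$. On the other hand, $A_{5}$ is
not $\sigma$-nilpotent for this $\sigma$: for the chief factor $A_{5}/1$
we have $C_{A_{5}}(A_{5}/1)=Z(A_{5})=1$, so the semidirect product
$(A_{5}/1)\rtimes (A_{5}/C_{A_{5}}(A_{5}/1))\simeq A_{5}\rtimes A_{5}$
has order divisible by both $2$ and $5$ and is therefore neither a
$\{2,3\}$-group nor a $\{2,3\}'$-group, i.e.\ the factor is not
$\sigma$-central.
```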
In fact, the $\sigma$-nilpotent groups are exactly the groups $G$ which can be written in the form $G=G_{1} \times \cdots \times G_{t}$ for some $\sigma$-primary groups $G_{1}, \ldots , G_{t}$ \cite{1}, and such groups have proved to be very useful in formation theory (see, in particular, the papers \cite{19, 20} and the books \cite[Ch. IV]{bookShem}, \cite[Ch. 6]{15}). In recent years, the $\sigma$-nilpotent groups have found new and to some extent unexpected applications in the theories of permutable and generalized subnormal subgroups (see, in particular, \cite{1, alg12}, \cite{3}--\cite{6} and the survey \cite{comm}). In view of the results in the paper \cite{belon}, the $\sigma$-nilpotent groups can be characterized as the groups in which the normalizer of any $\sigma$-nilpotent subgroup is $\sigma$-nilpotent. Groups in which the normalizers of all non-normal $\sigma$-nilpotent subgroups are $\sigma$-nilpotent may be non-$\sigma$-nilpotent (see Example 1.3 below), and in the case when $\sigma = \sigma ^{1}$ such groups have been described in \cite[Ch. 4, Section 7]{We} (see also \cite{Sah}). In this paper, we determine the structure of such groups $G$ for the case of an arbitrary $\sigma$. {\bf Definition 1.1.} We say that $G$ is (i) \emph{semi-${\sigma}$-nilpotent} if the normalizer of every non-normal $\sigma$-nilpotent subgroup of $G$ is $\sigma$-nilpotent; (ii) \emph{weakly semi-${\sigma}$-nilpotent} if the normalizer of every non-subnormal $\sigma$-nilpotent subgroup of $G$ is $\sigma$-nilpotent; (iii) \emph{weakly semi-nilpotent} if $G$ is weakly semi-${\sigma}^{1}$-nilpotent. {\bf Remark 1.2.} (i) Every ${\sigma}$-nilpotent group is semi-${\sigma}$-nilpotent, and every semi-${\sigma}$-nilpotent group is weakly semi-${\sigma}$-nilpotent. (ii) The semi-${\sigma}^{1}$-nilpotent groups are exactly the \emph{semi-nilpotent groups} studied in \cite[Ch. 4, Section 7]{We} (see also \cite{Sah}). 
(iii) We show that $G$ is (weakly) semi-${\sigma}$-nilpotent if and only if the normalizer of every non-normal (respectively non-subnormal) $\sigma$-primary subgroup of $G$ is $\sigma$-nilpotent. Since every $\sigma$-primary group is $\sigma$-nilpotent, it is enough to show that if the normalizer of every non-normal (respectively non-subnormal) $\sigma$-primary subgroup of $G$ is $\sigma$-nilpotent, then $G$ is semi-${\sigma}$-nilpotent (respectively weakly semi-${\sigma}$-nilpotent). So let $A$ be a non-normal (respectively non-subnormal) $\sigma$-nilpotent subgroup of $G$. First note that $A\ne 1$ and $A=A_{1} \times \cdots \times A_{n}$, where $\{A_{1}, \ldots , A_{n}\}$ is a complete Hall $\sigma$-set of $A$. The subgroups $A_{i}$ are characteristic in $A$, so $N_{G}(A)=N_{G}(A_{1}) \cap \cdots \cap N_{G}(A_{n})$. Since $A$ is non-normal (respectively non-subnormal) in $G$, there is $i$ such that $A_{i}$ is non-normal (respectively non-subnormal) in $G$, and hence $N_{G}(A_{i})$ is $\sigma$-nilpotent. Therefore $N_{G}(A)$ is $\sigma$-nilpotent by Lemma 2.2(i) below. Hence $G$ is semi-${\sigma}$-nilpotent (respectively weakly semi-${\sigma}$-nilpotent). {\bf Example 1.3.} Let $p > q > r > t > 2$ be primes, where $q$ divides $p-1$ and $t$ divides $r-1$, and let $\sigma =\{\{p\}, \{q\}, \{p, q\}'\}$. Let $R$ be the quaternion group of order 8, $A$ a group of order $p$, and let $B=C_{p}\rtimes C_{q}$ be a non-nilpotent group of order $pq$ and $C$ a non-nilpotent group of order $rt$. Then $B\times R$ is a non-$\sigma$-nilpotent semi-${\sigma}$-nilpotent group and $B \times C$ is not semi-${\sigma}$-nilpotent. Now let $G=A\times (Q\rtimes R)$, where $Q$ is a simple ${\mathbb F}_{q}R$-module which is faithful for $R$. Then for every subgroup $V$ of $R$ we have $N_{G}(V)=A\times R$, so $G$ is weakly semi-${\sigma}$-nilpotent. On the other hand, $QV$ is supersoluble for every subgroup $V$ of $R$ of order 2 and so for some subgroup $L$ of $Q$ with $1 < L < Q$ we have $V \leq N_{G}(L)$ and $[L, V]\ne 1$. Hence $G$ is not semi-${\sigma}$-nilpotent. 
Recall that $G^{{\mathfrak{N}}_{\sigma}}$ is the \emph{$\sigma$-nilpotent residual of $G$}, that is, the intersection of all normal subgroups $N$ of $G$ with $\sigma$-nilpotent quotient $G/N$. Our goal here is to determine the structure of weakly semi-${\sigma}$-nilpotent and semi-${\sigma}$-nilpotent groups. In fact, the following concept is an important tool to achieve such a goal. {\bf Definition 1.4.} Let $H$ be a ${\sigma}$-nilpotent subgroup of $G$. Then we say that $H$ is a \emph{$\sigma$-Carter subgroup} of $G$ if $H$ is an \emph{${\mathfrak{N}}_{\sigma}$-covering subgroup of $G$} \cite[p. 101]{15}, that is, $U^{{\mathfrak{N}}_{\sigma}}H=U$ for every subgroup $U$ of $G$ containing $H$. Note that in Example 1.3, the subgroup $C_{q}C$ is a $\sigma$-Carter subgroup of the group $B \times C$. It is clear also that a subgroup $H$ of a soluble group $G$ is a Carter subgroup of $G$ if and only if it is a $\sigma ^{1}$-Carter subgroup of $G$. A \emph{complete set of Sylow subgroups of $G$} contains exactly one Sylow $p$-subgroup for each prime $p$ dividing $|G|$. In general, we say that a set ${\cal H}$ of subgroups of $G$ is a \emph{complete Hall $\sigma $-set} of $G$ \cite{2, comm} if every member $\ne 1$ of ${\cal H}$ is a Hall $\sigma _{i}$-subgroup of $G$ for some $i$ and ${\cal H}$ contains exactly one Hall $\sigma _{i}$-subgroup of $G$ for every $\sigma _{i}\in \sigma (G)$. 
Our first result is the following {\bf Theorem A.} {\sl If $G$ is weakly semi-${\sigma}$-nilpotent, then:} (i) {\sl $G$ has a complete Hall $\sigma$-set $\{H_{1}, \ldots , H_{t}\}$ such that for some $1\leq r \leq t$ the subgroups $H_{1}, \ldots , H_{r}$ are normal in $G$, $H_{i}$ is not normal in $G$ for all $i > r$, and $$\langle H_{r+1}, \ldots , H_{t} \rangle =H_{r+1}\times \cdots \times H_{t}.$$} (ii) {\sl If $G$ is not ${\sigma}$-nilpotent, then $N_{G}(H_{i})$ is a $\sigma$-Carter subgroup of $G$ for all $i > r$.} (iii) {\sl $F_{\sigma}(G)$ is a maximal ${\sigma}$-nilpotent subgroup of $G$ and $F_{\sigma}(G)=F_{0\sigma }(G)Z_{\sigma}(G)$, where $F_{0\sigma }(G)=H_{1} \cdots H_{r}$.} (iv) {\sl $V_{G}= Z_{\sigma}(G)$ for every maximal ${\sigma}$-nilpotent subgroup $V$ of $G$ such that $G=F_{\sigma}(G)V$. } (v) {\sl $G/F(G)$ is $\sigma$-nilpotent. } On the basis of Theorem A we prove also the following {\bf Theorem B.} {\sl Suppose that $G$ is semi-${\sigma}$-nilpotent, and let $\{H_{1}, \ldots , H_{t}\}$ be a complete Hall $\sigma$-set of $G$, where $H_{1}, \ldots , H_{r}$ are normal in $G$ and $H_{i}$ is not normal in $G$ for all $i > r$. Suppose also that non-normal Sylow subgroups of any Schmidt subgroup $A\leq H_{i}$ have prime order for all $i > r$. Then:} (i) {\sl $G/F_{\sigma}(G)$ is abelian. } (ii) {\sl If $U$ is any maximal $\sigma$-nilpotent non-normal subgroup of $G$, then $U$ is a $\sigma$-Carter subgroup of $G$ and $U_{G}= Z_{\sigma}(G)$.} (iii) {\sl If the subgroups $H_{1}, \ldots , H_{r}$ are nilpotent, then $G/F_{\sigma}(G)$ is cyclic. } (iv) {\sl Every quotient and every subgroup of $G$ are semi-${\sigma}$-nilpotent}. Now we consider some corollaries of Theorems A and B in the three classical cases. First of all note that in the case when $\sigma = \sigma ^{1}$, Theorems A and B not only cover the main results in \cite[Ch. 4, Section 7]{We} but also give alternative proofs of them. 
Moreover, in this case we get from the theorems the following results. {\bf Corollary 1.4.} {\sl If $G$ is weakly semi-nilpotent, then:} (i) {\sl $G$ has a complete set of Sylow subgroups $\{P_{1}, \ldots , P_{t}\}$ such that for some $1\leq r \leq t$ the subgroups $P_{1}, \ldots , P_{r}$ are normal in $G$, $P_{i}$ is not normal in $G$ for all $i > r$, and $\langle P_{r+1}, \ldots , P_{t} \rangle =P_{r+1}\times \cdots \times P_{t}.$} (ii) {\sl $F(G)$ is a maximal nilpotent subgroup of $G$ and $F(G)=F_{0\sigma }(G)Z_{\infty}(G)$, where $F_{0\sigma }(G)=P_{1} \cdots P_{r}$.} (iii) {\sl If $G$ is not nilpotent, then $N_{G}(P_{i})$ is a Carter subgroup of $G$ for all $ i > r$.} {\bf Corollary 1.5} (See Theorem 7.6 in \cite[Ch. 4]{We}). {\sl If $G$ is semi-nilpotent and $F_{0}(G)$ denotes the product of its normal Sylow subgroups, then $G/F_{0}(G)$ is nilpotent. } {\bf Corollary 1.6} (See Theorem 7.8 in \cite[Ch. 4]{We}). {\sl If $G$ is semi-nilpotent, then: } (a) {\sl $F(G)$ is a maximal nilpotent subgroup of $G$.} (b) {\sl If $U$ is a maximal nilpotent subgroup of $G$ and $U$ is not normal in $G$, then $U_{G}=Z_{\infty}(G)$.} {\bf Corollary 1.7} (See Theorem 7.10 in \cite[Ch. 4]{We}). {\sl The class of all semi-nilpotent groups is closed under taking subgroups and homomorphic images.} In the other classical case when $\sigma =\sigma ^{\pi}=\{\pi, \pi'\}$, $G$ is $\sigma ^{\pi}$-nilpotent if and only if $G$ is \emph{$\pi$-decomposable}, that is, $G=O_{\pi}(G)\times O_{\pi'}(G)$. Thus $G$ is semi-${\sigma}^{\pi}$-nilpotent if and only if the normalizer of every $\pi$-decomposable non-normal subgroup of $G$ is $\pi$-decomposable; $G$ is weakly semi-${\sigma}^{\pi}$-nilpotent if and only if the normalizer of every $\pi$-decomposable non-subnormal subgroup of $G$ is $\pi$-decomposable. Therefore in this case we get from Theorems A and B the following results. {\bf Corollary 1.8.} {\sl Suppose that $G$ is not $\pi$-decomposable. 
If the normalizer of every $\pi$-decomposable non-subnormal subgroup of $G$ is $\pi$-decomposable, then:} (i) {\sl $G$ has a Hall $\pi$-subgroup $H_{1}$ and a Hall $\pi'$-subgroup $H_{2}$, and exactly one of these subgroups, $H_{1}$ say, is normal in $G$.} (ii) {\sl $G/F(G)$ is $\pi$-decomposable. } (iii) {\sl $N_{G}(H_{2})$ is an $\mathfrak{F}$-covering subgroup of $G$, where $\mathfrak{F}$ is the class of all $\pi$-decomposable groups.} (iv) {\sl $O_{\pi}(G)\times O_{\pi'}(G)=H_{1}\times O_{\pi'}(G)$ is a maximal $\pi$-decomposable subgroup of $G$ and every element of $G$ induces a $\pi'$-automorphism on every chief factor of $G$ below $O_{\pi'}(G)$.} {\bf Corollary 1.9.} {\sl Suppose that $G$ is not $\pi'$-closed and the normalizer of every $\pi$-decomposable non-normal subgroup of $G$ is $\pi$-decomposable. Then $G=H_{1}\rtimes H_{2}$, where $H_{1}$ is a Hall $\pi$-subgroup and $H_{2}$ is a Hall $\pi'$-subgroup of $G$. Moreover, if non-normal Sylow subgroups of any Schmidt subgroup $A\leq H_{2}$ have prime order, then:} (i) {\sl $G/(O_{\pi}(G)\times O_{\pi'}(G))$ is abelian. } (ii) {\sl Every maximal $\pi$-decomposable non-normal subgroup of $G$ is an $\mathfrak{F}$-covering subgroup of $G$, where $\mathfrak{F}$ is the class of all $\pi$-decomposable groups.} (iii) {\sl If $H_{1}$ is nilpotent, then $G/(O_{\pi}(G)\times O_{\pi'}(G))$ is cyclic. } In fact, in the theory of $\pi$-soluble groups ($\pi= \{p_{1}, \ldots , p_{n}\}$) we deal with the partition $\sigma =\sigma ^{1\pi }=\{\{p_{1}\}, \ldots , \{p_{n}\}, \pi'\}$. Moreover, $G$ is $\sigma ^{1\pi }$-nilpotent if and only if $G$ is \emph{$\pi$-special} \cite{Cun2}, that is, $G=O_{p_{1}}(G)\times \cdots \times O_{p_{n}}(G)\times O_{\pi'}(G)$. 
Thus $G$ is semi-${\sigma}^{1\pi}$-nilpotent if and only if the normalizer of every $\pi$-special non-normal subgroup of $G$ is $\pi$-special; $G$ is weakly semi-${\sigma}^{1\pi}$-nilpotent if and only if the normalizer of every $\pi$-special non-subnormal subgroup of $G$ is $\pi$-special. Therefore in this case we get from Theorems A and B the following results. {\bf Corollary 1.10.} {\sl Let $P_{i}$ be a Sylow $p_{i}$-subgroup of $G$ for all $p_{i}\in \pi= \{p_{1}, \ldots , p_{n}\}$. If the normalizer of every $\pi$-special non-subnormal subgroup of $G$ is $\pi$-special, then: } (i) {\sl $G$ has a Hall $\pi'$-subgroup $H$ and at least one of the subgroups $P_{1}, \ldots , P_{n}, H$ is normal in $G$.} (ii) {\sl $O_{p_{1}}(G)\times \cdots \times O_{p_{n}}(G)\times O_{\pi'}(G)$ is a maximal $\pi$-special subgroup of $G$.} (iii) {\sl $G/F(G)$ is $\pi$-special. } {\bf Corollary 1.11.} {\sl Suppose that the normalizer of every $\pi$-special non-normal subgroup of $G$ is $\pi$-special. If non-normal Sylow subgroups of any Schmidt $\pi'$-subgroup of $G$ have prime order, then:} (i) {\sl $G/(O_{p_{1}}(G)\times \cdots \times O_{p_{n}}(G)\times O_{\pi'}(G))$ is abelian. } (ii) {\sl Every maximal $\pi$-special non-normal subgroup of $G$ is an $\mathfrak{F}$-covering subgroup of $G$, where $\mathfrak{F}$ is the class of all $\pi$-special groups.} (iii) {\sl If every subgroup $A\in \{P_{1}, \ldots , P_{n}, H\}$ that is normal in $G$ is nilpotent, then $G/(O_{p_{1}}(G)\times \cdots \times O_{p_{n}}(G)\times O_{\pi'}(G))$ is cyclic. } \section{Preliminaries} Recall that $G$ is said to be: a \emph{$D_{\pi}$-group} if $G$ possesses a Hall $\pi$-subgroup $E$ and every $\pi$-subgroup of $G$ is contained in some conjugate of $E$; a \emph{$\sigma$-full group of Sylow type} \cite{1} if every subgroup $E$ of $G$ is a $D_{\sigma _{i}}$-group for every $\sigma _{i}\in \sigma (E)$; \emph{$\sigma$-soluble} \cite{1} if every chief factor of $G$ is $\sigma$-primary. 
{\bf Lemma 2.1} (See Theorems A and B in \cite{2}). {\sl If $G$ is $\sigma$-soluble, then $G$ is a $\sigma$-full group of Sylow type and, for every $i$, $G$ has a Hall $\sigma _{i}'$-subgroup and every two Hall $\sigma _{i}'$-subgroups of $G$ are conjugate. } A subgroup $A$ of $G$ is said to be \emph{${\sigma}$-subnormal} in $G$ \cite{1} if there is a subgroup chain $A=A_{0} \leq A_{1} \leq \cdots \leq A_{n}=G$ such that either $A_{i-1}\trianglelefteq A_{i}$ or $A_{i}/(A_{i-1})_{A_{i}}$ is $\sigma$-primary for all $i=1, \ldots , n$. Note that a subgroup $A$ of $G$ is subnormal in $G$ if and only if $A$ is ${\sigma}^{1}$-subnormal in $G$ (where ${\sigma}^{1}=\{\{2\}, \{3\}, \ldots \}$). {\bf Lemma 2.2. } (i) {\sl The class of all $\sigma$-nilpotent groups ${\mathfrak{N}}_{\sigma}$ is closed under taking direct products, homomorphic images and subgroups. Moreover, if $H$ is a normal subgroup of $G$ and $H/H\cap \Phi (G)$ is $\sigma$-nilpotent, then $H$ is $\sigma$-nilpotent } (See Lemma 2.5 in \cite{2}). (ii) {\sl $G$ is $\sigma $-nilpotent if and only if every subgroup of $G$ is ${\sigma}$-subnormal in $G$ } (See \cite[Proposition 3.4]{6}). (iii) {\sl $G$ is $\sigma $-nilpotent if and only if $G=G_{1}\times \cdots \times G_{n}$ for some $\sigma$-primary groups $G_{1}, \ldots , G_{n}$ } (See \cite[Proposition 3.4]{6}). {\bf Lemma 2.3} (See Lemma 2.6 in \cite{1}). {\sl Let $A$, $K$ and $N$ be subgroups of $G$. Suppose that $A$ is $\sigma$-subnormal in $G$ and $N$ is normal in $G$. } (1) {\sl If $N\leq K$ and $K/N$ is $\sigma$-subnormal in $G/N$, then $K$ is $\sigma$-subnormal in $G$}. (2) {\sl $A\cap K$ is $\sigma$-subnormal in $K$}. (3) {\sl If $A$ is $\sigma $-nilpotent, then $A\leq F_{\sigma}(G)$.} (4) {\sl $AN/N$ is $\sigma$-subnormal in $G/N$}.
(5) {\sl If $A$ is a Hall $\sigma _{i}$-subgroup of $G$ for some $i$, then $A$ is normal in $G$.} In view of Proposition 2.2.8 in \cite{15}, we get from Lemma 2.2 the following {\bf Lemma 2.4.} {\sl If $N$ is a normal subgroup of $G$, then $(G/N)^{{\frak{N}}_{\sigma}}=G^{{\frak{N}}_{\sigma}}N/N.$ } {\bf Lemma 2.5.} {\sl If $G$ is ${\sigma}$-soluble and, for some $i$ and some Hall $\sigma _{i}$-subgroup $H$ of $G$, $N_{G}(H)$ is ${\sigma}$-nilpotent, then $N_{G}(H)$ is a $\sigma$-Carter subgroup of $G$. } {\bf Proof.} Let $N=N_{G}(H)$ and $N\leq U\leq G$. Suppose that $U^{{\mathfrak{N}}_{\sigma}}N\ne U$ and let $M$ be a maximal subgroup of $U$ such that $U^{{\mathfrak{N}}_{\sigma}}N\leq M$. Then $M$ is $\sigma$-subnormal in $U$ by Lemmas 2.2(i, ii) and 2.3(1), so $U/M_{U}$ is a $\sigma _{j}$-group for some $j$ since $U$ is clearly ${\sigma}$-soluble. Therefore $|U:M|$ is a $\sigma _{j}$-number, so $j\ne i$ and hence $H\leq M_{U}$. But then $U=M_{U}N_{U}(H)\leq M < U$ by Lemma 2.1 and the Frattini argument. This contradiction completes the proof of the lemma. It is clear that if $A$ is a $\sigma$-Carter subgroup of $G$, then $A$ is a $\sigma$-Carter subgroup of every subgroup of $G$ containing $A$. Moreover, in view of Proposition 2.3.14 in \cite{15}, the following useful facts are true. {\bf Lemma 2.6.} {\sl Let $H$ and $R$ be subgroups of $G$, where $R$ is normal in $G$. } (i) {\sl If $H$ is a $\sigma$-Carter subgroup of $G$, then $HR/R$ is a $\sigma$-Carter subgroup of $G/R$. } (ii) {\sl If $U/R$ is a $\sigma$-Carter subgroup of $G/R$ and $H$ is a $\sigma$-Carter subgroup of $U$, then $H$ is a $\sigma$-Carter subgroup of $G$. } {\bf Lemma 2.7.} {\sl Suppose that $G$ possesses a $\sigma$-Carter subgroup. If $G$ is ${\sigma}$-soluble, then any two of its $\sigma$-Carter subgroups are conjugate. } {\bf Proof.} Assume that this lemma is false and let $G$ be a counterexample of minimal order. Then $|\sigma (G)| > 1$.
Let $A$ and $B$ be $\sigma$-Carter subgroups of $G$, and let $R$ be a minimal normal subgroup of $G$. Then $AR/R$ and $BR/R$ are $\sigma$-Carter subgroups of $G/R$ by Lemma 2.6(i). Therefore for some $x\in G$ we have $AR/R=B^{x}R/R$ by the choice of $G$. If $AR\ne G$, then $A$ and $B^{x}$ are conjugate in $AR$ by the choice of $G$ and so $A$ and $B$ are conjugate. Now assume that $AR=G=B^{x}R=BR$. If $R\leq A$, then $A=G$ is $\sigma$-nilpotent and so $A=B$. Therefore we can assume that $A_{G}=1=B_{G}$. Since $G$ is ${\sigma}$-soluble, $R$ is a ${\sigma}_{i}$-group for some $i$. Let $H$ be a Hall ${\sigma}_{i}'$-subgroup of $A$. Since $|\sigma (G)| > 1$, it follows that $H\ne 1$ and so $N=N_{G}(H)\ne 1$. Since $A$ and $B$ are $\sigma$-Carter subgroups of $G$, both these subgroups are $\sigma$-nilpotent. Hence $A\leq N$ and, for some $x\in G$, $B^{x}\leq N$ by Lemma 2.1. But then the choice of $G$ implies that $A$ and $B^{x}$ are conjugate in $N$. So we again get that $A$ and $B$ are conjugate. The lemma is proved. If $G \not \in {\mathfrak{N}}_{\sigma}$ but every proper subgroup of $G$ belongs to ${\mathfrak{N}}_{\sigma}$, then $G$ is called an \emph{${\mathfrak{N}}_{\sigma}$-critical } or a \emph{minimal non-$\sigma$-nilpotent} group. If $G$ is an ${\mathfrak{N}}_{{\sigma}^{1}}$-critical group, that is, $G$ is not nilpotent but every proper subgroup of $G$ is nilpotent, then $G$ is said to be a \emph{Schmidt group}. {\bf Lemma 2.8} (See \cite[Ch. V, Theorem 26.1]{bookShem}). {\sl If $G$ is a Schmidt group, then $G=P\rtimes Q$, where $P=G^{\frak{N}}=G'$ is a Sylow $p$-subgroup of $G$ and $Q=\langle x \rangle $ is a cyclic Sylow $q$-subgroup of $G$ with $\langle x^{q} \rangle \leq Z(G)\cap \Phi (G)$. Hence $Q^{G}=G$. } {\bf Lemma 2.9.} {\sl If $G$ is an ${\frak{N}}_{\sigma}$-critical group, then $G$ is a Schmidt group.} {\bf Proof.} For some $i$, $G$ is an ${\frak{N}}_{\sigma _{0}}$-critical group, where $\sigma _{0}=\{\sigma_{i}, \sigma_{i}'\}$.
Hence $G$ is a Schmidt group by \cite{belon}. {\bf Lemma 2.10. } {\sl Let $Z=Z_{\sigma}(G)$. Let $A$, $B$ and $N$ be subgroups of $G$, where $N$ is normal in $G$.} (i) {\sl $Z$ is ${\sigma}$-hypercentral in $G$. } (ii) {\sl If $ N\leq Z$, then $Z/N= Z_{\sigma}(G/N)$.} (iii) {\sl $Z_{\sigma}(B)\cap A\leq Z_{\sigma}(B\cap A)$. } (iv) {\sl If $A$ is $\sigma$-nilpotent, then $ZA$ is also $\sigma$-nilpotent. Hence $Z$ is contained in each maximal $\sigma$-nilpotent subgroup of $G$.} (v) {\sl If $G/Z$ is $\sigma$-nilpotent, then $G$ is also $\sigma$-nilpotent.} {\bf Proof. } (i) It is enough to consider the case when $Z=A_{1}A_{2}$, where $A_{1}$ and $A_{2}$ are normal ${\sigma}$-hypercentral subgroups of $G$. Moreover, in view of the Jordan-H\"{o}lder theorem for the chief series, it is enough to show that if $A_{1}\leq K < H \leq A_{1}A_{2}$, then $H/K$ is $\sigma$-central. But in this case we have $H=A_{1}(H\cap A_{2})$, where $H\cap A_{2}\nleq K$ and so from the $G$-isomorphism $(H\cap A_{2})/(K\cap A_{2})\simeq (H\cap A_{2})K/K=H/K$ we get that $C_{G}(H/K)=C_{G}((H\cap A_{2})/(K\cap A_{2}))$ and hence $H/K$ is $\sigma$-central in $G$. (ii) This assertion is a corollary of Part (i) and the Jordan-H\"{o}lder theorem for the chief series. (iii) First assume that $B=G$, and let $1= Z_{0} < Z_{1} < \cdots < Z_{t} = Z$ be a chief series of $G$ below $Z$ and $C_{i}= C_{G}(Z_{i}/Z_{i-1})$. Now consider the series $$1= Z_{0}\cap A \leq Z_{1}\cap A \leq \cdots \leq Z_{t} \cap A= Z\cap A.$$ We can assume without loss of generality that this series is a chief series of $A$ below $Z\cap A$. Let $i\in \{1, \ldots , t \}$. Then, by Part (i), $Z_{i}/Z_{i-1}$ is $\sigma$-central in $G$; that is, $(Z_{i}/Z_{i-1})\rtimes (G/C_{i})$ is a $\sigma _{k}$-group for some $k$. Hence $(Z_{i}\cap A)/(Z_{i-1}\cap A)$ is a $\sigma _{k}$-group.
On the other hand, $A/A\cap C_{i}\simeq C_{i}A/C_{i}$ is a $\sigma _{k}$-group and $$A\cap C_{i}\leq C_{A}((Z_{i}\cap A)/(Z_{i-1}\cap A)).$$ Thus $(Z_{i}\cap A)/(Z_{i-1}\cap A)$ is $\sigma$-central in $A$. Therefore, in view of the Jordan-H\"{o}lder theorem for the chief series, we have $Z\cap A\leq Z_{\sigma}(A)$. Now assume that $B$ is any subgroup of $G$. Then, in view of the preceding paragraph, we have $$ Z_{\sigma}(B) \cap A = Z_{\sigma}(B) \cap (B\cap A)\leq Z_{\sigma}(B\cap A).$$ (iv) Since $A$ is $\sigma$-nilpotent, $ZA/Z\simeq A/A\cap Z$ is $\sigma$-nilpotent by Lemma 2.2(i). On the other hand, $Z\leq Z_{\sigma}(ZA)$ by Part (iii). Hence $ZA$ is $\sigma$-nilpotent by Part (i). (v) This assertion follows from Part (i). The lemma is proved. The following lemma is a corollary of Lemmas 2.2(i) and 2.10(v). {\bf Lemma 2.11.} {\sl $F_{\sigma}(G)/\Phi (G)=F_{\sigma}(G/\Phi(G))$ and $F_{\sigma}(G)/Z_{\sigma}(G)=F_{\sigma}(G/Z_{\sigma}(G))$.} \section{Proofs of the main results} {\bf Proof of Theorem A.} Assume that this theorem is false and let $G$ be a counterexample of minimal order. Then $G$ is not $\sigma$-nilpotent. (1) {\sl Every proper subgroup $E$ of $G$ is weakly semi-${\sigma}$-nilpotent. Hence the conclusion of the theorem holds for $E$.} Let $V$ be a non-subnormal $\sigma$-nilpotent subgroup of $E$. Then $V$ is not subnormal in $ G$ by Lemma 2.3(2), so $N_{G}(V)$ is $\sigma$-nilpotent by hypothesis. Hence $N_{E}(V)=N_{G}(V)\cap E$ is $\sigma$-nilpotent by Lemma 2.2(i). (2) {\sl Every proper quotient $G/N$ of $G$ (that is, $N\ne 1$) is weakly semi-${\sigma}$-nilpotent. Hence the conclusion of the theorem holds for $G/N$. } In view of Remark 1.2(iii) and the choice of $G$, it is enough to show that if $U/N$ is any non-subnormal $\sigma$-primary subgroup of $G/N$, then $N_{G/N}(U/N)$ is $\sigma$-nilpotent. We can assume without loss of generality that $N$ is a minimal normal subgroup of $G$. 
Since $U/N$ is not subnormal in $G/N$, $U/N < G/N$ and $U$ is not subnormal in $G$. Hence $U$ is a proper subgroup of $G$, which implies that $U$ is $\sigma$-soluble by Claim (1). Hence $N$ is a $\sigma _{i}$-group for some $i$. If $U/N$ is a $\sigma _{i}$-group, then $U$ is $\sigma$-primary and so $N_{G}(U)$ is $\sigma$-nilpotent. Hence $N_{G/N}(U/N)=N_{G}(U)/N$ is $\sigma$-nilpotent by Lemma 2.2(i). Now suppose that $U/N$ is a $\sigma _{j}$-group for some $j\ne i$. Then $N$ has a complement $V$ in $U$ by the Schur-Zassenhaus theorem. Moreover, from the Feit-Thompson theorem it follows that at least one of the groups $N$ or $U/N$ is soluble and so every two complements to $N$ in $U$ are conjugate in $U$. Therefore $N_{G}(U)=N_{G}(NV)=NN_{G}(V)$. Since $U=NV$ is not subnormal in $G$, $V$ is not subnormal in $G$ by Lemma 2.3(1, 4) and so $N_{G}(V)$ is $\sigma$-nilpotent. Hence $N_{G/N}(U/N)=N_{G}(U)/N$ is $\sigma$-nilpotent. (3) {\sl If $A$ is an ${\mathfrak{N}}_{\sigma}$-critical subgroup of $G$, then $A=P\rtimes Q$, where $P=A^{\frak{N}}=A'$ is a Sylow $p$-subgroup of $A$ and $Q$ is a Sylow $q$-subgroup of $A$ for some different primes $p$ and $q$. Moreover, $P$ is subnormal in $G$ and so $P\leq O_{p}(G)$. } The first assertion of the claim directly follows from Lemmas 2.8 and 2.9. Since $A$ is not $\sigma$-nilpotent, $P$ is subnormal in $G$ by hypothesis. Therefore $ P\leq O_{p}(G)$ by Lemma 2.3(3). (4) {\sl $G$ is $\sigma$-soluble.} Suppose that this is false. Then $G$ is a non-abelian simple group since every proper section of $G$ is $\sigma$-soluble by Claims (1) and (2). Moreover, $G$ is not $\sigma$-nilpotent and so it has an ${\mathfrak{N}}_{\sigma}$-critical subgroup $A$. Claim (3) implies that for some Sylow subgroup $P$ of $A$ we have $1 < P\leq O_{p}(G) < G$. This contradiction shows that we have (4). (5) {\sl Statements (i) and (ii) hold for $G$.} Since $G$ is $\sigma$-soluble by Claim (4), it is a $\sigma$-full group of Sylow type by Lemma 2.1. 
In particular, $G$ possesses a complete Hall $\sigma$-set $\{H_{1}, \ldots , H_{t}\}$. Then there is an index $k$ such that $H_{k}$ is not subnormal in $G$ by Lemma 2.3(5) since $G$ is not $\sigma$-nilpotent. Then $N_{G}(H_{k})$ is $\sigma$-nilpotent by hypothesis, so $N_{G}(H_{k})$ is a $\sigma$-Carter subgroup of $G$ by Lemma 2.5. If for some $j\ne k$ the subgroup $H_{j}$ is not subnormal in $G$, then $N_{G}(H_{j})$ is also a $\sigma$-Carter subgroup of $G$. But then $N_{G}(H_{k})$ and $N_{G}(H_{j})$ are conjugate in $G$ by Lemma 2.7. Hence for some $x\in G$ we have $H_{k}^{x}\leq N_{G}(H_{j})$. Therefore, since $G$ is not $\sigma$-nilpotent, we may assume, after replacing the complete Hall $\sigma$-set and renumbering if necessary, that for some $1\leq r < t$ the subgroups $H_{1}, \ldots , H_{r}$ are normal in $G$, $H_{i}$ is not normal in $G$ for all $i > r$, and $\langle H_{r+1}, \ldots , H_{t} \rangle =H_{r+1}\times \cdots \times H_{t}$. (6) {\sl Every subgroup $V$ of $G$ containing $ F_{\sigma}(G)$ is $\sigma$-subnormal in $G$, so $F_{\sigma}(V)=F_{\sigma}(G)$.} From Claim (5) it follows that $H_{1}, \ldots , H_{r} \leq F_{\sigma}(G)$ and $$G/F_{\sigma}(G)=F_{\sigma}(G)(H_{r+1}\times \cdots \times H_{t})/F_{\sigma}(G)\simeq (H_{r+1}\times \cdots \times H_{t})/((H_{r+1}\times \cdots \times H_{t}) \cap F_{\sigma}(G))$$ is ${\sigma}$-nilpotent. Hence every subgroup of $G/F_{\sigma}(G)$ is ${\sigma}$-subnormal in $G/F_{\sigma}(G)$ by Lemma 2.2(ii). Therefore $V$ is ${\sigma}$-subnormal in $G$ by Lemma 2.3(1), so $F_{\sigma}(V)\leq F_{\sigma}(G)\leq F_{\sigma}(V)$ by Lemma 2.3(3). Hence we have (6). (7) {\sl Statement (iii) holds for $G$.} First note that $F_{\sigma}(G)$ is a maximal ${\sigma}$-nilpotent subgroup of $G$ by Claim (6). In fact, $F_{\sigma}(G)=F_{0\sigma }(G)\times O_{\sigma _{i_{1}}}(G)\times \cdots \times O_{\sigma _{i_{m}}}(G)$ for some $\{i_{1}, \ldots , i_{m}\} \subseteq \{r+1, \ldots , t\}$.
Moreover, in view of Claim (5), it is clear that $G/C_{G}(O_{\sigma _{i_{k}}}(G))$ is a $\sigma _{i_{k}}$-group and so $O_{\sigma _{i_{k}}}(G)\leq Z_{\sigma}(G)$. Hence $F_{\sigma}(G)=F_{0\sigma }(G)Z_{\sigma}(G)$. (8) {\sl Statement (iv) holds for $G$.} First we show that $U_{G}\leq Z_{\sigma}(G)$ for every ${\sigma}$-nilpotent subgroup $U$ of $G$ such that $G=F_{\sigma}(G)U$. Suppose that this is false. Then $U_{G}\ne 1$. Let $R$ be a minimal normal subgroup of $G$ contained in $U$ and $C=C_{G}(R)$. Then $$G/R=(F_{\sigma}(G)R/R)(U/R)=F_{\sigma}(G/R)(U/R),$$ so $$U_{G}/R=(U/R)_{G/R}\leq Z_{\sigma}(G/R)$$ by Claim (2). Since $G$ is $\sigma$-soluble, $R$ is a $\sigma _{i}$-group for some $i$. Moreover, from $G=F_{\sigma}(G)U$ and Lemma 2.1 we get that for some Hall $\sigma _{i}'$-subgroups $E$, $V$ and $W$ of $G$, of $F_{\sigma}(G)$ and of $U$, respectively, we have $E=VW$. But $R\leq F_{\sigma}(G)\cap U$, where $F_{\sigma}(G)$ and $U$ are $\sigma$-nilpotent. Therefore $E\leq C$, so $R/1$ is $\sigma$-central in $G$. Hence $R\leq Z_{\sigma}(G)$ and so $Z_{\sigma}(G/R)=Z_{\sigma}(G)/R$ by Lemma 2.10(ii). But then $U_{G}\leq Z_{\sigma}(G)$. Finally, if $V$ is any maximal ${\sigma}$-nilpotent subgroup of $G$ with $G=F_{\sigma}(G)V$, then $Z_{\sigma}(G)\leq V$ by Lemma 2.10(iv) and so $V_{G}= Z_{\sigma}(G)$. (9) {\sl Statement (v) holds for $G$.} In view of Lemma 2.2(i), it is enough to show that $D=G^{{\mathfrak{N}}_{\sigma}}$ is nilpotent. Assume that this is false. Then $D\ne 1$, and for any minimal normal subgroup $R$ of $G$ we have that $(G/R)^{{\mathfrak{N}}_{\sigma}}=RD/R \simeq D/D\cap R $ is nilpotent by Claim (2) and Lemmas 2.2(i) and 2.4. Moreover, Lemma 2.2(i) implies that $R$ is the unique minimal normal subgroup of $G$, $R\leq D$ and $R\nleq \Phi (G)$. Since $G$ is not ${\sigma}$-nilpotent, Claim (3) and \cite[Ch. A, 15.6]{DH} imply that $R=C_{G}(R)=O_{p}(G)=F(G)$ for some prime $p$.
Then $R < D$ and $G=R\rtimes M$, where $M$ is not $\sigma$-nilpotent, and so $M$ has an ${\mathfrak{N}}_{\sigma}$-critical subgroup $A$. Claim (3) implies that for some prime $q$ dividing $|A|$ and for a Sylow $q$-subgroup $Q$ of $A$ we have $1 < Q\leq F(G)\cap M=R\cap M=1$. This contradiction completes the proof of (9). From Claims (5), (7), (8) and (9) it follows that the conclusion of the theorem is true for $G$, contrary to the choice of $G$. The theorem is proved. {\bf Proof of Theorem B.} Assume that this theorem is false and let $G$ be a counterexample of minimal order. Then $G$ is not ${\sigma}$-nilpotent. Nevertheless, $G$ is ${\sigma}$-soluble by Theorem A. Let $F_{0\sigma }(G)=H_{1} \cdots H_{r}$ and $E=H_{r+1} \cdots H_{t}$. Then $E$ is $\sigma$-nilpotent by Theorem A(ii). (1) {\sl Every proper subgroup $K$ of $G$ is semi-${\sigma}$-nilpotent. Hence Statements (i) and (ii) hold for $K$} (See Claim (1) in the proof of Theorem A). (2) {\sl The hypothesis holds for every proper quotient $G/N$ of $G$. Hence Statements (i), (ii) and (iv) hold for $G/N$. } It is not difficult to show that $G/N$ is semi-${\sigma}$-nilpotent (see Claim (2) in the proof of Theorem A). Now let $U/N$ be any Schmidt $\sigma _{i}$-subgroup of $G/N$ such that $U/N\leq W/N$ for some Hall $\sigma _{i}$-subgroup $W/N$ of $G/N$ which is not normal in $G/N$. In view of Lemma 2.1, we can assume without loss of generality that $W/N=H_{i}N/N$. Let $L$ be any minimal supplement to $N$ in $U$. Then $L\cap N\leq \Phi (L)$ and, by Lemma 2.8, $U/N=LN/N\simeq L/L\cap N$ is a $\sigma _{i}$-group and $L/L\cap N =(P/L\cap N)\rtimes (Q/L\cap N)$, where $P/L\cap N=(L/L\cap N)^{\frak{N}}=(L/L\cap N)'$ is a Sylow $p$-subgroup of $L/L\cap N$ and $Q/L\cap N=\langle x \rangle $ is a cyclic Sylow $q$-subgroup of $L/L\cap N$ with $V/L\cap N=\langle x^{q} \rangle =\Phi (Q/L\cap N)\leq \Phi (L/L\cap N)\cap Z(L/L\cap N)$ and $p, q \in \sigma _{i}$. Suppose that $|Q/L\cap N| > q$. Then $L\cap N < V$.
In view of Lemma 2.2(i), a Sylow $p$-subgroup of $L$ is normal in $L$. Hence, in view of Lemma 2.8, for any Schmidt subgroup $A$ of $L$ we have $A =A_{p}\rtimes A_{q}$, where $A_{p}$ is a Sylow $p$-subgroup of $A$, $A_{q}$ is a Sylow $q$-subgroup of $A$ and $(A_{q})^{A}=A$. We can assume without loss of generality that $A_{q}(L\cap N)/(L\cap N) \leq Q/L\cap N$. Then $A_{q}(L\cap N)/(L\cap N) \nleq V/L\cap N$ since $V\leq \Phi (L)$. It follows that $A_{q}\nleq N$. Since $W/N=H_{i}N/N$ is not normal in $G/N$, $H_{i}$ is not normal in $G$. But for some $x\in G$ we have $A^{x}\leq H_{i}$, so $|A_{q}^{x}|=|A_{q}|=q$ by hypothesis. Note that $|Q/V|=q$ since $Q/L\cap N$ is cyclic and $V/L\cap N=\Phi (Q/L\cap N)$. Hence $$(V/L\cap N)(A_{q}(L\cap N)/(L\cap N))=(V/L\cap N)\times (A_{q}(L\cap N)/(L\cap N))=Q/(L\cap N) ,$$ which implies that $Q/(L\cap N)$ is not cyclic. This contradiction shows that $|Q/L\cap N|=q$, so for a Sylow $q$-subgroup $S$ of $U/N$ we have $|S|=q$. Therefore the hypothesis holds for $G/N$. Hence we have (2) by the choice of $G$. (3) {\sl If $A$ is an ${\mathfrak{N}}_{\sigma}$-critical subgroup of $G$, then $A=P\rtimes Q$, where $P=A^{\frak{N}}=A'$ is a Sylow $p$-subgroup of $A$ and $Q$ is a Sylow $q$-subgroup of $A$ for some different primes $p$ and $q$. Moreover, the subgroup $P$ is normal in $G$. Hence $G$ has an abelian minimal normal subgroup $R$} (See Claim (3) in the proof of Theorem A). (4) {\sl Statement (i) holds for $G$.} In view of Lemma 2.2(i), it is enough to show that $G'$ is ${\sigma}$-nilpotent. Suppose that this is false. (a) {\sl $R=C_{G}(R)=O_{p}(G)=F(G)\nleq \Phi (G)$ for some prime $p$ and $|R| > p$}. From Claim (2) it follows that for every minimal normal subgroup $N$ of $G$, $(G/N)'=G'N/N\simeq G'/G'\cap N$ is $\sigma$-nilpotent. If $G$ has a minimal normal subgroup $N\ne R$, it follows that $G'/((G'\cap N)\cap (G'\cap R))=G'/1$ is $\sigma$-nilpotent by Lemma 2.2(i).
Therefore $R$ is the unique minimal normal subgroup of $G$, $R\leq D$ and $R\nleq \Phi (G)$ by Lemma 2.2(i). Hence $R=C_{G}(R)=O_{p}(G)=F(G)$ by Theorem 15.6 in \cite[Ch. A]{DH}, so $|R| > p$ since otherwise $G/R=G/C_{G}(R)$ is cyclic, which implies that $G'=R$ is ${\sigma}$-nilpotent. (b) {\sl $F_{\sigma}(V)=F_{\sigma}(G)$ for every subgroup $V$ of $G$ containing $ F_{\sigma}(G)$} (See Claim (6) in the proof of Theorem A). (c) {\sl $G=H_{1} \rtimes H_{2}$, where $R\leq H_{1}=F_{\sigma}(G)$ and $H_{2}$ is a minimal non-abelian group.} From Theorem A and Claim (a) it follows that $r=1$ and $R\leq H_{1}=F_{\sigma}(G)$. Now let $W=F_{\sigma}(G)V$, where $V$ is a maximal subgroup of $E$. Then $F_{\sigma}(G)=F_{\sigma}(W)$ by Claim (b), so $W/F_{\sigma}(W)=W/F_{\sigma}(G)\simeq V$ is abelian by Claim (1). Therefore $E$ is not abelian but every proper subgroup of $E$ is abelian, so $E=H_{2}$ since $E$ is $\sigma$-nilpotent. Hence we have (c). (d) {\sl $H_{1}=R$ is a Sylow $p$-subgroup of $G$ and every subgroup $H\ne 1$ of $H_{2}$ acts irreducibly on $R$. Hence every proper subgroup $H$ of $H_{2}$ is cyclic.} Suppose that $|\pi (H_{1})| > 1$. There is a Sylow $p$-subgroup $P$ of $H_{1}$ such that $H_{2}\leq N_{G}(P)$ by Claim (c) and the Frattini argument. Let $K=PH_{2}$. Then $K < G$ and $P=H_{1}\cap K$ is normal in $K$, so $R\leq P=F_{\sigma}(K)$ since $C_{G}(R)=R$ by Claim (a). Then $K/F_{\sigma}(K)=K/P\simeq H_{2}$ is abelian by Claim (1), a contradiction. Hence $H_{1}$ is a normal Sylow $p$-subgroup of $G$. Hence $H_{1}\leq F(G)\leq C_{G}(R)=R$ by \cite[Ch. A, 13.8(b)]{DH}, so $H_{1}=R$. Now let $H\ne 1$ be a subgroup of $H_{2}$ and let $S=RH$. By the Maschke theorem, $R=R_{1}\times \cdots \times R_{n}$, where $R_{i}$ is a minimal normal subgroup of $S$ for all $i$. Then $R=C_{S}(R)=C_{S}(R_{1}) \cap \cdots \cap C_{S}(R_{n})$. Hence, for some $i$, the subgroup $R_{i}H$ is not $\sigma$-nilpotent and so it has an ${\mathfrak{N}}_{\sigma}$-critical subgroup $A$ such that $1 < A'$ is normal in $G$ by Claim (3).
But then $R\leq A$. Therefore $R=R_{i}$, so $n=1$ and we have (d), since every proper subgroup of $H_{2}$ is abelian by Claim (c). (e) {\sl $H_{2}$ is not nilpotent. Hence $ |\pi (H_{2})| > 1$.} Suppose that $H_{2}=Q\times H$ is nilpotent, where $Q\ne 1$ is a Sylow $q$-subgroup of $H_{2}$. If $H\ne 1$, then $Q$ and $H$ are proper subgroups of $H_{2}$ and so the groups $Q$, $H$ and $H_{2}$ are abelian by Claim (c). Therefore $H_{2}=Q$ is a $q$-group. Then, since every maximal subgroup of $H_{2}$ is cyclic by Claim (d), $q=2$ by \cite[Ch. 5, Theorems 4.3, 4.4]{Gor}. Therefore $|R|=p$, contrary to Claim (a). Hence we have (e). (f) {\sl $H_{2}=A\rtimes B$, where $A=C_{H_{2}}(A)$ is a group of prime order $q\ne p$ and $B=\langle a \rangle$ is a group of order $r$ for some prime $r\not \in \{p, q\}$. } From Claims (d) and (e) it follows that $H_{2}$ is a Schmidt group with cyclic Sylow subgroups. Therefore Claim (f) follows from the hypothesis and Lemma 2.8. {\sl Final contradiction for (4). } Suppose that for some $x=yz\in RA$, where $y\in R$ and $z\in A$, we have $xa=ax$. Then $x\in N_{G}(B)$, so $R\cap \langle x \rangle =1$ since $B$ acts irreducibly on $R$ by Claim (d). Hence $\langle x \rangle $ is a $q$-group and $V=\langle x \rangle B$ is an abelian group such that $B\cap R=1$. Hence from the isomorphism $G/R\simeq H_{2}$ we get that $x=1$. Therefore $a$ induces a fixed-point-free automorphism on $RA$ and hence $RA$ is nilpotent by the Thompson theorem \cite[Ch. 10, Theorem 2.1]{Gor}. But then $A\leq C_{G}(R)=R$. This contradiction completes the proof of (4). (5) {\sl Statement (ii) holds for $G$.} Suppose that this is false, and let $U$ be a maximal $\sigma$-nilpotent non-normal subgroup of $G$. By Lemma 2.10(iv), $Z_{\sigma}(G)\leq U$. On the other hand, $U/Z_{\sigma}(G)$ is a maximal $\sigma$-nilpotent non-normal subgroup of $G/Z_{\sigma}(G)$ by Lemma 2.10(v). Hence in the case $Z_{\sigma}(G)\ne 1$ Claim (2) implies that $U/Z_{\sigma}(G)$ is a $\sigma$-Carter subgroup of $G/Z_{\sigma}(G)$, so $U$ is a $\sigma$-Carter subgroup of $G$ by Lemma 2.6(ii).
Hence we may assume that $Z_{\sigma}(G)=1$, so Theorem A(iii) implies that $F_{\sigma}(G)= F_{0\sigma}(G)=H_{1}\cdots H_{r}$. Hence $E\simeq G/F_{0\sigma}(G)$ is abelian by Claim (4). Let $V=F_{\sigma}(G)U$. If $V=G$, then for some $x$ we have $H_{r+1}^{x}\leq U$ by Lemma 2.1. Hence $U\leq N_{G}(H_{r+1}^{x})$ and so $U= N_{G}(H_{r+1}^{x})$ is a $\sigma$-Carter subgroup of $G$ by Theorem A(ii). Therefore we may assume that $V=F_{\sigma}(G)U$ is a normal proper subgroup of $G$. Let $x\in G$. If the subgroup $U^{x}$ is normal in $V$, then $U^{x}$ is subnormal in $G$ and so $U^{x}, U\leq F_{\sigma}(G)$ by Lemma 2.3(3), which implies that $U=F_{\sigma}(G)$ is normal in $G$ since $F_{\sigma}(G)$ and $U$ are maximal $\sigma$-nilpotent subgroups of $G$ by Theorem A(iii). This contradiction shows that $U^{x}$ and $U$ are non-normal maximal $\sigma$-nilpotent subgroups of $V$. Since $ V < G$, Claim (1) implies that $U^{x}$ and $U$ are $\sigma$-Carter subgroups of $V$. Since $V$ is $\sigma$-soluble, $U$ and $U^{x}$ are conjugate in $V$ by Lemma 2.7. Therefore $G=VN_{G}(U)$ by the Frattini argument. Since $U$ is a maximal $\sigma$-nilpotent non-normal subgroup of $G$, $U=N_{G}(U)$. Hence $G=VU=(F_{\sigma}(G)U)U =F_{\sigma}(G)U < G$. This contradiction completes the proof of the fact that every maximal $\sigma$-nilpotent non-normal subgroup $U$ of $G$ is a $\sigma$-Carter subgroup of $G$. But then $G=F_{\sigma}(G)U$ since $G/F_{\sigma}(G)$ is $\sigma$-nilpotent by Claim (4) and so $U_{G}= Z_{\sigma}(G)$ by Theorem A(iv). Hence we have (5). (6) {\sl If $F_{0\sigma} (G)\leq F(G)$, then $G/F_{\sigma}(G)$ is cyclic.} Assume that this is false. (i) {\sl $\Phi (F_{0\sigma}(G))=1$. Hence $F_{0\sigma} (G)$ is the direct product of some minimal normal subgroups $R_{1}, \ldots , R_{k}$ of $G$.} Suppose that $\Phi (F_{0\sigma} (G))\ne 1$ and let $N$ be a minimal normal subgroup of $G$ contained in $\Phi (F_{0\sigma} (G))\leq \Phi (G)$. Then $N$ is a $p$-group for some prime $p$.
We show that the hypothesis holds for $G/N$. First note that $G/N$ is semi-${\sigma}$-nilpotent by Claim (2). Now let $V/N$ be a normal Hall $\sigma _{i}$-subgroup of $G/N$ for some $\sigma _{i}\in \sigma (G/N)$. If $p\in \sigma _{i}$, then $V$ is a normal Hall $\sigma _{i}$-subgroup of $G$, so $V\leq F(G)$ by hypothesis and hence $V/N\leq F(G)N/N\leq F(G/N)$. Now assume that $p\not \in \sigma _{i}$ and let $W$ be a Hall $\sigma _{i}$-subgroup of $V$. Then $W$ is a Hall $\sigma _{i}$-subgroup of $G$. Moreover, every two Hall $\sigma _{i}$-subgroups of $V$ are conjugate in $V$ by Lemma 2.1, so $G=VN_{G}(W)=NWN_{G}(W)=NN_{G}(W)=N_{G}(W)$ by the Frattini argument. Therefore $W\leq F(G)$, so $V/N=WN/N\leq F(G/N)$. Hence $F_{0\sigma}(G/N)\leq F(G/N)$, so the hypothesis holds for $G/N$. The choice of $G$ and Lemma 2.11 imply that $(G/N)/F_{\sigma}(G/N) = (G/N)/(F_{\sigma}(G)/N)\simeq G/F_{\sigma}(G)$ is cyclic, a contradiction. Hence $\Phi (F_{0\sigma}(G))=1$, so we have (i) by \cite[Ch. A, Theorem 10.6(c)]{DH}. (ii) {\sl $Z_{\sigma}(G)=1$. Hence $F_{0\sigma} (G)=F_{\sigma} (G)=F(G)$.} Since $Z_{\sigma}(G/Z_{\sigma}(G))=1$ by Lemma 2.10(ii), Lemma 2.11 and Theorem A(iii) imply that $$ F_{0\sigma}(G/Z_{\sigma}(G))=F_{\sigma}(G/Z_{\sigma}(G)) =F_{\sigma}(G)/Z_{\sigma}(G)=F_{0\sigma} (G)Z_{\sigma}(G)/Z_{\sigma}(G), $$ where $F_{0\sigma} (G)\leq F(G)$ and so $F_{0\sigma}(G/Z_{\sigma}(G))\leq F(G/Z_{\sigma}(G))$. Therefore the hypothesis holds for $G/Z_{\sigma}(G)$ and hence, in the case when $Z_{\sigma}(G)\ne 1$, $G/F_{\sigma}(G)\simeq (G/Z_{\sigma}(G))/F_{\sigma}(G/Z_{\sigma}(G))$ is cyclic by the choice of $G$. Hence we have (ii). {\sl Final contradiction for (6).} Since $E\simeq G/F(G)$ is abelian by Claims (4) and (ii) and $G$ is not nilpotent, there is an index $i$ such that $V=R_{i}\rtimes E$ is not nilpotent. Then $C_{R_{i}}(E)\ne R_{i}$. By the Maschke theorem, $R_{i}= L_{1}\times \cdots \times L_{m}$ for some minimal normal subgroups $L_{1}, \ldots , L_{m}$ of $V$.
Then, since $C_{R_{i}}(E)\ne R_{i}$, for some $j$ we have $L_{j}\rtimes E\ne L_{j}\times E$. Hence $L_{j}E$ contains a Schmidt subgroup $A_{p}\rtimes A_{q}$ such that $A_{p}=R_{i}$, so $m=1$. But then $E$ acts irreducibly on $R_{i}$ and hence $G/F(G)\simeq E$ is cyclic. This contradiction completes the proof of (6). From Claims (1), (2), (4), (5) and (6) it follows that the conclusion of the theorem is true for $G$, contrary to the choice of $G$. The theorem is proved. \end{document}
\begin{document} \title{On the exact convergence to Nash equilibrium in hypomonotone regimes under full and partial information} \thispagestyle{empty} \pagestyle{empty} \begin{abstract} In this paper, we consider distributed Nash equilibrium seeking in monotone and hypomonotone games. We first assume that each player has knowledge of the opponents' decisions and propose a passivity-based modification of the standard gradient-play dynamics, that we call ``Heavy Anchor''. We prove that Heavy Anchor allows a relaxation of the strict monotonicity of the pseudo-gradient, needed for gradient-play dynamics, and can ensure exact asymptotic convergence in merely monotone regimes. We extend these results to the setting where each player has only partial information of the opponents' decisions. Each player maintains a local decision variable and an auxiliary state estimate, and communicates with their neighbours to learn the opponents' actions. We modify Heavy Anchor via a distributed Laplacian feedback and show how we can exploit equilibrium-independent passivity properties to achieve convergence to a Nash equilibrium in hypomonotone regimes. \end{abstract} \section{Introduction} Recent years have seen a flurry of research papers on distributed Nash equilibrium seeking, owing to the rise of distributed systems. There is a broad range of networked scenarios that involve strategically interacting agents and in which centralized approaches are not suitable. Some examples are demand-side management for smart grids, \cite{Basar2012}, electric vehicles, \cite{PEVParise}, competitive markets, \cite{LiDahleh}, network congestion control, \cite{Garcia}, and power control and resource sharing in wireless/wired peer-to-peer networks and cognitive radio systems, \cite{Scutari_2014}. Classically, Nash equilibrium (NE) seeking algorithms assume that each player has knowledge of every other agent's decision/action, the so-called \textit{full-decision information} setting.
In this setting, there are many well-known algorithms that find the NE under various assumptions \cite{BasarLi1987}, \cite{FP07}, \cite{monoBookv2}. In a slightly more general setting, some algorithms require a centralized coordinator that broadcasts data to the network of agents, \cite{gramDR}. In recent years, a collective effort has been made to generalize these results to the \textit{partial-decision information} setting, where a centralized coordinator does not exist. Without a coordinator, agents have only partial knowledge of the other agents' actions, but may communicate locally with neighboring agents. A variety of NE seeking algorithms for the partial-decision information setting have been proposed, e.g. \cite{NedichDistAgg}-\cite{dianAgg}. However, all these results require \textit{strict/strong monotonicity} of the pseudo-gradient. Unfortunately, there are prominent classes of games that do not satisfy this assumption, e.g. zero-sum games, saddle-point problems, Cournot games, \cite{ShanbhagTikhonov}, or resource allocation games, \cite{Scutari_2014}. Some existing NE seeking methods are applicable to games with a \textit{merely monotone} pseudo-gradient, but only in \textit{full-decision information} settings, e.g. \cite{FP07}, \cite{monoBookv2}, \cite{frb}. However, these methods typically require more complex computations, such as the forward-backward-forward algorithm, \cite{gramTseng}, \cite{monoBookv2}, the Tikhonov proximal-point algorithm in \cite{ShanbhagTikhonov}, the inexact proximal best-response in \cite{Scutari_2014}, \cite{YiTCNS}, or proximal-point/resolvent computation (e.g. Douglas-Rachford splitting), \cite{monoBookv2}. Even though proximal-point algorithms or Douglas-Rachford splitting can achieve exact convergence to a NE, they are computationally expensive, since each step involves solving an optimization problem.
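To illustrate this contrast on a merely monotone problem, consider the following minimal numerical sketch (ours, for illustration only; it is not taken from the cited works). The map $F(z)=Az$ with $A$ skew-symmetric is the pseudo-gradient of the bilinear zero-sum game $\min_x \max_y xy$; it is monotone but not strictly monotone. A forward (gradient-play) step expands the norm by a factor $\sqrt{1+\gamma^2}$, while a resolvent (proximal-point) step contracts it by $1/\sqrt{1+\gamma^2}$:

```python
import numpy as np

# Merely monotone (not strictly monotone) map: F(z) = A z with A skew-symmetric.
# This is the pseudo-gradient of the zero-sum game min_x max_y x*y.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
gamma = 0.1                                # step size
z_fwd = np.array([1.0, 1.0])               # forward (explicit) iterate
z_prox = np.array([1.0, 1.0])              # proximal-point (implicit) iterate
R = np.linalg.inv(np.eye(2) + gamma * A)   # resolvent (I + gamma*A)^(-1)

for _ in range(200):
    z_fwd = z_fwd - gamma * (A @ z_fwd)    # gradient-play step: norm grows
    z_prox = R @ z_prox                    # resolvent step: norm shrinks

print(np.linalg.norm(z_fwd) > np.linalg.norm(z_prox))  # True
```

The resolvent iterate converges exactly to the unique NE at the origin, but in general each resolvent step requires solving an optimization problem (here, merely a linear system), which is precisely the computational cost referred to above.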
These methods are only applicable in games with easily computed prox (resolvent) operators, \cite{FP07}, \cite{monoBookv2}. Regularization methods, such as the Tikhonov regularization \cite{ShanbhagTikhonov}, or continuous-time mirror-descent dynamics, \cite{boMirror}, are simpler, but either require diminishing step-sizes (very slow convergence) or ensure convergence only to an approximate NE. We emphasize that all these existing methods for monotone games assume that agents have perfect knowledge of the actions of the other agents. Additionally, none of these methods deal with hypomonotone games, in either the full- or the partial-information setting. An extremum seeking method \cite{extreme_krstic} for continuous-time monotone games has been proposed. However, this method only converges to an $\epsilon$-neighborhood of the NE. A payoff-based method for discrete-time monotone games was recently proposed in \cite{bandit}. Agents randomly perturb their actions and move in the direction of improvement. The random nature of the algorithm, together with the diminishing step sizes, results in slow convergence to the NE, but with the benefit of using only payoff information. \textit{Contributions. } Recognizing the lack of results for (hypo)monotone games, in this paper we consider NE seeking for games with a \textit{monotone} or \textit{hypomonotone} pseudo-gradient. We propose an algorithm that we call ``Heavy Anchor'', constructed by a passivity-based modification of the standard gradient-play dynamics. We demonstrate that, in the full-decision information setting, Heavy Anchor ensures exact convergence to a NE for any positive parameter values. Additionally, we show that, under a carefully chosen change of coordinates and conditions on the parameters, Heavy Anchor converges in hypomonotone games. Furthermore, we extend the result to the partial-decision information setting by using a distributed Laplacian feedback.
More specifically, we prove convergence for a monotone extended pseudo-gradient, or for a (hypo)monotone and inverse Lipschitz pseudo-gradient. To the best of our knowledge, these are the first such results in the literature. Lastly, we look at quadratic games, an important subclass of games, and derive tighter conditions for the (hypo)monotone setting under both full and partial information. Heavy Anchor can be interpreted as modifying the standard gradient method with a term approximating the derivative of the agent's \emph{own} action as a predictive term. We use the approximation as a frictional force to improve stability. Heavy Anchor is similar to the dynamics in \cite{antipin}, \cite{shamma}, used for saddle-point problems in the full information setting. However, \cite{antipin}, \cite{shamma} approximate the derivative of the \emph{other} agents' actions. Furthermore, our convergence results are global, unlike the local results in \cite{shamma}. In the physics literature, similar dynamics were investigated for stabilizing unknown equilibria in chaotic systems \cite{physical2}, \cite{physical3}. However, these works do not provide a rigorous characterization of when the equilibrium is stabilized. Finally, Heavy Anchor is also related to second-order dynamics used in the optimization literature, e.g. \cite{attouch1}. If we discretize Heavy Anchor and restrict the parameter values, we can recover the optimistic gradient-descent/ascent (OGDA) \cite{optimistic} or the shadow Douglas-Rachford method \cite{shadow}. However, all these optimization methods assume that the map is the gradient of a convex function. This typically does not hold in a game: the game map is a pseudo-gradient rather than a full gradient, and thus those convergence results are not applicable in a game context. Moreover, all these results are for the full information setting. In \cite{dianMono} we presented the algorithm and proved convergence for monotone games. 
In this paper, we extend our results to hypomonotone games and provide additional analysis of inverse Lipschitz operators. Furthermore, we derive tighter conditions for the class of quadratic games, which were not analyzed in \cite{dianMono}. The paper is organized as follows. Section \ref{sec:background} gives preliminary background. Section \ref{sec:formulation} formulates the problem, states the standing assumptions, and introduces the NE seeking algorithm for the full information case. The convergence analysis is presented in Section \ref{sec:alg}. Section \ref{sec:dist} presents the partial-information version of the algorithm. Section \ref{sec:invLip} investigates a property we call ``inverse Lipschitz'', which is critical in our analysis of the partial information setting and of hypomonotone games. Section \ref{sec:partialConv} proves convergence for the partial information setting. Section \ref{sec:Quad} derives tighter conditions for the class of quadratic games. Section \ref{sec:sim} shows simulations of our proposed algorithm, and concluding remarks are given in Section \ref{sec:conclusion}. \emph{Notations}. For $x\in\mathbb{R}^{n}$, $x^{T}$ denotes its transpose and $\norm{x} = \sqrt{\inp*{x}{x}} = \sqrt{x^{T}x}$ the norm induced by the inner product $\inp*{\cdot}{\cdot}$. For a matrix $A\in\mathbb{R}^{n\times n}$, $\lambda (A) = \set{\lambda_{1},\dots,\lambda_{n}}$ and $\sigma (A) = \set{\sigma_{1},\dots,\sigma_{n}}$ denote its eigenvalue and singular value sets, respectively. Given $A, B\in \mathbb{R}^{n\times n}$, let $A \succeq B$ denote that $(A-B)$ is positive semidefinite. For $\mathcal{N} = \set{1, \dots, N}$, $col(x_{i})_{i\in \mathcal{N}} = [x_{1}^{T},\dots,x_{N}^{T}]^{T}$ denotes the stacked vector of $x_{i}$, while $diag(x_{i})_{i\in\mathcal{N}}$ is the diagonal matrix with $x_{i}$ along the diagonal. 
$I_{n}$, $\mathbf{1}_{n}$ and $\mathbf{0}_{n}$ denote the identity matrix, the all-ones and the all-zeros vector of dimension $n$, and $\otimes$ denotes the Kronecker product. Lastly, we denote $\mathfrak{j} = \sqrt{-1}$. \section{Background} \label{sec:background} \subsection{Monotone Operators} The following are from \cite{monoBookv2}. Let $T : \mathcal{H} \to 2^{\mathcal{H}}$ be an operator, where $\mathcal{H}$ is a Hilbert space. Its graph is denoted by $graT = \setc{(x,y)\in \mathcal{H}\times \mathcal{H}}{y \in Tx}$. An operator $T$ is $\mu$-strongly monotone ($\mu > 0$) or monotone ($\mu = 0$) if it satisfies $\inp*{Tx - Ty}{x-y} \geq \mu \snorm{x-y}$, $\forall x,y \in \mathcal{H}$. Additionally, we say an operator $T$ is $\mu$-hypomonotone if $\inp*{Tx - Ty}{x-y} \geq -\mu \snorm{x-y}$, $\forall x,y \in \mathcal{H}$, where $\mu \geq 0$. $T$ is maximally monotone if $\forall (x,y)\in \mathcal{H}\times \mathcal{H}$, $(x,y)\in graT \iff (\forall (u,v)\in graT)\inp*{x-u}{y-v} \geq 0$. The resolvent of a monotone operator $T$ is denoted by $\mathcal{J}_{\lambda T} = \bracket{\text{Id} + \lambda T}^{-1}$, $\lambda>0$, where $\text{Id}$ is the identity operator. Fixed points of $\mathcal{J}_{\lambda T}$ are identical to zeros of $T$ (Prop. 23.2, \cite{monoBookv2}). An operator $T$ is $L$-Lipschitz if $\norm{Tx- Ty} \leq L \norm{x-y}$, $\forall x,y \in \mathcal{H}$. An operator $T$ is $C$-cocoercive ($C$-inverse strongly monotone) if $\inp*{Tx - Ty}{x-y} \geq C \snorm{Tx-Ty}$, $\forall x,y \in \mathcal{H}$. \subsection{Equilibrium Independent Passivity} The following are from \cite{EIP}. Consider a system, \begin{align} \label{eqn:generalDynSys} \begin{split} \dot{x} &= f(x,u) \\ y &= h(x,u) \end{split} \end{align} with $x\in \mathbb{R}^n$, $u \in \mathbb{R}^q$ and $y \in \mathbb{R}^q$, $f$ locally Lipschitz and $h$ continuous. 
For a differentiable function $V:\mathbb{R}^n \rightarrow \mathbb{R}$, the time derivative of $V$ along solutions of \eqref{eqn:generalDynSys} is denoted by $\dot{V}(x) = \nabla^T V(x) \, f(x,u)$ or just $\dot{V}$. Let $\overline{u}$, $\overline{x}$, $\overline{y}$ be an equilibrium triple, such that $0=f(\overline{x},\overline{u})$, $\overline{y}=h(\overline{x},\overline{u})$. Equilibrium independent passivity (EIP) requires a system to be passive independently of the equilibrium point. \begin{defn}\label{def:EIP} System \eqref{eqn:generalDynSys} is Equilibrium Independent Passive (EIP) if it is passive with respect to $\overline{u}$ and $\overline{y}$; that is, for every $ \overline{u} \in \overline{U}$ there exists a differentiable, positive semi-definite storage function $V: \mathbb{R}^n \to \mathbb{R}$ such that $V(\overline{x}) = 0$ and $\forall u \in \mathbb{R}^q$, $x \in \mathbb{R}^n$, $\dot{V}(x) \leq \inp*{y-\overline{y}}{u-\overline{u}}$. The system is Output-strictly EIP (OSEIP) if $\dot{V} \leq \inp*{y-\bar{y}}{u - \bar{u}} - \rho \snorm{y-\bar{y}}$, where $\rho > 0$. \end{defn} \subsection{Graph Theory} Let the graph $G = (\mathcal{N}, \mathcal{E})$ describe the information exchange among a set $\mathcal{N}$ of agents, where $\mathcal{E} \subset \mathcal{N} \times \mathcal{N}$. If agent $i$ can get information from agent $j$, then $(j, i) \in \mathcal{E}$ and agent $j$ is in agent $i$'s neighbour set $\mathcal{N}_{i} = \setc{j}{(j, i)\in \mathcal{E}}$. $G$ is undirected when $(i, j) \in \mathcal{E}$ if and only if $(j, i) \in \mathcal{E}$. $G$ is connected if there is a path between any two nodes. Let $W = [w_{ij}] \in \mathbb{R}^{N\times N}$ be the weighted adjacency matrix, with $w_{ij} > 0$ if $j \in \mathcal{N}_{i}$ and $w_{ij} = 0$ otherwise. Let $Deg = \diag(d_{i})_{i\in\mathcal{N}}$, where $d_{i} = \sum_{j=1}^{N} w_{ij}$. Assume that $W = W^{T}$, so the weighted Laplacian of $G$ is $L = Deg - W$. 
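As a quick numerical sanity check of the Laplacian construction (a minimal sketch of ours; the 4-node ring graph with unit weights is an assumed example, not from the text):

```python
# Build the weighted Laplacian L = Deg - W for an assumed undirected
# 4-node ring graph with unit weights (w_ij = 1 for ring neighbours).
N = 4
W = [[0] * N for _ in range(N)]
for i in range(N):
    W[i][(i - 1) % N] = 1
    W[i][(i + 1) % N] = 1
deg = [sum(row) for row in W]          # d_i = sum_j w_ij
L = [[(deg[i] if i == j else 0) - W[i][j] for j in range(N)] for i in range(N)]

print(L[0])                             # [2, -1, 0, -1]
print([sum(row) for row in L])          # [0, 0, 0, 0]
```

Each row of $L$ sums to zero, consistent with $L\mathbf{1}_N = \mathbf{0}$ for an undirected graph, and $L$ inherits the symmetry of $W$.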
When $G$ is connected and undirected, $0$ is a simple eigenvalue of $L$, $L\mathbf{1}_N = \mathbf{0}$, $\mathbf{1}_N^{T}L = \mathbf{0}^{T}$, and all other eigenvalues are positive, $0 < \lambda_{2}(L) \leq \dots \leq \lambda_{N}(L)$. \section{Problem Setup}\label{sec:formulation} Consider a set $\mathcal{N}=\{ 1,\dots,N\}$ of $N$ players (agents) involved in a game. Each player $i \in \mathcal{N}$ controls its action or decision $x_i \in \Omega_i \subseteq \mathbb{R}^{n_i}$. The action set of all players is the Cartesian product $\Omega = \prod_{i\in\mathcal{N}}\Omega_i \subseteq \mathbb{R}^{n}$, $n = \sum_{i\in\mathcal{N}} n_i$. Let $x=(x_i,x_{-i})\in {\Omega}$ denote all agents' action profile or $N$-tuple, where $x_{-i}$ is the $(N-1)$-tuple of all agents' actions except agent $i$'s. Alternatively, $x$ is represented as a stacked vector $x = [x_1^T \dots x_N^T]^T \in \Omega \subseteq \mathbb{R}^n$. Each player (agent) $i$ aims to minimize its own cost function $J_i(x_i,x_{-i})$, $J_i : \Omega \to \mathbb{R}$, which possibly depends on all other players' actions. Let the game thus defined be denoted by $\mathcal{G}(\mathcal{N},J_i,\Omega_i)$. \begin{defn}\label{defNE} Given a game $\mathcal{G}(\mathcal{N},J_i,\Omega_i)$, an action profile $x^* =(x_i^*,x_{-i}^*)\in \Omega$ is a Nash Equilibrium (NE) of $\mathcal{G}$ if \begin{align*} (\forall i \in \mathcal{N})(\forall y_i \in \Omega_i) \quad J_i(x_i^*,x_{-i}^*) \leq J_i(y_i,x_{-i}^*) \end{align*} and therefore no agent has an incentive to unilaterally deviate from its action. 
\end{defn} Alternatively, if $J_{i}$ is differentiable, then a NE $x^* \in \Omega$ satisfies the variational inequality (VI) (Proposition 1.4.2, \cite{FP07}), \begin{align} \label{eq:ViNash} (x-x^*)^TF(x^*)\geq 0 \quad \forall x\in {\Omega} \end{align} where $F : \Omega \to \mathbb{R}^n$ is the \emph{pseudo-gradient (game) map} defined by stacking all agents' partial gradients, \begin{align}\label{eq:expPsuedoGrad_F} F(x) = [\nabla_{x_1} J^T_1(x),\dotsc,\nabla_{x_N} J^T_N(x)]^T \end{align} with $\nabla_{x_i} J_i(x_i,x_{-i}) =\frac{\partial J_i}{\partial x_i}(x_i,x_{-i})\in \mathbb{R}^{n_i}$, the partial gradient of $J_i(x_i,x_{-i})$ with respect to its own action $x_i$. We use the following basic convexity and smoothness assumption, which ensures the existence of a NE. \begin{asmp} \label{asmp:Jsmooth} For every $i\in\mathcal{N}$, $\Omega_i=\mathbb{R}^{n_i}$ and the cost function $J_i:\Omega \to \mathbb{R}$ is $\mathcal{C}^1$ in its arguments, convex and radially unbounded in $x_i$, for every $x_{-i}\in {\Omega}_{-i}$. \end{asmp} Under Assumption \ref{asmp:Jsmooth}, it follows from Corollary 4.2 in \cite{B99} that a NE $x^*$ exists. Furthermore, the VI \eqref{eq:ViNash} reduces to $ F(x^{*})=0$. A standard method for reaching a NE is the gradient-play dynamics \cite{flam}, i.e., \begin{align} \label{eqn:gradientDescent} \dot{x}_{i} &= -\nabla_{x_i}J_{i}(x_{i},x_{-i}), \forall i\in \mathcal{N}, \quad \text{or} \quad \dot{x} = -F(x) \end{align} This algorithm converges to the NE if the pseudo-gradient is strictly monotone, but may fail if the pseudo-gradient is only monotone. For example, consider a $2$-player zero-sum game where the cost functions are $J_{1}(x_{1},x_{2}) = x_{1}x_{2}$, $J_{2}(x_{1},x_{2}) = -x_{1}x_{2}$. The pseudo-gradient is, \begin{align} F(x) &= \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}x \label{eqn:harmonic} \end{align} which is monotone and the NE is $(0,0)$. 
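To see why gradient-play fails here, note that this $F$ is skew: a one-line computation (ours, for illustration) shows that the distance to the NE is conserved along \eqref{eqn:gradientDescent},
\begin{align*}
\frac{d}{dt}\,\frac{1}{2}\snorm{x} = -x^{T}F(x) = -x^{T}\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}x = 0,
\end{align*}
since $v^{T}Av = 0$ for any skew-symmetric matrix $A$. Hence every solution of \eqref{eqn:gradientDescent} remains on the circle through its initial condition.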
If the initial state $x(0) \neq (0,0)$, then \eqref{eqn:gradientDescent} will cycle around the NE and never converge, see Figure \ref{fig:grad_harmonic}. In this paper, we are interested in monotone games. \begin{figure} \caption{Gradient vector field} \label{fig:grad_harmonic} \end{figure} \begin{asmp} \label{asmp:monotone} The pseudo-gradient $F$ is monotone. \end{asmp} Under Assumptions \ref{asmp:Jsmooth} and \ref{asmp:monotone}, the set of NE is convex (cf. Theorem 3, \cite{Scutari_2014}), characterized by $\{x^*| F(x^{*})=0\}$. \subsection{Proposed Algorithm} The dynamics \eqref{eqn:gradientDescent} can be viewed as an open-loop system with no feedback. We propose a new algorithm, which we call ``Heavy Anchor'', by modifying the feedback path with a bank of high-pass filters as depicted in Figure \ref{fig:bdAlg} below, with $u_{c}=\mathbf{0}$. We call it Heavy Anchor because, as we show, it looks like Polyak's heavy-ball method but with the momentum term having the opposite sign. \begin{figure} \caption{Block diagram of \eqref{eqn:proposedAlg}} \label{fig:bdAlg} \end{figure} Explicitly, the dynamics are, \begin{align} \label{eqn:proposedAlg} \begin{split} \dot{r} &= \alpha (x-r) \\ \dot{x} &= -F(x) - \beta(x-r) \\ \end{split} \tag{HA$_{\text{F}}$} \end{align} where $\alpha, \beta \in \mathbb{R}_{++}$ are design parameters and $r\in\mathbb{R}^{n}$ is an auxiliary variable. The individual agent dynamics are, \begin{align*} \dot{r}_{i} &= \alpha(x_{i}-r_{i}) \\ \dot{x}_{i} &= -\nabla_{x_i}J_{i}(x_{i},x_{-i}) - \beta(x_{i}-r_{i}) \end{align*} The new dynamics have a gradient-play component together with a dynamic estimate of the derivative of the agent's own action. Figure \ref{fig:anchor_harmonic} shows the decision trajectories $x$ under Heavy Anchor for the two-player zero-sum game \eqref{eqn:harmonic}. 
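This behavior can be reproduced with a minimal Euler-discretized sketch (ours, for illustration only; the step size, horizon, and $\alpha=\beta=1$ are assumed values, not prescribed by the analysis):

```python
import math

# Pseudo-gradient of the 2-player zero-sum game J1 = x1*x2, J2 = -x1*x2:
# F(x) = (x2, -x1), which is monotone (skew) with unique NE at (0, 0).
def F(x):
    return (x[1], -x[0])

def gradient_play(x0, s=0.01, steps=20000):
    """Euler discretization of the gradient-play dynamics xdot = -F(x)."""
    x = list(x0)
    for _ in range(steps):
        f = F(x)
        x = [x[0] - s * f[0], x[1] - s * f[1]]
    return x

def heavy_anchor(x0, alpha=1.0, beta=1.0, s=0.01, steps=20000):
    """Euler discretization of Heavy Anchor:
       rdot = alpha*(x - r),  xdot = -F(x) - beta*(x - r)."""
    x, r = list(x0), list(x0)
    for _ in range(steps):
        f = F(x)
        x_new = [x[i] - s * (f[i] + beta * (x[i] - r[i])) for i in range(2)]
        r_new = [r[i] + s * alpha * (x[i] - r[i]) for i in range(2)]
        x, r = x_new, r_new
    return x

norm = lambda v: math.hypot(v[0], v[1])
print(norm(gradient_play([1.0, 0.0])))   # ~2.7: drifts outward, no convergence
print(norm(heavy_anchor([1.0, 0.0])))    # ~0: converges to the NE
```

With the skew pseudo-gradient, discretized gradient-play slowly spirals away from the NE, while the Heavy Anchor iterates contract to $(0,0)$ for these parameter values.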
\begin{figure} \caption{Decision trajectories under Heavy Anchor} \label{fig:anchor_harmonic} \end{figure} \subsection{Connections to Other Dynamics/Algorithms} Our proposed dynamics \eqref{eqn:proposedAlg} is related to other continuous-time dynamics and discrete-time algorithms. First, \eqref{eqn:proposedAlg} can be written as the second-order dynamics, \begin{align}\label{eqn:sec_proposedAlg} \ddot{x} + (\nabla F(x) + (\alpha + \beta) I_{n})\dot{x} + \alpha F(x) = 0. \end{align} Under appropriate restrictions on the values of $\alpha$ and $\beta$, these dynamics recover other dynamics/algorithms. For example, similar dynamics appear in stabilizing unknown equilibria in chaotic systems or saddle functions \cite{physical2}, \cite{physical3}. However, these works do not rigorously characterize stability/convergence. As another example, consider \begin{align*} \ddot{x} + \alpha \dot{x} + \beta \nabla^{2}f(x)\dot{x} + \nabla f(x) + \nabla \Psi(x) = 0, \end{align*} where $f$ is a convex function, as considered in the optimization literature, \cite{attouch1}, \cite{radu}. If $F=\nabla f$, \eqref{eqn:sec_proposedAlg} can be written as the above (with $\Psi(x) \equiv 0$). However, in a game $F$ is not a true gradient (unless the game is a potential game), but rather a pseudo-gradient, so those convergence results are not applicable. Next, we relate \eqref{eqn:proposedAlg} to some existing discrete-time algorithms. Performing an Euler discretization of \eqref{eqn:proposedAlg} gives, \begin{align*} x_{k+1} &= x_{k} - sF(x_{k}) - s\beta(x_{k}-r_{k}) \\ r_{k+1} &= r_{k} + s\alpha (x_{k} - r_{k}) \end{align*} where $s > 0$ is the step size, which after some manipulations yields the second-order difference equation, \begin{align} \label{eqn:secondOrder} x_{k+2} &= x_{k+1} - \alpha s^{2}F(x_{k+1}) + (1 - s\alpha -s\beta)\bracket{x_{k+1}-x_{k}} \notag\\ &\quad - s(1-s\alpha)\bracket{F(x_{k+1})-F(x_{k})}. 
\end{align} Depending on how the parameters $\alpha$ and $\beta$ are selected, we can recover some known algorithms. If $\alpha = \beta = \frac{1}{2s}$ and $F = \nabla f$ for some convex function $f$, then \eqref{eqn:secondOrder} becomes, \begin{align*} x_{k+2} &= x_{k+1} - \tilde{s}\bracket{2\nabla f(x_{k+1})-\nabla f(x_{k})} \end{align*} where $\tilde{s} = \frac{s}{2}$, which gives the optimistic gradient-descent/ascent (OGDA) \cite{optimistic}, the shadow Douglas-Rachford method \cite{shadow}, or the forward-reflected-backward method \cite{frb}. On the other hand, if $\alpha = \frac{1}{s}$ and $F = \nabla f$, then \eqref{eqn:secondOrder} becomes, \begin{align*} x_{k+2} &= x_{k+1} - s\bracket{\nabla f(x_{k+1}) + \beta \bracket{x_{k+1}-x_{k}}} \end{align*} where $\beta < 0$ gives Polyak's heavy-ball method, \cite{polyakBall}. \section{Convergence under Perfect Information} \label{sec:alg} In this section we consider that each agent knows all of $x_{-i}$ (the actions its cost depends on), hence the full (perfect) decision information setting. In Theorem \ref{thm:EIPconvergence} we show that the continuous-time dynamics \eqref{eqn:proposedAlg} converges for all $\alpha, \beta > 0$, in this full information setting. Our idea is to see \eqref{eqn:proposedAlg} as an EIP passivity-based feedback modification of \eqref{eqn:gradientDescent}. To prove that $x$ in \eqref{eqn:proposedAlg} converges to a Nash Equilibrium in monotone games, we decompose the system into a feedback interconnection between two subsystems (see Fig. \ref{fig:bdAlg}). We show that each subsystem is EIP and use their storage functions to construct an appropriate Lyapunov function to prove that the equilibrium point of the interconnected system (which is a NE) is asymptotically stable. \begin{lemma} \label{lemma:eipSub1} Under Assumption \ref{asmp:monotone}, the following system, \begin{align} \label{eqn:eip1} \begin{split} \dot{x} &= -F(x) + u_{1} \\ y_{1} &= x. 
\end{split} \end{align} is EIP with respect to $u_{1}$ and $y_{1}$. \end{lemma} \begin{proof} Let $\bar{x}$ be the equilibrium of \eqref{eqn:eip1} for input $\bar{u}_{1}$, and $\bar{y}_{1}$ the corresponding output. Consider the storage function $V_{1}(x) = \frac{1}{2}\snorm{x-\bar{x}}$. Then, along the solutions of \eqref{eqn:eip1}, \begin{align} \dot{V}_{1}(x) &= \inp*{x-\bar{x}}{-F(x)+u_{1}+F(\bar{x})-\bar{u}_{1}} \notag \\ &= -\inp*{x-\bar{x}}{F(x)-F(\bar{x})} + \inp*{u_{1} - \bar{u}_{1}}{y_{1} - \bar{y}_{1}} \label{eqn:EIPinequality1} \end{align} By Assumption \ref{asmp:monotone}, the first term is $\leq0$ and the system is EIP. \end{proof} \begin{lemma} \label{lemma:eipSub2} For any $\alpha, \beta > 0$, the following system, \begin{align} \label{eqn:eip2} \begin{split} \dot{r} &= -\alpha r + \alpha u_{2} \\ y_{2} &= -\beta r + \beta u_{2}. \end{split} \end{align} is OSEIP with respect to $u_{2}$ and $y_{2}$. \end{lemma} \begin{proof} Let $\bar{r}$ be the equilibrium of \eqref{eqn:eip2} for the input $\bar{u}_{2}$ and let $\bar{y}_{2}$ be the corresponding output. Consider the storage function $V_{2}(r) = \frac{\beta}{2\alpha}\snorm{r-\bar{r}}$. Then, along solutions of \eqref{eqn:eip2}, \begin{align} \dot{V}_{2}(r) &= \frac{\beta}{\alpha}\inp*{r-\bar{r}}{-\alpha r + \alpha u_{2} + \alpha \bar{r} - \alpha \bar{u}_{2}} \notag \\ &= \frac{\beta}{\alpha}\inp*{u_{2} - \frac{1}{\beta}y_{2} - \bar{u}_{2} + \frac{1}{\beta}\bar{y}_{2}}{\frac{\alpha}{\beta}(y_{2}-\bar{y}_{2})} \notag \\ &= \inp*{u_{2}-\bar{u}_{2}}{y_{2}-\bar{y}_{2}} - \frac{1}{\beta}\snorm{y_{2}-\bar{y}_{2}} \label{eqn:EIPinequality2} \end{align} Therefore, the system is OSEIP for any $\alpha, \beta>0$.\end{proof} We now turn to the interconnected system \eqref{eqn:proposedAlg}. We show first that any equilibrium of \eqref{eqn:proposedAlg} is a NE. 
Then, using the two storage functions from Lemmas \ref{lemma:eipSub1} and \ref{lemma:eipSub2}, we show that any equilibrium point of \eqref{eqn:proposedAlg} is asymptotically stable. \begin{lemma}\label{eq_FI} Any equilibrium of \eqref{eqn:proposedAlg} is $(x^{*},x^{*})$, where $x^{*}$ is a Nash equilibrium of the game. \end{lemma} \begin{proof} Let the equilibrium point of \eqref{eqn:proposedAlg} be denoted $(\bar{x},\bar{r})$. Then $0 = \alpha (\bar{x}-\bar{r})$ implies that $\bar{x} = \bar{r}$ and $0 = -F(\bar{x}) - \beta(\bar{x} - \bar{r}) = -F(\bar{x})$. An equilibrium $x^{*}$ of \eqref{eqn:gradientDescent} is such that $F(x^*) = 0$; therefore $\bar{x} = \bar{r} = x^{*}$, a NE. \end{proof} \begin{thm}\label{thm:EIPconvergence} Consider a game $\mathcal{G}(\mathcal{N},J_i,\Omega_i)$ under Assumptions \ref{asmp:Jsmooth} and \ref{asmp:monotone}. Let the overall dynamics of the agents be given by \eqref{eqn:proposedAlg}. Then, for any $\alpha,\beta > 0$, the set of Nash equilibrium points $\setc{x^*}{F(x^*) = 0}$ is globally asymptotically stable. \end{thm} \begin{proof} Note that \eqref{eqn:proposedAlg} is the system in Fig. \ref{fig:bdAlg} with $u_c=0$. Consider the candidate Lyapunov function $V(x,r) = V_{1}(x) + V_{2}(r)$, where $V_{1}(x) = \frac{1}{2}\snorm{x-\bar{x}}$ and $V_{2}(r)=\frac{\beta}{2\alpha}\snorm{r-\bar{r}}$, with, cf. Lemma \ref{eq_FI}, $\bar{x}=\bar{r}=x^*$. Along the solutions of \eqref{eqn:proposedAlg}, from Lemma \ref{lemma:eipSub1}, \eqref{eqn:EIPinequality1}, and Lemma \ref{lemma:eipSub2}, \eqref{eqn:EIPinequality2}, using $u_1=-y_2$, $u_2=y_1$, $\bar{x}=\bar{r}$ and cancelling terms, we obtain, \begin{align}\label{EIP_fdb_F} \dot{V}(x,r) &= -\inp*{x-\bar{x}}{F(x)-F(\bar{x})} - \frac{1}{\beta}\snorm{y_{2}-\bar{y}_{2}} \notag \\ &= -\inp*{x-\bar{x}}{F(x)-F(\bar{x})} - \beta \snorm{x-r} \end{align} By Assumption \ref{asmp:monotone}, it follows that $\dot{V} \leq 0$. We resort to LaSalle's Invariance Principle \cite{nonlinear}. 
Note that $\dot{V} = 0$ implies $x - r = 0$. On $x=r$, the dynamics \eqref{eqn:proposedAlg} reduces to, $ 0=\dot{x} - \dot{r} = -F(x) - \beta(x-r) - \alpha(x-r) = -F(x) $, hence the largest invariant set is $\setc{x}{F(x)=0}$. Since $V$ is radially unbounded, the conclusion follows. \end{proof} \section{Partial Information} \label{sec:dist} In Section \ref{sec:alg} we considered that each agent knows all others' decisions $x_{-i}$. In this section we propose a version of \eqref{eqn:proposedAlg} that works in the partial information setting, i.e. when agents do not know all others' decisions and instead estimate them based on communicating with their neighbors over a communication graph $G_{c}$. \begin{asmp} \label{asmp:graph} $G_{c} = (\mathcal{N}, \mathcal{E})$ is undirected and connected. \end{asmp} Assume that each agent $i$ maintains an estimate vector $\mathbf{x}^{i} = col(\mathbf{x}^{i}_{j})_{j\in\mathcal{N}} \in \mathbb{R}^{n}$, where $\mathbf{x}^{i}_{j}$ is agent $i$'s estimate of player $j$'s action. Note that $\mathbf{x}^{i}_{i} = x_{i}$ is player $i$'s actual action. Let $\mathbf{x} = col(\mathbf{x}^{i})_{i\in\mathcal{N}}\in\mathbb{R}^{Nn}$ represent all agents' estimates stacked into a single vector. Similarly, define the auxiliary variable $\mathbf{r}^{i} \in \mathbb{R}^{n}$ for each agent $i$. Let the extended pseudo-gradient be denoted as $\mathbf{F}(\mathbf{x}) := col(\nabla_{x_i}J_i(\mathbf{x}^{i}))_{i\in\mathcal{N}}$, where each agent uses its estimate of others' decisions instead of the true decisions. Note that at consensus of estimates, $\mathbf{x}^{i}=x$, for all $i\in\mathcal{N}$, and $\mathbf{F}(\mathbf{1}_{N}\otimes x)=F(x)$, for any $x \in \mathbb{R}^n$. Let the matrix $\mathcal{R} = diag(\mathcal{R}_{i})_{i\in\mathcal{N}}$, where $\mathcal{R}_{i} = [\mathbf{0}_{n_{i}\times n_{<i}}\; I_{n_{i}}\; \mathbf{0}_{n_{i} \times n_{>i}}]$, with $n_{<i} = \sum_{\substack{j<i }}n_{j}$, $n_{>i} = \sum_{\substack{j>i}}n_{j}$, $i,j\in\mathcal{N}$. 
The matrix $\mathcal{R}_{i}$ is used to extract the component of a vector that belongs to agent $i$, i.e., $x_{i} = \mathcal{R}_{i}\mathbf{x}^{i}$ and $x = \mathcal{R}\mathbf{x}$. The operation $\mathbf{x} = \mathcal{R}^{T}x$ sets $\mathbf{x}^{i}_{i}=x_{i}$ and $\mathbf{x}^{i}_{j} = 0$ for all $j\neq i$. The problem is thus lifted into an augmented space of decisions, estimates and auxiliary variables $(\mathbf{x},\mathbf{r})$, with the original space being its consensus subspace. Consider the partial information version of \eqref{eqn:proposedAlg}, over $G_{c}$, where the individual agent dynamics are given as, \begin{align} \label{eqn:distProposedAlgAgent} \mathbf{\dot{r}}^{i} &= \alpha (\mathbf{x}^{i} - \mathbf{r}^{i}) \\ \mathbf{\dot{x}}^{i} &= -\mathcal{R}_{i}^{T}\nabla_{x_i}J_{i}(\mathbf{x}^{i}) - \beta(\mathbf{x}^{i} - \mathbf{r}^{i}) - c\sum_{j\in \mathcal{N}_{i}} w_{ij}(\mathbf{x}^{i} - \mathbf{x}^{j}) \notag \end{align} or, in compact (stacked) form, as \begin{align} \label{eqn:distProposedAlg} \begin{split} \mathbf{\dot{r}} &= \alpha (\mathbf{x} - \mathbf{r}) \\ \mathbf{\dot{x}} &= -\mathcal{R}^{T}\mathbf{F}(\mathbf{x}) - \beta(\mathbf{x} - \mathbf{r}) - c\mathbf{L}\mathbf{x} \end{split} \tag{HA$_{\text{\textbf{F}}}$} \end{align} where $c> 0$ is a scaling factor and $\mathbf{L} = L\otimes I_{n}$. The individual agent dynamics \eqref{eqn:distProposedAlgAgent} are the augmented version of \eqref{eqn:proposedAlg} with a Laplacian (consensus) correction for the estimates. Note that the dynamics \eqref{eqn:distProposedAlg} are similar to Fig. \ref{fig:bdAlg}, but with an augmented state $(\mathbf{r},\mathbf{x})$ and with the feedback loop closed via $u_c = - c\mathbf{L}\mathbf{x}$. At consensus, $\mathbf{x}=\mathbf{1}_N\otimes x$, $\mathbf{r}=\mathbf{1}_N\otimes r$, and \eqref{eqn:distProposedAlg} recovers \eqref{eqn:proposedAlg}. First, we show that any equilibrium point of \eqref{eqn:distProposedAlg} is a NE. 
\begin{lemma} \label{lemma:distEQ} Consider a game $\mathcal{G}(\mathcal{N},J_i,\Omega_i)$ under Assumption \ref{asmp:Jsmooth}, over a communication graph $G_c = (\mathcal{N},\mathcal{E})$ satisfying Assumption \ref{asmp:graph}. Let each agent's dynamics be as in \eqref{eqn:distProposedAlgAgent}, or overall as \eqref{eqn:distProposedAlg}. Then, any equilibrium $(\bar{\mathbf{x}},\bar{\mathbf{r}})$ of \eqref{eqn:distProposedAlg} satisfies $\bar{\mathbf{x}}^{1} = \cdots = \bar{\mathbf{x}}^{N} = \bar{\mathbf{r}}^{1} = \cdots = \bar{\mathbf{r}}^{N} = x^{*}$ where $x^{*}$ is a NE. \end{lemma} \begin{proof} Let $(\bar{\mathbf{x}},\bar{\mathbf{r}})$ denote an equilibrium of \eqref{eqn:distProposedAlg}. Then at equilibrium we have $\bar{\mathbf{x}} = \bar{\mathbf{r}}$ and $\mathbf{0}_{Nn} = -\mathcal{R}^{T}\mathbf{F}(\bar{\mathbf{x}}) - c\mathbf{L}\bar{\mathbf{x}}$. Pre-multiplying both sides by $(\mathbf{1}^{T}_{N}\otimes I_{n})$ yields $\mathbf{0}_{n} = \mathbf{F}(\bar{\mathbf{x}})$ and therefore, $\mathbf{0}_{Nn} = -c\mathbf{L}\bar{\mathbf{x}}$. By Assumption \ref{asmp:graph}, $\mathbf{0}_{Nn} = -c\mathbf{L}\bar{\mathbf{x}}$ if and only if $\bar{\mathbf{x}}^{1} =\cdots = \bar{\mathbf{x}}^{N}$, i.e., $\bar{\mathbf{x}} = \mathbf{1}_{N}\otimes \bar{x}$ for some $\bar{x} \in \mathbb{R}^{n}$. Therefore, $\mathbf{0}_{n} = \mathbf{F}(\bar{\mathbf{x}}) = \mathbf{F}(\mathbf{1}_{N}\otimes \bar{x}) = F(\bar{x})$, hence $ \bar{x} = x^{*}$, where $x^{*}$ is a Nash Equilibrium. \end{proof} \begin{remark} We note that in the full decision information case, monotonicity of $F$ was instrumental (see Theorem \ref{thm:EIPconvergence}). In the augmented space, monotonicity does not necessarily hold, even if it does hold on the consensus subspace, cf. Assumption \ref{asmp:monotone}; see \cite{PavelGNE}. This is unlike distributed optimization, where, due to separability, the extension of monotonicity/convexity properties to the augmented space holds automatically. 
This is the main technical difficulty in developing NE seeking dynamics in partial-information settings. \end{remark} Our first result is proved under a monotonicity assumption on the extended pseudo-gradient $\mathbf{F}$. \begin{asmp}\label{asmp:extendMono} The extended pseudo-gradient is monotone, $\inp*{\mathbf{x}-\mathbf{x'}}{\mathcal{R}^{T}(\mathbf{F}(\mathbf{x}) - \mathbf{F}(\mathbf{x'}))} \geq 0$, $\forall \mathbf{x}, \mathbf{x'}$. \end{asmp} Assumption \ref{asmp:extendMono} has also been used in Thm. 1, \cite{dianCT}, and, as cocoercivity, in \cite{Hu}. It represents an extension of monotonicity off the consensus subspace. Note that on the consensus subspace ($\mathbf{x}=\mathbf{1}_N\otimes x$), it is automatically satisfied by Assumption \ref{asmp:monotone}. Under Assumption \ref{asmp:extendMono}, the following result can be immediately obtained by exploiting EIP properties. \begin{thm} Consider a game $\mathcal{G}(\mathcal{N},J_i,\Omega_i)$ under Assumptions \ref{asmp:Jsmooth}, \ref{asmp:monotone}, and \ref{asmp:extendMono}, over a communication graph $G_{c} = (\mathcal{N},\mathcal{E})$ satisfying Assumption \ref{asmp:graph}. Let the overall dynamics of the agents be given by \eqref{eqn:distProposedAlg} or \eqref{eqn:distProposedAlgAgent}. Then, for any $\alpha,\beta > 0$, \eqref{eqn:distProposedAlg} converges asymptotically to $(\mathbf{1}_{N}\otimes x^*,\mathbf{1}_{N}\otimes x^*)$, where $x^*$ is a NE. \end{thm} \begin{proof} Note that \eqref{eqn:distProposedAlg} is similar to the dynamics in Fig. \ref{fig:bdAlg}, but with an augmented state $\mathbf{x}$ (decisions and estimates), and with the feedback loop closed via $u_c = - c\mathbf{L}\mathbf{x}$. We exploit the EIP properties of the forward and feedback subsystems. Namely, consider $V(\mathbf{x},\mathbf{r}) = \frac{1}{2}\snorm{\mathbf{x}-\bar{\mathbf{x}}} + \frac{\beta}{2\alpha}\snorm{\mathbf{r}-\bar{\mathbf{r}}}$, where $\bar{\mathbf{x}} = \bar{\mathbf{r}}= \mathbf{1}_{N}\otimes \bar{x}$ (cf. 
Lemma \ref{lemma:distEQ}). Then, along solutions of \eqref{eqn:distProposedAlg}, similar to \eqref{EIP_fdb_F} in Theorem \ref{thm:EIPconvergence}, we can obtain, \begin{align}\label{EIP_fdb_boldF} \dot{V}(\mathbf{x},\mathbf{r}) &= -\inp*{\mathbf{x}-\bar{\mathbf{x}}}{\mathcal{R}^{T}\mathbf{F}(\mathbf{x})-\mathcal{R}^{T}\mathbf{F}(\bar{\mathbf{x}})} \notag \\ & \quad +\inp*{\mathbf{x}-\bar{\mathbf{x}}}{u_c-\bar{u}_c} - \beta \snorm{\mathbf{x}-\mathbf{r}} \end{align} where $u_{c} = -c\mathbf{L}\mathbf{x}$. The first term is nonpositive under Assumption \ref{asmp:extendMono}. For any $\alpha,\beta >0$, the system is strictly EIP from $u_{c}$ to $y_{1}=\mathbf{x}$, and with $u_{c} = -c\mathbf{L}\mathbf{x}$, since $\mathbf{L}$ is positive semidefinite, it follows that $\dot{V} \leq 0$. We use LaSalle's Invariance Principle and find the largest invariant set \cite{nonlinear}. Note that $\dot{V} = 0$ implies that $\mathbf{x}=\mathbf{r}$ and $\mathbf{L}\mathbf{x}=\mathbf{L}\bar{\mathbf{x}}$. Since $\bar{\mathbf{x}} = \mathbf{1}_{N}\otimes \bar{x}$ (cf. Lemma \ref{lemma:distEQ}), $\mathbf{L}\mathbf{x}=\mathbf{L}\bar{\mathbf{x}}=0$, hence $\mathbf{x}=\mathbf{1}_{N}\otimes x$ for some $x \in \mathbb{R}^{n}$. Then, on $\mathbf{x}=\mathbf{r}$, the dynamics \eqref{eqn:distProposedAlg} reduces to $0=\dot{\mathbf{x}}-\dot{\mathbf{r}}= -\mathcal{R}^{T}\mathbf{F}(\mathbf{1}_{N}\otimes x)=-\mathcal{R}^{T}F(x)$, which implies $F(x)=0$, hence the largest invariant set is the NE set. Since $V$ is radially unbounded, the conclusion follows. \end{proof} On the other hand, Assumption \ref{asmp:extendMono} can be quite restrictive. Instead of this assumption on $\mathbf{F}$, we will use a weaker additional condition, this time on the pseudo-gradient $F$. This is the inverse Lipschitz property. In the next section we discuss this property. 
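Before moving on, the lifted dynamics \eqref{eqn:distProposedAlg} can be sketched numerically (ours, for illustration; the 3-player quadratic game, complete communication graph, and all parameter values below are assumed choices, not prescribed by the analysis):

```python
# Euler-discretized sketch of the partial-information Heavy Anchor on an
# assumed 3-player quadratic game with scalar actions:
#   J_i(x) = 0.5*x_i^2 + 0.1*x_i*sum_{j!=i} x_j - b_i*x_i,
# so F is strongly monotone; agents communicate over a complete graph
# with unit weights. Parameter values are illustrative only.
N = 3
b = [1.0, 2.0, 3.0]
alpha, beta, c, s, steps = 1.0, 1.0, 3.0, 0.01, 50000

# X[i][j]: agent i's estimate of x_j (X[i][i] is agent i's real action);
# R[i][j]: the corresponding anchor (auxiliary) variables.
X = [[0.0] * N for _ in range(N)]
R = [[0.0] * N for _ in range(N)]

for _ in range(steps):
    Xn = [row[:] for row in X]
    Rn = [row[:] for row in R]
    for i in range(N):
        # Partial gradient of J_i evaluated on agent i's own estimates.
        gi = X[i][i] + 0.1 * sum(X[i][j] for j in range(N) if j != i) - b[i]
        for k in range(N):
            lap = sum(X[i][k] - X[j][k] for j in range(N) if j != i)
            grad = gi if k == i else 0.0   # R_i^T embedding of the gradient
            Xn[i][k] = X[i][k] + s * (-grad - beta * (X[i][k] - R[i][k]) - c * lap)
            Rn[i][k] = R[i][k] + s * alpha * (X[i][k] - R[i][k])
    X, R = Xn, Rn

x_star = [(bi - 0.5) / 0.9 for bi in b]       # closed-form NE of this game
print([round(X[i][i], 3) for i in range(N)])  # approaches [0.556, 1.667, 2.778]
```

For these parameter values, the actions converge to the NE and every agent's estimate vector reaches consensus on it, consistent with the role of the Laplacian feedback $-c\mathbf{L}\mathbf{x}$.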
\section{Inverse Lipschitz} \label{sec:invLip} In convex analysis and monotone operator theory, there are three properties of an operator $T$ that are frequently used and important: $\mu$-strong monotonicity, $L$-Lipschitz continuity, and $C$-cocoercivity, which describe upper and lower bounds on an operator $T$. However, it appears that a natural definition is missing. \begin{defn} An operator $T : \mathcal{H} \to 2^{\mathcal{H}}$ is $R$-inverse Lipschitz if, \begin{align*} \norm{x-y} &\leq R \norm{Tx-Ty} \quad \forall x,y \in \mathcal{H}, \end{align*} related to the condition used by Rockafellar \cite{rockInvLip}. \end{defn} \begin{remark} A $C$-cocoercive operator is also called $C$-inverse strongly monotone, because if $x=T^{-1}u$ and $y=T^{-1}v$ then, \begin{align*} \inp*{Tx - Ty}{x-y} &\geq C \snorm{Tx-Ty} \\ \inp*{u - v}{T^{-1}u-T^{-1}v} &\geq C \snorm{u-v} \end{align*} This is the same as the inverse operator $T^{-1}$ being $C$-strongly monotone. In the same spirit, we call $T$ an $R$-inverse Lipschitz operator because this is the same as the inverse operator $T^{-1}$ being $R$-Lipschitz, i.e., \begin{align*} \norm{x-y} &\leq R \norm{Tx-Ty} \\ \norm{T^{-1}u-T^{-1}v} &\leq R \norm{u-v} \end{align*} \end{remark} \subsection{Similarities} \subsubsection{Similarities in monotone operator theory} The inverse Lipschitz property is closely related to the coercivity (radial unboundedness) property. \begin{defn} A function $f:\mathcal{H}\to\mathbb{R}$ is coercive (radially unbounded) if, \begin{align*} \lim_{\norm{x}\to \infty} f(x) = +\infty \end{align*} \end{defn} Since coercive functions are real valued and $T$ is in general vector valued, taking the norm can be thought of as an extension of the definition, i.e., $\lim_{\norm{x}\to \infty} \norm{Tx} = +\infty$. If we set $y=0$ and define $\tilde{T}x = Tx - T0$, then from the $R$-inverse Lipschitz definition of $T$, we see that $\norm{x} \leq R\norm{\tilde{T}x}$. 
Therefore, $R$-inverse Lipschitz is a stronger growth condition relating the input to the output, similar to coercivity, and it implies that $\tilde{T}$ is coercive. \subsubsection{Similarities to optimization} In optimization there are conditions weaker than strong convexity that still yield linear convergence rates \cite{opt_weak_conditions}. One of these is the Polyak-Lojasiewicz (PL) inequality. A function $f$ satisfies the (PL) inequality if $\forall x\in X$, $\frac{1}{2}\snorm{\nabla f(x)} \geq \mu \bracket{f(x)-f^{*}}$, where $f^*$ is the value of $f$ at the optimal solution. Theorem 2 of \cite{opt_weak_conditions} shows that if $f$ has a Lipschitz-continuous gradient then (PL) is equivalent to the Error Bound (EB) inequality, $\forall x\in X$, $\norm{\nabla f (x)} \geq \mu \norm{x - Proj_{X^{*}}(x)}$. If $X = \mathbb{R}^{n}$ then $\nabla f(Proj_{X^{*}}(x)) = 0$ and the condition can be written as, $\norm{\nabla f (x) - \nabla f (y)} \geq \mu \norm{x - y}$ where $y = Proj_{X^{*}}(x)$, hence $\nabla f$ is $\frac{1}{\mu}$-inverse Lipschitz. Thus, the $R$-inverse Lipschitz condition is the monotone-operator analogue of the Error Bound / Polyak-Lojasiewicz inequality. If we replace $\nabla f$ with a monotone operator then this condition becomes the one used by Rockafellar in \cite{rockInvLip}, \begin{align*} \norm{x-y} \leq R \norm{Tx-y} \end{align*} for all $x\in \mathcal{H}$ and $y$ restricted to satisfy $y = Ty$. If we remove the restriction $y = Ty$ then we recover the definition of $R$-inverse Lipschitz. \subsubsection{Similarities in control / passivity} Passive systems with an inverse Lipschitz property have been analyzed in Chapter 6, Section 11 of \cite{control_inv_lip}. However, the analysis there covers only the case when the operator is strongly monotone. \subsection{Relations / Properties} The following diagram shows the relationship between $R$-inverse Lipschitz and the other properties.
\begin{figure} \caption{$cvx$: set of convex functions, $\mu$: set of $\mu$-strongly monotone operators, $C$: set of $C$-cocoercive operators, $M$: set of monotone operators, $R$: set of $R$-inverse Lipschitz operators, $L$: set of Lipschitz operators} \end{figure} \begin{prop}\label{c->l} If an operator $T : \mathcal{H} \to 2^{\mathcal{H}}$ is $C$-inverse strongly monotone then it is $\frac{1}{C}$-Lipschitz. \end{prop} \begin{proof} Note that a $C$-inverse strongly monotone operator satisfies, $C \snorm{Tx-Ty} \leq \inp*{Tx-Ty}{x-y} \leq \norm{Tx-Ty}\norm{x-y}$, therefore $\norm{Tx-Ty} \leq \frac{1}{C}\norm{x-y}$. \end{proof} \begin{prop}[Baillon-Haddad \cite{Baillon}] \label{l->c} If an operator $T : \mathcal{H} \to 2^{\mathcal{H}}$ is $\frac{1}{C}$-Lipschitz and is the gradient of a convex function then it is $C$-inverse strongly monotone. \end{prop} \begin{prop}\label{u->r} If an operator $T : \mathcal{H} \to 2^{\mathcal{H}}$ is $\mu$-strongly monotone then it is $\frac{1}{\mu}$-inverse Lipschitz. \end{prop} \begin{proof} Note that a $\mu$-strongly monotone operator satisfies, $\mu \snorm{x-y} \leq \inp*{Tx-Ty}{x-y} \leq \norm{Tx-Ty}\norm{x-y}$, therefore $\norm{x-y} \leq \frac{1}{\mu}\norm{Tx-Ty}$. \end{proof} \begin{prop}\label{r->u} If an operator $T : \mathcal{H} \to 2^{\mathcal{H}}$ is $\frac{1}{\mu}$-inverse Lipschitz and is the gradient of a convex function then it is $\mu$-strongly monotone. \end{prop} \begin{proof} Let $T = \partial f$. If $\partial f$ is $\frac{1}{\mu}$-inverse Lipschitz then $(\partial f)^{-1} = \partial f^{*}$ is $\frac{1}{\mu}$-Lipschitz. From Prop 12.60(a,b) \cite{RockafellarVarAnal}, if a function is convex and its subgradient is $\frac{1}{\mu}$-Lipschitz, then its conjugate is $\mu$-strongly convex. Applying this to $f^{*}$, whose subgradient $\partial f^{*}$ is $\frac{1}{\mu}$-Lipschitz, and using $f^{**} = f$, we conclude that $f$ is $\mu$-strongly convex, hence $T = \partial f$ is $\mu$-strongly monotone.
\end{proof} \begin{prop}\label{ul->c} If an operator $T : \mathcal{H} \to 2^{\mathcal{H}}$ is $\mu$-strongly monotone and $L$-Lipschitz then it is $\frac{\mu}{L^{2}}$-inverse strongly monotone, or cocoercive with $C = \frac{\mu}{L^{2}}$. \end{prop} \begin{proof} A $\mu$-strongly monotone operator satisfies, $\mu \snorm{x-y} \leq \inp*{Tx-Ty}{x-y}$ and a $L$-Lipschitz operator satisfies, $\frac{1}{L^{2}}\snorm{Tx-Ty}\leq \snorm{x-y}$. Combining these together gives $\frac{\mu}{L^{2}}\snorm{Tx-Ty} \leq \inp*{Tx-Ty}{x-y}$. \end{proof} \begin{prop}\label{cr->u} If an operator $T : \mathcal{H} \to 2^{\mathcal{H}}$ is $C$-inverse strongly monotone ($C$-cocoercive) and $R$-inverse Lipschitz then it is $\frac{C}{R^{2}}$-strongly monotone. \end{prop} \begin{proof} A $C$-inverse strongly monotone operator satisfies, $C\snorm{Tx-Ty} \leq \inp*{Tx-Ty}{x-y}$ and a $R$-inverse Lipschitz operator satisfies, $\frac{1}{R^{2}}\snorm{x-y}\leq \snorm{Tx-Ty}$. Combining these together gives $\frac{C}{R^{2}}\snorm{x-y} \leq \inp*{Tx-Ty}{x-y}$. \end{proof} The following proposition gives a useful property of operators under the $R$-inverse Lipschitz assumption. \begin{prop} \label{lemma:resInvLip} Let $T : \mathcal{H} \to 2^{\mathcal{H}}$ be a maximally $\mu$-hypomonotone operator that is $R$-inverse Lipschitz. Then for any $\lambda \geq 0$ such that $\mu R^{2} \leq \lambda < \frac{1}{\mu}$, the following hold for the resolvent of $T$, $\mathcal{J}_{\lambda T} = (\mathrm{Id} + \lambda T)^{-1}$ \begin{enumerate} \item[(i)] $\mathcal{J}_{\lambda T}$ is maximally monotone. \item[(ii)] $\mathcal{J}_{\lambda T}$ is $L_{\mathcal{J}}$-Lipschitz, $\norm{\mathcal{J}_{\lambda T}x - \mathcal{J}_{\lambda T}y} \leq L_{\mathcal{J}}\norm{x-y}$ where, $L_{\mathcal{J}} \defeq \sqrt{\frac{R^{2}}{R^{2} + \lambda^{2} - 2\lambda \mu R^{2}}}$.
\item[(iii)] $\inp*{x-y}{\mathcal{J}_{\lambda T}x - \mathcal{J}_{\lambda T}y} \leq \kappa_{\mathcal{J}} \snorm{x-y} $ where, \begin{align*} \kappa_{\mathcal{J}} \defeq \begin{cases} \frac{R^{2}(1- \mu\lambda)}{R^{2} + \lambda^{2} - 2\mu\lambda R^{2}} & R\geq \lambda \\ \frac{R^{2}(1+\frac{\lambda}{R})}{R^{2} + \lambda^{2} + 2\lambda R} & \lambda \geq R \end{cases} \end{align*} \end{enumerate} \end{prop} \begin{proof} Found in the Appendix. \end{proof} \begin{remark} The Lipschitz constant from Lemma \ref{lemma:resInvLip} (ii) can upper bound the inner product $\inp*{x-y}{\mathcal{J}_{\lambda T}x - \mathcal{J}_{\lambda T}y}$, but Lemma \ref{lemma:resInvLip} (iii) provides a tighter bound. \end{remark} When $T$ is differentiable, some sufficient conditions for these properties are given next. \begin{prop} \label{lemma:funcType} Let $T$ be a differentiable operator and let the Jacobian of $T$ be denoted $JT(x)$. Then $T$ is, \begin{enumerate} \item $\mu$-strongly monotone if: \null $\frac{1}{2}\bracket{JT(x) + JT(x)^{T}} \succeq \mu I$ \item $C$-cocoercive if: \null $C JT(x)^{T}JT(x) \preceq \frac{1}{2}\bracket{JT(x) + JT(x)^{T}}$ \item $L$-Lipschitz if: \null $JT(x)^{T}JT(x) \preceq L^{2} I$ \item $R$-inverse Lipschitz if: \null $JT(x)^{T}JT(x) \succeq \frac{1}{R^{2}} I$ \end{enumerate} \end{prop} \begin{proof} Found in the Appendix. \end{proof} \subsection{Examples} \begin{exmp} The operator $Tx = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}x$ is monotone but is neither strongly monotone nor cocoercive. It is $1$-Lipschitz and $1$-inverse Lipschitz. \end{exmp} \begin{exmp} The operator $T : [-1,1] \to [-1,1]$, $Tx = x^{3}$ is $\frac{1}{3}$-cocoercive and is neither strongly monotone nor inverse Lipschitz.
\end{exmp} \begin{exmp} The operator $Tx = \begin{bmatrix} 2 & 1 \\ -1 & 3 \end{bmatrix}x$ is $2$-strongly monotone, $\sqrt{\frac{15 + \sqrt{29}}{2}}$-Lipschitz, $\frac{1}{15+\sqrt{29}}$-cocoercive and $\sqrt{\frac{15 - \sqrt{29}}{2}}$-inverse Lipschitz. \end{exmp} \begin{exmp} The operator $T: [0,\infty) \to [1,\infty)$, $Tx = e^{x}$ is $1$-strongly monotone and $1$-inverse Lipschitz, but is neither cocoercive nor Lipschitz. \end{exmp} \begin{exmp} The operator $Tx = \sin(x)$ is $1$-Lipschitz and is not strongly monotone, cocoercive or inverse Lipschitz. \end{exmp} \begin{exmp} The operator $T: (0,1) \to (0,\infty)$, $Tx = \frac{1}{x}$ is $1$-inverse Lipschitz and is not strongly monotone, cocoercive or Lipschitz. \end{exmp} \section{Convergence under Partial Information} \label{sec:partialConv} We will now show that \eqref{eqn:distProposedAlg} converges to the NE when the monotonicity of the extended pseudo-gradient, Assumption \ref{asmp:extendMono}, is replaced by a weaker assumption only on the pseudo-gradient. \begin{asmp}\label{asmp:inverse} The pseudo-gradient $F$ is $L_F$-Lipschitz, $R$-inverse Lipschitz, and $\mu$-hypomonotone, i.e., $\inp*{Fx-Fy}{x-y}\geq -\mu \snorm{x-y}$. \end{asmp} \begin{remark} Note that $F$ may not be monotone. For example, $F(x) = \begin{bmatrix} -1 & 1 \\ -1 & -1 \end{bmatrix}x$ is $1$-hypomonotone, $\sqrt{2}$-Lipschitz and $\frac{1}{\sqrt{2}}$-inverse Lipschitz. \end{remark} When the extended monotonicity property (Assumption \ref{asmp:extendMono}) does not hold, we use Assumption \ref{asmp:inverse} and take advantage of properties of the dynamics on the augmented consensus subspace and its orthogonal complement. Our idea is to use a change of coordinates and, in these coordinates, show that, under Assumptions \ref{asmp:monotone} and \ref{asmp:inverse}, the dynamics restricted to the consensus subspace satisfies a property similar to strict EIP for $\alpha$ parameters selected in a certain range (Lemma \ref{lemma:consTTS}).
Then, for the overall dynamics, we exploit this property together with the excess passivity of the Laplacian to balance the coupling terms off the consensus subspace and show that \eqref{eqn:distProposedAlg} converges to a Nash Equilibrium (Theorem \ref{thm:distMono}). We first decompose the system into consensus and orthogonal component dynamics. Let $\mathbf{x}$ and $\mathbf{r}$ be decomposed into consensus and orthogonal components, i.e., \begin{align*} \mathbf{x} &= \mathbf{x}^{||} + \mathbf{x}^{\perp}, & \mathbf{x}^{||} &= \Pi_{||}\mathbf{x}, & \mathbf{x}^{\perp} &= \mathbf{x} - \mathbf{x}^{||}\\ \mathbf{r} &= \mathbf{r}^{||} + \mathbf{r}^{\perp}, & \mathbf{r}^{||} &= \Pi_{||}\mathbf{r}, & \mathbf{r}^{\perp} &= \mathbf{r} - \mathbf{r}^{||} \end{align*} where $\Pi_{||} = \frac{1}{N}(\mathbf{1}_{N}\otimes\mathbf{1}_{N}^{T}\otimes I_{n})$ and $\Pi_{\perp} = I_{Nn} - \Pi_{||}$, $\mathbf{x}^{||}=\mathbf{1}_{N}\otimes x$, $\mathbf{r}^{||}=\mathbf{1}_{N}\otimes r$, for some $x,r\in \mathbb{R}^n$. The overall dynamics \eqref{eqn:distProposedAlg} can be decomposed into the (augmented) consensus component dynamics, \begin{align} \begin{split} \mathbf{\dot{r}^{||}} &= \alpha\bracket{\mathbf{x}^{||} - \mathbf{r}^{||}} \\ \mathbf{\dot{x}^{||}} &= -\frac{1}{N}\mathbf{1}_{N}\otimes\mathbf{F}\bracket{\mathbf{x}^{||} + \mathbf{x}^{\perp}} - \beta\bracket{\mathbf{x}^{||} - \mathbf{r}^{||}} \end{split} \label{eqn:consInter} \end{align} and the orthogonal component dynamics, \begin{align} \label{eqn:orthDyn} \begin{split} \mathbf{\dot{r}^{\perp}} &= \alpha\bracket{\mathbf{x}^{\perp} - \mathbf{r}^{\perp}} \\ \mathbf{\dot{x}^{\perp}} &= -\Pi_{\perp}\mathcal{R}^{T}\mathbf{F}\bracket{\mathbf{x}^{||} + \mathbf{x}^{\perp}} - \beta\bracket{\mathbf{x}^{\perp} - \mathbf{r}^{\perp}} \\ &\qquad - c\mathbf{L}\mathbf{x}^{\perp} \end{split} \end{align} which are coupled to one another through $\mathbf{x}^{\perp}$ and $\mathbf{x}^{||}$.
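The projections $\Pi_{||}$ and $\Pi_{\perp}$ above can be verified numerically. A small sketch (the dimensions $N=4$, $n=2$ are arbitrary choices for illustration):

```python
import numpy as np

N, n = 4, 2
ones = np.ones((N, 1))
# Consensus projection Pi_par = (1/N)(1_N 1_N^T kron I_n) and its complement
Pi_par = np.kron(ones @ ones.T, np.eye(n)) / N
Pi_perp = np.eye(N * n) - Pi_par

assert np.allclose(Pi_par @ Pi_par, Pi_par)   # idempotent (a projection)
assert np.allclose(Pi_par @ Pi_perp, 0)       # complementary projections

x = np.random.default_rng(1).standard_normal(N * n)
x_par, x_perp = Pi_par @ x, Pi_perp @ x
assert np.allclose(x, x_par + x_perp)         # x = x^|| + x^perp
assert abs(x_par @ x_perp) < 1e-10            # orthogonal decomposition
# x^|| has the consensus form 1_N kron xbar (all N blocks identical)
assert np.allclose(x_par.reshape(N, n), x_par.reshape(N, n)[0])
```

Since $\Pi_{||}$ is symmetric and idempotent, the two components are orthogonal, which is what the Lyapunov analysis below exploits.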
Let the change of variables $\mathbf{z}^{||} := \mathbf{x}^{||} - \mathbf{h}(\mathbf{r}^{||})$ where \begin{align}\label{h_def} \mathbf{h}(\mathbf{r}^{||}) := \mathcal{J}_{\frac{1}{\beta N}\mathbf{1}_{N}\otimes\mathbf{F}}(\mathbf{r}^{||}) = \left (\text{Id} + \frac{1}{\beta N}\mathbf{1}_{N}\otimes\mathbf{F} \right )^{-1}(\mathbf{r}^{||}) \end{align} is the resolvent of $\mathbf{1}_{N}\otimes\mathbf{F}$ on the consensus subspace. Then from \eqref{eqn:consInter}, it follows that \begin{align} \label{eqn:consDyn} \mathbf{\dot{r}^{||}} &= \alpha\bracket{\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}) - \mathbf{r}^{||}} \\ \mathbf{\dot{z}^{||}} &= -\frac{1}{N}\mathbf{1}_{N}\otimes\mathbf{F}\bracket{\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||})+ \mathbf{x}^{\perp}} \notag \\ &\qquad - \bracket{\beta + \alpha \frac{\partial \mathbf{h}}{\partial \mathbf{r}}} \bracket{\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}) - \mathbf{r}^{||}} \notag \end{align} Therefore, the dynamics \eqref{eqn:distProposedAlg} can be equivalently represented as \eqref{eqn:orthDyn} and \eqref{eqn:consDyn}. Note that an equilibrium point for these dynamics is $(\mathbf{\bar{z}^{||}},\mathbf{\bar{r}^{||}},\mathbf{\bar{x}^{\perp}},\mathbf{\bar{r}^{\perp}}) = (\mathbf{0}_{Nn}, \mathbf{1}_{N}\otimes x^{*}, \mathbf{0}_{Nn}, \mathbf{0}_{Nn})$, where $F(x^*)=0$ ($x^*$ is a NE), cf. Lemma \ref{lemma:distEQ}. 
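The resolvent $\mathbf{h}$ and the Lipschitz bound $L_{\mathcal{J}}$ of Lemma \ref{lemma:resInvLip}(ii) can be checked numerically for a linear pseudo-gradient. A sketch using the hypomonotone matrix from the earlier remark ($\mu$, $R$, and the step $\lambda$ are computed or chosen here for illustration, not quoted from the paper):

```python
import numpy as np

# Hypomonotone linear pseudo-gradient F x = A x (matrix from the text):
# mu = 1 (hypomonotonicity) and R = 1/sigma_min(A) (inverse Lipschitz).
A = np.array([[-1.0, 1.0], [-1.0, -1.0]])
mu = 1.0
R = 1.0 / np.linalg.svd(A, compute_uv=False).min()   # = 1/sqrt(2)

lam = 0.6                         # any step with mu*R^2 <= lam < 1/mu
assert mu * R**2 <= lam < 1.0 / mu

# Bound of Lemma resInvLip(ii): L_J = sqrt(R^2/(R^2 + lam^2 - 2 lam mu R^2))
L_J = np.sqrt(R**2 / (R**2 + lam**2 - 2 * lam * mu * R**2))

# For a linear operator the resolvent is J = (I + lam A)^{-1}, whose exact
# Lipschitz constant is its largest singular value.
J = np.linalg.inv(np.eye(2) + lam * A)
lip_actual = np.linalg.svd(J, compute_uv=False).max()
assert lip_actual <= L_J + 1e-9   # the bound holds
```

For this particular matrix the bound is attained with equality, since $I+\lambda A$ is a scaled rotation and all its singular values coincide.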
Consider the dynamics \eqref{eqn:consDyn} restricted to the consensus subspace, i.e., when $\mathbf{x}^{\perp} = \mathbf{0}$, which is given as \begin{align} \label{eqn:consDynFI} \mathbf{\dot{r}^{||}} &= \alpha \bracket{\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}) - \mathbf{r}^{||}}\\ \mathbf{\dot{z}^{||}} &= -\frac{1}{N}\mathbf{1}_{N}\otimes\mathbf{F}\bracket{\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||})} \notag \\ &\qquad - \bracket{\beta + \alpha \frac{\partial \mathbf{h}}{\partial \mathbf{r}}} \bracket{\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}) - \mathbf{r}^{||}} \notag \end{align} \begin{lemma} \label{lemma:consTTS} Consider \eqref{eqn:consDynFI}, under Assumption \ref{asmp:Jsmooth} and \ref{asmp:inverse}. For any $0<d < 1$, let $\beta \in \bracket{\frac{\mu}{N}, \frac{1}{\mu N R^{2}}}$ and \begin{align*} 0< \alpha < \frac{4 d(1-d)(\beta-\frac{\mu}{N})(1-{\kappa_{\mathcal{J}}})}{\left((1-d)+ d(L_{\mathcal{J}} + L^2_{\mathcal{J}})\right )^2} \end{align*} where $\kappa_{\mathcal{J}}$ and $L_{\mathcal{J}}$ are obtained from Lemma \ref{lemma:resInvLip} for the pseudo-gradient $F$ and $\lambda = \frac{1}{\beta N}$. Let, \begin{align*} V^{||}(\mathbf{z}^{||}, \mathbf{r}^{||}) = \frac{1-d}{2}\snorm{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}} + \frac{d}{2}\snorm{\mathbf{z}^{||}} \end{align*} where $\mathbf{\bar{r}}^{||} = \mathbf{1}\otimes x^{*}$. Then, along any solution of \eqref{eqn:consDynFI}, $\dot{V}^{||} \leq -\varpi^{T} \Phi\varpi$ where $\varpi = \bracket{\norm{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}}, \norm{\mathbf{z}^{||}}}$, \begin{align} \label{eqn:lyapNSD} \Phi &= \begin{bmatrix} (1-d)\alpha(1-\kappa_{\mathcal{J}}) & -\frac{\alpha +\alpha(L_{\mathcal{J}}^{2}+L_{\mathcal{J}}-1)d}{2} \\ -\frac{\alpha + \alpha(L_{\mathcal{J}}^{2}+L_{\mathcal{J}}-1)d}{2} & d\bracket{\beta - \frac{\mu}{N}} \end{bmatrix} \end{align} and the matrix $\Phi$ is positive definite. 
\end{lemma} Using this lemma, we can show that \eqref{eqn:proposedAlg}, in the full information case, converges for hypomonotone games instead of only monotone ones. \begin{lemma} \label{lemma:full_hypo} Consider \eqref{eqn:proposedAlg}, under Assumptions \ref{asmp:Jsmooth} and \ref{asmp:inverse}. For any $0<d < 1$, let $\beta \in \bracket{\mu, \frac{1}{\mu R^{2}}}$ and \begin{align*} 0< \alpha < \frac{4 d(1-d)(\beta-\mu)(1-{\kappa_{\mathcal{J}}})}{\left((1-d)+ d(L_{\mathcal{J}} + L^2_{\mathcal{J}})\right )^2} \end{align*} where $\kappa_{\mathcal{J}}$ and $L_{\mathcal{J}}$ are obtained from Lemma \ref{lemma:resInvLip} for the pseudo-gradient $F$ and $\lambda = \frac{1}{\beta}$. Then, the dynamics \eqref{eqn:proposedAlg} globally converge to a NE $x^{*}$. \end{lemma} Next, we show that \eqref{eqn:distProposedAlg} converges to a NE in the partial information case. \begin{thm} \label{thm:distMono} Consider a game $\mathcal{G}(\mathcal{N},J_i,\Omega_i)$ over a communication graph $G_c = (\mathcal{N},\mathcal{E})$, under Assumptions \ref{asmp:Jsmooth}, \ref{asmp:graph} and \ref{asmp:inverse}. Let the overall dynamics of the agents be given by \eqref{eqn:distProposedAlg} or, equivalently, \eqref{eqn:orthDyn} and \eqref{eqn:consDyn}. Given any $0<d < 1$, set $\alpha, \beta$ to satisfy the conditions in Lemma \ref{lemma:consTTS}. Set $c$ such that, \begin{align} \label{eqn:graphGameCondition} c\lambda_{2}(L) > \frac{\eta_{1} + \eta_{2}}{4 \det(\Phi)}L_F^{2} + L_F \end{align} where $\Phi$ is defined in \eqref{eqn:lyapNSD} and \begin{align*} \eta_{1} &= \alpha(1-d)(1-{\kappa_{\mathcal{J}}})\bracket{1+\frac{d}{\sqrt{N}}}^{2} + d\bracket{\beta - \frac{\mu}{N}}L_{\mathcal{J}}^{2} \\ \eta_{2} &= \alpha \bracket{1 + [L_{\mathcal{J}}^{2} + L_{\mathcal{J}} - 1]d}\bracket{1+\frac{d}{\sqrt{N}}}L_{\mathcal{J}} \end{align*} Then, the dynamics \eqref{eqn:distProposedAlg} globally converge to a NE $x^*$.
\end{thm} \begin{proof} Consider the candidate Lyapunov function, \begin{align*} V(\mathbf{z}^{||},\mathbf{r}^{||},\mathbf{x}^{\perp},\mathbf{r}^{\perp}) &= \frac{1-d}{2}\snorm{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}} + \frac{d}{2}\snorm{\mathbf{z}^{||}} \\ &\quad + \frac{1}{2}\snorm{\mathbf{x}^{\perp}} + \frac{\beta}{2\alpha}\snorm{\mathbf{r}^{\perp}} \end{align*} where $\mathbf{\bar{r}}^{||} = \mathbf{1}_{N}\otimes x^{*}$ and $\mathbf{F}(\mathbf{\bar{r}}^{||}) =F(x^*)= 0$. Along \eqref{eqn:orthDyn} and \eqref{eqn:consDyn}, after re-grouping terms we can write, \begin{align*} \dot{V} &= \alpha(1-d)\inp*{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}}{\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}) - \mathbf{r}^{||}} \\ & - \frac{d}{N}\inp*{\mathbf{z}^{||}}{\mathbf{1}_{N}\otimes\mathbf{F}(\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}))} \\ &- d\inp*{\mathbf{z}^{||}}{\beta(\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}) - \mathbf{r}^{||}) + \frac{\partial \mathbf{h}}{\partial \mathbf{r}}\mathbf{\dot{r}^{||}}} \\ &- \frac{d}{N}\inp*{\mathbf{z}^{||}}{\mathbf{1}_{N}\otimes\mathbf{F}(\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}) + \mathbf{x}^{\perp})} \\ &+ \frac{d}{N}\inp*{\mathbf{z}^{||}}{\mathbf{1}_{N}\otimes\mathbf{F}(\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}))} - \beta\snorm{\mathbf{x}^{\perp}-\mathbf{r}^{\perp}} \\ &- \inp*{\mathbf{x}^{\perp}}{\Pi_{\perp}\mathcal{R}^{T}\mathbf{F}(\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}) + \mathbf{x}^{\perp}) + c\mathbf{L}\mathbf{x}^{\perp}} \end{align*} Note that the first three terms correspond to $\dot{V}^{||}$ along \eqref{eqn:consDynFI} in Lemma \ref{lemma:consTTS}, and $\beta>0$. 
Therefore, using Lemma \ref{lemma:consTTS} yields, \begin{align*} \dot{V} &\leq -\omega^{T} \Phi \omega \\ &- \frac{d}{N}\inp*{\mathbf{z}^{||}}{\mathbf{1}_{N}\otimes\mathbf{F}(\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}) + \mathbf{x}^{\perp}) - \mathbf{1}_{N}\otimes\mathbf{F}(\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}))} \\ &- \inp*{\mathbf{x}^{\perp}}{\Pi_{\perp}\mathcal{R}^{T}\mathbf{F}(\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}) + \mathbf{x}^{\perp})} - c\lambda_{2}(L)\snorm{\mathbf{x}^{\perp}} \end{align*} where $\omega = \bracket{\norm{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}}, \norm{\mathbf{z}^{||}}}$. Under Assumption \ref{asmp:inverse}, it follows that $\mathbf{F}$ is also $L_F$-Lipschitz (cf. Lemma 3 of \cite{gramProx} or Lemma 1 of \cite{TatarenkoShi}). Using this, the Cauchy-Schwarz inequality, as well as $\Pi_{\perp}\mathcal{R}^{T}\mathbf{F}(\mathbf{h}(\mathbf{\bar{r}}^{||})) = \mathbf{0}_{Nn}$ yields, \begin{align*} \dot{V} &\leq -\omega^{T} \Phi \omega + \frac{d}{N} \sqrt{N} L_F \norm{\mathbf{z}^{||}} \norm{\mathbf{x}^{\perp}} - c\lambda_{2}(L)\snorm{\mathbf{x}^{\perp}} \\ &+ \norm{\mathbf{x}^{\perp}}\norm{\Pi_{\perp}\mathcal{R}^{T}}\norm{\mathbf{F}(\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}) + \mathbf{x}^{\perp}) - \mathbf{F}(\mathbf{h}(\mathbf{\bar{r}}^{||}))} \end{align*} which, with $\norm{\Pi_{\perp}\mathcal{R}^{T}} \leq 1$ and Lemma \ref{lemma:resInvLip}(ii) for $\mathbf{h}$, leads to, \begin{align*} \dot{V} &\leq -\omega^{T} \Phi \omega + \frac{d}{N} \sqrt{N} L_F \norm{\mathbf{z}^{||}} \norm{\mathbf{x}^{\perp}} - c\lambda_{2}(L)\snorm{\mathbf{x}^{\perp}} \\ &+ \norm{\mathbf{x}^{\perp}} L_F\bracket{\norm{\mathbf{z}^{||}} + L_{\mathcal{J}}\norm{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}} + \norm{\mathbf{x}^{\perp}}} \end{align*} Therefore, \begin{align*} \dot{V} &\leq -\hat{\omega}^{T} \begin{bmatrix} \Phi & \begin{matrix} -\frac{L_FL_{\mathcal{J}}}{2} \\ -\frac{L_F(\sqrt{N}+d)}{2\sqrt{N}} \end{matrix} \\ \begin{matrix} -\frac{L_FL_{\mathcal{J}}}{2} & -\frac{L_F(\sqrt{N}+d)}{2\sqrt{N}} \end{matrix} & c\lambda_{2}(L)-L_F \end{bmatrix} \hat{\omega} \end{align*} where $\hat{\omega}:= \bracket{\omega, \norm{\mathbf{x}^{\perp}}}= \bracket{\norm{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}}, \norm{\mathbf{z}^{||}}, \norm{\mathbf{x}^{\perp}}}$. The block matrix is positive definite if its Schur complement is positive definite, i.e., if \begin{align*} c\lambda_{2}(L) > \frac{\eta_{1} + \eta_{2}}{4 \det(\Phi)}L_{F}^{2} + L_{F} \end{align*} where $\eta_{1}$, $\eta_{2}$ are as in the statement. Therefore, $\dot{V} \leq 0$ and $\dot{V} = 0$ only if ${\mathbf{r}^{||} = \mathbf{\bar{r}}^{||}}=\mathbf{1}_{N}\otimes x^*$, $\mathbf{z}^{||} =0$, $\mathbf{x}^{\perp}=0$, i.e., $\mathbf{x}^{||} = \mathbf{h}(\mathbf{\bar{r}}^{||})= \mathbf{h}(\mathbf{1}_{N}\otimes x^*)= \mathbf{1}_{N}\otimes x^*$, where, since $F(x^*)=0$, $x^*$ is a NE. The conclusion follows by a LaSalle argument \cite{nonlinear}. \end{proof} The conditions that we obtain for Theorem \ref{thm:distMono} are conservative. In the following section, we restrict our attention to an important subclass of games, called quadratic games, and derive tighter conditions on the parameters $\alpha, \beta$ to ensure convergence. \section{Quadratic Hypomonotone Games} \label{sec:Quad} In this section, we consider a quadratic game $J_{i}(x_{i},x_{-i})= \frac{1}{2} x^TQ_{i}x + l_{i}^{T}x + c_{i}$ where $Q_{i}= Q_{i}^{T}\in\mathbb{R}^{n\times n}$, $l_{i}\in\mathbb{R}^{n}$, and $c_{i} \in \mathbb{R}$.
The gradient of each agent's cost function with respect to its own action is $\nabla_{x_{i}}J_{i}(x) = Q_{i}x + l_{i}$, and the pseudo-gradient is, \begin{align} F(x) &= Ax + b ,\qquad A \defeq\begin{bmatrix} Q_{1} \\ Q_{2} \\ \vdots \\ Q_{N} \end{bmatrix}, \quad b \defeq \begin{bmatrix} l_{1} \\ l_{2} \\ \vdots \\ l_{N} \end{bmatrix} \label{eqn:quad_pseudo} \end{align} For the perfect information case, algorithm \eqref{eqn:proposedAlg}, after the change of coordinates $\hat{x} = x-x^*$ and $\hat{r} = r - r^{*}$, is written as, \begin{align} \label{eqn:DynLT} \dot{w} = \begin{bmatrix} \dot{\hat{x}} \\ \dot{\hat{r}} \end{bmatrix} &= \begin{bmatrix} -A-\beta I & \beta I \\ \alpha I & -\alpha I \end{bmatrix}\begin{bmatrix} \hat{x} \\ \hat{r} \end{bmatrix} \defeq M w \end{align} The following lemma relates the eigenvalues of $A$ to those of the overall matrix $M$ in \eqref{eqn:DynLT}. \begin{lemma} \label{lemma:eigRelation} Let $A\in \mathbb{R}^{n\times n}$ be a matrix whose $i^{th}$ eigenvalue is denoted $\rho_{i}$. Then the eigenvalues of $M$, \eqref{eqn:DynLT}, are, \begin{align} \lambda_{i} &= \frac{-(\alpha + \beta + \rho_{i}) \pm \sqrt{(\alpha + \beta + \rho_{i})^{2} - 4\alpha \rho_{i}}}{2} \label{eqn:eigMap} \end{align} for all $i \in \set{1,\dots,n}$. \end{lemma} \begin{proof} Found in the Appendix. \end{proof} The following lemma gives conditions for the eigenvalues of $M$ to lie in the open left-half plane (OLHP). \begin{lemma} \label{thm:stableA} Let $A\in \mathbb{R}^{n\times n}$ be a matrix whose $i^{th}$ eigenvalue is denoted $\rho_{i} = r_{i} + \mathfrak{j}k_{i}$, where $r_{i}$ ($k_{i}$) is the real (imaginary) part of $\rho_{i}$ and $\mathfrak{j} = \sqrt{-1}$. \begin{enumerate} \item[(i)] If $\rho_{i} = 0$ and $\alpha, \beta > 0$, then $\lambda_{i}$ from \eqref{eqn:eigMap} are $0$ and $-(\alpha+\beta)$.
\item[(ii)] If $\rho_{i} \neq 0$, $r_{i} \geq 0$ and $\alpha, \beta > 0$, then $\lambda_{i}$ from \eqref{eqn:eigMap} have real part less than $0$. \item[(iii)] If $r_{i} < 0$, $\beta \in \left(-r_i,\frac{k_i^2 + r_i^2}{-r_i}\right)$ and \begin{align*} \alpha \in \left(0,-(\beta+r_{i}) + \sqrt{\frac{(\beta+r_i)k_i^2}{-r_i}}\right) \end{align*} then $\lambda_{i}$ from \eqref{eqn:eigMap} have real part less than $0$. \end{enumerate} \end{lemma} \begin{proof} Found in the Appendix. \end{proof} \begin{remark} Note that if the eigenvalues of $A$ fall only in cases (i) and (ii) then $F$ is monotone. Additionally, the conditions $\alpha, \beta > 0$ are the same conditions as for the nonlinear case, Theorem \ref{thm:EIPconvergence}. If $A$ has eigenvalues in case (iii) then $F$ is hypomonotone. If the eigenvalues of $A$ are $-r\pm \mathfrak{j}k$, then $F$ is $r$-hypomonotone and $R=\frac{1}{\sqrt{r^{2}+k^{2}}}$-inverse Lipschitz. From Lemma \ref{thm:stableA}, $\beta \in (\mu, \frac{1}{\mu R^{2}})$ is the same condition on $\beta$ as in Lemma \ref{lemma:full_hypo} for the nonlinear case. \end{remark} \begin{thm} \label{thm:fullQuad} Consider a quadratic game $\mathcal{G}(\mathcal{N},J_i,\Omega_i)$ under Assumption \ref{asmp:Jsmooth}. Let the overall dynamics of the agents be given by \eqref{eqn:proposedAlg}. For the matrix $A$ given in \eqref{eqn:quad_pseudo} with eigenvalues $\rho_{i} = r_{i} + \mathfrak{j}k_{i}$, let $\mathcal{I}=\setc{i\in\set{1,\dots,n}}{r_{i}<0}$. If $\mathcal{I}=\emptyset$ then set $\alpha, \beta > 0$; else, \begin{align} \label{eqn:alpha_beta_conditions} \begin{split} \beta &\in \bigcap_{i\in \mathcal{I}}\left(-r_i,\frac{k_i^2 + r_i^2}{-r_i}\right) \\ \alpha &\in \bigcap_{i\in \mathcal{I}}\left(0,-\bracket{\beta+r_i} + \sqrt{\frac{(\beta+r_i)k_i^2}{-r_i}}\right) \end{split} \end{align} Then, the set $\setc{(x^*,x^{*})}{F(x^*) = 0}$ is globally asymptotically stable.
\end{thm} \begin{conj} For the class of quadratic games where $F$ is $R$-inverse Lipschitz (in the perfect information setting), the optimal convergence rate is $\exp(-\frac{t}{3R})$, attained when $\alpha = \frac{5}{9R}$ and $\beta = \frac{4}{9R}$. \end{conj} \subsection{Partial Information} In the partial information case the dynamics \eqref{eqn:distProposedAlg} are, \begin{align} \label{eqn:distDynamics} \begin{split} \mathbf{\dot{x}} &= -\mathcal{R}^{T}(\mathbf{A}\mathbf{x}+b) - \beta(\mathbf{x}-\mathbf{r}) - c\mathbf{L}\mathbf{x} \\ \mathbf{\dot{r}} &= \alpha(\mathbf{x}-\mathbf{r}) \end{split} \end{align} Similar to the complete information case, after a change of coordinates, we can prove convergence of \eqref{eqn:distDynamics}. \begin{thm} \label{thm:distQuad} Consider a game $\mathcal{G}(\mathcal{N},J_i,\Omega_i)$ under Assumptions \ref{asmp:Jsmooth}, \ref{asmp:graph}, and \ref{asmp:inverse}. Let the overall dynamics of the agents be given by \eqref{eqn:distProposedAlg}. Let $\alpha, \beta$ be selected as in \eqref{eqn:alpha_beta_conditions}, scaled by $\frac{1}{N}$, and $c$ such that, \begin{align} \label{eqn:GraphCondition} c\lambda_{2}(L) \geq L_{\mathbf{A}} + \bracket{L_{\mathbf{A}}\bracket{\frac{p}{\sqrt{N}} + \frac{1}{2}}}^{2} \end{align} where $L_{\mathbf{A}} = \norm{\mathbf{A}}$ and $p = \norm{P}$, where $P \succ \mathbf{0}$ satisfies the Lyapunov equation $P\tilde{M} + \tilde{M}^{T}P = -I$ and \begin{align*} \tilde{M} &= \begin{bmatrix} -\frac{1}{N}A - \beta I & \beta I \\ \alpha I & -\alpha I \end{bmatrix} \end{align*} Then, the set $\setc{(\mathbf{1}\otimes x^*, \mathbf{1}\otimes x^{*})}{F(x^*) = 0}$ is globally asymptotically stable. \end{thm} \begin{proof} Found in the Appendix.
\end{proof} \begin{remark} Note that Theorem \ref{thm:distQuad} requires the norm $p=\norm{P}$. If we restrict $\beta = \alpha$ and use Corollary 1 of \cite{lyapBound} and Corollary 2.10 of \cite{blockNorm}, we can obtain the simpler bound \begin{align} \label{eqn:lyapBound} p \leq \frac{N}{2L_{A} + 4\alpha N} \end{align} \end{remark} \subsection{Comparing Results For Quadratic vs General Games} For perfect information quadratic games with monotone pseudo-gradient, notice that Theorem \ref{thm:fullQuad} requires that $\alpha, \beta > 0$, and the rate of convergence can be determined by Lemma \ref{lemma:eigRelation}. For perfect information general games with monotone pseudo-gradient, Theorem \ref{thm:EIPconvergence} also requires $\alpha, \beta > 0$, but provides no rate of convergence. For partial information quadratic games with monotone pseudo-gradient, Theorem \ref{thm:distQuad} again requires that $\alpha, \beta > 0$. Additionally, the theorem requires that $c$ be larger than a function of the Lipschitz constant of the pseudo-gradient. For partial information general games with monotone pseudo-gradient, Theorem \ref{thm:distMono} allows $\beta > 0$, but $\alpha$ is now restricted by a function of $\beta$. Additionally, the bound on $c$ is larger than the one obtained for quadratic games. For perfect information quadratic games with hypomonotone pseudo-gradient, Lemma \ref{thm:stableA} and Theorem \ref{thm:fullQuad} provide tight conditions on the range of values of $\alpha$ and $\beta$ for convergence to a NE. Note that for quadratic games, we are able to use the same method of analyzing the eigenvalues for both monotone and hypomonotone games. On the other hand, for general games, the EIP analysis cannot be extended to the hypomonotone case and a different method is used to prove convergence. The analysis ends up having restrictions on $\alpha$ that do not appear in the quadratic case.
The quadratic game case suggests that there might be a better Lyapunov function that could remove or relax the condition on $\alpha$ for general games. \section{Simulations} \label{sec:sim} In this section we first consider three hypomonotone quadratic games between $N=10$ agents communicating over a ring graph $G_c$. We index the games by $\mathcal{G}_{1}, \mathcal{G}_{2}, \mathcal{G}_{3}$. In game $\mathcal{G}_{j}$, the cost function for agent $i$ is $J_{i}(x) = w_{i}^{\mathcal{G}_{j}} x_{i}^{T}\begin{bmatrix} 5 & 1 \\ -1 & 5 \end{bmatrix} x_{N+1-i}$ where $w^{\mathcal{G}_{j}} = [w_{1}^{\mathcal{G}_{j}},\dots, w_{N}^{\mathcal{G}_{j}}]$ is equal to \begin{align*} w^{\mathcal{G}_{1}} &= \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 \end{bmatrix} \\ w^{\mathcal{G}_{2}} &= \begin{bmatrix} \frac{-9}{9} & \frac{-7}{9} & \frac{-5}{9} & \frac{-3}{9} & \frac{-1}{9} &\frac{1}{9} & \frac{3}{9} & \frac{5}{9} & \frac{7}{9} & \frac{9}{9} \end{bmatrix} \\ w^{\mathcal{G}_{3}} &= \begin{bmatrix} -2 & -1 & -1 & -1 & -1 & 1 & 1 & 1 & 1 & 2 \end{bmatrix} \end{align*} For game $\mathcal{G}_{1}$ the eigenvalues of $A$ from \eqref{eqn:quad_pseudo} are $1\pm \mathfrak{j}5$; for $\mathcal{G}_{2}$ the eigenvalues are $\pm 1\pm \mathfrak{j}5$, $\pm \frac{7}{9}\pm \mathfrak{j}\frac{35}{9}$, $\pm \frac{5}{9}\pm \mathfrak{j}\frac{25}{9}$, $\pm \frac{3}{9}\pm \mathfrak{j}\frac{15}{9}$, and $\pm \frac{1}{9}\pm \mathfrak{j}\frac{5}{9}$; and for $\mathcal{G}_{3}$ the eigenvalues are $\pm 2\pm \mathfrak{j}10$ and $\pm 1\pm \mathfrak{j}5$. For all three games the Nash equilibrium is the origin. The following table lists the parameter values as in Theorems \ref{thm:distMono} and \ref{thm:distQuad}. For game $\mathcal{G}_{2}$ the conditions of Lemma \ref{lemma:resInvLip} are not satisfied, hence the corresponding column is empty. The $\beta$ values are selected as $0.9 \beta_{min} + 0.1 \beta_{max}$ and the $\alpha$ values are selected as $0.5 \alpha_{min} + 0.5 \alpha_{max}$.
\begin{center} \begin{tabular}{ |c|c c|c|c c| } \hline & \multicolumn{2}{|c|}{$\mathcal{G}_{1}$} & $\mathcal{G}_{2}$ & \multicolumn{2}{|c|}{$\mathcal{G}_{3}$} \\ \hline param. & Thm \ref{thm:distQuad} & Thm \ref{thm:distMono} & Thm \ref{thm:distQuad} & Thm \ref{thm:distQuad} & Thm \ref{thm:distMono} \\ \hline $\beta_{min}$ & 0.1 & 0.1 & 0.1 & 0.2 & 0.2\\ $\beta_{max}$ & 2.6 & 2.6 & $\frac{13}{45}$ & 2.6 & 1.3 \\ $\beta$ & 0.35 & 0.35 & $\frac{107}{900}$ & 0.44 & 0.31\\ \hline $d$ & & 0.5 & & & 0.5\\ \hline $\alpha_{min}$ & 0 & 0 & 0 & 0 & 0\\ $\alpha_{max}$ & 0.540 & 0.145 & 0.065 & 0.581 & 0.064\\ $\alpha$ & 0.270 & 0.072 & 0.032 & 0.290 & 0.032\\ \hline $c_{min}$ & 1517 & 1668 & $2.22\times 10^6$ & 7739 & 15057 \\ \hline \end{tabular} \end{center} Figure \ref{fig:actions} shows the action trajectories for game $\mathcal{G}_{1}$ under \eqref{eqn:proposedAlg} for the parameters $\alpha, \beta, c$ satisfying Theorem \ref{thm:distQuad}, where the initial conditions $x(0)$, $r(0)$ are randomly selected with components between $-10$ and $10$. Notice in $\mathcal{G}_{1}$ that the $\beta$ used is the same for Theorem \ref{thm:distMono} and Theorem \ref{thm:distQuad}. However, Theorem \ref{thm:distMono} gives a conservative value for $\alpha$, an order of magnitude smaller than the one from Theorem \ref{thm:distQuad}. \begin{figure} \caption{Evolution of agents' actions} \label{fig:actions} \end{figure} The figures for the other examples are similar and are omitted. \subsection{Nonquadratic Example} The following example is a nonquadratic game where Theorem \ref{thm:distQuad} no longer applies. Consider a hypomonotone game between $N=10$ agents communicating over a ring graph $G_c$. The cost function for agent $i$ is $J_{i}(x) = w_{i}^{1} x_{i}^{T}\begin{bmatrix} 5 & 0 \\ 0 & 5 \end{bmatrix} x_{N+1-i} + w_{i}^{1}x_{i}^{T}\begin{bmatrix} \sin(x_{N+1-i,2}) \\ -\sin(x_{N+1-i,1})\end{bmatrix}$ where $x_{i,j}$ is the $j$th component of the vector $x_{i}$.
For this game the pseudo-gradient is $1$-hypomonotone, $\frac{1}{4}$-inverse Lipschitz, and $6$-Lipschitz. Using Theorem \ref{thm:distMono}, $\beta_{min}=0.1$, $\beta_{max}=1.6$, and we selected $\beta = 0.9 \beta_{min} + 0.1 \beta_{max} = 0.25$. Using $d=0.5$ we obtain $\alpha_{min} = 0$, $\alpha_{max} = 0.095$ and $\alpha = 0.5 \alpha_{min} + 0.5 \alpha_{max} = 0.0478$. Lastly, for a ring communication graph we obtain $c_{min} = 3417$. Figure \ref{fig:actions_nonlin} shows the action trajectories and the convergence to the NE. \begin{figure} \caption{Evolution of agents' actions} \label{fig:actions_nonlin} \end{figure} \section{Conclusion} \label{sec:conclusion} In this paper, we considered monotone games and proposed a continuous-time dynamics constructed via a passivity-based modification of a gradient-play scheme. We showed that in the full-decision information setting it converges to a Nash equilibrium in merely monotone games, for any positive parameter values. Under additional assumptions we provided extensions to the partial-decision information case and to hypomonotone games. Among interesting future problems we mention extensions to directed communication graphs, possibly with adaptive gains, as well as to generalized Nash equilibrium problems. \section*{Appendix A} \label{sec:AppendixA} \subsection*{Proof of Proposition \ref{lemma:resInvLip} } (i): Notice that, \begin{align*} \inp*{x-y}{(I+\lambda T)x - (I+\lambda T)y} &\geq \bracket{1-\lambda \mu}\snorm{x-y} \end{align*} by assumption $1> \lambda\mu$, therefore $(I+\lambda T)$ is strongly monotone. From Proposition 20.10 of \cite{monoBookv2} the inverse of a monotone operator is monotone and therefore $\mathcal{J}_{\lambda T}$ is monotone. Additionally, since $(I+\lambda T)$ is strongly monotone, $\mathcal{J}_{\lambda T}$ is a single-valued function. (ii): Let $u = \mathcal{J}_{\lambda T}x$ and $v = \mathcal{J}_{\lambda T}y$, i.e., $x = (I+\lambda T)u$ and $y = (I+\lambda T)v$.
Then, \begin{align*} &\snorm{x-y} = \snorm{(I+\lambda T)u - (I+\lambda T)v} \\ &\quad = \snorm{u-v} + \lambda^{2}\snorm{Tu-Tv} + 2\lambda\inp*{u-v}{Tu-Tv} \\ &\quad \geq \bracket{1 + \frac{\lambda^{2}}{R^2}-2\lambda \mu}\snorm{u-v} \\ &\quad = \bracket{\frac{R^2 + \lambda^{2}-2\lambda\mu R^{2}}{R^2}}\snorm{u-v} \end{align*} By assumption $\mu R < 1$, which implies that $R^2 + \lambda^{2}-2\lambda\mu R^{2} > R^2 + \lambda^{2}-2\lambda R = (\lambda - R)^{2} \geq 0$, with equality on the right only when $\lambda = R$. If $\lambda = R$ then $R^2 + \lambda^{2}-2\lambda\mu R^{2} = 2R^{2}\bracket{1-\lambda \mu} > 0$ by the assumption $\lambda \mu < 1$. Therefore the quantity $R^2 + \lambda^{2}-2\lambda\mu R^{2}$ is always positive and \begin{align*} \bracket{\frac{R^2}{R^2 + \lambda^{2} - 2\lambda \mu R^2}} \snorm{x-y} &\geq \snorm{\mathcal{J}_{\lambda T}x - \mathcal{J}_{\lambda T}y} \end{align*} (iii): Assume that $R\geq \lambda$ and let $u = \mathcal{J}_{\lambda T}x$, $v = \mathcal{J}_{\lambda T}y$, and $c = \frac{R^{2}(1-\mu\lambda)}{R^{2} + \lambda^{2}-2\mu\lambda R^{2}}$. Then, \begin{align} &c\snorm{x-y} = c\snorm{(I+\lambda T)u - (I+\lambda T)v} \notag \\ &= c\snorm{u-v} + c\lambda^{2}\snorm{Tu - Tv} \notag + 2c\lambda\inp*{Tu - Tv}{u - v} \notag \\ &\quad - \inp*{u-v}{(1+\lambda T)u-(1+\lambda T)v} \notag \\ &\quad + \inp*{u-v}{(1+\lambda T)u-(1+\lambda T)v} \notag \\ &= (c-1)\snorm{u-v} + c\lambda^{2}\snorm{Tu - Tv} \notag \\ &\quad + \lambda (2c-1)\inp*{Tu - Tv}{u - v} \label{eqn:intStep} \\ &\quad + \inp*{u-v}{(1+\lambda T)u-(1+\lambda T)v} \notag \end{align} Note that $2c-1 \geq 0$ for all $R\geq \lambda$ and $1> \mu\lambda$; using the $\mu$-hypomonotonicity and the $R$-inverse Lipschitz continuity of $T$, it follows that \begin{align*} &c\snorm{x-y} \geq c\bracket{\frac{R^2+\lambda^2-2\mu\lambda R^{2}}{R^2}}\snorm{u-v} \\ &\quad - (1-\mu\lambda)\snorm{u-v} + \inp*{u-v}{(1+\lambda T)u-(1+\lambda T)v} \\ &= \inp*{u-v}{(1+\lambda T)u-(1+\lambda T)v} \\ &= \inp*{\mathcal{J}_{\lambda T}x-\mathcal{J}_{\lambda T}y}{x-y} \end{align*} Now assume that $\lambda \geq R$ and $c = \frac{R^{2}(1+ \frac{\lambda}{R})}{R^{2} + \lambda^{2} + 2\lambda R}$ then
$2c-1 \leq 0$. Continuing from \eqref{eqn:intStep} and using Young's inequality $-\inp*{a}{b} \geq -\frac{1}{4s}\snorm{a}-s\snorm{b}$ (here with $s = \frac{R}{2}$) yields \begin{align*} &c\snorm{x-y} \geq (c-1)\snorm{u-v} + c\lambda^{2}\snorm{Tu - Tv} \\ &\qquad + \lambda (2c-1)\bracket{\frac{R}{2}\snorm{Tu-Tv} + \frac{1}{2R}\snorm{u-v}} \\ &\qquad + \inp*{u-v}{(1+\lambda T)u-(1+\lambda T)v} \\ &\quad = c\bracket{\frac{R^2 + \lambda^2 + 2\lambda R}{R^2}}\snorm{u-v} -\bracket{1+\frac{\lambda}{R}}\snorm{u-v} \\ &\qquad + \inp*{u-v}{(1+\lambda T)u-(1+\lambda T)v} \\ &\quad = \inp*{\mathcal{J}_{\lambda T}x-\mathcal{J}_{\lambda T}y}{x-y} \end{align*} \subsection*{Proof of Proposition \ref{lemma:funcType} } (i) This follows from \cite{FP07}, Prop.~2.3.2(c). (ii) This follows from \cite{FP07}, Prop.~2.9.25(a). (iii) \begin{align*} \snorm{Tx-Ty} &= \snorm{\bracket{\int_{0}^{1}JT(x + t(y-x)) \,dt}(y-x)} \\ &\leq \max_{z} \snorm{JT(z)}\snorm{x-y}\\ &\leq L^{2} \snorm{x-y} \end{align*} (iv) Note that, \begin{align*} \snorm{Tx-Ty} &= \snorm{\bracket{\int_{0}^{1}JT(x + t(y-x)) \,dt}(y-x)} \\ &\geq \min_{z} \snorm{\bracket{\int_{0}^{1}JT(z) \,dt}(y-x)} \\ &= \min_{z} (y-x)^{T}\bracket{JT^{T}(z)JT(z)}(y-x) \\ &\geq \frac{1}{R^{2}} \snorm{y-x} \end{align*} \subsection*{Proof of Lemma \ref{lemma:consTTS} } First, note that $\mathbf{F}(\mathbf{\bar{r}}^{||}) =F(x^*)= \mathbf{0}_{n}$ for $\mathbf{\bar{r}}^{||} = \mathbf{1}_{N}\otimes x^{*}$. Therefore, since $\mathbf{h}$ \eqref{h_def} is the resolvent of $\mathbf{1}_{N}\otimes \mathbf{F}$ on the consensus subspace, and zeros of $\mathbf{1}_{N}\otimes \mathbf{F}$ are fixed points of the resolvent (cf. Prop. 23.2, \cite{monoBookv2}), it follows that $\mathbf{\bar{r}}^{||} = \mathbf{h}(\mathbf{\bar{r}}^{||})$.
Using this, along \eqref{eqn:consDynFI}, we can write \begin{align}\label{r_term_0} \hspace{-0.3cm} \inp*{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}}{\mathbf{\dot{r}}^{||}} & = \alpha\inp*{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}}{\mathbf{z}^{||}} - \alpha\snorm{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}} \\ & + \alpha\inp*{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}}{\mathbf{h}(\mathbf{r}^{||}) - \mathbf{h}(\mathbf{\bar{r}}^{||})}. \notag \end{align} To bound the last term we use Lemma \ref{lemma:resInvLip} as follows. For any $r\in \mathbb{R}^n$ let $h(r):=\mathcal{J}_{\frac{1}{\beta N}F}(r)$ denote the resolvent of $F$. Using $F(h(r)) =\mathbf{F}(\mathbf{1}_{N}\otimes h(r))$, we can write $\mathbf{1}_{N}\otimes\big(\text{Id}+ \frac{1}{\beta N}F\big)h(r)= (\text{Id} + \frac{1}{\beta N}\mathbf{1}_{N}\otimes\mathbf{F} )(\mathbf{1}_{N}\otimes h(r))$. Using $\big (\text{Id}+ \frac{1}{\beta N}F \big)h(r)=r$ and \eqref{h_def}, this is equivalent to $\mathbf{h}(\mathbf{1}_{N}\otimes r) = \mathbf{1}_{N}\otimes h(r)$. As $h$ is the resolvent of $F$, under Assumptions \ref{asmp:monotone} and \ref{asmp:inverse}, we apply Lemma \ref{lemma:resInvLip} to $F$ with $\lambda = \frac{1}{\beta N}$. Therefore, since $\mathbf{h}(\mathbf{r}^{||}) =\mathbf{1}_{N}\otimes h(r)$ for any $\mathbf{r}^{||} = \mathbf{1}_{N}\otimes r$, the bounds from Lemma \ref{lemma:resInvLip} (ii) and (iii) hold for $h$, and it follows that the same bounds hold for $\mathbf{h}$.
Using Lemma \ref{lemma:resInvLip} (iii) in the last term of \eqref{r_term_0} yields, \begin{align}\label{r_term} \begin{split} \inp*{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}}{\mathbf{\dot{r}}^{||}} &\leq \alpha \norm{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}}\norm{\mathbf{z}^{||}} \\ &\qquad - \alpha \bracket{1-{\kappa_{\mathcal{J}}}} \snorm{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}} \end{split} \end{align} Similarly, using \eqref{eqn:consDynFI}, we can write, \begin{align*} \inp*{\mathbf{z}^{||}}{\mathbf{\dot{z}}^{||}} &= -\frac{1}{N}\inp*{\mathbf{z}^{||}}{\mathbf{1}_N\otimes\mathbf{F}(\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}))} \\ &\qquad - \beta \inp*{\mathbf{z}^{||}}{\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}) - \mathbf{r}^{||}} \\ &\qquad - \alpha\inp*{\mathbf{z}^{||}}{\left[\frac{\partial \mathbf{h}}{\partial \mathbf{r}}\right](\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}) - \mathbf{r}^{||})} \end{align*} Substituting $\mathbf{r}^{||} = (\text{Id} + \mathbf{1}_{N}\otimes \frac{1}{\beta N}\mathbf{F})\mathbf{h}(\mathbf{r}^{||})$ (cf. \eqref{h_def}) in the middle term and combining terms yields, \begin{align*} &\inp*{\mathbf{z}^{||}}{\mathbf{\dot{z}}^{||}} \\ & = -\frac{1}{N}\inp*{\mathbf{z}^{||}}{\mathbf{1}_{N}\otimes\mathbf{F}(\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||})) - \mathbf{1}_{N}\otimes\mathbf{F}(\mathbf{h}(\mathbf{r}^{||}))} \\ &\qquad - \beta\snorm{\mathbf{z}^{||}} - \alpha \inp*{\mathbf{z}^{||}}{\left[\frac{\partial \mathbf{h}}{\partial \mathbf{r}}\right](\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||}) - \mathbf{r}^{||})} \end{align*} Since $\mathbf{z}^{||}$, $\mathbf{z}^{||} + \mathbf{h}(\mathbf{r}^{||})$ and $\mathbf{h}(\mathbf{r}^{||})$ are on the consensus subspace, $\mathbf{1}_{N}\otimes \mathbf{F}$ evaluates there to just $\mathbf{1}_{N}\otimes F$, which is $\mu$-hypomonotone by Assumption \ref{asmp:inverse}; hence the first term is bounded above by $\frac{\mu}{N}\snorm{\mathbf{z}^{||}}$.
Adding and subtracting $\mathbf{\bar{r}}^{||} = \mathbf{h}(\mathbf{\bar{r}}^{||})$ in the last term, we can then write \begin{align*} \inp*{\mathbf{z}^{||}}{\mathbf{\dot{z}}^{||}} &\leq -\bracket{\beta - \frac{\mu}{N}}\snorm{\mathbf{z}^{||}} - \alpha\inp*{\mathbf{z}^{||}}{\left[\frac{\partial \mathbf{h}}{\partial \mathbf{r}}\right ]\mathbf{z}^{||}} \\ &\quad + \alpha \norm{\mathbf{z}^{||}}\norm{\frac{\partial \mathbf{h}}{\partial \mathbf{r}}}\norm{\mathbf{h}(\mathbf{r}^{||}) - \mathbf{h}(\mathbf{\bar{r}}^{||})} \\ &\quad + \alpha\norm{\mathbf{z}^{||}}\norm{\frac{\partial \mathbf{h}}{\partial \mathbf{r}}}\norm{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}}. \end{align*} The second term is non-positive since $\mathbf{h}$ is monotone by Lemma \ref{lemma:resInvLip} (i) and $\frac{\partial \mathbf{h}}{\partial \mathbf{r}}$ is positive semidefinite (cf. Proposition 2.3.2 \cite{FP07}). Using $\norm{\mathbf{h}(\mathbf{r}^{||}) - \mathbf{h}(\mathbf{\bar{r}}^{||})} \leq L_{\mathcal{J}} \norm{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}}$ and $\norm{\frac{\partial \mathbf{h}}{\partial \mathbf{r}}} \leq L_{\mathcal{J}}$ from Lemma \ref{lemma:resInvLip}(ii), yields, \begin{align} \begin{split} \inp*{\mathbf{z}^{||}}{\mathbf{\dot{z}}^{||}} &\leq - \bracket{\beta-\frac{\mu}{N}}\snorm{\mathbf{z}^{||}} \\ &\quad + \alpha\bracket{L_{\mathcal{J}} + L_{\mathcal{J}}^{2}}\norm{\mathbf{z}^{||}}\norm{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}} \end{split}\label{z_term} \end{align} Finally, for $V^{||}$ as in the lemma, using the bounds in \eqref{r_term}, \eqref{z_term}, along the solution of \eqref{eqn:consDynFI}, we can write $\dot{V}^{||}(\mathbf{z}^{||}, \mathbf{r}^{||}) \leq -\omega^{T}\Phi \omega$, where $\Phi $ is as in \eqref{eqn:lyapNSD} and $\omega = \bracket{\norm{\mathbf{r}^{||} - \mathbf{\bar{r}}^{||}}, \norm{\mathbf{z}^{||}}}$. It can be easily seen that for any given $d\in(0,1)$ and $\alpha$ as in the lemma, $\Phi$ is positive definite. 
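The resolvent bounds of Lemma \ref{lemma:resInvLip} invoked in the proof above can be illustrated on a simple linear example. The following sketch is our own verification, not part of the paper: for $T(x)=Mx$ with $M=\begin{bmatrix}-\mu & k\\ -k & -\mu\end{bmatrix}$, $T$ is $\mu$-hypomonotone and $R$-inverse Lipschitz with $R=1/\sqrt{\mu^{2}+k^{2}}$, and the bound of part (ii) can be compared against the exact operator norm of the resolvent.

```python
import numpy as np

# Our own numerical sketch of Lemma "resInvLip" (ii) on a linear example.
# T(x) = M x with M = [[-mu, k], [-k, -mu]] is mu-hypomonotone and
# R-inverse Lipschitz with R = 1/sqrt(mu^2 + k^2). The resolvent is
# J = (I + lam*M)^{-1}; its operator norm should not exceed the lemma's bound
# sqrt(R^2 / (R^2 + lam^2 - 2*lam*mu*R^2)).
mu, k, lam = 0.5, 2.0, 0.5
R = 1.0 / np.hypot(mu, k)
assert lam * mu < 1 and mu * R < 1            # hypotheses of the lemma

M = np.array([[-mu, k], [-k, -mu]])
J = np.linalg.inv(np.eye(2) + lam * M)
lip_actual = np.linalg.norm(J, 2)             # exact Lipschitz constant of J
lip_bound = np.sqrt(R**2 / (R**2 + lam**2 - 2 * lam * mu * R**2))
assert lip_actual <= lip_bound + 1e-12        # bound holds (tight for this normal M)
```

For this normal matrix the bound is attained with equality, which suggests the constant in part (ii) cannot be improved in general.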
\subsection*{Proof of Lemma \ref{lemma:full_hypo} } Note that if we start with \eqref{eqn:proposedAlg} and perform the change of coordinates $z = x-\mathcal{J}_{\frac{1}{\beta}F}r$, we obtain \eqref{eqn:consDynFI} but with $\mathbf{r}^{||}$ replaced by $r$, $\mathbf{x}^{||}$ replaced by $x$, $\mathbf{z}^{||}$ by $z = x-\mathcal{J}_{\frac{1}{\beta}F}r$, and $\frac{1}{N}\mathbf{1}_{N}\otimes \mathbf{F}$ replaced by $F$. Therefore, following the same argument as in Lemma \ref{lemma:consTTS}, we can construct a Lyapunov function showing that $x^{*}$ is asymptotically stable. \subsection*{Proof of Lemma \ref{lemma:eigRelation} } Let $v_{i} = (x_{i},y_{i})$ be the $i^{th}$ eigenvector of $M$ \eqref{eqn:DynLT} then, \begin{align*} \begin{bmatrix} -A-\beta I & \beta I \\ \alpha I & -\alpha I \end{bmatrix}\begin{bmatrix} x_{i} \\ y_{i} \end{bmatrix} &= \lambda_{i} \begin{bmatrix} x_{i} \\ y_{i} \end{bmatrix} \end{align*} The second row implies that $x_{i} = \frac{\alpha + \lambda_{i}}{\alpha}y_{i}$. Substituting this into the first row yields \begin{align*} Ay_{i} &= -\frac{\lambda_{i}(\alpha + \beta + \lambda_{i})}{\alpha + \lambda_{i}}y_{i} \end{align*} This equation can only hold if $y_{i}$ is an eigenvector of $A$, with $\rho_{i}$ the corresponding eigenvalue. Therefore, \begin{align*} \rho_{i} &= -\frac{\lambda_{i}(\alpha + \beta + \lambda_{i})}{\alpha + \lambda_{i}} \end{align*} and solving for the roots of the quadratic in $\lambda_{i}$ gives \eqref{eqn:eigMap}. \subsection*{Proof of Lemma \ref{thm:stableA}} (i) From Lemma \ref{lemma:eigRelation} we see that the characteristic polynomial is $\mathcal{C}_{i} \defeq \lambda_{i}^{2} + (\alpha + \beta + \rho_{i})\lambda_{i} + \alpha\rho_{i}$, and when $\rho_{i} = 0$ the roots are $0$ and $-(\alpha+\beta)$, so we immediately get our result. (ii) We need to show that the roots of $\mathcal{C}_{i}$ have real part less than $0$.
From \cite{GenRouth} we know that the roots of a complex-coefficient polynomial are in the left half plane if the roots of \begin{align*} \mathcal{C}_{i}^{*}\mathcal{C}_{i} &= \lambda_{i}^{4} + 2(\alpha + \beta + r_{i})\lambda_{i}^{3} \\ &\quad + \bracket{(\alpha+\beta +r_{i})^{2} + k_{i}^{2} + 2\alpha r_{i}}\lambda_{i}^{2} \\ &\quad + 2\alpha\bracket{r_{i}(\alpha+\beta+r_{i})+k_{i}^2}\lambda_{i} + \alpha^{2}\bracket{r_{i}^{2}+k_{i}^{2}} \end{align*} are in the left half plane, where $\rho_{i} = r_{i} + \mathfrak{j}k_{i}$. The Routh array for $\mathcal{C}_{i}^{*}\mathcal{C}_{i}$ is, \begin{equation*} \begin{array}{c|c|c} 1 & \bracket{(\alpha+\beta +r_{i})^{2} + k_{i}^{2} + 2\alpha r_{i}} & \alpha^{2}\bracket{r_{i}^{2}+k_{i}^{2}} \\ \hline 2(\alpha + \beta + r_{i}) & 2\bracket{\alpha r_{i}(\alpha+\beta+r_{i})+\alpha k_{i}^2} & 0\\ \hline T_1 & \alpha^{2}\bracket{r_{i}^{2}+k_{i}^{2}} & 0 \\ \hline T_2 & 0 & 0 \\ \hline \alpha^{2}\bracket{r_{i}^{2}+k_{i}^{2}} & 0 & 0 \end{array} \end{equation*} where \begin{align*} T_1 &= \left[ \underbrace{(\alpha +\beta + r_{i})^2}_{>0} + \underbrace{\alpha r_{i}}_{\geq 0} + \underbrace{\frac{\beta + r_{i}}{\alpha+\beta + r_{i}}}_{>0}\underbrace{k_{i}^2}_{\geq 0}\right] > 0 \\ T_2 &= \frac{2\alpha}{T_1}\left[ \underbrace{r_{i}(\alpha+\beta + r_{i})^3}_{\geq 0} + \underbrace{\frac{\beta + r_{i}}{\alpha + \beta + r_{i}}k_{i}^4}_{\geq 0} \right. \\ &\qquad \qquad \qquad \left. + \underbrace{(\alpha+\beta+r_{i})}_{\geq 0}\underbrace{(\beta+2r_{i})}_{\geq 0} \underbrace{k_{i}^2}_{\geq 0} \right] > 0 \end{align*} If all the elements in the first column of the Routh array are positive, then the roots of $\mathcal{C}_{i}$ have real part less than $0$. The terms $2(\alpha + \beta + r_{i})$, $\alpha^{2}\bracket{r_{i}^2+k_{i}^2}$, and $T_1$ are positive. Either $r_{i}\neq 0$ or $k_{i}\neq 0$, therefore one of the terms in $T_{2}$ will be strictly positive, making $T_{2} > 0$. Therefore, $\lambda_{i}$ has real part less than $0$.
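Part (ii) admits a quick numerical spot-check. The following sketch is our own verification, not the authors' code; it samples $\alpha,\beta>0$ and eigenvalues $\rho_i=r_i+\mathfrak{j}k_i$ with $r_i\geq 0$ and checks that the roots of $\mathcal{C}_{i}$ stay in the open left half plane.

```python
import numpy as np

# Our own numerical spot-check of part (ii): for rho = r + jk with r >= 0 and
# (r, k) != (0, 0), the roots of
#   C_i(lambda) = lambda^2 + (alpha + beta + rho) * lambda + alpha * rho
# should lie in the open left half plane for any alpha, beta > 0.
rng = np.random.default_rng(1)
worst = -np.inf
for _ in range(5000):
    alpha, beta = rng.uniform(1e-3, 10.0, size=2)
    r, k = rng.uniform(0.0, 10.0), rng.uniform(-10.0, 10.0)
    if r == 0.0 and k == 0.0:
        continue  # excluded case rho = 0, handled separately in part (i)
    rho = complex(r, k)
    roots = np.roots([1.0, alpha + beta + rho, alpha * rho])
    worst = max(worst, roots.real.max())
assert worst < 0.0  # every sampled closed-loop mode is in the left half plane
```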
(iii) The term $\alpha^{2}\bracket{r_{i}^2+k_{i}^2}$ is always positive. By assumption, $\alpha > 0$ and $\beta + r_{i} > 0$, therefore the term $2(\alpha+\beta+r_{i})$ is positive. For the $T_{1}$ term, let $\epsilon = \beta + r_{i}>0$ then, \begin{align*} T_1 & = \left[(\epsilon+\alpha)^2 + \alpha r_{i} + \frac{\epsilon}{\epsilon+\alpha}k_{i}^2\right] \end{align*} Multiplying $T_{1}$ by $\epsilon + \alpha > 0$ gives the condition, \begin{align*} 0 &< \left[(\epsilon+\alpha)^3 + \alpha(\epsilon+\alpha)r_{i} + \epsilon k_{i}^2\right] \\ 0 &< (\epsilon+\alpha)^3 + [\alpha + \epsilon - \epsilon](\epsilon+\alpha)r_{i} + \epsilon k_{i}^2 \\ 0 &< (\epsilon+\alpha)^3 - \epsilon (\epsilon + \alpha)r_{i} + \epsilon k_{i}^2 + (\epsilon + \alpha)^2r_{i} \end{align*} From the upper bound assumption on $\alpha$ we see that $\alpha$ satisfies $\alpha < -\epsilon + \sqrt{\frac{-\epsilon k_{i}^2}{r_{i}}}$, which implies $\epsilon k_{i}^2 + (\alpha+\epsilon)^2r_{i} > 0$. Therefore the condition is always satisfied. Note that as $\alpha \to 0$, the condition becomes the assumed upper bound on $\beta$. For the $T_{2}$ term, \begin{align*} T_2 &= \frac{2\alpha}{T_1}\bracket{ r_{i}(\alpha+\epsilon)^3 + \frac{\epsilon}{\alpha + \epsilon}k_{i}^4 + (\alpha+\epsilon)(\epsilon +r_{i}) k_{i}^2 } \\ &= \frac{2\alpha}{(\epsilon+\alpha)T_1}\bracket{r_{i}(\epsilon+\alpha)^4 + \epsilon k_{i}^4 + (\alpha+\epsilon)^2(\epsilon + r_{i}) k_{i}^2 } \end{align*} Since $\frac{2\alpha}{(\epsilon+\alpha)T_1} > 0$, the condition for $T_{2}>0$ is, \begin{equation*} r_{i}x^2 + (\epsilon +r_{i}) k_{i}^2x + \epsilon k_{i}^4 > 0 \end{equation*} where $x = (\epsilon + \alpha)^2$.
The roots of this equation are, \begin{align*} x &= \frac{1}{2r_{i}}[-(\epsilon + r_{i})k_{i}^2 \pm \sqrt{(\epsilon +r_{i})^2k_{i}^4 - 4\epsilon r_{i}k_{i}^4 }\,] \\ &= \frac{1}{2r_{i}}[-(\epsilon+r_{i})k_{i}^2 \pm (\epsilon - r_{i})k_{i}^2 ] = \frac{-\epsilon k_{i}^2}{r_{i}} \text{ or } -k_{i}^2 \end{align*} Therefore, $x\in (-k_{i}^{2},\frac{-\epsilon k_{i}^2}{r_{i}})$ for $T_{2}>0$, but $x=(\epsilon+\alpha)^{2}$ so $x = (\epsilon+\alpha)^{2}\in (0,\frac{-\epsilon k_{i}^2}{r_{i}})$ which implies, \begin{align*} 0 < \alpha < -\bracket{\beta+r_{i}} + \sqrt{\frac{-(\beta+r_{i})k_{i}^2}{r_{i}}} \end{align*} \subsection*{Proof of Theorem \ref{thm:distQuad} } After performing a change of coordinates as in \eqref{eqn:DynLT} and a decomposition as in the nonlinear case, the dynamics \eqref{eqn:distProposedAlg} can be written as, \begin{align} \mathbf{\dot{x}^{||}} &= \Pi_{||}\bracket{\mathcal{R}^{T}\mathbf{A}\mathbf{x} - \beta(\mathbf{x} - \mathbf{r}) - c\mathbf{L}\mathbf{x}} \label{eqn:x_par}\\ &= \Pi_{||}\mathcal{R}^{T}\mathbf{A}(\mathbf{x}^{||} + \mathbf{x}^{\perp}) - \beta(\mathbf{x}^{||} - \mathbf{r}^{||}) \notag \\ \mathbf{\dot{x}^{\perp}} &= (I-\Pi_{||})\bracket{\mathcal{R}^{T}\mathbf{A}\mathbf{x} - \beta(\mathbf{x} - \mathbf{r}) - c\mathbf{L}\mathbf{x}} \notag \\ &= (I-\Pi_{||})\mathcal{R}^{T}\mathbf{A}(\mathbf{x}^{||} + \mathbf{x}^{\perp}) - \beta(\mathbf{x}^{\perp} - \mathbf{r}^{\perp}) - c\mathbf{L}\mathbf{x}^{\perp} \notag \\ \mathbf{\dot{r}}^{||} &= \alpha (\mathbf{x}^{||} - \mathbf{r}^{||}) \label{eqn:r_par}\\ \mathbf{\dot{r}}^{\perp} &= \alpha (\mathbf{x}^{\perp} - \mathbf{r}^{\perp}) \notag \end{align} Let $\mathbf{w} = \mathbf{w}^{||} + \mathbf{w}^{\perp}$, $\mathbf{w}^{||} = (\mathbf{x}^{||},\mathbf{r}^{||})$, $\mathbf{w}^{\perp} = (\mathbf{x}^{\perp},\mathbf{r}^{\perp})$, and $(\mathbf{w}^{i})^{||} = ((\mathbf{x}^{i})^{||}, (\mathbf{r}^{i})^{||})$. The matrix $\tilde{M}$ has the same structure as $M$ (with some terms scaled).
From Lemma \ref{thm:stableA} we know that $\tilde{M}$, for the $\alpha$ and $\beta$ satisfying the assumptions in the theorem, has all its eigenvalues with real part less than $0$; therefore there exists a $P \succ 0$ satisfying the Lyapunov equation $P\tilde{M} + \tilde{M}^{T}P = -I$. Consider the following Lyapunov function, \begin{align} V(\mathbf{w})\! &= \frac{1}{2}\snorm{\mathbf{x}^{\perp}} \! + \! \frac{\beta}{2\alpha}\snorm{\mathbf{r}^{\perp}} \! + \! \sum_{i\in\mathcal{N}} \snorm{(\mathbf{w}^{i})^{||}-w^{*}}_{P} \label{eqn:cand_lyap} \end{align} For the first two terms in \eqref{eqn:cand_lyap}, \begin{align*} &\frac{d}{d t}\bracket{\frac{1}{2}\snorm{\mathbf{x}^{\perp}} + \frac{\beta}{2\alpha}\snorm{\mathbf{r}^{\perp}}} \\ & = (\mathbf{x}^{\perp})^{T}\bracket{(I-\Pi_{||})\mathcal{R}^{T}\mathbf{A}(\mathbf{x}^{||}+\mathbf{x}^{\perp}) - cL\mathbf{x}^{\perp}} \\ &\qquad - (\mathbf{w}^{\perp})^{T}\begin{bmatrix} \beta I & -\beta I \\ -\beta I & \beta I \end{bmatrix}\mathbf{w}^{\perp} \end{align*} The last term equals $-\beta\snorm{\mathbf{x}^{\perp} - \mathbf{r}^{\perp}} \leq 0$, therefore \begin{align*} &\frac{d}{d t}\bracket{\frac{1}{2}\snorm{\mathbf{x}^{\perp}} + \frac{\beta}{2\alpha}\snorm{\mathbf{r}^{\perp}}} \\ &\leq (\mathbf{x}^{\perp})^{T}(I-\Pi_{||})\mathcal{R}^{T}\mathbf{A}(\mathbf{x}^{||}+\mathbf{x}^{\perp}) - c\lambda_{2}(L)\snorm{\mathbf{x}^{\perp}} \end{align*} Using $\norm{(I-\Pi_{||})\mathcal{R}^{T}\mathbf{A}}= \sqrt{\frac{N-1}{N}}\norm{\mathbf{A}}$, $\norm{\mathbf{A}} \leq L_{\mathbf{A}}$ and $\mathcal{R}^{T}\mathbf{A}\mathbf{x}^{*} = 0$ yields, \begin{align} &\frac{d}{d t}\bracket{\frac{1}{2}\snorm{\mathbf{x}^{\perp}} + \frac{\beta}{2\alpha}\snorm{\mathbf{r}^{\perp}}} \notag \\ &\leq \sqrt{\frac{N-1}{N}}L_{\mathbf{A}}\bracket{ \snorm{\mathbf{x}^{\perp}} + \norm{\mathbf{x}^{\perp}}\norm{\mathbf{x}^{||}-x^{*}}} \notag \\ &\qquad - c\lambda_{2}(L)\snorm{\mathbf{x}^{\perp}} \notag \\ &\leq \sqrt{\frac{N-1}{N}}L_{\mathbf{A}}\bracket{ \snorm{\mathbf{x}^{\perp}} +
\norm{\mathbf{x}^{\perp}}\norm{\mathbf{w}^{||}-w^{*}}} \label{eqn:lyp_bound_1} \\ &\qquad - c\lambda_{2}(L)\snorm{\mathbf{x}^{\perp}} \notag \end{align} Note that $\mathbf{w}^{||}$ can be written as $\mathbf{w}^{||} = 1\otimes w^{||}$ and $(\mathbf{w}^{i})^{||}=(\mathbf{w}^{j})^{||} = w^{||}$. For the third term in \eqref{eqn:cand_lyap}, along the solution of \eqref{eqn:x_par} and \eqref{eqn:r_par}, \begin{align*} &\frac{d}{d t} \sum_{i\in\mathcal{N}} \snorm{(\mathbf{w}^{i})^{||}-w^{*}}_{P} \\ &\quad = \sum_{i\in\mathcal{N}} ((\mathbf{w}^{i})^{||}-w^{*})^{T}(P\tilde{M}+\tilde{M}^{T}P)((\mathbf{w}^{i})^{||}-w^{*}) \\ &\qquad + ((\mathbf{w}^{i})^{||}-w^{*})^{T}(P+P^{T})Q\mathbf{w}^{\perp} \end{align*} where $Q = \begin{bmatrix} \frac{-1}{N}\mathbf{A} & 0 \\ 0 & 0 \end{bmatrix}$. Using the Lyapunov equation satisfied by $P$, \begin{align*} &\quad = -\snorm{\mathbf{w}^{||}-\mathbf{w}^{*}} + \sum_{i\in\mathcal{N}}((\mathbf{w}^{i})^{||}-w^{*})^{T}(P+P^{T})Q\mathbf{w}^{\perp} \\ &\quad = -\snorm{\mathbf{w}^{||}-\mathbf{w}^{*}} + (\mathbf{w}^{||}-w^{*})^{T}[I\otimes (P+P^{T})][\mathbf{1}\otimes Q]\mathbf{w}^{\perp} \end{align*} Using $\norm{P} = p$ and $\norm{\frac{1}{N}\mathbf{1}\otimes \mathbf{A}} \leq \frac{1}{\sqrt{N}}L_{\mathbf{A}}$, \begin{align} \begin{split} &\frac{d}{d t} \sum_{i\in\mathcal{N}} \snorm{(\mathbf{w}^{i})^{||}-w^{*}}_{P} \\ &\quad \leq -\snorm{\mathbf{w}^{||}-\mathbf{w}^{*}} + \frac{2p L_{\mathbf{A}}}{\sqrt{N}}\norm{\mathbf{w}^{||}-\mathbf{w}^{*}}\norm{\mathbf{x}^{\perp}} \end{split} \label{eqn:lyp_bound_2} \end{align} Therefore, from \eqref{eqn:lyp_bound_1} and \eqref{eqn:lyp_bound_2} the Lyapunov function $V$, \eqref{eqn:cand_lyap}, satisfies \begin{align*} \frac{d}{d t}V(\mathbf{w}) &\leq \varpi^{T}\begin{bmatrix} -1 & \frac{L_{\mathbf{A}}}{2\sqrt{N}}\bracket{2p + \sqrt{N-1}} \\ * & -c\lambda_{2}(L) + \sqrt{\frac{N-1}{N}}L_{\mathbf{A}} \end{bmatrix} \varpi \end{align*} where $\varpi = col(\norm{\mathbf{w}^{||}-\mathbf{w}^{*}},\norm{\mathbf{x}^{\perp}})$.
Under \eqref{eqn:GraphCondition} the matrix is negative definite, and invoking LaSalle's Invariance Principle \cite{nonlinear} concludes the proof. \end{document}
\begin{document} \title{Dynamics of shadow system of a singular Gierer-Meinhardt system on an evolving domain} \author{Nikos I. Kavallaris} \address{ Department of Mathematics, University of Chester, Thornton Science Park Pool Lane, Ince, Chester CH2 4NU, UK } \email{n.kavallaris@chester.ac.uk} \author{Raquel Barreira} \address{Barreiro School of Technology of the Polytechnic Institute of Setubal\\ Rua Americo da Silva Marinho-Lavradio, 2839-001 Barreiro, Portugal\\ and CMAFcIO - Center of Mathematics, Fundamental Applications and Operations Research, University of Lisbon, Portugal } \email{raquel.barreira@estbarreiro.ips.pt} \author{Anotida Madzvamuse} \address{School of Mathematical and Physical Sciences\\ Department of Mathematics\\ University of Sussex\\ Falmer, Brighton, BN1 9QH, England, UK} \email{a.madzvamuse@sussex.ac.uk} \subjclass{Primary: 35B44, 35K51 ; Secondary: 35B36, 92Bxx } \keywords{Pattern formation, Turing instability, activator-inhibitor system, shadow-system, invariant regions, diffusion-driven blow-up, evolving domains} \date{\today} \maketitle \begin{abstract} The main purpose of the current paper is to contribute towards the understanding of the dynamics of the shadow system of a singular Gierer-Meinhardt model on an isotropically evolving domain. When the inhibitor's response to the activator's growth is rather weak, the shadow system of the Gierer-Meinhardt model reduces to a single, though non-local, equation whose dynamics are thoroughly investigated throughout the manuscript. The main focus is on the derivation of blow-up results for this non-local equation, which can be interpreted as instability patterns of the shadow system. In particular, a {\it diffusion-driven instability (DDI)}, or {\it Turing instability}, is observed in the neighbourhood of a constant stationary solution, which is then destabilised via diffusion-driven blow-up.
The latter indicates the formation of some unstable patterns, whilst some stability results of global-in-time solutions towards non-constant steady states guarantee the occurrence of some stable patterns. Most of the derived results are confirmed numerically and also compared with the ones in the case of a stationary domain. \end{abstract} \section{Introduction} The purpose of the current work is to study an activator-inhibitor system, introduced by Gierer and Meinhardt in 1972 \cite{gm72} to describe the phenomenon of morphogenesis in hydra, on an evolving domain. Assume that $u(x,t)$ stands for the concentration of the activator, at a spatial point $x\in \Omega_t\subset \mathbb{R}^N, N=1,2,3,$ at time $t\in[0,T], T>0,$ which enhances its own production and that of the inhibitor. On the other hand, let $v(x,t)$ represent the concentration of the inhibitor, which suppresses its own production as well as that of the activator.
Hence, the interaction between $u$ and $v$ can be described by the following non-dimensionalised system \cite{gm72} \begin{eqnarray} &&u_t+\nabla \cdot (\overrightarrow{\alpha} u)= D_1 \Delta u-u+\displaystyle\frac{u^p}{v^q}, \quad x\in \Omega_t,\; t\in (0,T), \label{egm1}\\ &&\tau v_t+\nabla \cdot (\overrightarrow{\alpha} v) = D_2 \Delta v-v+\displaystyle\frac{u^r}{v^s}, \quad \; x\in \Omega_t,\; t\in (0,T), \label{egm2}\\ && \frac{\partial u}{\partial \nu}=\frac{\partial v}{\partial \nu}=0 \quad \; x\in \partial\Omega_t,\; t\in (0,T),\label{egm3}\\ && u(x,0)=u_0(x)>0,\quad v(x,0)=v_0(x)>0, \quad x\in \Omega_0\subset \mathbb{R}^N, \label{egm4} \end{eqnarray} where $\nu$ is the unit normal vector on $\partial \Omega_t$, whereas $\overrightarrow{\alpha}\in \mathbb{R}^N$ stands for the convection velocity which is induced by the material deformation due to the evolution of the domain. Moreover, $D_1,D_2$ are the diffusion coefficients of the activator and the inhibitor, respectively; $\tau$ represents the response of the inhibitor to the activator's growth. Furthermore, the exponents, satisfying the conditions $p>1,\; q, r>0,\;\mbox{and}\; s>-1,$ measure the interactions between morphogens. The dynamics of system \eqref{egm1}-\eqref{egm4} can be characterised by two values: the net self-activation index $\pi=(p-1)/r$ and the net cross-inhibition index $\gamma=q/(s+1).$ Index $\pi$ correlates the strength of self-activation of the activator with the cross-activation of the inhibitor. Thus, if $\pi$ is large, then the net growth of the activator is large no matter the growth of the inhibitor. The parameter $\gamma$ measures how strongly the inhibitor suppresses the production of the activator and that of itself. If $\gamma$ is large then the production of the activator is strongly suppressed by the inhibitor. Finally, the parameter $\tau$ quantifies the inhibitor's response against the activator's growth.
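For concreteness, these indices can be evaluated for the classical Gierer-Meinhardt exponents $(p,q,r,s)=(2,1,2,0)$, a standard choice in the literature rather than one fixed by the present analysis; a minimal sketch:

```python
# A hedged worked example (ours, not from the paper): interaction indices for
# the classical Gierer-Meinhardt exponents (p, q, r, s) = (2, 1, 2, 0).
p, q, r, s = 2, 1, 2, 0
pi_index = (p - 1) / r  # net self-activation index pi = (p - 1)/r
gamma = q / (s + 1)     # net cross-inhibition index gamma = q/(s + 1)
assert pi_index == 0.5 and gamma == 1.0
# The Turing condition p - r*gamma < 1 (introduced below) holds here:
assert p - r * gamma < 1
```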
Guided by biological interpretation as well as by mathematical considerations, we assume that the parameters $p, q, r, s $ satisfy the condition \begin{eqnarray}\label{tc} p-r\gamma<1, \end{eqnarray} which in the literature is known as the {\it Turing condition} since it guarantees the occurrence of Turing patterns for the system \eqref{egm1}-\eqref{egm4} on a stationary domain \cite{nst06}. For analytical purposes, in the current work we will only consider the case of an isotropic flow on an evolving domain, and thus we have for any $x\in \Omega_t$: \begin{eqnarray} x=\rho(t) \xi,\quad\mbox{for}\quad \xi\in \Omega_0\subset \mathbb{R}^N,\label{isf} \end{eqnarray} with $\rho(t)$ being a $C^1$-function with $\rho(0)=1.$ In the case of a growing domain we have $\dot{\rho}(t)=\frac{d \rho}{dt}>0,$ whilst in the case of a shrinking (contracting) domain $\dot{\rho}(t)=\frac{d \rho}{dt}<0.$ Furthermore, the following equality holds \begin{eqnarray} \frac{dx}{dt}=\overrightarrow{\alpha}(x,t).\label{cisf} \end{eqnarray} Setting $ \hat{u}(\xi,t)=u(\rho(t)\xi,t),\; \hat{v}(\xi,t)=v(\rho(t)\xi,t), $ and then using the chain rule as well as \eqref{isf} and \eqref{cisf}, see also \cite{mm07}, we obtain: \begin{eqnarray*} &&\hat{u}_t-\overrightarrow{\alpha}\cdot \nabla_x u=u_t,\quad \nabla_x u=\frac{1}{\rho(t)}\nabla_{\xi}\hat{u}\\ &&\Delta_x u= \frac{1}{\rho^2(t)}\Delta_{\xi}\hat{u},\quad \nabla_x \cdot \left(\overrightarrow{\alpha} u\right)=\overrightarrow{\alpha}\cdot \nabla_x u+N\,u \frac{\dot{\rho}(t)}{\rho(t)}, \end{eqnarray*} whilst similar relations hold for $v$ as well.
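The chain-rule identities above can be verified numerically by finite differences. The following is our own check, not part of the paper, using the hypothetical choices $u(x,t)=\sin(x)\,t$ and $\rho(t)=1+t$ in one spatial dimension ($N=1$):

```python
import numpy as np

# Finite-difference sanity check (our own sketch) of the chain-rule identities
# for the isotropic flow x = rho(t) * xi, with hypothetical choices
# u(x, t) = sin(x) * t and rho(t) = 1 + t in one spatial dimension.
rho  = lambda t: 1.0 + t
u    = lambda x, t: np.sin(x) * t
uhat = lambda xi, t: u(rho(t) * xi, t)   # uhat(xi, t) = u(rho(t) xi, t)

xi, t, h = 0.7, 0.3, 1e-5
x = rho(t) * xi

# Laplacian identity: u_xx = (1/rho^2) * (d^2/dxi^2) uhat
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
uhat_xixi = (uhat(xi + h, t) - 2 * uhat(xi, t) + uhat(xi - h, t)) / h**2
assert abs(u_xx - uhat_xixi / rho(t) ** 2) < 1e-4

# Time-derivative identity: uhat_t - alpha * u_x = u_t with alpha = rhodot * xi
uhat_t = (uhat(xi, t + h) - uhat(xi, t - h)) / (2 * h)
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
rhodot = 1.0  # d/dt of (1 + t)
assert abs(uhat_t - (u_t + rhodot * xi * u_x)) < 1e-4
```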
Therefore \eqref{egm1}-\eqref{egm4} reduces to the following system on the reference stationary domain $\Omega_0$ \begin{eqnarray} &&\hat{u}_t= \frac{D_1}{\rho^2(t)} \Delta_{\xi} \hat{u}-\left(1+N\frac{\dot{\rho}(t)}{\rho(t)}\right)\hat{u}+\displaystyle\frac{\hat{u}^p}{\hat{v}^q}, \quad \xi\in \Omega_0,\; t\in (0,T), \label{regm1}\\ &&\tau \hat{v}_t=\frac{D_2}{\rho^2(t)} \Delta_{\xi} \hat{v}-\left(1+N\frac{\dot{\rho}(t)}{\rho(t)}\right)\hat{v}+\displaystyle\frac{\hat{u}^r}{\hat{v}^s}, \quad \; \xi\in \Omega_0,\; t\in (0,T), \label{regm2}\\ && \frac{\partial \hat{u}}{\partial \nu}=\frac{\partial \hat{v}}{\partial \nu}=0 \quad \; \xi\in \partial\Omega_0,\; t\in (0,T),\label{regm3}\\ && \hat{u}(\xi,0)=\hat{u}_0(\xi)>0,\quad \hat{v}(\xi,0)=\hat{v}_0(\xi)>0, \quad \xi\in \Omega_0, \label{regm4} \end{eqnarray} where $\Delta_{\xi}$ represents the Laplacian on the reference static domain $\Omega_0.$ Henceforth, without any loss of generality we will omit the index $\xi$ from the Laplacian. Defining a new time scale \cite{L11}, \begin{eqnarray}\label{aal2} \sigma(t)=\int_0^t\frac{1}{\rho^2(\theta)}\,d\theta, \end{eqnarray} and setting $\tilde{u}(\xi,\sigma)=\hat{u}(\xi, t), \tilde{v}(\xi,\sigma)=\hat{v}(\xi, t),$ then system \eqref{regm1}-\eqref{regm4} can be written as \begin{eqnarray} &&\tilde{u}_{\sigma}= D_1 \Delta_{\xi} \tilde{u}-\left(\phi^2(\sigma)+N\frac{\dot{\phi}(\sigma)}{\phi(\sigma)}\right)\tilde{u}+\phi^2(\sigma)\displaystyle\frac{\tilde{u}^p}{\tilde{v}^q}, \quad \xi\in \Omega_0,\; \sigma\in (0,\Sigma), \label{tregm1}\\ &&\tau \tilde{v}_{\sigma}=D_2 \Delta_{\xi} \tilde{v}-\left(\phi^2(\sigma)+N\frac{\dot{\phi}(\sigma)}{\phi(\sigma)}\right)\tilde{v}+\phi^2(\sigma)\displaystyle\frac{\tilde{u}^r}{\tilde{v}^s}, \quad \; \xi\in \Omega_0,\; \sigma\in (0,\Sigma), \label{tregm2}\\ && \frac{\partial \tilde{u}}{\partial \nu}=\frac{\partial \tilde{v}}{\partial \nu}=0, \quad \; \xi\in \partial\Omega_0,\; \sigma\in (0,\Sigma),\label{tregm3}\\ &&
\tilde{u}(\xi,0)=\hat{u}_0(\xi)>0,\quad \tilde{v}(\xi,0)=\hat{v}_0(\xi)>0, \quad \xi\in \Omega_0, \label{tregm4} \end{eqnarray} where $ \rho(t)=\phi(\sigma),$ and thus $\dot{\rho}(t)=\frac{\dot{\phi}(\sigma)}{\phi^2(\sigma)},$ and $\Sigma=\sigma(T).$ Now if $D_1\ll D_2,$ i.e. when the inhibitor diffuses much faster than the activator, then system \eqref{tregm1}-\eqref{tregm4} can be well approximated by an ODE-PDE system with a non-local reaction term. Following the terminology coined in \cite{ke78}, we will refer to this approximation as the {\it shadow system}. Below we provide a rather rough derivation of the shadow system, while for a more rigorous approach one can appeal to the arguments in \cite{bk18}. Indeed, dividing \eqref{tregm2} by $D_2$ and taking $D_2\to +\infty,$ see also \cite{n11}, it follows that $\tilde{v}$ solves $ \Delta_{\xi} \tilde{v}=0, \quad \; \xi\in \Omega_0,\quad \frac{\partial \tilde{v}}{\partial \nu}=0, \quad \; \xi\in \partial\Omega_0, $ for any fixed $\sigma\in(0,\Sigma).$ Due to the imposed Neumann boundary condition, $\tilde{v}$ is a spatially homogeneous (independent of $\xi$) solution, i.e. $\tilde{v}(\xi,\sigma)=\eta(\sigma)$ and thus \eqref{tregm2} can be written as \begin{eqnarray}\label{rq3} \tau \frac{d\eta}{d \sigma}=-\Phi(\sigma)\eta+\phi^2(\sigma)\displaystyle\frac{\tilde{u}^r}{\eta^s}, \quad \sigma\in (0,\Sigma), \end{eqnarray} where \begin{eqnarray}\label{ps1} \Phi(\sigma):=\left(\phi^2(\sigma)+N\frac{\dot{\phi}(\sigma)}{\phi(\sigma)}\right).
\end{eqnarray} Averaging \eqref{rq3} over $\Omega_0$, we finally infer that the pair $(\tilde{u}, \eta)$ satisfies the {\it shadow system}\rm \begin{eqnarray} &&\tilde{u}_{\sigma}= D_1 \Delta_{\xi} \tilde{u}-\Phi(\sigma)\tilde{u}+\phi^2(\sigma)\displaystyle\frac{\tilde{u}^p}{\eta^q}, \quad \xi\in \Omega_0,\; \sigma\in (0,\Sigma), \label{stregm1}\\ &&\tau \frac{d\eta}{d\sigma}=-\Phi(\sigma)\eta+\phi^2(\sigma)\displaystyle\frac{\rlap{$-$}\!\int_{\Omega_0}\tilde{u}^r\,d\xi}{\eta^s}, \quad \sigma\in (0,\Sigma), \label{stregm2}\\ && \frac{\partial \tilde{u}}{\partial \nu}=0, \quad \; \xi\in \partial\Omega_0,\; \sigma\in (0,\Sigma),\label{stregm3}\\ && \tilde{u}(\xi,0)=\hat{u}_0(\xi)>0,\quad \eta(0)=\eta_0>0, \quad \xi\in \Omega_0, \label{stregm4} \end{eqnarray} where $ \rlap{$-$}\!\int_{\Omega_0}\tilde{u}^r\,d\xi=\frac{1}{|\Omega_0|}\int_{\Omega_0} \tilde{u}^r\,d\xi. $ In the limiting case $\tau\to 0,$ i.e. when the inhibitor's response time to the growth of the activator is quite small, the shadow system reduces to a single, albeit non-local, equation. Indeed, when $\tau=0$, \eqref{stregm2} entails that $ \eta(\sigma)=\left(\frac{\phi^2(\sigma)}{\Phi(\sigma)}\rlap{$-$}\!\int_{\Omega_0}\tilde{u}^r\,d\xi\right)^{\frac{1}{s+1}}, $ and thus \eqref{stregm1}-\eqref{stregm4} reduce to \begin{eqnarray} &&\tilde{u}_{\sigma}= D_1 \Delta_{\xi} \tilde{u}-\Phi(\sigma)\tilde{u}+\displaystyle\frac{\Psi(\sigma)\tilde{u}^p}{\left(\rlap{$-$}\!\int_{\Omega_0}\tilde{u}^r\,d\xi\right)^{\gamma}}, \quad \xi\in \Omega_0,\; \sigma\in (0,\Sigma), \label{nstregm1}\\ && \frac{\partial \tilde{u}}{\partial \nu}=0, \quad \; \xi\in \partial\Omega_0,\; \sigma\in (0,\Sigma),\label{nstregm2}\\ &&\tilde{u}(\xi,0)=\hat{u}_0(\xi)>0,\quad \xi\in \Omega_0, \label{nstregm3} \end{eqnarray} recalling $\gamma=\frac{q}{s+1}$ and \begin{eqnarray}\label{ps2} \Psi(\sigma)=\phi^{2(1-\gamma)}(\sigma)\Phi^{\gamma}(\sigma). 
\end{eqnarray} Recovering the $t$ variable entails that the following partial differential equation holds \begin{eqnarray} &&\hat{u}_{t}= \frac{D_1}{\rho^2(t)} \Delta_{\xi} \hat{u}-L(t)\hat{u}+L^{\gamma}(t)\displaystyle\frac{\hat{u}^p}{\left(\rlap{$-$}\!\int_{\Omega_0}\hat{u}^r\,d\xi\right)^{\gamma}}, \quad \xi\in \Omega_0,\; t\in (0,T), \label{nstregm1t}\\ && \frac{\partial \hat{u}}{\partial \nu}=0, \quad \; \xi\in \partial\Omega_0,\; t\in (0,T),\label{nstregm2t}\\ &&\hat{u}(\xi,0)=\hat{u}_0(\xi)>0,\quad \xi\in \Omega_0, \label{nstregm3t} \end{eqnarray} where $L(t):=\left(1+N\frac{\dot{\rho}(t)}{\rho(t)}\right),$ so that $\Phi(\sigma(t))=\rho^2(t)L(t)$ and $\Psi(\sigma(t))=\rho^2(t)L^{\gamma}(t)$ by virtue of \eqref{ps1} and \eqref{ps2}. We note that formulation \eqref{nstregm1}-\eqref{nstregm3} is more convenient for the mathematical analysis presented below; however, all of our theoretical results can be directly interpreted in terms of the equivalent formulation \eqref{nstregm1t}-\eqref{nstregm3t}. On the other hand, formulation \eqref{nstregm1t}-\eqref{nstregm3t} is more suitable for our numerical experiments, since closed-form expressions for the functions $\Phi(\sigma)$ and $\Psi(\sigma)$ are not always available. The main aim of the current work is to investigate the long-time dynamics of the non-local problem \eqref{nstregm1}-\eqref{nstregm3} and then check whether it resembles that of the reaction-diffusion system \eqref{tregm1}-\eqref{tregm4}. Biologically speaking, we will investigate whether, given that the inhibitor's response time to the growth of the activator is quite small and that the inhibitor diffuses much faster than the activator, it is necessary to study the dynamics of both reactants or it is sufficient to study only the activator's dynamics. From here onwards, we take $D_1=1$, revert to the initial variables $x, u$ instead of $\xi, \widetilde{u}$ and drop the index $\xi$ from the Laplacian $\Delta$ without any loss of generality. 
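The passage between the $\sigma$- and $t$-formulations can be spot-checked numerically. The following minimal sketch (plain Python; the values $\beta=0.25$, $N=2$ and the test time $t=1.7$ are illustrative assumptions, not taken from the text) considers an exponentially shrinking domain $\rho(t)=e^{-\beta t}$, compares a quadrature of the time rescaling \eqref{aal2} with its closed form, and verifies the identity $\Phi(\sigma(t))=\rho^2(t)L(t)$ derived from \eqref{ps1}.

```python
import math

# Illustrative check for an exponentially shrinking domain rho(t) = exp(-beta*t);
# beta and N are sample values with beta < 1/N.
beta, N = 0.25, 2

def rho(t):
    return math.exp(-beta * t)

def sigma_quad(t, n=100000):
    # composite trapezoid approximation of sigma(t) = int_0^t rho^{-2}(s) ds, cf. (aal2)
    h = t / n
    s = 0.5 * (rho(0.0) ** (-2) + rho(t) ** (-2))
    s += sum(rho(k * h) ** (-2) for k in range(1, n))
    return s * h

def sigma_exact(t):
    # closed form of (aal2) for rho(t) = exp(-beta*t)
    return (math.exp(2.0 * beta * t) - 1.0) / (2.0 * beta)

def Phi(s):
    # Phi(sigma) from (ps1) with phi(sigma) = (1 + 2*beta*sigma)^(-1/2)
    return (1.0 - N * beta) / (1.0 + 2.0 * beta * s)

t = 1.7
L_t = 1.0 - N * beta                      # L(t) = 1 + N*rho'/rho
err_sigma = abs(sigma_quad(t) - sigma_exact(t))
err_id = abs(Phi(sigma_exact(t)) - rho(t) ** 2 * L_t)
```

Both errors are at the level of the quadrature and floating-point tolerance, confirming the change of variables.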
Hence, we will focus our study on the following single partial differential equation \begin{eqnarray} &&u_{\sigma}= \Delta u-\Phi(\sigma)u+\displaystyle\frac{\Psi(\sigma)u^p}{\left(\rlap{$-$}\!\int_{\Omega_0} u^r\,dx\right)^{\gamma}}, \quad x\in \Omega_0,\; \sigma\in (0,\Sigma), \label{nstregm1k}\\ && \frac{\partial u}{\partial \nu}=0, \quad \; x\in \partial\Omega_0,\; \sigma\in (0,\Sigma),\label{nstregm2k}\\ &&u(x,0)=u_0(x)>0,\quad x\in \Omega_0. \label{nstregm3k} \end{eqnarray} The layout of the current work is as follows. Section \ref{ode-bge} deals with the derivation and proofs of various blow-up results, induced by the non-local reaction term (ODE blow-up results), together with some global-in-time existence results for problem \eqref{nstregm1k}-\eqref{nstregm3k}. Following the approach developed in \cite{KS16,KS18}, in Section \ref{tipf} we present and prove a Turing instability result associated with \eqref{nstregm1k}-\eqref{nstregm3k}. This Turing instability occurs under the {\it Turing condition} \eqref{tc} and is exhibited in the form of a {\it diffusion-driven} finite-time blow-up. Finally, in Section \ref{num-sec} we appeal to various numerical experiments in order to confirm some of the theoretical results presented in Sections \ref{ode-bge} and \ref{tipf}. We also compare numerically the long-time dynamics of the non-local problem \eqref{nstregm1k}-\eqref{nstregm3k} with that of the shadow system \eqref{stregm1}-\eqref{stregm4}. \section{ODE Blow-up and Global Existence}\label{ode-bge} The current section is devoted to the presentation of some blow-up results for problem \eqref{nstregm1k}-\eqref{nstregm3k}, i.e. blow-up results induced by the kinetic (non-local) term in \eqref{nstregm1k}. Besides, some global-in-time existence results for problem \eqref{nstregm1k}-\eqref{nstregm3k} are also presented. Throughout the manuscript we use the notation $C$ and $c$ to denote positive constants with large and small values, respectively. 
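To illustrate the structure of problem \eqref{nstregm1k}-\eqref{nstregm3k} before the analysis, the toy discretization below (plain Python) integrates a one-dimensional analogue with $\Phi=\Psi=1$ and illustrative exponents by explicit Euler steps; the reflection of neighbouring nodes encodes the Neumann condition, while the mean of $u^r$ over the grid plays the role of the non-local term. This is only a sketch under these assumptions, not the scheme used in Section \ref{num-sec}.

```python
import math

# Minimal explicit finite-difference sketch of (nstregm1k)-(nstregm3k) in 1D
# with Phi = Psi = 1 (stationary domain); p, r, gamma are illustrative only.
p, r, gamma = 2.0, 1.0, 0.5
M = 50
dx = 1.0 / M                  # grid on Omega_0 = (0,1)
dt = 0.2 * dx * dx            # respects the explicit stability bound dt <= dx^2/2

u = [1.0 + 0.5 * math.cos(math.pi * i * dx) for i in range(M + 1)]

def step(u):
    avg = sum(v ** r for v in u) / len(u)       # non-local term: mean of u^r
    new = u[:]
    for i in range(M + 1):
        il = i - 1 if i > 0 else 1              # reflection = homogeneous Neumann BC
        ir = i + 1 if i < M else M - 1
        lap = (u[il] - 2.0 * u[i] + u[ir]) / dx ** 2
        new[i] = u[i] + dt * (lap - u[i] + u[i] ** p / avg ** gamma)
    return new

for _ in range(400):
    u = step(u)

umin, umax = min(u), max(u)
```

Over this short time horizon the solution stays positive and bounded, in line with Proposition \ref{lbd} below.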
Our first observation is that the concentration of the activator cannot become extinct in finite time. Indeed, the following proposition holds. \begin{proposition}\label{lbd} Assume that \begin{eqnarray}\label{mts2an} \inf_{(0,\Sigma)} \Psi(\sigma):=m_{\Psi}>0,\; \inf_{(0,\Sigma)} \Phi(\sigma):=m_{\Phi}>0\;\mbox{and}\; \sup_{(0,\Sigma)} \Phi(\sigma):=M_{\Phi}<+\infty, \end{eqnarray} then for each $\Sigma>0$ there exists $C_{\Sigma}>0$ such that for the solution $u(x,\sigma)$ of \eqref{nstregm1k}-\eqref{nstregm3k} the following inequality holds \begin{equation} \label{jg0} u(x,\sigma)\geq C_{\Sigma}\quad \mbox{in}\quad \Omega_0\times [0,\Sigma). \end{equation} \end{proposition} \begin{proof} Owing to the maximum principle and by using \eqref{mts2an} we derive that $u=u(x,\sigma)>0.$ By virtue of the comparison principle, we also deduce that $u(x,\sigma)\geq \tilde u(\sigma)$, where $\tilde{u}=\tilde{u}(\sigma)$ is the solution to $ \frac{d\tilde{u}}{d\sigma}=-M_{\Phi}\tilde{u}\quad \mbox{in $(0, \Sigma)$},\quad \tilde{u}(0)=\tilde{u}_0\equiv \inf_{\Omega_0} u_0(x)>0, $ and thus \eqref{jg0} is satisfied with $C_{\Sigma}=\tilde{u}_0e^{-M_{\Phi} \Sigma}$. \end{proof} \begin{remark}\label{rem1} It is easily checked that condition \eqref{mts2an} is satisfied for any decreasing function $\phi(\sigma)$ satisfying \begin{eqnarray}\label{diq1} \phi(\sigma)> \frac{1}{\sqrt{2N\sigma+1}},\; 0<\sigma<\Sigma, \end{eqnarray} since then by virtue of \eqref{ps1} \begin{eqnarray}\label{ts1} 0<\Phi(\sigma)=\left(\phi^2(\sigma)+N\frac{\dot{\phi}(\sigma)}{\phi(\sigma)}\right)< \phi^2(\sigma)< \phi^2(0)=1,\quad 0<\sigma<\Sigma. 
\end{eqnarray} Then \eqref{ts1} via \eqref{ps2} implies that \begin{eqnarray}\label{ts2a} 0<\Psi(\sigma)=\left(\phi(\sigma)\right)^{2(1-\gamma)} \Phi^{\gamma}(\sigma)<1 ,\quad\mbox{for}\quad 0<\gamma<1,\quad 0<\sigma<\Sigma \end{eqnarray} and \begin{eqnarray}\label{ts2c} 0<\Psi(\sigma)=\left(\phi(\sigma)\right)^{2(1-\gamma)} \Phi^{\gamma}(\sigma)<m^{2(1-\gamma)}_{\Phi} ,\quad\mbox{for}\quad \gamma>1,\quad 0<\sigma<\Sigma, \end{eqnarray} where $m_{\Phi}=\inf_{(0,\Sigma)}\Phi(\sigma)>0.$ \end{remark} A key estimate for several of the blow-up results presented below is provided by the following proposition. \begin{proposition}\label{eiq} Let $\Psi(\sigma)$ and $\Phi(\sigma)$ satisfy \eqref{mts2an}; then there exists $\delta_0>0$ such that for any $0<\delta\leq \delta_0$ the following estimate is fulfilled \begin{equation} \rlap{$-$}\!\int_{\Omega_0} u^{-\delta}\leq C\quad\mbox{for any}\quad 0<\sigma<\Sigma, \label{eqn:1.16h} \end{equation} where the constant $C$ is independent of time $\sigma.$ \label{thm:1.1} \end{proposition} \begin{proof} Define $\chi=u^{\frac{1}{\alpha}}$ for $\alpha\neq 0$; then we can easily check that $\chi$ satisfies \begin{eqnarray} &&\alpha \chi_{\sigma}=\alpha\left(\Delta \chi+4 (\alpha-1) \vert\nabla \chi^{\frac{1}{2}}\vert^2\right)-\Phi\chi+\frac{\Psi u^{p-1+\frac{1}{\alpha}}}{\left(\rlap{$-$}\!\int_{\Omega_0} u^r\right)^{\gamma}} \quad \mbox{in}\quad \Omega_0\times (0,\Sigma),\label{ob1}\\ && \frac{\partial \chi}{\partial \nu}=0, \quad \mbox{on}\quad \partial\Omega_0\times (0,\Sigma),\label{ob2a}\\ &&\chi(x,0)=u_0^{\frac{1}{\alpha}}(x),\quad\mbox{in}\quad \Omega_0.\label{ob2} \end{eqnarray} Averaging (\ref{ob1}) over $\Omega_0$, we obtain \begin{equation}\label{ob4} \alpha\frac{d}{d\sigma}\rlap{$-$}\!\int_{\Omega_0} \chi+4\alpha(1-\alpha)\rlap{$-$}\!\int_{\Omega_0}\vert \nabla \chi^{\frac{1}{2}}\vert^2+\Phi\rlap{$-$}\!\int_{\Omega_0} \chi= \frac{\rlap{$-$}\!\int_{\Omega_0}\Psi u^{p-1+\frac{1}{\alpha}}}{\left(\rlap{$-$}\!\int_{\Omega_0} 
u^r\right)^{\gamma}}, \end{equation} and hence \begin{eqnarray}\label{ts2} \frac{d}{d\sigma}\rlap{$-$}\!\int_{\Omega_0} \chi+4(1-\alpha)\rlap{$-$}\!\int_{\Omega_0}\vert\nabla \chi^{\frac{1}{2}}\vert^2+\frac{\Phi}{\alpha}\rlap{$-$}\!\int_{\Omega_0} \chi\leq 0, \end{eqnarray} for $\alpha<0.$ Setting $\delta=-\frac{1}{\alpha}$ we have \[ \frac{d}{d\sigma}\rlap{$-$}\!\int_{\Omega_0} \chi+4(1+\delta^{-1})\rlap{$-$}\!\int_{\Omega_0}\vert \nabla \chi^{\frac{1}{2}}\vert^2 \leq M_{\Phi}\delta \rlap{$-$}\!\int_{\Omega_0} \chi. \] Now, recall that the Poincar\'e-Wirtinger inequality, \cite{br11}, reads \begin{eqnarray}\label{pwi} \Vert \nabla w\Vert_2^2\geq \mu_2\Vert w\Vert_2^2,\quad\mbox{for any}\quad w\in H^1(\Omega_0)\quad\mbox{with}\quad\int_{\Omega_0}w\,dx=0, \end{eqnarray} where $\mu_2$ is the second (i.e. the first non-zero) eigenvalue of the Laplace operator associated with Neumann boundary conditions. Then \eqref{ts2} by virtue of \eqref{pwi} for $w=\chi^{\frac{1}{2}}$ entails that $\frac{d}{d\sigma}\rlap{$-$}\!\int_{\Omega_0} \chi+c\rlap{$-$}\!\int_{\Omega_0} \chi\leq 0$, for some positive constant $c,$ provided $0<\delta\ll 1$. Consequently, Gr\"{o}nwall's lemma yields that $\rlap{$-$}\!\int_{\Omega_0}\chi\leq C<\infty$ for any $0<\sigma<\Sigma$ and thus (\ref{eqn:1.16h}) follows due to the fact that $\chi=u^{-\delta}.$ \end{proof} \begin{remark}\label{ts3} Note that Proposition \ref{thm:1.1} guarantees that the non-local term of problem \eqref{nstregm1k}-\eqref{nstregm3k} stays away from zero and hence its solution $u$ is bounded away from zero as well. 
In fact, inequality (\ref{eqn:1.16h}) implies $\displaystyle{ \rlap{$-$}\!\int_{\Omega_0} u^{\delta}\geq c=C^{-1}}$ and then \begin{equation} \rlap{$-$}\!\int_{\Omega_0} u^r\geq \left(\rlap{$-$}\!\int_{\Omega_0} u^\delta \right)^{r/\delta}\geq c^{r/\delta}>0\quad\mbox{for any}\quad 0<\sigma<\Sigma, \label{eqn:1.17} \end{equation} follows by Jensen's inequality, \cite{ev10}, taking $\delta\leq r,$ where again $c$ is independent of time $\sigma.$ The latter estimate rules out the possibility of (finite- or infinite-time) quenching, i.e. $ \lim_{\sigma\to \Sigma} ||u(\cdot,\sigma)||_{\infty}=0 $ cannot happen, and thus extinction of the activator in the long run is not possible. \end{remark} \begin{remark}\label{nny} In case $\Phi(\sigma)$ is not bounded from above, as happens for $\rho(t)=e^{\beta t},\beta>0,$ when $\Phi(\sigma)=(1+N\beta)(1-2\beta \sigma)^{-1}, 0<\sigma<\frac{1}{2\beta},$ both of the estimates \eqref{eqn:1.16h} and \eqref{eqn:1.17} still hold true; however, the involved constants depend on time $\sigma$ and thus (finite- or infinite-time) quenching cannot be ruled out. \end{remark} Next we present our first ODE-type blow-up result for problem \eqref{nstregm1k}-\eqref{nstregm3k} when an {\it anti-Turing condition}, the reverse of \eqref{tc}, is satisfied. \begin{theorem}\label{thm1} Take $p \geq r, 0<\gamma<1$ and $\omega=p-r\gamma>1.$ Assume also $\Psi(\sigma)>0$ and consider initial data $u_0(x)$ such that \begin{eqnarray}\label{bid} \bar{u}_0:=\rlap{$-$}\!\int_{\Omega_0} u_0\,dx>(\omega-1)^{\frac{1}{1-\omega}}\,I^{\frac{1}{1-\omega}}(\Sigma)>0, \end{eqnarray} where \begin{eqnarray}\label{nk2} I(\Sigma):=\int_0^{\Sigma} \Psi(\theta) e^{(1-\omega)\int^{\theta} \Phi(\eta)\,d\eta}\,d \theta<\infty. \end{eqnarray} Then the solution of \eqref{nstregm1k}-\eqref{nstregm3k} blows up in finite time $\Sigma_b<\Sigma$, \\ i.e. 
$\lim_{\sigma\to \Sigma_b}\Vert u(\cdot,\sigma)\Vert_\infty=+\infty.$ \label{lem:1.2} \end{theorem} \begin{proof} Since $p>1$ and $p\geq r$, by virtue of Jensen's and H\"{o}lder's inequalities $ \rlap{$-$}\!\int_{\Omega_0} u^p\geq \left( \rlap{$-$}\!\int_{\Omega_0} u\right)^{p}$ and $ \left( \rlap{$-$}\!\int_{\Omega_0} u^r\right)^\gamma\leq \left( \rlap{$-$}\!\int_{\Omega_0} u^p \right)^{\frac{\gamma r}{p}}.$ Then $\bar{u}(\sigma)=\rlap{$-$}\!\int_{\Omega_0} u(x,\sigma)\,dx$ satisfies \begin{equation} \displaystyle\frac{d \bar{u}}{d\sigma}= -\Phi(\sigma)\bar{u}+\Psi(\sigma)\displaystyle{\frac{\rlap{$-$}\!\int_{\Omega_0} u^p}{\left(\rlap{$-$}\!\int_{\Omega_0} u^r\right)^{\gamma}}}\geq -\Phi(\sigma)\bar{u}+\Psi(\sigma)\bar{u}^{p-r\gamma}\quad\mbox{for}\quad 0<\sigma<\Sigma. \label{eqn:1.14} \end{equation} Now let $F(\sigma)$ be the solution of the following Bernoulli-type initial value problem $ \frac{d F}{d\sigma}=-\Phi(\sigma)F(\sigma)+\Psi(\sigma)F^{\omega}(\sigma),\; 0<\sigma<\Sigma,\quad F(0)=\bar{u}_0>0, $ then via the comparison principle $F(\sigma)\leq \bar{u}(\sigma)$ for $0<\sigma<\Sigma$ and $F(\sigma)$ is given by $ F(\sigma)=e^{-\int^{\sigma} \Phi(\eta)\,d\eta}(G(\sigma))^{\frac{1}{1-\omega}}, $ where $ G(\sigma):=\left[\bar{u}_0^{1-\omega}-(\omega-1)\int_0^{\sigma} \Psi(\theta)e^{(1-\omega)\int^{\theta} \Phi(\eta)\,d\eta}\,d \theta\right]. $ Note that $F(\sigma)$ blows up in finite time if there exists $\sigma^*<\Sigma$ such that $G(\sigma^*)=0.$ First note that $G(0)>0;$ furthermore, under the assumption \eqref{bid} we have $\lim_{\sigma\to \Sigma}G(\sigma)<0$ and thus by virtue of the intermediate value theorem there exists $\sigma^*<\Sigma$ such that $G(\sigma^*)=0.$ The latter implies that $\lim_{\sigma\to \sigma^*} F(\sigma)=+\infty$ and therefore $\lim_{\sigma\to \Sigma_b} \bar{u}(\sigma)=+\infty$ for some $\Sigma_b\leq\sigma^*,$ which completes the proof. 
\end{proof} \begin{remark}\label{aal5} Note that for an exponentially growing domain, i.e. when $\rho(t)=e^{\beta t}, \beta>0,$ condition \eqref{nk2} is satisfied since then $1<\Phi(\sigma)=\left(1+N\beta\right)(1-2\beta \sigma)^{-1}$ and $1<\Psi(\sigma)=\left(1+N\beta\right)^{\gamma}(1-2\beta \sigma)^{-1}$ for all $\sigma \in\left(0,\frac{1}{2\beta}\right)$. Thus \begin{eqnarray*} I(\Sigma)=\left(1+N\beta\right)^{\gamma}\int_0^{\frac{1}{2 \beta}} \left(1-2\beta \theta\right)^{\frac{(\omega-1)(1+N\beta)}{2\beta}-1}\,d \theta=\frac{\left(1+N\beta\right)^{\gamma-1}}{(\omega-1)}<+\infty, \end{eqnarray*} and according to Theorem \ref{thm1} finite-time blow-up takes place at time \begin{eqnarray}\label{am1} \Sigma_b\leq\sigma_{g}=\frac{1}{2\beta}\left\{1-\left[1-(1+N\beta)^{1-\gamma}\bar{u}_0^{1-\omega}\right]^{\frac{2 \beta}{(\omega-1)(1+N\beta)}}\right\}, \end{eqnarray} provided that the initial data satisfy $ \bar{u}_0>\left(1+N\beta\right)^{\frac{1-\gamma}{\omega-1}}. $ Besides, for an exponentially shrinking domain, i.e. when $\rho(t)=e^{-\beta t}, 0<\beta<\frac{1}{N},$ condition \eqref{nk2} is again valid since then \begin{eqnarray}\label{aal3} 0<\Phi(\sigma)=\left(1-N\beta\right)(1+2\beta \sigma)^{-1}<1,\quad \sigma\in(0,\infty), \end{eqnarray} and \begin{eqnarray}\label{aal4} 0<\Psi(\sigma)=\left(1-N\beta\right)^{\gamma}(1+2\beta \sigma)^{-1}<1,\quad \sigma\in(0,\infty). \end{eqnarray} In that case \begin{eqnarray*} I(\Sigma)=\left(1-N\beta\right)^{\gamma}\int_0^{+\infty} \left(1+2\beta \theta\right)^{\frac{(1-\omega)(1-N\beta)}{2\beta}-1}\,d \theta=\frac{\left(1-N\beta\right)^{\gamma-1}}{(\omega-1)}<+\infty, \end{eqnarray*} and again finite-time blow-up occurs at \begin{eqnarray}\label{am2} \Sigma_b\leq \sigma_{s}=\frac{1}{2\beta}\left\{\left[1-(1-N\beta)^{1-\gamma}\bar{u}_0^{1-\omega}\right]^{\frac{2 \beta}{(1-\omega)(1-N\beta)}}-1\right\}, \end{eqnarray} provided that the initial data satisfy $ \bar{u}_0>\left(1-N\beta\right)^{\frac{1-\gamma}{\omega-1}}. 
$ For a stationary domain, i.e. when $\rho(t)=\phi(\sigma)=1,$ we have $\Phi(\sigma)=\Psi(\sigma)=1$ and thus finite-time blow-up occurs at $\Sigma_1\leq\sigma_1,$ provided that $\bar{u}_0>1,$ see also \cite{KS16, KS18}, where $ \sigma_1:=\frac{1}{1-\omega}\ln\left(1-\bar{u}_0^{1-\omega}\right). $ \end{remark} \begin{remark} When the domain evolves logistically, which is a biologically feasible choice \cite{pspbm04}, i.e. when $ \rho(t)=\frac{e^{\beta t}}{1+\frac{1}{m}\left(e^{\beta t}-1\right)},\quad\mbox{for}\quad m\neq1, $ then \eqref{aal2} cannot be inverted explicitly for $t$ and it is more convenient to deal with problem \eqref{nstregm1t}-\eqref{nstregm3t} instead. Then, following the same approach as in Theorem \ref{thm1}, we show that the solution of \eqref{nstregm1t}-\eqref{nstregm3t} exhibits finite-time blow-up under the same conditions on the parameters $p,\gamma,r$ provided that the initial condition satisfies \begin{eqnarray}\label{sk1} \bar{u}_0:=\rlap{$-$}\!\int_{\Omega_0} u_0\,dx>(\omega-1)^{\frac{1}{1-\omega}}\,\left(\int_0^{\infty} L^{\gamma}(\theta) e^{(1-\omega)\int^{\theta}L(\eta)\,d\eta}\,d \theta\right)^{\frac{1}{1-\omega}}, \end{eqnarray} where now $L(t)=1+\frac{N\beta \left(1-\frac{1}{m}\right)}{1+\frac{1}{m}\left(e^{\beta t}-1\right)}.$ \end{remark} \begin{remark}\label{rem1a} Assume now that \begin{eqnarray}\label{nk11} 0<\bar{u}_0<(\omega-1)^{\frac{1}{1-\omega}}\,I^{\frac{1}{1-\omega}}(\Sigma), \end{eqnarray} then $G(\Sigma)>0$, and since $G(\sigma)$ is strictly decreasing we get that $G(\sigma)>0$ for any $0<\sigma<\Sigma$, which implies that $F(\sigma)$ never blows up. However, since $F(\sigma)$ is only a lower bound for $\bar{u}(\sigma)$, no conclusion about the boundedness of $\bar{u}(\sigma)$ can be drawn in this case; this regime is explored numerically in Section \ref{num-sec}. \end{remark} Next, we investigate the dynamics of some $L^\ell$-norms $\|u(\cdot,\sigma)\|_{\ell}$, which identify some invariant regions in the phase space. 
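The closed forms of Remark \ref{aal5} can be confirmed numerically. The sketch below (plain Python; the values $N=1$, $\beta=0.5$, $\gamma=0.5$, $\omega=2$, $\bar{u}_0=2$ are illustrative assumptions, with the antiderivative $\int^{\theta}\Phi$ normalized to vanish at $\theta=0$) compares a quadrature of \eqref{nk2} for an exponentially growing domain with $(1+N\beta)^{\gamma-1}/(\omega-1)$ and checks that $G$ vanishes at the bound $\sigma_g$ of \eqref{am1}.

```python
# Exponentially growing domain: Phi, Psi as in Remark aal5; sample parameters.
N, beta, gamma, omega, ubar0 = 1, 0.5, 0.5, 2.0, 2.0
Sigma = 1.0 / (2.0 * beta)

def integrand(th):
    # Psi(th) * exp((1-omega) * int_0^th Phi), which here equals
    # (1+N*beta)^gamma * (1-2*beta*th)^((omega-1)*(1+N*beta)/(2*beta) - 1)
    kappa = (omega - 1.0) * (1.0 + N * beta) / (2.0 * beta)
    return (1.0 + N * beta) ** gamma * (1.0 - 2.0 * beta * th) ** (kappa - 1.0)

def trap(f, lo, hi, n=100000):
    # composite trapezoid rule
    h = (hi - lo) / n
    return h * (0.5 * f(lo) + 0.5 * f(hi) + sum(f(lo + k * h) for k in range(1, n)))

I_num = trap(integrand, 0.0, Sigma)
I_exact = (1.0 + N * beta) ** (gamma - 1.0) / (omega - 1.0)

# blow-up time bound (am1) and the corresponding root of G
expo = 2.0 * beta / ((omega - 1.0) * (1.0 + N * beta))
base = 1.0 - (1.0 + N * beta) ** (1.0 - gamma) * ubar0 ** (1.0 - omega)
sigma_g = (1.0 - base ** expo) / (2.0 * beta)
G_at_sigma_g = ubar0 ** (1.0 - omega) - (omega - 1.0) * trap(integrand, 0.0, sigma_g)
```

The quadrature reproduces $I(\Sigma)$ and $G(\sigma_g)\approx 0$, i.e. $\sigma_g$ is exactly the first root of $G$ for these parameter values.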
We first define $\zeta(\sigma)=\rlap{$-$}\!\int_{\Omega_0} u^r\,dx$, $y(\sigma)=\rlap{$-$}\!\int_{\Omega_0} u^{p-1+r}\,dx$ and $w(\sigma)=\rlap{$-$}\!\int_{\Omega_0} u^{-p+1+r}\,dx$; then H\"{o}lder's inequality implies \begin{eqnarray} w(\sigma)y(\sigma)\geq \zeta^2(\sigma), \quad 0\leq \sigma<\Sigma. \label{psl1} \end{eqnarray} Our first result in this direction provides conditions under which finite-time blow-up takes place when an {\it anti-Turing condition} is in place, and is stated as follows. \begin{theorem}\label{obu1} Take $0<\gamma<1$ and $r\leq 1<\frac{p-1}{r}.$ Assume that $\Phi(\sigma), \Psi(\sigma)$ satisfy \eqref{mts2an}. If one of the following conditions holds: \begin{enumerate} \item $w(0)<\frac{m_{\Psi}}{M_{\Phi}}\zeta(0)^{1-\gamma},$ \item $\frac{p-1}{r}\geq 2$ and $w(0)<1,$ \end{enumerate} then finite-time blow-up occurs. \end{theorem} \begin{proof} Set $\chi=u^{\frac{1}{\alpha}}$ with $\alpha\neq 0$; then, following the same steps as in Proposition \ref{eiq}, we derive \begin{eqnarray} &&\quad \alpha \chi_{\sigma}=\alpha\left(\Delta \chi+4 (\alpha-1) \vert\nabla \chi^{\frac{1}{2}}\vert^2\right)-\Phi \chi+\Psi \frac{u^{p-1+\frac{1}{\alpha}}}{\left(\rlap{$-$}\!\int_{\Omega_0} u^r\right)^{\gamma}}, \;x\in \Omega_0,\; \sigma\in (0,\Sigma),\label{nob1} \\ && \quad \frac{\partial \chi}{\partial \nu}=0 \quad \; x\in \partial\Omega_0,\; \sigma\in (0,\Sigma),\label{nob2}\\ && \quad\chi(x,0)=u_0^{\frac{1}{\alpha}}(x), \quad x\in \Omega_0. \label{nob3} \end{eqnarray} Averaging (\ref{nob1}) over $\Omega_0$ and using the zero-flux boundary condition \eqref{nob2}, we obtain \begin{eqnarray}\label{mkl1} \alpha\frac{d}{d\sigma}\rlap{$-$}\!\int_{\Omega_0} \chi=-4\alpha(1-\alpha)\rlap{$-$}\!\int_{\Omega_0}\vert \nabla \chi^{\frac{1}{2}}\vert^2-\Phi(\sigma)\rlap{$-$}\!\int_{\Omega_0} \chi+\Psi(\sigma)\frac{\rlap{$-$}\!\int_{\Omega_0} u^{p-1+\frac{1}{\alpha}}}{\left(\rlap{$-$}\!\int_{\Omega_0} u^r\right)^{\gamma}}. 
\end{eqnarray} Relation \eqref{mkl1} for $\alpha=\frac{1}{r},$ since also $r\leq 1,$ entails that \begin{eqnarray}\label{psl2} \frac{1}{r}\frac{d \zeta}{d\sigma}&&=-\frac{4}{r}\left(1-\frac{1}{r}\right)\rlap{$-$}\!\int_{\Omega_0}\vert \nabla \chi^{\frac{1}{2}}\vert^2-\Phi(\sigma)\zeta(\sigma)+\Psi(\sigma)\frac{\rlap{$-$}\!\int_{\Omega_0} u^{p-1+r}}{\left(\rlap{$-$}\!\int_{\Omega_0} u^r\right)^{\gamma}}\nonumber\\ &&\geq -M_{\Phi} \zeta(\sigma)+m_{\Psi}\frac{\zeta^{2-\gamma}(\sigma)}{w(\sigma)}\geq \frac{\zeta(\sigma)}{w(\sigma)}\left(-M_{\Phi} w(\sigma)+m_{\Psi}\zeta^{1-\gamma}(\sigma)\right), \end{eqnarray} where we have used \eqref{psl1} together with \eqref{mts2an}. Furthermore, since $\frac{p-1}{r}>1$, relation \eqref{mkl1} for $\alpha=\frac{1}{-p+1+r}$ leads to \begin{equation}\label{mk2} \alpha\frac{dw}{d\sigma}=4\alpha(\alpha-1)\rlap{$-$}\!\int_{\Omega_0} \vert \nabla u^{\frac{1}{2\alpha}}\vert^2-\Phi(\sigma)w+\Psi(\sigma)\zeta^{1-\gamma}, \end{equation} which, owing to \eqref{mts2an} and using the fact that $\alpha=\frac{1}{-p+1+r}<0$, entails \begin{equation} \frac{1}{p-1-r} \frac{dw}{d\sigma}\leq M_{\Phi} w(\sigma)-m_{\Psi}\zeta^{1-\gamma}(\sigma). \label{ob7} \end{equation} Note that since $0<\gamma<1$, we have that the curve $ \Gamma_1: w=\frac{m_{\Psi}\zeta^{1-\gamma}}{M_{\Phi}}, \ \zeta>0, $ is concave in the $(\zeta,w)$-plane, with its endpoint at the origin $(0,0).$ Furthermore relations (\ref{psl2}) and (\ref{ob7}) imply that the region $\mathcal{R}=\{ (\zeta,w) \mid w<\frac{m_{\Psi}\zeta^{1-\gamma}}{M_{\Phi}}\}$ is invariant, and $\zeta(\sigma)$ and $w(\sigma)$ are increasing and decreasing on $\mathcal{R},$ respectively. 
Under the assumption $w(0)<\frac{m_{\Psi}\zeta^{1-\gamma}(0)}{M_{\Phi}}$ we have $\frac{dw}{d\sigma}<0, \ \frac{d\zeta}{d\sigma}>0, \quad\mbox{for}\quad 0\leq \sigma<\Sigma, $ and thus, \[ \frac{m_{\Psi}}{w(\sigma)}-\frac{M_{\Phi}}{\zeta^{1-\gamma}(\sigma)}\geq \frac{m_{\Psi}}{w(0)}-\frac{M_{\Phi}}{\zeta^{1-\gamma}(0)}\equiv c_0>0, \quad\mbox{for}\quad 0\leq \sigma<\Sigma.\] Therefore by virtue of \eqref{psl2} we derive the differential inequality \begin{eqnarray} \frac{1}{r}\frac{d\zeta}{d\sigma}\geq -M_{\Phi}\zeta(\sigma)+m_{\Psi}\frac{\zeta^{2-\gamma}(\sigma)}{w(\sigma)}&&=\zeta^{2-\gamma}(\sigma)\left(\frac{m_{\Psi}}{w(\sigma)}-\frac{M_{\Phi}}{\zeta^{1-\gamma}(\sigma)}\right)\nonumber\\ &&\geq c_0\zeta^{2-\gamma}(\sigma), \quad 0\leq \sigma<\Sigma. \label{eqn:5.3} \end{eqnarray} Since $2-\gamma>1$, inequality (\ref{eqn:5.3}) implies that $\zeta(\sigma)$ blows up in finite time $ \sigma_1\leq \hat{\sigma}_1\equiv\frac{\zeta^{\gamma-1}(0)}{(1-\gamma)c_0 r}<\infty, $ and since $\zeta(\sigma)=\rlap{$-$}\!\int_{\Omega_0} u^r\,dx\leq \|u(\cdot,\sigma)\|^r_{\infty}$ we conclude that $u(x,\sigma)$ blows up in finite time $\Sigma_b\leq \hat{\sigma}_1.$ We now consider the latter case. When $\frac{p-1}{r}\geq 2$, then $q=\frac{p-1-r}{r}\geq 1$, and thus by virtue of Jensen's inequality, \cite{ev10}, we obtain $ \rlap{$-$}\!\int_{\Omega_0} u^r\cdot\left( \rlap{$-$}\!\int_{\Omega_0} (u^{-r})^q\right)^{\frac{1}{q}}\geq \rlap{$-$}\!\int_{\Omega_0} u^{r}\cdot\rlap{$-$}\!\int_{\Omega_0} u^{-r}\geq 1, $ which entails $\zeta^{\frac{1}{r}}(\sigma)\geq w^{-\frac{1}{p-1-r}}(\sigma)$, and thus by virtue of \eqref{mts2an} \begin{eqnarray}\label{mk1} w(\sigma)\geq \zeta^{-\frac{p-1-r}{r}}(\sigma)=\zeta^{1-\frac{p-1}{r}}(\sigma)> \frac{1}{\Phi(\sigma)}\zeta^{1-\frac{p-1}{r}}(\sigma)\geq \frac{m_{\Psi}}{M_{\Phi}}\zeta^{1-\frac{p-1}{r}}(\sigma), \end{eqnarray} for any $\sigma\in[0,\Sigma)$. 
Since $\frac{p-1}{r}\geq 2$, the curve $\Gamma_2: w=\frac{m_{\Psi}\zeta^{1-\frac{p-1}{r}}}{M_{\Phi}}, \ \zeta>0,$ is convex and approaches $+\infty$ and $0$ as $\zeta\downarrow 0^+$ and $\zeta\uparrow+\infty$, respectively. Moreover, the curves $\Gamma_1$ and $\Gamma_2$ intersect at the point $(\zeta,w)=(1,1),$ and therefore, $w(0)<1$ combined with \eqref{mk1} implies that $w(0)<\frac{m_{\Psi}\zeta^{1-\gamma}(0)}{M_{\Phi}}$. Thus the latter case is reduced to the former one and again finite-time blow-up for the solution $u(x,\sigma)$ is established. \end{proof} \begin{remark}\label{rem4} Note that in the case of a stationary domain $\zeta(\sigma)$ again blows up, see \cite{KS16, KS18}, in finite time $ \sigma_2\leq \hat{\sigma}_2\equiv\frac{\zeta^{\gamma-1}(0)}{(1-\gamma)c_1 r}, $ where $ c_1\equiv \frac{1}{w(0)}-\frac{1}{\zeta^{1-\gamma}(0)}, $ and thus $u(x,\sigma)$ blows up in finite time $\Sigma_1\leq \hat{\sigma}_2$ under the condition $w(0)<\zeta(0)^{1-\gamma}.$ \end{remark} \begin{remark} For a logistically growing or shrinking domain, problem \eqref{nstregm1t}-\eqref{nstregm3t} exhibits finite-time blow-up under the assumptions of Theorem \ref{obu1} whenever $ w(0)<M_{L}^{-(\gamma+1)}\zeta(0)^{1-\gamma}, $ where $ M_{L}:=\sup_{(0,\infty)} L(t)=\sup_{(0,\infty)}\left(1+\frac{N\beta \left(1-\frac{1}{m}\right)}{1+\frac{1}{m}\left(e^{\beta t}-1\right)}\right). $ In particular, for a logistically growing domain, when $m>1,$ we have $M_L=L(0)=1+N\beta\left(1-\frac{1}{m}\right),$ whilst for a logistically shrinking domain, when $0<m<1,$ we have $M_L=\lim_{t\to+\infty}L(t)=1$ and hence in that case the blow-up conditions $(1)$ and $(2)$ of Theorem \ref{obu1} coincide with the ones of \cite[Theorem 3.5]{KS16}, see also Remark \ref{rem4}. \end{remark} Now we present a global-in-time existence result stated as follows. 
\begin{theorem}\label{thm4} Assume that $\frac{p-1}{r}<\min\{1, \frac{2}{N}, \frac{1}{2}(1-\frac{1}{r})\}$ and $0<\gamma<1.$ Consider functions $\Phi(\sigma), \Psi(\sigma)>0$ with \begin{eqnarray}\label{mts2a} \inf_{(0,\Sigma)} \Phi(\sigma):=m_{\Phi}>0\quad\mbox{and}\quad \sup_{(0,\Sigma)} \Psi(\sigma):=M_{\Psi}<+\infty, \end{eqnarray} then problem \eqref{nstregm1k}-\eqref{nstregm3k} has a global-in-time solution. \label{thm:1.3} \end{theorem} \begin{proof} We assume $\frac{p-1}{r}<\min\{ 1, \frac{2}{N}, \frac{1}{2}(1-\frac{1}{r})\}$ and $0<\gamma<1$. We also assume $N\geq 2$ since the complementary case $N=1$ is simpler. Note that for $p>1$, we have $\frac{p-1}{r}<\frac{2}{N}$ and $r>p$. Therefore we have $ 0<\frac{1}{r-p+1}<\min\left\{ 1, \frac{1}{p-1}\cdot\frac{2}{N-2}, \frac{1}{1-p+r\gamma}\right\},$ since $0<\gamma<1$. Choosing $\frac{1}{r-p+1}<\alpha<\min\{ 1, \frac{1}{p-1}\cdot\frac{2}{N-2}, \frac{1}{1-p+r\gamma} \}$, we derive $ \max\left\{ \frac{N-2}{N}, \frac{1}{\alpha r}\right\}<\frac{1}{-\alpha+1+\alpha p}, $ and then we can find $\beta>0$ such that \begin{equation} \max\left\{ \frac{N-2}{N}, \frac{1}{\alpha r}\right\} <\frac{1}{\beta}<\frac{1}{-\alpha+1+\alpha p}<2, \label{eqn:15} \end{equation} which satisfies \begin{equation} \frac{\beta}{\alpha r}<1<\frac{\beta}{-\alpha+1+\alpha p}. 
\label{eqn:16} \end{equation} Recalling that $\chi=u^{\frac{1}{\alpha}}$ satisfies (\ref{nob1})-(\ref{nob3}) with $\displaystyle{\rlap{$-$}\!\int_{\Omega_0} \frac{u^{p-1+\frac{1}{\alpha}}}{\left( \rlap{$-$}\!\int_{\Omega_0} u^r\right)^\gamma}= \frac{\rlap{$-$}\!\int_{\Omega_0} \chi^{-\alpha+1+\alpha p}}{\left( \rlap{$-$}\!\int_{\Omega_0} \chi^{\alpha r}\right)^\gamma}},$ then by virtue of (\ref{eqn:16}) \[ \rlap{$-$}\!\int_{\Omega_0} \chi^{-\alpha+1+\alpha p}\leq \left( \rlap{$-$}\!\int_{\Omega_0} \chi^\beta\right)^{\frac{-\alpha+1+\alpha p}{\beta}}\quad\mbox{and}\quad \left( \rlap{$-$}\!\int_{\Omega_0} \chi^{\alpha r}\right)^\gamma \geq \left( \rlap{$-$}\!\int_{\Omega_0} \chi^\beta\right)^{\frac{\alpha r}{\beta}\cdot\gamma}, \] thus we obtain the following estimate \begin{equation} \frac{\rlap{$-$}\!\int_{\Omega_0} \chi^{-\alpha+1+\alpha p}}{\left( \rlap{$-$}\!\int_{\Omega_0}\chi^{\alpha r}\right)^\gamma}\leq \left( \rlap{$-$}\!\int_{\Omega_0} \chi^\beta\right)^{\frac{-\alpha+1+\alpha p-\alpha r\gamma}{\beta}}=\Vert \chi^{\frac{1}{2}}\Vert_{2\beta}^{2(1-\lambda)}, \label{eqn:3.3} \end{equation} with $0<\lambda=\alpha(1-p+r\gamma)<1$, recalling that $\frac{p-1}{r}<\gamma$ and $\alpha<\frac{1}{1-p+r\gamma}$. 
Averaging (\ref{nob1}) over $\Omega_0$ leads to the following, \begin{equation}\label{ob4b} \alpha\frac{d}{d\sigma}\rlap{$-$}\!\int_{\Omega_0} \chi+4\alpha(1-\alpha)\rlap{$-$}\!\int_{\Omega_0}\vert \nabla \chi^{\frac{1}{2}}\vert^2+\Phi(\sigma)\rlap{$-$}\!\int_{\Omega_0} \chi=\Psi(\sigma)\frac{\rlap{$-$}\!\int_{\Omega_0} \chi^{-\alpha+1+\alpha p}}{\left( \rlap{$-$}\!\int_{\Omega_0}\chi^{\alpha r}\right)^\gamma}, \end{equation} and hence \[ \frac{d}{d\sigma}\rlap{$-$}\!\int_{\Omega_0} \chi+4(1-\alpha)\rlap{$-$}\!\int_{\Omega_0}\vert\nabla \chi^{\frac{1}{2}}\vert^2+\frac{m_{\Phi}}{\alpha}\rlap{$-$}\!\int_{\Omega_0} \chi\leq \frac{M_{\Psi}}{\alpha}\Vert \chi^{\frac{1}{2}}\Vert_{2\beta}^{2(1-\lambda)}, \] by virtue of \eqref{mts2a}, \eqref{eqn:15} and \eqref{eqn:3.3}. Now since $1<2\beta<\frac{2N}{N-2}$ holds due to (\ref{eqn:15}), applying first Sobolev's and then Young's inequalities we derive \[ \frac{d}{d\sigma}\rlap{$-$}\!\int_{\Omega_0}\chi+c\Vert \chi^{\frac{1}{2}}\Vert_{H^1}^2+\frac{m_{\Phi}}{\alpha}\rlap{$-$}\!\int_{\Omega_0} \chi\leq C, \] which implies $\rlap{$-$}\!\int_{\Omega_0} \chi\leq C.$ Since $\frac{1}{\alpha}$ can be chosen arbitrarily close to $r-p+1$, the above estimate gives \begin{equation} \Vert u(\cdot,\sigma)\Vert_q\leq C_q, \quad\mbox{for any}\quad 1\leq q<r-p+1, \label{eqn:15h} \end{equation} recalling that $\chi=u^{\frac{1}{\alpha}}.$ Note that $\frac{p-1}{r}<\frac{1}{2}(1-\frac{1}{r})$ implies $\frac{r-p+1}{p}>1$ and thus we obtain global-in-time existence by using the same bootstrap argument as in \cite[Theorem 3.4]{KS16}. \end{proof} \begin{remark} Note that condition \eqref{mts2a} is satisfied in the case of an exponentially shrinking domain as indicated in Remark \ref{aal5}, see in particular \eqref{aal3} and \eqref{aal4}. 
\end{remark} \section{Turing Instability and Pattern Formation}\label{tipf} In the current section, we present a Turing-instability result for problem \eqref{nstregm1k}-\eqref{nstregm3k}, restricting ourselves to the radial case $\Omega_0=B_1(0):=\{x\in \mathbb{R}^N \mid \vert x\vert<1\}.$ Then the solution $u$ of \eqref{nstregm1k}-\eqref{nstregm3k} is radially symmetric, i.e. $u(x,\sigma)=u(R,\sigma)$ for $R=|x|,$ and thus it satisfies the following \begin{eqnarray} && u_{\sigma}=\Delta_R u-\Phi(\sigma)u+\displaystyle\frac{\Psi(\sigma)u^p}{\left(\rlap{$-$}\!\int_{\Omega_0} u^r\right)^{\gamma}}, \quad R\in (0,1),\; \sigma\in (0,\Sigma), \label{rnstregm1}\\ && u_R(0,\sigma)=u_R(1,\sigma)=0, \quad \sigma\in (0,\Sigma), \label{rnstregm2}\\ && u(R,0)=u_0(R),\quad 0<R<1, \label{rnstregm3} \end{eqnarray} where $\Delta_R u:=u_{RR}+\frac{N-1}{R}u_R.$ It can be seen, see also \cite{KS16, KS18}, that under the Turing condition \eqref{tc}, the spatially homogeneous solution of \eqref{nstregm1k}-\eqref{nstregm3k}, i.e. the solution of the problem $\frac{du}{d\sigma}=-\Phi(\sigma)u+\Psi(\sigma)u^{p-r\gamma}, \quad \left. u\right\vert_{\sigma=0}=\bar{u}_0>0, $ never exhibits blow-up, as long as $\Phi(\sigma), \Psi(\sigma)$ are both bounded, since the nonlinearity $f(u)=u^{p-r\gamma}$ is sub-linear. On the other hand, considering spatially inhomogeneous solutions of \eqref{nstregm1k}-\eqref{nstregm3k} we will show below, see Theorem \ref{thm:6.1}, that a diffusion-driven instability phenomenon occurs when spiky initial data are considered. Indeed, next we consider an initial datum of the form, \cite{hy95}, \begin{equation} u_0(R)=\lambda\psi_\delta(R), \label{eqn:6.1} \end{equation} with $0<\lambda\ll 1$ and \begin{equation}\label{id} \psi_\delta(R)=\begin{cases} R^{-a},& \delta\leq R\leq 1, \\ \delta^{-a}\left(1+\frac{a}{2}\right)-\frac{a}{2}\delta^{-(a+2)}R^2, & 0\leq R<\delta, \end{cases} \end{equation} where $a=\frac{2}{p-1}$ and $0<\delta<1$. 
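The profile \eqref{id} can be examined numerically. The sketch below (plain Python; the choices $N=3$, $p=4$, $\delta=0.1$, for which $p>\frac{N}{N-2}$, are illustrative assumptions) checks by finite differences that $\psi_\delta$ is $C^1$ at the matching radius $R=\delta$ and that the differential inequality $\Delta_R \psi_\delta\geq -Na\psi_\delta^p$ holds at sample radii on the smooth pieces.

```python
# Numerical spot-check of the spike profile psi_delta from (id).
N, p, delta = 3, 4.0, 0.1
a = 2.0 / (p - 1.0)

def psi(R):
    if R >= delta:
        return R ** (-a)
    return delta ** (-a) * (1.0 + a / 2.0) - (a / 2.0) * delta ** (-(a + 2.0)) * R * R

h = 1e-5
# C^1 matching: the one-sided derivatives agree at R = delta up to O(h)
dl = (psi(delta) - psi(delta - h)) / h
dr = (psi(delta + h) - psi(delta)) / h

def lap_radial(R):
    # central-difference approximation of Delta_R psi = psi'' + (N-1) psi'/R
    d2 = (psi(R + h) - 2.0 * psi(R) + psi(R - h)) / h ** 2
    d1 = (psi(R + h) - psi(R - h)) / (2.0 * h)
    return d2 + (N - 1) * d1 / R

ok = all(lap_radial(R) >= -N * a * psi(R) ** p - 1e-4
         for R in [0.02, 0.05, 0.09, 0.1, 0.12, 0.3, 0.6, 0.9])
```

Note that the inequality is attained with equality in the limit $R\uparrow\delta$, so the spike is in this sense a borderline profile.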
It is easily seen, \cite{KS16, KS18}, that the following lemma holds for $\psi_{\delta}$. \begin{lemma}\label{kkl1} For the function $\psi_{\delta}$ defined by \eqref{id} we have: \begin{enumerate} \item[(i)] For any $0<\delta<1,$ there holds in a weak sense \begin{equation} \displaystyle{ \Delta_R \psi_\delta\geq -Na\psi_\delta^p}. \label{eqn:6.3} \end{equation} \item[(ii)] If $m>0$ and $N>ma$, we have \begin{equation}\label{ine} \rlap{$-$}\!\int_{\Omega_0} \psi_\delta^m=\frac{N}{N-ma}+O\left(\delta^{N-ma}\right), \quad \delta\downarrow 0. \end{equation} \end{enumerate} \end{lemma} Now, if we consider \begin{equation} \mu>1+r\gamma \label{eqn:6.15} \end{equation} and set $\alpha_1=\sup_{0<\delta<1}\frac{1}{\bar{\psi}_\delta^\mu}\rlap{$-$}\!\int_{\Omega_0}\psi_\delta^p$ and $\alpha_2=\inf_{0<\delta<1}\frac{1}{\bar{\psi}_\delta^\mu}\rlap{$-$}\!\int_{\Omega_0}\psi_\delta^p$, then since $p>\frac{N}{N-2}$, relation (\ref{ine}) is applicable for $m=p$ and $m=1$, and thus owing to \eqref{eqn:6.15} we obtain \begin{eqnarray}\label{at1} 0<\alpha_2\leq \alpha_1<\infty. \end{eqnarray} Furthermore, it follows that \begin{equation} d\equiv\inf_{0<\delta<1}\left( \frac{1}{2\alpha_1}\right)^{\frac{r\gamma}{p}}\left( \frac{1}{2\bar{\psi}_\delta}\right)^{\frac{r\gamma}{p}\mu}>0, \label{eqn:6.7} \end{equation} and for the initial datum $u_0$ defined by \eqref{eqn:6.1} the following lemma also holds, see \cite{KS16, KS18}. \begin{lemma} If $p>\frac{N}{N-2}$ and $\frac{p-1}{r}<\gamma$, there exists $\lambda_0=\lambda_0(d)>0$ such that for any $0<\lambda\leq \lambda_0$ there holds \begin{equation} \Delta_R u_0+d\lambda^{-r\gamma}u_0^p\geq 2u_0^p. \label{eqn:6.4} \end{equation} \label{lem:6.2} \end{lemma} Hereafter, we fix $0<\lambda\leq \lambda_0=\lambda_0(d)$ so that (\ref{eqn:6.4}) is satisfied. Given $0<\delta<1$, let $\Sigma_\delta>0$ be the maximal existence time of the solution to (\ref{rnstregm1})-(\ref{rnstregm3}) with initial data of the form \eqref{eqn:6.1}-\eqref{id}. 
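As an aside, the averaged integral in \eqref{ine} can be checked numerically: writing the average over the unit ball radially as $N\int_0^1 \psi_\delta^m R^{N-1}\,dR$, a simple quadrature reproduces the leading constant $\frac{N}{N-ma}$ (the values $N=5$, $p=3$, $m=1$, $\delta=10^{-3}$ below are illustrative assumptions, not taken from the paper's experiments):

```python
# Numerical check of Lemma kkl1(ii):
#   avg over B_1 of psi_delta^m = N/(N - m*a) + O(delta^(N - m*a)),
# written radially as N * int_0^1 psi_delta(R)^m R^(N-1) dR.
N, p, m = 5, 3.0, 1.0
a = 2.0 / (p - 1.0)          # a = 1 here, so N > m*a holds
delta = 1e-3

def psi(R):
    if R >= delta:
        return R ** (-a)
    return delta ** (-a) * (1 + a / 2) - (a / 2) * delta ** (-(a + 2)) * R ** 2

# composite midpoint rule for N * int_0^1 psi^m R^(N-1) dR
K = 200_000
h = 1.0 / K
avg = N * h * sum(psi((k + 0.5) * h) ** m * ((k + 0.5) * h) ** (N - 1)
                  for k in range(K))

exact = N / (N - m * a)      # = 5/4 for these sample values
assert abs(avg - exact) < 1e-3   # remainder is O(delta^(N-ma)) + quadrature error
```

For these parameters the integrand is $R^3$ away from the tiny cap $[0,\delta)$, so the computed average is within quadrature error of $5/4$, consistent with \eqref{ine}.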
Next, we introduce the new variable $z=e^{\int^{\sigma}\Phi(s)\,ds }u,$ so that the linear dissipative term $-\Phi(\sigma) u$ is eliminated; then $z$ satisfies \begin{eqnarray} &&z_\sigma= \Delta_R z +K(\sigma)z^p,\quad R\in (0,1),\; \sigma\in (0,\Sigma_\delta),\label{tbsd3}\\ &&z_R(0,\sigma)=z_R(1,\sigma)=0,\quad \sigma\in (0,\Sigma_\delta),\label{tbsd3a}\\ && z(R,0)=u_0(R),\quad 0<R<1, \label{tbsd3b} \end{eqnarray} where \begin{equation} \label{nnt} K(\sigma)=\frac{\Psi(\sigma)e^{(1+r\gamma-p)\int^\sigma \Phi(s)\,ds}}{\Big(\displaystyle\rlap{$-$}\!\int_{\Omega_0} z^r \Big)^{\gamma}}. \end{equation} It is clear that, as long as $\Phi(\sigma)$ is bounded, $u$ blows up in finite time if and only if $z$ does. Assuming now that both $\Phi(\sigma)$ and $\Psi(\sigma)$ are positive and bounded, which is the case for the evolution provided by $\psi(\sigma)$ satisfying \eqref{diq1} or for an exponentially shrinking domain as indicated in Remarks \ref{rem1} and \ref{aal5}, then by virtue of (\ref{eqn:1.17}) we have \begin{equation} 0< K(\sigma)=\frac{\Psi(\sigma)e^{(1-p)\int^\sigma \Phi(s)\,ds}}{\Big(\displaystyle\rlap{$-$}\!\int_{\Omega_0} u^r \Big)^{\gamma}}\leq C<\infty, \label{eqn:6.8} \end{equation} thus averaging (\ref{tbsd3}) entails \begin{equation} \frac{d\bar{z}}{d\sigma}=K(\sigma)\rlap{$-$}\!\int_{\Omega_0} z^p, \label{eqn:6.8h} \end{equation} and thus, by the positivity of $K(\sigma)$ in \eqref{eqn:6.8}, equation \eqref{eqn:6.8h} yields \begin{equation} \bar{z}(\sigma)\geq \bar{z}(0)=\bar{u}_0:=\rlap{$-$}\!\int_{\Omega_0} u_0. \label{eqn:6.9} \end{equation} Henceforth, the positivity and boundedness of $\Phi(\sigma), \Psi(\sigma)$ as well as the Turing condition \eqref{tc} are imposed. Moving towards the proof of Theorem \ref{thm:6.1}, we first need to establish some auxiliary results; we begin with a useful estimate of $z$ that will be used frequently in the sequel. 
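As a brief aside, the effect of the substitution $z=e^{\int^{\sigma}\Phi(s)\,ds}u$ can be illustrated on a spatially homogeneous toy problem. The sketch below (forward Euler; the choices $\Phi(\sigma)=1+\sigma$, $p=2$, and a plain power nonlinearity with the nonlocal average suppressed are assumptions made only for illustration) checks that integrating the original equation and the drift-free equation gives the same $z$:

```python
import math

# Toy check of the substitution z = exp(int_0^s Phi) * u: if
#   u' = -Phi(s) * u + u**p,
# then z satisfies the drift-free equation
#   z' = exp((1 - p) * int_0^s Phi) * z**p.
# Phi(s) = 1 + s and p = 2 are illustrative assumptions.
p = 2.0
Phi = lambda s: 1.0 + s
IPhi = lambda s: s + s * s / 2.0      # int_0^s Phi(t) dt, in closed form

dt, steps = 1e-4, 5000                # integrate on [0, 0.5]
u, z = 0.5, 0.5                       # z(0) = u(0)
for k in range(steps):
    s = k * dt
    u += dt * (-Phi(s) * u + u ** p)
    z += dt * (math.exp((1.0 - p) * IPhi(s)) * z ** p)

# Both routes give the same z(0.5), up to the O(dt) Euler error
assert abs(z - math.exp(IPhi(steps * dt)) * u) < 1e-3
```

The same bookkeeping with the extra factor $e^{(1+r\gamma-p)\int^\sigma\Phi}$ in \eqref{nnt} accounts for the nonlocal denominator $\big(\rlap{$-$}\!\int_{\Omega_0}z^r\big)^\gamma$.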
\begin{lemma}\label{lem3} The solution $z$ of problem \eqref{tbsd3}-\eqref{tbsd3b} satisfies \begin{eqnarray} && R^N z(R,\sigma)\leq \bar{z}(\sigma) \quad\mbox{in}\quad(0,1)\times (0,\Sigma_{\delta}), \label{eqn:6.10}\\ && \text{and} \notag \\ && z_R\left(\frac{3}{4},\sigma\right)\leq -c, \ \ 0\leq \sigma<\Sigma_\delta, \label{eqn:6.11} \end{eqnarray} for any $0<\delta<1$ and some positive constant $c.$ \end{lemma} \begin{proof} Let us define $w=R^{N-1}z_{R}$; then it is easily checked that $w$ satisfies $\mathcal{H}[w]=0, \quad\mbox{for}\quad (R,\sigma)\in (0,1)\times (0,\Sigma_{\delta})$, with $w(0,\sigma)=w(1,\sigma)=0$, for $\sigma\in(0,\Sigma_{\delta})$, and $w(R,0)<0$, for $0<R<1$, where $\mathcal{H}[w]\equiv w_\sigma-w_{RR}+\frac{N-1}{R}w_R-p K(\sigma)z^{p-1}w$. Owing to the maximum principle, and recalling that $K(\sigma)$ is bounded by \eqref{eqn:6.8}, we get that $w\leq 0$, which implies $z_{R}\leq 0$ in $(0,1)\times (0,\Sigma_{\delta})$. Accordingly, inequality (\ref{eqn:6.10}) follows since \begin{eqnarray*} R^N z(R,\sigma) & = & z(R,\sigma)\int_0^R N s^{N-1}ds\leq \int_0^R Nz(s,\sigma)s^{N-1}\,ds \\ & \leq & \int_0^1Nz(s,\sigma)s^{N-1}\,ds=\rlap{$-$}\!\int_{\Omega_0} z=\bar{z}(\sigma). \end{eqnarray*} Now, since $w\leq 0$ and $K(\sigma)\geq 0$ by \eqref{eqn:6.8}, we have \[ w_\sigma-w_{RR}+\frac{N-1}{R}w_R=pK(\sigma)z^{p-1}w\leq 0 \quad\mbox{in}\quad (0,1)\times (0,\Sigma_{\delta}),\] with $w\left(\frac{1}{2},\sigma\right)\leq0,\quad w\left(1,\sigma\right)\leq 0$, for $\sigma\in (0,\Sigma_{\delta})$, and $w(R,0)=R^{N-1}u'_{0}(R)\leq -c$, for $\frac{1}{2}<R<1$, which implies $w\leq -c$ in $(\frac{1}{2},1)\times (0, \Sigma_\delta)$, and thus (\ref{eqn:6.11}) holds. 
\end{proof} \begin{lemma}\label{kkl2} Let $\varepsilon>0$ and $1<q<p$. Then $\vartheta$ defined as \begin{equation}\label{ik1} \vartheta:=R^{N-1}z_R+\varepsilon\cdot\frac{R^Nz^q}{\bar{z}^{\gamma+1}}, \end{equation} satisfies \begin{eqnarray} \qquad\mathcal{H}[\vartheta]\leq -\frac{2q\varepsilon}{\bar{z}^{\gamma+1}}z^{q-1}\vartheta &&+\frac{\varepsilon R^Nz^q}{\bar{z}^{2(\gamma+1)}}\left\{2q\varepsilon z^{q-1}\right.\nonumber\\ &&\left. -m_{\Psi}(\gamma+1)\bar{z}^{\gamma-r\gamma}\rlap{$-$}\!\int_{\Omega_0} z^p-m_{\Psi}(p-q)z^{p-1}\bar{z}^{\gamma+1-r\gamma}\right\}, \label{eqn:6.13h} \end{eqnarray} for $(R,\sigma)\in (0,1)\times (0,\Sigma_{\delta}),$ where $m_{\Psi}=\inf_{\sigma\in (0,\Sigma_\delta)}\Psi(\sigma)>0.$ \label{lem:6.4} \end{lemma} \begin{proof} It is readily checked that $\mathcal{H}\left[R^{N-1} z_{R}\right]=0,$ while by straightforward calculations we derive \begin{eqnarray}\label{tbsd6} \mathcal{H}\left[\varepsilon R^{N} \frac{z^q}{\overline{z}^{\gamma+1}}\right]&&=\frac{2q(N-1)\varepsilon R^{N-1}z^{q-1}}{\overline{z}^{\gamma+1}}z_{R}+\frac{q\varepsilon R^N z^{p-1+q}}{\overline{z}^{\gamma+1}} K(\sigma)-\frac{(\gamma+1) \varepsilon R^N z^q}{\overline{z}^{\gamma+2}}\,K(\sigma)\,\rlap{$-$}\!\int_{\Omega_0} z^p\,dx\nonumber\\ &&-\frac{2 q N\varepsilon R^{N-1} z^{q-1}}{\overline{z}^{\gamma+1}}z_R-\frac{q(q-1)\varepsilon R^N z^{q-2}}{\overline{z}^{\gamma+1}}z_R^2-\frac{p\varepsilon R^N z^{p-1+q}}{\overline{z}^{\gamma+1}} K(\sigma)\nonumber\\ &&\leq-\frac{2 q \varepsilon z^{q-1}}{\overline{z}^{\gamma+1}}\vartheta+\frac{\varepsilon R^N z^q}{\overline{z}^{2(\gamma+1)}}\left[2q\varepsilon z^{q-1}-\frac{\Psi(\sigma)(\gamma+1) \overline{z}^\gamma e^{(1+r\gamma-p)\int^\sigma\Phi(s)\,ds}}{\left(\rlap{$-$}\!\int_{\Omega_0} z^r\,dx\right)^\gamma}\rlap{$-$}\!\int_{\Omega_0} z^p\,dx\right.\notag \\ && \left. 
\quad - \frac{\Psi(\sigma)(p-q) z^{p-1} \overline{z}^{\gamma+1}}{\left(\rlap{$-$}\!\int_{\Omega_0} z^r\,dx\right)^\gamma} e^{(1+r\gamma-p)\int^\sigma \Phi(s)\,ds} \right]. \end{eqnarray} Then, by virtue of H\"{o}lder's inequality and since $1\leq r\leq p,$ \eqref{tbsd6} entails the desired estimate \eqref{eqn:6.13h}. \end{proof} Next, note that when $p>\frac{N}{N-2}$, there is $1<q<p$ such that $N>\frac{2p}{q-1}$, and thus the following quantities \begin{equation} A_1\equiv\sup_{0<\delta<1}\frac{1}{\bar{u}_0^{\mu}} \rlap{$-$}\!\int_{\Omega_0} u_0^p=\lambda^{p-\mu}\alpha_1\quad\mbox{and}\quad A_2\equiv\inf_{0<\delta<1}\frac{1}{\bar{u}_0^{\mu}} \rlap{$-$}\!\int_{\Omega_0} u_0^p=\lambda^{p-\mu}\alpha_2, \label{eqn:6.14} \end{equation} are finite due to \eqref{at1}, recalling that $u_0=\lambda\psi_\delta$. An essential ingredient for the proof of Theorem \ref{thm:6.1} is the following key estimate of the $L^p$-norm of $z$ in terms of $A_1$ and $A_2.$ \begin{proposition}\label{lem4} There exist $0<\delta_0<1$ and $0<\sigma_0\leq 1$, independent of $0<\delta\leq \delta_0,$ such that the estimate \begin{equation}\label{lue} \frac{1}{2} A_2\overline{z}^{\mu}\leq \rlap{$-$}\!\int_{\Omega_0} z^p\,dx\leq 2 A_1 \overline{z}^{\mu}, \end{equation} is satisfied for any $0<\sigma<\min\{\sigma_0,\Sigma_{\delta}\}.$ \end{proposition} The proof of Proposition \ref{lem4} requires some further auxiliary results, provided below. Let us define $0<\sigma_0(\delta)<\Sigma_\delta$ to be the maximal time for which inequality (\ref{lue}) is valid in $0<\sigma<\sigma_0(\delta);$ that is, \begin{equation} \frac{1}{2}A_2\bar{z}^\mu\leq \rlap{$-$}\!\int_{\Omega_0} z^p\leq 2A_1\bar{z}^\mu, \quad 0<\sigma<\sigma_0(\delta). \label{eqn:6.18} \end{equation} We only consider the case $\sigma_0(\delta)\leq 1,$ since otherwise there is nothing to prove. Then the following lemma holds true. 
\begin{lemma}\label{nkl2} There exists $0<\sigma_1<1$ such that \begin{equation} \bar{z}(\sigma)\leq 2\bar{u}_0, \quad 0<\sigma<\min\{ \sigma_1, \sigma_0(\delta)\}, \label{eqn:6.18h} \end{equation} for any $0<\delta<1$. \label{lem:6.6} \end{lemma} \begin{proof} Since $r\geq 1$ and $\sigma_0(\delta)\leq 1$, by virtue of (\ref{nnt}) and (\ref{eqn:6.8h}) we have $\frac{d\bar{z}}{d\sigma}\leq 2A_1 M_{\Psi} e^{(1+r\gamma-p)M_{\Phi}}\bar{z}^{\mu-r\gamma}$, for $0<\sigma<\sigma_0(\delta)$, recalling that $M_{\Phi}=\sup_{\sigma\in(0,\Sigma_{\delta})}\Phi(\sigma)<+\infty$ and $M_{\Psi}=\sup_{\sigma\in(0,\Sigma_{\delta})}\Psi(\sigma)<+\infty.$ Setting $C_1=2A_1 M_{\Psi} e^{(1+r\gamma-p)M_{\Phi}}$ and taking into account (\ref{eqn:6.15}), we then derive $\overline{z}(\sigma)\leq \left[\bar{u}_0^{1+r\gamma-\mu}-C_1(\mu-r\gamma-1)\sigma\right]^{-\frac{1}{\mu-r\gamma-1}}. $ Accordingly, (\ref{eqn:6.18h}) holds for any $0<\sigma<\min\{ \sigma_1, \sigma_0(\delta)\},$ where $\sigma_1$ is independent of $0<\delta<1$ and can be estimated as $ \sigma_1\leq\min\left\{\frac{1-2^{1+r\gamma-\mu}}{C_1(\mu-r\gamma-1)}\overline{u}_0^{1+r\gamma-\mu},1\right\}. $ \end{proof} Another fruitful estimate is provided by the next lemma. \begin{lemma}\label{nkl3} There exist $0<\delta_0<1$ and $0<R_0<\frac{3}{4}$ such that for any $0<\delta\leq \delta_0$ the following estimate is valid \begin{equation} \frac{1}{\vert\Omega_0\vert}\int_{B_{R_0}(0)}z^p\leq \frac{A_2}{8}\bar{z}^\mu, \quad\mbox{for}\quad 0<\sigma<\min\{ \sigma_1, \sigma_0(\delta)\}, \label{eqn:6.19} \end{equation} where $B_{R_0}(0)=\{x\in \mathbb{R}^N \mid \vert x\vert<R_0\}.$ \end{lemma} \begin{proof} By virtue of (\ref{eqn:6.9}) and (\ref{eqn:6.18h}) it follows that \begin{equation} \bar{u}_0\leq \bar{z}(\sigma)\leq 2\bar{u}_0, \quad\mbox{for}\quad 0<\sigma<\min\{\sigma_1, \sigma_0(\delta)\}. 
\label{eqn:6.20} \end{equation} Furthermore, we note that the growth of $\rlap{$-$}\!\int_{\Omega_0} z^p$ is controlled by the estimate (\ref{lue}) for $0<\sigma<\min\{ \sigma_1, \sigma_0(\delta)\},$ and since $p>q$, Young's inequality ensures that the second term of the right-hand side in (\ref{eqn:6.13h}) is negative for $0<\sigma<\min\{ \sigma_1, \sigma_0(\delta)\}$, uniformly in $0<\delta<1$, provided that $0<\varepsilon\leq \varepsilon_0$ for some $0<\varepsilon_0\ll 1.$ Therefore \begin{equation} \mathcal{H}[\vartheta]\leq -\frac{2q\varepsilon z^{q-1}}{\bar{z}^{\gamma+1}} \vartheta \quad\mbox{in}\quad (0,1)\times (0,\min\{ \sigma_1, \sigma_0(\delta)\}). \label{eqn:6.21} \end{equation} Moreover (\ref{eqn:6.10}) and (\ref{eqn:6.20}) imply \begin{eqnarray*} \vartheta(R,\sigma) & = & R^{N-1}z_R+\varepsilon\cdot\frac{R^Nz^q}{\bar{z}^{\gamma+1}} \leq R^{N-1}z_R+\varepsilon\cdot R^{N(1-q)}\bar{z}^{q-\gamma-1} \\ & \leq & R^{N-1}z_R+C\cdot\varepsilon R^{N(1-q)}\quad\mbox{in}\quad (0,1)\times (0,\min\{ \sigma_1, \sigma_0(\delta)\}), \end{eqnarray*} which, for $0<\varepsilon\leq \varepsilon_0,$ entails \begin{equation} \vartheta\left(\frac{3}{4},\sigma\right)<0, \quad 0<\sigma<\min\{ \sigma_1, \sigma_0(\delta)\}, \label{eqn:6.22} \end{equation} owing to (\ref{eqn:6.11}) and provided that $0<\varepsilon_0\ll 1.$ Additionally, \eqref{ik1} for $\sigma=0$ gives \begin{equation} \vartheta(R,0) = R^{N-1}\left(\lambda \psi_{\delta}'(R)+\varepsilon\lambda^{q-\gamma-1}R\cdot\frac{\psi_\delta^q}{\bar{\psi}_\delta^{\gamma+1}}\right). 
\label{eqn:6.23} \end{equation} For $0\leq R<\delta$ and $\varepsilon$ small enough and independent of $0<\delta<\delta_0,$ the right-hand side of (\ref{eqn:6.23}) can be estimated as \begin{eqnarray*} R^{N}\lambda\left( -a\delta^{-a-2}+\varepsilon\lambda^{q-\gamma-2}\cdot \frac{\psi_\delta^q}{\bar{\psi}_\delta^{\gamma+1}} \right)\lesssim R^{N}\lambda\left( -a\delta^{-a-2}+\varepsilon\lambda^{q-\gamma-2}\cdot \delta^{-aq} \right)\lesssim 0, \end{eqnarray*} since by virtue of (\ref{id}) and (\ref{ine}) for $m=1,$ there holds $\displaystyle\frac{\psi_\delta^q}{\bar{\psi}_\delta^{\gamma+1}}\lesssim \delta^{-aq},\; \delta\downarrow 0,$ uniformly in $0\leq R<\delta,$ taking also into account that $a+2=ap>aq.$ On the other hand, for $\delta\leq R\leq 1$ and by using (\ref{ine}) for $m=1$ we obtain \begin{equation} \vartheta(R,0)=R^{N-1}\lambda\left(-a R^{-a-1}+\varepsilon\lambda^{q-\gamma-2}\frac{R^{1-aq}}{\bar{\psi}_\delta^{\gamma+1}}\right), \label{eqn:6.24} \end{equation} which, since $a+2=ap>aq$ implies $-a-1<-aq+1$, finally yields $ \vartheta(R,0)<0, \quad \delta\leq R\leq \frac{3}{4},$ for any $0<\delta\leq \delta_0$ and $0<\varepsilon\leq \varepsilon_0$, provided $\varepsilon_0$ is chosen sufficiently small. Accordingly, we derive \begin{equation} \vartheta(R,0)<0, \quad 0\leq R\leq \frac{3}{4}, \label{eqn:6.25} \end{equation} for any $0<\delta\leq \delta_0$ and $0<\varepsilon\leq \varepsilon_0$, provided $0<\varepsilon_0\ll 1.$ Combining (\ref{eqn:6.21}), (\ref{eqn:6.22}) and (\ref{eqn:6.25}), we deduce $ \vartheta(R,\sigma)=R^{N-1}z_R+\varepsilon\cdot\frac{R^Nz^q}{\bar{z}^{\gamma+1}}\leq 0$ in $(0,\frac{3}{4})\times (0,\min\{ \sigma_1, \sigma_0(\delta)\})$, and finally, upon integration in $R$, \begin{equation} z(R,\sigma)\leq \left( \frac{\varepsilon}{2}(q-1)\right)^{-\frac{1}{q-1}}\cdot R^{-\frac{2}{q-1}}\cdot\bar{z}^{\frac{\gamma+1}{q-1}}(\sigma) \quad \mbox{in $(0,\frac{3}{4})\times (0,\min\{ \sigma_1, \sigma_0(\delta)\})$}. 
\label{eqn:6.26} \end{equation} Note that owing to $N>\frac{2p}{q-1}$ there holds $-\frac{2}{q-1}\cdot p+N-1>-1$, and thus (\ref{eqn:6.19}) is valid for some $0<R_0<\frac{3}{4}.$ \end{proof} \begin{remark}\label{nkl1a} Estimate \eqref{eqn:6.26} entails that $z(R,\sigma)$ can only blow up at the origin $R=0;$ that is, only single-point blow-up is feasible. \end{remark} Next we prove the key estimate \eqref{lue}, using essentially Lemmas \ref{nkl2} and \ref{nkl3}. \begin{proof}[Proof of Proposition \ref{lem4}] By virtue of (\ref{eqn:6.15}) and since $\frac{p-1}{r}<\gamma$, there holds $\ell=\frac{\mu}{p}>1.$ We can easily check, \cite{KS16,KS18}, that $\theta=\displaystyle{\frac{z}{\overline{z}^{\ell}}}$ satisfies \[\theta_{\sigma}=\Delta_R \theta+ \Psi(\sigma)e^{(r\gamma+1-p)\int^\sigma \Phi(s)\,ds}\left[\frac{z^p}{\overline{z}^{\ell}\left(\rlap{$-$}\!\int_{\Omega_0} z^r\right)^{\gamma}}-\frac{\ell z\rlap{$-$}\!\int_{\Omega_0} z^p}{\overline{z}^{\ell+1}\left(\rlap{$-$}\!\int_{\Omega_0} z^r\right)^{\gamma}} \right], \] in $\Omega_0\times(0, \min\{ \sigma_0, \Sigma_\delta\})$, with $\frac{\partial\theta}{\partial\nu}=0$, on $\partial\Omega_0\times(0, \min\{ \sigma_0, \Sigma_\delta\})$, and $\theta(x,0)=\frac{z(x,0)}{\bar{z}_0^{\ell}}$, on $\Omega_0$. In conjunction with (\ref{eqn:1.17}), (\ref{eqn:6.9}), (\ref{eqn:6.10}), (\ref{eqn:6.18}), and (\ref{eqn:6.18h}), and using the fact that $\Phi(\sigma)$ and $\Psi(\sigma)$ are both bounded and positive, we deduce that \begin{eqnarray}\label{kik1} \left\Vert \theta,\ \frac{z^p}{\overline{z}^{\ell}\left(\rlap{$-$}\!\int_{\Omega_0} z^r\right)^{\gamma}}, \ \frac{\ell z\rlap{$-$}\!\int_{\Omega_0} z^p}{\overline{z}^{\ell+1}\left(\rlap{$-$}\!\int_{\Omega_0} z^r\right)^{\gamma}}\right\Vert_{L^\infty((\Omega_0\setminus B_{R_0}(0))\times (0,\min\{ \sigma_1, \sigma_0(\delta)\}))} <+\infty, \end{eqnarray} uniformly in $0<\delta\leq \delta_0.$ 
Estimate \eqref{kik1}, by virtue of standard parabolic regularity, see the De Giorgi-Nash-Moser estimates in \cite[pages 144-145]{Lie96}, entails the existence of $0<\sigma_2\leq \sigma_1$, independent of $0<\delta\leq \delta_0$, such that $ \sup_{0< \sigma<\min\{\sigma_2, \sigma_0(\delta)\}}\left\Vert \theta^p(\cdot,\sigma)-\theta^p(\cdot,0)\right\Vert_{L^1(\Omega_0\setminus B_{R_0}(0))}\leq \frac{A_2}{8}\vert\Omega_0\vert, $ which yields \begin{equation} \left\vert \frac{1}{\vert\Omega_0\vert}\int_{\Omega_0\setminus B_{R_0}(0)}\frac{z^p}{\bar{z}^\mu}-\frac{1}{\vert\Omega_0\vert}\int_{\Omega_0\setminus B_{R_0}(0)}\frac{z_0^p}{\bar{z}_0^\mu} \right\vert \leq \frac{A_2}{8}, \label{eqn:6.28} \end{equation} with $0<\sigma<\min\{ \sigma_2, \sigma_0(\delta)\}$ for any $0<\delta\leq \delta_0$. Combining (\ref{eqn:6.19}) and (\ref{eqn:6.28}) we deduce $ \left\vert \rlap{$-$}\!\int_{\Omega_0} \frac{z^p}{\overline{z}^\mu}-\rlap{$-$}\!\int_{\Omega_0} \frac{z_0^p}{\overline{z}_0^\mu}\right\vert\leq \frac{3 A_2}{8}, \;\mbox{for}\; 0<\sigma<\min\{ \sigma_2, \sigma_0(\delta)\}\;\mbox{and}\; 0<\delta\leq \delta_0,$ and thus we finally obtain \begin{equation}\label{tbsd13} \frac{5A_2}{8}\leq \rlap{$-$}\!\int_{\Omega_0} \frac{z^p}{\overline{z}^{\mu}} \leq \frac{11 A_1}{8}, \quad 0<\sigma<\min\{ \sigma_2, \sigma_0(\delta)\}, \ 0<\delta\leq \delta_0, \end{equation} taking also into consideration that $ A_2\leq \rlap{$-$}\!\int_{\Omega_0}\frac{z_0^p}{\bar{z}_0^\mu}\leq A_1. $ Consequently, if we assume $\sigma_0(\delta)\leq \sigma_2$, then it follows that $ \frac{1}{2}A_2\bar{z}^\mu<\frac{5}{8}A_2\bar{z}^\mu\leq \rlap{$-$}\!\int_{\Omega_0} z^p\leq \frac{11}{8}A_1\bar{z}^\mu<2A_1\bar{z}^\mu$, for $ 0<\sigma<\sigma_0(\delta)$, and thus a continuity argument implies that $\frac{1}{2}A_2\bar{z}^\mu\leq \rlap{$-$}\!\int_{\Omega_0} z^p\leq 2A_1\bar{z}^\mu$, for $0<\sigma<\sigma_0(\delta)+\eta$ and some $\eta>0,$ which contradicts the definition of $\sigma_0(\delta)$. 
Accordingly, we derive that $\sigma_2<\sigma_0(\delta)$ for any $0<\delta\leq \delta_0$, and the proof of Proposition \ref{lem4} is complete with $\sigma_0=\sigma_2.$ \end{proof} Next we present the Turing instability result announced at the beginning of the section, which in particular is exhibited in the form of diffusion-driven blow-up. \begin{theorem}\label{thm5} Consider $N\geq 3,\;1\leq r\leq p$, $p>\frac{N}{N-2}, \frac{2}{N}<\frac{p-1}{r}<\gamma$ and $\gamma>1.$ Assume that both $\Phi(\sigma)$ and $\Psi(\sigma)$ are positive and bounded. Then there exists $\lambda_0>0$ such that for any $0<\lambda\leq \lambda_0$ there exists $0<\delta_0=\delta_0(\lambda)<1$ such that for any $0<\delta\leq \delta_0$ every solution of problem (\ref{rnstregm1})-(\ref{rnstregm3}) with initial data of the form (\ref{eqn:6.1}) blows up in finite time. \label{thm:6.1} \end{theorem} \begin{proof} First note that since $\sigma_0\leq \sigma_1$, estimate (\ref{eqn:6.18h}) is available; then from (\ref{eqn:6.7}) and (\ref{eqn:6.14}) we have \begin{eqnarray} \quad K(\sigma) \geq \frac{m_{\Psi}}{\left( \rlap{$-$}\!\int_{\Omega_0} z^p\right)^{\frac{r\gamma}{p}}}\geq m_{\Psi}\left( \frac{1}{2\alpha_1}\right)^{\frac{r\gamma}{p}}\cdot\left( \frac{1}{2\bar{\psi}_\delta}\right)^{\frac{r\gamma}{p}\mu}\lambda^{-r\gamma}\geq m_{\Psi}d\lambda^{-r\gamma}\equiv D, \label{eqn:6.33} \end{eqnarray} for $0<\sigma<\min\{ \sigma_0, \Sigma_\delta\}.$ Note also that for $0<\lambda\leq \lambda_0(d)$, inequality (\ref{eqn:6.4}) entails \begin{equation} \Delta u_0+Du_0^p\geq 2u_0^p \label{eqn:6.34} \end{equation} for any $0<\delta\leq \delta_0$. 
The comparison principle in conjunction with (\ref{eqn:6.33}) and (\ref{eqn:6.34}) then yields \begin{equation} z\geq \tilde{z} \quad\mbox{in}\quad Q_0\equiv\Omega_0\times (0, \min\{ \sigma_0, \Sigma_\delta\}), \label{eqn:6.35} \end{equation} where $\tilde z=\tilde z(x,\sigma)$ solves the problem \begin{eqnarray} &&\tilde{z}_{\sigma}=\Delta \tilde{z}+D\tilde{z}^p, \quad\mbox{in}\quad Q_0,\label{eqn:6.36}\\ &&\frac{\partial \tilde{z}}{\partial\nu}=0,\quad\mbox{on}\quad\partial\Omega_0\times(0, \min\{ \sigma_0, \Sigma_\delta\}),\\ &&\tilde{z}(\vert x\vert,0)=u_0(\vert x\vert),\quad\mbox{in}\quad \Omega_0. \label{eqn:6.36b} \end{eqnarray} Setting $h(x,\sigma):=\tilde{z}_{\sigma}(x,\sigma)-\tilde{z}^p(x,\sigma),$ then \begin{eqnarray*} h_{\sigma} = \Delta h+p(p-1) \tilde{z}^{p-2} |\nabla \tilde{z}|^2+ D p \tilde{z}^{p-1}\,h \geq \Delta h+ D p \tilde{z}^{p-1}\,h \quad\mbox{in}\quad Q_0, \end{eqnarray*} with \begin{eqnarray*} h(x,0)= \Delta \tilde{z}(x,0)+D\tilde{z}^p(x,0)-\tilde{z}^p(x,0)=\Delta u_0+(D-1)u_0^p\geq u_0^p>0,\quad\mbox{in}\quad \Omega_0, \end{eqnarray*} whilst $ \frac{\partial h}{\partial \nu}=0\;\mbox{on}\;\partial\Omega_0\times(0, \min\{ \sigma_0, \Sigma_\delta\}). $ Therefore, owing to the maximum principle, we derive $ \tilde{z}_{\sigma}>\tilde{z}^p\quad\mbox{in}\quad Q_0, $ and thus via integration we obtain \[ \tilde{z}(0,\sigma)\geq\left(\frac{1}{z_0^{p-1}(0)}-(p-1)\sigma\right)^{-\frac{1}{p-1}}=\left\{\left(\frac{\delta^{a}}{\lambda(1+\frac{a}{2})}\right)^{p-1}-(p-1)\sigma\right\}^{-\frac{1}{p-1}} \] for $0<\sigma<\min\{\sigma_0, \Sigma_\delta\}$, and therefore \begin{equation} \min\{\sigma_0, \Sigma_\delta\}<\frac{1}{p-1}\cdot \left( \frac{\delta^a}{\lambda(1+\frac{a}{2})}\right)^{p-1}. \label{eqn:6.38} \end{equation} Note that for $0<\delta\ll 1$ the right-hand side of (\ref{eqn:6.38}) is less than $\sigma_0$, so that $\Sigma_\delta<\frac{1}{p-1}\cdot \left( \frac{\delta^a}{\lambda(1+\frac{a}{2})}\right)^{p-1}<+\infty$. 
\end{proof} \begin{remark} Recalling that $z(x,\sigma)=e^{\int^{\sigma}\Phi(s)\,ds} u(x,\sigma),$ we also obtain the occurrence of a single-point blow-up for the solution $u$ of problem \eqref{rnstregm1}-\eqref{rnstregm3}. \end{remark} \begin{remark} Notably, by \eqref{eqn:6.38} we conclude that $\Sigma_\delta \to 0$ as $\delta\to 0,$ i.e. the spikier the initial data, the faster the diffusion-driven blow-up for $z$, and thus for $u$, takes place. \end{remark} A diffusion-driven instability (Turing instability) phenomenon, as first indicated in the seminal paper \cite{t52}, is often followed by pattern formation. A similar situation is observed as a consequence of the diffusion-driven finite-time blow-up provided by Theorem \ref{thm:6.1}, and it is described below: the blow-up rate of the solution $u$ of \eqref{rnstregm1}-\eqref{rnstregm3} and the blow-up pattern (profile) identifying the formed pattern are given. \begin{theorem}\label{tbu} Take $N\ge 3,\;\max\{r, \frac{N}{N-2}\}<p<\frac{N+2}{N-2}$ and $\frac{2}{N}<\frac{p-1}{r}<\gamma.$ Assume that both $\Phi(\sigma)$ and $\Psi(\sigma)$ are positive and bounded. Then the blow-up rate of the solution of \eqref{rnstregm1}-\eqref{rnstregm3} can be characterised as follows \begin{equation} \Vert u(\cdot, \sigma)\Vert_\infty \ \approx \ (\Sigma_{\max}-\sigma)^{-\frac{1}{p-1}}, \quad \sigma\uparrow \Sigma_{\max},\label{ik2} \end{equation} where $\Sigma_{\max}$ stands for the blow-up time. \end{theorem} \begin{proof} We first observe that, by virtue of \eqref{eqn:6.8} and in view of H\"{o}lder's inequality, since $p>r,$ it follows that \begin{eqnarray}\label{nkl4} 0<K(\sigma)=\frac{\Psi(\sigma)e^{(1+r\gamma-p)\int^\sigma \Phi(s)\,ds}}{\Big(\displaystyle\rlap{$-$}\!\int_{\Omega_0} z^r \Big)^{\gamma}}\leq C_1<+\infty. 
\end{eqnarray} Define now $\Theta$ as the solution of the problem \[\Theta_{\sigma}=\Delta \Theta+C_1\Theta^{p}\quad\mbox{in}\quad \Omega_0\times (0,\Sigma_{\max}),\] with $\frac{\partial \Theta}{\partial \nu}=0$ on $\partial\Omega_0\times(0,\Sigma_{\max})$, and $\Theta(x,0)=z_0(x)$ in $\Omega_0$; then via comparison $z\leq \Theta$ in $\Omega_0\times (0,\Sigma_{\max}).$ Yet it is known, see \cite[Theorem 44.6]{qs07}, that $ |\Theta(x,\sigma)|\leq C_{\eta}|x|^{-\frac{2}{p-1}-\eta}\quad\mbox{for any}\quad \eta>0, $ and thus \begin{eqnarray}\label{tbsd18} |z(x,\sigma)|\leq C_{\eta}|x|^{-\frac{2}{p-1}-\eta}\quad\mbox{for}\quad (x,\sigma)\in \Omega_0\times (0, \Sigma_{\max}). \end{eqnarray} Following the same steps as in the proof of \cite[Theorem 9.1]{KS16} we derive \begin{eqnarray}\label{tbsd18a} \lim_{\sigma\to \Sigma_{\max}} K(\sigma)=\omega\in(0,+\infty). \end{eqnarray} By virtue of \eqref{tbsd18a} and applying \cite[Theorem 44.3(ii)]{qs07}, we can find a constant $C_{U}>0$ such that \begin{eqnarray}\label{ube} \left|\left|z(\cdot,\sigma)\right|\right|_{\infty}\leq C_{U}\left(\Sigma_{\max}-\sigma\right)^{-\frac{1}{(p-1)}}\quad\mbox{in}\quad (0, \Sigma_{\max}). \end{eqnarray} Setting $N(\sigma):=\left|\left|z(\cdot,\sigma)\right|\right|_{\infty}=z(0,\sigma),$ then $N(\sigma)$ is differentiable for almost every $\sigma\in(0,\Sigma_{\max}),$ in view of \cite{fmc85}, and $ \frac{dN}{d\sigma}\leq K(\sigma) N^p(\sigma). 
$ Notably, $K(\sigma)\in C([0,\Sigma_{\max}))$ and, owing to \eqref{nkl4}, it is bounded in any time interval $[0,\sigma],\; \sigma<\Sigma_{\max};$ then upon integration we obtain \begin{eqnarray}\label{lbe} \left|\left|z(\cdot,\sigma)\right|\right|_{\infty}\geq C_L\left(\Sigma_{\max}-\sigma\right)^{-\frac{1}{(p-1)}}\quad\mbox{in}\quad (0, \Sigma_{\max}), \end{eqnarray} for some positive constant $C_L.$ Recalling that $z(x,\sigma)=e^{\int^{\sigma}\Phi(s)\,ds} u(x,\sigma),$ then \eqref{ube} and \eqref{lbe} entail \begin{eqnarray*} \widetilde{C}_L\left(\Sigma_{\max}-\sigma\right)^{-\frac{1}{(p-1)}}\leq\left|\left|u(\cdot,\sigma)\right|\right|_{\infty}\leq \widetilde{C}_U\left(\Sigma_{\max}-\sigma\right)^{-\frac{1}{(p-1)}}\quad\mbox{for}\quad \sigma\in(0, \Sigma_{\max}), \end{eqnarray*} where now $\widetilde{C}_L, \widetilde{C}_U$ depend on $\Sigma_{\max},$ and thus \eqref{ik2} is proved. \end{proof} \begin{remark}\label{nkl7} We first note that \eqref{tbsd18} provides a rough form of the blow-up pattern for $z$, and thus for $u$ as well. Additionally, owing to \eqref{nkl4}, the non-local problem \eqref{tbsd3}-\eqref{tbsd3b} can be treated as a local one, for which the more accurate asymptotic blow-up profile, \cite{mz98}, is known and is given by $ \lim_{\sigma\to \Sigma_{\max}}z(|x|,\sigma)\sim C\left[\frac{|\log |x||}{|x|^2}\right]^{\frac{1}{p-1}}\;\mbox{for}\; |x|\ll 1,\;\mbox{and}\; C>0. $ Using again the relation between $z$ and $u$, we end up with a similar asymptotic blow-up profile for the diffusion-driven blow-up solution $u$ of problem \eqref{rnstregm1}-\eqref{rnstregm3}. This blow-up profile actually determines the form of the developed patterns, which are induced as a result of the diffusion-driven instability, and it is numerically investigated in the next section. 
\end{remark} \section{Numerical Experiments}\label{num-sec} To illustrate some of the theoretical results of the previous sections, we perform a series of numerical experiments, for which we solve the involved PDE problems by the finite element method \cite{j87} with piecewise linear basis functions, implemented in the adaptive finite-element toolbox ALBERTA \cite{ss05}. In all our simulations (unless stated otherwise) the domain was triangulated using 8192 elements, the discretisation in time was done using the forward Euler method with time-step $5\times 10^{-4}$, and the resulting linear systems were solved using the generalized minimal residual (GMRES) iterative solver \cite{sa03}. \subsection{Experiment 1} We take an initial condition $u_0$ and a set of parameters satisfying the assumptions of Theorem~\ref{thm1} and solve \eqref{nstregm1t}-\eqref{nstregm3t} on $\Omega_0=\left[0, 1 \right]^2$. The initial condition is chosen as $ u_0(x,y)=\cos(\pi y)+2. $ As for the domain evolution, we consider four different cases: \begin{itemize} \item $\rho(t)=e^{\beta t}$ (exponentially growing domain); \item $\rho(t)=e^{-\beta t}$ (exponentially decaying domain); \item $\rho(t)=\frac{e^{\beta t}}{1+\frac{1}{m}\left(e^{\beta t}-1\right)}$ (logistically growing domain); \item $\rho(t)=1$ (static domain). \end{itemize} We summarise all parameters used in Table \ref{table1}. In Fig.~\ref{figure1}, we display $||u(\cdot,t)||_\infty$ for each of the domain evolutions, so we can monitor their respective blow-up times. 
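The four evolution laws above can be sketched directly; the snippet below (with the illustrative values $\beta=0.1$ and $m=1.5$ of Table~\ref{table1}) checks that all laws start from the same initial domain, $\rho(0)=1$, and that the logistic law saturates at the carrying capacity $m$:

```python
import math

# The four domain-evolution laws of Experiment 1 (beta, m as in Table 1).
beta, m = 0.1, 1.5

rho_grow   = lambda t: math.exp(beta * t)              # exponential growth
rho_decay  = lambda t: math.exp(-beta * t)             # exponential decay
rho_logist = lambda t: math.exp(beta * t) / (1.0 + (math.exp(beta * t) - 1.0) / m)
rho_static = lambda t: 1.0                             # static domain

# All laws start from the same initial domain, rho(0) = 1 ...
assert all(abs(r(0.0) - 1.0) < 1e-12
           for r in (rho_grow, rho_decay, rho_logist, rho_static))
# ... and the logistic law saturates at the carrying capacity m
assert abs(rho_logist(1e3) - m) < 1e-6
```

For small $t$ the logistic law behaves like exponential growth, $\rho(t)\approx e^{\beta t}$, which is why its blow-up time sits between the exponentially growing and the static case in Fig.~\ref{figure1}.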
\begin{table}[h!] \begin{center} \begin{tabular}{c|c|c|c|c|c|c} $D_1$ & $p$ & $q$ & $r$ & $s$ & $\beta$ & $m$\\ \hline 1 & 3 & 2 & 1 & 2 & 0.1 & 1.5 \end{tabular} \caption{Set of parameters used in Experiment 1.}\label{table1} \end{center} \end{table} \begin{figure} \caption{Plots representing $||u(x,t)||_\infty$, where $u(x,t)$ is the numerical solution of \eqref{nstregm1t}-\eqref{nstregm3t} for different domain evolutions: static, exponentially decaying and growing, and logistically growing domains, starting from the initial condition $u_0=\cos(\pi y)+2$ in the unit square. Parameters are shown in Table~\ref{table1} and satisfy the conditions of Theorem~\ref{thm1}. (Colour version online).} \label{figure1} \end{figure} If we denote by $\Sigma_g$, $\Sigma_s$, $\Sigma_{lg}$ and $\Sigma_1$ the blow-up times for the exponentially growing, the exponentially decaying, the logistically growing and the static domain, respectively, we observe from Fig.~\ref{figure1} the relation $\Sigma_g>\Sigma_{lg}>\Sigma_1>\Sigma_s,$ which is in agreement with the mathematical intuition. We now take the same initial condition $u_0$ and the same initial domain, which we assume evolves exponentially, and consider parameters $D_1=1$, $p=1.4$, $q=1$, $r=1$ and $s=2$, for which inequality \eqref{nk11} of Remark~\ref{rem1a} holds. As we can see in Fig.~\ref{figure2}, we have an example of a solution $\bar{u}$ that does not blow up, as already conjectured in the aforementioned remark. Notably, this numerical experiment predicts a very interesting phenomenon, both mathematically and biologically: the infinite-time quenching of the solution of problem \eqref{nstregm1t}-\eqref{nstregm3t}, and thus the extinction of the activator in the long run, see also Remark \ref{nny}. 
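The blow-up monitoring of this experiment can be mimicked by a minimal one-dimensional finite-difference analogue of the non-local equation; the sketch below is illustrative only (it is not the ALBERTA FEM setup used above, and the choices $\Phi=\Psi=1$, $p=3$, $r=1$, $\gamma=1$ on a static domain are assumptions):

```python
import math

# Schematic 1D forward-Euler analogue of the non-local equation
#   u_t = u_xx - Phi*u + Psi*u**p / (avg u**r)**gamma
# with Neumann boundary conditions, tracking the sup-norm to monitor
# blow-up.  A sketch only: Phi = Psi = 1, p = 3, r = 1, gamma = 1 and a
# static domain are illustrative assumptions.
p, r, gamma = 3, 1, 1.0
Phi = Psi = 1.0
J = 64                         # grid cells on [0, 1]
dx = 1.0 / J
dt = 1e-4                      # dt < dx**2 / 2: forward-Euler stability
u = [math.cos(math.pi * j * dx) + 2.0 for j in range(J + 1)]  # values in [1, 3]
sup0 = max(u)                  # = 3.0

for step in range(5000):       # integrate up to t = 0.5 or until blow-up
    if max(u) > 1e6:           # crude blow-up detector
        break
    avg = sum(v ** r for v in u) / (J + 1)    # crude nonlocal average
    lap = [0.0] * (J + 1)
    for j in range(1, J):
        lap[j] = (u[j - 1] - 2.0 * u[j] + u[j + 1]) / dx ** 2
    lap[0] = 2.0 * (u[1] - u[0]) / dx ** 2    # Neumann via mirrored ghosts
    lap[J] = 2.0 * (u[J - 1] - u[J]) / dx ** 2
    u = [v + dt * (l - Phi * v + Psi * v ** p / avg ** gamma)
         for v, l in zip(u, lap)]

assert max(u) > sup0           # the sup-norm grows, heading towards blow-up
```

With these illustrative choices the discrete sup-norm grows in time, mirroring the monitoring of $||u(\cdot,t)||_\infty$ in Fig.~\ref{figure1}; the actual experiments use the FEM/ALBERTA discretisation described at the beginning of the section.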
Note also that this result is not in contradiction with Proposition \ref{eiq}, where infinite-time quenching is ruled out: condition \eqref{mts2an} is not satisfied for an exponentially growing domain, since $\Phi(\sigma)$ is then an unbounded function, as indicated in Remark \ref{aal5}. \begin{figure} \caption{The plot of $||\bar{u}||_\infty$ resulting from the numerical solution of \eqref{nstregm1t}-\eqref{nstregm3t}, considering the unit square as initial domain, evolving exponentially, with parameters $D_1=1$, $p=1.4$, $q=1$, $r=1$ and $s=2$. (Colour version online).} \label{figure2} \end{figure} \subsection{Experiment 2}\label{exp2} This experiment is meant to illustrate Theorem~\ref{thm4}: we take the same initial data $u_0=\cos(\pi y)+2$ and take $\Omega_0$ as the unit square when numerically solving equations \eqref{nstregm1t}-\eqref{nstregm3t}. As for the domain evolution, we consider $\rho(t)=e^{\beta t}$, with $\beta=0.1$. To proceed, we consider two sets of parameters: one for which the assumptions of Theorem~\ref{thm4} are satisfied, and another for which those assumptions are not fulfilled. See Table~\ref{table2} for the model parameters. \begin{table}[h!] \begin{center} \begin{tabular}{c|c|c|c|c|c} conditions of Th.~\ref{thm4}& $D_1$ & $p$ & $q$ & $r$ & $s$ \\ \hline are verified & 1 & 1 & 2 & 3 & 2 \\ are not verified & 1 & 3 & 2 & 1 &1 \end{tabular} \caption{Set of parameters used in Experiment 2.}\label{table2} \end{center} \end{table} \begin{figure} \caption{The plot of $||u(x,t)||_\infty$, where $u(x,t)$ is the numerical solution of \eqref{nstregm1t}-\eqref{nstregm3t}. Initial condition is $u_0=\cos(\pi y)+2$ and $\Omega_0$ is the unit square evolving according to exponential growth ($\beta=0.1$). 
(Colour version online).} \label{figure3} \end{figure} Results shown in Fig.~\ref{figure3} are in agreement with the theoretical predictions of Theorem~\ref{thm4}: the solution exists for all times when the assumptions of the theorem are met (Fig.~\ref{figure3}(a)), otherwise a finite-time blow-up occurs (Fig.~\ref{figure3}(b)). \subsection{Experiment 3} In this experiment we intend to illustrate Theorem~\ref{thm5}, so we numerically solve \eqref{nstregm1t}-\eqref{nstregm3t} in $\mathbb{R}^3$, taking $\Omega_0$ as the unit sphere and the initial condition $u_0$ given by \eqref{eqn:6.1}, with $\delta=0.8$ and $\lambda=0.1.$ As for the other parameters, we choose $D_1=1$, $p=4$, $q=4$, $r=2$ and $s=1$, which satisfy the conditions of the theorem. In Fig.~\ref{figure4} we display the $L_\infty$-norm of the solution $u$ for the three types of evolution laws implemented, namely: exponential decay, logistic decay and no evolution. For the exponential and logistic decay we select the same set of parameters as used in Experiment 1. As we can observe, in all cases the solution blows up, as theoretically predicted by Theorem~\ref{thm5}. Again the blow-up times are ordered as $ \Sigma_1>\Sigma_{ls}>\Sigma_s, $ where now $\Sigma_{ls}$ stands for the blow-up time under the logistic decay evolution, in agreement with the mathematical intuition. \begin{figure} \caption{Plots for $||u(x,t)||_\infty$, where $u(x,t)$ is the numerical solution of \eqref{nstregm1t}-\eqref{nstregm3t}, in $\mathbb{R}^3$, considering $\Omega_0$ the unit sphere. Three evolution laws are considered: exponential decay, logistic decay and no evolution (static domain). Parameters used: $p=4, q=4, r=2, s=1$ and initial condition given by \eqref{eqn:6.1} taking $\delta=0.8$ and $\lambda=0.1$.
(Colour version online).} \label{figure4} \end{figure} In Fig.~\ref{figure5-a} and Fig.~\ref{figure5-b} we compare the initial solution with the solution at $t=0.03$, close to the blow-up time, for the logistic decay, by looking at a cross section of the three-dimensional unit sphere $\Omega_0.$ Besides, in Fig.~\ref{figure5-c} and Fig.~\ref{figure5-d} the solution on a cross section of $\Omega_0$ is depicted again, now for the static and the exponentially decaying cases, respectively. Through this experiment we can observe the formation of blow-up (Turing-instability) patterns around the origin $R=0.$ We conclude that the evolution of the domain has no impact on the form of the blow-up patterns; however, it clearly affects the spreading of the Turing-instability patterns, as is evident from Fig.~\ref{figure5-b}, Fig.~\ref{figure5-c} and Fig.~\ref{figure5-d}. \begin{figure}\label{figure5-a} \label{figure5-b} \label{figure5-c} \label{figure5-d} \label{figure5} \end{figure} \subsection{Experiment 4} Next we design a numerical experiment to compare the dynamics of the reaction-diffusion system \eqref{tregm1}-\eqref{tregm2} with that of the non-local problem \eqref{nstregm1t}-\eqref{nstregm3t} under the assumptions of Theorem~\ref{thm1}. To this end we perform an experiment considering $u_0=\hat{u}_0=\cos(\pi y)+2$, $\Omega_0=\left[0, 1\right]^2$, $p=3$, $q=2$, $r=1$ and $s=2$. For the reaction-diffusion system \eqref{tregm1}-\eqref{tregm2} we additionally take $D_1=0.01$, $D_2=1$, $\tau=0.01$ and $v_0=2$, whilst for \eqref{nstregm1t}-\eqref{nstregm3t} we only choose $D_1=0.01.$ For both cases we consider an exponentially decaying evolution, with $\beta=0.1$. Unlike the previous numerical examples, here the domain was triangulated using $786432$ elements with a timestep of $10^{-4}$.
The obtained results are displayed in Fig.~\ref{figure6} and they demonstrate that the reaction-diffusion system \eqref{tregm1}-\eqref{tregm2} and the non-local problem \eqref{nstregm1t}-\eqref{nstregm3t} share the same dynamics. In particular, the solutions of both problems exhibit blow-up which takes place in finite time. Biologically speaking, this means that in the examined case we need only monitor the dynamics of the activator, which is governed by the non-local problem \eqref{nstregm1t}-\eqref{nstregm3t}. We can then gain insight into the interaction between the two chemical reactants (activator and inhibitor) described by the reaction-diffusion system \eqref{tregm1}-\eqref{tregm2}. \begin{figure} \caption{Plots of the $L_\infty$ norm for the numerical solution of \eqref{tregm1}-\eqref{tregm2} and \eqref{nstregm1t}-\eqref{nstregm3t}. Initial condition is $u_0=\hat{u}_0=\cos(\pi y)+2$ and $\Omega_0$ is the unit square, exponentially decaying ($\beta=0.1$). (Colour version online).} \label{figure6} \end{figure} \end{document}
\begin{document} \nocite{*} \title{Fast convergence of inertial dynamics with Hessian-driven damping under geometry assumptions} \begin{abstract} First-order optimization algorithms can be considered as discretizations of ordinary differential equations (ODEs) \cite{su2014differential}. In this perspective, studying the properties of the corresponding trajectories may lead to convergence results which can be transferred to the numerical scheme. In this paper we analyse the following ODE introduced by Attouch et al. in \cite{attouch2016fast}: \begin{equation*} \forall t\geqslant t_0,~\ddot{x}(t)+\frac{\alpha}{t}\dot{x}(t)+\beta H_F(x(t))\dot{x}(t)+\nabla F(x(t))=0, \end{equation*} where $\alpha>0$, $\beta>0$ and $H_F$ denotes the Hessian of $F$. This ODE can be discretized to build numerical schemes which do not require $F$ to be twice differentiable, as shown in \cite{attouch2020first,attouch2021convergence}. We provide strong convergence results on the error $F(x(t))-F^*$ and integrability properties on $\|\nabla F(x(t))\|$ under some geometry assumptions on $F$, such as quadratic growth around the set of minimizers. In particular, we show that the decay rate of the error for a strongly convex function is $O(t^{-\alpha+\varepsilon})$ for any $\varepsilon>0$. These results are briefly illustrated at the end of the paper. \end{abstract} \textbf{Keywords} Convex optimization, Hessian-driven damping, Lyapunov analysis, \L{}ojasiewicz property, ODEs. \section{Introduction} This paper focuses on the study of the ODE defined by: \begin{equation} \label{eq:Hessian_ODE} \forall t\geqslant t_0,~\ddot{x}(t)+\frac{\alpha}{t} \dot{x}(t)+\beta H_F(x(t)) \dot{x}(t)+ \nabla F(x(t))=0, \tag{DIN-AVD} \end{equation} where $t_0>0$, $\alpha>0$, $\beta>0$, $x(t_0)\in\mathbb{R}^n$, $\dot x(t_0)=0$ and $F:\mathbb{R}^n\rightarrow\mathbb{R}$ is a convex $C^2$ function whose gradient and Hessian are denoted by $\nabla F$ and $H_F$, respectively.
We assume that the function $F$ has a non-empty set of minimizers $X^*$ and we denote $F^*=\min\limits_{x\in\mathbb{R}^n}F(x)$. The underlying motivation of this analysis lies in the minimization of the function $F$. In \cite{su2014differential}, Su et al. highlight the link between optimization methods and dynamical systems. In particular, they consider Nesterov's accelerated gradient method (NAGM), introduced in \cite{nesterov27method}, as a discretization of the following ODE: \begin{equation} \label{eq:Nesterov_ODE} \ddot{x}(t)+\frac{\alpha}{t} \dot{x}(t)+\nabla F(x(t))=0, \tag{AVD} \end{equation} and show that the trajectories defined by \eqref{eq:Nesterov_ODE} and NAGM have related properties. In fact, the authors of \cite{su2014differential} prove that $F(x(t))-F^*=\mathcal{O}(t^{-2})$ for any $\alpha\geqslant3$ and they provide a similar convergence rate for the iterates of NAGM. This continuous approach has been widely adopted in recent works, leading to convergence results on optimization schemes such as NAGM \cite{attouch2018fast,jendoubi2015asymptotics,attouch2019rate,apidopoulos2021convergence,sebbouh2019nesterov,aujol:hal-03491527,Aujol2019} and the Heavy-ball method \cite{balti2016asymptotic,aujol2022convergence,aujol2021convergence}. Alvarez et al. introduce in \cite{alvarez2002second} the Dynamical Inertial Newton-like system defined by: \begin{equation} \ddot{x}(t)+\alpha \dot{x}(t)+\beta H_F(x(t)) \dot{x}(t)+ \nabla F(x(t))=0, \tag{DIN} \end{equation} which is a combination of the Newton dynamical system and the Heavy-ball with friction system. This ODE involves a Hessian-driven damping term which reduces the oscillations related to the Heavy-ball system. In \cite{attouch2016fast}, Attouch et al. combine a similar Hessian-driven damping term with an asymptotically vanishing damping term, resulting in \eqref{eq:Hessian_ODE}. The case $\beta=0$ corresponds to \eqref{eq:Nesterov_ODE}, which is related to NAGM.
In fact, \eqref{eq:Hessian_ODE} can be linked to the high-resolution ODE for NAGM introduced by Shi et al. in \cite{shi2021understanding}. The authors of \cite{attouch2016fast} prove that if $\beta>0$ and $\alpha\geqslant3$, the convergence rate of the trajectories is the same as for \eqref{eq:Nesterov_ODE} and that \begin{equation}\int_{t_0}^{+\infty}t^2\|\nabla F(x(t))\|^2dt<+\infty.\end{equation} This additional result is significant as it ensures the fast convergence of the gradient and therefore a reduction of oscillations. This property of \eqref{eq:Hessian_ODE} is directly linked to the Hessian-driven damping and justifies the derivation of an associated numerical scheme from this ODE. In \cite{attouch2020first}, Attouch et al. study a more general ODE: \begin{equation} \ddot{x}(t)+\frac{\alpha}{t}\dot{x}(t)+\beta(t)H_F(x(t))\dot x(t)+b(t)\nabla F(x(t))=0, \label{eq:ODE_Hgen} \end{equation} and similar convergence results are given under some conditions on $\beta$ and $b$. The authors introduce numerical schemes derived from \eqref{eq:ODE_Hgen} which take advantage of the additional term, such as the Inertial Gradient Algorithm with Hessian Damping (IGAHD): \begin{equation} \left\{ \begin{gathered} x_k=y_{k-1}-s\nabla F(y_{k-1}),\\ y_k=x_k+\alpha_k(x_k-x_{k-1})-\beta\sqrt{s}(\nabla F(x_k)-\nabla F(x_{k-1}))-\frac{\beta\sqrt{s}}{k}\nabla F(x_{k-1}), \end{gathered}\right. \label{eq:IGAHD_intro} \end{equation} where $\alpha_k=\frac{k-1}{k+\alpha-1}$, $\alpha>0$, $\beta\geqslant0$ and $s>0$. It is proved in \cite{attouch2020first,attouch2021convergence} that if $\alpha\geqslant 3$, $s\leqslant \frac{1}{L}$, where $L$ is the Lipschitz constant of $\nabla F$, and $\beta\in (0,2\sqrt{s})$, then the sequence $(x_k)_{k\in\mathbb{N}}$ defined by \eqref{eq:IGAHD_intro} satisfies $F(x_k)-F^*=\mathcal{O}\left(k^{-2}\right)$ and \begin{equation} \sum_{k\in\mathbb{N}} k^2\|\nabla F(x_k)\|^2<+\infty.
\end{equation} Note that this algorithm only requires $F$ to be differentiable, as the Hessian-driven damping is treated as the time derivative of the gradient term. The convergence of the trajectories of \eqref{eq:Hessian_ODE} and \eqref{eq:Nesterov_ODE} was studied under additional geometry assumptions on $F$. Such hypotheses allow faster convergence rates to be derived and provide a better understanding of the behaviour of the trajectories. Attouch et al. prove in \cite[Theorem~3.1]{attouch2016fast} that if $F$ is $\mu$-strongly convex, $\alpha>3$ and $\beta>0$, then the trajectories of \eqref{eq:Hessian_ODE} satisfy: \begin{equation*} F(x(t))-F^*=\mathcal{O}\left(t^{-\frac{2\alpha}{3}}\right). \end{equation*} Similar convergence rates are given for the trajectories of \eqref{eq:Nesterov_ODE} in \cite[Theorem~4.2]{su2014differential} and \cite[Theorem~3.4]{attouch2018fast}. In this work, we give an analysis of the trajectories of \eqref{eq:Hessian_ODE} under more general assumptions on the geometry of $F$ than strong convexity. We consider functions behaving as $\|x-x^*\|^\gamma$ for $\gamma\geqslant1$, where $x^*\in X^*$, and we provide convergence results on $F(x(t))-F^*$ and $\|\nabla F(x(t))\|$. This geometry assumption on $F$ was studied in \cite{Aujol2019} and \cite{aujol:hal-03491527} for the trajectories of \eqref{eq:Nesterov_ODE} associated with NAGM. This work aims to give a better understanding of the convergence of the trajectories of \eqref{eq:Hessian_ODE} in this setting. The next step would be to study the convergence of the corresponding numerical schemes under these assumptions using a similar approach. The main contributions of this work can be summarized as follows: \begin{enumerate} \item A non-asymptotic bound on $F(x(t))-F^*$ for the trajectories of \eqref{eq:Hessian_ODE} under the assumption that $F$ has a quadratic growth around its minimizers.
The resulting convergence rate is asymptotically the fastest in the literature for \eqref{eq:Hessian_ODE} under this set of assumptions. \item A strong integrability property on $\|\nabla F(x(t))\|$ and $F(x(t))-F^*$ under the same assumptions. Given the geometry of $F$, this improved integrability of the gradient has a direct influence on the convergence of $F(x(t))-F^*$ along the trajectories. We give an asymptotic convergence rate which is faster than $\mathcal{O}\left(t^{-\frac{2\alpha}{3}}\right)$ under a weaker assumption than strong convexity. \item An asymptotic convergence rate for $F(x(t))-F^*$ and improved integrability of the gradient for flat geometries of $F$, i.e.\ functions behaving as $\|x-x^*\|^\gamma$ where $\gamma>2$. \end{enumerate} \section{Preliminary: Geometry of convex functions} Throughout this paper, we assume that $\mathbb{R}^n$ is equipped with the Euclidean scalar product $\langle \cdot , \cdot \rangle$ and the associated norm $\|\cdot\|$. As usual, $B(x,r)$ denotes the open Euclidean ball with center $x\in\mathbb{R}^n$ and radius $r>0$. For any subset $X\subset\mathbb{R}^n$, the Euclidean distance $d$ is defined as: \begin{equation*} \forall x\in\mathbb{R}^n,~d(x,X)=\inf\limits_{y\in X}\|x-y\|. \end{equation*} In this section, we introduce some geometry conditions that will be investigated later on. \begin{definition} Let $F$ : $\mathbb{R}^{n} \rightarrow \mathbb{R}$ be a convex differentiable function having a non-empty set of minimizers $X^*$. The function $F$ satisfies the assumption $\mathcal{G}_{\mu}^\gamma$ for some $\gamma\geqslant1$ and $\mu>0$ if for all $x \in \mathbb{R}^{n}$, \begin{equation} \frac{\mu}{2}d\left(x, X^{*}\right)^{\gamma} \leqslant F(x)-F^*. \label{eq:H2} \end{equation} \end{definition} The hypothesis $\mathcal{G}_\mu^\gamma$ is a growth condition on the function $F$, ensuring that $F$ grows at least as fast as $\|x-x^*\|^\gamma$ around its set of minimizers.
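As a model example (our illustration, not taken from the references), the function $F(x)=\frac{\mu}{2}\|x\|^\gamma$ satisfies $\mathcal{G}_\mu^\gamma$ with equality, since $X^*=\{0\}$ and $F^*=0$. More generally, any $\mu$-strongly convex function satisfies $\mathcal{G}_\mu^2$: writing $x^*$ for its unique minimizer, strong convexity and $\nabla F(x^*)=0$ give
\begin{equation*}
F(x)-F^*\geqslant\langle\nabla F(x^*),x-x^*\rangle+\frac{\mu}{2}\|x-x^*\|^2=\frac{\mu}{2}\,d(x,X^*)^2.
\end{equation*}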
The case $\gamma=2$ corresponds to functions having a quadratic growth around their minimizers, including strongly convex functions. As we consider convex functions, $\mathcal{G}_\mu^\gamma$ is directly related to the \L{}ojasiewicz inequality, as stated in the following lemma. The proof is given in \cite[Proposition~3.2]{garrigos2017convergence}. \begin{lemme} Let $F:\mathbb{R}^n\rightarrow \mathbb{R}$ be a convex differentiable function having a non-empty set of minimizers $X^*$. Let $F^*=\inf F$. If $F$ satisfies $\mathcal{G}_\mu^\gamma$ for some $\gamma\geqslant2$ and $\mu>0$, then $F$ has a global \L{}ojasiewicz property with exponent $1-\frac{1}{\gamma}$, i.e.\ there exists $K>0$ such that: \begin{equation} \forall x\in \mathbb{R}^n,\quad K\left(F(x)-F^*\right)\leqslant \|\nabla F(x)\|^{\frac{\gamma}{\gamma-1}}. \label{eq:Loja_gen} \end{equation} Specifically, if $F$ satisfies $\mathcal{G}_\mu^2$ for some $\mu>0$, then: \begin{equation} \forall x\in \mathbb{R}^n,\quad 2\mu\left(F(x)-F^*\right)\leqslant \|\nabla F(x)\|^2. \label{eq:Lojasiewicz_grad} \end{equation} \label{lem:Loja} \end{lemme} The following assumption was used in \cite{cabot2009long,su2014differential,Aujol2019,apidopoulos2021convergence} and can be seen as a flatness condition. \begin{definition} Let $F$ : $\mathbb{R}^{n} \rightarrow \mathbb{R}$ be a convex differentiable function having a non-empty set of minimizers $X^*$. The function $F$ satisfies the assumption $\mathcal{H}_\gamma$ for some $\gamma\geqslant1$ if for all $x^*\in X^*$, \begin{equation}\forall x\in \mathbb{R}^n,\quad F(x)-F^* \leqslant \frac{1}{\gamma}\left\langle\nabla F(x), x-x^{*}\right\rangle. \label{eq:H1}\end{equation} The function $F$ satisfies the assumption $\mathcal{H}_\gamma^{loc}$ for some $\gamma\geqslant1$ if for all $x^*\in X^*$ there exists $\nu>0$ such that, \begin{equation} \forall x\in B(x^*,\nu),\quad F(x)-F^* \leqslant \frac{1}{\gamma}\left\langle\nabla F(x), x-x^{*}\right\rangle.
\label{eq:H1loc} \end{equation} \end{definition} To get an intuition of the geometry of functions satisfying $\mathcal{H}_\gamma$, observe that the flatness property \eqref{eq:H1} implies that for any minimizer $x^*\in X^*$, there exist $M>0$ and $\nu>0$ such that: \begin{equation} \forall x\in B(x^*,\nu),~F(x)-F^*\leqslant M\|x-x^*\|^\gamma, \end{equation} see \cite[Lemma~2.2]{Aujol2019}. Therefore, this assumption ensures that $F$ does not grow too fast around its set of minimizers. Note that $\mathcal{H}_1$ corresponds to convexity and is therefore always satisfied in our setting. \section{Convergence rates of \eqref{eq:Hessian_ODE} under geometry assumptions} In this section, we state fast convergence rates for the trajectories of \eqref{eq:Hessian_ODE} that can be achieved when $F$ satisfies geometry assumptions such as $\mathcal{G}^\gamma_\mu$ and $\mathcal{H}_\gamma$. The convergence results are given first for sharp geometries and then for flat geometries. \subsection{Sharp geometries} \subsubsection{Contributions} We first consider $F$ a convex $C^2$ function having a unique minimizer $x^*$ and satisfying $\mathcal{H}_\gamma$ and $\mathcal{G}^2_\mu$ for some $\gamma\geqslant1$. These assumptions cover convex functions having a quadratic growth around their minimizers, and consequently strongly convex functions. This set of hypotheses was considered in \cite{Aujol2019} and \cite{aujol:hal-03491527} to analyse the convergence of the trajectories of \eqref{eq:Nesterov_ODE}. In this setting, Aujol et al. show in \cite[Theorem~4.2]{Aujol2019} that if $\gamma\leqslant2$ and $\alpha>1+\frac{2}{\gamma}$, the solution of \eqref{eq:Nesterov_ODE} satisfies \begin{equation} F(x(t))-F^*=\mathcal{O}\left(t^{-\frac{2\alpha\gamma}{\gamma+2}}\right). \label{eq:Optimal_1} \end{equation} In \cite[Theorem~5]{aujol:hal-03491527}, the authors give a non-asymptotic convergence bound that asymptotically recovers \eqref{eq:Optimal_1}.
Such a finite-time bound makes it possible to identify how the bound depends on each parameter. In fact, Aujol et al. show that the bound on $F(x(t))-F^*$ is proportional to $\alpha^{\frac{2\alpha\gamma}{\gamma+2}}$. As a consequence, large values of $\alpha$ will not necessarily accelerate the convergence of the trajectories of \eqref{eq:Nesterov_ODE}, despite ensuring a better asymptotic convergence rate. We provide similar convergence results for \eqref{eq:Hessian_ODE}, which are summarized in the following theorem. These claims are discussed below. The proof can be found in Section \ref{sec:proof_sharp1}. \begin{theoreme} Let $F: \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a convex $C^2$ function having a unique minimizer $x^*$. Assume that $F$ satisfies $\mathcal{H}_\gamma$ and $\mathcal{G}_\mu^2$ for some $\gamma\geqslant1$ and $\mu>0$. Let $x$ be a solution of \eqref{eq:Hessian_ODE} for all $t\geqslant t_0$ where $t_0>0$, $\alpha>0$ and $\beta>0$. Let $\lambda=\frac{2\alpha}{\gamma+2}$. Then $\lambda<\alpha$ and we have: \begin{enumerate} \item if $\alpha>1+\frac{2}{\gamma}$, then for all $t\geqslant t_0+\beta(\alpha-\lambda)$, \begin{equation} F(x(t))-F^*\leqslant \frac{K}{\left(t+\beta(\lambda-\alpha)\right)^{\frac{2\alpha\gamma}{\gamma+2}}} \label{eq:sharp_gen} \end{equation} where $K$ depends on $t_0$, $\alpha$, $\beta$, $\gamma$ and $\mu$.
In particular, if $t_0\leqslant \frac{\alpha r^*}{(\gamma+2)\sqrt{\mu}}$, then for all $t\geqslant \frac{\alpha r^{*}}{(\gamma+2) \sqrt{\mu}}+\beta(\alpha-\lambda)$, inequality \eqref{eq:sharp_gen} holds with \begin{equation} K= C_{1} e^{\frac{2 \gamma}{\gamma+2} C_{2}\left(\alpha-1-\frac{2}{\gamma}\right)}\left(1+\tfrac{\beta\gamma\sqrt{\mu}}{r^*}\right) E_{m}\left(t_{0}\right)\left(\tfrac{\alpha r^*}{(\gamma+2)\sqrt{\mu}}\right)^{\frac{2 \alpha \gamma}{\gamma+2}} , \label{eq:sharp_fatbound} \end{equation} \item if $\alpha=1+\frac{2}{\gamma}$, then for all $t\geqslant t_0+\beta$, \begin{equation} F(x(t))-F^*\leqslant \left((t_0+\beta)^2+\frac{\lambda^2+\sqrt{\mu}}{\mu}\right)e^{\frac{\beta}{t_0}}\frac{E_m(t_0)}{t(t-\beta)}, \label{eq:sharp3} \end{equation} \item if $\alpha\geqslant 1+\frac{2}{\gamma}$, then \begin{equation} \int_{t_0}^{+\infty} u^{\frac{2\alpha\gamma}{\gamma+2}}\| \nabla F(x(u)) \|^{2} du < +\infty, \end{equation} \end{enumerate} where: \begin{itemize}\renewcommand{\labelitemi}{$\bullet$} \item $r^*$ is the unique positive real root of the polynomial $$r \mapsto r^{3}-(1+C_0)r^{2}-2(1+\sqrt{2}) r-4,$$ \item $C_0=\beta\dfrac{\sqrt{\mu}\gamma(\gamma \lambda-1)}{\gamma \lambda-2},$ \item $C_{1}=\left(1+\frac{2}{r^{*}}\right)^{2}$, \item $C_{2}=\frac{1+C_0}{r^{*}}+\frac{1+\sqrt{2}}{r^{* 2}}+\frac{4}{3 r^{* 3}}$, \item $E_{m}:t\mapsto \left(1+\dfrac{\beta\alpha}{t}\right)\left(F(x(t))-F^{*}\right)+\dfrac{1}{2}\left\|\dot{x}(t)+\beta \nabla F(x(t))\right\|^{2}$. \end{itemize} \label{thm:sharp1} \end{theoreme} The first claim ensures that the trajectories of \eqref{eq:Hessian_ODE} have the same asymptotic convergence rate as the trajectories of \eqref{eq:Nesterov_ODE} if $\alpha>1+\frac{2}{\gamma}$, i.e.\ $F(x(t))-F^*=\mathcal{O}\left(t^{-\frac{2\alpha\gamma}{\gamma+2}}\right)$. This rate remains valid if $\alpha=1+\frac{2}{\gamma}$, as stated in the second claim.
Observe that, as strongly convex functions satisfy $\mathcal{H}_1$ and $\mathcal{G}^2_\mu$, we recover the same convergence rate as \cite{attouch2016fast} for this class of functions, i.e.\ $\mathcal{O}\left(t^{-\frac{2\alpha}{3}}\right)$. This rate is also achieved under weaker hypotheses, such as the combination of convexity and $\mathcal{G}^2_\mu$. In addition, we give a tight bound on $F(x(t))-F^*$ in \eqref{eq:sharp_fatbound} which highlights the influence of $\alpha$ and $\beta$ on the convergence of the trajectories. Note that if $\beta=0$, this bound is the same as that given in \cite[Theorem~5]{aujol:hal-03491527} for \eqref{eq:Nesterov_ODE}. As for \eqref{eq:Nesterov_ODE}, the upper bound is proportional to $\alpha^{\frac{2\alpha\gamma}{\gamma+2}}$, and consequently setting $\alpha$ too large may not be efficient. Moreover, by optimizing the bounds given in \eqref{eq:sharp_fatbound} and \eqref{eq:sharp3} with respect to $\beta$ for $\alpha\geqslant1+\frac{2}{\gamma}$, we get that the optimal value is $0$. However, setting $\beta>0$ ensures the fast convergence of the gradient, as stated in the third claim. This property of \eqref{eq:Hessian_ODE} is not valid for \eqref{eq:Nesterov_ODE}, i.e.\ for $\beta=0$. This integrability result improves on \begin{equation} \int_{t_0}^{+\infty} u^2\|\nabla F(x(u))\|^2 du<+\infty, \end{equation} proved by Attouch et al. in \cite{attouch2016fast} for convex functions. Furthermore, as we consider that $F$ is convex and satisfies $\mathcal{G}^2_\mu$ for some $\mu>0$, Lemma \ref{lem:Loja} states that $F$ has a \L{}ojasiewicz property with exponent $\frac{1}{2}$. More precisely, for all $x\in\mathbb{R}^n$, \begin{equation} 2\mu(F(x)-F^*)\leqslant\|\nabla F(x)\|^2. \end{equation} Thus, the integrability property of the gradient automatically implies a similar property on the error, as stated in the following theorem. The proof is given in Section \ref{sec:proof_sharp2}.
\begin{theoreme} Let $F: \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a convex $C^2$ function having a unique minimizer $x^*$ and satisfying $\mathcal{G}_\mu^2$. Let $x$ be a solution of \eqref{eq:Hessian_ODE} for all $t\geqslant t_0$ where $t_0>0$, $\alpha\geqslant3$ and $\beta>0$. Then, for any $\varepsilon\in (0,1)$, \begin{equation} F(x(t))-F^*=\mathcal{O}\left(t^{-\alpha+\varepsilon}\right), \end{equation} and \begin{equation} \int_{t_0}^{+\infty} u^{\alpha-\varepsilon}\left(F(x(u))-F^*\right) du < +\infty. \label{eq:thm2} \end{equation} \label{thm:sharp2} \end{theoreme} The asymptotic rate given in Theorem \ref{thm:sharp2} is faster than the one given for strongly convex functions in \cite{attouch2016fast} ($\mathcal{O}\left(t^{-\frac{2\alpha}{3}}\right)$). Moreover, the integrability result \eqref{eq:thm2} is remarkably strong, as it implies the following statements, which are proved in Section \ref{sec:proof_sharp3}. \begin{corollaire} Let $F: \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a convex $C^2$ function having a unique minimizer $x^*$ and satisfying $\mathcal{G}_\mu^2$. Let $x$ be a solution of \eqref{eq:Hessian_ODE} for all $t\geqslant t_0$ where $t_0>0$, $\alpha\geqslant3$ and $\beta>0$. Then, for any $\varepsilon\in (0,1)$, as $t\rightarrow+\infty$, \begin{enumerate} \item \begin{equation} F(z(t))-F^*=o\left(t^{-\alpha-1+\varepsilon}\right), \end{equation} where $z:t\mapsto\frac{\int_{t/2}^tu^{\alpha-\varepsilon} x(u)du}{\int_{t/2}^tu^{\alpha-\varepsilon} du}$. \item \begin{equation} \inf\limits_{u\in[t/2,t]}\left(F(x(u))-F^*\right)=o\left(t^{-\alpha-1+\varepsilon}\right). \end{equation} \item \begin{equation} \liminf\limits_{t\rightarrow+\infty} ~t^{\alpha+1-\varepsilon}\log(t)(F(x(t))-F^*) =0, \end{equation} where $\liminf\limits_{t\rightarrow+\infty} f(t)=\lim\limits_{t\rightarrow+\infty}\left[\inf\limits_{\tau\geqslant t}f(\tau)\right]$.
\end{enumerate} \label{cor:sharp2} \end{corollaire} We would like to point out that Theorem \ref{thm:sharp2} relies on Lemma \ref{lem:H1loc} which states that a convex $C^2$ function automatically satisfies the assumption $\mathcal{H}_{2-\delta}^{loc}$ for any $\delta\in (0,1]$. Note that the $C^2$ assumption is necessary to study \eqref{eq:Hessian_ODE} but the corresponding algorithms require only $F$ to be differentiable. Hence, such a strong result may not be valid in the discrete case without this $C^2$ assumption. \subsubsection{Sketch of proof of Theorem \ref{thm:sharp1}} As the proof of Theorem \ref{thm:sharp1} is technical, we give a brief overview of it in this section. The full proof can be found in Section \ref{sec:proof_sharp1}. Theorem \ref{thm:sharp1} states three claims: the first two claims are upper bounds of $F(x(t))-F^*$ in the cases $\alpha>1+\frac{2}{\gamma}$ and $\alpha=1+\frac{2}{\gamma}$ and the third claim is an integrability property on $\|\nabla F(x(t))\|$ for all $\alpha\geqslant1+\frac{2}{\gamma}$. The proof of each statement relies on the following Lyapunov energy: \begin{equation} \mathcal{E}(t) =\left(t^{2}+t \beta(\lambda-\alpha)\right)\left(F(x(t))-F^{*}\right)+\frac{1}{2}\left\|\lambda\left(x(t)-x^{*}\right)+t(\dot{x}(t)+\beta \nabla F(x(t)))\right\|^{2}, \end{equation} where $\lambda=\frac{2\alpha}{\gamma+2}$. \\ The first step consists in showing that the energy $\mathcal{E}$ satisfies a differential inequality of the form: \begin{equation} \forall t\geqslant T, \quad \mathcal{E}^\prime (t)+\frac{p}{t+\beta(\lambda-\alpha)}\mathcal{E}(t)\leqslant\tilde\varphi(t+\beta(\lambda-\alpha))\mathcal{E}(t), \end{equation} for some $T\geqslant t_0$, $p\geqslant0$ and $\tilde\varphi:\mathbb{R}^+\rightarrow\mathbb{R}^+$. 
In practice, we get a stronger inequality involving an additional term: for all $t\geqslant T$, \begin{equation} \mathcal{E}^\prime (t)+\frac{p}{t+\beta(\lambda-\alpha)}\mathcal{E}(t)+\beta t(t+\beta(\lambda-\alpha)) \| \nabla F(x(t)) \|^{2}\leqslant\tilde\varphi(t+\beta(\lambda-\alpha))\mathcal{E}(t). \label{eq:sharp_fourthclaim} \end{equation} Then, by defining the energy $\mathcal{H}$ as \begin{equation} \mathcal{H}:t\mapsto\mathcal{E}(t)(t+\beta(\lambda-\alpha))^pe^{-\tilde\Phi(t+\beta(\lambda-\alpha))}, \end{equation} where $\widetilde{\Phi}(t)=-\int_{t}^{+\infty} \widetilde{\varphi}(x) dx$, it follows that $\mathcal{H}$ is decreasing for all $t\geqslant T$. This allows us to write that for all $t_1\geqslant T$ we have \begin{equation} \forall t\geqslant t_1,\quad F(x(t))-F^*\leqslant e^{-\tilde\Phi(t+\beta(\lambda-\alpha))}\frac{\mathcal{H}(t_1)}{(t+\beta(\lambda-\alpha))^{p+2}}. \label{eq:sharp_template} \end{equation} From this inequality come the first two statements: \begin{itemize}\renewcommand{\labelitemi}{$\bullet$} \item equation \eqref{eq:sharp_gen} given in the first claim follows from a direct simplification of \eqref{eq:sharp_template}. The bound specified in \eqref{eq:sharp_fatbound} relies on an optimization over $t_1$ in order to get the tightest control on $F(x(t))-F^*$. To do this, $t_1$ is chosen as the minimizer of $$t\mapsto(t+\beta(\lambda-\alpha))^pe^{-\tilde\Phi(t+\beta(\lambda-\alpha))}.$$ Developing \eqref{eq:sharp_template} for this value of $t_1$ leads to the final bound. \item the second claim is a rewriting of \eqref{eq:sharp_template} in the case $\alpha=1+\frac{2}{\gamma}$ for $t_1=T$. \end{itemize} The proof of the third claim is based on the inequality \eqref{eq:sharp_fourthclaim} and follows the same approach.
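For completeness, the monotonicity of $\mathcal{H}$ used above follows from a direct differentiation: writing $c=\beta(\lambda-\alpha)$ and using $\tilde\Phi^\prime=\tilde\varphi$,
\begin{equation*}
\mathcal{H}^\prime(t)=(t+c)^{p}e^{-\tilde\Phi(t+c)}\left(\mathcal{E}^\prime(t)+\frac{p}{t+c}\mathcal{E}(t)-\tilde\varphi(t+c)\mathcal{E}(t)\right)\leqslant 0,
\end{equation*}
where the sign follows from the differential inequality satisfied by $\mathcal{E}$.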
By defining $\mathcal{G}$ as \begin{equation} \mathcal{G}:t\mapsto \mathcal{H}(t)+\beta\int_{T}^tu(u+\beta(\lambda-\alpha))^{p+1}e^{-\widetilde{\Phi}(u+\beta(\lambda-\alpha))}\|\nabla F(x(u))\|^2 du, \end{equation} the inequality \eqref{eq:sharp_fourthclaim} implies that $\mathcal{G}$ is decreasing, and as $\mathcal{H}$ is a positive function, \begin{equation} \forall t\geqslant T,\quad \beta\int_{T}^tu(u+\beta(\lambda-\alpha))^{p+1}e^{-\widetilde{\Phi}(u+\beta(\lambda-\alpha))}\|\nabla F(x(u))\|^2 du\leqslant\mathcal{G}(T). \end{equation} Simple calculations lead to the final result. \subsection{Flat geometries} \subsubsection{Contributions} We now focus on functions satisfying $\mathcal{H}_{\gamma_1}$ and $\mathcal{G}^{\gamma_2}_\mu$ where $\gamma_1\geqslant\gamma_2>2$ and $\mu>0$. These functions are said to have a flat geometry because they behave as $\|x-x^*\|^{\gamma_1}$ around their set of minimizers, with $\gamma_1>2$. This geometry assumption was investigated in \cite{Aujol2019} for \eqref{eq:Nesterov_ODE}. The authors prove that if $\alpha\geqslant \frac{\gamma_1+2}{\gamma_1-2}$, then \begin{equation} F(x(t))-F^*=\mathcal{O}\left(t^{-\frac{2\gamma_2}{\gamma_2-2}}\right). \label{eq:rate_flat_N} \end{equation} We provide a similar result for \eqref{eq:Hessian_ODE} in the following theorem, which is proved in Section \ref{sec:proof_flat1}. We also give an integrability result on $\nabla F$ related to the Hessian-driven damping term. \begin{theoreme} Let $F: \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a convex $C^2$ function having a unique minimizer $x^*$. Assume that $F$ satisfies $\mathcal{H}_{\gamma_1}$ and $\mathcal{G}_\mu^{\gamma_2}$ for some $\gamma_1>2$, $\gamma_2>2$ such that $\gamma_{1}\geqslant\gamma_{2}$ and $\mu>0$. Let $x$ be a solution of \eqref{eq:Hessian_ODE} for all $t\geqslant t_0$ where $t_0>0$, $\alpha\geqslant\frac{\gamma_{1}+2}{\gamma_{1}-2}$ and $\beta>0$.
Then as $t\rightarrow+\infty$, \begin{equation} F(x(t))-F^{*}=\mathcal{O}\left(t^{-\frac{2 \gamma_{1}}{\gamma_{1}-2}}\right). \label{eq:rate_flat}\end{equation} Moreover, \begin{equation} \int_{t_0}^{+\infty} u^{\frac{2\gamma_1}{\gamma_1-2}}\|\nabla F(x(u))\|^2du<+\infty. \label{eq:integ_flat} \end{equation} \label{thm:flat} \end{theoreme} The asymptotic convergence rate \eqref{eq:rate_flat} is slightly slower than \eqref{eq:rate_flat_N} given by Aujol et al. for \eqref{eq:Nesterov_ODE}, as $\gamma_1\geqslant\gamma_2$. However, we give an additional result on the integrability of the gradient, which ensures a reduction of oscillations. As specified for sharp geometries, the assumption $\mathcal{G}^{\gamma_2}_\mu$ is equivalent to a \L{}ojasiewicz property with exponent $1-\frac{1}{\gamma_2}$. Consequently, we get that \begin{equation} \int_{t_0}^{+\infty}u^{\frac{2\gamma_1}{\gamma_1-2}}\left(F(x(u))-F^*\right)^{\frac{2(\gamma_2-1)}{\gamma_2}}du<+\infty. \end{equation} This statement may lead to improved convergence rates depending on the values of $\gamma_1$ and $\gamma_2$, as stated in the following corollary, which is proved in Section \ref{sec:proof_flat_cor}. \begin{corollaire} Let $F: \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a convex $C^2$ function having a unique minimizer $x^*$. Assume that $F$ satisfies $\mathcal{H}_{\gamma_1}$ and $\mathcal{G}_\mu^{\gamma_2}$ for some $\gamma_1>2$, $\gamma_2>2$ such that $\gamma_{1}\geqslant\gamma_{2}$ and $\mu>0$. Let $x$ be a solution of \eqref{eq:Hessian_ODE} for all $t\geqslant t_0$ where $t_0>0$, $\alpha\geqslant\frac{\gamma_{1}+2}{\gamma_{1}-2}$ and $\beta>0$. Then as $t\rightarrow+\infty$, \begin{equation} \inf\limits_{u\in[t/2,t]}F(x(u))-F^*=o\left(t^{-\frac{(3\gamma_1-2)\gamma_2}{2(\gamma_1-2)(\gamma_2-1)}}\right).
\end{equation} \label{cor:flat1} \end{corollaire} Note that if $\gamma_1> 2$ and $\gamma_2\in\left(2,\frac{4\gamma_1}{\gamma_1+2}\right)$, then $\frac{(3\gamma_1-2)\gamma_2}{2(\gamma_1-2)(\gamma_2-1)}>\frac{2\gamma_1}{\gamma_1-2}$. Consequently, for this set of parameters the asymptotic rate of $\inf\limits_{u\in[t/2,t]}F(x(u))-F^*$ given in Corollary \ref{cor:flat1} is faster than the rate of $F(x(t))-F^*$ given in Theorem \ref{thm:flat}. \subsubsection{Sketch of proof of Theorem \ref{thm:flat}} In this section, we give an outline of the proof of Theorem \ref{thm:flat} which is given in Section \ref{sec:proof_flat1}. The proof relies on the analysis of the Lyapunov energies $\mathcal{E}$ and $\mathcal{H}$ defined as: \begin{equation} \begin{gathered} \mathcal{E}(t) =\left(t^{2}+t \beta(\lambda-\alpha)\right)\left(F(x(t))-F^{*}\right)+\frac{\xi}{2}\left\|x(t)-x^{*}\right\|^{2}\\+\frac{1}{2}\left\|\lambda\left(x(t)-x^{*}\right)+t(\dot{x}(t)+\beta \nabla F(x(t)))\right\|^{2}, \end{gathered} \end{equation} \begin{equation} \mathcal{H}(t)=t^p\mathcal{E}(t), \end{equation} where $x^*$ is the unique minimizer of $F$, $\lambda\in\mathbb{R}$, $\xi\in\mathbb{R}$ and $p>0$. The first step is to show that for a well-chosen set of parameters $\left(\lambda,\xi,p\right)$, the following inequality holds: \begin{equation} \forall t\geqslant t_1,~\mathcal{H}^\prime(t)+\beta t^{p+1}(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^2\leqslant t^p\beta C_1\left(F(x(t))-F^*\right), \label{eq:flat_sop_ine} \end{equation} where $t_1>0$ and $C_1>0$. Since the right-hand side is not zero, we cannot directly deduce that $\mathcal{H}$ is decreasing.\\ Therefore, the second step of the proof consists in investigating the function $\mathcal{G}$ defined by: \begin{equation} \mathcal{G}:t\mapsto \mathcal{H}(t)-\beta C_1 \int_{t_1}^t u^{p}(F(x(u))-F^*)du, \end{equation} where $t_1>T$ and $T$ is a well-chosen parameter.
The objective is to use the decreasing nature of $\mathcal{G}$ to show that: \begin{equation} F(x(t))-F^*=\mathcal{O}\left(\frac{1}{t^{p+2}}\right). \end{equation} For this purpose, we show that the function $v$ defined by: \begin{equation} v(t)= t(t+\beta(\lambda-\alpha))^{p+1}(F(x(t))-F^*), \end{equation} is bounded by using the assumptions of the theorem.\\ The third step is to prove the second statement, namely: \begin{equation} \int_{t_0}^{+\infty} u^{\frac{2\gamma_1}{\gamma_1-2}}\|\nabla F(x(u))\|^2du<+\infty. \label{eq:flat_sop_grad} \end{equation} This is done by introducing the function $\mathcal{F}$ defined as follows: \begin{equation} \mathcal{F}:t\mapsto \mathcal{H}(t)-\beta C_1 \int_{t_1}^t u^{p}(F(x(u))-F^*)du+\beta \int_{t_1}^t u^{p+1}(u+\beta(\lambda-\alpha))\|\nabla F(x(u))\|^2du. \end{equation} Equation \eqref{eq:flat_sop_grad} is obtained by combining the decreasing nature of $\mathcal{F}$ and the boundedness of $v$. \section{Numerical experiments} In this section, we illustrate the fast convergence rates obtained theoretically for \eqref{eq:Hessian_ODE} with numerical experiments. We consider the following least-squares problem: \begin{equation} \min\limits_{x\in\mathbb{R}^n}F(x):=\|Ax-b\|^2, \label{eq:Least-squares} \end{equation} where $A\in\mathcal{M}_{n\times n}\left(\mathbb{R}\right)$ and $b\in\mathbb{R}^n$. The function $F$ is convex, $C^2$ and satisfies $\mathcal{G}^2_\mu$ for some $\mu>0$. We apply the Inertial Gradient Algorithm with Hessian Damping (IGAHD), which was introduced by Attouch et al. in \cite{attouch2020first}: \begin{equation} \left\{ \begin{gathered} x_k=y_{k-1}-s\nabla F(y_{k-1}),\\ y_k=x_k+\alpha_k(x_k-x_{k-1})-\beta\sqrt{s}(\nabla F(x_k)-\nabla F(x_{k-1}))-\frac{\beta\sqrt{s}}{k}\nabla F(x_{k-1}), \end{gathered}\right. \label{eq:IGAHD2} \end{equation} where $\alpha_k=\frac{k-1}{k+\alpha-1}$, $\alpha>0$, $\beta\geqslant0$ and $s>0$. Observe that the case $\beta=0$ corresponds to Nesterov's accelerated gradient method.
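For illustration, the recursion \eqref{eq:IGAHD2} applied to the least-squares problem \eqref{eq:Least-squares} can be transcribed in a few lines of Python. The following is our own minimal sketch (the function name, the step-size choice $s=1/L$ and the test data are our choices), not the reference implementation of \cite{attouch2020first}:

```python
import numpy as np

def igahd(A, b, alpha=3.1, beta=0.1, n_iter=500):
    """Minimal sketch of the IGAHD recursion for F(x) = ||Ax - b||^2,
    with alpha_k = (k-1)/(k + alpha - 1); the case beta = 0 recovers
    Nesterov's accelerated gradient method."""
    grad = lambda z: 2.0 * A.T @ (A @ z - b)   # gradient of F
    L = 2.0 * np.linalg.norm(A, 2) ** 2        # Lipschitz constant of grad F
    s = 1.0 / L                                # step size s <= 1/L
    x_old = np.zeros(A.shape[1])               # x_{k-1}
    y = x_old.copy()                           # y_{k-1}
    values = []
    for k in range(1, n_iter + 1):
        x = y - s * grad(y)                    # gradient step at y_{k-1}
        a_k = (k - 1.0) / (k + alpha - 1.0)    # vanishing inertial coefficient
        y = (x + a_k * (x - x_old)
             - beta * np.sqrt(s) * (grad(x) - grad(x_old))
             - (beta * np.sqrt(s) / k) * grad(x_old))
        x_old = x
        values.append(float(np.sum((A @ x - b) ** 2)))
    return x, values
```

Setting $\beta=0$ cancels the two gradient-correction terms, which gives a direct way to compare the trajectories with and without Hessian-driven damping.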
This numerical scheme is derived from the following ODE: \begin{equation} \label{eq:Hessian_modif} \forall t\geqslant t_0,~\ddot{x}(t)+\frac{\alpha}{t} \dot{x}(t)+\beta H_F(x(t)) \dot{x}(t)+ \left(1+\frac{\beta}{t}\right)\nabla F(x(t))=0, \end{equation} which is a slightly modified version of \eqref{eq:Hessian_ODE}. The additional vanishing coefficient in front of the gradient keeps the structure of the dynamics while facilitating the computational aspects. We do not provide any convergence results on IGAHD in this paper but we want to emphasize that the convergence of the iterates of IGAHD is related to the convergence rates obtained for \eqref{eq:Hessian_ODE}. We refer the reader to \cite{attouch2020first,attouch2021convergence} for a detailed analysis of this method. We recall that Attouch et al. prove in \cite{attouch2020first} that if $\alpha\geqslant3$, $s\leqslant\frac{1}{L}$ (where $L$ denotes the Lipschitz constant of $\nabla F$) and $\beta\leqslant 2\sqrt{s}$, then the sequence $(x_k)_{k\in\mathbb{N}}$ defined in \eqref{eq:IGAHD2} satisfies: \begin{equation} F(x_k)-F^*=\mathcal{O}\left(k^{-2}\right). \end{equation} We compare the convergence of the iterates of IGAHD for several values of $\beta$ with the iterates of Nesterov's accelerated gradient method ($\beta=0$) to observe the influence of the Hessian-driven damping. Figure \ref{fig:multi_beta_LS} shows that the additional Hessian-related term has a significant impact on the oscillations of the iterates. Indeed, this pathological behavior is reduced as $\beta$ grows. This can be related to the fast convergence of the gradient demonstrated in Theorem \ref{thm:sharp1}. \begin{figure} \caption{Convergence rate of IGAHD for several values of $\beta$ for a least squares problem ($N=500$).} \label{fig:multi_beta_LS} \end{figure} Note that these experiments were performed for large values of $\beta$ ($\beta\geqslant \frac{100}{\sqrt{L}}$) and consequently the convergence results given in \cite{attouch2020first,attouch2021convergence} do not hold in this context.
Moreover, $\beta$ cannot be chosen too large, as the iterates may then fail to converge. There exists a critical value $\tilde\beta$ such that the algorithm does not converge for all $\beta\geqslant \tilde \beta$ and this value varies according to the geometry of $F$. However, no theoretical result on $\tilde\beta$ has been proved. \section{Proofs} \subsection{Proof of Theorem \ref{thm:sharp1}} \label{sec:proof_sharp1} Let $\alpha\geqslant 1+\frac{2}{\gamma}$ and $\lambda=\frac{2 \alpha}{\gamma+2}$. We consider the following Lyapunov function: \begin{equation*} \mathcal{E}(t) =\left(t^{2}+t \beta(\lambda-\alpha)\right)\left(F(x(t))-F^{*}\right)+\frac{1}{2}\left\|\lambda\left(x(t)-x^{*}\right)+t(\dot{x}(t)+\beta \nabla F(x(t)))\right\|^{2}. \end{equation*} \begin{lemme} For all $t\geqslant t_0$: \begin{equation} \begin{aligned}\mathcal{E}^{\prime}(t) &=(2-\gamma \lambda)(t+\beta(\lambda-\alpha))\left(F(x(t))-F^{*}\right)-\beta(\lambda-\alpha)\left(F(x(t))-F^{*}\right)\\ &-\lambda(t+\beta (\lambda -\alpha))\left[-\gamma\left(F(x(t))-F^{*}\right)+\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle\right]\\ &+\frac{\lambda(\lambda+1-\alpha)}{t}\left\langle x(t)-x^{*}, t(\dot{x}(t)+\beta \nabla F(x(t)))\right\rangle \\ & +\frac{\lambda+1-\alpha}{t}\|t(\dot{x}(t)+\beta \nabla F(x(t)))\|^{2}- \beta t(t+\beta(\lambda-\alpha)) \| \nabla F(x(t))\|^{2}. \end{aligned} \end{equation} The proof of this lemma is given in Section \ref{subsection:Proof for Lemma 1}. \label{lem:sharp1} \end{lemme} By using Lemma \ref{lem:sharp1} and the assumptions on $F$ we get the following result. \begin{lemme} Let $t\geqslant\max\{t_0,\beta(\alpha-\lambda)\}$.
Then, \begin{equation} \begin{aligned} \mathcal{E}^{\prime}(t)+\frac{\gamma \lambda-2}{t} \mathcal{E}(t) & \leqslant K(\alpha)\left( \;\dfrac{\lambda}{t}\left\|x(t)-x^{*}\right\|^{2}+\left\langle x(t)-x^{*}, \dot{x}(t)+\beta \nabla F(x(t))\right\rangle \right) \\ &+\beta(\alpha-\lambda)\left(F(x(t))-F^{*}\right)-\beta t(t+\beta(\lambda-\alpha)) \| \nabla F(x(t)) \|^{2}, \end{aligned} \label{eq:sharp1} \end{equation} where $K(\alpha)=\dfrac{2 \alpha \gamma}{(\gamma+2)^{2}}\left(\alpha-1-\frac{2}{\gamma}\right)=\dfrac{(\gamma \lambda-2)\lambda}{2}$. In particular, if $\alpha=1+\frac{2}{\gamma}$, then \begin{equation} \mathcal{E}^{\prime}(t)+\beta t(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^2\leqslant \beta(\alpha-\lambda)(F(x(t))-F^*). \end{equation} The proof of this lemma is given in Section \ref{subsection:Proof for Lemma 3}. \label{lem:sharp3} \end{lemme} \noindent\textbf{Case $\alpha>1+\frac{2}{\gamma}$ (Proof of statements 1 and 3).} The inequality \eqref{eq:sharp1} can be rewritten in the following way. \begin{lemme} Let $t>\max\{t_0,\beta(\alpha-\lambda)\}$.
Then, if $\alpha>1+\frac{2}{\gamma}$, \begin{equation} \mathcal{E}^{\prime}(t)+\frac{\gamma \lambda-2}{t+\beta(\lambda-\alpha)} \mathcal{E}(t) +\beta t(t+\beta(\lambda-\alpha)) \| \nabla F(x(t)) \|^{2}\leqslant \widetilde{\varphi}\left(t+\beta(\lambda-\alpha)\right) \mathcal{E}(t), \end{equation} where $$ \widetilde{\varphi}:t\mapsto\frac{K(\alpha)}{\mu t^{2}}\left(\sqrt{\mu}(1+C_0)+\frac{2 \alpha}{(\gamma+2) t}(1+\sqrt{2})+\frac{4 \alpha^{2}}{(\gamma+2)^{2} \sqrt{\mu} t^{2}}\right), $$ and $C_0=\beta\dfrac{\sqrt{\mu}\gamma(\gamma \lambda-1)}{\gamma \lambda-2}.$\\ The proof of this lemma is given in Section \ref{subsection:Proof for Lemma 4} \label{lem:sharp4} \end{lemme} Let $\mathcal{H}$ be defined as follows: $$ \mathcal{H}: t \mapsto \mathcal{E}(t) (t+\beta(\lambda-\alpha))^{\gamma \lambda-2} e^{-\widetilde{\Phi}(t+\beta(\lambda-\alpha))},$$ where $\widetilde{\Phi}(t)=-\int_{t}^{+\infty} \widetilde{\varphi}(x) dx$. Lemma \ref{lem:sharp4} ensures that $\mathcal{H}^\prime(t)\leqslant0$ for all $t> \max\{t_0, \beta(\alpha-\lambda)\}$. As a consequence, for all $t_1> \max\{t_0,\beta(\alpha-\lambda)\}$ and $t\geqslant t_1$, $\mathcal{H}(t)\leqslant\mathcal{H}(t_1),$ and thus \begin{equation} \mathcal{E}(t) \leqslant \mathcal{E}\left(t_{1}\right)\left(\frac{t_{1}+\beta(\lambda-\alpha)}{t+\beta(\lambda-\alpha)}\right)^{\lambda \gamma-2} e^{-\widetilde{\Phi}\left(t_{1}+\beta(\lambda-\alpha)\right)+\widetilde{\Phi}(t+\beta(\lambda-\alpha))}. \label{eq:sharp_bound} \end{equation} By choosing $t_1=t_0+\beta(\alpha-\lambda)$, this inequality ensures that for all $t\geqslant t_0+\beta(\alpha-\lambda)$, \begin{equation} \mathcal{E}(t) \leqslant \mathcal{E}\left(t_{0}+\beta(\alpha-\lambda)\right)\left(\frac{t_0}{t+\beta(\lambda-\alpha)}\right)^{\lambda \gamma-2} e^{-\widetilde{\Phi}(t_0)+\widetilde{\Phi}(t+\beta(\lambda-\alpha))}. 
\end{equation} Observe that the primitive $\widetilde{\Phi}(t)=-\int_{t}^{+\infty} \widetilde{\varphi}(x) dx$ of $\widetilde{\varphi}$ has the following expression: \begin{equation} \widetilde{\Phi}(t)=-\frac{K(\alpha)}{\mu}\left(\frac{\sqrt{\mu}(1+C_0)}{t}+\frac{\alpha(1+\sqrt{2})}{(\gamma+2) t^2}+\frac{4 \alpha^{2}}{3(\gamma+2)^{2} \sqrt{\mu} t^{3}}\right),\label{eq:Phi_diff} \end{equation} showing that $\widetilde{\Phi}$ is non-positive. As a consequence, for all $t\geqslant t_0+\beta(\alpha-\lambda)$, \begin{equation} F(x(t))-F^*\leqslant e^{-\widetilde{\Phi}(t_0)}\mathcal{E}\left(t_{0}+\beta(\alpha-\lambda)\right)\frac{t_0^{\lambda\gamma-2}}{\left(t+\beta(\lambda-\alpha)\right)^{\frac{2\alpha\gamma}{\gamma+2} }}, \end{equation} which proves the first claim of the theorem. The value of $t_1$ can be parametrized to ensure a tight control on the energy $\mathcal{E}(t)$ in \eqref{eq:sharp_bound}. In this proof, $t_1$ is chosen as a minimizer of the following function: $$u \mapsto (u+\beta(\lambda-\alpha))^{\lambda \gamma-2}e^{-\widetilde{\Phi}\left(u+\beta(\lambda-\alpha)\right)}.$$ As a consequence, $u=t_1+\beta(\lambda-\alpha)$ satisfies: \begin{equation} \dfrac{\gamma \lambda-2}{u} = \widetilde{\varphi}(u). \label{eq:sharp_phi} \end{equation} Noticing that $\lambda \gamma-2=\frac{\gamma+2}{\alpha} K(\alpha)$, \eqref{eq:sharp_phi} can be rewritten as: $$ \frac{\gamma+2}{\alpha u}=\frac{1}{\mu u^{2}}\left(\sqrt{\mu}(1+C_0)+\frac{2 \alpha}{(\gamma+2) u}(1+\sqrt{2})+\frac{4 \alpha^{2}}{(\gamma+2)^{2} \sqrt{\mu} u^{2}}\right). $$ Introducing $r=(\gamma+2) \frac{\sqrt{\mu}}{\alpha} u$, this is equivalent to: $$ r^{3}-(1+C_0)r^{2}-2(1+\sqrt{2}) r-4=0. $$ For any $C_0>0$, the polynomial $r \mapsto r^{3}-(1+C_0)r^{2}-2(1+\sqrt{2}) r-4$ has a unique real positive root denoted $r^*$.
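The root $r^*$ has no simple closed form, but it is easy to evaluate numerically. The sketch below is ours (the helper name and the sample values of $C_0$ are arbitrary); it computes $r^*$ and checks that the positive real root is indeed unique, as guaranteed by Descartes' rule of signs:

```python
import numpy as np

def positive_root(C0):
    """Unique positive real root of r^3 - (1+C0) r^2 - 2(1+sqrt(2)) r - 4.
    The coefficient sequence has exactly one sign change, so Descartes'
    rule of signs gives exactly one positive real root."""
    coeffs = [1.0, -(1.0 + C0), -2.0 * (1.0 + np.sqrt(2.0)), -4.0]
    real_pos = [r.real for r in np.roots(coeffs)
                if abs(r.imag) < 1e-9 and r.real > 0]
    assert len(real_pos) == 1
    return real_pos[0]
```

Since $t_1$ is an affine function of $r^*$, this gives a numerical handle on the time from which the tight energy bound below applies.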
Defining $t_1=\frac{\alpha}{(\gamma+2) \sqrt{\mu}} r^{*}+\beta(\alpha-\lambda)$, if $t_1> \max\{t_0,\beta(\alpha-\lambda)\}$, which is guaranteed if $t_1\geqslant t_0+\beta(\alpha-\lambda)$, then the control on the energy is given by: $$ \forall t \geqslant t_{1}, \mathcal{E}(t) \leqslant \mathcal{E}\left(t_{1}\right)\left(\frac{\alpha r^*}{\left(t+\beta(\lambda-\alpha)\right)(\gamma+2)\sqrt{\mu}}\right)^{\lambda \gamma-2} e^{-\widetilde{\Phi}\left(\frac{\alpha}{(\gamma+2) \sqrt{\mu}} r^{*}\right)+\widetilde{\Phi}(t+\beta(\lambda-\alpha))}. $$ Let $E_m$ be an energy function defined for all $t\geqslant t_0$ by: $$E_{m}(t)=\left(1+\dfrac{\beta\alpha}{t}\right)\left(F(x(t))-F^{*}\right)+\dfrac{1}{2}\left\|\dot{x}(t)+\beta \nabla F(x(t))\right\|^{2}.$$ Note that this energy is non-increasing since: $$E^{\prime}_{m}(t)=-\dfrac{\beta\alpha}{t^2}\left(F(x(t))-F^{*}\right)-\frac{\alpha}{t}\|\dot{x}(t)\|^{2}-\beta\|\nabla F(x(t))\|^{2} \leqslant 0.$$ Hence, $E_{m}$ is uniformly bounded on $[t_{0},+\infty)$.
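The monotonicity of $E_m$ can be checked numerically on a toy problem by integrating the dynamic and sampling $E_m$ along the trajectory. The sketch below is ours and assumes, consistently with the computation of $E_{m}^{\prime}$ above, that \eqref{eq:Hessian_ODE} reads $\ddot{x}(t)+\frac{\alpha}{t}\dot{x}(t)+\beta\nabla^2F(x(t))\dot{x}(t)+\nabla F(x(t))=0$; the test function $F(x)=x^2$ and all parameter values are arbitrary choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, t0 = 3.0, 0.5, 1.0

def gradF(x):
    return 2.0 * x          # F(x) = x^2, so F'' = 2 and F* = 0

def rhs(t, y):
    x, v = y
    # x'' + (alpha/t) x' + beta F''(x) x' + F'(x) = 0
    return [v, -(alpha / t) * v - beta * 2.0 * v - gradF(x)]

def E_m(t, x, v):
    # E_m(t) = (1 + beta*alpha/t)(F(x) - F*) + 0.5*(x' + beta*F'(x))^2
    return (1.0 + beta * alpha / t) * x * x + 0.5 * (v + beta * gradF(x)) ** 2

sol = solve_ivp(rhs, (t0, 50.0), [1.0, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
ts = np.linspace(t0, 50.0, 200)
vals = [E_m(t, *sol.sol(t)) for t in ts]
```

Up to integration error, the sampled values of $E_m$ decrease monotonically, in agreement with the sign of $E_{m}^{\prime}$.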
We then have: $$ \begin{aligned} \mathcal{E}\left(t_{1}\right) &=\left(t_{1}^{2}+t_{1} \beta(\lambda-\alpha)\right)\left(F\left(x\left(t_{1}\right)\right)-F^*\right)\\ &+\frac{1}{2}\left\|\lambda\left(x\left(t_{1}\right)-x^{*}\right)+t_{1}\left(\dot{x}\left(t_{1}\right)+\beta \nabla F\left(x\left(t_{1}\right)\right)\right)\right\|^{2} \\ &=\left(t_{1}^{2}+t_{1} \beta(\lambda-\alpha)\right) \left(F\left(x\left(t_{1}\right)\right)-F^*\right)+\frac{t_1^2}{2}\left\|\dot{x}\left(t_{1}\right)+\beta \nabla F\left(x\left(t_{1}\right)\right)\right\|^{2}\\ &+\frac{\lambda^2}{2}\left\|x\left(t_{1}\right)-x^{*}\right\|^{2}+\lambda t_{1}\left\langle\left(x\left(t_{1}\right)-x^{*}\right), \dot{x}\left(t_{1}\right)+\beta \nabla F\left(x\left(t_{1}\right)\right)\right\rangle \\ & \leqslant \left(t_{1}^{2}+t_{1} \beta(\lambda-\alpha)\right)\left(F(x(t_1))-F^*\right)+\frac{\lambda}{2}\left(\lambda+t_1\sqrt{\mu}\right)\|x(t_1)-x^*\|^2 \\&+\frac{t_1}{2}\left(t_1+\frac{\lambda}{\sqrt{\mu}}\right)\|\dot{x}(t_1)+\beta \nabla F(x(t_1))\|^{2}, \end{aligned}$$ using the inequality \begin{equation}\left|\langle x(t)-x^{*}, \dot{x}(t)+\beta \nabla F(x(t))\rangle\right| \leqslant \frac{\sqrt{\mu}}{2}\left\|x(t)-x^{*}\right\|^{2}+\frac{1}{2 \sqrt{\mu}}\left\|\dot{x}(t)+\beta \nabla F(x(t))\right\|^{2}.\label{eq:sharp_inmu}\end{equation} As $F$ satisfies the assumption $\mathcal{G}_\mu^2$ and noticing that $\frac{\lambda}{\sqrt{\mu}}=\frac{2}{r^*}(t_1+\beta(\lambda-\alpha))$ we get that: $$ \begin{aligned} \mathcal{E}(t_1)& \leqslant \left(t_{1}^{2}+t_{1} \beta(\lambda-\alpha)+\frac{\lambda^2}{\mu}+t_1\frac{\lambda}{\sqrt{\mu}}\right)\left(F(x(t_1))-F^*\right)\\&+\frac{t_1}{2}\left(t_1+\frac{\lambda}{\sqrt{\mu}}\right)\|\dot{x}(t_1)+\beta \nabla F(x(t_1))\|^{2} \\ & \leqslant \left(1+\frac{2}{r^*}\right)^2\left(t_{1}^{2}+t_{1} \beta(\lambda-\alpha)\right)\left(F(x(t_1))-F^*\right)\\&+\frac{1}{2}\left(\left(1+\frac{2}{r^*}\right)t_1^2+\frac{2}{r^*}t_1(t_1+\beta(\lambda-\alpha))\right)\|\dot{x}(t_1)+\beta \nabla F(x(t_1))\|^{2} \\ &\leqslant \left(1+\frac{2}{r^*}\right)^2t_1^2\left( F(x(t_1))-F^*+\frac{1}{2}\|\dot{x}(t_1)+\beta \nabla F(x(t_1))\|^{2}\right)\\ &\leqslant \left(1+\frac{2}{r^*}\right)^2t_1^2 E_m(t_1)\leqslant \left(1+\frac{2}{r^*}\right)^2t_1^2 E_m(t_0). \end{aligned} $$ Note that $\widetilde{\Phi}$ given in \eqref{eq:Phi_diff} is non-positive for all $t\geqslant0$ and as $t_1=\frac{\alpha}{(\gamma+2) \sqrt{\mu}} r^{*}+\beta(\alpha-\lambda)$, $$\widetilde{\Phi}(t_1+\beta(\lambda-\alpha))=-\dfrac{\gamma+2}{\alpha} K(\alpha) \left(\frac{1+C_0}{r^{*}}+\frac{1+\sqrt{2}}{r^{* 2}}+\frac{4}{3 r^{* 3}}\right).$$ Therefore, for all $t\geqslant t_1$: $$ F(x(t))-F^* \leqslant C_{1} e^{\frac{2 \gamma}{\gamma+2} C_{2}\left(\alpha-1-\frac{2}{\gamma}\right)}\left(1+\tfrac{\beta(\alpha-\lambda)(\gamma+2)\sqrt{\mu}}{\alpha r^*}\right) E_{m}\left(t_{0}\right)\left(\tfrac{\alpha r^*}{(\gamma+2)\sqrt{\mu}\left(t+\beta(\lambda-\alpha)\right)}\right)^{\frac{2 \alpha \gamma}{\gamma+2}}, $$ where $$ C_{1}=\left(1+\frac{2}{r^{*}}\right)^{2},~ C_{2}=\frac{1+C_0}{r^{*}}+\frac{1+\sqrt{2}}{r^{* 2}}+\frac{4}{3 r^{* 3}}. $$ Let $\mathcal{G}$ be defined as follows: $$\mathcal{G}:t\mapsto \mathcal{H}(t)+\beta\int_{t_0+\beta(\alpha-\lambda)}^tu(u+\beta(\lambda-\alpha))^{\gamma\lambda-1}e^{-\widetilde{\Phi}(u+\beta(\lambda-\alpha))}\|\nabla F(x(u))\|^2 du.$$ Lemma \ref{lem:sharp4} guarantees that $\mathcal{G}^\prime(t)\leqslant0$ for all $t\geqslant t_0+\beta(\alpha-\lambda)$.
As a consequence, for all $t\geqslant t_0+\beta(\alpha-\lambda)$, $$\mathcal{G}(t) \leqslant \mathcal{G}(t_0+\beta(\alpha-\lambda)),$$ and as $\mathcal{H}$ is positive: $$\beta\int_{t_0+\beta(\alpha-\lambda)}^tu(u+\beta(\lambda-\alpha))^{\gamma\lambda-1}e^{-\widetilde{\Phi}(u+\beta(\lambda-\alpha))}\|\nabla F(x(u))\|^2 du \leqslant \mathcal{G}(t_0+\beta(\alpha-\lambda)).$$ Moreover, $\widetilde{\Phi}$ is non-positive and thus: $$\beta\int_{t_0+\beta(\alpha-\lambda)}^tu(u+\beta(\lambda-\alpha))^{\gamma\lambda-1}\|\nabla F(x(u))\|^2 du \leqslant \mathcal{G}(t_0+\beta(\alpha-\lambda)).$$ We can deduce that: $$\int_{t_0+\beta(\alpha-\lambda)}^{+\infty}u(u+\beta(\lambda-\alpha))^{\gamma\lambda-1}\|\nabla F(x(u))\|^2 du <+\infty.$$ Note that as $u\mapsto\left(1+\beta\frac{\lambda-\alpha}{u}\right)^{\lambda\gamma-1}$ is increasing on $(t_0+\beta(\alpha-\lambda),+\infty)$, we have that: \begin{equation} \begin{aligned} \int_{t_0+\beta(\alpha-\lambda)}^{+\infty}u&(u+\beta(\lambda-\alpha))^{\gamma\lambda-1}\|\nabla F(x(u))\|^2 du\geqslant \\&\left(1+\beta\frac{\lambda-\alpha}{t_0+\beta(\alpha-\lambda)}\right)^{\lambda\gamma-1}\int_{t_0+\beta(\alpha-\lambda)}^{+\infty}u^{\frac{2\alpha\gamma}{\gamma+2}}\|\nabla F(x(u))\|^2 du. \end{aligned} \end{equation} In addition, the function defined by $u\mapsto u^{\frac{2\alpha\gamma}{\gamma+2}}\|\nabla F(x(u))\|^2$ is bounded on $(t_0,t_0+\beta(\alpha-\lambda))$ and consequently: \begin{equation} \int_{t_0}^{+\infty}u^{\frac{2\alpha\gamma}{\gamma+2}}\|\nabla F(x(u))\|^2 du <+\infty. \end{equation} \textbf{Case $\alpha=1+\frac{2}{\gamma}$ (Proof of statements 2 and 3).}\\ Lemma \ref{lem:sharp3} ensures that for all $t> \max\{t_0,\beta\}$, \begin{equation} \mathcal{E}^\prime(t)+\beta t(t-\beta)\|\nabla F(x(t))\|^2\leqslant\frac{\beta}{t(t-\beta)}\mathcal{E}(t), \label{eq:sharp_alpha=} \end{equation} noticing that $\alpha-\lambda=1$.
This inequality implies that $t\mapsto \mathcal{E}(t)e^{\frac{\beta}{t-\beta}}$ is decreasing on $(t_0+\beta,+\infty)$. Consequently, for all $t\geqslant t_0+\beta$, \begin{equation*} \mathcal{E}(t)\leqslant\mathcal{E}(t_0+\beta)e^{-\frac{\beta}{t-\beta}+\frac{\beta}{t_0}}\leqslant \mathcal{E}(t_0+\beta)e^{\frac{\beta}{t_0}}. \end{equation*} Moreover, \begin{equation*} \begin{aligned} \mathcal{E}(t_0+\beta)&=t_0\left(t_0+\beta\right)\left(F(x(t_0+\beta))-F^*\right)\\&+\frac{1}{2}\|\lambda(x(t_0+\beta)-x^*)+(t_0+\beta)(\dot x(t_0+\beta)+\beta\nabla F(x(t_0+\beta)))\|^2\\ &\leqslant \left((t_0+\beta)^2+\frac{\lambda^2+\sqrt{\mu}}{\mu}\right)\left(F(x(t_0+\beta))-F^*\right)\\&+\frac{(t_0+\beta)^2+\frac{1}{\sqrt{\mu}}}{2}\|\dot x(t_0+\beta)+\beta\nabla F(x(t_0+\beta))\|^2\\ &\leqslant \left((t_0+\beta)^2+\frac{\lambda^2+\sqrt{\mu}}{\mu}\right)E_m(t_0+\beta) \leqslant\left((t_0+\beta)^2+\frac{\lambda^2+\sqrt{\mu}}{\mu}\right)E_m(t_0), \end{aligned} \end{equation*} using inequality \eqref{eq:sharp_inmu}. Hence, for all $t\geqslant t_0+\beta$, \begin{equation} F(x(t))-F^*\leqslant \left((t_0+\beta)^2+\frac{\lambda^2+\sqrt{\mu}}{\mu}\right)e^{\frac{\beta}{t_0}}\frac{E_m(t_0)}{t(t-\beta)}. \end{equation} Inequality \eqref{eq:sharp_alpha=} also guarantees that $$t\mapsto \mathcal{E}(t)e^{\frac{\beta}{t-\beta}}+\int_{t_0+\beta}^t\beta u(u-\beta)e^{\frac{\beta}{u-\beta}}\|\nabla F(x(u))\|^2du,$$ is bounded on $(t_0+\beta,+\infty)$. As $\mathcal{E}(t)e^{\frac{\beta}{t-\beta}}$ is positive for all $t\geqslant t_0+\beta$, we can deduce that there exists $M>0$ such that for all $t\geqslant t_0+\beta$, \begin{equation*} \int_{t_0+\beta}^t (u-\beta)^2\|\nabla F(x(u))\|^2du\leqslant \int_{t_0+\beta}^t u(u-\beta)e^{\frac{\beta}{u-\beta}}\|\nabla F(x(u))\|^2du<M, \end{equation*} and thus, \begin{equation} \int_{t_0+\beta}^{+\infty} (u-\beta)^2\|\nabla F(x(u))\|^2du<+\infty.
\end{equation} By using the same arguments as in the first case, we can conclude that: \begin{equation} \int_{t_0}^{+\infty} u^2\|\nabla F(x(u))\|^2du<+\infty. \end{equation} \qed \subsection{Proof of Theorem \ref{thm:sharp2}} \label{sec:proof_sharp2} Let $F$ be a convex $C^2$ function satisfying $\mathcal{G}_\mu^2$ for some $\mu>0$. The convexity of $F$ implies that $F$ satisfies $\mathcal{H}_1$ and the following lemma ensures that $F$ also satisfies $\mathcal{H}_{2-\delta}^{loc}$ for all $\delta\in(0,1]$. The proof of this lemma is given in Section \ref{sec:proof_H1loc}. \begin{lemme} Let $F: \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a convex $C^2$ function with a non-empty set of minimizers $X^*$. Then, for all $\delta\in(0,1]$, the function $F$ satisfies $\mathcal{H}_{2-\delta}^{loc}$. \label{lem:H1loc} \end{lemme} Let $\alpha\geqslant3$, $\beta>0$ and $\varepsilon\in(0,1)$. As $F$ satisfies $\mathcal{H}_{1}$, the first and second claims of Theorem \ref{thm:sharp1} ensure that there exists a decreasing function $\phi$ such that: \begin{equation*} \forall t\geqslant t_0+\frac{\beta\alpha}{3},\quad F(x(t))-F^*\leqslant \phi(t), \end{equation*} where $\phi(t)\rightarrow0$ as $t\rightarrow+\infty$. Therefore, as $F$ satisfies $\mathcal{G}_\mu^2$, for all $\nu>0$, there exists $T\geqslant t_0+\frac{\beta\alpha}{3}$ such that for all $t\geqslant T$, $x(t)\in B(x^*,\nu)$. As a consequence, for all $\delta\in(0,1]$, there exists $T\geqslant t_0+\frac{\beta\alpha}{3}$ such that for all $t\geqslant T$, \begin{equation*} F(x(t))-F^*\leqslant \frac{1}{2-\delta}\left\langle x(t)-x^*,\nabla F(x(t))\right\rangle. \end{equation*} Let $\delta=\frac{4\varepsilon}{\alpha+\varepsilon}$. As $\alpha\geqslant3$ and $\varepsilon\in(0,1)$, the condition $\alpha\geqslant 1+\frac{2}{2-\delta}$ is satisfied.
Then, by setting $T$ as the initial time in \eqref{eq:Hessian_ODE}, the first claim of Theorem \ref{thm:sharp1} gives the first result and the third claim of Theorem \ref{thm:sharp1} guarantees that there exists $M>0$ such that \begin{equation*} \int_{T}^{+\infty}u^{\alpha-\varepsilon}\|\nabla F(x(u))\|^2du<M. \end{equation*} As $F$ satisfies $\mathcal{G}_\mu^2$, Lemma \ref{lem:Loja} ensures that \begin{equation*} \int_{T}^{+\infty}u^{\alpha-\varepsilon}\left(F(x(u))-F^*\right) du <\frac{M}{2\mu}. \end{equation*} On the other hand, as $u\mapsto u^{\alpha-\varepsilon}(F(x(u))-F^*)$ is bounded on $\left(t_0,T\right)$, we have that: \begin{equation*} \int_{t_0}^{T}u^{\alpha-\varepsilon}\left(F(x(u))-F^*\right) du <+\infty, \end{equation*} and consequently, \begin{equation*} \int_{t_0}^{+\infty}u^{\alpha-\varepsilon}\left(F(x(u))-F^*\right) du <+\infty. \end{equation*} \qed \subsection{Proof of Theorem \ref{thm:flat}} \label{sec:proof_flat1} We define $\mathcal{E}$ as the following Lyapunov function: $$ \begin{aligned} \mathcal{E}(t) =&\left(t^{2}+t \beta(\lambda-\alpha)\right)\left(F(x(t))-F^{*}\right)+\frac{\xi}{2}\left\|x(t)-x^{*}\right\|^{2}\\ &+\frac{1}{2}\left\|\lambda\left(x(t)-x^{*}\right)+t(\dot{x}(t)+\beta \nabla F(x(t)))\right\|^{2}, \end{aligned} $$ where $x^*$ is the unique minimizer of $F$, $\lambda\in\mathbb{R}$ and $\xi\in\mathbb{R}$. Let $\mathcal{H}$ be the function defined as follows $$ \mathcal{H}(t)=t^{p} \mathcal{E}(t), $$ where $p>0$. Using the notations $$ \begin{aligned} &a(t)=t\left(F(x(t))-F^{*}\right), \quad b(t)=\frac{1}{2t}\left\|\lambda\left(x(t)-x^{*}\right)+t( \dot{x}(t)+\beta \nabla F(x(t)))\right\|^{2}, \\ &c(t)=\frac{1}{2t}\left\|x(t)-x^{*}\right\|^{2}, \end{aligned} $$ we have $$ \mathcal{E}(t)=(t+\beta(\lambda-\alpha))a(t)+t(b(t)+\xi c(t)). $$ Let $p=\frac{4}{\gamma_{1}-2}$, $\lambda=\frac{2}{\gamma_{1}-2}$ and $\xi=\lambda(\lambda+1-\alpha)$. 
\begin{lemme} Let $p=\frac{4}{\gamma_{1}-2}$, $\lambda=\frac{2}{\gamma_{1}-2}$ and $\xi=\lambda(\lambda+1-\alpha)$. Then for all $t \geqslant \max(t_0,\beta (\alpha-\lambda),\beta (2(\alpha-\lambda)-1))$, \begin{equation} \begin{aligned}\mathcal{E}^{\prime}(t) &\leqslant((2-\lambda\gamma_1)t+\beta(\lambda-\alpha-\lambda\gamma_1(2(\lambda-\alpha)+1)))\left(F(x(t))-F^{*}\right) \\ & +2(\lambda+1-\alpha)b(t)-2\lambda^2(\lambda+1-\alpha)c(t)-\beta t(t+\beta (\lambda-\alpha)) \| \nabla F(x(t))\|^{2}. \\ \end{aligned} \end{equation}\label{lem:flat2} The proof of this lemma is given in Section \ref{sec:proof_flat2}. \end{lemme} \noindent Consequently, for all $t \geqslant \max(t_0,\beta (\alpha-\lambda),\beta (2(\alpha-\lambda)-1))$: $$ \begin{aligned} \mathcal{H}^{\prime}(t)&=t^{p-1}\left(p \mathcal{E}(t)+t \mathcal{E}^{\prime}(t)\right)\leqslant t^{p-1}[p \mathcal{E}(t)+2t(\lambda+1-\alpha)b(t)\\ &+t((2-\lambda\gamma_1)t+\beta(\lambda-\alpha-\lambda\gamma_1(2(\lambda-\alpha)+1)))\left(F(x(t))-F^{*}\right)\\ &-2t(\lambda^2(\lambda+1-\alpha))c(t)-\beta t^2(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}]\\ &\leqslant t^{p}\left((2-\gamma_1 \lambda+p) a(t)+(2 (\lambda-\alpha+1)+p) b(t)+\lambda(\lambda+1-\alpha)(p-2 \lambda) c(t)\right)\\ &+t^{p-1}\beta((p+1)(\lambda-\alpha)-\lambda\gamma_1(2(\lambda-\alpha)+1))a(t)\\&-\beta t^{p+1}(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}. \end{aligned} $$ As $p=\frac{4}{\gamma_{1}-2}$ and $\lambda=\frac{2}{\gamma_{1}-2}$ this implies that \begin{equation} \mathcal{H}^{\prime}(t)\leqslant 2 t^{p}\left(\frac{\gamma_{1}+2}{\gamma_{1}-2}-\alpha\right)b(t)+t^{p-1}\beta C_1 a(t)-\beta t^{p+1}(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}, \label{eq:Hprime1} \end{equation} where $C_1=(p+1)(\lambda-\alpha)-\lambda\gamma_1(2(\lambda-\alpha)+1)$.
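The algebraic cancellations used in this computation can be verified in exact rational arithmetic. The following sketch is ours (the sample values of $\gamma_1$ and $\alpha$ are arbitrary rationals with $\gamma_1>2$): it checks that $2-\gamma_1\lambda+p=0$, that $p-2\lambda=0$ (so the $c(t)$ term vanishes), and that $2(\lambda-\alpha+1)+p=2\left(\frac{\gamma_1+2}{\gamma_1-2}-\alpha\right)$:

```python
from fractions import Fraction

def check_flat_coefficients(gamma1, alpha):
    """Exact-arithmetic check of the coefficient cancellations with
    p = 4/(gamma1 - 2) and lam = 2/(gamma1 - 2)."""
    p = Fraction(4) / (gamma1 - 2)
    lam = Fraction(2) / (gamma1 - 2)
    assert 2 - gamma1 * lam + p == 0                 # coefficient of a(t) in t
    assert p - 2 * lam == 0                          # coefficient of c(t)
    # coefficient of b(t):
    assert 2 * (lam - alpha + 1) + p == 2 * ((gamma1 + 2) / (gamma1 - 2) - alpha)
    return True
```
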
Under the assumption $\alpha\geqslant\frac{\gamma_1+2}{\gamma_1-2}$, $C_1$ is strictly positive and \eqref{eq:Hprime1} ensures that \begin{equation} \mathcal{H}^{\prime}(t)\leqslant t^{p-1}\beta C_1 a(t)-\beta t^{p+1}(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}. \label{eq:Hprime2} \end{equation} We define $\mathcal{G}$ as follows: $$\mathcal{G}:t\mapsto \mathcal{H}(t)-\beta C_1 \int_{t_1}^t u^{p-1}a(u)du,$$ where $t_1>\max\left\{t_0,\beta(2(\alpha-\lambda)-1),t_m \right\}$ and $t_m>\beta(\alpha-\lambda)$ satisfies \begin{equation}\frac{t_m^p}{(t_m+\beta(\lambda-\alpha))^{p+1}}\beta C_1\leqslant\frac{1}{2}.\end{equation} As $t\mapsto\frac{t^p}{(t+\beta(\lambda-\alpha))^{p+1}}$ is decreasing on $\left(\beta(\alpha-\lambda),+\infty\right)$ and tends towards $0$, $t_m$ is well defined. Equation \eqref{eq:Hprime2} implies that $\mathcal{G}^\prime(t)\leqslant0$ for all $t\geqslant t_1$ and therefore there exists $A\in\mathbb{R}$ such that $\mathcal{G}(t)\leqslant A$ for all $t\geqslant t_1$, and for all $t\geqslant t_1$, $$\begin{aligned} \mathcal{G}(t)&=t^p((t+\beta(\lambda-\alpha))a(t)+tb(t)+t\xi c(t))-\beta C_1 \int_{t_1}^t u^{p-1}a(u)du\\ &\geqslant t^p((t+\beta(\lambda-\alpha))a(t)+t\xi c(t))-\beta C_1 \int_{t_1}^t u^{p-1}a(u)du\\ &\geqslant (t+\beta(\lambda-\alpha))^{p+1}a(t)+t^{p+1}\xi c(t)-\beta C_1 \int_{t_1}^t u^{p-1}a(u)du.\\ \end{aligned}$$ Moreover, $-\beta C_1 \int_{t_1}^t u^{p-1}a(u)du\geqslant-\left(\frac{t_1}{t_1+\beta(\lambda-\alpha)}\right)^{p+1}\beta C_1\int_{t_1}^t\frac{(u+\beta(\lambda-\alpha))^{p+1}a(u)}{u^2}du$. Recall that $F$ satisfies $\mathcal{G}_\mu^{\gamma_2}$, thus, setting $K=\frac{\mu}{2}>0$, we have \begin{equation*} \forall x\in\mathbb{R}^n,\quad K d\left(x, X^{*}\right)^{\gamma_{2}} \leqslant F(x)-F^*, \end{equation*} and therefore $$\begin{aligned} t^{p+1}\xi c(t)&=t^p\frac{\xi}{2}\|x(t)-x^*\|^2=t^p\frac{\xi}{2K^{\frac{2}{\gamma_2}}}\left(K\|x(t)-x^*\|^{\gamma_2}\right)^{\frac{2}{\gamma_2}}.
\end{aligned}$$ As $\xi<0$ and $F$ has a unique minimizer, $$\begin{aligned} t^{p+1}\xi c(t)&=t^p\frac{\xi}{2K^{\frac{2}{\gamma_2}}}\left(Kd(x(t),X^*)^{\gamma_2}\right)^{\frac{2}{\gamma_2}} \geqslant t^p\frac{\xi}{2K^{\frac{2}{\gamma_2}}}\left(F(x(t))-F^*\right)^{\frac{2}{\gamma_2}}\\ &\geqslant t^{p-\frac{2}{\gamma_2}}\frac{\xi}{2K^{\frac{2}{\gamma_2}}}a(t)^{\frac{2}{\gamma_2}}\geqslant t^{p-\frac{2}{\gamma_2}-(p+1)\frac{2}{\gamma_2}}\frac{\xi}{2K^{\frac{2}{\gamma_2}}}\left(t^{p+1}a(t)\right)^{\frac{2}{\gamma_2}}\\ &\geqslant t^{p-\frac{2}{\gamma_2}-(p+1)\frac{2}{\gamma_2}}\left(\frac{t_1}{t_1+\beta(\lambda-\alpha)}\right)^{\frac{2(p+1)}{\gamma_2}}\frac{\xi}{2K^{\frac{2}{\gamma_2}}}\left((t+\beta(\lambda-\alpha))^{p+1}a(t)\right)^{\frac{2}{\gamma_2}}. \end{aligned}$$ Recall that $p=\frac{4}{\gamma_1-2}$, therefore $p-\frac{2}{\gamma_2}-(p+1)\frac{2}{\gamma_2}=\frac{4(\gamma_2-\gamma_1)}{\gamma_2(\gamma_1-2)}\leqslant 0$ since $\gamma_1\geqslant\gamma_2$. As $t\geqslant t_1$, we have $$ t^{p+1}\xi c(t)\geqslant \frac{t_1^{p-\frac{2}{\gamma_2}}}{(t_1+\beta(\lambda-\alpha))^{\frac{2(p+1)}{\gamma_2}}}\frac{\xi}{2K^{\frac{2}{\gamma_2}}}\left((t+\beta(\lambda-\alpha))^{p+1}a(t)\right)^{\frac{2}{\gamma_2}}. $$ We define $v:t\mapsto(t+\beta(\lambda-\alpha))^{p+1}a(t)$ for all $t\geqslant t_1$. Then, for all $t\geqslant t_1$: \begin{equation} \mathcal{G}(t)\geqslant v(t)-C_2v(t)^{\frac{2}{\gamma_2}}-\left(\frac{t_1}{t_1+\beta(\lambda-\alpha)}\right)^{p+1}\beta C_1\int_{t_1}^t\frac{v(u)}{u^2}du, \label{eq:control_Gv} \end{equation} where $C_2=-\frac{t_1^{p-\frac{2}{\gamma_2}}}{(t_1+\beta(\lambda-\alpha))^{\frac{2(p+1)}{\gamma_2}}}\frac{\xi}{2K^{\frac{2}{\gamma_2}}}>0$. Let $t_2>t_1$ and $t^*=\underset{t\in[t_1,t_2]}{\text{argmax}}~v(t)$.
Then, $$\begin{aligned} \mathcal{G}(t^*)&\geqslant v(t^*)-C_2v(t^*)^{\frac{2}{\gamma_2}}-\left(\frac{t_1}{t_1+\beta(\lambda-\alpha)}\right)^{p+1}\beta C_1\int_{t_1}^{t^*}\frac{v(u)}{u^2}du\\ &\geqslant v(t^*)-C_2 v(t^*)^{\frac{2}{\gamma_2}}-\left(\frac{t_m}{t_m+\beta(\lambda-\alpha)}\right)^{p+1}\beta C_1\int_{t_m}^{+\infty}\frac{v(t^*)}{u^2}du\\ &\geqslant v(t^*)-C_2 v(t^*)^{\frac{2}{\gamma_2}}-\frac{t_m^p}{\left(t_m+\beta(\lambda-\alpha)\right)^{p+1}}\beta C_1v(t^*)\\ &\geqslant \frac{1}{2}v(t^*)-C_2 v(t^*)^{\frac{2}{\gamma_2}}. \end{aligned}$$ As $\mathcal{G}(t)\leqslant A$ for all $t\geqslant t_1$, we get that: $$v(t^*)-2C_2 v(t^*)^{\frac{2}{\gamma_2}}\leqslant 2A,$$ and consequently \begin{equation} v(t^*)^{\frac{2}{\gamma_2}}\left(v(t^*)^{1-\frac{2}{\gamma_2}}-2C_2\right)\leqslant 2A. \label{eq:control_v1} \end{equation} \begin{lemme} Let $x\in\mathbb{R}^+$, $\delta\in(0,1)$, $K_1>0$ and $K_2>0$. Then, $$x^\delta(x^{1-\delta}-K_1)\leqslant K_2\quad \implies\quad x\leqslant \left(K_2^{1-\delta}+K_1\right)^{\frac{1}{1-\delta}}.$$ \label{lem:control_v} \end{lemme} Applying Lemma \ref{lem:control_v} to \eqref{eq:control_v1} we get that \begin{equation} v(t^*)\leqslant\left((2A)^{1-\frac{2}{\gamma_2}}+2C_2\right)^{\frac{\gamma_2}{\gamma_2-2}}, \end{equation} and thus for all $t\in[t_1,t_2]$ \begin{equation} v(t)\leqslant\left((2A)^{1-\frac{2}{\gamma_2}}+2C_2\right)^{\frac{\gamma_2}{\gamma_2-2}}. \end{equation} This bound does not depend on $t_2$ so we can deduce that $v$ is bounded on $[t_1,+\infty)$. As a consequence, there exists $M>0$ such that for all $t\geqslant t_1$: $$v(t)=(t+\beta(\lambda-\alpha))^{p+1}a(t)=t(t+\beta(\lambda-\alpha))^{p+1}(F(x(t))-F^*)\leqslant M,$$ which implies that \begin{equation} F(x(t))-F^*\leqslant \frac{M}{t(t+\beta(\lambda-\alpha))^{p+1}}\leqslant \left(\frac{t_1}{t_1+\beta(\lambda-\alpha)}\right)^{p+1} \frac{M}{t^{p+2}}, \end{equation} i.e. 
as $t\rightarrow+\infty$ \begin{equation} F(x(t))-F^*=\mathcal{O}\left(t^{-\frac{2\gamma_1}{\gamma_1-2}}\right). \end{equation} Let $\mathcal{F}$ be defined by $$\mathcal{F}:t\mapsto \mathcal{H}(t)-\beta C_1 \int_{t_1}^t u^{p-1}a(u)du+\beta \int_{t_1}^t u^{p+1}(u+\beta(\lambda-\alpha))\|\nabla F(x(u))\|^2du.$$ Equation \eqref{eq:Hprime2} implies that $\mathcal{F}^\prime(t)\leqslant0$ for all $t\geqslant t_1$ and therefore there exists $B\in\mathbb{R}$ such that $\mathcal{F}(t)\leqslant B$ for all $t\geqslant t_1$. By applying \eqref{eq:control_Gv} we get that for all $t\geqslant t_1$ $$\begin{aligned}\mathcal{F}(t)&\geqslant v(t)-C_2v(t)^{\frac{2}{\gamma_2}}-\left(\frac{t_1}{t_1+\beta(\lambda-\alpha)}\right)^{p+1}\beta C_1\int_{t_1}^t\frac{v(u)}{u^2}du\\&+\beta \int_{t_1}^t u^{p+1}(u+\beta(\lambda-\alpha))\|\nabla F(x(u))\|^2du.\end{aligned}$$ We proved that there exists $M>0$ such that for all $t\geqslant t_1$, $v(t)\leqslant M$. Hence, $$-\left(\frac{t_1}{t_1+\beta(\lambda-\alpha)}\right)^{p+1}\beta C_1\int_{t_1}^t\frac{v(u)}{u^2}du\geqslant -M\beta C_1\frac{t_1^p}{(t_1+\beta(\lambda-\alpha))^{p+1}}.$$ \begin{lemme} Let $g:x\mapsto x-Kx^\delta$ for some $K>0$ and $\delta\in(0,1)$. Then for all $x\geqslant 0$, $$g(x)\geqslant K(\delta-1)(\delta K)^{\frac{\delta}{1-\delta}}.$$ \label{lem:min_g} \end{lemme} Lemma \ref{lem:min_g} ensures that for all $t\geqslant t_1$ \begin{equation} v(t)-C_2v(t)^{\frac{2}{\gamma_2}}\geqslant -C_2\left(1-\frac{2}{\gamma_2}\right)\left(\frac{2C_2}{\gamma_2}\right)^\frac{2}{\gamma_2-2}. \end{equation} Thus, $$\begin{aligned} \mathcal{F}(t)&\geqslant -C_2\left(1-\frac{2}{\gamma_2}\right)\left(\frac{2C_2}{\gamma_2}\right)^\frac{2}{\gamma_2-2}-M\beta C_1\frac{t_1^p}{(t_1+\beta(\lambda-\alpha))^{p+1}}\\&+\beta \int_{t_1}^t u^{p+1}(u+\beta(\lambda-\alpha))\|\nabla F(x(u))\|^2du. 
\end{aligned}$$ As there exists $B\in\mathbb{R}$ such that $\mathcal{F}(t)\leqslant B$ for all $t\geqslant t_1$, we can deduce that $$\begin{aligned} \beta\int_{t_1}^t u^{p+1}(u+\beta(\lambda-\alpha))\|\nabla F(x(u))\|^2du&\leqslant B+C_2\left(1-\frac{2}{\gamma_2}\right)\left(\frac{2C_2}{\gamma_2}\right)^\frac{2}{\gamma_2-2}\\&+M\beta C_1\frac{t_1^p}{(t_1+\beta(\lambda-\alpha))^{p+1}}, \end{aligned}$$ and therefore \begin{equation} \int_{t_1}^{+\infty} (u+\beta(\lambda-\alpha))^{\frac{2\gamma_1}{\gamma_1-2}}\|\nabla F(x(u))\|^2du<+\infty. \end{equation} By using the same arguments as in the proof of Theorem 1 and the boundedness of $u\mapsto(u+\beta(\lambda-\alpha))^{\frac{2\gamma_1}{\gamma_1-2}}\|\nabla F(x(u))\|^2$ on $(t_0,t_1)$, we conclude that: \begin{equation} \int_{t_0}^{+\infty} u^{\frac{2\gamma_1}{\gamma_1-2}}\|\nabla F(x(u))\|^2du<+\infty. \end{equation} \qed \appendix \section{Appendix} \subsection{Proof of Corollary \ref{cor:sharp2}} \label{sec:proof_sharp3} The first claim is obtained by applying the following lemma to Theorem \ref{thm:sharp2}. The proof of this lemma is given in Section \ref{sec:proof_delta+1}. \begin{lemme} Let $F: \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a convex function having a non-empty set of minimizers and let $F^*=\inf\limits_{x\in\mathbb{R}^n}F(x)$. Assume that for some $t_1>0$ and $\delta>0$, $F$ satisfies: $$\int_{t_1}^{+\infty}u^\delta(F(x(u))-F^*)du<+\infty.$$ Let $z:t\mapsto\frac{\int_{t/2}^tu^\delta x(u)du}{\int_{t/2}^tu^\delta du}$. Then, as $t\rightarrow+\infty$, \begin{equation} F(z(t))-F^*=o\left(t^{-\delta-1}\right). \end{equation} \label{lem:delta+1} \end{lemme} The second and third claims are proved by applying Lemma \ref{lem:delta+1phi} to $\phi:x\mapsto F(x)-F^*$. The proof of this lemma is given in Section \ref{subsection:delta+1phi}. 
\begin{lemme} Let $\phi: \mathbb{R}^{n} \rightarrow \mathbb{R}^+$ such that for some $t_1>0$ and $\delta>0$, $\phi$ satisfies: \begin{equation*} \int_{t_1}^{+\infty}u^\delta\phi(x(u))du<+\infty. \end{equation*} Then, as $t\rightarrow+\infty$, \begin{equation} \inf\limits_{u\in[t/2,t]}\phi(x(u))=o\left(t^{-\delta-1}\right)\quad\mbox{ and }\quad \liminf\limits_{t\rightarrow+\infty} t^{\delta+1}\log (t)\phi(x(t))=0. \end{equation} \label{lem:delta+1phi} \end{lemme} \subsection{Proof of Corollary \ref{cor:flat1}} \label{sec:proof_flat_cor} Let $F: \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a convex $C^2$ function having a unique minimizer $x^*$. Assume that $F$ satisfies $\mathcal{H}_{\gamma_1}$ and $\mathcal{G}_\mu^{\gamma_2}$ for some $\gamma_1>2$, $\gamma_2>2$ such that $\gamma_{1}\geqslant\gamma_{2}$ and $\mu>0$. Let $x$ be a solution of \eqref{eq:Hessian_ODE} for all $t\geqslant t_0$ where $t_0>0$, $\alpha\geqslant\frac{\gamma_{1}+2}{\gamma_{1}-2}$ and $\beta>0$. Theorem \ref{thm:flat} ensures that: \begin{equation*} \int_{t_0}^{+\infty}u^{\frac{2\gamma_1}{\gamma_1-2}}\|\nabla F(x(u))\|^2du<+\infty. \end{equation*} Moreover, as $F$ satisfies $\mathcal{G}_\mu^{\gamma_2}$ for some $\gamma_2>2$, Lemma \ref{lem:Loja} implies that: \begin{equation} \int_{t_0}^{+\infty}u^{\frac{2\gamma_1}{\gamma_1-2}}\left(F(x(u))-F^*\right)^{\frac{2(\gamma_2-1)}{\gamma_2}}du<+\infty. \label{eq:cor2_1} \end{equation} By applying Lemma \ref{lem:delta+1phi} to $\phi:x\mapsto\left(F(x)-F^*\right)^{\frac{2(\gamma_2-1)}{\gamma_2}}$, we get that as $t$ tends to $+\infty$, \begin{equation*} \inf\limits_{u\in\left[t/2,t\right]}\left(F(x(u))-F^*\right)^{\frac{2(\gamma_2-1)}{\gamma_2}}=o\left(t^{-\frac{3\gamma_1-2}{\gamma_1-2}}\right). \end{equation*} Hence, \begin{equation*} \inf\limits_{u\in\left[t/2,t\right]}F(x(u))-F^*=o\left(t^{-\frac{(3\gamma_1-2)\gamma_2}{2(\gamma_1-2)(\gamma_2-1)}}\right). 
\end{equation*} \subsection{Proof of Lemma \ref{lem:sharp1}} \label{subsection:Proof for Lemma 1} Recall that for all $t\geqslant t_0$: \begin{equation*} \mathcal{E}(t) =\left(t^{2}+t \beta(\lambda-\alpha)\right)\left(F(x(t))-F^{*}\right)+\frac{1}{2}\left\|\lambda\left(x(t)-x^{*}\right)+t(\dot{x}(t)+\beta \nabla F(x(t)))\right\|^{2}. \end{equation*} The Lyapunov function $\mathcal{E}$ is differentiable and simple calculations give that: \begin{equation*} \begin{aligned} \mathcal{E}^{\prime}(t) &=(2 t+\beta(\lambda-\alpha))\left(F(x(t))-F^{*}\right)+\left(t^{2}+t \beta(\lambda-\alpha)\right)\langle\nabla F(x(t)), \dot{x}(t)\rangle \\ &+\lambda(\lambda+1-\alpha)\left\langle x(t)-x^{*}, \dot{x}(t)\right\rangle+\lambda(\beta-t)\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle \\ &+t(\lambda+1-\alpha)\|\dot{x}(t)\|^2+t(\beta-t)\langle\nabla F(x(t)), \dot{x}(t)\rangle \\ &+t \beta(\lambda+1-\alpha)\langle\nabla F(x(t)), \dot{x}(t)\rangle+t \beta(\beta-t)\|\nabla F(x(t))\|^2 \\ &=(2 t+\beta(\lambda-\alpha))\left(F(x(t))-F^{*}\right)+2 t \beta(\lambda+1-\alpha)\langle\nabla F(x(t)), \dot{x}(t)\rangle \\ &+\lambda(\lambda+1-\alpha)\left\langle x(t)-x^{*}, \dot{x}(t)\right\rangle+\lambda(\beta-t)\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle \\ &+t(\lambda+1-\alpha)\|\dot{x}(t)\|^2+t \beta(\beta-t)\|\nabla F(x(t))\|^2. 
\end{aligned} \end{equation*} By rearranging the terms, we get that: \begin{equation*} \begin{aligned} \mathcal{E}^\prime(t)&=(2 t+\beta(\lambda-\alpha))\left(F(x(t))-F^{*}\right) \\ &+\frac{\lambda+1-\alpha}{t}\left[t^{2}\|\dot{x}(t)\|^2+t^{2} \beta^{2}\|\nabla F(x(t))\|^2+2 t^{2} \beta\langle\nabla F(x(t)), \dot{x}(t)\rangle\right] \\ &+\frac{\lambda(\lambda+1-\alpha)}{t}\left[\left\langle x(t)-x^{*}, t \dot{x}(t)\right\rangle+\left\langle x(t)-x^{*}, t \beta \nabla F(x(t))\right\rangle\right] \\ &-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}-\lambda(t+\beta(\lambda-\alpha))\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle\\ &=(2 t+\beta(\lambda-\alpha))\left(F(x(t))-F^{*}\right)+\frac{\lambda+1-\alpha}{t}\|t(\dot{x}(t)+\beta \nabla F(x(t)))\|^{2} \\ &+\frac{\lambda(\lambda+1-\alpha)}{t}\left\langle x(t)-x^{*}, t(\dot{x}(t)+\beta \nabla F(x(t)))\right\rangle \\ &-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}-\lambda(t+\beta(\lambda-\alpha))\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle. \end{aligned} \end{equation*} A last step allows us to conclude that: \begin{equation*} \begin{aligned} \mathcal{E}^\prime(t)&=(2-\gamma \lambda)(t+\beta(\lambda-\alpha))\left(F(x(t))-F^{*}\right)-\beta(\lambda-\alpha)\left(F(x(t))-F^{*}\right) \\ &-\lambda(t+\beta(\lambda-\alpha))\left[-\gamma\left(F(x(t))-F^{*}\right)+\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle\right]\\ &+\frac{\lambda(\lambda+1-\alpha)}{t}\left\langle x(t)-x^{*}, t(\dot{x}(t)+\beta \nabla F(x(t)))\right\rangle \\ &+\frac{\lambda+1-\alpha}{t}\|t(\dot{x}(t)+\beta \nabla F(x(t)))\|^{2}-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}. \end{aligned} \end{equation*} We distinguish several terms containing $F(x(t))-F^*$ as they will be treated separately. 
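As a quick sanity check on the grouping above, note that the coefficient of $\langle\nabla F(x(t)), \dot{x}(t)\rangle$ is conserved when passing from the first to the second expression of $\mathcal{E}^\prime(t)$: collecting it from the first expression gives $$\left(t^{2}+t \beta(\lambda-\alpha)\right)+t(\beta-t)+t \beta(\lambda+1-\alpha)=t\beta\left((\lambda-\alpha)+1+(\lambda+1-\alpha)\right)=2 t \beta(\lambda+1-\alpha),$$ in accordance with the second expression. 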
\subsection{Proof of Lemma \ref{lem:sharp3}} \label{subsection:Proof for Lemma 3} Notice that for all $t\geqslant t_0$, $$ \begin{aligned} \frac{\gamma \lambda-2}{t} \mathcal{E}(t)=&-(2-\gamma \lambda)(t+\beta(\lambda-\alpha))\left(F(x(t))-F^{*}\right)\\ &+\frac{1}{2} \frac{\gamma \lambda-2}{t}\left\|\lambda\left(x(t)-x^{*}\right)+t(\dot{x}(t)+\beta \nabla F(x(t)))\right\|^{2}. \end{aligned} $$ By applying Lemma \ref{lem:sharp1}, we get that: $$ \begin{aligned} \mathcal{E}^{\prime}(t)+\frac{\gamma \lambda-2}{t} \mathcal{E}(t) &=\frac{1}{2} \frac{\gamma \lambda-2}{t}\left\|\lambda\left(x(t)-x^{*}\right)+t(\dot{x}(t)+\beta \nabla F(x(t)))\right\|^{2} \\ &-\lambda(t+\beta(\lambda-\alpha))\left[-\gamma\left(F(x(t))-F^{*}\right)+\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle\right]\\ &+\frac{\lambda(\lambda+1-\alpha)}{t}\left\langle x(t)-x^{*}, t(\dot{x}(t)+\beta \nabla F(x(t)))\right\rangle \\ &+\frac{\lambda+1-\alpha}{t}\|t(\dot{x}(t)+\beta \nabla F(x(t)))\|^{2}-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2} \\ &+\beta(\alpha-\lambda)\left(F(x(t))-F^{*}\right). 
\end{aligned} $$ As $F$ satisfies $\mathcal{H}_\gamma$, for all $t\geqslant \max\{ t_0,\beta(\alpha-\lambda)\}$, \begin{equation*} \lambda(t+\beta(\lambda-\alpha))\left[-\gamma\left(F(x(t))-F^{*}\right)+\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle\right]\geqslant0, \end{equation*} and hence $$ \begin{aligned} \mathcal{E}^{\prime}(t)+\frac{\gamma \lambda-2}{t} \mathcal{E}(t) & \leqslant \frac{1}{2} \frac{\gamma \lambda-2}{t}\left\|\lambda\left(x(t)-x^{*}\right)+t(\dot{x}(t)+\beta \nabla F(x(t)))\right\|^{2} \\ &+\frac{\lambda(\lambda+1-\alpha)}{t}\left\langle x(t)-x^{*}, t(\dot{x}(t)+\beta \nabla F(x(t)))\right\rangle \\ &+\frac{\lambda+1-\alpha}{t}\|t(\dot{x}(t)+\beta \nabla F(x(t)))\|^{2} \\ &+\beta(\alpha-\lambda)\left(F(x(t))-F^{*}\right)-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}\\ &\leqslant\frac{\gamma \lambda-2}{2 t}\left\|\lambda\left(x(t)-x^{*}\right)\right\|^{2} \\ &+\left(\frac{\lambda+1-\alpha}{t}+\frac{\gamma \lambda-2}{2 t}\right)\|t(\dot{x}(t)+\beta \nabla F(x(t)))\|^{2} \\ &+\left(\frac{\lambda+1-\alpha}{t}+\frac{\gamma \lambda-2}{t}\right)\left\langle \lambda(x(t)-x^{*}), t(\dot{x}(t)+\beta \nabla F(x(t)))\right\rangle \\ &+\beta(\alpha-\lambda)\left(F(x(t))-F^{*}\right)-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}.\end{aligned}$$ Noticing that $2(\lambda-\alpha)+\gamma\lambda=0$, we get that: $$\begin{aligned} \mathcal{E}^{\prime}(t)+\frac{\gamma \lambda-2}{t} \mathcal{E}(t)&\leqslant \frac{\gamma \lambda-2}{2 t}\left\|\lambda\left(x(t)-x^{*}\right)\right\|^{2} \\ &+\frac{\lambda+\gamma \lambda-\alpha-1}{t}\left\langle\lambda\left(x(t)-x^{*}\right), t(\dot{x}(t)+\beta \nabla F(x(t)))\right\rangle\\ &+\beta(\alpha-\lambda)\left(F(x(t))-F^{*}\right)-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}. 
\end{aligned} $$ Consequently, \begin{equation} \begin{aligned} \mathcal{E}^{\prime}(t)+\frac{\gamma \lambda-2}{t} \mathcal{E}(t) & \leqslant K(\alpha)\left( \;\dfrac{\lambda}{t}\left\|x(t)-x^{*}\right\|^{2}+\left\langle x(t)-x^{*}, \dot{x}(t)+\beta \nabla F(x(t))\right\rangle \right) \\ &+\beta(\alpha-\lambda)\left(F(x(t))-F^{*}\right)-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}, \end{aligned} \end{equation} where $K(\alpha)=\frac{2 \alpha \gamma}{(\gamma+2)^{2}}\left(\alpha-1-\frac{2}{\gamma}\right)$. \subsection{Proof of Lemma \ref{lem:sharp4}} \label{subsection:Proof for Lemma 4} Lemma \ref{lem:sharp3} guarantees that for all $t> \beta(\alpha-\lambda)$: $$ \begin{aligned} \mathcal{E}^{\prime}(t)+\frac{\gamma \lambda-2}{t} \mathcal{E}(t) & \leqslant K(\alpha)\left( \;\dfrac{\lambda}{t}\left\|x(t)-x^{*}\right\|^{2}+\left\langle x(t)-x^{*}, \dot{x}(t)+\beta \nabla F(x(t))\right\rangle \right) \\ &+\beta(\alpha-\lambda)\left(F(x(t))-F^{*}\right)-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}. 
\end{aligned} $$ By adding $\left(\frac{\gamma \lambda-2}{t+\beta(\lambda-\alpha)}-\frac{\gamma \lambda-2}{t}\right) \mathcal{E}(t)$ to both sides we get: $$ \begin{aligned} \mathcal{E}^{\prime}(t)+\frac{\gamma \lambda-2}{t+\beta(\lambda-\alpha)} \mathcal{E}(t) & \leqslant K(\alpha)\left(\frac{\lambda}{t}\left\|x(t)-x^{*}\right\|^{2}+\left\langle x(t)-x^{*}, \dot{x}(t)+\beta \nabla F(x(t))\right\rangle\right) \\ &+\beta(\alpha-\lambda)\left(F(x(t))-F^{*}\right)+\left(\frac{\gamma \lambda-2}{t+\beta(\lambda-\alpha)}-\frac{\gamma \lambda-2}{t}\right) \mathcal{E}(t)\\&-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}\\ & \leqslant K(\alpha)\left(\frac{\lambda}{t}\left\|x(t)-x^{*}\right\|^{2}+\left\langle x(t)-x^{*}, \dot{x}(t)+\beta \nabla F(x(t))\right\rangle\right) \\ &+\beta(\alpha-\lambda)\left(F(x(t))-F^{*}\right)+\frac{\beta(\alpha-\lambda)(\gamma\lambda-2)}{t(t+\beta(\lambda-\alpha))} \mathcal{E}(t)\\&-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}. \end{aligned} $$ Recall that for all $t> \beta(\alpha-\lambda)$, $$F(x(t))-F^{*} \leqslant \dfrac{\mathcal{E}(t)}{t(t+ \beta(\lambda-\alpha))}, $$ and thus: \begin{equation} \begin{aligned} \mathcal{E}^{\prime}(t)+\frac{\gamma \lambda-2}{t+\beta(\lambda-\alpha)} \mathcal{E}(t)& \leqslant K(\alpha)\left(\frac{\lambda}{t}\left\|x(t)-x^{*}\right\|^{2}+\left\langle x(t)-x^{*}, \dot{x}(t)+\beta \nabla F(x(t))\right\rangle\right) \\ &+\left(\frac{\beta(\alpha-\lambda)}{t(t+ \beta(\lambda-\alpha))}+\frac{\beta(\alpha-\lambda)(\gamma\lambda-2)}{t(t+\beta(\lambda-\alpha))}\right) \mathcal{E}(t) \\ &-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}\\ &\leqslant K(\alpha)\left(\frac{\lambda}{t}\left\|x(t)-x^{*}\right\|^{2}+\left\langle x(t)-x^{*}, \dot{x}(t)+\beta \nabla F(x(t))\right\rangle\right) \\ &+\frac{\beta(\alpha-\lambda)(\gamma \lambda-1)}{t^{2}+t \beta(\lambda-\alpha)} \mathcal{E}(t)-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}. 
\end{aligned} \label{eq:balise} \end{equation} The next step is to find a bound of $\frac{\lambda}{t}\left\|x(t)-x^{*}\right\|^{2}+\left\langle x(t)-x^{*}, \dot{x}(t)+\beta \nabla F(x(t))\right\rangle$ depending on $\mathcal{E}(t)$. This will be done by applying the inequalities of the following lemma which is proved in Section \ref{sec:proof_sharp5}. \begin{lemme} Let $u\in\mathbb{R}^n$, $v\in\mathbb{R}^n$ and $a>0$. Then, $$|\langle u,v\rangle|\leqslant\frac{a}{2}\|u\|^2+\frac{1}{2a}\|v\|^2,$$ and $$\|u\|^2\leqslant(1+a)\|u+v\|^2+\left(1+\frac{1}{a}\right)\|v\|^2.$$ \label{lem:sharp5} \end{lemme} Lemma \ref{lem:sharp5} ensures that for all $t> \beta(\alpha-\lambda)$ and $\theta>0$, \begin{equation} |\langle x(t)-x^{*}, \dot{x}(t)+\beta \nabla F(x(t))\rangle| \leqslant \frac{\sqrt{\mu}}{2}\left\|x(t)-x^{*}\right\|^{2}+\frac{1}{2 \sqrt{\mu}}\|\dot{x}(t)+\beta \nabla F(x(t))\|^{2}, \end{equation} and \begin{align} t^{2}\|\dot{x}(t)+\beta \nabla F(x(t))\|^{2} &\leqslant\left(1+\theta \frac{\alpha}{t \sqrt{\mu}}\right)\left\|\lambda\left(x(t)-x^{*}\right)+t (\dot{x}(t)+\beta \nabla F(x(t)))\right\|^{2}\nonumber\\ &+\lambda^{2}\left(1+\frac{t \sqrt{\mu}}{\theta \alpha}\right)\left\|x(t)-x^{*}\right\|^{2}. 
\end{align} Hence, for all $\theta>0$, $$ \begin{aligned} &\frac{\lambda}{t}\left\|x(t)-x^{*}\right\|^{2}+\left\langle x(t)-x^{*}, \dot{x}(t)+\beta \nabla F(x(t))\right\rangle \\ &\leqslant \left(\frac{\lambda}{t}+\frac{\sqrt{\mu}}{2}\right)\|x(t)-x^*\|^2+\frac{1}{2\sqrt{\mu}}\|\dot{x}(t)+\beta\nabla F(x(t))\|^2\\ &\leqslant \left(\frac{\lambda^2}{2\sqrt{\mu}t^2}+\frac{\lambda}{t}\left(1+\frac{\lambda}{2\theta\alpha}\right)+\frac{\sqrt{\mu}}{2}\right)\|x(t)-x^*\|^2\\ &+\left(\frac{\theta\alpha}{2 \mu t^3}+\frac{1}{2\sqrt{\mu}t^2}\right) \|\lambda(x(t)-x^*)+t (\dot{x}(t)+\beta \nabla F(x(t)))\|^2\\ &\leqslant \left(\frac{\lambda^2}{\mu^{3/2}t^2}+\frac{2\lambda}{\mu t}\left(1+\frac{\lambda}{2\theta\alpha}\right)+\frac{1}{\sqrt{\mu}}\right)(F(x(t))-F^*)\\ &+\left(\frac{\theta\alpha}{2\mu t^3}+\frac{1}{2\sqrt{\mu}t^2}\right) \|\lambda(x(t)-x^*)+t (\dot{x}(t)+\beta \nabla F(x(t)))\|^2, \end{aligned} $$ as $F$ satisfies $\mathcal{G}_\mu^2$ and has a unique minimizer.\\ As $\alpha>\lambda$ we have that $\frac{1}{t}<\frac{1}{t+\beta(\lambda-\alpha)}$ and thus: $$ \begin{aligned} &\frac{\lambda}{t}\left\|x(t)-x^{*}\right\|^{2}+\left\langle x(t)-x^{*}, \dot{x}(t)+\beta \nabla F(x(t))\right\rangle\\ &\leqslant\left(\tfrac{\lambda^2}{\mu^{3/2}(t+\beta(\lambda-\alpha))^2}+\tfrac{2\lambda}{\mu (t+\beta(\lambda-\alpha))}\left(1+\tfrac{\lambda}{2\theta\alpha}\right)+\tfrac{1}{\sqrt{\mu}}\right)(F(x(t))-F^*)\\ &+\left(\tfrac{\theta\alpha}{2\mu(t+\beta(\lambda-\alpha))^3}+\tfrac{1}{2\sqrt{\mu}(t+\beta(\lambda-\alpha))^2}\right) \|\lambda(x(t)-x^*)+t (\dot{x}(t)+\beta \nabla F(x(t)))\|^2\\ &\leqslant\left(\tfrac{\lambda^2}{\mu^{3/2}(t+\beta(\lambda-\alpha))^4}+\tfrac{2\lambda}{\mu (t+\beta(\lambda-\alpha))^3}\left(1+\tfrac{\lambda}{2\theta\alpha}\right)+\tfrac{1}{\sqrt{\mu}(t+\beta(\lambda-\alpha))^2}\right)t(t+\beta(\lambda-\alpha))(F(x(t))-F^*)\\ &+\left(\tfrac{\theta\alpha}{\mu(t+\beta(\lambda-\alpha))^3}+\tfrac{1}{\sqrt{\mu}(t+\beta(\lambda-\alpha))^2}\right)\frac{1}{2} 
\|\lambda(x(t)-x^*)+t (\dot{x}(t)+\beta \nabla F(x(t)))\|^2. \end{aligned}$$ \noindent The parameter $\theta$ is then defined to ensure that $\tfrac{2\lambda}{\mu (t+\beta(\lambda-\alpha))^3}\left(1+\tfrac{\lambda}{2\theta\alpha}\right)=\tfrac{\theta\alpha}{\mu(t+\beta(\lambda-\alpha))^3}$. This equality is satisfied for $\theta=\frac{2}{\gamma+2}(1+\sqrt{2})$ and this choice leads to the following inequalities: $$ \begin{aligned} &\frac{\lambda}{t}\left\|x(t)-x^{*}\right\|^{2}+\left\langle x(t)-x^{*}, \dot{x}(t)+\beta \nabla F(x(t))\right\rangle\\ &\leqslant\tfrac{1}{\mu (t+\beta(\lambda-\alpha))^2}\left(\tfrac{\lambda^2}{\sqrt{\mu}(t+\beta(\lambda-\alpha))^2}+\tfrac{\lambda}{t+\beta(\lambda-\alpha)}\left(1+\sqrt{2}\right)+\sqrt{\mu}\right)t(t+\beta(\lambda-\alpha))(F(x(t))-F^*)\\ &+\tfrac{1}{\mu(t+\beta(\lambda-\alpha))^2}\left(\tfrac{\lambda}{t+\beta(\lambda-\alpha)}(1+\sqrt{2})+\sqrt{\mu}\right)\frac{1}{2} \|\lambda(x(t)-x^*)+t (\dot{x}(t)+\beta \nabla F(x(t)))\|^2\\ &\leqslant \tfrac{1}{\mu (t+\beta(\lambda-\alpha))^2}\left(\tfrac{\lambda^2}{\sqrt{\mu}(t+\beta(\lambda-\alpha))^2}+\tfrac{\lambda}{t+\beta(\lambda-\alpha)}\left(1+\sqrt{2}\right)+\sqrt{\mu}\right)\mathcal{E}(t). 
\end{aligned}$$ Coming back to \eqref{eq:balise}, we get that: $$ \begin{aligned} \mathcal{E}^{\prime}(t)+\frac{\gamma \lambda-2}{t+\beta(\lambda-\alpha)} \mathcal{E}(t) &\leqslant K(\alpha)\left(\frac{\lambda}{t}\left\|x(t)-x^{*}\right\|^{2}+\left\langle x(t)-x^{*}, \dot{x}(t)+\beta \nabla F(x(t))\right\rangle\right) \\ &+\frac{\beta(\alpha-\lambda)(\gamma \lambda-1)}{t(t+\beta(\lambda-\alpha))} \mathcal{E}(t)-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}\\ &\leqslant \tfrac{K(\alpha)}{\mu (t+\beta(\lambda-\alpha))^2}\left(\tfrac{\lambda^2}{\sqrt{\mu}(t+\beta(\lambda-\alpha))^2}+\tfrac{\lambda}{t+\beta(\lambda-\alpha)}\left(1+\sqrt{2}\right)+\sqrt{\mu}\right)\mathcal{E}(t)\\ &+\frac{\beta(\alpha-\lambda)(\gamma \lambda-1)}{(t+\beta(\lambda-\alpha))^2} \mathcal{E}(t)-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}. \end{aligned} $$ By defining $C_0=\frac{\beta\sqrt{\mu}(\alpha-\lambda)(\gamma\lambda-1)}{K(\alpha)}$, this can be rewritten as: $$ \begin{aligned} \mathcal{E}^{\prime}(t)+\tfrac{\gamma \lambda-2}{t+\beta(\lambda-\alpha)} \mathcal{E}(t)&\leqslant \tfrac{K(\alpha)}{\mu (t+\beta(\lambda-\alpha))^2}\left(\tfrac{\lambda^2}{\sqrt{\mu}(t+\beta(\lambda-\alpha))^2}+\tfrac{\lambda}{t+\beta(\lambda-\alpha)}\left(1+\sqrt{2}\right)+\sqrt{\mu}(1+C_0)\right)\mathcal{E}(t)\\ &-t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^2. \end{aligned} $$ Finally, \begin{equation} \mathcal{E}^{\prime}(t)+\frac{\gamma \lambda-2}{t+\beta(\lambda-\alpha)} \mathcal{E}(t)+t \beta(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^2 \leqslant \varphi(t+\beta(\lambda-\alpha)) \mathcal{E}(t) \end{equation} where $$ \varphi : t\mapsto \frac{K(\alpha)}{\mu t^{2}}\left(\sqrt{\mu}(1+C_0)+\frac{2 \alpha}{(\gamma+2) t}(1+\sqrt{2})+\frac{4 \alpha^{2}}{(\gamma+2)^{2} \sqrt{\mu} t^{2}}\right). $$ \subsection{Proof of Lemma \ref{lem:H1loc}} \label{sec:proof_H1loc} Let $F: \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a convex $C^2$ function with a non-empty set of minimizers $X^*$. 
Let $\delta\in(0,1]$ and $x^*\in X^*$.\\ We introduce the following lemma which is proved in Section \ref{sec:proof_C2}. \begin{lemme} \label{lem:C2} Let $F: \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a $C^2$ function. Then, for all $x\in\mathbb{R}^n$ and $\varepsilon>0$, there exists $\nu>0$ such that for all $y\in B(x,\nu)$: \begin{equation} (1-\varepsilon)(y-x)^TH_F(x)(y-x)\leqslant (y-x)^TH_F(y)(y-x)\leqslant (1+\varepsilon)(y-x)^TH_F(x)(y-x). \end{equation} \end{lemme} As $F$ is a $C^2$ function, Lemma \ref{lem:C2} ensures that there exists $\nu>0$ such that for all $x\in B\left(x^*,\nu\right)$: \begin{equation} \left(1-\frac{\delta}{4-\delta}\right)K(x)\leqslant (x-x^*)^T H_F(x) (x-x^*)\leqslant \left(1+\frac{\delta}{4-\delta}\right)K(x), \label{eq:hess_loc} \end{equation} where $K(x)=(x-x^*)^T H_F(x^*) (x-x^*)$.\\ Let $\phi_{x,x^*}$ be defined as follows: $$ \begin{aligned} \phi_{x,x^*}:[0,1]&\to\mathbb{R}\\t&\mapsto F\left(tx+(1-t)x^*\right), \end{aligned} $$ for some $x\in B\left(x^*,\nu\right)$. The function $\phi_{x,x^*}$ is twice differentiable and we have that for all $t\in[0,1]$: \begin{equation*} \begin{gathered} \phi_{x,x^*}^\prime(t)=(x-x^*)^T\nabla F(tx+(1-t)x^*),\\ \phi_{x,x^*}^{\prime\prime}(t)=(x-x^*)^TH_F(tx+(1-t)x^*)(x-x^*). 
\end{gathered} \end{equation*} By rewriting \eqref{eq:hess_loc} at the point $tx+(1-t)x^*$ for some $t\in[0,1]$ we have: \begin{equation} \left(1-\frac{\delta}{4-\delta}\right)\phi_{x,x^*}^{\prime\prime}(0)\leqslant\phi_{x,x^*}^{\prime\prime}(t)\leqslant\left(1+\frac{\delta}{4-\delta}\right)\phi_{x,x^*}^{\prime\prime}(0).\label{eq:deltaH1} \end{equation} By integrating the left-hand inequality of \eqref{eq:deltaH1} and noticing that $\phi_{x,x^*}^{\prime}(0)=0$ (since $\nabla F(x^*)=0$), we get that: $$\forall t\in[0,1],~\left(1-\frac{\delta}{4-\delta}\right)\phi_{x,x^*}^{\prime\prime}(0)t\leqslant\phi_{x,x^*}^{\prime}(t).$$ By integrating the right-hand inequality of \eqref{eq:deltaH1}, we get that: $$\forall t\in[0,1],~ \phi_{x,x^*}(t)-\phi_{x,x^*}(0)\leqslant \left(1+\frac{\delta}{4-\delta}\right)\phi_{x,x^*}^{\prime\prime}(0)\frac{t^2}{2}, $$ and consequently, \begin{equation*} \forall t\in[0,1],~ \phi_{x,x^*}(t)-\phi_{x,x^*}(0)\leqslant \frac{1}{2-\delta}t\phi_{x,x^*}^{\prime}(t). \end{equation*} By choosing $t=1$ and rewriting $\phi_{x,x^*}$ and $\phi_{x,x^*}^\prime$ we deduce that $$F(x)-F^*\leqslant\frac{1}{2-\delta}\langle\nabla F(x), x-x^*\rangle.$$ \subsection{Proof of Lemma \ref{lem:flat2}} \label{sec:proof_flat2} We consider the energy function $\mathcal{E}$ defined for all $t\geqslant t_0$ by: $$ \begin{aligned} \mathcal{E}(t) =&\left(t^{2}+t \beta(\lambda-\alpha)\right)\left(F(x(t))-F^{*}\right)+\frac{\xi}{2}\left\|x(t)-x^{*}\right\|^{2}\\ &+\frac{1}{2}\left\|\lambda\left(x(t)-x^{*}\right)+t(\dot{x}(t)+\beta \nabla F(x(t)))\right\|^{2}. \end{aligned} $$ Let $v:t\mapsto\lambda\left(x(t)-x^{*}\right)+t(\dot{x}(t)+\beta \nabla F(x(t)))$. 
The function $v$ is differentiable and we have that: $$ \begin{aligned} v^{\prime}(t) &=\lambda \dot{x}(t)+t \ddot{x}(t)+\dot{x}(t)+\beta \nabla F(x(t))+t \beta \nabla^{2} F(x(t)) \dot{x}(t) \\ &=(\lambda+1) \dot{x}(t)+\left(-\alpha \dot{x}(t)-t \beta \nabla^{2} F(x(t)) \dot{x}(t)-t \nabla F(x(t))\right)+\beta \nabla F(x(t))\\ &+t \beta \nabla^{2} F(x(t)) \dot{x}(t)\\ &=(\lambda+1-\alpha) \dot{x}(t)+(\beta-t) \nabla F(x(t)). \end{aligned} $$ By differentiating the function $\mathcal{E}(t)$, we get that: $$ \begin{aligned} \mathcal{E}^{\prime}(t) =&(2 t+\beta(\lambda-\alpha))\left(F(x(t))-F^{*}\right)+\left(t^{2}+t \beta(\lambda-\alpha)\right)\langle\nabla F(x(t)), \dot{x}(t)\rangle+\left\langle v(t), v^{\prime}(t)\right\rangle\\ &+\xi\left\langle x(t)-x^{*}, \dot{x}(t)\right\rangle. \end{aligned} $$ Simple calculations give that: $$ \begin{aligned} \left\langle v(t), v^{\prime}(t)\right\rangle &=\left\langle\lambda\left(x(t)-x^{*}\right)+t(\dot{x}(t)+\beta \nabla F(x(t))),(\lambda+1-\alpha) \dot{x}(t)+(\beta-t) \nabla F(x(t))\right\rangle \\ &=\lambda(\lambda+1-\alpha)\left\langle x(t)-x^{*}, \dot{x}(t)\right\rangle+\lambda(\beta-t)\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle \\ &+t(\lambda+1-\alpha)\|\dot{x}(t)\|^2+t(\beta-t)\langle\nabla F(x(t)), \dot{x}(t)\rangle \\ &+t \beta(\lambda+1-\alpha)\langle\nabla F(x(t)), \dot{x}(t)\rangle+t \beta(\beta-t)\|\nabla F(x(t))\|^2. 
\end{aligned} $$ Consequently, $$ \begin{aligned} \mathcal{E}^{\prime}(t) &=(2 t+\beta(\lambda-\alpha))\left(F(x(t))-F^{*}\right)+\left(t^{2}+t \beta(\lambda-\alpha)\right)\langle\nabla F(x(t)), \dot{x}(t)\rangle \\ &+\lambda(\lambda+1-\alpha)\left\langle x(t)-x^{*}, \dot{x}(t)\right\rangle+\lambda(\beta-t)\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle \\ &+t(\lambda+1-\alpha)\|\dot{x}(t)\|^2+t(\beta-t)\langle\nabla F(x(t)), \dot{x}(t)\rangle \\ &+t \beta(\lambda+1-\alpha)\langle\nabla F(x(t)), \dot{x}(t)\rangle+t \beta(\beta-t)\|\nabla F(x(t))\|^2 \\ &+\xi\left\langle x(t)-x^{*}, \dot{x}(t)\right\rangle\\ &=(2 t+\beta(\lambda-\alpha))\left(F(x(t))-F^{*}\right)+2 t \beta(\lambda+1-\alpha)\langle\nabla F(x(t)), \dot{x}(t)\rangle \\ &+\lambda(\lambda+1-\alpha)\left\langle x(t)-x^{*}, \dot{x}(t)\right\rangle+\lambda(\beta-t)\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle \\ &+t(\lambda+1-\alpha)\|\dot{x}(t)\|^2+t \beta(\beta-t)\|\nabla F(x(t))\|^2 \\ &+\xi\left\langle x(t)-x^{*}, \dot{x}(t)\right\rangle. 
\end{aligned}$$ Then by rearranging the terms we obtain that: $$\begin{aligned} \mathcal{E}^\prime(t)&=(2 t+\beta(\lambda-\alpha))\left(F(x(t))-F^{*}\right) \\ &+\frac{\lambda+1-\alpha}{t}\left[t^{2}\|\dot{x}(t)\|^2+2 t^{2} \beta\langle\nabla F(x(t)), \dot{x}(t)\rangle+t^{2} \beta^{2}\|\nabla F(x(t))\|^2\right] \\ &+\frac{\lambda(\lambda+1-\alpha)}{t}\left[\left\langle x(t)-x^{*}, t \dot{x}(t)\right\rangle+\left\langle x(t)-x^{*}, t \beta \nabla F(x(t))\right\rangle\right] \\ &-\beta t(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2} \\ &-\lambda(t+\beta(\lambda- \alpha))\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle \\ &+\xi\left\langle x(t)-x^{*}, \dot{x}(t)\right\rangle\\ &=(2 t+\beta(\lambda-\alpha))\left(F(x(t))-F^{*}\right)+\frac{\lambda+1-\alpha}{t}\|t(\dot{x}(t)+\beta \nabla F(x(t)))\|^{2} \\ &+\frac{\lambda(\lambda+1-\alpha)}{t}\left\langle x(t)-x^{*}, t(\dot{x}(t)+\beta \nabla F(x(t)))\right\rangle \\ &-\beta t(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2} \\ &-\lambda(t+\beta(\lambda- \alpha))\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle\\ &+\xi\left\langle x(t)-x^{*}, \dot{x}(t)\right\rangle\\ &=(2 t+\beta(\lambda-\alpha))\left(F(x(t))-F^{*}\right)+\frac{\lambda+1-\alpha}{t}\|\lambda(x(t)-x^{*})+t(\dot{x}(t)+\beta \nabla F(x(t)))\|^{2} \\ &-\frac{\lambda^2(\lambda+1-\alpha)}{t}\|x(t)-x^{*}\|^{2}-\beta t(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2} \\ &-\lambda(t+\beta(2(\lambda-\alpha)+1))\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle+(\xi-\lambda(\lambda+1-\alpha))\left\langle x(t)-x^{*}, \dot{x}(t)\right\rangle. 
\end{aligned} $$ Given this expression, we can write that: $$ \begin{aligned} \mathcal{E}^{\prime}(t) &=\left((2-\lambda\gamma_1)t+\beta(\lambda-\alpha-\lambda\gamma_1(2(\lambda-\alpha)+1))\right)\left(F(x(t))-F^{*}\right)\\ &+\lambda(t+\beta(2(\lambda-\alpha)+1))\left[\gamma_1\left(F(x(t))-F^{*}\right)-\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle\right]\\ &+\frac{\lambda+1-\alpha}{t}\|\lambda(x(t)-x^{*})+t(\dot{x}(t)+\beta \nabla F(x(t)))\|^{2}-\frac{\lambda^2(\lambda+1-\alpha)}{t}\|x(t)-x^{*}\|^{2} \\ &-\beta t(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}+(\xi-\lambda(\lambda+1-\alpha))\left\langle x(t)-x^{*}, \dot{x}(t)\right\rangle. \end{aligned} $$ As $F$ satisfies the growth condition $\mathcal{H}_{\gamma_1}$, for all $t\geqslant \beta(2(\alpha-\lambda)-1)$, \begin{equation*} \lambda(t+\beta(2(\lambda-\alpha)+1))\left[\gamma_1\left(F(x(t))-F^{*}\right)-\left\langle\nabla F(x(t)), x(t)-x^{*}\right\rangle\right] \leqslant 0. \end{equation*} Therefore, $$\begin{aligned}\mathcal{E}^{\prime}(t) &\leqslant\left((2-\lambda\gamma_1)t+\beta(\lambda-\alpha-\lambda\gamma_1(2(\lambda-\alpha)+1))\right)\left(F(x(t))-F^{*}\right) \\ & +\frac{\lambda+1-\alpha}{t}\|\lambda(x(t)-x^{*})+t(\dot{x}(t)+\beta \nabla F(x(t)))\|^{2}\\ & -\frac{\lambda^2(\lambda+1-\alpha)}{t}\|x(t)-x^{*}\|^{2}-\beta t(t+\beta(\lambda-\alpha))\|\nabla F(x(t))\|^{2}.\\ \end{aligned} $$ \subsection{Proof of Lemma \ref{lem:delta+1}} \label{sec:proof_delta+1} Let $F: \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a convex function having a non-empty set of minimizers and let $F^*=\inf\limits_{x\in\mathbb{R}^n}F(x)$. Assume that for some $t_1>0$ and $\delta>0$, $F$ satisfies: \begin{equation} \int_{t_1}^{+\infty}u^\delta(F(x(u))-F^*)du<+\infty. \label{eq:ex_delta} \end{equation} Let $\varepsilon>0$. 
Assumption \eqref{eq:ex_delta} ensures that there exists $t_2\geqslant2t_1$ such that: $$\forall t\geqslant t_2,\quad \int_{t/2}^{t}u^\delta(F(x(u))-F^*)du<\varepsilon.$$ Let $z$ be defined as follows: $$z:t\mapsto\frac{\int_{t/2}^tu^\delta x(u)du}{\int_{t/2}^tu^\delta du}.$$ Let $t\geqslant t_2$. We define $\nu$ as : $$ \begin{aligned} \nu : \mathcal{B}([t/2,t])&\rightarrow[0,1]\\ A\qquad&\mapsto\frac{\int_Au^\delta du}{\int_{t/2}^tu^\delta du}, \end{aligned}$$ where $\mathcal{B}([t/2,t])$ is the Borel $\sigma$-algebra on $[t/2,t]$. Then, we can write that $z(t)=\int_{t/2}^t x(u)d\nu(u)$. As $\nu([t/2,t])=1$ and $F$ is a convex function, Jensen's inequality ensures that: $$\begin{aligned} F(z(t))-F^*&=F\left(\int_{t/2}^t x(u)d\nu(u)\right)-F^*\\ &\leqslant \int_{t/2}^t F(x(u))d\nu(u)-F^*\\ &\leqslant \int_{t/2}^t \left(F(x(u))-F^*\right)d\nu(u)\\ &\leqslant \frac{\varepsilon}{\int_{t/2}^tu^\delta du}\\ \end{aligned}$$ Hence, as $t$ tends towards $+\infty$, $F(z(t))-F^*=o\left(t^{-\delta-1}\right).$ \subsection{Proof of Lemma \ref{lem:delta+1phi}} \label{subsection:delta+1phi} Let $\phi: \mathbb{R}^{n} \rightarrow \mathbb{R}^+$ such that for some $t_1>0$ and $\delta>0$, $\phi$ satisfies: \begin{equation} \int_{t_1}^{+\infty}u^\delta\phi(x(u))du<+\infty. \label{eq:delta+12} \end{equation} Let $\varepsilon>0$. Assumption \eqref{eq:delta+12} guarantees that there exists $t_2\geqslant 2t_1$ such that \begin{equation*} \forall t\geqslant t_2,\quad \int_{t/2}^t u^\delta\phi(x(u))du<\varepsilon. \end{equation*} Consequently, for all $t\geqslant t_2$, \begin{equation*} \inf\limits_{u\in[t/2,t]}\phi(x(u))\int_{t/2}^t u^\delta du<\varepsilon, \end{equation*} and \begin{equation*} \inf\limits_{u\in[t/2,t]}\phi(x(u))<\frac{\varepsilon(\delta+1)}{t^{\delta+1}-\left(\frac{t}{2}\right)^{\delta+1}}. \end{equation*} Hence, as $t\rightarrow+\infty$, \begin{equation} \inf\limits_{u\in[t/2,t]}\phi(x(u))=o\left(t^{-\delta-1}\right). 
\end{equation} We recall that $\liminf\limits_{t\rightarrow+\infty} f(t)=\lim\limits_{t\rightarrow+\infty}\left[\inf\limits_{\tau\geqslant t}f(\tau)\right]$. As $\phi$ is a non-negative function, we get that: \begin{equation*} \liminf\limits_{t\rightarrow+\infty} t^{\delta+1}\log(t)\phi(x(t))=l\geqslant0. \end{equation*} Suppose that $l>0$. Then there exists $\hat t>t_1$ such that: \begin{equation*} \forall t\geqslant \hat t,\quad t^{\delta+1}\log(t)\phi(x(t))\geqslant \frac{l}{2}, \end{equation*} and hence: \begin{equation*} \forall t\geqslant \hat t,\quad t^\delta\phi(x(t))\geqslant \frac{l}{2t\log(t)}. \end{equation*} This inequality cannot hold: since $\int_{\hat t}^{+\infty}\frac{du}{u\log(u)}=+\infty$, it would contradict the integrability assumption \eqref{eq:delta+12}. We deduce that $l=0$. \subsection{Proof of Lemma \ref{lem:sharp5}} \label{sec:proof_sharp5} Let $u\in\mathbb{R}^n$, $v\in\mathbb{R}^n$ and $a>0$. The first inequality comes from the following inequalities: \begin{equation*} \begin{aligned} \langle u,v\rangle&=\frac{1}{2}\left\|\sqrt{a}u+\frac{v}{\sqrt{a}}\right\|^2-\frac{a}{2}\|u\|^2-\frac{1}{2a}\|v\|^2\geqslant -\frac{a}{2}\|u\|^2-\frac{1}{2a}\|v\|^2, \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} \langle u,v\rangle&=\frac{a}{2}\|u\|^2+\frac{1}{2a}\|v\|^2-\frac{1}{2}\left\|\sqrt{a}u-\frac{v}{\sqrt{a}}\right\|^2\leqslant \frac{a}{2}\|u\|^2+\frac{1}{2a}\|v\|^2. \end{aligned} \end{equation*} The second inequality is proved by rewriting $\|u\|^2$ as follows: \begin{equation*} \|u\|^2=\|u+v\|^2+\|v\|^2-2\langle u+v,v\rangle, \end{equation*} and by applying the first inequality to $\langle u+v,v\rangle$. \subsection{Proof of Lemma \ref{lem:C2}} \label{sec:proof_C2} Let $F: \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a $C^2$ function. We denote the second-order partial derivatives of $F$ by $\partial_{ij} F =\frac{\partial^2 F}{\partial x_i\partial x_j}$ for all $(i,j)\in\llbracket 1,n\rrbracket^2$. Let $x\in\mathbb{R}^n$ and $\varepsilon>0$. 
For all $(i,j)\in\llbracket 1,n\rrbracket^2$, $\partial_{ij} F$ is continuous on $\mathbb{R}^n$ and consequently, \begin{equation*} \exists \tilde\nu>0,~ \forall y\in B(x,\tilde\nu),~(1-\varepsilon)\partial_{ij} F(x)\leqslant \partial_{ij} F(y)\leqslant (1+\varepsilon)\partial_{ij} F(x). \end{equation*} By taking the minimal value of $\tilde\nu$ for all $(i,j)\in\llbracket 1,n\rrbracket^2$, we get that there exists $\tilde\nu>0$ such that: \begin{equation} \forall (i,j)\in\llbracket 1,n\rrbracket^2,~\forall y\in B(x,\tilde\nu),~(1-\varepsilon)\partial_{ij} F(x)\leqslant \partial_{ij} F(y)\leqslant (1+\varepsilon)\partial_{ij} F(x).\label{eq:cont_12} \end{equation} Let $\nu=\min\left\{\tilde\nu,\left(n\max\limits_{(i,j)\in\llbracket 1,n\rrbracket^2}|\partial_{ij}F(x)|\right)^{-\frac{1}{2}}\right\}$, $y\in B(x,\nu)$ and $h=y-x$. Equation \eqref{eq:cont_12} gives us that for all $(i,j)\in\llbracket 1,n\rrbracket^2$: \begin{equation} \partial_{ij} F(x)h_ih_j-\varepsilon |\partial_{ij} F(x)h_ih_j|\leqslant \partial_{ij} F(y)h_ih_j\leqslant \partial_{ij} F(x)h_ih_j+\varepsilon |\partial_{ij} F(x)h_ih_j|.\label{eq:cont_13} \end{equation} We recall that for all $(i,j)\in\llbracket 1,n\rrbracket^2$, $\left(H_F(x)\right)_{i,j}=\partial_{ij}F(x)$ and therefore: $$\forall (x,h)\in\mathbb{R}^n\times\mathbb{R}^n,~h^T H_F(x) h=\sum_{i=1}^n\sum_{j=1}^n\partial_{ij}F(x)h_ih_j.$$ By summing \eqref{eq:cont_13} for all $(i,j)\in\llbracket 1,n\rrbracket^2$, we get that: \begin{equation*} h^TH_F(x)h-\varepsilon\sum_{i=1}^n\sum_{j=1}^n|\partial_{ij} F(x)h_ih_j|\leqslant h^TH_F(y)h\leqslant h^TH_F(x)h+\varepsilon\sum_{i=1}^n\sum_{j=1}^n|\partial_{ij} F(x)h_ih_j|. 
\end{equation*} Noticing that $|h_ih_j|\leqslant \frac{1}{2}\left(h_i^2+h_j^2\right)$ for all $(i,j)\in\llbracket 1,n\rrbracket^2$, we can deduce that: \begin{equation*} \begin{aligned} \sum_{i=1}^n\sum_{j=1}^n|\partial_{ij} F(x)h_ih_j|&\leqslant \max\limits_{(i,j)\in\llbracket 1,n\rrbracket^2}|\partial_{ij}F(x)|\sum_{i=1}^n\sum_{j=1}^n|h_ih_j|\\&\leqslant n\max\limits_{(i,j)\in\llbracket 1,n\rrbracket^2}|\partial_{ij}F(x)|\|h\|^2\\&\leqslant n\max\limits_{(i,j)\in\llbracket 1,n\rrbracket^2}|\partial_{ij}F(x)|\nu^2\\&\leqslant 1. \end{aligned} \end{equation*} Hence, \begin{equation*} (1-\varepsilon) h^TH_F(x)h\leqslant h^TH_F(y)h\leqslant (1+\varepsilon)h^TH_F(x)h. \end{equation*} \end{document}
\begin{document} \title{A note on Gorenstein monomial curves} \begin{abstract} Let $k$ be an arbitrary field. In this note, we show that if a sequence of relatively prime positive integers ${\bf a}=(a_1,a_2,a_3,a_4)$ defines a Gorenstein non complete intersection monomial curve ${\mathcal C}({\bf a})$ in ${\mathbb A}_k^4$, then there exist two vectors ${\bf u}$ and ${\bf v}$ such that ${\mathcal C}({\bf a}+t{\bf u})$ and ${\mathcal C}({\bf a}+t{\bf v})$ are also Gorenstein non complete intersection affine monomial curves for almost all $t\geq 0$. \\ {\sc Keywords:} affine monomial curve, numerical semigroup, Gorenstein curve. \\ {\sc Mathematical subject classification:} 13C40, 14H45, 13D02, 20M25. \end{abstract} Let ${\bf a} = (a_1, \ldots, a_n)$ be a sequence of positive integers and $k$ be an arbitrary field. If $\phi: k[x_1, \dots, x_n ]\to k[t]$ is the ring homomorphism defined by $\phi(x_i) = t^{a_i}$, then $I({\bf a}):= \ker \phi$ is a prime ideal of height $n-1$ in $R:=k[x_1, \ldots, x_n]$ which is a weighted homogeneous binomial ideal for the weighting $\deg x_i:= a_i$ on $R$. It is the defining ideal of the affine monomial curve ${\mathcal C}({\bf a})\subset {\mathbb A}_k^n$ parametrically defined by ${\bf a}$, whose coordinate ring is $S({\bf a }):= {\rm Im }\phi = k[t^{a_1}, \ldots, t^{a_n}]\simeq R/I({\bf a})$. As $S({\bf a})$ is isomorphic to $S(d{\bf a})$ for every integer $d\geq 1$, we will assume without loss of generality that $a_1, \ldots, a_n$ are relatively prime. Observe that $S({\bf a})$ is also the semigroup ring of the numerical semigroup $\langle a_1, \ldots, a_n\rangle\subset{\mathbb N}$ generated by $a_1, \ldots ,a_n$. As observed in \cite{De}, where Delorme characterizes the sequences ${\bf a}$ such that $S({\bf a })$ is a complete intersection, this property does not depend on the field $k$ by \cite[Corollary~1.13]{He}.
On the other hand, it is well-known that $S({\bf a })$ is Gorenstein if and only if the numerical semigroup $\langle a_1, \ldots, a_n\rangle\subset{\mathbb N}$ is symmetric, which does not depend on the field $k$ either. We will thus say that ${\bf a}$ is a complete intersection (respectively Gorenstein) if the semigroup ring $S({\bf a})$ is a complete intersection (respectively Gorenstein). In \cite{JS}, it is shown that if ${\bf a}$ is a complete intersection with $a_1>>0$, then ${\bf a} +t(a_n-a_1)(1,\dots ,1)$ is also a complete intersection for all $t$. In this note, we will use the criterion for Gorenstein monomial curves in ${\mathbb A}_k^4$ due to Bresinsky in \cite{Br} to construct a class of Gorenstein monomial curves in ${\mathbb A}_k^4$. First observe that, for each $i$ with $1\le i\le n$, some multiple of $a_i$ belongs to the numerical semigroup generated by the other elements of the sequence; denote by $r_i>0$ the smallest positive integer such that $r_ia_i \in \langle a_1,\ldots,a_{i-1},a_{i+1},\ldots,a_n\rangle$. So we have that \begin{equation}\label{principalrelations} \forall i,\ 1\leq i\leq n,\ r_ia_i = \sum _{j\neq i} r_{ij}a_j,\ r_{ij} \ge 0,\ r_i >0\,. \end{equation} \begin{definition}{\rm The $n\times n$ matrix $D({\bf a}):=(r_{ij})$ where $r_{ii}:= -r_i$ is called a {\it principal matrix} associated to ${\bf a}$. }\end{definition} \begin{lemma} $D({\bf a})$ has rank $n-1$. \end{lemma} \begin{proof} Since the system $D({\bf a})X=0$ has the solution $X={\bf a}^T$ by (\ref{principalrelations}), $D({\bf a})$ has rank $\le n-1$. If ${\bf b}^T$ is another solution, then the defining ideal of the curve, $I({\bf b})$, contains $$f_i = x_i^{r_i} -\prod_{j\neq i}x_j^{r_{ij}}$$ for all $1\le i\le n$. But $I({\bf a})= \sqrt{(f_1, \ldots, f_n)}$, thus $I({\bf a})\subseteq I({\bf b})$ because $I({\bf b})$ is prime. Since both $I({\bf a})$ and $I({\bf b})$ are prime ideals of the same height $n-1$, they must be equal: $I({\bf a})= I({\bf b})$.
Since $x_i^{a_j}-x_j^{a_i}\in I({\bf a})\subseteq I({\bf b})$ for all $i$ and $j$, one has that $a_ib_j = a_jb_i$ for all $i$ and $j$, and hence the $2\times n$ matrix whose rows are ${\bf a}$ and ${\bf b}$ has rank 1, i.e., $a_i = cb_i$ for some $c$. So the rank of $D({\bf a})$ is $n-1$. \end{proof} \begin{remark}{\rm Observe that $D({\bf a})$ is not uniquely defined. Although the diagonal entries $-r_i$ are uniquely determined, there is not a unique choice for the $r_{ij}$ in general. We have the ``map'' $D: {\mathbb N}^{[n]} \to T_n$ from the set ${\mathbb N}^{[n]}$ of sequences of $n$ relatively prime positive integers to the subset $T_n$ of $n\times n$ matrices of rank $n-1$ with negative integers on the diagonal and nonnegative integers outside the diagonal. Note that we can recover ${\bf a}$ from $D({\bf a})$ by factoring out the greatest common divisor of the $n$ maximal minors of the $(n-1)\times n$ submatrix of $D({\bf a})$ obtained by removing the first row. In other words, call $D^{-1}:T_n \to {\mathbb N}^{[n]}$ the operation that, for $M\in T_n$, takes the first column of $\operatorname{adj} (M)$ and then factors out the g.c.d. to get an element in ${\mathbb N}^{[n]}$. Then, $D^{-1}(D({\bf a}))={\bf a}$ for all ${\bf a}\in {\mathbb N}^{[n]}$. Now, given a matrix $M\in T_n$, $D(D^{-1}(M)) \neq M$ in general, as the following example shows: if $M=\left[ \begin{matrix} -4&0&1&1\\ 1&-5&4&0\\ 0&4&-5&1\\ 3&1&0&-2\\ \end{matrix} \right]$ then $D^{-1}(M)=(7,11,12,16)$ and $D(D^{-1}(M)) \neq M$ (it is easy to check, for example, that $r_2=3<5$). }\end{remark} We now focus on the case of Gorenstein monomial curves in ${\mathbb A}_k^4$, so we assume that $n=4$.
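The computation in the example above can be checked mechanically. The following Python sketch (our own illustration; the helper names `det`, `D_inverse` and `in_semigroup` are not from the note) reproduces $D^{-1}(M)=(7,11,12,16)$ and the claim $r_2=3$:

```python
from math import gcd
from functools import reduce

def det(M):
    # determinant by Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def D_inverse(M):
    # first column of adj(M) = cofactors of the first row of M;
    # drop the signs and factor out the gcd
    col = [(-1) ** j * det([row[:j] + row[j+1:] for row in M[1:]])
           for j in range(len(M))]
    g = reduce(gcd, (abs(c) for c in col))
    return tuple(abs(c) // g for c in col)

def in_semigroup(m, gens):
    # naive membership test for the numerical semigroup generated by gens
    return m == 0 or any(m >= g and in_semigroup(m - g, gens) for g in gens)

M = [[-4, 0, 1, 1],
     [1, -5, 4, 0],
     [0, 4, -5, 1],
     [3, 1, 0, -2]]
a = D_inverse(M)
# smallest r > 0 with r*a_2 in the semigroup generated by the other entries
r2 = next(r for r in range(1, 50) if in_semigroup(r * a[1], (a[0], a[2], a[3])))
print(a, r2)  # (7, 11, 12, 16) 3
```

Since $r_2=3$ is strictly smaller than $5=-M_{22}$, the matrix $D(D^{-1}(M))$ cannot equal $M$, as claimed in the remark.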
If ${\bf a}$ is Gorenstein but is not a complete intersection, by the characterization in \cite[Theorems~3 and 5]{Br}, there is a principal matrix $D({\bf a})$ of the following form: \begin{equation}\label{embdim4GorMat} \left[ \begin{matrix} -c_1&0& d_{13} &d_{14}\\ d_{21}&-c_2&0&d_{24}\\ d_{31}&d_{32}& -c_3&0\\ 0&d_{42}&d_{43}&-c_4\\ \end{matrix} \right] \end{equation} with $c_i\ge 2$ and $d_{ij}>0$ for all $1\le i,j\le 4$, the columns summing to zero and the entries of each column of the adjoint being relatively prime. The first column of the adjoint of this matrix is $-{\bf a}^T$, and Bresinsky's characterization also says that the first column (after removing the signs) of the adjoint of a principal matrix $D({\bf a})$ of this form defines a Gorenstein curve provided the entries of this column are relatively prime. The following is a slight strengthening of this criterion. \begin{theorem}\label{criterion} Let $A$ be a $4\times 4$ matrix of the form $$ A= \left[ \begin{matrix} -c_1&0& d_{13} &d_{14}\\ d_{21}&-c_2&0&d_{24}\\ d_{31}&d_{32}& -c_3&0\\ 0&d_{42}&d_{43}&-c_4\\ \end{matrix} \right] $$ with $c_i\ge 2$ and $d_{ij}>0$ for all $1\le i,j\le 4$, and all the columns summing to zero. Then the first column of the adjoint of $A$ {\rm(}after removing the signs{\rm)} defines a Gorenstein monomial curve provided its entries are relatively prime. \end{theorem} \begin{proof} Consider such a matrix $A$ and let $a_1, a_2, a_3, a_4$ be the entries in the first column of the adjoint of $A$ (after removing the signs). Since we are assuming that they are relatively prime, there exist integers $\lambda_1,\ldots,\lambda_4$ such that $\lambda_1 a_1+\cdots+\lambda_4 a_4=1$. It suffices to show that the four relations in the rows of $A$ are principal relations. We will show this for the first row; the other rows are similar. Suppose that $b_{11} a_1 = b_{12}a_2+b_{13}a_3+b_{14}a_4$ is a relation with $b_{11}\geq 2$ and $b_{12}, b_{13}, b_{14}\geq 0$, and let us show that $b_{11}\geq c_1$.
Since the system $ \left[ \begin{matrix} -b_{11}&b_{12}& b_{13} &b_{14}\\ d_{21}&-c_2&0&d_{24}\\ d_{31}&d_{32}& -c_3&0\\ 0&d_{42}&d_{43}&-c_4\\ \end{matrix} \right]Y = 0$ has a nontrivial solution, namely $Y = (a_1, a_2, a_3, a_4)^T$, we see that it has determinant zero. So there exist $x_i$ such that \begin{equation}\label{detzero} (1,x_2, x_3, x_4)\left[ \begin{matrix} -b_{11}&b_{12}& b_{13} &b_{14}\\ d_{21}&-c_2&0&d_{24}\\ d_{31}&d_{32}& -c_3&0\\ 0&d_{42}&d_{43}&-c_4\\ \end{matrix} \right]= 0\,. \end{equation} Consider the matrix $T_4 =\left[ \begin{matrix} -b_{11}&b_{12}& b_{13} &b_{14}\\ d_{21}&-c_2&0&d_{24}\\ d_{31}&d_{32}& -c_3&0\\ \lambda_1&\lambda_2 &\lambda_3 &\lambda_4\\ \end{matrix} \right]$. If the determinant of $T_4$ is $-t_4$, then the last column of its adjoint is $-t_4 (a_1, a_2, a_3, a_4)^T$. This is because $T_4 (a_1, a_2, a_3, a_4)^T = (0,0,0,1)^T$. Hence, looking at the element in the last row and last column of the adjoint of $T_4$, one gets using (\ref{detzero}) that $$- t_4 a_4= \left| \begin{matrix} -b_{11}&b_{12}& b_{13} \\ d_{21}&-c_2&0 \\ d_{31}&d_{32}& -c_3 \\ \end{matrix} \right| = \left| \begin{matrix} 0&-x_4d_{42}&-x_4 d_{43} \\ d_{21}&-c_2&0 \\ d_{31}&d_{32}& -c_3 \\ \end{matrix} \right| = -x_4a_4\,.$$ Hence $t_4 = x_4$, and since $t_4$ is an integer, so is $x_4$. Now, looking at the element in the last column and first row of the adjoint of $T_4$, one has $$t_4a_1 = \left| \begin{matrix} b_{12}&b_{13}& b_{14} \\ -c_2&0&d_{24} \\ d_{32}& -c_3&0 \\ \end{matrix} \right| = b_{12}c_3d_{24}+ b_{13}d_{32}d_{24}+b_{14}c_2c_3>0\,.$$ So, $x_4 = t_4$ is now a positive integer. Consider the matrix $T_2 = \left[ \begin{matrix} -b_{11}&b_{12}& b_{13} &b_{14}\\ d_{31}&d_{32}&-c_3&0 \\ 0&d_{42}&d_{43}& -c_4\\ \lambda_1&\lambda_2 &\lambda_3 &\lambda_4\\ \end{matrix} \right]$ whose determinant is denoted by $-t_2$.
By similar calculations, we see that $x_2=t_2$ is an integer and, focusing on the element in the last column and third row of the adjoint of $T_2$, one gets that $$ t_2a_3 = \left| \begin{matrix} -b_{11}&b_{12}& b_{14} \\ d_{31}&d_{32}&0 \\ 0& d_{42}&-c_4 \\ \end{matrix} \right| = b_{11}c_4d_{32}+ b_{12}d_{31}c_{4}+b_{14}d_{31}d_{42}>0 $$ so $x_2=t_2$ is also a positive integer. Similarly, using the matrix $T_3= \left[ \begin{matrix} -b_{11}&b_{12}& b_{13} &b_{14}\\ d_{21}&-c_2&0&d_{24}\\ 0&d_{42}&d_{43}& -c_4\\ \lambda_1&\lambda_2 &\lambda_3 &\lambda_4\\ \end{matrix} \right]$ of determinant $-t_3$, one gets that $x_3 = -t_3$ and hence $x_3$ is an integer. Then, by calculating the entry in the last column and second row of the adjoint of $T_3$, one gets that $$(-t_3)a_2 = \left| \begin{matrix} - b_{11}&b_{13}& b_{14} \\ d_{21}&0&d_{24} \\ 0& d_{43}&-c_4 \\ \end{matrix} \right| = b_{11}d_{43}d_{24}+b_{13}d_{21}c_4+b_{14}d_{21}d_{43}>0,$$ and hence $x_3$ is again a positive integer. So, $b_{11} = x_2d_{21}+x_3d_{31}\ge d_{21}+d_{31} = c_1$ as desired. Since, by rearranging the $a_i$'s suitably, we can make any of the rows the first one, this proves that all the rows are principal relations and that this is a principal matrix. Hence ${\bf a}=(a_1, a_2, a_3, a_4)$ is Gorenstein by Bresinsky's criterion. \end{proof} Denote now the sequence of positive integers by ${\bf a}=(a,a+x,a+y,a+z)$ for some $x,y,z>0$. In other words, we assume that the first integer in the sequence is the smallest, but after that we do not assume any ascending order. Recall that we have assumed, without loss of generality, that ${\rm gcd}(a,x,y,z)=1$. The following result gives two families of Gorenstein monomial curves in ${\mathbb A}_k^4$ by translation from a given Gorenstein curve.
\begin{theorem}\label{main} Given any Gorenstein non complete intersection monomial curve ${\mathcal C}({\bf a})$ in ${\mathbb A}_k^4$, there exist two vectors ${\bf u}$ and ${\bf v}$ in ${\mathbb N}^4$ such that for all $t\ge 0$, ${\mathcal C}({\bf a} +t{\bf u})$ and ${\mathcal C}({\bf a}+t{\bf v})$ are also Gorenstein non complete intersection monomial curves whenever the entries of the corresponding sequence {\rm(}${\bf a} +t{\bf u}$ for the first family, ${\bf a} +t{\bf v}$ for the second{\rm)} are relatively prime. \end{theorem} \begin{proof} Let $D({\bf a})$ be the principal matrix of ${\bf a}$ given in (\ref{embdim4GorMat}). Then let ${\bf u}$ be the vector of signed $3 \times 3$ minors of the $3\times 4$ matrix $U = \left[ \begin {matrix} d_{21}&-c_2&0&d_{24}\\ 1&0 & -1&0\\ 0&d_{42}&d_{43}&-c_4\\ \end{matrix}\right] $ so that $U {\bf u}^T = (0,0,0)^T$. Focusing on the second row, we see that $u_1 = u_3$. Similarly, let ${\bf v}$ be the vector of signed $3\times 3$ minors of the $3\times 4$ matrix $V = \left[ \begin{matrix} -c_1&0& d_{13} &d_{14}\\ 0&-1&0&1\\ d_{31}&d_{32}& -c_3&0\\ \end{matrix} \right] $ so that $V {\bf v}^T = (0,0,0)^T$. We will check now that, as long as their entries are relatively prime, the sequences ${\bf a} +t{\bf u}$ and ${\bf a}+t{\bf v}$ respectively have principal matrices $$ A_t = \left[ \begin{matrix} -c_1-t&0& d_{13} +t&d_{14}\\ d_{21}&-c_2&0&d_{24}\\ d_{31}+t&d_{32}& -c_3-t&0\\ 0&d_{42}&d_{43}&-c_4\\ \end{matrix} \right] \quad\hbox{and}\quad B_t = \left[ \begin{matrix} -c_1&0& d_{13} &d_{14}\\ d_{21}&-c_2-t&0&d_{24}+t\\ d_{31}&d_{32}& -c_3&0\\ 0&d_{42}+t&d_{43}&-c_4-t\\ \end{matrix} \right]$$ and hence define Gorenstein curves. It is a straightforward calculation to check that the rows of these matrices are the relations of ${\bf a} +t{\bf u}$ and ${\bf a}+t{\bf v}$ respectively, i.e., $A_t\times ({\bf a} +t{\bf u})^T=(0,0,0,0)^T$ and $B_t\times ({\bf a} +t{\bf v})^T=(0,0,0,0)^T$, and it suffices to check it for $t=1$.
Consider the vector $A_1 \times ({\bf a} + {\bf u})^T$. If we add the first row of $D({\bf a})$ to $U$ to make a square matrix $U'$, then the determinant of $U'$, expanding by its third row, is $a-(a+y)=-y$. On the other hand, the adjoint of $U'$ has ${\bf u}$ as its first column and ${\bf a}$ as its third column, so the first row of $U'$ multiplied by ${\bf u}$ equals $-y$ and the third row of $U'$ multiplied by ${\bf a}$ also equals $-y$. Since the first row of $A_1$ is the first row of $U'$ minus the third row of $U'$, we see that the first entry of the vector $A_1 \times ({\bf a} + {\bf u})^T$ is zero, and a similar argument shows that the third entry of this vector is also zero. Moreover, the second row of $A_1$ coincides with the second row of $D({\bf a})$, so one gets 0 multiplying by ${\bf a}$, and since it is also the first row of $U$, one also gets 0 multiplying by ${\bf u}$; hence the second entry of the vector $A_1\times ({\bf a}+{\bf u})^T$ is zero. The same argument works for the fourth entry, and the proof for ${\bf v}$ is similar. Since the differences between the matrices $D({\bf a})$ and $A_t$ are all in the first and third rows, and $U$ comes from the second and fourth rows, this shows that $A_1$ is a relation matrix for ${\bf a}+{\bf u}$, just as $A_{0} = D({\bf a})$ is a relation matrix for ${\bf a}$. Moreover, note that the changes made in $D({\bf a})$ to get $A_t$ or $B_t$ do not alter the column sums, and hence the columns still add up to zero. Since $A_1$ has the same pattern of zero entries as \eqref{embdim4GorMat}, has all its off-diagonal entries nonnegative, and has all its columns summing to zero, Theorem \ref{criterion} applies provided the entries of ${\bf a}+{\bf u}$ are relatively prime. Thus, whenever the cofactors are relatively prime, these sequences define Gorenstein curves. \end{proof} Note that, since we have assumed that the first entry in ${\bf a}$ is the smallest, the first principal relation cannot be homogeneous (w.r.t.
the usual grading on $R$). Let us see what happens when two of the other three principal relations are homogeneous. \begin {corollary}\label{corMultConj} Let ${\bf a} = (a, a+x, a+y, a+z)$ be Gorenstein and not a complete intersection. Suppose that both the second and the fourth rows of the matrix $D({\bf a})$ in (\ref{embdim4GorMat}) have their entries summing to zero. Then $x<z<y$ and ${\bf a} + t\alpha y(1,1,1,1)$ is Gorenstein for all $t\ge 0$, where $\alpha$ is a positive integer determined by $(a,a+x,a+y,a+z)$. \end{corollary} \begin {proof} Since the entries of the second row sum to 0, i.e., $c_2=d_{21}+d_{24}$, we get from $c_2(a+x)=d_{21}a+d_{24}(a+z)$ that \begin{equation}\label{row2sumto0} c_2x=d_{24}z, \end{equation} i.e., $(d_{21}+d_{24})x=d_{24}z$, and hence $x<z$. Similarly, the sum of the entries of the fourth row being 0 implies that \begin{equation}\label{row4sumto0} d_{42}x+d_{43}y=c_4z, \end{equation} i.e., $d_{42}x+d_{43}y=(d_{42}+d_{43})z$, and hence $d_{43}(y-z) = d_{42}(z-x)$ and one has that $y>z$. Moreover, under the hypothesis of the corollary, we compute that the vector ${\bf u}$ in the proof of Theorem~\ref{main} is ${\bf u} = b(1,1,1,1)$, where $b = d_{21}c_4+d_{24}d_{43}$. Now set $d:={\rm gcd}(x,z)$. Simplifying (\ref{row2sumto0}) by $d$, one gets that $c_2x/d=d_{24}z/d$ with ${\rm gcd}(x/d,z/d)=1$, so $c_2=qz/d$ and $d_{24}=qx/d$ for some integer $q$. Now simplifying (\ref{row4sumto0}) by $d$ as well, one gets that $d$ divides $d_{43}y$ and $d_{43}y/d=c_4z/d-d_{42}x/d$, and hence $qd_{43}y/d=c_2c_4-d_{24}d_{42}=c_4d_{21}+d_{24}(c_4-d_{42})=c_4d_{21}+d_{24}d_{43}=b$. Multiply $b$ by the smallest strictly positive integer $\beta$ such that $\beta b= \alpha y$ for some integer $\alpha$. Note that $\beta\leq d$ and that if $\beta=d$, then $\alpha=qd_{43}$.
Then ${\bf a} + t\alpha y(1,1,1,1)$ is always Gorenstein: if its entries had a common factor, it would be a common factor of $a+ t\alpha y$ and $a+y+t\alpha y$, and hence a common factor of both $a$ and $y$. But then it would be a common factor of $x$ and $z$, i.e., of all the entries in ${\bf a}$, which are relatively prime. \end{proof} \begin{remark}{\rm The previous result shows that for the Gorenstein curves satisfying the hypothesis in Corollary~\ref{corMultConj}, the periodicity conjecture holds with period $\alpha y$. }\end{remark} \begin{example}{\rm The sequence ${\bf a}=(11,17,25,19)$ is Gorenstein and not a complete intersection and its principal matrix, $\displaystyle{ D({\bf a})= \left[ \begin{matrix} -4&0&1&1\\ 1&-4&0&3\\ 3&1&-2&0\\ 0&3&1&-4\\ \end{matrix} \right] }$, satisfies the conditions in Corollary~\ref{corMultConj}. Since $b=7$ and ${\rm gcd}(x,z) = 2$, we see that $2b=14$ is the smallest multiple of $b$ that is an integral multiple of $y= 14$. Thus, adding any positive multiple of $14$ to all the entries of ${\bf a}$ will provide a new Gorenstein sequence which is not a complete intersection. Observe that, adding $b=7$ to all entries in ${\bf a}$, the resulting entries have a common factor of $2$, and this does not result in a Gorenstein sequence. }\end{example} \begin{remark}{\rm When two rows of the principal matrix different from the second and the fourth ones both have entries summing up to zero, one does not expect to see Gorenstein curves that are not complete intersections if we take $(a+t, a+x+t, a+y+t, a+z+t)$ for $t$ large. For example, the sequence ${\bf a}=(43,67,49,83)$ is Gorenstein and not a complete intersection and its principal matrix, $\displaystyle{ D({\bf a})= \left[ \begin{matrix} -5&0&1&2\\ 2&-5&0&3\\ 3&1&-4&0\\ 0&4&3&-5\\ \end{matrix} \right] }$, satisfies that both its second and third rows have entries summing up to zero.
Adding $t(1,1,1,1)$ to ${\bf a}$ will provide a Gorenstein sequence which is not a complete intersection for $t=15$, $49$ and $83$, but does not seem to result in a Gorenstein sequence which is not a complete intersection for larger values of $t$. }\end{remark} We will end this note by giving a precise description of a minimal graded free resolution of $S({\bf a})$ as an $R$-module when ${\bf a}$ is Gorenstein and not a complete intersection. Assume that ${\bf a}=(a,a+x,a+y,a+z)$ is Gorenstein but not a complete intersection and let $D({\bf a})$ be the principal matrix of ${\bf a}$ given in (\ref{embdim4GorMat}). Since every grade 3 Gorenstein ideal in $k[x_1,\ldots,x_n]$ must be the ideal of submaximal Pfaffians of a skew-symmetric matrix of odd size, so must be the ideal $I({\bf a})$. The ideal $I ({\bf a})$ described in \cite{Br} is indeed the ideal of $4\times 4$ Pfaffians of the skew-symmetric matrix $$ \phi({\bf a}) = \left[ \begin{matrix} 0&0& x_2^{d_{32}} &x_3^{d_{43}}&x_4^{d_{24}}\\ 0&0&x_1^{d_{21}}&x_4^{d_{14}}&x_2^{d_{42}}\\ -x_2^{d_{32}}&-x_1^{d_{21}}&0&0&x_3^{d_{13}}\\ -x_3^{d_{43}}&-x_4^{d_{14}}&0&0&x_1^{d_{31}}\\ -x_4^{d_{24}}&-x_2^{d_{42}}&-x_3^{d_{13}}&-x_1^{d_{31}}&0\\ \end{matrix} \right]\,. $$ The graded resolution of $S({\bf a})$ is $$ 0 \rightarrow R(-(ac_1+(a+z)c_4+(a+x)d_{32})) \stackrel{\delta_3}{\rightarrow} R^5 \stackrel{\phi}{\rightarrow} R^{5}\stackrel{\delta_1}{\rightarrow} R \rightarrow S({\bf a}) \rightarrow 0 $$ where $\phi = \phi ({\bf a})$ and $\delta _1= (\delta _3) ^T =\delta({\bf a})$ for $$\delta({\bf a}) = [x_1^{c_1}-x_3^{d_{13}}x_4^{d_{14}}, x_3^{c_3}-x_1^{d_{31}}x_2^{d_{32}}, x_4^{c_4}-x_2^{d_{42}}x_3^{d_{43}}, x_2^{c_2}-x_1^{d_{21}}x_4^{d_{24}}, x_1^{d_{21}}x_3^{d_{43}}-x_2^{d_{32}}x_4^{d_{14}}]\,.$$ Observe that the socle degree is ${\bf a} \cdot [c_1, d_{32}, 0, c_4]-3$ where $\cdot$ denotes the dot product of vectors.
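As an illustrative check of Theorem \ref{main} (our own sketch, not part of the note), the following Python fragment verifies, for the example ${\bf a}=(11,17,25,19)$ with the principal matrix displayed above, that the vector ${\bf u}$ of signed maximal minors of $U$ is $7\,(1,1,1,1)$ and that $A_t\,({\bf a}+t{\bf u})^T=0$ for small $t$:

```python
from math import gcd
from functools import reduce

def det3(M):
    # determinant of a 3x3 matrix by cofactor expansion along the first row
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = M
    return a1 * (b2 * c3 - b3 * c2) - a2 * (b1 * c3 - b3 * c1) + a3 * (b1 * c2 - b2 * c1)

# principal matrix D(a) of a = (11, 17, 25, 19), taken from the example above
D = [[-4, 0, 1, 1],
     [1, -4, 0, 3],
     [3, 1, -2, 0],
     [0, 3, 1, -4]]
a = (11, 17, 25, 19)
assert all(sum(D[i][j] * a[j] for j in range(4)) == 0 for i in range(4))

# U consists of rows 2 and 4 of D(a) together with (1, 0, -1, 0);
# its vector of signed maximal minors spans the kernel of U
U = [D[1], [1, 0, -1, 0], D[3]]
m = [(-1) ** j * det3([row[:j] + row[j+1:] for row in U]) for j in range(4)]
assert all(sum(U[i][j] * m[j] for j in range(4)) == 0 for i in range(3))
u = tuple(abs(x) for x in m)  # normalize the signs to land in N^4

def A(t):
    # the matrix A_t from the proof of the theorem
    return [[-4 - t, 0, 1 + t, 1], D[1], [3 + t, 1, -2 - t, 0], D[3]]

for t in range(6):
    at = [a[j] + t * u[j] for j in range(4)]
    assert all(sum(A(t)[i][j] * at[j] for j in range(4)) == 0 for i in range(4))

# a + 2u = (25, 31, 39, 33) has relatively prime entries, so it is Gorenstein
print(u, reduce(gcd, [a[j] + 2 * u[j] for j in range(4)]))  # (7, 7, 7, 7) 1
```

Note that $t=1$ gives $(18,24,32,26)$, whose entries share the factor $2$, consistent with the observation in the example that adding $b=7$ does not produce a Gorenstein sequence.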
\begin{remark}{\rm By following the proof of Theorem \ref{main}, we see that, when one translates ${\bf a}$ by ${\bf u}$, we get that $$ \phi ({\bf a}+t {\bf u})= \left[ \begin{matrix} 0&0& x_2^{d_{32}} &x_3^{d_{43}}&x_4^{d_{24}}\\ 0&0&x_1^{d_{21}}&x_4^{d_{14}}&x_2^{d_{42}}\\ -x_2^{d_{32}}&-x_1^{d_{21}}&0&0&x_3^{d_{13}+t}\\ -x_3^{d_{43}}&-x_4^{d_{14}}&0&0&x_1^{d_{31}+t}\\ -x_4^{d_{24}}&-x_2^{d_{42}}&-x_3^{d_{13}+t}&-x_1^{d_{31}+t}&0\\ \end{matrix} \right] $$ and the socle degree is increased by $t^2u_1+t(u_1c_1+u_2d_{32}+u_4c_4+a)$ to get $ t^2u_1 +t a + [{\bf a}+ t {\bf u}]\cdot [c_1, d_{32}, 0, c_4]-3$. If one translates by ${\bf v}$, we get $$ \phi ({\bf a}+t{\bf v})= \left[ \begin{matrix} 0&0& x_2^{d_{32}} &x_3^{d_{43}}&x_4^{d_{24}+t}\\ 0&0&x_1^{d_{21}}&x_4^{d_{14}}&x_2^{d_{42}+t}\\ -x_2^{d_{32}}&-x_1^{d_{21}}&0&0&x_3^{d_{13}}\\ -x_3^{d_{43}}&-x_4^{d_{14}}&0&0&x_1^{d_{31}}\\ -x_4^{d_{24}+t}&-x_2^{d_{42}+t}&-x_3^{d_{13}}&-x_1^{d_{31}}&0\\ \end{matrix} \right] $$ and the socle degree increases by $t^2v_4+t(v_1c_1+v_2d_{32}+v_4c_4+a+z)$ to get $ t^2v_4 +t (a+z) + [{\bf a}+ t{\bf v}]\cdot [c_1, d_{32}, 0, c_4]-3$. } \end{remark} \end{document}
\begin{document} \begin{abstract} We consider semilinear Schr\"odinger equations with nonlinearity that is a polynomial in the unknown function and its complex conjugate, on $\mathbb{R}^d$ or on the torus. Norm inflation (ill-posedness) of the associated initial value problem is proved in Sobolev spaces of negative indices. To this end, we apply the argument of Iwabuchi and Ogawa (2012), who treated quadratic nonlinearities. This method can be applied whether the spatial domain is non-periodic or periodic and whether the nonlinearity is gauge/scale-invariant or not. \end{abstract} \maketitle \section{Introduction} We consider the initial value problem for semilinear Schr\"odinger equations: \begin{equation}\label{NLS'} \left\{ \begin{array}{@{\,}r@{\;}l} i\partial _tu+\Delta u&=F (u,\bar{u}),\qquad (t,x)\in [0,T] \times Z,\\ u(0,x)&=\phi (x), \end{array} \right. \end{equation} where the spatial domain $Z$ is of the form $Z=\Bo{R}^{d_1}\times \Bo{T} ^{d_2}$, $d_1+d_2=d$, and $F (u,\bar{u})$ is a polynomial in $u,\bar{u}$ with no constant or linear terms, explicitly given by \eqq{F (u,\bar{u})=\sum _{j=1}^n\nu _ju^{q_j}\bar{u}^{p_j-q_j}} with mutually different indices $(p_1,q_1),\dots ,(p_n,q_n)$ satisfying $p_j\ge 2$, $0\le q_j\le p_j$ and non-zero complex constants $\nu _1,\dots ,\nu _n$. The aim of this article is to prove \emph{norm inflation} for the initial value problem \eqref{NLS'} in some negative Sobolev spaces. We say norm inflation in $H^s(Z)$ (``\emph{NI$_s$}'' for short) occurs if for any $\delta >0$ there exist $\phi \in H^\infty$ and $T>0$ satisfying \eqq{\tnorm{\phi}{H^s}<\delta ,\qquad 0<T<\delta} such that the corresponding smooth solution $u$ to \eqref{NLS'} exists on $[0,T]$ and \eqq{\tnorm{u(T)}{H^s}>\delta ^{-1}.} Clearly, NI$_s$ implies the discontinuity of the solution map $\phi \mapsto u$ (which is uniquely defined for smooth $\phi$ locally in time) at the origin in the $H^s$ topology, and hence the ill-posedness of \eqref{NLS'} in $H^s$.
However, NI$_s$ is a stronger instability property of the flow than the discontinuity, which only requires $0<T\lesssim 1$ and $\tnorm{u(T)}{H^s}\gtrsim 1$. Let us begin with the case of single-term nonlinearity: \begin{equation}\label{NLS} \left\{ \begin{array}{@{\,}r@{\;}l} i\partial _tu+\Delta u&=\nu u^q\bar{u}^{p-q},\qquad (t,x)\in [0,T] \times Z,\\ u(0,x)&=\phi (x), \end{array} \right. \end{equation} where $p\ge 2$ and $0\le q\le p$ are integers, $\nu \in \Bo{C}\setminus \{0\}$ is a constant. The equation is invariant under the scaling transformation $u(t,x)\mapsto \lambda ^{\frac{2}{p-1}}u(\lambda ^2t,\lambda x)$ ($\lambda >0$), and the critical Sobolev index $s$ for which $\tnorm{\lambda ^{\frac{2}{p-1}}\phi (\lambda \cdot )}{\dot{H}^{s}}=\tnorm{\phi}{\dot{H}^{s}}$ is given by \eqq{s=s_c(d,p):=\tfrac{d}{2}-\tfrac{2}{p-1}.} The scaling heuristic suggests that the flow becomes unstable in $H^s$ for $s<s_c(d,p)$. In addition, we will demonstrate norm inflation phenomena by tracking the transfer of energy from high to low frequencies (the so-called ``high-to-low frequency cascade''), which naturally restricts us to negative Sobolev spaces. In fact, we will show NI$_s$ with any $s<\min \shugo{s_c(d,p),0}$ for any $Z$ and $(p,q)$, as well as with some negative but scale-subcritical regularities for specific nonlinearities. Precisely, our result reads as follows: \begin{thm}\label{thm:main0} Let $Z$ be a spatial domain of the form $\Bo{R} ^{d_1}\times \Bo{T} ^{d_2}$ with $d_1+d_2=d\ge 1$, and let $p\ge 2$, $0\le q\le p$ be integers. Then, the initial value problem \eqref{NLS} exhibits NI$_s$ in the following cases: \begin{enumerate} \item $Z$ and $(p,q)$ are arbitrary, $s<\min \shugo{s_c(d,p),0}$. \item $d,p,s$ satisfy $s=s_c(d,p)=-\frac{d}{2}$; that is, $(d,p,s)=(1,3,-\frac{1}{2})$ and $(2,2,-1)$. \item $d=1$, $(p,q)=(2,0),(2,2)$ and $s<-1$. \item $Z=\Bo{R}^d$ with $1\le d\le 3$, $(p,q)=(2,1)$ and $s<-\frac{1}{4}$.
\item $Z=\Bo{R}^{d_1}\times \Bo{T} ^{d_2}$ with $d_1+d_2\le 3$, $d_2\ge 1$, $(p,q)=(2,1)$ and $s<0$. \item $Z=\Bo{T}$, $(p,q)=(4,1),(4,2),(4,3)$ and $s<0$. \end{enumerate} \end{thm} There is an extensive literature on the ill-posedness of nonlinear Schr\"odinger equations, and a part of the above theorem has been proved in previous works. Concerning ill-posedness in the sense of norm inflation, Christ, Colliander, and Tao \cite{CCT03p-1} treated the case of gauge-invariant power-type nonlinearities $\pm |u|^{p-1}u$ on $\Bo{R}^d$ and proved NI$_s$ when $0<s<s_c(d,p)$ or $s\le -\frac{d}{2}$ (with some additional restriction on $s$ if $p$ is not an odd integer). For the remaining range of regularities $-\frac{d}{2}<s<0$ (when $s_c\ge 0$) they proved the failure of uniform continuity of the solution map. Note that this milder form of ill-posedness does not necessarily contradict well-posedness in the sense of Hadamard, for which continuity of the solution map is required. Moreover, since their argument is based on scaling considerations and some ODE analysis, it does not apply in any obvious way to the cases of periodic domains, \footnote{One can still adapt their idea to the periodic setting with additional care. Moreover, although their original argument did not apply to the 1d cubic case with the scaling critical regularity $s=-\frac{1}{2}$, one can modify the argument to cover that case. See \cite{OW15p} for details.} non gauge-invariant nonlinearities, and complex coefficients. Later, Carles, Dumas, and Sparber~\cite{CDS12} and Carles and Kappeler \cite{CK17} studied norm inflation in Sobolev spaces of negative indices for the problem with smooth nonlinearities (i.e., $\pm |u|^{p-1}u$ with an odd integer $p\ge 3$) in $\Bo{R}^d$ and in $\Bo{T}^d$, respectively.
They used a geometric optics approach to obtain NI$_s$ for $d\ge 2$ and $s<-\frac{1}{p}$ in the $\Bo{R}^d$ case \footnote{ In \cite{CDS12} they also proved norm inflation for generalized nonlinear Schr\"odinger equations and the Davey-Stewartson system including non-elliptic Laplacian.} and for $s<0$ in the $\Bo{T}^d$ case, with the exception of $(d,p)=(1,3)$ for which $s<-\frac{2}{3}$ was assumed. (See \cite{C07,AC09} for related ill-posedness results.) In fact, they showed a stronger instability property than NI$_s$ for these cases; that is, norm inflation \emph{with infinite loss of regularity} (see Proposition~\ref{prop:niilr} below for the definition). Our argument, which evaluates each term in the power series expansion of the solution directly, is different from those of the aforementioned works. Note that, for smooth nonlinearities, Theorem~\ref{thm:main0} covers all the remaining cases in the range $s<\min \shugo{s_c(d,p),0}$ and extends the result to the (partially) periodic setting as well as to the case of general nonlinearities with complex coefficients. Moreover, our argument also gives another proof of the results in \cite{CDS12,CK17} on NI$_s$ with infinite loss of regularity; see Proposition~\ref{prop:niilr} for the precise statement. The one-dimensional cubic equation with nonlinearity $\pm |u|^2u$ has been attracting particular attention due to its various physical backgrounds and complete integrability. Note also that this is the only $L^2$-subcritical case among smooth and gauge-invariant nonlinearities. In spite of the $L^2$ subcriticality, the equation becomes unstable below $L^2$ due to the Galilean invariance, both in $\Bo{R}$ and in $\Bo{T}$. In fact, the initial value problem was shown to be globally well-posed in $L^2$ \cite{T87,B93-1}, whereas it was shown in \cite{KPV01,CCT03} for $\Bo{R}$ and in \cite{BGT02,CCT03} for $\Bo{T}$ that the solution map fails to be uniformly continuous below $L^2$.
In the periodic case, ill-posedness below $L^2(\Bo{T} )$ was also established via the lack of continuity of the solution map \cite{CCT03p-2,M09} and via the non-existence of solutions \cite{GO18}. Nevertheless, one can show an a priori bound in some Sobolev spaces below $L^2$ \cite{KT07,CCT08,KT12,GO18}, which prevents norm inflation. Recent results in \cite{KT16p,KVZ17p} finally gave an a priori bound on $H^s$ for $s>-\frac{1}{2}$, both in $\Bo{R}$ and in $\Bo{T}$. We remark that NI$_s$ at $s=-\frac{1}{2}$ shown in Theorem~\ref{thm:main0} ensures the optimality of these results. \footnote{The one-dimensional cubic problem was not treated in the first version of this article. We would like to thank T.~Oh for drawing our attention to this case. } In \cite[Theorem~4.7]{KVZ17p}, Killip, Vi\c{s}an and Zhang also derived an a priori bound on the solutions in a norm which is logarithmically stronger than the critical $H^{-\frac{1}{2}}$ norm. Motivated by this result, in addition to Theorem~\ref{thm:main0} (ii) we also show norm inflation for the one-dimensional cubic equation in some ``logarithmically subcritical'' spaces; see Proposition~\ref{prop:A} below. Since the work of Kenig, Ponce, and Vega \cite{KPV96-NLS}, non gauge-invariant nonlinearities have also been intensively studied. In \cite{BT06}, Bejenaru and Tao proposed an abstract framework for proving ill-posedness in the sense of discontinuity of the solution map. They considered the quadratic NLS \eqref{NLS} on $\Bo{R}$ with nonlinearity $u^2$ and obtained a complete dichotomy of the Sobolev index $s$ into locally well-posed ($s\ge -1$) and ill-posed ($s<-1$) in the sense mentioned above. Their argument is based on the power series expansion of the solution, and they proved ill-posedness by observing that high-to-low frequency cascades break the continuity of the first nonlinear term in the series. 
A similar dichotomy was shown for the other quadratic nonlinearities $\bar{u}^2$, $u\bar{u}$ in \cite{K09,KT10} by employing the idea of \cite{BT06}. Later, Iwabuchi and Ogawa \cite{IO15} considered the nonlinearities $u^2$, $\bar{u}^2$ in $\Bo{R}$ and $\Bo{R}^2$ and refined the idea of \cite{BT06} to prove ill-posedness in the sense of NI$_s$ for $s<-1$ in $\Bo{R}$ and $s\le -1$ in $\Bo{R}^2$. In particular, in the two-dimensional case they could complement the local well-posedness result in $H^s(\Bo{R} ^2)$, $s>-1$, which had been obtained in \cite{K09}. Note that the original argument of \cite{BT06} is not likely to yield norm inflation phenomena or discontinuity of the solution map at the threshold regularity such as $s=-1$ in the above $\Bo{R}^2$ case. We discuss this issue further in the next section. Another quadratic nonlinearity $u\bar{u}$ was investigated by the same method in \cite{IU15}, where for $\Bo{R}^d$ with $d=1,2,3$ they proved norm inflation in Besov spaces $B^{-1/4}_{2,\sigma}$ of regularity $-\frac{1}{4}$ with $4<\sigma \le \infty$. \footnote{Essentially, they also proved NI$_s$ for $s<-\frac{1}{4}$, i.e., the case (iv) of our Theorem~\ref{thm:main0}. } It turns out that the method of Iwabuchi and Ogawa \cite{IO15} for proving norm inflation has wide applicability. The purpose of the present article is to apply this method to NLS with general nonlinearities. In the last few years the method has been applied to a wide range of equations; see for instance \cite{MO15,MO16,HMO16,CP16,Ok17}. \footnote{ In the first version of this article, we only considered gauge-invariant smooth nonlinearities $\nu |u|^{2k}u$, $k\in \Bo{Z}_{>0}$, and linear combinations of them. Note, however, that the method of Iwabuchi and Ogawa \cite{IO15} had previously been applied only to quadratic nonlinearities, and that version was the first result dealing with nonlinearities of general degrees in a unified manner. 
The authors of \cite{CP16,O17} informed us that their proofs of norm inflation results followed the argument in the first version of this article. We also remark that an estimate proved in the first version (Lemma~\ref{lem:a_k} below) was employed later in \cite{MO16,HMO16,Ok17}. } In \cite{O17,Ok17}, norm inflation based at general initial data was proved for NLS and some other equations. \footnote{ In \cite{Ok17} non gauge-invariant nonlinearities were first treated in a general setting. In fact Theorem~\ref{thm:main0} follows as a corollary of \cite[Proposition~2.5 and Corollary~2.10]{Ok17}. However, we have decided to include the non gauge-invariant cases in the present version in order to state Theorem~\ref{thm:main} (for multi-term nonlinearities) with more generality. } We make some additional remarks on Theorem~\ref{thm:main0}. \begin{rem} (i) Concerning the one-dimensional periodic cubic NLS below $L^2$, the renormalized (or Wick ordered) equation \eqq{i\partial _tu+\partial _x^2u=\pm \big( |u|^2-2-\hspace{-13pt}\int _{\Bo{T}}|u|^2\big) u} is known to behave better than the original one \eqref{NLS} with nonlinearity $\pm |u|^2u$; see \cite{OS12} for a detailed discussion. We note that our proof can also be applied to the renormalized cubic NLS. In fact, the solutions constructed in Theorem~\ref{thm:main0} are smooth and their $L^2$ norm is conserved. Then, a suitable gauge transformation, which does not change the $H^s$ norm at any time, gives smooth solutions to the renormalized equation that exhibit norm inflation. (ii) In the periodic setting, our proof does not rely on any number-theoretic considerations. Hence, it can be easily adapted to the problem on general anisotropic tori, whether rational or irrational; that is, $Z=\Bo{R} ^{d_1}\times \Bo{R}^{d_2}/(\gamma _1\Bo{Z}\times \cdots \times \gamma _{d_2}\Bo{Z})$ for any $\gamma _1,\dots ,\gamma _{d_2}>0$. 
(iii) When $Z=\Bo{R}$ and $(p,q)=(4,2)$, the example in \cite[Example~5.3]{G00p} suggests that a high-to-low frequency cascade leads to instability of the solution map when $s<-\frac{1}{8}$. However, our argument does not imply NI$_s$ for $-\frac{1}{6}\le s<-\frac{1}{8}$ so far. \end{rem} There are far fewer results on ill-posedness for multi-term nonlinearities than for \eqref{NLS}. However, such nonlinear terms naturally appear in applications. For instance, the nonlinearity $6u^5-4u^3$ appears in a model related to shape-memory alloys \cite{FLS87}, and $(u+2\bar{u}+u\bar{u})u$ is relevant in the study of asymptotic behavior for the Gross-Pitaevskii equation (see {e.g.}~\cite{GNT09}). Note that norm inflation for a multi-term nonlinearity does not immediately follow from that for each nonlinear term. Our next result concerns the equation \eqref{NLS'} in full generality: \begin{thm}\label{thm:main} The initial value problem \eqref{NLS'} exhibits NI$_s$ whenever $s$ satisfies the condition in Theorem~\ref{thm:main0} for at least one term $u^{q_j}\bar{u}^{p_j-q_j}$ in $F (u,\bar{u})$, except for the case where $Z=\Bo{T}$ and $F (u,\bar{u})$ contains $u\bar{u}$. When $Z=\Bo{T}$ and $F (u,\bar{u})$ contains $u\bar{u}$, NI$_s$ occurs in the following cases: \begin{enumerate} \item $s<0$ if $F (u,\bar{u})$ has a quintic or higher term, or one of $u^3\bar{u}$, $u^2\bar{u}^2$, $u\bar{u}^3$. \item $s<-\frac{1}{6}$ if $F (u,\bar{u})$ has $u^4$ or $\bar{u}^4$ but no other quartic or higher terms. \item $s\le -\frac{1}{2}$ if $F (u,\bar{u})$ has a cubic term but no quartic or higher terms. \item $s<0$ if $F (u,\bar{u})$ has no cubic or higher terms. \end{enumerate} \end{thm} In the above theorem, the range of regularities is restricted when $Z=\Bo{T}$ and $F (u,\bar{u})$ has $u\bar{u}$; note that the nonlinear term $u\bar{u}$ by itself leads to NI$_s$ for $s<0$ as shown in Theorem~\ref{thm:main0}. This restriction seems unnatural and is likely an artifact of our argument. 
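To illustrate how Theorem~\ref{thm:main} is applied, consider the Gross-Pitaevskii-type nonlinearity mentioned above (a worked example of ours, not taken from the statement itself); expanding it into single terms gives

```latex
% Worked example: F(u,\bar{u}) = (u+2\bar{u}+u\bar{u})u on Z = \Bo{T}.
(u+2\bar{u}+u\bar{u})u
  \;=\; \underbrace{u^2}_{(p_j,q_j)=(2,2)}
  \;+\; \underbrace{2\,u\bar{u}}_{(p_j,q_j)=(2,1)}
  \;+\; \underbrace{u^2\bar{u}}_{(p_j,q_j)=(3,2)} .
```

Since this $F$ contains $u\bar{u}$ together with a cubic term $u^2\bar{u}$ and no quartic or higher terms, the third case listed in the theorem gives NI$_s$ on $\Bo{T}$ for every $s\le -\frac{1}{2}$.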
The rest of this article is organized as follows. In the next section, we recall the ideas of \cite{BT06} and \cite{IO15} and discuss some common features and differences between them. Section~\ref{sec:proof0} is devoted to the proof of Theorem~\ref{thm:main0} for the single-term nonlinearities. Then, in Section~\ref{sec:proof} we see how to treat the multi-term nonlinearities, proving Theorem~\ref{thm:main}. In the appendices, we consider norm inflation with infinite loss of regularity (Section~\ref{sec:niilr}) and inflation of various norms at the critical regularity for the one-dimensional cubic problem (Section~\ref{sec:ap}). \section{Strategy for proof}\label{BT-IO} We will use the power series expansion of the solutions to prove norm inflation. To see the idea, let us consider the simplest case of the quadratic nonlinearity $u^2$ in \eqref{NLS}. This amounts to considering the integral equation \eq{eq:ie}{u(t)&=e^{it\Delta}\phi -i\int _0^te^{i(t-\tau )\Delta}\big( u(\tau )\cdot u(\tau )\big) \,d\tau \\ &=:\Sc{L}[\phi ](t)+\Sc{N}[u,u](t),\qquad t\in [0,T].} We first recall the argument of Bejenaru and Tao \cite{BT06}. By Picard's iteration, the power series $\sum _{k=1}^\infty U_k[\phi ]$ with \eqq{&U_1[\phi ]:=\Sc{L}[\phi ], \qquad U_2[\phi ]:=\Sc{N}[\Sc{L}[\phi ],\Sc{L}[\phi ]],\\ &U_3[\phi ]:=\Sc{N}[\Sc{L}[\phi ],\Sc{N}[\Sc{L}[\phi ],\Sc{L}[\phi ]]]+\Sc{N}[\Sc{N}[\Sc{L}[\phi ],\Sc{L}[\phi ]],\Sc{L}[\phi ]],\\ &\quad \vdots \\ &U_k[\phi ]:=\sum _{k_1,k_2\ge 1;\,k_1+k_2=k}\Sc{N}[U_{k_1}[\phi ],U_{k_2}[\phi ]] \qquad (k\ge 2)} formally gives a solution to \eqref{eq:ie}. To justify this, we basically need the linear and bilinear estimates \eq{est:qwp}{\norm{\Sc{L}[\phi ]}{S}\le C\norm{\phi}{D},\qquad \norm{\Sc{N}[u_1,u_2]}{S}\le C\norm{u_1}{S}\norm{u_2}{S}} for the space of initial data $D$ and some space $S\subset C([0,T];D)$ in which we construct a solution. 
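As a side check on the combinatorics of this expansion (our own illustration; the function name is ours, not from the paper), the number of multilinear expressions composing $U_k$ for the quadratic nonlinearity obeys $c_1=1$, $c_k=\sum_{k_1+k_2=k}c_{k_1}c_{k_2}$, the same recursion as $U_k$ itself, and produces the Catalan numbers:

```python
# Number of multilinear expressions in U_k for the quadratic nonlinearity:
# c_1 = 1 and c_k = sum over k1+k2=k, k1,k2>=1 of c_{k1}*c_{k2},
# mirroring the recursion U_k = sum N[U_{k1}, U_{k2}] above.
def expansion_term_counts(K):
    c = [0, 1]  # 1-indexed: c[1] = 1 corresponds to the linear term L[phi]
    for k in range(2, K + 1):
        c.append(sum(c[k1] * c[k - k1] for k1 in range(1, k)))
    return c[1:]

print(expansion_term_counts(8))  # [1, 1, 2, 5, 14, 42, 132, 429]
```

Since the Catalan numbers grow at most like $4^{k-1}$, this geometric growth of the number of terms is consistent with the geometric bound $(C\tnorm{\phi}{D})^k$ for $U_k$ discussed next.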
In fact, they showed (roughly speaking) the following: \begin{quote} Assume that \eqref{est:qwp} holds with the Banach space $D$ of initial data and some Banach space $S$. Then, (i) for any $k\ge 1$ the operators $U_k:D\to S$ are well-defined and satisfy $\tnorm{U_k[\phi ]}{S}\le (C\tnorm{\phi}{D})^k$, and (ii) there exists $\varepsilon _0>0$ (depending on the constants in \eqref{est:qwp}) such that the solution map $\phi \mapsto u[\phi ]:=\sum _{k=1}^\infty U_k[\phi ]$ is well-defined on $B_D(\varepsilon _0):=\Shugo{\phi \in D}{\tnorm{\phi}{D}\le \varepsilon _0}$ and gives a solution to \eqref{eq:ie}. \end{quote} Next, consider some coarser topologies on $D$ and $S$ induced by the norms $\tnorm{~}{D'}$ and $\tnorm{~}{S'}$ weaker than $\tnorm{~}{D}$ and $\tnorm{~}{S}$, respectively. They claimed the following: \begin{quote} Assume further that the solution map $\phi \mapsto u[\phi ]$ given above is continuous from $(B_D(\varepsilon _0),\tnorm{~}{D'})$ (i.e., $B_D(\varepsilon _0)$ equipped with the $D'$ topology) to $(S,\tnorm{~}{S'})$. Then, for each $k$ the operator $U_k$ is continuous from $(B_D(\varepsilon _0),\tnorm{~}{D'})$ to $(S,\tnorm{~}{S'})$. \end{quote} To show the continuity of $U_k$ in coarser topologies, by its homogeneity one can restrict to sufficiently small initial data. Then, by the estimates \eqref{est:qwp}, the contribution of the higher-order terms $\sum _{k'>k}U_{k'}[\phi ]$ can be made arbitrarily small compared to $U_k[\phi ]$. Combining this fact with the hypothesis that $\sum _{k\ge 1}U_k[\phi ]$ is continuous, one can show the claim by an induction argument on $k$. Now, this claim gives a way to prove ill-posedness in coarse topologies. Namely, one can show the discontinuity of the solution map $\phi \mapsto \sum _{k=1}^\infty U_k[\phi ]$ in coarse topologies by simply establishing the discontinuity of the (more explicit) map $\phi \mapsto U_k[\phi ]$ for at least one $k$. 
\footnote{It is worth noticing that the continuity of $U_k$ from $(B_D(\varepsilon _0),\tnorm{~}{D'})$ to $(S,\tnorm{~}{S'})$ does not imply its continuity from $(D,\tnorm{~}{D'})$ to $(S,\tnorm{~}{S'})$ in general, even though $U_k$ can be defined for all functions in $D$. By the $k$-linearity of $U_k$, the latter continuity is equivalent to the \emph{boundedness}: $\tnorm{U_k[\phi ]}{S'}\le C\tnorm{\phi}{D'}^k$. Hence, only disproving the boundedness of $U_k$ in coarse topologies (which may imply that the solution map is not $k$ times differentiable) is not sufficient to conclude the discontinuity of the solution map. } We notice that this proof of ill-posedness includes evaluating higher terms by using \eqref{est:qwp}, that is, estimates (or well-posedness) in a stronger topology. Here, we observe two facts about this method. First, it cannot yield norm inflation in coarse topologies. This is because the image of the continuous solution map with domain $B_D(\varepsilon _0)$ is bounded in $S$, and hence it must be bounded in weaker norms. Secondly, the `well-posedness' estimates \eqref{est:qwp} in $D,S$ and discontinuity of some $U_k$ in $D',S'$ would imply the discontinuity of $U_k$ in any `intermediate' norms $D'',S''$ satisfying \eqq{\tnorm{\phi}{D'}\lesssim \tnorm{\phi}{D''}\lesssim \tnorm{\phi}{D}^\theta \tnorm{\phi}{D'}^{1-\theta},\qquad \tnorm{u}{S'}\lesssim \tnorm{u}{S''}\lesssim \tnorm{u}{S}} for some $0<\theta <1$. In fact, if $U_k:(B_D(\varepsilon _0),\tnorm{~}{D'})\to S'$ is not continuous, there exist $\shugo{\phi _n}\subset B_D(\varepsilon _0)$ and $\phi _\infty \in B_D(\varepsilon _0)$ such that $\tnorm{\phi _n-\phi _\infty}{D'}\to 0$ ($n\to \infty$) but $\tnorm{U_k[\phi _n]-U_k[\phi _\infty ]}{S'}\gtrsim 1$. Since $\shugo{\phi _n}$ is bounded in $D$, this implies that $\tnorm{\phi _n-\phi _\infty}{D''}\to 0$ and $\tnorm{U_k[\phi _n]-U_k[\phi _\infty ]}{S''}\gtrsim 1$. 
In particular, if we work in Sobolev spaces: \eqq{D=H^{s_0},\quad S\hookrightarrow C([0,T];H^{s_0}),\quad D'=H^{s_1},\quad S'=C([0,T];H^{s_1})\qquad (s_0>s_1),} then ill-posedness in $H^{s_1}$ as a consequence of the argument in \cite{BT06} should actually yield ill-posedness in any $H^s$, $s_1\le s<s_0$, while we have \eqref{est:qwp}, i.e., well-posedness in $H^{s_0}$. Therefore, the regularity $s_0$ in which we invoke \eqref{est:qwp} must automatically be the threshold regularity for well-/ill-posedness. This explains why the same argument cannot be applied to the two-dimensional quadratic NLS with nonlinearity $u^2$. In fact, as mentioned in the Introduction, \eqref{est:qwp} are obtained in $D=H^s$ when $s>-1$ (with a suitable $S$) but fail if $s\le -1$ (for any $S$ continuously embedded into $C([0,T];H^s)$), and hence well-posedness at the threshold regularity is not available in this case. We next recall Iwabuchi and Ogawa's result \cite{IO15}, which settled the aforementioned two-dimensional case. Indeed, the argument in \cite{IO15} is similar to that of \cite{BT06} in that it exploits the power series expansion and shows that one term in the series exhibits instability and dominates all the other terms. Now, we notice that the existence time $T>0$ is allowed to shrink for the purpose of establishing norm inflation, while in \cite{BT06} it is fixed and uniform with respect to the initial data. The main difference of the argument in \cite{IO15} from that of \cite{BT06} is that they worked with estimates of the form \eq{est:qwp'}{\norm{\Sc{L}[\phi ]}{S_T}\le C\norm{\phi}{D},\qquad \norm{\Sc{N}[u_1,u_2]}{S_T}\le CT^\delta \norm{u_1}{S_T}\norm{u_2}{S_T}} for the data space $D$, $S_T\subset C([0,T];D)$, and $\delta >0$, and considered the expansion up to different times $T$ according to the initial data. 
In fact, this enables us to take a sequence of initial data which is unbounded in $D$ (but converges to $0$ in a weaker norm), and such a set of initial data actually yields an unbounded sequence of solutions. Another feature of the argument in \cite{IO15} is that higher-order terms were estimated directly in $D'$ by using properties of the specific initial data they chose; in \cite{BT06} these terms were simply estimated in $D$ by \eqref{est:qwp}, which hold for general functions. \footnote{In fact, we do not need `well-posedness in $D$', i.e., such estimates as \eqref{est:qwp'} that hold for \emph{all} functions in $D$ and $S$. It is enough to estimate the terms $U_k[\phi ]$ just for particularly chosen initial data $\phi$. In some problems this consideration becomes essential; see \cite{Ok17}, Theorem~1.2 and its proof. } At a technical level, another novelty in \cite{IO15} is the use of the modulation space $M_{2,1}$ as $D$ instead of Sobolev spaces. The bilinear estimate in \eqref{est:qwp'} is then straightforward thanks to the algebra property of $M_{2,1}$. Finally, we remark that the strategies of \cite{BT06,IO15} work well in the case where the operator $U_k$ involves a significant high-to-low frequency cascade, as mentioned in \cite{BT06}. However, the situation is different in the case of a \emph{system} of equations, as there is more than one regularity index and one cannot simply order two pairs of regularity indices; see e.g.~\cite{MO15}, where the argument of \cite{IO15} was employed to derive norm inflation from nonlinear interactions of ``high$\times$low$\to$high'' type. \section{Proof of Theorem~\ref{thm:main0}}\label{sec:proof0} Let us first consider the case of a single-term nonlinearity and prove Theorem~\ref{thm:main0}. The argument in this section basically follows that in \cite{IO15}. Since the coefficient $\nu \neq 0$ plays no role in our proof, we assume $\nu =1$ for simplicity. 
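Before entering the proof, the resonance counting that drives the lower bounds below can be previewed numerically. The sketch here is our own illustration (the function name is an assumption, not from the paper): it brute-forces the quadruples over the frequency set $\{-N,2N,3N\}$ used later for $Z=\Bo{T}$, $p=4$, in the proof of Lemma~\ref{lem:U_p} (iv), recovering the counts $3$ (for $q=1,3$) and $15$ (for $q=2$) appearing there.

```python
from itertools import product

# Count quadruples (eta_1,...,eta_4) in {-N, 2N, 3N}^4 satisfying the
# frequency constraint of mu_{4,q}: the sum of the first q entries minus
# the sum of the remaining ones equals 0.  (N scales out, so we may work
# with the set {-1, 2, 3}.)
def resonant_count(q, freqs=(-1, 2, 3)):
    signs = [1] * q + [-1] * (4 - q)
    return sum(
        1
        for etas in product(freqs, repeat=4)
        if sum(s * e for s, e in zip(signs, etas)) == 0
    )

print(resonant_count(1), resonant_count(2))  # 3 15
```

For $q=1$ the three solutions are exactly the permutations $(3N,-N,2N,2N)$, $(3N,2N,-N,2N)$, $(3N,2N,2N,-N)$ listed in the proof.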
We write \eqq{\mu _{p,q}(z_1,\dots ,z_p):=\prod _{l=1}^qz_l\prod _{m=q+1}^p\bar{z}_m,\qquad \mu _{p,q}(z):=\mu _{p,q}(z,\dots ,z),} so that $u^q\bar{u}^{p-q}=\mu _{p,q}(u)$. \begin{defn}\label{defn:U_k} For $\phi \in L^2(Z)$, we (formally) define \eqq{U_1[\phi ](t)&:=e^{it\Delta}\phi ,\\ U_k[\phi ](t)&:=-i\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}\int _0^t e^{i(t-\tau )\Delta}\mu _{p,q}\big( U_{k_1}[\phi ],\dots ,U_{k_p}[\phi ]\big) (\tau )\,d\tau ,\qquad k\ge 2.} \end{defn} Note that $U_{k}[\phi ]=0$ unless $k\equiv 1\mod p-1$. The expansion $u=\sum _{k=1}^\infty U_k[\phi ]$ of a (unique) solution $u$ to \eqref{NLS} will play a crucial role in the proof. To make sense of this representation, we use modulation spaces. The notion of modulation spaces was introduced by Feichtinger in the 1980s \cite{F83} and nowadays it has become one of the common tools in the study of nonlinear evolution PDEs; see e.g.~the survey \cite{RSW12} and references therein. \begin{defn} Let $A>0$ be a dyadic number. Define the space $M_A$ as the completion of $C_0^\infty (Z)$ with respect to the norm \[ \norm{f}{M_A}:=\sum _{\xi \in A\Bo{Z}^d}\norm{\widehat{f}}{L^2(\xi +Q_A)},\] where $Q_A:=[-\frac{A}{2},\frac{A}{2})^d$. \end{defn} \begin{rem} We consider the space $M_A$ with $A<1$ only when $Z=\Bo{R} ^d$. For $Z=\Bo{R} ^{d_1}\times \Bo{T} ^{d_2}$, the $L^2(\xi +Q_A)$ norm in the above definition means the $L^2$ norm restricted onto $(\xi +Q_A)\cap \widehat{Z}$, where $\widehat{Z}:=\Bo{R} ^{d_1}\times \Bo{Z} ^{d_2}$. If $Z=\Bo{T}^d$, the space $M_1$ coincides with the Wiener algebra $\Sc{F} L^1(\Bo{T} ^d)$. \end{rem} We will only use the following properties of the space $M_A$. The proof is elementary, and thus it is omitted. \begin{lem}\label{lem:M_A} (i) $M_A\cong _AM_1$,\hspace{10pt} $H^{\frac{d}{2}+\varepsilon}\hookrightarrow M_1\hookrightarrow L^2$\hspace{10pt} ($\varepsilon >0$). 
(ii) There exists $C=C(d)>0$ such that for any $f,g\in M_A$, we have \[ \norm{fg}{M_A}\le CA^{\frac{d}{2}}\norm{f}{M_A}\norm{g}{M_A}.\] \end{lem} Since the space $M_A$ is a Banach algebra and the linear propagator $e^{it\Delta}$ is unitary in $M_A$, we can easily show the following multilinear estimates. \begin{lem}\label{lem:U_k} Let $A\ge 1$ be a dyadic number and $\phi \in M_A$ with $\tnorm{\phi}{M_A}\le M$. Then, there exists $C>0$ independent of $A$ and $M$ such that \eqq{\norm{U_k[\phi ](t)}{M_A}\le t^{\frac{k-1}{p-1}}(CA^{\frac{d}{2}}M)^{k-1}M} for any $t\ge 0$ and $k\ge 1$. \end{lem} \begin{proof} Let $\shugo{a_k} _{k=1}^\infty$ be the sequence defined by \[ a_1=1,\qquad a_k=\frac{p-1}{k-1}\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}a_{k_1}\cdots a_{k_p}\qquad (k\ge 2).\] As observed in \cite[Eq.~(16)]{BT06}, one can show inductively that $a_k\le C^k$ for some $C>0$. To be more precise, we state it as the following lemma. The $p=2$ case can be found in \cite[Lemma~4.2]{MO16} with a detailed proof. \begin{lem}\label{lem:a_k} Let $\shugo{b_k}_{k=1}^\infty$ be a sequence of nonnegative real numbers such that \[ b_{k} \le C\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}b_{k_1}\cdots b_{k_p},\qquad k\ge 2\] for some $p\ge 2$ and $C>0$. Then, we have \[ b_k\le b_1C_0^{k-1},\qquad k\ge 1;\qquad C_0:=\frac{\pi ^2}{6}(Cp^2)^{\frac{1}{p-1}}b_1.\] \end{lem} By Lemma~\ref{lem:a_k}, it holds $a_k\le C_0^{k-1}$ for some $C_0>0$. Thus, it suffices to show \eqq{\norm{U_{k}[\phi ](t)}{M_A}\le a_kt^{\frac{k-1}{p-1}}(C_1A^{\frac{d}{2}}M)^{k-1}M,\qquad t\ge 0,\quad k\ge 1} for some $C_1>0$. This is trivial if $k=1$. Let $k\ge 2$, and assume the above estimate for $U_1,U_2,\dots ,U_{k-1}$. 
Using Lemma~\ref{lem:M_A}, we have \eqq{\norm{U_{k}[\phi ](t)}{M_A}&\le CA^{\frac{d}{2}(p-1)}\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}\int _0^t \prod _{j=1}^p\norm{U_{k_j}[\phi ](\tau )}{M_A}\,d\tau \\ &\le CA^{\frac{d}{2}(p-1)}(C_1A^{\frac{d}{2}}M)^{k-p}M^p\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}a_{k_1}\cdots a_{k_p}\int _0^t \tau ^{\frac{k-p}{p-1}}\,d\tau \\ &=Ca_kC_1^{k-p}(A^{\frac{d}{2}}M)^{k-1}Mt^{\frac{k-1}{p-1}}.} The estimate for $U_k$ follows by setting $C_1$ to be $C^{\frac{1}{p-1}}$ with the constant $C$ in the last line, which is independent of $k$. \end{proof} A standard argument (cf.~\cite[Theorem~3]{BT06}) with Lemma~\ref{lem:M_A} (ii) and Lemma~\ref{lem:U_k} shows the following local well-posedness of \eqref{NLS} in $M_A$. \begin{cor}\label{cor:lwp} Let $A\ge 1$ be dyadic, and $M>0$. If $0<T\ll (A^{d/2}M)^{-(p-1)}$, then for any $\phi \in M_A$ with $\tnorm{\phi}{M_A}\le M$ the following holds. (i) A unique solution $u$ to the integral equation associated with \eqref{NLS}, \eq{eq:ie'}{u(t)=e^{it\Delta}\phi -i\int _0^t e^{i(t-\tau )\Delta}\mu _{p,q}(u(\tau ))\,d\tau ,\qquad t\in [0,T]} exists in $C([0,T];M_A)$. (ii) The solution $u$ given in (i) has the expression \eqq{u=\sum _{k=1}^\infty U_{k}[\phi ]=\sum _{l=0}^\infty U_{(p-1)l+1}[\phi ],} which converges absolutely in $C([0,T];M_A)$. \end{cor} \begin{proof} (i) Let \eqq{\Psi _{\phi}[u](t):= e^{it\Delta}\phi -i\int _0^t e^{i(t-\tau )\Delta}\mu _{p,q}(u(\tau ))\,d\tau ,} then from Lemma~\ref{lem:M_A} (ii) we have \eqq{\norm{\Psi _\phi [u]}{L^\infty (0,T;M_A)}\le \tnorm{\phi}{M_A}+CTA^{\frac{d}{2}(p-1)}\tnorm{u}{L^\infty (0,T;M_A)}^p} and see that $\Psi _\phi$ is a contraction on a ball in $C([0,T];M_A)$ if $TA^{\frac{d}{2}(p-1)}\tnorm{\phi}{M_A}^{p-1}\ll 1$. (ii) The series $u=\sum _{k\ge 1}U_k[\phi ]$ converges in $C([0,T];M_A)$ by virtue of Lemma~\ref{lem:U_k}. By uniqueness, it suffices to show that $u$ solves the equation \eqref{eq:ie'}. 
Let $u_K:=\sum _{k=1}^KU_k[\phi ]$, so that $u=\lim _{K\to \infty}u_K$ in $C([0,T];M_A)$. We see that $\Psi _\phi [u_K]-u_K$ consists of $k$-linear terms in $\phi$ with $K+1\le k\le pK$, and we can show \eqq{\tnorm{\Psi _\phi [u_K]-u_K}{L^\infty (0,T;M_A)}\le C(CT^{\frac{1}{p-1}}A^{\frac{d}{2}}M)^KM} by an argument similar to that for Lemma~\ref{lem:U_k}. By letting $K\to \infty$, we obtain $\Psi _\phi [u]=u$. \end{proof} \begin{rem} (i) In $M_A$ we have \emph{unconditional} local well-posedness. In particular, the embedding (Lemma~\ref{lem:M_A} (i)) shows that the unique solution with initial data in some high-regularity Sobolev space exists on a time interval $[0,T]$ and coincides with the solution constructed in Corollary~\ref{cor:lwp}. (ii) In the following proof of Theorem~\ref{thm:main0} we will take initial data that are localized in frequency on several cubes of side length $O(A)$ located in $\shugo{|\xi |\gg \max (1,A)}$. For such initial data the $L^2$ norm is comparable with the $M_A$ norm, but much smaller than the Sobolev norms of positive indices. In the $L^2$-supercritical cases (i.e., $s_c(d,p)>0$), no reasonable well-posedness is expected in $L^2$, while the use of a higher Sobolev space would verify the power series expansion only on a smaller time interval. In this regard, the space $M_A$ is suitable for our purpose. \end{rem} Let $N,A$ be dyadic numbers to be specified so that $N\gg 1$ and $0< A\ll N$ ($1\le A\ll N$ when $Z$ has a periodic direction). In the proof of norm inflation, we will use initial data $\phi$ of the following form: \eq{cond:phi}{&\widehat{\phi}=rA^{-\frac{d}{2}}N^{-s}\chi _\Omega \quad \text{with a positive constant $r$ and a set $\Omega$ satisfying}\\ &\Omega = \bigcup _{\eta \in \Sigma}(\eta +Q_A)\hspace{10pt} \text{for some $\Sigma \subset \shugo{\xi \in \Bo{R} ^d:|\xi |\sim N}$ s.t. $\# \Sigma \le 3$}.} Note that $\tnorm{\phi}{M_A}\sim rN^{-s}$, $\tnorm{\phi}{H^s}\sim r$. 
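These norm equivalences follow by direct computation; for the reader's convenience we write out the routine check (an illustration of ours, not displayed in the original). Since $\LR{\xi}\sim N$ on $\Omega$ and $\Omega$ is a union of at most three cubes of side $A$,

```latex
% Sketch: norms of \phi with \widehat{\phi} = rA^{-d/2}N^{-s}\chi_\Omega,
% where \Omega is a union of at most three cubes \eta+Q_A, |\eta|\sim N.
\norm{\phi}{M_A}
  \sim \# \Sigma \cdot rA^{-\frac{d}{2}}N^{-s}\,\big| Q_A\big| ^{\frac{1}{2}}
  \sim rN^{-s},
\qquad
\norm{\phi}{H^s}
  \sim N^s\norm{\widehat{\phi}}{L^2(\Omega )}
  \sim N^s\cdot rA^{-\frac{d}{2}}N^{-s}\cdot (\# \Sigma )^{\frac{1}{2}}A^{\frac{d}{2}}
  \sim r.
```

Here we used $\# \Sigma \le 3$ and the fact that each cube of $\Omega$ meets only $O(1)$ of the tiles $\xi +Q_A$, $\xi \in A\Bo{Z}^d$, appearing in the definition of the $M_A$ norm.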
We derive Sobolev bounds on $U_k[\phi ](t)$ for $\phi$ satisfying the above condition. \begin{lem}\label{lem:supp} There exists $C>0$ such that for any $\phi$ satisfying \eqref{cond:phi} and $k\ge 1$, we have \eqq{\big| \supp{\widehat{U_{k}[\phi]}(t)}\big| \le C^kA^d,\qquad t\ge 0.} \end{lem} \begin{proof} Since the $\xi$-support of $\widehat{U_{k}[\phi]}$ is determined by convolutions (on the frequency side) of $k$ copies of $\hat{\phi}$ or $\hat{\bar{\phi}}=\overline{\hat{\phi}(-\cdot )}$, it is easily seen that \eqq{\Supp{\widehat{U_{k}[\phi]}(t)}{\bigcup _{\eta \in \Sc{S}_k}\big( \eta +Q_{kA}\big)}} for all $t\ge 0$, where $\Sc{S}_1:=\Sigma$ and \eqq{\Sc{S}_k:=&\Shugo{\eta \in \Bo{R}^d}{\eta =\sum _{l=1}^k\eta _l,\,\eta _l\in \Sigma \cup (-\Sigma )\;(1\le l\le k)},\qquad k\ge 2.} Since $\# \Sc{S}_k\le 6^k$, we have \eqq{\big| \supp{\widehat{U_{k}[\phi]}(t)}\big| \le \big| Q_{kA}\big| \# \Sc{S}_k\le (kA)^d6^k\le C^kA^d.\qedhere} \end{proof} \begin{lem}\label{lem:U_k_H^s} Let $\phi$ satisfy \eqref{cond:phi}. Assume that $s<0$. Then, there exists $C>0$ depending only on $d,p,s$ such that the following holds. \begin{enumerate} \item $\norm{U_1[\phi ](T)}{H^s}\le Cr$\hspace{10pt} for any $T\ge 0$. \item $\norm{U_{k}[\phi](T)}{H^s}\le Cr(C\rho)^{k-1}A^{-\frac{d}{2}}N^{-s}f_s(A)$\hspace{10pt} for any $T\ge 0$ and $k\ge 2$, where \eqq{\rho :=rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-1}},\qquad f_s(A):=\norm{\LR{\xi}^s}{L^2(\shugo{|\xi |\le A})}.} \end{enumerate} \end{lem} \begin{proof} (i) is easily verified. For (ii), we see that \eqq{&\norm{U_{k}[\phi](t)}{H^s}\le \norm{\LR{\xi}^s}{L^2(\supp{\widehat{U_{k}[\phi]}(t)})}\sup _{\xi \in \Bo{R}^d}\big| \widehat{U_k[\phi]}(t,\xi )\big| \\ &\le \norm{\LR{\xi}^s}{L^2(\supp{\widehat{U_{k}[\phi]}(t)})}\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}\int _0^t \norm{\big| v_{k_1}(\tau )\big| *\cdots *\big| v_{k_p}(\tau )\big|}{L^\infty}\,d\tau ,} where $v_{k_l}$ is either $\widehat{U_{k_l}[\phi]}$ or $\widehat{\overline{U_{k_l}[\phi ]}}$. 
By Young's inequality, the above is bounded by \eqq{&\norm{\LR{\xi}^s}{L^2(\supp{\widehat{U_{k}[\phi]}(t)})}\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}\int _0^t \norm{v_{k_1}(\tau )}{L^2}\norm{v_{k_2}(\tau )}{L^2}\prod _{l=3}^{p}\norm{v_{k_l}(\tau )}{L^1}\,d\tau \\ &\le \norm{\LR{\xi}^s}{L^2(\supp{\widehat{U_{k}[\phi]}(t)})}\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}\int _0^t \prod _{l=3}^p\big| \supp{\widehat{U_{k_l}[\phi]}(\tau )}\big| ^{\frac{1}{2}}\prod _{l=1}^p\norm{\widehat{U_{k_l}[\phi]}(\tau )}{L^2}\,d\tau .} Since $s<0$, for any bounded set $D\subset \Bo{R} ^d$ it holds that \eqq{\big| \shugo{\LR{\xi}^{s}>\lambda}\cap D\big| \le \big| \shugo{\LR{\xi}^s>\lambda}\cap B_D\big| \qquad (\lambda >0),} where $B_D\subset \Bo{R} ^d$ is the ball centered at the origin with $|D|=|B_D|$. This implies that $\tnorm{\LR{\xi}^s}{L^2(D)}\le \tnorm{\LR{\xi}^s}{L^2(B_D)}$. Moreover, it follows from Lemma~\ref{lem:U_k} with $M=CrN^{-s}$ that \eqq{\norm{U_{k}[\phi](t)}{L^2}\le \norm{U_{k}[\phi](t)}{M_A}\le Ct^{\frac{k-1}{p-1}}(CrA^{\frac{d}{2}}N^{-s})^{k-1}rN^{-s},\qquad k\ge 1 .} Hence, we apply Lemma~\ref{lem:supp} to bound the above by \eqq{&\norm{\LR{\xi}^s}{L^2(\shugo{|\xi |\le C^{\frac{k}{d}}A})}\cdot C^{\frac{k}{2}}A^{\frac{d(p-2)}{2}}\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}\int _0^t \prod _{l=1}^p\big[ C\tau ^{\frac{k_l-1}{p-1}}(CrA^{\frac{d}{2}}N^{-s})^{k_l-1}rN^{-s}\big] \,d\tau \\ &\le C^k\norm{\LR{\xi}^s}{L^2(\shugo{|\xi |\le A})}A^{\frac{d(p-2)}{2}+\frac{d}{2}(k-p)}(rN^{-s})^k\int _0^t\tau ^{\frac{k-p}{p-1}}\,d\tau \\ &\le f_s(A)A^{\frac{d}{2}(k-2)}(CrN^{-s})^kt^{\frac{k-1}{p-1}},} which is the desired one. \end{proof} We observe the following lower bounds on the $H^s$ norm of the first nonlinear term in the expansion of the solution. \begin{lem}\label{lem:U_p} The following estimates hold for any $s\in \Bo{R}$. \begin{enumerate} \item Let $(p,q)$ and $Z=\Bo{R}^{d_1}\times \Bo{T} ^{d_2}$ be arbitrary. 
For $1\le A\ll N$, we define the initial data $\phi$ by \eqref{cond:phi} with $\Sigma =\shugo{Ne_d, -Ne_d, 2Ne_d}$, where $e_d:=(0,\dots ,0,1)\in \Bo{R} ^d$. If $0<T\ll N^{-2}$, then we have \eqq{\norm{U_p[\phi ](T)}{H^s}\gtrsim r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A).} \item Let $(p,q)=(2,1)$ and $Z=\Bo{R} ^d$, $1\le d\le 3$. For $N\gg 1$, define $\phi$ by \eqq{\widehat{\phi}:=rN^{\frac{1}{2}-s}\chi _{Ne_d+\widetilde{Q}_{N^{-1}}}\hspace{10pt} \text{with}\hspace{10pt} r>0,\hspace{10pt} \widetilde{Q}_{N^{-1}}:=[-\tfrac{1}{2},\tfrac{1}{2})^{d-1}\times [-\tfrac{1}{2N},\tfrac{1}{2N}).} Then, for any $0<T\ll 1$ we have \eqq{\norm{U_2[\phi ](T)}{H^s}\gtrsim r^2N^{-2s-\frac{1}{2}}T.} \item Let $(p,q)=(2,1)$ and $Z=\Bo{R} ^{d_1}\times \Bo{T} ^{d_2}$ with $d_1+d_2\le 3$, $d_2\ge 1$. Define $\phi$ by \eqref{cond:phi} with $A=1$, $\Sigma =\shugo{Ne_d}$. Then, for any $0<T\ll 1$ we have \eqq{\norm{U_2[\phi ](T)}{H^s}\gtrsim r^2N^{-2s}T.} \item Let $(p,q)=(4,1)$ or $(4,2)$ or $(4,3)$ and $Z=\Bo{T}$. Define $\phi$ by \eqref{cond:phi} with $A=1$, $\Sigma =\shugo{-N,2N,3N}$. Then, for any $T>0$ we have \eqq{\norm{U_4[\phi ](T)}{H^s}\gtrsim r^4N^{-4s}T.} \end{enumerate} \end{lem} \begin{proof} Note that \eqq{\widehat{U_p[\phi ]}(T,\xi )=ce^{-iT|\xi |^2}\int _{\Gamma}\prod _{l=1}^q\widehat{\phi}(\xi _l)\prod _{m=q+1}^p\overline{\widehat{\phi}(\xi _m)}\int _0^T e^{it\Phi}\,dt,} where \eqs{\Gamma :=\Shugo{(\xi _1,\dots ,\xi _p)}{\sum _{l=1}^q\xi _l-\sum _{m=q+1}^p\xi _m=\xi},\quad \Phi :=|\xi |^2-\sum _{l=1}^q|\xi _l|^2+\sum _{m=q+1}^p|\xi _m|^2.} (i) If we restrict $\xi$ to $Q_A$, we have \eqq{\widehat{U_p[\phi ]}(T,\xi )=c(rA^{-\frac{d}{2}}N^{-s})^pe^{-iT|\xi |^2}\sum _{(\eta _1,\dots ,\eta _p)}\int _{\Gamma}\prod _{l=1}^p\chi _{\eta _l+Q_A}(\xi _l)\int _0^T e^{it\Phi}\,dt,} where the sum is taken over the set \eqq{\Shugo{(\eta _1,\dots ,\eta _p)\in \shugo{\pm Ne_d ,2Ne_d}^p}{\sum _{l=1}^q\eta _l-\sum _{m=q+1}^p\eta _m=0},} which is non-empty for any $(p,q)$. 
\footnote{If $p$ is even, we can choose $\eta _l$ to be $Ne_d$ or $-Ne_d$ so that $\sum _{l=1}^q\eta _l-\sum _{m=q+1}^p\eta _m=0$. If $p$ is odd, we choose $\eta _1=2Ne_d$ and $\eta _2$ to be $Ne_d$ or $-Ne_d$ so that the output from these two frequencies is either $Ne_d$ or $-Ne_d$. Then, the other $\eta _j$ can be chosen as for $p$ even. } Since $|\Phi| \lesssim N^2$ in the integral, for $0<T\ll N^{-2}$ we have \eqq{|\widehat{U_p[\phi ]}(T,\xi )|\gtrsim (rA^{-\frac{d}{2}}N^{-s})^p(A^d)^{p-1}T\chi _{p^{-1}Q_{A}}(\xi ),} and thus \eqq{\norm{U_p[\phi ](T)}{H^s}\gtrsim (rA^{-\frac{d}{2}}N^{-s})^p(A^d)^{p-1}T\norm{\LR{\xi}^s}{L^2(p^{-1}Q_{A})}\sim r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A).} (ii) In this case we have \eqq{\widehat{U_2[\phi ]}(T,\xi )=c(rN^{\frac{1}{2}-s})^2e^{-iT|\xi |^2}\int _{\xi _1-\xi _2=\xi}\chi _{\widetilde{Q}_{N^{-1}}}(\xi _1-Ne_d)\chi _{\widetilde{Q}_{N^{-1}}}(\xi _2-Ne_d)\int _0^T e^{it\Phi}\,dt,} and in the integral, for $\xi =\xi _1-\xi _2\in \widetilde{Q}_{N^{-1}}$, \eqq{\Phi =|\xi |^2-|\xi _1|^2+|\xi _2|^2=|\xi |^2-|\xi _1-Ne_d|^2+|\xi _2-Ne_d|^2-2(\xi _1-\xi _2)\cdot Ne_d=O(1).} Hence, if $0<T\ll 1$, we have \eqq{|\widehat{U_2[\phi ]}(T)|\gtrsim (rN^{\frac{1}{2}-s})^2N^{-1}T\chi _{2^{-1}\widetilde{Q}_{N^{-1}}},\qquad \norm{U_2[\phi ](T)}{H^s}\gtrsim (rN^{\frac{1}{2}-s})^2N^{-\frac{3}{2}}T} for any $s\in \Bo{R}$. (iii) Similarly to (ii), we see that \eqq{\widehat{U_2[\phi ]}(T,(\xi ',0))=c(rN^{-s})^2e^{-iT|\xi |^2}\int _{\xi _1'-\xi _2'=\xi '}\chi _{[-1/2,1/2 )^{d-1}}(\xi _1')\chi _{[-1/2,1/2 )^{d-1}}(\xi _2')\int _0^T e^{it\Phi}\,dt,} where the integral in $\xi '=(\xi _1,\dots ,\xi _{d-1})$ is absent if $Z=\Bo{T}$. In the integral, \eqq{\Phi =|(\xi ',0)|^2-|(\xi _1',N)|^2+|(\xi _2',N)|^2=O(1).} Hence, if $0<T\ll 1$, we have \eqq{\norm{U_2[\phi ](T)}{H^s}\ge \norm{\LR{\xi}^s\widehat{U_2[\phi ]}(T)}{L^2(Q_{1/2})}\gtrsim (rN^{-s})^2T} for any $s\in \Bo{R}$. (iv) We first consider $(p,q)=(4,1)$; the case of $(4,3)$ is treated in the same way. 
Observe that \eqq{&\Shugo{(\eta _1,\dots ,\eta _4)\in \shugo{-N,2N,3N}^4}{\eta _1-\eta _2-\eta _3-\eta _4=0}\\ &=\shugo{(3N,-N,2N,2N),\,(3N,2N,-N,2N),\,(3N,2N,2N,-N)}.} Therefore, we have \eqq{\widehat{U_4[\phi ]}(T,0)&=c(rN^{-s})^4\sum _{\mat{\xi _1,\dots ,\xi _4\in \Bo{Z}\\ \xi _1-\xi _2-\xi _3-\xi _4=0}}\prod _{l=1}^4\chi _{\shugo{-N,2N,3N}}(\xi _l)\int _0^T e^{it\Phi}\,dt\\ &=3c(rN^{-s})^4\int _0^T e^{it\{ 0^2-(3N)^2+(-N)^2+(2N)^2+(2N)^2\}}\,dt =3c(rN^{-s})^4T,} which implies \eqq{\norm{U_4[\phi ](T)}{H^s}\gtrsim (rN^{-s})^4T} for any $s\in \Bo{R}$ and $T>0$. Next, we consider $(p,q)=(4,2)$, which is very similar to the above. Since \eqq{&\Shugo{(\eta _1,\dots ,\eta _4)\in \shugo{-N,2N,3N}^4}{\eta _1+\eta _2-\eta _3-\eta _4=0}\\ &=\Shugo{(\eta _1,\dots ,\eta _4)\in \shugo{-N,2N,3N}^4}{\shugo{\eta _1,\eta _2}=\shugo{\eta _3,\eta _4}},} we have \eqq{\widehat{U_4[\phi ]}(T,0)&=c(rN^{-s})^4\sum _{\mat{\xi _1,\dots ,\xi _4\in \Bo{Z}\\ \xi _1+\xi _2-\xi _3-\xi _4=0}}\prod _{l=1}^4\chi _{\shugo{-N,2N,3N}}(\xi _l)\int _0^T e^{it\Phi}\,dt=15c(rN^{-s})^4T,} where we have counted $3+2\cdot 6=15$ admissible quadruplets (three with $\eta _1=\eta _2$ and twelve with $\eta _1\neq \eta _2$), each of which satisfies $\Phi =0$, and the same estimate as before holds. \end{proof} Now, we are in a position to prove norm inflation. \begin{proof}[Proof of Theorem~\ref{thm:main0}] We first recall that $U_k[\phi]=0$ unless $k\equiv 1\mod p-1$. If the initial data $\phi$ satisfies \eqref{cond:phi}, Corollary~\ref{cor:lwp} guarantees existence of the solution to \eqref{NLS} and the power series expansion in $M_A$ up to time $T$ whenever $\rho =rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-1}}\ll 1$. \underline{Case 1}: General $Z$ and $(p,q)$, $s<\min \shugo{s_c(d,p),0}$. Take $\phi$ as in Lemma~\ref{lem:U_p} (i).
From Lemmas~\ref{lem:U_k_H^s} and \ref{lem:U_p}, under the conditions \eq{cond:1}{T\ll N^{-2},\quad \rho \ll 1,\quad r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\gg r,} we have \eqq{\norm{u(T)}{H^s}\sim \norm{U_p[\phi ](T)}{H^s}\sim r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A).} Now, we set \eqq{r=(\log N)^{-1},\quad A\sim (\log N)^{-\frac{p+1}{|s|}}N,\quad T=(A^{-\frac{d}{2}}N^s)^{p-1},} so that $\rho = (\log N)^{-1}\ll 1$. The super-critical assumption $s<s_c(d,p)=\frac{d}{2}-\frac{2}{p-1}$ ensures that \eqq{T\sim (\log N)^{\frac{d(p+1)}{2|s|}(p-1)}N^{(s-\frac{d}{2})(p-1)}\ll N^{-2}.} Moreover, since $f_s(A)\gtrsim A^{\frac{d}{2}+s}$ for any $s<0$ and $A\ge 1$, we see that \eqq{r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\gtrsim r\rho ^{p-1}A^{s}N^{-s}\sim \log N\gg (\log N)^{-1}=r.} Therefore, \eqref{cond:1} is fulfilled and we have $\tnorm{u(T)}{H^s}\gtrsim \log N$. Noticing $\tnorm{\phi}{H^s}\sim r=(\log N)^{-1}$ and $T\ll N^{-2}$, we show norm inflation by letting $N\to \infty$. \underline{Case 2}: $Z=\Bo{R}$ or $\Bo{T}$, $(p,q)=(2,0)$ or $(2,2)$, $-\frac{3}{2}\le s<-1$. We take the same initial data $\phi$ as in Case 1, but with \eqq{r=(\log N)^{-1},\quad A=1,\quad T=(\log N)^{-1}N^{-2}.} Then, $T\ll N^{-2}$, $\rho =(\log N)^{-2}N^{-2-s}\ll 1$ by $s\ge -\frac{3}{2}$ and \eqq{r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\sim r\rho N^{-s}= (\log N)^{-3}N^{-2-2s}\gg 1\gg r} by $s<-1$. Hence, \eqref{cond:1} holds and we have $\tnorm{u(T)}{H^{s}}\sim (\log N)^{-3}N^{-2-2s}\gg 1$, which together with $\tnorm{\phi}{H^s}\sim r\ll 1$ and $T\ll 1$ shows norm inflation by taking $N$ large. \underline{Case 3}: $Z=\Bo{R}$ or $\Bo{T}$, $p=3$, $s=-\frac{1}{2}$. 
Take the same $\phi$ as in Case 1, but with \eqq{r=(\log N)^{-\frac{1}{12}},\quad A\sim (\log N)^{-\frac{1}{4}}N,\quad T=(\log N)^{-\frac{1}{12}}N^{-2}.} Then, $T\ll N^{-2}$, $\rho \sim (\log N)^{-\frac{1}{4}}\ll 1$ and \eqq{r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\sim r\rho ^2A^{-\frac{1}{2}}N^{\frac{1}{2}}(\log A)^{\frac{1}{2}}\sim (\log N)^{\frac{1}{24}}\gg 1\gg r.} Hence, \eqref{cond:1} holds and we have $\tnorm{u(T)}{H^{-\frac{1}{2}}}\sim (\log N)^{\frac{1}{24}}\gg 1$, which implies norm inflation as well. \underline{Case 4}: $Z=\Bo{R} ^2$ or $\Bo{R} \times \Bo{T}$ or $\Bo{T}^2$, $(p,q)=(2,0)$ or $(2,2)$, $s=-1$. We follow the argument in Case 1 again, but with \eqq{r=(\log N)^{-\frac{1}{12}},\quad A\sim (\log N)^{-\frac{1}{4}}N,\quad T=(\log N)^{-\frac{1}{6}}N^{-2}.} Then, $T\ll N^{-2}$, $\rho \sim (\log N)^{-\frac{1}{2}}\ll 1$ and \eqq{r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\sim r\rho A^{-1}N(\log A)^{\frac{1}{2}}\sim (\log N)^{\frac{1}{6}}\gg 1\gg r.} Hence, \eqref{cond:1} holds and we have $\tnorm{u(T)}{H^{-1}}\sim (\log N)^{\frac{1}{6}}\gg 1$, which shows NI$_{-1}$. \underline{Case 5}: $Z=\Bo{R} ^{d_1}\times \Bo{T} ^{d_2}$ with $d_1+d_2\le 3$, $d_2\ge 1$, $(p,q)=(2,1)$, and $\frac{d}{2}-2\le s<0$. Take $\phi$ as in Lemma~\ref{lem:U_p} (iii) and choose $r,T$ as $r=(\log N)^{-1}$ and $T=N^s$, which implies \eqq{T\ll 1,\quad \rho \sim rN^{-s}T=(\log N)^{-1}\ll 1,\quad r\rho N^{-s}\sim (\log N)^{-2}N^{-s}\gg 1\gg r.} From Lemmas~\ref{lem:U_k_H^s} and \ref{lem:U_p}, we have $\tnorm{u(T)}{H^s}\sim \norm{U_2[\phi ](T)}{H^s}\sim (\log N)^{-2}N^{-s}\gg 1$, and norm inflation occurs. \underline{Case 6}: $Z=\Bo{T}$, $(p,q)=(4,1)$ or $(4,2)$ or $(4,3)$, and $-\frac{1}{6}\le s<0$. 
Take $\phi$ as in Lemma~\ref{lem:U_p} (iv), and then take $r=(\log N)^{-1}$ and $T=N^{3s}$, which implies \eqq{T\ll 1,\quad \rho \sim rN^{-s}T^{\frac{1}{3}}=(\log N)^{-1}\ll 1,\quad r\rho ^3N^{-s}\sim (\log N)^{-4}N^{-s}\gg 1\gg r.} Again, we have $\tnorm{u(T)}{H^s}\sim \norm{U_4[\phi ](T)}{H^s}\sim (\log N)^{-4}N^{-s}\gg 1$. \underline{Case 7}: $Z=\Bo{R} ^d$ with $1\le d\le 3$, $(p,q)=(2,1)$, and $\frac{d}{2}-2\le s<-\frac{1}{4}$. In this case the data $\phi$ is taken as in Lemma~\ref{lem:U_p} (ii) and does not satisfy \eqref{cond:phi}, so we need to modify the previous argument. We use the anisotropic modulation space $\widetilde{M}$ defined by the norm \eqq{\norm{f}{\widetilde{M}}:=\sum _{\xi \in \Bo{Z}^{d-1}\times N^{-1}\Bo{Z}}\norm{\widehat{f}}{L^2(\xi +\widetilde{Q}_{N^{-1}})}.} We have the product estimate \eqq{\tnorm{fg}{\widetilde{M}}\lesssim N^{-\frac{1}{2}}\tnorm{f}{\widetilde{M}}\tnorm{g}{\widetilde{M}}} in this space. Thus, we follow the proof of Lemma~\ref{lem:U_k} to obtain \eqq{\tnorm{U_k[\phi ](t)}{\widetilde{M}}\le Cr(CrN^{-\frac{1}{2}-s}t)^{k-1}N^{-s}} for any $k\ge 1$, which is used to justify the expansion of the solution in $\widetilde{M}$ up to time $T$ such that $\widetilde{\rho}:=rN^{-\frac{1}{2}-s}T\ll 1$. Then, by the same argument as in the proofs of Lemmas~\ref{lem:supp} and \ref{lem:U_k_H^s}, we see that \eqq{|\supp{\widehat{U_k[\phi ]}(t)}|\le C^kN^{-1},\qquad \tnorm{U_k[\phi ](T)}{H^s}\le Cr(C\widetilde{\rho})^{k-1}N^{-s}.} In particular, $\tnorm{U_2[\phi ](T)}{H^s}\sim r\widetilde{\rho}N^{-s}$ for $0<T\ll 1$ by Lemma~\ref{lem:U_p} (ii). Now, we take $r=(\log N)^{-1}\ll 1$, $T=(\log N)^3N^{2s+\frac{1}{2}}\ll 1$, so that $\widetilde{\rho}=(\log N)^2N^s\ll 1$, $r\widetilde{\rho}N^{-s}=\log N \gg r$. From the estimates above, we have $\tnorm{u(T)}{H^s}\sim \log N \gg 1$, which shows norm inflation.
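For completeness, note that with this choice the terms of order $k\ge 3$ are negligible: since $\widetilde{\rho}\ll 1$, \eqq{\sum _{k\ge 3}\tnorm{U_k[\phi ](T)}{H^s}\lesssim r\widetilde{\rho}^2N^{-s}=(\log N)^3N^{s}\ll 1,} so that $U_2[\phi ]$ indeed dominates the expansion.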
\end{proof} \section{Proof of Theorem~\ref{thm:main}}\label{sec:proof} Here, we see how to use the estimates for single-term nonlinearities for the proof in the multi-term cases. We write $p:=\max _{1\le j\le n}p_j$. For the initial value problem \eqref{NLS'}, the $k$-th order term $U_k[\phi ]$ in the expansion of the solution is given by $U_1[\phi ]:=e^{it\Delta}\phi$ and \eqq{U_k[\phi ]:=-i\sum _{j=1}^n\nu _j\sum _{\mat{k_1,\dots ,k_{p_j}\ge 1\\ k_1+\dots +k_{p_j}=k}}\int _0^t e^{i(t-\tau )\Delta}\mu _{p_j,q_j}\big( U_{k_1}[\phi ](\tau ),\dots ,U_{k_{p_j}}[\phi ](\tau )\big) \,d\tau } for $k\ge 2$ inductively. The following lemmas are verified in the same manner as Lemmas~\ref{lem:a_k}, \ref{lem:U_k}, and Corollary~\ref{cor:lwp}. \begin{lem}\label{lem:a_k'} Let $\shugo{b_k}_{k=1}^\infty$ be a sequence of nonnegative real numbers such that \[ b_{k} \le \sum _{j=1}^nC_j\sum _{\mat{k_1,\dots ,k_{p_j}\ge 1\\ k_1+\dots +k_{p_j}=k}}b_{k_1}\cdots b_{k_{p_j}},\qquad k\ge 2\] for some $p_1,\dots ,p_n\ge 2$ and $C_1,\dots ,C_n>0$. Then, we have \[ b_k\le b_1C_0^{k-1},\qquad k\ge 1,\qquad C_0=\max _{1\le j\le n}\frac{\pi ^2}{6}(nC_jp_j^2)^{\frac{1}{p_j-1}}b_1.\] \end{lem} \begin{lem}\label{lem:U_k'} There exists $C>0$ such that for any $\phi \in M_A$ with $\tnorm{\phi}{M_A}\le M$ we have \eqq{\norm{U_k[\phi ](t)}{M_A}\le t^{\frac{k-1}{p-1}}(CA^{\frac{d}{2}}M)^{k-1}M} for any $0\le t\le 1$ and $k\ge 1$. \end{lem} \begin{lem}\label{lem:MAlwp'} Let $\phi \in M_A$ with $\tnorm{\phi}{M_A}\le M$. If $T>0$ satisfies $A^{\frac{d}{2}}MT^{\frac{1}{p-1}}\ll 1$, then a unique solution $u\in C([0,T];M_A)$ to \eqref{NLS'} exists and has the expansion $u=\sum _{k=1}^\infty U_k[\phi ]$. \end{lem} The next lemma can be verified similarly to Lemma~\ref{lem:U_k_H^s}. \begin{lem}\label{lem:U_k_H^s'} Let $\phi$ satisfy \eqref{cond:phi} and $s<0$. Then, the following holds. \begin{enumerate} \item $\norm{U_1[\phi ](T)}{H^s}\le Cr$\hspace{10pt} for any $T\ge 0$. 
\item $\norm{U_k[\phi ](T)}{H^s}\le Cr(C\rho)^{k-1}A^{-\frac{d}{2}}N^{-s}f_s(A)$\hspace{10pt} for any $0\le T\le 1$ and $k\ge 2$, where \eqq{\rho =rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-1}}\quad (p=\max _{1\le j\le n}p_j),\qquad f_s(A)=\norm{\LR{\xi}^s}{L^2(\shugo{|\xi |\le A})}.} \end{enumerate} \end{lem} We now begin to prove Theorem~\ref{thm:main}. \begin{proof}[Proof of Theorem~\ref{thm:main}] We divide the proof into two cases: (I) One of the terms of order $p$ (highest order) is responsible for norm inflation, or (II) a lower order term determines the range of regularities for norm inflation. Note that (II) occurs only when $Z=\Bo{R}$, $p=3$, $F (u,\bar{u})$ has the term $u\bar{u}$ and $s\in (-\frac{1}{2},-\frac{1}{4})$. (I): Rewrite the nonlinear terms as \eqq{F (u,\bar{u})=\sum _{q=0}^p\nu _{p,q}\mu _{p,q}(u)+\text{(terms of order less than $p$)}.} Note that $\nu _{p,q}$ may be zero but $(\nu _{p,0},\dots ,\nu _{p,p})\neq (0,\dots ,0)$. We divide the series into four parts: \eqq{\sum _{k=1}^\infty U_k[\phi ]&=U_1[\phi ]+\Big\{ \sum _{k=2}^{p}U_k[\phi ]-\Big( -i\sum _{q=0}^p\nu _{p,q}\int _0^te^{i(t-\tau )\Delta}\mu _{p,q}\big( U_1[\phi ](\tau )\big) \,d\tau \Big) \Big\} \\ &\hspace{10pt} +\Big( -i\sum _{q=0}^p\nu _{p,q}\int _0^te^{i(t-\tau )\Delta}\mu _{p,q}\big( U_1[\phi ](\tau )\big) \,d\tau \Big) +\sum _{k=p+1}^\infty U_k[\phi ]\\ &=:U_1[\phi ]+U_{low}[\phi ]+U_{main}[\phi ]+U_{high}[\phi ].} Note that $U_{low}=0$ if $p=2$. The following lemma indicates how $U_{low}$ is dominated by $U_{main}$, and how the contributions of the $(p+1)$ terms in $U_{main}$ can be `separated'. \begin{lem}\label{lem:U_p'} We have the following: \begin{enumerate} \item Let $\phi$ satisfy \eqref{cond:phi} and $s<0$. Let $0<T\le 1$, and assume that $\rho =rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-1}}\ll 1$. Then, (if $p\ge 3$,) \eqq{\norm{U_{low}[\phi](T)}{H^s}\lesssim r^2N^{-2s}f_s(A)T^{\frac{1}{p-2}}.} \item Let $q_*\in \shugo{0,1,\dots ,p}$ be such that $\nu _{p,q_*}\neq 0$. 
Then, for any $T\ge 0$ there exists $j\in \shugo{0,1,\dots ,p}$ such that \eqq{\norm{U_{main}[e^{i\frac{j\pi}{p+1}}\phi ](T)}{H^s}\gtrsim \tnorm{G_{q_*}[\phi ](T)}{H^s},} where \eqq{G_q[\phi ](t):=-i\int _0^te^{i(t-\tau )\Delta}\mu _{p,q}(U_1[\phi ](\tau ))\,d\tau ; \quad U_{main}[\phi ]=\sum _{q=0}^p\nu _{p,q}G_q[\phi ](t).} \end{enumerate} \end{lem} \begin{proof} (i) We notice that the nonlinear terms of highest order $p$ have nothing to do with $U_{low}[\phi ]$. Hence, we estimate by Lemma~\ref{lem:U_k_H^s'} (ii) with $p$ replaced by $p-1$ and have \eqq{\norm{U_{low}[\phi ](T)}{H^s}\le \sum _{k=2}^pCr(CrA^{\frac{d}{2}}N^{-s}T^{\frac{1}{(p-1)-1}})^{k-1}A^{-\frac{d}{2}}N^{-s}f_s(A).} Since $0<T\le 1$ implies $rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{(p-1)-1}}\le \rho \ll 1$, we have \eqq{\norm{U_{low}[\phi ](T)}{H^s}\lesssim r\cdot rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-2}}\cdot A^{-\frac{d}{2}}N^{-s}f_s(A).} (ii) We observe that $\zeta _p:=e^{i\frac{\pi}{p+1}}$ satisfies $\sum _{j=0}^{p}\zeta _p^{2qj}=0$ if $q\not\equiv 0\mod p+1$. Since $G_q[\zeta _p^j\phi ]=\zeta _p^{(p-2q)j}G_q[\phi ]$, for any $0\le q_*\le p$ it holds that \eqq{\sum _{j=0}^p\zeta _p^{(2q_*-p)j}U_{main}[\zeta _p^j\phi ]&=\sum _{q=0}^{p}\sum _{j=0}^p\zeta _p^{2(q_*-q)j}\nu _{p,q}G_q[\phi ]=(p+1)\nu _{p,q_*}G_{q_*}[\phi ].} Hence, if $\nu _{p,q_*}\neq 0$, by the triangle inequality we see that \eqq{\sum _{j=0}^p\norm{U_{main}[\zeta _p^j\phi ](T)}{H^s}\ge (p+1)|\nu _{p,q_*}|\norm{G_{q_*}[\phi ](T)}{H^s}. } This implies the claim. \end{proof} By Lemma~\ref{lem:U_p'}, the proof is almost reduced to the case of single-term nonlinearities, as we see below. \underline{Case 1}: General $Z$ and $p$, $s<\min \shugo{s_c(d,p),0}$. Let us take the initial data $\phi$ as in Lemma~\ref{lem:U_p} (i), and assume $\rho =rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-1}}\ll 1$, $0<T\ll N^{-2}$.
Lemma~\ref{lem:U_k_H^s'} (ii) yields that \eqq{\norm{U_{high}[\zeta _p^j\phi ](T)}{H^s}\lesssim r\rho ^pA^{-\frac{d}{2}}N^{-s}f_s(A),} while Lemma~\ref{lem:U_p'} (ii) and Lemma~\ref{lem:U_p} (i) imply that \eqq{\norm{U_{main}[\zeta _p^j\phi ](T)}{H^s}\sim r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\gg \norm{U_{high}[\zeta _p^j\phi ](T)}{H^s}} for an appropriate $j$. Hence, from Lemma~\ref{lem:U_k_H^s'} (i) and Lemma~\ref{lem:U_p'} (i), \eqq{\norm{u(T)}{H^s}&\ge \tfrac{1}{2}\norm{U_{main}[\zeta _p^j\phi ](T)}{H^s}-\norm{U_{low}[\zeta _p^j\phi ](T)}{H^s}-\norm{U_1[\zeta _p^j\phi ](T)}{H^s}\\ &\ge C^{-1}r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)-C\big( r^2N^{-2s}f_s(A)T^{\frac{1}{p-2}}+r\big) .} If we take the same choice for $r,A,T$ as in Case~1 of the proof of Theorem~\ref{thm:main0}; \eqq{r=(\log N)^{-1},\quad A\sim (\log N)^{-\frac{p+1}{|s|}}N,\quad T=(A^{-\frac{d}{2}}N^s)^{p-1};\quad \text{so that}\hspace{10pt} \rho = (\log N)^{-1},} all the required conditions for norm inflation are satisfied when $p=2$. Even for $p\ge 3$, it suffices to check that \eqq{r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\gg r^2N^{-2s}f_s(A)T^{\frac{1}{p-2}}.} This is equivalent to $\rho ^{p-2}\gg T^{\frac{1}{p-2}-\frac{1}{p-1}}$, which indeed holds, since the right-hand side is a negative power of $N$ up to logarithmic factors while the left-hand side $\rho ^{p-2}=(\log N)^{-(p-2)}$ is only logarithmically small. \underline{Case 2-4-5-7}: $p=2$. We need to deal with the following situations: \begin{itemize} \item $d=1$, $\nu _{2,1}=0$, $-\frac{3}{2}\le s<-1$; \item $d=2$, $\nu _{2,1}=0$, $s=-1$; \item $Z=\Bo{R} ^{d_1}\times \Bo{T} ^{d_2}$ with $d_1+d_2\le 3$, $d_2\ge 1$, $\nu _{2,1}\neq 0$, and $\frac{d}{2}-2\le s<0$; \item $Z=\Bo{R} ^d$, $1\le d\le 3$, $\nu _{2,1}\neq 0$, $\frac{d}{2}-2\le s<-\frac{1}{4}$, \end{itemize} which correspond to Cases 2, 4, 5, and 7 in the proof of Theorem~\ref{thm:main0}, respectively.
As seen in the preceding case, there is no contribution from $U_{low}$ (recall that $U_{low}=0$ when $p=2$), and the proof is the same as in the single-term cases, except that we need to single out the appropriate term among $u^2$, $u\bar{u}$, $\bar{u}^2$ by using Lemma~\ref{lem:U_p'} (ii). \underline{Case 3}: $d=1$, $p=3$, $s=-\frac{1}{2}$. We take the initial data $e^{i\frac{j\pi}{4}}\phi$ with $\phi$ as in \eqref{cond:phi} and parameters $r,A,T$ as in Case~3 for Theorem~\ref{thm:main0}. Following the argument in Case 1, it suffices to check the condition for $\tnorm{U_{main}}{H^s}\gg \tnorm{U_{low}}{H^s}$; \eqq{r\rho ^2A^{-\frac{1}{2}}N^{\frac{1}{2}}f_{-\frac{1}{2}}(A)\gg r^2Nf_{-\frac{1}{2}}(A)T.} Actually, we see that $\text{L.H.S.}\sim (\log N)^{\frac{1}{24}}\gg (\log N)^{\frac{1}{4}}N^{-1}\sim \text{R.H.S.}$ \underline{Case 6}: $Z=\Bo{T}$, $p=4$, $(\nu _{4,1},\nu _{4,2},\nu _{4,3})\neq (0,0,0)$, $s\in [-\frac{1}{6},0)$. Similarly, we take $e^{i\frac{j\pi}{5}}\phi$ with parameters $r,A,T$ as in Case~6 for Theorem~\ref{thm:main0}. It suffices to verify the condition \eqq{r\rho ^{3}N^{-s}\gg r^2N^{-2s}T^{\frac{1}{2}},} and in fact it holds that $\text{L.H.S.}\sim (\log N)^{-4}N^{-s}\gg (\log N)^{-2}N^{-\frac{s}{2}}\sim \text{R.H.S.}$ (II): Recall that we claim NI$_s$ for $s\in (-\frac{1}{2},-\frac{1}{4})$ in the case where $Z=\Bo{R}$, $p=3$, and $F (u,\bar{u})$ contains the term $u\bar{u}$. We take $\phi$ as in \eqref{cond:phi} with $A=N^{-1}$ and $\Sigma =\shugo{N}$ (same as in Case 7 for the single-term nonlinearity). By Lemmas~\ref{lem:MAlwp'} and \ref{lem:U_k_H^s'}, we can expand the solution whenever $\rho =rN^{-\frac{1}{2}-s}T^{\frac{1}{2}}\ll 1$ and we have \eqq{\sum _{k\ge 4}\tnorm{U_k[\phi ](T)}{H^s}\lesssim r\rho ^3N^{\frac{1}{2}-s}f_s(N^{-1})\sim r^4N^{-\frac{3}{2}-4s}T^{\frac{3}{2}}} for $0<T\le 1$.
For $U_3$, observing that the Fourier support is in the region $|\xi |\sim N$, we modify the estimate in Lemma~\ref{lem:U_k_H^s'} to obtain \eqq{\tnorm{U_3[\phi ](T)}{H^s}\lesssim r\rho ^2N^{\frac{1}{2}-s}\cdot \tnorm{\LR{\xi}^s}{L^2(\supp \widehat{U_3[\phi ]})}\sim r^3N^{-1-2s}T.} For $U_2$ the contribution from $u^2$ and $\bar{u}^2$ has the Fourier support in high frequency, thus being dominated by the contribution from $u\bar{u}$. By Lemma~\ref{lem:U_p} (ii), we have \eqq{\tnorm{U_2[\phi ](T)}{H^s}\gtrsim r^2N^{-\frac{1}{2}-2s}T} if $0<T\ll 1$. We set $r=(\log N)^{-1}$ and $T=(\log N)^3N^{2s+\frac{1}{2}}$ as before (Case 7 in the single-term case), then it holds that $T\ll 1$, $\rho =(\log N)^{\frac{1}{2}}N^{-\frac{1}{4}}\ll 1$ and \eqq{\tnorm{u(T)}{H^s}\ge C^{-1}r^2N^{-\frac{1}{2}-2s}T-C\big( r+r^3N^{-1-2s}T+r^4N^{-\frac{3}{2}-4s}T^{\frac{3}{2}}\big) \gtrsim \log N\gg 1} for $s\in [-\frac{3}{4},-\frac{1}{4})$, which gives the claimed norm inflation. This concludes the proof of Theorem~\ref{thm:main}. \end{proof} \appendix \section{Norm inflation with infinite loss of regularity}\label{sec:niilr} In this section, we derive norm inflation with infinite loss of regularity for the problem with smooth gauge-invariant nonlinearities: \begin{equation}\label{nuNLS} \left\{ \begin{array}{@{\,}r@{\;}l} i\partial _tu+\Delta u&=\pm |u|^{2\nu}u,\qquad t\in [0,T],\quad x\in Z=\Bo{R} ^{d-d_2}\times \Bo{T}^{d_2},\\ u(0,x)&=\phi (x), \end{array} \right. \end{equation} where $\nu$ is a positive integer. The initial value problem \eqref{nuNLS} on $\Bo{R} ^d$ is invariant under the scaling $u(t,x)\mapsto \lambda ^{\frac{1}{\nu}}u(\lambda ^2t,\lambda x)$, and the critical Sobolev index is $s_c(d,2\nu +1)=\frac{d}{2}-\frac{1}{\nu}$, which is non-negative except for the case $d=\nu =1$. 
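For completeness, the value of the critical index can be read off from this scaling: writing $u_\lambda (t,x):=\lambda ^{\frac{1}{\nu}}u(\lambda ^2t,\lambda x)$, a change of variables gives \eqq{\norm{u_\lambda (t)}{\dot{H}^s(\Bo{R} ^d)}=\lambda ^{s+\frac{1}{\nu}-\frac{d}{2}}\norm{u(\lambda ^2t)}{\dot{H}^s(\Bo{R} ^d)},} so that the homogeneous $\dot{H}^s$ norm is invariant precisely when $s=\frac{d}{2}-\frac{1}{\nu}=s_c(d,2\nu +1)$.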
\begin{prop}\label{prop:niilr} We assume the following condition on $s$: \begin{itemize} \item If $d=\nu =1$, then $s<-\frac{2}{3}$; \item if $d\ge 2$, $\nu =1$ and $d_2=0,1$ (i.e., $Z=\Bo{R} ^d$ or $\Bo{R} ^{d-1}\times \Bo{T}$), then $s<-\frac{1}{3}$; \item if $d\ge 1$, $\nu \ge 2$ and $d_2=0$ (i.e., $Z=\Bo{R} ^d$), then $s<-\frac{1}{2\nu +1}$; \item otherwise, $s<0$. \end{itemize} Then, NI$_s$ with infinite loss of regularity occurs for the initial value problem \eqref{nuNLS}: For any $\delta >0$ there exist $\phi \in H^\infty$ and $T>0$ satisfying $\tnorm{\phi}{H^s}<\delta$, $0<T<\delta$ such that the corresponding smooth solution $u$ to \eqref{nuNLS} exists on $[0,T]$ and $\tnorm{u(T)}{H^\sigma}>\delta ^{-1}$ for \emph{all} $\sigma \in \Bo{R}$. \footnote{ More precisely, we show $\tnorm{\widehat{u}(T)}{L^2(\{ |\xi |\le 1\} )}>\delta ^{-1}$. This implies the claim if we define the Sobolev norm of negative indices $\sigma$ as $\tnorm{f}{H^\sigma}:=\tnorm{\min \{ 1,\,|\xi |^{\sigma}\} \widehat{f}(\xi )}{L^2}$. } \end{prop} \begin{rem} (i) The proofs of Theorems~\ref{thm:main0} and \ref{thm:main} are easily adapted to yield NI$_s$ with \emph{finite} loss of regularity in most cases. However, we only consider here \emph{infinite} loss of regularity. (ii) The coefficient of the nonlinearity is not important in the proof, and the same result holds for any non-zero complex constant. (iii) To show infinite loss of regularity, we need to use the nonlinear interactions of very high frequencies which create a significant output in the low-frequency region $\{ |\xi |\le 1\}$.
Except for the case $d=\nu =1$, there are such interactions that are also \emph{resonant}; i.e., there exist non-zero vectors $k_1,\dots ,k_{2\nu +1}\in \Bo{Z}^{d}$ satisfying \eqq{\sum _{j=0}^{\nu}k_{2j+1}=\sum _{l=1}^{\nu}k_{2l},\qquad \sum _{j=0}^{\nu}|k_{2j+1}|^2=\sum _{l=1}^{\nu}|k_{2l}|^2.} This is also the key ingredient in the proof of the previous results \cite{CDS12,CK17}, and hence the restriction on the range of $s$ in Proposition~\ref{prop:niilr} is the same as that in \cite{CDS12,CK17}. A complete characterization of the resonant set \eqq{\Sc{R}_{d,\nu}(k):=\Shugo{(k_m)_{m=1}^{2\nu +1}\in (\Bo{Z}^d)^{2\nu +1}}{k=\sum _{m=1}^{2\nu +1}(-1)^{m+1}k_m,\, |k|^2=\sum _{m=1}^{2\nu +1}(-1)^{m+1}|k_m|^2}} (for $k\in \Bo{Z}^d$ given) is easily obtained in the $\nu =1$ case; see \cite[Proposition~4.1]{CK17} for instance. In Proposition~\ref{prop:char} below, we will provide a complete characterization of the set $\Sc{R}_{1,2}(0)$, which may be of interest in itself. Since $(k_m)_{m=1}^5\in \Sc{R}_{1,2}(k)$ if and only if $(k_m-k)_{m=1}^5\in \Sc{R}_{1,2}(0)$, we have a characterization of $\Sc{R}_{1,2}(k)$ for any $k\in \Bo{Z}$ as well. However, in the proof of Proposition~\ref{prop:niilr} we only need the fact that $\Sc{R}_{d,\nu}(0)$ has an element consisting of non-zero vectors in $\Bo{Z}^d$, except for $(d,\nu )=(1,1)$. \end{rem} \begin{proof}[Proof of Proposition~\ref{prop:niilr}] We follow the proof of Theorem~\ref{thm:main0} but take different initial data to show infinite loss of regularity. 
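For instance, for $d\ge 2$ and $\nu =1$ such a resonant high-frequency interaction is given by the non-zero vectors $k_1=Ne_{d-1}$, $k_3=Ne_d$, $k_2=N(e_{d-1}+e_d)$ (with $e_{d-1},e_d$ the last two standard unit vectors), which satisfy \eqq{k_1+k_3=k_2,\qquad |k_1|^2+|k_3|^2=2N^2=|k_2|^2;} these are exactly the frequencies on which the initial data below is supported.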
Let $N\gg 1$ be a large positive integer and define $\phi \in H^\infty (Z)$ by \eqq{\widehat{\phi}:=rN^{-s}\chi _{\Sigma +Q_1},} where $r=r(N)>0$ is a constant to be chosen later, $Q_1:=[-\tfrac{1}{2},\tfrac{1}{2})^d$, and \eqs{\Sigma := \begin{cases} \shugo{N,2N} &\text{if $d=\nu =1$},\\ \shugo{Ne_{d-1},\,Ne_d,\,N(e_{d-1}+e_d)} &\text{if $d\ge 2$, $\nu =1$},\\ \shugo{Ne_d,\,3Ne_d,\,4Ne_d} &\text{if $d\ge 1$, $\nu \ge 2$}, \end{cases}\\ e_d:=(\underbrace{0,\dots ,0}_{d-1},1),\qquad e_{d-1}:=(\underbrace{0,\dots ,0}_{d-2},1,0). } The argument in Section~\ref{sec:proof0} (with $A=1$) shows the following: \begin{itemize} \item The unique solution $u=u[\phi ]$ to \eqref{nuNLS} exists on $[0,T]$ and has the power series expansion $u=\sum _{k=1}^\infty U_k[\phi ]$ if $\rho :=rN^{-s}T^{\frac{1}{2\nu}}\ll 1$. \item $\tnorm{U_1[\phi ](T)}{H^s}=\tnorm{\phi}{H^s}\sim r$ for any $T\ge 0$. \item $\tnorm{U_k[\phi ](T)}{H^s}\le C\rho ^{k-1}rN^{-s}$ for any $T\ge 0$ and $k\ge 2$. \end{itemize} For the first nonlinear term $U_{2\nu +1}[\phi]$, we observe that \eqq{|\widehat{U_{2\nu +1}[\phi ]}(T,\xi )|&=c(rN^{-s})^{2\nu +1}\Big| \int _\Gamma \prod _{m=1}^{2\nu+1}\chi _{\Sigma +Q_1}(\xi _m) \Big( \int _0^Te^{it\Phi}\,dt\Big) \,d\xi _1\dots d\xi _{2\nu +1}\Big| ,} where \eqq{\Gamma :=\Shugo{(\xi _1,\dots ,\xi _{2\nu +1})}{\sum _{j=0}^\nu \xi _{2j+1}-\sum _{l=1}^\nu \xi _{2l}=\xi},\quad \Phi :=|\xi |^2-\sum _{j=0}^\nu |\xi _{2j+1}|^2+\sum _{l=1}^\nu |\xi _{2l}|^2.} Now, we restrict $\xi$ to the low-frequency region $Q_{1/2}$. If $d=\nu =1$, then we have \eqq{&\chi _{Q_{1/2}}(\xi )\int _\Gamma \prod _{m=1}^{2\nu+1}\chi _{\Sigma +Q_1}(\xi _m) \int _0^Te^{it\Phi}\,dt\\ &=2\chi _{Q_{1/2}}(\xi )\int _\Gamma \chi _{N+Q_1}(\xi _1)\chi _{2N+Q_1}(\xi _2)\chi _{N+Q_1}(\xi _3)\int _0^Te^{it\Phi}\,dt,} and $\Phi =O(N^2)$ in the integral. 
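Indeed, at the centers of the cubes this cubic interaction gives \eqq{\Phi =|\xi |^2-|\xi _1|^2+|\xi _2|^2-|\xi _3|^2\approx 0-N^2+(2N)^2-N^2=2N^2,} so the interaction is non-resonant, which is the reason for the stronger restriction $T\ll N^{-2}$ in the case $d=\nu =1$.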
If $d\ge 2$ and $\nu =1$, we have \eqq{&\chi _{Q_{1/2}}(\xi )\int _\Gamma \prod _{m=1}^{2\nu+1}\chi _{\Sigma +Q_1}(\xi _m) \int _0^Te^{it\Phi}\,dt\\ &=2\chi _{Q_{1/2}}(\xi )\int _\Gamma \chi _{Ne_{d-1}+Q_1}(\xi _1)\chi _{N(e_{d-1}+e_d)+Q_1}(\xi _2)\chi _{Ne_d+Q_1}(\xi _3)\int _0^Te^{it\Phi}\,dt,} and the resonant property implies that \eqq{ \Phi =\begin{cases} O(N) &\text{if $d_2=0,1$},\\ O(1) &\text{if $d_2\ge 2$} \end{cases} } in the integral. Therefore, in these cases we have the following lower bound: \eq{est:A}{\norm{\widehat{U_{2\nu +1}[\phi ]}(T)}{L^2(Q_{1/2})}\ge cT(rN^{-s})^{2\nu +1}=c\rho ^{2\nu}rN^{-s}} \eqq{\text{for any}\quad 0<T\ll \begin{cases} N^{-2} &\text{if $d=\nu =1$},\\ N^{-1} &\text{if $d\ge 2$, $\nu =1$, $d_2=0,1$},\\ ~1 &\text{if $d\ge 2$, $\nu =1$, $d_2\ge 2$}. \end{cases} } The quintic and higher cases are slightly different. On one hand, there are ``almost resonant'' interactions such as \eqq{\prod _{j=1,3}\chi _{Ne_d+Q_1}(\xi _j)\prod _{l=2,4}\chi _{3Ne_d+Q_1}(\xi _l)\prod _{m=5}^{2\nu +1}\chi _{4Ne_d+Q_1}(\xi _m),} for which it holds \eqq{ \Phi =\begin{cases} O(N) &\text{if $d_2=0$},\\ O(1) &\text{if $d_2\ge 1$} \end{cases} } in the integral. On the other hand, some non-resonant interactions such as \eqq{\chi _{3Ne_d+Q_1}(\xi _1)\chi _{4Ne_d+Q_1}(\xi _2)\prod _{m=3}^{2\nu +1}\chi _{Ne_d+Q_1}(\xi _m)} also create low-frequency modes, with $|\Phi |\sim N^2$ in the integral. Hence, if we choose $T>0$ as \eqq{ N^{-2}\ll T\ll \begin{cases} N^{-1} &\text{if $d\ge 1$, $\nu \ge 2$, $d_2=0$},\\ ~1 &\text{if $d\ge 1$, $\nu \ge 2$, $d_2\ge 1$}, \end{cases} } then \eqq{ \begin{cases} \Re \Big( \displaystyle\int _0^Te^{it\Phi}\,dt\Big) \ge \frac{1}{2}T &\text{for ``almost resonant'' interactions},\\[10pt] \Big| \displaystyle\int _0^Te^{it\Phi}\,dt\Big| \le CN^{-2}\ll T &\text{for non-resonant interactions}, \end{cases} } so that no cancellation occurs among ``almost resonant'' interactions, which dominate the non-resonant interactions. 
Therefore, we have \eqref{est:A} for such $T$ as above. Finally, we set \eqq{ \begin{cases} r:=N^{s+\frac{2}{3}}\log N,\quad T:=N^{-2}(\log N)^{-1} &\text{if $d=\nu =1$},\\ r:=N^{s+\frac{1}{2\nu +1}}\log N,\quad T:=N^{-1}(\log N)^{-1} &\text{if $d\ge 2$, $\nu =1$, $d_2=0,1$}\\[-5pt] &\quad \text{or $d\ge 1$, $\nu \ge 2$, $d_2=0$},\\ r:=N^s\log N,\quad T:=(\log N)^{-(2\nu +\frac{1}{2})} &\text{otherwise}. \end{cases} } We see that, under the assumption on $s$, $\tnorm{\phi}{H^s}\sim r\ll 1$, $T\ll 1$, $\rho \ll 1$, and \eqq{\norm{\widehat{u}(T)}{L^2(Q_{1/2})}&\ge c\norm{\widehat{U_{2\nu +1}[\phi ]}(T)}{L^2(Q_{1/2})}-C\Big( \norm{U_1[\phi ](T)}{H^s}+\sum _{l\ge 2} \norm{U_{2\nu l+1}[\phi ](T)}{H^s}\Big) \\ &\ge c\norm{\widehat{U_{2\nu +1}[\phi ]}(T)}{L^2(Q_{1/2})}\gg 1.} We conclude the proof by letting $N\to \infty$. \end{proof} At the end of this section, we give a characterization of resonant interactions creating the zero mode in the one-dimensional quintic case. \begin{prop}\label{prop:char} The quintuplet $(k_1,\dots ,k_5)\in \Bo{Z}^5$ satisfies \eq{cond:res}{k_1+k_3+k_5=k_2+k_4,\qquad k_1^2+k_3^2+k_5^2=k_2^2+k_4^2} if and only if \eq{char}{\shugo{k_1,\,k_3,\,k_5}&=\shugo{ap,\,bq,\,(a+b)(p+q)},\\ \shugo{k_2,\,k_4}&=\shugo{ap+(a+b)q,\,(a+b)p+bq}} for some $a,b,p,q\in \Bo{Z}$. \end{prop} \begin{exm} (i) Taking $a=p=b=q=1$ in \eqref{char}, we have the quintuplet $(1,3,1,3,4)$ which has appeared in the proof of Proposition~\ref{prop:niilr} above. Also, with $(a,b,p,q)=(-1,2,-2,1)$ we have $(2,3,2,0,-1)$, which gives a resonant interaction for quartic nonlinearities $u^3\bar{u}$, $u\bar{u}^3$ exploited in the proof of Lemma~\ref{lem:U_p}~(iv) above. (ii) The quintuplets $(pq,-q^2,-pq,p^2,p^2-q^2)$ given in \cite[Lemma~4.2]{CK17} can be obtained by setting $a=-q$, $b=p$ in \eqref{char}. \end{exm} \begin{proof}[Proof of Proposition~\ref{prop:char}] The \emph{if} part is verified by a direct computation, so we show the \emph{only if} part. 
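In fact, the direct computation for the \emph{if} part reads as follows: with the labeling $k_1=ap$, $k_3=bq$, $k_5=(a+b)(p+q)$ and $k_2=ap+(a+b)q$, $k_4=(a+b)p+bq$ (the other assignments in \eqref{char} being symmetric), \eqq{k_1+k_3+k_5=2ap+2bq+aq+bp=k_2+k_4,\\ k_1^2+k_3^2+k_5^2-k_2^2-k_4^2=(a+b)^2\big\{ (p+q)^2-p^2-q^2-2pq\big\} =0.}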
Let $(k_1,\dots ,k_5)\in \Bo{Z}^5$ satisfy \eqref{cond:res}. We begin by observing that at least one of $k_1,k_3,k_5$ is an even integer; otherwise, $k_2+k_4=k_1+k_3+k_5$ would be odd, so that exactly one of $k_2,k_4$ would be odd, and we would have \eqq{k_1^2+k_3^2+k_5^2\equiv 3\not\equiv 1\equiv k_2^2+k_4^2\mod 4,} contradicting \eqref{cond:res}. Without loss of generality, we assume $k_5$ to be even and set \eqq{n_j:=k_j-\tfrac{1}{2}k_5\in \Bo{Z}\quad (j=1,\dots ,5),\qquad n_6:=-\tfrac{1}{2}k_5\in \Bo{Z}.} From \eqref{cond:res} we see that \eqq{n_1+n_3+n_5=n_2+n_4+n_6,\quad n_1^2+n_3^2=n_2^2+n_4^2,\quad n_5=-n_6.} The second equality implies that the two vectors $(n_1-n_2,n_3-n_4), (n_1+n_2,n_3+n_4)\in \Bo{Z}^2$ are orthogonal to each other (unless one of them is zero), which allows us to write \eq{id:A}{(n_1-n_2,n_3-n_4)=\alpha (q,p),\quad (n_1+n_2,n_3+n_4)=\beta (-p,q)} with $\alpha ,\beta ,p,q\in \Bo{Z}$. Note that $n_1,\dots ,n_4$ are then written as \eqq{n_1=\tfrac{1}{2}(\alpha q-\beta p),\qquad n_2=-\tfrac{1}{2}(\alpha q+\beta p),\\ n_3=\tfrac{1}{2}(\alpha p+\beta q),\qquad n_4=-\tfrac{1}{2}(\alpha p-\beta q),} and that \eqq{n_5=-n_6=\tfrac{1}{2}(n_5-n_6)=-\tfrac{1}{2}\big\{ (n_1-n_2)+(n_3-n_4)\big\} =-\tfrac{1}{2}\alpha (p+q).} Recalling $k_j=n_j-n_6$ ($j=1,\dots ,5$), we have \eq{ks}{&k_1=-\tfrac{1}{2}(\alpha +\beta )p,\qquad k_3=-\tfrac{1}{2}(\alpha -\beta )q,\qquad k_5=-\alpha (p+q),\\ &\qquad k_2=-\tfrac{1}{2}(\alpha +\beta )p-\alpha q,\qquad k_4=-\tfrac{1}{2}(\alpha -\beta )q-\alpha p.} We next claim that the integers $\alpha ,\beta ,p,q$ can be chosen in \eqref{id:A} so that $\alpha$ and $\beta$ have the same parity. To see this, we notice that the four integers $n_1\pm n_2$, $n_3\pm n_4$ are of the same parity, since all of \eqs{(n_1+n_2)+(n_1-n_2)=2n_1,\qquad (n_3+n_4)+(n_3-n_4)=2n_3,\\ (n_1-n_2)+(n_3-n_4)=n_6-n_5=2n_6} are even. If $n_1\pm n_2$, $n_3\pm n_4$ are odd integers, then by \eqref{id:A} $\alpha$ and $\beta$ must be odd. So, we assume that they are all even.
If one of $p,q$ is odd, then both $\alpha$ and $\beta$ must be even. If both $p$ and $q$ are even, we replace $(\alpha ,\beta ,p,q)$ with $(2\alpha ,2\beta ,p/2,q/2)$ to obtain another expression \eqref{id:A} with both $\alpha$ and $\beta$ being even. Hence, the claim is proved. Finally, we set $a:=-\frac{1}{2}(\alpha +\beta )$, $b:=-\frac{1}{2}(\alpha -\beta )$, both of which are integers. Inserting them into \eqref{ks}, we find the expression \eqref{char}. \end{proof} \section{Norm inflation for 1D cubic NLS at the critical regularity}\label{sec:ap} In this section, we consider the particular equation \begin{equation}\label{cNLS} \left\{ \begin{array}{@{\,}r@{\;}l} i\partial _tu+\partial _x^2 u&=\pm |u|^2u,\qquad t\in [0,T],\quad x\in Z=\Bo{R} \text{~or~} \Bo{T} ,\\ u(0,x)&=\phi (x). \end{array} \right. \end{equation} We will show the inflation of the Besov-type scale-critical Sobolev and Fourier-Lebesgue norms with an additional logarithmic factor: \begin{defn} For $1\le p<\infty$, $1\le q\le \infty$ and $\alpha \in \Bo{R}$, define the $D^{[\alpha]}_{p,q}$-norm by \eqq{\norm{f}{D^{[\alpha]}_{p,q}}:=\Big\| N^{-\frac{1}{p}}\LR{\log N}^\alpha \norm{\widehat{f}}{L^p_\xi (\{ N\le \LR{\xi}<2N\} )}\Big\| _{\ell ^q_N(2^{\Bo{Z}_{\ge 0}})} .} We also define the $D^{s}_{p,q}$-norm for $s\in \Bo{R}$ by \eqq{\norm{f}{D^{s}_{p,q}}:=\Big\| N^{s}\norm{\widehat{f}}{L^p_\xi (\{ N\le \LR{\xi}<2N\} )}\Big\| _{\ell ^q_N(2^{\Bo{Z}_{\ge 0}})} .} \end{defn} \begin{rem} (i) We see that $D^{[0]}_{2,q}=D^{-\frac{1}{2}}_{2,q}=B^{-\frac{1}{2}}_{2,q}$ (Besov norm) and $D^{[0]}_{p,p}=\Sc{F} L^{-\frac{1}{p},p}$ (Fourier-Lebesgue norm). In the case of $Z=\Bo{R}$, the homogeneous version of $D^{[0]}_{p,q}$ is scale invariant for any $p,q$. (ii) We have the embeddings $D^{[\alpha ]}_{p_2,q}\hookrightarrow D^{[\alpha]}_{p_1,q}$ if $p_1\le p_2$, $D^{[\alpha ]}_{p,q_1}\hookrightarrow D^{[\alpha ]}_{p,q_2}$ if $q_1\le q_2$. 
(iii) We will not consider the space $D^{[\alpha ]}_{p,q}$ with $p=\infty$ here, since our argument seems valid only in spaces of negative regularity. \end{rem} \begin{prop}\label{prop:A} For the Cauchy problem \eqref{cNLS}, norm inflation occurs in the following cases: (i) In $D^{[\alpha ]}_{p,q}$ for any $1\le q\le \infty$ and $\alpha <\frac{1}{2q}$, if $\frac{3}{2}\le p<\infty$. (ii) In $D^{[\alpha ]}_{p,q}$ and $D^s_{p,q}$ for any $1\le q\le \infty$, $\alpha \in \Bo{R}$ and $s<-\frac{2}{3}$, if $1\le p<\frac{3}{2}$. \end{prop} \begin{rem} (i) If $\frac{3}{2}\le p<\infty$ and $1\le q<\infty$, Proposition~\ref{prop:A} shows inflation of a ``logarithmically subcritical'' norm (i.e., $D^{[\alpha ]}_{p,q}$ with $\alpha >0$). Moreover, if $1\le p<\frac{3}{2}$ we show norm inflation in $D^{s}_{p,q}$ for subcritical regularities $-\frac{2}{3}>s>-\frac{1}{p}$. However, for $q=\infty$ and $p\ge \frac{3}{2}$, inflation is not detected even in the critical norm $D^{[0]}_{p,\infty}$. (ii) In \cite[Theorem~4.7]{KVZ17p} a global-in-time a priori bound was established in $D^{[\frac{3}{2}]}_{2,2}$ and $D^{[2]}_{2,\infty}$. Recently, Oh and Wang \cite{OW18p} proved a global-in-time bound in $\Sc{F} L^{0,p}$ for $Z=\Bo{T}$ and $2\le p<\infty$. There are still some gaps between these results and ours. In fact, Proposition~\ref{prop:A} shows inflation of the $D^{[\frac{1}{4}-]}_{2,2}$ and $D^{[0-]}_{2,\infty}$ norms, as well as of a norm only logarithmically stronger than $\Sc{F} L^{-\frac{1}{p},p}$ for $p\ge 2$. (iii) Guo \cite{G17} also studied \eqref{cNLS} on $\Bo{R}$ in ``almost critical'' spaces. It would be interesting to compare our result with \cite[Theorem~1.8]{G17}, where he showed well-posedness (and hence an a priori bound) in some Orlicz-type generalized modulation spaces which are barely smaller than the critical one $M_{2,\infty}$.
There is no conflict between these results, because the function spaces for which norm inflation is claimed in Proposition~\ref{prop:A} are not included in $M_{2,\infty}$ due to negative regularity. Note also that the function spaces in \cite[Theorem~1.8]{G17} admit the initial data $\phi$ of the form $\widehat{\phi}(\xi )=[\log (2+|\xi |)]^{-\gamma}$ only for $\gamma >2$ (see \cite[Remark~1.9]{G17}), while it belongs to $D_{p,q}^{[\alpha ]}$ if $\gamma >\alpha +\frac{1}{q}$. (iv) In contrast to the results in \cite{KVZ17p,OW18p}, complete integrability of the equation will play no role in our argument. In particular, Proposition~\ref{prop:A} still holds if we replace the nonlinearity in \eqref{cNLS} with any of the other cubic terms $u^3,\bar{u}^3,u\bar{u}^2$ or any linear combination of them with complex coefficients. \end{rem} \begin{proof}[Proof of Proposition~\ref{prop:A}] We follow the argument in Section~\ref{sec:proof0}. For $1\le \rho <\infty$ and $A>0$, let $M^\rho _A$ be the rescaled modulation space defined by the norm \eqq{\norm{f}{M^\rho _A}:=\sum _{\xi \in A\Bo{Z}}\norm{\widehat{f}}{L^\rho (\xi +I_A)},\qquad I_A:=[ -\tfrac{A}{2},\tfrac{A}{2}) .} It is easy to see that $M_A^\rho $ is a Banach algebra with a product estimate: \eqq{\norm{fg}{M_A^\rho}\le CA^{1-\frac{1}{\rho}}\norm{f}{M^\rho _A}\norm{g}{M^\rho _A}.} Mimicking the proof of Lemma~\ref{lem:U_k}, we see that the operators $U_k$ defined as in Definition~\ref{defn:U_k} satisfy \eq{est:A1}{\norm{U_k[\phi ](t)}{M_A^\rho}\le t^{\frac{k-1}{2}}\big( CA^{1-\frac{1}{\rho}}\tnorm{\phi}{M_A^\rho}\big) ^{k-1}\tnorm{\phi}{M_A^\rho},\qquad t\ge 0,\quad k\ge 1.} We also recall that from Corollary~\ref{cor:lwp}, the power series expansion of the solution map $u[\phi ]=\sum _{k\ge 1}U_k[\phi ]$ is verified in $C([0,T];M_A^2)$ whenever \eq{cond:A1}{0<T\ll \big( A^{\frac{1}{2}}\tnorm{\phi}{M_A^2}\big) ^{-2}.} For the proof of norm inflation in $D^{[\alpha ]}_{p,q}$, we restrict the initial data $\phi$ to those 
of the form \eqref{cond:phi}; for given $N\gg 1$, we set \eqq{\widehat{\phi}:=rA^{-\frac{1}{p}}N^{\frac{1}{p}}\chi_{(N+I_A)\cup (2N+I_A)},} where $r>0$ and $1\ll A\ll N$ will be specified later according to $N$. Then, since $\tnorm{\phi}{M_A^2}\sim rA^{\frac{1}{2}-\frac{1}{p}}N^{\frac{1}{p}}$, the condition \eqref{cond:A1} is equivalent to \eq{cond:A2}{0<r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\ll 1.} Moreover, it holds that \eq{est:A2}{\norm{U_1[\phi ](T)}{D^{[\alpha ]}_{p,q}}=\norm{\phi}{D^{[\alpha ]}_{p,q}}\sim r(\log N)^\alpha ,\qquad T\ge 0,} and similarly to Lemma~\ref{lem:U_p} (i), that \eq{est:A3}{\norm{U_3[\phi ](T)}{D^{[\alpha ]}_{p,q}}&\ge cT\big( rA^{-\frac{1}{p}}N^{\frac{1}{p}}\big) ^3A^2\norm{\Sc{F} ^{-1}\chi _{I_{A/2}}}{D^{[\alpha ]}_{p,q}},\qquad 0<T\le \tfrac{1}{100}N^{-2},\\ &=c\Big[ r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^2r\Big( \frac{A}{N}\Big) ^{-\frac{1}{p}}f_{p,q}^\alpha (A),} where \eqq{f_{p,q}^\alpha (A):=\norm{\Sc{F} ^{-1}\chi _{I_{A/2}}}{D^{[\alpha ]}_{p,q}}\sim \begin{cases} (\log A)^{\alpha +\frac{1}{q}}, &\alpha >-\frac{1}{q},\\ (\log \log A)^{\frac{1}{q}}, &\alpha =-\frac{1}{q},\\ ~1, &\alpha <-\frac{1}{q}.\end{cases}} For estimating $U_{2l+1}[\phi]$, $l\ge 2$ in $D^{[\alpha ]}_{p,q}$, we first observe that \eqq{\norm{U_k[\phi ](T)}{D^{[\alpha ]}_{p,q}}\le \norm{\Sc{F} ^{-1}\chi _{\supp{\widehat{U_k[\phi ]}(T)}}}{D^{[\alpha ]}_{p,q}}\norm{\widehat{U_k[\phi ]}(T)}{L^\infty}. } A simple computation yields that \eqq{\norm{\Sc{F} ^{-1}\chi _\Omega}{D^{[\alpha ]}_{p,q}}\le C\norm{\Sc{F} ^{-1}\chi _{I_{|\Omega |}}}{D^{[\alpha ]}_{p,q}}} for any measurable set $\Omega \subset \Bo{R}$ of finite measure. 
From Lemma~\ref{lem:supp}, we have \eqq{\big| \supp{\widehat{U_k[\phi ]}(T)}\big| \le C^kA, \qquad T\ge 0,\quad k\ge 1,} and hence, \eqq{\norm{\Sc{F} ^{-1}\chi _{\supp{\widehat{U_k[\phi ]}(T)}}}{D^{[\alpha ]}_{p,q}}\le C\norm{\Sc{F} ^{-1}\chi _{I_{C^kA}}}{D^{[\alpha ]}_{p,q}}\le C^kf_{p,q}^\alpha (A).} Moreover, similarly to Lemma~\ref{lem:U_k_H^s} (ii), we use Young's inequality, \eqref{est:A1} and Lemma~\ref{lem:a_k} to obtain \eqq{\norm{\widehat{U_k[\phi ]}(T)}{L^\infty}&\le \sum _{\mat{k_1,k_2,k_3\ge 1\\k_1+k_2+k_3=k}}\int _0^T\norm{\widehat{U_{k_1}[\phi ]}(t)}{M^{\frac{3}{2}}_A}\norm{\widehat{U_{k_2}[\phi ]}(t)}{M^{\frac{3}{2}}_A}\norm{\widehat{U_{k_3}[\phi ]}(t)}{M^{\frac{3}{2}}_A}\,dt\\ &\le \int _0^Tt^{\frac{k-3}{2}}\,dt\cdot \big( CrA^{1-\frac{1}{p}}N^{\frac{1}{p}}\big) ^{k-3}\big( CrA^{\frac{2}{3}-\frac{1}{p}}N^{\frac{1}{p}}\big) ^{3}\\ &\le C\big( CrT^{\frac{1}{2}}A^{1-\frac{1}{p}}N^{\frac{1}{p}}\big) ^{k-1}rA^{-\frac{1}{p}}N^{\frac{1}{p}},\qquad T\ge 0,\quad k\ge 3. } Hence, we have \eq{est:A4}{\norm{U_k[\phi ](T)}{D^{[\alpha ]}_{p,q}}\le C\Big[ Cr(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^{k-1}r\Big( \frac{A}{N}\Big) ^{-\frac{1}{p}}f_{p,q}^\alpha (A),\quad T\ge 0,~~k\ge 3.} From \eqref{cond:A2}--\eqref{est:A4}, we only need to check if there exist $r,A,T$ such that \eq{cond:A3}{1\ll A\ll N,\qquad r\ll (\log N)^{-\alpha} ,\qquad (TN^2)\le \tfrac{1}{100},\qquad\qquad \\ \Big[ r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^2\ll 1 \ll \Big[ r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^2r\Big( \frac{A}{N}\Big) ^{-\frac{1}{p}}f^\alpha _{p,q}(A).} When $1\le p<\frac{3}{2}$, it holds that $2(1-\frac{1}{p})\ge 0>2(1-\frac{1}{p})-\frac{1}{p}$. Hence, we may choose \eqs{r=(\log N)^{\min \{ -\alpha ,0\} -1},\quad A=N^{\frac{1}{2}},\quad T=\tfrac{1}{100}N^{-2},} which clearly satisfies \eqref{cond:A3}. (Note that $f^\alpha _{p,q}(A)\gtrsim 1$ for any $p,q,\alpha$.) 
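Indeed, for this choice the two nontrivial conditions in \eqref{cond:A3} are verified directly (we spell out the computation for the reader): \eqq{r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}&=\tfrac{1}{10}(\log N)^{\min \{ -\alpha ,0\} -1}N^{-\frac{1}{2}(1-\frac{1}{p})}\ll 1,\\ \Big[ r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^2r\Big( \frac{A}{N}\Big) ^{-\frac{1}{p}}f^\alpha _{p,q}(A)&\gtrsim (\log N)^{3\min \{ -\alpha ,0\} -3}N^{\frac{3}{2p}-1}\gg 1,} where the divergence in the second line holds since $\frac{3}{2p}-1>0$ precisely when $p<\frac{3}{2}$, and the requirement $r\ll (\log N)^{-\alpha}$ holds because $\min \{ -\alpha ,0\} -1<-\alpha$.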
If $\frac{3}{2}\le p<\infty$, \eqref{cond:A3} would imply that \eqq{1 \ll \Big[ r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^2r\Big( \frac{A}{N}\Big) ^{-\frac{1}{p}}f^\alpha _{p,q}(A)\lesssim r^3f^\alpha _{p,q}(A)\ll (\log N)^{-3\alpha}f^\alpha _{p,q}(A).} In particular, when $\alpha >-\frac{1}{q}$ this condition requires \eqq{(\log N)^{3\alpha}\ll (\log N)^{\alpha +\frac{1}{q}},} which shows the necessity of the restriction $\alpha <\frac{1}{2q}$ in our argument. We now verify, in the following two cases separately, that $r,A,T$ satisfying \eqref{cond:A3} can indeed be chosen: (a) If $1\le q<\infty$ and $0\le \alpha<\frac{1}{2q}$, we may take for instance \eqq{r=(\log N)^{-\alpha}(\log \log N)^{-1},\quad A=N(\log \log N)^{-1},\quad T=\tfrac{1}{100}N^{-2}.} (Note that $f^\alpha _{p,q}(A)\sim f^\alpha _{p,q}(N)\sim (\log N)^{\alpha +\frac{1}{q}}$.) (b) If $\alpha <0$, we take \eqq{r=(\log N)^{-\alpha}(\log \log N)^{-1},\quad A=N(\log N)^{\alpha (1-\frac{1}{p})^{-1}},\quad T=\tfrac{1}{100}N^{-2}.} In both cases, \eqref{cond:A3} is easily verified. Finally, we assume $1\le p<\frac{3}{2}$ and prove norm inflation in $D^s_{p,q}$ for $s<-\frac{2}{3}$. We use the initial data $\phi$ of the form \eqq{\widehat{\phi}:=rN^{-s}\chi_{[N,N+1]\cup [2N,2N+1]}.} Then, the condition \eqref{cond:A1} with $A=1$ is equivalent to \eqq{0<T^{\frac{1}{2}}rN^{-s}=(TN^2)^{\frac{1}{2}}rN^{-s-1}\ll 1.} Repeating the argument above, we also verify that \eqs{\norm{U_1[\phi ](T)}{D^s_{p,q}}=\norm{\phi}{D^s_{p,q}}\sim r,\\ \norm{U_3[\phi ](T)}{D^s_{p,q}}\ge c\big( T^\frac{1}{2}rN^{-s}\big) ^2rN^{-s}=c(TN^2)r^3N^{-3s-2}\qquad \text{if $T\le \tfrac{1}{100}N^{-2}$},\\ \norm{U_k[\phi ](T)}{D^s_{p,q}}\le C\big( CT^\frac{1}{2}rN^{-s}\big) ^{k-1}rN^{-s},\qquad T\ge 0,~k\ge 3.
} Hence, we set \eqq{r=N^{s+\frac{2}{3}}\log N,\qquad T=\tfrac{1}{100}N^{-2},} so that for $s<-\frac{2}{3}$ we have \eqs{\norm{U_1[\phi ](T)}{D^s_{p,q}}\sim N^{s+\frac{2}{3}}\log N\ll 1,\qquad \norm{U_3[\phi ](T)}{D^s_{p,q}}\gtrsim (\log N)^3\gg 1, \\ \sum _{l\ge 2}\norm{U_{2l+1}[\phi ](T)}{D^s_{p,q}}\lesssim N^{-\frac{2}{3}}(\log N)^5\ll 1,} from which norm inflation is detected by letting $N\to \infty$. \end{proof} \noindent \textbf{Acknowledgments:} The author would like to thank Tadahiro Oh for his generous suggestions and encouragement. This work is partially supported by JSPS KAKENHI Grant-in-Aid for Young Scientists (B) No.~24740086 and No.~16K17626. \end{document}
\begin{document} \title{Critically-enhanced spin-nematic squeezing and entanglement in dipolar spinor condensates} \author{Qing-Shou Tan} \affiliation{Key Laboratory of Hunan Province on Information Photonics and Freespace Optical Communications, Hunan Institute of Science and Technology, Yueyang 414000, China} \affiliation{College of Physics and Electronic Engineering, Hainan Normal University, Haikou 571158, China} \author{Yixiao Huang} \affiliation{School of Science, Zhejiang University of Science and Technology, Hangzhou, Zhejiang, 310023, China} \author{Qiong-Tao Xie} \affiliation{College of Physics and Electronic Engineering, Hainan Normal University, Haikou 571158, China} \author{Xiaoguang Wang} \affiliation{Zhejiang Institute of Modern Physics, Department of Physics, Zhejiang University, Hangzhou 310027, China} \date{\today } \begin{abstract} We study quantum-criticality-enhanced spin-nematic squeezing and quantum Fisher information (QFI) in a spin-1 dipolar atomic Bose-Einstein condensate. We show that the quantum phase transitions can improve the squeezing and the QFI in the vicinity of the critical points, so that Heisenberg-limited high-precision metrology can be achieved. The distinct properties of the ground-state squeezing and entanglement for even and odd atom numbers are further analyzed by deriving exact analytical expressions. We also demonstrate the squeezing and entanglement generated by the spin-mixing dynamics around the phase-transition point. It is shown that steady squeezing and entanglement can be obtained, and that the Bogoliubov approximation describes well the dynamics of the spin-nematic squeezed vacuum state. \end{abstract} \maketitle \section{Introduction} Spin squeezing has attracted much attention in precision metrology since it was first introduced by Kitagawa and Ueda~\cite{Kitagawa}.
In the past two decades, spin-squeezed states have been widely used in high-precision measurements to beat the standard quantum limit (SQL)~\cite{ Wineland,Wineland2,ma,Pezze,Cronin, Sau, Vitagliano}, which is the best estimation limit for separable states of $N$ particles and scales as $1/\sqrt{N}$. In Ref.~\cite{Kitagawa}, two different mechanisms were proposed to generate spin-squeezed states: one-axis twisting (OAT) and two-axis twisting (TAT). The precision allowed by OAT and TAT states scales as $1/N^{2/3}$ and $1/N$, respectively. The best precision of TAT squeezed states is known as Heisenberg scaling. In experiments, the TAT squeezed states are hard to achieve, while the OAT ones have been applied in Ramsey spectroscopy, atom interferometers and high-precision atomic clocks. The nonlinearity of Bose-Einstein condensates (BECs) caused by atomic collisions can create spin-squeezed states, and such condensates have proved to be ideal candidates for high-resolution quantum metrology~\cite{Gross, Riedel}. In particular, spinor atomic BECs have attracted much interest~\cite{law, chang, Kawaguchi,Stamper-Kurn,duan,you,Mustecapliogl,Kajtoch} due to their significant roles in studying the quantum metrology of many-body spin systems. Broadly, these works fall into two categories: spin-1/2 and integer-spin atomic systems. Compared with spin-1/2 atoms, whose states can be uniquely specified by the different components of the total spin vector $\hat{\bf {S}}=(\hat{S}_x, \hat{S}_y, \hat{S}_z)$, spin-1 atoms require additional spinor degrees of freedom for their description, associated with the quadrupole or nematic tensor operator $\hat{Q}_{ij}$ $({i,j}\in\{x,y,z\})$ \cite{Hamley,Gerving,Hoang,Huang,Masson,Masson2,Niezgoda}. These additional degrees of freedom concomitantly offer more possibilities for squeezing and entanglement.
Recently, spin-nematic squeezing was observed experimentally via the nonlinear collisional dynamics of a spinor BEC, with an improvement over the SQL of up to 8--10~dB~\cite{Hamley}. In spinor atomic BECs, besides the nonlinear collisional interactions, there is also the long-range magnetic dipole-dipole interaction (MDDI)~\cite{syi2001,syi,Stuhler,syi2,xing,Giovanazzi,Griesmaier,Puh, Zhangw,huangy}. According to recent experimental and theoretical observations in $^{23}$Na, $^{87}$Rb and $^{52}$Cr atoms, the MDDI is indeed not negligible in these spinor condensates. In particular, advances in spinor BECs provide a highly tunable and controllable system in which the spin interactions, including the MDDI, can be accurately engineered~\cite{syi2001,syi, Giovanazzi,Chin}. The relative strength of the dipolar and spin-exchange interactions gives rise to a rich phase diagram~\cite{syi,syi2,huangy}. The transitions between the different phases are of interest with respect to spin squeezing and entanglement. The present work concerns the generation of strong spin-nematic squeezing and metrologically useful entanglement in the different phases of a spin-1 dipolar condensate consisting of an ensemble of $N$ atoms. Both the ground states and the dynamical behavior are considered. As in usual spin squeezing, spin-nematic squeezing also induces entanglement in an ensemble of atomic spins. Quantum Fisher information (QFI)~\cite{Helstrom,Holevo}, which plays a central role in quantum metrology, is able to detect useful multipartite entanglement. It has been shown that the QFI can perform even better than the spin-squeezing parameter in the detection of non-Gaussian states~\cite{strobel}. Thus, we can characterize the metrologically useful entanglement with the QFI. At the SQL, the QFI $F \propto N$ is reached with uncorrelated atoms, while at the Heisenberg limit (HL), $F\propto N^2$ is possible using entangled states.
For the system considered here, in the ground-state case, both the squeezing and the QFI exhibit three sharp changes at the phase-transition points. More specifically, as the MDDI is varied, the QFI ranges from that of an unentangled state, scaling as $N$, to that of a highly entangled state, scaling as $N^2$, which enables precision metrology at the Heisenberg scaling. The optimal squeezing, similar to the TAT case with $\xi^2\propto 1/N$, occurs near the regime of vanishing MDDI, where, however, the behavior is quite different for even and odd $N$. In the dynamical case, we focus on the steady squeezing near the critical point, at which the spin-transfer rates are very low. In this case it is possible to obtain analytical predictions for the spin-nematic squeezed vacuum state and the QFI by adopting the Bogoliubov approximation. We also show that the analytical results are in good agreement with the numerical calculations. Our results shed new light on obtaining metrologically useful entanglement to improve the precision of quantum metrology using spinor BECs. This work is organized as follows. In Sec.~\ref{model}, we introduce the physical model of a spin-1 dipolar condensate and present the spin-nematic squeezing parameter and the QFI. In Secs.~\ref{ground} and~\ref{dynamics}, we study the critically enhanced spin-nematic squeezing and QFI for the ground states and for the dynamics, respectively. Finally, a conclusion is presented in Sec.~\ref{conclusion}. \section{FORMULATION}\label{model} \subsection{Model} We consider a trapped gas of $N$ bosonic atoms with hyperfine spin $f=1$. Atoms interact via $s$-wave collisions and the dipolar interaction. Assuming all spin components share a common spatial mode $\phi(r)$, under the single-mode approximation, the total spin-dependent Hamiltonian reads \cite{syi,syi2,xing} \begin{eqnarray}\label{ham1} \hat{H}=(c_{2}'-c_{d}')\hat{\bf S}^{2}+3c_{d}' \hat{S}_{z}^{2}+3c_{d}' \hat{a}_{0}^{\dagger}\hat{a}_{0}.
\end{eqnarray} The total many-body angular momentum operator is $\hat{\bf S}=\sum_{\alpha,\beta}\hat{a}_{\alpha}^{\dagger}{\bm F}_{\alpha\beta}\hat{a}_{\beta}$ $(\alpha,\beta \in \{0,\pm1\} )$ with $\bm F$ being the spin-1 matrices and $\hat{a}_{\alpha}$ the annihilation operator associated with the condensate mode, and the magnetization is defined as $\hat{S}_z=\hat{a}_1^{\dagger}\hat{a}_1-\hat{a}_{-1}^{\dagger}\hat{a}_{-1}$. The rescaled collisional and dipolar interaction strengths, respectively, are given by $c_{2}'=(c_{2}/2)\int dr|\phi(r)|^{4}$ and $c_{d}'=(c_{d}/4)\int drdr'|\phi(r)|^{2}|\phi(r')|^{2}(1-3\cos^{2}\theta_{e})/|\vec{r}-\vec{r'}|^{3}$ with $\theta_{e}$ being the polar angle of $(\vec{r}-\vec{r'})$. Here $c_{2}=4\pi\hbar^{2}(a_{2}-a_{0})/(3M)$ with $M$ being the mass of the atom and $a_{0,2}$ the $s$-wave scattering lengths for two spin-1 atoms in the symmetric channel of total spin 0 and 2, respectively. The strength of the MDDI is given by $c_{d}=\mu_{0}g_{F}^{2}\mu_{B}^{2}/4\pi$ with $\mu_{B}$ the Bohr magneton, and $g_{F}$ the Land\'e $g$-factor. To proceed, it is convenient to rescale the Hamiltonian by using $|c_{2}'|$ as the energy unit, which yields the dimensionless Hamiltonian \begin{eqnarray}\label{ham2} \hat{H}/|c_{2}'|=(\pm1-c)\hat{\bf S}^{2}+3c\hat{S}_{z}^{2}+3c \hat{a}_{0}^{\dagger}\hat{a}_{0}. \end{eqnarray} The sign $``+"$ $(``-")$ corresponds to $c_2'>0$ $(c_2'<0)$, which is determined by the type of atoms: for $c_2' <0$ (as for $^{87}$Rb) the interaction term favors the ferromagnetic phase, whereas for $c_2'>0$ (as for $^{23}$Na) the antiferromagnetic phase minimizes the interaction energy. Here $c\equiv c_{d}'/|c_{2}'|$ is the relative strength of the dipolar interaction with respect to the spin-exchange interaction, and is treated as a control parameter.
Fortunately, the sign and magnitude of the dipolar interaction strength $c_{d}'$ can be tuned by modifying the trapping geometry~\cite{syi} or by a rapidly rotating orienting field~\cite{Giovanazzi}, and the contact interaction strength $c_{2}'$ is also tunable via Feshbach resonance. Without loss of generality, throughout this paper we focus on the case of an antiferromagnetic Bose-Einstein condensate, such that $c_2'>0$. Owing to the dipolar interaction, new quantum phases can be found by tuning the value of $c$~\cite{syi}. The $c$-dependent ground state of Hamiltonian~(\ref{ham2}) can be found by minimizing $\langle{\hat{H}}\rangle$ in the $|S,m\rangle$ basis, which is defined by \begin{eqnarray} \hat{\bf S}^2|S,m\rangle=S(S+1)|S,m\rangle, \hspace{0.2cm} \hat{S}_z|S,m\rangle=m |S,m\rangle, \end{eqnarray} where $m=0,\pm1,\ldots,\pm S$. For a given total number of atoms $N$, the allowable values of $S$ are $S=0,2,4,\ldots,N$ for even $N$, and $S=1,3,5,\ldots,N$ for odd $N$. \subsection{Spin-nematic squeezing parameter and quantum Fisher information} In the case of spin-1 atomic Bose-Einstein condensates, the multipolar moments can be specified in terms of both the spin vector $\hat{S}_{i}$ and the nematic tensor $\hat{Q}_{ij}$ $({i,j}\in\{x,y,z\})$, which constitute an SU(3) Lie algebra.
Based on the definitions of the operators $\hat{Q}_{ij}$~\cite{Hamley,Gerving,Hoang,Huang}, \begin{eqnarray}\label{Qij} \hat{Q}_{yz} & = & \frac{i}{\sqrt{2}}\left(-\hat{a}_{1}^{\dagger}\hat{a}_{0}+\hat{a}_{0}^{\dagger}\hat{a}_{-1}+\hat{a}_{0}^{\dagger}\hat{a}_{1}-\hat{a}_{-1}^{\dagger}\hat{a}_{0}\right), \nonumber\\ \hat{Q}_{xz} & = & \frac{1}{\sqrt{2}}\left(\hat{a}_{1}^{\dagger}\hat{a}_{0}-\hat{a}_{0}^{\dagger}\hat{a}_{-1}+\hat{a}_{0}^{\dagger}\hat{a}_{1}-\hat{a}_{-1}^{\dagger}\hat{a}_{0}\right),\nonumber\\ \hat{Q}_{xx} & = & \frac{2}{3}\hat{a}_{0}^{\dagger}\hat{a}_{0}-\frac{1}{3}\hat{a}_{1}^{\dagger}\hat{a}_{1}-\frac{1}{3}\hat{a}_{-1}^{\dagger}\hat{a}_{-1}+\hat{a}_{1}^{\dagger}\hat{a}_{-1}+\hat{a}_{-1}^{\dagger}\hat{a}_{1},\nonumber\\ \hat{Q}_{yy} & = & -\frac{1}{3}\hat{a}_{1}^{\dagger}\hat{a}_{1}+\frac{2}{3}\hat{a}_{0}^{\dagger}\hat{a}_{0}-\frac{1}{3}\hat{a}_{-1}^{\dagger}\hat{a}_{-1}-\hat{a}_{1}^{\dagger}\hat{a}_{-1}-\hat{a}_{-1}^{\dagger}\hat{a}_{1},\nonumber\\ \hat{Q}_{zz} & = & \frac{2}{3}\hat{a}_{1}^{\dagger}\hat{a}_{1}-\frac{4}{3}\hat{a}_{0}^{\dagger}\hat{a}_{0}+\frac{2}{3}\hat{a}_{-1}^{\dagger}\hat{a}_{-1}, \nonumber \end{eqnarray} there are two different spin-nematic squeezing parameters in the SU(2) subspaces $\{\hat{S}_{x},\hat{Q}_{yz},\hat{Q}_{zz}- \hat{Q}_{yy}\}$ and $\{\hat{S}_{y},\hat{Q}_{xz},\hat{Q}_{xx}- \hat{Q}_{zz}\}$, which are defined by~\cite{Hamley,Huang} \begin{equation}\label{xi1} \xi_{x(y)}^{2}=\frac{2\langle[\Delta(S_{x(y)} \cos\varphi +Q_{yz(xz)} \sin\varphi)]^{2}\rangle_{{\rm min}}}{|\langle \hat{Q}_{zz}- \hat{Q}_{yy(xx)}\rangle|}, \end{equation} where the minimization is over the quadrature angle $\varphi$. A state is spin-nematic squeezed if $\xi_{x(y)}^{2}<1$. Below, we focus on the squeezing in the $\{S_{x},Q_{yz},Q_{+}\}$ subspace with $\hat{Q}_{+}=\hat{Q}_{zz}- \hat{Q}_{yy}$.
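The minimization over $\varphi$ in Eq.~(\ref{xi1}) rests on the elementary identity $\min_\varphi\,(A+B\cos 2\varphi+C\sin 2\varphi)=A-\sqrt{B^2+C^2}$ for the doubled quadrature variance. The following minimal numerical sketch (added here for illustration; the values of $A$, $B$, $C$ are arbitrary numbers, not the expectation values of the text) confirms the identity:

```python
import numpy as np

# Sanity check (illustrative): for a zero-mean quadrature operator
# X_phi = S_x cos(phi) + Q_yz sin(phi), one has
#   2 <(Delta X_phi)^2> = A + B cos(2 phi) + C sin(2 phi),
# and minimizing over phi gives A - sqrt(B^2 + C^2).

rng = np.random.default_rng(0)
phi = np.linspace(0.0, np.pi, 200001)  # the variance has period pi in phi

for _ in range(5):
    B, C = rng.normal(size=2)
    A = np.hypot(B, C) + rng.uniform(0.1, 2.0)  # keep the variance positive
    variance = A + B * np.cos(2 * phi) + C * np.sin(2 * phi)
    assert abs(variance.min() - (A - np.hypot(B, C))) < 1e-6
print("min over phi matches A - sqrt(B^2 + C^2)")
```

This is exactly the step that turns Eq.~(\ref{xi1}) into a closed-form expression in $A$, $B$, $C$.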
The spin-nematic squeezing parameter can be reduced to \begin{eqnarray}\label{xi2} \xi^2_x =\frac{ A-\sqrt{ B^2+C^2}} {|\langle Q_+ \rangle|} \end{eqnarray} by finding the optimal squeezing angle \begin{eqnarray*} \varphi_{{\rm opt}} & =\begin{cases} \frac{1}{2}\arccos\left(\frac{-B}{\sqrt{B^{2}+C^{2}}}\right) & B\leq0\\ \pi-\frac{1}{2}\arccos\left(\frac{-B}{\sqrt{B^{2}+C^{2}}}\right) & B>0 \end{cases},\\ \end{eqnarray*} where we define \begin{eqnarray}\label{ev} A&=& \langle S_{x}^{2}+Q_{yz}^{2}\rangle , \hspace{0.5cm} B= \langle S_{x}^{2}-Q_{yz}^{2}\rangle , \nonumber \\ C&=& \langle S_{x}Q_{yz}+Q_{yz}S_{x}\rangle. \end{eqnarray} \begin{figure} \caption{The $c$ dependence of the spin-nematic squeezing~$\xi^2_x$ for $N=100$ and $N=101$. The inset shows $\langle Q_+ \rangle$ as a function of $c$.} \end{figure} A wide variety of spin-squeezing techniques have been used to demonstrate sub-SQL metrological sensitivity. To better understand the behavior of the enhanced metrological sensitivity, we can evaluate the QFI in the $\hat{\Lambda}=\{\hat{S}_{x}, \hat{Q}_{yz}, \hat{Q}_{+}\}$ subspace. According to Refs.~\cite{ma,ma2,Ferrini,Huang2,liu,Niezgoda}, the QFI $F$ with respect to the measured phase $\theta$, acquired by an SU(2) rotation on the input state $\hat{\rho}_{{\rm in}}$, can be explicitly derived as \begin{eqnarray}\label{fi0} F[\hat{\rho}(\theta),\hat{\Lambda}_{\vec{n}}] = \vec{n}C\vec{n}^{T} \end{eqnarray} where \begin{eqnarray} \hat{\rho}(\theta)=\exp(-i\theta\hat{\Lambda}_{\vec{n}})\hat{\rho}_{{\rm in}}\exp(i\theta\hat{\Lambda}_{\vec{n}}) \end{eqnarray} with $\hat{\Lambda}_{\vec{n}}=\hat{\Lambda}\cdot\vec{n}$ being the generator of the rotation, and $\vec{n}$ a unit vector.
Here the matrix elements of the symmetric matrix $C$ are \begin{eqnarray*} C_{kl}=\sum_{i\neq j}\frac{(p_{i}-p_{j})^{2}}{p_{i}+p_{j}}[\left\langle i\right|\Lambda_{k}\left|j\right\rangle \left\langle j\right|\Lambda_{l}\left|i\right\rangle +\left\langle i\right|\Lambda_{l}\left|j\right\rangle \left\langle j\right|\Lambda_{k}\left|i\right\rangle ], \end{eqnarray*} where $p_{i}(|i\rangle)$ are the eigenvalues (eigenvectors) of $\hat{\rho}(\theta)$. From Eq.~(\ref{fi0}), one finds that, to attain the highest possible precision in estimating $\theta$, a proper direction $\vec{n}$ that maximizes the value of the QFI should be chosen for a given state. With the help of the symmetric matrix $C$, the maximal QFI in the $\{\hat{S}_{x}, \hat{Q}_{yz}, \hat{Q}_{+}\}$ subspace can be obtained as \begin{eqnarray}\label{fi} F_{\rm max} &=& 4 \max \{ (\Delta \Lambda_{\perp})^{2}_{{\rm max}}, (\Delta Q_{+}/2)^2\}\nonumber\\ &=& \max \left\{ 2(A+\sqrt{B^2+C^2}), (\Delta Q_{+})^2\right\}, \end{eqnarray} where $Q_+$ is normalized by dividing by 2, since $|\langle Q_+ \rangle|_{\rm max} =2N$. In Eq.~(\ref{fi}), the maximal possible value of the QFI is $F=4N^2$, which can be reached only by fully particle-entangled states. On the other hand, separable states can give at most $F=4N$, such as the $|0,N,0\rangle$ state. The factor 4 in the scaling of the characteristic limits of the QFI is due to the SU(3) Lie algebra~\cite{Niezgoda}. In terms of the definition in Eq.~(\ref{fi}), a state is entangled in the $\{\hat{S}_{x}, \hat{Q}_{yz}, \hat{Q}_{+}\}$ subspace if the QFI satisfies $F>4N$. In what follows, we study the spin-nematic squeezing and the QFI for the ground states and for the spin-mixing dynamics, respectively, when $c_2'>0$.
\begin{figure} \caption{The $c$ dependence of the maximal QFI divided by $4N^2$ for (a) $N=100$ and (b) $N=101$.} \end{figure} \section{Spin-nematic squeezing and quantum Fisher information in ground states}\label{ground} \begin{table*}[!htp] \floatsetup{floatrowsep=quad,captionskip=10pt} \tabcolsep=8pt \begin{floatrow} \capbtabbox{ \begin{tabular}{|p{0.5cm}|p{1.1cm}|p{1.4cm}|p{1.4cm}| p{1.1cm}|} \hline\hline $c$ & $<$-0.5 & 0 & 1 & $ > 1 $ \\ \hline $|G\rangle$ & $|N,\pm N\rangle$ & $|0,0\rangle$& $|\frac{N}{2},0,\frac{N}{2}\rangle$ & $\approx|N,0\rangle$ \\ \hline $\xi^2_x$ & 1 & undefined & 1 & $>1 $ \\ \hline $F^{\rm max}$ & $2N$ & $\frac{16N(N+3)}{15}$ & $\frac{N^2}{2}+N$ & $ \approx 2N^2$ \\ \hline \hline \end{tabular} }{ \caption{The ground state $|G\rangle$, the spin-nematic squeezing $\xi^2_x$, and the maximal QFI $F^{\rm max}$ in the regimes $c<-0.5$, $c=0$, $c=1$, and $c>1$ for even $N$.}\label{tab:tb1} } \capbtabbox{ \begin{tabular}{|p{0.5cm}|p{1.1cm}|p{1.4cm}|p{1.5cm}| p{1.1cm}|} \hline\hline $c$ &$<$-0.5 & 0 & 1 & $>1$ \\ \hline $|G\rangle$ & $|S,\pm N\rangle$ & $|1,0\rangle$ & $|\frac{N+1}{2},0,\frac{N-1}{2}\rangle$ & $ \approx |N,0\rangle$ \\ \hline $\xi^2_x$ & 1 & $ \frac{5}{2N+3}$ & 1 &$>1$ \\ \hline $F^{\rm max}$ & $2N$ & $\frac{48N(N+3)-52}{35}$& $\frac{N^2-1}{2}+N$ & $ \approx 2N^2$ \\ \hline \hline \end{tabular} }{ \caption{The ground state $|G\rangle$, the spin-nematic squeezing $\xi^2_x$, and the maximal QFI $F^{\rm max}$ in the regimes $c<-0.5$, $c=0$, $c=1$, and $c>1$ for odd $N$.} \label{tab:tb2} } \end{floatrow} \end{table*} Now, we consider the spin-nematic squeezing and the QFI for the ground states. Numerically, it is convenient to expand the ground state as \begin{eqnarray} |G \rangle = \sum_{m,k} g_{m,k}|m, k\rangle, \end{eqnarray} in the Fock basis $|m,k\rangle\equiv|N_1, N_0, N_{-1}\rangle$ with the notations $N_{1}=k$, $N_{0}=N-2k+m$ and $N_{-1}= k-m$.
Here $m = -N, -N+1,\ldots, N$; for a given $m$, the allowable values of $k$ satisfy the relation $ { \max}(0,m)\leq k\leq {\rm Int}\left[\frac{N+m}{2}\right], $ where ${\rm Int}[x]$ denotes the integer part of $x$. Since Hamiltonian (\ref{ham2}) commutes with $\hat{S}_z$, the ground state must lie in a definite $m$ subspace, and the matrix elements of Hamiltonian (\ref{ham2}) become $H_{m,k,m,k'}=\langle m,k|H|m,k'\rangle$. The amplitudes $g_{m,k}$ can be obtained simply by numerically diagonalizing the Hamiltonian. Hence the expectation values given in Eq.~(\ref{ev}) read \begin{eqnarray} A= \sum_{m,k} g_{m,k}^2 \left[(2N-4k+2m-1)(2k-m)+2N\right] \end{eqnarray} and \begin{eqnarray} &&\sqrt{B^2+C^2}=4\sum_{m,k} |g_{m,k}g_{m,k+1}| \times \nonumber\\ &&\sqrt{(N-2k+m-1)(N-2k+m)(k+1)(k+1-m)}. \nonumber\\ \end{eqnarray} We can also find the expectation value of $Q_{+}$, \begin{eqnarray} \left\langle Q_{+}\right\rangle =\sum_{m,k} g_{m,k}^{2}(6k-2N-3m), \end{eqnarray} as well as the corresponding fluctuation \begin{eqnarray} (\Delta Q_+)^2 & = &9\left[ \sum_{m,k} g_{m,k}^{2}(2k-m)^2- \left (\sum_{m,k} g_{m,k}^{2}(2k-m)\right)^2 \right]\nonumber\\ &&+ \sum_{m,k} g_{m,k}^{2} [ 2k(k+1)-m(2k+1)]. \end{eqnarray} Substituting the above equations into Eqs.~(\ref{xi2}) and~(\ref{fi}), we obtain the spin-nematic squeezing and the QFI for the ground states. Figures~1 and 2 illustrate the $c$ dependence of the spin-nematic squeezing $(10\log_{10}\xi^2_x)$ and of the QFI, respectively, for $N=100$ (even) and $N=101$ (odd). From Figs.~1 and 2, we can see that there are three sharp changes for both the squeezing and the QFI, at $c= -0.5$, $c=0$ and $c=1$. The squeezing can be found in the region $-0.5<c<1$. When $c<-0.5$ there is neither squeezing ($\xi^2_x=1$) nor entanglement ($F^{\rm max}=2N$), since the ground state is a Fock state with all the population in either the $m_f=1$ or the $m_f=-1$ state, for both even and odd $N$.
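As a concrete illustration of this diagonalization, the following minimal sketch (added here; not from the original) builds the Hamiltonian in the $m=0$ sector, where $\hat{S}_z=0$; the matrix elements of $\hat{\bf S}^2$ used below follow from standard spin-1 boson algebra and are an assumption of this sketch. It recovers the spin-singlet ground state at $c=0$ and the Twin-Fock ground state at $c=1$, consistent with Table~\ref{tab:tb1}:

```python
import numpy as np

def hamiltonian_m0(N, c):
    """H/|c2'| = (1 - c) S^2 + 3c S_z^2 + 3c n_0 (antiferromagnetic sign),
    restricted to the m = 0 sector spanned by |k> = |k, N-2k, k>,
    k = 0..N/2, where S_z = 0.  The S^2 matrix elements follow from
    standard spin-1 boson algebra (an assumption of this sketch)."""
    dim = N // 2 + 1
    S2 = np.zeros((dim, dim))
    for k in range(dim):
        n0 = N - 2 * k
        S2[k, k] = 2 * ((2 * k + 1) * n0 + k)
        if k + 1 < dim:
            S2[k, k + 1] = S2[k + 1, k] = 2 * (k + 1) * np.sqrt(n0 * (n0 - 1))
    return (1 - c) * S2 + 3 * c * np.diag([N - 2 * k for k in range(dim)])

N = 10  # small even N for illustration
w0, v0 = np.linalg.eigh(hamiltonian_m0(N, 0.0))
w1, v1 = np.linalg.eigh(hamiltonian_m0(N, 1.0))
assert abs(w0[0]) < 1e-9                      # c = 0: spin singlet, energy 0
assert abs(abs(v1[N // 2, 0]) - 1.0) < 1e-9   # c = 1: Twin-Fock |N/2, 0, N/2>
print("c=0 ground state: spin singlet; c=1 ground state: Twin-Fock")
```

The full calculation in the text additionally scans all $m$ sectors; the $m=0$ sector suffices here because the ground states near $c=0$ and $c=1$ carry zero magnetization.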
When $c\ge1$, there is no squeezing ($\xi^2_x\ge1$), but the ground states are highly entangled. For instance, when $c=1$, $|G\rangle=|N/2,0,N/2\rangle$ (assuming $N$ to be even) is the Twin-Fock state~\cite{holland,hyllus,you,lucke}, which is a deeply entangled state in the particle picture. Recently, Luo \emph{et al.} demonstrated near-deterministic generation of this state with about $11{,}000$ atoms in a $^{87}$Rb BEC~\cite{you}. For $c>1$, $|G\rangle \approx |S=N,m=0\rangle$ is the so-called Dicke state~\cite{syi,duan,Wieczorek}, which is a massively entangled state of all the atoms ($F^{\rm max}\approx 2N^2$). Zhang \emph{et al.}~\cite{duan} have proposed a robust method to generate this state in a spinor BEC. From Fig.~1, we can clearly see that the optimal squeezing occurs around $c=0$, but the behavior is different for even and odd $N$ in this regime. Actually, at $c=0$ the spin-nematic squeezing is not well defined for even $N$; therefore, we discuss the results for even and odd $N$ separately. For $c=0$ and even $N$, the ground state $|G\rangle$ is the spin-singlet state~\cite{law} \begin{equation} |S=0, m=0\rangle =\sum_{k=0}^{N/2} \tilde{g}_k |k, N-2k, k\rangle \end{equation} with $\tilde{g}_k \equiv g_{0,k}$, where the amplitudes obey the recursion relation \begin{equation} \tilde{g}_0 =\frac{1}{\sqrt{N+1}}, \hspace{0.1cm} \tilde{g}_k=-\sqrt{\frac{N-2k+2}{N-2k+1}}\tilde{g}_{k-1}. \end{equation} Solving the recursion relation, we get \begin{equation} \tilde{g}_k=\frac{ (-1)^k}{\sqrt{N+1}} \prod_{x=0}^{k-1} \sqrt{\frac{N-2x}{N-2x-1}}. \end{equation} The spin-singlet state is a quantum superposition of a chain of Fock states in which the numbers of atoms in the states $m_f =\pm 1$ are equal. To get some insight, we first calculate the expectation values given in Eq.~(\ref{ev}) for this state, which yields \begin{eqnarray} A=\sqrt{B^2+C^2}=\frac{4N(N+3)}{15}, \end{eqnarray} and the expectation value of $Q_+$ is $\langle Q_+\rangle =0$.
Thus, the spin-nematic squeezing parameter of the spin-singlet state is of $0/0$ type. It is undefined at this point for even $N$, although the squeezing is strongest when $c \to 0$ (as shown in Fig.~1). However, the QFI is well defined, and its optimal value is \begin{eqnarray} F^{\rm max}&=&(\Delta Q_+)^2= 4(\Delta\Lambda_{\perp})^2_{\rm max} \nonumber\\ &=&\frac{16N(N+3)}{15}, \end{eqnarray} which exhibits Heisenberg scaling. It indicates that the spin-singlet state features genuine multipartite entanglement of the entire ensemble and will be useful for quantum metrology. This conclusion is consistent with the earlier work reported by T\'oth \cite{Toth}. For $c=0$ and odd $N$, the ground state is $|G\rangle=|S=1,m=0\rangle$, which is given by \begin{eqnarray}\label{s1} |S=1,m=0\rangle =c_0\sum_{k=0}^{n}c_k|k,N-2k,k\rangle. \end{eqnarray} After solving the recursion relation, the amplitudes read \begin{eqnarray} c_{k} = (-1)^{k}\sqrt{\frac{3(2n-2k+1)}{(k+1)}}\prod_{x=0}^{k-1}\sqrt{\frac{(x+2)(2n-2x)}{(x+1)(2n-2x-1)}},\nonumber \\ \end{eqnarray} and the normalization constant $c_0$ is given by \begin{eqnarray} c_0=\left(\sum_{k=0}^{n} c_k^2 \right)^{-1/2} =\frac{1}{\sqrt{4n^{2}+8n+3}} \end{eqnarray} with $n=(N-1)/2$. By substituting the ground state $|S=1,m=0\rangle$ into Eq.~(\ref{ev}), we find \begin{subequations} \begin{align} \left\langle Q_{+}\right\rangle & = -\frac{4N+6}{5},\\ (\Delta Q_+)^2 &=\frac{128(N-1)(N+4)}{175},\\ A & = \frac{12N^{2}+36N+22}{35},\\ \sqrt{B^{2}+C^{2}} & = \frac{12N^{2}+36N-48}{35}. \end{align} \end{subequations} Then the value of the spin-nematic squeezing is \begin{eqnarray} \xi^2_x=\frac{5}{2N+3}. \end{eqnarray} This squeezing is quite similar to the TAT case, scaling as $1/N$ for $N\gg1$~\cite{Kitagawa}. According to Eq.~(\ref{fi}), the maximal QFI of the ground state~$|S=1,m=0\rangle$ is \begin{eqnarray} F^{\rm max}=4(\Delta \Lambda_{\perp})^2_{\rm max}=\frac{48N(N+3)}{35}-\frac{52}{35}, \end{eqnarray} which also exhibits Heisenberg scaling.
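The closed-form results at $c=0$ can be checked directly from the recursions. A minimal numerical sketch for the even-$N$ (spin-singlet) case, assuming only the recursion relation and the $m=0$ specialization of the expectation formula quoted above (this check is an addition, not part of the original analysis):

```python
import numpy as np

def singlet_amplitudes(N):
    """Amplitudes g_k of the spin-singlet state |S=0, m=0> (even N),
    built from the recursion g_0 = 1/sqrt(N+1),
    g_k = -sqrt((N-2k+2)/(N-2k+1)) g_{k-1}."""
    g = [1.0 / np.sqrt(N + 1)]
    for k in range(1, N // 2 + 1):
        g.append(-np.sqrt((N - 2 * k + 2) / (N - 2 * k + 1)) * g[-1])
    return np.array(g)

N = 20
g = singlet_amplitudes(N)
k = np.arange(N // 2 + 1)
# The recursion should produce a normalized state ...
assert abs(np.sum(g**2) - 1.0) < 1e-12
# ... and reproduce A = <S_x^2 + Q_yz^2> = 4N(N+3)/15, using the m = 0
# case of the general expectation formula for A.
A = np.sum(g**2 * ((2 * N - 4 * k - 1) * 2 * k + 2 * N))
assert abs(A - 4 * N * (N + 3) / 15) < 1e-8
print("singlet: normalized, A = 4N(N+3)/15")
```

The corresponding odd-$N$ check proceeds identically from the $c_k$ recursion.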
To summarize these results, Table 1 (2) lists the ground state $|G\rangle$, the spin-nematic squeezing parameter $\xi^2_x$ and the maximal QFI $F^{\rm max}$ around the critical points for even (odd) $N$. \section{ Spin-nematic squeezing and quantum Fisher information dynamics}\label{dynamics} We now turn to study the spin-nematic squeezing and QFI generated by the spin-mixing dynamics of the dipolar spinor condensate with even $N$. \subsection{Numerical results} The squeezing and QFI generated by the spin-mixing dynamics can be studied by numerically evolving an initial state under the total spin-dependent Hamiltonian. Here, we consider two different initial states of the system, namely $|0,N,0\rangle$ and $|N/2,0,N/2\rangle$, and then let them evolve freely. Since Hamiltonian (\ref{ham2}) conserves both the total particle number $N$ and the magnetization $S_z$, the evolved states have the form \begin{equation} |\Psi(t)\rangle=\sum_{k=0}^{N/2}g_{k}(t)|k\rangle, \end{equation} where $|k\rangle\equiv|k,N-2k,k\rangle$ represents the Fock state. \begin{figure} \caption{Time dependence of the spin-nematic squeezing parameter $\xi_{x}^{2}$ for different initial states, $|0,N,0\rangle$ (blue thick line) and $|N/2, 0, N/2\rangle$ (red thin line). Here $N=2000$.} \end{figure} \begin{figure} \caption{Time dependence of the average number of atoms in the $m_f=0$ mode, normalized by the total number of atoms $N$, for different $c$. The insets show the corresponding spin-nematic squeezing parameter. The initial state of the system is $|0,N,0\rangle$ with $N=2000$.} \end{figure} Here the spin-nematic squeezing parameter can be reduced to \begin{align}\label{xxi2} \xi_{x}^{2} =\frac{A'-2|B'|}{|3\langle a^{\dagger}_1a_1\rangle-N|} \end{align} with $\langle a^{\dagger}_1a_1\rangle=\sum_{k=0}^{N/2}|g_k|^2 k$, and \begin{subequations} \begin{align} A' & = \sum_{k=0}^{N/2}|g_{k}|^{2}[(k+1)(N-2k)+(N-2k+1)k],\\ B' & = \sum_{k=1}^{N/2}g_{k}^{*}g_{k-1}k\sqrt{(N-2k+2)(N-2k+1)}.
\end{align} \end{subequations} We can also find the maximal QFI as \begin{eqnarray} F^{\rm max}= {\rm max} \left\{ 4(A'+2|B'|), (\Delta Q_+)^2\right\}, \end{eqnarray} where \begin{eqnarray} (\Delta Q_+)^2=2\sum_{k=0}^{N/2}|g_k|^2 k(k+19) -36\left(\sum_{k=0}^{N/2}|g_k|^2 k\right)^2. \end{eqnarray} The spin-mixing dynamics quickly drives the system into a quasi-steady state~\cite{law,chang,syi}; that is, the average number of atoms in each spin component remains unchanged for a long time. The $c$ dependence of the quasi-steady-state squeezing, as well as the population of the $m_{f}=0$ component, for the two different initial states $|0,N,0\rangle$ and $|N/2, 0, N/2\rangle$ is plotted in Fig.~3. As shown in Fig.~3, the quasi-steady-state behavior displays a sudden change near $c=1$. For the initial state $|0,N,0\rangle$, we obtain steady squeezing ($\approx -5$~dB) in the regime $c>1$, while for the initial state $|N/2, 0, N/2\rangle$ the situation is reversed. The detailed dynamical behaviors of the squeezing for these states around the critical point $c=1$ are shown in Fig.~4. As shown, the spin-nematic squeezing can be improved over the SQL by up to $20$~dB around the critical point before reaching the steady squeezing. We note that the preparation of the highly entangled ideal twin-Fock state $|N/2, 0, N/2\rangle$ may pose an experimental challenge. Below, we focus on the case of the initial state $|0,N,0\rangle$ to understand the steady squeezing behavior shown in Figs.~3 and 4. In terms of the spin-nematic squeezing parameter given in Eq.~(\ref{xxi2}), we find that the squeezing depends on the atomic population. When $c\to 1$, there is essentially no population transfer from the mode $m_f=0$ to the other two modes $m_f=\pm 1$, and hence $\langle N_0\rangle/N \to 1$, which corresponds to a squeezed vacuum for the $m_f=\pm 1$ modes~\cite{Hamley,Gerving,Hoang,Huang}. 
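The reduced expression in Eq.~(\ref{xxi2}) is straightforward to evaluate for any amplitude vector $g_k$. A short Python sketch (the function name is ours), reproducing $\xi^2_x = 1$, i.e., the SQL, for the initial polar state $|0,N,0\rangle$ (for which $g_0=1$, $\langle a^{\dagger}_1a_1\rangle=0$, $A'=N$ and $B'=0$):

```python
import math

def xi2_x(g, N):
    """Spin-nematic squeezing parameter of sum_k g_k |k, N-2k, k>."""
    n1 = sum(abs(gk) ** 2 * k for k, gk in enumerate(g))        # <a1^dag a1>
    A = sum(abs(gk) ** 2 * ((k + 1) * (N - 2 * k) + (N - 2 * k + 1) * k)
            for k, gk in enumerate(g))
    B = sum(g[k].conjugate() * g[k - 1] * k
            * math.sqrt((N - 2 * k + 2) * (N - 2 * k + 1))
            for k in range(1, len(g)))
    return (A - 2 * abs(B)) / abs(3 * n1 - N)

N = 2000
polar = [0.0] * (N // 2 + 1)
polar[0] = 1.0                              # initial state |0, N, 0>
assert abs(xi2_x(polar, N) - 1.0) < 1e-12   # no squeezing: at the SQL
```

The same routine applied to the numerically evolved amplitudes $g_k(t)$ yields the curves of Figs.~2--5.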
Once $c$ deviates from 1, the ratio $N_0/N$ decreases with evolution time until it reaches the quasi-steady state, due to the spin-mixing dynamics. In Fig.~5, we show the dynamical behavior of $N_0/N$ as well as the corresponding squeezing. As shown, when $c<1$ the average number of atoms in the $m_f=0$ mode descends rapidly and the spin-nematic squeezed vacuum persists only for a very short time. For $c>1$, due to the small spin-mixing parameter $1-c$ in Hamiltonian~(\ref{ham2}), the ratio $N_0/N$ falls slowly, in a damped-oscillation-like manner, before reaching the quasi-steady state. Corresponding to the evolution of $N_0/N$, there are damped oscillations of the squeezing, and quasi-steady squeezing is obtained. However, we should point out that the steady squeezing is not a spin-nematic squeezed vacuum, and it is a difficult task to write out its explicit form. Next, we analyze analytically the dynamical behavior of the squeezed vacuum using the Bogoliubov approximation for $c\to1$. \subsection{Bogoliubov approximation} We now use the Bogoliubov approximation to replace the annihilation and creation operators of the condensate mode containing $N$ atoms by a number, that is, $a_{0}\approx a_{0}^{\dagger}\approx\sqrt{N}$, up to a phase factor that we may neglect, since we are later concerned only with expectation values in which this phase cancels out. Therefore, we can introduce the operators \begin{eqnarray}\label{su11} K_{x} &=& \frac{1}{2}(a_{1}^{\dagger}a_{-1}^{\dagger}+a_{1}a_{-1}), \hspace{0.5cm } K_{y} = -\frac{i}{2}(a_{1}^{\dagger}a_{-1}^{\dagger}-a_{1}a_{-1}), \nonumber \\ K_{z} &=& \frac{1}{2}(a_{1}^{\dagger}a_{1}+a_{-1}a_{-1}^{\dagger}), \end{eqnarray} which belong to the SU(1,1) group and satisfy $ [K_{x},K_{y}] = -iK_{z}, [K_{y},K_{z}] = iK_{x}$ and $[K_{z},K_{x}] =iK_{y}$. 
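The SU(1,1) commutation relations can be checked numerically on a truncated two-mode Fock space; a small numpy sketch (our own sanity check, not part of the derivation), where we test on the two-mode vacuum so that truncation-edge effects play no role:

```python
import numpy as np

d = 8                                          # Fock-space truncation per mode
a = np.diag(np.sqrt(np.arange(1, d)), k=1)     # single-mode annihilation operator
I = np.eye(d)
a1, am1 = np.kron(a, I), np.kron(I, a)         # modes m_f = +1 and m_f = -1

Kx = 0.5 * (a1.T @ am1.T + a1 @ am1)
Ky = -0.5j * (a1.T @ am1.T - a1 @ am1)
Kz = 0.5 * (a1.T @ a1 + am1 @ am1.T)

# [Kx, Ky] = -i Kz holds exactly in the algebra; on the truncated space we
# verify it on a low-lying state, far from the truncation edge.
vac = np.zeros(d * d)
vac[0] = 1.0                                   # two-mode vacuum |0, 0>
assert np.allclose((Kx @ Ky - Ky @ Kx) @ vac, -1j * Kz @ vac)
```

Note the sign flip relative to SU(2): $[K_x,K_y]=-iK_z$, which is what makes the group noncompact.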
Using the definitions in Eq.~({\ref{su11}}), the effective Hamiltonian of Eq.~(\ref{ham2}) is given by \begin{eqnarray} H_{{\rm eff}} & \equiv & \alpha K_{z}+\beta K_{x}, \end{eqnarray} with the $c$-dependent parameters \begin{eqnarray} \alpha=2[(1-c)(2N-1)-3c], \hspace{0.3cm} \beta = 4(1-c)N. \end{eqnarray} In terms of the SU(1,1) operators, Eq.~(\ref{ev}) may be expressed as \begin{eqnarray} A=4N \langle K_z\rangle, \hspace{0.5cm} \sqrt{B^2+C^2} = 4N |\langle K_+\rangle|, \end{eqnarray} where $K_{+} = K_{-} ^{\dagger}=K_{x}+iK_{y}=a^\dagger_{1}a^{\dagger}_{-1}$. Therefore, the spin-nematic squeezing parameter and QFI can be reduced to \begin{eqnarray} \xi_{x}^{2} &=& 2\langle K_{z}\rangle-2|\langle K_{+}\rangle|, \\ F^{\rm max}&=& 8N(\langle K_z\rangle +|\langle K_+\rangle|), \end{eqnarray} since $\Delta Q_+ \to 0$. To get the explicit form of both the squeezing and the QFI, we only need to calculate the expectation values $\langle K_{z}\rangle$ and $\langle K_{+}\rangle$. With the help of the time evolution operator $U(t)=\exp[-iH_{{\rm eff}}t] $, we have \begin{eqnarray} \langle K_{z}\rangle & = & \langle 0,N, 0| U^{\dagger}(t) K_{z} U(t)|0,N,0\rangle \nonumber\\ & = & \frac{\varGamma_{1}(1+\varGamma^{2})}{2(1-\varGamma^{2})^{2}},\\ |\langle K_{+}\rangle| & = & |\langle 0,N, 0| U^{\dagger}(t) K_{+} U(t)|0,N, 0\rangle|\nonumber\\ & = & \frac{\varGamma_{1}\varGamma}{(1-\varGamma^2)^2}, \end{eqnarray} where \begin{eqnarray} \varGamma &=& \frac{|\beta\sin(\theta t)|}{\sqrt{\alpha^{2}-\beta^{2}\cos^{2}({\theta}t)}}, \hspace{0.2cm} \varGamma_{1} = \frac{\alpha^{2}-\beta^{2}}{\alpha^{2}-\beta^{2}\cos^{2}({\theta}t)}, \nonumber\\ {\theta} &=& \frac{1}{2}\sqrt{\alpha^{2}-\beta^{2}}. 
\end{eqnarray} If $c>1$, we have $\alpha^{2}>\beta^{2}$, and a direct calculation yields \begin{eqnarray} \xi_{x}^{2} = \frac{\varGamma_{1}}{(1+\varGamma)^{2}}, \hspace{0.5cm} F^{\rm max}=\frac{4N\varGamma_{1}}{(1-\varGamma)^{2}}, \end{eqnarray} and the optimal values are given by \begin{eqnarray} (\xi_{x}^{2})_{\rm min} &=& \frac{|\alpha|-|\beta|}{|\alpha|+|\beta|}=\frac{2c+1}{4N(c-1)+2c+1},\\ F^{\rm max}(t_{\rm opt})&=& \frac{4N(|\alpha|+|\beta|)}{|\alpha|-|\beta|}=\frac{4N}{(\xi^2_x)_{\rm min}}, \end{eqnarray} attained at $t_{\rm opt} = {\pi}/{\sqrt{\alpha^{2}-\beta^{2}}}$. The above results are valid when $c\to 1_+$, which corresponds to the spin-nematic squeezed vacuum. Figure 6 shows the comparison of the squeezing and QFI between the exact solutions and the Bogoliubov approximation for different $c$. As shown, with increasing $c$ the squeezing and QFI are enhanced, and the Bogoliubov approximate solutions agree with the exact ones when $c \to 1_{+}$. However, at long times the squeezing is no longer a squeezed vacuum, as shown in Figs.~4 and 5, and hence the Bogoliubov approximation becomes invalid. \begin{figure} \caption{Comparison of dynamical behaviors of $\xi^2_x$ (a-b) and $F^{\rm max}/(4N)$ (c-d) for the exact numerical solution and the Bogoliubov approximate solution with different $c$. Here $N=1000$.} \end{figure} \section{Conclusion} \label{conclusion} In summary, we have studied the spin-nematic squeezing and QFI in the ground states and in the spin-mixing dynamics of an antiferromagnetic spin-1 Bose-Einstein condensate, respectively. We have shown that the quantum phases, which depend on the relative strengths of the spin-exchange and dipolar interactions, can generate highly entangled ground states in several limits, and enable precision metrology to reach the HL. We have also studied the quantum-critical enhancement of spin-nematic squeezing and entanglement in the dynamical case. 
We showed that the spin-nematic squeezing can be enhanced to $\approx -20$~dB before reaching the steady value of $\approx -5$~dB. We also demonstrated that the Bogoliubov approximation describes well the dynamics of the spin-nematic squeezed vacuum state. Finally, it should be pointed out that our study here has neglected any external magnetic field. The presence of external fields will affect the orientation of the spin and hence change the phase diagram. The effect of the magnetic field on spin-1 condensates without MDDI is currently under study~\cite{Hoang, you,duan}. The quantum phase transition induced by the quadratic Zeeman shift may also exhibit squeezing behavior similar to our case. Therefore, the quantum-critical enhancement of spin-nematic squeezing and entanglement should also be expected for a spinor BEC in an external magnetic field. \begin{acknowledgments} Q.S.T. acknowledges support from the NSFC under Grant No. 11805047 and No. 11665010, and the Hainan Science and Technology Plan project (Grant No. ZDKJ2019005). Y. H. acknowledges support from the NSFC under Grant No. 11605157. Q.T.X. acknowledges support from the NSFC under Grant No. 11965011. X. W. is supported by the National Natural Science Foundation of China (Grant Nos. 11875231 and 11935012), the National Key Research and Development Program of China (Grant Nos. 2017YFA0304202 and 2017YFA0205700), and the Fundamental Research Funds for the Central Universities (Grant No. 2018FZA3005). \end{acknowledgments} \end{document}
\begin{document} \title{{\large Tur\'{a}n numbers of complete $3$-uniform Berge-hypergraphs}} \author{\small L. Maherani$^{\textrm{a}}$, M. Shahsiah$^{\textrm{b,c}}$ \\ \footnotesize $^{\textrm{a}}$ Department of Mathematical Sciences, Isfahan University of Technology,\\ \footnotesize Isfahan, 84156-83111, Iran\\ {\small $^{\textrm{b}}$Department of Mathematics, Alzahra University,}\\ {\small P.O. Box 1993891176, Tehran, Iran}\\ {\small $^{\textrm{c}}$School of Mathematics, Institute for Research in Fundamental Sciences (IPM),}\\ {\small P.O. Box 19395-5746, Tehran, Iran }\\ \footnotesize {l.maherani@math.iut.ac.ir, shahsiah@ipm.ir}} \date {} \footnotesize\maketitle \begin{abstract}\rm{} \footnotesize Given a family $\mathcal{F}$ of $r$-graphs, the Tur\'{a}n number of $\mathcal{F}$ for a given positive integer $N$, denoted by $ex(N,\mathcal{F})$, is the maximum number of edges of an $r$-graph on $N$ vertices that does not contain any member of $\mathcal{F}$ as a subgraph. For given $r\geq 3$, a complete $r$-uniform Berge-hypergraph, denoted by { ${K}_n^{(r)}$}, is an $r$-uniform hypergraph of order $n$ with the core sequence $v_{1}, v_{2}, \ldots ,v_{n}$ as the vertices and distinct edges $e_{ij},$ $1\leq i<j\leq n,$ where every $e_{ij}$ contains both $v_{i}$ and $v_{j}$. Let $\mathcal{F}^{(r)}_n$ be the family of complete $r$-uniform Berge-hypergraphs of order $n.$ We determine precisely $ex(N,\mathcal{F}^{(3)}_{n})$ for $n \geq 13$. We also find the extremal hypergraphs avoiding $\mathcal{F}^{(3)}_{n}$. \\{ {Keywords}:{ \footnotesize Tur\'{a}n number, Extremal hypergraph, Berge-hypergraph. }} \noindent \\{\footnotesize {AMS Subject Classification}: 05C65, 05C35, 05D05.} \end{abstract} \small \section{\normalsize{Introduction}} A {\it hypergraph} $\mathcal{H}$ is a pair $\mathcal{H}=(V,E)$, where $V$ is a finite non-empty set (the set of vertices) and $E$ is a collection of distinct non-empty subsets of $V$ (the set of edges). 
We denote by $e(\mathcal{H})$ the number of edges of $\mathcal{H}.$ An {\it $r$-uniform hypergraph} or {\it $r$-graph} is a hypergraph such that all its edges have size $r$. A {\it complete $r$-uniform hypergraph} of order $N$, denoted by { $\mathcal{K}_N^r$}, is a hypergraph consisting of all the $r$-subsets of a set $V$ of cardinality $N$. For a family $\mathcal{F}$ of $r$-graphs, we say that the hypergraph $\mathcal{H}$ is $\mathcal{F}$-free if $\mathcal{H}$ does not contain any member of $\mathcal{F}$ as a subgraph. Given a family $\mathcal{F}$ of $r$-graphs, the {\it Tur\'{a}n number} of $\mathcal{F}$ for a given positive integer $N$, denoted by $ex(N,\mathcal{F})$, is the maximum number of edges of an $\mathcal{F}$-free $r$-graph on $N$ vertices. An $\mathcal{F}$-free $r$-graph $\mathcal{H}$ on $N$ vertices is an {\it extremal hypergraph} for $\mathcal{F}$ if $e(\mathcal{H})= ex(N,\mathcal{F})$. These are natural generalizations of the classical Tur\'{a}n number for $2$-graphs \cite{turan}. For given $n, r \geq 2$, let $\mathcal{H}^{(r)}_n$ be the family of $r$-graphs $F$ that have at most ${n \choose 2}$ edges, and have some set $T$ of size $n$ such that every pair of vertices in $T$ is contained in some edge of $F$. Let the $r$-graph $H^{(r)}_n \in \mathcal{H}^{(r)}_n$ be obtained from the complete $2$-graph $\mathcal{K}_{n}^{2}$ by enlarging each edge with a new set of $r-2$ vertices. Thus $H^{(r)}_n$ has $(r-2){n \choose 2}+n$ vertices and ${n \choose 2}$ edges. For given $n \geq 5$ and $r\geq 3$, a {\it complete $r$-uniform Berge-hypergraph} of order $n$, denoted by { ${K}_n^{(r)}$}, is an $r$-uniform hypergraph with the core sequence $v_{1}, v_{2}, \ldots ,v_{n}$ as the vertices and ${n \choose 2}$ distinct edges $e_{ij},$ $1\leq i<j\leq n,$ where every $e_{ij}$ contains both $v_{i}$ and $v_{j}$. 
Note that a complete $r$-uniform Berge-hypergraph is not determined uniquely as there are no constraints on how the $e_{ij}$'s intersect outside $\{v_{1}, v_{2}, \ldots ,v_{n}\}$. \\ Extremal graph theory is that area of combinatorics which is concerned with finding the largest, smallest, or otherwise optimal structures with a given property. There is a long history in the study of extremal problems concerning hypergraphs. The first such result is due to Erd\H{o}s, Ko and Rado \cite{Erdos-ko-rado}.\\ In contrast to the graph case, there are comparatively few known results on the hypergraph Tur\'{a}n problems. In the paper in which Tur\'{a}n proved his classical theorem on the extremal numbers for complete graphs \cite{turan}, he posed the natural question of determining the Tur\'{a}n number of the complete $r$-uniform hypergraphs. Surprisingly, this problem remains open in all cases for $r > 2$, even up to asymptotics. Despite the lack of progress on the Tur\'{a}n problem for dense hypergraphs, there are considerable results on certain sparse hypergraphs. Recently, some interesting results were obtained on the exact value of extremal number of paths and cycles in hypergraphs. F{\"u}redi et al. \cite{loose pathI} determined the extremal number of $r$-uniform loose paths of length $n$ for $r\geq 4$ and large $N$. They also conjectured a similar result for $r = 3$. F{\"u}redi and Jiang \cite{loose cycleI} determined the extremal function of loose cycles of length $n$ for $r\geq 5$ and large $N.$ Recently, Kostochka et al. \cite{loose paths and cycles} extended these results to $r=3$ for loose paths and $r=3,4$ for loose cycles. Gy{\H{o}}ri et al. \cite{Berge paths} found the extremal numbers of $r$-uniform hypergraphs avoiding Berge paths of length $n$. Their results substantially extend earlier results of Erd\H{o}s and Gallai \cite{Erdos-Gallai} on extremal number of paths in graphs. 
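The enlargement construction of $H^{(r)}_n$ introduced above is easy to realize explicitly (and, since its edges are distinct and $e_{ij}\supseteq\{v_i,v_j\}$, it is one concrete complete $r$-uniform Berge-hypergraph). A small Python sketch (names ours) building it and checking the vertex and edge counts stated above:

```python
from itertools import combinations

def enlarged_H(n, r=3):
    """H_n^(r): enlarge each edge of the complete graph K_n by r-2 new vertices."""
    edges, fresh = [], n                     # core vertices are 0 .. n-1
    for pair in combinations(range(n), 2):
        extra = tuple(range(fresh, fresh + r - 2))
        fresh += r - 2
        edges.append(frozenset(pair + extra))
    return fresh, edges                      # (number of vertices, edge list)

n = 6
nv, edges = enlarged_H(n)
assert nv == (3 - 2) * n * (n - 1) // 2 + n   # (r-2) C(n,2) + n vertices
assert len(edges) == n * (n - 1) // 2         # C(n,2) distinct edges
assert len(set(edges)) == len(edges)
```

Because the $e_{ij}$'s may intersect arbitrarily outside the core, this is only one member of the (large) family of Berge-hypergraphs with the same core.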
Let $\mathcal{C}_{n}^{(r)}$ denote the family of $r$-graphs that are Berge cycles of length $n$. Gy{\H{o}}ri and Lemons \cite{Berge cyclesI, Berge cycles2} showed that for all $r \geq 3$ and $ n \geq 3$, there exists a positive constant $c_{r,n},$ depending on $r$ and $n,$ such that $$ex(N,\mathcal{C}_{n}^{(r)}) \leq c_{r,n} N^{1+\frac{1}{\lfloor \frac{n}{2} \rfloor}}.$$ Let $N$, $n$, $r$ be integers, where $N\geq n >r$ and $r \geq 2$. Also let $ T_r(N,n-1)$ be the complete $r$-uniform $(n-1)$-partite hypergraph with $N$ vertices and $n-1$ parts $V_1,V_2,...,V_{n-1}$ whose partition sets differ in size by at most 1. Suppose that $t_r(N,n-1)$ denotes the number of edges of $T_r(N,n-1)$. If $N=\ell(n-1)+j$, where $\ell \geq 1$ and $1 \leq j\leq n-1$, then it is straightforward to see that $$t_r(N,n-1)= \sum _{i=0}^{r} \ell ^{r-i} {j \choose i}{n-1-i \choose r-i}.$$ In 2006, Mubayi \cite{mubay} showed that the unique largest $\mathcal{H}_{n}^{(r)}$-free $r$-graph on $N$ vertices is $T_r(N,n-1)$. Settling a conjecture of Mubayi in \cite{mubay}, Pikhurko \cite{pikh} proved that there exists $N_0$ so that the Tur\'{a}n numbers of ${H}_{n}^{(r)}$ and $\mathcal{H}_{n}^{(r)}$ coincide for all $N>N_0.$ Let $\mathcal{F}^{(r)}_n$ be the family of complete $r$-uniform Berge-hypergraphs of order $n.$ Because ${H}_{n}^{(3)} \in \mathcal{F}_{n}^{(3)}$, Pikhurko's result \cite{pikh} implies that $ex (N,\mathcal{F}_{n}^{(3)}) \leq t_3(N,n-1)$ for sufficiently large $N$. In this paper, for $n \geq 13$, we show that $ex (N,\mathcal{F}_{n}^{(3)})=t_3(N,n-1)$ and $ T_3(N,n-1)$ is the unique extremal hypergraph for $\mathcal{F}_{n}^{(3)}$. More precisely, we prove the following theorem.\\ \begin{theorem}\label{main} Let $N,n$ be integers so that $N\geq n\geq 13$. Then $$ex(N,\mathcal{F}_n^{(3)})=t_3(N,n-1).$$ Furthermore, the unique extremal hypergraph for $\mathcal{F}_n^{(3)}$ is $T_3(N,n-1)$. \end{theorem} First we show that $ex (N,\mathcal{F}_{n}^{(r)}) \geq t_r(N,n-1)$. 
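The closed form for $t_r(N,n-1)$ above can be checked against a brute-force count of the edges of $T_r(N,n-1)$; a Python sketch (function names ours) for $r=3$:

```python
from itertools import combinations
from math import comb

def t_partite(N, parts, r=3):
    """Edge count of T_r(N, parts) by brute force: r-sets meeting no part twice."""
    sizes = [N // parts + (1 if i < N % parts else 0) for i in range(parts)]
    part_of = [i for i, s in enumerate(sizes) for _ in range(s)]
    return sum(1 for e in combinations(range(N), r)
               if len({part_of[v] for v in e}) == r)

def t_formula(N, n, r=3):
    """Closed form for t_r(N, n-1), writing N = l(n-1) + j with 1 <= j <= n-1."""
    l, j = (N - 1) // (n - 1), (N - 1) % (n - 1) + 1
    return sum(l ** (r - i) * comb(j, i) * comb(n - 1 - i, r - i)
               for i in range(r + 1))

for N in range(14, 20):            # e.g. n = 14, i.e. n - 1 = 13 parts
    assert t_partite(N, 13) == t_formula(N, 14)
```

In the formula, the index $i$ counts how many vertices of an edge fall into the $j$ larger parts (those of size $\ell+1$).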
To see this, consider an arbitrary sequence $v_1,v_2,...,v_n$ of the vertices of $ T_r(N,n-1)$. By the pigeonhole principle, there exists some part $V_h$, $1 \leq h \leq n-1$, in $T_r(N,n-1)$ containing at least two vertices of this sequence. Since every edge of $ T_r(N,n-1)$ includes at most one vertex of each part $V_i$, $1 \leq i \leq n-1$, this sequence cannot be the core sequence of a $K_{n}^{(r)}$. Hence $T_r(N,n-1)$ is $\mathcal{F}_{n}^{(r)}$-free and \begin{equation}\label{lbound} ex (N,\mathcal{F}_{n}^{(r)}) \geq t_r(N,n-1), \ \ \ \ \ \ r\geq 3. \end{equation} Therefore, in order to prove Theorem \ref{main}, it suffices to show that $ex (N,\mathcal{F}_{n}^{(3)}) \leq t_3(N,n-1)$ and $T_3(N,n-1)$ is the only $\mathcal{F}_{n}^{(3)}$-free hypergraph with $N$ vertices and $t_3(N,n-1)$ edges. Here, we give a proof by induction on the number of vertices. More precisely, we prove Theorem \ref{main} in three steps. First, we show that Theorem \ref{main} holds for $N=n$ (see Theorem \ref{n}). Then, in Theorem \ref{l=1}, we demonstrate that it is true for $n\leq N \leq 2n-2.$ Finally, using Theorem \ref{n} and Theorem \ref{l=1}, we show that the desired result holds for all $N\geq n$ (Section 3).\\ \noindent \textbf{Conventions and Notations:} For an $r$-uniform hypergraph $\mathcal{H}=(V,E)$, the complement hypergraph of $\mathcal{H}$, denoted by $\mathcal{H}^c$, is the hypergraph on $V$ so that $E(\mathcal{H}^c)= {V \choose r}\setminus E$. Also we say that $X\subseteq V$ is an independent set of $\mathcal{H}$ if for any pair $ v,v' \in X$, there are no edges in $E$ containing both $v$ and $v'$. For $U\subseteq V$ we denote by $\mathcal{H}[U]$ the subhypergraph of $\mathcal{H}$ induced by $U$, i.e., the hypergraph with vertex set $U$ and edge set $\{e \in E :\ e \subseteq U\}$. 
For $U, W \subseteq V$, the hypergraph $\mathcal{H}[U,W]$ is the subgraph of $\mathcal{H}$ induced by the edges of $\mathcal{H}$ intersecting both $U$ and $W.$ For a vertex $v \in V$, the degree of $v$ in $\mathcal{H}$, denoted by $d_\mathcal{H}(v)$, is the number of edges in $\mathcal{H}$ containing $v$. Also $\mathcal{H}-v$ is the subhypergraph of $\mathcal{H}$ obtained by deleting $v$ and all the edges containing it.\\ \section{Preliminaries } In this section, we present some results that will be used in the next section. Let $\mathcal{A}=\{A_{1},A_{2},...,A_{n}\}$ be a family of subsets of a set $X$. A system of distinct representatives, or SDR, for the family $\mathcal{A}$, is a set $\{a_1,a_2,...,a_n\}$ of elements of $X$ satisfying the following two conditions: \begin{itemize} \item[$\bullet$] $a_i \in A_i$ \ \ \ \ \ \ $i=1,...,n$, \item[$\bullet$] $a_i \neq a_j$ \ \ \ \ \ \ $i \neq j$. \end{itemize} \begin{lemma}\label{sdr} Let $U=\{u_1,u_2,...,u_m\}$, $m\geq 5$ and $x \notin U$. Also, let $\mathcal{A}=\{A_{1},A_{2},...,A_{m}\}$ be a family of sets so that $\vert A_{1}\vert \leq \vert A_{2}\vert \leq ... \leq \vert A_{m}\vert $ and $A_{i} \subseteq \{ B :\ B=\{x, u_i ,u_k \},\ k\neq i \}$ for $ 1\leq i \leq m.$ If $\mathcal{A}$ has no SDR, then $$\vert \bigcup _{i=1}^{m} A_{i} \vert \leq {m-1 \choose 2}$$ and equality holds if and only if $A_{1}= \emptyset$ and $$A_{i} = \{ B :\ B=\{x, u_i ,u_k \},\ k\neq 1,i \} \ \ \ \ \ \ 2\leq i \leq m.$$ \end{lemma} \noindent\textbf{Proof. } Since $\mathcal{A}$ contains no SDR, using Hall's theorem \cite{hall}, for some $q$, $1\leq q\leq m$, we have $\vert \bigcup _{i=1}^{q} A_{i} \vert \leq q-1$. So $\vert \bigcup _{i=1}^{m} A_{i} \vert \leq f(q),$ where $f(k)= k-1 + {m-k \choose 2}$, for $1 \leq k \leq m$. On the other hand, one can easily see that $f(1)>f(k)$, for $2\leq k \leq m$. 
Therefore $$\vert \bigcup _{i=1}^{m} A_{i} \vert \leq f(1)={m-1 \choose 2}$$ and the equality holds if and only if $A_{1}= \emptyset$ and $$A_{i} = \{ B :\ B=\{x, u_i ,u_k \},\ k\neq 1,i \} \ \ \ \ \ \ 2\leq i \leq m.$$ $ \blacksquare$\\ In order to state our main results we need some definitions. Let $\mathcal{H}=(V,E)$ be an $r$-uniform hypergraph, where $V=\{v_1,v_2,...,v_n\}$ and $E=\{e_1,e_2,...,e_m\}$. We denote by $B(\mathcal{H}),$ the bipartite graph with parts $X$ and $Y$ so that $X=\{v_iv_k :\ i< k\ \ {\rm and} \ \ v_i,v_k \in V(\mathcal{H})\}$, $Y=E(\mathcal{H})$ and $v_iv_k$ is adjacent to $e_h$ if and only if $\{v_i,v_k \}\subseteq e_h$, for every $v_iv_k\in X$ and $e_h \in Y$. For every $v_iv_k\in X$, $d_{B(\mathcal{H})}(v_iv_k)$ is the number of edges in $B(\mathcal{H})$ containing $v_i v_k$. A matching of $X$ in $B(\mathcal{H})$ is a matching that saturates all vertices of $X$. Note that every matching of $X$ in $B(\mathcal{H})$ is equivalent to a complete $r$-uniform Berge-hypergraph with core sequence $v_1,v_2,...,v_n$. \\ \noindent Now, we demonstrate that Theorem \ref{main} holds for $N=n$. \begin{theorem}\label{n} Let $n\geq 13$ be an integer. The hypergraph $T_3(n,n-1)$ is the only $\mathcal{F}_{n}^{(3)}$-free hypergraph with $n$ vertices and $ex(n,\mathcal{F}_{n}^{(3)})$ edges. \end{theorem} \noindent\textbf{Proof. }Assume that $\mathcal{H}$ is an $\mathcal{F}_{n}^{(3)}$-free hypergraph with $n$ vertices and $ex(n,\mathcal{F}_{n}^{(3)})$ edges. Let $V(\mathcal{H})=\{v_1,v_2,...,v_n\}$. First, suppose that there is a vertex $v\in V(\mathcal{H})$, say $v_n$, so that $d_{\mathcal{H}}(v_n) \leq {n-2 \choose 2}$. Therefore \begin{equation}\label{up1} e(\mathcal{H}) =d_{\mathcal{H}}(v_n) +e(\mathcal{H}-v_n) \leq {n-2 \choose 2} +{n-1 \choose 3}=t_3(n,n-1). 
\end{equation} \noindent So by (\ref {lbound}) and (\ref{up1}), we have $$ex(n,\mathcal{F}_{n}^{(3)}) = t_3(n,n-1).$$ Therefore $d_{\mathcal{H}}(v_n) ={n-2 \choose 2}$ and $e( \mathcal{H}-v_n )={n-1 \choose 3}$. So $\mathcal{H}-v_n \cong \mathcal{K}_{n-1}^3$ and clearly there is a copy of $K_{n-1}^{(3)}$ with the core sequence $v_1,v_2,...,v_{n-1}$ in $\mathcal{H}-v_n$. Set $x=v_n$, $U=\{v_1,v_2,...,v_{n-1}\}$ and $\mathcal{A}=\{A_{1},A_{2},...,A_{n-1}\}$, where $$A_{i} = \{ B :\ B\in E(\mathcal{H}),\ \{x,v_{i}\} \subseteq B\} \ \ \ \ \ \ 1\leq i\leq n-1.$$ Note that $d_{\mathcal{H}}(v_n)=\vert \bigcup _{i=1}^{n-1} A_{i} \vert= {n-2 \choose 2}.$ Since $ \mathcal{H}$ is $\mathcal{F}_{n}^{(3)}$-free and there is a copy of $K_{n-1}^{(3)}$ in $ \mathcal{H}-v_n$, $\mathcal{A}$ has no SDR. Now, using Lemma \ref{sdr}, we have $\mathcal{H}\cong T_3(n,n-1)$.\\* Now suppose that for every vertex $v\in V(\mathcal{H})$, $d_{\mathcal{H}}(v) \geq {n-2 \choose 2}+1$. Set $G=B(\mathcal{H})$. So we may assume that $G=[X,Y]$, where $$X=\{u_{ik}=v_iv_k :\ i< k\ \ {\rm and} \ \ v_i,v_k \in V(\mathcal{H})\}$$ and $Y=E(\mathcal{H})$. Since, by (\ref{lbound}), $|Y|\geq {n-1 \choose 3} +{n-2 \choose 2},$ we have $\vert X\vert \leq \vert Y\vert$. Let $X=X_1 \cup X_2$, where $X_{1}=\{u \in X :\ d_{G}(u)\leq 4\}$ and $X_2 =X\setminus X_1$. Recall that every matching of $X$ in $G$ is equivalent to a $K_{n}^{(3)}$ in $\mathcal{H}$. We consider the following two cases.\\ \noindent{\bf Case 1.} $X_1 =\emptyset$.\\ Since for every $y\in Y$ and $u\in X$, we have $d_{G}(y)=3$ and $d_{G}(u)\geq 5$, Hall's theorem \cite{hall} guarantees the existence of a matching of $X$, a contradiction.\\\\ \noindent{\bf Case 2.} $X_1 \neq \emptyset$.\\ Let $X_1 = \{v_{i_1}v_{i'_1},v_{i_2}v_{i'_2},...,v_{i_t}v_{i'_t}\}$. We show that the following claim holds. \begin{emp}\label{pairdisj1} The elements of $X_1$ are pairwise disjoint. \end{emp} \noindent\textbf{Proof of Claim \ref{pairdisj1}}. 
Suppose to the contrary that for $2 \leq s \leq t$, $ \{wv_{i'_1},wv_{i'_2},...,wv_{i'_s}\} \subseteq X_1$. So $ d_{\mathcal{H}}(w) \leq f(s),$ where $f(k)= 4k + {n-k-1 \choose 2}$ is a function of $k$, $2 \leq k \leq t \leq n-1$. Using $n\geq 13$, it is straightforward to see that the absolute maximum of $f(k)$ occurs at $k=2$. Hence $$ d_{\mathcal{H}}(w) \leq f(2) = 8 + {n-3 \choose 2}.$$ Since $ 8 + {n-3 \choose 2} < {n-2 \choose 2}+1$ for $n\geq 13$, we have $ d_{\mathcal{H}}(w) < {n-2 \choose 2}+1$. That is a contradiction to our assumption. $ \square$\\ \noindent Since for every vertex $v \in V(\mathcal{H})$ we have $ d_{\mathcal{H}}(v) \geq {n-2 \choose 2}+1$, for any two vertices $x,y \in V(\mathcal{H})$ there is at least one edge in $E(\mathcal{H})$ containing both $x$ and $y$. So $d_{G}(v_{i_l}v_{i'_l}) \geq 1$ for every $1 \leq l \leq t$. On the other hand, by Claim \ref{pairdisj1}, the elements of $X_1$ are pairwise disjoint. Therefore $G$ contains a matching $M_1$ of $X_1$. Suppose that $G'=[X_2,Y']$ is the subgraph of $G$ so that $Y' \subset Y$ is obtained by deleting the vertices of $M_1$. Note that for every $u \in X_2$ and $y \in Y'$, we have $d_{G'}(u) \geq 3$ and $d_{G'}(y) \leq 3$. Therefore Hall's theorem \cite{hall} implies the existence of a matching $M_2$ of $X_2$ in $G'$. This is a contradiction, since $M_1 \cup M_2$ is a matching of $X$ in $G$. This contradiction completes the proof. $ \blacksquare$ \begin{theorem}\label{l=1} Let $N,n$ be integers so that $13 \leq n\leq N \leq 2n-2$. Also, let $\mathcal{H}$ be an $\mathcal{F}_{n}^{(3)}$-free hypergraph with $N$ vertices and $ex(N,\mathcal{F}_{n}^{(3)})$ edges. Then $ e(\mathcal{H}) = t_3(N,n-1)$ and $\mathcal{H}\cong T_3(N,n-1)$. \end{theorem} \noindent\textbf{Proof. } Let $N=n-1+j$, where $1\leq j \leq n-1$. We apply induction on $j$. Using Theorem \ref{n}, the base step $j=1$ is true. For the induction step, let $j >1$. 
Set $$d={n-2 \choose 2}+(j-1)(n-3)+{j-1 \choose 2}.$$ First suppose that there is a vertex $x \in V(\mathcal{H})$ so that $ d_{\mathcal{H}}(x) \leq d$. So using the induction hypothesis, we have $$e( \mathcal{H}) = d_{\mathcal{H}}(x) + e( \mathcal{H}-x) \leq d+t_3(N-1,n-1) =t_3(N,n-1).$$ Therefore by (\ref{lbound}), we conclude that $ex(N,\mathcal{F}_{n}^{(3)}) =t_3(N,n-1)$. Hence $d_{\mathcal{H}}(x) =d$ and $e( \mathcal{H}-x) = t_3(N-1,n-1)$. So, using the induction hypothesis, $\mathcal{H}-x \cong T_3(N-1,n-1)$. Hence we may assume that $\mathcal{H}-x$ is a complete $3$-uniform $(n-1)$-partite hypergraph with parts $V_1,V_2,...,V_{n-1}$, where $$ V_i = \left\lbrace \begin{array}{ll} \{v_i,x_i\} & \ \ \ \ 1\leq i \leq j-1, \\ \{v_i\} & \ \ \ \ j\leq i \leq n-1. \end{array} \right. $$ Let $\mathcal{H}'$ be the induced subgraph of $\mathcal{H}-x$ on $\{v_1,v_2,...,v_{n-1}\}.$ According to the construction of $\mathcal{H}-x$, we have $\mathcal{H}'\cong \mathcal{K}_{n-1}^3$ and so there is a copy of $K_{n-1}^{(3)}$ with core sequence $v_1,v_2,...,v_{n-1}$ in $ \mathcal{H}'$. Set $U=\{v_1,v_2,...,v_{n-1}\}$ and $\mathcal{A}=\{A_{1},A_{2},...,A_{n-1}\}$, where for $1\leq i\leq n-1$, $$A_{i} = \{ e \in E(\mathcal{H}) : \ e=\{x,v_{i},v_k\}, \ \ k\neq i\}.$$ \noindent For a vertex $v \in V(\mathcal{H})$, we denote by $E_v$ the set of edges of $\mathcal{H}$ containing $v$. Clearly we have \begin{equation}\label{dx} d_{\mathcal{H}}(x)= |E_x| = |E_1|+ |E_2|+ |E_3|, \end{equation} where $$E_i= \{e \in E_x:\ \ \vert e \cap \{x_1,x_2,...,x_{j-1}\}\vert =i-1\}, \ \ \ \ \ \ 1\leq i \leq 3.$$ \noindent We have the following claim. \begin{emp}\label{Ex} \noindent \begin{itemize} \item[{\rm (i)}] $\vert E_1\vert \leq {n-2 \choose 2}.$ \item[{\rm (ii)}] $ \vert E_2\vert \leq (j-1)(n-3).$ \item[{\rm(iii)}] $\vert E_3\vert \leq {j-1 \choose 2}.$ \end{itemize} \end{emp} \noindent\textbf{Proof of Claim \ref{Ex}}. (i) Clearly $\vert E_1\vert =\vert \bigcup _{i=1}^{n-1} A_{i} \vert$. 
If $\mathcal{A}$ contains an SDR, then $x,v_1,v_2,...,v_{n-1}$ is the core sequence of a copy of $K_n^{(3)}$ in $\mathcal{H}$, a contradiction. So, using Lemma \ref{sdr}, $$\vert E_1\vert =\vert \bigcup _{i=1}^{n-1} A_{i} \vert \leq {n-2 \choose 2}.$$ \noindent (ii) For $1 \leq k \leq j-1$, set $$B_k=\{e \in E_2:\ \{x,x_k\}\subseteq e\}.$$ We demonstrate that for $1 \leq k \leq j-1$, $\vert B_k \vert \leq n-3$ and so $$\vert E_2\vert =\vert \bigcup _{k=1}^{j-1} B_k \vert \leq (j-1)(n-3).$$ By symmetry, it suffices to show that $\vert B_1 \vert \leq n-3$. Suppose not. So $\vert B_1\vert \geq n-2.$ On the other hand, the construction of $\mathcal{H}-x$ and the fact that $\mathcal{F}_{n}^{(3)}\nsubseteq \mathcal{H}$ imply that every edge in $E_x$ contains at most one vertex of each $V_i,$ for $1\leq i \leq n-1.$ Hence $|B_1|=n-2$ and $$B_1=\{\{x,x_1,v_2\}, \{x,x_1,v_3\},...,\{x,x_1,v_{n-1}\}\}.$$ In this case, there is no edge in $E(\mathcal{H})\setminus B_1$ containing both $x$ and $v_i,$ for $2 \leq i \leq n-1$. To see this, suppose that $f=\{x,v_2,u\}\in E(\mathcal{H})\setminus B_1$. Let $\mathcal{H}''$ be the induced subgraph of $\mathcal{H}-x$ on $\{x_1,v_2,...,v_{n-1}\}.$ By the construction of $\mathcal{H}-x$, we have $\mathcal{H}''\cong \mathcal{K}_{n-1}^3$ and so $\mathcal{H}''$ contains a $K_{n-1}^{(3)}$, say $\mathcal{K}'$. Hence $x, x_1,v_2,...,v_{n-1}$ represents the core sequence of a $K_{n}^{(3)}$ in $\mathcal{H}$ with the following edge assignments. Set $e_{xx_1}=\{x,x_1,v_2\}$, $e_{xv_2}=f$, $e_{xv_i}=\{x,x_1,v_i\}$ for $3 \leq i \leq n-1$ and other edges are selected from $E(\mathcal{K}')$. That is a contradiction to our assumption. 
Therefore the set of edges in $\mathcal{H}$ containing $x$ and $v_1$ is a subset of the following set: $$S=\{\{x,v_1,x_2\}, \{x,v_1,x_3\},...,\{x,v_1,x_{j-1}\}\}.$$ Hence $$d_{\mathcal{H}}(x) \leq |B_1|+ |S|+{j-1 \choose 2}= (n-2)+(j-2)+{j-1 \choose 2} < d.$$ This contradiction demonstrates that $\vert B_1 \vert \leq n-3$ and so $\vert E_2\vert \leq (j-1)(n-3)$.\\ \noindent (iii) This case is trivial. $ \square$\\ \noindent Since $d_{\mathcal{H}}(x)=d$, using (\ref{dx}) and Claim \ref{Ex}, we have \begin{equation}\label{ddd} \vert E_1\vert = {n-2 \choose 2},\ \ \ \vert E_2\vert = (j-1)(n-3),\ \ \ \vert E_3\vert = {j-1 \choose 2}. \end{equation} Since $\vert E_1\vert = {n-2 \choose 2}$, using the proof of part (i) of Claim \ref{Ex} and Lemma \ref{sdr}, for some $1 \leq i' \leq n-1$, $A_{i'} = \emptyset$ and $$A_{i} = \{ e \in E(\mathcal{H}) : \ e=\{x,v_{i},v_l\}, \ \ l\neq i,i'\}, \ \ \ \ \ \ 1\leq i \leq n-1\ \ {\rm and} \ \ i\neq i'.$$ If $j \leq i' \leq n-1$, using (\ref{ddd}), we have $\mathcal{H} \cong T_3(N,n-1)$. Hence we may assume that for some $1 \leq i' \leq j-1$, say $i'=1$, ${A}_1 = \emptyset$. By considering the sets $E_1$ and $E_2$ and using (\ref{ddd}), it can be shown that $\mathcal{H}[x,x_1,v_2,...,v_{n-1}]\cong \mathcal{K}_{n}^{3}$ and so it contains a copy of $K_{n}^{(3)}$. This contradiction completes the proof in this case. Now we may assume that for every vertex $x \in V(\mathcal{H})$, $d_{\mathcal{H}}(x) \geq d+1$. Set $G=B(\mathcal{H})$. So we may assume that $G=[X,Y]$, where $$X=\{u_{ik}=v_iv_k :\ i< k\ \ {\rm and}\ \ v_i,v_k \in V(\mathcal{H})\}$$ and $Y=E(\mathcal{H})$. Since, by (\ref{lbound}), $|Y|\geq \sum _{i=0}^{3} \ell ^{3-i} {j \choose i}{n-1-i \choose 3-i},$ we have $\vert X\vert \leq \vert Y\vert$. Recall that every matching of $X$ in $G$ is equivalent to a ${K}_{N}^{(3)}$ in $\mathcal{H}$. 
Let $X=X_1 \cup X_2 \cup X_3$, where \begin{eqnarray*} X_{1}&=&\{u \in X : d_{G}(u)=0\},\\ X_{2}&=&\{u \in X : 1 \leq d_{G}(u)\leq 4\},\\ X_{3}&=&\{u \in X : d_{G}(u)\geq 5\}. \end{eqnarray*} We have one of the following cases:\\ \noindent{\bf Case 1.} $X_1 \cup X_2 =\emptyset.$\\ In this case, Hall's theorem \cite{hall} guarantees the existence of a matching of $X$ in $G$. That is a contradiction.\\\\ \noindent{\bf Case 2.} $X_1 \cup X_2 \neq \emptyset.$\\ Let $X_1 \cup X_2 = \{v_{i_1}v_{i'_1},v_{i_2}v_{i'_2},...,v_{i_t}v_{i'_t}\}$. First we show that the following claim holds. \begin{emp}\label{pairdisj2} The elements of $X_1 \cup X_2$ are pairwise disjoint. \end{emp} \noindent\textbf{Proof of Claim \ref{pairdisj2}}. Suppose to the contrary that for some $2 \leq s \leq t$, $ \{wv_{i'_1},wv_{i'_2},...,wv_{i'_s}\} \subseteq X_1 \cup X_2$. So we have $ d_{\mathcal{H}}(w) \leq f(s),$ where $f(k)= 4k + {n+j-k-2 \choose 2}$ is a function of $k$, $2 \leq k \leq t \leq N-1$. It is straightforward to see that the absolute maximum of $f(k)$ occurs at $k=2$. Hence $$ d_{\mathcal{H}}(w) \leq f(2) = 8 + {n+j-4 \choose 2}.$$ On the other hand, $ 8 + {n+j-4 \choose 2} < d+1$ for $n\geq 13$. This contradiction completes the proof of our claim. $ \square$\\ Also we have the following claim. \begin{emp}\label{sizex2} $\vert X_1 \vert \leq j-1$. \end{emp} \noindent\textbf{Proof of Claim \ref{sizex2}}. Suppose not. Therefore we may assume that $\{v_{i_1}v_{i'_1},v_{i_2}v_{i'_2},...,v_{i_j}v_{i'_j}\}\subseteq X_1.$ Set $L=\{v_{i_2},v_{i_3},...,v_{i_j}\}$. We have $E_{v_{i_1}}= F_1 \cup F_2 \cup F_3$, where $$F_k =\{e \in E_{v_{i_1}} : \vert e \cap L\vert =k-1\},\ \ \ \ \ \ 1 \leq k \leq 3.$$ Since, using Claim \ref{pairdisj2}, the elements of $X_1 $ are pairwise disjoint, the elements of $L$ are distinct. So, an easy computation shows that $\vert F_1\vert \leq {n-2 \choose 2}$, $\vert F_2\vert \leq (j-1)(n-3)$ and $\vert F_3\vert \leq {j-1 \choose 2}$. 
Therefore $$d_{\mathcal{H}}(v_{i_1})= \vert E_{v_{i_1}}\vert \leq {n-2 \choose 2}+(j-1)(n-3)+{j-1 \choose 2}=d.$$ This contradiction completes the proof of this claim. $ \square$\\ Using the definition of $X_2,$ for every $u_{ik}=v_iv_k\in X_2,$ we have $d_{G}(u_{ik}) \geq 1.$ On the other hand, by Claim \ref{pairdisj2}, the elements of $X_2$ are pairwise disjoint. Therefore $G$ contains a matching $M_1$ of $X_2$ in $G$. Suppose that $G'=[X_3,Y']$ is the induced subgraph of $G$ so that $Y'\subseteq Y$ is obtained by deleting the vertices of $M_1$. Note that for every $u \in X_3$ and $y \in Y'$, we have $d_{G'}(u) \geq 3$ and $d_{G'}(y) \leq 3$. So Hall's theorem \cite{hall} guarantees the existence of a matching $M_2$ of $X_3$ in $G'$. Now, using Claim \ref{sizex2}, we may suppose that $X_1= \{v_{i_1}v_{i'_1},v_{i_2}v_{i'_2},...,v_{i_t}v_{i'_t}\}$, where $t\leq j-1$. Set $V'= V(\mathcal{H})\setminus \{v_{i'_1},v_{i'_2},...,v_{i'_t}\}$. Clearly $\vert V'| \geq n$ and $M_1 \cup M_2$ induces a matching of $X_2\cup X_3$ in $G$. As every matching of $X_2\cup X_3$ in $G$ is equivalent to a $K_{\vert V' \vert}^{(3)}$ in $\mathcal{H}[V']$, we have a copy of $K_{n}^{(3)}$ in $\mathcal{H}$. This is a contradiction to our assumption. $ \blacksquare$\\ \section{Proof of Theorem \ref{main}} Let $\mathcal{H}$ be an $\mathcal{F}_n^{(3)}$-free hypergraph with $N$ vertices and $ex(N,\mathcal{F}_n^{(3)})$ edges. Also let $N=\ell (n-1)+j,$ where $\ell\geq 1$ and $1\leq j \leq n-1.$ We use induction on $\ell$ to show that $ex(N,\mathcal{F}_n^{(3)})=t_3(N,n-1)$. Using Theorem \ref{l=1}, the base step $\ell =1$ holds. Now suppose that $\ell >1$. Since adding any edge to $\mathcal{H}$ creates at least one $K_{n}^{(3)}$, we deduce that $\mathcal{H}$ contains a $K_{n-1}^{(3)}$. Let $\mathcal{K}$ be such a $K_{n-1}^{(3)}$ in $\mathcal{H}$ with the core sequence $v_1,v_2,...,v_{n-1}$ so that $\vert E(\mathcal{H}[v_1,v_2,...,v_{n-1}])\cap E(\mathcal{K})\vert$ is maximum. 
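Before proceeding with the induction, the closed form $t_3(N,n-1)=\sum _{i=0}^{3} \ell ^{3-i} {j \choose i}{n-1-i \choose 3-i}$ can be cross-checked against a direct count of the edges of $T_3(N,n-1)$; a brute-force Python sketch (the function names are ours):

```python
from itertools import combinations
from math import comb

def t3_formula(N, n):
    # closed form for t_3(N, n-1) used in the induction, with N = l(n-1) + j
    l, j = divmod(N, n - 1)
    if j == 0:
        l, j = l - 1, n - 1
    return sum(l ** (3 - i) * comb(j, i) * comb(n - 1 - i, 3 - i) for i in range(4))

def t3_bruteforce(N, n):
    # e(T_3(N, n-1)): triples meeting each of the n-1 balanced parts at most once
    part = [i % (n - 1) for i in range(N)]  # balanced partition of N vertices
    return sum(1 for e in combinations(range(N), 3)
               if len({part[v] for v in e}) == 3)

for n in (13, 14):
    for N in range(n, 3 * (n - 1) + 1):
        assert t3_formula(N, n) == t3_bruteforce(N, n)
```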
Let $\mathcal{H}_1=\mathcal{H}[V_1]$, $\mathcal{H}_2=\mathcal{H}[V_2]$ and $\mathcal{H}_3=\mathcal{H}[V_1 ,V_2]$, where $V_1 =V(\mathcal{K})=\{v_1,v_2,...,v_{n-1}\}$ and $V_2=V(\mathcal{H})\setminus V_1$. Also let $N'= \vert V_2\vert = (\ell -1)(n-1)+j$, where $\ell >1$ and $1\leq j \leq n-1$. Set $$\mathcal{H}_3 ^{\vartriangle}= \{e \in E(\mathcal{H}_3) : \ \vert e \cap V_1 \vert =1\ \ {\rm and} \ \ \vert e \cap V_2 \vert =2\},$$ $$\mathcal{H}_3 ^{\triangledown}= \{e \in E(\mathcal{H}_3) :\ \vert e \cap V_1 \vert =2 \ \ {\rm and} \ \ \vert e \cap V_2 \vert =1\}.$$ Note that $E(\mathcal{H}_3)=\mathcal{H}_3 ^{\vartriangle} \cup \mathcal{H}_3 ^{\triangledown}$. So \begin{equation}\label{eh} e(\mathcal{H}) =e(\mathcal{H}_1) +e(\mathcal{H}_2) + \vert \mathcal{H}_3 ^{\vartriangle}\vert + \vert \mathcal{H}_3 ^{\triangledown}\vert. \end{equation} By the induction hypothesis, we have \begin{equation}\label{eh2} e(\mathcal{H}_2) \leq t_3(N',n-1)=\sum _{i=0}^{3} (\ell-1) ^{3-i} {j \choose i}{n-1-i \choose 3-i}. \end{equation} Moreover, \begin{equation}\label{eh3} \vert \mathcal{H}_3^{\vartriangle} \vert \leq t_2(N',n-1). \end{equation} To see this, let $G$ be a graph on $V_2$ so that the vertices $u$ and $v$ of $V_2$ are adjacent in $G$ if and only if there exists an edge $\{x,u,v\} \in \mathcal{H}_3 ^{\vartriangle}$, for some $x \in V_1$. If there is a $K_n$ in $G$, then we can find a $K_n^{(3)}$ in $\mathcal{H}$, a contradiction. Therefore, by Tur\'{a}n's theorem \cite{turan}, we have $\vert \mathcal{H}_3^{\vartriangle} \vert \leq t_2(N',n-1)$.\\ \noindent Now we show that $e(\mathcal{H}_1)+\vert \mathcal{H}_3 ^{\triangledown}\vert \leq {n-1 \choose 3}+N'{n-2 \choose 2}$. For this purpose, set $$\mathcal{B}_1 ^{\triangledown}= \{ e \in \mathcal{H}_3 ^{\triangledown} :\ e \in E(\mathcal{K}) \}$$ and $\mathcal{B}_2 ^{\triangledown}= \mathcal{H}_3 ^{\triangledown}\setminus \mathcal{B}_1 ^{\triangledown}$. 
We claim that \begin{eqnarray}\label{B2} \vert \mathcal{B}_2 ^{\triangledown}\vert \leq N'{n-2 \choose 2}. \end{eqnarray} To see this, choose an arbitrary vertex $u \in V_2$. Set $x=u$ and $U=V_1=\{v_1,v_2,...,v_{n-1}\}$ and $\mathcal{A}_u=\{A_{1}^u,A_{2}^u,...,A^u_{n-1}\}$, where $$A^u_{i} = \{ e \in \mathcal{B}_2 ^{\triangledown} :\ \{u,v_{i}\} \subset e\}.$$ If $\mathcal{A}_u$ contains an SDR, then $u,v_1,v_2,...,v_{n-1}$ is the core sequence of a copy of $K_n ^{(3)}$ in $\mathcal{H}$, a contradiction. So using Lemma \ref{sdr}, we have $\vert \bigcup _{i=1}^{n-1} A^u_{i} \vert \leq {n-2 \choose 2}$. Since $u$ was chosen as an arbitrary vertex of $V_2$, we conclude that $\vert \mathcal{B}_2 ^{\triangledown}\vert \leq N'{n-2 \choose 2}$. Now we demonstrate that \begin{eqnarray}\label{eh1h3} e(\mathcal{H}_1) +\vert \mathcal{B}_1 ^{\triangledown}\vert \leq {n-1 \choose 3}. \end{eqnarray} To see this, suppose that $\vert \mathcal{B}_1 ^{\triangledown}\vert =t$. If $t\leq e(\mathcal{H}_1 ^c)$, then we are done. So we may assume that $e(\mathcal{H}_1 ^c) \leq t-1$. On the other hand, clearly $\mathcal{K}_{n-1}^3$ contains a copy of ${K}_{n-1}^{(3)}$. Therefore, by the maximality of $\mathcal{K}$, at most $t-1$ edges of $\mathcal{K}$ are not in $E(\mathcal{H}_1)$. This is a contradiction to the assumption that $\vert \mathcal{B}_1 ^{\triangledown}\vert =t$. Therefore by (\ref{B2}) and (\ref{eh1h3}), we have \begin{eqnarray}\label{H1H3} e(\mathcal{H}_1)+\vert \mathcal{H}_3 ^{\triangledown}\vert \leq {n-1 \choose 3}+N'{n-2 \choose 2}. \end{eqnarray} Now set $$B= {n-1 \choose 3}+N'{n-2 \choose 2} + t_3(N',n-1) +t_2(N',n-1).$$ Hence by (\ref{eh}),(\ref{eh2}),(\ref{eh3}) and (\ref{H1H3}), we have \begin{eqnarray*} \vert E(\mathcal{H})\vert \leq B &=& {n-1 \choose 3}+((\ell -1)(n-1)+j){n-2 \choose 2} + (\ell -1)^{3}{n-1 \choose 3} + j(\ell -1)^{2}{n-2 \choose 2}\\ &+& (\ell -1) (n-3){j \choose 2} +{j \choose 3} + (\ell -1)^{2}{n-1 \choose 2} + j(\ell -1)(n-2) + {j \choose 2}. 
\end{eqnarray*} \noindent To demonstrate that $ex(N, \mathcal{F}_n^{(3)}) \leq t_3(N,n-1)$, it suffices to show that $$B \leq t_3(N,n-1) = \ell^{3}{n-1 \choose 3} + j\ell^{2}{n-2 \choose 2} + \ell (n-3){j \choose 2} +{j \choose 3}.$$ By simplifying the above inequality, it suffices to show that \begin{eqnarray*} 3\ell {n-1 \choose 3} + (\ell -1)^2{n-1 \choose 2}+j(\ell -1)(n-2)\\ + {j \choose 2} +(\ell -1)(n-1){n-2 \choose 2} +2j{n-2 \choose 2}\\ \leq 3 \ell^{2} {n-1 \choose 3}+2j\ell {n-2 \choose 2}+{j \choose 2}(n-3). \end{eqnarray*} But the above inequality is certainly true since $n \geq 13$, $\ell >1$ and $j \geq 1$ imply \begin{itemize} \item[$\bullet$] ${j \choose 2} \leq {j \choose 2}(n-3)$. \item[$\bullet$] $ j(\ell -1)(n-2) +2j{n-2 \choose 2} \leq 2j\ell {n-2 \choose 2}$. \item[$\bullet$] $3\ell {n-1 \choose 3}+ (\ell -1)^2{n-1 \choose 2}+(\ell -1)(n-1){n-2 \choose 2} \leq 3 \ell^{2} {n-1 \choose 3}$. \end{itemize} So $$ex(N, \mathcal{F}_n^{(3)}) \leq t_3(N,n-1)$$ and the equality follows by inspection of (\ref{lbound}). Therefore, \begin{itemize} \item[(i)] $e(\mathcal{H}_{2}) =t_3(N',n-1)$. \item[(ii)] $\vert \mathcal{H}_3^{\vartriangle} \vert = t_2(N',n-1)$. \item[(iii)] $\vert \mathcal{B}_2 ^{\triangledown}\vert = N'{n-2 \choose 2}$. \item[(iv)] $e(\mathcal{H}_1) +\vert \mathcal{B}_1 ^{\triangledown}\vert = {n-1 \choose 3}$. \end{itemize} In the sequel, we demonstrate that $\mathcal{H}\cong T_3(N,n-1)$. Since $e(\mathcal{H}_{2}) =t_3(N',n-1)$, the induction hypothesis implies that $\mathcal{H} _{2}\cong T_3(N',n-1)$. Therefore $\mathcal{H}_2$ is a complete $3$-uniform $(n-1)$-partite hypergraph on $N'$ vertices whose partition sets differ in size by at most 1. Assume that $U_1,U_2,...,U_{n-1}$ are the partition sets of $\mathcal{H}_2$. 
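The three bulleted inequalities can also be spot-checked numerically over the stated range; a minimal Python sketch (ours, not part of the proof):

```python
from math import comb

# spot-check the three displayed inequalities for n >= 13, l > 1, 1 <= j <= n-1
for n in range(13, 18):
    for l in range(2, 6):
        for j in range(1, n):
            assert comb(j, 2) <= comb(j, 2) * (n - 3)
            assert j*(l - 1)*(n - 2) + 2*j*comb(n - 2, 2) <= 2*j*l*comb(n - 2, 2)
            assert (3*l*comb(n - 1, 3) + (l - 1)**2 * comb(n - 1, 2)
                    + (l - 1)*(n - 1)*comb(n - 2, 2)) <= 3 * l**2 * comb(n - 1, 3)
```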
Recall that $U=V_1=\{v_1,v_2,...,v_{n-1}\}$ and for every $u \in V_2$, $\mathcal{A}_u=\{A_{1}^u,A_{2}^u,...,A^u_{n-1}\}$, where $A^u_{i} = \{ e \in \mathcal{B}_2 ^{\triangledown} :\ \{u,v_{i}\} \subset e\}$ and $\vert \bigcup _{i=1}^{n-1} A^u_{i} \vert \leq {n-2 \choose 2}$. Since $\vert \mathcal{B}_2 ^{\triangledown}\vert = N'{n-2 \choose 2}$, $$\vert \bigcup _{i=1}^{n-1} A^{u}_{i} \vert = {n-2 \choose 2},\ \ \ \ \ \ \forall \ u \in V_2.$$ So using Lemma \ref{sdr}, there exists $1 \leq q_u \leq n-1$ such that $A^{u}_{q_{u}}=\emptyset$ and for every $1 \leq i \leq n-1$ and $i \neq q_u$, we have $$A_{i}^{u} = \{ e \in \mathcal{B}_2 ^{\triangledown} :\ \{u,v_{i}\} \subset e\}=\{ \{u,v_i,v_k\}:\ k \neq i, q_u \}.$$ In other words, $$\bigcup _{i\neq q_u} A_i^{u}= \{ \{u,v_l,v_k\}:\ v_l ,v_k \in V_1,\ l,k \neq q_u\}.$$ So we can partition the vertices of $V_2$ into $n-1$ parts $U'_1,U'_2,...,U'_{n-1}$, so that for every $x \in U'_m$, $A^{x}_m = \emptyset$ and $$\bigcup _{i\neq m} A_i^{x}= \{ \{x,v_l,v_k\}:\ v_l ,v_k \in V_1,\ l,k \neq m\}.$$ Now we show that for $1\leq i\leq n-1$, $U'_i$ is an independent set in $\mathcal{H}$. Suppose not. By symmetry we may assume that for two vertices $x,y \in U'_1$, there is an edge $\{x,y,z\} \in E(\mathcal{H})$. It can be shown that $x,y,v_2,v_3,...,v_{n-1}$ represents the core sequence of a copy of $K_n^{(3)}$ in $\mathcal{H}$ with the following edge assignments. Set $e_{xy}=\{x,y,z\}$, $e_{xv_i}\in A_i^x$ for $2\leq i\leq n-1$, $e_{yv_i}\in {A}_i^y$ for $2\leq i\leq n-1$ and $e_{v_iv_{i'}}\in E(\mathcal{K})$ for $2\leq i,i'\leq n-1$. Hence the sets $U'_i$, $1 \leq i \leq n-1$, are independent in $\mathcal{H}$.\\ Therefore $\{U_1,U_2,...,U_{n-1}\}=\{U'_1,U'_2,...,U'_{n-1}\}$. Without loss of generality, we may suppose that $$U_i =U'_i \ \ \ \ \ \ \ \ {\rm for}\ \ 1\leq i\leq n-1.$$ Now we demonstrate that for $1\leq i\leq n-1$, $U_i\cup \{v_i\}$ is an independent set in $\mathcal{H}$. 
Suppose to the contrary that for some $1\leq h\leq n-1$, $U_h \cup \{v_h\}$ is not an independent set. So for some $u_h \in U_h$, $f=\{u_h,v_h,w\}\in E(\mathcal{H})$. Since $U_h$ is an independent set in $\mathcal{H}$, $w\notin U_h$. Choose the vertices $x_1,x_2,...,x_{n-1}$ so that $x_h=u_h$ and $$x_i \in U_i\ \ \ \ \ \ \ \ {\rm for} \ \ i\neq h.$$ Since $\mathcal{H}[\{x_1,x_2,...,x_{n-1}\}]\cong \mathcal{K}_{n-1}^3$, $x_1,x_2,...,x_{n-1}$ is the core sequence of a ${K}_{n-1}^{(3)}$, say $\mathcal{K}'$, in $\mathcal{H}$. Thus $x_1,x_2,...,x_{n-1},v_h$ represents the core sequence of a ${K}_n^{(3)}$ in $\mathcal{H}$ with the following edge assignments. Set $e_{x_hv_h}=f$, $e_{v_hx_i}\in A_h^{x_i}$ for $i\neq h$ and $e_{x_i x_{i'}}\in E(\mathcal{K}')$ for $i,i' \neq h$. \noindent Therefore for $1\leq i \leq n-1$, $W_i=U_i \cup \{v_i\}$ is an independent set in $\mathcal{H}$. So $\mathcal{H}$ is an $(n-1)$-partite hypergraph with parts $W_1,W_2,...,W_{n-1}$ whose partition sets differ in size by at most 1. Since $e(\mathcal{H})=t_3(N,n-1)$, we deduce that $\mathcal{H}\cong T_3(N,n-1)$. $ \blacksquare$\\ \footnotesize \end{document}
\begin{document} \author[F. Braun and A.C. Mereu]{Francisco Braun and Ana C. Mereu} \address{Departamento de Matem\'atica, Universidade Federal de S\~ao Carlos, 13565-905 S\~ao Carlos, S\~ao Paulo, Brazil} \email{franciscobraun@dm.ufscar.br} \address{Departamento de F\'isica, Qu\'imica e Matem\'atica, Universidade Federal de S\~ao Carlos, 18052-780 Sorocaba, S\~ao Paulo, Brazil} \email{anamereu@ufscar.br} \title[Zero-Hopf bifurcation in a 3-D jerk system] {Zero-Hopf bifurcation in a 3-D jerk system} \subjclass[2010]{34C23, 34C25, 37G10} \keywords{Zero-Hopf Bifurcation, Periodic solutions, Averaging theory} \maketitle \begin{abstract} We consider the 3-D system defined by the jerk equation $\dddot{x} = -a \ddot{x} + x \dot{x}^2 -x^3 -b x + c \dot{x}$, with $a, b, c\in \ensuremath{\mathbb{R}}$. When $a=b=0$ and $c < 0$ the equilibrium point localized at the origin is a zero--Hopf equilibrium. We analyse the zero--Hopf bifurcation that occurs at this point under a quadratic perturbation of the coefficients, and prove that one, two or three periodic orbits can be born when the parameter of the perturbation goes to $0$. \end{abstract} \section{Introduction} Motivated by the development of the Chua circuit \cite{C}, many researchers have been interested in finding other circuits that chaotically oscillate. Some simple third-order ordinary differential equations of the form $$ \dddot{x} = J(\ddot{x},\dot{x},x,t), $$ whose solutions are chaotic are examples of such circuits \cite{E}, \cite{Sprott}. In classical mechanics, the function $J$ is called \emph{jerk}, and corresponds to the rate of change of acceleration, or equivalently to the third-time derivative of the position $x$. 
A jerk flow is thus an explicit third-order differential equation as above, describing the evolution of the position $x(t)$ with time $t$. The following non-linear third-order differential equation is the jerk flow studied by Vaidyanathan \cite{V}: \begin{equation}\label{jerkequation} \dddot{x} = -a \ddot{x} + x \dot{x}^2 -x^3 -b x + c \dot{x}, \end{equation} where $a$, $b$ and $c$ are parameters. This equation generalizes the one studied by Sprott \cite{Sprott}, where $b=c=0$. In system form, the differential equation \eqref{jerkequation} corresponds to the 3-D jerk system \begin{equation}\label{jerksystem} \begin{aligned} \dot{x} & = y\\ \dot{y} & = z\\ \dot{z} & = -a z - b x+ c y+xy^2 -x^3. \end{aligned} \end{equation} In \cite{V}, Vaidyanathan shows that system \eqref{jerksystem} is chaotic when $a=3.6$, $b=1.3$ and $c=0.1$. The aim of the present paper is to study this system depending on the parameters $(a,b,c) \in \ensuremath{\mathbb{R}}^3$ from another point of view. A \textit{zero-Hopf equilibrium} of a 3-dimensional autonomous differential system is an isolated equilibrium point of the system, which has a zero eigenvalue and a pair of purely imaginary eigenvalues. In general, the \textit{zero-Hopf bifurcation} is a parametric unfolding of a $3$-dimensional autonomous differential system with a zero-Hopf equilibrium. The unfolding can exhibit different topological types of dynamics in a small neighborhood of this isolated equilibrium as the parameter varies in a small neighborhood of the origin. For more information on the zero-Hopf bifurcation, we address the reader to Guckenheimer, Han, Holmes, Kuznetsov, Marsden and Scheurle in \cite{G, GH, H, K,SM}, respectively. As far as we know, nobody has studied the existence or non-existence of zero-Hopf equilibria and zero-Hopf bifurcations in the 3-D jerk system \eqref{jerksystem}. The objective of this paper is to pursue this study. 
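To reproduce the chaotic regime of system \eqref{jerksystem} numerically, one may integrate it with any standard scheme; a minimal RK4 sketch in Python (the step size, time horizon and initial condition are our choices, not from \cite{V}):

```python
import math

# RK4 integration of the jerk system with Vaidyanathan's chaotic parameters
def jerk_rhs(s, a=3.6, b=1.3, c=0.1):
    x, y, z = s
    return (y, z, -a*z - b*x + c*y + x*y*y - x**3)

def rk4_step(s, h):
    k1 = jerk_rhs(s)
    k2 = jerk_rhs(tuple(si + h/2*ki for si, ki in zip(s, k1)))
    k3 = jerk_rhs(tuple(si + h/2*ki for si, ki in zip(s, k2)))
    k4 = jerk_rhs(tuple(si + h*ki for si, ki in zip(s, k3)))
    return tuple(si + h/6*(q1 + 2*q2 + 2*q3 + q4)
                 for si, q1, q2, q3, q4 in zip(s, k1, k2, k3, k4))

s = (0.1, 0.1, 0.1)           # arbitrary initial condition (our choice)
for _ in range(5000):         # integrate to t = 50 with step h = 0.01
    s = rk4_step(s, 0.01)
assert all(math.isfinite(v) for v in s)
```

With $a=3.6$, $b=1.3$, $c=0.1$ the trajectory is expected to remain bounded on the chaotic attractor; the cubic term $-x^3$ confines the dynamics.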
Usually the main tool for studying a zero-Hopf bifurcation is to pass the system to the normal form of a zero-Hopf bifurcation. Our analysis, however, will use the \emph{averaging theory} (see Section \ref{section2} for the results on this theory needed for our study). The averaging theory has already been used to study Hopf and zero-Hopf bifurcations in some other differential systems, see for instance \cite{BLM, CLQ, EL, GL, L, ML, LOV}. Our main results are the following. We first characterize the zero-Hopf equilibrium point of system \eqref{jerksystem} in Proposition \ref{prop1}. \begin{proposition}\label{prop1} The differential system \eqref{jerksystem} has a zero-Hopf equilibrium point if and only if $a=b=0$ and $c<0$. In this case, the zero-Hopf equilibrium is the only singular point of the system, and it is localized at the origin. \end{proposition} Then we study when the 3-D jerk system \eqref{jerksystem} having a zero-Hopf equilibrium point at the origin of coordinates has a zero-Hopf bifurcation producing some periodic orbit in Theorem \ref{teo1}. \begin{theorem}\label{teo1} Let $a_2,b_2,c_1,c_2,\delta \in \ensuremath{\mathbb{R}}$ such that $3 - \delta^2 \neq 0$ and $2 a_2 \delta^2 \neq b_2$ and set $(a, b, c) = (\varepsilon^2 a_2, \varepsilon^2 b_2, -\delta^2 + \varepsilon c_1+\varepsilon^2 c_2)$. Then the 3-D jerk system \eqref{jerksystem} has a zero-Hopf bifurcation at the equilibrium point localized at the origin of coordinates in the following situations: \begin{enumerate} \item $\dfrac{a_2 \delta ^2+ 2 b_2}{3-\delta^2}<0$ and $\dfrac{a_2\delta^2 -b_2}{3-\delta^2}>0$. In this case three periodic orbits are born at the equilibrium point when $\varepsilon\to 0$. \item $\dfrac{a_2 \delta ^2+ 2 b_2}{3-\delta^2}<0$ and $\dfrac{a_2\delta^2 -b_2}{3-\delta^2}<0$. In this case two periodic solutions are born at the equilibrium point when $\varepsilon \to 0$. \item $\dfrac{a_2 \delta ^2+ 2 b_2}{3-\delta^2}>0$ and $\dfrac{a_2\delta^2 -b_2}{3-\delta^2}>0$. 
In this case one periodic orbit is born at the equilibrium point when $\varepsilon \to 0$. \end{enumerate} \end{theorem} We illustrate a case of Theorem \ref{teo1} in Fig. \ref{fig1}. \begin{figure} \caption{Three periodic solutions emanating from the origin of coordinates. Here $\delta = 2$, $a_2 = 1$, $b_2 = 5$, $c_1 = c_2 = 0$ and $\varepsilon = 1/10$.} \label{fig1} \end{figure} We prove our results in Section \ref{s3}. \begin{remark} We prove Theorem \ref{teo1} by applying averaging theory of order $2$. Since we prove that the averaged function cannot be identically zero, it follows that with averaging of higher order we will not find more periodic solutions. \end{remark} \section{The averaging theory of first and second order}\label{section2} In this section we summarize the main results on the theory of averaging which will be used in the proof of Theorem \ref{teo1}. For a proof of the following theorem and more information on averaging theory, we address the reader to \cite{BL1, CLN}. \begin{theorem}\label{averaging} Let $D$ be an open subset of $\ensuremath{\mathbb{R}}^n$, $\varepsilon_f > 0$ and consider the differential system \begin{equation}\label{za2} \dot x(t)=\varepsilon F_{1}(x,t)+\varepsilon^2 F_{2}(x,t)+\varepsilon^3 R(x,t,\varepsilon), \end{equation} where $F_{1}, F_2:D\times\mathbb{R} \to \mathbb{R}^n$, $R:D\times\mathbb{R}\times (-\varepsilon_f,\varepsilon _f)\to \mathbb{R}^n$ are continuous functions, $T$--periodic in the second variable, $F_{1}(\cdot,t)\in C^1(D)$ for all $t\in \ensuremath{\mathbb{R}}$, $F_{1}$, $F_{2}$, $R$ and $D_{x} F_{1}$ are locally Lipschitz with respect to $x$, and $R$ is differentiable with respect to $\varepsilon$. 
Define $f, g:D\rightarrow \mathbb{R}^n$ as \begin{equation}\label{ave1} f(z)=\dfrac{1}{T} \displaystyle\int _0^{T}F_{1}(z, s)ds, \end{equation} \begin{equation}\label{ave2} g(z)= \dfrac{1}{T} \displaystyle\int _0^{T}\left[D_z F_{1}(z, s) \cdot \displaystyle\int _0^{s} F_{1}(z, t)dt + F_2(z, s)\right]ds, \end{equation} and assume that for an open and bounded subset $V\subset D$ and for each $\varepsilon \in (-\varepsilon_f,\varepsilon_f)\setminus \{0\}$, there exists $a_\varepsilon \in V$ such that $f(a_{\varepsilon})+\varepsilon g(a_{\varepsilon})=0$ and $ d_B(f+\varepsilon g,V,a_\varepsilon)\neq 0$. Then for $|\varepsilon|>0 $ sufficiently small, there exists a $T$--periodic solution $\varphi (\cdot, \varepsilon)$ of system \eqref{za2} such that $\varphi (0, \varepsilon) = a_\varepsilon$. \end{theorem} The expression $d_B(f+\varepsilon g,V,a_\varepsilon)\neq 0$ means that the Brouwer degree of the function $f+\varepsilon g: V\to \ensuremath{\mathbb{R}}^n$ at the fixed point $a_{\varepsilon}$ is not zero. We recall that a sufficient condition for this is that the Jacobian determinant of the function $f+\varepsilon g$ at $a_{\varepsilon}$ is not zero. For the definition, the mentioned and other properties of Brouwer degree we address the reader to Browder's paper \cite{B}. If $f$ is not identically zero, then the zeros of $f+\varepsilon g$ are mainly the zeros of $f$ for $\varepsilon$ sufficiently small. In this case the previous result provides the averaging theory of first order. If $f$ is identically zero and $g$ is not identically zero, then clearly the zeros of $f+\varepsilon g$ are the zeros of $g$. In this case the previous result provides the averaging theory of second order. \section{Proofs}\label{s3} \begin{proof}[Proof of Proposition \ref{prop1}] $(x,0,0)$ with $x = 0$ or $x^2 = - b$, if $b\leq0$, are the singular points of system \eqref{jerksystem}. 
The characteristic polynomial of the linear part of the system at $(x,0,0)$ is $$ p(\lambda)= - \lambda^3 - a \lambda^2+ c \lambda - b - 3 x^2. $$ In order to have a zero-Hopf equilibrium, we need one zero eigenvalue and a pair of purely imaginary eigenvalues $\pm i \delta$ with $\delta > 0$. Imposing that $$ p(\lambda) = -\lambda(\lambda^2+\delta^2), $$ we obtain $a = b = 0$ and $c=-\delta^2$. In particular, the only zero-Hopf equilibrium is $(0,0,0)$ and we are done. \end{proof} \begin{proof}[Proof of Theorem \ref{teo1}] With the parameters $(a, b, c) = (\varepsilon a_1 + \varepsilon^2 a_2, \varepsilon b_1 + \varepsilon^2 b_2, -\delta^2 + \varepsilon c_1 + \varepsilon^2 c_2)$, the 3-D jerk system \eqref{jerksystem} takes the form \begin{equation}\label{jerksystem1} \begin{aligned} \dot{x} & =y \\ \dot{y} & = z \\ \dot{z} & = -\varepsilon \left(a_1 + \varepsilon a_2\right) z - \varepsilon \left(b_1 + \varepsilon b_2\right) x+ (-\delta^2 + \varepsilon c_1 + \varepsilon^2 c_2) y + x y^2 - x^3. \end{aligned} \end{equation} As in the proof of Proposition \ref{prop1}, when $\varepsilon = 0$, the eigenvalues at the origin of system \eqref{jerksystem1} are $0$ and $\pm i \delta$. With the change of variables $(x,y,z) = \left(\varepsilon X, \varepsilon Y, \varepsilon Z\right)$ and rescaling of time, system \eqref{jerksystem1} becomes \begin{equation}\label{jerksystem2} \begin{aligned} \dot{X} & = Y \\ \dot{Y} & = Z \\ \dot{Z} & = -\delta^2 Y \hspace{-.1cm} + \hspace{-.1cm} \varepsilon\left(-a_1 Z -b_1 X +c_1 Y\right) \hspace{-.1cm} + \hspace{-.1cm} \varepsilon^2\left(-a_2 Z -b_2 X +c_2 Y+XY^2-X^3\right). \end{aligned} \end{equation} Now we make a linear change of variables in order to have the matrix of the linear part of system \eqref{jerksystem2} at the origin when $\varepsilon = 0$ written in its real Jordan normal form $$ J = \left( \begin{array}{ccc} 0 & -\delta &0\\ \delta & 0 & 0\\ 0 & 0 & 0 \end{array} \right). 
$$ It is simple to see that the change \begin{equation}\label{changevariables} \begin{aligned} X & = w +\frac{v}{\delta}, \\ Y & = u, \\ Z & = - \delta v \end{aligned} \end{equation} accomplishes this. In these new variables, system \eqref{jerksystem2} becomes \begin{equation}\label{jerksystem3} \begin{aligned} \dot{u} & = - \delta v \\ \dot{v} & = \delta u + \varepsilon \delta \left(h_1 + \varepsilon h_2\right)\\ \dot{w} & = -\varepsilon \left(h_1 + \varepsilon h_2 \right), \end{aligned} \end{equation} with $$ \begin{aligned} h_1 = h_1(u,v,w) & = \frac{b_1 v}{\delta^3} - \frac{c_1 u -b_1 w}{\delta^2} - \frac{a_1 v}{\delta}, \\ h_2 = h_2(u,v,w) & = \frac{v^3}{\delta^5} + \frac{3 v^2 w}{\delta^4} + \frac{(b_2 - u^2 + 3 w^2) v}{\delta^3} - \frac{c_2 u + (u^2 - b_2) w - w^3}{\delta^2} - \frac{a_2 v}{\delta}. \end{aligned} $$ Now we pass the differential system \eqref{jerksystem3} to cylindrical coordinates $(r, \theta,w)$ defined by $u=r\cos\theta$, $v=r \sin\theta$, $w=w$, obtaining \begin{equation}\label{interm} \begin{aligned} \dot{\theta} & = \delta + \varepsilon \delta \frac{\cos \theta}{r} \left(h_1 + \varepsilon h_2\right), \\ \dot{r} & = \varepsilon \delta \sin \theta \left(h_1 + \varepsilon h_2\right), \\ \dot{w} & = - \varepsilon \left(h_1 + \varepsilon h_2\right), \end{aligned} \end{equation} with $h_1 = h_1\left(r \cos \theta, r\sin \theta, w\right)$ and $h_2 = h_2\left(r\cos \theta, r \sin \theta, w\right)$. 
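That the change of variables \eqref{changevariables} conjugates the linear part at $\varepsilon=0$ into the Jordan form $J$ can be verified at a sample value of $\delta$ with exact rational arithmetic; a Python sketch (our check, not part of the proof):

```python
from fractions import Fraction as F

d = F(2)  # sample numeric value for delta (our choice); the identity is linear-algebraic
# linear part of (X,Y,Z): Xdot = Y, Ydot = Z, Zdot = -d^2 Y   (eps = 0)
A = [[0, 1, 0], [0, 0, 1], [0, -d*d, 0]]
# change of variables: X = w + v/d, Y = u, Z = -d v; columns are images of (u, v, w)
P = [[0, F(1)/d, 1], [1, 0, 0], [0, -d, 0]]
# inverse, read off from u = Y, v = -Z/d, w = X + Z/d^2
Pinv = [[0, 1, 0], [0, 0, -F(1)/d], [1, 0, F(1)/(d*d)]]

def matmul(M, N):
    return [[sum(M[i][k]*N[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

J = matmul(Pinv, matmul(A, P))
assert J == [[0, -d, 0], [d, 0, 0], [0, 0, 0]]  # real Jordan normal form
```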
By introducing $\theta$ as the new independent variable, we obtain \begin{equation}\label{theta} \begin{aligned} \frac{dr}{d\theta} & = \frac{\varepsilon (h_1 + \varepsilon h_2)}{r+\varepsilon \cos \theta (h_1+\varepsilon h_2)} r \sin \theta \\ & = \varepsilon h_1 \sin \theta + 2 \sin \theta \varepsilon^2 \frac{h_2 r - h_1^2 \cos \theta}{r} + O(\varepsilon^3),\\ \frac{d w}{d \theta} & = - \frac{\varepsilon (h_1 + \varepsilon h_2)}{r + \varepsilon \cos \theta (h_1 + \varepsilon h_2)}\frac{r}{\delta} \\ & = -\varepsilon \frac{h_1}{\delta} - 2 \varepsilon^2 \frac{h_2 r- h_1^2 \cos \theta}{r \delta} + O(\varepsilon^3). \end{aligned} \end{equation} In the notation of Theorem \ref{averaging}, by taking $t = \theta$, $T = 2\pi$ and $z = (r, w)^{tr}$, we have $$ F_1(r, w, \theta) = h_1 \left( \begin{array}{c} \sin \theta \\ -1/\delta \end{array} \right), \ \ \ \ F_2(r, w, \theta) = 2 \frac{h_2 r - h_1^2 \cos \theta}{r} \left( \begin{array}{c} \sin \theta \\ -1/\delta \end{array} \right). $$ We calculate $f(r, w)$ obtaining $$ f(r,w) = \frac{1}{2 \pi} \int_0^{2 \pi} F_1(r,w, \theta) d\theta = \left( \begin{array}{c} \dfrac{r}{2 \delta^3} (b_1 - a_1 \delta^2) \\ \dfrac{-b_1 w}{\delta^3} \end{array} \right). $$ The solutions of $f(r,w) = 0$ with $b_1 - a_1\delta^2 \neq 0$ are contained in $r = 0$, and so they are not allowed, because $r$ must be positive. On the other hand, if $b_1 - a_1 \delta^2 = 0$, the zeros of $f(r, w)$ are not isolated. In particular we cannot apply the averaging of first order. We pass then to the averaging of second order, assuming $f \equiv 0$. This makes $a_1 = b_1 = 0$. 
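The closed form of $f(r,w)$ can be confirmed numerically: for arbitrary test values of the parameters, the average of $F_1$ over a period matches the expression above. A Python sketch (all numeric values below are our choices):

```python
import math

# numeric check of the first-order averaged function f(r, w);
# a1, b1, c1, d = delta, r, w are arbitrary test values (our choice)
a1, b1, c1, d = 0.7, 1.3, 0.4, 1.5
r, w = 0.9, 0.2

def h1(theta):
    u, v = r*math.cos(theta), r*math.sin(theta)
    return b1*v/d**3 - (c1*u - b1*w)/d**2 - a1*v/d

def average(fun, n=4096):
    # trapezoid rule on a periodic integrand: (1/2pi) * integral over [0, 2pi]
    return sum(fun(2*math.pi*k/n) for k in range(n)) / n

f1 = average(lambda t: h1(t)*math.sin(t))   # first component of f
f2 = average(lambda t: -h1(t)/d)            # second component of f
assert abs(f1 - r*(b1 - a1*d**2)/(2*d**3)) < 1e-9
assert abs(f2 - (-b1*w/d**3)) < 1e-9
```

The periodic trapezoid rule is spectrally accurate here because the integrand is a trigonometric polynomial.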
We now calculate $g(r, w)$: $$ \begin{aligned} g(r, w) & = \frac{1}{2 \pi} \int_0^{2 \pi} \left(D_{(r, w)} F_1(r,w,\theta) \int_0^\theta F_1(r,w,s) ds + F_2(r,w,\theta) \right) d\theta \\ & = \frac{1}{2 \pi} \int_0^{2 \pi} \left( \frac{c_1^2 r }{2 \delta^5}\left( \begin{array}{c} \delta \cos \theta \sin^3 \theta \\ -\cos \theta \sin^2 \theta \end{array} \right) + F_2(r, w, \theta) \right) d \theta \\ & = \frac{1}{2 \delta^5} \left( \begin{array}{c} r\left((3 - \delta^2) r^2 + 4 b_2 \delta^2 - 4 a_2 \delta^4 + 12 \delta^2 w^2 \right)/4 \\ -w \left((3 - \delta^2) r^2 + 2 b_2 \delta^2 + 2 \delta^2 w^2 \right) \end{array} \right). \end{aligned} $$ We analyze the solutions of $g(r, w) = 0$, with $r > 0$. We first observe that in order to obtain isolated solutions according to Theorem \ref{averaging}, we ought to have $\delta^2 \neq 3$. With this assumption in force, we readily obtain the following two groups of solutions $$ r^2 = \frac{4 (a_2 \delta^2 - b_2) \delta^2}{3 - \delta^2}, \ \ \ \ w = 0, $$ or $$ r^2 = \frac{- 4 (a_2 \delta^2 + 2 b_2) \delta^2}{5 (3 - \delta^2)}, \ \ \ \ w^2 = \frac{2 a_2 \delta^2 - b_2}{5}. $$ The Jacobian determinant of $g$ at the solutions above takes the values $$ - \frac{(a_2 \delta^2 - b_2) (2 a_2 \delta^2 - b_2)}{\delta^6} $$ and $$ -\frac{2 (a_2 \delta^2 + 2 b_2) (2 a_2 \delta^2 - b_2)}{5 \delta^6}, $$ respectively. If $\dfrac{a_2\delta^2 -b_2}{3-\delta^2}>0$, $\dfrac{a_2\delta^2 +2b_2}{3-\delta^2}<0$ and $2 a_2 \delta^2 - b_2 \neq 0$, we have the isolated solutions $(r_1, w_1) = \left(2 \delta \sqrt{\dfrac{a_2 \delta^2 - b_2}{3 - \delta^2}} ,0\right)$, $(r_2, w_2) = \left(2 \delta \sqrt{-\dfrac{a_2 \delta^2 + 2 b_2}{5 (3 - \delta^2)}}, \sqrt{\dfrac{2 a_2 \delta^2 - b_2}{5}} \right)$ and $(r_3, w_3) = (r_2, -w_2)$. If $\dfrac{a_2\delta^2 -b_2}{3-\delta^2}>0$, $\dfrac{a_2\delta^2 +2b_2}{3-\delta^2}\geq 0$ and $2 a_2 \delta^2 - b_2 \neq 0$ then we have just the isolated solution $(r_1, w_1)$. 
If $\dfrac{a_2\delta^2 -b_2}{3-\delta^2}\leq 0$, $\dfrac{a_2\delta^2 +2b_2}{3-\delta^2}<0$ and $2 a_2 \delta^2 - b_2 \neq 0$ we will have just the solutions $(r_2, w_2)$ and $(r_3, w_3)$. Now Theorem \ref{averaging} guarantees that, for $\varepsilon$ sufficiently small, to each root $(r_i, w_i)$ of $g$ there corresponds a $2 \pi$-periodic solution of system \eqref{theta} of the form $(r(\theta,\varepsilon), w(\theta,\varepsilon))$, with $(r(0,\varepsilon),w(0,\varepsilon)) = (r_i,w_i)$. Corresponding to this one, system \eqref{interm} has a periodic solution $(\theta(t,\varepsilon),r(t,\varepsilon),w(t,\varepsilon))$ of some period $T_\varepsilon$ satisfying $\left(\theta(0,\varepsilon), r(0, \varepsilon), w(0, \varepsilon)\right) = (0, r_i, w_i)$. Then system \eqref{jerksystem3} has the periodic solution of period $T_\varepsilon$ $$ \left(u(t,\varepsilon), v(t,\varepsilon), w(t, \varepsilon) \right) = \left( r(t,\varepsilon)\cos \theta(t, \varepsilon), r(t,\varepsilon)\sin \theta(t, \varepsilon), w(t,\varepsilon) \right), $$ for $\varepsilon$ sufficiently small, with $(u(0,\varepsilon), v(0,\varepsilon), w(0,\varepsilon)) = (r_i, 0, w_i)$. Now we apply the change of variables \eqref{changevariables} to this one and obtain for small $\varepsilon$ the periodic solution $\left(X(t, \varepsilon), Y(t,\varepsilon), Z(t,\varepsilon)\right)$ of system \eqref{jerksystem2} with the same period, such that $(X(0,\varepsilon),Y(0,\varepsilon),Z(0,\varepsilon)) = (w_i, r_i, 0)$. Finally, for $\varepsilon\neq 0$ sufficiently small, system \eqref{jerksystem1} has the periodic solution $\left(x(t, \varepsilon), y(t, \varepsilon), z(t, \varepsilon)\right) = \left(\varepsilon X(t,\varepsilon),\varepsilon Y(t,\varepsilon), \varepsilon Z(t,\varepsilon)\right)$, with $\left(x(0, \varepsilon), y(0, \varepsilon), z(0, \varepsilon)\right) = \varepsilon (w_i, r_i, 0)$, and that clearly tends to the origin of coordinates when $\varepsilon\rightarrow 0$. 
Thus, this is a periodic solution emanating from the zero-Hopf bifurcation point located at the origin of coordinates when $\varepsilon = 0$. This concludes the proof of Theorem \ref{teo1}. \end{proof} \section*{Acknowledgments} We thank Prof. Luis Fernando Mello for calling our attention to the system studied here. The first author was partially supported by the grant 2017/00136-0, S\~ao Paulo Research Foundation (FAPESP). \end{document}
\begin{document} \title{Advantages of Unfair Quantum Ground-State Sampling} \author{Brian Hu Zhang} \affiliation{Stanford University, Stanford, California 94305, USA} \author{Gene Wagenbreth} \affiliation{Cray, Seattle, WA 98164, USA} \author{Victor Martin-Mayor} \affiliation{Departamento de F\'isica Te\'orica I, Universidad Complutense, 28040 Madrid, Spain} \affiliation{Instituto de Biocomputaci\'on y F\'isica de Sistemas Complejos (BIFI), Zaragoza, Spain} \author{Itay Hen} \email{itayhen@isi.edu} \affiliation{Information Sciences Institute, University of Southern California, Marina del Rey, California 90292, USA} \affiliation{Department of Physics and Astronomy and Center for Quantum Information Science \& Technology, University of Southern California, Los Angeles, California 90089, USA} \date{\today} \begin{abstract} The debate around the potential superiority of quantum annealers over their classical counterparts has been ongoing since the inception of the field. Recent technological breakthroughs, which have led to the manufacture of experimental prototypes of quantum annealing optimizers with sizes approaching the practical regime, have reignited this discussion. However, the demonstration of quantum annealing speedups remains to this day an elusive albeit coveted goal. We examine the power of quantum annealers to provide a different type of quantum enhancement of practical relevance, namely, their ability to serve as useful samplers from the ground-state manifolds of combinatorial optimization problems. We study, both numerically by simulating stoquastic and non-stoquastic quantum annealing processes, and experimentally, using a prototypical quantum annealing processor, the ability of quantum annealers to sample the ground-states of spin glasses differently than thermal samplers. 
We demonstrate that i) quantum annealers sample the ground-state manifolds of spin glasses very differently than thermal optimizers, ii) the nature of the quantum fluctuations driving the annealing process has a decisive effect on the final distribution, and iii) the experimental quantum annealer samples ground-state manifolds significantly differently than thermal and ideal quantum annealers. We illustrate how quantum annealers may serve as powerful tools when complementing standard sampling algorithms. \end{abstract} \maketitle \section{Introduction} Many problems of practical importance may be cast as a task of finding all the minimizing configurations, or ground-states, of a given cost function. Examples are numerous---among them are SAT filtering\cite{Douglass2015}, hardware fault detection and the verification and validation (V\&V) of safety-critical cyber-physical systems\cite{Pudenz:2013kx}, to mention a few. In the V\&V of safety-critical cyber-physical systems for instance, one is concerned with testing whether a given piece of software contains a bug. This problem can naturally be cast as a constraint satisfaction problem\cite{Pudenz:2013kx} (equivalently, as an optimization problem of the Ising-type) where finding as many of the bugs as possible is critical to the success of the mission. Similar scenarios occur in circuit fault detection where each solution corresponds to a potential discrepancy in the implementation of a circuit and where all discrepancies must be found. The listing of all solutions of a given cost function is, however, generally an intractable task for standard algorithms (it is a problem in the complexity class \#P). This is not only because of the difficulty involved in finding an optimum\cite{papadimitriou2013combinatorial}, but also because of the sheer number of ground-states of the problem which may grow exponentially with input size (a property known to physicists as a non-vanishing entropy density\cite{pauling:35}). 
Furthermore, the energy landscapes of certain cost functions are known to bias heuristic optimizers, as well as provable solvers, towards certain solutions and away from others\cite{bastea:98}. Thus, the importance of sampling from the set of ground-states of intricate cost functions in qualitatively diverse manners is immense---both from the theoretical point of view and for practical reasons. In the context of V\&V for instance, one hopes that employing a suite of qualitatively dissimilar sampling algorithms will unearth nonidentical or even disjoint sets of solutions, leading eventually to the discovery of much larger sets of bugs. Recent technological breakthroughs, which have made experimental programmable quantum annealing (QA) optimizers containing thousands of quantum bits\cite{johnson:11,berkley:13} available, have rekindled the interest in annealers as a revolutionary new approach to finding the minimizing assignments of discrete combinatorial cost functions. Quantum annealers\cite{kadowaki:98,farhi:01} provide a unique approach to finding the ground-states of discrete optimization problems, utilizing gradually decreasing quantum fluctuations to traverse barriers in the energy landscape in search of global optima, a mechanism commonly believed to have no classical counterpart\cite{Finnila1994343,Brooke30041999,kadowaki:98,farhi:01,santoro:02,RevModPhys.80.1061,PhysRevB.39.11828}. In the context of ground-state sampling, quantum annealers thus offer the exciting possibility of discovering minimizing assignments that cannot be reached in practice with standard algorithms, potentially offering unique advantages over traditional algorithms for solving problems of practical importance. Here, we put this hypothesis to the test by directly addressing the question of whether quantum annealers can sample the ground-state set of optimization problems differently than their classical counterparts.
We further examine the potential inherent in quantum annealers to serve as useful tools in practical settings. We demonstrate, both via numerical simulations and experimentally, by testing a $512$-qubit D-Wave Two quantum annealing optimizer\cite{johnson:11,berkley:13}, that quantum annealers not only produce different distributions over the set of ground-states than simulated thermal annealing optimizers, but that they offer an additional dimension of tunability that does not necessarily have a classical counterpart. Finally, we show that when used in conjunction with existing standard ground-state-sampling or solution-counting algorithms, quantum annealers may offer certain unique advantages that may not be otherwise achievable. \section{Sampling the ground-state manifold of spin glasses} Similar to standard classical algorithms, quantum annealers---even ideal fully adiabatic ones held at zero temperature---when tasked with solving optimization problems will generally sample the solution space in a biased manner, producing certain ground-states more frequently than others. Unlike the bias exhibited by thermal algorithms, the uneven sampling of quantum annealers has its origins in the quantum nature of their dynamics: In standard quantum annealing protocols, one engineers a smoothly interpolating Hamiltonian between a simple `driver' Hamiltonian $H_d$ which provides the quantum fluctuations and a classical `problem' Hamiltonian $H_p$ that is diagonal in the computational basis and whose ground-states encode the solutions of an optimization problem \begin{equation}\label{eq:hs} H(s)=(1-s) H_d + s H_p \,, \end{equation} where $s(t)$ is a parameter varying smoothly with time from $0$ at $t=0$ to $1$ at the end of the algorithm, at $t=\mathcal{T}$ [the type of problem and driver Hamiltonians we shall consider are given in Eqs.~\eqref{eq:Hp},~\eqref{eq:hdtf} and~\eqref{eq:def-hdnsq} in the next section].
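As a concrete illustration of the interpolation in Eq.~(\ref{eq:hs}), the following sketch builds $H(s)$ for a toy system of a few qubits (far below the $504$-qubit scale studied in this work); the helper names are our own, not part of any established API.

```python
import numpy as np

# Toy sketch of Eq. (1): H(s) = (1-s) H_d + s H_p for a small n-qubit system,
# with a transverse-field driver and an Ising problem Hamiltonian.
def pauli(op, i, n):
    """Embed a single-qubit Pauli X or Z at site i of an n-qubit register."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    single = sx if op == "x" else sz
    out = np.eye(1)
    for j in range(n):
        out = np.kron(out, single if j == i else np.eye(2))
    return out

def interpolating_hamiltonian(n, J, s):
    """H(s) = (1-s) H_d + s H_p, with H_d = -sum_i X_i and
    H_p = sum_{i<j} J[i][j] Z_i Z_j."""
    H_d = -sum(pauli("x", i, n) for i in range(n))
    H_p = sum(J[i][j] * pauli("z", i, n) @ pauli("z", j, n)
              for i in range(n) for j in range(i + 1, n) if J[i][j] != 0)
    return (1 - s) * H_d + s * H_p

# A 3-qubit chain: at s=1 the Hamiltonian is classical (diagonal), so its
# lowest eigenvalue is the minimum of the Ising cost function.
J = [[0, 1, 0], [0, 0, -1], [0, 0, 0]]
H1 = interpolating_hamiltonian(3, J, 1.0)
```

Tracking the instantaneous ground-state of $H(s)$ as $s$ grows from $0$ to $1$ is precisely the annealing protocol described above; at $s=1$ the spectrum is that of $H_p$ alone.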
In the presence of degeneracy in the ground-state manifold of the problem Hamiltonian (which corresponds to multiple minimizing assignments for the cost function), the adiabatic theorem ensures that the state reached at the end of the adiabatic evolution, in the limit $s \to 1$, is still uniquely defined. At the end of the QA evolution, the final state corresponds to a specific linear combination of the classical ground-states \begin{equation}\label{eq:def-anneal-GS} |\psi_{GS}\rangle = \sum_{i=1}^{D} c_i | \phi_i \rangle \end{equation} where $\{|\phi_1 \rangle,|\phi_2 \rangle,\ldots,|\phi_D \rangle\}$ is the set of $D$ classical ground-states, or minimizing configurations, of the optimization problem (see Fig.~\ref{fig:example} for an example). The $\{|c_i|^2\}$ are the probabilities for obtaining each of these classical ground-states upon computational-basis measurements at the end of the anneal. These define a probability distribution over the ground-state manifold and depend not only on the structure of the problem Hamiltonian but also on the nature of the quantum fluctuations provided by the driver Hamiltonian (from the point of view of quantum perturbation theory, in the simplest case where first-order perturbation theory breaks all degeneracies, the ground-state in Eq.~\eqref{eq:def-anneal-GS} is merely the ground-state of a restricted driver Hamiltonian, specifically, the driver $H_d$ projected onto the subspace spanned by the ground-states of $H_p$). \begin{figure} \caption{\textbf{Ten lowest energy levels of an $8$-qubit Hamiltonian interpolating between a transverse-field driver Hamiltonian and a randomly generated Ising Hamiltonian.} The solid black line indicates the energy of the instantaneous ground-state. The dashed red and dotted green lines indicate excited states that lead to final ground-states and final excited states, respectively.
{\bf Inset:} The probabilities for obtaining the various classical ground-states upon measuring the quantum ground-state at the end of the evolution in the computational basis.} \label{fig:example} \end{figure} Since the distribution of minimizing configurations (henceforth, the ground-state distribution, or GSD) generated by a quantum annealer is \emph{intrinsically quantum}, a possibility arises that some quantum GSDs cannot be efficiently generated by classical samplers. Moreover, that the choice of driver Hamiltonian $H_d$ determines these GSDs, offers a tunable handle, or an extra knob, that potentially produces a continuum of probability distributions over the ground-state configurations. It is therefore plausible to assume that certain classically suppressed configurations, i.e., solutions that have very low probabilities of being found via thermal or other classical processes, may have high probabilities of being found or sampled with suitable choices of driver Hamiltonians\cite{q-sig,q-sig2,matsuda,exponentiallyBiased}. In such cases, quantum annealers may be used to replace or \emph{complement} classical samplers, giving rise to a novel form of quantum enhancements. To test whether quantum annealers indeed provide a potentially powerful platform for achieving quantum enhancements for the counting or listing of solutions of hard optimization problems, we study in detail their capabilities to sample ground-state configurations differently than their classical counterparts. As a testbed, we consider spin glasses: disordered, frustrated spin systems\cite{young:98} that may be viewed as prototypical classically hard (also called NP-hard) optimization problems\cite{barahona:82} focusing, for reasons that will become clear later, on problems whose ground-state configurations have been computed in advance. 
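The perturbative picture just described can be made concrete numerically. The sketch below is a toy example of our own (assuming the simplest case, in which first-order degenerate perturbation theory lifts the degeneracy): it diagonalizes the driver restricted to the classical ground-state manifold, here for a frustrated three-spin triangle.

```python
import numpy as np
from itertools import product

def perturbative_gsd(hp_diag, H_d, tol=1e-9):
    """First-order degenerate perturbation theory: restrict the driver to the
    classical ground-state manifold of H_p and diagonalize. Returns the
    manifold's basis-state indices and the probabilities |c_i|^2 of Eq. (2)."""
    gs = np.flatnonzero(hp_diag - hp_diag.min() < tol)  # degenerate manifold
    _, vecs = np.linalg.eigh(H_d[np.ix_(gs, gs)])       # restricted driver
    return gs, np.abs(vecs[:, 0]) ** 2

# Toy problem: an antiferromagnetic triangle H_p = s1 s2 + s1 s3 + s2 s3,
# whose six frustrated configurations form the ground-state manifold, with
# the transverse-field driver H_d = -sum_i sigma_i^x.
n = 3
configs = [np.array([1 - 2 * b for b in bits]) for bits in product((0, 1), repeat=n)]
hp_diag = np.array([float(s[0] * s[1] + s[0] * s[2] + s[1] * s[2]) for s in configs])
H_d = np.zeros((2 ** n, 2 ** n))
for a in range(2 ** n):
    for k in range(n):
        H_d[a, a ^ (1 << k)] = -1.0                     # single spin flip
gs, probs = perturbative_gsd(hp_diag, H_d)
```

For this highly symmetric example the restricted driver is (minus) the adjacency matrix of a six-cycle, so the perturbative ground-state samples all six classical ground-states uniformly; for generic instances the $|c_i|^2$ are unequal.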
These will be used to test the performance of classical thermal annealers, comparing their outcomes against the GSDs produced by ideal zero-temperature stoquastic as well as non-stoquastic quantum annealers. We shall also compare the produced GSDs to those obtained by a prototypical experimental quantum annealer---the D-Wave Two (DW2) processor\cite{johnson:11,Bunyk:2014hb}, which consists of an array of superconducting flux qubits designed to solve Ising model instances defined on the hardware graph via a gradually decreasing transverse field Hamiltonian (further details, including a visualization of the Chimera graph and the annealing schedule used to interpolate between $H_d$ and $H_p$, are provided in the Supplementary Information). These comparisons will provide insight into the potential computational power inherent in quantum devices to assist traditional algorithms in finding all (or as many as possible) minimizing configurations of discrete optimization problems. \section{Results: Thermal vs. quantum ground-state sampling} \subsection{Setup\label{sec:setup}} We generate random spin glass instances for which the enumeration of minimizing configurations is a feasible task. This a priori requirement will allow us to properly evaluate the sampling capabilities of the tested algorithms in an unbiased way. To do that, we consider optimization problems whose cost function is of the form: \begin{equation} \label{eq:Hp} H_{p}=\sum_{\langle ij\rangle} J_{ij} s_i s_j \,. \end{equation} The Ising spins, $s_i=\pm 1$, are the variables to be optimized over, and the set of parameters $\{J_{ij}\}$ determines the cost function. To experimentally test the generated instances, we take $\langle ij\rangle$ to sum over the edges of an $N=504$-qubit Chimera graph---the hardware graph of the DW2 processor\cite{Bunyk:2014hb}. The precise enumeration of all minimizing configurations of spin glass instances of more than a few dozen spins is generally an intractable task.
We overcome this difficulty here by generating problem instances with \emph{planted solutions}\cite{hen:15,king:15}. As we discuss in the {\bf{Methods}} section, the structure of planted-solution instances allows for the development of a constraint-solving bucket algorithm capable of enumerating all minimizing assignments. By doing so, we obtain about $2000$ $504$-bit spin glass optimization instances, each with fewer than $500$ minimizing configurations, and for which we know {\emph{all}} ground-state configurations. This enables the accurate evaluation of the distributions of success probabilities of \emph{individual minimizing configurations}. We compare the GSDs as obtained by several different algorithms: \begin{enumerate} \item \emph{Simulated annealing (SA)}---This is a well-known, powerful and generic heuristic solver\cite{kirkpatrick:83}. Our SA algorithm uses single spin-flip Metropolis updates with a linear profile of inverse temperatures $\beta=T^{-1}$, going from $\beta_{\min}=0$ to $\beta_{\max}=20$ (with $\beta$ updated after every Metropolis sweep over the lattice spins). Figure~\ref{fig:sa_tts} provides the SA results for a couple of typical instances differing in the number of minimizing configurations. The figure shows the average time to find each individual minimizing bit assignment, plotted as a function of the number of sweeps on a log-log scale. For any fixed number of sweeps, we find that the probability of obtaining certain ground-state configurations may vary by orders of magnitude. At first glance, the `unfair' sampling of SA may seem to contradict the Boltzmann distribution, which (in accordance with the assumption of equal a priori probability) prescribes the same probability to same-energy configurations for thermalized systems. However, we note that when SA is used as an optimizer (as it is here), the number of sweeps is not large enough for thermalization to take place.
When SA is used as an optimizer, the number of Metropolis sweeps is chosen such that on average the time to find a ground-state is minimized. The minimum time-to-solution (for individual solutions) is evident in Fig.~\ref{fig:sa_tts}, as is the convergence of all individual success probabilities to a single value in the limit of long annealing times, consistent with equipartition. \item\emph{Ideal zero-temperature quantum annealer with a transverse field driver (TFQA)}---We also consider an ideal, zero-temperature, fully adiabatic quantum annealer with a transverse-field driver Hamiltonian, namely \begin{equation}\label{eq:hdtf} H_d^{\text{TF}} =- \sum_i \sigma_i^x \,, \end{equation} where $\sigma_i^x$ is the Pauli X spin-$1/2$ matrix acting on spin $i$. The adiabatic process interpolates linearly between the above driver and the final Ising Hamiltonian (Eq.~\eqref{eq:Hp}) as described in Eq.~(\ref{eq:hs}). The algorithm we devise to infer the quantum ground-state is discussed in detail in {\bf Methods}. \item\emph{Ideal zero-temperature quantum annealer with a non-stoquastic driver (NSQA)}---As already discussed above, quantum annealers offer more control than thermal annealers over the generated fluctuations. This is because, unlike thermal fluctuations, quantum fluctuations can be \emph{engineered} (see, e.g., Refs.\cite{dickson:12,crosson:15}). One thus expects different driver Hamiltonians to yield different probability distributions over the ground-state manifold. Of particular interest are the so-called \emph{non-stoquastic} driver Hamiltonians, which cannot be efficiently simulated by classical algorithms. As a test case, we consider an additional quantum annealing process driven by: \begin{equation}\label{eq:def-hdnsq} H_d^{\text{NS}} =- \sum_i \sigma_i^x +\sum_{\langle i j \rangle} \tilde{J}_{ij} \sigma_i^x \sigma_j^x \,.
\end{equation} To ensure that the driver is non-stoquastic, we choose the couplings $\tilde{J}_{ij}$ to be the same as the $\sigma^z_i \sigma^z_j$ couplings of the problem Hamiltonian, but with an arbitrarily chosen sign. \item\emph{Experimental D-Wave Two processor (DW2)}---We also feed the generated instances to the putative DW2 quantum annealing optimizer. This device is designed to solve optimization problems by evolving a known initial configuration---the ground-state of a transverse field---towards the ground-state of the classical Ising-model Hamiltonian of Eq.~(\ref{eq:Hp}). Each problem instance was run on the annealer for a minimum of $10^4$ anneals, with each anneal lasting $20$ to $40\,\mu$s, for a total of more than $10^{7}$ anneals. Each anneal ends with a measurement in the computational basis yielding either an excited state or a classical ground-state, which is recorded and later used to construct a GSD. To overcome the inhomogeneity of the processor as well as other systematic errors, each anneal is carried out with a randomly generated gauge (see Ref.\cite{q108} for more details). \end{enumerate} \subsection{Distinguishing Probability Distributions} We test whether the different algorithms that we consider produce significantly different GSDs, or probability distributions over the ground-state manifold, on the various spin glass instances. To distinguish between two distributions generated by two methods, at least one of which is empirically estimated via experiment, we use a bootstrapped version of the Kolmogorov-Smirnov (KS) test (see {\bf Methods}). Table~\ref{table:properties} summarizes the results of the statistical tests, listing the fraction of instances with different GSDs between any two tested optimizers.
As is evident from the table, the distributions generated by the various algorithms are in general significantly different from one another---pointing to presumably different physical mechanisms generating them, namely thermal or quantum fluctuations of different sources. As we discuss later on, these pronounced differences in the GSDs can allow quantum annealers to serve as potentially powerful tools, when combined with standard techniques, for finding all (or as many as possible) minimizing configurations of combinatorial optimization problems. \begin{table}[hp] \centering \begin{tabular}{|c||c|c|c|c|} \hline & SA & TFQA & NSQA & DW2 \\\hline \hline SA & - & $67\%$ & $65\%$ & $93\%$ \\\hline TFQA & & - & $57\%$ & $99\%$ \\\hline NSQA & & & - & $99\%$ \\\hline DW2 & & & & - \\\hline \end{tabular} \caption{{\bf Fraction of instances with statistically significant differences in GSD between any two optimizers. Here, the $p$-value is set at $p=0.01$.}} \label{table:properties} \end{table} To quantify the utility of using a combination of two (or more) methods for finding as many minimizing configurations as possible, we define the `bias' $b({\bf p})$ of a GSD as \begin{equation} b({\bf p})=\frac{D}{2(D-1)}\sum_{i=1}^{D} \left| p_i-1/D \right|\,, \end{equation} where we denote probability distributions by ${\bf p}$, and where $p_i$ is the probability of obtaining ground-state $i$. Here, a flat distribution corresponds to $b({\bf p})=0$, whereas an extremely biased one, for which all samples are copies of the same configuration, yields $b({\bf p})=1$. If $n$ applications of one optimization method yield a probability distribution ${\bf p}^{(1)}$ and $n$ applications of another yield ${\bf p}^{(2)}$, then a combined effort of $n/2$ samples from each will yield a GSD with a bias $b({\bf \bar{p}})$ where \hbox{${\bf \bar{p}} = \frac1{2}\left({\bf p}^{(1)}+{\bf p}^{(2)} \right)$}.
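In code, the bias measure and the bias of an equal-effort combination of two samplers are a direct transcription of the definitions above:

```python
def bias(p):
    """Bias b(p) = D/(2(D-1)) * sum_i |p_i - 1/D| of a GSD over D ground-states:
    0 for a flat distribution, 1 when all samples hit a single configuration."""
    D = len(p)
    return D / (2.0 * (D - 1)) * sum(abs(pi - 1.0 / D) for pi in p)

def combined_bias(p1, p2):
    """Bias of the GSD obtained by drawing half the samples from each method."""
    return bias([(a + b) / 2.0 for a, b in zip(p1, p2)])
```

For example, two maximally biased but complementary samplers, ${\bf p}^{(1)}=(1,0)$ and ${\bf p}^{(2)}=(0,1)$, combine to a perfectly flat GSD with zero bias.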
We can therefore quantitatively measure the utility of a combination of two (or more) methods by comparing the bias $b({\bf \bar{p}})$ against that of any one method $b({\bf p}^{(1)})$ [or $b({\bf p}^{(2)})$]. The smaller the bias of the combination, the greater the utility of using the two methods in conjunction. Let us next examine the differences between specific pairs of methods in more detail. \begin{figure} \caption{{\bf Simulated annealing average time to solution---the ratio of success probability to number of sweeps, vs. number of sweeps for different solutions (log-log scale)}. Each line represents a different solution. {\bf Left:} A $504$-bit instance with $16$ solutions. {\bf Right:} A $504$-bit instance with $4$ solutions. For most instances, certain solutions are reached considerably sooner than others.} \label{fig:sa_tts} \end{figure} \subsection{SA vs. TFQA and NSQA} We first compare the GSDs obtained by thermal simulated annealing (SA) against those generated by the transverse-field and the non-stoquastic quantum annealers. A bootstrapped KS test to decide whether they are significantly different suggests a difference that is significant at the $p < 0.01$ level in $67\%$ of the instances with TFQA and $65\%$ of the instances with NSQA. These results are summarized in Fig.~\ref{fig:pvalues}. In the vast majority of cases, there is a qualitative difference between the results produced by QA and those produced by SA. \begin{figure} \caption{{\bf The $p$-values generated by the Kolmogorov-Smirnov tests to quantify the differences between pairs of algorithms}.} \label{fig:pvalues} \end{figure} Figure~\ref{fig:pdgd} depicts the GSDs of several representative instances, illustrating that there is little or no relationship between the thermal and the TFQA or NSQA methods. In the left panel we find an instance for which both SA and TFQA produce similar GSDs. The middle and right instances show no clear relationship between SA and the other algorithms.
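A generic sketch of a bootstrapped two-sample Kolmogorov-Smirnov comparison of the kind used above (the exact procedure is detailed in {\bf Methods}; this toy version, with names of our own choosing, estimates the null distribution of the KS statistic by resampling from the pooled data):

```python
import random

def ks_stat(x, y):
    """Maximum distance between the two empirical CDFs."""
    xs, ys = sorted(x), sorted(y)
    d = 0.0
    for t in xs + ys:
        fx = sum(v <= t for v in xs) / len(xs)
        fy = sum(v <= t for v in ys) / len(ys)
        d = max(d, abs(fx - fy))
    return d

def bootstrap_ks_pvalue(x, y, n_boot=1000, seed=0):
    """Fraction of pooled resamples whose KS statistic reaches the observed one."""
    rng = random.Random(seed)
    observed = ks_stat(x, y)
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_boot):
        bx = [rng.choice(pooled) for _ in x]
        by = [rng.choice(pooled) for _ in y]
        if ks_stat(bx, by) >= observed:
            hits += 1
    return hits / n_boot
```

A small $p$-value indicates that the two sets of samples are unlikely to have come from the same underlying GSD.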
\begin{figure*} \caption{{\bf Three representative GSDs for simulated annealing (SA - blue), ideal transverse-field quantum annealer (TFQA - red) and the D-Wave Two experimental processor (DW2 - yellow).} In the left instance, probabilities for obtaining the various ground-states predict that all solutions are about equally likely. In the middle instance, we observe that those ground-states that our analytic TFQA predicts should appear more often, and do indeed appear more often in the experimental DW2 data. In the right instance, there is no clear relationship between the various algorithms.} \label{fig:pdgd} \end{figure*} We now ask whether using a quantum annealer \emph{together} with a simulated annealer has merit. To that aim, we compare the SA bias $b_{\text{SA}}=b({\bf p}_{\text{SA}})$ against the biases of the combinations SA with TFQA and SA with NSQA. The results are summarized in Fig.~\ref{fig:SAbiasPlots}. In these scatter plots, any data point below the $y=x$ line indicates an advantage to using `assisted' ground-state sampling driven by quantum samplers. As can be clearly observed, for most of the instances the bias of the combination is significantly lower than that of using SA alone. The median SA bias of $\langle b_{\text{SA}} \rangle =0.10(1)$ drops to $\langle b_{\text{SA+TFQA}} \rangle =0.075(0)$ and $\langle b_{\text{SA+NSQA}} \rangle =0.074(2)$ when assisted with TFQA and NSQA, respectively. Also noticeable is the $y=x/2$ line; data points on this line are obtained whenever SA is used in conjunction with a method that yields a flat distribution, in which case the bias is halved. We find, perhaps not surprisingly, that ideal zero-temperature annealing yields flat distributions for many of the tested instances [see also Fig.~\ref{fig:QAbiasPlots}(left)]. This is somewhat analogous to an ideal SA process where full thermalization is reached, in which case the generated GSDs would all be balanced. 
\begin{figure} \caption{{\bf Scatter plots of biases of the tested instances' GSDs as obtained by SA vs. a combination of SA with ideal quantum annealers.} Left: SA vs. SA+TFQA. Right: SA vs. SA+NSQA. In the majority of cases, a combination of the methods leads to a smaller overall bias, i.e., a lesser degree of unfairness. The solid red line is the `equal bias' $y=x$ line, whereas the dashed green $y=x/2$ line is the bias obtained when SA is combined with a flat, unbiased GSD, in which case the bias is halved.} \label{fig:SAbiasPlots} \end{figure} \begin{figure} \caption{{\bf Scatter plots of biases of the tested instances' quantum GSDs.} Left: Biases of TFQA annealing vs. NSQA annealing, showing that the non-stoquastic driver is slightly more biased in general than the transverse field driver (not shown in the figure are the completely unbiased instances, which constitute about $57\%$ of the instances). Right: Biases of the TFQA GSDs vs. those of the combined application of TFQA and NSQA, indicating that use of an additional non-stoquastic driver considerably reduces the bias of the GSDs.} \label{fig:QAbiasPlots} \end{figure} \subsection{TFQA vs. NSQA} Next, we compare the GSDs produced by the ideal zero-temperature quantum annealers, namely the transverse-field annealer (TFQA) against the non-stoquastic annealer (NSQA). As mentioned earlier, for more than half of the instances ($1105$ out of $1909$) the quantum ground-states are found to be `flat' for both annealers (the median bias for both processes was found to be $<10^{-4}$). Discounting those, we find in general that the chosen driver has a decisive effect on the GSDs. The GSDs of representative instances are shown in Fig.~\ref{fig:drivercomp}, showcasing the tunability of the probability distributions with respect to the `knob' of quantum fluctuations.
For the instance on the left, both drivers sample the ground-state manifold similarly; for the middle and right instances, we observe that the driver has a profound effect on the shape of the distribution. For a fraction of the instances ($<5\%$), ground-states that had zero probability for TFQA had strictly positive probability when NSQA was used. That is, the driver often has an effect not only on the probabilities of various states but also on which states have nonzero amplitudes. \begin{figure*} \caption{{\bf Three sample GSD comparisons between the two drivers: the stoquastic TFQA (red) and non-stoquastic NSQA (yellow).} For the instance on the left both drivers sample the ground-state manifold similarly; for the middle instance less probable configurations for one method become more probable in the other and vice versa; on the right, we find an instance for which states with zero probability of occurring with one type of quantum fluctuations have distinct positive probabilities of occurring in the other.} \label{fig:drivercomp} \end{figure*} Comparing the bias associated with use of a transverse-field quantum annealer with and without the aid of a non-stoquastic driver, we find that the combination of annealers is far less biased than the transverse-field annealer alone. On average, as shown in Fig.~\ref{fig:QAbiasPlots}(right), NSQA distributions are slightly more biased than TFQA distributions. Nonetheless, as is indicated in the figure, which shows a scatter plot of the bias $b({\bf p}_{\text{TFQA}})$ against the biases of the combination of TFQA with NSQA, there is merit in `assisted' non-stoquastic ground-state sampling. \subsection{Experimental DW2 vs. SA and TFQA} Comparing the distributions generated by the experimental quantum annealer DW2 against SA and against an ideal transverse field quantum annealer (TFQA), we find, as in the other comparisons, only a weak relationship between the output distributions. 
As summarized in Fig.~\ref{fig:pvalues}, in almost all instances the KS test yielded a significant difference between the GSDs (see also Table~\ref{table:properties}). Figure~\ref{fig:pdgd} shows the GSDs for three representative instances. In the left panel, we find that DW2, as well as SA and TFQA, yields an approximately flat probability distribution over the various ground-states. In the middle panel, we find that those ground-states that TFQA predicts will appear more often do indeed appear more often in the D-Wave experimental GSD. In the right panel, no apparent relationship is found between the three GSDs. To understand whether there is merit in using a DW2 processor alongside an SA algorithm for the purpose of producing a more balanced distribution of ground-states, we compare the bias of using SA alone vs. using SA together with DW2. The results are summarized in Fig.~\ref{fig:SADW2}. As the left panel shows, SA and DW2 produce biases with similar distributions. Interestingly, we find that while for some instances use of both methods does reduce the bias, for many others it does not (right panel). When assisted with the DW2 experimental annealer, the median SA bias of $\langle b_{\text{SA}} \rangle =0.10(1)$ remains at $\langle b_{\text{SA+DW2}} \rangle =0.10(1)$. As we shall discuss next in more detail, while the significant differences in the GSDs seem to bode well for the utility of DW2 in generating classically suppressed minimizing configurations with high probabilities, the origin of these differences is unclear. \begin{figure} \caption{{\bf Scatter plots of SA biases vs. DW2 biases and a combination of SA and DW2.} Left: The scatter plot indicates that the biases of the DW2 processor are similar in nature to those generated by thermal annealing. Right: Merit of using SA in conjunction with an experimental quantum annealer. Here, results are rather mixed.
While for a large portion (over 50\%) of the instances a combination of the methods leads to a smaller overall bias, i.e., a lesser degree of unfairness, for the rest of the instances the bias is larger. The solid red line is the `equal bias' $y=x$ line, whereas the dashed green $y=x/2$ line is the bias obtained via a combination of the horizontal GSD with a flat unbiased GSD.} \label{fig:SADW2} \end{figure} \section{Discussion} In this work we studied the capabilities of quantum annealers to sample the ground-state manifolds of degenerate spin glass optimization problems. We addressed the question of how differently ideal zero-temperature and experimental quantum annealers sample the minimizing configurations of optimization problems than standard algorithms, specifically thermal annealers. Examining both stoquastic and non-stoquastic quantum fluctuations, we illustrated that quantum annealers produce, in general, qualitatively very different probability distributions than classical annealers, and furthermore that the final distribution depends heavily on the nature of the quantum fluctuations. Moreover, we have shown that using quantum annealers alongside thermal algorithms produces, in general, flatter distributions of ground-states; that is, a combined use is significantly more helpful in generating more ground-states than using classical algorithms alone. An earlier work by Matsuda et al.~\cite{matsuda} is worth mentioning here, in which a five-qubit example was studied to show that use of a transverse-field QA may result in uneven sampling of a classical ground-state manifold. Specifically, the authors found that some classical ground-state configurations were unreachable via that driver, whereas more sophisticated drivers, which ensured that any two states in the computational basis had non-vanishing matrix elements, resolved the matter.
A major caveat pointed out by the authors of Ref.~\cite{matsuda}, though, was that the non-locality of their enhanced driver Hamiltonian made it very difficult to study numerically at large system sizes (in addition, non-local terms are also known to be problematic to implement physically). We also note an experimental study of the newer-generation DW2X chip~\cite{exponentiallyBiased} where sampling biases have been observed as well. Our study indeed demonstrates that modifying the driver may be useful. In contrast to the above studies, we have studied non-stoquastic yet local (i.e., two-body) driver Hamiltonians, which are as such physically more reasonable. While we find that non-stoquastic local drivers do indeed produce different GSDs than transverse-field drivers, the manner in which they sample the ground-state manifolds is not found to be particularly more uniform than with the standard driver. While we have not explicitly discussed in this work the \emph{speed} with which these distributions are obtained, we have demonstrated, for the first time to our knowledge, that annealers driven by quantum fluctuations may be used to assist existing traditional algorithms in finding all, or as many as possible, minimizing configurations of optimization problems. That quantum annealers may obtain `classically rare' minimizing configurations has numerous immediate applications in various fields, such as k-SAT filtering, detecting hardware faults and verification and validation (V\&V). An interesting question that arises at this stage, and which we believe warrants further investigation, is how one should choose the strength or nature of the quantum fluctuations to boost the probabilities of classically suppressed configurations. Presumably, algorithms that adaptively control the nature of the driver terms based on repeated applications of the annealing process would be an interesting path to explore (see, e.g., Ref.~\cite{dickson:12}).
Another question that remains open has to do with the nature of the GSDs generated by the DW2 experimental quantum annealer. As we have shown, for most instances, the experimentally generated distributions are only weakly correlated with the classical thermal ones, nor have they been found to correlate with the distributions obtained by the zero-temperature ideal transverse-field quantum annealers. The dominant mechanisms that determine the distributions emerging from the D-Wave devices remain unknown, however. Specifically, the question of whether the yielded distributions are `quantum' in nature, i.e., dominated by the quantum fluctuations, or classical, i.e., set by thermal fluctuations or simply by intrinsic control errors (ICE), is still left unanswered. One plausible explanation is that the generated distributions are a result of a combination of thermal and quantum fluctuations, given the finite temperature of the device. A recent conjecture\cite{QBM} suggests that these experimental devices in fact sample from the quantum ground-state at a point midway through the anneal, at a so-called `freeze-out' point (or region) where thermal fluctuations become too weak to generate any changes to the state. Another plausible scenario is that the generated distributions are determined by intrinsic control errors---those errors that stem from the analog nature of the processor and as such may have a decisive effect on the resulting ground-state distributions\cite{martinMayor:15}. If the latter conjecture is found to be true, then it could be that these differences in GSDs are easily reproducible by classical means by simply injecting random errors into the problem parameters. Finally, this study raises an interesting question concerning the computational complexity of faithfully sampling the quantum ground-states of spin glass Hamiltonians.
While for (ideal) quantum annealers the successful sampling of the quantum ground-state consists solely of performing an adiabatic anneal followed by a computational basis measurement, classical algorithms must follow a different path to accomplish the same task. Since non-stoquastic Hamiltonians cannot, in general, be efficiently simulated, the prevailing algorithm to date for sampling the quantum ground-state of such Hamiltonians is one which uses degenerate perturbation theory. The latter technique consists of first diagonalizing the problem Hamiltonian to find all its (classical) ground-states, followed by an application of degenerate perturbation theory (up to $N$-th order in the worst case). This last step consists of tracking, in the worst case, all $N$-spin paths from any one classical ground-state to any other. At least naively, the computational complexity involved in doing so scales as $N!$. Thus, the problem of sampling from quantum ground-states generated by non-stoquastic driver Hamiltonians may eventually prove to be a path to quantum enhancements of a novel kind. \section*{Methods} To compare the capabilities of quantum annealers tasked with identifying the individual ground-state configurations of a given problem against those of classical algorithms, we first generate random spin glass instances for which the enumeration of the minimizing configurations is feasible. This a priori requirement allows for the proper evaluation of the sampling capabilities of the tested algorithms in an unbiased and consistent manner. \subsection{Generation of Instances} For the generation of instances, in this work we have chosen to study problems constructed around `planted solutions'---an idea borrowed from constraint satisfaction (SAT) problems\cite{Barthel:2002tw,Krzakala:2009qo}. In these problems, the planted solution represents a ground-state configuration of Eq.~\eqref{eq:Hp} that minimizes the energy and is known in advance.
The Hamiltonian of a planted-solution spin glass is a sum of terms, each of which consists of a small number of connected spins, namely, $H=\sum_j H_j$~\cite{hen:15}. Each term $H_j$ is chosen such that one of its ground-states is the planted solution. It follows that the planted solution is also a ground-state of the total Hamiltonian, and its energy is the ground-state energy of the Hamiltonian. Knowing the ground-state energy in advance circumvents the need to verify it using exact (provable) solvers, which rapidly become too expensive computationally as the number of variables grows. The interested reader will find a more detailed discussion of planted Ising problems in Ref.~\cite{hen:15}. As we show next, studying this type of problem also allows us to devise a practical algorithm capable of finding all minimizing configurations of the generated instances. \subsection{Enumeration of Minimizing Configurations} The enumeration of all minimizing configurations of a given optimization problem is a difficult task in the general case, belonging to the complexity class $\#P$. The exhaustive search for all solutions of an $N$-spin Ising problem becomes infeasible for $N>40$, as the search space grows exponentially with the size of the problem. To address this difficulty, we use the fact that our generated instances consist of a sum of terms, each of which has all global minimizing configurations among its own ground-states (such Hamiltonians are commonly referred to as frustration-free). To enumerate all minimizing configurations, we implement a form of the `bucket' algorithm~\cite{dechter}, designed to eliminate variables one at a time so as to perform an exhaustive search efficiently (for a detailed description of the algorithm, see the Supplementary Information).
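The essence of the planted-solution construction described in the previous subsection can be illustrated with a minimal sketch. The function names below are ours, and this toy construction---in which every coupling is individually satisfied by the planted state---is far simpler than the frustrated-loop instances of Ref.~\cite{hen:15}; it only demonstrates why the planted state is guaranteed to be a ground state of the total Hamiltonian.

```python
import random

def plant_instance(n, m, seed=0):
    """Toy planted-solution Ising instance (illustrative only).
    Each coupling J_ij = -s*_i s*_j is individually minimized by the
    planted state s*, so s* is a ground state of every term H_j and
    hence of the total Hamiltonian H = sum_j H_j."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]      # planted state s*
    couplings = {}
    while len(couplings) < m:
        i, j = rng.sample(range(n), 2)
        couplings[(min(i, j), max(i, j))] = -s[i] * s[j]
    return s, couplings

def energy(state, couplings):
    """Ising energy sum_ij J_ij s_i s_j (no local fields in this toy)."""
    return sum(J * state[i] * state[j] for (i, j), J in couplings.items())

s_star, J = plant_instance(20, 40)
# The planted state attains the minimum possible energy, -|couplings|,
# so the ground-state energy is known in advance by construction.
assert energy(s_star, J) == -len(J)
```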
Implementing the bucket algorithm and running it on the randomly generated planted-solution instances, discarding instances with more than $500$ solutions, yielded the histogram depicted in Fig.~\ref{fig:nsol}, which shows the number of problem instances used in this study versus the number of ground-states. \begin{figure} \caption{{\bf Histogram of number of problem instances vs. number of ground-states.} The distribution of the number of solutions as a function of clause density, after discarding instances with more than $500$ solutions.} \label{fig:nsol} \end{figure} \subsection{Calculation of the Quantum Ground-state} Here we review the algorithm used to compute the `quantum ground-state' of $H_p$, namely, the $s$-dependent ground-state in the limit $s\to 1$ (from below) of the $s$-dependent Hamiltonian, \begin{equation} H(s)=(1-s) H_d + s H_p \,. \end{equation} We consider an ideal zero-temperature adiabatic evolution that starts with the driver $H_d$ (e.g., a transverse-field Hamiltonian) at $s=0$ and ends with the problem Hamiltonian $H_p$ at $s=1$, where the problem Hamiltonian encodes a classical cost function with $D$ degenerate ground-states. Calculating the quantum ground-state in the presence of a degenerate ground subspace of an $N \approx 500$-qubit problem Hamiltonian is a nontrivial task. In general, it requires the diagonalization of $H(s)$, whose cost scales exponentially with the number of spins. Our workaround combines the Rayleigh-Ritz variational principle---taking advantage of the fact that the sought state is the one of minimal energy---with degenerate perturbation theory (see, e.g., Ref.~\cite{galindo:91}), in which $H_d$ is considered a perturbation to $H_p$.
We begin by observing that perturbation theory separates the quantum ground-state from the other states in the limit $s \to 1$ (as shown for example in Fig.~\ref{fig:example}) using a growing sequence of sets of states $S_k$ and subspaces $V_k$ such that: \begin{enumerate} \item The set $S_1=\{|\phi_1\rangle, |\phi_2\rangle,\ldots |\phi_D\rangle\}$ contains all the classical ground-states. \item Subspace $V_1$ is the linear span of $S_1$ (i.e., the set of all linear combinations of vectors in $S_1$). \item The states $|\phi\rangle \in S_k$ meet three requirements, (a) $|\phi\rangle$ is orthogonal to $V_{k-1}$, (b) $|\phi\rangle$ belongs to the computational basis and hence is an eigenstate of $H_p$, and (c) $|\phi\rangle$ has a non-vanishing matrix element $\langle \phi| H_d^{k-1}|\phi_i\rangle$ for some classical ground-state $|\phi_i\rangle\in S_1$. \item $V_k$ is the linear span of the union $S_1\cup S_2\cup \ldots \cup S_k$. \end{enumerate} For instance, for the transverse-field driver Eq.~\eqref{eq:hdtf}, the set $S_2$ is composed of the states obtained from each classical ground-state by flipping only one bit (these new states must not be ground-states themselves). Additionally, we take advantage of the symmetry of our Hamiltonians $H(s)$ under global bit flip [see Eqs.~\eqref{eq:Hp},~\eqref{eq:hdtf} and~\eqref{eq:def-hdnsq}]. For any vector in the computational basis $|\phi\rangle$, we denote the state obtained by flipping all bits in $|\phi\rangle$ as $|\bar \phi\rangle$, and the symmetric ($+$) and antisymmetric ($-$) components of $|\phi\rangle$ as $(|\phi\rangle \pm |\bar \phi\rangle)/\sqrt{2}$. Similarly, since $H_d$ is also symmetric under global bit-flip, the subspaces $V_k$ split into symmetric and antisymmetric subspaces $V_k=V_k^\text{S}\oplus V_k^\text{A}$. We shall therefore only consider $V_k^\text{S}$ from now on. 
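For the transverse-field driver, the construction of $S_2$ described above---all single-bit-flip neighbors of the classical ground-states that are not themselves ground-states---can be sketched in a few lines. The name `single_flip_set` is ours, and computational basis states are encoded as tuples of 0/1 values:

```python
def single_flip_set(ground_states):
    """Build the set S_2 for the transverse-field driver: all computational
    basis states reachable from some classical ground state by flipping a
    single bit, excluding states that are themselves ground states."""
    S1 = set(ground_states)              # the classical ground-state set S_1
    S2 = set()
    for gs in S1:
        for k in range(len(gs)):
            flipped = gs[:k] + (1 - gs[k],) + gs[k + 1:]
            if flipped not in S1:        # requirement (a): orthogonal to V_1
                S2.add(flipped)
    return S2

# For the two all-equal 3-bit ground states, S_2 consists of the six
# remaining basis states (all one flip away from a ground state).
S2 = single_flip_set([(0, 0, 0), (1, 1, 1)])
assert len(S2) == 6
```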
Perturbation theory prescribes in a well-defined manner the specific linear combination of states in $V_k^\text{S}$ that compose the ground-state to $k$-th order. However, carrying out this prescription explicitly is cumbersome. To simplify the procedure, we turn to the Rayleigh-Ritz variational principle. Restricting our test functions to the subspace $V_k^\text{S}$, the ground-state can also be defined as \begin{equation}\label{eq:RR} |\psi_{GS}(s)\rangle = \arg\min_{\psi\in V_k^\text{S}} \frac{\langle \psi | H(s) | \psi \rangle} {\langle \psi|\psi \rangle} \,, \end{equation} i.e., the state of least energy. Our variational estimate is consistent with $k$-th order perturbation theory because $V_k^\text{S}$ contains all the states relevant to $k$-th order perturbation. The minimizing state $|\psi_{GS}(s)\rangle$ can easily be found using a conjugate gradient method (for real Hamiltonians, such as ours, the search for the minimum in Eq.~(\ref{eq:RR}) can be restricted to real states $|\psi\rangle$; our conjugate gradient follows Appendix A of Ref.~\cite{hen:12}). Our algorithm proceeds as follows. Considering initially only the order $k=1$ in Eq.~\eqref{eq:RR}, we set $s=s_*$, a small number such that the ground-state of $H(s_*)$ is close to the ground-state of $H_d$ and for which $H_p$ serves as a perturbation (we set $s_*=0.1$), and minimize $H(s_*)$. We then verify that the minimizing state is not degenerate. To do so, it is sufficient to run the conjugate gradient minimization routine twice, starting from two independent random states, and to check that the two obtained ground-states are linearly dependent. Finding the same ground-state twice in the presence of degeneracies is a zero-measure event, as the minimization is that of a quadratic function, for which there are no local maxima separating different minima.
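The subspace-restricted minimization of Eq.~\eqref{eq:RR} amounts to diagonalizing $H(s)$ projected onto an orthonormal basis of $V_k^\text{S}$. The following is an illustrative sketch only: a dense eigensolver stands in for the conjugate-gradient routine used in the text, and the function name `restricted_ground_state` is ours.

```python
import numpy as np

def restricted_ground_state(H, B):
    """Rayleigh-Ritz on the subspace spanned by the orthonormal columns of B.
    The minimizer of <psi|H|psi>/<psi|psi> over that subspace is the lowest
    eigenvector of the projected matrix B^T H B (real Hamiltonian assumed,
    as for the models in the text)."""
    Hr = B.T @ H @ B                       # H restricted to the subspace
    vals, vecs = np.linalg.eigh(Hr)        # eigh returns ascending eigenvalues
    return vals[0], B @ vecs[:, 0]         # energy estimate, state in full space

# Tiny check: restricting a diagonal H to its two lowest basis states
# recovers the exact ground-state energy.
H = np.diag([0.0, 1.0, 5.0, 9.0])
B = np.eye(4)[:, :2]
E, psi = restricted_ground_state(H, B)
assert np.isclose(E, 0.0)
```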
If the minimizing state is found to be unique, we proceed to slowly increase $s$ up to $s\to 1$, minimizing the wave-function along the way. If, on the other hand, the degeneracies are not broken at $s=s_*$, we expand the variational search to $V_2^\text{S}$ and then proceed with the numerical annealing all the way to $s\to 1$. The state obtained at the end of this procedure is the quantum ground-state of $H_p$ as driven by $H_d$. In the case of $H_d^{\text{TF}}$ we found that, out of about $1900$ randomly generated spin glass instances, considering $V_1^\text{S}$ sufficed to find the quantum ground-state for $912$ of them, and the inclusion of $V_2^\text{S}$ sufficed for the rest. In the non-stoquastic $H_d^{\text{NS}}$ case, $1508$ instances were solved with $V_1^\text{S}$, and the rest with $V_2^\text{S}$. We note that for those instances in which all degeneracies are removed at the level of $V_1^\text{S}$, the numerical annealing can be dispensed with altogether. This is because the problem Hamiltonian $H_p$, when restricted to $V_1^\text{S}$, is merely the classical ground-state energy times the identity. \subsection{Comparing Empirical Distribution Functions} Here we discuss the statistical comparison of pairs of GSDs over the ground-state manifold of a spin glass with $D$ solutions, where at least one of the distributions is empirically estimated via experiment or simulation. Let the two probability distributions be denoted ${\bf p}^{(1)}$ and ${\bf p}^{(2)}$, where $p^{(1)}_i$ and $p^{(2)}_i$ are the probabilities of obtaining ground-state $i$ with the first and second method, respectively. Let $N_1$ and $N_2$ be the sample sizes (i.e., numbers of anneals) for these two experiments (if one of the methods generates an analytic probability distribution, this corresponds to taking its number of anneals to infinity). Hence, if ground-state $i$ was found $n_i^{(1)}$ times with method one, then $p^{(1)}_i=n_i^{(1)}/N_1$ (with similar definitions for method two).
It follows that the square of the $\chi^2$-distance between distributions ${\bf p}^{(1)}$ and $ {\bf p}^{(2)}$ is\cite{press:95} \begin{equation} \label{eq:chi2-def} \Vert \mathbf{p}^{(1)}-\mathbf{p}^{(2)} \Vert^2_{\chi^2} = \sum_{i=1}^D \frac{\left(\sqrt{\frac{N_1}{N_2}}\,n_i^{(2)}\ -\ \sqrt{\frac{N_2}{N_1}}\,n_i^{(1)}\right)^2}{n_i^{(1)}+n_i^{(2)}}\,. \end{equation} If one of the two probabilities, say $ {\bf p}^{(2)}$, is known analytically, we may just take the limit $N_2\to\infty$ in Eq.~\eqref{eq:chi2-def}, obtaining \begin{equation} \label{eq:chi2-one sided-def} \Vert \mathbf{p}^{(1)}-\mathbf{p}^{(2)} \Vert^2_{\chi^2, \text{one sided}} = \sum_{i=1}^D \frac{ \Big( N_1 p_i^{(2)}\ -\ n_i^{(1)}\Big)^2}{N_1 p_i^{(2)}}\,. \end{equation} The Kolmogorov-Smirnov test proceeds as follows. Here, for the null hypothesis, one assumes that both ${\bf p}^{(1)}$ and $ {\bf p}^{(2)}$ are drawn from the same underlying probability distribution. If the null hypothesis holds, then for large $N_1$, $N_2$ and $D$, the probability distribution for $\Vert \mathbf{p}^{(1)}-\mathbf{p}^{(2)} \Vert^2_{\chi^2}$ can be computed assuming Gaussian statistics. Hence, one would just compare the $\Vert \mathbf{p}^{(1)}-\mathbf{p}^{(2)} \Vert^2_{\chi^2}$ computed from empirical data with the known $\chi^2$-distribution. This comparison yields the probability (or $p$-value) that the null hypothesis holds\cite{press:95}. In our case, neither $N_1$, $N_2$ nor $D$ are exceedingly large. Therefore, rather than relying on a precomputed $\chi^2$-probability, we resort to using a bootstrapped version of the Kolmogorov-Smirnov (KS) test: assuming for the null hypothesis that the probability distributions associated with the two methods are in fact identical, the underlying distribution $p$ is given by: \hbox{$ p_i = (N_1 p^{(1)}_i + N_2 p^{(2)}_i)/(N_1 + N_2) $} (this $p_i$ is our best guess for the probability of getting the $i$-th ground-state given the null hypothesis). 
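The $\chi^2$-distances of Eqs.~\eqref{eq:chi2-def} and \eqref{eq:chi2-one sided-def} are straightforward to evaluate from the raw counts. A minimal sketch (function names are ours; states observed by neither method contribute nothing and are skipped):

```python
import numpy as np

def chi2_distance(n1, n2):
    """Squared chi^2-distance of Eq. (chi2-def) between two empirical
    ground-state histograms n1, n2 (raw counts over the D ground states)."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    N1, N2 = n1.sum(), n2.sum()
    num = (np.sqrt(N1 / N2) * n2 - np.sqrt(N2 / N1) * n1) ** 2
    den = n1 + n2
    mask = den > 0                 # skip states never observed by either method
    return (num[mask] / den[mask]).sum()

def chi2_one_sided(n1, p2):
    """One-sided variant of Eq. (chi2-one sided-def), for an analytically
    known second distribution p2 (the N2 -> infinity limit)."""
    n1, p2 = np.asarray(n1, float), np.asarray(p2, float)
    N1 = n1.sum()
    return ((N1 * p2 - n1) ** 2 / (N1 * p2)).sum()

# Identical histograms are at distance zero.
assert chi2_distance([10, 20, 30], [10, 20, 30]) == 0.0
```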
As a next step, we simulate synthetic experiments based on this underlying distribution. Each synthetic experiment consists of extracting $N_1$ ground-states to form a synthetic probability distribution ${\bf P}_1$, and $N_2$ ground-states to form a probability distribution ${\bf P}_2$ (if a method is analytic, then we set ${\bf P}_i= {\bf p}_i$ by the law of large numbers). We simulate a large number of such synthetic experiments and measure the proportion of experiments for which we find $\norm{{\bf P}_1-{\bf P}_2} > \norm{{\bf p}^{(1)} - {\bf p}^{(2)}}$. This proportion corresponds to the $p$-value for our test. As a check, we have compared our $p$-values with the ones obtained using the tabulated $\chi^2$ probability, verifying that the $p$-values of the two tests agree to within a few percent. \pagebreak \section{Supplementary Information} \subsection{\label{app:dwave} The D-Wave Two Quantum Annealer} The D-Wave Two (DW2) processor is marketed by D-Wave Systems Inc. as a quantum annealer. The annealer evolves a physical system of superconducting flux qubits according to the time-dependent Hamiltonian \begin{equation} \label{eq:Hquantum} H(t) = A(t) H_d^{\text{TF}} + B(t) H_p, \quad t\in[0,\mathcal{T}] \,. \end{equation} The problem Hamiltonian $H_p$ is given by \begin{equation} H_{p}=\sum_{\langle ij\rangle} J_{ij} \sigma^z_i \sigma^z_j + \sum_i h_i \sigma^z_i \,, \end{equation} where $\sigma_i^z$ is the Pauli Z spin-$1/2$ matrix acting on spin $i$, the sets $\{J_{ij}\}$ and $\{h_i\}$ are programmable parameters, and the sum over $\langle ij\rangle$ runs over the edges of an $N=504$-qubit Chimera graph---the hardware graph of the processor. The transverse-field driver Hamiltonian $H_d^{\text{TF}}$ is given by \begin{equation} H_d^{\text{TF}} =- \sum_i \sigma_i^x \,, \end{equation} where $\sigma_i^x$ is the Pauli X spin-$1/2$ matrix acting on spin $i$.
The annealing schedules $A(s)$ and $B(s)$ are shown in Fig.~\ref{fig:schedule} as functions of the dimensionless parameter $s=t/\mathcal{T}$. Our experiments used the DW2 device housed at the University of Southern California's Information Sciences Institute, which is held at an operating temperature of $17$mK. The Chimera graph of the DW2 used in our work is shown in Fig.~\ref{fig:chimera}. Each unit cell is a balanced $K_{4,4}$ bipartite graph. In the ideal Chimera graph (of $512$ qubits) the degree of each vertex is $6$ (except for the corner cells). In the actual DW2 device, only $504$ qubits are functional. \begin{figure} \caption{\textbf{Annealing schedule of the DW2.} The annealing curves $A(s)$ and $B(s)$ with $s=t/\mathcal{T}$ are calculated using rf-SQUID models with independently calibrated qubit parameters. We use units in which $\hbar = 1$. The operating temperature of $17$mK is also shown as a dashed line.} \label{fig:schedule} \end{figure} \begin{figure} \caption{\textbf{The DW2 Chimera graph.} The qubits or spin variables occupy the vertices (circles) and the couplings $J_{ij}$ lie along the edges. Of the $512$ qubits, $504$ were operative in our experiments (green) and $8$ were not (red). } \label{fig:chimera} \end{figure} \subsection{\label{app:csa}Constraint Solver Algorithm} Here we describe the algorithm used to search for and list all solutions, or minimizing configurations, of a spin glass instance with a planted solution. The algorithm takes as input a specified set of constraints, namely, sets of bit assignments, each of which minimizes one of the local Ising Hamiltonians $H_j$, where the total planted-solution Hamiltonian is $H=\sum_j H_j$. Each such term is minimized by a finite set of spin configurations (involving only those spins on which the local Hamiltonian is defined). The algorithm is an exhaustive search and is an implementation of the bucket elimination algorithm described in Ref.~\cite{dechter}.
The problem structure consists of a set of Ising spins (equivalently, bits), each of which may take the value $+1$ or $-1$. A set of constraints restricts subsets of the bits to certain values. The goal is to assign values to all the bits so as to satisfy all the constraints. Each constraint (in this case, the set of optimizing configurations of one of the $H_j$ Hamiltonians) applies to a subset of the bits and contains a list of allowed settings for that subset. The constraint is met if the values of the bits in the subset match one of the allowed settings. The constraints for a particular problem are read from a text file. A sample input for a single constraint is:
\begin{small}
\begin{center}
\begin{tabular}{|c||r|r|r|r|r|r|r|r|r|r|r|r||}
\hline
bit & sol. & sol. & sol. & sol. & sol. & sol. & sol. & sol. & sol. & sol. & sol. & sol. \\
index & \#1 & \#2 & \#3 & \#4 & \#5 & \#6 & \#7 & \#8 & \#9 & \#10 & \#11 & \#12 \\
\hline
480 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
485 & -1 & 1 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
493 & 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & 1 \\
491 & -1 & 1 & -1 & 1 & -1 & 1 & 1 & -1 & 1 & -1 & 1 & -1 \\
495 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\
487 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & -1 & 1 \\
\hline
\end{tabular}
\end{center}
\end{small}
The first column lists the bit indices, six in total. Each subsequent column gives one allowed setting for this subset of bits: the input above specifies $12$ allowed settings for bits $480, 485, 493, 491, 495$ and $487$, i.e., the term has $12$ minimizing bit assignments.
The bucket elimination algorithm, as applied to this problem, consists of the following steps:
\begin{verbatim}
First, determine if solutions exist:
while(true)
  select a bit to eliminate
  if no more bits: exit loop
  save constraints which contain selected bit
  combine all constraints containing bit
  generate new constraints without bit
  if combined constraints have contradiction: exit loop
end while
\end{verbatim}
This algorithm is guaranteed to find all solutions, but may exceed time and memory limitations. All steps are well-defined except for the step which selects the next bit to eliminate. The order in which bits are eliminated dramatically affects the time and memory required. Determination of the optimal order in which to eliminate bits is known to be NP-complete. Assume there exists a list of constraints, as described above, each of which contains a specified bit. For simplicity, first assume there are only two such constraints. For example, consider the following two simple constraints, both of which contain bit $\#1$:
\begin{center}
\begin{tabular}{||c|c||r|r|r||}
\hline
constraint \#1: & bits & \#1 & \#2 & \#3 \\
\hline
allowed settings: & $(1_a)$ & -1 & -1 & -1 \\
 & $(1_b)$ & +1 & +1 & -1 \\
\hline
constraint \#2: & bits & \#1 & \#2 & \#4 \\
\hline
allowed settings: & $(2_a)$ & -1 & -1 & -1 \\
 & $(2_b)$ & +1 & -1 & -1 \\
\hline
\end{tabular}
\end{center}
To combine the two constraints, combine each of the allowed settings in the first constraint with each of the allowed settings in the second constraint. When combining two settings, any bit that appears in both settings must agree, or no new constraint is generated. The bit chosen for elimination is not contained in the newly generated constraint. If at the end no new constraints have been generated, a contradiction exists which prevents these constraints from being mutually satisfied. This indicates that the original problem has no solutions.
Application of this step to constraints $1_a, 1_b, 2_a$ and $2_b$ above requires four combination steps. The newly created constraint contains the union of the bits in the two constraints, minus bit $\#1$, the bit being eliminated.
\begin{center}
\begin{tabular}{|c|c|}
\hline
 & combined configuration \\
\hline
combine $1_a$ and $2_a$ & $(-1, -1, -1)$ \\
combine $1_a$ and $2_b$ & empty -- bit $\#1$ differs in the two constraints \\
combine $1_b$ and $2_a$ & empty -- bit $\#1$ differs in the two constraints \\
combine $1_b$ and $2_b$ & empty -- bit $\#2$ differs in the two constraints \\
\hline
\end{tabular}
\end{center}
The new constraint consists of a single entry for bits $\#2$, $\#3$, and $\#4$ set to $(-1, -1, -1)$. To combine more than two constraints, the first two are combined, then the result is combined with the next constraint, and so on until all constraints have been combined. The process of combining constraints can cause the number of allowed bit settings to shrink, as in the example above, or to grow. The amount of time and memory required to eliminate all bits from the original set of constraints depends strongly on the order in which bits are eliminated. Many heuristics were tested to select the best next bit to be eliminated. No deterministic algorithm was found that yielded acceptable time and memory use on all of the input data sets. The approach ultimately found to be effective was to use a combination of heuristics and randomness to select the next bit. Each time an elimination bit is to be chosen, one of six heuristics is chosen at random. The six heuristics are different functions of the number of unique bits in the constraints to be combined, the maximum number of solution sets in any of the constraints to be combined, and the sum of the numbers of solution sets in the constraints to be combined.
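The constraint-combination step worked through above can be sketched as follows, representing each allowed setting as a mapping from bit index to value (the name `combine` is ours):

```python
def combine(c1, c2, eliminate):
    """Combine two constraints, each a list of allowed settings given as
    dicts mapping bit index -> value (+1/-1).  Settings merge only if all
    shared bits agree; the eliminated bit is dropped from the result."""
    out = []
    for a in c1:
        for b in c2:
            if all(a[k] == b[k] for k in a.keys() & b.keys()):  # shared bits agree
                merged = {**a, **b}
                del merged[eliminate]        # eliminated bit not carried forward
                if merged not in out:
                    out.append(merged)
    return out

c1 = [{1: -1, 2: -1, 3: -1}, {1: +1, 2: +1, 3: -1}]   # constraint #1
c2 = [{1: -1, 2: -1, 4: -1}, {1: +1, 2: -1, 4: -1}]   # constraint #2
new = combine(c1, c2, eliminate=1)
# Only the 1a+2a pairing survives: bits #2, #3, #4 set to (-1, -1, -1).
assert new == [{2: -1, 3: -1, 4: -1}]
```

An empty result list would signal the contradiction discussed above, i.e., that the original problem has no solutions.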
Solutions are enumerated by iterating over the saved variable tables in reverse order. The last table contains the allowed values for the last variable eliminated. Each of these values can be substituted into the previous table to generate allowed value sets for the last two variables. This process is repeated for each table until allowed value sets for all variables are generated. The list is truncated at each step if the number of value sets exceeds a specified limit. \end{document}
\begin{document} \title{{\bf Instantaneous Quantum Computation} } \author{ Dan~Shepherd\footnote{dan.shepherd@cesg.gsi.gov.uk, shepherd@compsci.bristol.ac.uk}~~and~Michael~J.~Bremner \\ \small\it Department of Computer Science, University of Bristol,\\ \small \it Woodland Road, Bristol, BS8 1UB, United Kingdom. } \date{December 13th, 2008} \maketitle \def\ket#1{|\,#1\,\rangle} \def\bra#1{\langle\, #1\,|} \def\braket#1#2{\langle\, #1\,|\,#2\,\rangle} \def\ketbra#1#2{\ket{#1}\bra{#2}} \def\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}{\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}} \def\span#1{\left< #1 \right>} \def\hbox to \hsize{ \rule[5pt]{2.5cm}{0.5pt} }{\hbox to \hsize{ \rule[5pt]{2.5cm}{0.5pt} }} \def \par \noindent \emph{[This Section Not Yet Complete]} \par { \par \noindent \emph{[This Section Not Yet Complete]} \par } \def\comment#1{} \def\set#1{\{ #1\}} \def\Prob#1{\mbox{Prob}(#1)} \def\modulus#1{\left| #1 \right|} \newcommand{\QED}{\nopagebreak\hspace*{\fill}\mbox{\rule[0pt]{1.5ex}{1.5ex}}} \newcommand{\qed}{\mbox{\rule[0pt]{1.5ex}{1.5ex}}} \newcommand{\half}{\mbox{$\textstyle \frac{1}{2}$} } \def\indicator#1{\left\{ \phantom{\big|} #1 \phantom{\big|}\right\}} \def\mathbbm{Z}{\mathbbm{Z}} \def\mathbbm{R}{\mathbbm{R}} \newtheorem{theorem}{Theorem} \newtheorem{definition}{Definition} \newtheorem{lemma}{Lemma} \newtheorem{example}{Example} \newtheorem{property}{Property} \newtheorem{proposition}{Proposition} \newtheorem{corollary}{Corollary} \newtheorem{conjecture}{Conjecture} \def\textbf{Proof~:~}{\textbf{Proof~:~}} \def\mathcal{C}{\mathcal{C}} \def\mathbbm{C}{\mathbbm{C}} \def\mathbbm{E}{\mathbbm{E}} \def\mathbbm{F}{\mathbbm{F}} \def\mathbbm{P}{\mathbbm{P}} \def\mathbf{a}{\mathbf{a}} \def\mathbf{b}{\mathbf{b}} \def\mathbf{c}{\mathbf{c}} \def\mathbf{d}{\mathbf{d}} \def\mathbf{e}{\mathbf{e}} \def\mathbf{k}{\mathbf{k}} \def\mathbf{l}{\mathbf{l}} \def\mathbf{p}{\mathbf{p}} \def\mathbf{q}{\mathbf{q}} \def\mathbf{s}{\mathbf{s}} \def\mathbf{w}{\mathbf{w}} 
\def\mathbf{x}{\mathbf{x}} \def\mathbf{X}{\mathbf{X}} \def\mathbf{y}{\mathbf{y}} \def\mathbf{Y}{\mathbf{Y}} \def\mathbf{z}{\mathbf{z}} \def\mathbf{0}{\mathbf{0}} \def\mathbf{1}{\mathbf{1}} \def\mathbbm{S}{\mathbbm{S}} \def\mathcal{B}{\mathcal{B}} \def\varepsilon{\varepsilon} \def\eq#1{=_{\phantom|_{\!\!#1}}} \def\pl#1{+_{\phantom|_{\!\!#1}}} \def\mi#1{-_{\phantom|_{\!\!#1}}} \def\om#1{\omega_{\phantom|_{\!\!#1}}} \begin{abstract} We examine theoretic architectures and an abstract model for a restricted class of quantum computation, called here \emph{temporally unstructured (``instantaneous'') quantum computation} because it allows for essentially no temporal structure within the quantum dynamics. Using the theory of binary matroids, we argue that the paradigm is rich enough to enable sampling from probability distributions that cannot, classically, be sampled from efficiently and accurately. This paradigm also admits simple interactive proof games that may convince a skeptic of the existence of truly quantum effects. Furthermore, these effects can be created using significantly fewer qubits than are required for running Shor's Algorithm. \end{abstract} \def\mathbf{IQP}{\mathbf{IQP}} \def\mathbf{BPP}{\mathbf{BPP}} \def\mathbf{BQP}{\mathbf{BQP}} \section{Introduction} \subsection{Mathematical motivation} It has often been said that underlying the power of quantum computers is the close connection between the computational model and the way we represent dynamics in quantum systems. This connection is implicit in the standard circuit model, where we require a universal gate set for an $n$-logical-qubit processor to be capable of simulating the dynamics of the $n$-qubit unitary group $SU(2^n)$. 
While there are many equivalent models of (universal) quantum computing, and not all of them explicitly `generate' the special unitary group on $n$ qubits, they each simulate (to within some pre-defined precision) operations drawn from \emph{some} non-abelian unitary group on a set of qubits. Our approach in this paper departs from this well-trodden path by focussing almost exclusively on an abelian subgroup of the unitary group. This approach is much more restrictive in the kinds of computation allowed, leading to a computational paradigm that lies somewhere between classical and universal quantum computing. The non-abelian nature of quantum circuit elements is undoubtedly a crucial feature of universal quantum computing; for example, it imposes a clear physical limitation on the time-ordering of the gates in a circuit. In the standard model of quantum computation, the only circuits that can be performed in a single ``time-step'' are those composed only of single-qubit gates and two-qubit gates that act on disjoint sets of qubits. We often refer to such circuits as depth-1 circuits. When an abelian group is being used for the gates within a circuit, that circuit need not be depth-1 in the sense just described, though it will nonetheless be essentially devoid of temporal structure, since the order of the gates is immaterial. Physically, the quantum circuit model can be interpreted as applying a controlled sequence of unitary operations, which can in turn be thought of as a sequence of Hamiltonian evolutions. If any two consecutive gates in a sequence commute with one another, then their order in the sequence can be freely interchanged; equivalently, their Hamiltonians can simply be combined additively, which corresponds to simultaneous evolution. When \emph{all} gates commute, a single simultaneous Hamiltonian evolution, whose terms are the individual gates, describes the dynamics.
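The claim that commuting gates may be reordered freely, or merged into a single simultaneous evolution, is easy to check numerically. A minimal sketch for diagonal (hence trivially commuting) Hamiltonians, where the matrix exponential reduces to an elementwise exponential:

```python
import numpy as np

# Two commuting (here: diagonal) Hamiltonians can be applied in either
# order, or as a single simultaneous evolution under the summed
# Hamiltonian -- the point made in the text.
h1 = np.array([0.3, -0.7, 1.1, 0.2])       # diagonal entries of H1
h2 = np.array([-0.5, 0.4, 0.9, -1.3])      # diagonal entries of H2
U1 = np.diag(np.exp(-1j * h1))             # U1 = exp(-i H1)
U2 = np.diag(np.exp(-1j * h2))             # U2 = exp(-i H2)
U12 = np.diag(np.exp(-1j * (h1 + h2)))     # exp(-i (H1 + H2))
assert np.allclose(U1 @ U2, U12)           # order is immaterial...
assert np.allclose(U2 @ U1, U12)           # ...and Hamiltonians add
```

For non-commuting Hamiltonians these identities fail, which is precisely the origin of the time-ordering constraint in the standard circuit model.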
\subsection{Physical motivation} How can we tell when we have successfully built a quantum computer? Given that tomography quickly becomes difficult as the number of qubits in a system grows, it is pertinent to ask if there is a simpler way of verifying the success of a quantum computation. One way, which has already been attempted in several experiments, \emph{e.g.} \cite{lit:Van01, lit:Lan07, lit:Lu07, lit:Tame06}, would be to use the prototype quantum computer to find the solution to a problem that we think is difficult to solve on a classical computer. For instance, the following scenario is generic. Alice is a skeptic: she doesn't believe that Bob has a quantum device at his disposal. Fortunately, she is relatively certain that classical computers can't efficiently find the prime factors of a large integer, whereas quantum computers can \cite{lit:Sh94} (although many qubits may be required for a convincing demonstration). So she issues a challenge to Bob~: she chooses a large number whose prime factors she cannot find and sends it to Bob. If Bob then sends back the prime factors of her number within a reasonable time period, she can easily convince herself that Bob must have had a quantum device at his disposal. This scenario in particular has been used in attempts to verify the success of several small-scale quantum computers \cite{lit:Van01, lit:Lan07, lit:Lu07}, though of course the numbers used were too small to be considered hard to factor classically. Unfortunately, so far as we know, Shor's factoring algorithm is a relatively difficult quantum algorithm to perform. It is well known that it can be implemented in a circuit model using polynomial circuit depth and linear circuit width, or logarithmic depth with a larger width \cite{lit:CW00}, or even with constant depth if arbitrarily wide `fanout' gates are allowed \cite{lit:Hoy02}.
In any case, we'd apparently require a fully universal set of quantum gates, and more than a thousand logical qubits, for a convincing demonstration. In this paper, we use abelian dynamics to suggest a two-party protocol (with classical message-passing) that could be used to remotely test a quantum device, and which we believe is physically far less complex than factorization. We conjecture that it is classically infeasible to simulate the quantum process in our protocol, and that our protocol is simpler to implement than all known versions of Shor's algorithm, not requiring anything like a universal gate set. \subsection{Guide to the paper} We introduce the paradigm $\mathbf{IQP}$, which stands for ``Instantaneous Quantum Polytime''. It is a restricted model of quantum computation, which can also be thought of in terms of an oracle for computation. Here `polytime' means that the process is bound to consume at most a polynomial amount of resource in any reasonable model of quantum computation, while `instantaneous' means that the algorithmic process itself is to contain no inherent temporal structure. We give formal definitions, and argue that there are non-trivial applications for $\mathbf{IQP}$. In particular, a two-party interactive protocol game is described and discussed. These are our main points, to bear in mind throughout the paper~: \begin{itemize} \item We define a restricted model of quantum computation, called $\mathbf{IQP}$ (section \ref{sect:defs}). \item We present several different quantum architectures (section~\ref{sect:architectures}) that can render computations in the $\mathbf{IQP}$ model, suggesting that it is a `lowest common denominator' of some `natural' ideas for computing based on `abelian' notions.
\item Although we know of no specific architecture where the Hamiltonians and measurements involved in the $\mathbf{IQP}$ paradigm really are genuinely `easy' to implement, nonetheless there is a clear sense in which the mathematics that underlies the computation is `easy'. Perhaps the ``Graph State'' architecture (section~\ref{sect:architectures}) gives the clearest example of a practical computing idea. \item We argue and conjecture that the probability distributions generated in the $\mathbf{IQP}$ paradigm (section \ref{sect:defs}) are not only classically hard to sample from approximately, but that there are actual polynomial-time protocols (section~\ref{sect:protocol}) that can be completed using an $\mathbf{IQP}$ oracle that (we believe) can't be completed classically in polynomial time. \item We provide an explicit example of such a protocol, involving two parties. The purpose of the protocol is simply for one party to prove to the other that they are capable of approximating a multi-qubit output distribution having characteristics that match the output distribution of a particular $\mathbf{IQP}$ process. In the protocol, Alice designs a problem with a `hidden' property, and sends it to Bob; Bob runs the problem through his $\mathbf{IQP}$ oracle several times, and sends the classical outputs back to Alice; Alice then uses the secret `hidden' property to assess whether there is good evidence for Bob having used a real $\mathbf{IQP}$ oracle. This is all done in section~\ref{sect:protocol}. \item We make a pragmatic analysis of our suggested protocol, showing how to fine-tune its parameters in order to make plausible the conjecture that it really can't be `faked' classically (section~\ref{sect:heuristics}). \item By analogy, this protocol is to quantum computation what Bell experiments are to quantum communication~: the simplest known `proof' of a distinctly quantum phenomenon. 
(Of course, since there is no mathematical proof published to date of a separation between the power of quantum computation and classical computation, we still have to rely on certain computational hardness conjectures.) \item Despite the existence of protocols apparently requiring an $\mathbf{IQP}$ oracle, we are unable to find any \emph{decision language} in $\mathbf{BPP}^\mathbf{IQP}$ that is not in $\mathbf{BPP}$. There seems to be a sense in which the paradigm isn't able to `compute new information' (section~\ref{sect:heuristics}). \item If there should one day be architectures that can implement $\mathbf{IQP}$ oracles---even though full-blown universal quantum computing remains an unresolved engineering challenge---then our protocol may be an important demonstrator of the power of quantum mechanics for quantum computing. \end{itemize} Much of the mathematics used depends significantly on the theory of binary matroids and binary linear codes, and so we spend some time in section~\ref{sect:defs} recalling some basic definitions. Readers interested in the main construction (the interactive game) should start at section~\ref{sect:protocol} and dip back into the earlier definitions where needed. Pure mathematicians might prefer to start at section~\ref{sect:mathythingo}; cryptographers might particularly appreciate section~\ref{sect:heuristics}; whereas physicists may prefer section~\ref{sect:architectures}. \section{The $\mathbf{IQP}$ paradigm} \label{sect:defs} In this section, we define what we call the ``X-program'' architecture, and use it to define the notion of $\mathbf{IQP}$ oracle. The rest of the paper depends heavily on these definitions. Note that the X-program architecture is not particularly `physical', but is easier to work with than the more physically relevant architectures discussed in section~\ref{sect:architectures}.
\subsection{X-programs} \label{sect:Xprogs} Recall that a Pauli $X$ operator acts on a single qubit, exchanging $\ket0$ with $\ket1$, \emph{i.e.} $X = \ketbra01 + \ketbra10$. One can also think of $X$ as a Hamiltonian term, since $X \propto \exp( i\frac\pi2 X )$. An X-program is essentially a Hamiltonian that is a sum of products of $X$s acting on different qubits. In this architecture, allow for a set of $n$ qubits, initialised into the pure separable computational basis state $\ket{\mathbf{0}}$. The \emph{X-program} is specified as a (polysize) list of pairs $(\theta_\mathbf{p}, \mathbf{p}) \in [0,2\pi] \times \mathbbm{F}_2^n$, so $\theta_\mathbf{p}$ is an angle and $\mathbf{p}$ is a string of $n$ bits. Each such program element (pair) is interpreted as the action of a Hamiltonian on the qubits indicated by $\mathbf{p}$, applied for action\footnotemark{} $\theta_\mathbf{p}$~: the Hamiltonian to apply is made up from a product of Pauli $X$ operators on the indicated qubits, and naturally these all commute. \footnotetext{action = the integral of energy over time.} This means that---in principle---the program elements could all be applied simultaneously~: their time ordering is irrelevant. The measurement to be performed, once all the Hamiltonians have been applied, is simply a computational-basis measurement, and the program \emph{output} is simply that measurement result, regarded as a (probabilistic) sample from the vectorspace $\mathbbm{F}_2^n$. Combining this together, we see that the probability distribution for such an output is \begin{equation} \label{eqn:dist1} \mathbbm{P}(\mathbf{X}=\mathbf{x}) ~=~ \left| \bra\mathbf{x} ~\exp\left(~\sum_\mathbf{p} i\theta_\mathbf{p} \bigotimes_{j:p_j=1} X_j~\right)~ \ket{\mathbf{0}^n} \right|^2. \end{equation} The \emph{output string} here is labelled $\mathbf{x}$. The random variable $\mathbf{X}$ here (and throughout) codifies this probability distribution of classical output samples. 
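To make the distribution at line~(\ref{eqn:dist1}) concrete, here is a minimal NumPy sketch (the function name and interface are ours, purely for illustration) that computes the output distribution of a small X-program by multiplying out the commuting factors $\cos\theta_\mathbf{p}\, I + i \sin\theta_\mathbf{p}\, X^{(\mathbf{p})}$~:

```python
import numpy as np

def xprogram_distribution(program, n):
    """Output distribution of an X-program given as a list of (theta, p)
    pairs, p being an n-bit tuple.  Each element applies exp(i*theta*X^(p));
    since (X^(p))^2 = I this unitary is cos(theta)*I + i*sin(theta)*X^(p),
    and X^(p) acts on basis states by XOR-ing the index with p's bitmask.
    """
    dim = 2 ** n
    psi = np.zeros(dim, dtype=complex)
    psi[0] = 1.0                        # initial state |0...0>
    idx = np.arange(dim)
    for theta, p in program:
        mask = int("".join(map(str, p)), 2)
        psi = np.cos(theta) * psi + 1j * np.sin(theta) * psi[idx ^ mask]
    return np.abs(psi) ** 2             # entry x is P(X = x)

# Single program element theta * X1X2: the output is |00> with probability
# cos^2(theta), and |11> with probability sin^2(theta).
probs = xprogram_distribution([(np.pi / 8, (1, 1))], 2)
```

Because all the factors commute, the order of the loop is irrelevant, mirroring the `instantaneous' character of the paradigm.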
For almost all of our purposes, we are only interested in the case where the $\theta_\mathbf{p}$ action values are the same for every term. When that condition applies, and the value $\theta$ is specified, the entire X-program can be represented using a $poly(n)$-by-$n$ binary matrix, with each row corresponding to a term in the Hamiltonian. For example, the $7$-by-$4$ binary matrix \begin{eqnarray} \label{eqn:exampleP} P &=& \left( \begin{array}{cccc} 1& 0& 0& 0 \\ 1& 1& 0& 0 \\ 0& 1& 1& 0 \\ 1& 0& 1& 1 \\ 0& 1& 0& 1 \\ 0& 0& 1& 0 \\ 0& 0& 0& 1 \end{array} \right) \end{eqnarray} would represent the $4$-qubit Hamiltonian \begin{eqnarray} H_{P,\theta} &=& \theta \cdot ( X_1 + X_1X_2 + X_2X_3 + X_1X_3X_4 + X_2X_4 + X_3 + X_4 ). \end{eqnarray} \subsection{$\mathbf{IQP}$ oracle} \begin{definition} On input the explicit description of an X-program, we define an $\mathbf{IQP}$ oracle to be any computational method that efficiently returns a sample string from the probability distribution as given at line~(\ref{eqn:dist1}). \end{definition} For a formal definition of what is meant by an $\mathbf{IQP}$ oracle, let it be any device that interfaces to a probabilistic Turing machine via an `oracle tape', so that if the oracle tape holds a description of a particular X-program ($P,\theta$ in the `constant action' case) at the time when the Turing machine calls its `implement oracle' instruction, then in unit time (or perhaps in time polynomial in the length of the description of $P$, \emph{i.e.} polynomial in $n$), a bitstring sample in $\mathbbm{F}_2^n$ from the probability distribution at line~(\ref{eqn:dist1}) is created and written to the oracle tape, and control is passed back to the Turing machine to continue processing. 
We write the \emph{overall paradigm}---of classical computation augmented by this oracle---as $\mathbf{BPP^{IQP}}$, to denote the fact that classical randomised polytime pre- and post-processing is usually to be considered allowed in a simulation, and to denote the fact that we don't much care which of several quantum architectures might be being used to supply the `\textbf{IQP}-power' of sampling from probability distributions of the form at line~(\ref{eqn:dist1}). This notation is not necessarily supposed to indicate a particular class of \emph{decision languages} as such, but rather a particular class of computations. The interactive proof games in section~\ref{sect:protocol} require the Prover to have access to an $\mathbf{IQP}$ oracle, and to access it a polynomial number of times, though these calls may be made in parallel and without precomputation. \begin{lemma} \label{lem:thing} The probability distribution given at line~(\ref{eqn:dist1}) is equivalent to the one given below. \begin{eqnarray} \label{eqn:distpaths} \mathbbm{P}(\mathbf{X}=\mathbf{x}) &=& \left|~ \sum_{ \mathbf{a} ~:~ \mathbf{a} \cdot P = \mathbf{x}} ~~\prod_{\mathbf{p} ~:~ a_\mathbf{p} = 0} \cos \theta_\mathbf{p} \prod_{\mathbf{p} ~:~ a_\mathbf{p} = 1} i \sin \theta_\mathbf{p} ~\right|^2. \end{eqnarray} \end{lemma} \textbf{Proof~:~} Let $P$ denote the $k$-by-$n$ binary matrix whose rows are the $\mathbf{p}$ vectors of the X-program under consideration. 
Then using the fact that the Hamiltonian terms in an X-program all commute, we can think of the quantum amplitudes arising in an X-program implementation as a sum over paths, \begin{eqnarray} \lefteqn{ \bra{\mathbf{x}}~ \prod_\mathbf{p} \left( \cos \theta_\mathbf{p} ~+~ i \sin \theta_\mathbf{p} \bigotimes_{j:p_j=1} X_j \right) ~\ket{\mathbf{0}^n} } \nonumber \\ &=& \bra{\mathbf{x}} ~\sum_{ \mathbf{a} \in \mathbbm{F}_2^k} ~\prod_{\mathbf{p} ~:~ a_\mathbf{p} = 0} \cos \theta_\mathbf{p} \prod_{\mathbf{p} ~:~ a_\mathbf{p} = 1} i \sin \theta_\mathbf{p} ~\prod_{j=1}^n X_j^{(\mathbf{a} \cdot P)_j} ~~\ket{\mathbf{0}^n}, \end{eqnarray} and hence derive a new form for the probability distribution accordingly. \qed \subsection{Binary matroids and Linear binary codes} Before proceeding to the main topics of the paper, it behooves us to establish the link that these formulas have with the (closely related) theories of binary matroids and binary linear codes. \begin{definition} A linear binary code, $\mathcal{C}$, of length $k$ is a (linear) subspace of the vectorspace $\mathbbm{F}_2^k$, represented explicitly. The elements of $\mathcal{C}$ are called \emph{codewords}, and the Hamming weight $wt(c) \in [0..k]$ of some $c \in \mathcal{C}$ is defined to be the number of 1s it has. The rank of $\mathcal{C}$ is its rank as a vectorspace. \end{definition} Linear binary codes are frequently presented using \emph{generator matrices}, where the columns of the generator matrix form a basis for the code. If $P$ is a generator matrix for a rank $r$ code $\mathcal{C}$, then $P$ has $r$ columns and the codewords are $\{ P \cdot \mathbf{d}^T ~:~ \mathbf{d} \in \mathbbm{F}_2^r \}$. There are many different, isomorphic, definitions for matroids \cite{lit:matroids}. We shall adopt the following definition.
\begin{definition} A $k$-point binary matroid is an equivalence class of matrices defined over $\mathbbm{F}_2$, where each matrix in the equivalence class has exactly $k$ rows, and two matrices are equivalent (written $M_1 \sim M_2$) when for some ($k$-by-$k$) permutation matrix $Q$, the column-echelon reduced form of $M_1$ is the same as the column-echelon reduced form of $Q \cdot M_2$. Here we take column-echelon reduction to delete empty columns, so that the result is full-rank. Hence the rank of a matroid is the rank of any of its representatives. \end{definition} Less formally, this means that a binary matroid is like a matrix over $\mathbbm{F}_2$ that doesn't notice if you rearrange its rows, if you add one of its columns into another (modulo 2), or if you duplicate one of its columns. This means that a matroid is like the generator matrix for a linear binary code, but it doesn't mind if it contains redundancy in its spanning set (\emph{i.e.} has more columns than its rank) and it doesn't care about the actual order of the zeroes and ones in the individual codewords. To be clear, when thinking of a matrix such as $P$ in line~(\ref{eqn:exampleP}), we are simultaneously thinking of its \emph{columns} as the elements of a spanning set for a \emph{code}, and its \emph{rows} as the points of a corresponding \emph{matroid}. Because one cannot express a matroid independently of a representation, we consistently conflate notation for the matrix $P$ with the matroid $P$ that it represents. Perhaps the main structural feature of a binary matroid is its \emph{weight enumerator polynomial}. 
\begin{definition} \label{def:WEP} If the $k$ rows of binary matrix $P$ establish the points of a $k$-point matroid, then the weight enumerator of the matroid is defined to be the weight enumerator of the $k$-long code $\mathcal{C}$ spanned by the columns of $P$, which in turn is defined to be the bivariate polynomial \begin{eqnarray} WEP_\mathcal{C}(x,y) &=& \sum_{\mathbf{c} \in \mathcal{C}} x^{wt(\mathbf{c})} y^{k-wt(\mathbf{c})}. \end{eqnarray} \end{definition} This is well-defined, because the effect of choosing a different matrix $P$ that represents the same binary matroid simply leads to an isomorphic code that has the same weight-enumerator polynomial as the original code $\mathcal{C}$. \subsection{Bias in probability distributions} \begin{definition} \label{def:bias} If $\mathbf{X}$ is a random variable taking values in $\mathbbm{F}_2^n$, and $\mathbf{s}$ is any element of $\mathbbm{F}_2^n$, then the \emph{bias} of $\mathbf{X}$ in direction $\mathbf{s}$ is simply the probability that $\mathbf{X} \cdot \mathbf{s}^T$ is zero, \emph{i.e.} the probability of a sample being orthogonal to $\mathbf{s}$. \end{definition} Let us now consider an X-program on $n$ qubits that has constant action value $\theta$, whose Hamiltonian terms are specified by the rows of matrix $P$, as discussed earlier. Then we can use lemma~\ref{lem:thing} to obtain the following expression of bias, for any binary vector $\mathbf{s} \in \mathbbm{F}_2^n$~: \begin{eqnarray} \label{eqn:walshpaths} \mathbbm{P}(\mathbf{X} \cdot \mathbf{s}^T=0) &=& \sum_{\mathbf{x} ~:~ \mathbf{x} \cdot \mathbf{s}^T = 0} ~ \left|~ \sum_{ \mathbf{a} ~:~ \mathbf{a} \cdot P = \mathbf{x}} ~ (\cos \theta)^{k-wt(\mathbf{a})} (i \sin \theta)^{wt(\mathbf{a})} ~\right|^2. 
\end{eqnarray} Since it would obviously be nice to interpret this expression as the evaluation of a weight enumerator polynomial, we are led to define $P_\mathbf{s}$ to be the submatrix of $P$ obtained by deleting all rows $\mathbf{p}$ for which $\mathbf{p} \cdot \mathbf{s}^T = 0$, leaving only those rows for which $\mathbf{p} \cdot \mathbf{s}^T = 1$. We call\footnotemark{} the number of rows remaining $n_\mathbf{s}$. \footnotetext{$n_\mathbf{s}$ is here being used for the length of the code $\mathcal{C}_\mathbf{s}$ in deference to the usual practice of reserving the letter $n$ for code lengths. This $n_\mathbf{s}$ is counting a number of rows, and should not be confused with the $n$ used earlier for counting a number of columns.} This in turn leads to the code $\mathcal{C}_\mathbf{s}$ being the span of the columns of $P_\mathbf{s}$, and likewise a submatroid\footnotemark{} is correspondingly defined. \footnotetext{also called a \emph{matroid minor}} \begin{theorem} \label{thm:bias} When considering constant-action X-programs, the bias expression $\mathbbm{P}(\mathbf{X} \cdot \mathbf{s}^T=0)$ for the random variable $\mathbf{X}$ introduced at line~(\ref{eqn:dist1}) depends only on the action value $\theta$ and (the weight enumerator polynomial of) the $n_\mathbf{s}$-point matroid $P_\mathbf{s}$, as defined above. Moreover, if $\mathcal{C}_\mathbf{s}$ is a binary code representing the matroid $P_\mathbf{s}$, then the following formula\footnotemark{} expresses the bias~: \begin{eqnarray} \label{eqn:walshcode} \mathbbm{P}(\mathbf{X} \cdot \mathbf{s}^T=0) &=& \mathbbm{E}_{\mathbf{c} \sim \mathcal{C}_\mathbf{s}} \left[~ \cos^2\Bigl(~ \theta( n_\mathbf{s} ~-~ 2 \cdot wt(\mathbf{c}) ) ~\Bigr) ~\right]. \end{eqnarray} \end{theorem} \footnotetext{Subscripts on expectation operators indicate a variable ranging uniformly over its natural domain.} \textbf{Proof~:~} See the appendix for a proof. 
\qed To recap, this means that if we run an X-program using the action value $\theta$ for all program elements, then the probability of the returned sample being orthogonal to an $\mathbf{s}$ of our choosing (`orthogonal' in the $\mathbbm{F}_2$ sense of having zero dot-product with $\mathbf{s}$) depends only on $\theta$ and on the (weight enumerator polynomial of the) linear code obtained by writing the program elements $\mathbf{p}$ as rows of a matrix and ignoring those that are orthogonal to $\mathbf{s}$. There is a definition in the literature for \emph{weighted matroids}, which in this context would correspond to allowing different $\theta$ values for different terms in the Hamiltonian of an X-program. While mathematically (and physically) natural, such considerations would not help with the clarity of our presentation. We emphasise at this point the value of theorem~\ref{thm:bias}~: it means that for any direction $\mathbf{s} \in \mathbbm{F}_2^n$, the bias of the output probability distribution from an X-program $(P,\theta)$ in the direction $\mathbf{s}$ depends \emph{only} on $\theta$ and the rows of $P$ that are \emph{not} orthogonal to $\mathbf{s}$, and not at all on the rows of $P$ that \emph{are} orthogonal to $\mathbf{s}$. Moreover, the bias in direction $\mathbf{s}$ depends \emph{only} on the \emph{matroid} $P_\mathbf{s}$, and not on the particular \emph{matrix} $P_\mathbf{s}$ that represents it. That is, directional bias (definition~\ref{def:bias}) is a matroid invariant. Note that whenever $A$ is an $n$-by-$n$ invertible matrix over $\mathbbm{F}_2$, then $\mathbf{p} \cdot \mathbf{s}^T ~=~ \mathbf{p} \cdot A \cdot A^{-1} \cdot \mathbf{s}^T ~=~ (\mathbf{p} \cdot A) \cdot (\mathbf{s} \cdot A^{-T})^T$, so any invertible column operation on matrix $P$ accompanies an invertible change of basis for the set of directions of which $\mathbf{s}$ is a member. 
Note also that appending an all-zero column to $P$, or removing one, has the effect of including or excluding a qubit on which no unitary transformations are performed. Thus if $P_\mathbf{s}$ is a submatroid of $P$ by point-deletion, as described earlier, and the invertible column transformation $A$ is applied to the matrix $P$ that represents the matroid $P$, then the same \emph{matroid} that was formerly called $P_\mathbf{s}$ is still a submatroid, but now it is represented by the matrix $P_{\mathbf{s} \cdot A^{-T}}$. Likewise, appending a column of zeroes to $P$, or removing one, necessitates an extra zero being appended to or removed from any $\mathbf{s}$ that serves as a direction for indicating a submatroid. This is purely an issue of representation, and we consider that intuition about these objects is aided by taking an `abstractist' approach to the geometry. \subsection{Entropy, and trivial cases} \label{sect:entropy} Because it will be useful later, we will define the R\'enyi entropy (collision entropy) of a random variable, before exemplifying theorem~\ref{thm:bias} and proceeding with the main construction of the paper. \begin{definition} The collision entropy, $S_2$, of a discrete random variable, $\mathbf{X}$, measures the randomness of the sampling process by measuring the likelihood of two (independent) samples being the same. It is defined by \begin{eqnarray} \label{eqn:Renyi} 2^{-S_2} &=& \sum_\mathbf{x} \mathbbm{P}(\mathbf{X}=\mathbf{x})^2 ~~=~~ \mathbbm{E}_\mathbf{s} \left[~ \Bigl(~ 2\mathbbm{P}( \mathbf{X} \cdot \mathbf{s}^T = 0 ) - 1 ~\Bigr)^2 ~\right]. \end{eqnarray} \end{definition} And so there are a few `easy cases' for our $\mathbf{X}$ random variable of lemma~\ref{lem:thing} that should be highlighted and dismissed up front~: \begin{lemma} For a constant-action X-program, if $\theta$ is\ldots \begin{itemize} \item \ldots a multiple of $\pi$, then the returned sample will always be $\mathbf{0}$. The collision entropy will be zero.
\item \ldots an odd multiple of $\pi/2$, then the returned sample will always be $\sum_{\mathbf{p} \in P} \mathbf{p}$. The collision entropy is zero. \item \ldots an odd multiple of $\pi/4$, then the collision entropy need not be zero, but the probability distribution will be classically simulable to full precision. \end{itemize} \end{lemma} \textbf{Proof~:~} In the first case, considering line~(\ref{eqn:distpaths}), there is then a $\sin(\pi)=0$ factor in every term of the probability, except where $\mathbf{x}=\mathbf{0}$. In the second case, considering again line~(\ref{eqn:distpaths}), there is then a $\cos(\pi/2)=0$ factor in every term, except where all the $\mathbf{p}$ vectors are summed together to give $\mathbf{x}$. The same can also be deduced from theorem~\ref{thm:bias}, which implies that $\mathbf{x}$ will surely be orthogonal to $\mathbf{s}$ exactly when $n_\mathbf{s}$ is even, \emph{i.e.} exactly when an even number of rows of $P$ are \emph{not} orthogonal to $\mathbf{s}$, \emph{i.e.} exactly when $\sum_{\mathbf{p} \in P} \mathbf{p}$ \emph{is} orthogonal to $\mathbf{s}$. For the third case, if $\theta$ is an odd multiple of $\pi/4$, then all the gates in the program would be Clifford gates. By the Gottesman-Knill theorem there is then a classically efficient method for sampling from the distribution, by tracking the evolution of the system using stabilisers, \emph{etc.} \qed For other sufficiently different values of the action parameter, classical intractability becomes a plausible conjecture (\emph{cf.} \cite{lit:SWVC08, lit:SWVC2}). In particular, the remainder of this paper will specialise to the case $\theta = \pi/8$, since we are able to make all our points about the utility of $\mathbf{IQP}$ even with this restriction.
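Theorem~\ref{thm:bias} is easy to check numerically at $\theta=\pi/8$. The following NumPy sketch (names are illustrative) compares the bias computed directly from the simulated distribution of line~(\ref{eqn:dist1}) against the code-average formula of line~(\ref{eqn:walshcode}), using the example matrix $P$ from line~(\ref{eqn:exampleP})~:

```python
import itertools
import numpy as np

theta = np.pi / 8
P = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [0, 1, 1, 0], [1, 0, 1, 1],
              [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]])
k, n = P.shape
dim = 2 ** n

# Simulate the X-program output distribution directly: each commuting
# factor is cos(theta)*I + i*sin(theta)*X^(p), acting by XOR on indices.
psi = np.zeros(dim, dtype=complex)
psi[0] = 1.0
idx = np.arange(dim)
for row in P:
    mask = int("".join(map(str, row)), 2)
    psi = np.cos(theta) * psi + 1j * np.sin(theta) * psi[idx ^ mask]
probs = np.abs(psi) ** 2

def bias_direct(s):
    """P(X . s^T = 0): total probability of outputs orthogonal to s."""
    s_mask = int("".join(map(str, s)), 2)
    return sum(probs[x] for x in range(dim)
               if bin(x & s_mask).count("1") % 2 == 0)

def bias_theorem(s):
    """Code-average formula: keep the rows of P not orthogonal to s, and
    average cos^2(theta*(n_s - 2*wt(c))) over the code their columns span."""
    Ps = P[(P @ s) % 2 == 1]
    n_s = len(Ps)
    vals = [np.cos(theta * (n_s - 2 * int(((Ps @ d) % 2).sum()))) ** 2
            for d in itertools.product([0, 1], repeat=n)]
    return float(np.mean(vals))   # uniform over d is uniform over C_s

# The two computations agree for every direction s.
```

Averaging over all $\mathbf{d} \in \mathbbm{F}_2^n$ is legitimate because the map $\mathbf{d} \mapsto P_\mathbf{s} \cdot \mathbf{d}^T$ covers each codeword of $\mathcal{C}_\mathbf{s}$ equally often.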
\section{Interactive protocol} \label{sect:protocol} One would naturally like to find some `use' for the ability to sample from the probability distribution that arises from a temporally unstructured quantum polytime computation: a `task' or `proof' that can be completed using \emph{e.g.} an X-program, which could not be completed by purely classical means. In this section we develop our main construction: a two-player interactive protocol game in which a Prover uses an $\mathbf{IQP}$ oracle simply to demonstrate that he does have access to an $\mathbf{IQP}$ oracle. \subsection{At a glance} There are three aspects of design involved in specifying an actual ``Alice \& Bob'' game~: \begin{itemize} \item[A)] a code/matroid construction, for Alice to select a problem, to send to Bob, \item[B)] an architecture or technique by which Bob can take samples from the $\mathbf{IQP}$ distribution of the challenge he receives, to send back to Alice, \item[A')] a hypothesis test for Alice to use to verify (or reject) Bob's attempt. \end{itemize} We have already defined the X-program architecture, and in section~\ref{sect:architectures} we discuss some alternatives that Bob might like to try. In section~\ref{sect:heuristics} we make an analysis of some classical cheating strategies for Bob, in case he can't lay his hands on a quantum computer. The details of precisely how to make a good hypothesis test are omitted from this paper for the sake of brevity, but sourcecode is available on our website (see section~\ref{sect:challenge}). Alice plays the role of the Challenger/Verifier, while Bob plays the role of the Prover. Alice uses secret random data to obfuscate a `causal' matroid $P_\mathbf{s}$ inside a larger matroid $P$, and the latter she publishes (as a matrix) to Bob. Bob interprets matrix $P$ as an X-program to be run several times, with $\theta = \pi/8$. He collects the returned samples, and sends them to Alice.
Alice then uses her secret knowledge of `where' in $P$ the special $P_\mathbf{s}$ matroid is hidden, in order to run a statistical test on Bob's data, to validate or refute the notion that Bob has the ability to run X-programs. This application is perhaps the simplest known protocol, requiring (say) $\sim 200$ qubits, that could be expected to convince a skeptic of the existence of some \emph{computational} quantum effect. The reason for this is that there seems to be no classical method to fake even a \emph{classical transcript} of a run of the interactive game between Challenger and Prover, without actually \emph{being} (or subverting the secret random data of) the classical Challenger. \subsection{Concept overview} Consider therefore the following game, between Alice and Bob. Alice, also called the Challenger/Verifier, is a classical player with access to a private random number generator. Bob, also called the Prover, is a supposedly quantum player, whose goal is to convince Alice that he can access an $\mathbf{IQP}$ oracle, \emph{i.e.} run X-programs. The rules of this game are that he has to convince her simply by sending classical data, and so in effect Bob offers to act as a remote $\mathbf{IQP}$ oracle for Alice, while Alice is initially skeptical of Bob's true $\mathbf{IQP}$ abilities. \subsubsection{Alice's challenge} The game begins with Alice choosing some code $\mathcal{C}_\mathbf{s}$ that has certain properties amenable to her analysis. She chooses the code $\mathcal{C}_\mathbf{s}$ in such a way that there is a $\theta$ for which the (quantum) expectation value at line~(\ref{eqn:walshcode}) of theorem~\ref{thm:bias} is somewhere well within $(\frac12, 1)$, and for which the corresponding expectation value that arises from the best-known classical approaches to `cheating' (\emph{e.g.} presumably the one at line~(\ref{eqn:classfromcode}) of section~\ref{sect:classical}, in case $\theta=\pi/8$) is significantly smaller. 
She then finds a matrix $P_\mathbf{s}$ whose columns generate the code (not necessarily as a basis), and ensures that there is some $\mathbf{s}$ that is not orthogonal to any of the rows of $P_\mathbf{s}$. The vector $\mathbf{s}$ should be thought of not as a structural property of the code $\mathcal{C}_\mathbf{s}$, but as a `locator' that can be used to `pinpoint' $P_\mathbf{s}$ even after it has later been obfuscated. \emph{Obfuscation} of $P_\mathbf{s}$ is achieved by appending arbitrary rows that \emph{are} orthogonal to $\mathbf{s}$. This gives rise to matrix $P$. The matroid $P$ has $P_\mathbf{s}$ as a submatroid, in the sense that removal of the correct set of rows will recover $P_\mathbf{s}$. Alice publishes to Bob a representation of matroid $P$ that hides the structure that she has embedded. Random row permutations are appropriate, and reversible column operations likewise leave the matroid invariant (though the latter will affect $\mathbf{s}$ and must therefore be tracked by Alice). \subsubsection{Bob's proof} Bob, being $\mathbf{BPP^{IQP}}$-capable by hypothesis, may interpret the published $P$ as an X-program, to be run with the (constant) action set to $\theta = \pi/8$ (say). He will be able to generate random vectors which independently have the correct bias in the (unknown to him) direction $\mathbf{s}$, \emph{i.e.} the correct probability of being orthogonal to Alice's secret $\mathbf{s}$, in accordance with theorem~\ref{thm:bias}. Although he may still be entirely unable to recover this $\mathbf{s}$ from such samples, he nonetheless can send to Alice a list of these samples as proof that he is $\mathbf{BPP^{IQP}}$-capable. Note that Bob's strategy is error-tolerant, because if each run of the \textbf{IQP} algorithm were to use a `noisy' $\theta$ value, then the overall proof that he generates will still be valid, providing the noise is small and unbiased and independent between runs.
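Alice's obfuscation step can be sketched as follows (illustrative NumPy code; the parameters `n_extra` and `n_ops` are ours, and a real instance would of course use far larger matrices). Elementary column operations $C_i \mathrel{+}= C_j$ are invertible over $\mathbbm{F}_2$, and the matching update $s_j \mathrel{+}= s_i$ keeps every product $\mathbf{p} \cdot \mathbf{s}^T$ unchanged~:

```python
import numpy as np

rng = np.random.default_rng(1)

def obfuscate(Ps, s, n_extra=20, n_ops=200):
    """Hide Ps (whose rows are all non-orthogonal to the secret s) among
    random rows that ARE orthogonal to s, then scramble the representation.
    Returns the published matrix P and the updated secret direction, since
    column operations change the basis for directions and Alice must
    track s through them.
    """
    n = Ps.shape[1]
    s = np.array(s) % 2
    # Append random non-zero rows orthogonal to s.
    extra = []
    while len(extra) < n_extra:
        r = rng.integers(0, 2, n)
        if (r @ s) % 2 == 0 and r.any():
            extra.append(r)
    P = np.vstack([Ps] + extra)
    P = P[rng.permutation(len(P))]        # random row permutation
    # Random elementary column operations C_i += C_j, tracking s.
    for _ in range(n_ops):
        i, j = rng.choice(n, size=2, replace=False)
        P[:, i] = (P[:, i] + P[:, j]) % 2
        s[j] = (s[j] + s[i]) % 2
    return P, s

# Toy example: hide the 7-row matrix of line (2) using secret s = (1,0,1,1),
# which is non-orthogonal to each of its rows.
Ps = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [0, 1, 1, 0], [1, 0, 1, 1],
               [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]])
P_pub, s_new = obfuscate(Ps, np.array([1, 0, 1, 1]))
```

After scrambling, exactly the seven hidden rows of the published matrix remain non-orthogonal to the updated secret direction.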
Note also that he can manage several runs in one oracle call, if desired, simply by concatenating the matrix $P$ with itself diagonally. That is to say, we even avoid classical temporal structure (\emph{adaptive feed-forward}) on Bob's part. \subsubsection{Alice's verification} Since Alice knows the secret value $\mathbf{s}$, and can presumably compute the value $\mathbbm{P}(\mathbf{X} \cdot \mathbf{s}^T=0)$ from the code's weight enumerator polynomial (see theorem~\ref{thm:bias} and recall that she is free to choose any $\mathcal{C}_\mathbf{s}$ that suits her purpose), it is not hard for her to use a hypothesis test to confirm that the samples Bob sends are \emph{commensurate} with having been sampled independently from the same distribution that an X-program generates. That is to say, Alice will not try to test whether Bob's data \emph{definitely fits the correct $\mathbf{IQP}$ distribution,} but she will ensure that it has the particular characteristic of a strong bias in the secret direction $\mathbf{s}$. This enables her to test the null hypothesis that Bob is cheating, against the alternative hypothesis that Bob has non-trivial quantum computational power. \emph{This requires belief in several conjectures on Alice's part.} She must believe that there is a separation between quantum and classical computing; in particular that $\mathbf{IQP}$ is not classically efficiently approximately simulable---at least she must believe that Bob doesn't know any good simulation tricks. She must believe that her problem is hard---at least she should believe that the problem of identifying the location of $P_\mathbf{s}$ within $P$ is not a $\mathbf{BPP}$ problem---on the assumption that the matroid $P_\mathbf{s}$ is known.
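A toy version of Alice's verification might look as follows (our own illustrative sketch; as noted in section~\ref{sect:protocol}, the details of a sound hypothesis test are deliberately omitted from this paper). It removes null and duplicate samples, then simply thresholds the observed bias at a hypothetical cut-off between the classical ($\approx 0.75$) and quantum ($\approx 0.854$) values discussed later (\emph{cf.} lemma~\ref{lem:QRCode})~:

```python
import numpy as np

def verify(samples, s, threshold=0.80):
    """Toy version of Alice's test: remove null and duplicate samples, then
    accept iff the fraction orthogonal to the secret s clears a threshold
    placed between the classical (~0.75) and quantum (~0.854) biases.
    A sound test would set the threshold from the sample count so as to
    bound both error probabilities; here we simply threshold the bias.
    """
    rows = {tuple(x) for x in samples}    # de-duplicate
    rows.discard((0,) * len(s))           # drop the null sample
    if not rows:
        return False
    n_orth = sum(1 for x in rows if np.dot(x, s) % 2 == 0)
    return n_orth / len(rows) >= threshold
```

The de-duplication step corresponds to Alice's removal of `short circuits', without which a cheating Bob could inflate the bias by repetition.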
If he passes her hypothesis test, Bob will have `proved' to Alice that he ran a quantum computation on her program, provided she is confident that there is no feasible way for Bob to simulate the `proof' data classically efficiently, \emph{i.e.} provided she has performed her hypothesis test correctly against a plausibly best null hypothesis. In particular, Alice should ensure a large collision entropy for the true ($\mathbf{IQP}$) distribution, since she will want to remove all `short circuits' (\emph{i.e.} all the empty rows and all the duplicate rows) from Bob's data, before testing it, to make a test that is both fair and efficient. Otherwise it would be too easy for Bob to generate a set of data that has a strong bias in very many directions simultaneously; and it would be tedious for Alice to confirm that he has not cheated in this way if she did not remove the short circuits. \subsubsection{Significance} This kind of interactive game could be of much significance to validation of early quantum computing architectures, since it gives rise to a simple way of `tomographically ascertaining' the actual presence of at least \emph{some} quantum computing, modulo some relatively basic complexity assumptions. In this sense it is to quantum computation what Bell violation experiments are to quantum communication. \footnote{We have serendipitously identified a construction for which the probability gap---quantum $85.4\%$ over classical $75\%$---precisely matches the gap available in Bell's inequalities. See lemma~\ref{lem:QRCode}.} Of course, this protocol really comes into its own when the architecture being tested happens to have the undesirable engineering feature of being unable to sustain long-term quantum coherence, and therefore perhaps only ever being capable of shallow-depth computation. 
Unfortunately, the prescription for X-programs given in section~\ref{sect:defs} requires Hamiltonian terms that act across potentially hundreds of qubits, and the alternative architectures discussed in section~\ref{sect:architectures} have similar physical drawbacks that still make this paradigm extremely challenging for today's engineering. Note that this `testing concept' does not use the $\mathbf{IQP}$ paradigm to \emph{compute any data that is unknown to everyone}, nor does it directly provide Bob with any `secret' data that could be used as a witness to validate an $\mathbf{NP}$ language membership claim. Its only effect is to provide Bob with data that he \emph{can't} use for any purpose other than to pass on to Alice as a `proof' of $\mathbf{IQP}$-capability. It is an open problem to find something more commonly associated with computation---perhaps deciding a decision language, for example---that can be achieved specifically by the $\mathbf{BPP^{IQP}}$ paradigm. \subsection{Recommended construction method} \label{sect:recommend} Here is a specific example of a construction methodology (with implicit test methodology) for Alice, which we conjecture to be asymptotically secure (against cheating Prover) and efficient (for both Prover and Verifier). \subsubsection{Recipe for codes} The family of codes that we suggest Alice should employ within the context of the game outlined above are the \emph{quadratic residue codes}. These will be shown to have the significant property that there is a non-negligible gap between the quantum- and best-known-classical-approximation expectation values for the bias in the secret direction, both of which are significantly below 1. (The bias for a truly quantum-enabled Bob has already been defined in theorem~\ref{thm:bias}, in terms of the weight-enumerator polynomial of the causal code. For a classically cheating Bob, we discuss the best-known classical strategies and their biases in section~\ref{sect:classical}.) 
Consider a quadratic residue code over $\mathbbm{F}_2$ with respect to the prime $q$, chosen so that $q+1$ is a multiple of eight. The rank of such a code is $(q+1)/2$, and the length is $q$. A quadratic residue code is a cyclic code, and can be specified by a single cyclic generator. There are several ways of defining these, but the simplest definition is to take the codeword that has a 1 in the $j$th place if and only if the Legendre symbol\footnotemark{} $\left(\frac{j}{q}\right)$ equals 1, \emph{i.e.} if and only if $j$ is a non-zero quadratic residue modulo $q$. \footnotetext{This Legendre symbol is congruent, modulo $q$, to $j^{(q-1)/2}$.} For example, if $q=7$ (the smallest example) then the non-zero quadratic residues modulo $q$ are $\{1,2,4\}$, and so the quadratic residue code in question is the rank-$4$ code of length 7 spanned by the various rotations of the generator $(0,1,1,0,1,0,0)^T$. A basis for this code is found in the columns of the matrix at line~(\ref{eqn:exampleP}). \begin{lemma} \label{lem:QRCode} When $q$ is a prime and 8 divides $q+1$, then there is a unique quadratic residue code $\mathcal{C}$ (up to isomorphism) of length $q$ over $\mathbbm{F}_2$, having rank $(q+1)/2$, and it satisfies \begin{eqnarray} \label{eqn:QRCodestats1} \mathbbm{E}_{\mathbf{c} \sim \mathcal{C}} \left[~ \cos^2\Bigl(~ \frac\pi8 ( q ~-~ 2 \cdot wt(\mathbf{c}) ) ~\Bigr) ~\right] &=& \cos^2( \pi/8 ) ~~=~~ 0.854\ldots \end{eqnarray} Moreover, it also satisfies \begin{eqnarray} \label{eqn:QRCodestats2} \mathbbm{P}\Bigl(~ \mathbf{c}_1^T \cdot \mathbf{c}_2 = 0 ~|~ \mathbf{c}_1, \mathbf{c}_2 \sim \mathcal{C} ~\Bigr) &=& 3/4 ~~=~~ 0.75, \end{eqnarray} which is relevant to certain classical strategies (section~\ref{sect:classical}). \end{lemma} \textbf{Proof~:~} The proof of the \emph{rank} of the code and its uniqueness are well established results from classical coding theory \cite{lit:macwilliams}.
Other classical results of coding theory include that quadratic residue codes are a parity-bit short of being self-dual and doubly even. That is, the extended quadratic residue code, of length $q+1$, obtained by appending a single parity bit to each codeword, has every codeword weight a multiple of 4 and every two codewords orthogonal. For line~(\ref{eqn:QRCodestats1}) this means that the (unextended) code has codeword weights which, modulo 4, are 0 half the time and $-1$ half the time. On putting these values into the left side of the formula, we immediately obtain the right side. For line~(\ref{eqn:QRCodestats2}) this means that in the (unextended) code, two codewords are non-orthogonal if and only if they are both of odd parity, which happens a quarter of the time~: from which the formula follows. \qed The corollary here is that if Alice uses one of these codes for her `causal' $\mathcal{C}_\mathbf{s}$, then if Bob runs a series of X-programs (with constant $\theta = \pi/8$) described by the (larger) matrix $P$, the data samples he recovers should be orthogonal to the hidden $\mathbf{s}$ about $85.4\%$ of the time (\emph{cf}. theorem~\ref{thm:bias}); whereas if Bob tries to cheat using the classical strategy outlined in section~\ref{sect:heuristics}, then his data samples will tend to be orthogonal to the hidden $\mathbf{s}$ only about $75\%$ of the time (\emph{cf}. lemma~\ref{lem:classprob}). Alice's hypothesis test therefore basically consists in measuring this single characteristic, after having filtered duplicate and null data samples from Bob's dataset. We conjecture that Bob has no pragmatic way of boosting these signals, at least not without feedback from Alice, or exponential resources. Note that \emph{with} exponential time on his hands, Bob could choose to simulate classically an $\mathbf{IQP}$ oracle, in order to obtain a dataset with a bias in direction $\mathbf{s}$ of approximately $85.4\%$.
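The statistics asserted in lemma~\ref{lem:QRCode} can be confirmed exhaustively for the smallest case $q=7$. The following check (ours, in Python for concreteness, independent of any protocol code) enumerates all 16 codewords:

```python
from itertools import product
from math import cos, pi, isclose

q = 7
residues = {(j * j) % q for j in range(1, q)}          # {1, 2, 4}
gen = [1 if j in residues else 0 for j in range(q)]    # (0,1,1,0,1,0,0)

# Span of four consecutive cyclic rotations of the generator: rank (q+1)/2 = 4.
rotations = [gen[-i:] + gen[:-i] for i in range(4)]
code = set()
for coeffs in product([0, 1], repeat=4):
    word = tuple(sum(c * r[j] for c, r in zip(coeffs, rotations)) % 2
                 for j in range(q))
    code.add(word)
assert len(code) == 2 ** 4     # rank 4, as the lemma states

# Line (eqn:QRCodestats1): E[cos^2(pi/8 (q - 2 wt(c)))] = cos^2(pi/8).
mean = sum(cos(pi / 8 * (q - 2 * sum(c))) ** 2 for c in code) / len(code)
assert isclose(mean, cos(pi / 8) ** 2)

# Line (eqn:QRCodestats2): P(c1 . c2 = 0 mod 2) = 3/4.
pairs = [(c1, c2) for c1 in code for c2 in code]
orth = sum(1 for c1, c2 in pairs
           if sum(a * b for a, b in zip(c1, c2)) % 2 == 0)
assert orth / len(pairs) == 0.75
```

For $q=7$ every non-zero codeword has weight 3, 4 or 7, so each term in the first expectation individually equals $\cos^2(\pi/8)$.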
Alternatively, he could consider every possible $\mathbf{s}$ in turn, and test whether the matroid obtained by deleting rows orthogonal to his guess in fact corresponds to a quadratic residue code, assuming he knew that this had been Alice's strategy. \subsubsection{Recipe for obfuscation} Having chosen $q$ as outlined above, and constructed a $q$-by-$(q+1)/2$ binary matrix generating a quadratic residue code, Alice needs to obfuscate it. The easiest way to manage this process is not to start with a particular secret $\mathbf{s}$ in mind, but rather to recognise the obfuscation problem as a \emph{matroid} problem, proceeding as follows~: \begin{itemize} \item Append a column of 1s to the matrix~: this does not change the code spanned by its columns, since the all-ones (full-weight) vector is always a codeword of a quadratic residue code. Other redundant column codewords may also be appended, if desired. \item Append many (say $q$) extra rows to the matrix, each of which is random, subject to having a zero in the newly appended column. This gives rise to a $2q$-point matroid, and ensures that there now \emph{is} an $\mathbf{s}$ such that the causal submatroid (quadratic residue matroid) is defined by non-orthogonality of the rows to that $\mathbf{s}$. \item Reorder the rows randomly. This has no effect on the matroid that the matrix represents, nor on the hidden causal submatroid. Nor does it affect $\mathbf{s}$, the `direction' in which the submatroid is hidden. \item Now column-reduce the matrix. There is no (desirable) structure within the particular form of the matrix before column-reduction, nothing that affects either codes or matroids. Echelon-reduction provides a canonical representative for the overall matroid, while stripping away any redundant columns that would otherwise cost an unnecessary qubit, when interpreted as an X-program.
By providing a canonical representative, it closes down the possibility that information in Alice's original construction of a basis for her causal code might leak through to Bob, which might be useful to him in guessing $\mathbf{s}$. Rather more importantly, this reduction actually serves to \emph{hide} $\mathbf{s}$. (We can be sure by zero-knowledge reasoning that this hiding process is random~: echelon reduction is canonical and therefore supervenes any column-scrambling process, including a random one.) \item Finally, one might sort the rows, though this is unnecessary. The resulting matrix is the one to publish. It will have at least $(q+1)/2$ columns, since that is the rank of the causal submatroid hidden inside. \end{itemize} \subsubsection{Mathematical problem description} \label{sect:mathythingo} This method of obfuscation amounts to---mathematically speaking---a situation whereby for each suitable prime $q$, we start by acknowledging a particular (public) $q$-point binary matroid $Q$, \emph{viz} the one obtained from the QR-Code of length $q$. Then an `instance' of the obfuscation consists of a published $2q$-point (say) binary matroid $P$, and there is to be a hidden ``obfuscation'' subset $O$ such that $Q = P\backslash O$; and the practical instances occur with $P$ chosen effectively at random, subject to these constraints. (One could choose to make $O$ bigger than $q$ points if that were desired.) This has the feel of a fairly generic hidden substructure problem, so it seems likely that it should be \textbf{NP}-hard to determine the location of the hidden $Q$, given $P$ and the appropriate promise of $Q$'s existence within. More syntactically, we should like to prove that it is \textbf{NP}-complete to decide the related matter of \emph{whether or not} $P$ is of the specified form, given only a matrix for $P$. Clearly this problem is in \textbf{NP}, since one could provide $Q$ \emph{in the appropriate basis} as witness. 
We conjecture this problem to be \textbf{NP}-complete. \begin{conjecture} \label{conj:NPc} The language of matroids $P$ that contain a quadratic-residue code submatroid $Q$ \emph{by point deletion}, where the size of $Q$ is at least half the size of $P$, is \textbf{NP}-complete under polytime reductions. \end{conjecture} These sorts of conjecture are apparently independent of conjectures about hardness of classical efficient $\mathbf{IQP}$ simulation, since they indicate that \emph{actually identifying the hidden data} is hard, even for a universal quantum computer. Even should this conjecture prove false, we know of no reason to think that a quantum computer would be much better than a classical one at finding the hidden $Q$, notwithstanding Grover's quadratic speed-up for exhaustive search. One might compare the structure of conjecture~\ref{conj:NPc} to that of the following important \emph{theorem} from graph theory~: \begin{proposition} The language of graphs $G$ that contain a complete graph $K$ \emph{by vertex deletion}, where the size of $K$ is at least half the size of $G$, is \textbf{NP}-complete under polytime reductions. \end{proposition} This is a classic result, see \emph{e.g.} \cite{lit:Papa}, where the problem in different guises is called `Clique' and `Independent Set' and `Node Cover'. \subsection{Challenge} \label{sect:challenge} It seems reasonable to conjecture that, using the methodology described, with a QR-code having a value $q \sim 500$, it is very easy to create randomised Interactive Game challenges for $\mathbf{BPP^{IQP}}$-capability, whose distributions have large entropy, which should lead to datasets that would be easy to validate and yet infeasible to forge without an $\mathbf{IQP}$-capable computing device (or knowledge of the secret $\mathbf{s}$ vector). 
We propose such challenges as being appropriate `targets' for early quantum architectures, since such challenges would essentially seem to be the simplest ones available (at least in terms of inherent temporal structure and number of qubits) that can't apparently be classically met. Accordingly, we have posted on the internet (http://quantumchallenges.wordpress.com) a \$25 challenge problem, of size $q=487$, to help motivate further study. This challenge website includes the C source code used to make the challenge matrix, and also the source code of the program that we will use to check candidate solutions, excluding only the secret seed value that we used to randomise the problem. \section{Heuristics} \label{sect:heuristics} Next we address in more detail the reasons for thinking the problem classically intractable, and also give an accounting of our failure to find a \emph{decision language} for proving the worth of $\mathbf{IQP}$. \subsection{Hardness of strong simulation} In support of the supposed complexity of this paradigm, Terhal and DiVincenzo \cite{lit:TD02}, and Aaronson \cite{lit:Aa04}, have already shown that it is \textbf{PP}-hard to \emph{strongly simulate} the generic probability distributions that arise hence. \begin{lemma} It is $\mathbf{P^{GapP}}$-hard to determine the numerical value of $\mathbbm{P}(\mathbf{X}=\mathbf{0})$ (as defined in section~\ref{sect:defs}, line~(\ref{eqn:dist1})) to within exponential precision, for arbitrary matroids. \end{lemma} \textbf{Proof~:~} Let $Ker_L(P) = \{ \mathbf{a}^T : \mathbf{a} \cdot P = \mathbf{0} \}$ denote the linear code for which $P$ is a parity-check matrix, and note from line~(\ref{eqn:distpaths}) and definition~\ref{def:WEP} that the probability in question is a function of the weight-enumerator polynomial of this code. Specifically, \begin{eqnarray} \mathbbm{P}(\mathbf{X}=\mathbf{0}) &=& \left|~ WEP_{Ker_L(P)}(~ \cos \theta, ~i \sin \theta ~) ~\right|^2.
\end{eqnarray} By varying $\theta$ over the range $(0, \pi/2)$, accurate values of $\mathbbm{P}(\mathbf{X}=\mathbf{0})$ would enable the recovery of the (integral) coefficients of the weight-enumerator polynomial of $Ker_L(P)$, which by choice of $P$ may be set to be any appropriately sized linear binary code we please. The recovery of arbitrary weight-enumerator polynomials is $\mathbf{P^{GapP}}$-hard \cite{lit:vyalyi}. \qed \subsection{Background} There has been a wide range of work into discovering restricted models of quantum computation which \emph{are} classically simulable. For example, quantum circuits that generate only limited forms of entanglement can be simulated classically by analysing matrix product states or by contracting tensor networks; these circuits have a particularly constrained `circuit-topology', which leads to their simplicity (see \cite{lit:Mar05} for a summary of known results). There is no particular circuit-topology imposed in our Z-network architecture (discussed in section~\ref{sect:architectures}), so it seems unlikely that the same methods would apply here. Other positive classical simulability results include the stabiliser circuits of the Gottesman-Knill theorem and various matchgate constructions (see \cite{lit:Val02, lit:Jo08, lit:SWVC08, lit:SWVC2} and references therein). These constructions differ significantly from our Z-networks in terms of the underlying algebra, the group generated by the set of allowable gates.
H\o{}yer and Spalek \cite{lit:Hoy02} have shown that Shor's algorithm for Integer Factorisation can indeed be performed within a \emph{constant} number of timesteps on a Graph State processor (discussed in section~\ref{sect:architectures}), though their constructions offer no reason to believe that that constant might be smaller than, say, $\sim 100$; and of course, a general methodology for reducing the inherent time-complexity of oracle-dependent quantum search algorithms is known to be impossible, due to lower bounds on Grover's algorithm. Dan Browne \cite{lit:Browne06} wrote about \emph{CD-decomposability}, which is the first rigorous treatment that we know of that explicitly links Graph State temporal depth with commutativity of Hamiltonian terms used to simulate a Graph state computation. Dan Simon \cite{lit:Si97} wrote about algorithms that use nothing more than an oracle and a Hadamard transform, and which therefore could be described as `temporally unstructured'. However, his notion of `oracle' was one tailored for a universal quantum architecture, being essentially an arbitrarily complex general unitary transformation, and since there is no natural notion of one of these within our `temporally unstructured' paradigm, there is no real sense in which Simon's algorithm can count as an example of an algorithm within the $\mathbf{BPP^{IQP}}$ framework. In particular, Simon's oracle implements a unitary that does \emph{not} commute with the Hadamard transform. \subsection{Conjectures, implicit and explicit} It is possible to form various hardness conjectures about the classical simulation of these $\mathbf{IQP}$ probability distributions. For a randomly chosen X-program $P$ of a given width $n$, it seems likely that the associated $\mathbf{IQP}$ distribution would be exponentially close to flat random. Conditioned on its \emph{not} being random, there is no particular reason to think it would be approximately efficiently classically samplable. 
Here is an example of one such conjecture, though the precise details are not important to our arguments. \begin{conjecture} \label{conj:sample} There exists a distribution $\mathcal{D}$ on the set of X-programs, for which no classical Turing machine can gain a non-negligible $\Omega(1/poly)$ advantage in deciding whether or not the distribution associated to an X-program chosen randomly from $\mathcal{D}$ is exponentially close in trace distance to the uniform distribution. \end{conjecture} This particular hardness conjecture is not quite what we really \emph{require}, but it gives an example of a plausible conjecture about classical simulation, and implies that for almost any X-program of interest, there is a certain \emph{event} (subset of output possibilities) whose probability will (probably) be estimated wrongly by your favourite classical polynomial-time event-probability-estimating device. (Many similar conjectures sound equally plausible, in an area where almost nothing is known for sure.) The point to emphasise in context of our two-player interactive protocol games is that it is not unreasonable for Alice to \emph{believe} that Bob can have no classical cheating strategy \emph{so long as} none such has been published nor proven to exist; and so our protocol may still serve as a demonstration (if not a proof) of a genuinely quantum computing phenomenon, despite the lack of proof of any simulation conjecture. Another conjecture implicit in Alice's ability to make a fair hypothesis test---so that Bob will indeed have a good chance of passing the test when he does have an $\mathbf{IQP}$ oracle (or approximate version of one), but will stand little chance of faking a proof if relying on guesswork and (known) classical techniques---is one that ensures that Alice's X-programs really do incorporate a non-negligible amount of entropy. 
Although we have little to go on besides scant simulation evidence from small examples, we want to make a conjecture that collision entropy is close to maximal within at least one relevant family of random constant-action X-programs. \begin{conjecture} \label{conj:entropy} The expected collision entropy of the probability distribution of a randomly selected X-program of width $n$, with constant action $\pi/8$, scales as $n - O(1)$ with the size of the program. \end{conjecture} This conjecture is perhaps not directly relevant to the `hardness' of the $\mathbf{IQP}$ paradigm itself, but merely relevant to our game construction. Note, for example, that since arbitrary---or random---obfuscation rows are used to build the matrix $P$ in the construction of section~\ref{sect:protocol}, there will be much about the random variable $\mathbf{X}$ that is arbitrary, or `typical' in some vague sense. Indeed, if one were sure that the only structure of significance were the hidden `causal' code $\mathcal{C}_\mathbf{s}$, then one could hope to approximate the distribution of $\mathbf{X}$, using knowledge of $\mathbf{s}$, by sampling uniformly at random (no biases) and applying a post-filter to create a bias in direction $\mathbf{s}$ of the required strength. This gives some context for conjecture~\ref{conj:entropy}. \subsection{Classical approximations} \label{sect:classical} Rather than speculate at this stage on which of the very many possible conjectures may or may not be true, we instead turn back to an examination of the mathematical structures underpinning the probability distributions in question. Suppose we wish to construct a probability distribution that arises from some purely classical methods, which can be used to approximate our $\mathbf{IQP}$ distribution. Our motivation here is to check whether any purported application for an $\mathbf{IQP}$ oracle might not be efficiently implemented without any quantum technology.
We proceed using the relatively \emph{ad hoc} methods of linear differential cryptanalysis. \subsubsection{Directional derivatives} For the case $\theta = \pi/8$, we will need to consider only second-order derivatives. The same sort of method will apply to the case $\theta = \pi/2^{d+1}$ using $d$th order derivatives, but the presentation would not be improved by considering that general case here. In terms of a binary matrix/X-program $P$, proceed by defining \begin{eqnarray} \label{eqn:def_f} f &:& \mathbbm{F}_2^n ~\rightarrow~ \mathbbm{Z}/16\mathbbm{Z}, \nonumber \\ f(\mathbf{a}) &\equiv& \sum_{\mathbf{p} \in P} (-1)^{\mathbf{p} \cdot \mathbf{a}^T} \pmod{16}, \end{eqnarray} and notate discrete directional derivatives as \begin{eqnarray} \label{eqn:deriv} f_\mathbf{d}(\mathbf{a}) &\equiv& f(\mathbf{a}) - f(\mathbf{a}\oplus\mathbf{d}) \pmod{16}. \end{eqnarray} Consider also the \emph{second} derivatives of $f$, given by \begin{eqnarray} \label{eqn:deriv2} f_{\mathbf{d},\mathbf{e}}(\mathbf{a}) &\equiv& f_\mathbf{e}(\mathbf{a}) ~-~ f_\mathbf{e}(\mathbf{a}\oplus\mathbf{d}) \pmod{16} \nonumber \\ &\equiv& 2\sum_{\mathbf{p} \in P_\mathbf{e}} (-1)^{\mathbf{p} \cdot \mathbf{a}^T} \left( 1 ~-~ (-1)^{\mathbf{p} \cdot \mathbf{d}^T} \right) \pmod{16} \nonumber \\ &\equiv& 4\!\!\!\!\sum_{\mathbf{p} \in P_\mathbf{d} \cap P_\mathbf{e}} (-1)^{\mathbf{p} \cdot \mathbf{a}^T} \pmod{16} \nonumber \\ &\equiv& 4\!\!\!\!\sum_{\mathbf{p} \in P_\mathbf{d} \cap P_\mathbf{e}} ~\prod_{j:p_j=1} \Bigl(~ 1 ~-~ 2a_j ~\Bigr) \pmod{16} \nonumber \\ &\equiv& \sum_{\mathbf{p} \in P_\mathbf{d} \cap P_\mathbf{e}} \left(~ 4 ~+~ 8\!\!\!\sum_{j~:~p_j=1} \!\!a_j ~\right) \pmod{16}, \end{eqnarray} each of which is quite patently a linear function in the bits $(a_1, \ldots, a_n)$ of $\mathbf{a}$, as a function with codomain the ring $\mathbbm{Z}/16\mathbbm{Z}$, regardless of the choice of directions $\mathbf{d},\mathbf{e}$. 
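The closed form at line~(\ref{eqn:deriv2}) is easily confirmed numerically. The sketch below (our own; the matrix $P$ is random and purely illustrative) checks, over random choices of $\mathbf{a},\mathbf{d},\mathbf{e}$, that the definition-based second derivative agrees with $4\sum_{\mathbf{p} \in P_\mathbf{d} \cap P_\mathbf{e}} (-1)^{\mathbf{p} \cdot \mathbf{a}^T} \pmod{16}$:

```python
import random

random.seed(0)
n, rows = 6, 12
P = [[random.randint(0, 1) for _ in range(n)] for _ in range(rows)]

dot = lambda u, v: sum(a * b for a, b in zip(u, v)) % 2
xor = lambda u, v: [a ^ b for a, b in zip(u, v)]

def f(a):
    # f(a) = sum_p (-1)^{p . a}  (mod 16), as in line (eqn:def_f)
    return sum((-1) ** dot(p, a) for p in P) % 16

def f1(a, d):
    # first directional derivative, line (eqn:deriv)
    return (f(a) - f(xor(a, d))) % 16

for _ in range(200):
    a, d, e = ([random.randint(0, 1) for _ in range(n)] for _ in range(3))
    by_definition = (f1(a, e) - f1(xor(a, d), e)) % 16
    # P_d (resp. P_e) is the set of rows p with p.d = 1 (resp. p.e = 1)
    closed_form = 4 * sum((-1) ** dot(p, a)
                          for p in P if dot(p, d) and dot(p, e)) % 16
    assert by_definition == closed_form
```

The agreement is an identity over the integers (before reduction modulo 16), so it holds for every choice of directions.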
\begin{lemma} With $f$ defined as per line~(\ref{eqn:def_f}), and $\mathbf{X}$ the random variable of lemma~\ref{lem:thing}, for all~$\mathbf{s}$, \begin{eqnarray} \label{eqn:piby8} \mathbbm{P}( \mathbf{X} \cdot \mathbf{s}^T = 0 ) &=& \mathbbm{E}_\mathbf{a} \left[ \cos^2\Bigl(~ \frac\pi{16} \cdot f_\mathbf{s}(\mathbf{a}) ~\Bigr) \right], \end{eqnarray} and so the $\mathbf{IQP}$ probability distribution (in the case $\theta=\pi/8$) may be viewed as a function of $f$ rather than as a function of $P$. \end{lemma} \textbf{Proof~:~} Starting from the proof of theorem~\ref{thm:bias}, line~(\ref{eqn:wooo}), \begin{eqnarray} \mathbbm{P}(\mathbf{X} \cdot \mathbf{s}^T = 0) &=& \frac12\left(~ 1 ~+~ \mathbbm{E}_\mathbf{a} \left[ e^{ i\theta \sum_\mathbf{p} (-1)^{\mathbf{p} \cdot \mathbf{a}^T} \Bigl(1 - (-1)^{\mathbf{p} \cdot \mathbf{s}^T}\Bigr) } \right] ~\right) \nonumber \\ &=& \frac12\left(~ 1 ~+~ \mathbbm{E}_\mathbf{a} \left[ \exp\left( \frac{i\pi}8 \bigl( f(\mathbf{a}) - f(\mathbf{a}\oplus\mathbf{s}) \bigr) \right) \right] ~\right) \nonumber \\ &=& \frac12\left(~ 1 ~+~ \mathbbm{E}_\mathbf{a} \left[ \cos\Bigl(~ \frac\pi8 \cdot f_\mathbf{s}(\mathbf{a}) ~\Bigr) \right] ~\right). \end{eqnarray} The second line above is obtained immediately from the first, using the definition of $f$. The third line follows because the expression is real-valued. The conclusion follows from the half-angle identity $\frac12(1+\cos x)=\cos^2(x/2)$, taken with $x = \frac\pi8 f_\mathbf{s}(\mathbf{a})$, and linearity of the expectation operator. \qed And so \emph{if} there is a hidden $\mathbf{s}$ such that $\mathbbm{P}( \mathbf{X} \cdot \mathbf{s}^T = 0 )=1$, \emph{then} that implies $f_\mathbf{s}(\mathbf{a}) \equiv 0 \pmod{16}$ for all $\mathbf{a}$. This is essentially a non-oracular form of the kind of function that arises in applications of Simon's Algorithm \cite{lit:Si97}, with $\mathbf{s}$ playing the role of a \emph{hidden shift}.
One could find linear equations for such an $\mathbf{s}$ if it exists, because it would follow immediately that $f_{\mathbf{d},\mathbf{e}}(\mathbf{s}) = f_{\mathbf{d},\mathbf{e}}(\mathbf{0})$ for any directions $\mathbf{d},\mathbf{e}$, which is by line~(\ref{eqn:deriv2}) equivalent to \begin{eqnarray} \left( \sum_{\mathbf{p} \in P_\mathbf{d} \cap P_\mathbf{e}} \!\!\mathbf{p} \right) \cdot \mathbf{s}^T &=& 0. \end{eqnarray} \subsubsection{Classical sampling} To make use of this specific second-order differential property, we need to analyse the probability distribution that a classical player can generate efficiently from it. Proceed by defining a new probability distribution for a new random variable $\mathbf{Y}$, as follows~: \begin{eqnarray} \label{eqn:classprob} \mathbbm{P}(\mathbf{Y}=\mathbf{y}) &=& \mathbbm{P}_{\mathbf{d},\mathbf{e}} \left(~ \sum_{\mathbf{p} \in P_\mathbf{d} \cap P_\mathbf{e}} \!\!\mathbf{p} ~=~ \mathbf{y} ~\right). \end{eqnarray} This may be classically rendered, simply by choosing $\mathbf{d},\mathbf{e} \in \mathbbm{F}_2^n$ independently with a uniform distribution, and then returning the sum of all rows in $P$ that are orthogonal to neither $\mathbf{d}$ nor~$\mathbf{e}$. \begin{lemma} \label{lem:classprob} The classically simulable distribution on the random variable $\mathbf{Y}$ defined in line~(\ref{eqn:classprob}) satisfies \begin{eqnarray} \label{eqn:classfromcode} \mathbbm{P}( \mathbf{Y} \cdot \mathbf{s}^T = 0 ) &=& \mathbbm{P}\Bigl(~ \mathbf{c}_1^T \cdot \mathbf{c}_2 = 0 ~~|~~ \mathbf{c}_1,\mathbf{c}_2 \sim \mathcal{C}_\mathbf{s} ~\Bigr) \\ &=& \frac12\left(~ 1 ~+~ 2^{-rank(~ P_\mathbf{s}^T \cdot~ P_\mathbf{s} ~)} ~\right), \end{eqnarray} and so the bias of $\mathbf{Y}$ in direction $\mathbf{s}$ is a function of the matroid $P_\mathbf{s}$.
\end{lemma} \textbf{Proof~:~} Starting from line~(\ref{eqn:classprob}), \begin{eqnarray} \mathbbm{P}( \mathbf{Y} \cdot \mathbf{s}^T = 0 ) &=& \sum_{\mathbf{y} ~:~ \mathbf{y} \cdot \mathbf{s}^T = 0} ~~ \mathbbm{P}_{\mathbf{d},\mathbf{e}} \left(~ \sum_{\mathbf{p} \in P_\mathbf{d} \cap P_\mathbf{e}} \!\!\mathbf{p} ~=~ \mathbf{y} ~\right) \\ &=& \mathbbm{P}_{\mathbf{d}, \mathbf{e}} \left(~ \sum_{\mathbf{p} \in P_\mathbf{d} \cap P_\mathbf{e}} \!\!\mathbf{p} \cdot \mathbf{s}^T ~=~ 0 ~\right) \nonumber \\ &=& \mathbbm{P}_{\mathbf{d}, \mathbf{e}} \left(~ wt(~ P\cdot\mathbf{d}^T ~\wedge~ P\cdot\mathbf{e}^T ~\wedge~ P\cdot\mathbf{s}^T ~) \equiv 0 \pmod2 ~\right) \nonumber \\ &=& \mathbbm{P}_{\mathbf{d}, \mathbf{e}} \left(~ wt(~ P_\mathbf{s}\cdot\mathbf{d}^T ~\wedge~ P_\mathbf{s}\cdot\mathbf{e}^T ~) \equiv 0 \pmod2 ~\right) \nonumber \\ &=& \mathbbm{P}_{\mathbf{d}, \mathbf{e}} \left(~ \mathbf{d}\cdot P_\mathbf{s}^T \cdot P_\mathbf{s}\cdot\mathbf{e}^T = 0 ~\right). \nonumber \end{eqnarray} The \emph{wedge operator} $\wedge$ here denotes the logical \emph{AND} between binary column-vectors. The first line of the lemma follows from the obvious substitutions $\mathbf{c}_1 = P_\mathbf{s} \cdot \mathbf{d}^T$, $\mathbf{c}_2 = P_\mathbf{s} \cdot \mathbf{e}^T$. The second line follows because unimodular actions on the left or right of a quadratic form (such as $(P_\mathbf{s}^T \cdot P_\mathbf{s})$) affect neither its rank nor the probabilities derived from it; so it suffices to consider the cases where it is in Smith Normal Form, \emph{i.e.} diagonal, which are trivially verified. Since this expression is patently invariant under invertible linear action on the right and permutation action on the left of $P_\mathbf{s}$, it too is a matroid invariant. \qed \subsubsection{Inter-relation} Thus we have established some kind of correlation between random variables $\mathbf{X}$ and $\mathbf{Y}$. 
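Lemma~\ref{lem:classprob} can be verified exhaustively for small programs. The sketch below (ours; the matrix $P$ and direction $\mathbf{s}$ are random and purely illustrative) enumerates every pair $(\mathbf{d},\mathbf{e})$ and compares the resulting bias with $\frac12(1+2^{-rank(P_\mathbf{s}^T \cdot P_\mathbf{s})})$, the rank being taken over $\mathbbm{F}_2$:

```python
import random
from itertools import product

def gf2_rank(M):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        pivot = next((r for r in range(rank, len(M)) if M[r][c]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                M[r] = [x ^ y for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

random.seed(2)
n, rows = 5, 9
dot = lambda u, v: sum(x * y for x, y in zip(u, v)) % 2
P = [[random.randint(0, 1) for _ in range(n)] for _ in range(rows)]
s = [1] + [random.randint(0, 1) for _ in range(n - 1)]

Ps = [p for p in P if dot(p, s)]               # the causal rows P_s
G = [[sum(p[i] * p[j] for p in Ps) % 2         # P_s^T . P_s over GF(2)
      for j in range(n)] for i in range(n)]

# Exhaustive bias of Y in direction s: enumerate all (d, e) pairs.
hits = 0
for d in product([0, 1], repeat=n):
    for e in product([0, 1], repeat=n):
        y_dot_s = sum(dot(p, s) for p in P
                      if dot(p, d) and dot(p, e)) % 2
        hits += (y_dot_s == 0)
empirical = hits / 4 ** n
predicted = 0.5 * (1 + 2 ** (-gf2_rank(G)))
assert empirical == predicted
```

Both quantities are dyadic rationals, so the comparison is exact rather than approximate.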
\begin{theorem} \label{thm:unitbound} In the established notation, for X-programs with fixed $\theta=\pi/8$, \begin{eqnarray} \label{eqn:unitbound} \mathbbm{P}( \mathbf{X} \cdot \mathbf{s}^T = 0 ) = 1 ~~&\Rightarrow&~~ \mathbbm{P}( \mathbf{Y} \cdot \mathbf{s}^T = 0 ) = 1. \end{eqnarray} \end{theorem} \textbf{Proof~:~} By theorem~\ref{thm:bias}, the antecedent gives, for all $\mathbf{c} \in \mathcal{C}_\mathbf{s}, ~n_\mathbf{s} \equiv 2 wt(\mathbf{c}) \pmod8$, where $n_\mathbf{s}$ is again the length of the code $\mathcal{C}_\mathbf{s}$. This entails that every codeword in $\mathcal{C}_\mathbf{s}$ has the same weight modulo 4, including the null codeword, so $\mathcal{C}_\mathbf{s}$ must be doubly even\footnotemark{}. \footnotetext{\emph{Doubly even} just means that every codeword has weight a multiple of 4.} It is easy to see that doubly even linear codes are self-orthogonal.\footnotemark{} \footnotetext{\emph{Self-orthogonality} just means that the dual code---consisting of all words orthogonal to every codeword---contains the code, so that any two codewords are orthogonal.} By lemma~\ref{lem:classprob}, the consequent is obtained. \qed The only counterexamples to the \emph{converse} implication seem to occur in the trivial cases whereby the binary matroid $P_\mathbf{s}$ has circuits of length 2, \emph{i.e.} where $P_\mathbf{s}$ has repeated rows. Note that if $\mathbbm{P}( \mathbf{X} \cdot \mathbf{s}^T = 0 )$ were equal to 1 precisely, then by making a list of samples from $\mathbf{IQP}$ runs, storing them in a matrix, and performing Gaussian Elimination to recover the kernel of the matrix, it would be straightforward to compute the hidden $\mathbf{s}$. However, theorem~\ref{thm:unitbound} shows that this is exactly the condition under which $\mathbf{s}$ can also be computed via purely classical means. For this reason, it seems hard to find decision languages that plausibly lie in $\mathbf{BPP^{IQP}} \backslash \mathbf{BPP}$.
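The Gaussian-elimination attack just described, for the degenerate case $\mathbbm{P}(\mathbf{X} \cdot \mathbf{s}^T = 0) = 1$, amounts to a kernel computation over $\mathbbm{F}_2$. The sketch below (ours; a brute-force kernel search stands in for elimination at this toy size) recovers a planted $\mathbf{s}$ from perfectly biased samples:

```python
import random
from itertools import product

random.seed(3)
n, m = 8, 60
s = [1] + [random.randint(0, 1) for _ in range(n - 1)]   # planted secret
j = 0                                                    # s[0] == 1

samples = []
for _ in range(m):
    x = [random.randint(0, 1) for _ in range(n)]
    if sum(a * b for a, b in zip(x, s)) % 2:             # force x . s = 0
        x[j] ^= 1
    samples.append(x)

# Kernel of the sample matrix over GF(2).  Brute force is fine at n = 8;
# Gaussian elimination (as in the text) scales this to realistic sizes.
kernel = [v for v in product([0, 1], repeat=n)
          if all(sum(a * b for a, b in zip(x, v)) % 2 == 0
                 for x in samples)]
assert tuple(s) in kernel
assert len(kernel) == 2      # {0, s}: overwhelmingly likely once m >> n
```

With many more samples than bits, the samples span the hyperplane orthogonal to $\mathbf{s}$ with overwhelming probability, so the kernel pins down $\mathbf{s}$ uniquely.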
This random variable $\mathbf{Y}$ is the `best classical approximation' that we have been able to find for $\mathbf{X}$. (The intuition is that it captures all of the `local' information in the function $f$, which is to say all the `local' information in the matroid $P$, so that the only data left unaccounted for and excluded from use within building this classical distribution is the `non-local' matroid information, which is readily available to the quantum distribution via the magic of quantum superposition.) There seems to be no other sensible way of processing $P$ (or $f$) classically, to obtain useful samples efficiently, though it also seems hard to make any rigorous statement to that effect. \begin{conjecture} \label{conj:best} The classical method defined in this section, yielding random variable $\mathbf{Y}$, is asymptotically classically optimal (when comparing worst-case behaviour and restricting to polynomial time) for the simulation of $\mathbf{IQP}$ distributions arising from constant-action $\theta=\pi/8$ X-programs. \end{conjecture} This conjecture lends credence to the design methodology of section~\ref{sect:protocol}. \subsection{Future work} We might also recommend the further study of matroid invariants through quantum techniques, or perhaps the invariants of \emph{weighted} matroids, since they seem to be the natural objects of $\mathbf{IQP}$ computation as hitherto circumscribed. This would seem to be fertile ground for developing examples of things that only genuine quantum computers can achieve. Note that if it weren't for the correlation described in theorem~\ref{thm:unitbound}, then it would be possible to conceive of a mechanism whereby an $\mathbf{IQP}$-capable device could compute an actual secret or witness to something (\emph{e.g.} learn $\mathbf{s}$), so that the computation wouldn't require two rounds of player interaction to achieve something non-trivial. 
Yet as it stands, it is an open problem to suggest tasks for this paradigm involving neither communication nor multi-party concepts. \section{Architectures} \label{sect:architectures} In this section we sketch two more architectures for implementing $\mathbf{IQP}$ oracles. These architectures are probably more physically feasible than X-programs. \subsection{Z-networks} The network- or circuit-model of computation is perhaps the most familiar one. Programs for the first architecture we call ``Z-networks'', since the program is most easily described as a network of gates on an array of qubits, where the allowed gate-set includes just the Controlled-Not gate from any qubit to any qubit and the single-qubit gate that implements the Pauli $Z$ Hamiltonian\footnotemark{} for some time. \footnotetext{$Z = \ketbra00 - \ketbra11$, on a single qubit.} Although this Z-network architecture \emph{does} have a notion of temporal structure---because the order in which the gates of the network are carried out is important---nonetheless it is useful for our analysis, because it turns out to have effectively the same computational power as the X-program architecture under some basic assumptions, and because the Lie group structure underpinning the kinds of transformation allowable within the Z-network architecture is particularly easy to work with. On the understanding that $n$ qubits are initialised into $\ket{\mathbf{0}}$ in the computational basis and ultimately measured in the computational basis, it is well known that the gate-set consisting of Controlled-Not gates together with \emph{all single-qubit rotations} is universal for \textbf{BQP}, and the Lie group generated by this gate-set is the whole of $SU(2^n)$, modulo global phase.
However, by the term ``Z-network'' we mean explicitly to limit \emph{the single-qubit gates} to being those which implement $e^{i \theta Z}$ for some action $\theta$, so that the Lie group generated by the gate-set (represented in the computational basis) consists of unitary matrices that are supported by permutation matrices, \emph{i.e.} those unitaries that have just one non-zero entry per row. Any such unitary can be factored into a diagonal matrix followed by a permutation matrix. (In all cases, global phase is to be regarded as physically irrelevant, and may be `quotiented out' from the groups in question.) We can describe groups by giving generator sets for them. The group containing \emph{all even} permutations and all diagonal elements (modulo global phase) is \begin{eqnarray} \label{eqn:FullLiegroup} \left<~ \mbox{Toffoli, C-Not, } X, ~e^{i\theta Z} ~\right> &=& \left<~ \mbox{any even permutation}, ~\mbox{any diagonal} ~\right>. \end{eqnarray} This `qualifies' as a Z-network group; indeed, every Z-network group is a subgroup of this one. But for the purposes of comparison with X-programs and the $\mathbf{IQP}$ paradigm, it suffices to consider the much simpler group given by \begin{eqnarray} \label{eqn:Liegroup} \left<~ \mbox{C-Not, } X, ~e^{i\theta Z} ~\right> &=& \left<~ \mbox{any linear permutation}, ~\mbox{any diagonal} ~\right>. \end{eqnarray} This latter group does not appear to contain (efficiently) the dynamics of classical computation. (The $X$ gate is necessary to enable the construction of all diagonals, but one might prefer to replace implementation of $X$ by the availability of an ancilla $\ket1$ qubit, so that a C-Not gate can simulate an $X$ gate. In the language of complexity theory, (\ref{eqn:FullLiegroup}) might be said to stand in relation to~(\ref{eqn:Liegroup}) as $\mathbf{P}$ stands to $\mathbf{\bigoplus L}$.)
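As a concrete instance of the diagonal elements available in this group, the following check (ours, using numpy purely for illustration) confirms that conjugating an $e^{i\theta Z}$ gate on one qubit by a C-Not yields the two-qubit diagonal $e^{i\theta Z \otimes Z}$:

```python
import numpy as np

theta = np.pi / 8
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],       # control = first qubit,
                 [0, 1, 0, 0],       # target  = second qubit
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# e^{i theta Z} acting on the target qubit alone:
U = np.kron(I2, np.diag(np.exp(1j * theta * np.diag(Z))))

conjugated = CNOT @ U @ CNOT
ZZ = np.kron(Z, Z)
target = np.diag(np.exp(1j * theta * np.diag(ZZ)))   # e^{i theta Z (x) Z}
assert np.allclose(conjugated, target)
```

The conjugation merely permutes the diagonal entries of $U$, which is exactly the `diagonal supported by a permutation' structure described above.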
One can see how to build a variety of constructions within the group at line~(\ref{eqn:Liegroup}), using the specified gate-set. \emph{E.g.} by conjugating $e^{i\theta Z_1}$ with two C-Not gates one can create an $e^{i\theta Z_1 Z_2}$ composite unitary, and similarly $e^{i\theta Z_1 Z_2 Z_3}$ can be created, \emph{etc}. \subsubsection{Reductions between Z-networks and X-programs} This neat mathematical structure (\emph{i.e.} as at line~(\ref{eqn:Liegroup})) enables probability distributions of the kind at line~(\ref{eqn:dist1}) to be simulated. \begin{lemma} A Z-network can always be designed to simulate an X-program efficiently. \end{lemma} \textbf{Proof~:~} Initialise the input qubits of the Z-network to $\ket{+^n}$ in the Hadamard basis and measure the output in the same Hadamard basis; the simulation of a given X-program, $P$, then proceeds by constructing a small network of C-Not gates and an $\exp(i \theta_\mathbf{p} Z)$ gate for each row $\mathbf{p}$ of $P$. Details omitted for brevity. \qed Conversely, \begin{lemma} An X-program can efficiently simulate a given Z-network, provided that the Z-network uses Hadamard-basis input, C-Nots, $X$ gates, and $e^{i\theta Z}$ gates only, and outputs in the Hadamard basis. \end{lemma} \textbf{Proof~:~} The required reduction just associates one X-program element $(\theta_\mathbf{p}, \mathbf{p})$ to each $e^{i\theta Z}$ gate, setting $\theta_\mathbf{p} \leftarrow \theta$ and specifying $\mathbf{p}$ according to the location of the $e^{i\theta Z}$ gate \emph{and} the totality of C-Not gates to the left of that gate. A final piece of simple post-processing is needed after the measurement phase of the X-program, to account for the C-Nots in the Z-network, but this post-processing simply consists in applying the same C-Nots (with directions reversed) on the classical measurement outcomes.
(This is because moving from the Hadamard basis to the computational basis has the effect of reversing the direction of C-Not gates.) \qed The point of these reductions is to highlight the sense in which the group associated to simple Z-networks stands in the same relation to the set of X-programs as the `full' $SU(2^n)$ Lie group stands to the set of proper full-blown quantum algorithms. \subsection{Graph-programs} Programs for the second of these architectures we call ``Graph-programs'', since the program is most easily described as the construction of a graph state followed by a series of measurements of the qubits in the graph state in various bases \cite{lit:Browne06}. Graph state computing architectures are popular candidates for scalable fully universal quantum processors \cite{lit:Raus01, lit:Raus03}. Here we are concerned not with universal architectures, but with the appropriate restriction to `unit time' computation. So, unlike universal graph state computation, our Graph-programs do not admit adaptive feed-forward, which is to say that all measurement angles must be known and fixed at compile-time, so that all measurements can be made simultaneously once the graph state has been built. In this sense, the `depth' of a Graph-program is 1. A graph state has qubits that are initially devoid of information, but which are entangled together according to the pattern of some pre-specified graph. A graph state can be constructed without inherent temporal complexity, perhaps even prepared in a single computational time-step, because there is no implicit reason requiring one edge of the graph to be prepared before any other. (It is still fair to argue that the circuit-depth of the process that generates a graph state is linear in the valency of the graph, but that is not a measure of `inherent' temporal complexity.) 
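The absence of inherent temporal order in the entangling step can be illustrated with a small numerical sketch of our own (standard Controlled-$Z$ convention assumed): since every Controlled-$Z$ gate is diagonal in the computational basis, such gates commute even on overlapping pairs, so the graph state of a 3-vertex path is the same whichever of its edges is prepared first.

```python
import numpy as np

def cz(n, u, v):
    """Controlled-Z between qubits u and v of an n-qubit register (a diagonal gate)."""
    d = np.ones(2**n, dtype=complex)
    for b in range(2**n):
        if (b >> u) & 1 and (b >> v) & 1:
            d[b] = -1.0
    return np.diag(d)

n = 3
plus = np.ones(2**n, dtype=complex) / np.sqrt(2**n)  # |+>^n in the computational basis

# Path graph 0-1-2: the edges (0,1) and (1,2) share vertex 1,
# yet the two preparation orders give the same graph state.
g_forward = cz(n, 0, 1) @ (cz(n, 1, 2) @ plus)
g_reverse = cz(n, 1, 2) @ (cz(n, 0, 1) @ plus)
```

The same argument applies to any graph: all edges can in principle be applied in a single simultaneous step.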
We will show how Graph-programs can simulate the output of X-programs if a little trivial classical post-processing of the measurement results is allowed. A Graph-program is taken to be an undirected (usually bipartite) graph with labelled and distinguished vertices. The vertex set is denoted $V$, of cardinality $n$, and for each $v \in V$ there is an element of $SU(2)$ labelling it; $R_v \in SU(2)$. The edge set is denoted $E$. To implement the program, a qubit is associated with each vertex and is initialised to the state $\ket+$ in the Hadamard basis. Then a Controlled-$Z$ gate is applied between each pair of qubits whose vertices form an edge in $E$. Since these Controlled-$Z$ gates commute, they may be applied simultaneously, at least in theory. Finally, each vertex qubit $v$ is measured in the direction prescribed by its label $R_v$, returning a single classical bit. Clearly the order of measurement doesn't matter, because the measurement direction is \emph{prescribed} rather than \emph{adaptive}. A sample from $\mathbbm{F}_2^n$ (a bit-string) is thus generated as the total measurement result. Putting this together, we see that the probability distribution for such an output is \begin{eqnarray} \label{eqn:dist2} \mathbbm{P}(\mathbf{X}=\mathbf{x}) &=& \left| \bra\mathbf{x} ~\prod_{v \in V} R_v \cdot \!\!\!\!\prod_{(u,v) \in E}\!\!\!\frac{1 + Z_u + Z_v - Z_u Z_v}2 ~ \ket{+^n} \right|^2. \end{eqnarray} Here the measurement has been written using the notation of the computational basis, with an appropriate (passive) rotation immediately prior. \subsubsection{Graph-programs and X-programs} Having described three architectures, we've indicated that the X-programs, characterised by the formulation at line~(\ref{eqn:dist1}), are in some natural sense the `lowest common denominator' amongst the architectures (and unitary groups) of interest. \begin{lemma} A Graph-program can always be designed to simulate an X-program efficiently.
\end{lemma} \textbf{Proof~:~} Suppose we're given an X-program, written $\{~ (\theta_\mathbf{p}, \mathbf{p}) ~:~ \mathbf{p} \in P \subset \mathbbm{F}_2^n ~\}$. Then it is straightforward to simulate it on a Graph State architecture, as follows. Let $V$ be the disjoint union of $[1..n]$ and $P$, so that the graph state used to simulate the program will have one \emph{primal} qubit for each qubit being simulated, plus one \emph{ancilla} qubit for each program element $\mathbf{p}$, the total cardinality of $V$ being polynomial in $n$, by hypothesis. Let $(j,\mathbf{p}) \in E$ exactly when the $j$th component of $\mathbf{p}$ is a 1. In this way, the resulting graph is bipartite, linking primal qubits to those ancill\ae{} with which they are associated. Let $R_j$ be the Hadamard element ($H$) for all primal qubits, so that all primal qubits are measured in the Hadamard basis. Let $R_\mathbf{p} = \exp( i\theta_\mathbf{p} X )$, so that every ancilla qubit is measured in the $(YZ)$-plane at an angle specified by the corresponding program element. If the resulting Graph-program is executed, it will return a sample vector $\mathbf{x} \in \mathbbm{F}_2^{n+\#P}$ for which the $n$ bits from the primal qubits are correlated with the $\#P$ bits from the ancill\ae{} in a fashion which captures the desired output (though these two sets separately---marginally---will look like flat random data). To recover a sample from the desired distribution, we simply apply a classical Controlled-Not gate from each ancilla bit to each neighbouring primal bit, according to $E$, and then discard all the ancilla bits. One can use simple circuit identities to check that this produces the correct distribution of line~(\ref{eqn:dist1}) precisely. \qed \subsection{Physical comparison} An X-program uses only `one timeslice', but presumes an arbitrarily large gatespan (interaction length), up to $n$, the number of qubits in the X-program.
The Z-network reduction is physically preferable because it uses small gates (gatespan = 2) to simulate an X-program. However, it has network depth that is not constant; rather, it is likely to be quadratic in $n$, in general, unless one is careful to make optimisations when `compiling' the network. The Graph-program reduction uses more qubits than $n$, but has better depth properties than the Z-network architecture. Again, only small gates are used. It seems natural to suggest that the `true temporal cost' of implementing a Graph-program would be one process for the measurements, plus either one process or $degree$ processes for initialising the graph state, where $degree$ denotes the vertex-degree of the underlying graph. The number of qubits required in the simulation of an X-program is also slightly larger, since ancilla qubits are used. The worst case in our interactive protocol application would therefore require about $5q/2$ qubits, rather than $q/2$, though the vertex-degree would be no bigger than the number of qubits. The kinds of graph called for in our Graph-program reduction are not the usual cluster state graphs (regular planar lattice arrangement) used in measurement-based quantum computation. The bipartite graphs described in the reduction will usually be far from planar for generic X-programs, having a relatively high genus. This means that the graph cannot be `laid out' on a plane without edges crossing. \section*{Appendix} This appendix gives a proof of Theorem~\ref{thm:bias}. Throughout, the variable $\mathbf{p}$ ranges over the rows of the binary matrix $P$, which are the program elements of an X-program. We first derive line~(\ref{eqn:walshcode}) from line~(\ref{eqn:dist1}) in the case that the value $\theta$ is constant.
\begin{eqnarray} \mathbbm{P}(\mathbf{X}=\mathbf{x}) &=& \left| \bra\mathbf{x} ~\exp\left(~\sum_\mathbf{p} i\theta_\mathbf{p} \bigotimes_{j:p_j=1} X_j~\right)~ \ket{\mathbf{0}^n} \right|^2 \nonumber \\ &=& \left| 2^{-n}\sum_\mathbf{a} (-1)^{\mathbf{x} \cdot \mathbf{a}^T}\bra\mathbf{a} ~~\exp\left(~\sum_\mathbf{p} i\theta_\mathbf{p} \bigotimes_{j:p_j=1} Z_j~\right)~~ \sum_\mathbf{b} \ket{\mathbf{b}} \right|^2 \nonumber \\ &=& \left| ~\mathbbm{E}_{\mathbf{a}}~ \left[ (-1)^{\mathbf{x} \cdot \mathbf{a}^T} ~\exp\left(~i\theta \sum_{\mathbf{p}} (-1)^{\mathbf{p} \cdot \mathbf{a}^T} ~\right) \right] ~\right|^2 \nonumber \\ &=& \mathbbm{E}_{\mathbf{a},\mathbf{d}}~ \left[ (-1)^{\mathbf{x} \cdot \mathbf{d}^T} ~\exp\left(~ i\theta \sum_{\mathbf{p}} (-1)^{\mathbf{p} \cdot \mathbf{a}^T} \Bigl(1 - (-1)^{\mathbf{p} \cdot \mathbf{d}^T}\Bigr) ~\right) \right]. \end{eqnarray} On the second line we made a change of basis, so as to replace the Pauli $X$ operators with Pauli $Z$ ones. \begin{eqnarray} \label{eqn:wooo} \mathbbm{P}(\mathbf{X} \cdot \mathbf{s}^T = 0) &=& 2^{n} ~\mathbbm{E}_{\mathbf{x}} \left[~ \{\mathbf{x}\cdot\mathbf{s}^T=0\} \cdot \mathbbm{P}( \mathbf{X} = \mathbf{x} ) ~\right] \nonumber \\ &=& 2^{n} ~\mathbbm{E}_{\mathbf{a},\mathbf{d},\mathbf{x}} \left[ \frac{(1+(-1)^{\mathbf{x}\cdot\mathbf{s}^T})}2 ~(-1)^{\mathbf{x} \cdot \mathbf{d}^T} ~e^{ i\theta \sum_\mathbf{p} (-1)^{\mathbf{p} \cdot \mathbf{a}^T} \Bigl(1 - (-1)^{\mathbf{p} \cdot \mathbf{d}^T}\Bigr) } \right] \nonumber \\ &=& 2^{n} ~\mathbbm{E}_{\mathbf{a},\mathbf{d}} \left[ \frac{\Bigl( \{\mathbf{d}=\mathbf{0}\} + \{\mathbf{d}=\mathbf{s}\} \Bigr)}2 ~e^{ i\theta \sum_\mathbf{p} (-1)^{\mathbf{p} \cdot \mathbf{a}^T} \Bigl(1 - (-1)^{\mathbf{p} \cdot \mathbf{d}^T}\Bigr) } \right] \nonumber \\ &=& \frac12\left( 1 ~+~ \mathbbm{E}_\mathbf{a} \left[ e^{ i\theta \sum_\mathbf{p} (-1)^{\mathbf{p} \cdot \mathbf{a}^T} \Bigl(1 - (-1)^{\mathbf{p} \cdot \mathbf{s}^T}\Bigr) } \right] \right). 
\end{eqnarray} These transformations are conceptually simple but notationally untidy. \begin{eqnarray} 2 \cdot \mathbbm{P}(\mathbf{X} \cdot \mathbf{s}^T = 0) - 1 &=& \sum_j e^{ij\theta} ~\mathbbm{E}_{\mathbf{a},\phi} \left[ e^{i\phi\left( -j ~+~ \sum_\mathbf{p} (-1)^{\mathbf{p} \cdot \mathbf{a}^T} \Bigl(1 - (-1)^{\mathbf{p} \cdot \mathbf{s}^T}\Bigr) ~\right)} \right] \nonumber \\ &=& \sum_j e^{ij\theta} ~\mathbbm{P}_{\mathbf{a}} \left(~ j ~=~ 2\!\!\!\!\!\!\sum_{\mathbf{p}~:~\mathbf{p}\cdot\mathbf{s}^T=1} \!\!(-1)^{\mathbf{p} \cdot \mathbf{a}^T} ~\right) \nonumber \\ &=& \sum_j e^{ij\theta} ~\mathbbm{P} \left(~ j = 2 (~ n_\mathbf{s} - 2 \cdot wt( \mathbf{c} ) ~) ~~|~~ \mathbf{c} \sim \mathcal{C}_\mathbf{s} ~\right) \nonumber \\ &=& \sum_w \cos(~ 2\theta(n_\mathbf{s} - 2 w) ~) \cdot \mathbbm{P}\left(~ w = wt( \mathbf{c} ) ~~|~~ \mathbf{c} \sim \mathcal{C}_\mathbf{s} ~\right). \end{eqnarray} Here we have used the standard Fourier decomposition of a periodic function, and the fact that the function is known to be real. The variable substitution at the third line was $\mathbf{c} = P_\mathbf{s} \cdot \mathbf{a}^T$, understood in the correct basis. At the fourth line it was $w = (2n_\mathbf{s}-j)/4$. \begin{eqnarray*} \mathbbm{P}(\mathbf{X} \cdot \mathbf{s}^T = 0) &=& \sum_{w=0}^{n_\mathbf{s}} \cos^2(~ \theta(n_\mathbf{s} - 2 w) ~) \cdot \mathbbm{P}\left(~ w = wt( \mathbf{c} ) ~~|~~ \mathbf{c} \sim \mathcal{C}_\mathbf{s} ~\right) \\ &=& \mathbbm{E}_{\mathbf{c} \sim \mathcal{C}_\mathbf{s}} \left[~ \cos^2\Bigl(~ \theta( n_\mathbf{s} ~-~ 2 \cdot wt(\mathbf{c}) ) ~\Bigr) ~\right]. ~~~~~~~~~~~~~~~~~~~~~~~ \qed \end{eqnarray*} \end{document}
\begin{document} \title{System-time entanglement in a discrete time model} \author{A.\ Boette, R.\ Rossignoli, N.\ Gigena, M.\ Cerezo} \affiliation{Instituto de F\'{\i}sica de La Plata and Departamento de F\'{\i}sica, Universidad Nacional de La Plata, C.C. 67, La Plata (1900), Argentina} \begin{abstract} We present a model of discrete quantum evolution based on quantum correlations between the evolving system and a reference quantum clock system. A quantum circuit for the model is provided, which in the case of a constant Hamiltonian is able to represent the evolution over $2^n$ time steps in terms of just $n$ time qubits and $n$ control gates. We then introduce the concept of system-time entanglement as a measure of distinguishable quantum evolution, based on the entanglement between the system and the reference clock. This quantity vanishes for stationary states and is maximum for systems jumping onto a new orthogonal state at each time step. In the case of a constant Hamiltonian leading to a cyclic evolution it is a measure of the spread over distinct energy eigenstates, and satisfies an entropic energy-time uncertainty relation. The evolution of mixed states is also examined. Analytical expressions for the basic case of a qubit clock, as well as for the continuous limit in the evolution between two states, are provided. \end{abstract} \pacs{03.65.Ta,03.65.Ud,06.30.Ft,03.67.-a} \maketitle \section{Introduction} Ever since the foundations of quantum mechanics, time has been mostly considered as an external classical parameter. Various attempts to incorporate time in a fully quantum framework have nonetheless been made, starting with the Page and Wootters mechanism \cite{PaW.83} and other subsequent proposals \cite{CR.91,IS.94}. 
This subject has recently received increasing attention in both quantum mechanics \cite{M.14,M.15,FC.13,Ve.14,CR.15} and general relativity \cite{QT.15,Ga.09}, where this problem is considered a key issue in the connection between both theories. In the present work we introduce a simple discrete quantum model of evolution which, on the one hand, constitutes a consistent discrete version of the formalism of \cite{PaW.83,QT.15} and, on the other, provides a practical means to simulate quantum evolutions. We show that a quantum circuit for the model can be constructed, which in the case of a constant Hamiltonian is able to simulate the evolution over $N=2^n$ times in terms of just $n$ time-qubits and $O(n)$ gates, providing the basis for a parallel-in-time simulation. We then introduce and discuss the concept of system-time entanglement, which arises naturally in the present scenario, as a quantifier of the actual distinguishable evolution undergone by the system. Such a quantifier can be related to the minimum time necessarily elapsed by the system. For a constant Hamiltonian we show that this entanglement is bounded above by the entropy associated with the spread over energy eigenstates of the initial state, reaching this bound for a spectrum leading to a cyclic evolution, in which case it satisfies an entropic energy-time uncertainty relation. Illustrative analytical results for a qubit-clock, which constitutes the basic building block in the present setting, are provided. The continuous limit for the evolution between two arbitrary states is also analyzed. \section{Formalism} \subsection{History states} We consider a bipartite system $S+T$, where $S$ represents a quantum system and $T$ a quantum clock system with finite Hilbert space dimension $N$.
The whole system is assumed to be in a pure state of the form \begin{equation} |\Psi\rangle=\frac{1}{\sqrt{N}}\sum_{t=0}^{N-1} |\psi_t\rangle|t\rangle\label{1}\,,\end{equation} where $\{|t\rangle,\;t=0,\ldots,N-1\}$ is an orthonormal basis of $T$ and $\{|\psi_t\rangle,\;t=0,\ldots,N-1\}$ arbitrary pure states of $S$. Such a state can describe, for instance, the whole evolution of an initial pure state $|\psi_0\rangle$ of $S$ at a discrete set of times $t$. The state $|\psi_t\rangle$ at time $t$ can be recovered as the conditional state of $S$ after a local measurement at $T$ in the previous basis with result $t$: \begin{equation} |\psi_t\rangle\langle\psi_t|=\frac{{\rm Tr}_T\,[|\Psi\rangle\langle\Psi|\,\Pi_t]}{ \langle\Psi|\Pi_t|\Psi\rangle}\,,\end{equation} where $\Pi_t=\mathbb{1}\otimes |t\rangle\langle t|$. In shorthand notation $|\psi_t\rangle\propto \langle t|\Psi\rangle$. If we write \begin{equation} |\psi_t\rangle=U_t|\psi_0\rangle,\;\;t=0,\ldots,N-1\label{u}\,, \end{equation} where $U_t$ are unitary operators at $S$ (with $U_0=\mathbb{1}$), the state (\ref{1}) can be generated with the schematic quantum circuit of Fig.\ \ref{f1}. Starting from the product initial state $|\psi_0\rangle|0\rangle$, a Hadamard-like gate \cite{NC.00} at $T$ turns it into the superposition $\frac{1}{\sqrt{N}}\sum_{t=0}^{N-1}|\psi_0\rangle|t\rangle$, after which a control-like gate $\sum_t U_t\otimes |t\rangle\langle t|$ will transform it into the state (\ref{1}). A specific example will be provided in Fig.\ \ref{f2}. \begin{figure} \caption{(Color online) Schematic circuit representing the generation of the system-time pure state (\ref{1}).
The control gate performs the operation $U_t$ on $S$ if $T$ is in state $|t\rangle$, while the Hadamard-type gate $H$ creates the superposition $\propto\sum_{t=0}^{N-1}|t\rangle$.} \label{f1} \end{figure} From a formal perspective, the state (\ref{1}) is a ``static'' eigenstate of the $S+T$ translation ``super-operator'' \begin{equation} {\cal U}=\sum_{t=1}^{N} U_{t,t-1}\otimes |t\rangle\langle {t-1}|\,, \label{H}\end{equation} where $U_{t,t-1}=U_t U_{t-1}^\dagger$ evolves the state of $S$ from $t-1$ to $t$ ($|\psi_t\rangle=U_{t,t-1}|\psi_{t-1}\rangle$) and the cyclic condition $|N\rangle\equiv |0\rangle$, i.e.\ $U_{N,N-1}=U_{N-1}^\dagger$, is imposed. Then, \begin{equation} {\cal U}|\Psi\rangle=|\Psi\rangle\label{U1}\,\end{equation} showing that the state (\ref{1}) remains strictly invariant under such global translations in the $S+T$ space. Eq.\ (\ref{U1}) holds for {\it any} choice of initial state $|\psi_0\rangle$ in (\ref{1}). The eigenvalue $1$ of ${\cal U}$ has then a degeneracy equal to the Hilbert space dimension $M$ of $S$, since for $M$ orthogonal initial states $|\psi_0^j\rangle$, $\langle\psi_0^j|\psi_0^l\rangle=\delta_{jl}$, the ensuing states $|\Psi^l\rangle$ are orthogonal due to Eq.\ (\ref{u}): \begin{equation}\langle\Psi^{l}|\Psi^j\rangle=\frac{1}{N}\sum_{t=0}^{N-1}\langle \psi^l_{t}|\psi^j_t\rangle=\langle\psi_0^l|\psi_0^j\rangle=\delta_{lj}\,.\end{equation} The remaining eigenstates of ${\cal U}$ are of the form $|\Psi_k\rangle=\frac{1}{\sqrt{N}}\sum_{t=0}^{N-1} e^{i2\pi kt/N}|\psi_t\rangle|t\rangle\label{lk}$ with $k$ integer and represent the evolution associated with operators $U_t^k= e^{i2\pi kt/N}U_t$: \begin{equation} {\cal U}|\Psi_k\rangle=e^{-i2\pi k/N}|\Psi_k\rangle\,,\;\;k=0,\ldots,N-1 \label{ck}\,. \end{equation} All eigenvalues $\lambda_k=e^{-i2\pi k/N}$ are $M$-fold degenerate by the same previous arguments. The full set of $N$ eigenvalues and a choice of $M N$ orthogonal eigenvectors of ${\cal U}$ are thus obtained. 
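The invariance (\ref{U1}) is easy to verify numerically. The following sketch (ours, not part of the original text; it assumes a qubit system, $M=2$, and a clock of dimension $N=4$) builds the history state (\ref{1}) from random one-step unitaries and checks that ${\cal U}|\Psi\rangle=|\Psi\rangle$:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 2, 4  # system dimension M, clock dimension N

def random_unitary(d):
    """Haar-distributed d x d unitary via QR of a complex Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

# One-step evolutions U_{t,t-1}, and accumulated U_t with U_0 = identity.
steps = [random_unitary(M) for _ in range(N - 1)]
U_t = [np.eye(M, dtype=complex)]
for s in steps:
    U_t.append(s @ U_t[-1])

psi0 = np.zeros(M, dtype=complex)
psi0[0] = 1.0
e = np.eye(N)

# History state |Psi> = (1/sqrt N) sum_t |psi_t> (x) |t>.
Psi = sum(np.kron(U_t[t] @ psi0, e[t]) for t in range(N)) / np.sqrt(N)

# Translation super-operator sum_t U_{t,t-1} (x) |t><t-1|,
# with the cyclic condition U_{N,N-1} = U_{N-1}^dagger.
calU = sum(np.kron(steps[t - 1], np.outer(e[t], e[t - 1])) for t in range(1, N))
calU = calU + np.kron(U_t[N - 1].conj().T, np.outer(e[0], e[N - 1]))
```

The check passes for any choice of $|\psi_0\rangle$ and of the one-step unitaries, in agreement with the $M$-fold degeneracy of the eigenvalue $1$.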
We may then write, for general $U_t$, \begin{equation} {\cal U}=\exp[-i {\cal J}]\label{J}\,,\end{equation} with ${\cal J}$ hermitian and satisfying ${\cal J}|\Psi_k\rangle=2\pi\frac{k}{N}|\Psi_k\rangle$ for $k=0,\ldots,N-1$. In particular, the states (\ref{1}) satisfy \begin{equation} {\cal J}|\Psi\rangle=0\,,\end{equation} which represents a discrete counterpart of the Wheeler-DeWitt equation \cite{QT.15,DW.67,HH.83} determining the state $|\Psi\rangle$ in continuous time theories \cite{QT.15}. In the limit where $t$ becomes a continuous unrestricted variable, the state (\ref{1}) with condition (\ref{u}) becomes in fact that considered in \cite{QT.15}. Note, however, that here ${\cal J}$ is actually defined just modulo $2\pi$, as any ${\cal J}$ satisfying ${\cal J}|\Psi_k\rangle=2\pi(\frac{k}{N}+n_k)|\Psi_k\rangle$ with $n_k$ integer will also fulfill Eq.\ (\ref{J}). All $|\Psi_k\rangle$ are also eigenstates of the hermitian operators ${\cal U}_{\pm}=i^{\frac{1\mp 1}{2}}({\cal U}\pm{\cal U}^\dagger)/2$, with eigenvalues $\cos\frac{2\pi k}{N}$ and $\sin\frac{2\pi k}{N}$ respectively, i.e. $1$ and $0$ for the states (\ref{1}). The latter can then be also obtained as ground states of $-{\cal U}_+$. A hermitian operator ${\cal H}$ similar to $-{\cal U}_+$ but with no cyclic condition (${\cal H}=-\tilde{\cal U}_++I_{S}\otimes I_{T}$, with $\tilde{\cal U}={\cal U}-U^\dagger_{N-1}\otimes|0\rangle\langle N-1| +\frac{1}{2}I_S\otimes(|0\rangle\langle 0|+|N-1\rangle\langle N-1|)$) was considered in \cite{FC.13} for deriving a variational approximation to the evolution. \subsection{Constant evolution operator} If $U_{t,t-1}=U$ $\forall$ $t$, then \begin{equation} U_t=(U)^t=\exp[-iHt]\,,\;\;\;t=0,\ldots,N-1\,, \label{C}\end{equation} where $H$ represents a constant Hamiltonian for system $S$. In this case the state (\ref{1}) can be generated with the first step of the circuit employed for phase estimation \cite{NC.00}, depicted in Fig.\ \ref{f2}.
If $N=2^n$, such a circuit, consisting of just $n$ time qubits and $m=\log_2 M$ system qubits, requires only $n$ initial single-qubit Hadamard gates on the time qubits if initialized at $|0\rangle$ (such that $|0\rangle_T\equiv \otimes_{j=1}^n|0_j\rangle\rightarrow \otimes_{j=1}^n\frac{|0_j\rangle+|1_j\rangle}{\sqrt{2}}= \frac{1}{\sqrt{N}}\sum_{t=0}^{N-1}|t\rangle$ for $t=\sum_{j=1}^{n}t_j 2^{j-1}$), plus $n$ control $U^{2^{j-1}}$ gates acting on the system qubits, which perform the operation $U^t|\psi_0\rangle= \prod_{j=1}^n U^{t_j 2^{j-1}}|\psi_0\rangle$. A measurement of the time qubits with result $t$ makes $S$ collapse to the state $|\psi_t\rangle=e^{-i Ht}|\psi_0\rangle$. \begin{figure}\label{f2} \end{figure} In addition, if $U$ in (\ref{C}) satisfies the cyclic condition $U^N=\mathbb{1}$, which implies that $H$ should have eigenvalues $2\pi k/N$ with $k$ integer, Eq.\ (\ref{H}) can be written as \begin{equation} {\cal U}=U\otimes V=\exp[-i(H\otimes\mathbb{1}_T+\mathbb{1}_S\otimes P)]\,, \label{UU}\end{equation} where $V=\exp[-iP]=\sum_{t=1}^{N} |t\rangle\langle t-1|$ is the (cyclic) time translation operator. Its eigenstates are the discrete Fourier transform (FT) of the time states $|t\rangle$, \begin{equation} V|\tilde{k}\rangle=e^{-i2\pi k/N}|\tilde{k}\rangle,\;\; |\tilde{k}\rangle=\frac{1}{\sqrt{N}}\sum_{t=0}^{N-1}e^{i2\pi kt/N}|t\rangle\label{TT}\,, \end{equation} for $k=0,\ldots,N-1$, such that $P$ is the ``momentum'' associated with the time operator $T$: \begin{equation} T|t\rangle=t|t\rangle\,,\;\;P|\tilde{k}\rangle=2\pi \frac{k}{N}|\tilde{k}\rangle\,.\end{equation} Hence, ${\cal J}=H\otimes\mathbb{1}_T+\mathbb{1}_S\otimes P$ adopts in this case the same form as that of continuous theories \cite{QT.15}. \subsection{System-Time entanglement} Suppose now that one wishes to quantify consistently the ``amount'' of distinguishable evolution of a pure quantum state.
Such a measure can be related to a minimum time $\tau_m$ (number or fraction of steps) necessarily elapsed by the system. If the state is stationary, $|\psi_t\rangle\propto |\psi_0\rangle$ $\forall$ $t$, the quantifier should vanish (and $\tau_m=0$), whereas if all $N$ states $|\psi_t\rangle$ are orthogonal to each other, the quantifier should be maximum (with $\tau_m=N-1$), indicating that the state has indeed evolved through $N$ distinguishable states. We now propose the entanglement of the pure state (\ref{1}) (system-time entanglement) as such a quantifier, with $\tau_m$ an increasing function of this entanglement. In Figs.\ \ref{f1}--\ref{f2}, such entanglement is just that between the system and the time-qubits, generated by the control $U_t$. We first note that Eq.\ (\ref{1}) is not, in general, the Schmidt decomposition \cite{NC.00} of the state $|\Psi\rangle$, which is \begin{equation} |\Psi\rangle=\sum_{k}\sqrt{p_k}|k\rangle_S|k\rangle_T\label{sd}\,,\end{equation} where $|k\rangle_{S(T)}$ are orthogonal states of $S$ and $T$ ($_\mu\langle k|k'\rangle_{\mu}=\delta_{kk'}$) and $p_k$ the eigenvalues of the reduced states of $S$ and $T$, \begin{equation} \rho_{S(T)}={\rm Tr}_{T(S)}|\Psi\rangle\langle\Psi|= \sum_k p_k|k\rangle_{S(T)}\langle k|\,. \end{equation} The entanglement entropy between $S$ and $T$ is then \begin{equation} E(S,T)=S(\rho_S)=S(\rho_T)=-\sum_k p_k\log_2 p_k\,,\label{2} \end{equation} where $S(\rho)=-{\rm Tr}\rho\log_2\rho$ is the von Neumann entropy. Eq.\ (\ref{2}) satisfies the basic requirements of an evolution quantifier. If the state of $S$ is stationary, $|\psi_t\rangle=e^{i\gamma_t}|\psi_0\rangle$ $\forall$ $t$, the state (\ref{1}) becomes {\it separable}, \begin{equation} |\Psi\rangle=|\psi_0\rangle(\frac{1}{\sqrt{N}}\sum_t e^{i\gamma_t}|t\rangle)\,,\label{sta}\end{equation} implying $E(S,T)=0$.
In contrast, if $|\psi_t\rangle$ evolves through $N$ orthogonal states, then $|\Psi\rangle$ is {\it maximally entangled}, with Eq.\ (\ref{1}) already its Schmidt decomposition and \begin{equation} E(S,T)=E_{\rm max}(S,T)=\log_2 N\label{Emax}\,.\end{equation} It is then natural to define the minimum time $\tau_m$ as \begin{equation}\tau_m=2^{E(S,T)}-1\,,\end{equation} which takes the values $0$ and $N-1$ for the previous extreme cases. The vast majority of evolutions will lie in between. For instance, a periodic evolution of period $L<N$ with $N/L$ integer, such that $|\psi_{t+L}\rangle=e^{i\gamma}|\psi_{t}\rangle$ $\forall$ $t$, will lead to \begin{equation} |\Psi\rangle=\frac{1}{\sqrt{L}}\sum_{t=0}^{L-1} |\psi_t\rangle|t_L\rangle,\;\; |t_L\rangle=\sqrt{\frac{L}{N}}\sum_{k=0}^{N/L-1}e^{i\gamma k}|t+L k\rangle \label{L}\,,\end{equation} with $\langle t'_L|t_L\rangle=\delta_{tt'}$. Hence, its entanglement $E(S,T)$ {\it will be the same as that obtained with an $L$-dimensional effective clock}, as it should be. Its maximum value, obtained for $L$ orthogonal states, will then be $\log_2 L$, in which case $\tau_m=L-1$. The Schmidt decomposition (\ref{sd}) represents in this context the ``actual'' evolution between orthogonal states, with $p_k$ proportional to the ``permanence time'' in each of them. A measurement on $T$ in the Schmidt basis would always identify orthogonal states of $S$ for different results (and vice versa), with the probability distribution of results indicating the ``permanence'' in these states. If in Eq.\ (\ref{1}) there are $n_k$ times $t$ where $|\psi_t\rangle\propto |k\rangle_S$, with $\sum_k n_k=N$ and $|k\rangle_S$ orthogonal states, then \[|\Psi\rangle=\sum_k \sqrt{\frac{n_k}{N}}|k\rangle_S (\frac{1}{\sqrt{n_k}}\sum_{t/|\psi_t\rangle\propto|k\rangle_S}e^{i\gamma_t}|t\rangle)\,,\] which is the Schmidt decomposition (\ref{sd}) with $p_k\propto n_k$, i.e. proportional to the total time in the state $|k\rangle_S$.
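The two extreme cases can be reproduced numerically. The sketch below (our own, with $N=4$ and a four-dimensional system so that $N$ mutually orthogonal states exist) obtains the Schmidt coefficients $p_k$ from the singular values of the $M\times N$ amplitude matrix of $|\Psi\rangle$ and evaluates Eq.~(\ref{2}):

```python
import numpy as np

def history_matrix(states):
    """Amplitude matrix Psi[s, t] = <s|psi_t>/sqrt(N) of the history state."""
    return np.stack(states, axis=1) / np.sqrt(len(states))

def ent_entropy(Psi):
    """System-time entanglement entropy E(S,T) in bits, from the Schmidt
    coefficients p_k = (singular values of Psi)^2."""
    p = np.linalg.svd(Psi, compute_uv=False) ** 2
    p = p[p > 1e-12]  # drop numerically vanishing coefficients
    return float(-(p * np.log2(p)).sum())

N = 4
e = np.eye(N, dtype=complex)

# Stationary evolution: |psi_t> = e^{i gamma_t}|psi_0>  ->  separable, E = 0.
E_stat = ent_entropy(history_matrix([np.exp(1j * 0.3 * t) * e[0] for t in range(N)]))

# Evolution through N mutually orthogonal states  ->  maximal, E = log2 N = 2.
E_orth = ent_entropy(history_matrix([e[t] for t in range(N)]))
```

Intermediate evolutions give values between these extremes, and $\tau_m=2^{E(S,T)}-1$ then interpolates between $0$ and $N-1$.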
Note also that Eqs.\ (\ref{sd})--(\ref{2}) are essentially symmetric, so that the roles of $S$ and $T$ can in principle be interchanged. {\it Quadratic entanglement.} A simple quantifier for the general case can be obtained through the entanglement determined by the entropy $S_2(\rho)=2(1-{\rm Tr}\,\rho^2)$, which is just a linear function of the purity ${\rm Tr}\,\rho^2$ and does not require the evaluation of the eigenvalues of $\rho$ \cite{S2.00,RC.03,GR.14} (purity is also more easily accessible experimentally \cite{T.13}). We obtain, using $\rho_S=\frac{1}{N}\sum_t |\psi_t\rangle\langle\psi_t|$, \begin{eqnarray}E_2(S,T)&=&S_2(\rho_T)=S_2(\rho_S)=2(1-{\rm Tr}\,\rho_S^2)\nonumber\\ &=&2{\textstyle\frac{N-1}{N}}(1-{\textstyle\frac{1}{N(N-1)}}\sum_{t\neq t'}|\langle\psi_t|\psi_{t'}\rangle|^2)\,,\label{E2}\end{eqnarray} which is just a decreasing function of the average pairwise squared fidelity between all visited states. If they are all proportional, $E_2(S,T)=0$ whereas if they are all orthogonal, $E_2(S,T)=2\frac{N-1}{N}$ is maximum. If $S$ and $T$ are qubits $E_2(S,T)$ is just the squared {\it concurrence} \cite{Wo.97} of $|\Psi\rangle$. \subsection{Relation with energy spread} In the constant case (\ref{C}), we may expand $|\psi_0\rangle$ in the eigenstates of $U$ or $H$, $|\psi_0\rangle=\sum_k c_k |k\rangle$ with $H|k\rangle=E_k|k\rangle $, such that $|\psi_t\rangle=\sum_k c_k e^{-i E_k t}|k\rangle$ and \begin{equation} |\Psi\rangle=\frac{1}{\sqrt{N}}\sum_{k,t} c_k e^{-iE_k t}|k\rangle|t\rangle =\sum_k c_k |k\rangle|\tilde{k}\rangle_T\,,\label{CK}\end{equation} with $|\tilde{k}\rangle_T=\frac{1}{\sqrt{N}}\sum_{t}e^{-iE_k t}|t\rangle$. We can always assume all $E_k$ distinct in (\ref{CK}) such that $c_k |k\rangle$ is the projection of $|\psi_0\rangle$ onto the eigenspace with energy $E_k$. 
In the cyclic case $U^N=\mathbb{1}$, with $E_k=2\pi k/N$, $k=0,\ldots,N-1$, the states $|\tilde{k}\rangle_T$ become the orthogonal FT states (\ref{TT}) ($|\tilde{k}\rangle_T=|\widetilde{-k}\rangle$). Eq.\ (\ref{CK}) is then {\it the Schmidt decomposition} (\ref{sd}), with $p_k=|c_k|^2$ and \begin{equation} E(S,T)=-\sum_k |c_k|^2\log_2 |c_k|^2\label{ESTK}\,.\end{equation} For this spectrum, the entanglement then becomes {\it a measure of the spread of the initial state $|\psi_0\rangle$ over the eigenstates of $H$} with distinct energies. The same holds in the quadratic case (\ref{E2}), where $E_2(S,T)=2\sum_k |c_k|^2(1-|c_k|^2)$. If there is no dispersion, $|\psi_0\rangle$ is stationary and the entanglement vanishes, while if $|\psi_0\rangle$ is uniformly spread over $N$ eigenstates it is maximum ($E(S,T)=\log_2 N$). While Eq.\ (\ref{ESTK}) also holds for a displaced spectrum $E_k=E_0+2\pi k/N$, for an arbitrary spectrum $\{E_k\}$ it will hold approximately if the overlaps $_T\langle \tilde{k}|\tilde{k'}\rangle_T=\frac{1}{N}\sum_t e^{-i(E_k-E_{k'})t}$ are sufficiently small for $k\neq k'$. In general we actually have the strict bound \begin{equation}E(S,T)\leq -\sum_k |c_k|^2\log_2 |c_k|^2, \label{ESTK2}\end{equation} since $|c_k|^2=\sum_{k'}p_{k'}|\langle k|k'\rangle_S|^2$, with $|k\rangle$ the eigenstates of $H$ and $|k'\rangle_S$ the Schmidt states in (\ref{sd}), which implies that the $|c_k|^2$'s are {\it majorized} \cite{Bha.97} by the $p_k$'s: \begin{equation} \{|c_k|^2\}\prec\{p_k\}\,, \label{prec}\end{equation} where $\{|c_k|^2\}$ and $\{p_k\}$ denote the sets sorted in decreasing order.
Eq.\ (\ref{prec}) (meaning $\sum_{k=1}^j |c_k|^2\leq \sum_{k=1}^jp_k$ for $j=1,\ldots,N-1$) implies that the inequality (\ref{ESTK2}) actually holds for any Schur-concave function of the probabilities \cite{Bha.97}, in particular for any entropic form $S_f(\rho)={\rm Tr}\,f(\rho)$ with $f(p)$ concave and satisfying $f(0)=f(1)=0$ \cite{GR.14,CR.02}, such as the von Neumann entropy ($f(\rho)=-\rho\log_2\rho$) and the previous $S_2$ entropy ($f(\rho)=2\rho(\mathbb{1}-\rho)$): \begin{equation}E_f(S,T)=\sum_k f(p_k)\leq \sum_k f(|c_k|^2), \label{ESTK3}\end{equation} as can be easily verified. Eqs.\ (\ref{ESTK})--(\ref{ESTK3}) then indicate that the entropy of the spread over Hamiltonian eigenstates of the initial state provides an upper bound to the corresponding system-time entanglement entropy that can be generated by {\it any} Hamiltonian diagonal in the states $|k\rangle$. The bound is always reached for an equally spaced spectrum $E_k=2\pi k/N\in[0,2\pi]$ leading to a cyclic evolution, which therefore generates {\it the highest possible system-time entanglement for a given initial spread $\{|c_k|^2\}$}. \subsection{Energy-time uncertainty relations} For the aforementioned equally spaced spectrum, we may also expand the state $|\psi_0\rangle$ of $S$ in an orthogonal set of uniformly spread states, \begin{equation} {\textstyle|\psi_0\rangle=\sum_{l=0}^{N-1} \tilde{c}_l|\tilde{l}\rangle_S\,, \;\;|\tilde{l}\rangle_S=\frac{1}{\sqrt{N}}\sum_k e^{i2\pi kl/N}|k\rangle\,,}\end{equation} with $\tilde{c}_l=\frac{1}{\sqrt{N}}\sum_k e^{-i2\pi kl/N}c_k$ the FT of the $c_k$'s in (\ref{CK}). Since $U^t|\tilde{l}\rangle_S=|\widetilde{l-t}\rangle_S$, it is verified that these maximally spread states $|\tilde{l}\rangle_S$ (which according to Eq.\ (\ref{ESTK}) lead to maximum system-time entanglement $E(S,T)=\log_2 N$) indeed evolve through $N$ orthogonal states $|\widetilde{l-t}\rangle_S$.
Moreover, Eq.\ (\ref{CK}) becomes \begin{equation} |\Psi\rangle=\frac{1}{\sqrt{N}}\sum_{l,t} \tilde{c}_l|\widetilde{l-t}\rangle_S|t\rangle=\frac{1}{\sqrt{N}}\sum_{l}|\tilde{l}\rangle_S(\sum_t \tilde{c}_t |t-l\rangle)\,, \label{U}\end{equation} showing that $\tilde{c}_l$ determines the distribution of time states $|t\rangle$ assigned to each state $|\tilde{l}\rangle_S$, i.e., the uncertainty in its time location. Being related through a finite FT, $\{c_k\}$ and $\{\tilde{c}_l\}$ satisfy various uncertainty relations, such as {\small \cite{DCT.91,PDO.01,Hi.57}} \begin{equation} E(S,T)+\tilde{E}(S,T)\geq \log_2 N\label{US}\,,\end{equation} where $\tilde{E}(S,T)=-\sum_l |\tilde{c}_l|^2\log_2 |\tilde{c}_l|^2$ is the entropy characterizing the time uncertainty and $E(S,T)$ the energy uncertainty (\ref{ESTK}). If localized in energy ($|c_{k}|=\delta_{kk'}$, $E(S,T)=0$), Eq.\ (\ref{US}) implies maximum time uncertainty ($|\tilde{c}_l|=\frac{1}{\sqrt{N}}$, $\tilde{E}(S,T)=\log_2 N$) and vice versa. We also have $n(\{c_k\})\,n(\{\tilde{c}_l\})\geq N$ \cite{DS.89}, where $n(\{\alpha_k\})$ denotes the number of non-zero $\alpha_k$'s. Bounds for the product of variances in the discrete FT are discussed in \cite{MS.08}. \subsection{Mixed states} Let us now consider that $S$ is a bipartite system $A+B$. By taking the partial trace of (\ref{1}), \begin{equation} \rho_{BT}={\rm Tr}_A\,|\Psi\rangle\langle\Psi|= \sum_j {_A}\langle j|\Psi\rangle\langle \Psi|j\rangle_A\label{BT}\,,\end{equation} we see that the system-time state for a subsystem is a {\it mixed state}.
Of course, the state of $B$ at time $t$, setting now $\Pi_t=I_B\otimes |t\rangle\langle t|$, is given by the standard expression \begin{equation}\rho_{Bt}=\frac{{\rm Tr}_T\, \rho_{BT}\Pi_t}{{\rm Tr}\,\rho_{BT}\Pi_t} = {\rm Tr}_A|\psi_t\rangle\langle\psi_t|\label{Bt}\,.\end{equation} If the initial state of $S$ is $|\psi_0\rangle=\sum_j\sqrt{q_j}|j\rangle_A|j\rangle_B$ (Schmidt decomposition), Eqs.\ (\ref{BT})--(\ref{Bt}) determine the evolution of an initial mixed state $\rho_{B0}=\sum_j q_j |j\rangle_B\langle j|$ of $B$, considered as a subsystem in a purified state undergoing unitary evolution. For instance, if just subsystem $B$ evolves, such that $U_t=I_A\otimes U_{Bt}$ $\forall$ $t$, Eq.\ (\ref{BT}) leads to \begin{eqnarray}\rho_{BT}&=&\sum_j q_j|\Psi_j\rangle_{BT}\langle\Psi_j|\,,\label{BTs} \end{eqnarray} where $|\Psi_j\rangle_{BT}=\frac{1}{\sqrt{N}}\sum_{t=0}^{N-1}U_{Bt}|j\rangle_B|t\rangle$. Eq.\ (\ref{BTs}) is then the mixture of the pure $B+T$ states associated with each eigenstate of $\rho_{B0}$, and implies the unitary evolution $\rho_{Bt}=U_{Bt}\rho_{B0}U^\dagger_{Bt}$. Since the state (\ref{BT}) is in general mixed, the correlations between $T$ and a subsystem $B$ can be more complex than those with the whole system $S$. The state (\ref{BT}) can in principle exhibit distinct types of correlations, including entanglement \cite{RF.89,BV.96}, discord-like correlations \cite{DZ.01,HV.03,KM.12,RCC.10} and classical-type correlations. The exact evaluation of the quantum correlations is also more difficult, being in general a hard problem \cite{DP.04,YH.14}. We will here consider just the entanglement of formation \cite{BV.96} $E(B,T)$ of the state (\ref{BT}), which, if nonzero, indicates that (\ref{BT}) cannot be written as a convex mixture of pure product states \cite{RF.89} $|\Psi_\alpha\rangle_{BT}=|\psi_\alpha\rangle_{B}|\phi_\alpha\rangle_T$. In this context the latter represent essentially {\it stationary} states.
Separability with time would then indicate that $\rho_{BT}$ can be written as a convex mixture of such states, requiring no quantum interaction with the clock system for its formation. \section{Examples} \subsection{The qubit clock} As an illustration, we examine the basic case of a qubit clock ($N=2$). Eq.\ (\ref{1}) becomes \begin{eqnarray} |\Psi\rangle&=&(|\psi_0\rangle|0\rangle+|\psi_1\rangle|1\rangle)/\sqrt{2}\nonumber\\ &=&\sqrt{p_+}|++\rangle+\sqrt{p_-}|--\rangle\,,\label{sd2}\\ p_{\pm}&=&(1\pm|\langle\psi_0|\psi_1\rangle|)/2\,,\nonumber \end{eqnarray} where $|\psi_1\rangle=U|\psi_0\rangle$ and (\ref{sd2}) is its Schmidt decomposition, with $|\pm\rangle_S=(|\psi_0\rangle\pm e^{-i\gamma} |\psi_1\rangle)/\sqrt{4p_{\pm}}$, $|\pm\rangle_T=(|0\rangle\pm e^{i\gamma}|1\rangle)/\sqrt{2}$ and $e^{i\gamma}=\frac{\langle \psi_0|\psi_1\rangle}{|\langle\psi_0|\psi_1\rangle|}$. Hence, $E(S,T)=-\sum_{\nu=\pm} p_\nu\log_2 p_\nu$ will be fully determined by the overlap or {\it fidelity} $|\langle\psi_0|\psi_1\rangle|$ between the initial and final states, decreasing as the fidelity increases and becoming maximum for orthogonal states. The quadratic entanglement entropy $E_2(S,T)$ becomes just \begin{equation} E_2(S,T)=4p_+p_-=1-|\langle\psi_0|\psi_1\rangle|^2\label{SF}\,.\end{equation} These results hold for arbitrary dimension $M$ of $S$. The operator (\ref{H}) becomes ${\cal U}=U\otimes|1\rangle\langle 0|+U^\dagger\otimes|0\rangle\langle 1|$, and is directly hermitian, with eigenvalues $e^{i2k\pi/2}=\pm 1$ for $k=0$ or $1$, $M$-fold degenerate. Hence, in this case \begin{equation} {\cal J}=\pi({\cal U}-\mathbb{1})/2\,,\end{equation} involving coupling between $S$ and $T$ unless $U^\dagger\propto U$. For $|\psi_1\rangle$ close to $|\psi_0\rangle$, Eq.\ (\ref{SF}) becomes proportional to the Fubini-Study metric \cite{AA.90}.
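The relation (\ref{SF}) between the qubit-clock entanglement and the fidelity holds for any dimension $M$ of $S$, which is easy to illustrate numerically. A minimal Python/NumPy sketch (our illustration; the dimension and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 5                                   # dimension of S (arbitrary)
# random unitary U via QR and a random normalized |psi0>
A = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
U, _ = np.linalg.qr(A)
psi0 = rng.normal(size=M) + 1j * rng.normal(size=M)
psi0 /= np.linalg.norm(psi0)
psi1 = U @ psi0

# history state for the qubit clock (N=2): columns t = 0, 1
Psi = np.stack([psi0, psi1], axis=1) / np.sqrt(2)
p = np.linalg.svd(Psi, compute_uv=False) ** 2   # Schmidt coefficients p_+, p_-
E2 = 2 * np.sum(p * (1 - p))                    # quadratic entropy S_2 = 4 p_+ p_-

fidelity = abs(np.vdot(psi0, psi1))
assert abs(E2 - (1 - fidelity ** 2)) < 1e-10    # Eq. (SF)
```

The two Schmidt coefficients obtained from the singular values are $(1\pm|\langle\psi_0|\psi_1\rangle|)/2$, so the quadratic entropy reproduces $1-|\langle\psi_0|\psi_1\rangle|^2$.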
If $U=\exp[-i\epsilon h]$, an expansion of $|\psi_0\rangle$ in the eigenstates of $h$, $|\psi_0\rangle=\sum_k c_k |k\rangle$ with $h|k\rangle=\varepsilon_k |k\rangle$, leads to \begin{equation} E_2(S,T)=1-|\sum_k |c_k|^2 e^{-i\epsilon \varepsilon_k}|^2\approx \epsilon^2(\langle h^2\rangle-\langle h\rangle^2)\label{apr}\,, \end{equation} where the last expression holds up to $O(\epsilon^2)$. Hence, for a ``small'' evolution the system-time entanglement of a single step is determined by the energy fluctuation $\langle h^2\rangle-\langle h\rangle^2$ in $|\psi_0\rangle$ ($\langle O\rangle\equiv\langle \psi_0|O|\psi_0\rangle$), with $E_2(S,T)$ directly proportional to it. For instance, if $S$ is also a single qubit and $\varepsilon_1-\varepsilon_0=\varepsilon$, the exact expression becomes \begin{eqnarray} E_2(S,T)&=&4\sin^2(\frac{\epsilon \varepsilon}{2})|c_0|^2|c_1|^2\label{apr1}\\&=& 4\sin^2(\frac{\epsilon\varepsilon}{2})\frac{\langle h^2 \rangle-\langle h\rangle^2}{\varepsilon^2}\,, \label{apr2}\end{eqnarray} which reduces to (\ref{apr}) for small $\epsilon$. It is also verified that $E_2(S,T)\leq S_2(|c_0|^2,|c_1|^2)=4|c_0|^2|c_1|^2$, i.e., it is upper bounded by the quadratic entropy of the energy spread (Eq.\ (\ref{ESTK3})), reaching the bound for $E=\epsilon\varepsilon=\pi$, in agreement with the general result (\ref{ESTK})--(\ref{ESTK2}). Returning to the case of a general $S$, we also note that $E_2(S,T)$ determines the minimum time required for the evolution from $|\psi_0\rangle$ to $|\psi_1\rangle$ in standard continuous time theories \cite{AA.90}, which depends on the fidelity $|\langle \psi_0|\psi_1\rangle|$ and can then be expressed in terms of $E_2$ as $\hbar\sin^{-1}(\sqrt{E_2(S,T)})/\sqrt{\langle h^2\rangle-\langle h\rangle^2}$. Let us now assume that $S=A+B$ is a two-qubit system, with $U=I_A\otimes U_B$.
As previously stated, starting from an initial entangled pure state of $A+B$ (purification of $\rho_{B0}$), the state (\ref{sd2}) will determine the evolution of the reduced state of $B$, leading to \begin{equation}\rho_{Bt}=p|\psi_t^0\rangle\langle\psi_t^0|+q|\psi_t^1\rangle\langle\psi_t^1|\,, \;\;t=0,1\label{rhob00}\end{equation} where $p+q=1$, $\langle\psi_0^0|\psi_0^1\rangle=0$ and $|\psi_1^j\rangle=U_B|\psi_0^j\rangle$ for $j=0,1$. The reduced state (\ref{BTs}) of $B+T$ becomes \begin{equation} \rho_{BT}=p|\Psi_0\rangle\langle\Psi_0|+q|\Psi_1\rangle\langle\Psi_1|\,, \label{rt}\end{equation} with $|\Psi_j\rangle=\frac{1}{\sqrt{2}}(|\psi_0^j\rangle|0\rangle+|\psi_1^j\rangle|1\rangle)$. Since (\ref{rt}) is a two-qubit mixed state, its entanglement of formation can be obtained through the concurrence \cite{Wo.97} $C(B,T)$, whose square is just the entanglement monotone associated with the quadratic entanglement entropy $E_2$ ($C^2(B,T)=E_2(B,T)$ for a pure $B+T$ state). It adopts here the simple expression \begin{equation} C^2(B,T)=(p-q)^2(1-|\langle \psi_0^j|\psi_1^j\rangle|^2)\label{CBT}\,,\end{equation} where $|\langle \psi_0^j|\psi_1^j\rangle|=|\langle \psi_0^j|U_B|\psi_0^j\rangle|$ is the same for $j=0$ or $1$ in a qubit system if $\langle\psi_0^0|\psi_0^1\rangle=0$. Eq.\ (\ref{CBT}) is then the pure state result (\ref{SF}) for any of the eigenstates of $\rho_{B0}$ diminished by the factor $(p-q)^2$, vanishing if $\rho_{B0}$ is maximally mixed ($p=q$). Remarkably, Eq.\ (\ref{CBT}) can also be written as \begin{equation} C^2(B,T)=1-F^2(\rho_{B0},\rho_{B1})\,,\label{CF}\end{equation} where $F(\rho_{B0},\rho_{B1})={\rm Tr}\,\sqrt{\rho_{B0}^{1/2}\rho_{B1}\rho_{B0}^{1/2}}$ is again the {\it fidelity} between the initial and final reduced mixed states of $B$ ($F=|\langle \psi_0|\psi_1\rangle|$ if $\rho_{B0}$, $\rho_{B1}$ are pure states).
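Eqs.\ (\ref{CBT})--(\ref{CF}) can be checked numerically. The sketch below (an illustration we add here; it assumes only the standard Wootters formula for the two-qubit concurrence and NumPy) builds $\rho_{BT}$ for a random $U_B$ and compares $C^2(B,T)$ with $1-F^2(\rho_{B0},\rho_{B1})$:

```python
import numpy as np

def msqrt(rho):
    """Hermitian square root of a positive semidefinite matrix."""
    w, v = np.linalg.eigh(rho)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def fidelity(r0, r1):
    s = msqrt(r0)
    return np.real(np.trace(msqrt(s @ r1 @ s)))

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

rng = np.random.default_rng(2)
p, q = 0.7, 0.3
# orthonormal eigenstates |psi_0^0>, |psi_0^1> of rho_B0, and a random U_B
V, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
psi00, psi01 = V[:, 0], V[:, 1]
UB, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

ket0, ket1 = np.eye(2)
Psi0 = (np.kron(psi00, ket0) + np.kron(UB @ psi00, ket1)) / np.sqrt(2)
Psi1 = (np.kron(psi01, ket0) + np.kron(UB @ psi01, ket1)) / np.sqrt(2)
rho_BT = p * np.outer(Psi0, Psi0.conj()) + q * np.outer(Psi1, Psi1.conj())

rho_B0 = p * np.outer(psi00, psi00.conj()) + q * np.outer(psi01, psi01.conj())
rho_B1 = UB @ rho_B0 @ UB.conj().T

C2 = concurrence(rho_BT) ** 2
assert abs(C2 - (1 - fidelity(rho_B0, rho_B1) ** 2)) < 1e-6
```

Both sides also agree with the direct expression $(p-q)^2(1-|\langle\psi_0^0|U_B|\psi_0^0\rangle|^2)$ of Eq.\ (\ref{CBT}).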
Note also that the total quadratic entanglement entropy is here \[E_2(S,T)=1-|p\langle \psi_1^0|\psi_0^0\rangle+q\langle \psi_1^1|\psi_0^1\rangle|^2,\] satisfying $E_2(S,T)\geq C^2(B,T)$ in agreement with the monogamy inequalities \cite{S2.00,OV.06}, coinciding iff $pq=0$ (pure case). \subsection{The continuous limit} Let us now assume that system $S$ is a qubit, with $T$ of dimension $N$ ($t=0,\ldots,N-1$). This case can also represent the evolution from an initial state $|\psi_0\rangle$ to an arbitrary final state $|\psi_f\rangle$ in a general system $S$ of Hilbert space dimension $M$ if all intermediate states $|\psi_t\rangle$ belong to the subspace generated by $|\psi_0\rangle$ and $|\psi_f\rangle$, such that the whole evolution is contained in a two-dimensional subspace of $S$. Writing the system states as \begin{equation}|\psi_t\rangle=\alpha_t|0\rangle+\beta_t|1\rangle\,,\;\;t=0,\ldots,N-1,\end{equation} with $\langle 0|1\rangle=0$ and $|\alpha_t|^2+|\beta_t|^2=1$, we may rewrite state (\ref{1}) as \begin{eqnarray}|\Psi\rangle&=&\frac{1}{\sqrt{N}}[|0\rangle(\sum_{t}\alpha_t|t\rangle)+ |1\rangle(\sum_{t} \beta_t|t\rangle)]\nonumber\\ &=&\alpha|0\rangle|\phi_0\rangle+\beta|1\rangle|\phi_1\rangle\,,\label{0T}\end{eqnarray} where $|\phi_0\rangle=\frac{1}{\sqrt{N}\alpha}\sum_t\alpha_t|t\rangle$, $|\phi_1\rangle=\frac{1}{\sqrt{N}\beta}\sum_t\beta_t|t\rangle$, are normalized (but not necessarily orthogonal) states of $T$ and all sums over $t$ are from $0$ to $N-1$, with \begin{equation}\alpha^2=\frac{1}{N}\sum_{t}|\alpha_t|^2,\;\;\beta^2=\frac{1}{N}\sum_{t} |\beta_t|^2=1-\alpha^2\,.\label{al}\end{equation} The Schmidt coefficients of the state (\ref{0T}) are given by \begin{equation}p_{\pm}=\frac{1}{2}(1\pm\sqrt{1-4\alpha^2\beta^2(1-|\langle\phi_1|\phi_0\rangle|^2)})\, .\label{ppm2}\end{equation} We then obtain \begin{eqnarray}E_2(S,T)&=&4p_+p_-=4\alpha^2\beta^2(1-|\langle\phi_1|\phi_0\rangle|^2)\nonumber\\ &=&4(\alpha^2\beta^2-\gamma^2),\ \ 
\gamma=\frac{1}{N}|\sum_{t}\beta_t^*\alpha_t|\label{E22}\,, \end{eqnarray} a result which also follows directly from Eq.\ (\ref{E2}). Let us consider, for instance, the states \begin{equation} |\psi_t\rangle=\cos(\frac{\phi t}{N-1})|0\rangle+\sin(\frac{\phi t}{N-1})|1\rangle\,,\end{equation} such that $S$ evolves from $|\psi_0\rangle=|0\rangle$ to \[|\psi_f\rangle=\cos\phi|0\rangle+\sin\phi|1\rangle\,,\] in $N-1$ steps through intermediate equally spaced states contained within the same plane in the Bloch sphere of $S$. The $S-T$ entanglement of this $N$-time evolution can be evaluated exactly with Eqs.\ (\ref{al})--(\ref{E22}), which yield \begin{equation} E_2(S,T_N)=1-\frac{\sin^2\left(\frac{N\phi}{N-1}\right)}{N^2\sin^2\left(\frac{\phi}{N-1}\right)} \,.\label{E2m} \end{equation} For $N=2$ (single step) we recover Eq.\ (\ref{SF}) ($E_2(S,T_2)=1-\cos^2\phi=1-|\langle\psi_0|\psi_f\rangle|^2$). If $\phi\in[0,\pi/2]$, $E_2(S,T_N)$ is a {\it decreasing} function of $N$ (and an increasing function of $\phi$), but rapidly saturates, approaching {\it a finite limit for} $N\rightarrow\infty$, namely, \begin{equation} E_2(S,T_\infty)=1-\frac{\sin^2\phi}{\phi^2} \,.\label{E3m}\end{equation} Therefore, system-time entanglement decreases as the number of steps through intermediate states between $|\psi_0\rangle$ and $|\psi_f\rangle$ is increased, reflecting the lower average distinguishability between the evolved states, but remains {\it finite} for $N\rightarrow\infty$. In this limit it is still an increasing function of $\phi$ for $\phi\in[0,\pi/2]$, reaching $1-4/\pi^2\approx 0.59$ for $\phi=\pi/2$, i.e., when the system evolves to an orthogonal state ($|\psi_f\rangle=|1\rangle$), and reducing to $\approx \phi^2/3$ for $\phi\rightarrow 0$. Hence, as compared with a single step evolution ($N=2$), the ratio $E_2(S,T_\infty)/E_2(S,T_2)$ increases from $1/3$ for $\phi\rightarrow 0$ to $\approx 0.59$ for $\phi\rightarrow \pi/2$. 
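A quick numerical check of Eq.\ (\ref{E2m}) against a direct Schmidt-coefficient computation (our illustration; the function names are arbitrary):

```python
import numpy as np

def E2_direct(phi, N):
    """E_2(S,T_N) from the history state of the qubit path
    |psi_t> = cos(phi t/(N-1))|0> + sin(phi t/(N-1))|1>."""
    th = phi * np.arange(N) / (N - 1)
    Psi = np.stack([np.cos(th), np.sin(th)]) / np.sqrt(N)  # 2 x N coefficients
    p = np.linalg.svd(Psi, compute_uv=False) ** 2
    return 2 * np.sum(p * (1 - p))

def E2_closed(phi, N):
    return 1 - np.sin(N * phi / (N - 1)) ** 2 / (N ** 2 * np.sin(phi / (N - 1)) ** 2)

phi = np.pi / 2
for N in (2, 3, 10, 100):
    assert abs(E2_direct(phi, N) - E2_closed(phi, N)) < 1e-10

# N = 2 reproduces the single-step result 1 - |<psi_0|psi_f>|^2
assert abs(E2_closed(phi, 2) - (1 - np.cos(phi) ** 2)) < 1e-12
# large-N behaviour approaches the finite limit 1 - sin^2(phi)/phi^2
assert abs(E2_closed(phi, 10 ** 6) - (1 - (np.sin(phi) / phi) ** 2)) < 1e-4
```

At $\phi=\pi/2$ the large-$N$ value approaches $1-4/\pi^2\approx 0.59$, as stated above.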
If $\phi$ is increased beyond $\pi/2$, the coefficients $\alpha_t$, $\beta_t$ cease to be all positive and entanglement can increase beyond $\approx 0.59$ due to the decreased overlap $\gamma$, reflecting higher average distinguishability between evolved states. Entanglement $E_2(S,T_{\infty})$ in fact reaches $1$ at $\phi=\pi$ (and also $k\pi$, $k\geq 1$ integer), i.e., when the final state is proportional to the initial state after having covered the whole circle in the Bloch sphere, since for these values the time states $|\phi_0\rangle$ and $|\phi_1\rangle$ become orthogonal, with equal weights. Note also that for $\phi>\pi/2$, $E_2(S,T_N)$ is not necessarily a decreasing function of $N$, nor an increasing function of $\phi$, exhibiting oscillations: $E_2(S,T_N)=1$ for $\phi=k\pi(N-1)/N$, $k\neq lN$, and $E_2(S,T_N)\rightarrow 0$ for $\phi\rightarrow l\pi(N-1)$, $l$ integer. \section{Conclusions} We proposed a parallel-in-time discrete model of quantum evolution based on a finite dimensional clock entangled with the system. The ensuing history state satisfies a discrete Wheeler-DeWitt-like equation and can be generated through a simple circuit, which for a constant evolution operator can be efficiently implemented with just $O(n)$ qubits and control gates for $2^n$ time intervals. We then showed that the system-clock entanglement $E(S,T)$ is a measure of the actual distinguishable evolution undergone by one of the systems relative to the other. A natural interpretation of the Schmidt decomposition in terms of permanence in distinguishable evolved states is also obtained. For a constant Hamiltonian leading to a cyclic evolution, this entanglement is a measure of the energy spread of the initial state and satisfies an entropic uncertainty inequality with a conjugated entropy which measures the time spread. Such a Hamiltonian was rigorously shown to provide the {\it maximum} entanglement $E(S,T)$ compatible with a given distribution over Hamiltonian eigenstates.
For other Hamiltonians, $E(S,T)$ (and also any general entanglement entropy $E_f(S,T)$) is strictly bounded by the corresponding entropy of this distribution. We have also considered the evolution of mixed states. Although in this case the evaluation and interpretation of system-clock entanglement and correlations become more involved, in the simple yet fundamental case of a qubit clock coupled with a qubit subsystem, such entanglement was seen to be directly determined by the {\it fidelity} between the initial and final states of the qubit. A direct relation between this entanglement and the energy fluctuation was also derived for the pure case. Finally, we have also shown that $E(S,T)$ does remain finite and non-zero in the continuous limit, i.e., when the system evolves from an initial to a final state through an arbitrarily large number of closely lying equally spaced intermediate states. The present work opens the way to various further developments, starting from the definition of a proper time basis according to the Schmidt decomposition. It could also be possible in principle to incorporate other effects, such as interactions between clocks \cite{CR.15}, or to explore possibilities of an emergent space-time or a qubit model for quantum time crystals \cite{Wi.12}. At the very least, it provides a change of perspective, allowing one to identify a qubit clock as a fundamental ``building block'' of a discrete-time based quantum evolution. \acknowledgments The authors acknowledge support from CIC (RR) and CONICET (AB,NG,MC) of Argentina. \end{document}
\begin{document} \title[Polynomial Berezin transform] {Berezin transform in polynomial Bergman spaces} \author[Ameur] {Yacin Ameur} \address{Yacin Ameur\\ Department of Mathematics\\ Uppsala University\\ \\ Box 480\\ 751 06 Uppsala\\ Sweden} \email{yacin.ameur@gmail.com} \author[Hedenmalm] {H\aa{}kan Hedenmalm} \address{H{\aa}kan Hedenmalm\\ Department of Mathematics\\ The Royal Institute of Technology\\ S -- 100 44 Stockholm\\ SWEDEN} \email{haakanh@math.kth.se} \thanks{Research supported by the G\"oran Gustafsson Foundation. The third author is supported by N.S.F. Grant No. 0201893.} \author[Makarov] {Nikolai Makarov} \address{Nikolai Makarov\\ Department of Mathematics\\ California Institute of Technology\\ Pasadena\\ CA 91125\\ USA} \email{makarov@caltech.edu} \begin{abstract} We consider fairly general weight functions $Q:{\mathbb C}\to {\mathbb R}$, and let $K_{m,n}$ denote the reproducing kernel for the space ${H}_{m,n}$ of analytic polynomials $u$ of degree at most $n-1$ of finite norm $\|u\|_{mQ}^2=\int_{\mathbb C}\babs{u(z)}^2\mathrm e^{-mQ(z)}{{\mathrm d} A}(z),$ ${{\mathrm d} A}$ denoting suitably normalized area measure in ${\mathbb C}$. For a continuous bounded function $f$ on ${\mathbb C}$, we consider its (polynomial) Berezin transform \begin{equation*}\mathfrak B_{m,n}f(z)=\int_{\mathbb C} f(\zeta){\mathrm d} B^{\langle z\rangle}_{m,n}(\zeta) \qquad\text{where}\qquad {\mathrm d} B^{\langle z\rangle}_{m,n}(\zeta)=\frac {\babs{K_{m,n}(z,\zeta)}^2} {K_{m,n}(z,z)}\mathrm e^{-mQ(\zeta)}{{\mathrm d} A}(\zeta). \end{equation*} Let ${X}=\{\Delta Q>0\}.$ For a parameter $\tau>0$ we prove that there exists a compact subset ${\mathcal S}_\tau$ of ${\mathbb C}$ such that \begin{equation}\label{berconv}\mathfrak B_{m,n}f(z)\to f(z)\qquad \text{as} \quad m\to\infty\quad \text{and}\quad n- m\tau\to 0,\end{equation} for all continuous bounded $f$ if $z$ is in the interior of ${\mathcal S}_\tau\cap {X}$. 
Equivalently, the measures $B_{m,n}^{\langle z\rangle}$ converge to the Dirac measure at $z$. The set ${\mathcal S}_\tau$ is the coincidence set for an associated obstacle problem. We also prove that the convergence in \eqref{berconv} is \textit{Gaussian} when $z$ is in the interior of ${\mathcal S}_\tau\cap{X}$, in the sense that with $F(\zeta)=f\left ( \sqrt{m\Delta Q(z)}(\zeta-z)\right )$, we have \begin{equation*} \mathfrak B_{m,n}F(z)\to \int_{\mathbb C} f(\zeta){\mathrm d} P(\zeta), \end{equation*} ${\mathrm d} P(\zeta)=\mathrm e^{-\babs{\zeta}^2}{\mathrm d} A(\zeta)$ denoting the standard Gaussian. In the ``model case'' $Q(z)=\babs{z}^2$, ${\mathcal S}_\tau$ is the closed disk with centre $0$ and radius $\sqrt{\tau}$. We prove that if $z$ is fixed with $\babs{z}>\sqrt{\tau}$, the corresponding measures $B_{m,n}^{\langle z\rangle}$ converge to harmonic measure for $z$ relative to the domain ${\mathbb C}^*\setminus{\mathcal S}_\tau$, ${\mathbb C}^*$ denoting the extended plane. Our auxiliary results include $L^2$ estimates for the ${\overline{\partial}}$-equation ${\overline{\partial}} u=f$ when $f$ is a suitable test function and the solution $u$ is restricted by a growth constraint near $\infty$. Our results have applications e.g. to the study of weighted distributions of eigenvalues of random normal matrices. In the companion paper \cite{AHM} we consider such applications, e.g. a proof of Gaussian field convergence for fluctuations of linear statistics of eigenvalues of random normal matrices from the ensemble associated with $Q$.
\end{abstract} \maketitle \addtolength{\textheight}{2.2cm} \section{Introduction} \label{intro} \subsection{General introduction to Berezin quantization.} \footnote{In this version we have but slightly modified the formulation of Theorem 2.8 and made some minor corrections.} In a version of quantum theory, a Bargmann-Fock type space of polynomials plays the role of the quantized system, while the corresponding weighted $L^2$ space is the classical analogue. It is therefore natural to study the asymptotics of the quantized system as we approach the semiclassical limit. A particularly useful object is the {\em reproducing kernel} of the Bargmann-Fock type space. To make matters more concrete, let $\mu$ be a finite positive Borel measure on ${\mathbb C}$, and $L^2({\mathbb C};\mu)$ the usual $L^2$ space with inner product $$\big\langle f,g\big\rangle_{L^2({\mathbb C};\mu)}=\int_{\mathbb C} f(z)\overline{g(z)} {\mathrm d}\mu(z).$$ The subspace of $L^2({\mathbb C};\mu)$ of entire functions is the Bergman space $A^2({\mathbb C};\mu)$. We assume that the support of $\mu$ has infinitely many points so that any given polynomial corresponds to a unique element of $L^2({\mathbb C};\mu)$, and write $$J_\mu=\sup\big\{j\in{\mathbb Z};\, \int_{\mathbb C}\babs{z}^{2(j-1)}{\mathrm d}\mu(z)<\infty\big\}.$$ Since $\mu$ is finite, we are ensured that $1\le J_\mu\le \infty$. Let ${\mathcal P}_n$ be the set of analytic polynomials of degree at most $n-1$, and write \begin{equation*}A^2_{\mu,n}=L^2({\mathbb C};\mu)\cap {\mathcal P}_n\subset L^2({\mathbb C};\mu).\end{equation*} Putting $n'=\min\{n,J_\mu\}$, it is thus seen that $A^2_{\mu,n}$ equals ${\mathcal P}_{n'}$ in the sense of sets, with the norm inherited from $L^2({\mathbb C};\mu)$. Hence $A^2_{\mu,n}=A^2_{\mu,n'}$ for all $n$, and the reader who so desires can without loss of generality assume that $n=n'$ in the following. Let $e_1,\ldots, e_{n'}$ be an orthonormal basis for $A^2_{\mu,n}$. 
The reproducing kernel $K_{\mu,n}$ for the space $A^{2}_{\mu,n}$ is the function \begin{equation*} K_{\mu,n}(z,w)=\sum_{j=1}^{n'}e_j(z)\overline{e_j(w)}. \end{equation*} Then $K_{\mu,n}$ is independent of the choice of an orthonormal basis for $A^2_{\mu,n}$, and it is characterized by the properties that $z\mapsto K_{\mu,n}(z,z_0)$ is in $A^2_{\mu,n}$ and \begin{equation}\label{reprod}u(z_0)= \big\langle u,K_{\mu,n}(\cdot,z_0)\big\rangle_{L^2({\mathbb C};\mu)} =\int_{\mathbb C} u(z) \overline{K_{\mu,n}(z,z_0)} {\mathrm d} \mu(z), \qquad u\in A^2_{\mu,n},\quad z_0\in{\mathbb C}.\end{equation} For a given complex number $z_0$ we now consider the measure \begin{equation}\label{bm}{\mathrm d} B^{\langle z_0\rangle}_{\mu,n}(z)=\frac {\babs{K_{\mu,n}(z,z_0)}^2} {K_{\mu,n}(z_0,z_0)}{\mathrm d}\mu(z),\end{equation} which we may call the \textit{Berezin measure} associated with $\mu$, $n$ and $z_0$. The reproducing property \eqref{reprod} applied to $u=K_{\mu,n}(\cdot,z_0)$ implies that $B_{\mu,n}^{\langle z_0\rangle}$ is a probability measure. In classical physics, $B^{\langle z_0\rangle}_{\mu,n}$ corresponds to a point mass at $z_0$, so our interest focuses on how closely this measure approximates the point mass. \subsection{Weights.} By a \textit{weight} we mean a measurable function $\phi:{\mathbb C}\to {\mathbb R}$ such that the measure \begin{equation*}{\mathrm d}\mu_\phi(z)=\mathrm e^{-\phi(z)}{\mathrm d} A(z),\end{equation*} is a finite measure on ${\mathbb C}$, where ${\mathrm d} A(z)={\mathrm d} x{\mathrm d} y/\pi$ with $z=x+{\mathrm i} y$. 
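The constructions above are straightforward to reproduce numerically. In the sketch below (our illustration; not part of the argument) the measure $\mathrm e^{-|z|^2}{\mathrm d} A(z)$ is discretized on a grid, the monomials are orthonormalized by a QR factorization (Gram--Schmidt), and both the reproducing property \eqref{reprod} and the fact that the Berezin measure \eqref{bm} is a probability measure are verified:

```python
import numpy as np

# discretize d(mu)(z) = e^{-|z|^2} dA(z), with dA = dx dy / pi, on a grid
L, h = 6.0, 0.05
x = np.arange(-L, L, h)
X, Y = np.meshgrid(x, x)
z = (X + 1j * Y).ravel()
w = np.exp(-np.abs(z) ** 2) * h * h / np.pi     # quadrature weights

n = 6
# orthonormalize the monomials 1, z, ..., z^{n-1} w.r.t. mu (QR = Gram-Schmidt)
V = np.stack([z ** j for j in range(n)], axis=1) * np.sqrt(w)[:, None]
Q_, R = np.linalg.qr(V)
E = Q_ / np.sqrt(w)[:, None]                    # e_j(z) sampled on the grid

z0_idx = np.argmin(np.abs(z - (0.5 + 0.3j)))    # grid point playing the role of z0
K0 = E @ E[z0_idx].conj()                       # K(z, z0) on the grid

# Berezin measure dB(z) = |K(z,z0)|^2 / K(z0,z0) dmu(z) is a probability measure
B = np.abs(K0) ** 2 / K0[z0_idx].real * w
assert abs(B.sum() - 1) < 1e-3

# reproducing property: <u, K(., z0)> = u(z0), here for u(z) = z^2
u = z ** 2
assert abs(np.sum(u * K0.conj() * w) - z[z0_idx] ** 2) < 1e-3
```

Both identities hold exactly for the discretized measure, since $K_{\mu,n}$ is then the exact reproducing kernel of the corresponding finite-dimensional space.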
We write $L^2_\phi$ for the space $L^2({\mathbb C};\mu_\phi)$ and $A^2_\phi$ for $A^2({\mathbb C};\mu_\phi)$ and the norm of an element $u\in L^2_\phi$ will be denoted by \begin{equation*}\|u\|_\phi^2=\int_{\mathbb C}\babs{u}^2\mathrm e^{-\phi}{{\mathrm d} A}.\end{equation*} For a positive integer $n$, we frequently write \begin{equation}\label{ospaces}L^2_{\phi,n}=\big\{u\in L^2_\phi;\, u(z)={\mathcal O}\left (\babs{z}^{n-1}\right )\quad \text{when}\quad z\to\infty\big\}\end{equation} and \begin{equation}\label{os2}A^2_{\phi,n}=L^2_{\phi,n}\cap A^2_\phi=L^2_{\phi,n}\cap{\mathcal P}_n.\end{equation} Observe that $L^2_{\phi,n}$ usually is non-closed in $L^2_\phi$, whereas the finite-dimensional $A^2_{\phi,n}$ is closed in $L^2_\phi$. \subsection{The weights considered.} \label{weigh} Let $Q:{\mathbb C}\to {\mathbb R}$ be a given measurable function, which satisfies a growth condition of the form \begin{equation}\label{gro}Q(z)\ge \rho\log\babs{z}^2, \qquad |z|\ge C, \end{equation} for some positive numbers $\rho$ and $C$. For a positive number $m$, we now consider weights $\phi_m$ of the form $\phi_m=mQ$. By abuse of notation, we will in this context sometimes refer to $Q$ as the weight. To simplify notation, we set \begin{equation*}{H}_{m,n}=A^2_{mQ,n}=L^2_{mQ}\cap{\mathcal P}_n\end{equation*} and denote by $K_{m,n}$ the reproducing kernel for ${H}_{m,n}$. For a given $z_0$, the corresponding $B_{m,n}^{\langle z_0\rangle}$ is defined accordingly, cf. \eqref{bm}. We think of $Q$ as being fixed while the parameters $m$ and $n$ vary, and also fix a number $\tau$ satisfying $0<\tau<\rho$. To avoid bulky notation, it is customary to reduce the number of parameters to one, by regarding $n=n(m)$ as a function of $m>0$. We will adopt this convention and study the behaviour of the measures $B^{\langle z_0\rangle}_{m,n}$ when $m\to\infty$ and $n-m\tau\to 0$. 
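In the model case $Q(z)=\babs{z}^2$ the monomials are orthogonal, with $\|z^j\|_{mQ}^2=j!/m^{j+1}$, so that $K_{m,n}(z,w)=m\sum_{j=0}^{n-1}(mz\bar w)^j/j!$ is a truncated exponential. The following sketch (our numerical illustration; the parameters are arbitrary) uses this closed form to observe the concentration of $B_{m,n}^{\langle z_0\rangle}$ at an interior point $z_0\in{\mathcal S}_\tau^\circ$ as $m$ grows with $n-m\tau\to 0$:

```python
import numpy as np

def berezin_density(z, z0, m, n):
    """Berezin kernel for the model weight Q(z) = |z|^2, where
    K_{m,n}(z, w) = m * sum_{j<n} (m z conj(w))^j / j!."""
    def K(a, b):
        t = m * a * np.conj(b)
        terms = np.ones(np.shape(t), dtype=complex)
        s = terms.copy()
        for j in range(1, n):          # accumulate the truncated exponential
            terms = terms * t / j
            s = s + terms
        return m * s
    return np.abs(K(z, z0)) ** 2 / np.real(K(z0, z0)) * np.exp(-m * np.abs(z) ** 2)

tau = 1.0
z0 = 0.4 + 0.2j                        # inside S_tau = closed disk of radius sqrt(tau)
L, h = 3.0, 0.01
x = np.arange(-L, L, h)
X, Y = np.meshgrid(x, x)
z = X + 1j * Y

for m in (10, 40):
    n = int(m * tau)                   # the regime n - m*tau -> 0
    B = berezin_density(z, z0, m, n) * h * h / np.pi   # dA = dx dy / pi
    assert abs(B.sum() - 1) < 1e-2     # B is a probability measure
    # the mass concentrates in a disk around z0 shrinking like 1/sqrt(m)
    near = np.abs(z - z0) < 4 / np.sqrt(m)
    assert B[near].sum() > 0.95
```

For interior points the density is close to the Gaussian $m\mathrm e^{-m|z-z_0|^2}$, in line with the Gaussian convergence described in the abstract.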
Adding a real constant to $Q$ means that the inner product in ${H}_{m,n}$ is only changed by a positive multiplicative constant, and $K_{m,n}$ gets multiplied by the inverse of that number. Hence the corresponding Berezin measures are unchanged, and we can for example w.l.o.g. assume that $Q\ge 1$ on ${\mathbb C}$ when this is convenient. \begin{rem} Replacing $Q$ by $\tau^{-1}Q$ one can assume that $\tau=1$. The most important case (e.g. in random matrix theory) occurs when $\tau=1$ and $m=n$. However, we shall take on the slightly more general approach (with three parameters $m$, $n$ and $\tau$, instead of just $n$) in this note. \end{rem} \subsection{A word on notation.} For real $x$, we write $]x[$ for the largest integer which is strictly smaller than $x$. We frequently write $${\partial}_z=\frac{{\partial}}{{\partial} z}=\frac1{2}\bigg(\frac{{\partial}}{{\partial} x}-{\mathrm i}\frac{{\partial}}{{\partial} y}\bigg),\qquad {\overline{\partial}}_z=\frac{{\partial}}{{\partial} \bar z}=\frac1{2}\bigg(\frac{{\partial}}{{\partial} x}+{\mathrm i}\frac{{\partial}}{{\partial} y}\bigg),\qquad z=x+{\mathrm i} y,$$ and use $\Delta=\Delta_z$ to denote the normalized Laplacian $$\Delta_z={\partial}_z{\overline{\partial}}_z=\frac{1}{4}\bigg( \frac{{\partial}^2}{{\partial} x^2}+\frac{{\partial}^2}{{\partial} y^2}\bigg).$$ We write ${\mathbb D}(z_0;r)$ for the open disk $\{z\in{\mathbb C};\,|z-z_0|<r\}$, and we simplify the notation to ${\mathbb D}$ when $z_0=0$ and $r=1$. The boundary of ${\mathbb D}(z_0;r)$ is denoted ${\mathbb T}(z_0;r)$. When $S$ is a subset of ${\mathbb C}$, we write $S^\circ$ for the interior of $S$, and $\bar{S}$ for its closure; the support of a continuous function $f$ is denoted by $\operatorname{supp} f$. The symbol $A\subset B$ means that $A$ is a subset of $B$, while $A\Subset B$ means that $A$ is a compact subset of $B$. We write $\operatorname{dist\,}(A,B)$ for the Euclidean distance between $A$ and $B$.
The symbol \textit{a.e.} is short for ``${\mathrm d} A$-almost everywhere'', and \textit{p.m.} is short for ``probability measure''. Finally, we use the shorthand $L^2$ to denote the unweighted space $L^2_0=L^2({\mathbb C};{{\mathrm d} A})$. \subsection{Random normal matrices and the Coulomb gas.} This paper is a slight reworking of a preprint from arxiv.org which was written in 2007 and early 2008. During the work, it became convenient to collect, in a separate paper, some estimates needed for the paper \cite{AHM}, which was written simultaneously. More precisely, we wanted to make clear a variant of some results on asymptotic expansions of (polynomial) Bergman kernels which were known in a several variables context (\cite{BBS}, \cite{B}), as well as some uniform estimates in the off-diagonal case. The relevant results for our applications in \cite{AHM} are primarily theorems \ref{th3} and \ref{flock}. A discussion containing related estimates which hold near the boundary will appear in our later publication \cite{AHM2}. \section{Main results} \label{sec-main} \subsection{Berezin quantization.} Fix a non-negative weight $Q$ and two positive numbers $\tau$ and $\rho$, $\tau<\rho$, such that the growth condition \eqref{gro} is satisfied, and form the measures \begin{equation*}{\mathrm d} B^{\langle z_0\rangle}_{m,n}= \berd^{\langle z_0\rangle}_{m,n}{{\mathrm d} A}\qquad \text{where}\qquad \berd^{\langle z_0\rangle}_{m,n}(z)=\frac {\babs{K_{m,n}(z,z_0)}^2} {K_{m,n}(z_0,z_0)}\mathrm e^{-mQ(z)}.\end{equation*} We shall refer to the function $\berd_{m,n}^{\langle z_0\rangle}$ as the \textit{Berezin kernel} associated with $m$, $n$ and $z_0$. Let us now assume that $Q\in {\mathcal C}^2({\mathbb C})$. In the first instance, we ask for which $z_0$ we have convergence \begin{equation*}B^{\langle z_0\rangle}_{m,n}\to \delta_{z_0}\qquad \text{as}\quad m\to\infty \quad \text{and}\quad n-m\tau\to0,\end{equation*} in the sense of measures.
In terms of the \textit{Berezin transform} \begin{equation*}\mathfrak B_{m,n} f(z_0)=\int_{\mathbb C} f(z){\mathrm d} B^{\langle z_0\rangle}_{m,n}(z), \end{equation*} we are asking whether \begin{equation}\label{q2} \mathfrak B_{m,n} f(z_0)\to f(z_0) \qquad \text{for all}\quad f\in{\mathcal C}_b({\mathbb C})\quad \text{as}\quad m\to \infty \quad \text{and} \quad n-m\tau\to0. \end{equation} Let ${X}$ be the set of points where $Q$ is strictly subharmonic, $${X}=\{\Delta Q>0\}.$$ We shall find that there exists a compact set ${\mathcal S}_\tau$ such that \eqref{q2} holds for all $z_0$ in the interior of ${\mathcal S}_\tau\cap {X}$, while \eqref{q2} fails whenever $z_0\not\in {\mathcal S}_\tau$. To define ${\mathcal S}_\tau$, we first need to introduce some notions from weighted potential theory, cf. \cite{ST} and \cite{HM}. It is here advantageous to slightly relax the regularity assumption on $Q$ and assume that $Q$ is in the class ${\mathcal C}^{1,1}({\mathbb C})$ consisting of all ${\mathcal C}^1$-smooth functions with Lipschitzian gradients. We will have frequent use of the simple fact that the distributional Laplacian $\Delta F$ of a function $F\in {\mathcal C}^{1,1}({\mathbb C})$ makes sense almost everywhere and is of class $L^\infty_{{\rm loc}}({\mathbb C})$. Let ${\rm SH}_\tau$ denote the set of all subharmonic functions $f:{\mathbb C}\to {\mathbb R}$ which satisfy a growth bound of the form \begin{equation*}f(z)\le\tau\log\babs{z}^2+{\mathcal O}(1)\quad\text{as} \quad z\to\infty.\end{equation*} The \textit{equilibrium potential} corresponding to $Q$ and the parameter $\tau$ is defined by \begin{equation*}\widehat{Q}_\tau(z)=\sup\big\{f(z);\,\,f\in{\rm SH}_\tau \quad\text{and}\quad f\le Q\quad\text{on}\quad {\mathbb C}\big\}.\end{equation*} Clearly, $\widehat{Q}_\tau$ is subharmonic on ${\mathbb C}$. We now define ${\mathcal S}_\tau$ as the coincidence set \begin{equation}\label{coincidence}{\mathcal S}_\tau=\{Q=\widehat{Q}_\tau\}. 
\end{equation} Evidently, ${\mathcal S}_\tau$ increases with $\tau$, and $Q$ is subharmonic on ${\mathcal S}_\tau^\circ$. We shall need the following lemma. \begin{lem} \label{drspock} The function $\widehat{Q}_\tau$ is of class ${\mathcal C}^{1,1}({\mathbb C})$, and ${\mathcal S}_\tau$ is compact. Furthermore, $\widehat{Q}_\tau$ is harmonic in ${\mathbb C}\setminus {\mathcal S}_\tau$ and there is a number $C$ such that $\widehat{Q}_\tau(z)\le \tau\log_+\babs{z}^2+C$ for all $z\in{\mathbb C}.$ \end{lem} \begin{proof} This is [\cite{HM}, Prop. 4.2, p. 10] and [\cite{ST}, Th. I.4.7, p. 54].\end{proof} \noindent It is easy to construct subharmonic minorants of critical growth; for $C$ large enough, the function $z\mapsto \tau\log_+\left ( \babs{z}^2/C\right )$ is a minorant of $Q$ of class ${\rm SH}_\tau$. It follows that \begin{equation}\label{qtau}\widehat{Q}_\tau(z)=\tau\log\babs{z}^2+ {\mathcal O}(1)\quad\text{as}\quad z\to \infty.\end{equation} \begin{prop}\label{prop1} Let $Q\in {\mathcal C}^{1,1}({\mathbb C})$ and let $z_0\in{\mathbb C}$ be an arbitrary point. Then $B_{m,n}^{\langle z_0\rangle}(\Lambda) \to1$ as $m\to\infty$ and $n\le m\tau+ 1$ for every open neighbourhood $\Lambda$ of ${\mathcal S}_\tau$. \end{prop} \begin{proof} See Sect. \ref{sec3}.\end{proof} \noindent It follows from Prop. \ref{prop1} that $B_{m,n}^{\langle z_0\rangle}({\mathbb C}\setminus\Lambda)\to 0$. Hence if $z_0\not\in {\mathcal S}_\tau$, then \eqref{q2} fails in general, and \begin{equation*}\mathfrak B_{m,n}f(z_0)\to 0\qquad \text{as}\quad m\to\infty\quad \text{and} \quad n\le m\tau+1,\end{equation*} for all $f\in{\mathcal C}_b({\mathbb C})$ such that $\operatorname{supp} f\cap{\mathcal S}_\tau=\emptyset$. The situation is entirely different when the point $z_0$ is in the interior of ${\mathcal S}_\tau\cap{X}$. \begin{thm}\label{th1} Assume that $Q\in{\mathcal C}^2({\mathbb C})$ and let $z_0\in {\mathcal S}_\tau^\circ\cap {X}$.
Then, for every real number $M\ge 0$, the measures $B^{\langle z_0\rangle}_{m,n}$ converge weakly to $\delta_{z_0}$ as $m\to\infty$ and $n\ge m\tau-M$; that is, $\mathfrak B_{m,n}f(z_0)\to f(z_0)$ for every $f\in{\mathcal C}_b({\mathbb C})$. \end{thm} \begin{proof} See Sect. \ref{p2}. See also Rem. \ref{smick}. \end{proof} \subsection{A more elaborate estimate for the Berezin kernel.} Th. \ref{th1} suggests that if $z_0$ is a given point in the interior of ${\mathcal S}_\tau\cap {X}$, $m$ is large, and $n\ge m\tau-1$, then the density $\berd_{m,n}^{\langle z_0\rangle}(z)$ should attain its maximum for $z$ close to $z_0$. The following theorem implies that this is the case, and gives good control of $\berd_{m,n}^{\langle z_0\rangle}$ in the critical case, when $n-m\tau\to 0$. \begin{thm} \label{th1.5} Assume that $Q\in{\mathcal C}^2({\mathbb C})$. Let $K$ be a compact subset of ${\mathcal S}_\tau^\circ\cap{X}$ and fix a non-negative number $M$. Put \begin{equation*}d_K=\operatorname{dist\,}(K,{\mathbb C}\setminus({\mathcal S}_\tau\cap{X}))\qquad \text{and}\qquad a_K=\inf\{\Delta Q(z);\, \operatorname{dist\,}(z,K)\le d_K/2\}.\end{equation*} Then there exist positive numbers $C$ and $\epsilon$ such that \begin{equation*} \berd^{\langle z_0\rangle}_{m,n}(z)\le C m\mathrm e^{-\epsilon \sqrt{m}\min\{d_K,\babs{z-z_0}\}}\mathrm e^{-m(Q(z)-\widehat{Q}_\tau(z))},\qquad z_0\in K,\,\, z\in {\mathbb C},\,\, m\tau-M\le n\le m\tau+1. \end{equation*} Here $C$ depends only on $K$, $M$ and $\tau$, while $\epsilon$ depends only on $M$, $a_K$ and $\tau$. \end{thm} \begin{proof} See Sect. \ref{point}. \end{proof} A similar result was proved independently by Berman \cite{B3} after the completion of this note. \begin{rem} \label{smick} For fixed $M$ and $\tau$, the number $\epsilon$ can be chosen proportional to $a_K$. A related result with much more precise information on the dependence of $C$ and $\epsilon$ on the various parameters in question is given in Th. \ref{propn1} and the subsequent remark. We also remark that our proof for Th.
\ref{th1.5} is very different from that for Th. \ref{th1}. Thus Th. \ref{th1.5} gives an alternative method for obtaining Th. \ref{th1} in the critical case when $n-m\tau\to 0$. \end{rem} \subsection{Gaussian convergence.} Fix a point $z_0\in {X}$. It will be convenient to introduce the \textit{normalized} Berezin measure $\widehat{B}_{m,n}^{\langle z_0\rangle}$ by \begin{equation} {\mathrm d}\widehat B^{\langle z_0\rangle}_{m,n}=\widehat{\berd}_{m,n}^{\langle z_0\rangle}{{\mathrm d} A}\quad \text{where}\quad \widehat{\berd}_{m,n}^{\langle z_0\rangle}(z)=\frac 1 {m\Delta Q(z_0)} {\berd^{\langle z_0\rangle}_{m,n}\bigg(z_0+\frac{z}{\sqrt{m\Delta Q(z_0)}}\bigg)} .\end{equation} We denote the standard Gaussian p.m. on ${\mathbb C}$ by \begin{equation*} {\mathrm d} P(z)=\mathrm e^{-\babs{z}^2}{\mathrm d} A(z). \end{equation*} We have the following CLT, which gives much more precise information than Th. \ref{th1}. We will settle for stating it for ${\mathcal C}^\infty$ weights $Q$ which are \textit{real-analytic} in a neighbourhood of ${\mathcal S}_\tau$, but remark that, using well-known methods, our proof can be extended to cover the case of, say, ${\mathcal C}^\infty$-smooth weights. We will give further details on possible generalizations later on. \begin{thm}\label{th2} Assume that $Q$ is real-analytic in a neighbourhood of ${\mathcal S}_\tau$. Fix a compact subset $K\Subset {\mathcal S}_\tau^\circ\cap{X}$, a point $z_0\in K$, and a number $M\ge 0$. Then we have \begin{equation}\label{gc1} \int_{\mathbb C}\babs{\widehat{\berd}_{m,n}^{\langle z_0\rangle}(z) -\mathrm e^{-\babs{z}^2}}{{\mathrm d} A}(z)\to0\qquad\text{as}\quad m\to\infty\quad \text{and}\quad n\ge m\tau-M,\end{equation} with uniform convergence for $z_0\in K$. Equivalently, $\widehat B^{\langle z_0\rangle}_{m,n}\to P$ as $m\to\infty$ and $n\ge m\tau-M$. \end{thm} \begin{proof} See Sect. \ref{p2}. 
\end{proof} \subsection{The expansion formula.} Most of our results in this paper rely on a suitable approximation for $K_{m,n}(z,w)$ when $z,w$ are both close to some point $z_0\in{X}$, and $m$ and $n$ are large. We will here assume that $Q$ is \textit{real-analytic} in a neighbourhood of ${\mathcal S}_\tau$. An adequate and rather far-reaching approximation formula was recently stated by Berman \cite{B}, p. 9, depending on methods from \cite{BBS}; we will here only require a special case of his result. We will need to introduce some definitions. For subsets $S\subset{\mathbb C}$ we write $${\overline{\text{\rm diag}}}(S)=\big\{(z,\bar z)\in{\mathbb C}^2;\,z\in S\big\}.$$ That $Q$ is real-analytic in a neighbourhood of $S$ means that there exists a unique holomorphic function $\psi$ defined in a neighbourhood of ${\overline{\text{\rm diag}}}(S)$ in ${\mathbb C}^2$ such that \begin{equation*} \psi(z,\bar{z})=Q(z)\quad\text{for all}\quad z\in{\mathbb C}. \end{equation*} Explicitly, one has \begin{equation}\label{psi2}\psi(z+h,\overline{z+k})=\sum_{i,j=0}^\infty [{\partial}^i{\overline{\partial}}^j Q](z)\frac {h^i \bar{k}^j} {i!j!}\end{equation} when $h$ and $k$ are sufficiently close to zero. In particular, $[{\partial}_1^i{\partial}_2^j\psi](z,\bar{z})={\partial}^i{\overline{\partial}}^j Q(z)$ for all $z\in{\mathbb C}$. Here, ${\partial}_1$ and ${\partial}_2$ denote differentiation w.r.t. the first and second coordinates. \begin{defn} \label{def1} Let ${b}_0$ and ${b}_1$ denote the functions \begin{equation*}{b}_0={\partial}_1{\partial}_2\psi,\quad {b}_1=\frac12{\partial}_1{\partial}_2 \log\big[{\partial}_1{\partial}_2 \psi\big]= \frac{{\partial}_1^2{\partial}_2^2 \psi {\partial}_1{\partial}_2 \psi -{\partial}_1^2{\partial}_2 \psi{\partial}_1{\partial}_2^2 \psi} {2\big[{\partial}_1{\partial}_2 \psi\big]^2}.\end{equation*} The function ${b}_1$ is well-defined where there is no division by $0$, in particular in a neighbourhood of ${\overline{\text{\rm diag}}}({X})$.
Along the anti-diagonal, we have \begin{equation*} {b}_0(z,\bar{z})=\Delta Q(z)\quad \text{and}\quad {b}_1(z,\bar{z})=\frac 12\Delta\log\Delta Q(z),\quad z\in {X}. \end{equation*} We note for later use that ${b}_0$ and ${b}_1$ are connected via \begin{equation}\label{funrel}{b}_1(z,w)=\frac 1 2 \frac {\partial} {{\partial} w}\left ( \frac 1{{b}_0(z,w)}\frac {\partial} {{\partial} z}{b}_0(z,w)\right ).\end{equation} We define the \textit{first order approximating Bergman kernel} $K_m^1(z,w)$ by \begin{equation*} K_m^1(z,w)=\left ( m{b}_0(z,\bar{w})+{b}_1(z,\bar{w})\right ) \mathrm e^{m \psi(z,\bar{w})}.\end{equation*} \end{defn} We have the following theorem. \begin{thm}\label{th3} Assume that $Q$ is real-analytic in ${\mathbb C}$. Let $K$ be a compact subset of ${\mathcal S}_\tau^\circ\cap{X}$. Fix a point $z_0\in K$ and a number $M\ge 0$. Then there exist a number $m_0$, depending only on $M$ and $\tau$, and a positive number $\varepsilon$, depending only on $K$ and $M$, such that \begin{equation*} \babs{K_{m,n}(z,w)-K_m^1(z,w)}\mathrm e^{-m(Q(z)+Q(w))/2}\le C m^{-1}, \qquad z_0\in K,\quad z,w\in {\mathbb D}(z_0;\varepsilon), \end{equation*} for all $m\ge m_0$ and $n\ge m\tau-M$. Here $C$ is a constant depending only on $Q$. In particular, by restricting to the diagonal, we get \begin{equation}\label{ber}\babs{K_{m,n}(z,z)\mathrm e^{-mQ(z)}- \left ( m\Delta Q(z)+\frac 1 2\Delta\log\Delta Q(z)\right )}\le \frac{C}{m}, \qquad z\in K, \end{equation} when $m\ge m_0$ and $n\ge m\tau-M$. \end{thm} \begin{proof} See Sect. \ref{proof}. See also Remark \ref{bel} below. \end{proof} \begin{rem} \label{bel} We want to stress that Th. \ref{th3} has a long history; analogous expansions are well-known for weighted Bergman kernels, see, for instance, \cite{F}, \cite{dBMS}, \cite{J}, \cite{E}, \cite{BBS}, \cite{B}, and the references therein. Moreover, as we mentioned above, Th.
\ref{th3} is a slight modification of a more general result in several complex variables stated by Berman \cite{B}, Th. 3.8, which may be obtained by adapting the methods from \cite{BBS}. In our proof, we make frequent use of ideas and techniques developed in \cite{B}, \cite{B2}, \cite{BBS}, and in the book \cite{S}. \end{rem} \begin{rem} (Simple properties of $K_m^1$.) (1) Since ${b}_0$, ${b}_1$ and $\psi$ are real on the anti-diagonal, a suitable version of the reflection principle implies that $K_m^1$ is Hermitian, that is, $\overline{ K_m^1(z,w)}=K_m^1(w,z)$. (2) Using the Taylor series for $\psi$ at $(z,\bar{z})$ (see \eqref{psi2}) and ditto for $Q$ at $z$, we get \begin{equation}\label{bbs} 2\operatorname{Re} \psi(z,\bar{w})-Q(z)-Q(w)=-\Delta Q(w)\babs{w-z}^2+R(z,w),\end{equation} where $R(z,w)={\mathcal O}\big(\babs{w-z}^3\big)$ for $z$, $w$ in a sufficiently small neighbourhood of $z_0$, and the ${\mathcal O}$ is uniform for $z_0$ in a fixed compact subset of ${X}$. In case $\Delta Q(z_0)>0$, it follows that \begin{equation}\label{uniq}2\operatorname{Re} \psi(z,\bar{w})-Q(z)-Q(w)\le-\delta_0\babs{w-z}^2,\qquad z,w\in {\mathbb D}(z_0;2\varepsilon),\end{equation} with $\delta_0=\frac12\Delta Q(z_0)>0$, provided that the positive number $\varepsilon$ is chosen small enough. More generally, if we fix a compact subset $K\Subset {X}$, and put \begin{equation*} \delta_0=\frac 1 2\inf_{\zeta\in K}\,\{\Delta Q(\zeta)\},\end{equation*} we may find an $\varepsilon>0$ such that \eqref{uniq} holds for all $z_0\in K$. With a perhaps somewhat smaller $\varepsilon>0$, we may also ensure that the functions ${b}_0$ and ${b}_1$ are bounded and holomorphic in the set $\{(z,w);\, z,\bar{w}\in{\mathbb D}(z_0;2\varepsilon),\, z_0\in K\}$.
We infer that \begin{equation}\label{lead}\babs{K_m^1(z,w)}^2\mathrm e^{-m(Q(z)+Q(w))} \le Cm^2\mathrm e^{-m\delta_0\babs{z-w}^2},\qquad z,w\in {\mathbb D}(z_0;2\varepsilon), \quad z_0\in K, \end{equation} with a number $C$ depending only on $K$ and $\varepsilon$. \end{rem} \subsection{Extensions and possible generalizations.} We discuss possible extensions of Th. \ref{th3} (and also of Th. \ref{th2}). Again, these extensions (at least of Th. \ref{th3}) are essentially implied by a result [\cite{B}, Th. 3.8], stated by R. Berman. For a given positive integer $k$, one may define a \textit{$k$-th order approximating Bergman kernel} \begin{equation*}K_m^k(z,w)=\left ( m{b}_0(z,\bar w)+{b}_1(z,\bar w)+\ldots+m^{-k+1}{b}_k(z,\bar w)\right )\mathrm e^{m\psi(z,\bar w)},\end{equation*} for $z$ and $w$ in a neighbourhood of ${\overline{\text{\rm diag}}}({X})$ where ${b}_i$ are certain holomorphic functions defined in a neighbourhood of ${\overline{\text{\rm diag}}}({X})$. It can be shown that for $z_0\in{X}$ there exists $\varepsilon>0$ such that \begin{equation*}\babs{K_{m,n}(z,w)-K_m^k(z,w)}^2\mathrm e^{-m(Q(z)+Q(w))}\le Cm^{-2k},\end{equation*} whenever $z,w\in{\mathbb D}(z_0;\varepsilon)$ and $n\ge m\tau-1$. The coefficients ${b}_i$ can in principle be determined from a recursion formula involving partial differential equations of increasing order, compare with [Berman et al. \cite{BBS}, (2.15), p.9], where a closely related formula is given. However, the analysis required for calculating higher order coefficients ${b}_i$ for $i\ge 2$ seems to be quite involved, and the first order approximation seems to be sufficient for many practical purposes, cf. \cite{AHM}. We therefore prefer a more direct approach here. Th. \ref{th3} can also be generalized in another direction -- one may relax the assumption that $Q$ be real-analytic, and instead assume that e.g. $Q\in{\mathcal C}^\infty({\mathbb C})$. 
In this case, the functions ${b}_i$ and $\psi$ will be almost-holomorphic at the anti-diagonal (cf. e.g. \cite{dBMS} for a relevant discussion of such functions). The modifications needed for proving Th. \ref{th3} in this more general case are based on standard arguments; they are essentially as outlined in [\cite{BBS}, Subsect. 2.6 and p. 15]. We leave the details to the interested reader. However, the reader should note that, using this generalized version of Th. \ref{th3}, one may easily extend our proof of Th. \ref{th2} to the case of ${\mathcal C}^\infty$-smooth weights. \subsection{The Bargmann--Fock case and harmonic measure.} When $z_0\in{\mathbb C}\setminus({\mathcal S}_\tau^\circ\cap {X})$, Prop. \ref{prop1} yields that the measures $B^{\langle z_0\rangle}_{m,n}$ tend to concentrate on ${\mathcal S}_\tau$ as $m\to\infty$ and $n-m\tau\to 0$. However, our general results provide no further information regarding the asymptotic distribution. We now specialize to the Bargmann--Fock weight $Q(z)=\babs{z}^{2}$. We then obviously have ${X}={\mathbb C}$. It will be convenient to introduce the functions (truncated exponentials) \begin{equation*}E_k(z)=\sum_{j=0}^k\frac {z^j} {j!}.\end{equation*} It is then easy to check that $K_{m,n}(z,w)=mE_{n-1}(mz\bar{w}),$ ${\mathcal S}_\tau=\overline{{\mathbb D}}(0;\sqrt{\tau}),$ and $\widehat{Q}_\tau(z)=\tau+\tau\log\left (\babs{z}^2/\tau\right )$ when $\babs{z}^2\ge\tau.$ We infer that \begin{equation}\label{bfock}{\mathrm d} B^{\langle z_0\rangle}_{m,n}(z)= m\frac{\babs{E_{n-1}(mz\bar{z}_0)}^2} {E_{n-1}(m\babs{z_0}^2)} \mathrm e^{-m\babs{z}^2}{\mathrm d} A(z). \end{equation} Let ${\mathbb C}^*={\mathbb C}\cup\{\infty\}$ denote the extended plane. \begin{thm}\label{th5} Let $Q(z)=\babs{z}^2$ and $z_0\in{\mathbb C}\setminus {\mathcal S}_\tau$. Then, as $m\to\infty$ and $n/m\to\tau$, the measures $B^{\langle z_0\rangle}_{m,n}$ converge to harmonic measure at $z_0$ with respect to ${\mathbb C}^*\setminus{\mathcal S}_\tau$.
\end{thm} \begin{proof} See Sect. \ref{new}. \end{proof} \begin{rem} The above theorem is a special case of Theorem 8.7 in \cite{AHM}. \end{rem} \subsection{Weighted $L^2$ estimates for the ${\overline{\partial}}$-equation with a growth constraint.} Our analysis depends in an essential way on having good control of the norm-minimal solution $u_*$ in the space $L^2_{mQ,n}$ to the ${\overline{\partial}}$-equation ${\overline{\partial}} u=f$, where $f$ is a given suitable test-function such that $\operatorname{supp} f\subset {\mathcal S}_\tau\cap{X}$. We would not have a problem if $u_*$ were just required to be of class $L^2_{mQ}$ (and not $L^2_{mQ,n}$), for then an elementary version of the well-known $L^2$ estimates of H\"ormander applies, see e.g. \cite{H} or \cite{H2}. The requirement that $u_*$ be of restricted growth gives rise to some difficulties, but we shall see in Sect. \ref{l2estimates} (notably Th. \ref{strb}) that the classical estimates can be adapted to our situation. (Estimates very similar to ours were given independently by Berman in \cite{B3} after this note was completed.) \section{Preparatory lemmas} \label{sec3} \subsection{Preliminaries.} In this section, we prove Prop. \ref{prop1}. At the same time we will take the opportunity to display some elementary inequalities which will be used in later sections. For the results obtained here, it is enough to suppose that $Q$ is ${\mathcal C}^{1,1}$-smooth (and satisfies the growth assumption \eqref{gro}). Thus $\Delta Q$ makes sense almost everywhere and is of class $L^\infty_{\rm loc}({\mathbb C})$. \subsection{Subharmonicity estimates.} We start by giving two simple lemmas which are based on the sub-mean property of subharmonic functions.
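To fix ideas, we recall the property in question (a worked illustration, not needed for the proofs): if $F$ is subharmonic in a neighbourhood of the closed unit disk, then \begin{equation*} F(0)\le\int_{\mathbb D} F\,{\mathrm d} A, \end{equation*} since, with the normalization ${\mathrm d} A={\mathrm d} x\,{\mathrm d} y/\pi$ used here, the right hand side is the average of $F$ over ${\mathbb D}$. For instance, for $F(\zeta)=\babs{\zeta}^2$ we get \begin{equation*} F(0)=0\le \int_{\mathbb D}\babs{\zeta}^2\,{\mathrm d} A(\zeta)=\frac 1{\pi}\int_0^{2\pi}\!\!\int_0^1 r^3\,{\mathrm d} r\,{\mathrm d}\theta=\frac 12, \end{equation*} while for harmonic $F$ the area mean value property gives equality.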
\begin{lem} \label{berndt} Let $\phi$ be a weight of class ${\mathcal C}^{1,1}({\mathbb C})$ and put $A=\operatorname{\text{\rm ess\,sup}\,}\{\Delta\phi(\zeta);\,\zeta\in{\mathbb D}\}.$ Then if $u$ is bounded and holomorphic in ${\mathbb D}$, we have that $$|u(0)|^2\mathrm e^{-\phi(0)}\le\int_{\mathbb D}|u(\zeta)|^2\mathrm e^{-\phi(\zeta)}\mathrm e^{A|\zeta|^2}{{\mathrm d} A}(\zeta) \le\mathrm e^A\int_{\mathbb D}|u(\zeta)|^2\mathrm e^{-\phi(\zeta)}{{\mathrm d} A}(\zeta).$$ \end{lem} \begin{proof} Consider the function $F(\zeta)=|u(\zeta)|^2\mathrm e^{-\phi(\zeta)+A|\zeta|^2},$ which satisfies $$\Delta \log F=\Delta\log|u|^2-\Delta\phi+A\ge0\qquad \text{a.e.\, on}\quad {\mathbb D},$$ making $F$ logarithmically subharmonic. But then $F$ is subharmonic itself, and so $$F(0)\le\int_{\mathbb D} F(z){{\mathrm d} A}(z).$$ The assertion of the lemma is immediate from this. \end{proof} \begin{lem} \label{lemm2} Let $Q\in {\mathcal C}^{1,1}({\mathbb C})$, fix $\delta>0$ and $z\in{\mathbb C}$, and put $A=A_{z,\delta}=\operatorname{\text{\rm ess\,sup}\,}\{\Delta Q(\zeta);\, \zeta\in{\mathbb D}(z;\delta)\}.$ Also, let $u$ be holomorphic and bounded in ${\mathbb D}(z;\delta/\sqrt{m})$. Then \begin{equation*}\babs{u(z)}^2\mathrm e^{-mQ(z)}\le m\delta^{-2}{\mathrm e^{A\delta^2}} \int_{{\mathbb D}(z;\delta/\sqrt{m})}\babs{u(\zeta)}^2\mathrm e^{-mQ(\zeta)}{{\mathrm d} A}(\zeta). \end{equation*} \end{lem} \begin{proof} The assertion follows if we make the change of variables $\zeta=z+\delta\xi/\sqrt{m}$ where $\zeta\in {\mathbb D}(z;\delta/\sqrt{m})$ and $\xi\in{\mathbb D}$, and apply Lemma \ref{berndt} with the weight $\phi(\xi)=mQ(\zeta)$. \end{proof} \noindent We note the following consequence of the subharmonicity estimates. We will frequently need it in later sections. \begin{lem} \label{lemm3} Let $K$ be a compact subset of ${\mathbb C}$ and $\delta$ a given positive number.
We put \begin{equation*}K_\delta=\{z\in{\mathbb C};\, \operatorname{dist\,}(z,K)\le \delta\}\qquad \text{and} \qquad A=\operatorname{\text{\rm ess\,sup}\,}\{\Delta Q(z);\, z\in K_\delta\}.\end{equation*} Then, for all $m,n\ge 1$, \begin{equation*}\babs{K_{m,n}(z,w)}^{2}\mathrm e^{-mQ(z)}\le m\delta^{-2}\mathrm e^{\delta^2 A}\int_{{\mathbb D}(z;\delta/\sqrt{m})}\babs{K_{m,n}(\zeta,w)}^2\mathrm e^{-mQ(\zeta)}{{\mathrm d} A}(\zeta),\quad z\in K,\quad w\in {\mathbb C}.\end{equation*} \end{lem} \begin{proof} Apply Lemma \ref{lemm2} to $u(\zeta)=K_{m,n}(\zeta,w)$. \end{proof} \subsection{A weak maximum principle for weighted polynomials.} Maximum principles for weighted polynomials have a long history, see e.g. \cite{ST}, Chap. III. The following simple lemma will suffice for our present purposes; it is a consequence of \cite{ST}, Th. III.2.1. (Recall that ${\mathcal S}_\tau$ denotes the coincidence set $\{Q=\widehat{Q}_\tau\}$, see \eqref{coincidence}.) \begin{lem} \label{wmax} Suppose that a polynomial $u$ of degree at most $n-1$ satisfies $\babs{u(z)}^2\mathrm e^{-mQ(z)}\le 1$ on ${\mathcal S}_{\tau(m,n)}$, where $\tau(m,n)=(n-1)/m<\rho$. Then $\babs{u(z)}^2\mathrm e^{-m\widehat{Q}_{\tau(m,n)}(z)}\le 1$ on ${\mathbb C}$. \end{lem} \begin{proof} Put $q(z)=m^{-1}\log\babs{u(z)}^2$. The assumptions on $u$ imply that $q\in{\rm SH}_{\tau(m,n)}$ and that $q\le Q$ on ${\mathcal S}_{\tau(m,n)}$. Hence $q\le\widehat{Q}_{\tau(m,n)}$ on ${\mathbb C}$, as desired. \end{proof} \begin{lem} \label{lemm4} Let $u\in{H}_{m,n}$, and suppose that $n\le m\tau+1$. Then \begin{equation*}\babs{u(z)}^2\le m\mathrm e^A\|u\|_{mQ}^2\mathrm e^{m\widehat{Q}_\tau(z)}, \end{equation*} where $A$ denotes the essential supremum of $\Delta Q$ over the set $\{z\in{\mathbb C};\, \operatorname{dist\,}(z,{\mathcal S}_\tau)\le 1\}$. \end{lem} \begin{proof} The assumption that $n\le m\tau+1$ is equivalent to $\tau(m,n)\le \tau$, where $\tau(m,n)=(n-1)/m$.
Thus ${\mathcal S}_{\tau(m,n)}\subset{\mathcal S}_\tau$ and $\widehat{Q}_{\tau(m,n)}\le \widehat{Q}_\tau$. An application of Lemma \ref{lemm2} with $\delta=1$ now gives \begin{equation*}\babs{u(z)}^2\mathrm e^{-mQ(z)} \le m\mathrm e^A\int_{{\mathbb D}(z;1)}\babs{u(\zeta)}^2 \mathrm e^{-mQ(\zeta)}{{\mathrm d} A}(\zeta)\le m\mathrm e^A\int_{\mathbb C}\babs{u(\zeta)}^2\mathrm e^{-mQ(\zeta)} {{\mathrm d} A}(\zeta),\quad z\in{\mathcal S}_{\tau}. \end{equation*} As a consequence, the same estimate holds on ${\mathcal S}_{\tau(m,n)}$. We can thus apply Lemma \ref{wmax}. It yields that \begin{equation*}\babs{u(z)}^2\le m\mathrm e^A\|u\|_{mQ}^2\mathrm e^{m\widehat{Q}_{\tau(m,n)}(z)}, \quad z\in{\mathbb C}.\end{equation*} The desired assertion follows, since $\widehat{Q}_{\tau(m,n)}\le\widehat{Q}_\tau$. \end{proof} \subsection{The proof of Proposition \ref{prop1}} Fix two points $z$ and $z_0$ in ${\mathbb C}$. We apply Lemma \ref{lemm4} to the polynomial \begin{equation*}u(\zeta)=\frac {K_{m,n}(\zeta,z_0)}{\sqrt{K_{m,n}(z_0,z_0)}}, \end{equation*} which is of class ${H}_{m,n}$ and satisfies $\|u\|_{mQ}=1$. It yields that $\babs{u(z)}^2\le m\mathrm e^A\mathrm e^{m\widehat{Q}_\tau(z)}$, or \begin{equation}\label{cerd} \berd_{m,n}^{\langle z_0\rangle}(z)=\babs{u(z)}^2\mathrm e^{-mQ(z)} \le m\mathrm e^{A}\mathrm e^{m(\widehat{Q}_\tau(z)-Q(z))},\quad z\in{\mathbb C},\, n\le m\tau+1.\end{equation} Now let $\Lambda$ be an open neighbourhood of ${\mathcal S}_\tau$. Since $Q>\widehat{Q}_\tau$ on ${\mathbb C}\setminus {\mathcal S}_\tau$, the continuity of the functions involved coupled with the growth conditions \eqref{qtau} and \eqref{gro} yield that $Q-\widehat{Q}_\tau$ is bounded below by a positive number on ${\mathbb C}\setminus\Lambda$. It follows that \begin{equation*}B_{m,n}^{\langle z_0\rangle}({\mathbb C}\setminus\Lambda)= \int_{{\mathbb C}\setminus\Lambda}\berd^{\langle z_0\rangle}_{m,n}{\mathrm d} A\to 0\end{equation*} as $m\to\infty$ and $n\le m\tau+1$.
Since $B_{m,n}^{\langle z_0\rangle}$ is a p.m. on ${\mathbb C}$, the assertion of Prop. \ref{prop1} follows. $\qed$ \subsection{An estimate for the one-point function.} Our proof of Prop. \ref{prop1} implies a useful estimate for the one-point function $z\mapsto K_{m,n}(z,z)\mathrm e^{-mQ(z)}$. The following result is implicit in \cite{B}. \begin{prop}\label{lemma1} Suppose that $n\le m\tau+1$. Then there exists a number $C$ depending only on $\tau$ such that \begin{equation}\label{berg1}K_{m,n}(z,z)\mathrm e^{-mQ(z)}\le Cm \mathrm e^{-m(Q(z)-\widehat{Q}_\tau(z))},\quad z\in{\mathbb C},\end{equation} \begin{equation}\label{berg2}\babs{K_{m,n}(z,w)}^2\mathrm e^{-m(Q(z)+Q(w))}\le Cm^2\mathrm e^{-m(Q(z)-\widehat{Q}_\tau(z))}\mathrm e^{-m(Q(w)-\widehat{Q}_\tau(w))},\quad z,w\in{\mathbb C}. \end{equation} In particular, $\babs{K_{m,n}(z,w)}^2\mathrm e^{-m(Q(z)+Q(w))}\le Cm^2$ for all $z$ and $w$. \end{prop} \begin{proof} The function $\berd_{m,n}^{\langle z_0\rangle}(z)$ in the diagonal case $z=z_0$ reduces to \begin{equation*}\berd_{m,n}^{\langle z\rangle}(z)=K_{m,n}(z,z)\mathrm e^{-mQ(z)}. \end{equation*} Thus the estimate \eqref{cerd} implies \begin{equation*}K_{m,n}(z,z)\mathrm e^{-mQ(z)}\le m\mathrm e^A\mathrm e^{m(\widehat{Q}_\tau(z)-Q(z))}, \end{equation*} which proves \eqref{berg1}. In order to get \eqref{berg2} it now suffices to use the Cauchy--Schwarz inequality $\babs{K_{m,n}(z,w)}^2\le K_{m,n}(z,z)K_{m,n}(w,w)$ and apply \eqref{berg1} to the two factors in the right hand side. \end{proof} \subsection{Cut-off functions.} \label{cutoff} We will in the following frequently use cut-off functions with various properties. For convenience of the reader who may not be familiar with these details, we collect the necessary facts here. 
Given any numbers $\delta>0$, $r>0$ and $C>1$, there exists $\chi\in \coity({\mathbb C})$ such that $\chi=1$ on ${\mathbb D}(0;\delta)$, $\chi=0$ outside ${\mathbb D}(0;\delta(1+r))$, $0\le\chi\le 1$, and $|{\overline{\partial}}\chi|^2\le (C/r^2)\delta^{-2}\chi$ on ${\mathbb C}$. \footnote{Such a $\chi$ can be obtained by standard regularization applied to the Lipschitzian function which equals $(1-(\delta^{-1}\babs{z}-1)/r)^2$ for $\delta\le\babs{z}\le\delta(1+r)$, and is otherwise locally constant on ${\mathbb C}$.} It is then easy to check that $\|{\overline{\partial}}\chi\|_{L^2}^2\le C(1+2/r)$. Later on, we will often use the values $\delta=3\varepsilon/2$ and $\delta(1+r)=2\varepsilon$, where $\varepsilon>0$ is given. We may then arrange that $\|{\overline{\partial}}\chi\|_{L^2}\le 3$. \section{Weighted estimates for the ${\overline{\partial}}$-equation with a growth constraint}\label{l2estimates} \subsection{General introduction; the Bergman projection and the ${\overline{\partial}}$-equation.} \label{init} Let $\phi$ be a weight on ${\mathbb C}$ (cf. Subsect. \ref{weigh}). We assume throughout that $\phi$ is of class ${\mathcal C}^{1,1}({\mathbb C})$ (so that $\Delta\phi\in L^\infty_{{\rm loc}}({\mathbb C})$), and that $\int_{\mathbb C}\mathrm e^{-\phi}{\mathrm d} A<\infty$ (so that $L^2_\phi$ contains the constant functions). Also fix a positive integer $n$ and recall the definition of the ``truncated'' spaces $L^2_{\phi,n}$ and $A^2_{\phi,n}$ (see \eqref{ospaces} and \eqref{os2}).
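For orientation, in the Bargmann--Fock case $\phi(z)=m\babs{z}^2$ the truncated spaces are completely explicit (a worked example, not needed for the general theory): $A^2_{\phi,n}$ consists of the polynomials of degree at most $n-1$, the monomials $z^j$ are mutually orthogonal in $L^2_\phi$, and, with the normalization ${\mathrm d} A={\mathrm d} x\,{\mathrm d} y/\pi$, \begin{equation*} \int_{\mathbb C}\babs{z}^{2j}\mathrm e^{-m\babs{z}^2}\,{\mathrm d} A(z)=\frac{j!}{m^{j+1}}, \qquad\text{whence}\qquad K_{\phi,n}(z,w)=\sum_{j=0}^{n-1}\frac{m^{j+1}\,(z\bar{w})^j}{j!}=mE_{n-1}(mz\bar{w}), \end{equation*} in agreement with the formula $K_{m,n}(z,w)=mE_{n-1}(mz\bar{w})$ from the Bargmann--Fock subsection above.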
Let $K_{\phi,n}$ denote the reproducing kernel for $A^2_{\phi,n}$, and let ${P}_{\phi,n}:L^2_\phi\to A^2_{\phi,n}$ be the orthogonal projection, \begin{equation}\label{ooproj}{P}_{\phi,n}u(w)=\int_{\mathbb C} u(z)K_{\phi,n}(w,z)\mathrm e^{-\phi(z)}{{\mathrm d} A}(z),\quad u\in L^2_{\phi}.\end{equation} We will in later sections frequently need to estimate ${P}_{\phi,n}u(w)$, especially when $u$ is holomorphic in a neighbourhood of $w$. Now note that for $f$ in the class ${\mathcal C}_0({\mathbb C})$ (continuous functions with compact support), the Cauchy transform $u=C f$ given by \begin{equation*}C f(w)=\int_{\mathbb C}\frac {f(z)} {w-z} {{\mathrm d} A}(z),\end{equation*} satisfies the ${\overline{\partial}}$-equation \begin{equation}\label{dstreck}{\overline{\partial}} u=f.\end{equation} Moreover, $u=C f$ is bounded, and is therefore of class $L^2_{\phi,n}$ for any $n\ge 1$. Thus the function \begin{equation}\label{star}u_{*}(w)=u(w)-{P}_{\phi,n}u(w)\end{equation} solves \eqref{dstreck} and is of class $L^2_{\phi,n}$. It is easy to verify that $u_{*}$ defined by \eqref{star} is the unique norm-minimal solution to \eqref{dstreck} in $L^2_{\phi,n}$ whenever $u\in L^2_{\phi,n}$ is a solution to \eqref{dstreck}. Hence the study of the orthogonal projection ${P}_{\phi,n}u$ is equivalent to the study of the \textit{$L^2_{\phi,n}$-minimal} solution $u_{*}$ to \eqref{dstreck}. It is useful to observe that $u_{*}$ is characterized amongst the solutions of class $L^2_{\phi,n}$ to \eqref{dstreck} by the condition $u_{*}\bot A^2_{\phi,n}$, or \begin{equation}\label{orthog} \int_{\mathbb C} u_{*}(z)\overline{h(z)}\mathrm e^{-\phi(z)}{{\mathrm d} A}(z)=0,\quad\text{for all}\quad h\in A^2_{\phi,n}.\end{equation} Our principal result in this section, Th. \ref{strb}, states that a variant of the elementary one-dimensional form of the $L^2$ estimates of H\"ormander is valid for $u_{*}$. The important results for the further developments in this paper are however Th.
\ref{boh} and Cor. \ref{bh}. The reader may wish to glance at those results and skip to the next section at a first reading. \subsection{$L^2$ estimates.} The $L^2$ estimates of H\"{o}rmander in the most elementary, one-dimensional form apply only to weights $\phi$ which are strictly subharmonic in the entire plane ${\mathbb C}$. The result states that $u_0$, the $L^2_\phi$-minimal solution to \eqref{dstreck} (where $f\in{\mathcal C}_0({\mathbb C})$), satisfies \begin{equation}\label{hoe0}\int_{\mathbb C}\babs{u_0}^2\mathrm e^{-\phi}{{\mathrm d} A}\le\int_{\mathbb C} \babs{f}^2\frac {\mathrm e^{-\phi}} {\Delta\phi}{{\mathrm d} A},\end{equation} provided that $\phi$ is ${\mathcal C}^2$-smooth on ${\mathbb C}$. See \cite{H}, eq. (4.2.6), p. 250 (this is essentially just Green's formula). It is important to observe that the estimate \eqref{hoe0} remains valid for strictly subharmonic weights $\phi$ in the larger class ${\mathcal C}^{1,1}({\mathbb C})$ (that $\phi$ is strictly subharmonic then means that $\Delta\phi>0$ a.e. on ${\mathbb C}$). The proof in \cite{H}, Subsect. 4.2 goes through without changes in this more general case. \subsection{Weighted $L^2_{\phi,n}$ estimates.} We have the following theorem, where we consider two weights $\phi$ and $\widehat{\phi}$ with various properties. The practically minded reader may, with little loss of generality, think of $\phi=mQ$, $\widehat{\phi}(z)=m\widehat{Q}_\tau(z)+\varepsilon\log(1+\babs{z}^2)$, and ${\Sigma}={\mathcal S}_\tau$. This will essentially be the case in all our later applications. \begin{thm}\label{strb} Let ${\Sigma}$ be a compact subset of ${\mathbb C}$, let $\phi$, $\widehat\phi$ and ${\varrho}$ be three real-valued functions of class ${\mathcal C}^{1,1}({\mathbb C})$, and let $n$ be a positive integer.
We assume that: \begin{enumerate} \item[(1)] $L^2_{\widehat{\phi}}$ contains the constants and there are non-negative numbers $\alpha$ and $\beta$ such that \begin{equation}\label{put}\widehat{\phi}\le \phi+\alpha\quad\text{on}\quad {\mathbb C}\qquad\text{and}\qquad\phi\le \beta+\widehat{\phi}\quad \text{on}\quad {\Sigma},\end{equation} \item[(2)] $A^2_{\widehat{\phi}}\subset A^2_{\phi,n}$, \item[(3)] The function $\widehat{\phi}+{\varrho}$ is strictly subharmonic on ${\mathbb C}$, \item[(4)] ${\varrho}$ is locally constant on ${\mathbb C}\setminus {\Sigma}$, \item[(5)] There exists a number $\kappa$, $0<\kappa<1$, such that \begin{equation}\label{kappa} \frac{|{\overline{\partial}} {\varrho}|^2}{\Delta\widehat{\phi}+\Delta{\varrho}}\le \frac{\kappa^2}{\mathrm e^{\alpha+\beta}}\quad \text{a.e. on}\quad {\Sigma}.\end{equation} \end{enumerate} Let $f\in \coity({\mathbb C})$ be such that \begin{equation*}\operatorname{supp} f\subset {\Sigma}.\end{equation*} Then $u_*$, the $L^2_{\phi,n}$-minimal solution to ${\overline{\partial}} u=f$, satisfies \begin{equation}\label{zapp}\int_{\mathbb C}\babs{u_*}^2\mathrm e^{{\varrho}-\phi}{{\mathrm d} A}\le \frac {\mathrm e^{\alpha+\beta}} {(1-\kappa)^2}\int_{\mathbb C}\babs{f}^2\frac {\mathrm e^{{\varrho}-\phi}} {\Delta\widehat{\phi}+\Delta{\varrho}} {{\mathrm d} A}.\end{equation} \end{thm} \noindent Before we give the proof, we note a simple lemma, which will be put to repeated use. \begin{lem}\label{convo} Suppose that $f\in{\mathcal C}_0({\mathbb C})$. Then \eqref{dstreck} has a solution $u$ in $L^2_{\widehat{\phi}}$. Moreover, $v_*$, the $L^2_{\widehat{\phi}}$-minimal solution to \eqref{dstreck}, is of class $L^2_{\phi,n}$. \end{lem} \begin{proof} The assumptions imply that the Cauchy transform $C f$ is in $L^2_{\widehat\phi}$. Thus the $L^2_{\widehat\phi}$-minimal solution $v_*$ to \eqref{dstreck} exists, necessarily of the form $v_*=C f+g$ with some $g\in A^2_{\widehat\phi}$. In view of (2), we know that $g\in A^2_{\phi,n}$.
The assertion is now immediate. \end{proof} \begin{proof}[Proof of Theorem \ref{strb}] Assume that the right hand side in \eqref{zapp} is finite. In view of \eqref{orthog}, the condition that $u_{*}$ is $L^2_{\phi,n}$-minimal may be expressed as \begin{equation*}\int_{\mathbb C} u_{*}\mathrm e^{\varrho} \bar{h} \mathrm e^{-(\phi+{\varrho})}{{\mathrm d} A}=0 \quad\text{for all}\quad h\in A^2_{\phi,n}. \end{equation*} The latter relation means that the function $w_{0}=u_{*}\mathrm e^{\varrho}$ minimizes the norm in $L^2_{\phi+{\varrho}}$ over elements of the (non-closed) subspace \begin{equation*}\mathrm e^{{\varrho}}\cdot L^2_{\phi,n}=\big\{w;\quad w=h\mathrm e^{\varrho},\quad\text{where}\quad h\in L^2_{\phi,n}\big\}\subset L^2_{\phi+{\varrho}} \end{equation*} which solve the (modified) ${\overline{\partial}}$-equation \begin{equation}\label{mdbar}{\overline{\partial}} w={\overline{\partial}}\left ( u_{*}\mathrm e^{\varrho}\right )= f\mathrm e^{\varrho}+u_{*}{\overline{\partial}}\left ( \mathrm e^{\varrho}\right ). \end{equation} Now, since ${\varrho}$ is bounded on ${\mathbb C}$ and locally constant outside ${\Sigma}$, we have \begin{equation*}L^2_{\phi+{\varrho},n}=\mathrm e^{\varrho}\cdot L^2_{\phi,n}= L^2_{\phi,n} \qquad (\text{as sets}),\end{equation*} and we conclude that $w_{0}$ is the norm-minimal solution to \eqref{mdbar} in $L^2_{\phi+{\varrho},n}$. Let $v_*$ denote the $L^2_{\widehat{\phi}}$-minimal solution to ${\overline{\partial}} u=f$ (see Lemma \ref{convo}). We form the continuous function $g={\overline{\partial}}\left ((u_{*}-v_*)\mathrm e^{\varrho}\right )=(u_{*}-v_*){\overline{\partial}}\left ( \mathrm e^{\varrho}\right ),$ whose support is contained in ${\Sigma}$, and consider the ${\overline{\partial}}$-equation \begin{equation}\label{go}{\overline{\partial}}\xi=g=(u_{*}-v_*){\overline{\partial}}\left ( \mathrm e^{\varrho}\right ). 
\end{equation} The assertion of Lemma \ref{convo} remains valid if $\widehat\phi$ is replaced by $\widehat{\phi}+{\varrho}$; it follows that \eqref{go} has a solution $\xi\in L^2_{\widehat{\phi}+{\varrho}}$, and moreover, if $\xi_{0}$ denotes the norm-minimal solution to \eqref{go} in $L^2_{\widehat{\phi}+{\varrho}}$, we have $\xi_{0} \in L^2_{\phi+{\varrho},n}$. We now continue by forming the function \begin{equation*}w_1=v_*\mathrm e^{{\varrho}}+\xi_{0}.\end{equation*} It is clear that $w_1\in L^2_{\phi+{\varrho},n}$, and a calculation yields that \begin{equation}\label{nu2}{\overline{\partial}} w_1= f \mathrm e^{{\varrho}}+v_*{\overline{\partial}}\left ( \mathrm e^{{\varrho}}\right ) +(u_{*}-v_*){\overline{\partial}}\left ( \mathrm e^{{\varrho}}\right ) ={\overline{\partial}}\left ( u_{*}\mathrm e^{{\varrho}}\right )={\overline{\partial}} w_{0}. \end{equation} Since $w_{0}$ is norm-minimal in $L^2_{\phi+{\varrho},n}$ over functions $w$ such that ${\overline{\partial}} w={\overline{\partial}} w_{0}$, we must have \begin{equation} \label{ojoj}\int_{\mathbb C}\babs{w_{0}}^2 \mathrm e^{-(\phi+{\varrho})}{{\mathrm d} A}\le \int_{\mathbb C}\babs{w_1}^2\mathrm e^{-(\phi+{\varrho})}{{\mathrm d} A}\le \mathrm e^{\alpha}\int_{\mathbb C} \babs{w_1}^2 \mathrm e^{-(\widehat{\phi}+{\varrho})}{{\mathrm d} A}, \end{equation} where we have used the condition $\widehat{\phi}\le\phi+\alpha$ to deduce the second inequality. Moreover, since $\xi_0$ is norm-minimal in $L^2_{\widehat{\phi}+{\varrho}}$ amongst solutions to \eqref{go}, we have \begin{equation*} \int_{\mathbb C} w_1\bar{h}\mathrm e^{-(\widehat{\phi}+{\varrho})}{{\mathrm d} A}=\int_{\mathbb C} v_*\bar{h}\mathrm e^{-\widehat{\phi}}{{\mathrm d} A}+ \int_{\mathbb C} \xi_0 \bar{h}\mathrm e^{-(\widehat{\phi}+{\varrho})}{{\mathrm d} A}=0 \end{equation*} for all $h\in A^2_{\widehat{\phi}+{\varrho}}$, so the function $w_1$ is in fact the norm-minimal solution to a ${\overline{\partial}}$-equation in $L^2_{\widehat{\phi}+{\varrho}}$.
The ${\overline{\partial}}$-equation satisfied by $w_1$ is (see \eqref{nu2}) \begin{equation*}{\overline{\partial}} w_1=f\mathrm e^{\varrho}+u_{*}{\overline{\partial}}\left ( \mathrm e^{\varrho}\right ) = f \mathrm e^{\varrho} +u_{*}\mathrm e^{\varrho}{\overline{\partial}} {\varrho}. \end{equation*} By the estimate \eqref{hoe0} applied to the weight $\widehat\phi+{\varrho}$, we obtain that \begin{equation}\label{inter}\int_{\mathbb C}\babs{w_1}^2 \mathrm e^{-(\widehat{\phi}+{\varrho})}{{\mathrm d} A}\le \int_{{\mathbb C}} \babs{f\mathrm e^{\varrho} +u_{*}\mathrm e^{\varrho}{\overline{\partial}} {\varrho}}^2 \frac {\mathrm e^{-(\widehat{\phi}+{\varrho})}} {\Delta\left ( \widehat{\phi}+{\varrho}\right )}{{\mathrm d} A}=\int_{{\mathbb C}} \babs{f +u_{*}{\overline{\partial}}{\varrho}}^2 \frac {\mathrm e^{{\varrho}-\widehat{\phi}}} {\Delta\widehat{\phi}+\Delta{\varrho}}{{\mathrm d} A}. \end{equation} Since $f$ and ${\overline{\partial}}{\varrho}$ are supported in ${\Sigma}$, and since $\mathrm e^{-\widehat{\phi}}\le \mathrm e^{\beta}\mathrm e^{-\phi}$ there (see \eqref{put}), it is seen that the right hand side in \eqref{inter} can be estimated by \begin{equation}\label{inter2}\mathrm e^{\beta} \int_{\mathbb C}\babs{f+u_*{\overline{\partial}}{\varrho}}^2\frac {\mathrm e^{{\varrho}-\phi}} {\Delta\widehat{\phi}+\Delta{\varrho}} {{\mathrm d} A}.\end{equation} For $t>0$ we now use the inequality $\babs{a+b}^2\le (1+t)\babs{a}^2+ (1+t^{-1})\babs{b}^2$ and the condition \eqref{kappa} to conclude that the integral in \eqref{inter2} is dominated by \begin{equation}\label{inter3}\begin{split}(1+t)&\int_{\mathbb C} \babs{f}^2\frac {\mathrm e^{{\varrho}-\phi}} {\Delta\widehat{\phi}+\Delta{\varrho}} {{\mathrm d} A}+(1+t^{-1})\int_{\mathbb C}\babs{u_*}^2\frac {|{\overline{\partial}}{\varrho}|^2} {\Delta\widehat{\phi}+\Delta{\varrho}} \mathrm e^{{\varrho}-\phi}{{\mathrm d} A}\le\\ &\le (1+t)\int_{\mathbb C}\babs{f}^2\frac {\mathrm e^{{\varrho}-\phi}} 
{\Delta\widehat{\phi}+\Delta{\varrho}}{{\mathrm d} A}+(1+t^{-1})\frac {\kappa^2} {\mathrm e^{\alpha+\beta}} \int_{\mathbb C}\babs{u_*}^2\mathrm e^{{\varrho}-\phi}{{\mathrm d} A}.\\ \end{split} \end{equation} Tracing back through \eqref{ojoj}, \eqref{inter}, \eqref{inter2} and \eqref{inter3}, we get \begin{equation*}\int_{\mathbb C}\babs{u_*}^2\mathrm e^{{\varrho}-\phi}{{\mathrm d} A}\le \mathrm e^{\alpha+\beta}(1+t)\int_{\mathbb C}\babs{f}^2\frac {\mathrm e^{{\varrho}-\phi}} {\Delta\widehat{\phi}+\Delta{\varrho}}{{\mathrm d} A}+ (1+t^{-1})\kappa^2\int_{\mathbb C}\babs{u_*}^2\mathrm e^{{\varrho}-\phi}{{\mathrm d} A},\end{equation*} which we write as \begin{equation*} \big(1-(1+t^{-1})\kappa^2\big)\int_{\mathbb C}|u_{*}|^2\mathrm e^{{\varrho}-\phi}{{\mathrm d} A} \le\mathrm e^{\alpha+\beta}(1+t)\int_{{\mathbb C}}\babs{f}^2\frac{\mathrm e^{{\varrho}-\phi}} {\Delta\widehat{\phi}+\Delta{\varrho}}{{\mathrm d} A}. \end{equation*} The desired estimate \eqref{zapp} now follows if we pick $t=\kappa/(1-\kappa).$ \end{proof} \subsection{Implementation scheme.} We now fix $Q\in {\mathcal C}^{1,1}({\mathbb C})$ satisfying the growth assumption \eqref{gro} with a fixed $\rho>0$. Adding a constant to $Q$ does not change the problem and so we may assume that $Q\ge 1$ on ${\mathbb C}$. Let us put \begin{equation}\label{sigmat}q_\tau=\sup_{z\in{\mathcal S}_\tau}\{Q(z)\}.\end{equation} We next fix a positive number $\tau<\rho$ and two positive numbers ${M_0}$ and $\bpar$ such that \begin{equation}\label{sss} \bpar\log(1+\babs{z}^2)\le{M_0} \widehat{Q}_\tau(z),\qquad z\in{\mathbb C}.\end{equation} This is possible since $\widehat{Q}_\tau\ge 1$ and $\widehat{Q}_\tau(z)=\tau\log\babs{z}^2+{\mathcal O}(1)$ when $z\to\infty$ (see \eqref{qtau}). In particular, it yields that $\bpar\le {M_0}\tau$. 
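To make the last remark explicit (a one-line comparison of growth rates, included for the reader's convenience): by \eqref{qtau} we have, as $z\to\infty$,
\begin{equation*}
\bpar\log(1+\babs{z}^2)=\bpar\log\babs{z}^2+o(1)\qquad\text{while}\qquad {M_0}\widehat{Q}_\tau(z)={M_0}\tau\log\babs{z}^2+{\mathcal O}(1),
\end{equation*}
so \eqref{sss} can hold for all large $\babs{z}$ only if the leading coefficients satisfy $\bpar\le{M_0}\tau$.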
We now define \begin{equation}\label{pha}\phi_m=mQ\quad \text{and}\quad \widehat{\phi}_{m,{M_0},\bpar}(z)=(m-{M_0})\widehat{Q}_\tau(z)+\bpar\log(1+\babs{z}^2).\end{equation} Note that $\widehat\phi_{m,{M_0},\bpar}$ is strictly subharmonic on ${\mathbb C}$ whenever $m\ge {M_0}$ with \begin{equation}\label{bye}\Delta \widehat{\phi}_{m,{M_0},\bpar}(z)=(m-{M_0})\Delta \widehat{Q}_\tau(z)+ \bpar\left ( 1+\babs{z}^2\right )^{-2} \ge \bpar\left ( 1+\babs{z}^2\right )^{-2},\end{equation} and that \eqref{sss} implies (since $\widehat{Q}_\tau\le Q$) \begin{equation*}\widehat{\phi}_{m,{M_0},\bpar}\le \phi_m\qquad \text{on}\quad {\mathbb C}.\end{equation*} Furthermore, \eqref{qtau} implies that \begin{equation}\label{qq}\widehat{\phi}_{m,{M_0},\bpar}=\left ((m-{M_0})\tau+\bpar\right ) \log\babs{z}^2+{\mathcal O}(1), \qquad \text{as}\quad z\to\infty.\end{equation} This yields \begin{equation*}\phi_m(z)-\widehat{\phi}_{m,{M_0},\bpar}(z)\ge m(\rho-\tau)\log\babs{z}^2+{\mathcal O}(1),\qquad\text{as}\quad z\to\infty.\end{equation*} Note also that \begin{equation*}A^2_{\phi_m,n}=A^2_{mQ}\cap{\mathcal P}_n={H}_{m,n}. \end{equation*} We now check the conditions (1) and (2) of Th. \ref{strb}. (Recall that $]x[$ denotes the largest integer which is strictly smaller than $x$.) \begin{lem} \label{putty} Condition {\rm (1)} holds for $\phi=\phi_m$, $\widehat\phi=\widehat\phi_{m,{M_0},\bpar}$, ${\Sigma}={\mathcal S}_\tau$, $\alpha=0$ and $\beta={M_0} q_\tau$, provided that $(m-{M_0})\tau+\bpar>1$. Condition {\rm (2)}, i.e. $A^2_{\widehat{\phi}_{m,{M_0},\bpar}}\subset {H}_{m,n}$, holds if $n\ge ](m-{M_0})\tau+\bpar[$. Thus conditions {\rm (1)} and {\rm (2)} hold whenever \begin{equation}\label{alag}n\ge](m-{M_0})\tau+\bpar[>0.\end{equation} \end{lem} \begin{proof} We have already shown that $\widehat{\phi}_{m,{M_0},\bpar}\le \phi_m$ on ${\mathbb C}$. Moreover $\phi_m\le\beta+\widehat{\phi}_{m,{M_0},\bpar}$ on ${\mathcal S}_\tau$, since $Q=\widehat{Q}_\tau$ there.
The assertion that $\widehat{\phi}_{m,{M_0},\bpar}\le \phi_m$ implies that $A^2_{\widehat{\phi}_{m,{M_0},\bpar}}\subset A^2_{\phi_m}$, and it remains only to prove that $A^2_{\widehat{\phi}_{m,{M_0},\bpar}}\subset {\mathcal P}_n$. But this follows from \eqref{qq}, \eqref{alag}, and the fact that $\int_{{\mathbb C}}(1+|z|^{2})^{-r}{{\mathrm d} A}(z)<\infty$ if and only if $r>1$. \end{proof} We now apply Th. \ref{strb}. It will be convenient to define \begin{equation}\label{ctdef}c_\tau=\inf_{z\in{\mathcal S}_\tau} \big\{\left ( 1+\babs{z}^2\right )^{-2}\big\}.\end{equation} \begin{thm} \label{boh} Let $Q\in{\mathcal C}^{1,1}({\mathbb C})$ and $Q\ge 1$ on ${\mathbb C}$. Fix two positive numbers ${M_0}$ and $\bpar$ such that relation \eqref{sss} is satisfied, and define the positive numbers $q_\tau$ and $c_\tau$ by \eqref{sigmat} and \eqref{ctdef}, and let $\widehat{\phi}_{m,{M_0},\bpar}$ be defined by \eqref{pha}. Suppose there are real-valued functions ${\varrho}_m\in{\mathcal C}^{1,1}({\mathbb C})$ which are locally constant on ${\mathbb C}\setminus{\mathcal S}_\tau$ such that $\widehat{\phi}_{m,{M_0},\bpar}+{\varrho}_m$ is strictly subharmonic on ${\mathbb C}$ and that for some positive number $\kappa<1$ we have \begin{equation*} \frac {|{\overline{\partial}} {\varrho}_m|^2} {\Delta\widehat{\phi}_{m,{M_0},\bpar}+\Delta{\varrho}_m}\le\frac{\kappa^2} {\mathrm e^{{M_0} q_\tau}} \qquad \text{a.e. on}\quad {\mathbb C}. 
\end{equation*} Suppose furthermore that $f\in\coity({\mathbb C})$ satisfies \begin{equation*}\operatorname{supp} f\subset {\mathcal S}_\tau.\end{equation*} Then, if $n\ge ](m-{M_0})\tau+\bpar[>0$, we have that $u_*$, the $L^2_{mQ,n}$-minimal solution to ${\overline{\partial}} u=f$, satisfies \begin{equation*}\int_{\mathbb C}\babs{u_*}^2 \mathrm e^{{\varrho}_m-mQ}{{\mathrm d} A}\le \frac{\mathrm e^{{M_0} q_\tau}}{(1-\kappa)^2}\int_{{\mathbb C}}\babs{f}^2 \frac {\mathrm e^{{\varrho}_m-mQ}}{ (m-{M_0})\Delta Q+\Delta{\varrho}_m+\bpar c_\tau} {{\mathrm d} A}.\end{equation*} \end{thm} \begin{proof} All conditions in Th. \ref{strb} are fulfilled with $\phi=mQ$, $\widehat{\phi}=\widehat{\phi}_{m,{M_0},\bpar}$, ${\varrho}={\varrho}_m$, $\alpha=0$ and $\beta={M_0} q_\tau$. In view of \eqref{bye}, we obtain \begin{equation}\label{whop3}\begin{split}\int_{\mathbb C}\babs{u_*}^2\mathrm e^{{\varrho}_m-mQ}{{\mathrm d} A}&\le \frac {\mathrm e^{{M_0} q_\tau}} {(1-\kappa)^2}\int_{{\mathbb C}}\babs{f}^2\frac {\mathrm e^{{\varrho}_m-mQ}} {\Delta\widehat{\phi}_{m,{M_0},\bpar}+ \Delta{\varrho}_m}{{\mathrm d} A}\le\\ &\le \frac {\mathrm e^{{M_0} q_\tau}} {(1-\kappa)^2}\int_{{\mathbb C}}\babs{f}^2\frac {\mathrm e^{{\varrho}_m-mQ}} {(m-{M_0})\Delta\widehat{Q}_\tau+\Delta{\varrho}_m+\bpar c_\tau}{{\mathrm d} A}.\\ \end{split}\end{equation} The assertion is now immediate from \eqref{whop3}, since $\Delta Q=\Delta \widehat{Q}_\tau$ a.e. on $\operatorname{supp} f$. \end{proof} \noindent In Sect. \ref{point}, we shall use the full force of Th. \ref{boh}. For now, we just mention the following simple consequence, which also can be proved easily by more elementary means, cf. \cite{B}, p. 10. \begin{cor}\label{bh} Let $Q\in {\mathcal C}^{1,1}({\mathbb C})$ and $Q\ge 1$ on ${\mathbb C}$. Further, let ${\Sigma}$ be a compact subset of ${\mathcal S}_\tau$.
Put \begin{equation*}a=\operatorname{\text{\rm ess\,inf}\,}\{\Delta Q(z);\, z\in{\Sigma}\}.\end{equation*} Let $m_0=\max\{2{M_0},(1+{M_0})/\tau\}$ and assume that $f\in \coity({\mathbb C})$ satisfies \begin{equation*}\operatorname{supp} f\subset{\Sigma},\end{equation*} and that $n\ge ](m-{M_0})\tau+\bpar[$ and $m\ge m_0$. Then $u_*$, the $L^2_{mQ,n}$-minimal solution to ${\overline{\partial}} u=f$, satisfies \begin{equation*}\|u_*\|_{mQ}^2\le \frac {2\mathrm e^{{M_0} q_\tau}} {am+\bpar c_\tau}\|f\|_{mQ}^2. \end{equation*} \end{cor} \begin{proof} Take ${\varrho}_m=0$ in Th. \ref{boh} and observe that $m-{M_0}\ge m/2$ and also $](m-{M_0})\tau+\bpar[>0$ whenever $m\ge m_0$. (Any other $m_0$ having these properties would work as well, of course.) \end{proof} \section{Approximate Bergman projections} \subsection{Preliminaries} In this section we state and prove a result (Th. \ref{mlem} below), which we will use to prove the asymptotic expansion in Th. \ref{th3} in the next section. The result in question is a modified version of [Berman et al. \cite{BBS}, Prop. 2.5, p. 9]. Our proof is elementary but rather lengthy, and the reader may find it worthwhile to look at the result and go to the next section on a first reading. \subsection{A convention} It will be convenient to be able to define integrals $\int_S \chi(w)Y(w){\mathrm d}\mu(w)$ where $S$ is a $\mu$-measurable subset of ${\mathbb C}$, $\chi$ a cut-off function, and $Y$ a complex-valued $\mu$-measurable function which is well-defined on $\operatorname{supp}\chi$, but not necessarily on the entire domain $S$.
By convention, we extend the integrand $Y(w)\chi(w)$ to ${\mathbb C}$ by defining $\chi(w)Y(w)=0$ whenever $\chi(w)=0$, i.e., \textit{we define} \begin{equation*}\int_S \chi(w)Y(w){\mathrm d}\mu(w):=\int_{S\cap \operatorname{supp}\chi}\chi(w)Y(w){\mathrm d}\mu(w).\end{equation*} \subsection{Statement of the result} Recall the definition of the set ${X}=\{\Delta Q>0\}$, as well as of the holomorphic extension $\psi$ of $Q$ from the anti-diagonal, see \eqref{psi2}. For the approximating kernel $K_m^1$, see Definition \ref{def1}. By ${\partial} S$ we mean the positively oriented boundary of a compact set $S$. We have the following theorem. \begin{thm} \label{mlem} Let $Q$ be real-analytic in ${\mathbb C}$. Let $S$ be a compact subset of ${X}$ and fix a point $z_0\in S$. Then there exists \begin{enumerate} \item[(1)] a number $\varepsilon>0$ small enough that $K_m^1(z,w)$ makes sense and is Hermitian for all $z,w\in{\mathbb D}(z_0;2\varepsilon)$ and all $z_0\in S$, \item[(2)] a function $\chi\in\coity({\mathbb C})$ with $\chi=1$ in ${\mathbb D}(z_0;3\varepsilon/2)$, $\chi=0$ outside ${\mathbb D}(z_0;2\varepsilon)$, and $\|{\overline{\partial}}\chi\|_{L^2}\le 3,$ \item[(3)] a real-analytic function $\nu_0(z,w)$ defined for $z,w\in{\mathbb D}(z_0;2\varepsilon)$, \end{enumerate} such that with \begin{equation}\label{izm}I_{m}u(z)=\int_{S} u(w)\chi(w)K_m^1(z,w)\mathrm e^{-mQ(w)}{{\mathrm d} A}(w),\qquad u\in A^2_{mQ},\quad z\in{\mathbb D}(z_0;\varepsilon)\cap S^\circ, \end{equation} we have \begin{equation*}\begin{split} &\babs{I_{m}u(z)-u(z)-\frac 1 {2\pi{\mathrm i}}\int_{{\partial} S}u(w)\chi(w)\mathrm e^{m(\psi(z,\bar{w})-Q(w))}\left ( \frac 1 {z-w}+\nu_0(z,w)\right ){\mathrm d} w} \le\frac{C}{m^{3/2}}\mathrm e^{mQ(z)/2}\|u\|_{mQ},\\ \end{split}\end{equation*} for all $u$ and $z$ as above, with a number $C$ which only depends on $\varepsilon$ and $z_0$. 
Moreover, the numbers $C$ and $\varepsilon$ may be chosen independently of $z_0$ for $z_0\in S.$ In particular, if $z_0\in S^\circ$ and if $\varepsilon$ is small enough that ${\mathbb D}(z_0;2\varepsilon)\subset S^\circ$, then $\chi=0$ on ${\partial} S$ and so \begin{equation}\label{ching}\babs{I_{m}u(z)-u(z)}\le Cm^{-3/2}\mathrm e^{mQ(z)/2}\|u\|_{mQ}, \qquad u\in A^2_{mQ},\quad z\in{\mathbb D}(z_0;\varepsilon).\end{equation} \end{thm} \begin{rem} The function $\nu_0$ is constructed in the proof, and it depends only on the function $\psi$ and its derivatives. See \eqref{nudef} below. In the case when $z_0$ is in $S^\circ$ and $\varepsilon$ is small enough that ${\mathbb D}(z_0;2\varepsilon)\subset S^\circ$, the conclusion \eqref{ching} makes it seem reasonable to call the functional $u\mapsto I_m u(z)$ an \textit{approximate Bergman projection}, at least locally, for $z$ in a neighbourhood of $z_0$. \end{rem} \subsection{A rough outline; previous work.} In the proof we shall construct three operators $I_m^j$, $j=1,2,3$, such that $I_m u(z)=I_m^1 u(z)+I_m^2 u(z)+I_m^3 u(z)$ for $z$ near $z_0$. The constructions will be made so that \begin{equation*}\begin{split}&I_m^1 u(z)\quad\text{is close to}\quad u(z)+\frac 1 {2\pi{\mathrm i}}\int_{{\partial} S}\frac{u(w)\chi(w)\mathrm e^{m(\psi(z,\bar{w})-Q(w))}} {z-w}{\mathrm d} w,\\ &I_m^2 u(z)\quad \text{is close to}\quad \frac 1 {2\pi{\mathrm i}}\int_{{\partial} S}u(w)\chi(w)\nu_0(z,w)\mathrm e^{m(\psi(z,\bar{w})-Q(w))}{\mathrm d} w,\\ &\text{and}\quad I_m^3 u(z)\quad \text{is "negligible" for large\,} m.\\ \end{split}\end{equation*} The operator $I_m^1$ is easy to construct directly, whereas the definitions of the other operators $I_m^j$ for $j=2,3$ are somewhat more involved. We have found it convenient to start by considering $I_m^1$. 
Our approach is based on the paper \cite{BBS} by Berman et al., in which a somewhat different situation is treated (notably, $Q$ is assumed strictly subharmonic in the entire plane in \cite{BBS}, which means that a somewhat different inner product $|u|^2_{mQ}=\int \babs{u}^2\mathrm e^{-mQ}\Delta Q{{\mathrm d} A}$ is available). We have found the remarks of Berman in \cite{B}, p. 9 quite useful; the construction in \cite{BBS} is \textit{local} and hence the requirement that $Q$ be globally strictly subharmonic can, at least in principle, be removed. We have therefore found it worthwhile to give a detailed proof of this statement. Our approach is heavily inspired by that of the aforementioned papers, but is frequently more elementary. The rest of this section is devoted to the proof of Th. \ref{mlem}. \subsection{The first approximation.} Throughout the proof, we keep a point $z_0\in S$ arbitrary but fixed, where $S$ is a given compact subset of ${X}$. Also fix a function $u\in A^2_{mQ}$. The idea is to construct a suitable complex phase function $\phi_z(w)$, such that for fixed $z$ in $S^\circ$ close to $z_0$, the main contribution in \eqref{izm} is of the form \begin{equation}\label{grep}I_m^1 u(z)=\int_{S} \frac {u(w)\chi(w)} {z-w}{\overline{\partial}}_w\left ( \mathrm e^{m\phi_z(w)(z-w)}\right ){{\mathrm d} A}(w).\end{equation} To construct $\phi_z$, we first define, for points $z,w,$ and $\bar{\zeta}$ sufficiently near each other, \begin{equation}\label{bri}\theta(z,w,\zeta)=\int_0^1 {\partial}_1\psi(w+t(z-w),\zeta){\mathrm d} t. \end{equation} Then $\theta$ is holomorphic in a neighbourhood of the subset $\{(z,z,\bar{z});z\in {\mathbb C}\}\subset {\mathbb C}^3$ and \begin{equation*}\theta(z,w,\zeta)(z-w)=\psi(z,\zeta)-\psi(w,\zeta).
\end{equation*} Let $\varepsilon>0$ be sufficiently small that ${\mathbb D}(z_0;2\varepsilon)\Subset {X}$ and such that the functions $\psi(z,w)$, ${b}_0(z,w)$ and ${b}_1(z,w)$ make sense and are holomorphic on $\{(z,w);z,\bar{w}\in {\mathbb D}(z_0;2\varepsilon)\}$ (see Definition \ref{def1}). Writing \begin{equation*}\delta_0=\Delta Q(z_0)/2,\end{equation*} we may then use \eqref{bbs} to choose $\varepsilon>0$ somewhat smaller if necessary so that \begin{equation}\label{goodc}2\operatorname{Re}\{\theta(z,w,\bar{w})(z-w)\}=2\operatorname{Re} \psi(z,\bar{w})-2Q(w)\le -\delta_0\babs{z-w}^2+Q(z)-Q(w),\end{equation} for all $z,w\in {\mathbb D}(z_0;2\varepsilon)$. Note that the same $\varepsilon$ can be used for all $z_0$ in $S$. Next, we fix a cut-off function $\chi\in\coity({\mathbb C})$ such that $\chi=1$ in ${\mathbb D}(z_0;3\varepsilon/2)$, $\chi=0$ outside ${\mathbb D}(z_0;2\varepsilon)$, $0\le \chi\le 1$ on ${\mathbb C}$ and $\|{\overline{\partial}}\chi\|_{L^2}\le 3$, see Subsect. \ref{cutoff}. Take a point $z\in S^\circ\cap{\mathbb D}(z_0;\varepsilon)$. We now put \begin{equation*}\phi_z(w)=\theta(z,w,\bar{w}),\qquad\text{which means that} \qquad \phi_z(w)(z-w)=\psi(z,\bar{w})-Q(w),\end{equation*} and consider the corresponding integral $I_m^1 u(z)$ given by \eqref{grep}. We have the following lemma. The result should be compared with \cite{BBS}, Prop. 2.1. \begin{lem} \label{blem}There exist positive numbers $C_1$ and $\delta$ depending only on $z_0$ such that \begin{equation}\label{goo1}\babs{I_{m}^1 u(z)-\frac 1 {2\pi{\mathrm i}}\int_{{\partial} S} \frac {u(w)\chi(w)\mathrm e^{m\phi_z(w)(z-w)}} {z-w}{\mathrm d} w-u(z)}\le C_1\varepsilon^{-1}\mathrm e^{m(Q(z)/2-\delta\varepsilon^2)}\|u\|_{mQ},\end{equation} for all $u\in A^2_{mQ}$ and all $z\in{\mathbb D}(z_0;\varepsilon)\cap S^\circ$. The numbers $C_1$ and $\delta$ may also be chosen independently of the point $z_0\in S$. (Indeed, one may take $\delta=\Delta Q(z_0)/16$ and $C_1=6$.)
\end{lem} \begin{proof} Keep $z\in{\mathbb D}(z_0;\varepsilon)\cap S^\circ$ arbitrary but fixed and take $u\in A^2_{mQ}$. Since $\chi(z)=1$ and since $\babs{w-z}\ge\varepsilon/2$ when ${\overline{\partial}}\chi(w)\ne 0$, \eqref{grep} and Cauchy's formula imply (keep in mind that $z\in S^\circ$) \begin{equation}\begin{split}\label{pr}u(z)&+\frac 1 {2\pi{\mathrm i}}\int_{{\partial} S} \frac {u(w)\chi(w)\mathrm e^{m\phi_z(w)(z-w)}} {z-w}{\mathrm d} w=\int_{S} \frac {{\overline{\partial}}_w\left ( u(w)\chi(w)\mathrm e^{m\phi_z(w)(z-w)}\right )} {z-w}{{\mathrm d} A}(w)=\\ &=I_m^1 u(z)+\int_{S}\frac {u(w){\overline{\partial}}\chi(w)} {z-w} \mathrm e^{m\phi_z(w)(z-w)}{{\mathrm d} A}(w)=\\ &=I_m^1 u(z)+\int_{\{w\in S;\,\babs{w-z}\ge \varepsilon/2\}}\frac {u(w){\overline{\partial}}\chi(w)} {z-w} \mathrm e^{m\phi_z(w)(z-w)}{{\mathrm d} A}(w).\\ \end{split} \end{equation} Applying the estimate \eqref{goodc} to the last integral in \eqref{pr} gives \begin{equation*}\begin{split}&\babs{ \int_{\{w\in S;\,\babs{w-z}\ge \varepsilon/2\}}\frac {u(w){\overline{\partial}}\chi(w)} {z-w} \mathrm e^{m\phi_z(w)(z-w)}{{\mathrm d} A}(w)}\le\\ &\qquad\qquad\qquad\le \int_{\{w\in{\mathbb C};\, \babs{w-z}\ge {\varepsilon/2}\}}\frac{\babs{ u(w){\overline{\partial}}\chi(w)}} {\babs{z-w}}\mathrm e^{m(Q(z)/2-Q(w)/2-\delta_0\babs{z-w}^2/2)}{{\mathrm d} A}(w)\le\\ &\qquad\qquad\qquad\le (2/\varepsilon)\mathrm e^{m(Q(z)/2-\delta_0\varepsilon^2/8)}\int_{\mathbb C} \babs{u(w){\overline{\partial}}\chi(w)}\mathrm e^{-mQ(w)/2}{{\mathrm d} A}(w)\le\\ &\qquad\qquad\qquad\le (2/\varepsilon)\mathrm e^{m(Q(z)/2-\delta_0\varepsilon^2/8)}\|u\|_{mQ}\|{\overline{\partial}}\chi\|_{L^2} \le (6/\varepsilon)\mathrm e^{m(Q(z)/2-\delta_0\varepsilon^2/8)}\|u\|_{mQ},\\ \end{split} \end{equation*} where we have used that $\|{\overline{\partial}}\chi\|_{L^2}\le 3$ to get the last inequality.
Recalling that $\delta_0=\Delta Q(z_0)/2$ and combining with \eqref{pr}, we get an estimate \begin{equation*}\babs{u(z)+\frac 1 {2\pi{\mathrm i}}\int_{{\partial} S} \frac {u(w)\chi(w)\mathrm e^{m\phi_z(w)(z-w)}} {z-w}{\mathrm d} w-I_m^1 u(z)}\le (6/\varepsilon)\mathrm e^{m(Q(z)/2-\Delta Q(z_0)\varepsilon^2/16)}\|u\|_{mQ}.\end{equation*} The proof is finished. \end{proof} \subsection{The change of variables.} Keep $z_0\in S$ fixed and note that the definition of $\theta$ (see \eqref{bri}) implies that \begin{equation}\label{d3}[{\partial}_3\theta](z,z,\bar{z})= [{\partial}_1{\partial}_2\psi](z,\bar{z})=\Delta Q(z),\end{equation} for all $z\in {\mathbb C}$. Since $\Delta Q(z_0)>0$, \eqref{d3} implies that there exists a neighbourhood $U$ of $(z,w,\zeta)=(z_0,z_0,\bar{z}_0)$ such that the function \begin{equation*}F:U\to F(U)\quad:\quad (z,w,\zeta)\mapsto (z,w,\theta(z,w,\zeta))\end{equation*} defines a biholomorphic change of coordinates. In particular, we may regard $\zeta$ as a function of the parameters $z$, $w$ and $\theta$ when $(z,w,\zeta)$ is in $U$. In view of \eqref{d3} we can choose $\varepsilon>0$ such that $[{\partial}_3\theta](z,w,\zeta)\ne 0$ whenever $z,w,\bar{\zeta}\in{\mathbb D}(z_0;2\varepsilon)$. Since $\phi_z(w)=\theta(z,w,\bar{w})$, we have \begin{equation}\label{basic} [{\partial}_3\theta](z,w,\bar{w})={\overline{\partial}}\phi_z(w),\end{equation} and we infer that when $z$ is fixed in ${\mathbb D}(z_0;\varepsilon)$ we have \begin{equation*}{\overline{\partial}} \phi_z(w)\ne 0\quad \text{when}\quad w\in{\mathbb D}(z_0;2\varepsilon). \end{equation*} Moreover, choosing $\varepsilon>0$ somewhat smaller if necessary, we can assume that $U=\{(z,w,\zeta);z,w,\bar{\zeta}\in{\mathbb D}(z_0;2\varepsilon)\}$. Again, note that the same $\varepsilon$ can be used for all $z_0$ in $S$.
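For completeness, we indicate the verification of \eqref{d3} (a direct differentiation under the integral sign in \eqref{bri}):
\begin{equation*}
[{\partial}_3\theta](z,w,\zeta)=\int_0^1[{\partial}_2{\partial}_1\psi](w+t(z-w),\zeta){\mathrm d} t,
\end{equation*}
and at $w=z$, $\zeta=\bar{z}$ the integrand is constant in $t$, so the integral equals $[{\partial}_1{\partial}_2\psi](z,\bar{z})=\Delta Q(z)$.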
We shall in the following fix a number $\varepsilon>0$ with the above properties, in addition to the properties which we required earlier, i.e., ${\mathbb D}(z_0;2\varepsilon)\Subset {X}$, the inequality \eqref{goodc} holds for all $z,w\in{\mathbb D}(z_0;2\varepsilon)$, and the functions $\psi$, ${b}_0$ and ${b}_1$ are holomorphic in the set $\{(z,w);z,\bar{w}\in{\mathbb D}(z_0;2\varepsilon)\}$. \subsection{The functions $\Theta_0$ and $\Upsilon_0$ and a formula for $I_m$.} We now use the correspondence $F$ to define a holomorphic function $\widetilde{\psi}$ in ${\mathbb D}(z_0;2\varepsilon)\times F(U)$ via \begin{equation*}\widetilde{\psi}(z,\xi,w,\theta)=\psi(z,\zeta(\xi,w,\theta)).\end{equation*} We also put \begin{equation}\label{d0}\Theta_0(z,w,\theta)=[{\partial}_1{\partial}_4 \widetilde{\psi}](z,z,w,\theta)={b}_0(z,\zeta)\cdot[{\partial}_\theta\zeta](z,w,\theta)= {b}_0(z,\zeta)/[{\partial}_\zeta\theta](z,w,\zeta),\quad \zeta=\zeta(z,w,\theta),\end{equation} where we recall that \begin{equation*}{b}_0={\partial}_1{\partial}_2\psi.\end{equation*} The function $\Theta_0$ was put to use by Berman et al. in [\cite{BBS}, p. 9], where it is called $\Delta_0$. Note that \eqref{bri} implies that $\theta(z,z,\zeta)={\partial}_1\psi(z,\zeta)$ for all $z$ and $\zeta$, so that $[{\partial}_\zeta\theta](z,z,\zeta)={b}_0(z,\zeta)$. Hence \eqref{d0} gives \begin{equation}\label{d01}\Theta_0(z,z,\theta)= {b}_0(z,\zeta)/{b}_0(z,\zeta)=1.\end{equation} Thus putting \begin{equation*}\Upsilon_0(z,w,\theta)=-\int_0^1[{\partial}_2\Theta_0](z,z+t(w-z),\theta){\mathrm d} t,\end{equation*} we obtain the relation \begin{equation}\label{upsrel}\Upsilon_0(z,w,\theta)(z-w)=\Theta_0(z,w,\theta)-1.\end{equation} We have the following lemma.
\begin{lem}\label{funs} \begin{equation}\label{boss}I_{m} u(z)=\int_{S} \frac {u(w)\chi(w)} {z-w}\left ( 1+\frac {{b}_1(z,\bar{w})} {m{b}_0(z,\bar{w})}\right ) \Theta_0(z,w,\phi_z(w)){\overline{\partial}}_w\left ( \mathrm e^{m\phi_z(w)(z-w)}\right ) {{\mathrm d} A}(w).\end{equation} \end{lem} \begin{proof} Combining \eqref{d0} with \eqref{basic} we get the identity \begin{equation}\label{neq} \Theta_0(z,w,\phi_z(w))={b}_0(z,\bar{w})/{\overline{\partial}}\phi_z(w).\end{equation} Recalling that $K_m^1(z,w)=(m{b}_0(z,\bar{w})+{b}_1(z,\bar{w}))\mathrm e^{m\psi(z,\bar{w})}$ and $\phi_z(w)=\theta(z,w,\bar{w})$ we thus have (see \eqref{izm}) \begin{equation*}\begin{split}I_{m} u(z)&=\int_{S} u(w)\chi(w)K_m^1(z,w)\mathrm e^{-mQ(w)}{{\mathrm d} A}(w)=\\ &=\int_{S} u(w)\chi(w)(m{b}_0(z,\bar{w})+{b}_1(z,\bar{w}))\mathrm e^{m\phi_z(w)(z-w)} {{\mathrm d} A}(w)=\\ &=\int_{S} u(w)\chi(w) (m{b}_0(z,\bar{w})+{b}_1(z,\bar{w}))\frac {{\overline{\partial}}_w (\mathrm e^{m\phi_z(w)(z-w)})} {m(z-w){\overline{\partial}} \phi_z(w)}{{\mathrm d} A}(w)=\\ &=\int_{S} \frac {u(w)\chi(w)} {z-w}\left ( 1+\frac {{b}_1(z,\bar{w})} {m{b}_0(z,\bar{w})}\right ) \frac{{b}_0(z,\bar{w})} {{\overline{\partial}} \phi_z(w)} {\overline{\partial}}_w\left ( \mathrm e^{m\phi_z(w)(z-w)}\right ) {{\mathrm d} A}(w)=\\ &=\int_{S} \frac {u(w)\chi(w)} {z-w}\left ( 1+\frac {{b}_1(z,\bar{w})} {m{b}_0(z,\bar{w})}\right ) \Theta_0(z,w,\phi_z(w)){\overline{\partial}}_w\left ( \mathrm e^{m\phi_z(w)(z-w)}\right ) {{\mathrm d} A}(w), \end{split}\end{equation*} where we have used \eqref{neq} to get the last identity. \end{proof} Comparing \eqref{boss} with the formula \eqref{grep}, we now see a connection between $I_m u(z)$ and $I_m^1 u(z)$. We shall exploit this relation shortly. \subsection{The differential equation.} We will now show that the functions ${b}_0$, ${b}_1$ and $\Theta_0$ are connected via an important differential equation.
\begin{lem} \label{comp} For all $z,\bar{\zeta}\in{\mathbb D}(z_0;2\varepsilon)$, we have the identity \begin{equation}\label{b1}\frac {{b}_1(z,\zeta)} {{b}_0(z,\zeta)}=-\left (\frac {{\partial}^2} {{\partial} w{\partial}\theta} \Theta_0(z,w,\theta)\right )\biggm|_{z=w,\, \theta=\theta(z,z,\zeta)}. \end{equation} \end{lem} \begin{proof} In view of \eqref{funrel}, it suffices to show that the holomorphic function \begin{equation}\label{B1}B_1(z,\zeta)=-{b}_0(z,\zeta)\cdot [{\partial}_2{\partial}_3\Theta_0](z,z,\theta(z,z,\zeta))\end{equation} satisfies $B_1(z,\zeta)=\frac 1 2 \frac {\partial} {{\partial} \zeta}\left ( \frac 1 {{b}_0(z,\zeta)}\frac {\partial} {{\partial} z}{b}_0(z,\zeta)\right )$ for all $z$ and $\bar{\zeta}$ in a neighbourhood of $z_0$. To this end, we first note that a Taylor expansion yields that \begin{equation*}\psi(w,\zeta)=\psi(z,\zeta)+ (w-z)\frac {{\partial} \psi} {{\partial} z}(z,\zeta)+\frac {(w-z)^2} 2 \frac {{\partial}^2\psi} {{\partial} z^2}(z,\zeta)+{\mathcal O} ((w-z)^3),\quad \text{as } w\to z,\end{equation*} i.e., \begin{equation*}\theta(z,w,\zeta)=\frac {\psi(w,\zeta)-\psi(z,\zeta)}{w-z}= \frac {{\partial} \psi} {{\partial} z}(z,\zeta)+\frac {w-z} 2 \frac {{\partial}^2 \psi} {{\partial} z^2}(z,\zeta)+{\mathcal O}((w-z)^2).
\end{equation*} Differentiating with respect to $\zeta$ and using that ${b}_0={\partial}_1{\partial}_2\psi$ yields \begin{equation}\label{burt}\frac {{\partial}\theta} {{\partial}\zeta}={b}_0(z,\zeta)+ \frac {w-z} 2 \frac {{\partial} {b}_0} {{\partial} z}(z,\zeta)+{\mathcal O}((w-z)^2).\end{equation} Dividing both sides of \eqref{burt} by ${b}_0={b}_0(z,\zeta)$ and using \eqref{d0}, we get \begin{equation*}\frac 1 {\Theta_0}= \frac 1 {{b}_0}\frac {{\partial}\theta}{{\partial} \zeta}= 1+\frac {w-z} {2 {b}_0}\frac {{\partial} {b}_0} {{\partial} z}+{\mathcal O}((w-z)^2).\end{equation*} Inverting this relation we obtain \begin{equation*}\label{doeq}\Theta_0={b}_0/\frac {{\partial}\theta}{{\partial}\zeta}= 1-\frac {w-z} {2{b}_0}\frac {{\partial} {b}_0} {{\partial} z}+{\mathcal O}((w-z)^2).\end{equation*} In view of \eqref{B1}, it yields that \begin{equation*}\begin{split}B_1(z,\zeta(z,z,\theta)) &=-{b}_0(z,\zeta)\cdot \frac {{\partial}^2} {{\partial}\theta{\partial} w} \Theta_0(z,w,\theta)\biggm|_{w=z}=\\ &=\frac {{b}_0(z,\zeta)} 2\frac {\partial} {{\partial}\theta}\left ( \frac 1 {{b}_0(z,\zeta)}\frac {\partial} {{\partial} z} {b}_0(z,\zeta)\right )\biggm|_{z=w}=\\ &=\frac {{b}_0(z,\zeta)} 2 \frac {\partial} {{\partial}\zeta}\left ( \frac 1 {{b}_0(z,\zeta)}\frac{\partial} {{\partial} z} {b}_0(z,\zeta)\right )\biggm|_{z=w}/ \frac {{\partial}\theta}{{\partial}\zeta}(z,z,\zeta)=\\ &=\frac 1 2 \frac {\partial} {{\partial}\zeta}\left ( \frac 1 {{b}_0(z,\zeta)}\frac{\partial} {{\partial} z} {b}_0(z,\zeta)\right )={b}_1(z,\zeta).\\ \end{split} \end{equation*} Here we have used \eqref{d0}, \eqref{d01} and \eqref{funrel} to obtain the last equality. 
\end{proof} \subsection{The operator $\nabla_m$.} For the further analysis, it will be convenient to consider a differential operator $\nabla_m$ defined for complex smooth functions $A$ of the parameters $z$, $w$ and $\theta$ by \begin{equation}\label{nabla}\nabla_m A=\frac {{\partial} A} {{\partial}\theta}+m(z-w)A= \mathrm e^{-m\theta(z-w)}\frac {\partial} {{\partial} \theta}\left ( \mathrm e^{m\theta(z-w)}A(z,w,\theta)\right ).\end{equation} Cf. \cite{BBS}, p. 5. It will be useful to note that, evaluating at a point where $\theta=\phi_z(w)$ we have \begin{equation}\label{eval}\nabla_m A\biggm|_{\theta=\phi_z(w)}=\frac {\mathrm e^{-m\phi_z(w)(z-w)}} {{\overline{\partial}} \phi_z(w)}{\overline{\partial}}_w\left ( \mathrm e^{m\phi_z(w)(z-w)}A(z,w,\phi_z(w))\right ),\end{equation} as is easily verified by carrying out the differentiation and comparing with \eqref{nabla}. From this, we now derive an important identity, \begin{equation}\begin{split}\label{impo}\int_{S} u(w)&\chi(w)[\nabla_m A](z,w,\phi_z(w)){\overline{\partial}}_w\left (\mathrm e^{m\phi_z(w)(z-w)}\right ){{\mathrm d} A}(w)=\\ &=m\int_{S} u(w)\chi(w){\overline{\partial}}_w\left ( \mathrm e^{m\phi_z(w)(z-w)}A(z,w,\phi_z(w))\right ){{\mathrm d} A}(w)=\\ &=-m\int_{S} u(w){\overline{\partial}}\chi(w) \mathrm e^{m\phi_z(w)(z-w)}A(z,w,\phi_z(w)){{\mathrm d} A}(w)+\\ &\quad +\frac m {2\pi{\mathrm i}}\int_{{\partial} S}u(w)\chi(w)\mathrm e^{m\phi_z(w)(z-w)}A(z,w,\phi_z(w)){\mathrm d} w=\\ &=-m\int_{\{w\in S;\,\babs{z-w}\ge \varepsilon/2\}} u(w){\overline{\partial}}\chi(w) \mathrm e^{m\phi_z(w)(z-w)}A(z,w,\phi_z(w)){{\mathrm d} A}(w)+\\ &\quad +\frac m {2\pi{\mathrm i}}\int_{{\partial} S}u(w)\chi(w)\mathrm e^{m\phi_z(w)(z-w)}A(z,w,\phi_z(w)){\mathrm d} w ,\qquad z\in{\mathbb D}(z_0;\varepsilon) .\\ \end{split} \end{equation} Here we have used \eqref{eval} to deduce the first equality. The second equality follows from Green's formula. 
To get the last equality we have used that $\babs{z-w}\ge \varepsilon/2$ when ${\overline{\partial}}\chi(w)\ne 0$. We now have the following lemma, which is based on [\cite{BBS}, Lemma 2.3 and the discussion on p. 9]. \begin{lem} \label{tlem} There exist holomorphic functions $A_m(z,w,\theta)$ and $B(z,w,\theta)$, uniformly bounded on $F(U)$, such that \begin{equation}\label{tobe}\left ( 1+ \frac{{b}_1(z,\zeta)}{m{b}_0(z,\zeta)}\right ) \Theta_0(z,w,\theta)=1+m^{-1}\nabla_m A_m(z,w,\theta)+m^{-2}B(z,w,\theta),\end{equation} where it is understood that $\theta=\theta(z,w,\zeta)$. Moreover, we have that \begin{equation}\label{alim}\lim_{m\to\infty}A_m=\Upsilon_0,\end{equation} with uniform convergence on $F(U)$. \end{lem} \begin{proof} Put \begin{equation*}\Theta_1(z,w,\theta)=\frac {{b}_1(z,\zeta)} {{b}_0(z,\zeta)}\Theta_0(z,w,\theta),\end{equation*} so that the left hand side in \eqref{tobe} can be written \begin{equation}\label{obet} \left ( 1+\frac {{b}_1(z,\zeta)} {m{b}_0(z,\zeta)}\right )\Theta_0(z,w,\theta)= \Theta_0(z,w,\theta)+m^{-1}\Theta_1(z,w,\theta).\end{equation} Next, recall that $\Theta_0(z,w,\theta)-1=(z-w)\Upsilon_0(z,w,\theta),$ cf. \eqref{upsrel}. Further, from the relation \eqref{b1}, which reads \begin{equation*}\frac {{b}_1(z,\zeta)} {{b}_0(z,\zeta)}=-\left ( \frac {{\partial}^2} {{\partial}\theta{\partial} w}\Theta_0(z,w,\theta)\right )\biggm|_{z=w},\quad \zeta=\zeta(z,z,\theta),\end{equation*} it follows that \begin{equation}\begin{split}\label{1029}\Theta_1(z,z,\theta)&=-\left ( \frac {{\partial}^2} {{\partial}\theta{\partial} w} \Theta_0(z,w,\theta)\right ) \biggm|_{z=w}\cdot \Theta_0(z,z,\theta) = -\left ( \frac {{\partial}^2} {{\partial}\theta{\partial} w} \Theta_0(z,w,\theta)\right ) \biggm|_{z=w},\\ \end{split}\end{equation} where we have used that $\Theta_0(z,z,\theta)=1$ (see \eqref{d01}) to get the last equality. 
Identity \eqref{1029} allows us to write \begin{equation*}\Theta_1(z,w,\theta)+{\partial}_\theta{\partial}_w\Theta_0(z,w,\theta)=(z-w)\Upsilon_1(z,w,\theta),\end{equation*} where $\Upsilon_1$ is holomorphic. Now define \begin{equation*}A_m=\Upsilon_0+m^{-1}(\Upsilon_1-{\partial}_\theta{\partial}_w \Upsilon_0),\qquad B=-{\partial}_\theta(\Upsilon_1-{\partial}_\theta{\partial}_w\Upsilon_0).\end{equation*} Then $A_m$ and $B$ are holomorphic and $A_m\to\Upsilon_0$ as $m\to\infty$. Straightforward calculations show that \begin{equation}\label{dopf}\nabla_m A_m=m\left ( \Theta_0-1\right )+\Theta_1+m^{-1}\frac {\partial} {{\partial}\theta}\left ( \frac {\Theta_1} {z-w}-\frac {{\partial}_\theta \Theta_0} {(z-w)^2}\right )=m\left ( \Theta_0-1\right )+\Theta_1-m^{-1}B.\end{equation} Indeed, when $z\ne w$ we have \begin{equation*}\begin{split}A_m&=\frac {\Theta_0-1} {z-w}+m^{-1}\left ( \frac {\Theta_1+{\partial}_\theta{\partial}_w\Theta_0} {z-w}-{\partial}_w\left (\frac {{\partial}_\theta\Theta_0} {z-w}\right )\right )=\\ &=\frac {\Theta_0-1} {z-w}+m^{-1}\left ( \frac {\Theta_1} {z-w}-\frac {{\partial}_\theta\Theta_0} {(z-w)^2}\right )\\ \end{split}\end{equation*} so that \begin{equation*}{\partial}_\theta A_m=\frac {{\partial}_\theta \Theta_0} {z-w}+m^{-1}\left ( \frac {{\partial}_\theta\Theta_1} {z-w}-\frac {{\partial}_\theta^2\Theta_0} {(z-w)^2}\right ),\end{equation*} and \begin{equation*}m(z-w)A_m=m\left ( \Theta_0-1\right )+\Theta_1-\frac {{\partial}_\theta\Theta_0} {z-w},\end{equation*} which gives \eqref{dopf}, since $\nabla_m A_m={\partial}_\theta A_m+m(z-w)A_m$. Dividing through by $m$ in \eqref{dopf} now gives \begin{equation*}\Theta_0+m^{-1}\Theta_1=1+m^{-1}\nabla_m A_m+m^{-2}B,\end{equation*} and glancing at \eqref{obet} we obtain \eqref{tobe}. 
\end{proof} \subsection{Decomposition of $I_m$.} In view of \eqref{boss} and Lemma \ref{tlem}, we can now write \begin{equation}\begin{split}\label{recon}I_{m} u(z)&=\int_{S} \frac {u(w)\chi(w)} {z-w}\left ( 1+\frac {{b}_1(z,\bar{w})} {m{b}_0(z,\bar{w})}\right ) \Theta_0(z,w,\bar{w}){\overline{\partial}}_w\left ( \mathrm e^{m\phi_z(w)(z-w)}\right ) {{\mathrm d} A}(w)=\\ &=\int_{S} \frac {u(w)\chi(w)} {z-w} {\overline{\partial}}_w(\mathrm e^{m\phi_z(w)(z-w)}){{\mathrm d} A}(w)+\\ &+\frac 1 m\int_{S} \frac {u(w)\chi(w)} {z-w}[\nabla_m A_m](z,w,\phi_z(w)){\overline{\partial}}_w(\mathrm e^{m\phi_z(w)(z-w)}){{\mathrm d} A}(w)+\\ &+\frac 1 {m^2}\int_{S} \frac {u(w)\chi(w)} {z-w}B(z,w,\phi_z(w)){\overline{\partial}}_w(\mathrm e^{m\phi_z(w)(z-w)}){{\mathrm d} A}(w),\\ \end{split}\end{equation} where $A_m$ and $B$ are uniformly bounded and holomorphic near $(z_0,z_0,\phi_{z_0}(z_0))$. We recognize the first integral in the right hand side of \eqref{recon} as $I_{m}^1 u(z)$ (see \eqref{grep}). Let us denote the others by \begin{equation*}I_{m}^2 u(z)=\frac 1 m \int_{S} \frac {u(w)\chi(w)} {z-w}\nabla_m A_m(z,w,\phi_z(w)){\overline{\partial}}_w(\mathrm e^{m\phi_z(w)(z-w)}){{\mathrm d} A}(w),\end{equation*} and \begin{equation}\label{iih} I_{m}^3 u(z)=\frac 1 {m^2} \int_{S} \frac {u(w)\chi(w)} {z-w}B(z,w,\phi_z(w)){\overline{\partial}}_w(\mathrm e^{m\phi_z(w)(z-w)}){{\mathrm d} A}(w).\end{equation} \subsection{Estimates for $I_m^2$.} We start by estimating $I_{m}^2 u(z)$ when $z\in{\mathbb D}(z_0;\varepsilon)$, and note that \eqref{impo} yields that \begin{equation}\begin{split}\label{ffsp}I_{m}^2 u(z)=&-\int_{\{w\in S;\,\babs{z-w}\ge\varepsilon/2\}} u(w){\overline{\partial}} \chi(w) \mathrm e^{m\phi_z(w)(z-w)}A_m(z,w,\phi_z(w)){{\mathrm d} A}(w)+\\ &+\frac 1 {2\pi{\mathrm i}}\int_{{\partial} S}u(w)\chi(w)\mathrm e^{m\phi_z(w)(z-w)}A_m(z,w,\phi_z(w)){\mathrm d} w.\\ \end{split} \end{equation} Let us now define 
\begin{equation}\label{nudef}\nu_0(z,w)=\Upsilon_0(z,w,\phi_z(w)),\end{equation} so that $A_m(z,w,\phi_z(w))\to\nu_0(z,w)$ with uniform convergence for $z,w\in{\mathbb D}(z_0;2\varepsilon)$ when $m\to\infty$, by Lemma \ref{tlem}. Also let $C'=C'(\varepsilon)$ be an upper bound for $\{\babs{A_m(z,w,\phi_z(w))};\, z,w\in{\mathbb D}(z_0;2\varepsilon),\, m\ge 1\}$. We now use \eqref{ffsp} and the estimate \eqref{goodc} to get \begin{equation}\begin{split}\label{goo2}&\babs{I_{m}^2 u(z)-\frac 1 {2\pi{\mathrm i}}\int_{{\partial} S} u(w)\chi(w)\mathrm e^{m\phi_z(w)(z-w)}\nu_0(z,w){\mathrm d} w}\le\\ &\qquad\le C'\int_{\{w\in S;\,\babs{w-z}\ge \varepsilon/2\}} \babs{u(w){\overline{\partial}}\chi(w)}\mathrm e^{m(Q(z)/2-Q(w)/2-\delta_0\babs{z-w}^2/2)}{{\mathrm d} A}(w)\le\\ &\qquad\le C'\mathrm e^{m(Q(z)/2-\delta_0\varepsilon^2/8)}\int_{\mathbb C} \babs{u(w){\overline{\partial}}\chi(w)}\mathrm e^{-mQ(w)/2}{{\mathrm d} A}(w)\le\\ &\qquad\le C'\mathrm e^{m(Q(z)/2-\delta_0\varepsilon^2/8)}\|u\|_{mQ}\|{\overline{\partial}}\chi\|_{L^2}\le C_2\mathrm e^{m(Q(z)/2-\delta\varepsilon^2)}\|u\|_{mQ},\\ \end{split} \end{equation} where $\delta=\delta_0/8=\Delta Q(z_0)/16$, and $C_2=3C'$. \subsection{Estimates for $I_m^3$.} To estimate $I_m^3 u(z)$, we note that \eqref{iih} implies \begin{equation*}I_m^3 u(z)=\frac 1 m \int_{\mathbb C} u(w)\chi(w)B(z,w,\phi_z(w)) \mathrm e^{m\phi_z(w)(z-w)}{\overline{\partial}}\phi_z(w){{\mathrm d} A}(w).\end{equation*} It suffices to estimate this integral using \eqref{goodc}. 
This gives \begin{equation*}\babs{I^3_{m} u(z)}\le \frac 1 m \int_{\mathbb C} \babs{u(w)\chi(w)B(z,w,\phi_z(w))}\mathrm e^{m(Q(z)/2-Q(w)/2)-m\delta_0\babs{z-w}^2/2}\babs{{\overline{\partial}} \phi_z(w)}{{\mathrm d} A}(w).\end{equation*} Since the function $\babs{B(z,w,\phi_z(w)){\overline{\partial}}\phi_z(w)\chi(w)}$ can be estimated by a number $C_3$ independent of $z$ and $w$ for $z\in{\mathbb D}(z_0;\varepsilon)$ and $w\in {\mathbb D}(z_0;2\varepsilon)$, this gives, in view of the Cauchy--Schwarz inequality, \begin{equation}\label{goo3}\begin{split}\babs{I^3_{m}u(z)}&\le C_3m^{-1}\mathrm e^{mQ(z)/2}\int_{\mathbb C}\babs{u(w)\chi(w)}\mathrm e^{-mQ(w)/2}\mathrm e^{-m\delta_0\babs{z-w}^2/2}{{\mathrm d} A}(w)\le\\ &\le C_3m^{-1}\mathrm e^{mQ(z)/2}\|u\|_{mQ}\left (\int_{\mathbb C} \mathrm e^{-m\delta_0\babs{z-w}^2}{{\mathrm d} A}(w)\right )^{1/2}\le C_3\delta_0^{-1/2}m^{-3/2}\mathrm e^{mQ(z)/2}\|u\|_{mQ},\\ \end{split}\end{equation} where $C_3=C_3(\varepsilon)=\sup\big\{\babs{B(z,w,\phi_z(w)){\overline{\partial}}\phi_z(w)\chi(w)};\, z,w\in {\mathbb D}(z_0;2\varepsilon)\big\}$. 
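For the reader's convenience, we record the elementary Gaussian evaluation behind the last step (assuming the normalization ${{\mathrm d} A}={\mathrm d} x\,{\mathrm d} y/\pi$ of the area measure used throughout): \begin{equation*}\int_{\mathbb C} \mathrm e^{-m\delta_0\babs{z-w}^2}{{\mathrm d} A}(w)=2\int_0^\infty \mathrm e^{-m\delta_0 r^2}\,r\,{\mathrm d} r=\frac 1 {m\delta_0}.\end{equation*} Its square root contributes the factor $\delta_0^{-1/2}m^{-1/2}$, which together with the prefactor $m^{-1}$ accounts for the bound $C_3\delta_0^{-1/2}m^{-3/2}\mathrm e^{mQ(z)/2}\|u\|_{mQ}$ in \eqref{goo3}. 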
\subsection{Conclusion of the proof of Theorem \ref{mlem}} By \eqref{recon} and the estimates \eqref{goo1}, \eqref{goo2} and \eqref{goo3}, (using that $\phi_z(w)(z-w)=\psi(z,\bar{w})-Q(w)$) \begin{equation*}\begin{split}&\babs{I_{m} u(z)-u(z)-\frac 1 {2\pi{\mathrm i}}\int_{{\partial} S}u(w)\chi(w)\left ( \frac 1 {z-w}+\nu_0(z,w)\right )\mathrm e^{m(\psi(z,\bar{w})-Q(w))}{\mathrm d} w}\le\\ &\le \babs{I_{m}^1 u(z)-u(z)-\frac 1 {2\pi{\mathrm i}}\int_{{\partial} S}\frac{u(w)\chi(w)\mathrm e^{m\phi_z(w)(z-w)}} {z-w}{\mathrm d} w}+\\ &\qquad+\babs{I_{m}^2u(z)-\frac 1 {2\pi{\mathrm i}}\int_{{\partial} S}u(w)\chi(w)\nu_0(z,w)\mathrm e^{m\phi_z(w)(z-w)}{\mathrm d} w}+ \babs{I_{m}^3u(z)}\le\\ &\le C_1\varepsilon^{-1}\mathrm e^{m(Q(z)/2-\delta\varepsilon^2)}\|u\|_{mQ}+C_2\mathrm e^{m(Q(z)/2-\delta\varepsilon^2)} \|u\|_{mQ}+C_3\delta_0^{-1/2}m^{-3/2}\mathrm e^{mQ(z)/2}\|u\|_{mQ},\quad z\in{\mathbb D}(z_0;\varepsilon)\cap S^\circ.\\ \end{split} \end{equation*} It thus suffices to choose a number $C$ large enough that \begin{equation}\label{itsp}C_1\varepsilon^{-1}\mathrm e^{-\delta\varepsilon^2 m}+C_2\mathrm e^{-\delta\varepsilon^2 m} +C_3\delta_0^{-1/2}m^{-3/2} \le Cm^{-3/2},\quad m\ge 1.\end{equation} Now recall that the numbers $C_1,$ $C_2,$ and $C_3$ only depend on $\varepsilon$, where $\varepsilon>0$ can be chosen independently of $z_0$ for $z_0$ in the given compact subset $S$ of ${X}$. Moreover, we have that $\delta=\Delta Q(z_0)/16$, and $\Delta Q(z_0)$ is bounded below by a positive number for $z_0\in S$. It follows that $C$ in \eqref{itsp} can be chosen independently of $z_0\in S$, and the proof is finished. $\qed$ \section{The proof of Theorem \ref{th3}}\label{proof} \subsection{Preliminaries.} In this section we prove Th. \ref{th3}. Our approach, which follows [\cite{BBS}, Sect. 3], is obtained by assembling the information from Lemma \ref{lemm2}, Th. \ref{mlem}, and Cor. \ref{bh}. To facilitate the presentation, we divide the proof into steps. 
First recall that adding a constant to $Q$ means that $K_{m,n}$ is only changed by a multiplicative constant, and hence we can (and will) assume that $Q\ge 1$ on ${\mathbb C}$. Fix a compact subset $K\Subset {\mathcal S}_\tau^\circ\cap{X}$ and a point $z_0\in {\mathcal S}_\tau^\circ\cap {X}$, and let $\varepsilon>0$ and $\chi\in \coity({\mathbb C})$ be as in Th. \ref{mlem}. Choosing $\varepsilon>0$ somewhat smaller if necessary, we may ensure that $2\varepsilon<\operatorname{dist\,}(K,{\mathbb C}\setminus({\mathcal S}_\tau\cap{X}))$ and that \begin{equation}\label{leed}\babs{K_m^1(z,w)}^2\mathrm e^{-m(Q(z)+Q(w))}\le Cm^2\mathrm e^{-\delta_0 m\babs{z-w}^2},\quad z,\bar{w}\in {\mathbb D}(z_0;2\varepsilon),\quad z_0\in K,\end{equation} for some positive numbers $C$ and $\delta_0$ (see \eqref{lead}). We also introduce the set \begin{equation*}{\Sigma}=\{z\in{\mathbb C};\, \operatorname{dist\,}(z,K)\le 2\varepsilon\}\Subset {\mathcal S}_\tau^\circ\cap{X}.\end{equation*} Our goal is to prove an estimate \begin{equation*}\babs{K_{m,n}(z,w)-K_m^1(z,w)}\mathrm e^{-m(Q(z)+Q(w))/2}\le Cm^{-1},\quad z,w\in{\mathbb D}(z_0;\varepsilon),\quad n\ge m\tau-M,\quad m\ge m_0, \end{equation*} where $M\ge 0$ is given, and $C$ and $m_0$ are independent of the specific choice of the point $z_0\in K$. In the following, we let $C$ denote various (more or less) absolute constants which can change meaning from time to time, even within the same chain of inequalities. 
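For the reader's convenience, we verify the invariance under adding a constant that was used above: if $\widetilde Q=Q+c$ for a constant $c$, then the space ${H}_{m,n}$ is unchanged while $\|u\|_{m\widetilde Q}^2=\mathrm e^{-mc}\|u\|_{mQ}^2$, so the reproducing property forces \begin{equation*}\widetilde K_{m,n}(z,w)=\mathrm e^{mc}K_{m,n}(z,w),\qquad \widetilde K_{m,n}(z,w)\,\mathrm e^{-m(\widetilde Q(z)+\widetilde Q(w))/2}=K_{m,n}(z,w)\,\mathrm e^{-m(Q(z)+Q(w))/2}.\end{equation*} In particular, the weighted kernel appearing in all our estimates is unaffected by the normalization $Q\ge 1$. 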
Let $P_{m,n}:L^2_{mQ}\to {H}_{m,n}$ denote the orthogonal projection, that is, \begin{equation*}P_{m,n}u(z)= \int_{\mathbb C} u(w)K_{m,n}(z,w)\mathrm e^{-mQ(w)} {{\mathrm d} A}(w).\end{equation*} Throughout the proof we \textit{fix} two points $z,w\in{\mathbb D}(z_0;\varepsilon)$ and introduce the functions \begin{equation}\label{func1}u_z(\zeta)=K_{m,n}(\zeta,z)\quad\text{and}\quad v_w(\zeta)=\begin{cases} \chi(\zeta)K_m^1(\zeta,w),\quad& \zeta\in {\mathbb D}(z_0;2\varepsilon),\cr 0, &\text{otherwise},\cr \end{cases} \end{equation} as well as \begin{equation}\label{dwdef} d_w(\zeta)=v_w(\zeta)-P_{m,n}v_w(\zeta),\qquad \zeta\in{\mathbb C}.\end{equation} We note that $v_w\in{\mathcal C}_0^\infty({\mathbb C})$ with $\operatorname{supp} v_w\subset {\Sigma}\Subset{\mathcal S}_\tau^\circ\cap {X}.$ \begin{lem} \label{lemma2} There exists a positive number $C$ independent of $m\ge 1$, $n$, $z$ and $w$, such that \begin{equation}\label{app1} \big|K_{m,n}(z,w)-P_{m,n}v_w(z)\big|\le Cm^{-1} \mathrm e^{m(Q(z)+Q(w))/2}.\end{equation} Moreover, $C$ can be chosen independent of $z_0$ for $z_0\in K$. \end{lem} \begin{proof} By Th. \ref{mlem}, there is a number $C$ depending only on $\varepsilon$ and $K$, such that \begin{equation*}\babs{I_{m} u_z(w)-u_z(w)} \le Cm^{-3/2}\mathrm e^{mQ(w)/2}\|u_z\|_{mQ}, \end{equation*} where $I_{m}$ is given by \eqref{izm} with $S=\Sigma$. Next note that the estimate \eqref{berg1} implies \begin{equation*}\|u_{z}\|_{mQ}^2=K_{m,n}(z,z)\le Cm\mathrm e^{mQ(z)}, \end{equation*} with a number $C$ depending only on $\tau$. We conclude that \begin{equation}\label{fest}\babs{I_{m} u_z(w)-K_{m,n}(w,z)}\le Cm^{-1}\mathrm e^{m(Q(z)+Q(w))/2}.\end{equation} We next note that \begin{equation}\label{ik1}\overline{I_{m} u_z(w)}= \int_{\mathbb C} \chi(\zeta)K_m^1(\zeta,w)K_{m,n}(z,\zeta)\mathrm e^{-mQ(\zeta)}{{\mathrm d} A}(\zeta) =P_{m,n}v_w(z),\end{equation} with $v_w$ given by \eqref{func1}. 
Thus the statement \eqref{app1} is immediate from \eqref{ik1} and \eqref{fest}. \end{proof} \begin{lem} \label{lemma3} Let two positive numbers ${M_0}$ and $\bpar$ be given such that \eqref{sss} is fulfilled, and put $m_0=\max\{(1+{M_0})/\tau,2{M_0}\}$. Then for all $m,n$ such that $n\ge ](m-{M_0})\tau+\bpar[$ and $m\ge m_0$, there exist positive numbers $\delta$ and $C$, independent of $z_0$, $m$, $n$, $z$, and $w$, such that \begin{equation}\label{sest} \babs{d_w(z)}^2\le Cm^3\mathrm e^{-m\delta}\mathrm e^{m(Q(z)+Q(w))}. \end{equation} \end{lem} \begin{proof} The function $d_w$ is the $L^2_{mQ,n}$-minimal solution to the ${\overline{\partial}}$-equation ${\overline{\partial}} d_w={\overline{\partial}} v_w$. Since $\operatorname{supp} v_w\subset {\mathcal S}_\tau$, Cor. \ref{bh} yields that \begin{equation}\label{dbest} \|d_w\|_{mQ}^2\le C\|{\overline{\partial}} v_w\|_{mQ}^2,\qquad m\ge m_0,\quad n\ge ](m-{M_0})\tau+\bpar[,\end{equation} with a number $C$ depending only on $\tau$, ${M_0}$ and $\bpar$. But ${\overline{\partial}} v_w(\zeta)={\overline{\partial}} \chi(\zeta)K_m^1(\zeta,w)$, whence the estimate \eqref{leed} shows that there are numbers $C$ and $\delta_0>0$ such that \begin{equation}\label{test}|{\overline{\partial}} v_w(\zeta)|^2 \mathrm e^{-m(Q(\zeta)+Q(w))}\le Cm^2|{\overline{\partial}}\chi(\zeta)|^2\mathrm e^{-m\delta_0\babs{\zeta-w}^2},\qquad \zeta\in{\mathbb C}. \end{equation} Now, since $\babs{\zeta-w}\ge \varepsilon/2$ whenever ${\overline{\partial}}\chi(\zeta)\ne 0$, we deduce from \eqref{test} that \begin{equation}\label{quest}|{\overline{\partial}} v_w(\zeta)|^2\mathrm e^{-mQ(\zeta)}\le Cm^2|{\overline{\partial}}\chi(\zeta)|^2\mathrm e^{m(Q(w)-\delta)},\quad \zeta \in{\mathbb C},\end{equation} where $\delta=\delta_0\varepsilon^2/4$. 
Integrating the inequality \eqref{quest} with respect to ${{\mathrm d} A}(\zeta)$ and then using \eqref{dbest}, we get \begin{equation}\label{cest}\|d_w\|_{mQ}^2\le C\|{\overline{\partial}} v_w\|_{mQ}^2 \le Cm^2\mathrm e^{m(Q(w)-\delta)}\|{\overline{\partial}}\chi\|_{L^2}^2\le Cm^2\mathrm e^{m(Q(w)-\delta)}. \end{equation} Next note that the function $d_w$ is holomorphic in the disk ${\mathbb D}(z_0;3\varepsilon/2)$, so that Lemma \ref{lemm2} gives \begin{equation}\label{desto} |d_w(z)|^2\mathrm e^{-mQ(z)}\le Cm\int_{{\mathbb D}(z;\varepsilon/\sqrt{m})}\babs{d_w(\zeta)}^2\mathrm e^{-mQ(\zeta)}{{\mathrm d} A}(\zeta)\le Cm \|d_w\|^2_{mQ}, \end{equation} where $C$ only depends on $\varepsilon$ and $\tau$. Combining \eqref{desto} with \eqref{cest}, we end up with \eqref{sest}, and so we are done. \end{proof} \subsection{Conclusion of the proof of Theorem \ref{th3}.} Since $\chi(z)=1$, we have that $v_w(z)=K_m^1(z,w)$, whence by \eqref{dwdef} \begin{equation*}\babs{K_{m,n}(z,w)-K_m^1(z,w)}= \babs{K_{m,n}(z,w)-v_w(z)} \le\babs{K_{m,n}(z,w)-P_{m,n}v_w(z)}+\babs{d_w(z)}. \end{equation*} In view of Lemmas \ref{lemma2} and \ref{lemma3}, the right hand side can be estimated by \begin{equation*}C\left ( m^{-1}+m^{3/2}\mathrm e^{-m\delta/2}\right ) \mathrm e^{m(Q(z)+Q(w))/2},\end{equation*} whenever $n\ge ](m-{M_0})\tau+\bpar[$ and $m\ge m_0$. For $m\ge 1$, the latter expression is dominated by $Cm^{-1}\mathrm e^{m(Q(z)+Q(w))/2}$. The proof is finished. $\qed$ \section{Berezin quantization and Gaussian convergence} \label{p2} \subsection{Preliminaries.} In this section we use the expansion formula for $K_{m,n}$ (Th. \ref{th3}) to prove Theorems \ref{th1} and \ref{th2}. In the proofs, we set $\tau=1$ and $m=n$. (The argument in the general case follows the same pattern.) It then becomes natural to write $K_n$ for $K_{n,n}$ etc. 
We also fix a compact subset $K\Subset {\mathcal S}_1^\circ\cap{X}$, a point $z_0\in K$, and a positive number $\varepsilon$ with the properties listed in Th. \ref{th3}, and we put \begin{equation*}\delta_n=\varepsilon\log n/\sqrt{n}.\end{equation*} \subsection{The proof of Theorem \ref{th1}} It suffices to show that \begin{equation}\label{firstest}B_{n}^{\langle z_0\rangle}({\mathbb D}(z_0;\delta_n))\to 1\quad \text{as}\quad n\to \infty.\end{equation} Indeed, since $B_{n}^{\langle z_0\rangle}$ is a p.m., this implies Th. \ref{th1}. In order to prove \eqref{firstest}, we apply Th. \ref{th3}, which gives \begin{equation*}K_{n}(z_0,z)\mathrm e^{-n(Q(z)+Q(z_0))/2}= \left ( n{b}_0(z_0,\bar{z})+{b}_1(z_0,\bar{z})\right )\mathrm e^{n(\operatorname{Re}\psi(z_0,\bar{z})-(Q(z)+Q(z_0))/2)}+ {\mathcal O}(n^{-1}),\quad z\in{\mathbb D}(z_0;\varepsilon)\end{equation*} when $n\to\infty$, where the ${\mathcal O}$ is uniform for $z_0\in K$. In view of \eqref{bbs}, we have \begin{equation}\label{2ndest}K_{n}(z_0,z)\mathrm e^{-n(Q(z)+Q(z_0))/2}= \left ( n{b}_0(z_0,\bar{z})+{\mathcal O}(1)\right ) \mathrm e^{n(-\Delta Q(z_0)\babs{z-z_0}^2/2+R(z_0,z))}+ {\mathcal O}(n^{-1}),\quad z\in{\mathbb D}(z_0;\varepsilon)\end{equation} with a function $R$ satisfying $\babs{R(z,w)}\le C\babs{z-w}^3$ whenever $z,w\in {\mathbb D}(z_0;\varepsilon)$ and $z_0\in K$. Note that \begin{equation}\label{mroh}n\babs{R(z_0,z)}\le Cn\delta_n^3\le C\log^3 n/\sqrt{n},\qquad \text{when}\quad z\in{\mathbb D}(z_0;\delta_n),\quad z_0\in K.\end{equation} Introducing this estimate in \eqref{2ndest} gives \begin{equation*}\babs{K_{n}(z_0,z)}^2\mathrm e^{-n(Q(z)+Q(z_0))}= \left ( n^2\babs{b_0(z_0,\bar{z})}^2+{\mathcal O}(n)\right )\mathrm e^{-n\Delta Q(z_0)\babs{z_0-z}^2+{\mathcal O}(\log^3 n/\sqrt{n})}+{\mathcal O}(n^{-2}), \end{equation*} for $z\in{\mathbb D}(z_0;\delta_n)$ as $n\to\infty$, where the ${\mathcal O}$-terms are uniform for $z_0\in K$. 
Furthermore, \eqref{ber} yields that \begin{equation}\label{3dest}\frac {\babs{K_{n}(z_0,z)}^2} {K_{n}(z_0,z_0)}\mathrm e^{-nQ(z)}=\frac {\babs{K_{n}(z_0,z)}^2} {n\Delta Q(z_0)+{\mathcal O}(1)}\mathrm e^{-n(Q(z_0)+Q(z))},\quad z\in{\mathbb D}(z_0;\varepsilon),\end{equation} as $n\to\infty$. Note that the left hand side in \eqref{3dest} is the density $\berd_{n}^{\langle z_0\rangle}(z)$, so that \eqref{2ndest} and \eqref{3dest} imply \begin{equation}\label{4thest}\berd_{n}^{\langle z_0\rangle}(z)=\frac {n^2\babs{{b}_0(z_0,\bar{z})}^2+{\mathcal O}(n)} {n\Delta Q(z_0)+{\mathcal O}(1)}\mathrm e^{-n\Delta Q(z_0)\babs{z-z_0}^2+{\mathcal O}(\log^3n/\sqrt{n})} +{\mathcal O}(n^{-2}),\quad z\in {\mathbb D}(z_0;\delta_n).\end{equation} Integrating \eqref{4thest} with respect to ${{\mathrm d} A}$ over ${\mathbb D}(z_0;\delta_n)$ and using the mean-value theorem for integrals now gives that there are positive numbers $v_{n,z_0}$ converging to $1$ (uniformly for $z_0\in K$), and also complex numbers $b_{n,z_0}$ converging to ${b}_0(z_0,\bar{z}_0)=\Delta Q(z_0)$ (uniformly for $z_0\in K$) as $n\to\infty$, such that \begin{equation*}\begin{split}B_{n}^{\langle z_0\rangle}({\mathbb D}(z_0;\delta_n))&= v_{n,z_0}\frac {n^2\babs{b_{n,z_0}}^2+{\mathcal O}(n)} {n\Delta Q(z_0)+{\mathcal O}(1)}\int_{{\mathbb D}(z_0;\delta_n)} \mathrm e^{-n\Delta Q(z_0)\babs{z-z_0}^2}{{\mathrm d} A}(z)+{\mathcal O}(n^{-2})\int_{{\mathbb D}(z_0;\delta_n)}{{\mathrm d} A}(z)=\\ &=v_{n,z_0}\frac {n^2\babs{b_{n,z_0}}^2+{\mathcal O}(n)} {n^2\Delta Q(z_0)^2+{\mathcal O}(n)}\left ( 1-\mathrm e^{-\Delta Q(z_0)\varepsilon^2\log^2 n}\right )+{\mathcal O}(n^{-2}).\\ \end{split} \end{equation*} The expression in the right hand side converges to $1$ as $n\to\infty$. This proves \eqref{firstest}, and Th. \ref{th1} follows. 
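In the last step we used the elementary Gaussian evaluation (writing $a=\Delta Q(z_0)$ for brevity, and assuming the normalization ${{\mathrm d} A}={\mathrm d} x\,{\mathrm d} y/\pi$ of the area measure): \begin{equation*}\int_{{\mathbb D}(z_0;\delta_n)}\mathrm e^{-na\babs{z-z_0}^2}{{\mathrm d} A}(z)=2\int_0^{\delta_n}\mathrm e^{-nar^2}\,r\,{\mathrm d} r=\frac 1 {na}\left ( 1-\mathrm e^{-na\delta_n^2}\right )=\frac 1 {na}\left ( 1-\mathrm e^{-a\varepsilon^2\log^2 n}\right ),\end{equation*} which accounts both for the factor $1-\mathrm e^{-\Delta Q(z_0)\varepsilon^2\log^2 n}$ and for the passage from the denominator $n\Delta Q(z_0)+{\mathcal O}(1)$ to $n^2\Delta Q(z_0)^2+{\mathcal O}(n)$. 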
$\qed$ \subsection{The proof of Theorem \ref{th2}} It suffices to show that there are numbers $\varepsilon_n$ converging to zero as $n\to\infty$ such that \begin{equation}\label{5thest}\int_{\mathbb C}\babs{\berd_{n}^{\langle z_0\rangle}(z)-n\Delta Q(z_0)\mathrm e^{-n\Delta Q(z_0)\babs{z-z_0}^2}}{{\mathrm d} A}(z)\le \varepsilon_n,\qquad z_0\in K.\end{equation} Indeed, \eqref{gc1} follows from \eqref{5thest} after the change of variables $z\mapsto z_0+z/\sqrt{n\Delta Q(z_0)}$ in the integral in \eqref{5thest}. To prove \eqref{5thest}, we split the integral with respect to the decomposition $\{\babs{z-z_0}<\delta _n\}\cup\{\babs{z-z_0}\ge \delta_n\}$, and put \begin{equation*}A_{n,z_0}= \int_{\{z;\babs{z-z_0}<\delta_n\}}\babs{\berd_{n}^{\langle z_0\rangle}(z)-n\Delta Q(z_0)\mathrm e^{-n\Delta Q(z_0)\babs{z-z_0}^2}}{{\mathrm d} A}(z),\end{equation*} \begin{equation*}B_{n,z_0}= \int_{\{z;\babs{z-z_0}\ge \delta_n\}}\babs{\berd_{n}^{\langle z_0\rangle}(z)-n\Delta Q(z_0)\mathrm e^{-n\Delta Q(z_0)\babs{z-z_0}^2}}{{\mathrm d} A}(z).\end{equation*} Considering $A_{n,z_0}$ first, we get from \eqref{2ndest} and \eqref{3dest} that \begin{equation*}A_{n,z_0}= \int_{{\mathbb D}(z_0;\delta_n)}\babs{\mathrm e^{nR(z_0,z)}\frac {n^2\babs{{b}_0(z_0,\bar{z})}^2+{\mathcal O}(n)} {n\Delta Q(z_0)+{\mathcal O}(1)}-n\Delta Q(z_0)}\mathrm e^{-n\Delta Q(z_0)\babs{z-z_0}^2}{{\mathrm d} A}(z)+{\mathcal O}(n^{-2}),\end{equation*} as $n\to\infty$. Next we put \begin{equation*}s_{n,z_0}=\sup_{z\in{\mathbb D}(z_0;\delta_n)}\bigg\{\babs{\mathrm e^{nR(z_0,z)}\frac {n^2\babs{{b}_0(z_0,\bar{z})}^2+{\mathcal O}(n)} {n\Delta Q(z_0)+{\mathcal O}(1)}-n\Delta Q(z_0)}\bigg\},\end{equation*} and observe that \eqref{mroh} implies that $s_{n,z_0}/n\to 0$, uniformly for $z_0\in K$. 
It yields that \begin{equation*}A_{n,z_0}\le s_{n,z_0}\int_{{\mathbb D}(z_0;\delta_n)}\mathrm e^{-n\Delta Q(z_0)\babs{z-z_0}^2}{{\mathrm d} A}(z)+{\mathcal O}(n^{-2})\le Cs_{n,z_0}/n,\end{equation*} which converges to $0$ as $n\to \infty$ uniformly for $z_0\in K$. To estimate $B_{n,z_0}$ we simply observe that \begin{equation}\label{bmest}B_{n,z_0}\le \int_{\{z;\babs{z-z_0}\ge\delta_n\}}\berd_{n}^{\langle z_0\rangle}(z){{\mathrm d} A}(z)+\int_{\{z;\babs{z-z_0}\ge\delta_n\}}n\Delta Q(z_0)\mathrm e^{-n\Delta Q(z_0)\babs{z-z_0}^2}{{\mathrm d} A}(z). \end{equation} Since $B_{n}^{\langle z_0\rangle}$ is a p.m., the estimate \eqref{firstest} yields that the first integral in the right hand side of \eqref{bmest} converges to $0$ as $n\to\infty$. Moreover, a simple calculation yields that the second integral in \eqref{bmest} equals $\mathrm e^{-n\Delta Q(z_0)\delta_n^2}=\mathrm e^{-\Delta Q(z_0)\varepsilon^2\log^2 n}$, and hence it converges to $0$ when $n\to\infty$. We have shown that $B_{n,z_0}\to 0$ as $n\to \infty$ with uniform convergence for $z_0\in K$. The proof is finished. $\qed$ \section{Off-diagonal damping}\label{point} \subsection{An estimate for $K_{m,n}$.} In this section we prove Th. \ref{th1.5}. We shall obtain that theorem from Th. \ref{flock} below, which is of independent interest, and has applications in random matrix theory, see \cite{AHM}. Our analysis depends on the following lemma. It will be convenient to define the set \begin{equation*}{\mathcal S}_{\tau,1}=\{\zeta;\, \operatorname{dist\,}(\zeta,{\mathcal S}_\tau)\le 1\}.\end{equation*} \begin{lem}\label{propn1} Assume that $Q\in{\mathcal C}^2({\mathbb C})$. Let $z_0\in{\mathcal S}_\tau\cap{X}$ and let $M$ be a given non-negative number. 
Put \begin{equation*}8d=\operatorname{dist\,}(z_0,{\mathbb C}\setminus({\mathcal S}_\tau\cap{X})),\qquad a=\inf\{\Delta Q(\zeta);\, \zeta\in{\mathbb D}(z_0;6d)\},\qquad A=\sup\{\Delta Q(\zeta);\, \zeta\in{\mathcal S}_{\tau,1}\}.\end{equation*} There then exist positive numbers $C$ and $\epsilon$ such that for all $m,n\ge 1$ with $m\tau-M\le n\le m\tau+1$, we have \begin{equation}\label{bupp} \babs{K_{m,n}(z_0,z)}^2\mathrm e^{-m(Q(z_0)+Q(z))}\le C m^2\mathrm e^{-\epsilon\sqrt{m}\min\{d,\babs{z_0-z}\}} ,\qquad z\in {\mathcal S}_\tau. \end{equation} Here $C$ depends only on $M$, $a$, $A$, and $\tau$, while $\epsilon$ only depends on $a$, $\tau$ and $M$. \end{lem} \begin{rem} For fixed $\tau$ and $M$, our method of proof gives that $\epsilon$ can be chosen proportional to $a$ while $C$ can be chosen proportional to $a^{-1}$. Indeed, our proof shows that there is a positive number $c$ depending only on $\tau$ and $M$ such that, with $a^\prime=\min\{a,1\}$, \begin{equation*}\babs{K_{m,n}(z_0,z)}^2\mathrm e^{-m(Q(z_0)+Q(z))}\le C\frac {m^3} {am+c}\mathrm e^{-\epsilon'a'\sqrt{m}\min\{d,\babs{z_0-z}\}} ,\qquad z\in {\mathcal S}_\tau, \end{equation*} with $C$ and $\epsilon'$ independent of $a$. This estimate can easily be extended to all $z\in{\mathbb C}$ by adapting the proof of Th. \ref{flock} below. \end{rem} \noindent Before we turn to the proof of Lemma \ref{propn1}, we use it to prove the main result of this section. \begin{thm}\label{flock} In the situation of Lemma \ref{propn1} we also have an estimate \begin{equation}\label{brest}\babs{K_{m,n}(z_0,z)}^2\mathrm e^{-m(Q(z_0)+Q(z))}\le Cm^2\mathrm e^{-\epsilon\sqrt{m}\min\{d,\babs{z_0-z}\}}\mathrm e^{-m (Q(z)-\widehat{Q}_\tau(z))},\qquad z\in{\mathbb C},\end{equation} valid for all $z_0\in {\mathcal S}_\tau\cap X$, $m\ge 1$, and all $n$ with $m\tau-M\le n\le m\tau+1$. 
\end{thm} \begin{proof} In view of Lemma \ref{propn1}, and since $Q=\widehat{Q}_\tau$ on ${\mathcal S}_\tau$, it suffices to show the estimate \eqref{brest} when $z\not\in {\mathcal S}_\tau$. To this end, we consider the function \begin{equation*}f(\zeta)=\log\babs{K_{m,n}(z_0,\zeta)}^2-m\widehat{Q}_\tau(\zeta). \end{equation*} Since $\widehat{Q}_\tau$ is harmonic on ${\mathbb C}\setminus{\mathcal S}_\tau$, $f$ is subharmonic there. Moreover, since $n-1\le m\tau$ and since $K_{m,n}(\cdot,z_0)\in {H}_{m,n}$, we have a simple estimate \begin{equation*}\log\babs{K_{m,n}(z_0,\zeta)}^2\le (n-1)\log\babs{\zeta}^2+{\mathcal O}(1)\le m\tau\log\babs{\zeta}^2+ {\mathcal O}(1) \qquad \text{when}\quad \zeta\to\infty,\end{equation*} while the relation \eqref{qtau} says that $\widehat{Q}_\tau(\zeta)= \tau\log\babs{\zeta}^2+{\mathcal O}(1)$ when $\zeta\to \infty$. Hence $f$ is bounded above. Furthermore, it is clear that $f$ is harmonic in a punctured neighbourhood of $\infty$, which yields that $f$ has a representation $f(\zeta)=h(\zeta)-c\log\babs{\zeta}$ for all large enough $|\zeta|$, where $c$ is a non-negative number and $h$ is harmonic at $\infty$ (see e.g. [\cite{ST}, Cor. 0.3.7, p. 12]). In particular, $f$ extends to a subharmonic function on ${\mathbb C}^*\setminus {\mathcal S}_\tau$. Finally, since Lemma \ref{propn1} yields that \begin{equation*}f(\zeta)\le \log(Cm^2)+mQ(z_0)-\epsilon\sqrt{m}d,\qquad \text{when}\quad \zeta\in{\partial}{\mathcal S}_\tau,\end{equation*} the maximum principle shows that the same estimate holds for all $\zeta \in {\mathbb C}^*\setminus {\mathcal S}_\tau$. This means that \begin{equation}\babs{K_{m,n}(z_0,z)}^2\mathrm e^{-m\widehat{Q}_\tau(z)}\le Cm^2 \mathrm e^{-\epsilon\sqrt{m}d}\mathrm e^{mQ(z_0)},\qquad \text{when}\quad z\not\in {\mathcal S}_\tau, \end{equation} which implies \eqref{brest} when $z\not\in {\mathcal S}_\tau$. \end{proof} \noindent\bf Background. 
\rm Estimates related to those considered in Lemma \ref{propn1} are known, see e.g. Lindholm's article \cite{L}, Prop. 9, pp. 404--407. We will follow the basic strategy used in that paper in our proof below. Since we are considering a different situation with polynomial Bergman kernels (instead of the full Bergman kernel of $A^2_{mQ}$), and since we are not assuming the weight $Q$ to be globally strictly subharmonic, nontrivial modifications of the classical arguments are needed. Our main tool for accomplishing this is provided by the weighted $L^2$ estimates in Th. \ref{boh}. \subsection{The proof of Lemma \ref{propn1}} We can and will assume that $Q\ge 1$ on ${\mathbb C}$. Moreover, we will denote various constants by the same letter $C$, which can change meaning during the course of the calculations. When $C$ depends on one or several parameters, we will always specify this. To simplify the proof we will first reduce the problem by treating three simple cases. \subsection{Case 1: $md^2\le 1$.} Since we have assumed that $n\le m\tau+1$, the estimate \eqref{berg2} of Prop. \ref{lemma1} applies. It gives that \begin{equation}\label{spuc}\babs{K_{m,n}(z_0,z)}^2\mathrm e^{-m(Q(z_0)+Q(z))}\le Cm^2,\quad z\in{\mathbb C},\end{equation} with a number $C$ depending only on $\tau$. The assertion \eqref{bupp} follows immediately in case $d=0$. In the remaining case we have $d>0$ and $m\le d^{-2}$. Then, for any $\epsilon>0$ and any $z\in{\mathbb C}$, we have that $\mathrm e^{-\epsilon\sqrt{m}\min\{d,\babs{z_0-z}\}}\ge \mathrm e^{-\epsilon d^{-1}d}=\mathrm e^{-\epsilon},$ so that \begin{equation}\label{buss}\babs{K_{m,n}(z_0,z)}^2\mathrm e^{-m(Q(z_0)+Q(z))}\le C\mathrm e^\epsilon\mathrm e^{-\epsilon\sqrt{m}\min\{d,\babs{z_0-z}\}}.\end{equation} This gives the desired estimate \eqref{bupp} with $C$ replaced by $C\mathrm e^\epsilon$. \subsection{Notation}\label{nota} We now fix positive numbers ${M_0}$ and $\bpar$ such that the relation \eqref{sss} is satisfied. 
We further choose ${M_0}$ and $\bpar$ so that $](m-{M_0})\tau+\bpar[\le m\tau-M$ for all $m\ge 1$, and let $m_0=\max\{(1+{M_0})/\tau,4{M_0}\}$. Let $n$ be a positive integer such that $n\le m\tau+1$. We also fix a point $z\in{\mathcal S}_\tau$ and let $R$ be a number such that ${\mathcal S}_\tau\subset {\mathbb D}(0;R)$. \subsection{Case 2: $m\le m_0$.} Given any $\epsilon>0$, we have that $\mathrm e^{-\epsilon\sqrt{m}\min\{\babs{z_0-z},d\}}\ge \mathrm e^{-\epsilon\sqrt{m_0}R},$ and so \eqref{spuc} implies that \begin{equation*}\babs{K_{m,n}(z_0,z)}^2\mathrm e^{-m(Q(z_0)+Q(z))}\le C\mathrm e^{\epsilon\sqrt{m_0}R}\mathrm e^{-\epsilon\sqrt{m}\min\{\babs{z_0-z},d\}}.\end{equation*} Thus \eqref{bupp} holds with $C$ replaced by $C\mathrm e^{\epsilon\sqrt{m_0}R}$. \subsection{Case 3: $\babs{z-z_0}\le 8/\sqrt{m}$.} In this case, $\mathrm e^{-\epsilon\sqrt{m}\min\{d,\babs{z_0-z}\}}\ge \mathrm e^{-8\epsilon}$, and thus \eqref{spuc} implies \begin{equation*}\babs{K_{m,n}(z_0,z)}^2\mathrm e^{-m(Q(z_0)+Q(z))}\le C\mathrm e^{8\epsilon}m^2\mathrm e^{-\epsilon\sqrt{m}\min\{d,\babs{z_0-z}\}}. \end{equation*} We have shown \eqref{bupp} in the case when $\babs{z_0-z}\le 8/\sqrt{m}$ with $C$ replaced by $C\mathrm e^{8\epsilon}$. \subsection{Case 4: $m\ge m_0$, $md^2\ge 1$ and $\babs{z-z_0}\ge 8/\sqrt{m}$.} In the sequel we fix any integer $n$ with $n\ge ](m-{M_0})\tau+\bpar[$. Here $](m-{M_0})\tau+\bpar[>0$ for all $m\ge m_0$ by our choice of $m_0$ (see Subsect. \ref{nota}). 
It is important to note that the assumption $md^2\ge 1$ means that $1/\sqrt{m}\le d$ so that \begin{equation*}8/\sqrt{m}\le 8d=\operatorname{dist\,}(z_0,{\mathbb C}\setminus({\mathcal S}_\tau\cap{X})).\end{equation*} Our starting point is Lemma \ref{lemm3}, which yields that \begin{equation}\label{sim}\babs{K_{m,n}(z,z_0)}^2\mathrm e^{-mQ(z)}\le Cm\int_{{\mathbb D}(z;1/\sqrt{m})}\babs{K_{m,n}(\zeta,z_0)}^2\mathrm e^{-mQ(\zeta)}{{\mathrm d} A}(\zeta), \quad z\in {\mathcal S}_\tau, \end{equation} where the number $C$ only depends on $A$. Define \begin{equation*}\varepsilon_0(\zeta)=\min\{\babs{z_0-\zeta}/2,4d\},\quad \zeta\in {\mathbb C}.\end{equation*} Note that \begin{equation}\label{siz}4/\sqrt{m}\le \varepsilon_0(z)\le 4d.\end{equation} Let $\chi_0$ be a smooth non-negative function such that \begin{equation}\chi_0=0\quad \text{on}\quad {\mathbb D}(z_0;\varepsilon_0(z)/2)\quad \text{and}\quad \chi_0=1\quad\text{outside}\quad {\mathbb D}(z_0;\varepsilon_0(z)),\end{equation} and also $\babs{{\overline{\partial}}\chi_0}^2\le (C/\varepsilon_0(z)^2)\chi_0$ with $C$ an absolute constant ($C=5$ will do). In view of \eqref{siz}, it yields that \begin{equation*}\babs{{\overline{\partial}}\chi_0}^2\le Cm\chi_0\quad \text{on}\quad {\mathbb C},\end{equation*} where $C=5/16$. Notice that ${\overline{\partial}}\chi_0$ is supported in the annulus \begin{equation}\label{ann}U_0=U_0(z_0,z)=\{\zeta;\, \varepsilon_0(z)/2\le \babs{z_0-\zeta}\le \varepsilon_0(z)\}.\end{equation} Since $\chi_0$ is non-negative on ${\mathbb C}$ and since $\chi_0=1$ on ${\mathbb D}(z;1/\sqrt{m})$, the estimate \eqref{sim} implies that \begin{equation}\label{gops}\babs{K_{m,n}(z,z_0)}^2\mathrm e^{-mQ(z)}\le Cm\int_{\mathbb C}\chi_0(\zeta)\babs{K_{m,n}(\zeta,z_0)}^2\mathrm e^{-mQ(\zeta)}{{\mathrm d} A}(\zeta), \end{equation} where $C$ only depends on $A$. 
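Such a cut-off function $\chi_0$ is easily obtained; a standard construction (a sketch, with the radial profile $\eta$ an assumption of this construction) is to take $\chi_0=\eta^2$, where $\eta$ is smooth, vanishes on ${\mathbb D}(z_0;\varepsilon_0(z)/2)$, equals $1$ outside ${\mathbb D}(z_0;\varepsilon_0(z))$, and satisfies $\babs{\nabla\eta}\le C_0/\varepsilon_0(z)$ for an absolute constant $C_0$. Since $\babs{{\overline{\partial}}\eta}^2=\babs{\nabla\eta}^2/4$ for real-valued $\eta$, we then get \begin{equation*}\babs{{\overline{\partial}}\chi_0}^2=4\eta^2\babs{{\overline{\partial}}\eta}^2=\eta^2\babs{\nabla\eta}^2\le \frac {C_0^2} {\varepsilon_0(z)^2}\,\chi_0,\end{equation*} so the self-improving bound $\babs{{\overline{\partial}}\chi_0}^2\le (C/\varepsilon_0(z)^2)\chi_0$ holds automatically for squares of smooth real functions. 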
Let $H_{\chi_0,m,n}$ be the linear space ${H}_{m,n}$ with inner product \begin{equation*}\langle f,g\rangle_{\chi_0,mQ}=\int_{\mathbb C} f\bar g\chi_0\mathrm e^{-mQ} {{\mathrm d} A}.\end{equation*} We rewrite the integral on the right hand side of \eqref{gops} in the following way \begin{multline} \int_{\mathbb C} \chi_0(\zeta)\babs{K_{m,n}(\zeta,z_0)}^2\mathrm e^{-mQ(\zeta)}{{\mathrm d} A}(\zeta) =\sup\biggl\{\babs{\langle u,K_{m,n}(\cdot,z_0)\rangle_{\chi_0,mQ}}^2;\, u\in{H}_{m,n},\, \int_{\mathbb C} \babs{u}^2\chi_0\mathrm e^{-mQ}{{\mathrm d} A}\le 1\biggr\}=\\ =\sup\biggl\{\babs{\bigl\langle\chi_0 u, K_{m,n}(\cdot,z_0)\bigr\rangle_{mQ}}^2;\,u\in{H}_{m,n}, \,\int_{\mathbb C}\babs{u}^2\chi_0 \mathrm e^{-mQ}{{\mathrm d} A}\le 1\biggr\}=\\ =\sup\biggl\{\big|P_{m,n}[\chi_0 u](z_0)\big|^2;\,u\in{H}_{m,n},\, \int_{\mathbb C}\babs{u}^2\chi_0 \mathrm e^{-mQ}{{\mathrm d} A}\le 1 \biggr\}, \label{basic-1} \end{multline} where $P_{m,n}$ is the orthogonal projection of $L^2_{mQ}$ onto ${H}_{m,n}$. Now fix $u\in{H}_{m,n}$ with $\int_{\mathbb C}\babs{u}^2\chi_0\mathrm e^{-mQ}{{\mathrm d} A}\le 1$ and recall that $P_{m,n}[\chi_0 u]=\chi_0u-u_*,$ where $u_*$ is the $L^2_{mQ,n}$-minimal solution to ${\overline{\partial}} u_*={\overline{\partial}}(\chi_0 u)=u{\overline{\partial}}\chi_0$. In particular, $u_*$ is holomorphic in ${\mathbb D}(z_0;\varepsilon_0(z)/2)$ and $u_*=-P_{m,n}[\chi_0 u]$ there. See Subsect. \ref{init}. We intend to apply Th. \ref{boh} with a suitable real-valued function ${\varrho}_m$. We shall at first specify ${\varrho}_m$ only by requiring certain properties of it. An explicit construction of ${\varrho}_m$ is then given at the end of the proof. \noindent \bf Condition 1. \rm We require that \begin{equation}\label{rc1}{\varrho}_m=0\quad \text{on}\quad {\mathbb D}(z_0;1/(2\sqrt{m})),\end{equation} \noindent \bf Condition 2. 
\rm There exists a number $\epsilon>0$ depending only on $a$, ${M_0}$ and $\tau$, such that \begin{equation*}{\varrho}_m(\zeta)\le -\epsilon\sqrt{m}\varepsilon_0(z)/4\quad \text{when}\quad \babs{\zeta-z_0}\ge \varepsilon_0(z)/2,\end{equation*} \noindent \bf Condition 3. \rm The various conditions on ${\varrho}_m$ in Th. \ref{boh} are satisfied. More precisely, (i) ${\varrho}_m$ is ${\mathcal C}^{1,1}$-smooth and \begin{equation*}(m-{M_0})\Delta Q(\zeta)+\Delta {\varrho}_m(\zeta)\ge ma/2,\quad \text{for a.e.}\quad \zeta\in \overline{{\mathbb D}}(z_0;6d),\end{equation*} (ii) ${\varrho}_m$ is constant in ${\mathbb C}\setminus \overline{{\mathbb D}}(z_0;6d)$, (iii) We have that \begin{equation*}\frac {|{\overline{\partial}} {\varrho}_m|^2} {ma/2}\le \frac 1 {4\mathrm e^{{M_0} q_\tau}}\quad \text{on}\quad {\mathbb C},\end{equation*} where $q_\tau=\sup_{{\mathcal S}_\tau}\{Q(\zeta)\}$. It is clear that (i), (ii) and (iii) imply the conditions on ${\varrho}_m$ in Th. \ref{boh}, with $\kappa=1/2$. We now turn to consequences of the conditions on ${\varrho}_m$, and start with condition 1. Applying Lemma \ref{lemm2}, we find that, for any real function ${\varrho}_m$ satisfying \eqref{rc1}, we have \begin{equation}\label{gops4}\babs{u_*(z_0)}^2\mathrm e^{-mQ(z_0)}\le Cm\int_{{\mathbb D}(z_0;1/(2\sqrt{m}))}\babs{u_*(\zeta)}^2\mathrm e^{-mQ(\zeta)}{{\mathrm d} A}(\zeta)\le Cm\int_{\mathbb C} \babs{u_*(\zeta)}^2\mathrm e^{{\varrho}_m(\zeta)-mQ(\zeta)}{{\mathrm d} A}(\zeta),\end{equation} where $C$ depends only on $A$. By Condition 3, we may apply Th. \ref{boh} to the integral in the right hand side in \eqref{gops4}. It yields that \begin{equation}\label{gops5}\int_{\mathbb C}\babs{u_*}^2 \mathrm e^{{\varrho}_m-mQ}{{\mathrm d} A}\le C\int_{U_0}\babs{u{\overline{\partial}}\chi_0}^2 \frac {\mathrm e^{{\varrho}_m-mQ}}{ma/2+\bpar c_\tau} {{\mathrm d} A}, \end{equation} with a number $C$ depending only on $\tau$ and ${M_0}$ (note that $(1-\kappa)^{-2}=4$). 
Here $c_\tau=\inf_{{\mathcal S}_\tau}\{(1+\babs{\zeta}^2)^{-2}\} \ge (1+R^2)^{-2}$ and $U_0$ is the annulus defined in \eqref{ann}. But since $\babs{{\overline{\partial}}\chi_0}^2\le Cm\chi_0$, \eqref{gops5} implies \begin{equation*}\int_{\mathbb C}\babs{u_*}^2 \mathrm e^{{\varrho}_m-mQ}{{\mathrm d} A}\le \frac {Cm}{am+\bpar c_\tau}\int_{U_0}\babs{u}^2\chi_0\mathrm e^{{\varrho}_m-mQ}{{\mathrm d} A}.\end{equation*} Here $C$ only depends on ${M_0}$ and $\tau$. We now use Condition 2, which implies that ${\varrho}_m(\zeta)\le-\epsilon\sqrt{m}\varepsilon_0(z)/4$ whenever $\zeta\in U_0$. It yields that we may continue to estimate \begin{equation}\label{gops7} \int_{U_0}\babs{u}^2\chi_0\mathrm e^{{\varrho}_m-mQ}{{\mathrm d} A}\le \mathrm e^{-\epsilon\sqrt{m}\varepsilon_0(z)/4}\int_{\mathbb C} \babs{u}^2\chi_0\mathrm e^{-mQ}{{\mathrm d} A}\le \mathrm e^{-\epsilon\sqrt{m}\min\{\babs{z_0-z}/8,d\}},\end{equation} where we have used that $\int_{\mathbb C} \babs{u}^2\chi_0\mathrm e^{-mQ}{{\mathrm d} A}\le 1$ in the last step. Tracing back through \eqref{gops}--\eqref{gops7}, we infer that \begin{equation*}\babs{K_{m,n}(z_0,z)}^2\mathrm e^{-m(Q(z_0)+Q(z))}\le C\frac {m^3} {am+\bpar c_\tau}\mathrm e^{-\epsilon\sqrt{m}\min\{\babs{z_0-z}/8,d\}}\le Ca^{-1}m^2\mathrm e^{-\epsilon\sqrt{m}\min\{\babs{z_0-z}/8,d\}},\end{equation*} where $C$ depends on ${M_0}$, $\tau$, and $A$. This proves Lemma \ref{propn1} (with $\epsilon/8$ instead of $\epsilon$ and $Ca^{-1}$ in place of $C$) under the hypothesis that a function ${\varrho}_m$ satisfying Conditions 1, 2, and 3 above exists. To finish the proof we must verify the existence of such a ${\varrho}_m$. \subsection{Construction of ${\varrho}_m$.} We now construct a function ${\varrho}_m$ and a positive number $\epsilon$ which satisfy Conditions 1, 2, and 3 above. 
We look for a radial function of the form \begin{equation*}{\varrho}_m(\zeta)=-\epsilon\sqrt{m}S_m(\babs{\zeta-z_0}),\end{equation*} where the number $\epsilon>0$ will be fixed later. We start by giving an explicit construction of $S_m$; the verification of Conditions 1 through 3 will then be accomplished without difficulty. We recall that $1/\sqrt{m}\le d$ and start by specifying the derivative $S_m^\prime$ to be the piecewise linear continuous function on $[0,\infty)$ such that \begin{equation*}S_m^\prime=0\quad \text{on} \quad [0,1/(2\sqrt{m})]\cup [6d,\infty)\quad \text{and}\quad S_m^\prime=1\quad \text{on}\quad [1/\sqrt{m},5d],\end{equation*} and $S_m^\prime$ is affine on each of the intervals $[1/(2\sqrt{m}),1/\sqrt{m}]$ and $[5d,6d]$. The distributional derivative of $S_m^\prime$ is then a linear combination of characteristic functions, \begin{equation*}S_m^{\prime\prime}=2\sqrt{m}{\mathbf 1}_{[1/(2\sqrt{m}), 1/\sqrt{m}]}-(1/d){\mathbf 1}_{[5d,6d]},\end{equation*} so that (since $md^2\ge 1$) \begin{equation*}\babs{S_m^{\prime\prime}}\le \max\{2\sqrt{m},1/d\}=2\sqrt{m}.\end{equation*} We now define $S_m$ by requiring that $S_m(0)=0$. Since $S_m^\prime=0$ on $[0,1/(2\sqrt{m})]$ it is then clear that $S_m=0$ on $[0,1/(2\sqrt{m})]$. Moreover, when $2/\sqrt{m}\le t\le 5d$, we get \begin{equation*}S_m(t)=\int_0^t S_m^\prime(s){\mathrm d} s\ge t-1/\sqrt{m}\ge t/2,\quad t\in [2/\sqrt{m},5d],\end{equation*} since $S_m^\prime=1$ on $[1/\sqrt{m},5d]$. When $t\ge 5d$, we plainly have \begin{equation*}S_m(t)\ge 5d-1/\sqrt{m}\ge 4d.\end{equation*} We conclude that \begin{equation}\label{auto}S_m(t)\ge \min\{t/2,4d\},\quad t\ge 2/\sqrt{m}.\end{equation} In particular, denoting by $c_m$ the constant value that $S_m$ assumes on $[6d,\infty)$, we have $c_m\ge 4d$. This finishes the construction of $S_m$, and the corresponding function ${\varrho}_m$ is clearly of class ${\mathcal C}^{1,1}({\mathbb C})$. We now verify the conditions 1 through 3 above. 
First, Condition 1 is clear, since $S_m(t)=0$ when $t\le 1/(2\sqrt{m})$. Also, part (ii) of Condition 3 is clear; since $S_m(t)=c_m$ is constant when $t\ge 6d$, we have that ${\varrho}_m(\zeta)=-\epsilon\sqrt{m}c_m$ is constant when $\zeta\not\in \overline{{\mathbb D}}(z_0;6d)$. Since $\varepsilon_0(z)\ge 4/\sqrt{m}$, \eqref{auto} implies that $\varrho_m(\zeta)=-\epsilon\sqrt{m}S_m(\babs{\zeta-z_0})\le-\epsilon\sqrt{m}\varepsilon_0(\zeta)$ when $\babs{\zeta-z_0}\ge\varepsilon_0(z)/2$. Since moreover $\varepsilon_0(\zeta)\ge \varepsilon_0(z)/4$ in this case, we get that Condition 2 is satisfied. There remains to check parts (i) and (iii) of Condition 3, and to make precise what we mean by ``$\epsilon$''. To this end, we need the following estimates \begin{equation}\label{827}\babs{{\overline{\partial}} {\varrho}_m(\zeta)}^2=\epsilon^2m\babs{S_m^\prime(\babs{\zeta-z_0})}^2/4\le \epsilon^2 m/4,\qquad \zeta\in {\mathbb C},\end{equation} and \begin{equation}\label{828}\babs{\Delta {\varrho}_m(\zeta)}\le \frac \epsilon 4\sqrt{m}\left ( \babs{S_m^{\prime\prime}\left ( \babs{\zeta-z_0}\right )}+\frac {S_m^\prime\left (\babs{z_0-\zeta}\right )} {\babs{z_0-\zeta}}\right )\le \epsilon m,\qquad \zeta\in {\mathbb C},\end{equation} which follow immediately from the properties of $S_m$ (since $\babs{z_0-\zeta}\ge 1/(2\sqrt{m})$ when $S_m^\prime(\babs{z_0-\zeta})\ne 0$). To verify (i), we use \eqref{828}. It yields that it suffices to choose $\epsilon>0$ such that $(m-{M_0})a-\epsilon m\ge ma/2$ for $m\ge m_0$, i.e. $\epsilon\le (1/2-{M_0}/m)a.$ Since we have assumed that $m_0\ge 4{M_0}$, it thus suffices to choose $\epsilon=a/4$. We finally verify (iii). By \eqref{827}, it suffices to choose an $\epsilon>0$ such that $(\epsilon^2/4)/(a/2)\le 1/(4\mathrm e^{{M_0} q_\tau})$, i.e. $\epsilon^2 \le a/(2\mathrm e^{{M_0} q_\tau})$. We have verified the existence of $\epsilon>0$, of the form $\epsilon=\min\{\sqrt{a/(2\mathrm e^{{M_0} q_\tau})},a/4\}$. 
This shows that the choice $\epsilon=c\min\{a,1\}$ works with a proportionality constant $c$ which depends on ${M_0}$ and $q_\tau$. The proof is finished. $\qed$ \subsection{The proof of Theorem \ref{th1.5}} Let $K$ be a compact subset of ${\mathcal S}_\tau^\circ\cap {X}$, and pick $M\ge 0$. By Th. \ref{th3} we have that $K_{m,n}(z_0,z_0)\mathrm e^{-mQ(z_0)}=m\Delta Q(z_0)+{\mathcal O}(1)$ as $m\to\infty$ and $n\ge m\tau-M$, where the ${\mathcal O}$ is uniform for $z_0\in K$. It yields that \begin{equation}\label{abo}\berd_{m,n}^{\langle z_0\rangle}(z)=\frac {\babs{K_{m,n}(z,z_0)}^2} {K_{m,n}(z_0,z_0)}\mathrm e^{-mQ(z)}=\frac {\babs{K_{m,n}(z,z_0)}^2} {m\Delta Q(z_0)+{\mathcal O}(1)}\mathrm e^{-m(Q(z)+Q(z_0))},\quad z\in{\mathbb C},\end{equation} as $m\to \infty$ and $n\ge m\tau-M$. Since $\Delta Q$ is bounded below by a positive number on $K$, the right hand side in \eqref{abo} can be estimated by \begin{equation*}Cm^{-1}\babs{K_{m,n}(z,z_0)}^2\mathrm e^{-m(Q(z)+Q(z_0))},\quad z\in {\mathbb C}\end{equation*} where $C$ depends on the lower bound of $\Delta Q$ on $K$. The assertion now follows from Th. \ref{flock}. $\qed$ \section{The Bargmann--Fock case and harmonic measure} \label{new} \subsection{Preliminaries.} In this section we prove Th. \ref{th5}. We therefore put $Q(z)=\babs{z}^2$. Recall that in this case ${\mathcal S}_\tau=\overline{{\mathbb D}}(0;\sqrt{\tau}),$ and (see \eqref{bfock}) \begin{equation} \label{bfock-2} {\mathrm d} B^{\langle {z_0}\rangle}_{m,n}(z)= m\frac{|E_{n-1}(mz\bar{z}_0)|^2}{E_{n-1}(m\babs{{z_0}}^2)}\mathrm e^{-m|z|^2}{\mathrm d} A(z)\qquad\text{where}\qquad E_k(z)=\sum_{j=0}^k\frac {z^j} {j!}. \end{equation} \subsection{The action on polynomials.} \begin{prop} Fix a complex number ${z_0}\ne 0$, a positive integer $d$ and let $n$ be an integer, $n\ge d+1$. 
Then, for all analytic polynomials $u$ of degree at most $d$, we have \begin{equation} \label{pv} \operatorname{p.v.}\int_{\mathbb C} u(z^{-1}){\mathrm d} B^{\langle {z_0}\rangle}_{m,n} \to u(z_0^{-1}),\qquad\text{as}\quad m\to\infty, \end{equation} uniformly in $n$, $n\ge d+1$. \label{prop-pv} \end{prop} \begin{proof} It is sufficient to prove the statement for $u(z)=z^j$ with $j\le d$. The left hand side in \eqref{pv} can then be written \begin{equation*}\operatorname{p.v.} \int_{\mathbb C} z^{-j}{\mathrm d} B^{\langle {z_0}\rangle}_{m,n} =\frac{m{b}_{m,n}^j({z_0})}{E_{n-1}(m\babs{{z_0}}^2)},\end{equation*} where we have put \begin{equation*}{b}_{m,n}^j({z_0})=\operatorname{p.v.} \int_{{\mathbb C}} z^{-j}\bigg|\sum_{k=0}^{n-1}\frac {(m{z_0}\bar{z})^k} {k!}\bigg|^2\mathrm e^{-m\babs{z}^2}{{\mathrm d} A}(z). \end{equation*} Expanding the square yields \begin{equation*}b_{m,n}^j({z_0})=\sum_{k,l=0}^{n-1} \frac {m^{k+l}{z_0}^k\bar{z}_0^l} {k!l!}\operatorname{p.v.}\int_{{\mathbb C}}\bar{z}^kz^{l-j}\mathrm e^{-m\babs{z}^2}{{\mathrm d} A}(z). \end{equation*} Clearly only those $k,l$ for which $k=l-j$ give a non-zero contribution to the sum, and therefore, \begin{equation*}b_{m,n}^j({z_0})={z_0}^{-j}\sum_{l=j}^{n-1}\frac {m^{2l-j}\babs{{z_0}}^{2l}} {(l-j)!l!}\int_{{\mathbb C}}\babs{z}^{2(l-j)}\mathrm e^{-m\babs{z}^2}{{\mathrm d} A}(z)= \frac{1}{m{z_0}^{j}}\sum_{l=j}^{n-1}\frac {m^{l}\babs{{z_0}}^{2l}} {l!}, \end{equation*} where we have used that $\int_{{\mathbb C}}\babs{z}^{2k}\mathrm e^{-m\babs{z}^2}{{\mathrm d} A}(z)=k!/m^{k+1}$ for $k\ge 0$. It follows that \begin{equation*}b_{m,n}^j({z_0})=\frac{1}{m{z_0}^{j}}\sum_{l=j}^{n-1} \frac {(m\babs{{z_0}}^2)^l} {l!}=\frac{1}{m{z_0}^j}\left ( E_{n-1}(m\babs{{z_0}}^2)- E_{j-1}(m\babs{{z_0}}^2)\right ),\end{equation*} and so \begin{equation*}\frac {mb_{m,n}^j({z_0})} {E_{n-1}(m\babs{{z_0}}^2)}=\frac1{{z_0}^j}\left ( 1-\frac{E_{j-1}(m\babs{{z_0}}^2)}{E_{n-1}(m\babs{{z_0}}^2)}\right ). 
\end{equation*} Finally, since $j\le d<n$, $$\frac{E_{j-1}(m\babs{{z_0}}^2)}{E_{n-1}(m\babs{{z_0}}^2)}\le \frac{E_{d-1}(m\babs{{z_0}}^2)}{E_{d}(m\babs{{z_0}}^2)}\to0\quad\text{as}\quad m\to\infty.$$ \end{proof} \begin{prop} \label{bulk} Let $0<r<\sqrt{\tau}$, ${z_0}\in{\mathbb C}\setminus\overline{{\mathbb D}}(0;\sqrt{\tau})$ and $u$ an analytic polynomial. Then \begin{equation*}\operatorname{p.v.}\int_{{\mathbb D}(0;r)}u(z^{-1}) {\mathrm d} B^{\langle {z_0}\rangle}_{m,n}(z)\to0 \qquad\text{as}\quad m\to\infty\quad \text{and}\quad n/m\to\tau. \end{equation*} \end{prop} \begin{proof} Put, for $\nu=0,1,2,\ldots$, $$b_{m,n}^\nu({z_0},r)=\operatorname{p.v.}\int_{{\mathbb D}(0;r)}z^{-\nu} {\mathrm d} B^{\langle {z_0}\rangle}_{m,n}.$$ A straightforward calculation based on \eqref{bfock-2} leads to \begin{equation} \label{grr} b_{m,n}^\nu({z_0},r)=\frac 1 {z_0^\nu E_{n-1}(m\babs{{z_0}}^2)}\sum_{j=\nu}^{n-1}\frac {(m\babs{{z_0}}^2)^j} {j!(j-\nu)!}\int_0^{mr^2} s^{j-\nu}\mathrm e^{-s}{\mathrm d} s. \end{equation} We suppose $n$ is greater than $\nu$ by at least two units, so that we may pick an integer $k$ with $\nu<k<n$, and split the sum \eqref{grr} accordingly: \begin{equation} \label{grr-1} b_{m,n}^\nu({z_0},r)=\frac 1 {z_0^\nu E_{n-1}(m\babs{{z_0}}^2)}\Bigg\{\sum_{j=\nu}^{k-1}\frac {(m\babs{{z_0}}^2)^j} {j!(j-\nu)!}\int_0^{mr^2} s^{j-\nu}\mathrm e^{-s}{\mathrm d} s +\sum_{j=k}^{n-1}\frac {(m\babs{{z_0}}^2)^j} {j!(j-\nu)!}\int_0^{mr^2} s^{j-\nu}\mathrm e^{-s}{\mathrm d} s\Bigg\}. \end{equation} We estimate the first term trivially as follows: \begin{equation} \label{grr-2} \sum_{j=\nu}^{k-1}\frac {(m\babs{{z_0}}^2)^j} {j!(j-\nu)!}\int_0^{mr^2} s^{j-\nu}\mathrm e^{-s}{\mathrm d} s\le \sum_{j=\nu}^{k-1}\frac {(m\babs{{z_0}}^2)^j} {j!(j-\nu)!}\int_0^{\infty} s^{j-\nu}\mathrm e^{-s}{\mathrm d} s=\sum_{j=0}^{k-1} \frac {(m\babs{{z_0}}^2)^j}{j!}=E_{k-1}(m|{z_0}|^2). 
\end{equation} As for the second term, we use the fact that the function $s\mapsto s^{j-\nu}\mathrm e^{-s}$ is increasing on the interval $[0,j-\nu]$, to say that \begin{equation*}\int_0^{mr^2} s^{j-\nu}\mathrm e^{-s}{\mathrm d} s\le(mr^2)^{j-\nu+1}\mathrm e^{-mr^2},\end{equation*} provided that $j\ge mr^2+\nu$. It follows that if $k\ge mr^2+\nu$, then \begin{equation*}\sum_{j=k}^{n-1}\frac {(m\babs{{z_0}}^2)^j} {j!(j-\nu)!}\int_0^{mr^2} s^{j-\nu}\mathrm e^{-s}{\mathrm d} s\le (mr^2)^{1-\nu}\mathrm e^{-mr^2}\sum_{j=k}^{n-1}\frac {(mr\babs{{z_0}})^{2j}} {j!(j-\nu)!}.\end{equation*} By Stirling's formula, $j!\ge \sqrt{2\pi}j^{j+1/2}\mathrm e^{-j},$ so that \begin{equation*}\frac {(mr\babs{{z_0}})^{2j}} {j!}\mathrm e^{-mr^2}\le\frac{1}{\sqrt{2\pi j}}m^{j}|{z_0}|^{2j} \left (\frac{mr^2}{j}\mathrm e^{1-\frac{mr^2}{j}}\right )^j.\end{equation*} Since the function $x\mapsto x\mathrm e^{1-x}$ is increasing on the interval $[0,1]$, it yields that \begin{equation*}\frac {(mr\babs{{z_0}})^{2j}} {j!}\mathrm e^{-mr^2}\le\frac{1}{\sqrt{2\pi j}} m^{j}|{z_0}|^{2j} \left (\frac{mr^2}{k}\mathrm e^{1-\frac{mr^2}{k}}\right )^{j}, \qquad mr^2+\nu\le k\le j.\end{equation*} We write \begin{equation} \label{eq-ckm} c_{k,m}=\frac{mr^2}{k}\mathrm e^{1-\frac{mr^2}{k}}\le1, \qquad mr^2+\nu\le k, \end{equation} and conclude that \begin{multline} \sum_{j=k}^{n-1}\frac {(m\babs{{z_0}}^2)^j} {j!(j-\nu)!}\int_0^{mr^2} s^{j-\nu}\mathrm e^{-s}{\mathrm d} s\le (mr^2)^{1-\nu}\sum_{j=k}^{n-1} \frac{(m|{z_0}|^2c_{k,m})^j}{(j-\nu)!\sqrt{2\pi j}}\\ \le \frac{(mr^2)^{1-\nu}}{\sqrt{2\pi}} (c_{k,m})^k\sum_{j=k}^{n-1}\frac{(m|{z_0}|^2)^j}{(j-\nu)!}\le \frac{mr^2}{\sqrt{2\pi}}\Big(\frac{|{z_0}|}{r}\Big)^{2\nu} (c_{k,m})^kE_{n-\nu-1}(m|{z_0}|^2). 
\label{eq-101} \end{multline} Now, a combination of \eqref{grr-2} and \eqref{eq-101} applied to \eqref{grr-1} yields \begin{multline} \label{grr-1'} \babs{{z_0^{\nu}}b_{m,n}^\nu({z_0},r)}\le\frac {E_{k-1}(m\babs{{z_0}}^2)}{E_{n-1}(m\babs{{z_0}}^2)} +\frac{mr^2}{\sqrt{2\pi}}\Big(\frac{|{z_0}|}{r}\Big)^{2\nu}(c_{k,m})^k \frac{E_{n-\nu-1}(m|{z_0}|^2)}{E_{n-1}(m|{z_0}|^2)}\\ \le\frac {E_{k-1}(m\babs{{z_0}}^2)}{E_{n-1}(m\babs{{z_0}}^2)} +\frac{mr^2}{\sqrt{2\pi}}\Big(\frac{|{z_0}|}{r}\Big)^{2\nu}(c_{k,m})^k. \end{multline} We would like to show that each of the terms on the right hand side of \eqref{grr-1'} can be made small by choosing $k$ cleverly. As for the first term, we appeal to a theorem of Szeg\"o, \cite{Sz}, Hilfssatz 1, p. 54, which states that \begin{equation*}E_l(lx)=\frac 1 {\sqrt{2\pi l}} (\mathrm e x)^l \frac x{x-1}\left ( 1+\varepsilon_l(x)\right )\qquad x>1, \end{equation*} where $\varepsilon_l(x)\to 0$ uniformly on compact subintervals of $(1,\infty)$ as $l\to\infty$. It follows that \begin{equation*} \frac{E_{k-1}(m\babs{{z_0}}^2)}{E_{n-1}(m\babs{{z_0}}^2)}= \sqrt{\frac{n-1}{k-1}}\frac{m|{z_0}|^2-n+1}{m|{z_0}|^2-k+1} \Big(\frac{\mathrm e m|{z_0}|^2}{k-1}\Big)^{k-1}\Big(\frac{\mathrm e m|{z_0}|^2}{n-1}\Big)^{1-n} \frac{1+\varepsilon_{k-1}(\frac{m|{z_0}|^2}{k-1})}{1+\varepsilon_{n-1}(\frac{m|{z_0}|^2}{n-1})}. \end{equation*} Finally, we decide to pick $k$ such that $$k/m\to\beta$$ as $k,m\to\infty$, where $r^2<\beta<\tau$. We observe that with this choice of $k$, the above epsilons tend to zero as $k,m,n\to\infty$. The function $$y\mapsto(\mathrm e/y)^{y},\qquad 0<y\le1$$ is strictly increasing, so that with $$y_1=\frac{k-1}{m|{z_0}|^2}\approx\frac{\beta}{|{z_0}|^2}, \quad y_2=\frac{n-1}{m|{z_0}|^2}\approx\frac{\tau}{|{z_0}|^2},$$ we have $$\frac{(\mathrm e/y_1)^{y_1}}{(\mathrm e/y_2)^{y_2}}\le\theta<1,$$ where at least for large $k,m,n$, the number $\theta$ may be taken to be independent of $k,m,n$. 
It follows that $$\Big(\frac{\mathrm e m|{z_0}|^2}{k-1}\Big)^{k-1}\Big(\frac{\mathrm e m|{z_0}|^2}{n-1}\Big)^{1-n} \le \theta^{m|{z_0}|^2},$$ so that \begin{equation*} \frac{E_{k-1}(m\babs{{z_0}}^2)}{E_{n-1}(m\babs{{z_0}}^2)}\le(1+o(1)) \sqrt{\frac{\tau}{\beta}}\frac{|{z_0}|^2-\tau}{|{z_0}|^2-\beta} \theta^{m|{z_0}|^2}\to0 \end{equation*} exponentially quickly as $k,m,n\to\infty$. Finally, as for the second term, we observe that the numbers $c_{k,m}$ defined by \eqref{eq-ckm} have the property that $$c_{k,m}\to \frac{r^2}{\beta}\mathrm e^{1-\frac{r^2}{\beta}}<1,$$ as $k,m,n\to\infty$ in the given fashion. In particular, the second term converges exponentially quickly to $0$. The proof is complete. \end{proof} \begin{cor} \label{korp} Let ${z_0}\in{\mathbb C}\setminus\overline{{\mathbb D}}(0;\sqrt{\tau})$, and let $\omega$ be an open set in ${\mathbb C}$ which contains the circle ${\mathbb T}(0,\sqrt{\tau})$. Further, let $u$ be an analytic polynomial. Then \begin{equation*}\int_{\omega}u(z^{-1}){\mathrm d} B^{\langle {z_0}\rangle}_{m,n}(z)\to u(z_0^{-1})\qquad \text{as}\quad m\to\infty\quad \text{and}\quad n/m\to\tau.\end{equation*} \end{cor} \begin{proof} This follows from Propositions \ref{bulk} and \ref{prop1}. \end{proof} \subsection{The proof of Theorem \ref{th5}.} Let ${{\mathcal H}}_\tau$ be the class of continuous functions ${\mathbb C}^*\to{\mathbb C}$ which are harmonic on ${\mathbb C}^*\setminus{\mathcal S}_\tau$. For a function $f\in{{\mathcal C}}_b({\mathbb C})$ we write $\widetilde{f}$ for the unique function of class ${{\mathcal H}}_\tau$ which coincides with $f$ on ${\mathbb D}(0,\sqrt{\tau})$. We must show that \begin{equation*}\int_{\mathbb C} f(z){\mathrm d} B^{\langle {z_0}\rangle}_{m,n}(z)\to \widetilde f({z_0})\end{equation*} as $m\to\infty$ and $n/m\to\tau$. See e.g. \cite{GM}, p. 90. 
Convolving with the Fej\'er kernel, we see that $f$ may be uniformly approximated by functions which on a neighborhood $\omega$ of ${\mathbb T}(0,\sqrt{\tau})$ are of the form $u(z^{-1})$, with $u$ a harmonic polynomial. We may therefore w.l.o.g. suppose that $f$ itself is of this form, i.e. $f(z)=u(z^{-1})$ when $z\in \omega$. Thus $f(z)=\widetilde{f}(z)=u(z^{-1})$ on $\omega$. By Prop. \ref{prop1}, \begin{equation*}\int_{\mathbb C} (f(z)-\widetilde{f}(z)) {\mathrm d} B^{\langle {z_0}\rangle}_{m,n}(z)\to 0,\end{equation*} as $m\to\infty$ and $n/m\to\tau$. Moreover, Cor. \ref{korp} gives that \begin{equation*}\int_{\mathbb C} \widetilde{f}(z){\mathrm d} B^{\langle {z_0}\rangle}_{m,n}(z)=\int_{\mathbb C} u(z^{-1}){\mathrm d} B^{\langle {z_0}\rangle}_{m,n}(z)\to u(z_0^{-1})=\widetilde{f}({z_0}),\end{equation*} as $m\to\infty$ and $n/m\to\tau$. $\qed$ \end{document}
\begin{document} \title[Classifying Lie Algebroid of a Geometric Structure]{The classifying Lie algebroid of a geometric structure I: classes of coframes} \author{Rui Loja Fernandes} \address{Departamento de Matem\'atica, Instituto Superior T\'ecnico, 1049-001, Lisbon, Portugal} \curraddr{Department of Mathematics, The University of Illinois at Urbana-Champaign, 1409 W. Green Street, Urbana, IL 61801, USA} \email{ruiloja@illinois.edu} \author{Ivan Struchiner} \address{Departamento de Matem\'atica, Universidade de S\~ao Paulo, Rua do Mat\~ao 1010, S\~ao Paulo -- SP, Brasil, CEP: 05508-090} \email{ivanstru@ime.usp.br} \thanks{RLF was partially supported by FCT through the Program POCI 2010/FEDER and by projects PTDC/MAT/098936/2008 and PTDC/MAT/117762/2010. IS was partially supported by FAPESP 03/13114-2, CAPES BEX3035/05-0 and by NWO} \subjclass[2010]{ 53C10 (Primary) 53A55, 58D27, 58H05 (Secondary)} \date{} \begin{abstract} We present a systematic study of symmetries, invariants and moduli spaces of classes of coframes. We introduce a classifying Lie algebroid to give a complete description of the solution to Cartan's realization problem that applies to both the local and the global versions of this problem. \end{abstract} \maketitle \tableofcontents \section{Introduction} This is the first of two papers dedicated to a systematic study of symmetries, invariants and moduli spaces of geometric structures of \emph{finite type}. Roughly speaking, a geometric structure of finite type can be described as any geometric structure on a smooth manifold which determines (and is determined by) a coframe on a (possibly different) manifold. Here by a \textbf{coframe} on an $n$-dimensional manifold $M$ we mean a generating set $\set{\theta^1,\ldots, \theta^n}$ of everywhere linearly independent $1$-forms on $M$. 
Examples of finite type geometric structures include finite type $G$-structures (see, e.g., \cite{Sternberg} or \cite{FernandesStruchiner}), Cartan geometries (see, e.g., \cite{Sharpe} or \cite{FernandesStruchiner}) or linear connections which preserve some tensor on a manifold (see, e.g., \cite{Kobayashi}). The main problem concerning finite type structures is that of determining when two such geometric structures are isomorphic. This is a classical problem that goes back to \'Elie Cartan \cite{Cartan}, and is usually referred to as the \emph{Cartan equivalence problem}. When two geometric structures of the same kind have been properly characterized by coframes $\set{\theta^i}$ and $\set{\bar{\theta}^{i}}$ on $n$-dimensional manifolds $M$ and $\bar{M}$, the equivalence problem takes the form: \begin{problem}[Equivalence Problem] \label{equiv:probl} Does there exist a diffeomorphism $\phi:M\rightarrow\bar{M}$ satisfying \[ \phi^{\ast}\bar{\theta}^{i}=\theta^{i} \] for all $1 \leq i \leq n$? \end{problem} We will show that to a coframe there is associated a \textbf{classifying Lie algebroid} which encodes the corresponding equivalence problem. This classifying algebroid plays a crucial role in various geometric, local and global, classification problems associated with coframes. In order to explain how this algebroid arises, recall that the solution to the equivalence problem is based on the simple fact that exterior differentiation and pullbacks commute: taking exterior derivatives and using the fact that $\{ \theta^{i} \}$ is a coframe, we may write the \textbf{structure equations} \begin{equation}\label{structureeqcoframe} \mathrm d \theta^{k}= \sum_{i<j}C_{ij}^{k}(x) \theta^{i}\wedge\theta^{j} \end{equation} for uniquely defined functions $C_{ij}^{k} \in C^{\infty}( M )$ known as \textbf{structure functions}. 
Analogously, we may write \[\mathrm d \bar{\theta}^{k} = \sum_{i<j}\bar{C}_{ij}^{k}(\bar{x}) \bar{\theta}^{i} \wedge \bar{\theta}^{j}\text{.}\] It then follows from $\phi^{\ast} \mathrm d \bar{\theta}^{k} = \mathrm d \phi^{\ast}\bar{\theta}^{k}$ that a necessary condition for equivalence is that \[\bar{C}_{ij}^{k} (\phi(x)) = C_{ij}^{k}(x).\] Thus, the structure functions are \emph{invariants} of the coframe. One can obtain more invariants of the coframe by further differentiation, and this is the key observation in solving the equivalence problem. In the opposite direction, one may consider the problem of determining when a given set of functions $C_{ij}^k$ can be realized as the structure functions of some coframe. This problem was proposed and solved by Cartan in \cite{Cartan}. Following Bryant (see the Appendix in \cite{Bryant}), we state it in the following precise form: \begin{problem}[Cartan's Realization Problem] \label{realiz:probl} One is given: \begin{itemize} \item an integer $n\in\mathbb{N}$, \item a $d$-dimensional manifold $X$, \item a set of functions $C_{ij}^{k}\in C^{\infty}(X)$ with indices $1\leq i,j,k\leq n$, \item and $n$ vector fields $F_{i}\in\ensuremath{\mathfrak{X}}(X)$ \end{itemize} and asks for the existence of \begin{itemize} \item an $n$-dimensional manifold $M$; \item a coframe $\theta=\{\theta^{i}\}$ on $M$; \item a map $h : M \rightarrow X$ \end{itemize} satisfying \begin{align} \mathrm d \theta^{k} & = \sum_{i<j}(C_{ij}^{k}\circ h) \theta^{i}\wedge\theta^{j} \label{eq: dtheta}\\ \mathrm d h & = \sum_{i}(F_{i}\circ h) \theta^{i}\text{.} \label{eq: dh} \end{align} \end{problem} Notice that the right hand side of equation \eqref{eq: dh} defines a bundle map $TM \to TX$ by first applying the $\theta^i$'s to the tangent vector and evaluating the vector fields $F_i$'s, and then taking the corresponding linear combination. Most often, in concrete examples, one is given the structure functions and looks for all the possible coframes realizing them. 
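To fix ideas, let us record a simple example of this data (a standard computation, included here only for illustration). Consider, on the upper half-plane $M=\{(x,y)\in\mathbb{R}^2:\, y>0\}$, the coframe \[ \theta^1=\frac{\mathrm d x}{y},\qquad \theta^2=\mathrm d y.\] Then \[ \mathrm d\theta^1=\frac{1}{y^2}\,\mathrm d x\wedge\mathrm d y=\frac{1}{y}\,\theta^1\wedge\theta^2,\qquad \mathrm d\theta^2=0,\] so the only non-zero structure function is $C^1_{12}=1/y$. In the formulation above we may take $n=2$, $X=\mathbb{R}_{+}$ with coordinate $t$, $C^1_{12}(t)=1/t$, $F_1=0$ and $F_2=\partial/\partial t$; the map $h(x,y)=y$ then satisfies \eqref{eq: dtheta} and \eqref{eq: dh}. 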
In fact, one is interested in the following two problems: \begin{description} \item[Local Classification Problem] What are all germs of coframes which solve Cartan's realization problem? \item[Local Equivalence Problem] When are two such germs of coframes equivalent? \end{description} Later, we shall give complete solutions to the existence problem, as well as both the classification and the equivalence problems. Before we proceed, let us present a simple, classic, but elucidating example showing that Lie theory enters into the picture. \begin{example}[The case $d=0$] \label{ex:constant} Suppose that we are interested in the equivalence and classification problem for coframes whose structure functions are constant or, in the precise formulation of Cartan's realization problem given above, in the case where the manifold $X$ reduces to a point (in particular, the vector fields $F_i$ vanish identically). Necessary conditions for the existence of a solution to the realization problem are obtained from \[\mathrm d^{2}\theta^{m}=0 \qquad (m = 1, \ldots, n).\] These conditions imply that the structure constants must satisfy: \[ C_{im}^lC_{jk}^m+C_{jm}^lC_{ki}^m+C_{km}^lC_{ij}^m=0, \qquad (i,j,k,l = 1, \ldots, n),\] i.e., they are the structure constants of a Lie algebra $\mathfrak{g}$ relative to some basis $\set{e_1, \ldots, e_n}$: \[ [e_i,e_j]=C_{ij}^ke_k.\] This condition is also sufficient. In fact, if we let $M:=G$ be any Lie group which has $\mathfrak{g}$ as its Lie algebra, then the components $\theta^i$ of its $\mathfrak{g}$-valued right invariant Maurer-Cartan form $\omega_{\textrm{MC}}$ relative to the basis $\set{e_1, \ldots, e_n}$ of $\mathfrak{g}$ have $C^k_{ij}$ as their structure functions. Now let $(M, \theta^i, h)$ be any other solution of the realization problem. 
If we define a $\mathfrak{g}$-valued $1$-form on $M$ by \[ \theta=\sum_{i}\theta^{i}e_{i},\] then the structure equation \eqref{eq: dtheta} for the coframe $\{ \theta^{i}\}$ implies that $\theta$ satisfies the \emph{Maurer-Cartan equation} \[\mathrm d \theta + \frac{1}{2}[\theta,\theta] = 0\text{.}\] It is then well known that there exists a locally defined diffeomorphism $\psi:M\rightarrow G$ onto a neighborhood of the identity such that \[ \theta=\psi^{\ast}\omega_{\textrm{MC}}.\] This is sometimes referred to as \emph{the universal property of the Maurer-Cartan form}. Thus, at least locally, there is only one solution to the realization problem, up to equivalence. \end{example} In general, when the structure functions are not constant, they cannot determine a Lie algebra. Instead, we have the following result, which is the basic proposition underlying our whole approach: \begin{theorem} Solutions to Cartan's problem exist if and only if the initial data to the problem determines a Lie algebroid structure on the trivial vector bundle $A\to X$ with fiber $\mathbb R^n$, Lie bracket $[~,~]_A:\Gamma(A)\times\Gamma(A)\to \Gamma(A)$ relative to the standard basis of sections $\{e_1,\dots,e_n\}$ given by: \[ [e_i,e_j]_A=C_{ij}^k(x)e_k,\] and anchor $\sharp:A\to TX$ defined by $\sharp(e_i)=F_i$. \end{theorem} We will call $A\to X$ the \textbf{classifying Lie algebroid} of the Cartan realization problem. Also, the map $h:M\to X$, which determines the particular coframe we are interested in, will be called the \textbf{classifying map} of the realization. We will see in Section \ref{subsec:Algbrd} that given a coframe $\theta$ on a manifold $M$, there is associated to it a Cartan Realization Problem, and hence a classifying Lie algebroid $A_\theta$. This will be called the \textbf{classifying Lie algebroid of the coframe}. 
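Let us also spell out, for later reference, what the Lie algebroid condition in the theorem amounts to in terms of the initial data (a routine computation, obtained by imposing $\mathrm d^2\theta^k=0$ and $\mathrm d^2 h=0$ in \eqref{eq: dtheta} and \eqref{eq: dh}): the anchor must be bracket preserving, \[ [F_i,F_j]=\sum_k C^k_{ij}F_k,\] and the Jacobi identity must hold, \[ \sum_{\mathrm{cyclic}\ (i,j,k)}\Big(\sum_m C^m_{ij}C^l_{mk}-F_k(C^l_{ij})\Big)=0,\qquad 1\le i,j,k,l\le n.\] When $X$ reduces to a point, these reduce to the Jacobi identity for the structure constants in Example \ref{ex:constant}. 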
It is important to make the following distinctions: \begin{enumerate} \item[(i)] To a \emph{single} coframe $\theta$, there is associated a Cartan realization problem and, hence, a classifying Lie algebroid $A_\theta$. \item[(ii)] To a Cartan realization problem, there is associated a classifying Lie algebroid $A$ and a \emph{family} of coframes (its solutions). \end{enumerate} Note that distinct Cartan realization problems can share the same coframe as solutions. However, we have the following: \begin{prop} Let $(n,X,C_{ij}^k,F_i)$ be a Cartan realization problem with associated classifying algebroid $A$ and let $(M,\theta,h)$ be a connected realization. Then $h(M)$ is an open subset of an orbit of $A$ so that $A|_{h(M)}\to h(M)$ is also a Lie algebroid. Moreover, there exists a Lie algebroid morphism from $A|_{h(M)}$ to $A_{\theta}$ (the classifying algebroid of the coframe $\theta$) which is a fiberwise isomorphism that covers a submersion $h(M) \to X_{\theta}$. \end{prop} In general, a Lie algebroid does not need to be associated with any Lie groupoid (see \cite{CrainicFernandes} for a complete discussion of this point). However, assume for now that this is the case for the classifying Lie algebroid $A\to X$ and denote by $\mathcal{G}\rightrightarrows X$ a Lie groupoid integrating it. One can then introduce the Maurer-Cartan form $\omega_{\textrm{MC}}$ on $\mathcal{G}$ and it follows from its universal property that: \begin{theorem} Let $\omega_{\textrm{MC}}$ be the Maurer-Cartan form on a Lie groupoid $\mathcal{G}$ integrating the classifying Lie algebroid $A$ of a Cartan's realization problem. Then the pull-back of $\omega_{\textrm{MC}}$ to each $\mathbf{s}$-fiber induces a coframe which gives a solution to the realization problem. Moreover, every solution to Cartan's problem is locally equivalent to one of these. \end{theorem} This settles the classification and the equivalence problems. 
It also shows one of the main advantages of using our Lie algebroid approach: the classifying Lie algebroid (or more precisely its local Lie groupoid) gives rise to a recipe for constructing explicit solutions to the problem. The solutions are the $\mathbf{s}$-fibers of the groupoid equipped with the restriction of the Maurer-Cartan forms. There are several other advantages in using this Lie algebroid approach. First, it turns out that properties of the classifying Lie algebroid are closely related to geometric properties of the coframes it classifies. For example, let us call any self-equivalence of a coframe which preserves the classifying map $h$ a \textbf{symmetry of the realization}. Then: \begin{prop} Let $A\to X$ be the classifying Lie algebroid of a Cartan's realization problem. Then: \begin{enumerate} \item To every point of $X$ there corresponds a germ of a coframe which is a solution to the realization problem; \item Two such germs of realizations are equivalent if and only if they correspond to the same point in $X$; \item The isotropy Lie algebra of $A\to X$ at some $x\in X$ is isomorphic to the \emph{symmetry Lie algebra} of the corresponding germ of realization. \end{enumerate} \end{prop} Second, the Lie algebroid approach is well suited to the study of global aspects of the theory. The first of these global issues that we will consider is the \begin{problem}[Globalization Problem] \label{prob:globalization} Given a Cartan's problem with initial data $(n, X, C^k_{ij}, F_i)$ and two germs of coframes $\theta_0$ and $\theta_1$ which solve the problem, does there exist a global connected solution $(M,\theta,h)$ to the realization problem for which $\theta_0$ is the germ of $\theta$ at a point $p_0 \in M$ and $\theta_1$ is the germ of $\theta$ at a point $p_1 \in M$? \end{problem} Another important global issue is the \emph{global equivalence problem}. 
In general, the classifying Lie algebroid does not distinguish between a realization and its covers. Here, by a \textbf{realization cover} of $(M,\theta,h)$ we mean a realization of the form $(\tilde{M}, \pi^{\ast}\theta, \pi^{\ast}h)$ where $\pi: \tilde{M} \to M$ is a covering map. It is then natural to consider the following equivalence relation on the set of realizations of a Cartan's problem. Two realizations $(M_1, \theta_1, h_1)$ and $(M_2, \theta_2, h_2)$ are said to be \textbf{globally equivalent, up to covering} if they have a common realization cover $(M, \theta, h)$, i.e., \[\xymatrix{& (M, \theta, h) \ar[dl]_{\pi_1} \ar[dr]^{\pi_2} & \\ (M_1, \theta_1, h_1) & & (M_2, \theta_2, h_2).}\] This leads to: \begin{problem}[Global Classification Problem] \label{prob:global:equivalence} What are all the solutions of a Cartan's realization problem up to global equivalence, up to covering? \end{problem} Of course, the solutions to these global problems rely, again, on understanding the classifying Lie algebroid. We need to assume that the restriction of the classifying Lie algebroid to each orbit comes from a Lie groupoid. Note that this assumption is weaker than requiring integrability of the whole Lie algebroid $A$ (see \cite{CrainicFernandes}), so in this case we will say that $A$ is \emph{weakly integrable}. For the globalization problem we then have the following solution: \begin{theorem} Let $A\to X$ be the classifying Lie algebroid of a Cartan's realization problem. Two germs of coframes which belong to the same global connected realization correspond to points in the same orbit of $A$. Moreover, if $A$ is weakly integrable then the converse is also true: two germs of coframes which correspond to points in the same orbit belong to the same global connected realization. 
\end{theorem} On the other hand, for the global equivalence problem we obtain the following result which generalizes the two main theorems concerning global equivalences due to Olver \cite{Olver}: \begin{theorem} Assume that the classifying Lie algebroid $A\to X$ of a Cartan's realization problem is weakly integrable. Then any solution to Cartan's problem is globally equivalent, up to covering, to an open set of an $\mathbf{s}$-fiber of the source 1-connected Lie groupoid integrating $A|_{L}$, for some leaf $L$ of $A$. \end{theorem} Finally, the classifying Lie algebroid can also be used to produce invariants of a geometric structure of finite type, as well as to recover classical results about their symmetry groups. In fact, if the geometric structure is characterized by a coframe $\theta$ on a manifold $M$, with associated classifying algebroid $A_\theta$, then we can view the coframe as a Lie algebroid morphism $\theta: TM \to A_{\theta}$ (which covers the classifying map $h$). As a general principle, the coframe (viewed as a Lie algebroid morphism) pulls back invariants of the classifying Lie algebroid $A_\theta$ to invariants of the coframe. To illustrate this point of view we will introduce a new invariant called the \emph{modular class of a coframe}, which is obtained as the pullback of the modular class of its classifying Lie algebroid. We can summarize this outline of the paper by saying that there are essentially two ways in which we use the existence of a classifying Lie algebroid for a Cartan's realization problem. \begin{itemize} \item For a \emph{class} of geometric structures of finite type whose moduli space (of germs) is finite dimensional, we can set up a Cartan's realization problem whose classifying Lie algebroid gives us information about its local equivalence and classification problems, as well as its globalization and global classification problems. 
\item For a \emph{single} geometric structure of finite type, we can set up a Cartan's realization problem whose classifying Lie algebroid describes the symmetries of the geometry, and also provides a recipe for producing invariants of the structure. \end{itemize} In a sequel to this paper \cite{FernandesStruchiner}, we will discuss finite type $G$-structures and show how one can associate a classifying Lie algebroid to such a structure. There we will also present several geometric examples related to torsion-free connections on $G$-structures. Some of the results described in this paper were obtained by the second author in his PhD Thesis \cite{Struchiner}. \section{Equivalence of coframes and realization problems} \label{sec: Equivalence of Coframes} In this section, we will explain how coframes give rise to Cartan realization problems. First we consider the local problem, where we assume the coframe to be fully regular at some point. Then we consider the global problem for coframes which are fully regular at every point. Our exposition may be seen as a Lie algebroid version of the classical approach, which can be found in the monographs of Olver \cite{Olver} and Sternberg \cite{Sternberg}, to which we refer the reader for more details. \subsection{Local equivalence} \label{subsec:coframes:1} Let $\{\theta^i\}$ and $\{\bar{\theta}^{i}\}$ be coframes on two $n$-dimensional manifolds $M$ and $\bar{M}$. Throughout this text we will use unbarred letters to denote objects on $M$ and barred letters to denote objects on $\bar{M}$. Any (locally defined) diffeomorphism $\phi:M\to \bar{M}$ such that $\phi^*\bar{\theta}^i=\theta^i$ will be called a \textbf{(local) equivalence}. As we saw in the Introduction, the structure equations \eqref{structureeqcoframe} \[ \mathrm d \theta^{k}=\sum_{i<j}C_{ij}^{k}(p) \theta^{i}\wedge\theta^{j} \] and the structure functions $C_{ij}^{k}\in C^{\infty}(M)$ play a crucial role in the study of the equivalence problem. 
For example, the structure functions of any coframe $\bar{\theta}$ equivalent to $\theta$ must satisfy \[\bar{C}_{ij}^{k}\left( \phi(p) \right) =C_{ij}^{k}(p) \text{,}\] so we may use the structure functions to obtain a set of necessary conditions for solving the equivalence problem. They are examples of \emph{invariant functions}: \begin{definition} A function $I\in C^{\infty}(M)$ is called an \textbf{invariant function of a coframe $\theta=\{\theta^{i}\}$} if for any locally defined self-equivalence $\phi:U\to V$, where $U,V\subset M$ are open sets, one has \[I\circ\phi=I\text{.}\] We denote by $\Inv(\theta)\subset C^\infty(M)$ the space of invariant functions of the coframe $\theta$. \end{definition} Now, for any function $f\in C^{\infty}(M)$ and coframe $\theta$ one can define the coframe derivatives $\frac{\partial f}{\partial\theta^{k}}\in C^{\infty}(M)$ as being the coefficients of the differential of $f$ when expressed in terms of the coframe $\{\theta^{i}\}$, \[\mathrm d f=\sum_{k}\frac{\partial f}{\partial\theta^{k}}\theta^{k}\text{.}\] Using the fact that $\mathrm d\phi^{\ast}=\phi^{\ast}\mathrm d$, it follows that: \begin{lemma} If $I\in\Inv(\theta)$ is an invariant function of a coframe $\theta=\{\theta^{i}\}$, then so is $\frac{\partial I}{\partial\theta^{k}}$ for all $1\leq k\leq n$. \end{lemma} Obviously, if $\phi:M\to\bar{M}$ is an equivalence between coframes $\theta=\{\theta^{i}\}$ and $\bar{\theta}=\{\bar{\theta}^{i}\}$, we obtain a bijection $\phi^*:\Inv(\bar{\theta})\to\Inv(\theta)$. So, for example, we have as necessary conditions for equivalence that the structure functions and their coframe derivatives of all orders \[ C_{ij}^{k},\frac{\partial C_{ij}^{k}}{\partial\theta^{l}}, \ldots, \frac{\partial^{s}C_{ij}^{k}}{\partial\theta^{l_{1}}\cdots\partial\theta^{l_{s}}},\ldots\] must correspond under the equivalence. This gives an infinite set of conditions for equivalence. 
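To make these notions concrete, here is a minimal worked example; the specific coframe below is our own illustration and does not appear elsewhere in the text.

```latex
% Illustrative example (ours, not from the text): on $M=\{(x,y): x>0\}$
% take the coframe $\theta^1=\mathrm{d}x$, $\theta^2=x\,\mathrm{d}y$. Then
\[
  \mathrm{d}\theta^1 = 0, \qquad
  \mathrm{d}\theta^2 = \mathrm{d}x\wedge\mathrm{d}y
                     = \tfrac{1}{x}\,\theta^1\wedge\theta^2,
\]
% so the only nonzero structure function is $C_{12}^2 = 1/x =: I$,
% an invariant function of the coframe. Its coframe derivatives follow from
\[
  \mathrm{d}I = -\tfrac{1}{x^2}\,\mathrm{d}x = -I^2\,\theta^1,
  \qquad\text{so}\qquad
  \frac{\partial I}{\partial\theta^1} = -I^2,\quad
  \frac{\partial I}{\partial\theta^2} = 0.
\]
% Every higher coframe derivative is again a polynomial in $I$, so the
% infinite list of invariants above is functionally generated by $I$ alone.
```

This already illustrates the phenomenon exploited below: although the procedure produces infinitely many invariants, they may all be functions of a small independent set.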
However, observe that if invariants $f_{1},...,f_{l}\in\Inv(\theta)$ satisfy a functional relationship $f_{l}=H(f_{1},...,f_{l-1})$, then the corresponding elements $\bar{f}_{1},...,\bar{f}_{l}$ in $\Inv(\bar{\theta})$ must satisfy the same functional relation \begin{equation*} \bar{f}_{l}=H\left( \bar{f}_{1},...,\bar{f}_{l-1}\right) \end{equation*} for the same function $H$. This shows that we do not need to deal with all the invariant functions in $\Inv(\theta)$, but only with those that are functionally independent. For any subset $\mathcal{C} \subset C^{\infty}(M)$ we define the \textbf{rank of $\mathcal{C}$ at $p\in M$}, denoted by $r_{p}(\mathcal{C})$, to be the dimension of the vector space spanned by $\{\mathrm d_p f : f \in \mathcal{C}\}$. Also, we say that $\mathcal{C}$ is \textbf{regular at $p$} if $r_{p^{\prime}}(\mathcal{C}) = r_{p}(\mathcal{C})$ for all $p^{\prime}$ in a neighborhood of $p$ in $M$. Finally, we say that $\mathcal{C}$ is \textbf{regular} if it is regular at every point $p \in M$. \begin{definition} A coframe $\theta$ is called \textbf{fully regular at $p\in M$} if the set $\Inv(\theta)$ is regular at $p$. It is called \textbf{fully regular} if it is fully regular at every $p \in M$. \end{definition} By the implicit function theorem, if $\theta$ is a fully regular coframe in $M$ of rank $d$ at $p\in M$, then we can find invariant functions $\{h_{1},...,h_{d}\}\subset\Inv(\theta)$, independent in a neighborhood of $p$, such that every $f\in\Inv(\theta)$ can be written as \begin{equation*} f = H(h_{1},...,h_{d}) \end{equation*} in a neighborhood of $p$. In particular, we can assume that there is an open set $U\subset M$ containing $p$ where this independent set of invariant functions determines a smooth map: \[ h : U \subset M \rightarrow \mathbb{R}^{d},\quad h (p) = (h_{1}(p),...,h_{d}(p)), \] onto an open set $X \subset \mathbb{R}^{d}$. 
Note that $C_{ij}^k=C_{ij}^k(h_1,\dots,h_d)$, so the structure functions are smooth functions $C_{ij}^{k} \in C^{\infty}(X)$. Also, differentiating $h_{a}$, we find \[ \mathrm d h_{a}=\sum_{i} (F_{i}^{a}\circ h) ~\theta^{i}, \] for some smooth functions $F_i^a\in C^{\infty}(X)$, so we obtain $n$ vector fields on $X$: \[F_i = \sum F^a_i \frac{\partial}{\partial h_a}.\] Since the functions $C_{ij}^{k}$ and the vector fields $F_i$ satisfy \begin{align*} \mathrm d \theta^{k} & = \sum_{i<j}C_{ij}^{k}(h) \theta^{i} \wedge \theta^{j}\\ \mathrm d h & = \sum_{i}F_{i}(h)\theta^{i} \end{align*} we conclude that a coframe which is fully regular at $p$ determines the initial data of a Cartan realization problem (Problem \ref{realiz:probl}). \begin{remark} \label{rem:uniqueness} The data depends on the choice of the functions $\{h_{1},...,h_{d}\}\subset\Inv(\theta)$. However, if $\{h'_{1},...,h'_{d}\}\subset\Inv(\theta)$ is a different choice, defining an open set $X'$ and giving rise to data ${C'}_{ij}^{k}\in C^{\infty}(X')$ and $F'_i\in\ensuremath{\mathfrak{X}}(X')$, then there is a diffeomorphism $\phi$ from a neighborhood of $h(p)$ in $X$ onto a neighborhood of $h'(p)$ in $X'$ which makes the diagram commute: \[\xymatrix{&U \ar[dl]_{h}\ar[dr]^{h'}& \\ X\ar[rr]_{\phi}& & X'}\] and is such that $\phi^*{C'}_{ij}^{k}=C_{ij}^{k}$ and $\phi_*F_i=F'_i$. \end{remark} \subsection{Global equivalence} \label{subsec:coframes:2} So far, we have only considered a neighborhood of a point where the coframe is fully regular. We consider now the case where $\theta$ is a fully regular coframe at every point of $M$. 
\begin{definition} We will say that two points $p$ and $q$ of $M$ are \textbf{locally formally equivalent} if there exist neighborhoods $U_p$ and $U_q$ of $p$ and $q$ in $M$ and a diffeomorphism $\phi: U_p \to U_q$ such that $\phi(p) = q$ and \begin{equation} \label{formally equivalent} f (\phi(p^{\prime})) = f(p^{\prime}), \quad \text{ for all } p^{\prime} \in U_p\text{ and } f\in \Inv(\theta). \end{equation} \end{definition} \begin{remark} \label{rem:formal:local:equiv} We note that $p$ and $q$ are locally formally equivalent if and only if $f(p)=f(q)$ for all $f\in\Inv(\theta)$. The crucial point is that since $\Inv(\theta)$ is closed under the operation of taking coframe derivatives, it follows that if $f(p) = f(q)$ for all $f \in \Inv(\theta)$, and $f_1, \ldots, f_d \in \Inv(\theta)$ are functionally independent in a neighborhood of $p$, then they are also functionally independent in some neighborhood of $q$. We shall see later, in Proposition \ref{prop: formal equivalence}, that local formal equivalence and local equivalence actually coincide. \end{remark} With this definition, we have: \begin{prop} Let $\theta$ be a fully regular coframe on $M$ of rank $d$ and denote by $\sim$ the local formal equivalence relation. The quotient: \[X_\theta := M{/}\sim\] has a natural structure of a smooth manifold of dimension $d$. \end{prop} \begin{proof} On the orbit space $X_\theta := M{/}\sim$ we consider the quotient topology and we denote by $\kappa_\theta:M\to X_\theta$ the projection. Also, if $U$ is an open subset of $M$, then we will denote by $\Inv_U(\theta)$ the restriction of $\Inv(\theta)$ to $U$. We start by choosing a cover of $M$ by a family of open sets $\set{U_{\lambda}}_{\lambda \in \Lambda}$ such that for each $\lambda \in \Lambda$ there exist elements $h^{\lambda}_1, \ldots , h^{\lambda}_d \in \Inv(\theta)$ whose restrictions to $U_{\lambda}$ are functionally independent and generate $\Inv_{U_{\lambda}}(\theta)$. 
Thus, on each $U_{\lambda}$ we obtain a smooth \emph{open} map $h^{\lambda}: U_{\lambda} \to \mathbb R^d$ defined by \[p \mapsto (h^{\lambda}_1(p), \ldots, h^{\lambda}_d(p)).\] The image of this map is an open subset $h^{\lambda}(U_{\lambda}) \subset \mathbb R^d$, and we have an induced map on the orbit space $\bar{h}^{\lambda}: X_{\lambda} \to \mathbb R^d$, where $X_{\lambda}:=\kappa_\theta(U_{\lambda})\subset X_\theta$. \begin{lemma} \label{lem:aux:1} The sets $X_\lambda$ form an open cover of $X_\theta$ and each $\bar{h}^{\lambda}: X_{\lambda} \to \mathbb R^d$ is a homeomorphism onto the open set $h^{\lambda}(U_{\lambda})\subset \mathbb R^d$. \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:aux:1}] The set $X_\lambda=\kappa_\theta(U_\lambda)$ is open if and only if the saturation of $U_\lambda$, defined by \[ \widetilde{U}_\lambda:=\{q\in M:q\sim p\text{ for some }p\in U_\lambda\},\] is open in $M$. This follows because if $q\in \widetilde{U}_\lambda$ then there exists $p\in U_\lambda$ and a diffeomorphism $\phi: U_p \to U_q$, defined on neighborhoods of $p$ and $q$, such that \eqref{formally equivalent} holds. Then $\phi(U_p\cap U_\lambda)$ is an open set containing $q$, which is contained in $\widetilde{U}_\lambda$. We conclude that the saturation $\widetilde{U}_\lambda$ is open, therefore $X_\lambda$ is also open. The induced map $\bar{h}^{\lambda}: X_{\lambda} \to \mathbb R^d$ is continuous (by definition of the quotient topology), open (since $h^{\lambda}$ is open) and gives a 1:1 map onto the open set $h^{\lambda}(U_{\lambda})\subset \mathbb R^d$. Hence, $\bar{h}^{\lambda}: X_{\lambda} \to h^{\lambda}(U_{\lambda})$ is a homeomorphism. 
\end{proof} Now observe that if there exists a diffeomorphism defined on an open subset $W_{\lambda}\subset U_{\lambda}$ with values in another open subset $W_{\mu} \subset U_{\mu}$, \[\phi: W_{\lambda} \to W_{\mu},\] and such that \[ f (\phi(p)) = f(p), \quad \text{ for all } p \in W_{\lambda}, \text{ and } f\in \Inv(\theta), \] (for example if $U_{\lambda}\cap U_{\mu} \neq \emptyset$, and $W_{\lambda} = U_{\lambda} \cap U_{\mu} = W_{\mu}$) then the restriction of each $h^{\mu}_i$ to $W_{\mu}$ can be written as a smooth function of the (restriction of the) functions $h^{\lambda}_1, \ldots, h^{\lambda}_d$, i.e., \[h^{\mu}_i = H_{\lambda \mu}^i(h^{\lambda}_1, \ldots, h^{\lambda}_d).\] Since both sets $\set{h^{\lambda}_i}$ and $\set{h^{\mu}_i}$ are functionally independent, we obtain a diffeomorphism \[H_{\lambda \mu} : h^{\lambda}(W_{\lambda})\to h^{\mu}(W_{\mu}).\] Therefore, at the level of the orbit space, we obtain the commutative diagram: \[ \xymatrix{&X_\lambda\cap X_\mu \ar[dl]_{\bar{h}^{\lambda}}\ar[dr]^{\bar{h}^{\mu}}& \\ \bar{h}^{\lambda}(X_\lambda\cap X_\mu)\ar[rr]_{H_{\lambda \mu}}& & \bar{h}^{\mu}(X_\lambda\cap X_\mu)} \] Hence, the family $\{(X_\lambda,\bar{h}^{\lambda})\}$ gives an atlas for $X_\theta$, whose transition functions are precisely the maps $H_{\lambda \mu}$, so we conclude that $X_\theta$ is a smooth manifold, possibly non-Hausdorff. We claim that $X_\theta$ is Hausdorff. Let $x,y\in X_\theta$ and suppose that any two neighborhoods of $x$ and $y$ intersect. To prove the claim we must show that $x=y$. For this, pick $p,q\in M$ such that $\kappa_\theta(p)=x$ and $\kappa_\theta(q)=y$. Then, by our assumption, any neighborhoods of $p$ and $q$ contain points which are in the same equivalence class. Therefore, we can choose sequences $p_n\to p$ and $q_n\to q$ such that $p_n\sim q_n$. 
It follows that for any $f\in \Inv(\theta)$: \[ f(p)=\lim f(p_n)=\lim f(q_n)=f(q).\] This implies that $p\sim q$ (see Remark \ref{rem:formal:local:equiv}) so $x=y$. \end{proof} We will see that the manifold $X_\theta$ plays a crucial role in the various equivalence and classification problems associated with a fully regular coframe $\theta$. Hence we suggest the following \begin{definition} The manifold $X_\theta$ is called the \textbf{classifying manifold} of the coframe and the natural map $\kappa_\theta: M \to X_\theta$ is called the \textbf{classifying map} of the coframe. \end{definition} \begin{remark} For an arbitrary coframe $\theta$ one can still define the (singular) classifying space $X_\theta:=M/\sim$, where $\sim$ denotes local equivalence (which, now, may differ from formal local equivalence). The space $X_\theta$ is smooth on the dense open set consisting of points where the coframe is fully regular. It is an interesting problem to study the kind of singularities that may arise in this space. \end{remark} If $\theta=\{\theta^i\}$ is a fully regular coframe in $M$ of rank $d$, with classifying manifold $X_\theta$, then by Remark \ref{rem:uniqueness} the structure functions $C_{ij}^{k}$ can be seen as smooth functions on $X_\theta$, while the locally defined vector fields $F_i =\sum_a F_i^a\frac{\partial}{\partial h^\lambda_a}$ glue to globally defined vector fields $F_i\in\ensuremath{\mathfrak{X}}(X_\theta)$. We conclude that: \begin{prop} If $\theta$ is a fully regular coframe on a manifold $M$, then it gives rise to a Cartan's realization problem with initial data $(n, X_\theta, C^k_{ij}, F_i)$ for which $(M, \theta, \kappa_\theta)$ is a solution. \end{prop} \section{The classifying algebroid} \label{sec:ClassAlgbrd} In the previous section we have seen that the equivalence problem for coframes leads naturally to a Cartan realization problem. 
We now show that if such a problem admits solutions, then it is associated with a Lie algebroid, called the \emph{classifying algebroid} of the realization problem. \subsection{Lie algebroids and Cartan's Problem} \label{subsec:Algbrd} Recall that a Lie algebroid $A\to X$ is a vector bundle equipped with a Lie bracket on its space of sections $\Gamma(A)$ and a bundle map $\sharp:A\to TX$, called the anchor, such that the following Leibniz type identity holds: \[ [\alpha,f\beta]=f[\alpha,\beta]+(\mathscr{L}_{\sharp(\alpha)}f)\beta,\quad (\alpha,\beta\in\Gamma(A),\ f\in C^\infty(X)). \] We refer to \cite{CannasWeinstein,CrainicFernandes:lecture} for details on Lie algebroids. If the vector bundle $A$ is trivial, so that we have a basis of global sections $\{\alpha_1,\dots,\alpha_n\}$, then we can express the bracket by: \begin{equation} \label{eq:bracket:algbrd} [\alpha_i,\alpha_j]=\sum_{k=1}^n C_{ij}^k\alpha_k, \end{equation} for some structure functions $C_{ij}^k\in C^\infty(X)$. The anchor, in turn, is characterized by the vector fields $F_i\in\ensuremath{\mathfrak{X}}(X)$ given by: \begin{equation} \label{eq:anchor:algbrd} F_i:=\sharp(\alpha_i), \quad (i=1,\dots,n). \end{equation} The Leibniz identity and the Jacobi identity for the bracket lead to the following set of equations: \begin{align} \label{eq:struct:algbrd:1} \left[ F_i, F_j \right] & = \sum_k C^k_{ij}F_k\\ \label{eq:struct:algbrd:2} \mathscr{L}_{F_j}C^i_{kl} + \mathscr{L}_{F_k} C^i_{lj} + \mathscr{L}_{F_l} C^i_{jk}& = \sum_m( C_{mj}^{i}C_{kl}^{m}+C_{mk}^{i}C_{lj}^{m}+C_{ml}^{i}C_{jk}^{m}) \end{align} Conversely, if one is given functions $C_{ij}^k\in C^\infty(X)$ and vector fields $F_i\in\ensuremath{\mathfrak{X}}(X)$ satisfying these equations, then one obtains a Lie algebroid structure on the trivial vector bundle $A:=X\times\mathbb R^n$ with Lie bracket \eqref{eq:bracket:algbrd} and anchor \eqref{eq:anchor:algbrd}. 
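As a quick consistency check (this verification is ours and is not spelled out in the text), the structure equations specialize to a familiar identity when the base reduces to a point:

```latex
% Consistency check (ours): when $X$ is a point, the vector fields $F_i$
% vanish and the $C_{ij}^k$ are constants. The first structure equation
% then holds trivially, while the second reduces to
\[
  \sum_m \left( C_{mj}^{i}C_{kl}^{m}
              + C_{mk}^{i}C_{lj}^{m}
              + C_{ml}^{i}C_{jk}^{m} \right) = 0,
\]
% which is precisely the Jacobi identity for the constant bracket
% $[e_j,e_k]=\sum_i C_{jk}^{i}\,e_i$ on $\mathbb{R}^n$.
```

In other words, in the constant-coefficient case the classifying algebroid degenerates to a Lie algebra, recovering the situation of the Introduction.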
\begin{remark} If one also chooses local coordinates $(x_1, \ldots, x_d)$ on a domain $U\subset X$, so that the vector fields $F_i$ are expressed as: \[ F_i=\sum_a F^a_i \frac{\partial}{\partial x_a},\] then we obtain the complete set of \emph{structure functions} $C_{ij}^k,F_i^a\in C^\infty(U)$ of the Lie algebroid $A$. Equations \eqref{eq:struct:algbrd:1} and \eqref{eq:struct:algbrd:2} become: \begin{align*} \sum\limits_{b=1}^{d}\left( F_{i}^{b}\frac{\partial F_{j}^{a}}{\partial x_{b}}-F_{j}^{b}\frac{\partial F_{i}^{a}}{\partial x_{b}}\right) &=\sum\limits_{l=1}^{n}C_{ij}^{l}F_{l}^{a}\\ \sum\limits_{b=1}^{d}\left( F_{j}^{b}\frac{\partial C_{kl}^{i}}{\partial x_{b}}+F_{k}^{b}\frac{\partial C_{lj}^{i}}{\partial x_{b}}+F_{l}^{b} \frac{\partial C_{jk}^{i}}{\partial x_{b}}\right) &= \sum\limits_{m=1}^{n}\left( C_{mj}^{i}C_{kl}^{m}+C_{mk}^{i}C_{lj}^{m}+C_{ml}^{i}C_{jk}^{m}\right) \end{align*} We call this set of equations, or its shorter form \eqref{eq:struct:algbrd:1} and \eqref{eq:struct:algbrd:2}, the \textbf{structure equations} of the Lie algebroid. \end{remark} We can now give the necessary conditions for solving the realization problem. They are simply a consequence of the fact that $\mathrm d ^{2}=0$: \begin{prop} \label{NecessaryRealization} Let $(n,X,C_{ij}^{k},F_i)$ be the initial data of a Cartan's realization problem. If for every $x_0\in X$ there is a realization $(M,\theta,h)$ with $h(m_0)=x_0$, for some $m_0\in M$, then the functions $-C_{ij}^{k}\in C^{\infty}(X)$ and the vector fields $F_i \in \ensuremath{\mathfrak{X}}(X)$ determine a Lie algebroid structure $A$ over $X$. \end{prop} \begin{proof} Differentiating equations \eqref{eq: dtheta} and \eqref{eq: dh} and using $\mathrm d^2=0$, we obtain: \begin{align*} \mathscr{L}_{F_j}C^i_{kl} + \mathscr{L}_{F_k} C^i_{lj} + \mathscr{L}_{F_l} C^i_{jk}&= - \sum_m( C_{mj}^{i}C_{kl}^{m}+C_{mk}^{i}C_{lj}^{m}+C_{ml}^{i}C_{jk}^{m})\\ \left[ F_i, F_j \right] &= - \sum_k C^k_{ij}F_k. 
\end{align*} These are just the structure equations \eqref{eq:struct:algbrd:1} and \eqref{eq:struct:algbrd:2} for the Lie algebroid defined by the trivial vector bundle $A\to X$ with fiber $\mathbb R^n$, with Lie bracket and anchor given by: \begin{align*} \sharp(\alpha_i)&:= F_i,\\ [\alpha_i, \alpha_j]&:= -\sum_k C^k_{ij}\alpha_k, \end{align*} where $\{\alpha_1, \ldots, \alpha_n\}$ is the canonical basis of sections of $A$. \end{proof} Using this proposition we introduce the following basic concept: \begin{definition} The Lie algebroid constructed out of the initial data of a Cartan's realization problem will be called the \textbf{classifying algebroid} of the problem. \end{definition} We will see later that the fact that the initial data of a Cartan realization problem determines a Lie algebroid is also sufficient for the existence of solutions, and that this Lie algebroid leads to solutions of the various classification problems associated to coframes, which justifies its name. \begin{remark} \label{rem:transitive} Notice that if we start with a fixed coframe $\theta$ on a manifold $M$, then we have an associated Cartan realization problem. The corresponding Lie algebroid will be called the \textbf{classifying algebroid of the coframe} and denoted $A_\theta\to X_\theta$. If the coframe is also the solution of another realization problem, with associated algebroid $A$, we will see later (Corollary \ref{cor:subalgbrd}) the precise relationship between $A_\theta$ and $A$. Also, note that since the vector fields $F_1,\dots, F_n$ generate at each point the tangent space to $X_\theta$, the classifying algebroid of the coframe $A_\theta$ is always transitive, while this does not need to hold for a general classifying Lie algebroid $A$ of a Cartan realization problem. \end{remark} \subsection{Realizations and Lie algebroid morphisms} \label{subsec:morphisms} Let $A\to X$ be the classifying Lie algebroid of a Cartan's problem. 
Each realization $(M,\theta,h)$ of the problem determines a bundle map: \begin{equation} \label{eq:alg:morphism} \xymatrix{ TM\ar[r]^{\theta}\ar[d]& A\ar[d]\\ M\ar[r]_{h}& X} \end{equation} by setting: \begin{align*} TM&\longrightarrow A=X\times\mathbb R^n,\\ v&\longmapsto (h(p(v)),(\theta^1(v),\dots,\theta^n(v))), \end{align*} where $p:TM\to M$ denotes the projection. This bundle map is a fiberwise isomorphism. We have the following basic result: \begin{prop} \label{prop:morphisms} Let $A\to X$ be the classifying Lie algebroid of a Cartan's problem. The realizations of this problem are in 1:1 correspondence with bundle maps \eqref{eq:alg:morphism} which are Lie algebroid morphisms and fiberwise isomorphisms. \end{prop} Before we prove this result, let us recall some basic facts about Lie algebroid morphisms (see, e.g., \cite{CrainicFernandes:lecture}). A Lie algebroid morphism is a bundle map: \[ \xymatrix{ A\ar[r]^{\mathcal{H}}\ar[d]& B\ar[d]\\ X\ar[r]_{h}& Y} \] which commutes with anchors: \[ \mathrm d h\circ \sharp_A=\sharp_B\circ\mathcal{H},\] and which preserves brackets. Since we are dealing with a bundle map over different bases, this condition is somewhat cumbersome to express (see \cite{CrainicFernandes:lecture}). For us, it will be convenient to use the generalized Maurer-Cartan equation, which we now recall. First we introduce an arbitrary connection $\nabla$ on the vector bundle $B\rightarrow Y$. 
Then for \emph{any} bundle maps $\mathcal{H},\mathcal{G}:A\to B$ which commute with the anchors, we define: \begin{enumerate} \item[(a)] The differential operator $\mathrm d_{\nabla}\mathcal{H}$: for each $\alpha_1,\alpha_2\in\Gamma(A)$, $\mathrm d_{\nabla}\mathcal{H}(\alpha_{1},\alpha_{2})$ is the section of the pull-back bundle $\Gamma(h^{\ast}B)$ given by(\footnote{Note that, in general, $\mathrm d_{\nabla}^{2}\neq 0$ so that $\mathrm d_{\nabla}$ is not a differential.}): \begin{equation} \label{eq:differential} \mathrm d_{\nabla}\mathcal{H}(\alpha_{1},\alpha_{2}):= \nabla_{\sharp_A\alpha_{1}}\mathcal{H}(\alpha_{2}) - \nabla_{\sharp_A\alpha_{2}}\mathcal{H}(\alpha_{1})-\mathcal{H}([\alpha_{1},\alpha_{2}]) \end{equation} where we use the same letter $\nabla$ for the pullback connection on $h^{\ast}B$. \item[(b)] The bracket $[\mathcal{H},\mathcal{G}]_{\nabla}$: for each $\alpha_1,\alpha_2\in\Gamma(A)$, $[\mathcal{H},\mathcal{G}]_{\nabla}(\alpha_1,\alpha_2)$ is the section of the pull-back bundle $\Gamma(h^{\ast}B)$ given by: \begin{equation} \label{eq:bracket} [\mathcal{H},\mathcal{G}]_{\nabla}(\alpha_1,\alpha_2)=-(T_{\nabla}(\mathcal{H}(\alpha_1),\mathcal{G}(\alpha_2)) + T_{\nabla}(\mathcal{G}(\alpha_1),\mathcal{H}(\alpha_2))), \end{equation} where $T_{\nabla}$ is the torsion of the pullback connection on $h^*B$. \end{enumerate} Now we define the \textbf{generalized Maurer-Cartan equation} for a bundle map of Lie algebroids $\mathcal{H}:A\to B$, which commutes with the anchors, to be: \begin{equation} \label{eq:maurer:cartan} \mathrm d_{\nabla}\mathcal{H}+\frac{1}{2}[\mathcal{H},\mathcal{H}]_{\nabla} = 0. \end{equation} The generalized Maurer-Cartan equation is independent of the choice of connection $\nabla$. A fundamental fact (see \cite{CrainicFernandes:lecture}) is that a bundle map of Lie algebroids $\mathcal{H}:A\to B$ which commutes with the anchors satisfies the generalized Maurer-Cartan equation if and only if it is a Lie algebroid morphism. 
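To see how these definitions reduce to the classical situation, here is a short computation of ours (not carried out in the text): when the target algebroid is a Lie algebra over a point and the connection is trivial, the generalized Maurer-Cartan equation becomes the classical one from the Introduction.

```latex
% Reduction to the classical case (our computation). Take $A=TM$ and
% $B=\mathfrak{g}$, a Lie algebra viewed as a Lie algebroid over a point,
% with $\nabla$ the trivial connection on $M\times\mathfrak{g}$. Since
% $\sharp_B=0$, the torsion of the pullback connection is
\[
  T_{\nabla}(\alpha,\beta)
    = \nabla_{\sharp_B\alpha}\beta - \nabla_{\sharp_B\beta}\alpha
      - [\alpha,\beta]
    = -[\alpha,\beta].
\]
% Hence, for a $\mathfrak{g}$-valued $1$-form $\theta$ and vector fields $V,W$,
\begin{align*}
  \mathrm{d}_{\nabla}\theta\,(V,W)
    &= V(\theta(W)) - W(\theta(V)) - \theta([V,W])
     = \mathrm{d}\theta\,(V,W),\\
  \tfrac{1}{2}[\theta,\theta]_{\nabla}(V,W)
    &= -T_{\nabla}(\theta(V),\theta(W))
     = [\theta(V),\theta(W)],
\end{align*}
% so the generalized Maurer-Cartan equation becomes the classical equation
% $\mathrm{d}\theta + \frac{1}{2}[\theta,\theta] = 0$.
```

This also explains why no choice of connection is visible in the classical theory: for a Lie algebra the trivial connection is canonical.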
After these preliminaries, we can turn to the proof of our result. \vskip 10 pt \begin{proof}[Proof of Proposition \ref{prop:morphisms}] First we observe that triples $(M,\theta,h)$, where $\theta$ is a coframe on $M$ and $h:M\to X$ is a smooth map, are in 1:1 correspondence with bundle maps $\mathcal{H}_\theta=(h,\theta):TM\to A$ which are fiberwise isomorphisms. Then one checks easily that equation \eqref{eq: dh} is equivalent to the condition that $\mathcal{H}_\theta$ commutes with the anchors. Moreover, if $\nabla$ is the trivial connection on the (trivial) bundle $A=X\times \mathbb R^n$, then equation \eqref{eq: dtheta} is equivalent to the condition that $\theta$ be a solution of the Maurer-Cartan equation \eqref{eq:maurer:cartan}. This is a consequence of the following computation: \begin{align*} (\mathrm d_{\nabla}\theta+\frac{1}{2}[\theta,\theta]_{\nabla})(\partial_{\theta^{i}},\partial_{\theta^{j}}) & = \bar{\nabla}_{(\partial_{\theta^{i}})}\theta(\partial_{\theta^{j}}) - \bar{\nabla}_{(\partial_{\theta^{j}})} \theta(\partial_{\theta^{i}}) - \theta([\partial_{\theta^{i}},\partial_{\theta^{j}}])+\\ &\hskip 50 pt -T_{\bar{\nabla}}(\theta(\partial_{ \theta^{i}}),\theta(\partial_{\theta^{j}}))\\ & = \nabla_{ \sharp \alpha_{i}} \alpha_{j} - \nabla_{ \sharp \alpha_{j}} \alpha_{i}-\theta([\partial_{\theta^{i}},\partial_{\theta^{j}}])+\\ &\hskip 50 pt -T_{\nabla}(\alpha_{i},\alpha_{j})\\ & =[\alpha_{i},\alpha_{j}]-\sum_k\theta^k([\partial_{\theta^{i}},\partial_{\theta^{j}}])\alpha_k\\ & =\sum_k\left(C_{ij}^k-\mathrm d\theta^k(\partial_{\theta^{i}},\partial_{\theta^{j}})\right)\alpha_k\\ & =\sum_k\left(\sum_{l<m}C_{lm}^k\theta^l\wedge\theta^m(\partial_{\theta^{i}},\partial_{\theta^{j}})-\mathrm d\theta^k(\partial_{\theta^{i}},\partial_{\theta^{j}})\right)\alpha_k \end{align*} where $\partial_{ \theta^{i}}=\frac{\partial}{\partial \theta^{i}}$ and $\bar{\nabla}$ denotes the pullback connection on $h^{\ast}A$. The last expression vanishes for all $i$ and $j$ precisely when $\theta$ satisfies the structure equation \eqref{eq: dtheta}. 
\end{proof} \begin{corol} Let $A\to X$ be the classifying Lie algebroid of a Cartan's problem. If $(M,\theta,h)$ is a realization of the problem then the rank of $h$ is locally constant and $h$ maps each connected component of $M$ onto an open subset of an orbit of $A$. \end{corol} \begin{proof} If $\mathcal{H}_\theta=(h,\theta):TM\to A$ is the fiberwise isomorphism associated with the realization $(M,\theta,h)$ then the condition that it commutes with the anchors just means that: \[ \mathrm d h=\sharp\circ\mathcal{H}_\theta.\] Therefore, we conclude that for each $m_0\in M$ (i) there is a neighborhood $U$ of $m_0$ such that the image $h(U)$ is contained in the orbit of $A$ through $h(m_0)$, and (ii) $\text{\rm rk}\,(\mathrm d_{m_0}h)$ equals the dimension of this orbit. \end{proof} \begin{remark} Note that, in general, a realization of Cartan's problem may not have constant rank. What the Corollary says is that if the problem has an associated Lie algebroid then every realization has locally constant rank. By Proposition \ref{NecessaryRealization}, this happens if there is a solution passing through every $x_0\in X$. \end{remark} \section{Maurer-Cartan Forms on Lie groupoids} \label{sec:Maurer-Cartan} For a Cartan realization problem whose manifold $X$ reduces to a point, the structure functions $C_{ij}^k$ are constant and the vector fields $F_i$ vanish. In this case, the classifying algebroid is simply a Lie algebra and one can produce solutions to the realization problem by integrating this Lie algebra into a Lie group and using its Maurer-Cartan form, as was explained in Example \ref{ex:constant}. Our solution for a general Lie algebroid will follow the same pattern, where we will now need to integrate the Lie algebroid to a Lie groupoid. Therefore, we will need to discuss first Maurer-Cartan forms on Lie groupoids and their universal property. \subsection{Definition of a Maurer-Cartan Form} \label{subsec:forms} In this paragraph we define Maurer-Cartan forms on Lie groupoids. 
These turn out to be foliated differential forms with values in the Lie algebroid. Let $\ensuremath{\mathcal{F}}$ be a foliation on $M$. Recall that an \textbf{$\ensuremath{\mathcal{F}}$-foliated $k$-form on $M$} is a section of $\wedge^k T^{\ast}\ensuremath{\mathcal{F}}$, i.e., it is a $k$-form on $M$ which is only defined on $k$-tuples of vector fields which are all tangent to the foliation. \begin{definition} An $\ensuremath{\mathcal{F}}$-foliated differential $1$-form on $M$ with values in a Lie algebroid $A\rightarrow X$ is a bundle map \[ \xymatrix{ T\ensuremath{\mathcal{F}} \ar[d] \ar[r]^\theta & A \ar[d] \\ M \ar[r]_h & X } \] which is compatible with the anchors, i.e., such that $\sharp(\theta(\xi))=\mathrm d h(\xi)$, $\forall\xi\in T\ensuremath{\mathcal{F}}$. \end{definition} Let $\mathcal{G}$ be a Lie groupoid. We will use the convention that arrows in $\mathcal{G}$ go from right to left, so given two arrows $g$ and $h$, the product $hg$ is defined provided that $\mathbf{s}(h)=\mathbf{t}(g)$. In particular, right translation by an element $g\in \mathcal{G}$ is a map \[ R_{g} : \mathbf{s}^{-1}(\mathbf{t}(g)) \rightarrow\mathbf{s}^{-1}(\mathbf{s}(g)) \] so it does not make sense to say that a form on $\mathcal{G}$ is right invariant. We must restrict ourselves to $\mathbf{s}$-foliated differential forms. We will say that an $\mathbf{s}$-foliated differential form $\omega$ on $\mathcal{G}$ is \textbf{right invariant} if \[ \omega(\xi) = \omega((R_{g})_{\ast}(\xi)) \] for every $\xi$ tangent to an $\mathbf{s}$-fiber and $g\in\mathcal{G}$. Equivalently, we will write \[ (R_{g})^{\ast}\omega = \omega. \] \begin{remark} \label{rem:invariance} If $\omega$ is an $\mathbf{s}$-foliated differential form on the groupoid $\mathcal{G}$ with values in its Lie algebroid $A$, the notion of right invariance still makes sense provided that we assume that $\omega:T^{\mathbf{s}}\mathcal{G}\to A$ is a bundle map that covers the target map $\mathbf{t}:\mathcal{G}\to X$.
\end{remark} On every Lie groupoid we may define a canonical, right-invariant, differential $1$-form with values in its Lie algebroid: \begin{definition} The \textbf{Maurer-Cartan form of a Lie groupoid $\mathcal{G}$} is the Lie algebroid valued $\mathbf{s}$-foliated $1$-form \[\xymatrix{T^{\mathbf{s}}\mathcal{G} \ar[r]^{\omega_{\textrm{MC}}} \ar[d] & A \ar[d]\\ \mathcal{G} \ar[r]_{\mathbf{t}} & X}\] defined by \[ \omega_{\textrm{MC}}(\xi) := \mathrm d_g R_{g^{-1}}( \xi) \in A_{\mathbf{t}(g)}, \quad (\xi\in T_{g}^{\mathbf{s}}\mathcal{G}). \] \end{definition} For any foliation $\ensuremath{\mathcal{F}}$ of $M$, the bundle $T\ensuremath{\mathcal{F}}\to M$ has an obvious Lie algebroid structure with anchor the inclusion. Hence, it makes sense to talk about the generalized Maurer-Cartan equation in this context (see Section \ref{subsec:morphisms}), and we have the following: \begin{prop} The Maurer-Cartan form $\omega_{\textrm{MC}}$ is a Lie algebroid morphism $\omega_{\textrm{MC}}:T^{\mathbf{s}}\mathcal{G} \to A$, hence it satisfies the generalized Maurer-Cartan equation: \[ \mathrm d_{\nabla}\omega_{\textrm{MC}}+\frac{1}{2}[\omega_{\textrm{MC}},\omega_{\textrm{MC}}]=0. \] \end{prop} \begin{proof} The definition of the Maurer-Cartan form shows that $\omega_{\textrm{MC}}:T^{\mathbf{s}}\mathcal{G} \to A$ preserves anchors. On the other hand, if $\xi_1$ and $\xi_2$ are the right invariant vector fields on $\mathcal{G}$ corresponding to sections $\alpha_1$ and $\alpha_2$ of $A$, then the definition of $\omega_{\textrm{MC}}$ also shows that: \[ \omega_{\textrm{MC}}([\xi_1,\xi_2])=[\alpha_1,\alpha_2], \] so it follows that $\omega_{\textrm{MC}}$ also preserves brackets. 
\end{proof} \subsection{The Local Universal Property} \label{subsec:local:propert} We will now show that any $1$-form on a differentiable manifold, with values in an integrable Lie algebroid $A$, satisfying the generalized Maurer-Cartan equation is locally the pullback of the Maurer-Cartan form on a Lie groupoid integrating $A$. We will need the following lemma. \begin{lemma} Let $\ensuremath{\mathcal{F}}$ be a foliation on a manifold $M$ and let $\theta$ be an $\ensuremath{\mathcal{F}}$-foliated $1$-form (over $h$) on $M$ with values in a Lie algebroid $A \rightarrow X$ equipped with an arbitrary connection $\nabla$. Assume that the distribution $\mathcal{D} = \{\ker \theta_{x} : x\in M \} \subset T\ensuremath{\mathcal{F}} \subset TM$ has constant rank. Then $\mathcal{D}$ is integrable if and only if $\mathrm d_{\nabla} \theta(\xi_{1},\xi_{2}) = 0$ whenever $\xi _{1},\xi_{2} \in \mathcal{D}$. \end{lemma} \begin{proof} Choose a local basis $\xi_{1},...,\xi_{r} \in \ensuremath{\mathfrak{X}}(\ensuremath{\mathcal{F}})$ of $\mathcal{D}$ in an open set of $M$. Then by Frobenius Theorem, $\mathcal{D}$ is integrable if and only if $[\xi_{i},\xi_{j}] \in \mathrm{span}\{\xi _{1},...,\xi_{r}\}$ for all $1 \leq i,j \leq r$, which happens if and only if $\theta([\xi_{i},\xi_{j}]) = 0$ for all $1 \leq i,j \leq r$. Since $\theta(\xi_{i})= 0$ for all $1 \leq i \leq r$ we have \begin{equation} \mathrm d_{\nabla}\theta(\xi_{i},\xi_{j}) = \bar{\nabla}_{\xi_{i}}\theta(\xi_{j}) - \bar{\nabla}_{\xi_{j}}\theta(\xi_{i}) - \theta([\xi_{i},\xi_{j}]) = -\theta([\xi_{i},\xi_{j}]) \end{equation} from which the result follows. \end{proof} \begin{theorem}[Local Universal Property] \label{Universal} Let $\theta$ be a 1-form (over $h)$ on a manifold $M$, with values in an integrable Lie algebroid $A$, that satisfies the generalized Maurer-Cartan equation. Let $\mathcal{G}$ be a Lie groupoid integrating $A$ and denote by $\omega_{\textrm{MC}}$ its right invariant Maurer-Cartan form. 
Then, for each $p_0\in M$ and $g_0\in\mathcal{G}$ such that $h(p_0) = \mathbf{t}(g_0)$, there exists a unique locally defined diffeomorphism $\phi : M \rightarrow \mathbf{s}^{-1}(\mathbf{s}(g_0))$ satisfying \[ \phi(p_0)=g_0,\quad \phi^{\ast}\omega_{\textrm{MC}}=\theta. \] \end{theorem} \begin{remark} We can summarize the theorem by saying that, at least locally, there is a unique map $\phi: M \to \mathcal{G}$ which makes the following diagram of Lie algebroid morphisms commute: \[\xymatrix{ TM \ar@{-->}[rr]^{\phi_{\ast}} \ar[dd] \ar[dr]_{\theta} & & T^{\mathbf{s}}\mathcal{G} \ar[dd] \ar[dl]^{\omega_{\textrm{MC}}} \\ & A \ar[dd] & \\ M \ar@{-->}[rr]^{\quad \phi} \ar[dr]_h & & \mathcal{G}\ar[dl]^{\mathbf{t}} \\ & X &}\] \end{remark} \begin{proof} The proof uses the usual technique of the graph, so we will construct the graph of $\phi$ by integrating a convenient distribution. Uniqueness will then follow from the uniqueness of integral submanifolds of a distribution. Let us set: \[ M {_h\hskip-3 pt}\times_{\mathbf{t}}\mathcal{G} = \{(p,g) \in M\times\mathcal{G} : h(p) = \mathbf{t}(g)\}. \] Since $\mathbf{t}$ is a surjective submersion, this fibered product over $X$ is a manifold and it comes equipped with the foliation $\ensuremath{\mathcal{F}}$ given by the fibers of $\mathbf{s} \circ \pi_{\mathcal{G}}$. On $M{_h\hskip-3 pt}\times_{\mathbf{t}}\mathcal{G}$ consider the $A$-valued $\ensuremath{\mathcal{F}}$-foliated $1$-form \[ \Omega=\pi_{M}^{\ast}\theta - \pi_{\mathcal{G}}^{\ast}\omega_{\textrm{MC}} \text{.} \] Let $\mathcal{D} = \ker \Omega$ denote the associated distribution on $M{_h\hskip-3 pt}\times_{\mathbf{t}}\mathcal{G}$. In order to apply the previous lemma, we must first show that $\mathcal{D}$ has constant rank. We will do this by showing that \[ (\mathrm d \pi_{M})_{(p,g)} \vert_{\mathcal{D}_{(p,g)}} : \mathcal{D}_{(p,g)} \rightarrow T_pM \] is an isomorphism for each $(p,g) \in M {_h\hskip-3 pt}\times_{\mathbf{t}}\mathcal{G}$. 
Note that this also implies that if $\mathcal{D}$ is integrable then its leaf through $(p_0,g_0)$ is locally the graph of a uniquely defined diffeomorphism $\phi:M\to\mathbf{s}^{-1}(\mathbf{s}(g_0))$ satisfying $\phi(p_0)=g_0$. Suppose that $( \mathrm d \pi_{M})_{(p,g)}(v,w) = 0$ for some $(v,w)\in\mathcal{D}_{(p,g)}$. Since $\mathcal{D}$ is contained in $T\ensuremath{\mathcal{F}}$, it follows that $w$ is $\mathbf{s}$-vertical and $\omega_{\textrm{MC}}(w)$ is simply the right translation of $w$ to $\mathbf{1}_{\mathbf{t}(g)}$ and thus, \begin{eqnarray*} ( \mathrm d \pi_{M})_{(p,g)}(v,w) = 0 & \implies & v=0 \\ & \implies & \omega_{\textrm{MC}}(w) = 0 \quad ( =\theta(v)) \\ & \implies & w=0 \\ & \implies & (v,w) = 0 \end{eqnarray*} so that $( \mathrm d \pi_{M})_{(p,g)}$ is injective. Now, if $v\in T_{p}M$ then $(v,(R_{g})_{\ast}\theta(v))$ is an element of $\mathcal{D}_{(p,g)}$ so $(\mathrm d \pi_{M})_{(p,g)}$ is also surjective. Having accomplished this, we may use the preceding lemma to complete the proof. We compute (omitting the pullbacks for simplicity): \begin{align*} \mathrm d_{\nabla}\Omega & = \mathrm d_{\nabla}\theta - \mathrm d_{\nabla}\omega_{\textrm{MC}}\\ & = -\frac{1}{2}[\theta,\theta] + \frac{1}{2}[\omega_{\textrm{MC}},\omega_{\textrm{MC}}] \text{.} \end{align*} Replacing $\theta=\Omega+\omega_{\textrm{MC}}$ we obtain \begin{align*} \mathrm d_{\nabla}\Omega & = -\frac{1}{2}[\Omega + \omega_{\textrm{MC}},\Omega + \omega_{\textrm{MC}}] + \frac{1}{2}[\omega_{\textrm{MC}},\omega_{\textrm{MC}}] \\ & = -\frac{1}{2}[\Omega,\Omega] - \frac{1}{2}[\Omega,\omega_{\textrm{MC}}] - \frac{1}{2}[\omega_{\textrm{MC}},\Omega] \text{.} \end{align*} So $\mathrm d_{\nabla}\Omega(\xi_{1},\xi_{2}) = 0$ whenever $\Omega(\xi_{1}) = 0 = \Omega(\xi_{2})$. Hence $\mathcal{D}$ is integrable and the proof is completed. \end{proof} \begin{remark} With a slight modification, the theorem above is still valid even when $A$ is not integrable. 
In fact, since an $A$-valued Maurer-Cartan form on $M$ \begin{equation*} (\theta,h) \in\Omega^{1}(M,A) \end{equation*} is the same as a Lie algebroid morphism \begin{equation*} \xymatrix{ TM \ar[d] \ar[r]^\theta & A \ar[d] \\ M \ar[r]_h & X } \end{equation*} it follows that $h(M)$ lies in a single orbit of $A$ in $X$. By restricting $h$ to a small enough neighborhood, we may assume that its image is a contractible open set $U\subset L$ in an orbit $L$ of $A$ in $X$. Then, the restriction of $A$ to $U$ is integrable \cite{CrainicFernandes} and we may proceed as in the proof of the theorem. \end{remark} As a consequence of the theorem we obtain the following useful corollary: \begin{corol}\label{symmetrywmc} Let $\mathcal{G}$ be a Lie groupoid with Maurer-Cartan form $\omega_{\textrm{MC}}$ and let $x,y\in X$ be points in the same orbit. If $\phi : \mathbf{s}^{-1}(x) \rightarrow \mathbf{s}^{-1}(y)$ is a symmetry of $\omega_{\textrm{MC}}$ (i.e., $\phi^{\ast}\omega_{\textrm{MC}} = \omega_{\textrm{MC}}$) then $\phi = R_g$ for some $g \in \mathcal{G}$. \end{corol} \begin{proof} Note that the equation $\phi^{\ast}\omega_{\textrm{MC}} = \omega_{\textrm{MC}}$ only makes sense provided $\mathbf{t}\circ\phi=\mathbf{t}$ (see Remark \ref{rem:invariance}). Therefore, $g:=\phi(\mathbf{1}_x)$ is an arrow joining $y$ to $x$ and \[ R_g:\mathbf{s}^{-1}(x) \rightarrow \mathbf{s}^{-1}(y), h\mapsto hg\] is a symmetry of $\omega_{\textrm{MC}}$. Since $R_g\mathbf{1}_x=g=\phi(\mathbf{1}_x)$, from the uniqueness part of Theorem \ref{Universal} we must have $\phi=R_g$. \end{proof} \subsection{The Global Universal Property} \label{subsec:global:propert} There is a more conceptual proof of Theorem \ref{Universal} that will lead us to a global version of the universal property of Maurer-Cartan forms. First, observe that Theorem \ref{Universal} is a local result, so we may assume that $M$ is simply connected.
The source simply connected Lie groupoid integrating $TM$ is then the pair groupoid $M\times M\rightrightarrows M$. By Lie's Second Theorem (see \cite{CrainicFernandes:lecture}), there exists a unique morphism of Lie groupoids \[ \xymatrix{ M\times M \ar@<.5ex>[d]\ar@<-.5ex>[d] \ar[r]^-{H} & \mathcal{G} \ar@<.5ex>[d]\ar@<-.5ex>[d] \\ M \ar[r]_h & X } \] integrating $\theta$. Now, if we are given $p_{0}\in M$ and $g_0\in\mathcal{G}$ with $h(p_0)=\mathbf{t}(g_0)$, then we may write \[ H(p,p^{\prime}) = \phi(p)\phi(p^{\prime})^{-1} \] where $\phi:M\rightarrow\mathbf{s}^{-1}(\mathbf{s}(g_{0}))\subset\mathcal{G}$ is defined by $\phi(p):= H(p,p_{0})g_0 \text{.}$ Note that $\phi$ satisfies \[ \phi(p_{0})=g_0,\quad \phi^{\ast}\omega_{\textrm{MC}}=\theta, \] and so it has the desired properties. In general, when $M$ is not simply connected, the Lie algebroid morphism $\theta$ integrates to a Lie groupoid morphism \[ \xymatrix{ \Pi_1(M) \ar@<.5ex>[d]\ar@<-.5ex>[d] \ar[r]^-{F} & \mathcal{G} \ar@<.5ex>[d]\ar@<-.5ex>[d] \\ M \ar[r]_h & X } \] where $\Pi_{1}(M)$ denotes the fundamental groupoid of $M$. The problem of determining when $\theta$ is globally the pullback of the Maurer-Cartan form on a Lie groupoid $\mathcal{G}$ integrating $A$ is then reduced to determining when the morphism $F$ above factors through the groupoid covering projection \begin{align*} p:\Pi_{1}(M)&\to M\times M,\\ p([\gamma])&:=(\gamma(1),\gamma(0)). \end{align*} \begin{theorem}[Global Universal Property] \label{thm: global universal property} Let $A$ be an integrable Lie algebroid with source simply connected Lie groupoid $\mathcal{G}(A)$ and let $(\theta,h)\in\Omega^{1}(M,A)$ be an $A$-valued differential $1$-form on $M$. 
Then, given $p_0\in M$ and $g_0\in\mathcal{G}(A)$ with $h(p_0)=\mathbf{t}(g_0)$, there exists an everywhere defined local diffeomorphism \[ \phi : M \rightarrow \mathbf{s}^{-1}(\mathbf{s}(g_{0})),\quad \phi(p_{0})=g_0, \quad \phi^{\ast}\omega_{\textrm{MC}}=\theta, \] if and only if \begin{description} \item[Local obstruction] The 1-form $\theta$ satisfies the generalized Maurer-Cartan equation, and \item[Global obstruction] The morphism $F$ integrating $\theta$ is trivial when restricted to the fundamental group $\pi_1(M,p_{0})$ (i.e., the isotropy group at $p_{0}$). \end{description} \end{theorem} \begin{proof} We begin by proving that both conditions are necessary. It is clear that if $\theta=\phi^{\ast}\omega_{\textrm{MC}}$ then $\theta$ satisfies the generalized Maurer-Cartan equation. So all we have to prove is that $F$ is trivial on the isotropy group $\pi_{1}(M,p_{0})$. Suppose that $\phi$ exists. Then the map \[ H:M\times M\rightarrow\mathcal{G} \] given by \[ H(p,p^{\prime}) = \phi(p)\phi(p^{\prime})^{-1} \] defines a Lie groupoid morphism over the map $\mathbf{t} \circ \phi$. It then follows from $\phi^{\ast}\omega_{\textrm{MC}} = \theta$ that $\mathbf{t} \circ \phi = h$ and that $H$ integrates $\theta$. To see this, notice that if $f: M \to \mathbf{s}^{-1}(x)$ and $g: M \to \mathbf{t}^{-1}(x)$ are smooth maps, then the differential of $\varphi(p,p^{\prime}) = f(p)\cdot g(p^{\prime})$ is given by \[ (\mathrm d \varphi)_{(p,p^{\prime})}(v,w) = (\mathrm d L_{f(p)})_{g(p^{\prime})}(\mathrm d g)_{p^{\prime}}(w) + (\mathrm d R_{g(p^{\prime})})_{f(p)}({\mathrm d f})_{p}(v) \] for $v,w\in T_{(p,p^{\prime})}(M \times M)$. Thus, in our case we obtain, for any $v\in T_{p}M$, \begin{align*} (\mathrm d H)_{(p,p)}(0,v) &=(\mathrm d R_{\phi(p)^{-1}})_{\phi(p)}(\mathrm d \phi)_{p}(v)\\ &=\omega_{\textrm{MC}}(\phi_{\ast}v)\\ &=\phi^{\ast}\omega_{\textrm{MC}}(v)\\ &=\theta(v).
\end{align*} Finally, since $H$ integrates $\theta$ we obtain the commutative diagram \[ \xymatrix{ \Pi_1(M) \ar[d]_p \ar[r]^-F & \mathcal{G} \\ M \times M \ar[ru]_-H } \] where $p$ denotes the covering projection $p([\gamma])=(\gamma(1),\gamma(0))$. We conclude that $F$ only depends on the endpoints of $\gamma$ and not on its homotopy class, proving the claim. Conversely, suppose both conditions are satisfied. Then by Theorem \ref{Universal}, it follows that $\theta$ is locally the pullback of $\omega_{\textrm{MC}}$ by a map $\phi_{\mathrm{loc}}$. However, since $F$ only depends on the endpoints of paths $\gamma$ and not on their homotopy class, $F$ factors through: \[ \xymatrix{ \Pi_1(M) \ar[d]_p \ar[r]^-F & \mathcal{G} \\ M \times M \ar@{.>}[ru]_-H } \] Thus, by setting \[ \phi(p):=H(p,p_{0})g_0 \] we obtain a global map which, by the uniqueness result in Theorem \ref{Universal}, restricts to the locally defined maps $\phi_{\mathrm{loc}}$. It follows that $\phi^{\ast}\omega_{\textrm{MC}} = \theta$ and $\phi(p_{0})=g_0$, proving the theorem. \end{proof} Recall that we say that a Lie algebroid $A$ is \emph{weakly integrable} if the restriction of $A$ to any orbit is integrable. An immediate consequence of the global universal property is the following corollary: \begin{corol} \label{cor:contrat:mnfld} Let $U$ be a contractible manifold, let $A\to X$ be a weakly integrable Lie algebroid and let $h:U\to X$ be a smooth map. Any two $A$-valued 1-forms $\theta_1,\theta_2\in\Omega^1(U;A)$ over $h$ satisfying the generalized Maurer-Cartan equation are equivalent: for any $p\in U$ there exists a diffeomorphism $\phi:U\to U$, fixing $p$ and commuting with $h$, such that $\phi^*\theta_2=\theta_1$.
\end{corol} \begin{remark} Using the $A$-path approach to integrability, introduced in \cite{CrainicFernandes}, one can express the global obstruction condition of the preceding theorem at the infinitesimal level, i.e., without any mention of the Lie groupoid $\mathcal{G}$ integrating $A$. In fact, given any curve $\gamma : I \to M$, the path $\theta \circ \dot{\gamma}: I \to A$ satisfies \[\sharp (\theta \circ \dot{\gamma}(t)) = \frac{\mathrm d}{\mathrm d t}(h \circ \gamma )(t) \text{ for all } t \in I\] and thus is an $A$-path. We can then rewrite the condition as: \begin{description} \item[Global obstruction] \emph{For every loop $\gamma$ in $M$, homotopic to the constant curve at $p$, the $A$-path $\theta \circ \dot{\gamma}$ is $A$-homotopic to the constant zero $A$-path at $h(p)$.} \end{description} Note also that this last condition can be expressed in terms of a differential equation (see \cite{CrainicFernandes}). \end{remark} \section{Local Classification and symmetries} \label{sec:localclass:symm} We now return to Cartan's realization problem. We will use the Lie groupoid integrating the classifying Lie algebroid to solve the Local Classification Problem and to study the symmetries of the problem. \subsection{Local Classification Problem} \label{Solving the Classification Problem} The results of the previous two sections lead to a solution to the Local Classification Problem. Before we give the main result, we must say a few words about equivalence of realizations. By this we mean: \begin{definition} Let $(n,X,C_{ij}^{k},F_i)$ be the initial data of a Cartan's realization. Given two realizations $(M_1,\theta_1,h_1)$ and $(M_2,\theta_2,h_2)$, a (locally defined) diffeomorphism $\phi:M_1\to M_2$ is called a (local) \textbf{equivalence of realizations} if \[ \phi^*\theta_2=\theta_1,\quad h_2\circ \phi=h_1. \] \end{definition} This notion of equivalence should not be confused with the notion of equivalence of coframes.
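For instance, when the manifold $X$ of the realization problem reduces to a point, the maps $h_1$ and $h_2$ carry no information and the condition $h_2\circ\phi=h_1$ holds automatically, so that the two notions coincide:
\[
\phi^{*}\theta_2=\theta_1 \ \text{ and } \ h_2\circ\phi=h_1
\quad\Longleftrightarrow\quad
\phi^{*}\theta_2=\theta_1.
\]
In general, however, the condition $h_2\circ\phi=h_1$ is a genuine extra constraint.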
\begin{remark} Note that if $\theta$ is a coframe on $M$ and $A_\theta\to X_\theta$ is the corresponding classifying Lie algebroid, then we have the realization $(M,\theta,\kappa_\theta)$. In this case, a self equivalence of the realization $\phi:(M,\theta,\kappa_\theta)\to(M,\theta,\kappa_\theta)$ is the same thing as an equivalence of the coframe $\phi:(M,\theta)\to(M,\theta)$, because any self-equivalence $\phi$ of the coframe must satisfy $\kappa_\theta\circ\phi=\kappa_\theta$. Also, given two realizations $(M_1,\theta_1,h_1)$ and $(M_2,\theta_2,h_2)$, any equivalence of coframes $\phi:(M_1,\theta_1)\to(M_2,\theta_2)$ obviously determines an equivalence of realizations $\phi:(M_1,\theta_1,h_2\circ\phi)\to(M_2,\theta_2,h_2)$ but not necessarily of the original realizations, since the identity $h_2\circ \phi=h_1$ may not hold. \end{remark} Now we can state: \begin{theorem}[Local Classification] \label{thm: classification coframes} Let $(n,X,C_{ij}^{k},F_i)$ be the initial data of a Cartan's realization problem and denote by $A\to X$ its classifying Lie algebroid. Then: \begin{enumerate} \item[(i)] Any realization is locally equivalent to a neighborhood of the identity of a fiber $\mathbf{s}^{-1}(x_0)$ of a groupoid $\mathcal{G}$ integrating $A$, equipped with the Maurer-Cartan form. \item[(ii)] Two realizations are locally equivalent if and only if they correspond to the same point $x_0\in X$. \end{enumerate} \end{theorem} \begin{proof} Recall that we have a basis $\{\alpha_{1},\ldots,\alpha_{n}\}$ of global sections of the classifying Lie algebroid $A\rightarrow X$ such that: \[ [\alpha_i,\alpha_j]= -C_{ij}^k \alpha_k,\quad \sharp \alpha_i=F_i.\] Let us suppose for the moment that $A$ is an integrable Lie algebroid and that $\mathcal{G}$ is a Lie groupoid integrating $A$. 
We denote by $\omega_{\textrm{MC}}$ the Maurer-Cartan form of $\mathcal{G}$ and by $(\omega_{\textrm{MC}}^{1},\ldots,\omega_{\textrm{MC}}^{n})$ its components with respect to the basis $\{\alpha_{1},\ldots,\alpha_{n}\}$. Then, for each $x_{0}\in X$, $(\mathbf{s}^{-1}(x_{0}),\omega_{\textrm{MC}}^{i},\mathbf{t})$ is a realization of Cartan's problem with initial data $(n,X,C_{ij}^{k},F_i)$. A similar argument still holds when $A$ is not integrable. In this case, for each $x_{0}\in X$ we can find a neighborhood $U\subset L$ of $x_{0}$ in the leaf $L$ containing it, such that the restriction of $A$ to $U$ is integrable to a Lie groupoid $\mathcal{G}\rightrightarrows U$. The Maurer-Cartan form of $\mathcal{G}$ takes values in $A\vert_{U}\hookrightarrow A$ so we can see it as an $A$-valued Maurer-Cartan form. It is again clear that $(\mathbf{s}^{-1}(x_{0}),\omega_{\textrm{MC}}^{i},\mathbf{t})$ is a realization of $(n,X,C_{ij}^{k},F_i)$. If $(M,\theta^{i},h)$ is any realization of $(n,X,C_{ij}^{k},F_i)$, Proposition \ref{prop:morphisms} shows that the $A$-valued $1$-form $\theta\in\Omega^{1}(M,A)$ defined by \[ \theta=\sum_{i}\theta^{i}(\alpha_{i} \circ h) \] satisfies the generalized Maurer-Cartan equation: \[ \mathrm d_{\nabla}\theta+\frac{1}{2}[\theta,\theta]_{\nabla}=0.\] Therefore, if $p_{0}\in M$ is such that $h(p_{0})=x_{0}$ then, by the universal property of Maurer-Cartan forms, we can find a neighborhood $V$ of $p_{0}$ in $M$ and a unique diffeomorphism $\phi : V\rightarrow \phi (V) \subset \mathbf{s}^{-1}(x_{0})$ such that $\phi (p_{0})=\mathbf{1}_{x_{0}}$ and $\phi^{\ast}\omega_{\textrm{MC}} =\theta$. This means that any realization of Cartan's problem is locally equivalent to a neighborhood of the identity of an $\mathbf{s}$-fiber of $\mathcal{G}$ equipped with the Maurer-Cartan form.
Finally, if $(M_1,\theta_1,h_1)$ and $(M_2,\theta_2,h_2)$ are two realizations of $(n,X,C_{ij}^{k},F_i)$ and $h_1(p_1)=h_2(p_2)=x_0$, then there exist open sets $V_i\subset M_i$ and (unique) diffeomorphisms $\phi_i : V_i\rightarrow \phi_i (V_i) \subset \mathbf{s}^{-1}(x_{0})$ such that $\phi_i(p_{i})=\mathbf{1}_{x_{0}}$ and $\phi_i^{\ast}\omega_{\textrm{MC}} =\theta_i$, $(i=1,2)$. Hence, the map $\phi_2^{-1}\circ\phi_1$ is a local equivalence from $\phi_1^{-1}(\phi_1(V_1)\cap\phi_2(V_2))\subset M_1$ onto $\phi_2^{-1}(\phi_1(V_1)\cap\phi_2(V_2))\subset M_2$. The converse is obvious, since by definition an equivalence between two realizations $(M_1,\theta_1,h_1)$ and $(M_2,\theta_2,h_2)$ of $(n,X,C_{ij}^{k},F_i)$ must satisfy $h_2\circ \phi=h_1$. \end{proof} \subsection{Equivalence versus formal equivalence} \label{subsec:formal:equiv} Another consequence of the universal property of the Maurer-Cartan form is that ``equivalence'' and ``formal equivalence'' for a coframe actually coincide. \begin{prop} \label{prop: formal equivalence} Let $\theta$ be a fully regular coframe on a manifold $M$. Any two points $p,q\in M$ are equivalent if and only if they are formally equivalent. \end{prop} \begin{proof} Clearly, if two points are equivalent then they are formally equivalent. For the converse, let $\phi:U\to V$ be a formal equivalence defined on contractible open sets such that $\phi(p)=q$. Denote by $A_\theta\to X_\theta$ the classifying Lie algebroid of the coframe and by $\kappa_\theta:M\to X_\theta$ the classifying map. Then, by its very definition, we find that $\kappa_\theta(U)=\kappa_\theta(V)=W$ is an open set of $X_\theta$ and that the following diagram commutes: \[ \xymatrix{ U\ar[rr]^{\phi}\ar[dr]_{\kappa_\theta}&&V\ar[dl]^{\kappa_\theta}\\ &W&} \] Our aim is to prove that there exists a diffeomorphism $\psi:U\to V$ such that $\psi(p)=q$ and $\psi^*\theta|_V=\theta|_U$.
For this, observe that since $\phi$ is a formal equivalence we have $C^k_{ij}=C^k_{ij}\circ\phi$, so the coframe $\phi^*(\theta|_V)$ satisfies the same structure equations as $\theta|_U$: \[ \mathrm d(\phi^*\theta^k)=\sum_{i<j}C^k_{ij}(\phi^*\theta^i)\wedge (\phi^*\theta^j). \] It follows that we have two Lie algebroid morphisms: \[ \xymatrix{ TU\ar[r]^{\phi^*\theta}\ar[d]&{A_{\theta}}|_W\ar[d]\\ U\ar[r]_{\kappa_\theta}&W } \qquad \xymatrix{ TU\ar[r]^{\theta}\ar[d]&{A_\theta}|_W\ar[d]\\ U\ar[r]_{\kappa_\theta}&W } \] We conclude that $\phi^*(\theta|_V)$ and $\theta|_U$ are both 1-forms in $\Omega^1(U,A_\theta)$ over $\kappa_\theta$ satisfying the Maurer-Cartan equation. By Corollary \ref{cor:contrat:mnfld}, there exists a diffeomorphism $\varphi:U\to U$, fixing $p\in U$ and commuting with $\kappa_\theta$, such that $\varphi^*(\phi^*(\theta|_V))=\theta|_U$. The map $\psi:=\phi\circ\varphi:U\to V$ is the desired equivalence mapping $p$ to $q$. \end{proof} \subsection{Symmetries of Realizations} \label{subsec:symmetries} We can also use our solution to the Local Equivalence Problem to recover a few classical results about symmetries of coframes. Many of these results can be traced back to Cartan \cite{Cartan}. The formulation presented here is based on \cite{Bryant}. Note that our only purpose is to show how the classifying Lie algebroid can be used to give very simple proofs of these facts. \begin{definition} Let $\theta$ be a coframe on $M$. A \textbf{symmetry} of $(M,\theta)$ is a self-equivalence, i.e., a diffeomorphism $\phi: M \to M$ such that $\phi^{\ast}\theta=\theta$. An \textbf{infinitesimal symmetry} is a vector field $\xi \in \ensuremath{\mathfrak{X}}(M)$ such that $\mathscr{L}_{\xi}\theta=0$. \end{definition} Clearly, the set of infinitesimal symmetries of a coframe $(M,\theta)$ forms a Lie subalgebra $\ensuremath{\mathfrak{X}}(M,\theta)\subset\ensuremath{\mathfrak{X}}(M)$.
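For example, for the standard coframe $\theta^i=\mathrm d x^i$ on $\mathbb{R}^n$ we have
\[
\mathscr{L}_{\xi}\theta^{i}=\mathrm d(\iota_{\xi}\,\mathrm d x^{i})=\mathrm d\xi^{i},
\]
so a vector field $\xi=\sum_i\xi^i\partial_{x^i}$ is an infinitesimal symmetry if and only if its components $\xi^i$ are constant: $\ensuremath{\mathfrak{X}}(\mathbb{R}^n,\theta)$ is the abelian Lie algebra of constant vector fields. Similarly, a diffeomorphism $\phi$ satisfies $\phi^{\ast}\mathrm d x^i=\mathrm d\phi^i=\mathrm d x^i$ if and only if it is a translation.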
On the other hand, we recall that the group of diffeomorphisms $\Diff(M)$ is a Fr\'echet Lie group for the compact-open $C^\infty$-topology. The set of all symmetries of a coframe $(M,\theta)$ forms a topological subgroup $\Diff(M,\theta)\subset \Diff(M)$ for the induced topology. We also have the related notions of symmetry and infinitesimal symmetry of a realization: \begin{definition} If $(M,\theta,h)$ is a realization of Cartan's problem we call a diffeomorphism $\phi:M\to M$ a \textbf{symmetry of the realization} if $\phi^{\ast}\theta=\theta$ \emph{and} $h\circ\phi=h$. Similarly, one calls a vector field $\xi \in \ensuremath{\mathfrak{X}}(M)$ an \textbf{infinitesimal symmetry of the realization} if $\mathscr{L}_{\xi}\theta=0$ \emph{and} $\mathrm d h\cdot \xi=0$. \end{definition} Given a realization $(M, \theta, h)$ we will denote by $\Diff(M,\theta,h)$ the group of symmetries of the realization and by $\ensuremath{\mathfrak{X}}(M,\theta,h)$ the Lie algebra of infinitesimal symmetries of the realization. Obviously, $\Diff(M,\theta,h)$ is a subgroup of the group $\Diff(M,\theta)$ of symmetries of the underlying coframe, while $\ensuremath{\mathfrak{X}}(M,\theta,h)$ is a Lie subalgebra of the Lie algebra $\ensuremath{\mathfrak{X}}(M,\theta)$ of infinitesimal symmetries of the underlying coframe. Our next result describes the relationship between the symmetries and the classifying Lie algebroid: \begin{prop}[Theorem A.2 of \cite{Bryant}] \label{prop:inf:sym:coframe} Let $(M,\theta,h)$ be a realization of a Cartan's problem with classifying Lie algebroid $A \to X$. Then the set $\ensuremath{\mathfrak{X}}(M,\theta,h)_p$ of germs of infinitesimal symmetries of $(M,\theta,h)$ at a point $p$ is a Lie algebra isomorphic to $\mathfrak{g}_{h(p)}$, the isotropy Lie algebra of $A$ at $h(p)$.
In particular, if $M$ is connected then $h(M)$ is an open subset of a leaf $L$ of $A$ and \[ \dim \ensuremath{\mathfrak{X}}(M,\theta,h)_p = \dim M - \dim L.\] \end{prop} \begin{proof} Since the result is local, we may assume without loss of generality that the classifying Lie algebroid $A \to X$ is integrable. If this is not the case, we can restrict $A$ to a contractible open set $U$ in the leaf containing $h(p)$ so that $A|_U$ is integrable. Let $\mathcal{G} \rightrightarrows X$ be a Lie groupoid with Lie algebroid $A$. Then, by the local universal property of the Maurer-Cartan form on $\mathcal{G}$ (Theorem \ref{Universal}), there is a neighborhood $U$ of $p$ in $M$ and a diffeomorphism $\phi: U \to \phi(U) \subset \mathbf{s}^{-1}(h(p))$ such that $\phi(p) = \mathbf{1}_{h(p)}$ and $\phi^{\ast}\omega_{\textrm{MC}} = \theta$. It follows that $\ensuremath{\mathfrak{X}}(M,\theta,h)_p$ coincides with the set of germs at $\mathbf{1}_{h(p)}$ of vector fields tangent to $\mathbf{s}^{-1}(h(p))$ which are infinitesimal symmetries of the Maurer-Cartan form. Thus, from (the infinitesimal version of) Corollary \ref{symmetrywmc} we may conclude that these may be identified with the isotropy Lie algebra of $A$ at $h(p)$. \end{proof} The previous result applies, in particular, to the classifying algebroid $A_\theta$ of the coframe. Hence, if $M$ is connected, we see that for any $p,q\in M$ the Lie algebras $\ensuremath{\mathfrak{X}}(M,\theta)_p$ and $\ensuremath{\mathfrak{X}}(M,\theta)_q$ are isomorphic. In the case of coframes, the following result is classical (see Theorem 3.2 of \cite{Kobayashi}): \begin{theorem} \label{thm:Lie:symmetries} Let $(M,\theta,h)$ be a realization of a Cartan's problem.
Then its group of symmetries $\Diff(M,\theta,h)$ is a (finite dimensional) Lie group of transformations of $M$ with Lie algebra $\ensuremath{\mathfrak{X}}_c(M,\theta,h)\subset\ensuremath{\mathfrak{X}}(M,\theta,h)$, the subspace generated by the complete infinitesimal symmetries of the realization. \end{theorem} \begin{proof} We can assume that $M$ is connected. A classical theorem of Palais (Theorem IV.III of \cite{Palais}) states that any group $G\subset \Diff(M)$, such that the Lie subalgebra $\mathfrak{g}\subset\ensuremath{\mathfrak{X}}(M)$ generated by the complete vector fields $X$ with flows $\phi^t_X\in G$ is finite dimensional, is actually a Lie group of transformations of $M$ with Lie algebra $\mathfrak{g}$. Hence, by Proposition \ref{prop:inf:sym:coframe}, it is enough to prove that for any $p\in M$ the restriction map $\ensuremath{\mathfrak{X}}(M,\theta,h)\to \ensuremath{\mathfrak{X}}(M,\theta,h)_p$ is injective. This follows from the following lemma: \begin{lemma} Let $X\in\ensuremath{\mathfrak{X}}(M,\theta,h)$ be an infinitesimal symmetry and assume that $X$ vanishes at some point $p\in M$. Then $X\equiv 0$. \end{lemma} To prove this lemma, it is enough to prove that the zero set of $X\in \ensuremath{\mathfrak{X}}(M,\theta,h)$ is both open and closed. It is obviously closed, and to prove that it is open let $q\in M$ be such that $X_q=0$. If $\{X_i\}$ are the linearly independent vector fields dual to the coframe $\{\theta^i\}$, then: \[ \mathscr{L}_X\theta^i=0\quad \Leftrightarrow \quad \mathscr{L}_X X_i=0.\] Hence, we have that: \[ X_q=0 \text{ and } \mathscr{L}_{X_i}X=0.\] Since $X$ is invariant under the flows of the $X_i$, which are everywhere linearly independent, and since $X_q=0$, we conclude that $X$ vanishes in a neighborhood of $q$. \end{proof} We also have the following semi-global symmetry property of a realization: \begin{corol}[Theorem A.3 of \cite{Bryant}] Let $(n, X, C^k_{ij}, F_i)$ be the initial data of a realization problem with associated Lie algebroid $A \to X$.
Let $L \subset X$ be a leaf of $A$ whose isotropy Lie algebra is $\mathfrak{g}$, and let $G$ be any Lie group with Lie algebra $\mathfrak{g}$. Then over any contractible set $U \subset L$, there exists a principal $G$-bundle $h: M \to U$ and a $G$-invariant coframe $\theta$ of $M$ such that $(M, \theta, h)$ is a realization of the Cartan's problem. Moreover, this realization is locally unique up to isomorphism. \end{corol} \begin{proof} This is an immediate consequence of our classification result, Theorem \ref{thm: classification coframes}. In fact, since $U$ is contractible and the restriction of $A$ to $U$ is transitive, it follows that $A\vert_U$ is integrable. Moreover, for any Lie group $G$ with Lie algebra $\mathfrak{g}$, there exists a Lie groupoid $\mathcal{G}$ integrating $A|_{U}$ whose isotropy Lie group is isomorphic to $G$. The restriction of the Maurer-Cartan form of $\mathcal{G}$ to any $\mathbf{s}$-fiber furnishes the desired (locally unique) $G$-invariant coframe. \end{proof} We can also use Theorem \ref{thm: classification coframes} to explain the relationship between the classifying algebroid $A\to X$ of a Cartan realization problem and the classifying algebroid of a coframe $\theta$ associated with a realization $(M,\theta,h)$: \begin{corol} \label{cor:subalgbrd} Let $(n,X,C_{ij}^k,F_i)$ be a Cartan realization problem with associated classifying algebroid $A$ and let $(M,\theta,h)$ be a connected realization. Then $h(M)$ is an open subset of an orbit of $A$ and there is a Lie algebroid morphism from the Lie subalgebroid $A|_{h(M)}\subset A$ to $A_\theta\to X_\theta$ (the classifying algebroid of the coframe $\theta$) which is a fiberwise isomorphism that covers a surjective submersion. \end{corol} \begin{proof} Since $M$ is connected, $h(M)$ is an open subset of an orbit of $A$. 
Also, we have a two-leg diagram: \[ \xymatrix{ & M\ar[dl]_h\ar[dr]^{\kappa_\theta}\\ h(M)\ar@{-->}[rr]_{\psi}&& X_\theta} \] Observe that if $h(p)=h(q)$ then Theorem \ref{thm: classification coframes} gives a locally defined diffeomorphism $\phi:M\to M$, such that $\phi(p)=q$ and $\phi^*\theta=\theta$. In other words, $p$ and $q$ are locally formally equivalent so that we must have $\kappa_\theta(p)=\kappa_\theta(q)$. This shows that we have a well-defined map $\psi:h(M)\to X_\theta$ that makes the diagram above commutative. Since $\kappa_\theta$ is a surjective submersion, we conclude that $\psi:h(M)\to X_\theta$ is also a surjective submersion. Finally, observe that the coframe yields Lie algebroid maps \[ \xymatrix{ & TM\ar[dl]\ar[dr]\\ A|_{h(M)}\ar@{-->}[rr]&& A_\theta} \] which cover the two legs in the diagram above, and which are fiberwise isomorphisms. Hence, we conclude that there is a Lie algebroid morphism $A|_{h(M)}\to A_\theta$, over the map $\psi:h(M)\to X_\theta$, which is a fiberwise isomorphism. \end{proof} \begin{remark} \label{rmk: extra invariants} The corollary above can be interpreted as saying that the realization problem associated to a coframe is the one that involves the fewest invariant functions (and thus has the largest symmetry group). On the other hand, a general realization problem deals with a coframe along with extra invariants that must be preserved. \end{remark} The following example illustrates the point made in this remark. \begin{example} Suppose that $\Psi: G \times X \to X$ is an action of an $n$-dimensional Lie group $G$ on a manifold $X$, and let $\psi: \mathfrak{g} \to \ensuremath{\mathfrak{X}}(X)$ be the associated infinitesimal action. Let us denote by $A = \mathfrak{g}\ltimes X$ the corresponding transformation Lie algebroid. Since $A$ is a trivial vector bundle, it determines a realization problem.
In fact, if $e_1, \ldots, e_n$ is a basis of $\mathfrak{g}$ such that \[[e_i,e_j] = C^k_{ij}e_k \text{ and } F_i = \psi(e_i),\] then $(n,X,C^k_{ij},F_i)$ is the initial data of a realization problem whose classifying Lie algebroid is $A$. It then follows from Theorem \ref{thm: classification coframes} that every solution to this realization problem is locally equivalent to a neighborhood of the identity in $(G, \omega_{\textrm{MC}}^i, \Psi_x)$, where $\omega_{\textrm{MC}}^i$ are the components of the Maurer-Cartan form on $G$ with respect to the basis $e_1, \ldots, e_n$ of $\mathfrak{g}$ and \[\Psi_x: G \to X, \quad \Psi_x(g) = \Psi(g,x).\] Thus, the infinitesimal symmetry algebra of a realization that corresponds to a point $x \in X$ is the isotropy Lie algebra of the infinitesimal action, i.e., $\ensuremath{\mathfrak{X}}(G, \omega_{\textrm{MC}}^i, \Psi_x)_{\mathrm{Id}} = \ker \psi_x$. Moreover, a map $\phi: G \to G$ which preserves the Maurer-Cartan form is an equivalence of realizations if and only if it preserves the map $\Psi_x$, i.e., $\Psi_x \circ \phi = \Psi_x$. In other words, it must be the right translation by an element of the isotropy subgroup $G_x$. On the other hand, the classifying Lie algebroid of the Maurer-Cartan coframe on $G$ is the Lie algebra $\mathfrak{g}$ itself. The symmetries of the Maurer-Cartan coframe on $G$ are the right translations by any element of $G$. \end{example} \section{Global Equivalence} \label{sec:global} We now turn to global questions related to equivalence of coframes. Namely, we will give our solution to the Globalization and to the Global Equivalence Problems. \subsection{The Globalization Problem} \label{subsec: Full Realizations} Recall that the \emph{globalization problem} (see Problem \ref{prob:globalization}) asks if two germs of coframes $\theta_0$ and $\theta_1$ which solve the same realization problem are germs of the same global realization.
Recall that a Lie algebroid $A$ is said to be \emph{weakly integrable} if the restriction of $A$ to any orbit is integrable. Then we have: \begin{theorem} \label{thm: globalization} Consider a Cartan problem whose associated classifying Lie algebroid $A\to X$ is weakly integrable. Then two germs of realizations $(M_0,\theta_0,h_0)$ and $(M_1,\theta_1,h_1)$ are germs of the same global connected realization $(M,\theta, h)$ if and only if they correspond to points on $X$ in the same orbit of $A$. \end{theorem} \begin{proof} Given the two germs $(M_0,\theta_0,h_0)$ and $(M_1,\theta_1,h_1)$, suppose that they correspond to points $x_0$ and $x_1$ on the same orbit $L$ of $A$, say $h_0(m_0)=x_0$ and $h_1(m_1)=x_1$, where $m_i\in M_i$ is the base point of the corresponding germ. Since $A$ is weakly integrable, there exists a Lie groupoid $\mathcal{G}_L$ integrating $A|_L$ and the $\mathbf{s}$-fiber at $x_0$ contains a point $g$ with $\mathbf{t}(g) = x_1$. Thus, the germ of $(M_0,\theta_0,h_0)$ at $m_0$ can be identified with the germ of $(\mathbf{s}^{-1}(x_0),\omega_{\textrm{MC}},\mathbf{t})$ at $\mathbf{1}_{x_0}$ and the germ of $(M_1,\theta_1,h_1)$ at $m_1$ can be identified with the germ of $(\mathbf{s}^{-1}(x_0),\omega_{\textrm{MC}},\mathbf{t})$ at $g$. We conclude that $(M_0,\theta_0,h_0)$ and $(M_1,\theta_1,h_1)$ are both germs of the realization $(\mathbf{s}^{-1}(x_0), \omega_{\textrm{MC}}, \mathbf{t})$. Conversely, suppose that there exists a connected realization $(M,\theta,h)$ such that $(M_0,\theta_0,h_0)$ and $(M_1,\theta_1,h_1)$ are the germs of $\theta$ at points $p_0$ and $p_1$ of $M$, respectively. Let $\gamma$ be a curve joining $p_0$ to $p_1$ and cover it by a finite family $U_1, \ldots, U_k$ of open sets of $M$ with the property that the restriction of $\theta$ to each $U_i$ is equivalent to the restriction of the Maurer-Cartan form to an open set of some $\mathbf{s}$-fiber of $\mathcal{G}_L$, i.e., $\theta |_{U_i} = \phi_i^{\ast}\omega_{\textrm{MC}}$ for some diffeomorphism $\phi_i: U_i \to \phi_i(U_i) \subset \mathbf{s}^{-1}(x_i)$.
We proceed by induction on the number of open sets needed to join $p_0$ to $p_1$. Suppose that both $p_0$ and $p_1$ belong to the same open set $U_1$. Then $\phi_1(p_0)$ and $\phi_1(p_1)$ are both on the same $\mathbf{s}$-fiber. Thus, $h(p_0) = \mathbf{t} \circ \phi_1(p_0)$ belongs to the same orbit as $h(p_1) = \mathbf{t} \circ \phi_1(p_1)$. Now assume that the result is true for $k-1$ open sets, so that any point $q$ in $U_{k-1}$ is mapped by $h$ to the same orbit as $h(p_0)$. Let $q$ be a point in $U_{k-1} \cap U_{k}$. Then, on the one hand, since $q$ and $p_1$ belong to $U_k$ it follows that $h$ maps them both to the same orbit of $A$. On the other hand, by the inductive hypothesis, it follows that $h$ also maps $p_0$ and $q$ to the same orbit of $A$. \end{proof} \begin{remark} The theorem above may not be true when $A$ is not weakly integrable. The problem is that in this case, the global object associated with $A|_L$ is only a topological groupoid which is smooth only in a neighborhood of the identity section. Thus, if $x, y \in X$ are points in the same orbit which are ``too far away'', then we might not be able to find a \emph{differentiable} realization ``covering'' both points at the same time. \end{remark} Motivated by this remark, it is natural to consider the problem of existence of realizations $(M, \theta,h)$ of a Cartan's problem, such that the image of $h$ is the whole leaf of the classifying Lie algebroid. \begin{definition} A connected realization $(M, \theta, h)$ is called \textbf{full} if $h$ is surjective onto the orbit of $A$ that it ``covers''. \end{definition} The paradigmatic example of a full realization is obtained as follows. \begin{example} \label{ex: full} Assume that the classifying Lie algebroid $A$ is weakly integrable. Then the $\mathbf{s}$-fibers of the Weinstein groupoid $\mathcal{G}(A)$ are smooth manifolds and carry (the restriction of) the Maurer-Cartan form.
Each triple $(\mathbf{s}^{-1}(x),\omega_{\textrm{MC}},\mathbf{t})$ is a full realization. \end{example} In fact, there is a partial converse to the previous example. To explain this we also need the following: \begin{definition} A realization $(M, \theta, h)$ of a Cartan's problem is said to be \textbf{complete} if it is a full realization and every local equivalence $\phi:U\to V$, defined on open sets $U,V\subset M$, can be extended to a global equivalence $\tilde{\phi}:M\to M$, such that $\tilde{\phi}|_U=\phi$. \end{definition} \begin{example}[(Example \ref{ex: full} continued)] By Corollary \ref{symmetrywmc}, every local equivalence of the realization $(\mathbf{s}^{-1}(x),\omega_{\textrm{MC}},\mathbf{t})$ is the restriction of right translation by some element $g\in\mathcal{G}(A)$. Hence, this is an example of a complete realization. \end{example} Now we can state the following characterization (an analogous situation occurs in \cite{Schwachhofer:integration}): \begin{prop} Let $A \to X$ be the classifying Lie algebroid of a Cartan realization problem, and let $L \subset X$ be an orbit of $A$. Then there exists a complete realization over $L$ if and only if the restriction $A\vert_L$ is integrable. \end{prop} \begin{proof} Assume that $A\vert_L$ is integrable by $\mathcal{G} \rightrightarrows L$. Then, as observed above, for any $x \in L$, the realization $(\mathbf{s}^{-1}(x),\omega_{\textrm{MC}},\mathbf{t})$ is complete. Conversely, let $(M,\theta,h)$ be a complete realization which covers $L$. By Theorem \ref{thm:Lie:symmetries}, the group of symmetries $G=\Diff(M,\theta,h)$ is a Lie group of transformations of $M$ which preserves the fibers of $h:M\to L$. If $x,y\in M$ are such that $h(x)=h(y)$ then, by Theorem \ref{thm: classification coframes}, there exists a local equivalence $\phi:U\to V$ such that $\phi(x)=y$. By the completeness of the realization, $\phi$ can be extended to a global equivalence $\tilde{\phi}\in G$.
We conclude that $G$ acts transitively on the fibers of $h$. Now we claim that the action of $G$ on $M$ is free. This will follow by showing that the fixed point set of a symmetry $\phi:M\to M$ is both open and closed. It is obviously closed. In order to prove that it is also open, we consider the vector fields $\{X_i\}$ dual to the coframe $\{\theta^i\}$ and denote by $\phi^t_{X_i}$ their (local) flows. Then, since $\phi$ is a symmetry, we have: \[ \phi_*X_i=X_i\quad \Leftrightarrow \quad \phi\circ\phi^t_{X_i}=\phi^t_{X_i}\circ\phi\text{ (whenever defined)}.\] Therefore, if $p\in M$ is some fixed point of $\phi$, it follows that $\phi^t_{X_i}(p)$ is also a fixed point of $\phi$. Since the $X_i$ are linearly independent everywhere, it follows that there is a neighborhood of $p$ consisting of fixed points. This shows that the fixed point set of $\phi$ is open, and the claim follows. We conclude that the $G$-action on $M$ is free and its orbits are the fibers of $h:M\to L$, so that we have a principal bundle \[\xymatrix{M \ar[d]_{h} & \ar@(dr,ur)@<-6ex>[]_{G}\\ L} \] The Atiyah algebroid of this principal bundle is isomorphic to $A\vert_L$. Hence, $A\vert_L$ is integrable as claimed. \end{proof} \subsection{The Global Classification Problem} \label{subsec: Global Results} We now turn to the global classification of coframes (Problem \ref{prob:global:equivalence}). For the remainder of this section, $(n, X, C^k_{ij}, F_i)$ denotes a realization problem with classifying Lie algebroid $A \to X$ and $(M,\theta,h)$ is a realization. The global equivalence of realizations of a Cartan's problem is much more delicate than the question of local equivalence. The reason is that the classifying Lie algebroid will not distinguish between a realization and its covering.
For example, Olver in \cite{Olver:Lie} constructs a simply connected manifold with a Lie algebra-valued Maurer-Cartan form which cannot be globally embedded into any Lie group, but which is locally equivalent at every point to an open set of a Lie group. We have the following theorem (the special cases of coframes of rank zero or maximal rank were considered by Olver in \cite{Olver}): \begin{theorem} \label{thm: global equivalence up to cover} Let $(M, \theta, h)$ be a realization of a Cartan problem and suppose that the classifying Lie algebroid $A \to X$ is weakly integrable. Then $M$ is globally equivalent up to a cover to an open set of an $\mathbf{s}$-fiber of a groupoid $\mathcal{G}$ integrating $A$. \end{theorem} \begin{proof} Let $L\subset X$ be the orbit containing $h(M)$ and choose a Lie groupoid $\mathcal{G}_L$ integrating $A|_L$. Also, let $\mathcal{D}$ be the distribution on $M {_h\hskip-3 pt}\times_{\mathbf{t}} \mathcal{G}_L$ considered in the proof of Theorem \ref{Universal} and let $N$ be a maximal integral manifold of $\mathcal{D}$. If $\pi_M: M {_h\hskip-3 pt}\times_{\mathbf{t}} \mathcal{G}_L \to M$ and $\pi_{\mathcal{G}}: M {_h\hskip-3 pt}\times_{\mathbf{t}} \mathcal{G}_L \to \mathcal{G}_L$ denote the natural projections, then $\pi_{\mathcal{G}}(N)$ is totally contained in a single $\mathbf{s}$-fiber of $\mathcal{G}_L$, say, $\mathbf{s}^{-1}(x)$. Moreover, the restrictions of the projections to $N$, $\pi_M: N \to M$ and $\pi_{\mathcal{G}}: N \to \mathbf{s}^{-1}(x)$ are local diffeomorphisms. We claim that $N$, equipped with the coframe $\pi_M^{\ast}\theta = \pi_{\mathcal{G}}^{\ast}\omega_{\textrm{MC}}$, is a common realization cover of $M$ and of an open set of an $\mathbf{s}$-fiber of $\mathcal{G}_L$. To show this, it suffices to prove that $\pi_M(N) = M$. In fact, if this is true, $M$ will be globally equivalent up to cover to $\pi_{\mathcal{G}}(N) \subset \mathbf{s}^{-1}(x)$.
To prove this, suppose that $\pi_M(N)$ is a proper submanifold of $M$ and let $p_0 \in M - \pi_M(N)$ be a point in the closure of $\pi_M(N)$ (remember that $\pi_M\vert_{_N}$ is an open map). Then, by the local universal property of Maurer-Cartan forms, there is an open neighborhood $U$ of $p_0$ in $M$ and a diffeomorphism $\phi_0: U \to \phi_0(U) \subset \mathbf{s}^{-1}(h(p_0))$ such that $\phi_0^{\ast}\omega_{\textrm{MC}} = \theta$ and $\phi_0(p_0) = \mathbf{1}_{h(p_0)}$. It follows that the graph of $\phi_0$ is also an integral manifold $N_0$ of the distribution $\mathcal{D}$ on $M {_h\hskip-3 pt}\times_{\mathbf{t}} \mathcal{G}_L$ which passes through $(p_0, \mathbf{1}_{h(p_0)})$. Now let $p = \pi_M(p,g)$ be any point in $U \cap \pi_M(N)$ where $(p,g) \in N$. Then $\phi_0(p) \in \mathcal{G}_L$ is an arrow from $h(p_0)$ to $h(p)$ and $g$ is an arrow from $x$ to $h(p)$ and thus $g_0 = \phi_0(p)^{-1} \cdot g$ is an arrow from $x$ to $h(p_0)$, i.e., \[\xymatrix{ \stackrel{\bullet}{h(p_0)} &&& \stackrel{\bullet}{h(p)} \ar@/_0.8cm/[lll]_{\phi_0(p)^{-1}} &&& \stackrel{\bullet}{x} \ar@/_0.8cm/[lll]_{g} \ar@/_1.8cm/[llllll]_{g_0 = \phi_0(p)^{-1} \cdot g}}\] Now, by virtue of the invariance of the Maurer-Cartan form, the manifold \[R_{g_0} N_0 = \set{(\bar{p},\bar{g}\cdot g_0) : (\bar{p},\bar{g}) \in N_0}\] is an integral manifold of $\mathcal{D}$. But then, the point $(p,g) = (p, \phi_0(p) \cdot g_0)$ belongs to $R_{g_0} N_0$ and to $N$, and thus, by the uniqueness and maximality of $N$ it follows that $N$ contains $R_{g_0} N_0$. But then, $(p_0, \phi_0(p_0) \cdot g_0)$ is a point of $N$ which projects through $\pi_M$ to $p_0$, which contradicts the fact that $p_0 \in M -\pi_M(N)$, proving the theorem. \end{proof} We can now deduce the special cases where the rank of the coframe is either zero or maximal: \begin{corol}[Theorem 14.28 of \cite{Olver}] Let $\theta$ be a coframe of rank $0$ on a manifold $M$. Then $M$ is globally equivalent, up to covering, to an open set of a Lie group.
\end{corol} \begin{proof} If $\theta$ has rank $0$, the classifying Lie algebroid of the coframe has base $X= \set{*}$, i.e., it is a Lie algebra, so it is integrable by a Lie group. It follows that $M$ is globally equivalent, up to covering, to an open set of this Lie group. \end{proof} \begin{corol}[Theorem 14.30 of \cite{Olver}] Let $(M_1, \theta_1, h_1)$ and $(M_2, \theta_2, h_2)$ be full realizations of rank $n$ over the same leaf $L \subset X$ of the classifying Lie algebroid $A\to X$. Then $M_1$ and $M_2$ are realization covers of a common realization $(M, \theta, h)$. \end{corol} \begin{proof} The assumptions of the corollary guarantee that $L$ is an $n$-dimensional leaf of $A$ for which $h_1(M_1) = L = h_2(M_2)$. Since $A$ has rank $n$, it follows that the anchor of the restriction of $A$ to $L$ is injective, and thus $A|_L$ is integrable (see \cite{CrainicFernandes}). Moreover, any Lie groupoid $\mathcal{G}$ integrating $A|_L$ has discrete isotropy, so its target map restricts to a local diffeomorphism on each source fiber. It follows that we can make $M=L$ into a realization by equipping it with the pullback of the Maurer-Cartan form by the local inverses of $\mathbf{t}$. More precisely, let $U$ be an open set in $\mathbf{s}^{-1}(x)$ for which the restriction of $\mathbf{t}$ is one-to-one and let $V = \mathbf{t}(U)$ be the open image of $U$ by $\mathbf{t}$. Define $\theta_U$ to be the coframe on $V$ given by \[\theta_U = (\mathbf{t}\vert_U^{-1})^{\ast}\omega_{\textrm{MC}}.\] Then $\theta_U$ is the restriction to $V$ of a globally defined coframe on $L$. In fact, suppose that $\bar{U}$ is another open set of $\mathbf{s}^{-1}(x)$ for which $\mathbf{t}\vert_{\bar{U}}$ is one-to-one and such that $\bar{V} = \mathbf{t}(\bar{U})$ intersects $V$.
Denote by $\theta_{\bar{U}}$ the coframe on $\bar{V}$ defined by \[\theta_{\bar{U}} = (\mathbf{t}\vert_{\bar{U}}^{-1})^{\ast}\omega_{\textrm{MC}}.\] We will show that $\theta_U$ and $\theta_{\bar{U}}$ coincide on the intersection $V \cap \bar{V}$. After shrinking $U$ and $\bar{U}$, if necessary, we may assume that $V = \bar{V}$. But then, since $\mathcal{G}$ is \'etale, it follows that the isotropy group $\mathcal{G}_x$ is discrete, which implies that $U$ is the right translation of $\bar{U}$ by an element $g\in \mathcal{G}_x$, i.e., $U = R_g(\bar{U})$. Thus, \begin{align*} \theta_U & = (R_g \circ \mathbf{t}\vert_{\bar{U}}^{-1})^{\ast}\omega_{\textrm{MC}} \\ &= (\mathbf{t}\vert_{\bar{U}}^{-1})^{\ast} (R_g^{\ast}\omega_{\textrm{MC}})\\ &=(\mathbf{t}\vert_{\bar{U}}^{-1})^{\ast}\omega_{\textrm{MC}} =\theta_{\bar{U}}, \end{align*} and $\theta$ is a well-defined global coframe on $L$. Now, since both $M_1$ and $M_2$ are globally equivalent, up to covering, to open sets in $\mathbf{s}^{-1}(x)$, it follows that the surjective submersions $h_i: M_i \to L$ are realization covers. \end{proof} \begin{remark} There is a different, more direct, argument to see that the $h_i$'s are realization covers. In fact, the coframe $\theta_1$ on $M_1$ is locally the pullback of the Maurer-Cartan form on an $\mathbf{s}$-fiber of $\mathcal{G}$ by some locally defined diffeomorphism $\phi_1 : W_1 \subset M_1 \to \mathbf{s}^{-1}(x)$. After shrinking $W_1$, we may assume that the restriction of $\mathbf{t}$ to $U_1 = \phi_1(W_1)$ is a diffeomorphism. But then, \begin{align*} h_1^{\ast} \theta & = (\mathbf{t} \circ \phi_1)^{\ast} \theta \\ & = \phi_1^{\ast} \mathbf{t}^{\ast} (\mathbf{t}\vert_{U_1}^{-1})^{\ast} \omega_{\textrm{MC}}\\ & = \phi_1^{\ast}\omega_{\textrm{MC}}= \theta_1. \end{align*} Obviously, the very same argument can be given to show that $h_2$ is also a realization cover.
This can be summarized by the diagram: \[\xymatrix{& \mathbf{s}^{-1}(x) \ar[dd]^{\mathbf{t}} & \\ M_1 \ar[ur]^{\phi_1} \ar[dr]_{h_1} & & M_2 \ar[ul]_{\phi_2} \ar[dl]^{h_2}\\ & L} \] \end{remark} \subsection{Cohomological Invariants of Geometric Structures} \label{subsec:cohomol:invariants} The classifying Lie algebroid $A$ of a fixed geometric structure should be seen as a basic invariant of the structure under global equivalence, up to covering. In fact, we have the following result: \begin{prop} \label{prop:algbrd:cover} Let $\theta$ be a fully regular coframe on $M$ and let $\bar{\theta}$ be an arbitrary coframe on $\bar{M}$. If $(M,\theta)$ and $(\bar{M}, \bar{\theta})$ are globally equivalent, up to covering, then \begin{enumerate} \item[(i)] $\bar{\theta}$ is a fully regular coframe on $\bar{M}$, and \item[(ii)] the classifying Lie algebroids of $\theta$ and $\bar{\theta}$ are isomorphic. \end{enumerate} \end{prop} \begin{proof} In order to prove this proposition, it is enough to consider the case where $\theta$ is a fully regular coframe on $M$, $\bar{\theta}$ is a coframe on another manifold $\bar{M}$ and $\pi:\bar{M}\to M$ is a surjective local diffeomorphism which preserves the coframes. First we need: \begin{lemma} \label{lem:inv:cover} The invariant functions of the two coframes are in 1:1 correspondence: $\Inv(\bar{\theta})=\pi^*\Inv(\theta)$. \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:inv:cover}] Clearly, since $\pi:\bar{M}\to M$ is a cover, the locally defined self-equivalences of $(M,\theta)$ and $(\bar{M},\bar{\theta})$ correspond to each other. Hence the result follows. \end{proof} \vskip 10 pt It follows that if $\theta$ is a fully regular coframe, then so is the coframe $\bar{\theta}$. Let us denote by $A_\theta \to X_\theta$ and $A_{\bar{\theta}}\to X_{\bar{\theta}}$ the classifying algebroids of $(M,\theta)$ and $(\bar{M},\bar{\theta})$, respectively.
Also, we denote by $\kappa_\theta:M\to X_\theta$ and $\kappa_{\bar{\theta}}:\bar{M}\to X_{\bar{\theta}}$ the corresponding classifying maps, so $(M,\theta,\kappa_\theta)$ and $(\bar{M},\bar{\theta},\kappa_{\bar{\theta}})$ are realizations of $A_\theta$ and $A_{\bar{\theta}}$. Since $(\bar{M},\bar{\theta},\kappa_\theta\circ\pi)$ is also a realization of $A_\theta$, it follows from Corollary \ref{cor:subalgbrd} that there is an algebroid morphism from $A_\theta$ to $A_{\bar{\theta}}$, which is injective on the fibers and covers a local diffeomorphism $X_\theta\to X_{\bar{\theta}}$ making the following diagram commutative: \[ \xymatrix{ M\ar[d]_{\kappa_\theta}& \bar{M}\ar[l]_{\pi}\ar[d]^{\kappa_{\bar{\theta}}}\\ X_\theta\ar[r]& X_{\bar{\theta}} } \] We claim that the map $X_\theta\to X_{\bar{\theta}}$ is injective, so that $A_\theta$ and $A_{\bar{\theta}}$ are isomorphic. In fact, let $x,y\in X_\theta$ be points which are mapped to the same point $z\in X_{\bar{\theta}}$. Then we can choose points $p,q\in M$ and $\bar{p},\bar{q}\in\bar{M}$ which are mapped to each other under the commutative diagram above. Then, by Theorem \ref{thm: classification coframes}, there exists a local equivalence $\bar{\phi}:\bar{M}\to \bar{M}$, such that $\bar{\phi}(\bar{p})=\bar{q}$. But then (after restricting to small enough open sets) $\bar{\phi}$ covers a local equivalence $\phi:M\to M$ such that $\phi(p)=q$. This means that $p$ and $q$ are locally equivalent, so $x=\kappa_\theta(p)=\kappa_\theta(q)=y$, thus proving the claim. \end{proof} Even though the isomorphism class of the classifying Lie algebroid does not distinguish coframes which are globally equivalent, up to covering, one can use its Lie algebroid cohomology to obtain invariants of coframes. To illustrate this point of view, we will now describe two invariants of coframes arising in this way.
Given a Lie algebroid $A$, we will denote by $(\Omega^\bullet(A),\mathrm d_A)$ the complex of $A$-forms, where $\Omega^\bullet(A):=\Gamma(\wedge^{\bullet}A^*)$ and the differential is given by the formula: \begin{multline*} \mathrm d_A \omega (\alpha_0, \ldots, \alpha_k) = \sum_{i=0}^k (-1)^i \sharp(\alpha_i)\cdot \omega(\alpha_0, \ldots, \hat{\alpha_i}, \ldots, \alpha_k) + \\ + \sum_{0 \leq i < j \leq k} (-1)^{i+j} \omega([\alpha_i,\alpha_j],\alpha_0, \ldots, \hat{\alpha_i}, \ldots, \hat{\alpha_j},\ldots, \alpha_k). \end{multline*} The resulting cohomology $H^{\bullet}(A)$ is called the Lie algebroid cohomology of $A$. Every Lie algebroid morphism $\Phi:A \to B$ induces a chain map \[\Phi^*:(\Omega^{\bullet}(B), \mathrm d_B) \to (\Omega^\bullet(A),\mathrm d_A),\] and, hence, also a map in cohomology $\Phi^*:H^\bullet(B)\to H^\bullet(A)$. For a fully regular coframe $\theta$ on $M$ the Lie algebroid cohomology of its classifying algebroid $A_\theta$ can be described in terms of \emph{invariant forms}. Let us call $\omega\in\Omega^k(M)$ an \textbf{invariant $k$-form} if for any locally defined equivalence of the coframe $\phi:M\to M$ one has: \[ \phi^*\omega=\omega. \] Since the differential of an invariant form is again an invariant form, the subspace of invariant forms $\Omega^{\bullet}_\theta(M)$ is a subcomplex of the de Rham complex. \begin{definition} The \textbf{invariant cohomology of $(M,\theta)$}, denoted $H^*_{\theta}(M)$, is the cohomology of the complex of invariant forms. \end{definition} Now we note that: \begin{prop} \label{prop:inv:cohom} Let $(M,\theta)$ be a fully regular coframe. Then the Lie algebroid cohomology $H^*(A_{\theta})$ coincides with the invariant cohomology $H^*_\theta(M)$. \end{prop} \begin{proof} This is simply a consequence of the fact that $\theta^*:\Omega^\bullet(A_\theta)\to\Omega^\bullet(M)$ is injective and has image precisely the complex of invariant forms.
In fact, a form $\omega\in\Omega^k(M)$ is the pullback by $\theta$ of a section of $\wedge^kA^*_\theta$ iff when we write $\omega$ in terms of the coframe $\theta$ we have: \[\omega = \sum_{i_1 < \cdots < i_k}a_{i_1,\ldots,i_k}\circ\kappa_\theta~\theta^{i_1}\wedge \cdots \wedge \theta^{i_k},\] for some smooth functions $a_{i_1,\ldots,i_k}\in C^\infty(X_\theta)$. \end{proof} Since we can view the coframe as a Lie algebroid map $\theta:TM\to A_\theta$ (over the classifying map $\kappa_\theta:M\to X_\theta$), there is an induced map in cohomology $\theta^*: H^{\bullet}(A_\theta) \to H^{\bullet}_{\mathrm{dR}}(M)$. By the previous proposition, a class belongs to the image of this map iff it can be represented by an invariant $k$-form. In fact, under the isomorphism $H^*(A_{\theta})\simeq H^*_\theta(M)$ the map $\theta^*: H^{\bullet}(A_\theta) \to H^{\bullet}_{\mathrm{dR}}(M)$ corresponds to the map $H^*_\theta(M) \to H^{\bullet}_{\mathrm{dR}}(M)$ induced by the forgetful map. \begin{remark} If every local equivalence can be extended to a global equivalence (i.e., if $(M,\theta,\kappa_\theta)$ is a complete realization), then $H^*_\theta(M)$ coincides with the invariant cohomology $H^\bullet_G(M)$ associated with the action of $G=\Diff(M,\theta)$, the group of symmetries of the coframe. If further $G$ is compact, then we have $H^\bullet_G(M)=H^{\bullet}_{\mathrm{dR}}(M)$. \end{remark} From Proposition \ref{prop:algbrd:cover} we see that invariant cohomology is an invariant of global equivalence, up to covering. \begin{corol} \label{cor:equiv:cohomology} Let $(M,\theta)$ and $(\bar{M},\bar{\theta})$ be fully regular coframes which are globally equivalent, up to covering. Then their invariant cohomologies are isomorphic $H_{\theta}^*(M)\simeq H^*_{\bar{\theta}}(\bar{M})$. \end{corol} The invariant cohomology of a coframe is the natural place where classes associated with the coframe live. We study one such example next.
\begin{remark} \label{rem:cohom:realization} There is also an obvious extension of these constructions and results to any realization $(M, \theta, h)$ of a Cartan problem, so one can define the invariant cohomology $H^*_{\theta,h}(M)$. This cohomology is invariant under global equivalence and, if $(M,\theta, h)$ is complete and $G = \Diff(M,\theta,h)$, then it coincides with $H^\bullet_G(M)$. \end{remark} \subsection{The modular class} \label{subsec:mod:class} Lie algebroids have associated intrinsic characteristic classes (see \cite{Fernandes:holonomy,Crainic:Van,CrainicFernandes:exotic}). Therefore, for a fully regular coframe $\theta$ we can consider the characteristic classes of its classifying Lie algebroid $A_\theta$ and we obtain certain invariant cohomology classes which may be called the characteristic classes of the coframe. The simplest of these classes is the \emph{modular class} (\cite{Weinstein:modular1,Weinstein:modular2}), which we recall briefly. Given a Lie algebroid $A\to M$, the line bundle $L_A:=\wedge^{\text{top}}A\otimes\wedge^{\text{top}}T^*M$ carries a natural flat $A$-connection, which makes $L_A$ into an $A$-representation, defined by: \[ \nabla_\alpha (\omega\otimes\nu)=\mathscr{L}_\alpha\omega\otimes\nu+\omega\otimes\mathscr{L}_{\rho(\alpha)}\nu\quad (\alpha\in\Gamma(A)). \] When $L_A$ is orientable, so that it carries a nowhere vanishing section $\mu\in\Gamma(L_A)$, we have: \[ \nabla_\alpha \mu=\langle c_\mu,\alpha\rangle\mu,\quad \forall \alpha\in\Gamma(A),\] for a 1-form $c_\mu\in\Omega^1(A)$. One checks easily that $c_\mu$ is $\mathrm d_A$-closed and one calls $c_\mu$ the \textbf{modular cocycle} of $A$ relative to the nowhere vanishing section $\mu$. If $\mu'=f \mu$ is some other non-vanishing section, then one finds that: \[ c_{\mu'}=c_\mu+\mathrm d_A \log |f|, \] so the cohomology class $\modular(A):=[c_\mu]\in H^1(A)$ does not depend on the choice of global section $\mu$. One calls $\modular(A)$ the \textbf{modular class} of $A$.
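The transformation rule $c_{\mu'}=c_\mu+\mathrm d_A \log |f|$ for the modular cocycle is a direct consequence of the Leibniz rule for the flat $A$-connection; we spell out this routine check for convenience. Since $\rho(\alpha)f=f\,\langle\mathrm d_A\log|f|,\alpha\rangle$ for every section $\alpha\in\Gamma(A)$, we compute: \[ \nabla_\alpha(f\mu)=\big(\rho(\alpha)f\big)\,\mu+f\,\nabla_\alpha\mu=\big(\langle\mathrm d_A\log|f|,\alpha\rangle+\langle c_\mu,\alpha\rangle\big)\,f\mu, \] so that the class $\modular(A)=[c_\mu]\in H^1(A)$ is indeed independent of the choice of nowhere vanishing section.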
When $L_A$ is not orientable, one repeats the construction for $L_A\otimes L_A$ and defines $\modular(A)$ to be one half the cohomology class associated with the line bundle $L_A\otimes L_A$. \begin{definition} The \textbf{modular class of a fully regular coframe} is the invariant cohomology class $\modular(\theta)\in H^1_{\theta}(M)$ which under the natural isomorphism $H^\bullet_{\theta}(M)\simeq H^\bullet (A_\theta)$ corresponds to the class $\modular(A_\theta)$. \end{definition} It should be clear that if $(M,\theta)$ and $(\bar{M},\bar{\theta})$ are fully regular coframes which are globally equivalent, up to covering, then under the natural isomorphism $H_{\theta}^*(M)\simeq H^*_{\bar{\theta}}(\bar{M})$ given by Corollary \ref{cor:equiv:cohomology} the modular classes correspond to each other. Our next proposition leads to a geometric interpretation of the modular class of a coframe: \begin{prop} \label{prop:mod:class} Let $(M,\theta)$ be a fully regular coframe with symmetry Lie algebra $\mathfrak{g}:=\ensuremath{\mathfrak{X}}(M,\theta)$ of dimension $k$. Let $\{\xi_1,\dots,\xi_k\}$ be a basis of $\mathfrak{g}$ and let $\mu_G$ be an invariant $k$-form which restricts to a volume form on each $\mathfrak{g}$-orbit. Then: \[ \modular(\theta)=-[\mathrm d\log|\langle\mu_G,\xi_1\wedge\cdots\wedge\xi_k\rangle|]\in H^1_\theta(M). \] \end{prop} \begin{proof} For a transitive algebroid the bundle $\ker\rho$ is a natural $A$-representation and we have a natural isomorphism of $A$-representations $\wedge^{\text{top}}\ker\rho\simeq L_A$. By Remark \ref{rem:transitive}, the classifying Lie algebroid $A_\theta\to X_\theta$ is always transitive. The classifying map $\kappa_\theta:M\to X_\theta$, whose fibers coincide with the $\mathfrak{g}$-orbits, gives an identification $\kappa_\theta^*\ker\rho\simeq \ker\mathrm d \kappa_\theta$.
Hence, we can identify invariant sections of $\ker\mathrm d \kappa_\theta$ with sections of $\ker\rho$, and it follows that any invariant $k$-form $\mu_G$ which restricts to a volume form on each $\mathfrak{g}$-orbit determines a global section of $L_A^*$ (and, hence, of $L_A$). If $\{\alpha_1,\dots,\alpha_n\}$ denotes the canonical basis of $A$ (which we identify with the invariant vector fields dual to the coframe $\{\theta^1,\dots,\theta^n\}$), then under these identifications, we have that: \[ \nabla_{\alpha_i} \mu_G =\mathscr{L}_{\alpha_i} \mu_G, \] where on the right-hand side we view $\alpha_i$ as an invariant vector field on $M$. It follows that the modular cocycle relative to the section $\mu=\mu_G^*$ is given by: \begin{align*} c_\mu(\alpha_i)&=\frac{\langle c_\mu(\alpha_i)\mu_G,\xi_1\wedge\cdots\wedge\xi_k\rangle}{\langle\mu_G,\xi_1\wedge\cdots\wedge\xi_k\rangle}\\ &=\frac{\langle -\nabla_{\alpha_i}\mu_G,\xi_1\wedge\cdots\wedge\xi_k\rangle}{\langle\mu_G,\xi_1\wedge\cdots\wedge\xi_k\rangle}\\ &=\frac{\langle -\mathscr{L}_{\alpha_i}\mu_G,\xi_1\wedge\cdots\wedge\xi_k\rangle}{\langle\mu_G,\xi_1\wedge\cdots\wedge\xi_k\rangle}\\ &=-\frac{\mathscr{L}_{\alpha_i}\langle\mu_G,\xi_1\wedge\cdots\wedge\xi_k\rangle}{\langle\mu_G,\xi_1\wedge\cdots\wedge\xi_k\rangle}=-\langle \alpha_i,\mathrm d\log|\langle\mu_G,\xi_1\wedge\cdots\wedge\xi_k\rangle|\rangle. \end{align*} This proves that the expression for the modular class in the statement of the proposition holds. \end{proof} The proposition shows, in particular, that the natural map $H^*_\theta(M) \to H^{\bullet}_{\mathrm{dR}}(M)$ sends the modular class $\modular(\theta)$ to zero. Thus, the modular class is an obstruction for a complete coframe to have a compact symmetry group, i.e., \begin{corol} Let $(M,\theta)$ be a fully regular coframe. If $\theta$ is complete and the symmetry group $\Diff(M, \theta)$ is compact, then $\modular(\theta) = 0$.
\end{corol} Also, we conclude from Proposition \ref{prop:mod:class} that: \begin{corol} The symmetry Lie algebra $\mathfrak{g}:=\ensuremath{\mathfrak{X}}(M,\theta)$ is unimodular if and only if $\modular(\theta)=0$. \end{corol} \begin{proof} One finds for any $\xi\in\mathfrak{g}$ that: \begin{align*} \mathscr{L}_\xi \log |\langle\mu_G,\xi_1\wedge\cdots\wedge\xi_k\rangle|&=\frac{\mathscr{L}_\xi \langle\mu_G,\xi_1\wedge\cdots\wedge\xi_k\rangle}{\langle\mu_G,\xi_1\wedge\cdots\wedge\xi_k\rangle}\\ &=\frac{\sum_{i=1}^k \langle\mu_G,\xi_1\wedge\cdots \wedge [\xi,\xi_i] \wedge\cdots\wedge\xi_k\rangle}{\langle\mu_G,\xi_1\wedge\cdots\wedge\xi_k\rangle}=\tr(\text{\rm ad}\,\xi). \end{align*} Hence, we conclude that $\log |\langle\mu_G,\xi_1\wedge\cdots\wedge\xi_k\rangle|$ is an invariant function if and only if $\mathfrak{g}$ is unimodular. \end{proof} \begin{remark} More generally, if $(M, \theta, h)$ is a realization of a Cartan problem associated with a classifying Lie algebroid $A \to X$ one can also define its modular class by \[ \modular(M, \theta, h):=\modular(A) \in H^1_{\theta,h}(M) \] where $H^1_{\theta,h}(M)$ denotes the invariant cohomology (see Remark \ref{rem:cohom:realization}). A version of Proposition \ref{prop:mod:class} also holds. \end{remark} \section{An Example: Surfaces of Revolution} \label{sec: examples} In this section, we consider the local classification of surfaces of revolution up to isometries. We begin by deducing the structure equations of a generic surface of revolution following very closely the differential analysis presented in Chapter 12 of \cite{Olver}. However, since the local moduli space of surfaces of revolution is infinite dimensional (we will see that it depends on an arbitrary function, which will be denoted by $H$), we will then restrict to a finite dimensional class of surfaces of revolution, namely those for which the modulus $H$ has a fixed value, and exhibit its classifying Lie algebroid.
We use this example mainly to illustrate several of the concepts and results presented throughout the paper. In the sequel to this paper, where we discuss $G$-structures, we will discuss more sophisticated (and interesting) examples, such as special symplectic manifolds. Let $x,y,z$ denote the canonical coordinates on $\mathbb R^3$ and let $z = f(x)$ be a curve in the plane $\set{y = 0}$ which does not intersect the $z$-axis, i.e., which is defined only for $x > 0$. Let $\Sigma$ be the surface obtained by rotating this curve around the $z$-axis, which we can parameterize using polar coordinates \[ \left\{ \begin{array}{l} x = r \cos v \\ y = r \sin v \\ z = f(r) \end{array}\right. \] where $r >0$. The relevant coframes for this classification problem are the ones which diagonalize the metric $\mathrm{ds}^2$ induced by the Euclidean metric on $\Sigma \subset \mathbb R^3$, i.e., the coframes $\{\omega^1,\omega^2\}$ for which \[\mathrm{ds}^2 = (\omega^1)^2 + (\omega^2)^2.\] In order to simplify the expression for such a coframe, we first perform the change of coordinates \[u = \int \sqrt{1 + f^{\prime}(r)^2}\mathrm d r\] with inverse $r = h(u)$, where $h(u) > 0$ for all $u$. With respect to the new coordinates $(u,v)$, the metric can be written as \[\mathrm{ds}^2 = (\mathrm d u)^2 + h(u)^2(\mathrm d v)^2, \] so a diagonalizing coframe for $\mathrm{ds}^2$ is given by \[ \left\{ \begin{array}{l} \omega^1 = \mathrm d u \\ \omega^2 = h(u)\mathrm d v. \end{array}\right. \] Notice that any coframe which differs from $\{\omega^1, \omega^2\}$ by a rotation represents the same metric. This suggests that we should consider the coframe \[ \left\{ \begin{array}{l} \theta^1 = (\sin t) \omega^1 + (\cos t) \omega^2 \\ \theta^2 = (-\cos t) \omega^1 + (\sin t) \omega^2\\ \alpha = \mathrm d t \end{array}\right. \] on $\Sigma \times \mathrm{S}^1$, where $\alpha$ is the Maurer-Cartan form on $\mathrm{S}^1 \cong \mathrm{SO}(2)$.
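To make the change of coordinates concrete, here is a quick worked computation (our own illustrative choice of profile curve, not needed in the sequel): for the half-circle $f(r)=\sqrt{1-r^2}$, defined for $0<r<1$, one has $1+f^{\prime}(r)^2=1/(1-r^2)$, so that
\[ u=\int\frac{\mathrm d r}{\sqrt{1-r^2}}=\arcsin r, \qquad r=h(u)=\sin u, \]
and the metric becomes $\mathrm{ds}^2=(\mathrm d u)^2+\sin^2 u\,(\mathrm d v)^2$, the round metric on (part of) the unit sphere.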
\begin{remark} The passage from $M$ to $M\times G$ is the very first preliminary step in Cartan's equivalence method. The coframe in $M\times G$, obtained by adding the Maurer-Cartan forms, is called the \emph{lifted coframe} (see \cite{Olver}). \end{remark} Next, in order to obtain the structure equations and structure functions, we differentiate the one-forms $\theta^1$, $\theta^2$ and $\alpha$, obtaining \begin{equation} \label{eq: structure with torsion} \left\{ \begin{array}{l} \mathrm d\theta^1 = -\alpha \wedge \theta^2 + (\frac{h^{\prime}(u)}{h(u)}\cos t) \theta^1\wedge\theta^2 \\ \mathrm d\theta^2 = \alpha \wedge \theta^1 + (\frac{h^{\prime}(u)}{h(u)}\sin t) \theta^1\wedge\theta^2\\ \mathrm d\alpha = 0 \end{array}\right. \end{equation} These structure equations depend explicitly on the coordinates $u$ and $t$. However, this dependence can be eliminated by replacing the form $\alpha$ by \begin{equation}\label{eq: absorption of torsion} \eta = \alpha - \frac{h^{\prime}(u)}{h(u)}((\cos t)\theta^1 + (\sin t)\theta^2) = \alpha - h^{\prime}(u)\mathrm d v. \end{equation} If we now rewrite \eqref{eq: structure with torsion} in terms of the new coframe we obtain \[ \left\{ \begin{array}{l} \mathrm d\theta^1 = -\eta \wedge \theta^2 \\ \mathrm d\theta^2 = \eta \wedge \theta^1\\ \mathrm d\eta = \kappa \theta^1 \wedge \theta^2 \end{array}\right. \] where $\kappa = -h^{\prime \prime}(u)/h(u)$ is the Gaussian curvature of $\Sigma$. \begin{remark} The substitution of $\alpha$ by $\eta$ is another step in Cartan's equivalence method and is known as \emph{absorption of torsion} (see \cite{Olver}). \end{remark} The next step is to calculate the coframe derivative of $\kappa$. We obtain \[\mathrm d \kappa = \kappa^{\prime}(u)((\sin t)\theta^1 - (\cos t)\theta^2).\] However, since $\Sigma$ is a \emph{surface of revolution}, we know that its symmetry Lie group must be at least one-dimensional.
It follows that there can be at most two independent invariants, and since we already have two invariant functions, namely $t$ and $\kappa$, we conclude that $\kappa^{\prime}(u) = H(\kappa)$ for some $H \in \mathrm{C}^{\infty}(\mathbb R)$. \begin{remark}\label{rmk: t as an invariant} By its definition, $t$ is the angle between the orthogonal coframe at a point and the special coframe $\set{\mathrm d u, h(u)\mathrm d v}$ given in the adapted coordinates. Thus, imposing $t$ as an invariant is the same as considering surfaces of revolution up to isometries which fix the angle $t$. This can be geometrically interpreted as a restriction on the allowed symmetries of $\Sigma$. For a generic surface of revolution this is not really a restriction but, for example, when $H=0$, i.e., when the curvature of $\Sigma$ is constant, this imposes a true restriction on the symmetry group. \end{remark} Finally, from \eqref{eq: absorption of torsion} we find that \[\mathrm d t = \eta + J(\kappa)((\cos t)\theta^1 +(\sin t)\theta^2)\] for some $J \in \mathrm{C}^{\infty}(\mathbb R)$. The relationship between the two functions $H$ and $J$ follows from imposing $\mathrm d^2 t = 0$ (note that $\mathrm d^2 \kappa = 0$ then holds automatically). If we assume $\kappa'(u)\not=0$, i.e., that $H(\kappa)\not=0$, this yields \begin{equation}\label{eq: differential relation} J^{\prime}(\kappa)H(\kappa) = -\kappa - J(\kappa)^2. \end{equation} In conclusion, the full set of \textbf{structure equations for surfaces of revolution} is \begin{equation}\label{eq: structure equations for surfaces of revolution} \left\{ \begin{array}{l} \mathrm d\theta^1 = -\eta \wedge \theta^2 \\ \mathrm d\theta^2 = \eta \wedge \theta^1\\ \mathrm d\eta = \kappa \theta^1 \wedge \theta^2\\ \mathrm d \kappa = H(\kappa)((\sin t)\theta^1 - (\cos t)\theta^2)\\ \mathrm d t = \eta + J(\kappa)((\cos t)\theta^1 +(\sin t)\theta^2) \end{array}\right.
\end{equation} where $H$ is an arbitrary smooth function and $J \in \mathrm{C}^{\infty}(\mathbb R)$ is determined by \eqref{eq: differential relation}. Notice that $\mathrm d^2 = 0$ is a consequence of the equations above. \begin{remark}[Continuing Remark \ref{rmk: t as an invariant}] \label{rmk: t as an invariant:2} When $\kappa'(u)=0$ the analysis above collapses and the structure equations for surfaces of revolution reduce to: \begin{equation} \label{eq:constant:curvature} \left\{ \begin{array}{l} \mathrm d\theta^1 = -\eta \wedge \theta^2 \\ \mathrm d\theta^2 = \eta \wedge \theta^1\\ \mathrm d\eta = \kappa \theta^1 \wedge \theta^2\\ \mathrm d \kappa = 0. \end{array}\right. \end{equation} \end{remark} Now that we have found the structure equations for an arbitrary surface of revolution, we may immediately write the classifying Lie algebroid $A$ for such surfaces. As a vector bundle, $A \cong (\mathbb R \times \mathrm{S}^1)\times \mathbb R^3 \to \mathbb R \times \mathrm{S}^1$. Its Lie bracket is given on the constant sections $e_1,e_2$ and $e_3$ by \[ \begin{array}{l} \left[e_1, e_2\right](\kappa, t) = -\kappa e_3 \\ \left[e_1, e_3\right] (\kappa, t)= e_2 \\ \left[e_2, e_3\right] (\kappa, t)= -e_1 \end{array}\] and its anchor is given by \[ \begin{array}{l} \sharp(e_1)(\kappa, t) = H(\kappa)\sin t \frac{\partial}{\partial \kappa} + J(\kappa)\cos t \frac{\partial}{\partial t} \\ \sharp(e_2)(\kappa, t) = -H(\kappa)\cos t \frac{\partial}{\partial \kappa} + J(\kappa)\sin t \frac{\partial}{\partial t} \\ \sharp(e_3)(\kappa, t) = \frac{\partial}{\partial t}. \end{array}\] Again, when $\kappa'(u)=0$ (cf.~Remarks \ref{rmk: t as an invariant} and \ref{rmk: t as an invariant:2}), the analysis collapses and we obtain an algebroid over a 1-dimensional manifold with zero anchor (i.e., a bundle of Lie algebras). For any function $H(\kappa)$ the equations above determine a Lie algebroid. 
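The claim that these data satisfy the Lie algebroid axioms can be spot-checked symbolically. The following sketch (our own addition, not part of the original text, using the Python library \texttt{sympy}) verifies the essential compatibility $[\sharp(e_1),\sharp(e_2)]=-\kappa\,\partial/\partial t=\sharp([e_1,e_2])$ on the $(\kappa,t)$-chart, once relation \eqref{eq: differential relation} is imposed:

```python
import sympy as sp

k, t = sp.symbols('kappa t')
H = sp.Function('H')(k)
J = sp.Function('J')(k)

# Components of the anchor vector fields #e1 and #e2 on the (kappa, t)-chart,
# written with respect to the frame (d/d kappa, d/d t).
X1 = (H * sp.sin(t), J * sp.cos(t))
X2 = (-H * sp.cos(t), J * sp.sin(t))

def bracket(X, Y):
    """Lie bracket [X, Y] of vector fields on the (kappa, t)-chart."""
    coords = (k, t)
    return tuple(
        sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
            for j in range(2))
        for i in range(2))

B = bracket(X1, X2)
# Impose the compatibility relation J'(kappa) H(kappa) = -kappa - J^2:
rel = {sp.Derivative(J, k): (-k - J**2) / H}
B = tuple(sp.simplify(b.subs(rel)) for b in B)

# [#e1, #e2] = -kappa d/dt, which is exactly #([e1, e2]) = #(-kappa e3).
assert B[0] == 0
assert sp.simplify(B[1] + k) == 0
```

The $\kappa$-component of the bracket vanishes identically, while the $t$-component reduces to $-\kappa$ exactly when $J^{\prime}H=-\kappa-J^2$; without that substitution it equals $HJ^{\prime}+J^2$, so \eqref{eq: differential relation} is precisely the required integrability condition.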
From Theorem \ref{thm: classification coframes} and the fact that the algebroid is transitive, we deduce that: \begin{prop} For any $\kappa_0 \in \mathbb R$ and any $H \in \mathrm{C}^{\infty}(\mathbb R)$ there exists a surface of revolution $\Sigma$ with $p \in \Sigma$ such that $\kappa(p) = \kappa_0$ and for which $\kappa^{\prime}(u)=H(\kappa)$. Moreover, any two such surfaces of revolution are locally isometric in a neighborhood of the points corresponding to $\kappa_0$. \end{prop} We can also use the classifying Lie algebroid to describe the infinitesimal symmetries of surfaces of revolution (see Proposition \ref{prop:inf:sym:coframe} and Theorem \ref{thm:Lie:symmetries}). For this, we must distinguish the cases $\kappa'(u)\not=0$ and $\kappa'(u)=0$: \begin{description} \item[$H(\kappa)\not=0$] In this case, which is generic, it is easy to see that the anchor is surjective, and thus the symmetry Lie algebra is $1$-dimensional; it corresponds to rotation around the axis of revolution. \item[$H(\kappa)=0$] In this case, the surface of revolution has constant curvature. The orbits of the corresponding Lie algebroid are $0$-dimensional, and thus the symmetry Lie algebra is $3$-dimensional. There are three possibilities: \begin{center} \begin{tabular}{||c|c|c||}\hline $\mathfrak{sl}_{2}$ & \footnotesize{if $\kappa < 0$} & \footnotesize{Hyperbolic Geometry} \\ \hline $\mathfrak{se}_{2}$ & \footnotesize{if $\kappa=0$} & \footnotesize{Euclidean Geometry} \\ \hline $\mathfrak{so}_{3}$ & \footnotesize{if $\kappa>0$} & \footnotesize{Spherical Geometry} \\ \hline \end{tabular} \end{center} \end{description} \begin{remark} Note that a surface of revolution for which $H(\kappa)$ is non-constant and vanishes at some point \emph{cannot} appear as a realization of our Cartan's problem. The reason is that the corresponding coframe will not be fully regular.
\end{remark} \begin{remark} Strictly speaking, the classifying Lie algebroid described above (and also the conclusions deduced from it) concerns the coframe $\theta^1, \theta^2, \eta$, and not the surface itself. However, it will be shown in \cite{FernandesStruchiner} how to deal with Cartan's realization problem on finite type $G$-structures. In particular, the realization problem discussed here should be treated as a realization problem on the orthogonal frame bundle of the surface of revolution. \end{remark} We remark that for each choice of a function $H$, the classifying Lie algebroid discussed above is the classifying Lie algebroid of a single coframe; namely, the coframe $\theta^1,\theta^2, \eta$ on the orthogonal frame bundle of the surface of revolution. However, the (local) moduli space of surfaces of revolution is clearly infinite dimensional. Thus, in order to describe a class of surfaces of revolution, we will consider here only those for which $H$ is constant. \begin{remark} The condition that $H$ is constant can be seen as a second order differential equation on the curvature tensor $R$ of the induced metric on the surface of revolution. In fact, using the coordinates $u,v$ introduced earlier, we can express the class of surfaces being considered as the class of surfaces of revolution for which \[\nabla_{\frac{\partial}{\partial u}}(\nabla R) = 0,\] where $\nabla$ is the Levi-Civita connection of the surface. Since this condition is equivalent to $\kappa(u)$ being an affine map, we will refer to such surfaces as \textbf{affinely curved surfaces of revolution}. \end{remark} Note now that since we are not prescribing a specific value for the constant $H$, we must consider it as a new invariant function for affinely curved surfaces of revolution which is subject to the condition $\mathrm d H = 0$. Similarly, $J$ is no longer determined and must also be added to our set of invariant functions.
It then follows from \eqref{eq: differential relation} that whenever $H \neq 0$ (which we assume from now on) \[\mathrm d J = -(\kappa + J^2)((\sin t) \theta^1 - (\cos t) \theta^2).\] Thus, as a complete set of structure equations for affinely curved surfaces of revolution one obtains: \begin{equation}\label{eq: structure equations for affinely curved surfaces of revolution} \left\{ \begin{array}{l} \mathrm d\theta^1 = -\eta \wedge \theta^2 \\ \mathrm d\theta^2 = \eta \wedge \theta^1\\ \mathrm d\eta = \kappa \theta^1 \wedge \theta^2\\ \mathrm d \kappa = H((\sin t)\theta^1 - (\cos t)\theta^2)\\ \mathrm d t = \eta + J((\cos t)\theta^1 +(\sin t)\theta^2)\\ \mathrm d H = 0\\ \mathrm d J = -(\kappa + J^2)((\sin t) \theta^1 - (\cos t) \theta^2). \end{array}\right. \end{equation} Again, it is easily verified that these structure equations have as a formal consequence that $\mathrm d^2 = 0$, and thus they determine the classifying Lie algebroid $B$ for affinely curved surfaces of revolution. As a vector bundle, $B \cong Y\times \mathbb R^3 \to Y$, where \[Y = \set{(\kappa, H, J, t) \in \mathbb R^3 \times \mathrm{S}^1: H \neq 0}.\] Its Lie bracket is given on the constant sections $e_1,e_2$ and $e_3$ by \[ \begin{array}{l} \left[e_1, e_2\right](\kappa, H, J, t) = -\kappa e_3 \\ \left[e_1, e_3\right] (\kappa, H, J, t)= e_2 \\ \left[e_2, e_3\right] (\kappa, H, J, t)= -e_1 \end{array}\] and its anchor is given by \[ \begin{array}{l} \sharp(e_1)(\kappa,H,J, t) = H\sin t \frac{\partial}{\partial \kappa} - (\kappa + J^2)\sin t\frac{\partial}{\partial J}+ J\cos t \frac{\partial}{\partial t} \\ \sharp(e_2)(\kappa,H,J, t) = -H\cos t \frac{\partial}{\partial \kappa} +(\kappa + J^2)\cos t\frac{\partial}{\partial J} + J\sin t \frac{\partial}{\partial t} \\ \sharp(e_3)(\kappa,H,J, t) = \frac{\partial}{\partial t}.
\end{array}\] Observe that the orbits of this Lie algebroid are 2-dimensional, so we can conclude that: \begin{prop} For any $\kappa_0, H_0, J_0$ with $H_0 \neq 0$ there exists an affinely curved surface of revolution whose structure invariants take the values $\kappa_0, H_0, J_0$. Moreover, \begin{enumerate} \item Every affinely curved surface of revolution has a one-dimensional symmetry Lie algebra. \item If two affinely curved surfaces of revolution are open submanifolds of a third affinely curved surface of revolution, then their invariant constant functions $H$ must agree. \end{enumerate} \end{prop} \appendix \section{The classifying Lie algebroid of a single coframe} \label{append:single:coframe} In this appendix we will use the jet bundle approach to give a coordinate-free construction of the classifying Lie algebroid associated to a fully regular coframe $\theta$ on a manifold $M$. We begin by recalling some general constructions. Let $N$ be a manifold and let $\pi: N \to M$ be a smooth map. We denote by $\text{\rm J}^1N\to M$ the bundle of 1-jets of sections of $\pi$. Let us denote by $\mathrm{Hom}(TN, \pi^*TM)$ the vector bundle over $N$ whose sections are bundle maps \[ \xymatrix{TN \ar[rr] \ar[rd]& & \pi^*TM\ar[dl]\\ & N, & }\] and by $\text{\rm J}^1(\mathrm{Hom}(TN, \pi^*TM))$ the bundle of $1$-jets of sections of $\mathrm{Hom}(TN, \pi^*TM)$. After a choice of a (local) flat connection, we can identify a point of $\text{\rm J}^1(\mathrm{Hom}(TN, \pi^*TM))$ in the fiber over $x\in N$ with a pair $(q, l_q)$, where $q\in \mathrm{Hom}_x(TN, \pi^*TM)$ and \[l_q: T_xN \to \mathrm{Hom}_x(TN, \pi^*TM)\] is a linear map.
It follows that we may define a bundle map \[\xymatrix{\text{\rm J}^1(\mathrm{Hom}(TN, \pi^*TM)) \ar[rr]^{\mathrm{Alt}} \ar[dr] && \mathrm{Hom}(\wedge^2TN, \pi^*TM) \ar[dl]\\ &N&}\] given locally by \[\mathrm{Alt}(q,l_q)(v\wedge w) = \frac{1}{2}\left(l_q(v)(w) - l_q(w)(v)\right).\] It is easy to check that this formula does not depend on the choice of (local) flat connection, so the bundle map is well defined. \begin{example} If we take $M$ to be the real line $\mathbb R$ and $\pi$ to be a constant map, then the construction above yields a map \[\xymatrix{\text{\rm J}^1(T^*N) \ar[rr]^{\mathrm{Alt}} \ar[dr] & & \wedge^2T^*N \ar[dl]\\ & N &}\] that satisfies \[\mathrm{Alt}(\text{\rm j}^1\alpha) = \mathrm d \alpha.\] \end{example} Now, note that we may view $\pi_*$ as a section of $\mathrm{Hom}(TN, \pi^*TM)$. Thus, by applying $\mathrm{Alt}$, we obtain a section $\mathrm{Alt}(\text{\rm j}^1\pi_*)$ of $\mathrm{Hom}(\wedge^2TN, \pi^*TM)$. \begin{definition} The \textbf{structure tensor} of $\pi: N \to M$ is the map \[c: \text{\rm J}^1N \to \mathrm{Hom}(\wedge^2TM, \pi^*TM)\] defined by \[c(H_p)(v\wedge w) = \mathrm{Alt}(\text{\rm j}^1\pi_*)(\tilde{v}\wedge \tilde{w}),\] where $\tilde{v}$ and $\tilde{w}$ are the horizontal lifts of $v, w \in T_{\pi(p)}M$ to $H_p$. \end{definition} \begin{remark} In the definition above, we have identified $\text{\rm j}^1_x s \in \text{\rm J}^1N$ with the horizontal subspace $H_{s(x)} = \mathrm d_x s(T_xM)$ of $T_{s(x)}N$. \end{remark} \begin{example} In the case where $N$ is the frame bundle of $M$ (or more generally a $G$-structure over $M$), the structure tensor is simply its first order structure function (see, for example, \cite{Sternberg, SingerSternberg} or \cite{FernandesStruchiner}). \end{example} After these preliminary remarks, we now return to the intrinsic construction of the classifying Lie algebroid of a fully regular coframe $\theta$ on a manifold $M$. Let us denote by $\mathrm{F}(M)$ the frame bundle of $M$.
Thus, \[\xymatrix{\mathrm{F}(M) \ar[d]_{\pi} & \ar@(dr,ur)@<-5ex>[]_{\GL_n}\\ M} \] is a principal $\GL_n$-bundle whose fiber over a point $p$ is \[\pi^{-1}(p) = \set{\phi: \mathbb R^n \to T_pM: \phi \text{ is a linear isomorphism}}.\] The coframe $\theta$ can be identified with a section of $\mathrm{F}(M)$. It gives rise to trivializations $TM \cong M \times \mathbb R^n$ and $T^*M \cong M \times \mathbb R^n$ and, with respect to the canonical basis of $\mathbb R^n$, we can write $\theta = (\theta^1, \ldots, \theta^n)$ or, dually, $\frac{\partial}{\partial \theta} = (\theta_1, \ldots, \theta_n)$. A basis of sections of $\wedge^2T^*M \otimes TM$ is then given by \[\theta^{ij}_k = (\theta^i \wedge\theta^j) \otimes \theta_k.\] With respect to this basis, the structure tensor $c$ applied to $\text{\rm j}^1\theta$ gives rise to a section of $\wedge^2T^*M \otimes TM$ whose coordinates are the structure functions, i.e., \[c(\text{\rm j}^1\theta) = \sum C^k_{ij}\theta^{ij}_k.\] It follows that we may view the set $\ensuremath{\mathcal{F}}_0(M)$ of structure functions of $\theta$ as the section $c(\text{\rm j}^1\theta)$ of $\mathrm{Hom}(\wedge^2TM, TM)$. Note also that $\text{\rm j}^1 c$ is a map \[\text{\rm J}^1(\text{\rm J}^1\mathrm{F}(M)) \to \text{\rm J}^1(\mathrm{Hom}(\wedge^2TM, \pi^*TM)).\] Recall that $\text{\rm J}^1(\mathrm{Hom}(\wedge^2TM, \pi^*TM))$ is isomorphic to \[(\wedge^2T^*M \otimes TM) \times \mathrm{Hom}(TM, \wedge^2T^*M \otimes TM).\] Let us denote by $\nabla$ the flat connection on $\wedge^2T^*M \otimes TM$ induced by $\theta$.
Then a basis of sections of $\mathrm{Hom}(TM, \wedge^2T^*M \otimes TM)$ is given by \[\theta^{ij}_{k,l} = \nabla_{\theta_l}\theta^{ij}_k.\] With respect to this basis, the section $\text{\rm j}^1c(\text{\rm j}^2\theta)$ of $\text{\rm J}^1( \mathrm{Hom}(\wedge^2TM, TM))$ is written as \[\text{\rm j}^1c(\text{\rm j}^2\theta) = \sum C^k_{ij}\theta^{ij}_k + \frac{\partial C^k_{ij}}{\partial \theta_l}\theta^{ij}_{k,l}.\] Thus we may view the set $\ensuremath{\mathcal{F}}_1(M)$ formed by the structure functions and their coframe derivatives as the section $\text{\rm j}^1c(\text{\rm j}^2\theta)$ of \[\text{\rm J}^1( \mathrm{Hom}(\wedge^2TM, TM)).\] If we continue in this way, we may identify the set $\ensuremath{\mathcal{F}}_r(M)$ of all coframe derivatives of the structure functions up to order $r$ with the section \[\text{\rm j}^{r-1}c(\text{\rm j}^r \theta) \in \Gamma(\text{\rm J}^r( \mathrm{Hom}(\wedge^2TM, TM))),\] which we shall call the \textbf{$r$-th order structure section} of $\theta$. Now, the coframe $\theta$ induces a trivialization of $\text{\rm J}^r( \mathrm{Hom}(\wedge^2TM, TM))$. Thus we have an identification between all its fibers. We shall denote this common fiber by $\mathbb{K}_r$ and by \[p^r_{\theta}: \text{\rm J}^r( \mathrm{Hom}(\wedge^2TM, TM)) \to \mathbb{K}_r\] the associated projection. We observe that the coframe $\theta$ is fully regular if and only if each of the maps \[\xymatrix{ M \ar@/_1.0cm/[rrrrrr]_{\tilde{\kappa}_r} \ar[rrr]^-{\text{\rm j}^{r-1}c(\text{\rm j}^r \theta)} &&& \Gamma(\text{\rm J}^r( \mathrm{Hom}(\wedge^2TM, TM)))\ar[rrr]^-{p^r_{\theta}} &&& \mathbb{K}_r}\] has constant rank.
Thus, we can summarize the construction made so far by saying that each coframe $\theta$ on $M$, viewed as a section of $\mathrm{F}(M)$, gives rise to a section \[ s_{\theta}\in \Gamma(\text{\rm J}^{\infty}(\mathrm{Hom}(\wedge^2TM,TM))).\] Moreover, since the coframe $\theta$ induces a trivialization of $\text{\rm J}^{\infty}(\mathrm{Hom}(\wedge^2TM,TM))$, there is a natural projection \[p_{\theta}: \text{\rm J}^{\infty}(\mathrm{Hom}(\wedge^2TM,TM)) \to \mathbb{K}_{\infty},\] where $\mathbb{K}_{\infty}$ denotes the fiber of $\text{\rm J}^{\infty}(\mathrm{Hom}(\wedge^2TM,TM))$. If, additionally, we assume that $\theta$ is fully regular, then $\tilde{\kappa}_{\theta} = p_{\theta} \circ s_{\theta}$ has constant rank, and thus its image in $\mathbb{K}_{\infty}$ is an immersed submanifold (possibly with self-intersections) which we denote by $\tilde{X}$, i.e., there is a manifold $X$ and an immersion (not necessarily injective) $\phi: X \to \mathbb{K}_{\infty}$ such that \[\xymatrix{ M \ar[drrr]_{\tilde{\kappa}_{\theta}} \ar@/^1.2cm/[rrrrrr]^{\tilde{\kappa}_\infty} \ar[rrr]^-{s_\theta} &&& \Gamma(\text{\rm J}^{\infty}( \mathrm{Hom}(\wedge^2TM, TM)))\ar[rrr]^-{p_{\theta}} &&& \mathbb{K}_{\infty}\\ &&& X \ar[rrru]_{\phi}&&&}\] We remark that again, as in Section \ref{subsec:coframes:2}, we may take $X$ to be the quotient of $M$ by the equivalence relation where two points $p$ and $q$ are identified if and only if there is a locally defined formal equivalence of $\theta$ which takes $p$ to $q$. Thus, by construction, we may view the structure section of order zero $c(\text{\rm j}^1\theta) \in \Gamma(\mathrm{Hom}(\wedge^2TM, TM))$ as being defined on $X$.
In other words, there is a smooth map \[\hat{c} : X \to \mathrm{Hom}(\wedge^2TM, TM),\] such that \[\xymatrix{M \ar[dr]_-{c(\text{\rm j}^1\theta)} \ar[rr]^{\tilde{\kappa}_{\theta}} && X \ar[dl]^{\hat{c}}\\ & \mathrm{Hom}(\wedge^2TM, TM) &}\] It is convenient to use the coframe $\theta$ to fix a trivialization of $TM$ so that $\hat{c}$ becomes a map \[\hat{c}: X \to \mathrm{Hom}(\wedge^2\mathbb R^n,\mathbb R^n).\] Finally, if we define $n$ vector fields on $X$ through \[F_i = (\tilde{\kappa}_{\theta})_*\frac{\partial}{\partial \theta_i} \in \ensuremath{\mathfrak{X}}(X),\] then we can describe the classifying Lie algebroid $A_{\theta} \to X$ of $\theta$ explicitly as follows. We take $A_{\theta}$ to be the trivial vector bundle $A_{\theta} = X \times \mathbb R^n$ and define its structure by \begin{align*} [e_i, e_j]&= -\hat{c}(e_i\wedge e_j)\\ \sharp(e_i) & = F_i \end{align*} where $\{e_1,\dots,e_n\}$ denotes the canonical basis of sections. \end{document}
\begin{document} \author[Robert Laterveer] {Robert Laterveer} \address{Institut de Recherche Math\'ematique Avanc\'ee, CNRS -- Universit\'e de Strasbourg,\ 7 Rue Ren\'e Des\-car\-tes, 67084 Strasbourg CEDEX, FRANCE.} \email{laterv@math.unistra.fr} \title{On the Chow ring of Fano varieties on the Fatighenti-Mongardi list} \begin{abstract} Conjecturally, Fano varieties of K3 type admit a multiplicative Chow--K\"unneth decomposition, in the sense of Shen--Vial. We prove this for many of the families of Fano varieties of K3 type constructed by Fatighenti--Mongardi. This has interesting consequences for the Chow ring of these varieties. \end{abstract} \thanks{\textit{2020 Mathematics Subject Classification:} 14C15, 14C25, 14C30} \keywords{Algebraic cycles, Chow group, motive, Bloch--Beilinson filtration, Beauville's ``splitting property'' conjecture, multiplicative Chow--K\"unneth decomposition, Fano variety, K3 surface} \thanks{Supported by ANR grant ANR-20-CE40-0023.} \maketitle \section{Introduction} Given a smooth projective variety $Y$ over $\mathbb{C}$, let $A^i(Y):=CH^i(Y)_{\mathbb{Q}}$ denote the Chow groups of $Y$ (i.e. the groups of codimension $i$ algebraic cycles on $Y$ with $\mathbb{Q}$-coefficients, modulo rational equivalence). The intersection product defines a ring structure on $A^\ast(Y)=\bigoplus_i A^i(Y)$, the Chow ring of $Y$ \cite{F}. In the case of K3 surfaces, this ring structure has a remarkable property: \begin{theorem}[Beauville--Voisin \cite{BV}]\label{K3} Let $S$ be a K3 surface. The $\mathbb{Q}$-subalgebra \[ R^\ast(S):= \bigl\langle A^1(S), c_j(S) \bigr\rangle\ \ \ \subset\ A^\ast(S) \] injects into cohomology under the cycle class map. \end{theorem} Motivated by the cases of K3 surfaces and abelian varieties, Beauville \cite{Beau3} has conjectured that for certain special varieties, the Chow ring should admit a multiplicative splitting. 
To make concrete sense of Beauville's elusive ``splitting property conjecture'', Shen--Vial \cite{SV} have introduced the concept of {\em multiplicative Chow--K\"unneth decomposition\/}. It seems both interesting and difficult to better understand the class of special varieties admitting such a decomposition. In \cite{S2}, the following conjecture is raised: \begin{conjecture}\label{conj} Let $X$ be a smooth projective Fano variety of K3 type (i.e. $\dim X=2d$ and the Hodge numbers $h^{p,q}(X)$ are $0$ for all $p\not=q$ except for $h^{d-1,d+1}(X)=h^{d+1,d-1}(X)=1$). Then $X$ has a multiplicative Chow--K\"unneth decomposition. \end{conjecture} This conjecture is verified in some special cases \cite{37}, \cite{39}, \cite{40}, \cite{FLV2}, \cite{S2}. This paper aims to contribute to this program. The main result is as follows: \begin{nonumbering}[=Theorem \ref{main}] Let $X$ be a smooth Fano variety in one of the families of Table \ref{table:1}. Then $X$ has a multiplicative Chow--K\"unneth decomposition. \end{nonumbering} Table \ref{table:1} lists Fano varieties $X$ of K3 type that were constructed by Fatighenti--Mongardi \cite{FM} as hypersurfaces in products of Grassmannians. The K3 surfaces $S$ in Table \ref{table:1} are shown in \cite{FM} to be associated to $X$ on the level of Hodge theory, and on the level of derived categories. In some cases, the geometric relation between $X$ and $S$ is straightforward (e.g., for B1 and B2 the Fano variety $X$ is a blow-up with center the K3 surface $S$); in other cases the geometric relation is more indirect (e.g. for M1, M6, M7, M8, M9, M10 the Fano variety $X$ is related to the K3 surface $S$ via the so-called ``Cayley's trick'', cf. \cite{FM} and subsection \ref{ss:cay} below). To prove Theorem \ref{main}, we have devised a general criterion (Proposition \ref{crit}), which we hope might apply to other Fano varieties of K3 type. 
To verify the criterion, one needs a motivic relation between the Fano variety $X$ and the associated K3 surface $S$, and one needs a certain instance of the {\em Franchetta property\/}. \begin{table}[h] \centering \begin{tabular}{||c c c c c c||} \hline $\stackrel{\hbox{Label}}{\hbox{in\ \cite{FM}}}$ & $X\subset U$ & $\dim X$ & $\rho(X)$ & $\stackrel{\hbox{Genus of}}{\hbox{associated\ K3}}$ &$\stackrel{\hbox{Also}} {\hbox{occurs\ in}}$ \\ [0.5ex] \hline\hline B1 & $X_{(2,1,1)}\subset\mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1$ & 4 & 3 & 7& \cite{40}\\ B2 & $X_{(2,1)}\subset\Gr(2,4)\times\mathbb{P}^1$ & 4 & 3 & 5& \cite{40}\\ M1 & $X_{(1,1,1)}\subset\mathbb{P}^3\times\mathbb{P}^3\times\mathbb{P}^3$ & 8 & 3 & 3& \cite{IM}\\ M3 & $X_{(1,1)}\subset\Gr(2,5)\times Q_5$ & 10 & 2 & 6&\\ M4 & $X_{(1,1)}\subset\SGr(2,5)\times Q_4$ & 8 & 2 & 6 &\\ M6 & $X_{(1,1)}\subset \mathbb{S}_5\times\mathbb{P}^7$ & 16 & 2 & 7&\\ M7 & $X_{(1,1)}\subset\Gr(2,6)\times \mathbb{P}^5$ & 12 & 2 & 8 &\\ M8 & $X_{(1,1)}\subset\SGr(2,6)\times \mathbb{P}^4$ & 10 & 2 & 8 &\\ M9 & $X_{(1,1)}\subset S_2 \Gr(2,6)\times \mathbb{P}^3$ & 8 & 2 & 8&\\ M10 & $X_{(1,1)}\subset\SGr(3,6)\times \mathbb{P}^3$ & 8 & 2 & 9&\\ S2 & $X_{1}\subset \OGr(2,8)$ & 8 & 2 & 7& \cite{S2}\\ [1ex] \hline \end{tabular} \caption{Families of Fano varieties of K3 type. (As in \cite{FM}, $\Gr(k,m)$ denotes the Grassmannian of $k$-dimensional subspaces of an $m$-dimensional vector space. $\SGr(k,m)$, $S_2 \Gr(k,m)$ and $\OGr(k,m)$ denote the symplectic resp. bisymplectic resp. orthogonal Grassmannian. $\mathbb{S}_5$ denotes a connected component of $\OGr(5,10)$, and $Q_m$ is an $m$-dimensional smooth quadric.)} \label{table:1} \end{table} As a consequence of our main result, the Chow ring of these Fano varieties behaves like the Chow ring of a K3 surface: \begin{nonumberingc}[=Corollary \ref{cor}] Let $X\subset U$ be the inclusion of a Fano variety $X$ in its ambient space $U$, where $X,U$ are as in Table \ref{table:1}. 
Let $\dim X=2d$. Let $R^\ast(X)\subset A^\ast(X)$ be the $\mathbb{Q}$-subalgebra \[ R^\ast(X):=\Bigl\langle A^1(X), A^2(X), \ldots, A^d(X), c_j(X),\ima\bigl(A^\ast(U)\to A^\ast(X)\bigr)\Bigr\rangle\ \ \ \subset A^\ast(X)\ .\] Then $R^\ast(X)$ injects into cohomology under the cycle class map. \end{nonumberingc} We end this introduction with a challenge. Fatighenti--Mongardi have constructed some more Fano varieties of K3 type for which it would be nice to settle Conjecture \ref{conj} (in particular the families labelled M13 and S1 in \cite{FM}, for which I have not been able to check condition (c3) or (c3$^\prime$) of the general criterion Proposition \ref{crit}). Additionally, the following are some Fano varieties of K3 type in the literature for which Conjecture \ref{conj} is still open, and for which the methods of the present paper are not sufficiently strong: K\"uchle fourfolds of type $c5$, Pl\"ucker hyperplane sections of $\Gr(3,10)$, intersections of $\Gr(2,8)$ with 4 Pl\"ucker hyperplanes, Gushel--Mukai fourfolds and sixfolds. It would be interesting to devise new methods to treat these families. \vskip0.6cm \begin{convention} In this article, the word {\sl variety\/} will refer to a reduced irreducible scheme of finite type over $\mathbb{C}$. A {\sl subvariety\/} is a (possibly reducible) reduced subscheme which is equidimensional. {\bf All Chow groups will be with rational coefficients}: we denote by $A_j(Y)$ the Chow group of $j$-dimensional cycles on $Y$ with $\mathbb{Q}$-coefficients; for $Y$ smooth of dimension $n$ the notations $A_j(Y)$ and $A^{n-j}(Y)$ are used interchangeably. The notation $A^j_{hom}(Y)$ will be used to indicate the subgroup of homologically trivial cycles. The contravariant category of Chow motives (i.e., pure motives with respect to rational equivalence as in \cite{Sc}, \cite{MNP}) will be denoted $\mathcal M_{\rm rat}$.
\end{convention} \section{Preliminaries} \subsection{MCK decomposition} \label{ss:mck} \begin{definition}[Murre \cite{Mur}] Let $X$ be a smooth projective variety of dimension $n$. We say that $X$ has a {\em CK decomposition\/} if there exists a decomposition of the diagonal \[ \Delta_X= \pi^0_X+ \pi^1_X+\cdots +\pi_X^{2n}\ \ \ \hbox{in}\ A^n(X\times X)\ ,\] such that the $\pi^i_X$ are mutually orthogonal idempotents and $(\pi_X^i)_\ast H^\ast(X,\mathbb{Q})= H^i(X,\mathbb{Q})$. (NB: ``CK decomposition'' is shorthand for ``Chow--K\"unneth decomposition''.) \end{definition} \begin{remark} The existence of a CK decomposition for any smooth projective variety is part of Murre's conjectures \cite{Mur}, \cite{J4}. \end{remark} \begin{definition}[Shen--Vial \cite{SV}] Let $X$ be a smooth projective variety of dimension $n$. Let $\Delta_X^{sm}\in A^{2n}(X\times X\times X)$ be the class of the small diagonal \[ \Delta_X^{sm}:=\bigl\{ (x,x,x)\ \vert\ x\in X\bigr\}\ \subset\ X\times X\times X\ .\] An {\em MCK decomposition\/} is a CK decomposition $\{\pi_X^i\}$ of $X$ that is {\em multiplicative\/}, i.e. it satisfies \[ \pi_X^k\circ \Delta_X^{sm}\circ (\pi_X^i\times \pi_X^j)=0\ \ \ \hbox{in}\ A^{2n}(X\times X\times X)\ \ \ \hbox{for\ all\ }i+j\not=k\ .\] (NB: ``MCK decomposition'' is shorthand for ``multiplicative Chow--K\"unneth decomposition''.) \end{definition} \begin{remark} The small diagonal (seen as a correspondence from $X\times X$ to $X$) induces the {\em multiplication morphism\/} \[ \Delta_X^{sm}\colon\ \ h(X)\otimes h(X)\ \to\ h(X)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\] Let us assume $X$ has a CK decomposition \[ h(X)=\bigoplus_{i=0}^{2n} h^i(X)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\] By definition, this decomposition is multiplicative if for any $i,j$ the composition \[ h^i(X)\otimes h^j(X)\ \to\ h(X)\otimes h(X)\ \xrightarrow{\Delta_X^{sm}}\ h(X)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\] factors through $h^{i+j}(X)$. 
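As a concrete instance: for a K3 surface $S$, the projectors \[ \pi^0_S:= o\times S\ ,\ \ \ \pi^4_S:= S\times o\ ,\ \ \ \pi^2_S:=\Delta_S-\pi^0_S-\pi^4_S \] (where $o\in A^2(S)$ denotes the Beauville--Voisin class of \cite{BV}) form an MCK decomposition \cite{SV}; in this case, multiplicativity encodes the fact that the intersection of any two divisor classes on $S$ lies in $\mathbb{Q}[o]$.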
If $X$ has an MCK decomposition, then setting \[ A^i_{(j)}(X):= (\pi_X^{2i-j})_\ast A^i(X) \ ,\] one obtains a bigraded ring structure on the Chow ring: that is, the intersection product sends $A^i_{(j)}(X)\otimes A^{i^\prime}_{(j^\prime)}(X) $ to $A^{i+i^\prime}_{(j+j^\prime)}(X)$. It is expected that for any $X$ with an MCK decomposition, one has \[ A^i_{(j)}(X)\stackrel{??}{=}0\ \ \ \hbox{for}\ j<0\ ,\ \ \ A^i_{(0)}(X)\cap A^i_{hom}(X)\stackrel{??}{=}0\ ;\] this is related to Murre's conjectures B and D, that have been formulated for any CK decomposition \cite{Mur}. The property of having an MCK decomposition is restrictive, and is closely related to Beauville's ``splitting property'' conjecture \cite{Beau3}. To give an idea: hyperelliptic curves have an MCK decomposition \cite[Example 8.16]{SV}, but the very general curve of genus $\ge 3$ does not have an MCK decomposition \cite[Example 2.3]{FLV2}. As for surfaces: a smooth quartic in $\mathbb{P}^3$ has an MCK decomposition, but a very general surface of degree $ \ge 7$ in $\mathbb{P}^3$ should not have an MCK decomposition \cite[Proposition 3.4]{FLV2}. For a more detailed discussion, and examples of varieties with an MCK decomposition, we refer to \cite[Section 8]{SV}, as well as \cite{V6}, \cite{SV2}, \cite{FTV}, \cite{37}, \cite{39}, \cite{40}, \cite{S2}, \cite{46}, \cite{38}, \cite{FLV2}. \end{remark} \subsection{The Franchetta property} \label{ss:fr} \begin{definition} Let $\mathcal X\to B$ be a smooth projective morphism, where $\mathcal X, B$ are smooth quasi-projective varieties. We say that $\mathcal X\to B$ has the {\em Franchetta property in codimension $j$\/} if the following holds: for every $\Gamma\in A^j(\mathcal X)$ such that the restriction $\Gamma\vert_{X_b}$ is homologically trivial for the very general $b\in B$, the restriction $\Gamma\vert_{X_b}$ is zero in $A^j(X_b)$ for all $b\in B$. 
We say that $\mathcal X\to B$ has the {\em Franchetta property\/} if $\mathcal X\to B$ has the Franchetta property in codimension $j$ for all $j$. \end{definition} This property is studied in \cite{PSY}, \cite{BL}, \cite{FLV}, \cite{FLV3}. \begin{definition} Given a family $\mathcal X\to B$ as above, with $X:=X_b$ a fiber, we write \[ GDA^j_B(X):=\ima\Bigl( A^j(\mathcal X)\to A^j(X)\Bigr) \] for the subgroup of {\em generically defined cycles}. In a context where it is clear to which family we are referring, the index $B$ will often be suppressed from the notation. \end{definition} With this notation, the Franchetta property amounts to saying that $GDA^\ast_B(X)$ injects into cohomology, under the cycle class map. \subsection{Cayley's trick and motives} \label{ss:cay} \begin{theorem}[Jiang \cite{Ji}]\label{ji} Let $ E\to U$ be a vector bundle of rank $r\ge 2$ over a smooth projective variety $U$, and let $S:=s^{-1}(0)\subset U$ be the zero locus of a regular section $s\in H^0(U,E)$ such that $S$ is smooth of dimension $\dim U-\rank E$. Let $X:=w^{-1}(0)\subset \mathbb{P}(E)$ be the zero locus of the regular section $w\in H^0(\mathbb{P}(E),\mathcal O_{\mathbb{P}(E)}(1))$ that corresponds to $s$ under the natural isomorphism $H^0(U,E)\cong H^0(\mathbb{P}(E),\mathcal O_{\mathbb{P}(E)}(1))$, and assume $X$ is smooth. There is an isomorphism of Chow motives \[ h(X)\cong h(S)(1-r)\oplus \bigoplus_{i=0}^{r-2} h(U)(-i)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\] \end{theorem} \begin{proof} This is \cite[Corollary 3.2]{Ji}, which more precisely gives an isomorphism of {\em integral\/} Chow motives. For later use, we now give some details about the isomorphism as constructed in loc. cit. Let \[ \Gamma:= X\times_U S\ \ \subset\ X\times S \] (this is equal to $\mathbb{P}(\mathcal N_i)=\mathcal H_s\times_X Z$ in the notation of loc. cit.). Let \[ \Pi_i\ \ \in\ A^\ast(X\times U)\ \ \ \ (i=0, \ldots, r-2) \] be correspondences inducing the maps $ (\pi_i)_\ast$ of loc. cit., i.e. 
\[ (\Pi_i)_\ast= (\pi_i)_\ast := (q_{i+1})_\ast \iota_\ast\colon\ \ A^j(X)\ \to\ A^{j-i}(U)\ ,\] where $\iota\colon X\hookrightarrow\mathbb{P}(E)$ is the inclusion morphism, and the $(q_{i+1})_\ast\colon A_\ast(\mathbb{P}(E))\to A_\ast(U)$ are defined in loc. cit. in terms of the projective bundle formula for $q\colon \mathbb{P}(E)\to U$. As indicated in \cite[Corollary 3.2]{Ji} (cf. also \cite[text preceding Corollary 3.2]{Ji}), there is an isomorphism \[ \Bigl( \Gamma, \Pi_0,\Pi_1,\ldots, \Pi_{r-2}\Bigr)\colon\ \ h(X)\ \xrightarrow{\cong}\ h(S)(1-r)\oplus \bigoplus_{i=0}^{r-2} h(U)(-i)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\] \end{proof} \begin{remark} In the set-up of Theorem \ref{ji}, a cohomological relation between $X$ and $S$ was established in \cite[Prop. 4.3]{Ko} (cf. also \cite[section 3.7]{IM0}, as well as \cite[Proposition 46]{BFM} for a generalization). A relation on the level of derived categories was established in \cite[Theorem 2.10]{Or} (cf. also \cite[Theorem 2.4]{KKLL} and \cite[Proposition 47]{BFM}). \end{remark} We now make the natural observation that the isomorphism of Theorem \ref{ji} behaves well with respect to families, in the following sense: \begin{notation} Let $X, S, U$ and $E\to U$ be as in Theorem \ref{ji}. Let $B\subset\mathbb{P} H^0(\mathbb{P}(E),\mathcal O_{\mathbb{P}(E)}(1))$ be the Zariski open such that both $X:=X_b\subset\mathbb{P}(E)$ and $S:=S_b\subset U$ are smooth of the expected dimension. Let \[ \mathcal X\to B\ ,\ \ \ \mathcal S\to B \] denote the universal families. \end{notation} \begin{proposition}\label{ji2} Let $X, S, U$ be as in Theorem \ref{ji}. Assume $U$ has trivial Chow groups. For any $m\in\mathbb{N}$, there are injections \[ GDA^j(X^m)\ \hookrightarrow\ GDA^{j+m-mr}(S^m)\oplus \bigoplus GDA^\ast(S^{m-1})\oplus \cdots \oplus \mathbb{Q}^s\ .\] \end{proposition} \begin{proof} (NB: we will not really need this proposition below, but we include it because it makes some arguments easier, cf. footnote 1 below.) 
Let us first do the case $m=1$. The isomorphism of Theorem \ref{ji} is {\em generically defined\/}, i.e. there exist relative correspondences $\Gamma_B,\Pi_i^B$ fitting into a commutative diagram \begin{equation}\label{dia} \begin{array}[c]{ccc} A^j(\mathcal X) & \xrightarrow{\bigl( (\Gamma_B)_\ast,(\Pi_0^B)_\ast,\ldots,(\Pi^B_{r-2})_\ast\bigr)} & A^{j+1-r}(\mathcal S)\oplus \bigoplus_{i=0}^{r-2} A^{j-i}(U\times B)\\ &&\\ \downarrow&&\downarrow\\ &&\\ A^j(X) & \xrightarrow{\bigl( \Gamma_\ast,(\Pi_0)_\ast,\ldots,(\Pi_{r-2})_\ast\bigr)} & \ A^{j+1-r}(S)\oplus \bigoplus_{i=0}^{r-2} A^{j-i}(U),\\ \end{array}\end{equation} where vertical arrows are restrictions to a fiber, and the lower horizontal arrow is the isomorphism of Theorem \ref{ji}. Indeed, $\Gamma_B$ can be defined as \[ \Gamma_B:= \mathcal X\times_{U\times B}\mathcal S\ \ \subset\ \mathcal X\times_B \mathcal S\ .\] The $\Pi_i$ are also generically defined (just because the graph of the embedding $\iota\colon X\hookrightarrow \mathbb{P}(E)$ is generically defined). This gives relative correspondences $\Gamma_B, \Pi_i^B$ over $B$ such that the restriction to a fiber over $b\in B$ gives back the correspondences $\Gamma,\Pi_i$ of Theorem \ref{ji}. The fact that this makes diagram \eqref{dia} commute is \cite[Lemma 8.1.6]{MNP}. 
The commutative diagram \eqref{dia} implies that there is an injective map \begin{equation}\label{1} GDA^j(X)\ \hookrightarrow\ GDA^{j+1-r}(S)\oplus \bigoplus A^\ast(U) = GDA^{j+1-r}(S)\oplus \mathbb{Q}^s\ .\end{equation} The argument for $m>1$ is similar: the isomorphism of motives of Theorem \ref{ji}, combined with the fact that $U$ has trivial Chow groups (and so $h(U)\cong \oplus \mathds{1}(\ast)$), induces an isomorphism of Chow groups \begin{equation}\label{2} A^j(X^m)\ \xrightarrow{\cong}\ A^{j+m-mr}(S^m)\oplus \bigoplus A^\ast(S^{m-1})\oplus \cdots \oplus \mathbb{Q}^s\ .\end{equation} Here the map from left to right is given by various combinations of the correspondences $\Gamma$ and $\Pi_i$. As we have seen, these correspondences are generically defined, and so their products are also generically defined. It follows as above that the map \eqref{2} preserves generically defined cycles. \end{proof} \subsection{A Franchetta-type result} \begin{proposition}\label{spread} Let $Y$ be a smooth projective variety with trivial Chow groups (i.e. $A^\ast_{hom}(Y)=0$). Let $L_1,\ldots,L_r\to Y$ be very ample line bundles, and let $\mathcal X\to B$ be the universal family of smooth complete intersections of type $X=Y\cap H_1\cap\cdots\cap H_r$, where $H_j\in\vert L_j\vert$. Assume the fibers $X$ have $H^{\dim X}_{tr}(X,\mathbb{Q})\not=0$. There is an inclusion \[ \ker \Bigl( GDA^{\dim X}_B(X\times X)\to H^{2\dim X}(X\times X,\mathbb{Q})\Bigr)\ \ \subset\ \Bigl\langle (p_1)^\ast GDA^\ast_B(X), (p_2)^\ast GDA^\ast_B(X) \Bigr\rangle\ .\] \end{proposition} \begin{proof} This is essentially equivalent to Voisin's ``spread'' result \cite[Proposition 1.6]{V1} (cf. also \cite[Proposition 5.1]{LNP} for a reformulation). For completeness, we include a quick proof. 
Let $\bar{B}:=\mathbb{P} H^0(Y,L_1\oplus\cdots\oplus L_r)$ (so that $B\subset\bar{B}$ is a Zariski open), and let us consider the projection \[ \pi\colon\ \ \mathcal X\times_{\bar{B}} \mathcal X\ \to\ Y\times Y\ .\] Using the very ampleness assumption, one finds that $\pi$ is a $\mathbb{P}^s$-bundle over $(Y\times Y)\setminus \Delta_Y$, and a $\mathbb{P}^t$-bundle over $\Delta_Y$. That is, $\pi$ is what is termed a {\em stratified projective bundle\/} in \cite{FLV}. As such, \cite[Proposition 5.2]{FLV} implies the equality \begin{equation}\label{stra} GDA^\ast_B(X\times X)= \ima\Bigl( A^\ast(Y\times Y)\to A^\ast(X\times X)\Bigr) + \Delta_\ast GDA_B^\ast(X)\ ,\end{equation} where $\Delta\colon X\to X\times X$ is the inclusion along the diagonal. Since $Y$ has trivial Chow groups, one has $A^\ast(Y\times Y)\cong A^\ast(Y)\otimes A^\ast(Y)$. Base-point freeness of the $L_j$ implies $\mathcal X\to Y$ has the structure of a projective bundle; it is then readily seen (by a direct argument or by simply applying once more \cite[Proposition 5.2]{FLV}) that \[ GDA^\ast_B(X)=\ima\bigl( A^\ast(Y)\to A^\ast(X)\bigr)\ .\] The equality \eqref{stra} thus reduces to \[ GDA^\ast_B(X\times X)=\Bigl\langle (p_1)^\ast GDA^\ast_B(X), (p_2)^\ast GDA^\ast_B(X), \Delta_X\Bigr\rangle\ \] (where $p_1, p_2$ denote the projection from $X\times X$ to first resp. second factor). The assumption that $X$ has non-zero transcendental cohomology implies that the class of $\Delta_X$ is not decomposable in cohomology. It follows that \[ \begin{split} \ima \Bigl( GDA^{\dim X}_B(X\times X)\to H^{2\dim X}(X\times X,\mathbb{Q})\Bigr) =&\\ \ima\Bigl( \Dec^{\dim X}(X\times X)\to H^{2\dim X}(X\times X,\mathbb{Q})\Bigr)& \oplus \mathbb{Q}[\Delta_X]\ ,\\ \end{split}\] where we use the shorthand \[ \Dec^j(X\times X):= \Bigl\langle (p_1)^\ast GDA^\ast_B(X), (p_2)^\ast GDA^\ast_B(X)\Bigr\rangle\cap A^j(X\times X) \ \] for the {\em decomposable cycles\/}. 
We now see that if $\Gamma\in GDA^{\dim X}(X\times X)$ is homologically trivial, then $\Gamma$ does not involve the diagonal and so $\Gamma\in \Dec^{\dim X}(X\times X)$. This proves the proposition. \end{proof} \begin{remark} Proposition \ref{spread} has the following consequence: if the family $\mathcal X\to B$ has the Franchetta property, then $\mathcal X\times_B \mathcal X\to B$ has the Franchetta property in codimension $\dim X$. \end{remark} \subsection{HPD and motives} \begin{theorem}\label{hpd} Let $Y_1, Y_2\subset\mathbb{P}(V)$ be smooth projective varieties with trivial Chow groups (i.e. $A^\ast_{hom}(Y_j)=0$), and let $Y_2^\vee\subset\mathbb{P}(V^\vee)$ be the HPD dual of $Y_2$. Let $H\subset \mathbb{P}(V)\times\mathbb{P}(V)$ be a $(1,1)$-divisor, and let $f_H\colon \mathbb{P}(V)\to\mathbb{P}(V^\vee)$ be the morphism defined by $H$. Assume that the varieties \[ \begin{split} X&:= (Y_1\times Y_2)\cap H\ ,\\ S&:=Y_1\cap (f_H)^{-1}(Y_2^\vee)\\ \end{split}\] are smooth and dimensionally transverse. Assume moreover that the Hodge conjecture holds for $S$, that $H^j(S,\mathbb{Q})$ is algebraic for $j\not=\dim S$ and that $H^{\dim S}(S,\mathbb{Q})$ is not completely algebraic. Then there is a split injection of Chow motives \[ h(X)\ \hookrightarrow\ h(S)(-m)\oplus \bigoplus\mathds{1}(\ast)\ \ \ \hbox{in}\ \mathcal M_{\rm rat} \ ,\] where $m:={1\over 2}(\dim X-\dim S)$. (In particular, one has the vanishing \[ A^j_{hom}(X) =0\ \ \ \forall\ j > {1\over 2}(\dim X+\dim S)\ .)\] \end{theorem} \begin{proof} Using the HPD formalism, it is proven in \cite[Proposition 2.4]{FM} that there exists a semi-orthogonal decomposition \begin{equation}\label{so} D^b(X)=\bigl\langle D^b(S), A_1,\ldots, A_s\bigr\rangle\ ,\end{equation} where the $A_j$ are some exceptional objects. Using Hochschild homology and the Hochschild--Kostant--Rosenberg isomorphism (cf. 
for instance \cite[Sections 1.7 and 2.5]{Kuz}), this implies that there exist correspondences $\Phi^\prime$ and $\Xi^\prime$ such that \[ H^{\ast}_{tr}(X,\mathbb{Q})\ \xrightarrow{(\Phi^\prime)_\ast}\ H^{\ast}_{tr}(S,\mathbb{Q})\ \xrightarrow{(\Xi^\prime)_\ast}\ H^{\ast}_{tr}(X,\mathbb{Q}) \] is the identity. (Here, $H^\ast_{tr}( ,\mathbb{Q})$ denotes the orthogonal complement of the algebraic part of cohomology.) By assumption $H^\ast_{tr}(S,\mathbb{Q})=H^{\dim S}_{tr}(S,\mathbb{Q})$, and by weak Lefschetz $H^\ast_{tr}(X,\mathbb{Q})= H^{\dim X}_{tr}(X,\mathbb{Q})$, and so we actually have that \[ H^{\dim X}_{tr}(X,\mathbb{Q})\ \xrightarrow{(\Phi^\prime)_\ast}\ H^{\dim S}_{tr}(S,\mathbb{Q})\ \xrightarrow{(\Xi^\prime)_\ast}\ H^{\dim X}_{tr}(X,\mathbb{Q}) \] is the identity. Again using Hochschild homology and the Hochschild--Kostant--Rosenberg isomorphism, we see that the Hodge conjecture for $S$, plus the decomposition \eqref{so}, implies the Hodge conjecture for $X$. This means that we can find correspondences $\Phi$ and $\Xi$ such that \[ H^{\ast}_{}(X,\mathbb{Q})\ \xrightarrow{\Phi_\ast}\ H^{\dim S}_{}(S,\mathbb{Q}) \oplus \bigoplus \mathbb{Q}(-j)\ \xrightarrow{\Xi_\ast}\ H^{\ast}_{}(X,\mathbb{Q}) \] is the identity, i.e. the cycle \[ \Delta_X - \Xi\circ \Phi\ \ \ \in\ A^{\dim X}(X\times X) \] is homologically trivial. We now consider things family-wise, i.e. we construct universal families $\mathcal X\to B$ and $\mathcal S\to B$, where \[ B\ \ \subset\ \mathbb{P} H^0\bigl(Y_1\times Y_2,\mathcal O_{Y_1\times Y_2}(1,1)\bigr) \] parametrizes all divisors $H$ such that both $X:=X_H$ and $S:=S_H$ are smooth and dimensionally transverse. Applying Voisin's Hilbert schemes argument \cite[Proposition 3.7]{V0} (cf. 
also \cite[Proposition 2.11]{Lacub}) to this set-up, we may assume that the correspondences $\Phi$ and $\Xi$ are generically defined (with respect to $B$), and so in particular \[ \Delta_X - \Xi\circ \Phi\ \ \ \in\ GDA^{\dim X}(X\times X) \ .\] We observe that $H^{\dim X}_{tr}(X,\mathbb{Q})\cong H^{\dim S}_{tr}(S,\mathbb{Q})$ (this follows from the decomposition \eqref{so}), and so $H^{\dim X}_{tr}(X,\mathbb{Q})\not=0$; all conditions of Proposition \ref{spread} are fulfilled. Applying Proposition \ref{spread} to the cycle $ \Delta_X - \Xi\circ \Phi$, we find that a modification of this cycle vanishes: \[ \Delta_X - \Xi\circ \Phi -\gamma=0\ \ \ \hbox{in}\ A^{\dim X}(X\times X) \ ,\] where \[ \gamma\in \Bigl\langle (p_1)^\ast GDA^\ast(X), (p_2)^\ast GDA^\ast(X)\Bigr\rangle\] is a decomposable cycle. This translates into the fact that (up to adding some trivial motives $\mathds{1}(\ast)$ and modifying the correspondences $\Phi$ and $\Xi$) the composition \[ h(X) \ \xrightarrow{\Phi}\ h^{}_{}(S)(-m) \oplus \bigoplus \mathds{1}(\ast)\ \xrightarrow{\Xi}\ h(X)\ \ \ \hbox{in}\ \mathcal M_{\rm rat} \] is the identity, which proves the theorem. (Finally, the statement in parentheses is a straightforward consequence of the injection of motives: taking Chow groups, one obtains an injection \[ A^j_{hom}(X)\ \hookrightarrow\ A^{j-m}_{hom}(S)\ .\] But the group on the right vanishes for $j-m>\dim S$, which means $j> {1\over 2}(\dim X+\dim S)$.) \end{proof} \begin{example} Here is a sample application of Theorem \ref{hpd}. Let $Y_1=Y_2=\Gr(2,5)\subset\mathbb{P}^9$. Then $Y_2^\vee=\Gr(2,5)\subset(\mathbb{P}^9)^\vee$ and $S:=Y_1\cap (f_H)^{-1}(Y_2^\vee)$ is 3-dimensional (for $H$ sufficiently general). We consider the 11-dimensional variety \[ X:= \bigl(\Gr(2,5)\times \Gr(2,5)\bigr)\cap H\ \ \ \subset \mathbb{P}^9\times \mathbb{P}^9 \ ,\] where $H$ is a general $(1,1)$-divisor. This $X$ is a Fano variety of Calabi--Yau type, considered in \cite[Section 3.3]{IM0}. 
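To make the bound explicit in this example (spelling out the parenthetical statement of Theorem \ref{hpd}): here \[ m={1\over 2}(\dim X-\dim S)={1\over 2}(11-3)=4\ ,\] and taking Chow groups in the split injection of motives yields an injection $A^j_{hom}(X)\hookrightarrow A^{j-4}_{hom}(S)$, whose target vanishes as soon as $j-4>\dim S=3$.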
Theorem \ref{hpd} implies that one has \[ A^j_{hom}(X)=0\ \ \ \forall\ j> 7\ ,\] i.e. $X$ has $\hbox{Niveau}(A^\ast(X))\le 3$ in the sense of \cite{moi}. \end{example} \section{Main result} This section contains the proof of our main result, which is as follows: \begin{theorem}\label{main} Let $X\subset U$ be the inclusion of a Fano variety $X$ in its ambient space $U$, where $X,U$ are as in Table \ref{table:1}. Then $X$ has an MCK decomposition. The Chern classes $c_j(X)$, and the image $\ima\bigl( A^\ast(U)\to A^\ast(X)\bigr)$, lie in $A^\ast_{(0)}(X)$. \end{theorem} \subsection{A criterion} To prove Theorem \ref{main}, we will use the following general criterion: \begin{proposition}\label{crit} Let $\mathcal X\to B$ be a family of smooth projective varieties. Assume the following conditions: \noindent (c0) each fiber $X$ has dimension $2d \ge 8$; \noindent (c1) each fiber $X$ has a self-dual CK decomposition $\{\pi^\ast_X\}$ which is generically defined (with respect to $B$), and $h^j(X)\cong\oplus \mathds{1}(\ast)$ for $j\not=2d$; \noindent (c2) there exists a family of surfaces $\mathcal S\to B^\circ$ where $B^\circ\subset B$ is a countable intersection of non-empty Zariski opens, and for each $b\in B^\circ$ there is a split injection of motives \[ h(X_b) \ \hookrightarrow\ h(S_b)(1-d)\oplus \bigoplus \mathds{1}(\ast)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\] \noindent (c3) the family $\mathcal S\times_{B^\circ} \mathcal S\to B^\circ$ has the Franchetta property. Then for each fiber $X$, $\{\pi^\ast_X\}$ is an MCK decomposition, and $GDA^\ast(X)\subset A^\ast_{(0)}(X)$. \noindent Moreover, condition (c3) may be replaced by the following: \noindent (c3$^\prime$) $\mathcal S\to B^\circ$ is a family of K3 surfaces, which is the universal family of smooth sections of a direct sum of very ample line bundles on some smooth projective ambient space $V$ with trivial Chow groups (i.e. $A^\ast_{hom}(V)=0$), and $\mathcal S\to B^\circ$ has the Franchetta property. 
\end{proposition} \begin{proof} Using Voisin's Hilbert schemes argument \cite[Proposition 3.7]{V0} (cf. also \cite[Proposition 2.11]{Lacub}), one may assume that the split injection of (c2) is generically defined (with respect to $B^\circ$). This means that there exists a relative correspondence $\Phi$ fitting into a commutative diagram \[ \begin{array}[c]{ccc} A^j(\mathcal X) & \xrightarrow{ \Phi_\ast} & A^{j+1-d}(\mathcal S)\oplus \bigoplus A^\ast(B^\circ)\\ &&\\ \downarrow&&\downarrow\\ &&\\ A^j(X) & \xrightarrow{ (\Phi\vert_b)_\ast } & \ A^{j+1-d}(S)\oplus \mathbb{Q}^s ,\\ \end{array}\] where vertical arrows are restrictions to a fiber, and the lower horizontal arrow is induced by the injection of (c2). The same then applies to $X\times X$, i.e. there is a commutative diagram \[ \begin{array}[c]{ccc} A^j(\mathcal X\times_{B^\circ} \mathcal X) & \xrightarrow{} & A^{j+2-2d}(\mathcal S\times_{B^\circ} \mathcal S)\oplus \bigoplus A^{\ast}(\mathcal S) \oplus \bigoplus A^\ast(B^\circ) \\ &&\\ \downarrow&&\downarrow\\ &&\\ A^j(X\times X) & \hookrightarrow& \ A^{j+2-2d}(S\times S)\oplus \bigoplus A^{\ast}(S) \oplus \bigoplus \mathbb{Q}^s,\\ \end{array}\] where the lower horizontal arrow is split injective thanks to (c2). That is, there is an injection \[ GDA^j_{B^\circ}(X\times X)\ \hookrightarrow\ GDA^{j+2-2d}_{B^\circ}(S\times S)\oplus \bigoplus GDA^\ast_{B^\circ}(S)\oplus \mathbb{Q}^s\ .\] It then follows from (c3) that $\mathcal X\times_{B^\circ} \mathcal X\to B^\circ$ has the Franchetta property.\footnote{(NB: in practice, one can often avoid recourse to the Hilbert scheme argument in this step. For instance, in the setting of Proposition \ref{p1} below, the split injection of (c2) is generically defined by construction, and one can apply Proposition \ref{ji2} to conclude that $\mathcal X\times_{B^\circ} \mathcal X\to B^\circ$ has the Franchetta property.)} Let us now ascertain that the CK decomposition $\{\pi^\ast_X\}$ is multiplicative. 
What we need to check is that for each $X=X_b$ one has \begin{equation}\label{this} \pi_X^k\circ \Delta_X^{sm}\circ (\pi_X^i\times \pi_X^j)=0\ \ \ \hbox{in}\ A^{4d}(X\times X\times X)\ \ \ \hbox{for\ all\ }i+j\not=k\ .\end{equation} A standard spread lemma (cf. \cite[Lemma 3.2]{Vo}) shows that it suffices to prove this for all $b\in B^\circ$, so we will henceforth assume that $X=X_b$ with $b\in B^\circ$. We note that the cycle in \eqref{this} is generically defined, and homologically trivial. Let us assume that among the three integers $(i,j,k)$, at least one is different from $2d$. Using the hypothesis $h^j(X)=\oplus\mathds{1}(\ast)$ for $j\not=2d$, we find there is a (generically defined) split injection \[ ( \pi^{4d-i}_X\times \pi^{4d-j}_X\times\pi^k_X)_\ast A^{4d}(X\times X\times X)\ \hookrightarrow\ A^\ast(X\times X)\ .\] Since \[ \pi_X^k\circ \Delta_X^{sm}\circ (\pi_X^i\times \pi_X^j) = ({}^t \pi^i_X\times{}^t \pi^j_X\times\pi^k_X)_\ast (\Delta^{sm}_X) = ( \pi^{4d-i}_X\times \pi^{4d-j}_X\times\pi^k_X)_\ast (\Delta^{sm}_X) \] (where the first equality is an instance of Lieberman's lemma), the required vanishing \eqref{this} now follows from the Franchetta property for $\mathcal X\times_{B^\circ} \mathcal X\to B^\circ$. It remains to treat the case $i=j=k=2d$. Using the split injection of motives of condition (c2) and taking the tensor product, we find there is a split injection of Chow groups \[ A^j(X\times X\times X)\ \hookrightarrow\ A^{j+3-3d}(S^3)\oplus \bigoplus A^\ast(S^2)\oplus \bigoplus A^\ast(S)\oplus \mathbb{Q}^s\ .\] Moreover (just as we have seen above for $X^2$), this injection respects generically defined cycles, i.e. 
there is an injection \[ GDA^j(X\times X\times X)\ \hookrightarrow\ GDA^{j+3-3d}(S^3)\oplus \bigoplus GDA^\ast(S^2)\oplus \bigoplus GDA^\ast(S)\oplus \mathbb{Q}^s\ .\] In particular, taking $j=4d$ we find an injection \[ GDA^{4d}(X\times X\times X)\ \hookrightarrow\ GDA^{d+3}(S^3)\oplus \bigoplus GDA^\ast(S^2)\oplus \bigoplus GDA^\ast(S)\oplus \mathbb{Q}^s\ .\] By assumption, $d\ge 4$ and so the summand $GDA^{d+3}(S^3)$ vanishes for dimension reasons. The required vanishing \eqref{this} then follows from the Franchetta property for $\mathcal S\times_{B^\circ} \mathcal S$. This proves that $\{\pi^\ast_X\}$ is MCK. To see that $GDA^\ast(X)\subset A^\ast_{(0)}(X)$, it suffices to note that \[ (\pi^k_X)_\ast GDA^j(X)\ \ \ \ (k\not=2j) \] is generically defined, and homologically trivial. The Franchetta property for $\mathcal X\to B$ (which is implied by the Franchetta property for $\mathcal X\times_{B^\circ} \mathcal X$) then implies the vanishing \[ (\pi^k_X)_\ast GDA^j(X)=0\ \ \ \ (k\not=2j)\ , \] and so $GDA^j(X)\subset (\pi^{2j}_X)_\ast A^j(X)=: A^j_{(0)}(X)$. Let us now proceed to show that condition (c3$^\prime$) implies condition (c3). The hypotheses of (c3$^\prime$) imply that $B^\circ$ is a Zariski open in some $\bar{B}:=\mathbb{P} H^0(V,\oplus_{j=1}^s L_j)$ which is isomorphic to $\mathbb{P}^r$. The very ampleness assumption implies that \[ \pi\colon\ \ \mathcal S\times_{\bar{B}} \mathcal S\ \to\ V\times V \] is a $\mathbb{P}^{r-2s}$-bundle over $(V\times V)\setminus \Delta_V$ and a $\mathbb{P}^{r-s}$-bundle over $\Delta_V$. That is, $\pi$ is a {\em stratified projective bundle\/} in the sense of \cite{FLV}. As such, \cite[Proposition 5.2]{FLV} implies the equality \[ GDA^\ast(S\times S)= \ima\Bigl( A^\ast(V\times V)\to A^\ast(S\times S)\Bigr) + \Delta_\ast GDA^\ast(S)\ ,\] where $\Delta\colon S\to S\times S$ is the inclusion along the diagonal. Since $V$ has trivial Chow groups, one has $A^\ast(V\times V)\cong A^\ast(V)\otimes A^\ast(V)$. 
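In terms of motives, this can be seen as follows: $A^\ast_{hom}(V)=0$ implies $h(V)\cong\oplus\, \mathds{1}(\ast)$, and hence \[ h(V\times V)\cong h(V)\otimes h(V)\cong \bigoplus \mathds{1}(\ast)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ ,\] from which the K\"unneth-type decomposition of $A^\ast(V\times V)$ follows by taking Chow groups.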
Moreover, $\mathcal S\to V$ is a projective bundle and so \cite[Proposition 5.2]{FLV} gives $GDA^\ast(S)=\ima\bigl (A^\ast(V)\to A^\ast(S)\bigr)$. It follows that the above equality reduces to \begin{equation}\label{gda} GDA^\ast(S\times S)=\Bigl\langle (p_1)^\ast GDA^\ast(S), (p_2)^\ast GDA^\ast(S), \Delta_S\Bigr\rangle\ \end{equation} (where $p_1, p_2$ denote the projection from $S\times S$ to first resp. second factor). By assumption, $S$ is a K3 surface and the Franchetta property holds for $\mathcal S\to B^\circ$, which means that \[ GDA^\ast(S) = \mathbb{Q} \oplus GDA^1(S) \oplus \mathbb{Q}[o]\ ,\] where $o\in A^2(S)$ is the Beauville--Voisin class \cite{BV}. Given a divisor $D\in A^1(S)$, it is known that \[ \Delta_S\cdot (p_j)^\ast(D)=\Delta_\ast(D)= D\times o + o\times D\ \ \ \hbox{in}\ A^3(S\times S)\ \] \cite[Proposition 2.6(a)]{BV}. Also, it is known that \[ \Delta_S\cdot (p_j)^\ast(o)= \Delta_\ast(o)= o\times o\ \ \ \hbox{in}\ A^4(S\times S) \] \cite[Proposition 2.6(b)]{BV}. It follows that the right-hand side of \eqref{gda} is {\em decomposable\/} in codimension $>2$, i.e. \[ \begin{split} \Bigl\langle (p_1)^\ast GDA^\ast(S), (p_2)^\ast GDA^\ast(S), \Delta_S\Bigr\rangle \cap A^j(S\times S) &=\\\Bigl\langle (p_1)^\ast GDA^\ast(S), (p_2)^\ast GDA^\ast(S)\Bigr\rangle\cap A^j(S\times S)& \ \ \ \ \ \ \forall j\not=2\ .\\ \end{split}\] Since we know that $GDA^\ast(S)$ injects into cohomology (this is the Franchetta property for $\mathcal S\to B^\circ$), equality \eqref{gda} (plus the K\"unneth decomposition in cohomology) now implies that \[ GDA^j(S\times S)\ \to\ H^{2j}(S\times S,\mathbb{Q}) \] is injective for $j\not=2$. For the case $j=2$, it suffices to remark that $\Delta_S$ is linearly independent from the decomposable part in cohomology (for otherwise $H^{2,0}(S)$ would be zero, which is absurd). 
The injectivity of \[ GDA^2(S\times S)\ \to\ H^4(S\times S,\mathbb{Q}) \] then follows from \eqref{gda} plus the injectivity of $GDA^\ast(S)\to H^\ast(S,\mathbb{Q})$. This shows that condition (c3$^\prime$) implies condition (c3); the proposition is proven. \end{proof} \subsection{Verifying the criterion: part 1} \begin{proposition}\label{p1} The following families verify the conditions of Proposition \ref{crit}: the universal families $\mathcal X\to B$ of Fano varieties of type M1, M6, M7, M8, M9, M10. \end{proposition} \begin{proof} The existence of a generically defined CK decomposition is an easy consequence of the fact that the Fano varieties $X$ under consideration are complete intersections in an ambient space $U$ with trivial Chow groups, cf. for instance \cite[Lemma 3.6]{V0}. This takes care of conditions (c0) and (c1) of Proposition \ref{crit}. To verify condition (c2), we use Cayley's trick (Theorem \ref{ji}). The K3 surface $S$ associated to the Fano variety $X$ is a complete intersection in an ambient space $V$ as indicated in Table \ref{table:2}. Let us write $2d:=\dim X$. The ambient spaces $V$ that occur all have trivial Chow groups, and so Theorem \ref{ji} gives the split injection of motives \[ h(X)\ \hookrightarrow\ h(S)(1-d)\oplus \bigoplus\mathds{1}(\ast)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ ,\] i.e. condition (c2) is verified. 
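To illustrate how Cayley's trick applies, consider the case M7 (the other cases are analogous). Take $U=\Gr(2,6)$ and $E=\mathcal O_U(1)^{\oplus 6}$, so that $\mathbb{P}(E)\cong\Gr(2,6)\times\mathbb{P}^5$ and sections of $\mathcal O_{\mathbb{P}(E)}(1)$ correspond to $(1,1)$-divisors. A section $s\in H^0(U,E)$ cuts out the K3 surface $S=S_{1^6}\subset\Gr(2,6)$, the corresponding $(1,1)$-divisor is the Fano variety $X$, and Theorem \ref{ji} (with $r=6$) gives \[ h(X)\cong h(S)(-5)\oplus \bigoplus_{i=0}^{4} h(\Gr(2,6))(-i)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\]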
\begin{table}[h] \centering \begin{tabular}{||c c c c||} \hline ${\hbox{Label\ in\ \cite{FM}}}$ & $X$ & $\dim X$ & $ S\subset V$ \\ [0.5ex] \hline\hline M1 & $X_{(1,1,1)}\subset\mathbb{P}^3\times\mathbb{P}^3\times\mathbb{P}^3$ & 8 & $S_{1^4}\subset \mathbb{P}^3\times\mathbb{P}^3$ \\ M6 & $X_{(1,1)}\subset \mathbb{S}_5\times\mathbb{P}^7$ & 16 & $S_{1^8}\subset \mathbb{S}_5$ \\ M7 & $X_{(1,1)}\subset\Gr(2,6)\times \mathbb{P}^5$ & 12 & $S_{1^6}\subset \Gr(2,6)$ \\ M8 & $X_{(1,1)}\subset\SGr(2,6)\times \mathbb{P}^4$ & 10 & $S_{1^5}\subset \SGr(2,6)$ \\ M9 & $X_{(1,1)}\subset S_2 \Gr(2,6)\times \mathbb{P}^3$ & 8 & $S_{1^4}\subset S_2 \Gr(2,6)$ \\ M10 & $X_{(1,1)}\subset\SGr(3,6)\times \mathbb{P}^3$ & 8 & $S_{1^4}\subset \SGr(3,6)$ \\ [1ex] \hline \end{tabular} \caption{Fano varieties $X$ and their associated K3 surface $S$.} \label{table:2} \end{table} We observe that all ambient spaces $V$ in Table \ref{table:2} have trivial Chow groups. To verify condition (c3$^\prime$), it only remains to check the Franchetta property for the families $\mathcal S\to B^\circ$. In all these cases, $\mathcal S\to V$ is a projective bundle, and so (using the projective bundle formula, or lazily applying \cite[Proposition 5.2]{FLV}) we find equality \[ GDA^j_{B^\circ}(S)=\ima\bigl( A^j(V)\to A^j(S)\bigr)\ .\] Let us check that the right-hand side injects into cohomology. This is non-trivial only in codimension $j=2$. For the family M1, it suffices to observe that $A^2(\mathbb{P}^3\times\mathbb{P}^3)$ is generated by intersections of divisors, and so \[ \ima\Bigl( A^2(\mathbb{P}^3\times\mathbb{P}^3)\to A^2(S)\Bigr) = \mathbb{Q}[o] \] injects into cohomology. For the family M7, it suffices to check that the restriction of $c_2(Q)\in A^2(\Gr(2,6))$ (where $Q$ denotes the universal quotient bundle) to $S$ is proportional to $o$; this is done in \cite[Proposition 2.1]{PSY}. 
For the family M6, we may as well verify that \[ \ima \Bigl(A^2(\OGr(5,10))\to A^2(S)\Bigr)=\mathbb{Q}[o] \] (recall that $\mathbb{S}_5$ is a connected component of $\OGr(5,10)$ in its spinor embedding); this is taken care of in \cite[Proposition 2.1]{PSY}. For the families M8 and M9, since $\SGr(2,6)$ and $S_2 \Gr(2,6)$ are complete intersections (of dimension 7 resp. 6) inside $\Gr(2,6)$, restriction gives isomorphisms \[ A^2(\Gr(2,6))\ \xrightarrow{\cong}\ A^2(\SGr(2,6))\ \xrightarrow{\cong}\ A^2(S_2 \Gr(2,6))\ .\] The case M7 then guarantees that $\ima\bigl( A^2 (V)\to A^2(S)\bigr)$ is spanned by $o$. Finally, for the case M10 one observes that $A^2(\SGr(3,6))\cong\mathbb{Q}$ (this follows from \cite[Proposition 2.1]{vdG}, where $\SGr(3,6)$ is denoted $Y_3$), and so \[ \ima \Bigl(A^2(\SGr(3,6))\to A^2(S)\Bigr)=\mathbb{Q}[h^2]= \mathbb{Q}[o] \ .\] \end{proof} \begin{remark} It seems likely that the families M7, M8, M9, M10 can be related to one another via (a higher-codimension version of) the game of {\em projections\/} and {\em jumps\/} of \cite[Sections 3.3 and 3.4]{BFM}. This might simplify the above argument. \end{remark} \subsection{Verifying the criterion: part 2} \begin{proposition}\label{p2} The following families verify the conditions of Proposition \ref{crit}: the universal families $\mathcal X\to B$ of Fano varieties of type M3 and M4. \end{proposition} \begin{proof} The existence of a generically defined CK decomposition follows as above. The difference with the above is that the families M3 and M4 are {\em not\/} in the form of Cayley's trick; hence, to check condition (c2) we now apply Theorem \ref{hpd} rather than Theorem \ref{ji}. For the case M3, Theorem \ref{hpd} applies with $Y_1=\Gr(2,5)$ and $Y_2=Q_5$ a 5-dimensional quadric embedded in $\mathbb{P}^9$. Let $B^\circ\subset B$ be the open parametrizing Fano varieties $X$ of type M3 for which, in the notation of Theorem \ref{hpd}, $S$ is a smooth surface. 
For each $X=X_b$ with $b\in B^\circ$, Theorem \ref{hpd} gives an injection of motives \[ h(X)\ \hookrightarrow\ h(S)(-4) \oplus \bigoplus\mathds{1}(\ast)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\] Since quadrics are projectively self-dual, this $S$ is the intersection of $\Gr(2,5)$ with a quadric and 3 hyperplanes in $\mathbb{P}^9$; this is Mukai's model for the general K3 surface of genus 6. That the family $\mathcal S\to B^\circ$ has the Franchetta property is proven in \cite{PSY}. This takes care of conditions (c2) and (c3) of Proposition \ref{crit}. For the family M4, Theorem \ref{hpd} applies again, with $Y_1=\SGr(2,5)$ and $Y_2=Q_4$ a 4-dimensional quadric embedded in $\mathbb{P}^9$. Note that $Y_1$ is a hyperplane section of $\Gr(2,5)$ under its Pl\"ucker embedding. Again, let $B^\circ\subset B$ denote the open subset where both $X$ and $S$ are smooth and dimensionally transverse. Theorem \ref{hpd} now gives an injection of motives \[ h(X)\ \hookrightarrow\ h(S)(-3) \oplus \bigoplus\mathds{1}(\ast)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ ,\] where $S$ is again the intersection of $\Gr(2,5)$ with a quadric and 3 hyperplanes in $\mathbb{P}^9$. The family $\mathcal S\to B^\circ$ is now the family of all smooth 2-dimensional complete intersections of $\SGr(2,5)$ with a quadric and 2 hyperplanes. One has that $\mathcal S\to \SGr(2,5)$ is a projective bundle, and so (as before) \[ GDA^2(S)=\ima\Bigl( A^2(\SGr(2,5))\to A^2(S)\Bigr)\ .\] But $A^2(\Gr(2,5))\to A^2(\SGr(2,5))$ is an isomorphism (weak Lefschetz), and so $GDA^2(S)=\mathbb{Q}[o]$ as for the family M3. All conditions of Proposition \ref{crit} are verified. \end{proof} \subsection{Proof of theorem} \begin{proof}(of Theorem \ref{main}) For the families B1 and B2 the result was proven in \cite{40}. The family S2 was treated in \cite{S2}. For the remaining families, we have checked (Propositions \ref{p1} and \ref{p2}) that Proposition \ref{crit} applies, which gives a generically defined MCK decomposition.
The Chern classes $c_j(X)$, as well as the image $\ima\bigl(A^\ast(U)\to A^\ast(X)\bigr)$, are clearly generically defined, and so they are in $A^\ast_{(0)}(X)$ thanks to Proposition \ref{crit}. \end{proof} \section{A consequence} \begin{corollary}\label{cor} Let $X\subset U$ be the inclusion of a Fano variety $X$ in its ambient space $U$, where $X,U$ are as in Table \ref{table:1}. Let $\dim X=2d$. Let $R^\ast(X)\subset A^\ast(X)$ be the $\mathbb{Q}$-subalgebra \[ R^\ast(X):=\Bigl\langle A^1(X), A^2(X), \ldots, A^d(X), c_j(X),\ima\bigl(A^\ast(U)\to A^\ast(X)\bigr)\Bigr\rangle\ \ \ \subset A^\ast(X)\ .\] Then $R^\ast(X)$ injects into cohomology under the cycle class map. \end{corollary} \begin{proof} This is a formal consequence of the MCK paradigm. We know (Theorem \ref{main}) that $X$ has an MCK decomposition, and $c_j(X)$ and $\ima\bigl(A^\ast(U)\to A^\ast(X)\bigr)$ are in $A^\ast_{(0)}(X)$. Moreover, we know that \begin{equation}\label{vani} A^j_{hom}(X) =0\ \ \ \ \forall j\not=d+1 \end{equation} (indeed, the injection of motives of Proposition \ref{crit}(c2) induces an injection $A^j_{hom}(X)\hookrightarrow A^{j+1-d}(S)$ where $S$ is a K3 surface). This means that \[ A^j(X) =A^j_{(0)}(X)\ \ \ \ \forall j\not= d+1\ ,\] and so \[ R^\ast(X)\ \ \subset\ A^\ast_{(0)}(X) \ .\] It only remains to check that $A^\ast_{(0)}(X)$ injects into cohomology under the cycle class map. In view of \eqref{vani}, this reduces to checking that the cycle class map induces an injection \[ A^{d+1}_{(0)}(X)\ \ \hookrightarrow\ H^{2d+2}(X,\mathbb{Q})\ .\] By construction, the correspondence $\pi_X^{2d+2}$ is supported on a subvariety $V\times W\subset X\times X$, where $V,W\subset X$ are (possibly reducible) subvarieties of dimension $\dim V=d+1$ and $\dim W=d-1$. As in \cite{BS}, the action of $\pi^{2d+2}_X$ on $A^{d+1}(X)$ factors over $A^0(\widetilde{W})$, where $\widetilde{W}\to W$ is a resolution of singularities. 
In particular, the action of $\pi^{2d+2}_X$ on $A^{d+1}_{hom}(X)$ factors over $A^0_{hom}(\widetilde{W})=0$ and so is zero. But the action of $\pi^{2d+2}_X$ on $A^{d+1}_{(0)}(X)$ is the identity, and so \[ A^{d+1}_{(0)}(X)\cap A^{d+1}_{hom}(X)=0\ ,\] as required. \end{proof} \vskip1cm \begin{nonumberingt} Thanks to Lie Fu and Charles Vial for lots of enriching exchanges around the topics of this paper. Thanks to the referee for pertinent comments. Thanks to Kai, who is a great expert on Harry Potter trivia. \end{nonumberingt} \vskip1cm \end{document}
\begin{document} \title{Entropy rigidity of Hilbert and Riemannian metrics} \author{Thomas Barthelm\'e} \address{Department of Mathematics, Pennsylvania State University, University Park, State College, PA 16802} \email{thomas.barthelme@queensu.ca} \author{Ludovic Marquis} \address{IRMAR, Universit\'e de Rennes, Rennes, France} \email{ludovic.marquis@univ-rennes1.fr} \author{Andrew Zimmer} \address{Department of Mathematics, University of Chicago, Chicago, IL 60637.} \email{aazimmer@uchicago.edu} \date{\today} \keywords{ } \subjclass[2010]{} \begin{abstract} In this paper we provide two new characterizations of real hyperbolic $n$-space using the Poincar\'e exponent of a discrete group and the volume growth entropy. The first characterization is in the space of Riemannian metrics with Ricci curvature bounded below and generalizes a result of Ledrappier and Wang. The second is in the space of Hilbert metrics and generalizes a result of Crampon. \end{abstract} \maketitle \section{Introduction} Suppose $(X,d)$ is a proper metric space and $o \in X$ is some point. For any discrete group $\Gamma$ acting by isometries on $(X,d)$, we define the \emph{Poincar\'e}, or \emph{critical}, \emph{exponent} of $\Gamma$ as \[ \delta_{\Gamma}(X,d) := \limsup_{r \rightarrow +\infty} \frac{1}{r} \log \# \{\gamma \in \Gamma \mid d(o, \gamma \cdot o ) \leqslant r \}. \] It is straightforward to show that this quantity does not depend on the choice of $o \in X$. If $X$ has a measure $\mu$ one can also define the \emph{volume growth entropy} as \begin{equation*} h_{vol}(X,d,\mu) := \limsup_{r \rightarrow +\infty} \frac{1}{r} \log \mu \left( B_r(o) \right) \end{equation*} where $B_r(o)$ is the open ball of radius $r$ about $o$. This quantity also does not depend on $o \in X$. 
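As a sanity check, both invariants can be computed explicitly for real hyperbolic space; this classical computation is recalled here for convenience and is not part of the paper's argument.

```latex
% Classical computation (recalled for convenience): the volume growth entropy
% of real hyperbolic n-space. In geodesic polar coordinates about o, with
% \omega_{n-1} the volume of the unit (n-1)-sphere,
\mu\bigl(B_r(o)\bigr) \;=\; \omega_{n-1}\int_0^r \sinh^{n-1}(s)\, ds
 \;\sim\; C\, e^{(n-1)r} \qquad (r \rightarrow +\infty),
% and therefore
h_{vol}\bigl(\Hb^n, d_{\Hb^n}, \mu\bigr)
 \;=\; \limsup_{r \rightarrow +\infty} \frac{1}{r} \log \mu\bigl(B_r(o)\bigr)
 \;=\; n-1.
% For a cocompact lattice \Gamma \leqslant \Isom(\Hb^n), equality of the two
% invariants in the cocompact case then gives \delta_\Gamma(\Hb^n) = n-1 as well.
```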
If the measure $\mu$ is $\Isom(X,d)$-invariant, finite on bounded sets, and positive on open sets, then a simple computation (see the proof of Lemma 4.5 in~\cite{Q2006}) shows \begin{equation*} \delta_\Gamma(X,d) \leqslant h_{vol}(X,d,\mu). \end{equation*} When additional assumptions are made, the Poincar\'e exponent and the volume growth entropy may coincide. For instance, if the action of $\Gamma$ on $(X,d)$ is cocompact, a simple argument shows that they are equal (again see the proof of Lemma 4.5 in~\cite{Q2006}). These two invariants have a long and interesting history, as they are intimately related to the geometric and dynamical properties of the space $(X,d)$ (see for instance~\cite{M1979, FM1982}). Moreover, they are often linked to rigidity phenomena (see for instance~\cite{BCG1995, BCG1996}). In this paper we present two new characterizations of real hyperbolic $n$-space using the Poincar\'e exponent of a discrete group and the volume growth entropy. The first characterization (Theorem~\ref{thm:riem_finite_volume}) is in the space of Riemannian metrics with Ricci curvature bounded below and generalizes a result of Ledrappier and Wang~\cite{LW2010}. The second characterization (Theorem~\ref{thm:hilbert_finite_vol}) is in the space of Hilbert metrics and generalizes a result of Crampon~\cite{Cra2009}. This second result will follow from Theorem~\ref{thm:riem_finite_volume} and a recent result of Tholozan~\cite{Tho2015}. \subsection{Riemannian metrics}\ Suppose $(X,g)$ is a complete, simply connected Riemannian $n$-manifold with $\Ric \geqslant -(n-1)$. Then the Bishop-Gromov volume comparison theorem implies that \begin{align*} h_{vol}(X,g) \leqslant n-1 \end{align*} (in the Riemannian case we always use the Riemannian volume form when considering the volume growth entropy). In particular, the volume growth entropy is maximized when $(X,g)$ is isometric to real hyperbolic $n$-space.
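The comparison behind this bound is worth spelling out (standard statement of Bishop--Gromov, included here for the reader's convenience; it is background, not part of the paper's argument):

```latex
% Bishop--Gromov comparison (standard): if \Ric \geqslant -(n-1), then for any
% o \in X the ratio
r \;\longmapsto\; \frac{\Vol B_r(o)}{\Vol_{\Hb^n} B_r}
% is non-increasing in r, and it tends to 1 as r \rightarrow 0. Hence
\Vol B_r(o) \;\leqslant\; \Vol_{\Hb^n} B_r \;\leqslant\; C\, e^{(n-1)r},
% which gives the asserted bound
h_{vol}(X,g) \;=\; \limsup_{r \rightarrow +\infty} \frac{1}{r} \log \Vol B_r(o)
 \;\leqslant\; n-1.
```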
There are many other examples which maximize volume growth entropy, but if $(X,g)$ has ``enough'' symmetry then it is reasonable to expect that $h_{vol}(X,g)=n-1$ if and only if $(X,g)$ is isometric to real hyperbolic $n$-space. This was recently proved by Ledrappier and Wang when $X$ covers a compact manifold: \begin{theorem}\cite{LW2010} Let $(X,g)$ be a complete, simply connected Riemannian $n$-manifold and $\Gamma$ be a discrete group acting by isometries on $X$. Suppose that \begin{enumerate} \item $\Ric \geqslant -(n-1)$; \item $\Gamma$ acts properly and freely on $X$ and $\Gamma \backslash X$ is compact; \item $h_{vol}(X,g)= n-1$. \end{enumerate} Then $X$ is isometric to the real hyperbolic space $\Hb^n$. \end{theorem} Our first new characterization of real hyperbolic space replaces compactness with finite volume, but at the cost of replacing $h_{vol}$ by $\delta_\Gamma$. \begin{thmintro} \label{thm:riem_finite_volume} Let $(X,g)$ be a complete, simply connected Riemannian $n$-manifold and $\Gamma$ be a discrete group acting by isometries on $X$. Suppose that \begin{enumerate} \item $\Ric \geqslant -(n-1)$; \item $X$ has bounded curvature; \item $\Gamma$ acts properly and freely on $X$ and $\Gamma \backslash X$ has finite volume; \item the Poincar\'e exponent satisfies $\delta_{\Gamma}(X,g)= n-1$. \end{enumerate} Then $X$ is isometric to the real hyperbolic space $\Hb^n$. \end{thmintro} \begin{remark} As in~\cite{LW2010}, it is possible to prove versions of Theorem~\ref{thm:riem_finite_volume} for K{\"a}hler or quaternionic manifolds, but we will not pursue such matters here. \end{remark} Theorem~\ref{thm:riem_finite_volume} is a true generalization of Ledrappier and Wang's result: when $\Gamma \backslash X$ is assumed to be compact, $X$ has bounded curvature and the Poincar\'e exponent and the volume growth entropy coincide.
Although our proof will follow the general outline of their argument, only assuming finite volume introduces a number of technical complications. Finally, the bounded curvature assumption is important for our argument, but it may be possible to remove it. \subsection{Hilbert metrics} Given a proper convex open set $\Omega \subset \Pb(\Rb^{n+1})$, we let $H_\Omega$ be the associated Hilbert metric. The Hilbert metric is a complete length metric on $\Omega$ which is invariant under the group of projective automorphisms of $\Omega$ \begin{equation*} \Aut(\Omega) := \{ \varphi \in \PGL_{n+1}(\Rb) : \varphi \Omega = \Omega \}. \end{equation*} Moreover, if $\Omega$ is projectively equivalent to the ball $\Bc$, then $(\Omega, H_\Omega)$ is the Klein--Beltrami model of real hyperbolic $n$-space. Tholozan recently proved the following estimate for the volume growth entropy: \begin{theorem}\cite{Tho2015} If $\Omega \subset \Pb(\Rb^{n+1})$ is a proper convex open set then \begin{equation*} h_{vol}(\Omega, H_\Omega, \mu_B) \leqslant n-1 \end{equation*} where $\mu_B$ is the Busemann--Hausdorff volume associated with $(\Omega, H_{\Omega})$ (or any bi-Lipschitz equivalent measure). \end{theorem} In particular, the volume growth entropy is maximized when $\Omega$ is projectively equivalent to the unit ball. There are many other examples which maximize volume growth entropy: for instance, Berck, Bernig and Vernicos \cite{BBV2010} proved that if $\partial \Omega$ is $C^{1,1}$, then \begin{equation*} h_{vol}(\Omega, H_\Omega, \mu_B) = n-1. \end{equation*} However, once again, if $\Omega$ has ``enough'' symmetry, then one should expect that $h_{vol}(\Omega, H_\Omega, \mu_B)=n-1$ if and only if $\Omega$ is projectively equivalent to the unit ball.
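For the reader's convenience, we recall the classical cross-ratio formula defining the Hilbert metric (standard background, stated here without proof):

```latex
% Hilbert metric via the cross-ratio (classical definition). For distinct
% x, y \in \Omega, let a, b \in \partial\Omega be the points where the
% projective line through x and y meets \partial\Omega, labelled so that
% a, x, y, b occur in this order. Then
H_\Omega(x,y) \;=\; \frac{1}{2} \log\, [a,x,y,b]
 \;=\; \frac{1}{2} \log \left( \frac{\abs{ay}}{\abs{ax}} \cdot \frac{\abs{bx}}{\abs{by}} \right),
\qquad H_\Omega(x,x) \;=\; 0.
% Since the cross-ratio is a projective invariant, H_\Omega is automatically
% \Aut(\Omega)-invariant; when \Omega is the unit ball, this recovers the
% Klein--Beltrami model mentioned above.
```

Rigidity statements confirming the expectation above are known under various symmetry assumptions.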
For instance, Crampon proved the following: \begin{theorem} \cite{Cra2009}\label{thm:crampon} Suppose $\Omega \subset \Pb(\Rb^{n+1})$ is a proper strictly convex open set and there exists a discrete group $\Gamma \leqslant \Aut(\Omega)$ that acts properly, freely, and cocompactly. Then $h_{vol}(\Omega, H_\Omega, \mu_B) \leqslant n-1$ with equality if and only if $\Omega$ is projectively isomorphic to $\Bc$ (and in particular $(\Omega, H_\Omega)$ is isometric to $\Hb^n$). \end{theorem} \begin{remark} For the Hilbert metric, strict convexity of $\Omega$ is somewhat analogous to negative curvature. In particular, for a strictly convex set the Hilbert metric is uniquely geodesic, that is, every pair of points is joined by a unique geodesic. Moreover, Benoist~\cite{B2004} proved that when $\Omega$ is strictly convex and has a compact quotient, the induced geodesic flow is Anosov and $C^{1+\alpha}$. In his proof of Theorem~\ref{thm:crampon}, Crampon first shows that the topological entropy of this flow coincides with the volume growth entropy and then he uses techniques from hyperbolic dynamics to prove rigidity. For a general convex open set, the Hilbert metric may not be uniquely geodesic, but one can consider a natural ``geodesic line'' flow obtained by flowing along the geodesics that are line segments in $\Pb(\Rb^{n+1})$. However, this flow is only $C^0$ and will have ``parallel'' flow lines. Thus Crampon's approach via smooth hyperbolic dynamics will not extend, at least directly, to the general case. \end{remark} Associated to every proper convex open set $\Omega \subset \Pb(\Rb^{n+1})$ is a Riemannian distance $B_\Omega$ on $\Omega$ called the \emph{Blaschke}, or \emph{affine}, distance (see, for instance, \cite{Lof2001,BH2013}). This Riemannian distance is $\Aut(\Omega)$-invariant and by a result of Calabi~\cite{C1972} has Ricci curvature bounded below by $-(n-1)$.
In particular, if $d\Vol$ is the associated Riemannian volume form then the Bishop-Gromov volume comparison theorem implies that \begin{equation*} h_{vol}(\Omega, B_\Omega, d\Vol) \leqslant n-1. \end{equation*} Benoist and Hulin \cite{BH2013} showed that the Hilbert distance and the Blaschke distance are bi-Lipschitz equivalent. Tholozan recently proved the following new relation: \begin{theorem}\cite{Tho2015}\label{thm:tho} If $\Omega \subset \Pb(\Rb^{n+1})$ is a proper convex open set, then \begin{equation*} B_\Omega < H_\Omega +1. \end{equation*} In particular, \begin{equation*} h_{vol}(\Omega, H_\Omega, \mu_B) \leqslant h_{vol}(\Omega, B_\Omega, d\Vol) \end{equation*} and if $\Gamma \leqslant \Aut(\Omega)$ is a discrete group then \begin{equation*} \delta_\Gamma(\Omega, H_\Omega) \leqslant \delta_\Gamma(\Omega, B_\Omega). \end{equation*} \end{theorem} Tholozan's result allows one to transfer from the Hilbert setting to the Riemannian setting where many more analytic tools are available. For instance, putting together Tholozan's result, the rigidity result of Ledrappier and Wang~\cite{LW2010} stated above, and some folklore properties of the Blaschke metric, one can remove the strictly convex hypothesis from Crampon's theorem (see Section~\ref{sec:Hilbert} for details): \begin{theorem}\label{thm:hilbert_compact} Suppose $\Omega \subset \Pb(\Rb^{n+1})$ is a proper convex open set and there exists a discrete group $\Gamma \leqslant \Aut(\Omega)$ which acts properly, freely, and cocompactly. Then $h_{vol}(\Omega, H_\Omega, \mu_B) \leqslant n-1$ with equality if and only if $\Omega$ is projectively isomorphic to $\Bc$ (and in particular $(\Omega, H_\Omega)$ is isometric to $\Hb^n$). \end{theorem} Using our generalization of Ledrappier and Wang's result we get a second new characterization of real hyperbolic space. 
\begin{thmintro}\label{thm:hilbert_finite_vol} Suppose $\Omega \subset \Pb(\Rb^{n+1})$ is a proper convex open set and there exists a discrete group $\Gamma \leqslant \Aut(\Omega)$ which acts properly, freely, and with finite co-volume (with respect to $\mu_B$). Then $\delta_\Gamma(\Omega, H_\Omega) \leqslant n-1$ with equality if and only if $\Omega$ is projectively isomorphic to $\Bc$ (and in particular $(\Omega, H_\Omega)$ is isometric to $\Hb^n$). \end{thmintro} When $\Gamma \backslash \Omega$ is non-compact but has finite volume, it is unclear whether or not $h_{vol}(\Omega, H_\Omega, \mu_B)$ and $\delta_\Gamma(\Omega, H_\Omega)$ coincide (for Riemannian negatively curved metrics, there exist groups acting with finite co-volume for which the volume entropy and the critical exponent are distinct \cite{DPPS2009}). However, when $\Omega$ has $C^1$ boundary and is strictly convex, Crampon and Marquis~\cite[Th\'eor\`eme 9.2]{CM2014_geodesic_flow} proved that these two asymptotic invariants coincide. We will prove that, in the finite volume quotient case, having $C^1$ boundary and being strictly convex are equivalent, and thus establish: \begin{corintro}\label{cor:hilbert_finite_vol_2} Suppose $\Omega \subset \Pb(\Rb^{n+1})$ is a proper convex open set which is either strictly convex or has $C^1$ boundary and such that there exists a discrete group $\Gamma \leqslant \Aut(\Omega)$ which acts properly, freely, and with finite co-volume (with respect to $\mu_B$). Then $h_{vol}(\Omega, H_\Omega, \mu_B) \leqslant n-1$ with equality if and only if $\Omega$ is projectively isomorphic to $\Bc$ (and in particular $(\Omega, H_\Omega)$ is isometric to $\Hb^n$). \end{corintro} \begin{remark} This result was announced for surfaces by Crampon in \cite{CraThese}, but his proof was not complete in the finite volume case since some of the dynamical results used are only fully proved in the compact case.
\end{remark} \subsection*{Acknowledgments} We would like to thank the referees for their careful reading of our article and their suggestions for improving it. The first author would like to thank Fran\c{c}ois Ledrappier and Nicolas Tholozan for helpful discussions. The second author was supported by the ANR Facettes and the ANR Finsler. The third author was partially supported by NSF grant 1400919. \section{Entropy rigidity for Riemannian metrics} This section is entirely devoted to the proof of Theorem \ref{thm:riem_finite_volume}. It will follow from Proposition \ref{prop:special_busemann} and Proposition \ref{prop:final_step} below. \subsection{The Busemann boundary} In this subsection we describe the Busemann compactification of a non-compact complete Riemannian manifold $(X,g)$. Fix a point $o \in X$. As in~\cite{L2010, LW2010}, we will normalize our Busemann functions such that $\xi(o)=0$. Now, for each $y \in X$, define the Busemann function based at $y$ to be \begin{equation*} b_y(x) := d(x,y)-d(y,o). \end{equation*} As each $b_y$ is $1$-Lipschitz, the image of the map $y \mapsto b_y \in C(X)$ is relatively compact when $C(X)$ is equipped with the topology of uniform convergence on compact subsets. The \emph{Busemann compactification} $\wh{X}$ of $X$ is then defined to be the closure of $X$ in $C(X)$. The \emph{Busemann boundary} of $X$ is the set $\partial \wh{X} = \wh{X} \setminus X$. We begin by recalling some features of this compactification. \begin{theorem} \label{thm:buse_bd_basic} Let $(X,g)$ be a non-compact complete simply connected Riemannian manifold. Then \begin{enumerate} \item $X$ is open in $\wh{X}$, hence the Busemann boundary $\partial \wh{X}$ is compact. \item The action of $\text{Isom}(X)$ on $X$ extends to an action on $\wh{X}$ by homeomorphisms and for $\gamma \in \text{Isom}(X)$ and $\xi \in \partial \wh{X}$ the action is given by \begin{equation*} (\gamma \cdot \xi)(x) = \xi(\gamma^{-1}x)-\xi(\gamma^{-1}o).
\end{equation*} \end{enumerate} \end{theorem} The first result can be found in~\cite[Proposition 1]{LW2010}. The second assertion is straightforward to prove. \subsection{Patterson-Sullivan measures} \begin{definition} Let $(X,g)$ be a non-compact complete simply connected Riemannian manifold and $\Gamma \leqslant \Isom(X,g)$ a discrete subgroup with $\delta_{\Gamma} < \infty$. A family of measures $\{ \nu_x : x \in X\}$ on $\partial \wh{X}$ is a (normalized) \emph{Patterson-Sullivan measure} if \begin{enumerate} \item $\nu_o(\partial \wh{X})=1$, \item for any $x,y \in X$ the measures $\nu_x,\nu_y$ are in the same measure class and satisfy \begin{equation*} \frac{d\nu_x}{d\nu_y}(\xi) = e^{-\delta_\Gamma(\xi(x)-\xi(y))}, \end{equation*} \item for any $g \in \Gamma$, $\nu_{gx} = g_*\nu_x$. \end{enumerate} \end{definition} Following the standard construction of Patterson-Sullivan measures via the Poincar\'e series (see for instance Section 2 of~\cite{LW2010}) we obtain: \begin{proposition} Let $(X,g)$ be a non-compact complete simply connected Riemannian manifold and $\Gamma \leqslant \Isom(X,g)$ a discrete subgroup with $\delta_\Gamma < \infty$. Then there exists a Patterson-Sullivan measure $\{ \nu_x : x \in X\}$ on $\partial \wh{X}$. \end{proposition} \subsection{An integral formula} Now suppose $(X,g)$ is a non-compact complete simply connected Riemannian manifold and $\Gamma \leqslant \Isom(X,g)$ is a discrete subgroup with $\delta_\Gamma < \infty$. Moreover, assume that $\Gamma$ acts properly and freely on $X$ and the quotient manifold $M = \Gamma \backslash X$ has finite volume (with respect to the Riemannian volume form). Following \cite{L2010,LW2010}, we introduce the \emph{laminated space} \begin{equation*} X_M = \Gamma \backslash (X \times \partial \wh{X}) \end{equation*} where $\Gamma$ acts diagonally on the product. The space $X_M$ is laminated by the images of $X \times \{ \xi \}$ under the projection. 
The leaves of this lamination inherit a smooth structure from $X$ and using this structure we can define a gradient $\nabla^{\Wc}$, a divergence $\text{div}^{\Wc}$, and a Laplacian $\Delta^{\Wc}$ in the leaf direction. A Patterson-Sullivan measure $\{\nu_x : x \in X\}$ yields a measure on the laminated space $X_M$ as follows: by definition $d\nu_x(\xi)=e^{-\delta_\Gamma\xi(x)}d\nu_o(\xi)$ for all $x \in X$. In particular if $dx$ is the Riemannian volume form on $X$, then the measure \begin{equation*} d\wt{m}(x,\xi)=e^{-\delta_\Gamma\xi(x)}dxd\nu_o(\xi) \end{equation*} is $\Gamma$-invariant and descends to a measure $\nu$ on $X_M$. The argument at the end of Section 2 of~\cite{LW2010} can be used to show the following: \begin{theorem}\label{thm:integral_formula} With the notation above, if $Y$ is a continuous vector field on $X_M$ which is $C^1$ along the leaves $X \times \{\xi\}$ such that $\norm{Y}_g$ and $\Div^{\Wc} Y$ are in $L^1(X_M, d\nu)$ then \begin{equation*} \int \Div^{\Wc} Y d\nu = \delta_\Gamma\int \left\langle Y,\nabla^{\Wc} \xi \right\rangle d\nu. \end{equation*} \end{theorem} \begin{remark} If $\xi \in \partial \wh{X}$, then $\norm{\nabla^{\Wc} \xi(x) } \leqslant 1$ for almost every $x \in X$. So we see that \begin{equation*} \int \abs{\left\langle Y,\nabla^{\Wc} \xi \right\rangle} d\nu \leqslant \int \norm{Y}_g d\nu < \infty \end{equation*} and thus the right hand side of the equation in Theorem~\ref{thm:integral_formula} is well defined. \end{remark} Now the function $x \rightarrow \nu_x(\partial \wh{X})$ is $\Gamma$-invariant so with a slight abuse of notation the measure $\nu$ has total mass \begin{equation*} \nu(X_M) = \int_M \nu_x(\partial \wh{X} ) dx. \end{equation*} Since $x \rightarrow \nu_x(\partial \wh{X})$ is continuous, if $M$ is compact then the measure $\nu$ is finite. 
For general finite volume quotients $\Gamma \backslash X$ it is not clear when $\nu$ will be a finite measure, but we can prove the following: \begin{proposition} With the notation above, if $(X,g)$ has $\Ric \geqslant -(n-1)$ and $\delta_{\Gamma} = n-1$ then $\nu(X_M) < \infty$. \end{proposition} \begin{proof} Since $\Ric \geqslant -(n-1)$ the Laplacian comparison theorem implies that for any $\xi \in \partial \wh{X}$ we have \begin{equation*} \Delta e^{-(n-1) \xi} \geqslant 0 \end{equation*} in the sense of distributions (see for instance~\cite[Proposition 4]{LW2010}). So in particular, since $\delta_{\Gamma} = n-1$, the function \begin{equation*} f(x):= \nu_x(\partial \wh{X}) = \int_{\partial \wh{X}} e^{-(n-1)\xi(x)} d\nu_o(\xi) \end{equation*} is such that $\Delta f \geqslant 0$ in the sense of distributions. Moreover, thanks to the invariance of the Patterson-Sullivan measure, $f$ is $\Gamma$-invariant and hence descends to a superharmonic function on $M = \Gamma \backslash X$. Since $M$ has finite volume and $f$ is a positive, superharmonic function, $f$ must be constant \cite[Proposition 0.2]{Ada1992}. Then, \begin{equation*} \nu(X_M) = \int_{M} \nu_x(\partial \wh{X}) dx = \int_{M} \nu_o(\partial \wh{X}) dx = \int_{M} dx = \Vol(M). \qedhere \end{equation*} \end{proof} \subsection{A special Busemann function} This subsection is devoted to the proof of the following: \begin{proposition}\label{prop:special_busemann} Suppose $(X,g)$ is a complete simply connected Riemannian manifold with $\Ric \geqslant -(n-1)$ and bounded sectional curvature. Assume $\Gamma \leqslant \Isom(X)$ is a discrete group that acts properly and freely on $X$ such that $M=\Gamma \backslash X$ has finite volume (with respect to the Riemannian volume form). If $\delta_{\Gamma} = n-1$ then there exists $\xi_0 \in \partial \wh{X}$ such that $\Delta \xi_0 \equiv n-1$. \end{proposition} For a general Riemannian manifold, the elements of $\partial \wh{X}$ are only Lipschitz.
To overcome this lack of regularity we will consider smooth approximations obtained by convolution with the heat kernel. \begin{definition}\cite[Theorem 7.13]{G2009} Suppose $(X,g)$ is a complete Riemannian manifold. The heat kernel $p_t(x,y) \in C^{\infty}(\Rb_{>0} \times X \times X)$ is the unique function satisfying: \begin{enumerate} \item $\frac{\partial}{\partial t} p_t = \Delta_x p_t = \Delta_y p_t$, \item $p_t(x,y) = p_t(y,x)$ for all $x,y \in X$, \item $\lim_{t \searrow 0} p_t(x,y) = \delta_x(y)$ in the sense of distributions. \end{enumerate} \end{definition} In the argument to follow it will also be helpful to use nicely behaved compactly supported functions: \begin{lemma}\label{lem:nice_cut_off} Suppose $M$ is a complete Riemannian manifold and $x_0 \in M$. Then there exists $C>0$ such that for any $r>4$ there is a $C^\infty$ function $\varphi_r \colon M \rightarrow \Rb$ such that \begin{enumerate} \item $0 \leqslant \varphi_r \leqslant 1$ on $M$, \item $\varphi_r \equiv 1$ on $B_r(x_0)$, \item $\varphi_r \equiv 0$ on $M \setminus B_{2r}(x_0)$, \item $\norm{\nabla \varphi_r} \leqslant C/r$ on $M$. \end{enumerate} \end{lemma} \begin{proof} Pick a smooth function $f \colon [0,\infty) \rightarrow \Rb$ such that $0 \leqslant f \leqslant 1$, $f \equiv 1$ on $[0,1]$, and $f \equiv 0$ on $[2,\infty)$. Let $C_1 = \max \{ \abs{f^\prime(t)}\}$. Next, let $g \colon [-1/3,4/3] \rightarrow [0,1]$ be a $C^\infty$ function with $g \equiv 0$ on $[-1/3,1/3]$ and $g \equiv 1$ on $[2/3,4/3]$. Let $C_2 = \max\{ \abs{g^\prime(t)}\}$. We claim that $C=2C_1C_2$ satisfies the conclusion of the lemma. Fix $r > 4$ and define the function $\phi: M \rightarrow \Rb$ by \begin{equation*} \phi(x) = f(d(x,x_0)/r). \end{equation*} Then $\phi$ is $C_1/r$-Lipschitz. We can then approximate $\phi$ by a $C^\infty$ function, $\theta \colon M \rightarrow \Rb$, so that $\abs{\phi - \theta} < 1/r$ and $\theta$ is $2C_1/r$-Lipschitz (see, for instance, \cite{AFR2007}).
Finally, define \begin{equation*} \varphi_r(x) := g(\theta(x)). \end{equation*} Then $0 \leqslant \varphi_r \leqslant 1$ on $M$ by construction. Moreover, if $x \in B_r(x_0)$, we have that $\phi(x) =1$ and so $\theta(x) \in [1-1/r,1+1/r] \subset [2/3,4/3]$. Thus, $\varphi_r(x) = 1$. Similarly, if $x \in M \setminus B_{2r}(x_0)$ then $\varphi_r(x) = 0$. Finally, we see that $\varphi_r$ is $2C_1C_2/r$-Lipschitz. \end{proof} For the rest of the subsection assume $(X,g)$ and $\Gamma \leqslant \Isom(X,g)$ satisfy the hypotheses of Proposition \ref{prop:special_busemann}. Let $p_t(x,y)$ be the heat kernel on $X$. By Theorem 4 in~\cite{CLY1981}, for any $t > 0$ there exists $C_p=C_p(t) \geqslant 1$ such that \begin{equation*} p_t(x,y) \leqslant C_p e^{\frac{-d(x,y)^2}{C_p}} \text{ for all } x, y \in X. \end{equation*} On the space $X \times \partial \wh{X}$ define the function \begin{equation*} F_t(x,\xi) := \int_X p_t(x,y) \xi(y) dy. \end{equation*} Because of the above estimate on $p_t(x,y)$, and since each $\xi \in \partial \wh{X}$ is $1$-Lipschitz (and so grows at most linearly), $F_t$ is well defined. In Appendix~\ref{sec:heat_kernel} we will use standard facts about the heat kernel to prove the following: \begin{proposition}\label{prop:heat_kernel} With the notation above, \begin{enumerate} \item For any $t > 0$ and $\xi \in \partial \wh{X}$, the function $x \rightarrow F_t(x,\xi)$ is $C^\infty$. \item For any $t > 0$, the functions $(x, \xi) \rightarrow \nabla_x F_t(x,\xi)$ and $(x, \xi) \rightarrow \Delta_x F_t(x,\xi)$ are continuous. \item For any $t > 0$ and $\xi \in \partial \wh{X}$, \begin{equation*} \norm{\nabla_x F_t(x,\xi)} \leqslant e^{(n-1)t}. \end{equation*} \item For any $t > 0$ and $\xi \in \partial \wh{X}$, \begin{equation*} \Delta_x F_t(x,\xi) \leqslant n-1. \end{equation*} \end{enumerate} \end{proposition} Now, let $\wt{Y}_t(x,\xi) = \nabla_x F_t(x,\xi)$. Then $\wt{Y}_t$ descends to a continuous vector field $Y_t$ on $X_M$ which is $C^\infty$ along the leaves $X \times \{\xi\}$.
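In semigroup notation, the smoothing step just performed can be summarized as follows (a reformulation of the statements above, with $P_t = e^{t\Delta}$ denoting the heat semigroup):

```latex
% Heat-semigroup reformulation of the smoothing step. Writing P_t = e^{t\Delta},
F_t(\cdot,\xi) \;=\; P_t\,\xi \;=\; \int_X p_t(\cdot,y)\,\xi(y)\, dy,
\qquad
\wt{Y}_t(\cdot,\xi) \;=\; \nabla\, P_t\,\xi.
% By the proposition above, the smoothed Busemann function P_t\,\xi is C^\infty
% and retains, up to the factor e^{(n-1)t}, the two key properties of \xi
% (1-Lipschitz, and \Delta\xi \leqslant n-1 distributionally):
\norm{\nabla P_t\,\xi} \;\leqslant\; e^{(n-1)t},
\qquad
\Delta P_t\,\xi \;\leqslant\; n-1,
% so that, leafwise on X_M, \Div^{\Wc} Y_t = \Delta_x F_t(x,\xi) \leqslant n-1.
```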
Next, let $\varphi_r \colon M \rightarrow \Rb$ be as in Lemma~\ref{lem:nice_cut_off} for some $x_0 \in M$. Then, define $\wt{f}_r \colon X \times \partial \wh{X} \rightarrow \Rb$ by $\wt{f}_r(x,\xi) = \varphi_r(\pi' (x))$, where $\pi':X \to M$ is the universal covering map. Since $\wt{f}_r$ is $\Gamma$-invariant, it descends to a continuous function $f_r \colon X_M \rightarrow \Rb$ which is $C^\infty$ along the leaves $X \times \{\xi\}$. Let $\wt{x}_0 \in X$ be a preimage of $x_0 \in M$. For $r > 0$, let $K_r \subset X_M$ be the image of $B_r(\wt{x}_0) \times \partial \wh{X}$ under the map \begin{equation*} \pi\colon X \times \partial \wh{X} \rightarrow X_M. \end{equation*} \begin{lemma} $K_r$ is compact, $f_r \equiv 1$ on $K_r$, and $f_r \equiv 0$ on $X_M \setminus K_{2r}$. \end{lemma} \begin{proof} Clearly, $K_r$ is compact by definition. Notice that $(x,\xi) \in \pi^{-1}(K_r)$ if and only if $x \in \cup_{\gamma \in \Gamma} B_r(\gamma \wt{x}_0)$. Thus, if $(x,\xi) \in \pi^{-1}(K_r)$ then $\wt{f}_r(x, \xi) = 1$, and if $(x,\xi) \notin \pi^{-1}(K_{2r})$ then $\wt{f}_r(x,\xi) = 0$. \end{proof} \begin{lemma} For any $r > 0$ and $t > 0$, \begin{equation*} \norm{f_r Y_t} \in L^1(X_M, d\nu) \end{equation*} and \begin{equation*} \Div^{\Wc} (f_r Y_t) \in L^1(X_M, d\nu). \end{equation*} \end{lemma} \begin{proof} Since $\norm{f_r Y_t} \leqslant e^{(n-1)t}$ and $\nu(X_M) < \infty$, the first assertion is obvious. Now \begin{equation*} \Div^{\Wc} (f_r Y_t )= f_r \Div^{\Wc} Y_t + \ip{\nabla^{\Wc} f_r,Y_t}, \end{equation*} so \begin{equation*} \int_{X_M} \abs{ \Div^{\Wc} f_r Y_t} d\nu \leqslant \int_{X_M} f_r \abs{ \Div^{\Wc} Y_t} d\nu + \frac{Ce^{(n-1)t}}{r} \nu(X_M). \end{equation*} However, the support of $f_r$ is compact in $X_M$ and the map $(x,\xi) \rightarrow \Delta_x F_t(x,\xi)$ is continuous. Thus, $ \abs{ \Div^{\Wc} Y_t} $ is bounded on the support of $f_r$. Hence, \begin{equation*} \int_{X_M} f_r \abs{ \Div^{\Wc} Y_t} d\nu < +\infty.
\qedhere \end{equation*} \end{proof} \begin{lemma}\label{lem:L1} For any $t > 0$, \begin{equation*} \Div^{\Wc} Y_t \in L^1(X_M, d\nu), \end{equation*} and \begin{equation*} \int_{X_M \setminus K_{2r}} \abs{ \Div^{\Wc} Y_t} d\nu \leqslant \left( 2n-2+\frac{C}{r} \right) e^{(n-1)t} \nu(X_M \setminus K_r). \end{equation*} \end{lemma} \begin{proof} For a real number $t$, let $t^+ = \max\{ 0, t\}$ and $t^- = \min\{ 0, t\}$. Then, \begin{equation*} \int_{X_M} \abs{\Div^{\Wc} (Y_t)} d \nu = \int_{X_M} \Div^{\Wc} (Y_t)^+ d \nu - \int_{X_M} \Div^{\Wc} (Y_t)^- d \nu \end{equation*} and, by Proposition~\ref{prop:heat_kernel}, \begin{equation*} \int_{X_M} \Div^{\Wc} (Y_t)^+ d \nu \leqslant (n-1) \nu(X_M). \end{equation*} So, it is enough to bound the integral of $ \Div^{\Wc} (Y_t)^-$. By Theorem~\ref{thm:integral_formula}, \begin{equation*} \int_{X_M} \Div^{\Wc} (f_r Y_t) d \nu =(n-1) \int_{X_M} \ip{ f_r Y_t, \nabla^{\Wc} \xi} d\nu. \end{equation*} So, by Proposition~\ref{prop:heat_kernel} (3) and the fact that $\norm{\nabla^{\Wc} \xi(x) } \leqslant 1$ for almost every $x \in X$, \begin{equation*} \abs{\int_{X_M} \Div^{\Wc} (f_r Y_t) d \nu} \leqslant (n-1) e^{(n-1)t} \nu(X_M). \end{equation*} Now, \begin{equation*} \Div^{\Wc} f_r Y_t = f_r \Div^{\Wc} Y_t + \ip{\nabla^{\Wc} f_r, Y_t} \quad \text{and} \quad \abs{ \ip{\nabla^{\Wc} f_r ,Y_t}} \leqslant \frac{C e^{(n-1)t}}{r}, \end{equation*} so \begin{equation*} \abs{\int_{X_M} f_r \Div^{\Wc} (Y_t) d \nu} \leqslant \left( \frac{C}{r} + n-1 \right)e^{(n-1)t}\nu(X_M). \end{equation*} Then \begin{align*} - \int_{X_M} f_r \Div^{\Wc}(Y_t)^- d\nu &= - \int_{X_M} f_r \Div^{\Wc}(Y_t) d\nu + \int_{X_M} f_r \Div^{\Wc}(Y_t)^+ d\nu \\ & \leqslant - \int_{X_M} f_r \Div^{\Wc}(Y_t) d\nu + (n-1) \nu(X_M). \end{align*} This implies that \begin{equation*} - \int_{X_M} f_r \Div^{\Wc}(Y_t)^- d\nu \leqslant \left( \frac{C}{r} + 2n-2 \right)e^{(n-1)t}\nu(X_M).
\end{equation*} Finally $\lim_{r \rightarrow \infty} f_r = 1$ and so, by Fatou's Lemma, \begin{equation*} - \int_{X_M} \Div^{\Wc} (Y_t)^- d \nu \leqslant \liminf_{r \rightarrow \infty} - \int_{X_M} f_r \Div^{\Wc}(Y_t)^- d\nu \leqslant \left( 2n-2\right)e^{(n-1)t}\nu(X_M). \end{equation*} By the remarks at the start of the proof we then have that $\Div^{\Wc} Y_t \in L^1(X_M, d\nu)$. To prove the second assertion, first observe that, for any $t \in \Rb$, $\abs{t} = -t+2t^+$. So, \begin{align*} \int_{X_M \setminus K_{2r}} \abs{\Div^{\Wc} Y_t}d\nu &\leqslant \int_{X_M} (1-f_r) \abs{\Div^{\Wc} Y_t}d\nu \\ & =- \int_{X_M} (1-f_r) \Div^{\Wc} Y_td\nu+2\int_{X_M} (1-f_r) (\Div^{\Wc} Y_t)^+d\nu. \end{align*} Now, \begin{equation*} \int_{X_M} (1-f_r) (\Div^{\Wc} Y_t)^+d\nu \leqslant \int_{X_M} (1-f_r)(n-1) d\nu \leqslant (n-1) \nu(X_M\setminus K_r), \end{equation*} and, by Theorem~\ref{thm:integral_formula}, \begin{align*} - \int_{X_M} (1-f_r) \Div^{\Wc} Y_td\nu &= - \int_{X_M} \Div^{\Wc} \left( (1-f_r)Y_t\right)d\nu + \int_{X_M} \ip{ \nabla^{\Wc}(1-f_r), Y_t} d\nu \\ & \leqslant (n-1) \abs{\int_{X_M} \ip{ (1-f_r) Y_t, \nabla^{\Wc} \xi} d\nu} + \frac{Ce^{(n-1)t}}{r} \nu(X_M \setminus K_r) \\ & \leqslant \left( n-1+\frac{C}{r} \right) e^{(n-1)t} \nu(X_M \setminus K_r). \end{align*} Combining the above inequalities establishes the second assertion of the lemma. \end{proof} We finally have all the ingredients to prove Proposition \ref{prop:special_busemann}. Using Lemma~\ref{lem:L1} we can apply Theorem \ref{thm:integral_formula} to $Y_t(x,\xi)$ and obtain: \begin{equation*} 0=\int_{X_M}\left( \Div^{\Wc} Y_t -(n-1) \left\langle Y_t,\nabla^{\Wc} \xi \right\rangle \right)d\nu. \end{equation*} Moreover, Lemma~\ref{lem:L1} implies that \begin{equation*} \int_{X_M \setminus K_{2r}} \abs{ \Div^{\Wc} Y_t -(n-1) \left\langle Y_t,\nabla^{\Wc} \xi \right\rangle }d\nu \leqslant \left( 2n-2+\frac{C}{r} \right) e^{(n-1)t} \nu(X_M \setminus K_r) + (n-1) \nu(X_M \setminus K_{2r}). 
\end{equation*} So for any $\varepsilon > 0$, there exists $r > 0$ such that \begin{equation*} \int_{X_M \setminus K_{2r}} \abs{ \Div^{\Wc} Y_t -(n-1) \left\langle Y_t,\nabla^{\Wc} \xi \right\rangle }d\nu \leqslant \varepsilon \end{equation*} for all $0 \leqslant t \leqslant 1$. We now choose a countable and locally finite open cover $\{U_i\}$ of $M$ such that each $U_i$ is small enough so that $(\pi')^{-1}(U_i)$ is a disjoint union of open sets all diffeomorphic to $U_i$. Let $\{\chi_i\}$ be a partition of unity subordinate to $\{U_i\}$. For each $U_i$, we choose one connected component of its lift that we denote by $\wt{U}_i$ and we write $\wt{\chi}_i$ for the lift of $\chi_i$ to $\wt U_i$. \begin{observation}\label{obs:lifting} If $f \in L^1(X_M, d\nu)$ and $\wt{f}$ is the lift of $f$ to $X\times \partial \wh{X}$, then \begin{align*} \int_{X_M} f d\nu = \sum_{i \in \Nb} \int_{x \in \wt{U}_i} \int_{\partial \wh{X}} \wt{\chi}_i(x) \wt{f}(x,\xi) e^{-(n-1)\xi(x)} d\nu_o dx. \end{align*} \end{observation} Next let \begin{equation*} \Jc := \{ j \in \Nb : U_j \cap B_{2r}(x_0) \neq \emptyset \}. \end{equation*} Because the cover $M = \bigcup U_i$ is locally finite, we see that $\Jc$ is a finite subset of $\Nb$. Notice that $\{ \chi_i \circ \pi\}$ is a partition of unity on $X_M$ and so \begin{align*} \int_{X_M}\left( \sum_{j \in \Jc} \chi_j \circ \pi \right) & \left( \Div^{\Wc} Y_t -(n-1) \left\langle Y_t,\nabla^{\Wc} \xi \right\rangle \right)d\nu = -\int_{X_M}\left( \sum_{j \notin \Jc} \chi_j \circ \pi \right) \left( \Div^{\Wc} Y_t -(n-1) \left\langle Y_t,\nabla^{\Wc} \xi \right\rangle \right)d\nu \\ & \geqslant - \int_{X_M \setminus K_{2r}} \abs{ \Div^{\Wc} Y_t -(n-1) \left\langle Y_t,\nabla^{\Wc} \xi \right\rangle}d\nu \geqslant - \varepsilon.
\end{align*} Moreover, by Observation~\ref{obs:lifting} \begin{align*} \int_{X_M} & \left( \sum_{j \in \Jc} \chi_j \circ \pi \right) \left( \Div^{\Wc} Y_t -(n-1) \left\langle Y_t,\nabla^{\Wc} \xi \right\rangle \right)d\nu \\ & = \sum_{j \in \Jc} \int_{x \in \wt{U_j}} \int_{\partial \wh{X}} \wt{\chi}_j(x) \left(\Div^{\Wc} \wt{Y}_t -(n-1) \left\langle \wt{Y}_t,\nabla^{\Wc} \xi \right\rangle \right)e^{-(n-1)\xi(x)} d\nu_o dx \\ &= \sum_{j \in \Jc} \int_{x \in \wt{U_j}} \int_{\partial \wh{X}} \Divw\left( \wt{Y}_t e^{-(n-1)\xi(x)} \wt{\chi}_j \right) - \langle \wt{Y}_t , \nabla \wt{\chi}_j \rangle e^{-(n-1)\xi(x)} d\nu_o dx. \end{align*} Now because each $ \wt{\chi}_j$ is compactly supported in $\wt{U}_j$, Stokes Theorem implies that \begin{align*} \int_{x\in \wt{U}_j} \Divw\left( \wt{Y}_t e^{-(n-1)\xi(x)} \wt{\chi}_j \right) dx=0 \end{align*} and so by Fubini \begin{align*} \varepsilon \geqslant - \int_{X_M} & \left( \sum_{j \in \Jc} \chi_j \circ \pi \right) \left( \Div^{\Wc} Y_t -(n-1) \left\langle Y_t,\nabla^{\Wc} \xi \right\rangle \right)d\nu \\ &= -\sum_{j \in \Jc} \int_{\partial \wh{X}} \left( \int_{x \in \wt{U_j}} \Divw\left( \wt{Y}_t e^{-(n-1)\xi(x)} \wt{\chi}_j \right) - \langle \wt{Y}_t , \nabla \wt{\chi}_j \rangle e^{-(n-1)\xi(x)} dx\right)d\nu_o \\ & = \sum_{j \in \Jc} \int_{\partial \wh{X}} \left(\int_{x \in \wt{U_j}} \langle \wt{Y}_t , \nabla \wt{\chi}_j \rangle e^{-(n-1)\xi(x)} dx\right)d\nu_o. \end{align*} Since the sum is finite, one can send $t \rightarrow 0$ to obtain \begin{equation*} \sum_{j \in \Jc} \int_{\xi \in \partial \wh{X}} \left(\int_{x\in \wt{U}_j}\langle \nabla \xi , \nabla \wt{\chi}_j \rangle e^{-(n-1)\xi(x)} dx \right) d\nu_o \leqslant \varepsilon. 
\end{equation*} By integration by parts, we have \begin{align*} \sum_{j \in \Jc} \int_{\xi \in \partial \wh{X}} \left(\int_{x\in \wt{U}_j}e^{-(n-1)\xi(x)} \Delta \wt{\chi}_j dx \right) d\nu_o &= - \sum_{j \in \Jc} \int_{\xi \in \partial \wh{X}} \left(\int_{x\in \wt{U}_j}\langle \nabla e^{-(n-1)\xi(x)} , \nabla \wt{\chi}_j \rangle dx \right) d\nu_o \\ &= (n-1) \sum_{j \in \Jc} \int_{\xi \in \partial \wh{X}} \left(\int_{x\in \wt{U}_j}\langle \nabla \xi , \nabla \wt{\chi}_j \rangle e^{-(n-1)\xi(x)} dx \right) d\nu_o. \end{align*} \indent So, \begin{equation*} \sum_{j \in \Jc} \int_{\xi \in \partial \wh{X}} \left(\int_{x\in \wt{U}_j}e^{-(n-1)\xi(x)} \Delta \wt{\chi}_j dx \right) d\nu_o \leqslant \frac{\varepsilon}{n-1}. \end{equation*} By \cite[Proposition 4]{LW2010} (which remains true in our context), $\Delta e^{-(n-1) \xi} \geqslant 0$ in the sense of distributions. Hence, for all $j \in \Jc$, \[ \int_{\xi \in \partial \wh{X}} \int_{x\in \wt{U}_j}e^{-(n-1)\xi(x)} \Delta \wt{\chi}_j dx \geqslant 0. \] So, we conclude that for all $j \in\Jc$, \[ \int_{\xi \in \partial \wh{X}} \int_{x\in \wt{U}_j}e^{-(n-1)\xi(x)} \Delta \wt{\chi}_j dx \leqslant \frac{\varepsilon}{n-1}. \] Since $\varepsilon > 0$ is arbitrary, we deduce that for all $j \in \Nb$, \[ \int_{\xi \in \partial \wh{X}} \int_{x\in \wt{U}_j}e^{-(n-1)\xi(x)} \Delta \wt{\chi}_j dx =0. \] Then, exploiting the fact that $\Delta e^{-(n-1) \xi} \geqslant 0$ again, we see that for $\nu_o$-almost-every $\xi \in \partial \wh{X}$ \[ \int_{x\in \wt{U}_j}e^{-(n-1)\xi(x)} \Delta \wt{\chi}_j dx =0 \] for every $j \in\Nb$. \indent In the argument above, one can replace $\wt U_j$ by $g\cdot \wt U_j$ and $\wt \chi_j$ by $g\cdot \wt \chi_j$ for any $g \in \Gamma$. So for $\nu_o$-almost-every $\xi \in \partial \wh{X}$ \[ \int_{x\in g \cdot \wt{U}_j}e^{-(n-1)\xi(x)} \Delta (g \cdot \wt{\chi}_j) dx =0 \] for every $j \in\Nb$ and every $g \in \Gamma$.
One can now conclude that, for $\nu_o$-almost-every $\xi \in \partial \wh{X} $, $\Delta e^{-(n-1)\xi(x)} = 0$ in the sense of distributions in the same way as in \cite[p.472]{LW2010}, which concludes the proof of Proposition~\ref{prop:special_busemann}. \subsection{Final steps} \begin{proposition} \label{prop:final_step} Suppose $(X,g)$ is a complete simply connected Riemannian manifold with $\Ric \geqslant -(n-1)$ and $\Gamma \leqslant \Isom(X)$ is a discrete group that acts properly and freely on $X$ such that $M=\Gamma \backslash X$ has finite volume (with respect to the Riemannian volume form). If there exists $\xi_0 \in \partial \wh{X}$ such that $\Delta \xi_0 \equiv n-1$ then $X$ is isometric to real hyperbolic $n$-space. \end{proposition} \begin{proof} By the proof of Theorem 3.3 in~\cite{W2008} (also see the remark after Theorem 6 in~\cite{LW2010}) if there exists some $\xi_1 \in \partial \wh{X}$ such that $\Delta \xi_1 \equiv n-1$ and $\xi_1 \neq \xi_0$ then $X$ is isometric to real hyperbolic $n$-space. So suppose, for a contradiction, that we have \begin{equation*} \{ \xi_0 \} = \{ \xi \in \partial \wh{X} : \Delta \xi \equiv n-1 \}. \end{equation*} Since \begin{equation*} \Delta (\gamma \cdot \xi)(x) = (\Delta \xi)( \gamma^{-1} x) \end{equation*} we see that $\gamma \cdot \xi_0$ also has constant Laplacian equal to $n-1$. Thus $\gamma \cdot \xi_0 = \xi_0$ for all $\gamma \in \Gamma$. Now, if $\gamma \in \Gamma$, we see that \begin{equation*} \textrm{diff}(\gamma)_{\gamma^{-1} x} \nabla \xi_0(\gamma^{-1}x) = \nabla \Big( \xi_0(\gamma^{-1}x) \Big) = \nabla \Big( \xi_0(\gamma^{-1}x)-\xi_0(\gamma^{-1}o) \Big) = \nabla (\gamma \cdot \xi_0)(x) = \nabla \xi_0(x). \end{equation*} Thus, $\nabla \xi_0$ is a $\Gamma$-invariant vector field, and therefore descends to a vector field $V$ on $M$. Now, $\Div V = n-1$ since $\Div \nabla \xi_0 = \Delta \xi_0 \equiv n-1$, and moreover $\norm{V} \leqslant 1$.
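Spelling out why this is absurd: since $M$ has finite volume and $\norm{V} \leqslant 1$, both $\norm{V}$ and the constant function $\Div V \equiv n-1$ lie in $L^1(M)$, while
\begin{equation*}
\int_M \Div V \, d\mathrm{vol} = (n-1)\, \mathrm{vol}(M) > 0,
\end{equation*}
even though (as $M$ may be non-compact) the vanishing of this integral is exactly what an $L^1$ version of the divergence theorem would force.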
But, since $M$ has finite volume, there cannot exist a vector field $V$ with $\norm{V}, \Div V \in L^1(M)$ and $\Div V > 0$ (see for instance~\cite{K1981}). \end{proof} Putting together Proposition \ref{prop:special_busemann} and Proposition \ref{prop:final_step} finishes the proof of Theorem \ref{thm:riem_finite_volume}. \begin{remark} Proposition \ref{prop:final_step} is actually not necessary for the proof of Theorem \ref{thm:riem_finite_volume}. Indeed, since we assume that $X$ has bounded curvature, we can replace Proposition \ref{prop:final_step} by \cite[Theorem 6]{LW2010}. However, we included this result since it removes the need for the bounded curvature assumption from this step. In particular, we want to emphasize that the bounded curvature assumption is only used in order to get the heat kernel estimates needed for Proposition \ref{prop:special_busemann}. \end{remark} \section{Entropy rigidity for Hilbert metrics}\label{sec:Hilbert} We begin by observing that the Blaschke metric has bounded sectional curvature. For the definition and some properties of the Blaschke metric, we refer to \cite{Lof2001,BH2013}. \begin{lemma} \label{lem:bounded_curvature} Let $\Omega$ be a proper convex open set in $\Pb(\Rb^{n+1})$. There exists a universal constant $C_n$, depending only on the dimension, such that the sectional curvature of the Blaschke metric on $\Omega$ is bounded above by $C_n$ and below by $-C_n$. \end{lemma} \begin{proof} Benz\'ecri \cite{Benz1960} proved that the action of $\mathrm{PGL}_{n+1}(\Rb)$ on the set of pointed proper convex open sets \begin{align*} \mathcal{E}:= \{ (x,\Omega): \Omega \subset \Pb(\Rb^{n+1}) \text{ is a proper convex open set and } x \in \Omega\} \end{align*} is cocompact, so all we have to show is that the functions that associate to an element $(x, \Omega) \in \mathcal{E}$ the maximum and the minimum of the sectional curvature of the Blaschke metric at $x$ are $\mathrm{PGL}_{n+1}(\Rb)$-invariant and continuous.
The invariance is clear from the definition of the Blaschke metric, and the continuity follows from Corollary 3.3 in \cite{BH2013}. \end{proof} We next prove Theorem~\ref{thm:hilbert_finite_vol} from the introduction: \begin{theorem} Suppose $\Omega \subset \Pb(\Rb^{n+1})$ is a proper convex open set and there exists a discrete group $\Gamma \leqslant \Aut(\Omega)$ which acts properly, freely, and with finite co-volume (with respect to $\mu_B$). Then $\delta_\Gamma(\Omega, H_\Omega) \leqslant n-1$ with equality if and only if $\Omega$ is projectively isomorphic to $\Bc$ (and in particular $(\Omega, H_\Omega)$ is isometric to $\Hb^n$). \end{theorem} \begin{proof} Let $B_\Omega$ be the Blaschke metric on $\Omega$. Then \begin{enumerate} \item $\Gamma$ acts by isometries on $(\Omega, B_\Omega)$ and the action is proper and free, \item $B_\Omega$ has bounded sectional curvature by Lemma~\ref{lem:bounded_curvature}, \item $B_\Omega$ has Ricci curvature bounded below by $-(n-1)$ by a result of Calabi~\cite{C1972}, \item by Theorem~\ref{thm:tho}, $\delta_\Gamma(\Omega, B_\Omega) = n-1$, \item by \cite[Proposition 2.6]{BH2013}, $\Gamma \backslash \Omega$ has finite volume with respect to the Riemannian volume form induced by $B_\Omega$. \end{enumerate} Thus, the Blaschke metric satisfies all of the assumptions of Theorem \ref{thm:riem_finite_volume}, so $(\Omega, B_\Omega)$ is isometric to the real hyperbolic space. Hence, by definition of the Blaschke metric, $(\Omega, H_\Omega)$ is the Klein--Beltrami model of hyperbolic space (see \cite[Theorem 1]{Lof2001}). \end{proof} Since $\delta_\Gamma(\Omega, H_\Omega) = h_{vol}(\Omega, H_\Omega, \mu_B)$ when $\Gamma$ acts co-compactly on $\Omega$ we immediately deduce Theorem~\ref{thm:hilbert_compact} from the introduction: \begin{corollary} Suppose $\Omega \subset \Pb(\Rb^{n+1})$ is a proper convex open set and there exists a discrete group $\Gamma \leqslant \Aut(\Omega)$ which acts properly, freely, and cocompactly. 
Then $h_{vol}(\Omega, H_\Omega, \mu_B) \leqslant n-1$ with equality if and only if $\Omega$ is projectively isomorphic to $\Bc$ (and in particular $(\Omega, H_\Omega)$ is isometric to $\Hb^n$). \end{corollary} In order to prove Corollary \ref{cor:hilbert_finite_vol_2} from the introduction, we will need the following: \begin{proposition}\label{p:vice_versa} Suppose $\Omega \subset \Pb(\Rb^{n+1})$ is a proper convex open set and there exists a discrete group $\Gamma \leqslant \Aut(\Omega)$ which acts properly, freely, and with finite co-volume (with respect to $\mu_B$). Then $\Omega$ is strictly convex if and only if $\partial \Omega$ is $C^1$. \end{proposition} Before proving the proposition, we first deduce the corollary. \begin{corollary} Suppose $\Omega \subset \Pb(\Rb^{n+1})$ is a proper convex open set which is either strictly convex or has $C^1$ boundary and there exists a discrete group $\Gamma \leqslant \Aut(\Omega)$ which acts properly, freely, and with finite co-volume (with respect to $\mu_B$). Then $h_{vol}(\Omega, H_\Omega, \mu_B) \leqslant n-1$ with equality if and only if $\Omega$ is projectively isomorphic to $\Bc$ (and in particular $(\Omega, H_\Omega)$ is isometric to $\Hb^n$). \end{corollary} \begin{proof}[Proof, assuming Proposition \ref{p:vice_versa}] By Proposition~\ref{p:vice_versa}, $\partial \Omega$ is $C^1$ and $\Omega$ is strictly convex. Thus by~\cite[Th\'eor\`eme 9.2]{CM2014_geodesic_flow} \begin{equation*} h_{vol}(\Omega, H_\Omega, \mu_B) = \delta_\Gamma(\Omega, H_\Omega). \end{equation*} So the corollary follows from Theorem~\ref{thm:hilbert_finite_vol}. \end{proof} \subsection{Proof of Proposition \ref{p:vice_versa}} We begin by recalling some constructions and results. \subsubsection*{Duality} The dual of a proper convex open set $\Omega \subset \Pb(\Rb^{n+1})$ is the set \begin{align*} \Omega^* = \{ \varphi \in \Pb((\Rb^{n+1})^*) : \ker \varphi \cap \overline{\Omega} = \emptyset\}.
\end{align*} It is straightforward to verify that $\Omega^*$ is a proper convex open subset of $\Pb((\Rb^{n+1})^*)$. Associated with a point $p \in \Pb(\Rb^{n+1})$ is the hyperplane $p^{**}$ in $\Pb((\Rb^{n+1})^*)$ consisting of all $\varphi \in\Pb((\Rb^{n+1})^*)$ with $\varphi(p) = 0$. For convex sets, we have the following connection between the boundaries of $\Omega$ and $\Omega^*$: \begin{observation} Suppose $\Omega \subset \Pb(\Rb^{n+1})$ is a proper convex open set. Then $p \in \partial \Omega$ if and only if $p^{**}$ is a supporting hyperplane of $\Omega^*$. Likewise, $\varphi \in \partial \Omega^*$ if and only if $\ker \varphi$ is a supporting hyperplane of $\Omega$. \end{observation} Now, if $\Omega \subset \Pb(\Rb^{n+1})$ is a proper convex open set and $p \in \partial \Omega$, then $p$ is a $C^1$ point of $\partial \Omega$ if and only if there is a unique supporting hyperplane through $p$. So we have the following: \begin{observation} Suppose $\Omega \subset \Pb(\Rb^{n+1})$ is a proper convex open set. Then $\Omega$ is strictly convex if and only if $\partial \Omega^*$ is $C^1$. Likewise, $\Omega^*$ is strictly convex if and only if $\partial \Omega$ is $C^1$. \end{observation} Finally, any element $\gamma \in \mathrm{Aut}(\Omega)$ acts on $\Omega^*$ via the action on the dual of $\Rb^{n+1}$, that is $\gamma^*(\varphi) = \varphi \circ\gamma^{-1}$, where $\varphi \in (\Rb^{n+1})^*$. We will denote by $\Gamma^* \leqslant \Aut(\Omega^*)$ the dual group of any subgroup $\Gamma$ of $\mathrm{Aut}(\Omega)$. 
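As a sanity check on these definitions (a standard computation, included only for illustration), consider the ball $\Bc = \{ [x_0 : x] \in \Pb(\Rb^{n+1}) : \norm{x} < x_0 \}$ with $x \in \Rb^n$, and write $\varphi = [\varphi_0 : \varphi'] \in \Pb((\Rb^{n+1})^*)$ for the functional $\varphi(x_0,x) = \varphi_0 x_0 + \varphi' \cdot x$. The hyperplane $\ker \varphi$ is disjoint from $\overline{\Bc}$ if and only if $\norm{\varphi'} < \abs{\varphi_0}$: when $\norm{\varphi'} < \abs{\varphi_0}$ one has $\abs{\varphi' \cdot x} \leqslant \norm{\varphi'}\, x_0 < \abs{\varphi_0}\, x_0$ whenever $\norm{x} \leqslant x_0$ and $x_0 > 0$, while when $\norm{\varphi'} \geqslant \abs{\varphi_0}$ the choice $x = -(\varphi_0 x_0/\norm{\varphi'}^2)\, \varphi'$ produces a zero of $\varphi$ with $\norm{x} \leqslant x_0$. Hence
\begin{equation*}
\Bc^* = \{ [\varphi_0 : \varphi'] : \norm{\varphi'} < \abs{\varphi_0} \},
\end{equation*}
so the ball is projectively self-dual, consistent with the two observations above since $\Bc$ is both strictly convex and has $C^1$ boundary.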
\subsubsection*{Margulis constant} By \cite[Th\'eor\`eme 1]{CM_Margulis} or \cite[Theorem 0.1]{CLT2015}, in any dimension $n$, there exists a constant $\varepsilon_n>0$ (called a \emph{Margulis constant}) such that, for every proper convex open subset $\Omega$ of $\Pb(\Rb^{n+1})$, for every $x\in \Omega$, and for every discrete subgroup $\Gamma$ of $\mathrm{Aut}(\Omega)$, if $\Gamma_{\varepsilon_n}(x)$ is the group generated by the elements of $\Gamma$ that move $x$ a distance less than $\varepsilon_n$ then $\Gamma_{\varepsilon_n}(x)$ is virtually nilpotent. The \emph{thick part} of $\Omega$ is the closed subset of points $x \in \Omega$ such that $\Gamma_{\varepsilon_n}(x) = \{ 1\}$. The thick part of $\Gamma \backslash \Omega$ is the quotient of the thick part of $\Omega$ by $\Gamma$. The \emph{thin part} is the complement of the thick part. If $x$ is inside the thick part then, by definition, the restriction of the projection $\Omega \rightarrow \Gamma \backslash \Omega$ to the ball of radius $\frac{\varepsilon_n}{2}$ centered at $x$ is injective. The theorem of Benz\'ecri \cite{Benz1960} mentioned in the proof of Lemma \ref{lem:bounded_curvature} implies that the $\mu_B$-volume of the $H_{\Omega}$-ball of center $x$ and radius $\frac{\varepsilon_n}{2}$ is bounded from below by a constant independent of $x$ or $\Omega$. So, if the quotient $\Gamma \backslash \Omega$ has finite volume then the thick part of $\Gamma \backslash \Omega$ is compact, since it can contain only finitely many disjoint balls of radius $\frac{\varepsilon_n}{2}$. \subsubsection*{About the automorphisms of $\Omega$} Let $\Gamma$ be a torsion-free, finite type discrete subgroup of $\textrm{Aut}(\Omega)$.
Suppose that $\Omega$ is strictly convex or has $C^1$ boundary. Then each non-trivial element $\gamma \in \Gamma$ is either \emph{hyperbolic}, that is, $\gamma$ fixes exactly two points of $\overline{\Omega}$ which are on the boundary, or is \emph{parabolic}, that is, it fixes exactly one point of $\overline{\Omega}$ which is on the boundary (see \cite[Th\'eor\`eme 3.3]{CM_geo_fini}, or \cite[Proposition 2.8]{CLT2015}). Suppose $G \leqslant \Gamma$ is a subgroup generated by two elements $\gamma,\delta$. Then, \begin{enumerate} \item either $G$ is virtually nilpotent and \begin{enumerate} \item either every element of $G$ is hyperbolic and has the same fixed points, \item or every element of $G$ is parabolic and has the same fixed point; \end{enumerate} \item or $G$ contains a free group and no point in $\overline{\Omega}$ is fixed by every element of $G$ \end{enumerate} (see \cite[section 3.5]{CM_geo_fini}, or \cite[Proposition 4.13 and 4.14]{CLT2015}). Let $\varepsilon >0$ be a Margulis constant. Let $\Lambda$ be a maximal parabolic subgroup of $\Gamma$, that is, a non-trivial stabilizer of a point $p\in \partial \Omega$ which contains a parabolic element (and hence, according to the above remarks, contains only parabolic elements). We define $$ \Omega_{\varepsilon}(\Lambda) = \{ x \in \Omega \, | \,\exists \gamma \in \Lambda ,\, H_{\Omega}(x,\gamma x) \leqslant \varepsilon \}. $$ This region is $p$-star-shaped, that is, for every $x \in \Omega_{\varepsilon}(\Lambda)$ the line segment joining $x$ to $p$ is contained in $ \Omega_{\varepsilon}(\Lambda)$. Moreover, if $\Lambda,\Lambda'$ are two distinct maximal parabolic subgroups of $\Gamma$ then $\Omega_{\varepsilon}(\Lambda) \cap \Omega_{\varepsilon}(\Lambda') =\varnothing$ (see \cite[Lemme 6.2]{CM_geo_fini}).
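To illustrate the hyperbolic/parabolic dichotomy in the model case (a standard example, included only for concreteness and not used in the sequel), take $\Omega = \Bc$ realized as $\{ [x_0 : x] : \norm{x} < x_0 \}$, whose projective automorphism group preserves the form $x_0^2 - \norm{x}^2$. The boost
\begin{equation*}
\gamma_t = \begin{pmatrix} \cosh t & 0 & \sinh t \\ 0 & I_{n-1} & 0 \\ \sinh t & 0 & \cosh t \end{pmatrix}, \qquad t \neq 0,
\end{equation*}
is hyperbolic: its only fixed points in $\overline{\Bc}$ are the boundary points $[1 : 0 : \dots : 0 : \pm 1]$, and it translates along the segment joining them. For $n = 2$, the unipotent element
\begin{equation*}
\begin{pmatrix} 1 + s^2/2 & s & -s^2/2 \\ s & 1 & -s \\ s^2/2 & s & 1 - s^2/2 \end{pmatrix}, \qquad s \neq 0,
\end{equation*}
preserves $\Bc$ and is parabolic: its unique fixed point in $\overline{\Bc}$ is $[1 : 0 : 1] \in \partial \Bc$.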
\begin{proof}[Proof of Proposition~\ref{p:vice_versa}] By \cite[Corollary 6.7]{CLT2015}, the quotient $\Gamma \backslash \Omega$ has finite volume if and only if the dual quotient $\Gamma^* \backslash \Omega^*$ also has finite volume. Hence, we only have to show that if $\Omega$ is strictly convex and $\Gamma \backslash \Omega$ has finite volume then $\partial \Omega$ is of class $C^1$. Suppose that $\Omega$ is strictly convex. We want to use \cite[Theorem 0.15]{CLT2015} to conclude that $\partial \Omega$ is of class $C^1$. In order to apply that theorem, we need to prove that $\Gamma \backslash \Omega$ is topologically tame and that the holonomy of each boundary component is parabolic. Fix a Margulis constant $\varepsilon > 0$. Since $\Gamma \backslash \Omega$ has finite volume, the thick part of $\Gamma \backslash \Omega$ is compact (by the above remarks). Since $\Omega$ is strictly convex, a connected component $\mathcal{H}$ of the thin part is of one of two types: either it is a lift of a Margulis tube, that is, of a tubular neighborhood of a closed geodesic of length less than $\varepsilon$, or it is preserved by a maximal parabolic subgroup $\Lambda$ of $\Gamma$ and $\mathcal{H}=\Omega_{\varepsilon}(\Lambda)$ (see \cite[Lemma 8.2]{CLT2015}). Since there are only finitely many closed geodesics of length less than $\varepsilon$, the thick part of $\Gamma \backslash \Omega$ together with all the Margulis tubes is still compact. Hence, if $\mathcal{H}$ is a connected component of the thin part and not a lift of a Margulis tube, the action of $\Lambda$ on $\partial \mathcal{H}$ is cocompact. Since $\mathcal{H}$ is $p$-star-shaped, the quotient $\Gamma \backslash \Omega$ is topologically tame and the holonomy of each boundary component is parabolic. We can thus apply \cite[Theorem 0.15]{CLT2015} to conclude that $\partial \Omega$ is of class $C^1$.
\end{proof} \appendix \section{Proof of Proposition~\ref{prop:heat_kernel}}\label{sec:heat_kernel} For the rest of the section suppose that $(X,g)$ is a complete non-compact simply connected Riemannian manifold with $\Ric \geqslant -(n-1)$ and bounded sectional curvature. For a function $f \colon X \rightarrow \Rb$ define the function $P_t(f) \colon X \rightarrow \Rb$ by \begin{equation*} P_t(f) (x) = \int_X p_t(x,y) f(y) dy. \end{equation*} We will need some estimates on the heat kernel: \begin{lemma}\cite[Theorem 4 and Theorem 6]{CLY1981}\label{lem:hk_est} With the notation above, for any $T > 0$, there exists $C > 0$ such that \begin{equation*} p_t(x,y) \leqslant Ct^{-\frac{n}{2}}\exp \left( \frac{-d(x,y)^2}{Ct} \right) \end{equation*} and \begin{equation*} \norm{\nabla_x p_t(x,y)} \leqslant Ct^{-\frac{n+1}{2}}\exp \left( \frac{-d(x,y)^2}{Ct} \right) \end{equation*} for all $t \in (0,T]$ and $x,y \in X$. \end{lemma} \begin{proof}[Proof of Proposition~\ref{prop:heat_kernel}] Recall that $F_t(x,\xi) = P_t(\xi)(x)$. We claim that for any $\xi \in \partial \wh{X}$ \begin{equation*} (\partial_t - \Delta_x)F_t(x,\xi) = 0 \end{equation*} in the sense of distributions. Once this is established part (1) and part (2) follow from standard regularity results (see for instance~\cite[Theorem 7.4]{G2009}). Let $\phi \in C_c^\infty(X \times \Rb_+)$. By Lemma~\ref{lem:hk_est}, $\Delta_x( \phi(x,t)) p_t(x,y) \xi(y)$ and $\partial_t (\phi(x,t)) p_t(x,y) \xi(y)$ are in $L^1(X \times X \times \Rb_+, dxdydt)$. 
Then, using Fubini and the fact that $\partial_t p_t(x,y) = \Delta_x p_t(x,y)$, we obtain \begin{align*} \int_{X\times \Rb_+}& \Delta_x \phi(x,t) P_t(\xi)(x) dxdt = \int_X \left( \int_{X\times \Rb_+} \Delta_x \phi(x,t) p_t(x,y) dxdt\right) \xi(y) dy \\ & = \int_X \left( \int_{X\times \Rb_+} \phi(x,t) \Delta_x p_t(x,y) dxdt\right) \xi(y) dy = \int_X \left( \int_{X\times \Rb_+} \phi(x,t) \partial_t p_t(x,y) dxdt\right) \xi(y) dy \\ &= -\int_X \left( \int_{X\times \Rb_+} \partial_t \phi(x,t) p_t(x,y) dxdt\right) \xi(y) dy = -\int_{X\times \Rb_+} \partial_t \phi(x,t) P_t(\xi)(x) dxdt. \end{align*} \indent Thus \begin{equation*} (\partial_t - \Delta_x)F_t(x,\xi) = 0 \end{equation*} in the sense of distributions. So part (1) and (2) are established. Now, by~\cite{BE1984}, since $\Ric \geqslant -(n-1)$, if $f \in C_c^\infty(X)$ then \begin{equation*} \norm{\nabla P_t(f)}_{\infty} \leqslant e^{(n-1)t} \norm{\nabla f}_{\infty}. \end{equation*} Moreover, for any $\xi \in \partial \wh{X}$, there exists a sequence $f_m \in C_c^\infty(X)$ such that $f_m$ converges to $\xi$ locally uniformly and $\norm{\nabla f_m}_{\infty} \rightarrow 1$ (see, for instance, \cite{AFR2007}). Hence, each $P_t(f_m)$ is $e^{(n-1)t} \norm{\nabla f_m}_{\infty}$-Lipschitz. Moreover, by Lemma~\ref{lem:hk_est} and the dominated convergence theorem, $P_t(f_m)(x) \rightarrow P_t(\xi)(x)$ for all $x \in X$. Thus, $P_t(\xi)$ is $e^{(n-1)t}$-Lipschitz and \begin{equation*} \norm{\nabla_x F_t(x,\xi)} \leqslant e^{(n-1)t}. \end{equation*} Now fix some non-negative $\phi \in C_c^\infty(X)$. Then we have \begin{multline*} \int_{X} \Delta_x \phi(x) P_t(\xi)(x) dx = \int_X \left( \int_{X} \phi(x) \Delta_x p_t(x,y) dx\right) \xi(y) dy \\ = \int_X \left( \int_{X} \phi(x) \Delta_y p_t(x,y) dx\right) \xi(y) dy = \int_X \Delta_y P_t(\phi)(y) \xi(y) dy. \end{multline*} For $r > 0$, let $\varphi_r : X \rightarrow \Rb$ be as in Lemma~\ref{lem:nice_cut_off}. 
Then \begin{multline*} \int_X \Delta_y P_t(\phi)(y) \xi(y) dy = \int_X \Delta_y \Big( \varphi_r(y) P_t(\phi)(y)\Big) \xi(y) dy + \int_X \Delta_y \Big( (1-\varphi_r)(y) P_t(\phi)(y)\Big) \xi(y) dy \\ \leqslant (n-1) \int_X \varphi_r(y) P_t(\phi)(y) dy + \int_X \Delta_y \Big( (1-\varphi_r)(y) P_t(\phi)(y)\Big) \xi(y) dy. \end{multline*} Using the dominated convergence theorem once again, we have \begin{equation*} \lim_{r \rightarrow \infty} \int_X \varphi_r(y) P_t(\phi)(y) dy = \int_X P_t(\phi)(y) dy = \int_X \phi(x) dx. \end{equation*} \indent Moreover, since integration by parts holds for Lipschitz functions, \begin{multline*} \abs{ \int_X \Delta_y \Big( (1-\varphi_r)(y) P_t(\phi)(y)\Big) \xi(y) dy }= \abs{\int_X\ip{ \nabla_y \Big( (1-\varphi_r)(y) P_t(\phi)(y)\Big), \nabla_y \xi(y)} dy} \\ \leqslant \frac{C}{r} \int_X P_t(\phi)(y) dy + \int_{ X \setminus B_r(o)} \norm{\nabla P_t(\phi)(y)} dy. \end{multline*} Now, \begin{equation*} \nabla P_t(\phi)(y) = \int_X \nabla_y p_t(x,y) \phi(x) dx, \end{equation*} and so, by Lemma~\ref{lem:hk_est}, \begin{equation*} \norm{\nabla P_t(\phi)(y)} \in L^1(X, dy). \end{equation*} Thus, \begin{equation*} \lim_{r \rightarrow \infty} \int_{ X \setminus B_r(o)} \norm{\nabla P_t(\phi)(y)} dy = 0. \end{equation*} This implies that \begin{align*} \lim_{r \rightarrow \infty} \int_X & \Delta_y \Big( (1-\varphi_r)(y) P_t(\phi)(y)\Big) \xi(y) dy = 0, \end{align*} and thus \begin{equation*} \int_{X} \Delta_x \phi(x) P_t(\xi)(x) dx \leqslant (n-1) \int_X \phi(x) dx. \end{equation*} Since $\xi \in \partial \wh{X}$ was arbitrary and $\phi \in C_c^\infty(X)$ is an arbitrary non-negative function, we see that \begin{equation*} \Delta_x F_t(x,\xi) \leqslant n-1. \qedhere \end{equation*} \end{proof} \end{document}
\begin{document} \title{Custom fermionic codes for quantum simulation} \author{Riley W. Chien} \email{Riley.W.Chien.gr@dartmouth.edu} \affiliation{Department of Physics and Astronomy, Dartmouth College, Hanover, New Hampshire 03755} \author{James D. Whitfield} \email{James.D.Whitfield@dartmouth.edu} \affiliation{Department of Physics and Astronomy, Dartmouth College, Hanover, New Hampshire 03755} \begin{abstract} Simulating a fermionic system on a quantum computer requires encoding the anti-commuting fermionic variables into the operators acting on the qubit Hilbert space. The most familiar such encoding, the Jordan-Wigner transformation, maps fermionic operators to non-local qubit operators. As non-local operators lead to a slower quantum simulation, recent works have proposed ways of encoding fermionic systems locally. In this work, we show that locality may in fact be too strict a condition and the size of operators can be reduced by encoding the system quasi-locally. We give examples relevant to lattice models of condensed matter as well as to systems of interest in quantum gravity, such as SYK models. Further, we provide a general construction for designing codes to suit the problem and resources at hand and show how one particular class of quasi-local encodings can be thought of as arising from truncating the state preparation circuit of a local encoding. We end with a discussion of designing codes in the presence of device connectivity constraints. \end{abstract} \maketitle \section{Introduction} A well-known duality between spins and fermions due to Jordan and Wigner \cite{jordanwigner} has been famously employed in the solution of spin chains \cite{lieb1961two}. Recent years have seen new applications of spin-fermion dualities as a way of encoding systems of indistinguishable fermions into a system of distinguishable qubits. Such transformations are employed in the simulation of fermionic systems on quantum computers.
The idea of a quantum simulator was conceived in \cite{feynman1999simulating} and further expanded upon in \cite{lloyd1996universal}. It is now expected that quantum computers will become an invaluable tool in the study of physical properties of strongly correlated systems that are out of reach of classical computers, such as those arising in quantum chemistry \cite{mcardle2020quantum}. Target problems include chemical reaction mechanisms \cite{reiher2017elucidating} and the Hubbard model \cite{abrams1997simulation}. When working in second quantization, it is necessary to encode the anti-commuting nature of the fermionic operators into the local qubit degrees of freedom. The solution used in the Jordan-Wigner transformation (JW) is to map local operators on one side of the duality to non-local operators on the other side. When mapping from fermions to spins, the non-locality takes the form of Pauli $Z$ strings: \begin{align} a_j &\to \frac{1}{2}\prod_{k<j} Z_k (X_j + i Y_j)\\ a_j^{\dag} &\to \frac{1}{2}\prod_{k<j} Z_k (X_j - i Y_j). \label{eq: jw} \end{align} In particular, the JW strings can lead to operators as large as the system size. Such non-local operators are known to lead to larger gate counts in the quantum simulation experiment. Specifically, the number of qubits an operator acts on nontrivially, a number we refer to as the Pauli weight, determines the required number of CNOT gates to perform a rotation generated by that operator. Thus, lowering the Pauli weight is an important consideration, in particular for the Trotterization paradigm of time evolution, which may be used in phase estimation, as a component of a variational state preparation step, or in quantum imaginary time evolution \cite{motta2020imaginary}. It is important to mention that in regard to phase estimation, recent work has sought to reduce the T-gate complexity as this is likely to determine the dominant time cost of a simulation given the magic state paradigm of universal fault tolerance \cite{bravyi2005universal}.
The present work does not seek to address that issue; nevertheless, we expect our results to be of broad interest. In addition to Jordan-Wigner, a number of other mappings that map $N$ fermionic modes to $N$ qubits are known. These include the parity mapping and the Bravyi-Kitaev mapping \cite{bravyi2002fermionic}. Recently, Jiang et al. put forward an encoding based on ternary trees, which they show to have an optimal average Pauli weight across all Majorana operators defined on the system \cite{jiang2020optimal}. Bravyi and Kitaev also proposed in \cite{bravyi2002fermionic} the superfast encoding, an explicit method of encoding a system of fermions into qubit degrees of freedom in such a way as to keep all operators local. This work was closely followed by \cite{ball2005fermions,verstraete2005mapping}. The Bravyi-Kitaev construction was explored in \cite{whitfield2016local,setia2018,chien2019analysis} and generalized by Setia et al. \cite{setia2019superfast} into the construction we will employ here. A number of other encodings which have sought to reduce the necessary qubit resources and Pauli weights of transformed operators have also been proposed \cite{jiang2019majorana,steudtnerAQM}. We mention in particular the low weight encoding of Derby and Klassen \cite{derby2020low} as their construction bears the lowest qubit requirement and operator Pauli weight of known encodings on the square lattice. An investigation of the resulting gauge theory was given by Chen et al. along with a construction that gave a more careful treatment of spin structures in \cite{chen2018exact,chen2019exact3d}. Local bosonization has also been implemented as a tensor network operator in \cite{shukla2020tensor}, mapping fermionic tensor networks to bosonic tensor networks.
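To make the Pauli-weight discussion above concrete, the following toy sketch (the helper names are ours and not tied to any particular package) builds the Jordan-Wigner image of a Majorana operator on mode $j$ as a Pauli string and counts its weight, exhibiting the linear growth that motivates lower-weight encodings:

```python
def jw_majorana(j, kind, n):
    """Jordan-Wigner image of the Majorana operator attached to mode j
    (kind 0 -> the X-type string, kind 1 -> the Y-type string) on n modes:
    Z on every mode before j, X or Y on mode j, identity afterwards."""
    assert 0 <= j < n and kind in (0, 1)
    return "Z" * j + ("X" if kind == 0 else "Y") + "I" * (n - j - 1)


def pauli_weight(pauli_string):
    """Number of qubits on which a Pauli string acts non-trivially."""
    return sum(p != "I" for p in pauli_string)


n = 8
# Weight grows linearly with the mode index: the first mode's operators act
# on a single qubit, while the last mode's act on all n qubits.
assert pauli_weight(jw_majorana(0, 0, n)) == 1       # "XIIIIIII"
assert pauli_weight(jw_majorana(n - 1, 1, n)) == n   # "ZZZZZZZY"
```

Averaging over all $2n$ Majorana strings gives weight $(n+1)/2$; this linear scaling is what tree-based mappings reduce to roughly logarithmic in $n$ \cite{bravyi2002fermionic,jiang2020optimal}.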
Recent progress has also been made in constructing spin duals of fermionic models in translationally invariant settings using an algebraic formalism \cite{tantivasadakarn2020jordan}, with a focus on topological and fracton models. Understanding these local encodings is best done through the toric code model. This famous model, proposed by Kitaev in \cite{kitaev2003fault}, is a model for a topological quantum memory. For our purposes here, it will be most important to note that there are four basic anyon excitations: the vacuum \textbf{1}, a bosonic charge $e$, a bosonic flux $m$, and a fermionic composite of the charge and flux $\varepsilon = e\times m$. These particles are all created, destroyed, and moved at the endpoints of local string operators, made possible by the fact that the system is topologically ordered \cite{levin2003fermions,levin2005string}. The presence or absence of fermionic anyons at a site on the lattice corresponds to the occupancy of the fermionic mode associated with that site. Finally, all known local fermionic encodings are equivalent to the toric code defined on some lattice up to a constant-depth local Clifford circuit. For example, the Derby-Klassen low weight encoding \cite{derby2020low} uses as its underlying topological state the Wen plaquette model \cite{wen2003quantum}, which is equivalent to the toric code up to a local basis change on half of the qubits. Thus, the initialization step of a quantum simulation utilizing a local encoding is equivalent to preparing a toric code state. We will return to this point at times throughout the paper when it becomes important. In this paper, we first review a general construction for local fermionic encodings before presenting our main result, a generalization of fermionic encodings that is fully customizable to suit the available resources as well as the problem geometry.
The encoding presented here also encompasses a number of existing methods as special cases which we mention along the way. We give a number of examples including lattice models relevant to condensed matter physics and highly connected systems relevant to the quantum gravity community which frustrate existing encoding methods. Finally, given that a number of quantum computing architectures are subject to limited connectivity, we discuss designing codes under device connectivity constraints. \section{Review of local fermionic encoding} In this section we review a certain construction introduced by Setia et al. in \cite{setia2019superfast} for representing fermionic systems in terms of qubits in a local fashion. This method is reminiscent of the construction of \cite{majorana_dualities} for generating highly entangled states using Majorana modes at ends of nanowires. Typically, our problem setting will be that we are interested in some property of a system of $N$ fermionic modes with dynamics governed by a Hamiltonian \begin{equation} H = \sum_{jk} h_{jk} a^{\dag}_{j}a_{k} + \sum_{jklm} h_{jklm} a^{\dag}_{j}a^{\dag}_{k}a_{l}a_{m} + \dots \label{eq: fermionic_ham} \end{equation} where the fermionic operators satisfy the usual anti-commutation relations $\{a_j, a^{\dag}_k \} = a_j a^{\dag}_k + a^{\dag}_k a_j= \delta_{jk}$ and $\{a_j, a_k \} = \{a^{\dag}_j, a^{\dag}_k \} = 0$. As explored in \cite{zohar2018eliminating}, the Hamiltonian could also couple the fermions to a gauge field, for example in the context of a high energy physics simulation. It will be useful to work in the Majorana basis of fermionic operators \begin{align} a_{j} &= \frac{1}{2}(\godd{j} + i\gev{j})\\ a_{j}^{\dag} &= \frac{1}{2}(\godd{j} - i\gev{j})\\ \{\gamma_{j},\gamma_{k}\} &= 2\delta_{jk}. 
\end{align} From these, we can form two types of operators quadratic in the Majoranas which we will from here on refer to as edge and vertex operators, \begin{align} A_{jk} &= -i \godd{j}\godd{k} \quad \text{ (edge)}\label{eq:edge_op} \\ \label{eq:vertex_op}B_j &= -i \godd{j}\gev{j} \quad \text{ (vertex)}. \end{align} These operators suffice to generate the full algebra of parity-preserving fermionic operators. The vertex operator $B_j$ is the parity operator for the mode $j$ and the edge operator $A_{jk}$ is involved in all hopping, pairing, and scattering processes. Note that edge operators involve only odd-indexed Majoranas. Appropriately multiplying an edge operator by vertex operators allows for coupling two fermionic modes by their even-indexed Majoranas. For example, a hopping term can be expressed as \begin{equation} a_j^{\dag}a_k + a_k^{\dag}a_j = -i (A_{jk}B_k + B_j A_{jk})/2. \end{equation} \begin{figure} \caption{(a) Local Majorana modes for a degree $d=6$ vertex. (b) Two types of plaquette stabilizers for the triangular lattice with the above encoded Majoranas. (Left) Orange lines connecting Majoranas indicate the corresponding edge operators. (Right) The corresponding qubit operators.} \label{fig:triangular_GSE} \end{figure} We here explicitly place the fermionic system on a graph, which we refer to as the interaction graph, with fermionic modes associated to vertices $j\in V$. When one writes the fermionic Hamiltonian in terms of edge and vertex operators, if an edge operator $A_{jk}$ coupling modes $j,k$ is required to write the Hamiltonian in this form, then an edge is placed on the graph connecting vertices $j$ and $k$, $(j,k)\in E$. The graph is then given as the pair $\Gamma = (V,E)$. Further, we can identify the set of plaquettes on the graph $P$, and the boundary of a given plaquette $p$ is the set of edges which form it, $\partial p$.
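The hopping identity above can be checked numerically. The following Python sketch (our own verification aid; it uses a two-mode JW matrix representation of the four Majoranas purely for concreteness) builds the edge and vertex operators and confirms the identity:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Two modes j, k; JW matrix representation of the four Majoranas
# gamma_odd(j), gamma_even(j), gamma_odd(k), gamma_even(k):
g = [kron(X, I2), kron(Y, I2), kron(Z, X), kron(Z, Y)]
a_j = (g[0] + 1j * g[1]) / 2
a_k = (g[2] + 1j * g[3]) / 2

A_jk = -1j * g[0] @ g[2]   # edge operator  -i gamma_odd(j) gamma_odd(k)
B_j  = -1j * g[0] @ g[1]   # vertex (parity) operator for mode j
B_k  = -1j * g[2] @ g[3]   # vertex (parity) operator for mode k

# Hopping term equals -i (A_jk B_k + B_j A_jk) / 2:
hop = a_j.conj().T @ a_k + a_k.conj().T @ a_j
assert np.allclose(hop, -1j * (A_jk @ B_k + B_j @ A_jk) / 2)
```

The vertex operators dress the edge operator so that the even-indexed Majoranas enter the coupling, exactly as described in the text.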
For each vertex $j$ on the graph, we place a number of qubits equal to half the degree of the vertex, $d(j)/2$ (strictly $\lceil d(j)/2 \rceil$, but we will assume graphs of even degree for simplicity). We next define $d(j)$ Pauli operators acting on the $d(j)/2$ qubits on site $j$ which we will refer to as local Majoranas, \begin{equation} c_{j}^{1},\dots,c_{j}^{d(j)}\in \mathcal{P}_{d(j)/2}. \label{eq:local_majoranas} \end{equation} We use the letter $c$ for the local Majoranas to distinguish these Pauli operators from the physical Majorana fermion operators $\gamma$. These local Majoranas may be defined in any way, subject to the following properties: (1) they must satisfy Majorana anti-commutation relations with the other operators defined on that vertex (they commute with operators defined on other vertices) and (2) they must generate the full Pauli group on $d(j)/2$ qubits. Any choice of definition for the local Majoranas is related to any other choice by a Clifford circuit acting only on qubits at that vertex. In the figures throughout this paper, we will always use Jordan-Wigner to encode the local Majoranas on a given vertex starting from the top clockwise, $\{c_j^1,c_j^2,c_j^3,c_j^4,\dots\} \to \{X_{j1},Y_{j1},Z_{j1}X_{j2},Z_{j1}Y_{j2}, \dots\}$ (see Fig. \ref{fig:triangular_GSE}). A choice of definition for the Majoranas based on Fenwick trees can give a decrease in the Pauli weight from $O(d)$ to $O(\log(d))$. As such, the Fenwick tree choice will be preferable for graphs of large degree. We refer readers to \cite{setia2019superfast,whitfield2016local} for a discussion of Fenwick tree encodings. The recently proposed ternary tree construction of \cite{jiang2020optimal} would provide a further reduction in Pauli weight. Each local Majorana is associated to one of the edges incident to the vertex.
Each edge then has two local Majoranas associated to it, one from each vertex at its endpoints, and we will define our encoded edge operators to be \begin{equation} \tilde{A}_{jk} = \epsilon_{jk}c_j^{p}c_k^{q} \label{eq:encoded_edge} \end{equation} where $\epsilon_{jk} = +1$ if $j<k$ and $\epsilon_{jk} = -1$ if $j>k$, and where $k$ is the $p$-th neighbor of $j$ and $j$ is the $q$-th neighbor of $k$. Encoded vertex operators are a product of all Majoranas at a given vertex, \begin{equation} \tilde{B}_j = i^{d(j)/2} c_j^1 c_j^2 \dots c_j^{d(j)}. \label{eq:encoded_vertex} \end{equation} If Jordan-Wigner is used to encode the local Majoranas on a given site, then the vertex operator will be Pauli Z acting on all the qubits on that vertex so the occupancy of the mode is stored in the collective parity. Another choice could be made such that the occupancy is stored in a single qubit. One could achieve this with an appropriate application of CNOTs. Finally, for each plaquette $p$ on the graph, we have a stabilizer which is given by a product of all the edge operators around the boundary of the plaquette \begin{equation} S(\partial p) = i^{|\partial p|} \prod_{(j,k)\in \partial p} \tilde{A}_{jk}. \label{eq:stabilizer} \end{equation} Whereas in the toric code the plaquette stabilizers detect the presence of a flux, here the plaquette stabilizers detect the presence of a flux without an accompanying charge and vice-versa. In dimensions higher than 1, the logical subspace is that in which charges have fluxes attached. Recall that charge-flux pairs are fermionic in nature, which provides the basis for this construction. An example on a triangular lattice is shown in Fig. \ref{fig:triangular_GSE}. For geometries with non-contractible loops, e.g. a torus, we fix boundary conditions to be periodic with a stabilizer consisting of a product of edge operators around the non-contractible loop. As a final consideration, we consider the odd fermionic parity sector.
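To see eq. (\ref{eq:stabilizer}) in action, the following sketch (a minimal three-mode triangle of our own choosing with one qubit per degree-2 vertex and an assumed neighbor ordering, not the paper's triangular-lattice figure) verifies that the plaquette stabilizer squares to the identity and commutes with all encoded edge and vertex operators:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def op(paulis):  # place single-qubit Paulis on a 3-qubit register
    out = np.array([[1.0 + 0j]])
    for p in paulis:
        out = np.kron(out, p)
    return out

# Local Majoranas on each one-qubit vertex: c^1 = X, c^2 = Y (JW choice).
# Encoded edge operators on the triangle (j < k, so all epsilon = +1):
A01 = op([X, X, I2])   # c_0^1 c_1^1
A12 = op([I2, Y, Y])   # c_1^2 c_2^2
A02 = op([Y, I2, X])   # c_0^2 c_2^1
# Vertex operators are Z (up to sign) on each single-qubit vertex:
B = [op([Z, I2, I2]), op([I2, Z, I2]), op([I2, I2, Z])]

# Plaquette stabilizer for the triangle, |boundary| = 3 edges:
S = (1j) ** 3 * A01 @ A12 @ A02

assert np.allclose(S @ S, np.eye(8))        # S squares to the identity
for O in [A01, A12, A02] + B:
    assert np.allclose(S @ O, O @ S)        # S commutes with all operators
```

In this tiny example the stabilizer works out to $Z_0 Z_1 Z_2$, the total-parity operator of the loop, consistent with the discussion of non-contractible loops above.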
On a graph with even degree it is not possible to directly encode an odd number of particles. One must introduce an additional non-physical auxiliary mode and create a pair of particles with one occupying the auxiliary mode. The total parity of the physical modes is then odd and the odd-parity simulation can proceed. If however there is a vertex on the graph with an odd degree, then there will be a single unpaired Majorana operator on that vertex. Acting with the unpaired Majorana operator can create or destroy a single particle at that vertex without violating any of the stabilizers. As a simple example without any stabilizers, we consider a 1D chain with open boundaries. At the two ends of the chain, there are two unpaired local Majorana operators. If, for example, we act with the unpaired Majorana on the first site in the chain, we create a single particle. In this way, acting with unpaired Majorana operators changes our parity sector. We can then proceed with our odd-parity sector simulation. In the next section we show that if we pair the local Majorana operators at the ends of the chain, we impose periodic boundary conditions and lose the ability to enter the odd parity sector. \subsection{1D chain recovers Jordan-Wigner} To further illustrate the construction in a familiar setting that we hope will give some intuition for the later sections, we will encode a 1D chain of fermions with an onsite potential and periodic boundary conditions. The Hamiltonian of this system is \begin{align} H &= t\sum_j (a^{\dag}_{j+1} a_j + a^{\dag}_j a_{j+1} ) + U\sum_j a^{\dag}_j a_j\\ &= \frac{-it}{2}\sum_j (A_{j,j+1}B_{j+1} + B_j A_{j,j+1}) + U\sum_j B_j. \label{eq:free_fermions} \end{align} Each vertex in this 1D chain has degree $d=2$, so a single qubit is placed at each vertex. We choose $c_j^1 = Y_j$, $c_j^2 = X_j$ for the Majoranas at each site. All the edge operators are then $\tilde{A}_{j,j+1} = X_{j}Y_{j+1}$ and the vertex operators are $\tilde{B}_j = Z_j$.
The transformed Hamiltonian then takes the familiar form of the XY chain \begin{equation} \tilde{H} = \frac{t}{2}\sum_j (X_{j}X_{j+1} + Y_{j}Y_{j+1}) + U\sum_j Z_j. \label{eq: XY_chain} \end{equation} Notice that given the basis chosen for the local Majoranas at each vertex, the Jordan-Wigner transformation is recovered. Indeed, an edge operator between two modes not nearest-neighbor connected is necessarily a product of edge operators in a path connecting the two targeted vertices \begin{align} A_{j,j+n} &= A_{j,j+1}\dots A_{j+n-1,j+n}\\ \tilde{A}_{j,j+n} &= (-i)^{n-1}X_j Z_{j+1} \dots Z_{j+n-1} Y_{j+n} \label{eq:JW_strings} \end{align} giving back the Jordan-Wigner strings of Pauli Zs that all local fermionic encodings are attempting to alleviate. Finally, as we have periodic boundary conditions and thus a (large) loop in our interaction graph, we have a stabilizer which is the product of all edge operators around the loop. This operator, which is given by $S = \prod_j Z_j$, corresponds to the fact that global fermionic parity is preserved and constrained to be even. We could as well choose not to restrict to the subspace stabilized by the loop, as previously described in our discussion of the odd parity sector. In that case, edge operators coupling sites $1,N$ would have a Pauli weight extensive in the system size. The total fermionic parity will always be a symmetry of the Hamiltonian by virtue of only even parity operators being physical. The above shows a consequence of imposing the total parity to be a symmetry of the states as well. This clear physical interpretation does not however generalize to plaquette stabilizers in higher dimensions.
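The string formula (\ref{eq:JW_strings}) can be verified directly by multiplying encoded nearest-neighbor edge operators; a small numerical sketch (our own check, on a hypothetical 4-qubit open chain):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def op(paulis):
    out = np.array([[1.0 + 0j]])
    for p in paulis:
        out = np.kron(out, p)
    return out

n = 4  # qubits on the chain (hypothetical small size)
# Encoded nearest-neighbor edge operators X_j Y_{j+1}:
edges = [op([I2] * j + [X, Y] + [I2] * (n - j - 2)) for j in range(n - 1)]

# Multiply the edge operators along the path 0 -> n-1:
string = edges[0]
for e in edges[1:]:
    string = string @ e

# Expect the generalized JW string (-i)^{n-2} X_0 Z_1 ... Z_{n-2} Y_{n-1}:
expected = (-1j) ** (n - 2) * op([X] + [Z] * (n - 2) + [Y])
assert np.allclose(string, expected)
```

The intermediate $Y_{j+1} X_{j+1} = -iZ_{j+1}$ products are what turn the chain of weight-2 edge operators into the familiar Z string.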
\section{Main result: Custom fermionic codes} \begin{figure} \caption{(a) Two nearest-diagonal-neighbor couplings (left) fermion picture (right) qubit picture (b) The same two nearest-diagonal-neighbor couplings with diagonal edges omitted from the system graph.} \label{fig:next-nearest_edge_omitted} \end{figure} Our main result is centered on the idea that regardless of the interaction graph determined by the Hamiltonian terms, we may choose to encode the system into whatever geometry we wish. As such, we will discuss two separate graphs for the remainder of the paper: the interaction graph as determined by the Hamiltonian and the system graph that we will encode with our qubit system. The system graph must have at least as many vertices as the interaction graph in order to accommodate the fermionic modes. Also, if an edge operator coupling two modes is present when the Hamiltonian is expressed using the operators of (\ref{eq:edge_op} - \ref{eq:vertex_op}), then a path must connect those two vertices on the system graph. Otherwise the system graph may be arbitrary. We will discuss a number of useful modifications of the interaction graph. We will briefly discuss sparsifications of the interaction graph - simply omitting edges in the case of lattice models. We then discuss using a virtual geometry including virtual modes that provide paths across which interactions may take place. There we give an example of a system featuring all-to-all interactions. Next, we give a construction that allows for balancing qubit resource requirements and code locality in the case of lattice models through a blocking construction. We will find it convenient at that time to discuss state preparation. Finally, we will discuss encodings in the presence of constraints on connectivity between qubits using the recently proposed heavy-hexagon lattice as an example.
\subsection{Omitting edges} As shown above in (\ref{eq:JW_strings}), if two modes are meant to be coupled with an edge operator but do not share an edge on the system graph, the two modes can still be coupled together, but the interaction will not be strictly local. This leads to a generalization of the JW Z string where the intermediary vertices are each acted on by a product of two local Majoranas. These generalized Jordan-Wigner strings are similar to string operators in quantum error correcting codes. For concreteness, consider a $L\times L$ square lattice of fermionic modes interacting with nearest-neighbors and nearest-diagonal-neighbors, those across the diagonal of a square. An example of such a problem is the nearest-diagonal-neighbor Hubbard model. The vertices of the interaction graph are then of degree $d(j)=8$. If we choose the system graph to match the interaction graph, then we require $4$ qubits at every vertex giving $4L^2$ qubits in total. As shown in Fig. \ref{fig:next-nearest_edge_omitted}b, we omit the diagonal edges from our system graph such that each vertex has only degree $d(j)=4$. The diagonal edge operators are then a product of two nearest-neighbor edge operators. The path taken around the square plaquette does not matter, as the upper and lower path in each case are equal up to multiplication by a stabilizer. We see that in this case, the Pauli weight of the qubit operators is smaller for the two paths without the diagonal edges in the system graph. This can be seen in Fig. \ref{fig:next-nearest_edge_omitted}. The presence of the additional qubits also increases the Pauli weight of the nearest-neighbor edge operators and the vertex operators. So, we have shown that strict locality is not always optimal and relaxing to quasi-locality is in some cases beneficial.
The system graph can be sparsified arbitrarily relative to the interaction graph to save qubits, as long as paths connect modes that must be coupled with edge operators. In this construction, the qubit requirement is determined by the vertex degrees and not the number of edges as in the local mappings of \cite{bravyi2002fermionic,chen2018exact}. Thus, if a reduction of the qubit requirement is sought, one should delete edges with the aim of reducing the degrees of vertices, with target degrees being even numbers. We would like to mention here that if our system has a square lattice interaction graph and we sparsify the graph in certain ways, we recover the auxiliary qubit mappings of Steudtner and Wehner \cite{steudtnerAQM}. Thus, this construction contains the auxiliary qubit mappings as a special case. \subsection{Adding virtual modes} \begin{figure} \caption{(left) Schematic of an operator coupling two modes at the boundary of the MERA geometry. The orange arc connecting the two endpoints indicates the path of the generalized Jordan-Wigner string. The path through the virtual space depicted in gray is shorter than the path along the boundary. (right) The qubit operator corresponding to a coupling with the generalized Jordan-Wigner string following the discrete geodesic.} \label{fig:mera_geodesic} \end{figure} Additional virtual modes can be added to the system graph, which can in some cases lead to a reduction in the Pauli weight of the transformed Hamiltonians. As always, we require closed loops in the system graph to be stabilized by the corresponding plaquette operators. Virtual modes are stabilized by their vertex operators as they will always be unoccupied and so have parity $+1$. Using virtual modes to reduce Pauli weights could be especially useful in cases where one has a complete or nearly complete interaction graph.
Nearly complete interaction graphs (within spin sectors) are known to arise in small molecular Hamiltonians using atomic orbital basis sets leading to large simulation costs with strictly local encodings \cite{chien2019analysis}. A number of all-to-all interacting fermionic models have also become popular in recent years in the study of scrambling of quantum information and of AdS/CFT, the most notable of which is the SYK model \cite{SY_model,Sachdev_blackhole}. \subsubsection{All-to-all coupled fermions} We now give an example of a system for which the quantum simulation cost is decreased by using virtual modes. The SYK model consists of $2N$ Majorana fermions with random strength $q$-body interactions coupling all Majoranas. Proposals regarding quantum simulation of the $q=4$ SYK model have previously been put forward \cite{garcia2017digitalSYK,babbush2019quantumSYK,cao2020towardsSYK}. We will consider the $q=2$ case, \begin{equation} H = -i\sum_{j < k}^{2N} J_{jk}\gamma_{j}\gamma_{k}. \label{eq:all-to-all_Hamiltonian} \end{equation} We will pair the $2N$ Majorana fermions into $N$ complex fermions and let the indices now run over complex fermions. It will also be convenient for us to break the terms up by whether the Majoranas are odd or even indexed \begin{multline} H =\\ -i \sum_{j < k}^{N} J_{2j-1,2k-1} \godd{j}\godd{k} -i\sum_{j \leq k}^N J_{2j-1,2k} \godd{j}\gev{k}\\ -i\sum_{j<k}^N J_{2j,2k-1} \gev{j}\godd{k} -i \sum_{j < k}^{N} J_{2j,2k} \gev{j}\gev{k} \end{multline} We can then write the Hamiltonian in terms of edge and vertex operators \begin{multline} H = \sum_{j<k}^{N} J_{2j-1,2k-1} A_{jk} +\sum_j^N J_{2j-1,2j}B_j\\ +i\sum_{j< k}^N J_{2j-1,2k} A_{jk}B_k +i\sum_{j<k}^N J_{2j,2k-1} B_j A_{jk} \\ -\sum_{j<k}^{N} J_{2j,2k} A_{jk} B_j B_k \end{multline} The interaction graph is a complete graph so each vertex has degree $d(j)=N-1$. 
We calculate the qubit requirement and the Pauli weight for a number of different system geometries: \begin{enumerate} \item A complete graph with $N$ vertices (Complete Graph) \item A 1D chain with periodic boundary conditions (Linear) \item A geometry with a single virtual mode of degree $N$ connected to all physical modes, which only connect to the central virtual mode (N Branches) \item Ternary tree with physical modes at the leaves and virtual vertices of degree $4$ (Ternary Tree) \item Ternary MERA-like geometry with virtual vertices of degree $4$ (Ternary MERA) (a cutout is shown in Fig. \ref{fig:mera_geodesic}) \item Hyperbolic geometry where each vertex is degree $6$ and faces are $4$-sided. Physical modes are identified with legs at the boundary of the disk. ($d=6$ Hyperbolic Tiling) \end{enumerate} \begin{figure} \caption{(Top) Qubit requirements to encode the system of $N$ complex fermions ($2N$ Majorana fermions) given the specified geometry (Bottom) The worst-case Pauli weight of the transformed Hamiltonians.} \label{fig:all-to-all_data} \end{figure} Some of these geometries have taken their inspiration from tensor networks such as MERA \cite{vidal_1D,swingle2012entanglement} as well as existing literature regarding hyperbolic codes \cite{pastawski2015holographic}. For the geometries containing vertices with vertex degrees growing with system size, we use the Fenwick tree as proposed in \cite{setia2019superfast}, which reduces the worst-case weight of operators to logarithmic in the degree. Note also that the ternary tree geometry used here is unrelated to the ternary tree construction of \cite{jiang2020optimal}. We present the qubit requirements and the worst-case Pauli weight for each geometry vs the number of complex fermions $N$ in Fig. \ref{fig:all-to-all_data}.
For the tree, MERA, and hyperbolic geometries, the Pauli weight data presented should be interpreted as approximate given that only certain numbers of modes completely fill levels in the hierarchical constructions. We now summarize the results shown in Fig. \ref{fig:all-to-all_data}. All geometries except the complete graph required a number of qubits scaling linearly in the number of modes; the complete graph qubit requirement scaled quadratically in the number of modes. All geometries except the linear chain provided a Pauli weight that scaled as $O(N^2 \log{N})$, while the linear geometry had a Pauli weight that scaled as $O(N^3)$. \begin{figure*} \caption{(a) A section of a square lattice of fermionic modes (white circles) with nearest neighbor interaction graph (gray lines). Schematics of two edge operators are shown in orange (the operators are products of Majoranas from the two fermionic modes). (b) The lattice is partitioned into $2\times 2$ blocks of fermionic modes (white). The system graph (gray) is modified from the nearest-neighbor square lattice in (a) to a coarser lattice of blocks. The top left mode in each block remains in the lattice while the remaining modes within the block are connected as a 1D chain. The same two edge operators from (a) are shown here. Operators coupling modes within a block run along the chain. Operators coupling sites in adjacent blocks have a generalized Jordan-Wigner string that traverses the lattice from one block to the next. (c) The qubit-encoded version of the system graph is shown. Each vertex on the lattice, corresponding to a fermionic mode, contains either one or three qubits. The qubit operator versions of the edge operators from (a) and (b) are shown. All operators remain local with respect to the coarse-grained lattice.} \label{fig: blocked_lattice} \end{figure*} In this case, we see that matching the system graph to the interaction graph exactly is not the most economical encoding strategy.
It has the greatest qubit requirement of all geometries shown, $N(N-1)/2$, while giving a Pauli weight scaling that is comparable to the other geometries. The opposite limit, the 1D geometry, requires the fewest qubits, $N$, but due to the non-locality of the string operators, the Pauli weight scales as $O(N^3)$. Geometries where we include virtual modes which provide shorter paths between vertices perform favorably at the cost of more qubits. For example, we can include a single central mode connected to all others which serves as a midpoint for interactions. Using this geometry, all couplings are products of two edge operators from the physical modes to the virtual one and we require $1.5N$ qubits. The Pauli weight still scales as $O(N^2 \log{N})$ but with the Fenwick tree encoding of local Majorana operators was the lowest of all geometries tested. The other three geometries presented in Fig. \ref{fig:all-to-all_data} also require a number of qubits that is linear in $N$. The hierarchical structure of the geometries means that points separated by $l$ on the boundary have paths through the virtual space that are of length $\sim\log{l}$. As a result, they offer Pauli weights that scale as $O(N^2 \log{N})$. In the case of the ternary MERA and $d=6$ hyperbolic geometries we can give a nice interpretation: the couplings feature generalized JW string operators traversing discrete versions of geodesics in the virtual hyperbolic geometry, Fig. \ref{fig:mera_geodesic}. We note that in a number of the geometries investigated above, we placed physical modes at the boundaries of a hyperbolic disk or at the leaves of a tree while the other modes were considered virtual, i.e. not corresponding to physical modes. We could also have chosen to associate the vertices in the bulk of the graph with physical modes. This would provide a savings in the required number of qubits.
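The qubit counts quoted here all follow from the rule of $\lceil d(j)/2\rceil$ qubits per vertex. A short sketch (our own bookkeeping; $N$ is a hypothetical system size, chosen odd so that all degrees in the complete graph are even) reproduces the counts for three of the geometries:

```python
import math

def qubit_count(degrees):
    """Total qubits = sum over vertices of ceil(d/2)."""
    return sum(math.ceil(d / 2) for d in degrees)

N = 101  # number of complex fermionic modes (hypothetical; odd)

complete = qubit_count([N - 1] * N)      # complete system graph
chain    = qubit_count([2] * N)          # periodic 1D chain
branches = qubit_count([N] + [1] * N)    # central virtual mode + N leaves

assert complete == N * (N - 1) // 2      # quadratic in N
assert chain == N                        # one qubit per mode
assert branches == N + (N + 1) // 2      # roughly 1.5N qubits
```

The same helper applied to the tree, MERA, and hyperbolic geometries gives the linear-in-$N$ requirements plotted in Fig. \ref{fig:all-to-all_data}.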
Previous proposals for simulation of the SYK model have used Jordan-Wigner, which in the $q=2$ case leads to a Pauli weight of $O(N^3)$. As we have shown, this can be reduced to $O(N^2 \log{N})$. Thus we can reduce the complexity of the simulation with a more careful consideration of encoding geometry. Given that simulations of all-to-all interacting systems can benefit from a virtual geometry, it would be interesting to extend our investigation to sparsified graphs, for example the sparsified SYK model proposed in \cite{xu2020sparseSYK}. In addition to contributing towards progress in studies of quantum chaos and holography, we believe that investigations of quantum simulations of such highly connected systems will also benefit simulations of quantum chemistry, as chemical Hamiltonians can also feature regions of high connectivity as we have shown in \cite{chien2019analysis}. \subsection{Qubit vs. locality trade-off with blocking} We will now consider an approach to finding a balance between locality and qubit requirement for a system on a $L\times L$ square lattice. A linear (JW) geometry will require $L^2$ qubits but will suffer from long JW strings, whereas a strictly local encoding will require roughly $2L^2$ qubits (minus a few on the boundaries). Again, we can reduce the required number of qubits by relaxing the necessity for strictly local interactions. We will divide the system into a number of blocks. Interactions within blocks will be non-local, incurring Jordan-Wigner strings that increase in length with the size of the blocks. Interactions between blocks, however, remain local in that the Pauli weight of operators is independent of the size of the full lattice. The blocking goes as follows. Partition the lattice into the desired number of blocks, $b$, determined by the available resources. Treat the modes within the block as though they were on a 1D chain. The lattice is then a coarser lattice with each vertex connected to a 1D chain.
The first mode in the chain remains connected to the lattice such that those vertices are of degree $d=5$ in the bulk of the system and so require $3$ qubits. The total number of qubits required is then $L^2 + 2b$ (up to boundaries). With this construction, we are free to interpolate between a strictly local encoding ($L^2$ blocks of size $1$) and a Jordan-Wigner-like encoding ($1$ block of size $L^2$) by choosing blocks of the desired size. The idea of using a segmented encoding was explored in \cite{whitfield2016local}, where segmented versions of the Bravyi-Kitaev and Fenwick tree encodings were applied to the 2D Hubbard model. \subsubsection{As truncated state preparation} We now hope to provide an intuitive picture for the blocking construction. This will also be a convenient time to address state preparation. For simplicity, we will again consider a square lattice of dimension $L\times L$. As previously mentioned, this encoding manages to encode the fermions in a local way by representing them as excitations of the toric code which are odd under particle exchange. Thus, to use this encoding, one must prepare a toric code state, which is well known to be topologically ordered and therefore long-range entangled \cite{chen2010local}. Utilizing a MERA quantum circuit, one can prepare a toric code state by introducing entanglement scale-by-scale, beginning with long-range entanglement and ending with entanglement at the final lattice scale \cite{aguado2008entanglement}. The circuit is comprised of a number of levels $U = U_1\dots U_{\log{L}}$. Each level $k$ takes as input a state on a lattice of linear size $l$ and a number of ancilla qubits and outputs a state of linear size $2l$, \begin{equation} U_{k} \ket{\psi_l}\ket{0\dots 0} = \ket{\psi_{2l}}.
\label{eq:mera_circ} \end{equation} The state at each level has four times as many plaquettes as the previous level and so, with corrections at the boundaries, has about four times as many qubits in addition to the $L^2$ qubits associated to the fermionic modes. Again, at the final level, the total number of qubits is $2L^2$ with corrections at the boundaries. Each layer consists of acting with Hadamards and CNOTs locally with respect to the lattice at the given level. The exact form of the circuit can be found in \cite{aguado2008entanglement}. Upon application of the $\log{L}$ layers of the circuit, the toric code state is prepared. At this point, a constant depth unitary is performed to satisfy the stabilizers, which differ slightly from those of the true toric code model. The exact form of this circuit depends on the basis chosen for the local Majoranas. If a Jordan-Wigner basis is chosen, the circuit consists merely of a single layer of Pauli X and Z gates. \begin{figure} \caption{Here we show the trade-off between qubit requirement and operator locality. The circles represent the fermionic modes while the lines represent the lattice of the topologically ordered state underpinning the local encoding. Each level represents a case in which one additional level of the state preparation circuit is applied, creating a finer lattice for the underlying toric code. Operators are local only with respect to the lattice spacing of the toric code state. At the top level with no topological order, the fermionic operators may be fully non-local e.g. long JW strings. At the bottom level, the operators are fully local with respect to the lattice spacing of the fermions but twice as many qubits are required. } \label{fig:coarse_grain} \end{figure} The blocking construction can be thought of as a truncation of the state preparation circuit. Truncating the circuit results in a coarse-grained toric code state relative to the lattice of fermionic modes.
As such, the fermionic operators, which are local due to the topological order of the toric code state, are now only local with respect to the toric code lattice. Operators may be non-local up to the scale of the toric code lattice spacing. The benefit, however, is a savings in qubit resources as each MERA layer requires additional qubits. Thus, by utilizing topological order on a coarse-grained lattice, one can realize the trade-off between operator locality and qubit requirement as depicted in Fig. \ref{fig:coarse_grain}. Although each level of the MERA unitary is local with respect to the lattice at each level, it is not local with respect to the final lattice of qubits. With strictly local operations, preparing the topologically ordered toric code state takes a time proportional to the linear size of the lattice \cite{bravyi2006lieb}. \subsubsection{Further generalizations} We now discuss a number of ways the above construction can be generalized. Going beyond a square lattice of blocks, one could perform the partitioning of the system of $N$ modes into a set of general sites $S=\{s_1,\dots,s_{|S|}\}$, each containing a number of modes $n(s_i)$ and where each site is connected to $d(s_i)$ others on the lattice of sites. Then, given the construction above, the total number of qubits required would be \begin{equation} \# \text{ of qubits } = N + \sum_i^{|S|} \left\lceil\frac{d(s_i)}{2}\right\rceil. \label{eq: qubits_general_sites} \end{equation} Further, one is free to choose any encoding for the modes within each site. For example, one could choose to encode the modes on a 1D chain as described, or choose to use a Fenwick tree \cite{whitfield2016local} or Jiang et al.'s ternary tree encoding \cite{jiang2020optimal}.
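A quick sketch (our own check; the lattice and block sizes are hypothetical, and boundary corrections are ignored) confirms that eq. (\ref{eq: qubits_general_sites}) reduces to the $L^2 + 2b$ count quoted earlier when the sites are square blocks in the bulk of a square lattice:

```python
import math

def qubits_general_sites(n_modes, site_degrees):
    """N + sum_i ceil(d(s_i)/2), the general-sites qubit count."""
    return n_modes + sum(math.ceil(d / 2) for d in site_degrees)

L = 8              # linear lattice size (hypothetical)
block = 4          # modes per block (2x2 blocks)
b = L * L // block # number of blocks

# In the bulk of the coarse square lattice each block-site has degree 4,
# contributing ceil(4/2) = 2 extra qubits, so the total is L^2 + 2b.
assert qubits_general_sites(L * L, [4] * b) == L * L + 2 * b
```

Choosing fewer, larger blocks lowers the second term at the cost of longer intra-block strings, which is the trade-off depicted in Fig. \ref{fig:coarse_grain}.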
To emphasize the generality of the construction we are proposing here and to illustrate how the geometry of the interactions should inform the geometry of the qubit system, we would like to sketch how one might encode a system of recent interest: a lattice of SYK islands \cite{gu2017latticeSYK,song2017latticeSYK}. The system is a lattice of islands $S=\{s_1,\dots,s_{|S|}\}$, each containing $n(s_i)$ Majorana fermions, with two types of interactions: quartic interactions of random strength between Majoranas within each island, and quadratic interactions between Majoranas on adjacent islands of the lattice. We propose that such a system would be best simulated as a lattice of sites, where the modes on each site are placed at vertices of the ``$N$ branches'' geometry or a hierarchical geometry as described above. The chosen geometry is then attached to the lattice at the central vertex. We highlight such a system as it frustrates many of the existing encoding schemes, featuring both highly connected regions as well as a notion of locality. Finally, we reiterate that one is free to use any basis for the local Majorana operators at each vertex. A Jordan-Wigner encoding was chosen for simplicity but an improvement in Pauli weight can be achieved by using a different local basis. For example, a Fenwick tree basis for local Majorana operators as proposed in \cite{setia2019superfast} or Jiang et al.'s ternary tree basis \cite{jiang2020optimal} would give a Pauli weight for Majorana operators logarithmic in the number of qubits. \subsection{Device connectivity constraints} \begin{figure} \caption{(top left) The heavy-hexagon geometry as presented in \cite{chamberland2020topological}. 65 qubits are shown. (top right) We show a choice of grouping pairs of qubits together to encode the degree $3$ vertices. (bottom) We show the system graph where each larger circle corresponds to one of the 49 encoded fermionic modes.
Smaller circles represent the qubits associated to each mode.} \label{fig:heavy_hex} \end{figure} Many of the quantum computing platforms under development are subject to connectivity constraints. Notably, these include superconducting qubits which have recently been used to achieve a ``quantum supremacy'' result \cite{arute2019quantumsupremacy}. The processor used in the supremacy experiment features qubits laid out on a square grid with nearest-neighbor connectivity. Recent work on increasing the capabilities of quantum computers as measured in so-called quantum volume \cite{cross2019validating} has led to progress on devices with qubits laid out on lower-degree graphs. In particular, the heavy-hexagon lattice has been identified as a candidate system geometry for realizing quantum error correction while mitigating hardware challenges presented by cross-talk and frequency collisions \cite{chamberland2020topological}. This lattice features qubits placed on vertices of a hexagonal lattice as well as on edges. On such devices subject to connectivity constraints, it may be preferable to encode a fermionic system into a graph informed by the connectivity of the device. To that end, we present here a candidate geometry for encoding a fermionic system into a heavy-hexagon lattice. In Fig. \ref{fig:heavy_hex}, we show a $65$ qubit heavy-hexagon lattice. To each degree $3$ vertex, we associate an additional qubit. Thus, with $16$ degree $3$ vertices, we are able to encode 49 fermionic modes into the 65 qubit heavy-hexagon lattice. As above, coupling modes which do not share an edge in the system geometry will require edge operators containing generalized Jordan-Wigner strings. Designing device-specific encodings for other platforms would proceed similarly, identifying qubits to group together to encode vertices of appropriate degree and embedding a problem within the device-informed geometry. We leave an investigation of optimal device-specific geometries to future work.
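The mode count above follows from simple bookkeeping: every degree-$3$ vertex of the system graph absorbs one extra device qubit. A minimal sketch of this counting rule (our illustration; the degree sequence below is assumed, not read off the actual device):

```python
def encoded_modes(num_qubits, degrees):
    """Modes encodable on a device: one qubit per encoded vertex, with one
    extra device qubit consumed by every degree-3 vertex of the system graph."""
    return num_qubits - sum(1 for d in degrees if d == 3)

# The numbers quoted in the text: 65 device qubits and 16 degree-3 vertices
# leave 49 fermionic modes. (Remaining degrees are a hypothetical filler.)
degrees = [3] * 16 + [2] * 33
print(encoded_modes(65, degrees))  # 49
```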
\section{Conclusion} In this paper, we have presented a very general construction for designing quantum codes to simulate fermions on quantum computers. The construction realizes the trade-off between qubit resources and operator locality in such a way that one can tailor the encoding to best fit the resources at hand. We have also shown that in some cases, locality is too strict a constraint and one is better off seeking a quasi-local encoding. We showed that this occurs in systems such as square lattice models with nearest- and nearest-diagonal-neighbor coupling, where it is best to simply encode the square lattice and realize the diagonal couplings quasi-locally. We also presented the case of a fermionic system with all-to-all connectivity and demonstrated that one should encode this system with a virtual geometry so that generalized Jordan-Wigner strings traverse paths through the virtual geometry. We discussed how quasi-local codes can be interpreted as local codes with truncated state preparation circuits. Finally, we considered the design of custom codes to suit device connectivity constraints. We expect that the encoding construction presented here will find use in quantum simulation experiments ranging from quantum chemistry to quantum gravity to condensed matter physics. \end{document}
\begin{document} \begin{frontmatter} \title{Computing controlled invariant sets for hybrid systems with applications to model-predictive control} \author[First]{Beno\^{i}t Legat} \author[Second]{Paulo Tabuada} \author[Third]{Rapha\"{e}l M. Jungers} \newcommand{\email}[1]{\emph{\texttt{#1}}} \address[First]{ICTEAM, Universit\'e catholique de Louvain, 4 Av. G. Lema\^itre, 1348 Louvain-la-Neuve, Belgium (e-mail: \email{benoit.legat@student.uclouvain.be})} \address[Second]{Department of \new{Electrical and Computer} Engineering, UCLA, (e-mail: \email{tabuada@ee.ucla.edu})} \address[Third]{ICTEAM, Universit\'e catholique de Louvain, 4 Av. G. Lema\^itre, 1348 Louvain-la-Neuve, Belgium (e-mail: \email{raphael.jungers@uclouvain.be})} \begin{abstract} \,\,\,\,In this paper, we develop a method for computing controlled invariant sets using Semidefinite Programming. We apply our method to the controller design problem for switching affine systems with polytopic safe sets. The task is reduced to a semidefinite programming problem by enforcing \new{an} invariance relation in the dual space of the geometric problem. The paper ends with an application to safety critical model predictive control. \end{abstract} \begin{keyword} Controller Synthesis; Set Invariance; LMIs; Scalable Methods. \end{keyword} \end{frontmatter} \section{Introduction} The problem of computing a controlled invariant set is a paradigmatic challenge in the broad field of Hybrid Systems control. Indeed, it is for instance crucial in safety-critical applications, such as the control of a platoon of vehicles or air traffic management; see \cite{tomlin1998conflict}, where firm guarantees are needed on our ability to maintain the state in a safe region (e.g.\new{,} with a certain minimal distance between vehicles). In other situations, the dynamical system might be too complicated to analyze exactly in every point of the state space, but yet it can be possible to confine the state within a guaranteed set. 
Such situations occur frequently in hybrid, embedded, or event-triggered systems, because of the complexity of the dynamics. A set is \emph{controlled invariant} (sometimes also referred to as \emph{viable}) if any trajectory whose initial point is in the set can be kept inside it by means of a proper control action. Given a system with constraint specifications on the states and/or input, the controlled invariant set can be used to determine initial states \new{such that trajectories with these initial conditions are guaranteed to} meet the specifications. Moreover, \new{in some situations,} a state feedback control law can be derived from the knowledge of the controlled invariant set; see \cite{blanchini1999survey} for a survey. The computation of invariant sets is usually \new{achieved using either} polyhedral computations or semidefinite programming. Polyhedral computations are typically restricted to affine constraint specifications but it has been \new{recently shown} that they can also be applied to algebraic constraints; see~\cite{athanasopoulos2016computing}. If the system contains a control input, the computational complexity of the problem becomes even more challenging. Indeed, this requires (see e.g., the procedure p.~201 in \cite{blanchini2015set}) the computation of projections of polytopes when using polyhedral computations, and semidefinite programming techniques are not directly applicable. Methods based on polyhedral computations for hybrid control systems have been developed in \cite{rungger2013specification, smith2016interdependence, rungger2017computing}. Unfortunately, the problem of polyhedral projection is well known to severely suffer from the curse of dimensionality, see \cite{avis1995good}, and the additional complexity of the discrete dynamics in hybrid systems makes the problem even less scalable for these systems. The semidefinite programming approach sacrifices exactness of the solution for the sake of algorithmic tractability.
In the case of an uncontrolled system $x_{k+1} = Ax_k$, it consists in searching for an ellipsoidal set \[ \mathcal{E}_P = \{\, x \in \R^n \mid x^\Tr P x \leq 1 \,\} \] such that if $x^\Tr P x \leq 1$ then $x^\Tr A^\Tr P A x \leq 1$. Indeed, one can verify that this condition implies invariance of the set $\mathcal{E}_P$. The S-procedure allows one to formulate the search for $P$ as a semidefinite program; see \cite{polik2007survey} for a survey on the S-procedure. With the presence of the control $u$ in the system $x_{k+1} = Ax_k + Bu_k$, the condition becomes: \[ x^\Tr P x \leq 1 \Rightarrow \exists u, (A x + Bu)^\Tr P (A x + Bu) \leq 1. \] The control term $u$, or more precisely the existential quantifier $\exists$, prevents the S-procedure from being directly applied. \new{\cite{kurzhanski2005verification} show how to compute an over- and under-approximation of the reachable sets of a hybrid control system. While they approximate \emph{reachable sets} and do not compute \emph{controlled invariant sets}, their approach bears similarities with the method presented in this paper. However, their technique does not rely on semidefinite programming as they propagate ellipsoidal sets and do not need to enforce any invariance property.} In \cite{korda2014convex}, a semidefinite programming method is proposed for the computation of an outer approximation of the maximal controlled invariant sets. While the set computed with this method can be a good approximation of the maximal controlled invariant set, it is an outer approximation and is not controlled invariant unless the approximation is exact. In this paper, we give a general method that circumvents this issue. A key ingredient in our technique is that we work in the dual space of the geometric problem. We detail the application of the method to two classes of hybrid systems: Discrete-Time Affine Hybrid Control System (\new{HCS{} for short}) and Discrete-Time Affine Hybrid Algebraic System (HAS{} for short).
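As a numerical aside (a sketch of ours, not part of the paper's development): in the uncontrolled case, a valid $P$ can even be found without optimization by solving the discrete Lyapunov equation $P = A^\Tr P A + Q$ with $Q \succ 0$, which exists whenever $A$ is stable and certifies $A^\Tr P A = P - Q \preceq P$. A minimal implementation in plain NumPy:

```python
import numpy as np

def invariant_ellipsoid(A, Q=None):
    """Solve P = A^T P A + Q (discrete Lyapunov equation) via the
    vectorized linear system; for stable A, E_P = {x : x^T P x <= 1}
    is invariant since A^T P A = P - Q <= P."""
    n = A.shape[0]
    if Q is None:
        Q = np.eye(n)
    # With column-major vec: vec(A^T P A) = (A^T kron A^T) vec(P).
    M = np.eye(n * n) - np.kron(A.T, A.T)
    p = np.linalg.solve(M, Q.flatten(order="F"))
    return p.reshape((n, n), order="F")

A = np.array([[0.5, 0.1], [0.0, 0.4]])  # a stable toy system (assumed data)
P = invariant_ellipsoid(A)
# Invariance certificate: P - A^T P A equals Q = I, which is positive definite.
print(np.linalg.eigvalsh(P - A.T @ P @ A))
```

This shortcut is unavailable once the control term $u$ appears, which is precisely the difficulty addressed in the sequel.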
HAS{} are not control systems but the computation of invariant sets for such systems presents the same features as for HCS{}. As a matter of fact, we show how to reduce the computation of controlled invariant sets \new{for} HCS{} to the computation of invariant sets \new{for} HAS{}. In this paper we break the problem into four subproblems, which we solve separately. In \secref{constraints}, we show how to reduce the computation of controlled invariant sets of a HCS{} with \emph{constrained} input to controlled invariant sets of a HCS{} with \emph{unconstrained} input. Then, in \secref{dtahas}, we give the reduction of the computation of controlled invariant sets of a HCS{} with unconstrained input to invariant sets of a HAS{}. In \secref{duality}, we detail the relation between the algebraic invariance condition of HAS{} on a convex set and its polar set and we discuss how to lift the state space to handle non-homogeneity. In \secref{ellipsoids}, we show that using the results of \secref{duality}, the invariance of ellipsoids for a HAS{} can be formulated as a semidefinite program. \new{We end the paper with an application of the ellipsoidal controlled invariant sets to safety-critical model predictive control. We show that precomputing such sets allows one to guarantee safety of the model predictive controller while removing the need for expensive long-horizon computations.} \section{Controlled Invariant Set} In this section, we define HCS{} and HAS{} and give the invariance conditions for these two classes of hybrid systems. We detail the relation between controlled invariant sets of HCS{} and invariant sets of HAS{}. \subsection{Discrete-Time Affine Hybrid Control System} \label{sec:dtahcs} We will consider the following definition of Discrete-Time Affine Hybrid Control System.
\begin{mydef} \label{def:dtahcs} A \emph{Discrete-Time Affine Hybrid Control System (HCS{})} is a system $S = (T, (A_\sigma, B_\sigma, c_\sigma)_{\sigma \in \Sigma},\\ (\mathcal{P}_q, \mathcal{U}_q)_{q \in \Nodes})$ where $T = (\Nodes, \Sigma, \to)$ \new{and $\to \subseteq \Nodes \times \Sigma \times \Nodes$}. \new{A trajectory is a sequence $\{(x_k,u_k,\sigma_k)\}_{k \in \mathbb{N}}$} satisfying for all $k \in \mathbb{N}$: \begin{align*} x_{k+1} & = A_{\sigma_{\new{k}}} x_k + B_{\sigma_{\new{k}}} u_k + c_{\sigma_{\new{k}}},\\ x_k \in \mathcal{P}_{q_k}, u_k &\in \mathcal{U}_{q_k}, q_k \trs[\sigma_{\new{k}}] q_{k+1}. \end{align*} \end{mydef} Given a \node{} $q \in \Nodes{}$, we denote the set of allowed switching signals as $\Sigma_q$, the state dimension as $n_{q,x}$ and the input dimension as $n_{q,u}$. \begin{figure} \caption{Illustration for \exemref{cruise1} with two trailers.} \label{fig:cruise} \end{figure} We illustrate this definition with the cruise control example of \cite{rungger2013specification}. \begin{myexem} \label{exem:cruise1} We consider a truck with $M$ trailers as represented by \figref{cruise}. There is a truck with mass $m_0$ and speed $v_0$ followed by multiple trailers with mass $m$ each. The speed of the $i$th trailer is denoted $v_i$. There is a spring with stiffness $k_s$ and elongation $d_1$ (resp. $d_i$) and a damper with coefficient $k_d$ between the truck and the first trailer (resp. the $(i-1)$th trailer and the $i$th trailer). The scalar input $u$ controls the speed $v_0$ of the truck by creating a force $m_0u$. The dynamics of the system is given by the following equations: \begin{align} \notag \dot{v}_0 & = \frac{k_d}{m_0}(v_1 - v_0) - \frac{k_s}{m_0} d_1 + u\\ \notag \dot{v}_i & = \frac{k_d}{m}(v_{i-1} - 2v_i + v_{i+1}) + \frac{k_s}{m} (d_i - d_{i+1}) & 1 \leq i < M\\ \label{eq:truckdyn} \dot{v}_M & = \frac{k_d}{m}(v_{M-1} - v_M) + \frac{k_s}{m} d_M\\ \notag \dot{d}_i & = v_{i-1} - v_i & 1 \leq i \leq M.
\end{align} The spring elongation should always remain between $-\SI{0.5}{\meter}$ and $\SI{0.5}{\meter}$ and the speeds of the truck and trailers should remain between $\SI{5}{\meter\per\second}$ and $\SI{35}{\meter\per\second}$. Moreover, there are three speed limits $\bar{v}_a = \SI{15.6}{\meter\per\second}$, $\bar{v}_b = \SI{24.5}{\meter\per\second}$, $\bar{v}_c = \SI{29.5}{\meter\per\second}$ and whenever the truck is informed of a new speed limit, it has \SI{0.8}{\second} to decrease $v_i$ ($0 \leq i \leq M$) below the speed limit. We sample time with a period of \SI{0.4}{\second} and define an initial \node{} $q_{d0}$ and 6 \nodes{} $q_{ij}$ where $i \in \{a, b, c\}$ is the current speed limitation and $j \in \{0, 1\}$ is the number of sampling times left to satisfy the limit. The transitions are $q_{ij} \trs[\sigma] q_{\sigma 1}$ for each $i \in \{a, b, c, d\}$ and $\sigma \in \{a, b, c, d\} \setminus \{i\}$. The symbol $a$ (resp. $b$, $c$) represents that the truck sees a new speed limitation $\bar{v}_a$ (resp. $\bar{v}_b$, $\bar{v}_c$) and $d$ represents that it does not see any new speed limitation. We suppose for simplicity that it is not possible to see a new speed limitation $\bar{v}_\sigma$ from a \node{} $q_{\sigma j}$. \new{The possible transitions are} represented in \figref{cruise1}. \begin{figure} \caption{Transitions and switchings between the \nodes{} for \exemref{cruise1}. Nodes $q_{b1}$ and $q_{b0}$ are not shown for clarity. } \label{fig:cruise1} \end{figure} The reset maps $(A_\sigma, B_\sigma, c_\sigma)$ are simply the integration of the dynamical system \eqref{eq:truckdyn} over \SI{0.4}{\second} with a zero-order hold input extrapolation. 
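The zero-order-hold integration that produces the reset maps can be carried out exactly with the standard augmented-matrix exponential. Here is a minimal sketch (ours, not from the paper; illustrated on a toy double integrator rather than on the truck dynamics of \eqref{eq:truckdyn}, and assuming SciPy is available):

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(Ac, Bc, T):
    """Exact zero-order-hold discretization of x' = Ac x + Bc u:
    exponentiate the augmented matrix [[Ac, Bc], [0, 0]] * T;
    its top blocks give the discrete-time pair (Ad, Bd)."""
    n, m = Bc.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = Ac
    M[:n, n:] = Bc
    E = expm(M * T)
    return E[:n, :n], E[:n, n:]

# Toy check: a double integrator sampled at T = 0.4 s.
Ac = np.array([[0.0, 1.0], [0.0, 0.0]])
Bc = np.array([[0.0], [1.0]])
Ad, Bd = zoh_discretize(Ac, Bc, 0.4)
print(Ad)  # [[1, 0.4], [0, 1]]
print(Bd)  # [[0.08], [0.4]]
```

The affine term $c_\sigma$ would be obtained the same way by appending a constant-drift column to the augmented matrix.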
Let \begin{align*} P_0 & = \{\, (d, v) \in \R^{2M+1} \mid -0.5 \leq d \leq 0.5, 5 \leq v \leq 35 \,\},\\ P_i & = \{\, (d, v) \in \R^{2M+1} \mid v \leq \bar{v}_i \,\}, \quad i = a, b, c, \end{align*} where $d = (d_1, \ldots, d_M)$, $v = (v_0, \ldots, v_M)$ and inequalities in the two equations above are entrywise. The safe sets are $\mathcal{P}_{q_{d0}} = P_0$ and for $i = a, b, c$, $\mathcal{P}_{q_{ij}} = P_0$ if $j > 0$ and $\mathcal{P}_{q_{i0}} = P_0 \cap P_i$. The input set is $\mathcal{U}_{q_{ij}} = \{\, u \in \R \mid -4 \leq u \leq 4 \,\}$ for each \node{} $q_{ij}$. \end{myexem} \begin{mydef}[Controlled invariant sets for a HCS{}] \label{def:cis} $ $\\ Consider a HCS{} $S$. We say that sets $\Csetvar = (\Csetvar_q)_{q \in \Nodes}$ are \emph{controlled invariant} for $S$ if $\Csetvar_q \subseteq \mathcal{P}_q$ for each $q \in \Nodes$ and $\forall x \in \Csetvar_q, q \trs[\sigma] q'$, $\exists u \in \mathcal{U}_q$ such that \[ A_\sigma x + B_\sigma u + c_\sigma \in \Csetvar_{q'}. \] \end{mydef} \begin{myrem} \label{rem:autonomous} It is important to distinguish two types of switching: \emph{autonomous switching} and \emph{controlled switching}; see details in \cite[Section~1.1.3]{liberzon2012switching}. \defref{cis} is the definition of controlled invariance for autonomous systems and in this paper we only consider systems that switch autonomously. With controlled switching, ``$\forall q \trs[\sigma] q'$'' is replaced by ``$\exists q \trs[\sigma] q'$'' in \defref{cis}. \end{myrem} \subsection{Handling controller constraints} \label{sec:constraints} We say that the input of a HCS{} is \emph{unconstrained} if $\mathcal{U}_q = \R^{n_{q,u}}$ for all $q \in \Nodes$, otherwise we say that the input is \emph{constrained}. The computation of controlled invariant sets for a HCS{} with constrained input can be reduced to the computation of invariant sets for a HCS{} with unconstrained input as shown by the following lemma.
\begin{mylem} \label{lem:liftu} The sets $\Csetvar = (\Csetvar_q)_{q \in \Nodes}$ are \emph{controlled invariant} for $S = (T, (A_\sigma, B_\sigma, c_\sigma)_{\sigma \in \Sigma}, (\mathcal{P}_q, \mathcal{U}_q)_{q \in \Nodes})$ if and only if there exist controlled invariant sets $\Csetvar' = (\Csetvar_q')_{q \in \Nodes'}$ such that $\Csetvar'_q = \Csetvar_q$ $\forall q \in \Nodes$ for the system $S' = (T', (A_\sigma, B_\sigma, c_\sigma)_{\sigma \in \Sigma'}, (\mathcal{P}'_q, \mathcal{U}'_q)_{q \in \Nodes'})$ where the new transitions $T' = (\Nodes', \Sigma', \to')$ are obtained as follows: For each transition $q \trs \new{r}$ \new{in $T$}, we create a \node{} $q^\sigma$ and the transitions $q \trs[q^0]' q^\sigma$ and $q^\sigma \trs[\sigma']' \new{r}$ \new{in $T'$}. The new safe and input sets are \begin{align*} \mathcal{P}'_q & = \mathcal{P}_q & \mathcal{U}'_q & = \R^{n_{q,u}}\\ \mathcal{P}'_{q^\sigma} & = \mathcal{P}_q \times \mathcal{U}_q & \mathcal{U}'_{q^\sigma} & = \R^0 \end{align*} and the new reset maps are \begin{align*} A_{q^0} & = \begin{bmatrix} I\\ 0 \end{bmatrix} & B_{q^0} & = \begin{bmatrix} 0\\ I \end{bmatrix} & c_{q^0} & = 0\\ A_{\sigma'} & = \begin{bmatrix} A_\sigma & B_\sigma \end{bmatrix} & && c_{\sigma'} & = c_\sigma \end{align*} \new{and $B_{\sigma'}$ is the unique map sending $0 \in \R^0$ to $0 \in \mathbb{R}^{n_r}$.} \begin{proof} Consider controlled invariant sets $\Csetvar'$ for $S'$ and let $\Csetvar = (\Csetvar'_q)_{q \in \Nodes}$. Given $x \in \Csetvar_q$ and $q \trs r$, the controlled invariance of $\Csetvar'$ ensures that there exists $u$ such that $(x, u) \in \Csetvar'_{q^\sigma} \subseteq \mathcal{P}_q \times \mathcal{U}_q$ and $A_\sigma x + B_\sigma u + c_\sigma \in \Csetvar'_{r} = \Csetvar_{r}$. Hence $\Csetvar$ is controlled invariant for $S$. Consider now controlled invariant sets $\Csetvar$ for $S$ and let $\Csetvar' = (\Csetvar'_q)_{q \in \Nodes'}$ where $\Csetvar'_q = \Csetvar_q$ for each $q \in \Nodes$.
Given $q \trs r$, for each $x \in \Csetvar'_q = \Csetvar_q$ the controlled invariance of $\Csetvar$ ensures that there exists $u \in \mathcal{U}_q$ such that $A_\sigma x + B_\sigma u + c_\sigma \in \Csetvar_{r} = \Csetvar'_{r}$; setting $\Csetvar'_{q^\sigma}$ to be the set of all such pairs $(x, u)$ then makes $\Csetvar'$ controlled invariant for $S'$. \end{proof} \end{mylem} \begin{myrem} \label{rem:liftu} If for a given $q$, $\Sigma_q$ is a singleton $\{\sigma\}$, we can merge $q$ and $q^\sigma$ into one state and hence have $\mathcal{P}'_q = \mathcal{P}_q \times \mathcal{U}_q$. In that case, $\Csetvar_q$ will be the projection of $\Csetvar'_q$ in its state space. Even if $\Sigma_q$ is not a singleton, we can pick a single $\sigma \in \Sigma_q$ and merge $q$ and $q^\sigma$ into one state and use the reset map \begin{align*} A_{q^0} & = \begin{bmatrix} I & 0\\ 0 & 0 \end{bmatrix} & B_{q^0} & = \begin{bmatrix} 0\\ I \end{bmatrix} & c_{q^0} & = 0\\ \end{align*} so that switchings $\sigma' \in \Sigma_q \setminus \{\sigma\}$ ignore the part of the state of $q$ that corresponds to the input to be used for $\sigma$. \end{myrem} \begin{myexem} \label{exem:cruise2} We represent in \figref{cruise2} the application of the transformation described in \lemref{liftu} to the system of \exemref{cruise1}. We can use \remref{liftu} to avoid creating $q^d$ for each $q$. Moreover, since $(A_\sigma, B_\sigma, c_\sigma)$ does not depend on $\sigma$, we can merge all the \nodes{} $q^a$ (resp. $q^b$, $q^c$) together into a common state that we name $q_{a2}$ (resp. $q_{b2}$, $q_{c2}$). \begin{figure} \caption{Transitions and switchings between the \nodes{} for \exemref{cruise2}. \Nnodes{} $q_{b2}$, $q_{b1}$ and $q_{b0}$ are not shown for clarity.
} \label{fig:cruise2} \end{figure} \end{myexem} \subsection{Discrete-Time Affine Hybrid Algebraic System} \label{sec:dtahas} \begin{mydef} \label{def:dtahas} A \emph{Discrete-Time Affine Hybrid Algebraic System (HAS{})} is a system $S = (T, (A_\sigma, E_\sigma, c_\sigma)_{\sigma \in \Sigma},\\ (\mathcal{P}_q)_{q \in \Nodes})$ where $T = (\Nodes, \Sigma, \to)$ \new{and $\to \subseteq \Nodes \times \Sigma \times \Nodes$}. \new{A trajectory is a sequence $\{(x_k,\sigma_k)\}_{k \in \mathbb{N}}$} satisfying for all $k \in \mathbb{N}$: \begin{align*} E_{\sigma_{\new{k}}} x_{k+1} & = A_{\sigma_{\new{k}}} x_k + c_{\sigma_{\new{k}}},\\ x_k &\in \mathcal{P}_{q_k}, \quad q_k \trs[\sigma_{\new{k}}] q_{k+1}. \end{align*} \end{mydef} \begin{mydef}[Invariant sets for a HAS{}] \label{def:is} Consider a HAS{} $S$. We say that sets $\Csetvar = (\Csetvar_q)_{q \in \Nodes}$ are \emph{invariant} for $S$ if $\Csetvar_q \subseteq \mathcal{P}_q$ for each $q \in \Nodes$ and for all $q \trs q'$, \begin{equation} \label{eq:isdtahas} A_\sigma \Csetvar_{q} + c_\sigma \subseteq E_\sigma \Csetvar_{q'}. \end{equation} \end{mydef} \begin{myrem} \label{rem:descriptor} \defref{is} \new{can be interpreted as stating that} $\Csetvar$ is invariant if for each transition $q \trs q'$ and $x \in \Csetvar_q$, \begin{quote} there \emph{exists} $y \in \Csetvar_{q'}$ such that $A_\sigma x + c_\sigma = E_\sigma y$. \end{quote} A similar definition exists where this last part is replaced by \begin{quote} for \emph{each} $y$ such that $A_\sigma x + c_\sigma = E_\sigma y$, $y$ must belong to $\Csetvar_{q'}$. \end{quote} This is not equivalent to \defref{is} if $A_\sigma$ and $E_\sigma$ are not full rank. Moreover, computing ellipsoidal invariant sets according to this definition \new{is much easier}: it simply amounts to finding positive definite matrices $Q_q$ such that $A_\sigma^\Tr Q_q A_\sigma \preceq E_\sigma^\Tr Q_{q'} E_\sigma$; see \cite{owens1985consistency}.
\end{myrem} We now show that the computation of controlled invariant sets of a HCS{} can be reduced to the computation of invariant sets of a HAS{}. \begin{mylem} \label{lem:proju} The sets $\Csetvar = (\Csetvar_q)_{q \in \Nodes}$ are \emph{controlled invariant} for the HCS{} $S = (T, (A_\sigma, B_\sigma, c_\sigma)_{\sigma \in \Sigma}, (\mathcal{P}_q, \R^{n_{q,u}})_{q \in \Nodes})$ if and only if they are invariant sets for the HAS{} $S' = (T, (E_\sigma A_\sigma, E_\sigma, E_\sigma c_\sigma)_{\sigma \in \Sigma}, (\mathcal{P}_q)_{q \in \Nodes})$ where $E_\sigma$ is a projection on $\Image(B_\sigma)^{\perp}$. \begin{proof} As the input is unconstrained, for each $q \trs q'$ and $x \in \mathcal{P}_q$, there exists $u \in \R^{n_{q,u}}$ such that $A_\sigma x + B_\sigma u + c_\sigma \in \Csetvar_{q'}$ if and only if $E_\sigma A_\sigma x + E_\sigma c_\sigma \in E_\sigma \Csetvar_{q'}$. \end{proof} \end{mylem} \section{Computing controlled invariant sets} \subsection{Duality correspondence for the invariance condition} \label{sec:duality} Given a set $\Csetvar$ and a linear map $A$, we define the following notations: \begin{align} \notag A\Csetvar & = \{\, Ax \mid x \in \Csetvar \,\}\\ \notag A^{-1}\Csetvar & = \{\, x \mid Ax \in \Csetvar \,\}\\ \label{eq:ATr} A^{-\Tr}\Csetvar & = \{\, x \mid A^{\Tr}x \in \Csetvar \,\}. \end{align} Note that $A$ does not need to be invertible in these definitions. Invariant sets can be computed numerically as \emph{sublevel sets}\footnote{\new{The $\ell$-sublevel set of a function $f : \mathbb{R}^n \to \mathbb{R}$ is the set $\{\, x \in \R^n \mid f(x) \leq \ell \,\}$.}} of polynomial functions using Sum-of-Squares. One property of sublevel sets that is often used can be formulated as follows: If $\Csetvar$ is the $\ell$-sublevel set of a function $f$ then for any function $g$, $g^{-1}(\Csetvar)$ is the $\ell$-sublevel set of the function $f \circ g$.
Thanks to this property, computing a set $\Csetvar$ satisfying $A\Csetvar \subseteq \Csetvar$ for some linear map $A$ can for example be achieved by searching for a set $\Csetvar$ that is the 1-sublevel set of a polynomial $p(x)$. Indeed, the invariance constraint is equivalent to $\Csetvar \subseteq A^{-1}\Csetvar$, which is equivalent to the following \new{implication}: for all $x$, $p(x) \leq 1 \Rightarrow p(\new{A}x) \leq 1$. The latter proposition can be translated to a constraint of nonnegativity of a polynomial using the Sum-of-Squares formulation and the S-procedure. \begin{mylem}[S-procedure] \label{lem:sproc} Given two symmetric matrices $Q_1, Q_2 \in \R^{n \times n}$, the existence of a $\lambda \geq 0$ such that the matrix $\lambda Q_{\new{1}} - Q_{\new{2}}$ is positive semidefinite is sufficient for the following proposition to hold: \begin{quote} for all $x \in \R^n$, $x^\Tr Q_1 x \leq 0 \Rightarrow x^\Tr Q_2 x \leq 0$. \end{quote} Moreover, if there exists $x \in \R^n$ such that $x^\Tr Q_1 x < 0$ then this condition is also necessary. \end{mylem} For HAS{}, we have in \eqref{eq:isdtahas} an invariance constraint of the form $A\Csetvar \subseteq E\Csetvar$ and we would like to find an equivalent form with a pre-image as we had with $\Csetvar \subseteq A^{-1}\Csetvar$. This can be achieved using the polar of the set $\Csetvar$ thanks to the following lemma. \begin{mylem}[{\cite[Corollary~16.3.2]{rockafellar2015convex}}] \label{lem:podu} For any convex set $\Csetvar$ (resp. convex cone $\Kset$) and linear map $A$, \begin{align*} (A\Csetvar)^{\circ} & = A^{-\Tr} \Csetvar^\circ\\ (A\Kset)^* & = A^{-\Tr} \Kset^* \end{align*} where $\Csetvar^\circ$ denotes the polar of the set $\Csetvar$ and $\Kset^*$ denotes the dual of the cone $\Kset$. \end{mylem} \lemref{podu} shows that $A\Csetvar \subseteq E\Csetvar$ is equivalent to $A^{-\Tr}\Csetvar^\circ \supseteq E^{-\Tr}\Csetvar^\circ$.
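As a numerical aside (a sketch of ours, not part of the paper), the sufficiency direction of \lemref{sproc} can be spot-checked: a certificate $\lambda \geq 0$ with $\lambda Q_1 - Q_2 \succeq 0$ indeed forces the implication on sampled points. The matrices below are arbitrary test data:

```python
import numpy as np

def s_procedure_certificate_holds(Q1, Q2, lam):
    """Sufficient condition of the S-procedure:
    lam >= 0 and lam*Q1 - Q2 positive semidefinite."""
    return lam >= 0 and np.all(np.linalg.eigvalsh(lam * Q1 - Q2) >= -1e-9)

# Example: Q1 = diag(1, -1), Q2 = diag(2, -3); lam = 2 works since
# 2*Q1 - Q2 = diag(0, 1) is positive semidefinite.
Q1 = np.diag([1.0, -1.0])
Q2 = np.diag([2.0, -3.0])
assert s_procedure_certificate_holds(Q1, Q2, 2.0)

# Spot-check the implication x'Q1x <= 0  =>  x'Q2x <= 0 on random samples.
rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.normal(size=2)
    if x @ Q1 @ x <= 0:
        assert x @ Q2 @ x <= 1e-9
print("S-procedure implication verified on samples")
```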
Since the invariant sets of the HAS{} may not have the origin in their interior, the polar transformation cannot be readily applied. We handle this non-homogeneity by taking the conic hull of the lifted sets $\Csetvar \times \{1\}$. More precisely, we define \begin{align} \tau(\Csetvar) & = \{\, (\lambda x, \lambda) \mid \lambda \geq 0, x \in \Csetvar \,\}\\ r(A, c) & = \begin{bmatrix} A & c\\ 0 & 1 \end{bmatrix}. \end{align} It can be verified that for any set $\Csetvar$, vector $c$ and linear map $A$, \begin{equation} \label{eq:taur} \tau(A\Csetvar + c) = r(A, c) \tau(\Csetvar). \end{equation} Moreover, for any half-space $a^\Tr x \leq \beta$, \begin{equation} \label{eq:hs} a^\Tr x \leq \beta, \forall x \in \Csetvar \Leftrightarrow (-a, \beta) \in \tau(\Csetvar)^*. \end{equation} \begin{mytheo} \label{theo:dtahas} Consider a HAS{} $S$. \new{The} closed convex sets $\Csetvar = (\Csetvar_q)_{q \in \Nodes}$ are \emph{invariant} for $S$ if and only if $\Csetvar_q \subseteq \mathcal{P}_q$ for each $q \in \Nodes$ and for all $q \trs q'$, \begin{equation} \label{eq:dtahas} r(A_\sigma, c_\sigma)^{-\Tr} \tau(\Csetvar_q)^* \supseteq r(E_\sigma, 0)^{-\Tr} \tau(\Csetvar_{q'})^*. \end{equation} \begin{proof} The invariance constraint of \defref{is} \[ A_\sigma \Csetvar_q + c_\sigma \subseteq E_\sigma \Csetvar_{q'} \] can be rewritten, using \eqref{eq:taur}, into \begin{equation} \label{eq:coneinv} r(A_\sigma, c_\sigma) \tau(\Csetvar_q) \subseteq r(E_\sigma, 0) \tau(\Csetvar_{q'}). \end{equation} As the sets $\Csetvar_q$ are closed and convex, so are the cones $\tau(\Csetvar_q)$ hence $\tau(\Csetvar_q)^{**} = \tau(\Csetvar_q)$. Therefore, by \lemref{podu}, \eqref{eq:coneinv} is equivalent to \eqref{eq:dtahas}. 
\end{proof} \end{mytheo} \subsection{Computation using ellipsoids} \label{sec:ellipsoids} While \theoref{dtahas} holds for any convex sets $(\Csetvar_q)_{q \in \Nodes}$, restricting our attention to ellipsoidal sets renders the invariance condition \eqref{eq:coneinv} amenable to semidefinite programming. Using sublevel sets of polynomials of higher degree would also allow us to use semidefinite programming but we only describe the ellipsoidal case for simplicity. This section details the semidefinite program needed to find these ellipsoidal invariant sets and shows its exactness in \theoref{quad}. We define the following notations for ellipsoids \begin{align*} \Ellc{Q}{c} & = \{\, x \mid (x-c)^\Tr Q (x-c) \leq 1 \,\}\\ \Ellq{D}{d}{\delta} & = \{\, x \mid x^\Tr D x + 2d^\Tr x + \delta \leq 0 \,\}. \end{align*} We denote the set of symmetric matrices of $\R^{n}$ as $\SymK$. \begin{mylem} \label{lem:QD} Let $Q, D \in \SymK$, $c, d \in \R^n$, $\delta \in \R$ with $Q \succ 0$. We have $\Ellc{Q}{c} = \Ellq{D}{d}{\delta}$ if and only if $D \succ 0$ and there exists $\lambda > 0$ such that \begin{align} \label{eq:lambda} \lambda & = d^\Tr D^{-1} d - \delta\\ \label{eq:c} c & = -D^{-1}d\\ \label{eq:Q} Q & = D/\lambda. \end{align} \begin{proof} Substituting $Q$ and $c$ using \eqref{eq:c} and \eqref{eq:Q} in $(x-c)^\Tr Q (x-c) - 1$ gives $(x^\Tr D x + 2d^\Tr x + d^\Tr D^{-1} d - \lambda) / \lambda$. We can conclude the ``if'' part of the proof with \eqref{eq:lambda}. We now show the ``only if'' part. By \lemref{sproc}, for $\Ellc{Q}{c} = \Ellq{D}{d}{\delta}$ to hold, there must exist $\lambda > 0$ such that \[ x^\Tr D x + 2d^\Tr x + \delta = \lambda((x-c)^\Tr Q (x-c) - 1). \] This implies that \begin{align} \label{eq:delta} \delta & = \lambda c^\Tr Q c - \lambda\\ \label{eq:d} d & = -\lambda Qc\\ \label{eq:D} D & = \lambda Q. \end{align} Equations~\eqref{eq:d} and \eqref{eq:D} directly give \eqref{eq:c} and \eqref{eq:Q}. It remains to show \eqref{eq:lambda}. 
Equation~\eqref{eq:d} is equivalent to $Q^{-1/2}d = -\lambda Q^{1/2}c$ which implies \begin{equation} \label{eq:dQd} d^\Tr Q^{-1} d = \lambda^2 c^\Tr Q c. \end{equation} Combining \eqref{eq:dQd} with \eqref{eq:D}, we get $\lambda c^\Tr Q c = d^\Tr D^{-1} d$ which, combined with \eqref{eq:delta}, gives \eqref{eq:lambda}. \end{proof} \end{mylem} We use the following corollary to represent the cones $\tau(\Csetvar_q)^*$ as the 0-sublevel set of quadratic forms $p(y) = p(x, z) = x^\Tr D_q x + 2d_q^\Tr xz + \delta_q z^2$. \begin{mycoro} \label{coro:convexcone} Let $\Kset = \{\, (x, z) | x^\Tr D x + 2d^\Tr xz + \delta z^2 \leq 0, z \geq 0 \,\}$ be a cone that has a nonempty interior and no intersection with the hyperplane $\{\, (x, 0) | x \in \R^n \,\}$ except the origin. The cone $\Kset$ is convex if and only if $D \succ 0$. \begin{proof} Let $\Csetvar = \Ellq{D}{d}{\delta}$. Since every point of the cone satisfies $z > 0$ except the origin, we have $\tau(\Csetvar) = \Kset$. Therefore, $\Kset$ is convex if and only if $\Csetvar$ is convex. Since $\Kset$ has nonempty interior, \[ \delta - d^\Tr D^{-1} d = \min_{x \in \R^n} x^\Tr D x + 2d^\Tr x + \delta < 0. \] We conclude with \lemref{QD}. \end{proof} \end{mycoro} In \cororef{convexcone}, we require the cone to have no intersection with a hyperplane (except the origin). However, the cone $\tau(\Csetvar_q)^*$ has no intersection with the hyperplane $\{\, (x, 0) | x \in \R^n \,\}$ if and only if the origin is contained in $\Csetvar_q$, which may not be the case. In order to alleviate this, the approach we suggest is to suppose that we know one point $h_q$ in the interior of each $\Csetvar_q$ and we use \cororef{convexcone} in a transformed space where $h_q$ is mapped to the $z$-axis $(0, 1)$. For this transformation we use the \emph{Householder reflection} \cite[Section~5.1.2]{golub2012matrix} \[ \House{h} = I - \frac{2}{h^\Tr h} hh^\Tr. \] The Householder reflection is symmetric and orthogonal.
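Before stating the optimization problem, here is a quick numerical sanity check of the conversion in \lemref{QD} (a sketch of ours; the matrices below are arbitrary test data):

```python
import numpy as np

def quadratic_to_center_form(D, d, delta):
    """Convert {x : x'Dx + 2d'x + delta <= 0} to {x : (x-c)'Q(x-c) <= 1}
    via lam = d'D^{-1}d - delta, c = -D^{-1}d, Q = D/lam
    (requires D positive definite and lam > 0)."""
    Dinv_d = np.linalg.solve(D, d)
    lam = d @ Dinv_d - delta
    c = -Dinv_d
    Q = D / lam
    return Q, c, lam

D = np.array([[2.0, 0.3], [0.3, 1.0]])
d = np.array([0.5, -0.2])
delta = -1.0
Q, c, lam = quadratic_to_center_form(D, d, delta)

# The two descriptions define the same ellipsoid: check on random points,
# skipping points numerically on the boundary.
rng = np.random.default_rng(2)
for _ in range(1000):
    x = rng.normal(size=2, scale=2.0)
    quad_val = x @ D @ x + 2 * d @ x + delta
    if abs(quad_val) > 1e-9:
        assert (quad_val <= 0) == ((x - c) @ Q @ (x - c) <= 1)
print("both ellipsoid descriptions agree")
```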
The optimization problem to solve is represented in \progref{quad}. The transformation of this program to a semidefinite program can be done automatically using the standard Sum-of-Squares procedure; see \cite{blekherman2012semidefinite}. \begin{myprog} \label{prog:quad} \begin{align} \notag \max_{\substack{D_q \in \SymK, d_q \in \R^n,\\\delta_q \in \R, \lambda_{q \trs q'} \geq 0}} & \quad \sum_{q \in \Nodes} \log \det D_q \\ \label{eq:quadl1} \begin{bmatrix} D_q & d_q\\ d_q^\Tr & \delta_q + 1 \end{bmatrix} & \succ 0\\ \label{eq:quadp} p_q(y) & = y^\Tr \House{h_q} \begin{bmatrix} D_q & d_q\\ d_q^\Tr & \delta_q \end{bmatrix} \House{h_q} y\\ \label{eq:quads} p_q(r(A_\sigma, c_\sigma)^\Tr y) & \leq \lambda_{q \trs q'} p_{q'}(r(E_\sigma, 0)^{\Tr} y),\\ \notag \forall q \in \Nodes, & \forall q \trs q', \forall y \in \R^{n_{q,x}+1}\\ \label{eq:quadc} p_q(-a, \beta) & \leq 0 \quad \forall q \in \Nodes, \forall a^\Tr x \leq \beta \text{ supporting } \mathcal{P}_q\\ \label{eq:quadf} p_q(0, 1) & < 0 \quad \forall q \in \Nodes. \end{align} \end{myprog} The constraint \eqref{eq:quadl1} ensures both convexity of $\tau(\Csetvar_q)^*$ and the fact that $\det D_q$ does not overestimate the volume of the ellipsoid transformed by the Householder reflection. The constraint \eqref{eq:quads} is the S-procedure applied to the condition \eqref{eq:dtahas}. The constraint \eqref{eq:quadc} uses \eqref{eq:hs} to ensure that $\Csetvar_q$ is contained in $\mathcal{P}_q$. The constraint \eqref{eq:quadf} ensures that $\tau(\Csetvar_q)^*$ has non-empty interior. Note that if $\mathcal{P}_q$ has no unbounded subspace, \eqref{eq:quadf} is not necessary since the non-empty interior condition will already be ensured by \eqref{eq:quadc}. \begin{mytheo} \label{theo:quad} Consider a HAS{} $S$ and points $(h_q \in \mathcal{P}_q)_{q \in \Nodes}$.
The polynomial $p_q(x, z)$ is feasible for \progref{quad} if and only if there exist invariant convex sets $\Csetvar = (\Csetvar_q)_{q \in \Nodes}$ such that $h_q \in \Csetvar_q$ for each $q \in \Nodes$ and $\tau(\Csetvar_q)^*$ is the 0-sublevel set of $p_q(x, z)$. Moreover, the optimal solution of \progref{quad} is the solution that minimizes the sum of the logarithms of the volumes of the intersections of each cone $\tau(\Csetvar_q)^*$ with the hyperplane \( \{\, x \mid \langle h_q, x \rangle = 1 \,\}. \) \begin{proof} Consider a solution $p = (p_q(x, z))_{q \in \Nodes}$ of \progref{quad}. By \cororef{convexcone}, constraints \eqref{eq:quadl1} and \eqref{eq:quadp} are satisfied if and only if there exist ellipsoids $\Csetvar_q$ such that $\tau(\Csetvar_q)^*$ is the 0-sublevel set of $p_q(x, z)$. By \eqref{eq:hs}, constraint \eqref{eq:quadc} is satisfied if and only if $\Csetvar_q \subseteq \mathcal{P}_q$. By \lemref{sproc}, constraint \eqref{eq:quads} is satisfied if and only if \eqref{eq:dtahas} holds for all $q \trs q'$. Therefore, by \theoref{dtahas}, the solution $p$ is a feasible solution of \progref{quad} if and only if the sets $\Csetvar_q$ are invariant for $S$. Let $Q_q, c_q$ be such that $\Ellc{Q_q}{c_q} = \Ellq{D_q}{d_q}{\delta_q}$ and let $\lambda_q$ be such that $D_q = \lambda_q Q_q$. The volume of the intersection of $\tau(\Csetvar_q)^*$ with the hyperplane \( \{\, x \mid \langle h_q, x \rangle = 1 \,\} \) is proportional to $\det(Q_q)^{-1/2}$. Therefore, it remains to show that $\lambda_q = 1$ for an optimal solution. We observe that without the constraint~\eqref{eq:quadl1}, for any feasible solution, $D_q, d_q, \delta_q$ can be scaled by any positive constant while remaining feasible but affecting the objective function. By the Schur complement, constraint~\eqref{eq:quadl1} implies that \[ d_q^\Tr D_q^{-1} d_q - \delta_q \leq 1. \] Combining this inequality with equation~\eqref{eq:lambda} implies that $\lambda_q \leq 1$.
Since the objective is to maximize $\det(D_q) = \lambda_q^n \det(Q_q)$, we know that if $(D_q, d_q, \delta_q)$ is optimal, then $\lambda_q = d_q^\Tr D_q^{-1} d_q - \delta_q = 1$. \end{proof} \end{mytheo} \begin{myexem} \label{exem:cruise3} We apply \progref{quad} to \exemref{cruise2} with the same values for the parameters as the ones used in \cite{rungger2013specification}, that is, $m_0 = \SI{500}{\kilogram}$, $m = \SI{1000}{\kilogram}$, $k_d = \SI{4600}{\newton\second\per\meter}$ and $k_s = \SI{4500}{\newton\per\kilogram}$. The values used for $h_q$ are the same for each \node{} $q \in \Nodes$: $u = d_i = 0$ and $v_0 = v_i = (5+v_a)/2$ for $i = 1, \ldots, M$. We vary the number of trailers $M$ from 1 to 10. \figref{cruisesets} represents the controlled invariant set at \node{} $q_{a0}$. As we can see, the constraints on the trailers are propagated to the truck and, as the number $M$ increases, the truck speed and acceleration become more constrained. The time taken by Mosek 8.1.0.34 (\cite{mosek2017mosek81034}) to solve the problem is shown in \figref{cruisebench}\footnote{\new{We set $\lambda_{q \trs q'}$ to 1 for each transition $q \trs q'$ to make the problem convex}.}. \end{myexem} \begin{figure}\label{fig:cruisesets} \end{figure} \begin{figure}\label{fig:cruisebench} \end{figure} \section{Application to Model Predictive Control} \label{sec:mpc} As mentioned in the introduction, the controlled invariant sets can be used to derive a feedback control law. We illustrate this with a Model Predictive Control (MPC) numerical experiment. We consider a truck with one trailer ($M = 1$) as in \exemref{cruise3}. The truck starts with speeds $v_0 = v_1 = \SI{10}{\meter\per\second}$ and spring displacement $d = \SI{0}{\meter}$ and aims to maximize the distance covered in \new{\SI{60}{\second}}. The maximal speed is initially \SI{35}{\meter\per\second} but after \new{\SI{30}{\second}}, it drops to $v_a = \SI{15.6}{\meter\per\second}$.
In a classical MPC controller, the truck acceleration $u$ is controlled by solving a constrained optimal control problem up to horizon $H$. We observe that if $H \leq \new{\SI{9.2}{\second}}$, the controller is at some point unable to find values of $u$ satisfying input constraints such that the state remains in the safe set. For safety-critical applications, this lack of guarantee is not acceptable as it is necessary to be certain that the system can remain in the safe set. Moreover, in a real-time context, the need to pick a large horizon is problematic as it increases the cost of online computations. \new{In our setting, we constrain the state to remain in} the controlled invariant sets computed in \exemref{cruise3}\footnote{ \new{\exemref{cruise3} corresponds to an MPC controller of horizon \SI{0.8}{\second}. An MPC controller of different horizon computes different controlled invariant sets by updating the hybrid system accordingly.} } \new{and thereby} solve both issues. Indeed, safety is guaranteed for arbitrarily long simulations and the length of the horizon does not influence safety, so a smaller horizon can be used. Note that the controlled invariant sets can be computed offline, so whenever they allow the horizon length to be reduced, online computational cost is effectively moved offline. Besides, constraining the state variables to belong to the ellipsoidal controlled invariant sets is straightforward\footnote{Membership in $\Ellc{Q}{c}$ is second-order cone representable. Indeed, consider a Cholesky factorization $Q = L^\Tr L$; the inequality $(x-c)^\Tr Q (x-c) \leq 1$ can be rewritten as $\|L(x-c)\|_2 \leq 1$ where $\|\cdot\|_2$ is the Euclidean norm.}. The results of the experiment can be found in \figref{speed} and \figref{acceleration}.
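The second-order cone representation of ellipsoid membership mentioned in the footnote above can be verified numerically; the snippet below is an illustrative sketch only (note that NumPy's Cholesky routine returns the lower-triangular factor, so we transpose to obtain $Q = L^\Tr L$):

```python
import numpy as np

# Check: with Q = L^T L (Cholesky), (x-c)^T Q (x-c) equals ||L(x-c)||^2,
# so the ellipsoid constraint is a second-order cone constraint.
rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
Q = A @ A.T + np.eye(n)              # Q positive definite
c = rng.standard_normal(n)
L = np.linalg.cholesky(Q).T          # numpy gives Q = C C^T; take L = C^T
assert np.allclose(L.T @ L, Q)
for _ in range(50):
    x = c + rng.standard_normal(n)
    quad = (x - c) @ Q @ (x - c)
    assert np.isclose(quad, np.linalg.norm(L @ (x - c)) ** 2)
```

This identity is what lets an off-the-shelf conic solver handle the invariant-set constraint directly inside the MPC problem.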
\begin{figure}\label{fig:speed} \end{figure} \begin{figure}\label{fig:acceleration} \end{figure} \section{Conclusion} We have developed a methodology for computing controlled invariant sets of Discrete-Time Affine Hybrid Control Systems (HCS{}) and Discrete-Time Affine Hybrid Algebraic Systems (HAS{}) with \emph{autonomous switching} (see \remref{autonomous}). This method can be \new{combined with semidefinite programming in order} to compute ellipsoidal controlled invariant sets. We have shown that our technique can be used as a building block in a model predictive control scheme. This makes it possible, among other things, to reduce the online computational cost by precomputing controlled invariant sets. We feel that we have only scratched the surface of the potential of the duality correspondence of \secref{duality}. Many extensions of this work are possible, such as hybrid systems with controlled switching, or the use of Sum-of-Squares techniques in order to enrich the geometry of the possible invariant sets. \new{The reformulation of the computation of controlled invariant sets of a hybrid control system into the computation of invariant sets of a hybrid algebraic system with \lemref{liftu} and \lemref{proju} yields a more behavioral invariance relation. In the future, we would like to put our results in the framework of behavioral theory in order to investigate how to further generalize them; see \cite{willems2013introduction}.} \end{document}
\begin{document} \title{S-NEAR-DGD: A Flexible Distributed Stochastic Gradient Method for Inexact Communication} \author{Charikleia~Iakovidou, Ermin~Wei \thanks{{C. Iakovidou and E. Wei are with the Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL USA, {\tt\small chariako@u.northwestern.edu, ermin.wei@northwestern.edu}}} } \maketitle \begin{abstract} We present and analyze a stochastic distributed method (S-NEAR-DGD) that can tolerate inexact computation and inaccurate information exchange to alleviate the problems of costly gradient evaluations and bandwidth-limited communication in large-scale systems. Our method is based on a class of flexible, distributed first order algorithms that allow for the trade-off of computation and communication to best accommodate the application setting. We assume that all the information exchange between nodes is subject to random distortion and that only stochastic approximations of the true gradients are available. Our theoretical results prove that the proposed algorithm converges linearly in expectation to a neighborhood of the optimal solution for strongly convex objective functions with Lipschitz gradients. We characterize the dependence of this neighborhood on algorithm and network parameters, the quality of the communication channel and the precision of the stochastic gradient approximations used. Finally, we provide numerical results to evaluate the empirical performance of our method. \end{abstract} \IEEEpeerreviewmaketitle \section{Introduction} The study of distributed optimization algorithms has been an area of intensive research for more than three decades. 
The need to harness the computing power of multiprocessors to boost performance and solve increasingly complex problems~\cite{tsitsiklis_Problems_1984,bertsekas_parallel}, and the emergence of a multitude of networked systems that lack central coordination such as wireless sensor networks~\cite{predd_ml_2009,sensor4,sensor5}, power systems~\cite{power5,smart_grids,giannakisPower} and multi-robot and multi-vehicle networks~\cite{vehicles,robots2,robots3}, necessitated the development of optimization algorithms that can be implemented in a distributed manner. Moreover, the proliferation of data in recent years coupled with storage constraints, growing computation costs and privacy concerns has sparked significant interest in decentralized optimization for machine learning~\cite{nedic_ML_review,hong2016unified,richtarik2016parallel}. One of the most well-studied settings in distributed optimization is often referred to as {\it consensus optimization}~\cite{bertsekas_parallel}. Consider a connected, undirected network of~$n$ nodes $\mathcal{G}(\mathcal{V},\mathcal{E})$, where $\mathcal{V}$ and $\mathcal{E}$ denote the sets of nodes and edges, respectively. The goal is to collectively solve a decomposable optimization problem, \begin{equation} \min_{x \in \mathbb{R}^p} f(x) = \sum_{i=1}^n f_i(x), \label{eq:prob_orig} \end{equation} where $x \in \mathbb{R}^p$ and $f_i: \mathbb{R}^p \rightarrow \mathbb{R}$. Each function $f_i$ is private to node $i$. To enable distributed computation, each node maintains a local copy $x_i$ in $\mathbb{R}^p$ to approximate the global variable $x$. Problem~(\ref{eq:prob_orig}) can then be equivalently reformulated into the following, \begin{equation} \label{eq:consensus_prob} \begin{split} \min_{x_i \in \mathbb{R}^{p}} \sum_{i=1}^n f_i(x_i), \quad \text{s.t. } x_i=x_j, \quad \forall(i,j) \in \mathcal{E}.
\end{split} \end{equation} A similar problem setup was studied as far back as in~\cite{tsitsiklis_Problems_1984,tsitsiklis_distributed,bertsekas_parallel}. A well-known iterative method for this problem, the Distributed (Sub)Gradient Descent (DGD) method~\cite{NedicSubgradientConsensus}, involves taking local gradient steps and weighted averages with neighbors at each iteration. A class of algorithms known as gradient tracking methods, which include EXTRA and DIGing, can be viewed as an improved version of DGD with an additional step of averaging gradient information amongst neighbors, and can achieve exact convergence under constant stepsize~\cite{diging,extra,exact_diffusion,harnessing_smoothness}. A recent summary of several distributed methods for solving problem~(\ref{eq:consensus_prob}) can be found in~\cite{survey_distr_opt}. Distributed optimization algorithms typically rely on some combination of local computation steps, where the nodes aim to decrease their local functions, and consensus (or communication) steps, where nodes exchange information with their neighbors over the network. The amounts of computation and communication executed at every iteration are usually fixed for a given algorithm. However, due to the diversity of distributed optimization applications, a ``one size fits all'' approach is unlikely to achieve optimal performance in every setting. The goal of this work is to study and develop a flexible, fast and efficient distributed method that can be customized depending on application-specific requirements and limitations. In addition to flexibility, our proposed method can also address two major challenges for distributed optimization methods: communication bottlenecks and costly gradient evaluations. Next, we summarize some of the existing techniques to tackle these two issues.
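To make the DGD iteration mentioned above concrete, the following toy sketch (our own construction on scalar quadratic local functions, not taken from the cited works) runs DGD on a ring of five nodes; with a constant stepsize, the local copies settle in a neighborhood of the global minimizer, here the mean of the $a_i$:

```python
import numpy as np

# Toy DGD run: n nodes on a ring, local objectives f_i(x) = (x - a_i)^2 / 2,
# so the minimizer of sum_i f_i is mean(a).  Each iteration takes a weighted
# average with neighbors followed by a local gradient step.
n = 5
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
W = np.zeros((n, n))                 # doubly stochastic ring weights
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25
alpha = 0.05                         # constant stepsize
x = np.zeros(n)                      # local copies x_i
for _ in range(2000):
    x = W @ x - alpha * (x - a)      # consensus step + local gradient step
# With a constant stepsize, DGD reaches a neighborhood of mean(a) = 3
# but does not converge to it exactly.
assert abs(x.mean() - a.mean()) < 1e-3
assert np.max(np.abs(x - a.mean())) < 0.5
```

The residual spread of the $x_i$ around the optimum is the well-known constant-stepsize bias of DGD that gradient tracking methods remove.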
\subsection{Literature review} \subsubsection{Distributed optimization algorithms with quantized communication} The amount of communication between nodes has long been identified as a major performance bottleneck in decentralized computing, especially as the volume and dimensionality of available data increase~\cite{reisizadeh_exact_2018,lan_communication-efficient_2017}. Moreover, in any practical setting where the bandwidth of the communication channel is limited, the information exchanged cannot be represented by real-valued vectors with arbitrary precision. Both of these concerns motivate us to design distributed optimization methods where nodes receive inexact/quantized information from their neighbors. Distributed methods that can work with quantized communication include incremental algorithms~\cite{rabbat_quantized_2005}, DGD~\cite{nedic_distributed_2008,li_distributed_2017} and the dual averaging method~\cite{yuan_distributed_2012}. Different approaches have been proposed to guide distributed algorithms with inexact communication towards optimality, such as using weighted averages of incoming quantized information and local estimates~\cite{reisizadeh_exact_2018,doan_distributed_2018}, designing custom quantizers~\cite{lee2018finite,doan_accelerating_2018,pu_quantization_2017}, employing encoding/decoding schemes~\cite{alistarh2017qsgd,reisizadeh_exact_2018,yi_quantized_2014}, and utilizing error correction terms~\cite{zhu_param_estim,lee2018finite,doan_accelerating_2018}. Among these, only~\cite{pu_quantization_2017,lee2018finite} achieve exact geometric convergence by employing dynamic quantizers that require the tuning of additional parameters and global information at initialization. However, neither of these methods allows for adjusting the amounts of computation and communication executed at every iteration.
\subsubsection{Stochastic gradient in distributed optimization} The computation of exact gradients for some large-scale problems can be prohibitively expensive and hence stochastic approximations obtained by sampling a subset of data are often used instead. Various studies and analyses on stochastic gradient based methods have been done in centralized settings~\cite{sgd_bottou, goyal_minibatch}, federated learning (client-server model)~\cite{federated5,parallel_sgd} and distributed settings over general topologies~\cite{lian_can_nodate, morral_success_2017,pu_central_distr}, which is the setting our work adopts. In this particular setting, existing approaches include stochastic variants of DGD~\cite{sundharram_distributed_2010,srivastava_distributed_2011,lian_can_nodate}, stochastic diffusion algorithms~\cite{towfic_adaptive_2014,morral_success_2017,olshevsky_non-asymptotic_2019}, primal-dual methods~\cite{chatzipanagiotis_distributed_2016,lan_communication-efficient_2017,hong_stochastic_2017}, gradient-push algorithms~\cite{nedic_stochastic_2014,olshevsky_robust_2018}, dual-averaging methods~\cite{duchi_ml_2012}, accelerated distributed algorithms~\cite{fallah2019robust} and stochastic distributed gradient tracking methods~\cite{mokhtari_dsa:_2015, pu_distributed_2018-1,shen_towards_2018,variance_reduced_gt,li2020sdiging}. While some of these methods can achieve exact convergence (in expectation) with linear rates (e.g., stochastic variants of gradient tracking, exact diffusion), they need to be combined with variance reduction techniques that may have excessive memory requirements. Moreover, all of the aforementioned algorithms have a fixed structure and lack adaptability to different cost environments.
\subsection{Contributions} In this paper, we propose and analyze a distributed first order algorithm, the Stochastic-NEAR-DGD (S-NEAR-DGD) method, that uses stochastic gradient approximations and tolerates the exchange of noisy information to save on bandwidth and computational resources. Our method is based on a class of flexible algorithms (NEAR-DGD)~\cite{berahas_balancing_2019} that permit the trade-off of computation and communication to best accommodate the application setting. In this work, we generalize our previous results analyzing NEAR-DGD in the presence of either deterministically quantized communication~\cite{berahas2019nested} or stochastic gradient errors~\cite{Iakovidou2019NestedDG}, and unify them under a common, fully stochastic framework. We provide theoretical results to demonstrate that S-NEAR-DGD converges to a neighborhood of the optimal solution with geometric rate, and that if an error-correction mechanism is incorporated into consensus, then the total communication error induced by inexact communication is independent of the number of consensus rounds performed by our algorithm. Finally, we empirically show by conducting a series of numerical experiments that S-NEAR-DGD performs comparably to or better than state-of-the-art methods, depending on how quantized consensus is implemented. The rest of the paper is organized as follows. In Section \ref{sec:algo}, we introduce the S-NEAR-DGD method. Next, we analyze the convergence properties of S-NEAR-DGD in Section~\ref{sec:analysis}. We present our numerical results in Section~\ref{sec:numerical} and conclude this work in Section~\ref{sec:conclusion}. \subsection{Notation} In this paper, all vectors are column vectors. The concatenation of local vectors $v_{i}$ in $\mathbb{R}^p$ is denoted by $\mathbf{v} = [v_i]_{i=\{1,2,\ldots,n\}}$ in $\mathbb{R}^{np}$ with a lowercase boldface letter. We use uppercase boldface letters for matrices.
We will use the notations $I_p$ and $1_n$ for the identity matrix of dimension $p$ and the vector of ones of dimension $n$, respectively. The element in the $i$-th row and $j$-th column of a matrix $\*H$ will be denoted by $h_{ij}$, and the $p$-th element of a vector $v$ by $\left[v\right]_p$. The transpose of a vector $v$ will be denoted by $v^T$. We will use $\|\cdot\|$ to denote the $l_2$-norm, i.e. for $v \in \mathbb{R}^p$ $\left\|v\right\|= \sqrt{\sum_{i=1}^p \left[v\right]_i^2}$, and $\langle v, u \rangle$ to denote the inner product of two vectors $v,u$. We will use $\otimes$ to denote the Kronecker product operation. Finally, we define $\mathcal{N}_i$ to be the set of neighbors of node $i$, i.e., $\mathcal{N}_i = \{j \in \mathcal{V}: (i,j) \in \mathcal{E}\}$. \section{The S-NEAR-DGD method} \label{sec:algo} In this section, we first introduce a few standard technical assumptions and notation on our problem setting, followed by a quick review of the NEAR-DGD method, which serves as the main building block of the proposed method. Finally, we present the S-NEAR-DGD method. We adopt the following standard assumptions on the local functions $f_i$ of problem~\eqref{eq:consensus_prob}. \begin{assum} \textbf{(Local Lipschitz gradients)} \label{assum:lip} Each local objective function $f_i$ has \mbox{$L_i$-Lipschitz} continuous gradients, i.e. $ \|\nabla f_i(x) - \nabla f_i(y)\| \leq L_i\|x-y\|,\quad \forall x,y \in \mathbb{R}^p$. \end{assum} \begin{assum} \textbf{(Local strong convexity)} \label{assum:conv} Each local objective function $f_i$ is \mbox{$\mu_i$-strongly} convex, i.e. $ f_i(y) \geq f_i(x) + \langle \nabla f_i(x), y-x \rangle + \frac{\mu_i}{2}\|x-y\|_2^2,\quad \forall x,y \in \mathbb{R}^p$. \end{assum} We can equivalently rewrite problem~\eqref{eq:consensus_prob} in the following compact way, \begin{equation} \label{eq:consensus_prob2} \begin{split} \min_{\*x \in \mathbb{R}^{np}} \*f\left(\*x\right)=\sum_{i=1}^n f_i(x_i), &\quad \text{s.t. 
} \left(\*W \otimes I_p \right)\*x=\*x, \end{split} \end{equation} where $\*x=[x_i]_{i=\{1,2,\ldots,n\}}$ in $\mathbb{R}^{np}$ is the concatenation of local variables $x_i$, $\*f:\mathbb{R}^{np}\rightarrow \mathbb{R}$ and $\*W \in \mathbb{R}^{n \times n}$ is a matrix satisfying the following condition. \begin{assum} \textbf{(Consensus matrix)} \label{assum:consensus_mat} The matrix $\*W \in \mathbb{R}^{n \times n}$ has the following properties: i) symmetry, ii) double stochasticity, and iii) $w_{i,j} > 0$ if and only if $j \in \mathcal{N}_i$ or $i=j$ and $w_{i,j} = 0$ otherwise. \end{assum} We will refer to $\*W$ as the \emph{consensus matrix} throughout this work. Since $\*W$ is symmetric it has $n$ real eigenvalues, which we order as $\lambda_n \leq \lambda_{n-1} \leq ... \leq \lambda_2 \leq \lambda_1$. Assumption~\ref{assum:consensus_mat} implies that $\lambda_1 = 1$ and $\lambda_2<\lambda_1$ for any connected network. The remaining eigenvalues have absolute values strictly less than 1, i.e., $-1<\lambda_n$. Moreover, the equality $\left(\*W \otimes I_p \right)\*x=\*x$ holds if and only if $x_i=x_j$ for all $j \in \mathcal{N}_i$~\cite{NedicSubgradientConsensus}, which establishes the equivalence between the two formulations~(\ref{eq:consensus_prob}) and~(\ref{eq:consensus_prob2}). We will refer to the absolute value of the eigenvalue with the second largest absolute value of $\*W$ as $\beta$, i.e. $\beta = \max\left\{|\lambda_2|,|\lambda_n|\right\}$. The iteration $\*x^{k+1}=\left(\*W \otimes I_p \right)\*x^{k},$ commonly referred to as the {\it consensus} step, can be implemented in a distributed way where each node exchanges information with neighbors and updates its private value of $x_i$ by taking a weighted average of its local neighborhood. When taking $k$ to infinity, all nodes in the network converge to the same value and thus reach consensus.
It has been shown that $\beta$ controls the speed of consensus, with smaller values yielding faster convergence~\cite{boyd_consensus}. \subsection{The NEAR-DGD method} We next review the NEAR-DGD method~\cite{berahas_balancing_2019}, on which this work is based. NEAR-DGD is an iterative distributed method, where at every iteration, each node first decreases its private cost function by taking a local gradient (or \emph{computation}) step. Next, it performs a number of nested consensus (or \emph{communication}) steps. We denote the local variable at node $i$, iteration count $k$ and consensus round $j$ by $v_{i,k}^j \in \mathbb{R}^p$ (if the variable $v$ is not updated by consensus, the superscript will be omitted). We will use the notation $\bar{v}_k:=\frac{1}{n}\sum_{i=1}^n v_{i,k}$ to refer to the average of $v_{i,k}$ across nodes. Starting from arbitrary initial points $x^{t(0)}_{i,0}=y_{i,0} \in \mathbb{R}^{p}$, the local iterates of NEAR-DGD at iteration $k=1,2,...$ can be expressed as \begin{subequations} \begin{align} &y_{i,k}= x^{t(k-1)}_{i,k-1} - \alpha \nabla f_i \left(x^{t(k-1)}_{i,k-1}\right),\label{eq:near_dgd_local_y_ORIG}\\ &x^{t(k)}_{i,k} = \sum_{l=1}^n w^{t(k)}_{il} y_{l,k},\label{eq:near_dgd_local_x_ORIG} \end{align} \end{subequations} where $t(k)$ is the number of consensus rounds performed at iteration $k$, $\alpha>0$ is a steplength, and \begin{equation*} \*W^{t(k)}=\underbrace{\*W \cdot \*W \cdot ... \cdot \*W}_{t(k)\text{ times}} \in \mathbb{R}^{n \times n}. \end{equation*} The flexibility of NEAR-DGD lies in the sequence of consensus rounds per iteration $\{t(k)\}$, which can be tuned depending on the deployed environment to balance convergence accuracy/speed and total application cost.
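A toy numerical sketch of this iteration (our own construction on scalar quadratic local functions, not from~\cite{berahas_balancing_2019}) illustrates the role of $\{t(k)\}$: performing more consensus rounds per iteration shrinks the consensus error of the final iterates.

```python
import numpy as np

# Toy NEAR-DGD run comparing t(k) = 1 (DGD-like) against t(k) = k.
# Local objectives f_i(x) = (x - a_i)^2 / 2 on a 5-node ring; the
# minimizer of sum_i f_i is mean(a) = 3.
n = 5
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
W = np.zeros((n, n))                 # doubly stochastic ring weights
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25
alpha = 0.05

def near_dgd(t_of_k, iters=200):
    x = np.zeros(n)
    for k in range(1, iters + 1):
        y = x - alpha * (x - a)                          # computation step
        x = np.linalg.matrix_power(W, t_of_k(k)) @ y     # t(k) consensus rounds
    return x

x_fixed = near_dgd(lambda k: 1)      # one consensus round per iteration
x_incr = near_dgd(lambda k: k)       # increasing number of consensus rounds
# Increasing t(k) drives the iterates (nearly) exactly to mean(a).
assert np.max(np.abs(x_incr - a.mean())) < np.max(np.abs(x_fixed - a.mean()))
assert np.max(np.abs(x_incr - a.mean())) < 0.01
```

In this sketch the extra consensus rounds are cheap; in a deployed system their communication cost is exactly what the sequence $\{t(k)\}$ lets the designer trade against accuracy.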
With an increasing sequence of $\{t(k)\}$, NEAR-DGD can achieve exact convergence to the optimal solution of problem~\eqref{eq:prob_orig} and with $t(k) = k$ it converges linearly (in terms of gradient evaluations) for strongly convex functions~\cite{berahas_balancing_2019}. \subsection{The S-NEAR-DGD method} To accommodate bandwidth-limited communication channels, we assume that whenever node $i \in \{1,...,n\}$ needs to communicate a vector $v \in \mathbb{R}^p$ to its neighbors, it sends an approximate vector $\mathcal{T}_c\left[v\right]$ instead, i.e., $\mathcal{T}_c\left[\cdot\right]$ is a randomized operator which modifies the input vector to reduce the bandwidth. Similarly, to model the availability of only inexact gradient information, we assume that instead of the true local gradient $\nabla f_i\left(x_i\right)$, node $i$ calculates an approximation $\mathcal{T}_g\left[\nabla f_i \left(x_i\right)\right]$, where $\mathcal{T}_g\left[\cdot\right]$ is a randomized operator denoting the inexact computation. We refer to this method with inexact communication and gradient computation as the Stochastic-NEAR-DGD (S-NEAR-DGD) method. \begin{algorithm} \SetAlgoLined \KwInit{ Pick $x^{t(0)}_{i,0}=y_{i,0}$} \For{$k=1,2,...$}{ Compute $g_{i,k-1}=\mathcal{T}_g\left[\nabla f_i \left(x^{t(k-1)}_{i,k-1}\right)\right]$\\ Update $y_{i,k}\leftarrow x^{t(k-1)}_{i,k-1} - \alpha g_{i,k-1}$\\ Set $x^0_{i,k}= y_{i,k}$\\ \For{$j=1,...,t(k)$}{ Send $q^{j}_{i,k}=\mathcal{T}_c\left[x^{j-1}_{i,k}\right]$ to neighbors $l \in \mathcal{N}_i$ and receive $q^{j}_{l,k}$ \\ Update $x^j_{i,k} \leftarrow \sum_{l=1}^n \left(w_{il} q^{j}_{l,k} \right) + \left( x^{j-1}_{i,k} - q^j_{i,k}\right)$ } } \caption{S-NEAR-DGD at node $i$} \label{algo:sneardgd} \end{algorithm} Each node $i \in \{1,...,n\}$ initializes and preserves the local variables $x^{j}_{i,k}$ and $y_{i,k}$.
At iteration $k$ of S-NEAR-DGD, node $i$ calculates the stochastic gradient approximation $g_{i,k-1}=\mathcal{T}_g\left[\nabla f_i \left(x^{t(k-1)}_{i,k-1}\right)\right]$ and uses it to take a local gradient step and update its internal variable $y_{i,k}$. Next, it sets $x^0_{i,k}= y_{i,k}$ and performs $t(k)$ nested consensus rounds, where during each consensus round $j \in \{1,...,t(k)\}$ it constructs the bandwidth-efficient vector $q^{j}_{i,k}=\mathcal{T}_c\left[x^{j-1}_{i,k}\right]$, forwards it to its neighboring nodes $l \in \mathcal{N}_i$ and receives the vectors $q^j_{l,k}$ from neighbors. Finally, during each consensus round, node $i$ updates its local variable $x^j_{i,k}$ by forming a weighted average of the vectors $q^j_{l,k}$, $l=1,...,n$ and adding the residual error correction term $\left(x^{j-1}_{i,k} - q^j_{i,k}\right)$. The entire procedure is presented in Algorithm~\ref{algo:sneardgd}. Let $\*x^{t(0)}_0=\*y_0=\left[y_{1,0};...;y_{n,0}\right]$ be the concatenation of local initial points $y_{i,0}$ at nodes $i=1,...,n$ as defined in Algorithm~\ref{algo:sneardgd}. The system-wide iterates of S-NEAR-DGD at iteration count $k$ and $j$-th consensus round can be written compactly as, \begin{subequations} \begin{align} &\*y_{k} = \*x_{k-1}^{t(k-1)} - \alpha \*g_{k-1}\label{eq:near_dgd_y},\\ &\*x^j_k = \*x^{j-1}_k +\left(\*Z - I_{np}\right)\*q^{j}_k,\quad j=1,...,t(k), \label{eq:near_dgd_x} \end{align} \end{subequations} where $\*x^0_k=\*y_k$, $\*Z = \left(\*W \otimes I_p\right) \in \mathbb{R}^{np \times np}$, $g_{i,k-1}=\mathcal{T}_g\left[\nabla f_i \left(x^{t(k-1)}_{i,k-1}\right)\right]$, $q^{j}_{i,k}=\mathcal{T}_c\left[x^{j-1}_{i,k}\right]$ for $j=1,...,t(k)$ and $\*g_{k-1}$ and $\*q^j_k$ are the long vectors formed by concatenating $g_{i,k-1}$ and $q^{j}_{i,k}$ over $i$ respectively. 
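A quick numerical illustration (a toy sketch of our own) of the role of the residual error correction term: because $\*W$ is doubly stochastic, the corrected consensus update leaves the network average unchanged no matter how the exchanged vectors are distorted.

```python
import numpy as np

# The corrected update x_i <- sum_l w_il q_l + (x_i - q_i) preserves the
# network average for ANY distorted vectors q, since 1^T W = 1^T
# (double stochasticity): mean(W q) = mean(q), so the q-terms cancel.
rng = np.random.default_rng(3)
W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.25, 0.25],
              [0.0, 0.25, 0.75]])        # symmetric, doubly stochastic
x = rng.standard_normal(3)               # true local values
q = np.round(x * 4) / 4 + 0.1 * rng.standard_normal(3)  # arbitrary distortion
x_new = W @ q + (x - q)                  # corrected consensus step
assert np.isclose(x_new.mean(), x.mean())
# Without the correction term (x - q), the update W @ q would shift the
# average by the mean quantization error.
```

This exact average preservation is what keeps the quantization error from accumulating across consensus rounds.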
Moreover, due to the double stochasticity of $\*W$, the following relations hold for the average iterates $\bar{y}_k=\frac{1}{n}\sum_{i=1}^n y_{i,k}$ and $\bar{x}^j_k=\frac{1}{n}\sum_{i=1}^n x^j_{i,k}$ for all $k$ and $j$, \begin{subequations} \begin{align} &\bar{y}_{k} = \bar{x}_{k-1}^{t(k-1)} - \alpha \bar{g}_{k-1}\label{eq:near_dgd_y_avg},\\ &\bar{x}^j_k = \bar{x}^{j-1}_k,\quad j=1,...,t(k), \label{eq:near_dgd_x_avg} \end{align} \end{subequations} where $\bar{g}_{k-1}=\frac{1}{n}\sum_{i=1}^n g_{i,k-1}$. The operators $\mathcal{T}_c\left[\cdot\right]$ and $\mathcal{T}_g\left[\cdot\right]$ can be interpreted as $ \mathcal{T}_c \left[x^{j-1}_{i,k}\right]=x^{j-1}_{i,k}+\epsilon^{j}_{i,k}$, and $\mathcal{T}_g\left[\nabla f_i \left(x^{t(k-1)}_{i,k-1}\right)\right]=\nabla f_i \left(x^{t(k-1)}_{i,k-1}\right)+\zeta_{i,k}$, where $\epsilon^{j}_{i,k}$ and $\zeta_{i,k}$ are random error vectors. We list our assumptions on these vectors and the operators $\mathcal{T}_c\left[\cdot\right]$ and $\mathcal{T}_g\left[\cdot\right]$ below. \begin{assum}\textbf{(Properties of $\mathcal{T}_c\left[\cdot\right]$)} \label{assum:tc_bound} The operator $\mathcal{T}_c\left[\cdot\right]$ is iid for all $i=1,...,n$, $j=1,...,t(k)$ and $k\geq1$. Moreover, the errors $\epsilon^{j}_{i,k}=\mathcal{T}_c \left[x^{j-1}_{i,k}\right]-x^{j-1}_{i,k}$ have zero mean and bounded variance for all $i=1,...,n$, $j=1,...,t(k)$ and $k\geq1$, i.e., \begin{gather*} \mathbb{E}_{\mathcal{T}_c}\left[\epsilon^j_{i,k}\big|x^{j-1}_{i,k}\right]=0, \quad \mathbb{E}_{\mathcal{T}_c}\left[\|\epsilon^j_{i,k}\|^2 \big |x^{j-1}_{i,k}\right] \leq \sigma_c^2, \end{gather*} where $\sigma_c$ is a positive constant and the expectation is taken over the randomness of $\mathcal{T}_c$. 
\end{assum} \begin{example}\textbf{(Probabilistic quantizer)} \label{ex:prob_q} An example of an operator satisfying Assumption~\ref{assum:tc_bound} is the probabilistic quantizer in~\cite{yuan_distributed_2012}, defined as follows: for a scalar $x \in \mathbb{R}$, its quantized value $\mathcal{Q}\left[x\right]$ is given by \begin{equation*} \mathcal{Q}\left[x\right]=\begin{cases} \floor*{x} \quad \text{with probability $\left(\ceil*{x}-x\right)\Delta$} \\ \ceil*{x} \quad \text{with probability $\left(x-\floor*{x}\right)\Delta$},\end{cases} \end{equation*} where $\floor*{x}$ and $\ceil*{x}$ denote the operations of rounding down and up to the nearest integer multiple of $1/\Delta$, respectively, and $\Delta$ is a positive integer. \end{example} It is shown in~\cite{yuan_distributed_2012} that $\mathbb{E}\Big[x-\mathcal{Q}\left[x\right]\Big] =0$ and $\mathbb{E}\left[\left|x-\mathcal{Q}\left[x\right]\right|^2\right] \leq \frac{1}{4\Delta^2}$. For any vector $v = [v_i]_{i=\{1,\ldots,p\}}$ in $\mathbb{R}^p$, we can then apply the operator $\mathcal{Q}$ element-wise to obtain $\mathcal{T}_c\left[v\right]=\Big[\mathcal{Q}\left[v_i\right]\Big]_{i=\{1,\ldots,p\}}$ in $\mathbb{R}^p$ with $\mathbb{E}_{\mathcal{T}_c}\left[v - \mathcal{T}_c\left[v\right]\big| v\right] =\mathbf{0}$ and $\mathbb{E}_{\mathcal{T}_c}\left[\left\|v - \mathcal{T}_c\left[v\right]\right\|^2 \big| v\right] \leq \frac{p}{4\Delta^2}=\sigma^2_c$. \begin{assum}\textbf{(Properties of $\mathcal{T}_g\left[\cdot\right]$)} \label{assum:tg_bound} The operator $\mathcal{T}_g\left[\cdot\right]$ is iid for all $i=1,...,n$ and $k\geq1$. 
Moreover, the errors $\zeta_{i,k}=\mathcal{T}_g\left[\nabla f_i \left(x^{t(k-1)}_{i,k-1}\right)\right]-\nabla f_i \left(x^{t(k-1)}_{i,k-1}\right)$ have zero mean and bounded variance for all $i=1,...,n$ and $k\geq1$, \begin{gather*} \mathbb{E}_{\mathcal{T}_g}\left[\zeta_{i,k} \Big|x^{t(k-1)}_{i,k-1}\right]=0, \quad \mathbb{E}_{\mathcal{T}_g}\left[\left\|\zeta_{i,k}\right\|^2 \Big |x^{t(k-1)}_{i,k-1}\right] \leq \sigma_g^2, \end{gather*} where $\sigma_g$ is a positive constant and the expectation is taken over the randomness of $\mathcal{T}_g$. \end{assum} Assumption~\ref{assum:tg_bound} is standard in the analysis of distributed stochastic gradient methods~\cite{sundharram_distributed_2010,nedic_stochastic_2014,pu_distributed_2018-1,lian_can_nodate}. We make one final assumption on the independence of the operators $\mathcal{T}_c\left[\cdot\right]$ and $\mathcal{T}_g\left[\cdot\right]$, namely that the process of generating stochastic gradient approximations does not affect the process of random quantization and vice versa. \begin{assum}\textbf{(Independence)} \label{assum:iid} The operators $\mathcal{T}_g\left[\cdot\right]$ and $\mathcal{T}_c\left[\cdot\right]$ are independent for all $i=1,...,n$, $j=1,...,t(k)$ and $k\geq1$. \end{assum} Before we conclude this section, we note that there are many possible choices for the operators $\mathcal{T}_c\left[\cdot\right]$ and $\mathcal{T}_g\left[\cdot\right]$ and each would yield a different algorithm instance in the family of NEAR-DGD-based methods. For example, both $\mathcal{T}_c\left[\cdot\right]$ and $\mathcal{T}_g\left[\cdot\right]$ can be identity operators as in~\cite{berahas_balancing_2019}. We considered quantized communication using deterministic (D) algorithms (e.g. rounding to the nearest integer with no uncertainty) in~\cite{berahas2019nested}, while a variant of NEAR-DGD that utilizes stochastic gradient approximations only was presented in~\cite{Iakovidou2019NestedDG}. 
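To make the probabilistic quantizer of Example~\ref{ex:prob_q} concrete, it can be simulated directly. The following Python sketch (illustrative only; NumPy is assumed available, and the function name \texttt{prob\_quantize} is our own, not from the cited work) checks empirically that the element-wise quantization error has zero mean and that its expected squared norm respects the bound $p/(4\Delta^2)$:

```python
import numpy as np

def prob_quantize(v, delta, rng):
    """Probabilistic quantizer of Example (prob_q), applied element-wise.

    Each entry of v is rounded down or up to the nearest multiple of
    1/delta, with probabilities chosen so the error has zero mean.
    """
    v = np.asarray(v, dtype=float)
    lo = np.floor(v * delta) / delta   # round down to a multiple of 1/delta
    hi = np.ceil(v * delta) / delta    # round up to a multiple of 1/delta
    p_up = (v - lo) * delta            # P[round up] = (x - floor(x)) * delta
    return np.where(rng.random(v.shape) < p_up, hi, lo)

rng = np.random.default_rng(0)
delta = 4                              # grid spacing 1/delta = 0.25
v = rng.normal(size=5)                 # p = 5 coordinates
samples = np.stack([prob_quantize(v, delta, rng) for _ in range(200_000)])
err = samples - v
print(np.abs(err.mean(axis=0)).max())  # close to 0: the quantizer is unbiased
print((err ** 2).sum(axis=1).mean())   # respects p / (4 delta^2) = 5/64, up to sampling noise
```

The empirical mean error vanishes and the mean squared error stays below $p/(4\Delta^2)$, matching the moment conditions of Assumption~\ref{assum:tc_bound}.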
This work unifies and generalizes these methods. We summarize the related works in Table~\ref{tab:ndgd_allmethods}, denoting deterministic and random error vectors with (D) and (R), respectively. \begin{table*}[t] \begin{center} \begin{tabular}{ c c c } \hline \textbf{Method} & \textbf{Communication} & \textbf{Computation}\\ \hline NEAR-DGD~\cite{berahas_balancing_2019}, NEAR-DGD$^{t_c,t_g}$~\cite{berahas2020convergence} & $ \mathcal{T}_c \left[x^{j}_{i,k}\right]=x^{j}_{i,k}$ & $\mathcal{T}_g\left[\nabla f_i \left(x^{t(k)}_{i,k}\right)\right]=\nabla f_i \left(x^{t(k)}_{i,k}\right)$ \\ NEAR-DGD+Q~\cite{berahas2019nested} & $ \mathcal{T}_c \left[x^{j}_{i,k}\right]=x^{j}_{i,k}+\epsilon^{j+1}_{i,k}$ (D) & $\mathcal{T}_g\left[\nabla f_i \left(x^{t(k)}_{i,k}\right)\right]=\nabla f_i \left(x^{t(k)}_{i,k}\right)$ \\ SG-NEAR-DGD~\cite{Iakovidou2019NestedDG} & $ \mathcal{T}_c \left[x^{j}_{i,k}\right]=x^{j}_{i,k}$ & $\mathcal{T}_g\left[\nabla f_i \left(x^{t(k)}_{i,k}\right)\right]=\nabla f_i \left(x^{t(k)}_{i,k}\right)+\zeta_{i,k+1}$ (R) \\ S-NEAR-DGD (this paper) & $ \mathcal{T}_c \left[x^{j}_{i,k}\right]=x^{j}_{i,k}+\epsilon^{j+1}_{i,k}$ (R) & $\mathcal{T}_g\left[\nabla f_i \left(x^{t(k)}_{i,k}\right)\right]=\nabla f_i \left(x^{t(k)}_{i,k}\right)+\zeta_{i,k+1}$ (R) \\ \hline \end{tabular} \caption{Summary of NEAR-DGD-based methods. (D) and (R) denote deterministic and random error vectors respectively.} \label{tab:ndgd_allmethods} \end{center} \end{table*} \section{Convergence Analysis} \label{sec:analysis} In this section, we present our theoretical results on the convergence of S-NEAR-DGD. We assume that Assumptions~\ref{assum:lip}-\ref{assum:iid} hold for the rest of this paper. We first focus on the instance of our algorithm where the number of consensus rounds is constant at every iteration, i.e., $t(k)=t$ in~\eqref{eq:near_dgd_x} for some positive integer $t>0$. We refer to this method as S-NEAR-DGD$^t$. 
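To illustrate the S-NEAR-DGD$^t$ recursion analyzed below, here is a minimal Python sketch on scalar quadratic losses $f_i(x)=\tfrac{1}{2}(x-b_i)^2$, with zero-mean Gaussian perturbations standing in for $\mathcal{T}_c\left[\cdot\right]$ and $\mathcal{T}_g\left[\cdot\right]$; the ring graph, noise levels and parameter values are our own illustrative choices, not prescribed by the analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n, t, alpha, n_iters = 8, 12, 0.2, 400

# Doubly stochastic mixing matrix W for a ring graph (lazy Metropolis weights).
W = np.eye(n) / 2
for i in range(n):
    W[i, (i - 1) % n] += 0.25
    W[i, (i + 1) % n] += 0.25

b = rng.normal(size=n)        # f_i(x) = 0.5 * (x - b_i)^2, so x* = mean(b)
x = np.zeros(n)               # local iterates (one scalar per node)

def T_c(v):                   # noisy channel: zero-mean, bounded-variance error
    return v + 0.01 * rng.normal(size=v.shape)

def T_g(g):                   # stochastic gradient: zero-mean perturbation
    return g + 0.01 * rng.normal(size=g.shape)

for _ in range(n_iters):
    x = x - alpha * T_g(x - b)           # inexact gradient step (the y_k update)
    for _ in range(t):                   # t nested inexact consensus rounds
        q = T_c(x)
        x = x + (W - np.eye(n)) @ q      # x^j = x^{j-1} + (Z - I) q^j

print(np.abs(x - b.mean()).max())        # all nodes end up near x* = mean(b)
```

Because $W$ is doubly stochastic and both perturbations have zero mean, the node average tracks an inexact gradient step on $\bar{f}$, and the local iterates settle in a small neighborhood of the minimizer, as the analysis below predicts.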
Next, we will analyze a second variant of S-NEAR-DGD, where the number of consensus steps increases by one at every iteration, namely $t(k)=k$, for $k\geq1$. We will refer to this new version as S-NEAR-DGD$^+$. Before our main analysis, we introduce some additional notation and a number of preliminary results. \subsection{Preliminaries} We will use the notation $\mathcal{F}^j_k$ to denote the $\sigma$-algebra containing all the information generated by S-NEAR-DGD up to and including the $k$-th inexact gradient step (calculated using $\*g_{k-1}$) and $j$ subsequent nested consensus rounds. This includes the initial point $\*x_0=\*y_0$, the vectors $\{\*x^l_\tau : 1\leq l \leq t(\tau) \text{ if }1\leq \tau<k \text{ and } 1\leq l \leq j \text{ if }\tau=k \}$, the vectors $\*y_\tau$ for $1 \leq \tau\leq k$, the vectors $\{\*q^l_\tau : 1\leq l \leq t(\tau) \text{ if }1\leq \tau<k \text{ and } 1\leq l \leq j \text{ if }\tau=k \}$ and the vectors $\*g_\tau$ for $0\leq \tau\leq k-1$. For example, $\mathcal{F}^0_k$ denotes the $\sigma$-algebra containing all the information up to and including the vector $\*y_k$ generated at the $k$-th gradient step (notice that $\mathcal{F}^0_k$ contains the inexact gradient $\*g_{k-1}$, but not $\*g_k$), while $\mathcal{F}^l_k$ stores all the information produced by S-NEAR-DGD up to and including $\*x^l_k$, generated at the $l^{th}$ consensus round after the $k$-th gradient step using $\*g_{k-1}$. We also introduce four lemmas here; Lemmas~\ref{lem:descent_f} and \ref{lem:global_L_mu} will be used to show that the iterates generated by S-NEAR-DGD$^t$ are bounded and to characterize their distance to the solution of Problem~(\ref{eq:prob_orig}). Next, in Lemmas~\ref{lem:comm_error} and~\ref{lem:comp_error} we prove that the total communication and computation errors in a single iteration of the S-NEAR-DGD$^t$ method have zero mean and bounded variance.
These two error terms play a key role in our main analysis of convergence properties. The following lemma is adapted from~\cite[Theorem 2.1.15, Chapter 2]{nesterov_introductory_1998}. \begin{lem}\textbf{(Gradient descent)} \label{lem:descent_f} Let $h:\mathbb{R}^d \rightarrow \mathbb{R}$ be a $\mu$-strongly convex function with $L$-Lipschitz gradients and define $x^\star := \arg \min_x h(x)$. Then the gradient method $x_{k+1} = x_k - \alpha \nabla h\left(x_k\right)$ with steplength $\alpha < \frac{2}{\mu+L}$ generates a sequence $\{x_k\}$ such that \begin{equation*} \begin{split} \left \| x_{k+1} -x^\star \right \|^2 &\leq \left(1-\frac{2\alpha \mu L}{\mu+L}\right)\left \| x_k - x^\star\right\|^2. \end{split} \end{equation*} \end{lem} \begin{lem}\textbf{(Global convexity and smoothness)} \label{lem:global_L_mu} The global function $\*f:\mathbb{R}^{np}\to\mathbb{R}$, $\*f\left(\*x\right)=\sum_{i=1}^n f_i(x_i)$ is $\mu$-strongly convex and $L$-smooth, where $\mu=\min_i \mu_i$ and $L=\max_i L_i$. In addition, the average function $\bar f:\mathbb{R}^{p}\to\mathbb{R}$, $\bar{f}(x)=\frac{1}{n}\sum_{i=1}^n f_i(x)$ is $\mu_{\bar{f}}$-strongly convex and $L_{\bar{f}}$-smooth, where $\mu_{\bar{f}}=\frac{1}{n}\sum_{i=1}^n \mu_i$ and $L_{\bar{f}}=\frac{1}{n}\sum_{i=1}^n L_i$. \end{lem} \begin{proof} This is a direct consequence of Assumptions~\ref{assum:lip} and~\ref{assum:conv}. \end{proof} \begin{lem}\textbf{(Bounded communication error)} \label{lem:comm_error} Let $\mathcal{E}^c_{t,k} := \*x^{t}_k - \*Z^{t}\*y_k$ be the total communication error at the $k$-th iteration of S-NEAR-DGD$^t$, i.e. $t(k)=t$ in~\eqref{eq:near_dgd_y} and~\eqref{eq:near_dgd_x}.
Then the following relations hold for $k\geq1$, \begin{equation*} \begin{split} \mathbb{E}_{\mathcal{T}_c}\left[\mathcal{E}^{c}_{t,k} \big| \mathcal{F}^0_k \right] = \mathbf{0}, \quad \mathbb{E}_{\mathcal{T}_c}\left[\left \|\mathcal{E}^{c}_{t,k} \right\|^2 \big| \mathcal{F}^0_k \right] &\leq \frac{4n\sigma_c^2}{1-\beta^2}. \end{split} \end{equation*} \end{lem} \begin{proof} Let $\tilde{\*Z}:=\*Z-I_{np}$. Setting $\*x^0_k=\*y_k$ and applying (\ref{eq:near_dgd_x}), the error $\mathcal{E}^c_{t,k}$ can be expressed as $\mathcal{E}^{c}_{t,k} = \*x^{t-1}_k + \tilde{\*Z} \*q^t_k - \*Z^t\*x^0_k$. Adding and subtracting the quantity $\sum_{j=1}^{t-1}\left(\*Z^{t-j}\*x_k^{j} \right)=\sum_{j=1}^{t-1}\left(\*Z^{t-j}\left(\*x_k^{j-1} + \tilde{\*Z} \*q^j_k \right) \right)$ (by (\ref{eq:near_dgd_x})) yields, \begin{equation*} \begin{split} \mathcal{E}^{c}_{t,k} & = \tilde{\*Z}\left(\*q^t_k-\*x^{t-1}_k\right) -\sum_{j=1}^{t-2}\left(\*Z^{t-j}\*x_k^{j} \right) + \sum_{j=2}^{t-1}\left(\*Z^{t-j}\*x_k^{j-1}\right)+ \sum_{j=2}^{t-1}\left(\*Z^{t-j}\tilde{\*Z} \*q^j_k \right)+\*Z^{t-1}\tilde{\*Z}\left(\*q^1_k-\*x^0_k\right),\\ \end{split} \end{equation*} where we have taken the $(t-1)^{th}$ term out of $-\sum_{j=1}^{t-1}\left(\*Z^{t-j}\*x_k^{j} \right)$ and the $1^{st}$ term out of $\sum_{j=1}^{t-1}\left(\*Z^{t-j}\left(\*x_k^{j-1} + \tilde{\*Z} \*q^j_k \right) \right)$. We observe that $\sum_{j=1}^{t-2}\left(\*Z^{t-j}\*x_k^{j} \right)=\sum_{j=2}^{t-1}\left(\*Z^{t-j+1}\*x_k^{j-1} \right)$, and after rearranging and combining the terms of the previous relation we obtain, \begin{equation} \label{eq:lem_comp_err_1} \begin{split} \mathcal{E}^{c}_{t,k} & = \sum_{j=1}^{t} \left(\*Z^{t-j} \tilde{\*Z}\left(\*q^j_k - \*x^{j-1}_k \right)\right). \end{split} \end{equation} Let $d^j_k = \*q^j_k-\*x^{j-1}_k$.
Noticing that $d^j_k=\left[\epsilon^j_{1,k};...;\epsilon^j_{n,k}\right]$ as defined in Assumption~\ref{assum:tc_bound}, it follows that $ \mathbb{E}_{\mathcal{T}_c}\left[d^j_k \Big| \mathcal{F}^{j-1}_k\right] = \mathbf{0}$ for $1\leq j \leq t$. Due to the fact that $\mathcal{F}^{0}_k \subseteq \mathcal{F}^{1}_k \subseteq ... \subseteq \mathcal{F}^{j-1}_k$, applying the tower property of conditional expectation yields, \begin{equation} \label{eq:comm_err_1st_mom} \begin{split} \mathbb{E}_{\mathcal{T}_c}\left[ d^j_k \Big| \mathcal{F}^0_k\right] &= \mathbb{E}_{\mathcal{T}_c}\left[ \mathbb{E}_{\mathcal{T}_c}\left[ d^j_k \Big| \mathcal{F}^{j-1}_k\right] \Big| \mathcal{F}^0_k\right] = \mathbf{0}. \end{split} \end{equation} Combining the preceding relation with~\eqref{eq:lem_comp_err_1} and due to the linearity of expectation, we obtain $\mathbb{E}_{\mathcal{T}_c}\left[ \mathcal{E}^c_{t,k} \big| \mathcal{F}^0_k\right] = \mathbf{0}$. This completes the first part of the proof. Let $D^j_k = \*Z^{t-j}\tilde{\*Z} d^j_k$. By the spectral properties of $\*Z$, we have $\left\|\*Z^{t-j}\tilde{\*Z}\right\| = \max_{i>1}|\lambda_i^{t-j}||\lambda_i-1|\leq 2\beta^{t-j}$.
We thus obtain for $1\leq j \leq t$, \begin{equation} \label{eq:comm_err_2nd_mom} \begin{split} \mathbb{E}_{\mathcal{T}_c}&\left[ \left\|D^j_k\right\|^2 \Big| \mathcal{F}^0_k\right] \leq 4\beta^{2(t-j)} \mathbb{E}_{\mathcal{T}_c}\left[ \left\| d^j_k \right\|^2 \Big| \mathcal{F}^0_k\right]\\ &=4\beta^{2(t-j)}\mathbb{E}_{\mathcal{T}_c}\left[ \mathbb{E}_{\mathcal{T}_c}\left[ \left\| d^j_k\right\|^2 \Big| \mathcal{F}^{j-1}_k\right] \bigg| \mathcal{F}^0_k\right] \\ &=4\beta^{2(t-j)}\mathbb{E}_{\mathcal{T}_c}\left[ \sum_{i=1}^{n} \mathbb{E}_{\mathcal{T}_c}\left[ \left\|\epsilon^j_{i,k} \right\|^2 \Big| \mathcal{F}^{j-1}_k\right] \bigg| \mathcal{F}^0_k\right]\\ &\leq 4\beta^{2(t-j)}n\sigma_c^2, \end{split} \end{equation} where we used the tower property of conditional expectation to get the intermediate equalities and applied Assumption~\ref{assum:tc_bound} to get the last inequality. Assumption~\ref{assum:tc_bound} implies that for $i_1 \neq i_2$ and $j_1 \neq j_2$, $\epsilon^{j_1}_{i_1,k}$ and $\epsilon^{j_2}_{i_2,k}$ and by extension $d^{j_1}_k$ and $d^{j_2}_k$ are independent. Eq.~\eqref{eq:comm_err_1st_mom} then yields $\mathbb{E}_{\mathcal{T}_c}\left[\left \langle D^{j_1}_k, D^{j_2}_k \right \rangle\bigg|\mathcal{F}^0_k\right] = 0$. Combining this fact and the linearity of expectation yields $\mathbb{E}_{\mathcal{T}_c} \left[\left\| \mathcal{E}^c_{t,k} \right\|^2 \big| \mathcal{F}^{0}_k \right] = \mathbb{E}_{\mathcal{T}_c} \left[ \left\|\sum_{j=1}^{t} D^j_k\right\|^2\Bigg| \mathcal{F}^{0}_k \right] = \sum_{j=1}^{t} \mathbb{E}_{\mathcal{T}_c} \left[ \left\| D^j_k\right\|^2\Bigg| \mathcal{F}^{0}_k \right]$.
Applying~\eqref{eq:comm_err_2nd_mom} to this last relation yields, \begin{equation*} \mathbb{E}_{\mathcal{T}_c} \left[\left\| \mathcal{E}^c_{t,k} \right\|^2 \big| \mathcal{F}^{0}_k \right] \leq 4 n \sigma_c^2 \sum_{j=1}^t \beta^{2(t-j)}\leq \frac{4n\sigma_c^2}{1-\beta^2}, \end{equation*} where we used $\sum_{j=1}^t \beta^{2(t-j)}=\sum_{j=0}^{t-1} \beta^{2j} = \frac{1-\beta^{2t}}{1-\beta^2}\leq \frac{1}{1-\beta^2}$ to get the last inequality. \end{proof} \begin{lem}\textbf{(Bounded computation error)} \label{lem:comp_error} Let $\mathcal{E}^g_{k} := \*g_{k-1} -\nabla \*f \left( \*x^t_{k-1}\right)$ be the computation error at the $k$-th iteration of S-NEAR-DGD$^t$. Then the following statements hold for all $k\geq1$, \begin{equation*} \mathbb{E}_{\mathcal{T}_g}\left[\mathcal{E}^{g}_{k} \big| \mathcal{F}^t_{k-1}\right] = \mathbf{0}, \quad \mathbb{E}_{\mathcal{T}_g}\left[\left\|\mathcal{E}^{g}_{k}\right\|^2 \big| \mathcal{F}^t_{k-1} \right] \leq n\sigma_g^2. \end{equation*} \end{lem} \begin{proof} We observe that $\mathcal{E}^{g}_{k} = \left[\zeta_{1,k};...;\zeta_{n,k}\right]$ as defined in Assumption~\ref{assum:tg_bound}. Due to the unbiasedness of $\mathcal{T}_g\left[\cdot\right]$, we obtain \begin{equation*} \mathbb{E}_{\mathcal{T}_g}\left[\mathcal{E}^{g}_{k}\big| \mathcal{F}^t_{k-1} \right] = \mathbb{E}_{\mathcal{T}_g}\left[\*g_{k-1} -\nabla \*f \left( \*x^t_{k-1}\right)\big| \mathcal{F}^t_{k-1} \right] = \mathbf{0}. \end{equation*} For the squared norm of $\mathcal{E}^{g}_{k}$ we have, \begin{equation*} \left\|\mathcal{E}^{g}_{k}\right\|^2 = \left\| \*g_{k-1} -\nabla \*f \left( \*x^t_{k-1}\right)\right\|^2=\sum_{i=1}^n\left\|\zeta_{i,k}\right\|^2. \end{equation*} Taking the expectation conditional on $\mathcal{F}^{t}_{k-1}$ on both sides of the equation above and using Assumption \ref{assum:tg_bound} establishes the desired results. \end{proof} We are now ready to proceed with our main analysis of the convergence properties of S-NEAR-DGD.
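As a quick numerical sanity check of Lemma~\ref{lem:descent_f}, the following Python sketch (illustrative only; the diagonal quadratic and steplength are our own choices) runs gradient descent on a strongly convex quadratic and asserts the per-step contraction of the squared distance to the minimizer:

```python
import numpy as np

# Check the contraction of Lemma (descent_f) on h(x) = 0.5 x^T A x,
# with mu = lambda_min(A), L = lambda_max(A) and minimizer x* = 0.
rng = np.random.default_rng(2)
A = np.diag([1.0, 2.0, 10.0])      # mu = 1, L = 10
mu, L = 1.0, 10.0
alpha = 1.5 / (mu + L)             # any alpha < 2 / (mu + L) works
rate = 1 - 2 * alpha * mu * L / (mu + L)

x = rng.normal(size=3)
for _ in range(50):
    x_next = x - alpha * A @ x     # gradient step on h
    # squared distance to x* = 0 contracts by at least the factor `rate`
    assert x_next @ x_next <= rate * (x @ x) + 1e-12
    x = x_next
print(np.linalg.norm(x))           # driven close to the minimizer
```

Every iteration satisfies the contraction bound, and the iterate converges linearly to the minimizer, as the lemma guarantees.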
\subsection{Main Analysis} For simplicity, from this point on we will use the notation $\mathbb{E}\left[ \cdot \right]$ to denote the expected value taken over the randomness of both $\mathcal{T}_c$ and $\mathcal{T}_g$. We begin our convergence analysis by proving that the iterates generated by S-NEAR-DGD$^t$ are bounded in expectation in Lemma~\ref{lem:bounded_iterates}. Next, we demonstrate that the distance between the local iterates produced by our method and their average is bounded in Lemma~\ref{lem:bounded_variance}. In Lemma~\ref{lem:descent_avg_iter}, we prove an intermediate result stating that the distance between the average iterates of S-NEAR-DGD$^t$ and the optimal solution is bounded. We then use this result to show the linear convergence of S-NEAR-DGD$^t$ to a neighborhood of the optimal solution in Theorem~\ref{thm:bounded_dist_min}, and we characterize the size of this error neighborhood in terms of network- and problem-related quantities and the precision of the stochastic gradients and the noisy communication channel. We prove convergence to a neighborhood of the optimal solution for the local iterates of S-NEAR-DGD$^t$ in Corollary~\ref{cor:local_dist}. We conclude our analysis by proving that the average iterates of S-NEAR-DGD$^+$ converge with geometric rate to an improved error neighborhood compared to S-NEAR-DGD$^t$ in Theorem~\ref{thm:near_dgd_plus}. \begin{lem}\textbf{(Bounded iterates)} \label{lem:bounded_iterates} Let $\*x_k$ and $\*y_k$ be the iterates generated by S-NEAR-DGD$^t$ ($t(k)=t$ in Eq. (\ref{eq:near_dgd_x}) and (\ref{eq:near_dgd_y})) starting from initial point $\*y_0=\*x_0 \in \mathbb{R}^{np}$ and let the steplength $\alpha$ satisfy \begin{equation*} \alpha < \frac{2}{\mu + L}, \end{equation*} where $\mu=\min_i \mu_i$ and $L=\max_i L_i$.
Then $\*x_k$ and $\*y_k$ are bounded in expectation for $k\geq1$, i.e., \begin{equation*} \begin{split} \mathbb{E}\left[\left\| \*y_{k}\right\|^2 \right]\leq D + \frac{(1+\kappa)^2n\sigma_g^2}{2L^2}+ \frac{2(1+\kappa)^2 n\sigma_c^2}{\alpha^2\left(1-\beta^2\right)L^2}, \end{split} \end{equation*} \begin{equation*} \begin{split} \mathbb{E}\left[\left\| \*x^t_k \right\|^2 \right] \leq D + \frac{(1+\kappa)^2n\sigma_g^2}{2L^2}+ \frac{2(1+\kappa)^2 n\sigma_c^2}{\alpha^2\left(1-\beta^2\right)L^2} +\frac{4n\sigma_c^2}{1-\beta^2}, \end{split} \end{equation*} where $D=2 \mathbb{E}\left[ \left\|\*y_0 - \*u^\star \right\|^2 \right]+2\left(1+4\nu^{-3}\right)\left\|\*u^\star\right\|^2$, $\*u^\star=[u_1^\star;u_2^\star;...;u_n^\star] \in \mathbb{R}^{np}$, $u_i^\star = \arg \min_x f_i(x)$, $\nu=\frac{2\alpha\mu L}{\mu+L}$ and $\kappa=L/\mu$ is the condition number of Problem~\eqref{eq:consensus_prob}. \end{lem} \begin{proof} Consider, \begin{equation*} \label{eq:lem_bounded_xy_1} \begin{split} \left\| \*y_{k+1} - \*u^\star \right\|^2 &= \left\|\*x_k^{t} - \alpha \*g_k - \*u^\star \right\|^2\\ &= \left\|\*x_k^{t} - \alpha \nabla \*f(\*x_k^{t}) - \*u^\star - \alpha \mathcal{E}^g_{k+1} \right\|^2 \\ &= \left\|\*x_k^{t} - \alpha \nabla \*f(\*x_k^{t}) - \*u^\star \right\|^2 + \alpha^2\left\| \mathcal{E}^g_{k+1} \right\|^2 - 2\alpha \left \langle \*x_k^{t} - \alpha \nabla \*f(\*x_k^{t}) - \*u^\star, \mathcal{E}^g_{k+1}\right \rangle, \end{split} \end{equation*} where we used (\ref{eq:near_dgd_y}) to get the first equality and added and subtracted $\alpha\nabla\*f\left(\*x^t_k\right)$ and applied the computation error definition $\mathcal{E}^g_{k+1}:=\*g_k - \nabla \*f\left(\*x^t_k\right)$ to obtain the second equality.
Taking the expectation conditional on $\mathcal{F}^t_k$ on both sides of the relation above and applying Lemma~\ref{lem:comp_error} yields, \begin{equation} \label{eq:lem_bounded_xy_2} \begin{split} \mathbb{E}\left[\left\| \*y_{k+1} - \*u^\star \right\|^2 \Big| \mathcal{F}^t_k \right]&\leq \left\|\*x_k^{t} - \alpha \nabla \*f(\*x_k^{t}) - \*u^\star \right\|^2 + \alpha^2 n \sigma_g^2. \end{split} \end{equation} For the first term on the right-hand side of (\ref{eq:lem_bounded_xy_2}), after combining Lemma~\ref{lem:descent_f} with Lemma~\ref{lem:global_L_mu} and due to $\alpha < \frac{2}{\mu+L}$ we obtain, \begin{equation*} \label{eq:lem_bounded_xy_3} \left\|\*x_k^{t} - \alpha \nabla \*f(\*x_k^{t}) - \*u^\star \right\|^2 \leq (1-\nu)\left\|\*x_k^{t}-\*u^\star\right\|^2, \end{equation*} where $\nu=\frac{2\alpha \mu L}{\mu + L}=\frac{2\alpha L}{1+\kappa}<1$. Expanding the term on the right-hand side of the above relation yields, \begin{equation*} \label{eq:lem_bounded_xy_4} \begin{split} \left\|\*x_k^{t}-\*u^\star\right\|^2 &= \left\|\mathcal{E}^{c}_{t,k} + \*Z^t\*y_k -\*Z^t\*u^\star + \*Z^t\*u^\star - \*u^\star\right\|^2\\ &= \left\|\mathcal{E}^{c}_{t,k}\right\|^2 + \left\|\*Z^t\left(\*y_k - \*u^\star\right) - \left(I-\*Z^t\right)\*u^\star\right\|^2 + 2\left \langle \mathcal{E}^{c}_{t,k}, \*Z^t\*y_k - \*u^\star \right \rangle \\ &\leq \left\|\mathcal{E}^{c}_{t,k}\right\|^2 + \left(1+\nu\right)\left\|\*Z^t\left(\*y_k - \*u^\star\right)\right\|^2 +\left(1+\nu^{-1}\right)\left\| \left(I-\*Z^t\right)\*u^\star\right\|^2 + 2\left \langle \mathcal{E}^{c}_{t,k}, \*Z^t\*y_k - \*u^\star \right \rangle\\ &\leq \left\|\mathcal{E}^{c}_{t,k}\right\|^2 + \left(1+\nu\right)\left\|\*y_k - \*u^\star\right\|^2 +4\left(1+\nu^{-1}\right)\left\|\*u^\star\right\|^2 + 2\left \langle \mathcal{E}^{c}_{t,k}, \*Z^t\*y_k - \*u^\star \right \rangle, \end{split} \end{equation*} where we added and subtracted the quantities $\*Z^t\*y_k$ and $\*Z^t\*u^\star$ and applied the communication error
definition $\mathcal{E}^{c}_{t,k}:=\*x^t_k - \*Z^t\*y_k$ to get the first equality. We used the standard inequality $\pm2\langle a,b\rangle \leq c\|a\|^2 + c^{-1}\|b\|^2$ that holds for any two vectors $a,b$ and positive constant $c>0$ to obtain the first inequality. Finally, we derived the last inequality using the relations $\left\|\*Z^t \right\|=1$ and $\left\|I-\*Z^t \right\|<2$ that hold due to Assumption~\ref{assum:consensus_mat}. Due to the fact that $\mathcal{F}^0_k \subseteq \mathcal{F}^t_k$, combining the preceding three relations and taking the expectation conditional on $\mathcal{F}^0_k$ on both sides of~\eqref{eq:lem_bounded_xy_2} yields, \begin{equation*} \begin{split} \mathbb{E}\Big[\big\| \*y_{k+1} - \*u^\star \big\|^2 \Big| \mathcal{F}^0_k \Big] &\leq \left(1-\nu^2\right) \left\|\*y_k - \*u^\star \right\|^2 + \alpha^2n\sigma_g^2 \\ &\quad+ (1-\nu)\mathbb{E}\left[\left\|\mathcal{E}^c_{t,k}\right\|^2\big|\mathcal{F}^0_k\right]+ 4\nu^{-1}\left(1-\nu^2\right)\left\|\*u^\star\right\|^2+ 2(1-\nu)\mathbb{E}\left[\left \langle \mathcal{E}^{c}_{t,k}, \*Z^t\*y_k - \*u^\star \right \rangle \big|\mathcal{F}^0_k\right]\\ &\leq \left(1-\nu^2\right) \left\|\*y_k - \*u^\star \right\|^2 + \alpha^2n\sigma_g^2 + (1-\nu)\frac{4n\sigma_c^2}{1-\beta^2}+ 4\nu^{-1}\left(1-\nu^2\right)\left\|\*u^\star\right\|^2, \end{split} \end{equation*} where we applied Lemma~\ref{lem:comm_error} to get the last inequality. 
Taking the total expectation on both sides of the relation above and applying it recursively over iterations $0, 1, \ldots, k$ yields, \begin{equation*} \label{eq:lem1_1} \begin{split} \mathbb{E}\Big[\big\| \*y_{k} - \*u^\star \big\|^2\Big] &\leq \left(1-\nu^2\right)^k \mathbb{E}\left[ \left\|\*y_0 - \*u^\star \right\|^2 \right]+ \alpha^2\nu^{-2}n\sigma_g^2 + (\nu^{-2}-\nu^{-1})\frac{4n\sigma_c^2}{1-\beta^2} + 4\left(\nu^{-3}-\nu^{-1}\right)\left\|\*u^\star\right\|^2\\ &\leq \mathbb{E}\left[ \left\|\*y_0 - \*u^\star \right\|^2 \right]+ \alpha^2\nu^{-2}n\sigma_g^2 + \frac{4\nu^{-2}n\sigma_c^2}{1-\beta^2} + 4\nu^{-3}\left\|\*u^\star\right\|^2, \end{split} \end{equation*} where we used $\sum_{h=0}^{k-1}\left(1-\nu^2\right)^h\leq \nu^{-2}$ to get the first inequality and $\nu > 0$ to get the second inequality. Moreover, the statement $\left\| \*y_{k}\right\|^2 = \left\| \*y_{k} - \*u^\star + \*u^\star \right\|^2\leq 2\left\| \*y_{k} - \*u^\star \right\|^2+2\left\| \*u^\star \right\|^2$ trivially holds. Taking the total expectation on both sides of this relation yields, \begin{equation} \label{eq:bound_yk} \begin{split} \mathbb{E}\Big[\big\| \*y_{k}\big\|^2 \Big]& \leq 2\mathbb{E}\left[\left\| \*y_{k} - \*u^\star \right\|^2\right]+2\left\| \*u^\star \right\|^2\\ &\leq 2 \mathbb{E}\left[ \left\|\*y_0 - \*u^\star \right\|^2 \right]+ 2 \alpha^2\nu^{-2}n\sigma_g^2 + \frac{8\nu^{-2}n\sigma_c^2}{1-\beta^2}+ 2\left(1+4\nu^{-3}\right)\left\|\*u^\star\right\|^2. \end{split} \end{equation} Applying the definitions of $D$, $\nu$ and $\kappa$ to (\ref{eq:bound_yk}) yields the first result of this lemma.
Finally, the following statement also holds \begin{equation*} \begin{split} \left\| \*x^t_k \right\|^2 &= \left\| \mathcal{E}^{c}_{t,k} + \*Z^t\*y_k\right\|^2\\ &= \left\| \mathcal{E}^{c}_{t,k}\right\|^2 +\left\| \*Z^t\*y_k\right\|^2 + 2\left \langle\mathcal{E}^{c}_{t,k}, \*Z^t\*y_k \right \rangle\\ &\leq \left\| \mathcal{E}^{c}_{t,k}\right\|^2 +\left\| \*y_k\right\|^2 + 2\left \langle\mathcal{E}^{c}_{t,k}, \*Z^t\*y_k \right \rangle, \end{split} \end{equation*} where we used the non-expansiveness of $\*Z$ for the last inequality. Taking the expectation conditional on $\mathcal{F}^0_k$ on both sides of the preceding relation and applying Lemma \ref{lem:comm_error} yields, \begin{equation*} \label{eq:bound_xk} \begin{split} \mathbb{E}\left[\left\| \*x^t_k \right\|^2 \Big | \mathcal{F}^0_k \right] &\leq \frac{4n\sigma_c^2}{1-\beta^2} +\left\| \*y_k\right\|^2. \end{split} \end{equation*} Taking the total expectation on both sides of the relation above, applying (\ref{eq:bound_yk}) and the definitions of $D$, $\nu$ and $\kappa$ concludes this proof. \end{proof} We will now use the preceding lemma to prove that the distance between the local and the average iterates generated by S-NEAR-DGD$^t$ is bounded. This distance can be interpreted as a measure of consensus violation, with small values indicating small disagreement between nodes. \begin{lem} \textbf{(Bounded distance to average)} \label{lem:bounded_variance} Let $x_{i,k}^t$ and $y_{i,k}$ be the local iterates produced by S-NEAR-DGD$^t$ at node $i$ and iteration $k$ and let $\bar{x}_k^t:=\frac{1}{n}\sum_{i=1}^n x^t_{i,k}$ and $\bar{y}_k:=\frac{1}{n}\sum_{i=1}^n y_{i,k}$ denote the average iterates across all nodes.
Then the distance between the local and average iterates is bounded in expectation for all $i=1,...,n$ and $k=1,2,...$, namely, \begin{equation*} \begin{split} &\mathbb{E}\Big[\big\|x^t_{i,k} - \bar{x}^t_k \big\|^2\Big] \leq \mathbb{E}\left[\left\|\*x^t_k - \*M \*x^t_k \right\|^2\right] \leq \beta^{2t}D + \frac{\beta^{2t}(1+\kappa)^2n\sigma_g^2}{2L^2}+ \frac{2\beta^{2t}(1+\kappa)^2n\sigma_c^2}{\alpha^2\left(1-\beta^2\right) L^2} +\frac{4n\sigma_c^2}{1-\beta^2}, \end{split} \end{equation*} and \begin{equation*} \begin{split} &\mathbb{E} \Big[\big\|y_{i,k} - \bar{y}_k \big\|^2 \Big]\leq \mathbb{E}\left[ \left\|\*y_k - \*M \*y_k \right\|^2 \right]\leq D + \frac{(1+\kappa)^2n\sigma_g^2}{2L^2}+ \frac{2(1+\kappa)^2n\sigma_c^2}{\alpha^2\left(1-\beta^2\right) L^2}, \end{split} \end{equation*} where $\*M = \left( \frac{1_n 1_n^T}{n} \otimes I_p \right) \in \mathbb{R}^{np \times np}$ is the averaging matrix, constant $D$ is defined in Lemma~\ref{lem:bounded_iterates}, $\kappa=L/\mu$ is the condition number of Problem~\eqref{eq:consensus_prob}, $L=\max_i L_i$ and $\mu = \min_i \mu_i$. \end{lem} \begin{proof} Observing that $\sum_{i=1}^n \left\|x^t_{i,k}-\bar{x}^t_k \right\|^2 = \left\| \*x^t_k - \*M \*x^t_k \right\|^2$, we obtain, \begin{equation} \label{eq:lem_bounded_diff_x} \left\|x^t_{i,k}-\bar{x}^t_k \right\|^2 \leq \left\| \*x^t_k - \*M \*x^t_k \right\|^2,\quad i=1,...,n.
\end{equation} We can bound the right-hand side of (\ref{eq:lem_bounded_diff_x}) as \begin{equation} \label{eq:lem2_x1} \begin{split} \left\|\*x^t_k - \*M \*x^t_k\right\|^2 &= \left\|\mathcal{E}^{c}_{t,k} + \*Z^t\*y_k- \*M \*x^t_k - \*M\*y_k + \*M\*y_k \right\|^2\\ &=\left\|\mathcal{E}^{c}_{t,k} + \left(\*Z^t-\*M\right)\*y_k- \*M\*x^t_k +\*M \*Z^t\*y_k \right\|^2\\ &=\left\|\left(I-\*M\right)\mathcal{E}^{c}_{t,k} \right\|^2+\left\| \left(\*Z^t-\*M\right)\*y_k \right\|^2 + 2\left \langle\left(I-\*M\right)\mathcal{E}^{c}_{t,k} ,\left(\*Z^t-\*M\right)\*y_k \right \rangle\\ &\leq \left\|\mathcal{E}^{c}_{t,k} \right\|^2+\beta^{2t}\left\|\*y_k \right\|^2 + 2\left \langle\left(I-\*M\right)\mathcal{E}^{c}_{t,k} ,\left(\*Z^t-\*M\right)\*y_k \right \rangle, \end{split} \end{equation} where we applied the definition of the communication error $\mathcal{E}^c_{t,k}$ of Lemma~\ref{lem:comm_error} and added and subtracted $\*M\*y_k$ to obtain the first equality. We used the fact that $\*M \*Z^t = \*M$ to get the second equality. We derive the last inequality from the spectral properties of $\*Z^t = \*W^t \otimes I_p$ and $\*M = \left(\frac{1_n 1_n^T}{n}\right) \otimes I_p$; both $\*W^t$ and $\frac{1_n 1_n^T}{n}$ have a maximum eigenvalue at $1$ associated with the eigenvector $1_n$, so $1_n$ lies in the null space of $\*W^t - \frac{1_n 1_n^T}{n}$ and $\left\|\*Z^t-\*M\right\|=\left\|\*W^t-\frac{1_n 1_n^T}{n}\right\|=\beta^t$. Taking the expectation conditional on $\mathcal{F}^0_k$ on both sides of~(\ref{eq:lem2_x1}) and applying Lemma~\ref{lem:comm_error} yields, \begin{equation*} \begin{split} \mathbb{E}\left[ \left\|\*x^t_k - \*M \*x^t_k\right\|^2 \Big | \mathcal{F}^0_k \right] &\leq \frac{4n\sigma_c^2}{1-\beta^2} +\beta^{2t}\left\|\*y_k \right\|^2. \end{split} \end{equation*} Taking the total expectation on both sides and applying Lemma~\ref{lem:bounded_iterates} yields the first result of this lemma.
Similarly, the following inequality holds for the $\*y_k$ iterates, \begin{equation} \label{eq:lem_bounded_diff_y} \left\|y_{i,k}-\bar{y}_k \right\|^2 \leq \left\| \*y_k - \*M \*y_k \right\|^2,\quad i=1,...,n. \end{equation} For the right-hand side of (\ref{eq:lem_bounded_diff_y}), we have, \begin{equation*} \begin{split} \left\|\*y_k - \*M\*y_k \right\|^2 &= \left\| \left(I-\*M\right)\*y_k\right\|^2\leq \left\| \*y_k \right\|^2, \end{split} \end{equation*} where we have used the fact that $\left\|I-\*M\right\|=1$. Taking the total expectation on both sides and applying Lemma~\ref{lem:bounded_iterates} concludes this proof. \end{proof} The bounds established in Lemma~\ref{lem:bounded_variance} indicate that there are at least three factors preventing the local iterates produced by S-NEAR-DGD$^t$ from reaching consensus: errors related to network connectivity, represented by $\beta$, and errors caused by the inexact computation process and the noisy communication channel associated with the constants $\sigma_g$ and $\sigma_c$ respectively. Before presenting our main theorem, we state one more intermediate result on the distance of the average $\bar{y}_k$ iterates to the solution of Problem~(\ref{eq:consensus_prob}). \begin{lem} \textbf{(Bounded distance to minimum)} \label{lem:descent_avg_iter} Let $\bar{y}_k:=\frac{1}{n}\sum_{i=1}^n y_{i,k}$ denote the average of the local $y_{i,k}$ iterates generated by S-NEAR-DGD$^t$ under steplength $\alpha$ satisfying \begin{equation*} \alpha < \frac{2}{\mu_{\bar{f}}+L_{\bar{f}}}, \end{equation*} where $\mu_{\bar{f}}=\frac{1}{n}\sum_{i=1}^n\mu_i$ and $L_{\bar{f}}=\frac{1}{n}\sum_{i=1}^n L_i$. 
Then the following inequality holds for $k=1,2,...$ \begin{equation*} \label{eq:dist_y_bar_x_star} \begin{split} \mathbb{E}\left[ \left\| \bar{y}_{k+1} - x^\star\right\|^2 \Big | \mathcal{F}^t_k \right] &\leq \rho \left\| \bar{x}^t_{k} - x^\star\right\|^2 + \frac{\alpha^2\sigma_g^2}{n} + \frac{\alpha \rho L^2 \Delta_{\*x}}{n\gamma_{\bar{f}}}, \end{split} \end{equation*} where $x^\star = \arg\min_x f(x)$, $\rho=1-\alpha\gamma_{\bar{f}}$, $\gamma_{\bar{f}}=\frac{\mu_{\bar{f}}L_{\bar{f}}}{\mu_{\bar{f}}+L_{\bar{f}}}$, $L=\max_i L_i$, $\Delta_{\*x}=\left\|\*x^t_k - \*M \*x^t_k\right\|^2$ and $\*M = \left( \frac{1_n 1_n^T}{n} \otimes I_p \right) \in \mathbb{R}^{np \times np}$ is the averaging matrix. \end{lem} \begin{proof} Applying ($\ref{eq:near_dgd_y_avg}$) to the $(k+1)^{th}$ iteration we obtain, \begin{equation*} \begin{split} \bar{y}_{k+1} = \bar{x}_k^{t} - \alpha \bar{g}_k, \end{split} \end{equation*} where $\bar{g}_k = \frac{1}{n}\sum_{i=1}^n g_{i,k} = \frac{1}{n}\sum_{i=1}^n \left(\zeta_{i,k+1}+ \nabla f_i\left(x^t_{i,k}\right)\right)$. Let $h_k = \frac{1}{n}\sum_{i=1}^n \nabla f_i\left(x^t_{i,k}\right)$. Adding and subtracting $\alpha h_k$ to the right-hand side of the preceding relation and taking the square norm on both sides yields, \begin{equation*} \begin{split} \left\|\bar{y}_{k+1} - x^\star\right\|^2 &= \left\|\bar{x}^t_{k} -\alpha h_k - x^\star\right\|^2 + \alpha^2 \left\| h_k -\bar{g}_k \right\|^2 + 2\alpha \left \langle \bar{x}^t_{k} -\alpha h_k - x^\star, h_k -\bar{g}_k \right \rangle\\ &= \left\|\bar{x}^t_{k} -\alpha h_k - x^\star\right\|^2 + \frac{\alpha^2}{n^2} \left\| \sum_{i=1}^n \zeta_{i,k+1} \right\|^2 - \frac{2\alpha }{n}\sum_{i=1}^n\left \langle \bar{x}^t_{k} -\alpha h_k - x^\star, \zeta_{i,k+1} \right \rangle. \end{split} \end{equation*} Moreover, let $\tilde{\rho}=\frac{\alpha\gamma_{\bar{f}}}{1-2\alpha\gamma_{\bar{f}}} > 0$.
We can bound the first term on the right-hand side of the relation above as, \begin{equation*} \begin{split} \Big\|\bar{x}^t_{k} -\alpha h_k - x^\star\Big\|^2 &\leq \left(1 + \tilde{\rho}\right) \left\|\bar{x}^t_{k} -\alpha\nabla \bar{f}\left(\bar{x}^t_k\right) - x^\star\right\|^2 + \alpha^2 \left(1+\tilde{\rho}^{-1}\right) \left\|h_k -\nabla \bar{f}\left(\bar{x}^t_k\right)\right\|^2\\ &\leq \left(1 -\alpha \gamma_{\bar{f}}\right) \left\|\bar{x}^t_{k} - x^\star\right\|^2 + \alpha^2 \left(1+\tilde{\rho}^{-1}\right) \left\|h_k - \nabla \bar{f}\left(\bar{x}^t_k\right)\right\|^2, \end{split} \end{equation*} where we added and subtracted the quantity $\alpha \nabla \bar{f}\left(\bar{x}^t_k\right)$ and used the relation $\pm 2\langle a,b\rangle \leq c\|a\|^2+c^{-1}\|b\|^2$ that holds for any two vectors $a,b$ and positive constant $c$ to obtain the first inequality. We derive the second inequality after combining Lemmas~\ref{lem:global_L_mu} and~\ref{lem:descent_f} that hold due to $\alpha < \frac{2}{\mu_{\bar{f}}+L_{\bar{f}}}$ and $x^\star=\arg \min_x \bar{f}(x)$. We notice that $\mathbb{E}\left[ \zeta_{i,k+1} \big| \mathcal{F}^t_k\right]=\mathbf{0}$ and that $\mathbb{E}\left[ \left\|\sum_{i=1}^n \zeta_{i,k+1}\right\|^2 \big|\mathcal{F}^t_k\right] = \mathbb{E}\left[\sum_{i=1}^n \left\|\zeta_{i,k+1}\right\|^2\big| \mathcal{F}^t_k\right] +\mathbb{E}\left[ \sum_{i_1\neq i_2} \left\langle \zeta_{i_1,k+1},\zeta_{i_2,k+1} \right\rangle \big| \mathcal{F}^t_k\right] \leq n\sigma_g^2$ due to Assumption~\ref{assum:tg_bound} and the linearity of expectation.
Combining all of the preceding relations and taking the expectation conditional on $\mathcal{F}^t_k$, yields, \begin{equation} \label{eq:lem_bounded_min100} \begin{split} \mathbb{E} \Big[\big\| &\bar{y}_{k+1} - x^\star \big\|^2 \big| \mathcal{F}^t_k \Big] \leq \left(1-\alpha \gamma_{\bar{f}}\right)\left\|\bar{x}^t_k - x^\star\right\|^2+ \alpha^2 \left(1+\tilde{\rho}^{-1}\right) \left\|h_k - \nabla \bar{f}\left(\bar{x}^t_k\right)\right\|^2+ \frac{\alpha^2 \sigma_g^2}{n}. \end{split} \end{equation} Finally, for any set of vectors $v_i \in \mathbb{R}^p$, $i=1,...,n$ we have $ \left\|\sum_{i=1}^n v_i \right\|^2 = \sum_{h=1}^p \left(\sum_{i=1}^n \left[v_i \right]_h \right)^2 \leq n \sum_{h=1}^p \sum_{i=1}^n \left[v_i \right]_h^2 = n \sum_{i=1}^n \left\|v_i\right\|^2,$ where we used the fact that $\pm 2 a b \leq a^2 + b^2$ for any pair of scalars $a,b$ to get the first inequality and reversed the order of summation to get the last equality. We can use this result to obtain, \begin{equation*} \label{eq:lem_bounded_min103} \begin{split} \Big\|h_k - \nabla \bar{f}\left(\bar{x}^t_k\right)\Big\|^2 &=\frac{1}{n^2}\left\|\sum_{i=1}^n \left(\nabla f_i \left(x^t_{i,k}\right)-\nabla f_i\left(\bar{x}^t_k\right)\right)\right\|^2\\ &\leq \frac{n}{n^2} \sum_{i=1}^n \left\|\nabla f_i \left(x^t_{i,k}\right)-\nabla f_i\left(\bar{x}^t_k\right)\right\|^2\\ & \leq \frac{L^2 }{n} \sum_{i=1}^n \left\|x^t_{i,k}-\bar{x}^t_k\right\|^2 \\ &= \frac{L^2}{n} \left\|\*x^t_k - \*M \*x^t_k\right\|^2, \end{split} \end{equation*} where we used Assumption~\ref{assum:lip} to get the second inequality. Substituting this last bound into (\ref{eq:lem_bounded_min100}), observing $1+\tilde{\rho}^{-1}=\left(1-\alpha\gamma_{\bar{f}}\right)/\alpha\gamma_{\bar{f}}$ and applying the definition of $\rho$ yields the final result. \end{proof} We have now obtained all necessary results to prove the convergence of S-NEAR-DGD$^t$ to a neighborhood of the optimal solution in the next theorem.
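The averaging inequality $\left\|\sum_{i=1}^n v_i \right\|^2 \leq n \sum_{i=1}^n \left\|v_i\right\|^2$ used in the proof above is easy to sanity-check numerically; the following sketch (an illustration only, not part of the analysis) verifies it on random vectors and checks that equality holds when all vectors coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 5
v = rng.standard_normal((n, p))  # n vectors in R^p, stacked as rows

lhs = np.linalg.norm(v.sum(axis=0)) ** 2           # ||sum_i v_i||^2
rhs = n * (np.linalg.norm(v, axis=1) ** 2).sum()   # n * sum_i ||v_i||^2

# Tightness: when all v_i are equal, the inequality holds with equality.
w = np.ones((4, 3))
tight_lhs = np.linalg.norm(w.sum(axis=0)) ** 2
tight_rhs = 4 * (np.linalg.norm(w, axis=1) ** 2).sum()
```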
\begin{thm} \textbf{(Convergence of S-NEAR-DGD$^t$)} \label{thm:bounded_dist_min} Let $\bar{x}^t_k:=\frac{1}{n}\sum_{i=1}^n x^t_{i,k}$ denote the average of the local $x^t_{i,k}$ iterates generated by S-NEAR-DGD$^t$ from initial point $\*y_0$ and let the steplength $\alpha$ satisfy, \begin{equation*} \alpha < \min \left \{\frac{2}{\mu+L},\frac{2}{\mu_{\bar{f}}+L_{\bar{f}}} \right\}, \end{equation*} where $\mu=\min_i \mu_i$, $L = \max_i L_i$, $\mu_{\bar{f}}=\frac{1}{n}\sum_{i=1}^n\mu_i$ and $L_{\bar{f}}=\frac{1}{n}\sum_{i=1}^n L_i$. Then the distance of $\bar{x}^t_k$ to the optimal solution $x^\star$ of Problem~(\ref{eq:consensus_prob}) is bounded in expectation for $k=1,2,...$, \begin{equation} \label{eq:thm_xk1} \begin{split} &\mathbb{E}\left[\left\|\bar{x}^t_{k+1} - x^\star\right\|^2 \right] \leq \rho \mathbb{E}\left[ \left\| \bar{x}^t_{k} - x^\star\right\|^2\right] + \frac{\alpha\beta^{2t}\rho L^2D}{n\gamma_{\bar{f}}} + \frac{\alpha^2\sigma_g^2 }{n} + \frac{\alpha\beta^{2t}\left(1+\kappa\right)^2\rho\sigma_g^2}{2\gamma_{\bar{f}}} + \frac{4\alpha\rho L^2\sigma_c^2}{\left(1-\beta^2\right)\gamma_{\bar{f}}} + \frac{2\beta^{2t}\left(1+\kappa\right)^{2}\rho \sigma_c^2}{\alpha\left(1-\beta^2\right)\gamma_{\bar{f}}}, \end{split} \end{equation} and \begin{equation} \label{eq:thm_xk2} \begin{split} &\mathbb{E}\left[\left\|\bar{x}^t_{k} - x^\star\right\|^2 \right] \leq \rho^k \mathbb{E}\left[ \left\| \bar{x}_0 - x^\star\right\|^2\right] + \frac{\beta^{2t}\rho L^2D}{n\gamma_{\bar{f}}^2} + \frac{\alpha \sigma_g^2 }{n\gamma_{\bar{f}}} + \frac{\beta^{2t} \left(1+\kappa \right)^2\rho\sigma_g^2}{2\gamma_{\bar{f}}^2}+ \frac{4\rho L^2\sigma_c^2}{\left(1-\beta^2\right)\gamma_{\bar{f}}^2} + \frac{2\beta^{2t}\left(1+\kappa\right)^2\rho \sigma_c^2}{\alpha^2\left(1-\beta^2\right)\gamma_{\bar{f}}^2}, \end{split} \end{equation} where $\bar{x}_0=\frac{1}{n}\sum_{i=1}^n y_{i,0}$, $\rho=1-\alpha\gamma_{\bar{f}}$, $\gamma_{\bar{f}}=\frac{\mu_{\bar{f}}L_{\bar{f}}}{\mu_{\bar{f}}+L_{\bar{f}}}$,
$\kappa=L/\mu$ is the condition number of Problem~\eqref{eq:consensus_prob} and the constant $D$ is defined in Lemma~\ref{lem:bounded_iterates}. \end{thm} \begin{proof} Applying ($\ref{eq:near_dgd_x_avg}$) to the $(k+1)^{th}$ iteration yields, \begin{equation*} \begin{split} \bar{x}^j_{k+1} &= \bar{x}^{j-1}_{k+1}\text{, }j=1,...,t, \end{split} \end{equation*} which in turn implies that $\bar{x}^j_{k+1}=\bar{y}_{k+1}$ for $j=1,...,t$. Hence, the relation $\left\|\bar{x}^t_{k+1} - x^\star\right\|^2 =\left\| \bar{y}_{k+1}- x^\star\right\|^2$ holds. Taking the expectation conditional on $\mathcal{F}^t_{k}$ on both sides of this equality and applying Lemma~\ref{lem:descent_avg_iter} yields, \begin{equation*} \begin{split} \mathbb{E}\left[\left\|\bar{x}^t_{k+1} - x^\star\right\|^2 \Big| \mathcal{F}^t_{k}\right] &\leq \rho \left\| \bar{x}^t_{k} - x^\star\right\|^2 + \frac{\alpha^2\sigma_g^2}{n}+ \frac{\alpha \rho L^2 \Delta_{\*x}}{n\gamma_{\bar{f}}}, \end{split} \end{equation*} where $\Delta_{\*x}=\left\|\*x^t_k - \*M \*x^t_k\right\|^2$. Taking the total expectation on both sides of the relation above and applying Lemma~\ref{lem:bounded_variance} yields, \begin{equation} \label{eq:lem_bounded_min6} \begin{split} &\mathbb{E}\left[\left\|\bar{x}^t_{k+1} - x^\star\right\|^2 \right] \leq \rho \mathbb{E}\left[ \left\| \bar{x}^t_{k} - x^\star\right\|^2\right] + \frac{\alpha\beta^{2t}\rho L^2D}{n\gamma_{\bar{f}}} + \frac{\alpha^2 \sigma_g^2 }{n} + \frac{\alpha\beta^{2t}(1+\kappa)^{2}\rho \sigma_g^2}{2\gamma_{\bar{f}}} + \frac{4\alpha\rho L^2\sigma_c^2}{\gamma_{\bar{f}}\left(1-\beta^2\right)} + \frac{2\beta^{2t}(1+\kappa)^{2}\rho \sigma_c^2}{\alpha\left(1-\beta^2\right)\gamma_{\bar{f}}}.
\end{split} \end{equation} Since $\rho<1$, applying \eqref{eq:lem_bounded_min6} recursively and using the bound $\sum_{h=0}^{k-1} \rho^h \leq (1-\rho)^{-1}$ yields, \begin{equation*} \begin{split} \mathbb{E}\left[\left\|\bar{x}^t_{k} - x^\star\right\|^2 \right] &\leq \rho^k \mathbb{E}\left[ \left\| \bar{x}_0- x^\star\right\|^2\right] + \frac{\alpha\beta^{2t}\rho L^2D}{n\gamma_{\bar{f}}(1-\rho)} + \frac{\alpha^2 \sigma_g^2 }{n(1-\rho)} \\ &\quad + \frac{\alpha\beta^{2t}(1+\kappa)^{2}\rho \sigma_g^2}{2\gamma_{\bar{f}}(1-\rho)} + \frac{4\alpha\rho L^2\sigma_c^2}{\gamma_{\bar{f}}\left(1-\beta^2\right)(1-\rho)} + \frac{2\beta^{2t}(1+\kappa)^{2}\rho \sigma_c^2}{\alpha\left(1-\beta^2\right)\gamma_{\bar{f}}(1-\rho)}. \end{split} \end{equation*} Applying the definition of $\rho$ completes the proof. \end{proof} Theorem~\ref{thm:bounded_dist_min} indicates that the average iterates of S-NEAR-DGD$^t$ converge in expectation to a neighborhood of the optimal solution $x^\star$ of Problem~(\ref{eq:prob_orig}). We have quantified the dependence of this neighborhood on the connectivity of the network and the errors due to imperfect computation and communication through the terms containing the quantities $\beta$, $\sigma_g$ and $\sigma_c$, respectively. We observe that the $2^{nd}$ and $3^{rd}$ error terms in~\eqref{eq:thm_xk2} scale favorably with the number of nodes $n$, yielding a variance reduction effect proportional to network size. Our bounds indicate that higher values of the steplength $\alpha$ yield a smaller contraction factor $\rho$ and thus a faster convergence rate. On the other hand, $\alpha$ has a mixed effect on the size of the error neighborhood; the $2^{nd}$ term in~\eqref{eq:thm_xk2} is associated with inexact computation and increases with $\alpha$, while the last term in~\eqref{eq:thm_xk2} is associated with noisy communication and decreases with $\alpha$.
The size of the error neighborhood increases with the condition number $\kappa$ as expected, while the dependence on the algorithm parameter $t$ indicates that performing additional consensus steps mitigates the error due to network connectivity and the errors induced by the operators $\mathcal{T}_g\left[\cdot\right]$ and $\mathcal{T}_c\left[\cdot\right]$. In the next corollary, we will use Theorem~\ref{thm:bounded_dist_min} and Lemmas~\ref{lem:descent_avg_iter} and~\ref{lem:bounded_iterates} to show that the local iterates produced by S-NEAR-DGD$^t$ also have bounded distance to the solution of Problem~(\ref{eq:prob_orig}). \begin{cor} \textbf{(Convergence of local iterates of S-NEAR-DGD$^t$)} \label{cor:local_dist} Let $x^t_{i,k}$ and $y_{i,k}$ be the local iterates generated by S-NEAR-DGD$^t$ at node $i$ and iteration $k$ from initial point $\*x_0=\*y_0=[y_{1,0};...;y_{n,0}] \in \mathbb{R}^{np}$ and let the steplength $\alpha$ satisfy \begin{equation*} \alpha < \min \left \{\frac{2}{\mu+L},\frac{2}{\mu_{\bar{f}}+L_{\bar{f}}} \right\}, \end{equation*} where $\mu=\min_i\mu_i$, $L=\max_i L_i$, $\mu_{\bar{f}}=\frac{1}{n}\sum_{i=1}^n\mu_i$ and $L_{\bar{f}}=\frac{1}{n}\sum_{i=1}^n L_i$. Then for $i=1,...,n$ and $k\geq1$ the distance of the local iterates to the solution of Problem~(\ref{eq:consensus_prob}) is bounded, i.e.
\begin{equation*} \begin{split} \mathbb{E}\Big[\big \|x^t_{i,k} - x^\star \big\|^2 \Big] &\leq 2\rho^k\mathbb{E} \left[\left\| \bar{x}_0 - x^\star\right\|^2\right] + 2\beta^{2t}\left(1+\frac{C}{n}\right)D + \frac{2\alpha\sigma_g^2}{n\gamma_{\bar{f}}}\\ &\quad+ \frac{\beta^{2t}\left(1+\kappa\right)^2\left(n+C\right)\sigma_g^2}{L^2} + \frac{8(n+C)\sigma_c^2}{1-\beta^2}+ \frac{4\beta^{2t}(1+\kappa)^2(n+C)\sigma_c^2}{\alpha^2(1-\beta^2)L^2}, \end{split} \end{equation*} and, \begin{equation*} \begin{split} \mathbb{E}\Big[\big \|y_{i,k} - x^\star \big\|^2 \Big] &\leq 2\rho^k \mathbb{E}\left[ \left\| \bar{x}_0 - x^\star\right\|^2\right] +2\left(1 + \frac{\beta^{2t}C}{n} \right)D + \frac{2\alpha \sigma_g^2 }{n\gamma_{\bar{f}}} \\ &\quad+ \frac{(1+\kappa)^2\left(n + \beta^{2t}C \right)\sigma_g^2 }{L^2} + \frac{8 C \sigma_c^2}{1-\beta^2} + \frac{ 4(1+\kappa)^2\left(n + \beta^{2t}C \right)\sigma_c^2}{\alpha^2(1-\beta^2) L^2}, \end{split} \end{equation*} where $C =\frac{ \rho L^2}{\gamma_{\bar{f}}^2}$, $\rho=1-\alpha\gamma_{\bar{f}}$, $\gamma_{\bar{f}}=\frac{\mu_{\bar{f}}L_{\bar{f}}}{\mu_{\bar{f}}+L_{\bar{f}}}$, $\bar{x}_0=\frac{1}{n}\sum_{i=1}^n y_{i,0}$ and the constant $D$ is defined in Lemma~\ref{lem:bounded_iterates}. \end{cor} \begin{proof} For all $i \in \{1,...n\}$ and $k\geq1$, the following relation holds for the $x^t_{i,k}$ iterates, \begin{equation} \label{eq:lem_dist_min_local1} \begin{split} \left \|x^t_{i,k} - x^\star \right\|^2 &= \left \|x^t_{i,k} -\bar{x}^t_k + \bar{x}^t_k - x^\star \right\|^2 \\ & \leq 2\left \|x^t_{i,k} -\bar{x}^t_k\right\|^2 +2\left\| \bar{x}^t_k - x^\star \right\|^2, \end{split} \end{equation} where we added and subtracted $\bar{x}^t_k$ to get the first equality. Taking the total expectation on both sides of (\ref{eq:lem_dist_min_local1}) and applying Lemma~\ref{lem:bounded_variance}, Theorem~\ref{thm:bounded_dist_min} and the definition of $C$ yields the first result of this corollary.
Similarly, for the $y_{i,k}$ local iterates we have, \begin{equation} \label{eq:lem_dist_min_local3} \begin{split} \left \|y_{i,k} - x^\star \right\|^2 &= \left \|y_{i,k}-\bar{y}_{k} + \bar{y}_{k} - x^\star \right\|^2 \\ &\leq 2 \left\|y_{i,k}-\bar{y}_{k}\right\|^2 +2\left\| \bar{y}_{k} - x^\star \right\|^2\\ & = 2 \left\|y_{i,k}-\bar{y}_{k}\right\|^2 +2\left\| \bar{x}^t_{k} - x^\star \right\|^2, \end{split} \end{equation} where we derived the first equality by adding and subtracting $\bar{y}_{k}$ and used~\eqref{eq:near_dgd_x_avg} to obtain the last equality. Taking the total expectation on both sides of (\ref{eq:lem_dist_min_local3}) and applying Theorem~\ref{thm:bounded_dist_min}, Lemma~\ref{lem:bounded_variance} and the definition of $C$ completes the proof. \end{proof} Corollary~\ref{cor:local_dist} concludes our analysis of the S-NEAR-DGD$^t$ method. For the remainder of this section, we derive the convergence properties of S-NEAR-DGD$^+$, i.e. $t(k)=k$ for $k\geq1$ in~\eqref{eq:near_dgd_y} and~\eqref{eq:near_dgd_x}. \begin{thm}\textbf{(Convergence of S-NEAR-DGD$^+$)} \label{thm:near_dgd_plus} Consider the S-NEAR-DGD$^+$ method, i.e. $t(k)=k$ for $k\geq 1$. Let $\bar{x}^k_k=\frac{1}{n}\sum_{i=1}^n x^k_{i,k}$ be the average iterates produced by S-NEAR-DGD$^+$ and let the steplength $\alpha$ satisfy \begin{equation*} \alpha < \min \left \{\frac{2}{\mu+L},\frac{2}{\mu_{\bar{f}}+L_{\bar{f}}} \right\}.
\end{equation*} Then the distance of $\bar{x}^k_k$ to $x^\star$ is bounded for $k=1,2,...$, namely \begin{equation*} \begin{split} &\mathbb{E}\Big[\big\|\bar{x}^k_k - x^\star \big\|^2\Big] \leq \rho^k \mathbb{E} \left[ \left\|\bar{x}_0 - x^\star \right\|^2 \right]+ \frac{\eta \theta^k\alpha\rho L^2D}{n\gamma_{\bar{f}}} + \frac{\alpha\sigma_g^2}{n\gamma_{\bar{f}}}+ \frac{\eta\theta^k\alpha\left(1+\kappa\right)^{2} \rho \sigma_g^2}{2\gamma_{\bar{f}}} + \frac{4\rho L^2 \sigma_c^2}{\left(1-\beta^2\right)\gamma_{\bar{f}}^2} + \frac{2\eta\theta^k\left(1+\kappa\right)^{2} \rho \sigma_c^2}{\alpha\left(1-\beta^2\right)\gamma_{\bar{f}}}, \end{split} \end{equation*} where $ \eta = \left|\beta^2-\rho\right|^{-1}$ and $\theta = \max\left\{\rho,\beta^2\right\}$. \end{thm} \begin{proof} Replacing $t$ with $k$ in (\ref{eq:thm_xk1}) in Theorem~\ref{thm:bounded_dist_min} yields, \begin{equation*} \begin{split} \mathbb{E}\Big[&\big\|\bar{x}^{k+1}_{k+1} - x^\star\big\|^2 \Big] \leq \rho \mathbb{E}\left[\left\| \bar{x}^k_{k} - x^\star\right\|^2\right] + \frac{\alpha\beta^{2k}\rho L^2D}{n\gamma_{\bar{f}}}+\frac{\alpha^2\sigma_g^2}{n}+ \frac{\alpha\beta^{2k}\left(1+\kappa\right)^{2}\rho\sigma_g^2}{2\gamma_{\bar{f}}} + \frac{4\alpha\rho L^2\sigma_c^2}{\left(1-\beta^2\right)\gamma_{\bar{f}}} + \frac{2\beta^{2k}\left(1+\kappa\right)^{2}\rho \sigma_c^2}{\alpha\left(1-\beta^2\right)\gamma_{\bar{f}}}. 
\end{split} \end{equation*} Applying this bound recursively for iterations $1, 2,\ldots, k$, we obtain, \begin{equation} \label{eq:thm_near_dgd_p_1} \begin{split} \mathbb{E}\Big[&\big\|\bar{x}^k_k - x^\star \big\|^2\Big] \leq \rho^k \mathbb{E} \left[ \left\|\bar{x}_0 - x^\star \right\|^2 \right] + S_1 \left(\frac{\alpha\rho L^2D}{n\gamma_{\bar{f}}} + \frac{\alpha\left(1+\kappa\right)^2\rho \sigma_g^2}{2\gamma_{\bar{f}}} +\frac{2\left(1+\kappa\right)^2\rho \sigma_c^2}{\alpha\left(1-\beta^2\right)\gamma_{\bar{f}}}\right) + S_2 \left(\frac{\alpha^2\sigma_g^2}{n} + \frac{4\alpha\rho L^2\sigma_c^2}{\left(1-\beta^2\right)\gamma_{\bar{f}}} \right), \end{split} \end{equation} where $S_1 = \sum_{j=0}^{k-1} \rho^j \beta^{2(k-1-j)}$ and $S_2 = \sum_{j=0}^{k-1}\rho^j$. Let $\psi = \frac{\rho}{\beta^2}$. Then, for $\beta^2 \neq \rho$, $S_1 = \beta^{2(k-1)}\sum_{j=0}^{k-1} \psi^j= \beta^{2(k-1)}\frac{1-\psi^k}{1-\psi}= \frac{\beta^{2k}-\rho^k}{\beta^2 - \rho}\leq \eta \theta^k$. Applying this result and the bound $S_2 \leq \frac{1}{1-\rho} = \frac{1}{\alpha \gamma_{\bar{f}}}$ to (\ref{eq:thm_near_dgd_p_1}) yields the final result. \end{proof} Theorem~\ref{thm:near_dgd_plus} indicates that S-NEAR-DGD$^+$ converges with geometric rate $\theta=\max \left\{ \rho , \beta^2 \right \}$ to a neighborhood of the optimal solution $x^\star$ of Problem~\eqref{eq:prob_orig} with size bounded by \begin{equation} \label{eq:plus_err_size} \limsup_{k\rightarrow \infty} \mathbb{E}\left[\left\|\bar{x}^k_k - x^\star \right\|^2\right] \leq \frac{\alpha\sigma_g^2}{n\gamma_{\bar{f}}} + \frac{4\rho L^2 \sigma_c^2}{\left(1-\beta^2\right)\gamma_{\bar{f}}^2}. \end{equation} The first error term on the right-hand side of Eq.~\eqref{eq:plus_err_size} depends on the variance of the gradient error $\sigma_g$ and is inversely proportional to the network size $n$. This scaling with $n$, which has a similar effect to centralized mini-batching, is a trait that our method shares with a number of distributed stochastic gradient algorithms.
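The geometric-sum bound $S_1 = \frac{\beta^{2k}-\rho^k}{\beta^2-\rho} \leq \eta\theta^k$ used in the proof of Theorem~\ref{thm:near_dgd_plus} can be verified numerically for sample parameter values (a quick sanity check, not part of the analysis):

```python
def s1(rho, beta, k):
    """Direct evaluation of S_1 = sum_{j=0}^{k-1} rho^j * beta^(2(k-1-j))."""
    return sum(rho ** j * beta ** (2 * (k - 1 - j)) for j in range(k))

# Example values with rho != beta^2 (required for the closed form).
rho, beta = 0.9, 0.7
eta = 1.0 / abs(beta ** 2 - rho)      # eta = |beta^2 - rho|^{-1}
theta = max(rho, beta ** 2)           # theta = max{rho, beta^2}

# The closed form and the bound S_1 <= eta * theta^k hold for every k.
closed_form = lambda k: (beta ** (2 * k) - rho ** k) / (beta ** 2 - rho)
```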
The last error term depends on the variance of the communication error $\sigma_c$ and increases with $\beta$, implying that badly connected networks accumulate more communication error over time. In comparison, Eq.~\eqref{eq:thm_xk2} of Theorem~\ref{thm:bounded_dist_min} yields, \begin{equation} \label{eq:t_err_size} \begin{split} &\limsup_{k\rightarrow \infty} \mathbb{E}\left[\left\|\bar{x}^t_k - x^\star \right\|^2\right] \leq \frac{\alpha \sigma_g^2 }{n\gamma_{\bar{f}}} + \frac{4\rho L^2\sigma_c^2}{\left(1-\beta^2\right)\gamma_{\bar{f}}^2}+ \frac{\beta^{2t}\rho}{\gamma_{\bar{f}}^2}\left(\frac{L^2 D}{n}+ \frac{(1+\kappa)^2 \sigma_g^2}{2}+\frac{2(1+\kappa)^2\sigma_c^2}{\alpha(1-\beta^2)}\right). \end{split} \end{equation} Comparing~\eqref{eq:plus_err_size} and~\eqref{eq:t_err_size}, we observe that~\eqref{eq:t_err_size} contains three additional error terms, all of which depend directly on the algorithm parameter $t$. Our results imply that S-NEAR-DGD$^t$ generally converges to a worse error neighborhood than S-NEAR-DGD$^+$, and approaches the error neighborhood of S-NEAR-DGD$^+$ as $t\rightarrow \infty$. \section{Numerical results} \label{sec:numerical} \subsection{Comparison to existing algorithms} To quantify the empirical performance of S-NEAR-DGD, we consider the following regularized logistic regression problem to classify the mushrooms dataset \cite{mushrooms}, \begin{equation*} \min_{x \in \mathbb{R}^p} f(x) = \frac{1}{M}\sum_{s=1}^M \log(1+e^{-b_s \langle A_s , x \rangle}) + \frac{1}{M}\|x\|^2_2, \end{equation*} where $M$ is the total number of samples ($M=8124$), $A \in \mathbb{R}^{M \times p}$ is a feature matrix, $p$ is the problem dimension ($p=118$) and $b \in \{-1,1\}^{M}$ is a vector of labels.
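For concreteness, the regularized logistic objective and the mini-batch gradients used in the experiments can be sketched as follows; synthetic data stands in for the mushrooms feature matrix, and the dimensions and names here are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
M, p = 200, 8                       # synthetic stand-ins for M=8124, p=118
A = rng.standard_normal((M, p))     # feature matrix
b = rng.choice([-1.0, 1.0], M)      # labels in {-1, +1}

def f(x, A=A, b=b):
    """f(x) = (1/M) sum_s log(1 + exp(-b_s <A_s, x>)) + (1/M)||x||_2^2."""
    z = -b * (A @ x)
    return np.mean(np.logaddexp(0.0, z)) + np.dot(x, x) / len(b)

def minibatch_grad(x, batch=16, A=A, b=b, rng=rng):
    """Stochastic gradient g_{i,k}: sample a batch of size B with replacement."""
    idx = rng.integers(0, len(b), size=batch)
    As, bs = A[idx], b[idx]
    sig = 1.0 / (1.0 + np.exp(bs * (As @ x)))              # per-sample weight
    return -(As * (bs * sig)[:, None]).mean(axis=0) + 2.0 * x / len(b)
```

Note that $f(0) = \log 2$ regardless of the data, which provides a simple correctness check.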
To solve this problem in a decentralized manner, we evenly distribute the samples among $n$ nodes and assign to node $i \in \{1,...,n\}$ the private function $f_i(x) = \frac{1}{|S_i|}\sum_{s \in S_i} \log(1+e^{-b_s \langle A_s , x \rangle}) + \frac{1}{M}\|x\|^2_2$, where $S_i$ is the set of sample indices accessible to node $i$. For the network topology we selected a connected, random network of $n=14$ nodes generated with the Erd\H{o}s-R\'{e}nyi model (edge probability $0.5$). To construct the stochastic gradient approximations $g_{i,k-1}=\mathcal{T}_g\left[\nabla f_i \left(x^{t(k-1)}_{i,k-1}\right)\right]$, each node randomly samples with replacement a batch of size $B=16$ from its local distribution and computes a mini-batch gradient. To simulate the inexact communication operator $\mathcal{T}_c \left[\cdot\right]$ we implemented the probabilistic quantizer of Example~\ref{ex:prob_q}. Moreover, to test the effects of different approaches to handling the communication noise/quantization error, we implemented the three schemes listed in Table~\ref{tab:consensus_variants}. Specifically, variant~Q.1 is the scheme S-NEAR-DGD uses in step \eqref{eq:near_dgd_x}: it performs a consensus step on the quantized noisy variables $q_{i,k}^j$ and then adds the error correction term $\left(x^{j-1}_{i,k}-q^j_{i,k}\right)$. Variant~Q.2 considers a more na\"{\i}ve approach, where a simple weighted average of the noisy variables $q_{i,k}$ is calculated without error correction. Finally, in variant~Q.3 we assume that node $i$ either does not have access to its local quantized variable $q_{i,k}$ or prefers to use the original quantity $x^{j-1}_{i,k}$ whenever possible and thus computes the weighted average using its local unquantized variable and quantized versions from its neighbors. For algorithms that perform the consensus step on gradients instead of decision variables, we implemented similar schemes.
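Example~\ref{ex:prob_q} is not reproduced here; one standard probabilistic quantizer consistent with our setting is unbiased stochastic rounding onto a grid of spacing $1/\Delta$, so that larger $\Delta$ means finer quantization. A minimal sketch (the grid construction is an assumption, not the paper's exact operator):

```python
import numpy as np

def prob_quantize(x, delta, rng):
    """Unbiased stochastic rounding onto a grid of spacing 1/delta.

    E[q(x)] = x and |q(x) - x| < 1/delta elementwise, so the quantization
    error has zero mean and bounded variance, as required of T_c.
    """
    scaled = np.asarray(x, dtype=float) * delta
    low = np.floor(scaled)
    frac = scaled - low                      # P(round up) = fractional part
    up = rng.random(scaled.shape) < frac
    return (low + up) / delta

rng = np.random.default_rng(2)
x = rng.standard_normal(5)
q = prob_quantize(x, delta=10, rng=rng)      # one quantized transmission
```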
\begin{table}[ht] \begin{center} \begin{tabular}{ c c } \hline \textbf{Variant name} & \textbf{Consensus update}\\ \hline Q.1 & $x^j_{i,k} \leftarrow \sum_{l=1}^n \left(w_{il} q^{j}_{l,k} \right) + \left( x^{j-1}_{i,k} - q^j_{i,k}\right)$ \\ Q.2 & $x^j_{i,k} \leftarrow \sum_{l=1}^n \left(w_{il} q_{l,k}^j\right)$ \\ Q.3 & $x^j_{i,k} \leftarrow w_{ii}x^{j-1}_{i,k} + \sum_{l \in \mathcal{N}_i}\left(w_{il} q^j_{l,k}\right)$ \\ \hline \end{tabular} \caption{Quantized consensus step variations} \label{tab:consensus_variants} \end{center} \end{table} We compared the S-NEAR-DGD$^t$ method with $t=2$ and $t=5$, to versions of DGD \cite{NedicSubgradientConsensus,sundharram_distributed_2010}, EXTRA~\cite{extra} and DIGing~\cite{diging} with noisy gradients and communication. All methods use the same mini-batch gradients at every iteration and exchange variables that are quantized with the same stochastic protocol. We implemented the consensus step variations Q.1, Q.2 and Q.3 for all methods (we note that combining DIGing with Q.1 and using the true local gradients recovers the iterates of Q-NEXT~\cite{lee2018finite}. However, the authors of~\cite{lee2018finite} accompany their method with a dynamic quantizer that we did not implement for our numerical experiments). All algorithms shared the same steplength ($\alpha=1$)\footnote{Smaller steplengths yield data trends like those shown in Fig.~\ref{fig:all_methods}.}. In Figure~\ref{fig:all_methods}, we plot the squared error $\|\bar{x}_k - x^\star\|^2$ versus the number of algorithm iterations (or gradient evaluations) for quantization parameter values $\Delta = 10$~(left) and $\Delta = 10^5$~(right). The most important takeaway from Fig.~\ref{fig:all_methods} is that the two gradient tracking (GT) methods, namely EXTRA and DIGing, diverge without error correction (variants Q.2 and Q.3).
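In matrix form, one round of the three updates in Table~\ref{tab:consensus_variants} can be sketched as follows, with $W$ the mixing matrix, the rows of $X$ the local iterates $x^{j-1}_{i,k}$, and the rows of $Q$ their quantized copies (an illustrative sketch only):

```python
import numpy as np

def consensus_variants(W, X, Q):
    """One consensus round for the three schemes of Table tab:consensus_variants."""
    v1 = W @ Q + (X - Q)                         # Q.1: consensus on Q + error correction
    v2 = W @ Q                                   # Q.2: plain weighted average of Q
    v3 = W @ Q + np.diag(np.diag(W)) @ (X - Q)   # Q.3: own unquantized value, neighbors' Q
    return v1, v2, v3

# Demo: with a doubly stochastic W, Q.1 preserves the network average of X
# exactly, cancelling the mean quantization error; Q.2 and Q.3 do not.
W = np.full((3, 3), 1.0 / 3)
rng = np.random.default_rng(3)
X = rng.standard_normal((3, 2))
Q = X + 0.1 * rng.standard_normal((3, 2))        # noisy "quantized" copies
v1, v2, v3 = consensus_variants(W, X, Q)
```

This average-preservation property of Q.1 is one way to see why the error-corrected variant avoids accumulating quantization error over time.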
Conversely, purely primal methods such as DGD and S-NEAR-DGD appear to be more robust to quantization error when error correction is not incorporated. This observation aligns with recent findings indicating that GT methods are more sensitive to network topology (the value of $\beta$, on which the quantization error strongly depends) compared to primal methods~\cite{Yuan2020CanPM}. Our intuition suggests that variant Q.1 achieves better performance than Q.2 and Q.3 by cancelling out the average quantization error and preventing it from accumulating over time (for an analysis of S-NEAR-DGD under variant Q.2, we refer readers to Section~\ref{appendix:q2} in the Appendix). We notice negligible differences between variants Q.2 and Q.3 for the same algorithm. All methods achieved comparable accuracy when combined with Q.1 and the quantization was relatively fine (Fig. \ref{fig:all_methods}, right), with the exception of DGD, which converges farther from the optimal solution. For relatively coarse quantization (Fig.~\ref{fig:all_methods}, left), the two variants of S-NEAR-DGD and EXTRA perform marginally better than DGD and DIGing when combined with variant~Q.1. \begin{figure} \caption{Error plots, $\Delta=10$ (left) and $\Delta=10^5$ (right)} \label{fig:all_methods} \end{figure} \subsection{Scalability} To evaluate the scalability of our method and the effect of network type on convergence accuracy and speed, we tested $5$ network sizes ($n=5,10,15,20,25$) and $5$ different network types: $i$) complete, $ii$) random (connected Erd\H{o}s-R\'{e}nyi, edge probability $0.4$), $iii$) 4-cyclic (i.e. starting from a ring graph, every node is connected to $4$ immediate neighbors), $iv$) ring and $v$) path. We compared $2$ instances of the S-NEAR-DGD$^t$ method, $i$) $t=1$ and $ii$) $t=7$. We opted to exclude S-NEAR-DGD$^+$ from this set of experiments to facilitate result interpretability in terms of communication load.
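The connectivity parameter $\beta$ for the graph families above can be computed from any doubly stochastic mixing matrix $W$; the sketch below builds a few of the topologies with Metropolis-Hastings weights (one standard choice; the experiments' exact $W$ is not specified here) and takes $\beta$ as the second largest eigenvalue magnitude:

```python
import numpy as np

def metropolis_weights(adj):
    """Symmetric, doubly stochastic Metropolis-Hastings weights for an undirected graph."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()               # self-weight absorbs the remainder
    return W

def beta_of(W):
    """Second largest eigenvalue magnitude of a symmetric mixing matrix W."""
    eig = np.sort(np.abs(np.linalg.eigvalsh(W)))
    return eig[-2]

def ring(n):
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = True
    return adj

def path(n):
    adj = ring(n)
    adj[0, n - 1] = adj[n - 1, 0] = False        # break the cycle
    return adj

def complete(n):
    return ~np.eye(n, dtype=bool)
```

For the complete graph these weights give $W = \frac{1}{n}1_n1_n^T$, so $\beta = 0$, while sparser topologies such as the ring and the path push $\beta$ toward $1$.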
We set $\alpha = 1$ and $\Delta=10^2$ for all instances of the experiment, while the batch size for the calculation of the local stochastic gradients was set to $B=16$ in all cases. Methods applied to networks of identical size drew (randomly, with replacement) the same samples at each iteration. Our results are summarized in Figure~\ref{fig:scaling}. In Fig.~\ref{fig:scaling}, top left, we terminated all experiments after $T=2\cdot 10^4$ iterations and plotted the normalized function value error $\left(f\left(\bar{x}_k\right) - f\left(x^\star\right)\right)/f(x^\star)$, averaged over the last $\tau=10^3$ iterations. We observe that networks with better connectivity converge closer to the true optimal value, implying that the terms inversely proportional to $(1-\beta^2)$ dominate the error neighborhood in Eq.~\eqref{eq:t_err_size}. Adding more nodes improves convergence accuracy for well-connected graphs (complete, random), possibly due to the ``variance reduction'' effect on the stochastic gradients discussed in the previous section. For the remaining graphs, however, this beneficial effect is outweighed by the decrease in connectivity that results from the introduction of additional nodes and consequently, large values of $n$ yield worse convergence neighborhoods. Increasing the number of consensus steps per iteration has a favorable effect on accuracy, an observation consistent with our theoretical findings in the previous section. For the next set of plots, we analyze the run time and cost of the algorithm until termination. The presence of stochastic noise makes the establishment of a termination criterion that performs well for all parameter combinations a challenging task. Inspired by~\cite{welford_method}, we tracked an approximate time average $\bar{f}$ of the function values $f\left(\bar{x}_k\right)$ using Welford's method for online sample mean computation, i.e.
$\bar{f}_k = \bar{f}_{k-1} + \frac{f(\bar{x}_k) - \bar{f}_{k-1}}{k}$, for $k=1,2,...$, with $\bar{f}_0 = f\left(\bar{x}_0\right)$. We terminate the algorithm at iteration count $k$ if $ \left| \frac{\bar{f}_k - \bar{f}_{k-1}}{\bar{f}_{k-1}} \right| < \epsilon$, where $\epsilon$ is a tolerance parameter. In Figure~\ref{fig:scaling}, top right, we graph the number of steps (or gradient evaluations) until the termination criterion described in the previous paragraph is satisfied for $\epsilon=10^{-5}$. We observe a similar trend to Figure~\ref{fig:scaling}, top left, indicating that poor connectivity has an adverse effect on both accuracy and the rate of convergence, although the latter is not predicted by the theory. Increasing the number of consensus steps per iteration reduces the total number of steps needed to satisfy the stopping criterion. Finally, in the bottom row of Fig.~\ref{fig:scaling} we plot the total application cost per node until termination, which we calculated using the cost framework first introduced in~\cite{berahas_balancing_2019}, \begin{equation*} \label{eq:cost_frame} \text{Cost} = c_c \times \# \text{Communications} + c_g \times \# \text{Computations}, \end{equation*} where $c_c$ and $c_g$ are constants representing the application-specific costs of communication and computation respectively. In Fig.~\ref{fig:scaling}, bottom left, communication is $100$ times cheaper than computation, i.e. $c_c=0.01 \cdot c_g$. Increasing the number of consensus steps per iteration almost always yields faster convergence in terms of total cost, excluding some cases where the network is already reasonably well-connected (e.g., the complete graph). In Fig.~\ref{fig:scaling}, bottom right, the costs of computation and communication are equal, i.e. $c_c=c_g$, and increasing the number of consensus steps per iteration results in higher total cost in all cases. \begin{figure} \caption{Dependence on network type and size.
Function value error averaged over the last $\tau$ iterations out of $T$ total iterations (top left), steps/gradient evaluations until termination (top right), total cost until termination when communication is cheaper than computation (bottom left), and when communication and computation have the same cost (bottom right).} \label{fig:scaling} \end{figure} \section{Conclusion} \label{sec:conclusion} We have proposed a first-order method (S-NEAR-DGD) for distributed optimization over fixed, undirected networks, that can tolerate stochastic gradient approximations and noisy information exchange to alleviate the loads of computation and communication, respectively. The strength of our method lies in its flexible framework, which alternates between gradient steps and a number of nested consensus rounds that can be adjusted to match application requirements. Our analysis indicates that S-NEAR-DGD converges in expectation to a neighborhood of the optimal solution when the local objective functions are strongly convex and have Lipschitz continuous gradients. We have quantified the dependence of this neighborhood on algorithm parameters, problem-related quantities and the topology and size of the network. Empirical results demonstrate that our algorithm performs comparably to or better than state-of-the-art methods, depending on the implementation of stochastic quantized consensus. Future directions include the application of this method to nonconvex settings and directed graphs. \appendix In this section, we analyze S-NEAR-DGD under the quantized consensus variant Q.2 of Table~\ref{tab:consensus_variants}. We retain the same notation and assumptions as in the main part of this work.
The system-wide iterates of S-NEAR-DGD when consensus is implemented with variant Q.2 can be expressed as, \begin{subequations} \begin{align} &\*y_{k} = \*x_{k-1}^{t(k-1)} - \alpha \*g_{k-1}\label{eq:near_dgd_y_q2},\\ &\*x^j_k = \*Z \*q^{j}_k,\quad j=1,...,t(k), \label{eq:near_dgd_x_q2} \end{align} \end{subequations} where $\*g_{k-1}=\left[g_{1,k-1};...;g_{n,k-1}\right]$ , $g_{i,k-1}=\mathcal{T}_g\left[\nabla f_i \left(x^{t(k-1)}_{i,k-1}\right)\right]$, $\*Z = \left(\*W \otimes I_p\right) \in \mathbb{R}^{np \times np}$, $\*q^j_k=\left[q^j_{1,k};...;q^j_{n,k}\right]$ for $j=1,...,t(k)$ and $q^{j}_{i,k}=\mathcal{T}_c\left[x^{j-1}_{i,k}\right]$. In the next subsection, we derive the convergence properties of the iterates generated by~\eqref{eq:near_dgd_y_q2} and~\eqref{eq:near_dgd_x_q2}. We closely follow the proof structure of our main analysis, excluding those Lemmas that are independent of the implementation of consensus (Lemmas~\ref{lem:descent_f}, \ref{lem:global_L_mu}, \ref{lem:comp_error} and~\ref{lem:descent_avg_iter}). In Lemma~\ref{lem:comm_error_q2}, we show that the total communication error in a single iteration of S-NEAR-DGD$^t$ ($t(k)=t$ in~\eqref{eq:near_dgd_y_q2} and~\eqref{eq:near_dgd_x_q2}) is proportional to the parameter $t$, the number of consensus rounds performed. We derive upper bounds for the magnitude of the iterates and the distance of the local iterates to the average iterates in Lemmas~\ref{lem:bounded_iterates_q2} and~\ref{lem:bounded_variance_q2}, respectively. In Theorem~\ref{thm:bounded_dist_min_q2}, we prove that the average iterates $\bar{x}^t_k =\frac{1}{n}\sum_{i=1}^n x^t_{i,k}$ of S-NEAR-DGD$^t$ under Q.2 converge (in expectation) with geometric rate to a neighborhood of the optimal solution. 
Specifically, this convergence neighborhood is bounded by, \begin{equation} \label{eq:t_err_size_q2} \begin{split} \limsup_{k\rightarrow \infty} \mathbb{E}\left[\left\|\bar{x}^t_k - x^\star \right\|^2\right] &\leq \frac{\beta^{2t}\rho L^2D}{n\gamma_{\bar{f}}^2} + \frac{\alpha \sigma_g^2 }{n\gamma_{\bar{f}}}+ \frac{\beta^{2t} \left(1+\kappa \right)^2\rho\sigma_g^2}{2\gamma_{\bar{f}}^2} + \frac{t\sigma_c^2}{n\alpha\gamma_{\bar{f}}} + \frac{\rho L^2t\sigma_c^2}{\gamma_{\bar{f}}^2} + \frac{\beta^{2t}\left(1+\kappa\right)^2\rho t\sigma_c^2}{2\alpha^2\gamma_{\bar{f}}^2}. \end{split} \end{equation} We observe that several terms in~\eqref{eq:t_err_size_q2} are proportional to $t$, the number of consensus rounds per iteration. Conversely, when an error correction mechanism is incorporated into consensus,~\eqref{eq:t_err_size} suggests that error terms are either independent of or decrease with $t$. Next, we prove that the local iterates of S-NEAR-DGD$^t$ under Q.2 converge to a neighborhood of the optimal solution in Corollary~\ref{cor:local_dist_q2}. Finally, we establish an upper bound for the distance of the average iterates $\bar{x}^k_k =\frac{1}{n}\sum_{i=1}^n x^k_{i,k}$ of S-NEAR-DGD$^+$ ($t(k)=k$ in~\eqref{eq:near_dgd_y_q2} and~\eqref{eq:near_dgd_x_q2}) to the optimal solution in Theorem~\ref{thm:near_dgd_plus_q2}. Our theoretical results indicate that this method diverges, as the distance to the solution increases at every iteration. \subsection{Convergence analysis of S-NEAR-DGD under consensus variant Q.2} \label{appendix:q2} \begin{lem}\textbf{(Bounded communication error (Q.2))} \label{lem:comm_error_q2} Let $\mathcal{E}^c_{t,k} := \*x^{t}_k - \*Z^{t}\*y_k$ be the total communication error at the $k$-th iteration of S-NEAR-DGD$^t$ under quantization variant Q.2, i.e. $t(k)=t$ in~\eqref{eq:near_dgd_y_q2} and~\eqref{eq:near_dgd_x_q2}.
Then the following relations hold for $k=1,2,...$ \begin{equation*} \begin{split} \mathbb{E}_{\mathcal{T}_c}\left[\mathcal{E}^{c}_{t,k} \big| \mathcal{F}^0_k \right] = \mathbf{0}, \quad \mathbb{E}_{\mathcal{T}_c}\left[\left \|\mathcal{E}^{c}_{t,k} \right\|^2 \big| \mathcal{F}^0_k \right] &\leq nt\sigma_c^2. \end{split} \end{equation*} \end{lem} \begin{proof} Setting $\*x^0_k=\*y_k$, we can re-write $\mathcal{E}^{c}_{t,k}$ as, \begin{equation} \label{eq:lem_comp_err_1_q2} \begin{split} \mathcal{E}^{c}_{t,k} &= \*Z \*q^{t}_k - \*Z^t \*x^0_k\\ &= \*Z \*q^{t}_k - \sum_{j=1}^{t-1}\*Z^{t-j}\*x_k^{j} + \sum_{j=1}^{t-1}\*Z^{t-j}\*x_k^{j} - \*Z^t \*x^0_k\\ &= \*Z \*q^{t}_k - \sum_{j=1}^{t-1}\*Z^{t-j}\*x_k^{j} + \sum_{j=1}^{t-1}\*Z^{t-j+1}\*q_k^{j} - \*Z^t \*x^0_k\\ &=\sum_{j=1}^{t} \*Z^{t-j+1}\left(\*q^{j}_k-\*x^{j-1}_k \right), \end{split} \end{equation} where we used (\ref{eq:near_dgd_x_q2}) to get the first equality, added and subtracted the quantity $\sum_{j=1}^{t-1}\*Z^{t-j}\*x_k^{j}$ to get the second equality and used (\ref{eq:near_dgd_x_q2}) again for the third equality. We notice that $\*q^j_k-\*x^{j-1}_k=\left[\epsilon^j_{1,k};...;\epsilon^j_{n,k}\right]$ as defined in Assumption~\ref{assum:tc_bound}. For $j= 1,\hdots, t$ we therefore have, \begin{equation*} \mathbb{E}_{\mathcal{T}_c}\left[ \*q^j_k-\*x^{j-1}_k \Big| \mathcal{F}^{j-1}_k\right] = \mathbf{0}. \end{equation*} Due to the fact that $\mathcal{F}^{0}_k \subseteq \mathcal{F}^{1}_k \subseteq ...
\subseteq \mathcal{F}^{j-1}_k$, applying the tower property of conditional expectation yields, \begin{equation} \label{eq:comm_err_1st_mom_q2} \begin{split} \mathbb{E}_{\mathcal{T}_c}\left[ \*q^j_k-\*x^{j-1}_k \Big| \mathcal{F}^0_k\right] = \mathbb{E}_{\mathcal{T}_c}\left[ \mathbb{E}_{\mathcal{T}_c}\left[ \*q^j_k-\*x^{j-1}_k \Big| \mathcal{F}^{j-1}_k\right] \Big| \mathcal{F}^0_k\right] = \mathbf{0}, \quad j=1,...,t, \end{split} \end{equation} and hence $\mathbb{E}_{\mathcal{T}_c}\left[ \mathcal{E}^c_{t,k} \big| \mathcal{F}^0_k\right] = \mathbf{0}$ due to linearity of expectation and Eq. \eqref{eq:lem_comp_err_1_q2}. Moreover, using the fact that $\left\|\*Z\right\|=1$ due to Assumption~\ref{assum:consensus_mat}, we obtain for $j=1,...,t$, \begin{equation} \label{eq:comm_err_2nd_mom_q2} \begin{split} \mathbb{E}_{\mathcal{T}_c}\left[ \left\| \*Z^{t-j+1}\left(\*q^j_k-\*x^{j-1}_k \right) \right\|^2 \Big| \mathcal{F}^0_k\right] &\leq \mathbb{E}_{\mathcal{T}_c}\left[ \left\| \*q^j_k-\*x^{j-1}_k \right\|^2 \Big| \mathcal{F}^0_k\right]\\ &=\mathbb{E}_{\mathcal{T}_c}\left[ \mathbb{E}_{\mathcal{T}_c}\left[ \left\| \*q^j_k-\*x^{j-1}_k \right\|^2 \Big| \mathcal{F}^{j-1}_k\right] \bigg| \mathcal{F}^0_k\right] \\ &=\mathbb{E}_{\mathcal{T}_c}\left[ \sum_{i=1}^{n} \mathbb{E}_{\mathcal{T}_c}\left[ \left\|\epsilon^j_{i,k} \right\|^2 \Big| \mathcal{F}^{j-1}_k\right] \bigg| \mathcal{F}^0_k\right]\\ &\leq n\sigma_c^2, \end{split} \end{equation} where we used the tower property of conditional expectation to get the first equality and applied Assumption~\ref{assum:tc_bound} to get the last inequality. Assumption~\ref{assum:tc_bound} implies that for all $i=1,...,n$ and $j_1 \neq j_2$, $\epsilon^{j_1}_{i,k}$ and $\epsilon^{j_2}_{i,k}$ and by extension $\*q^{j_1}_k - \*x^{j_1-1}_k$ and $\*q^{j_2}_k - \*x^{j_2-1}_k$ are independent.
Eq.~\eqref{eq:comm_err_1st_mom_q2} then yields, \begin{equation*} \mathbb{E}_{\mathcal{T}_c}\left[\left \langle \*Z^{t-j_1+1}\left(\*q^{j_1}_k - \*x^{j_1-1}_k\right), \*Z^{t-j_2+1}\left(\*q^{j_2}_k - \*x^{j_2-1}_k\right) \right \rangle\bigg|\mathcal{F}^0_k\right] = 0, \quad j_1 \neq j_2. \end{equation*} Combining the equation above with~\eqref{eq:comm_err_2nd_mom_q2} and by linearity of expectation, after expanding the squared norm $\left\|\mathcal{E}^c_{t,k}\right\|^2$ we obtain, \begin{equation*} \begin{split} \mathbb{E}_{\mathcal{T}_c} \left[\left\| \mathcal{E}^c_{t,k} \right\|^2 \big| \mathcal{F}^{0}_k \right] &= \mathbb{E}_{\mathcal{T}_c} \left[ \left\|\sum_{j=1}^{t} \*Z^{t-j+1}\left(\*q^j_k-\*x^{j-1}_k \right)\right\|^2\Bigg| \mathcal{F}^{0}_k \right] \\ &= \sum_{j=1}^{t} \mathbb{E}_{\mathcal{T}_c} \left[ \left\| \*Z^{t-j+1}\left(\*q^j_k-\*x^{j-1}_k \right)\right\|^2\Bigg| \mathcal{F}^{0}_k \right]\\ &\leq nt\sigma_c^2, \end{split} \end{equation*} which completes the proof. \end{proof} We continue our convergence analysis by proving that the iterates generated by S-NEAR-DGD$^t$ are bounded in expectation. \begin{lem}\textbf{(Bounded iterates (Q.2))} \label{lem:bounded_iterates_q2} Let $\*y_k$ and $\*x_k$ be the iterates generated by (\ref{eq:near_dgd_y_q2}) and (\ref{eq:near_dgd_x_q2}), respectively, from initial point $\*y_0 \in \mathbb{R}^{np}$. Moreover, let the steplength $\alpha$ satisfy, \begin{equation*} \alpha < \frac{2}{\mu + L}, \end{equation*} where $\mu=\min_i \mu_i$ and $L=\max_i L_i$.
Then $\*x_k$ and $\*y_k$ are bounded in expectation for $k=1,2,...$, i.e., \begin{equation*} \begin{split} \mathbb{E}\left[\left\| \*y_{k}\right\|^2 \right]\leq D + \frac{(1+\kappa)^2n\sigma_g^2}{2L^2}+ \frac{(1+\kappa)^2nt\sigma_c^2}{2\alpha^2 L^2}, \end{split} \end{equation*} \begin{equation*} \begin{split} \mathbb{E}\left[\left\| \*x^t_k \right\|^2 \right] \leq D + \frac{(1+\kappa)^2n\sigma_g^2}{2L^2}+ \frac{(1+\kappa)^2nt\sigma_c^2}{2\alpha^2 L^2} + nt\sigma_c^2, \end{split} \end{equation*} where $D=2 \mathbb{E}\left[ \left\|\*y_0 - \*u^\star \right\|^2 \right]+2\left(1+4\nu^{-3}\right)\left\|\*u^\star\right\|^2$, $\*u^\star=[u_1^\star;u_2^\star;...;u_n^\star] \in \mathbb{R}^{np}$, $u_i^\star = \arg \min_x f_i(x)$, $\nu=\frac{2\alpha\mu L}{\mu+L}$ and $\kappa=L/\mu$ is the condition number of Problem~\eqref{eq:consensus_prob}. \end{lem} \begin{proof} Consider, \begin{equation*} \begin{split} \left\| \*y_{k+1} - \*u^\star \right\|^2 &= \left\|\*x_k^{t} - \alpha \*g_k - \*u^\star \right\|^2\\ &= \left\|\*x_k^{t} - \alpha \nabla \*f(\*x_k^{t}) - \*u^\star - \alpha \mathcal{E}^g_{k+1} \right\|^2 \\ &= \left\|\*x_k^{t} - \alpha \nabla \*f(\*x_k^{t}) - \*u^\star \right\|^2 + \alpha^2\left\| \mathcal{E}^g_{k+1} \right\|^2 - 2\alpha \left \langle \*x_k^{t} - \alpha \nabla \*f(\*x_k^{t}) - \*u^\star, \mathcal{E}^g_{k+1}\right \rangle, \end{split} \end{equation*} where we used (\ref{eq:near_dgd_y_q2}) to get the first equality and added and subtracted $\alpha\nabla\*f\left(\*x^t_k\right)$ and applied the computation error definition $\mathcal{E}^g_{k+1}:=\*g_k - \nabla \*f\left(\*x^t_k\right)$ to obtain the second equality. 
Taking the expectation conditional on $\mathcal{F}^t_k$ on both sides of the relation above and applying Lemma~\ref{lem:comp_error} yields, \begin{equation} \label{eq:lem_bounded_xy_2_q2} \begin{split} \mathbb{E}\left[\left\| \*y_{k+1} - \*u^\star \right\|^2 \Big| \mathcal{F}^t_k \right]&\leq \left\|\*x_k^{t} - \alpha \nabla \*f(\*x_k^{t}) - \*u^\star \right\|^2 + \alpha^2 n \sigma_g^2. \end{split} \end{equation} For the first term on the right-hand side of (\ref{eq:lem_bounded_xy_2_q2}), combining Lemma~\ref{lem:descent_f} with Lemma~\ref{lem:global_L_mu} and due to $\alpha < \frac{2}{\mu+L}$, we have \begin{equation*} \label{eq:lem_bounded_xy_3} \left\|\*x_k^{t} - \alpha \nabla \*f(\*x_k^{t}) - \*u^\star \right\|^2 \leq (1-\nu)\left\|\*x_k^{t}-\*u^\star\right\|^2, \end{equation*} where $\nu=\frac{2\alpha \mu L}{\mu + L}=\frac{2\alpha L}{1+\kappa}<1$. Expanding the term on the right hand side of the previous relation yields, \begin{equation*} \label{eq:lem_bounded_xy_4} \begin{split} \left\|\*x_k^{t}-\*u^\star\right\|^2 &= \left\|\mathcal{E}^{c}_{t,k} + \*Z^t\*y_k -\*Z^t\*u^\star + \*Z^t\*u^\star - \*u^\star\right\|^2\\ &= \left\|\mathcal{E}^{c}_{t,k}\right\|^2 + \left\|\*Z^t\left(\*y_k - \*u^\star\right) - \left(I-\*Z^t\right)\*u^\star\right\|^2 + 2\left \langle \mathcal{E}^{c}_{t,k}, \*Z^t\*y_k - \*u^\star \right \rangle \\ &\leq \left\|\mathcal{E}^{c}_{t,k}\right\|^2 + \left(1+\nu\right)\left\|\*Z^t\left(\*y_k - \*u^\star\right)\right\|^2 +\left(1+\nu^{-1}\right)\left\| \left(I-\*Z^t\right)\*u^\star\right\|^2+ 2\left \langle \mathcal{E}^{c}_{t,k}, \*Z^t\*y_k - \*u^\star \right \rangle\\ &\leq \left\|\mathcal{E}^{c}_{t,k}\right\|^2 + \left(1+\nu\right)\left\|\*y_k - \*u^\star\right\|^2 +4\left(1+\nu^{-1}\right)\left\|\*u^\star\right\|^2 + 2\left \langle \mathcal{E}^{c}_{t,k}, \*Z^t\*y_k - \*u^\star \right \rangle, \end{split} \end{equation*} where we added and subtracted the quantities $\*Z^t\*y_k$ and $\*Z^t\*u^\star$ and applied the communication error
definition $\mathcal{E}^{c}_{t,k}:=\*x^t_k - \*Z^t\*y_k$ to get the first equality. We used the standard inequality $\pm2\langle a,b\rangle \leq c\|a\|^2 + c^{-1}\|b\|^2$ that holds for any two vectors $a,b$ and positive constant $c>0$ to get the first inequality. Finally, we derive the last inequality using the relations $\left\|\*Z^t \right\|=1$ and $\left\|I-\*Z^t \right\|<2$ that hold due to Assumption~\ref{assum:consensus_mat}. Due to the fact that $\mathcal{F}^0_k \subseteq \mathcal{F}^t_k$, combining the preceding three relations and taking the expectation conditional on $\mathcal{F}^0_k$ on both sides of~\eqref{eq:lem_bounded_xy_2_q2} yields, \begin{equation*} \begin{split} \mathbb{E}\left[\left\| \*y_{k+1} - \*u^\star \right\|^2 \Big| \mathcal{F}^0_k \right]&\leq \left(1-\nu^2\right) \left\|\*y_k - \*u^\star \right\|^2 + \alpha^2n\sigma_g^2 + (1-\nu)\mathbb{E}\left[\left\|\mathcal{E}^c_{t,k}\right\|^2\big|\mathcal{F}^0_k\right]\\ & \quad + 2(1-\nu)\mathbb{E}\left[\left \langle \mathcal{E}^{c}_{t,k}, \*Z^t\*y_k - \*u^\star \right \rangle \big|\mathcal{F}^0_k\right]+ 4\nu^{-1}\left(1-\nu^2\right)\left\|\*u^\star\right\|^2\\ &\leq \left(1-\nu^2\right) \left\|\*y_k - \*u^\star \right\|^2 + \alpha^2n\sigma_g^2 + (1-\nu)nt\sigma_c^2+ 4\nu^{-1}\left(1-\nu^2\right)\left\|\*u^\star\right\|^2, \end{split} \end{equation*} where we applied Lemma~\ref{lem:comm_error_q2} to get the last inequality. 
Taking the full expectation on both sides of the relation above and applying recursively over iterations $0, 1, \ldots, k$ yields, \begin{equation*} \begin{split} \mathbb{E}\left[\left\| \*y_{k} - \*u^\star \right\|^2\right] &\leq \left(1-\nu^2\right)^k \mathbb{E}\left[ \left\|\*y_0 - \*u^\star \right\|^2 \right]+ \alpha^2\nu^{-2}n\sigma_g^2 + (\nu^{-2}-\nu^{-1})nt\sigma_c^2+ 4\left(\nu^{-3}-\nu^{-1}\right)\left\|\*u^\star\right\|^2\\ &\leq \mathbb{E}\left[ \left\|\*y_0 - \*u^\star \right\|^2 \right]+ \alpha^2\nu^{-2}n\sigma_g^2 + \nu^{-2}nt\sigma_c^2 + 4\nu^{-3}\left\|\*u^\star\right\|^2, \end{split} \end{equation*} where we used $\sum_{h=0}^{k-1}\left(1-\nu^2\right)^h\leq \nu^{-2}$ to get the first inequality and $\nu > 0$ to get the second inequality. Moreover, the following statement holds \begin{equation*} \begin{split} \left\| \*y_{k}\right\|^2 &= \left\| \*y_{k} - \*u^\star + \*u^\star \right\|^2\\ &\leq 2\left\| \*y_{k} - \*u^\star \right\|^2+2\left\| \*u^\star \right\|^2. \end{split} \end{equation*} After taking the total expectation on both sides of the above relation, we obtain \begin{equation} \label{eq:bound_yk_q2} \begin{split} \mathbb{E}\left[\left\| \*y_{k}\right\|^2 \right]& \leq 2\mathbb{E}\left[\left\| \*y_{k} - \*u^\star \right\|^2\right]+2\left\| \*u^\star \right\|^2\\ &\leq 2 \mathbb{E}\left[ \left\|\*y_0 - \*u^\star \right\|^2 \right]+ 2 \alpha^2\nu^{-2}n\sigma_g^2 + 2 \nu^{-2}nt\sigma_c^2 + 2\left(1+4\nu^{-3}\right)\left\|\*u^\star\right\|^2. \end{split} \end{equation} Applying the definitions of $D$ and $\kappa$ to (\ref{eq:bound_yk_q2}) yields the first result of this lemma. 
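For completeness, the geometric-series bound used above holds because $\alpha < \frac{2}{\mu+L}$ implies $0 < \nu = \frac{2\alpha\mu L}{\mu+L} < \frac{4\mu L}{\left(\mu+L\right)^2} \leq 1$, so that $1-\nu^2 \in (0,1)$ and
\begin{equation*}
\sum_{h=0}^{k-1}\left(1-\nu^2\right)^h \leq \sum_{h=0}^{\infty}\left(1-\nu^2\right)^h = \frac{1}{1-\left(1-\nu^2\right)} = \nu^{-2}.
\end{equation*}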
Finally, the following statement also holds \begin{equation*} \begin{split} \left\| \*x^t_k \right\|^2 &= \left\| \mathcal{E}^{c}_{t,k} + \*Z^t\*y_k\right\|^2\\ &= \left\| \mathcal{E}^{c}_{t,k}\right\|^2 +\left\| \*Z^t\*y_k\right\|^2 + 2\left \langle\mathcal{E}^{c}_{t,k}, \*Z^t\*y_k \right \rangle\\ &\leq \left\| \mathcal{E}^{c}_{t,k}\right\|^2 +\left\| \*y_k\right\|^2 + 2\left \langle\mathcal{E}^{c}_{t,k}, \*Z^t\*y_k \right \rangle, \end{split} \end{equation*} where we used the non-expansiveness of $\*Z$ for the last inequality. Taking the expectation conditional on $\mathcal{F}^0_k$ on both sides of the preceding relation and applying Lemma \ref{lem:comm_error_q2} yields, \begin{equation} \label{eq:bound_xk} \begin{split} \mathbb{E}\left[\left\| \*x^t_k \right\|^2 \Big | \mathcal{F}^0_k \right] &\leq nt\sigma_c^2 +\left\| \*y_k\right\|^2. \end{split} \end{equation} Taking the total expectation on both sides of the above relation, applying (\ref{eq:bound_yk_q2}) and the definitions of $D$ and $\kappa$ concludes this proof. \end{proof} \begin{lem} \textbf{(Bounded distance to average (Q.2))} \label{lem:bounded_variance_q2} Let $x_{i,k}^t$ and $y_{i,k}$ be the local iterates produced by S-NEAR-DGD$^t$ at node $i$ and iteration count $k$ and let $\bar{x}_k^t:=\frac{1}{n}\sum_{i=1}^n x^t_{i,k}$ and $\bar{y}_k:=\frac{1}{n}\sum_{i=1}^n y_{i,k}$ denote the average iterates across all nodes.
Then the distance between the local and average iterates is bounded in expectation for $i=1,...,n$ and $k=1,2,...$, namely \begin{equation*} \begin{split} \mathbb{E}\left[\left\|x^t_{i,k} - \bar{x}^t_k \right\|^2\right] &\leq \mathbb{E}\left[\left\|\*x^t_k - \*M \*x^t_k \right\|^2\right] \leq \beta^{2t}D + \frac{\beta^{2t}(1+\kappa)^2n\sigma_g^2}{2L^2}+ \frac{\beta^{2t}(1+\kappa)^2nt\sigma_c^2}{2\alpha^2 L^2} + nt\sigma_c^2, \end{split} \end{equation*} and \begin{equation*} \begin{split} \mathbb{E} \left[\left\|y_{i,k} - \bar{y}_k \right\|^2 \right]&\leq \mathbb{E}\left[ \left\|\*y_k - \*M \*y_k \right\|^2 \right]\leq D + \frac{(1+\kappa)^2n\sigma_g^2}{2L^2}+ \frac{(1+\kappa)^2nt\sigma_c^2}{2\alpha^2 L^2}, \end{split} \end{equation*} where $\*M = \left( \frac{1_n 1_n^T}{n} \otimes I_p \right) \in \mathbb{R}^{np \times np}$ is the averaging matrix, constant $D$ is defined in Lemma~\ref{lem:bounded_iterates_q2} and $\kappa=L/\mu$ is the condition number of Problem~\eqref{eq:consensus_prob}. \end{lem} \begin{proof} Observing that $\sum_{i=1}^n \left\|x^t_{i,k}-\bar{x}^t_k \right\|^2 = \left\| \*x^t_k - \*M \*x^t_k \right\|^2$, we obtain \begin{equation} \label{eq:lem_bounded_diff_x_q2} \left\|x^t_{i,k}-\bar{x}^t_k \right\|^2 \leq \left\| \*x^t_k - \*M \*x^t_k \right\|^2,\forall i=1,...,n.
\end{equation} We can bound the right-hand side of (\ref{eq:lem_bounded_diff_x_q2}) as \begin{equation} \label{eq:lem2_x1_q2} \begin{split} \left\|\*x^t_k - \*M \*x^t_k\right\|^2 &= \left\|\mathcal{E}^{c}_{t,k} + \*Z^t\*y_k- \*M \*x^t_k - \*M\*y_k + \*M\*y_k \right\|^2\\ &=\left\|\mathcal{E}^{c}_{t,k} + \left(\*Z^t-\*M\right)\*y_k- \*M\*x^t_k +\*M \*Z^t\*y_k \right\|^2\\ &=\left\|\left(I-\*M\right)\mathcal{E}^{c}_{t,k} \right\|^2+\left\| \left(\*Z^t-\*M\right)\*y_k \right\|^2 + 2\left \langle\left(I-\*M\right)\mathcal{E}^{c}_{t,k} ,\left(\*Z^t-\*M\right)\*y_k \right \rangle\\ &\leq \left\|\mathcal{E}^{c}_{t,k} \right\|^2+\beta^{2t}\left\|\*y_k \right\|^2 + 2\left \langle\left(I-\*M\right)\mathcal{E}^{c}_{t,k} ,\left(\*Z^t-\*M\right)\*y_k \right \rangle, \end{split} \end{equation} where we applied the definition of the communication error $\mathcal{E}^c_{t,k}$ of Lemma~\ref{lem:comm_error_q2} and added and subtracted $\*M\*y_k$ to obtain the first equality, and used the fact that $\*M \*Z^t = \*M$ to get the second equality. We derive the last inequality using $\left\|I-\*M\right\|=1$ and $\left\|\*Z^t-\*M\right\|=\beta^t$, which follow from the spectral properties of $\*Z^t = \*W^t \otimes I_p$ and $\*M = \left(\frac{1_n 1_n^T}{n}\right) \otimes I_p$. Taking the expectation conditional on $\mathcal{F}^0_k$ on both sides of (\ref{eq:lem2_x1_q2}) and applying Lemma~\ref{lem:comm_error_q2} yields, \begin{equation*} \begin{split} \mathbb{E}\left[ \left\|\*x^t_k - \*M \*x^t_k\right\|^2 \Big | \mathcal{F}^0_k \right] &\leq nt\sigma_c^2+\beta^{2t}\left\|\*y_k \right\|^2. \end{split} \end{equation*} Taking the total expectation on both sides and applying Lemma~\ref{lem:bounded_iterates_q2} yields the first result of this lemma. Similarly, the following inequality holds for the $\*y_k$ iterates \begin{equation} \label{eq:lem_bounded_diff_y_q2} \left\|y_{i,k}-\bar{y}_k \right\|^2 \leq \left\| \*y_k - \*M \*y_k \right\|^2,\forall i=1,...,n.
\end{equation} For the right-hand side of (\ref{eq:lem_bounded_diff_y_q2}), we have \begin{equation*} \begin{split} \left\|\*y_k - \*M\*y_k \right\|^2 &= \left\| \left(I-\*M\right)\*y_k\right\|^2\\ &\leq \left\| \*y_k \right\|^2, \end{split} \end{equation*} where we have used the fact that $\left\|I-\*M\right\|=1$. Taking the total expectation on both sides and applying Lemma~\ref{lem:bounded_iterates_q2} concludes this proof. \end{proof} \begin{thm} \textbf{(Convergence of S-NEAR-DGD$^t$ (Q.2))} \label{thm:bounded_dist_min_q2} Let $\bar{x}^t_k:=\frac{1}{n}\sum_{i=1}^n x^t_{i,k}$ denote the average of the local iterates generated by~\eqref{eq:near_dgd_x_q2} from initial point $\*y_0$ at iteration $k$ and let the steplength $\alpha$ satisfy \begin{equation*} \alpha < \min \left \{\frac{2}{\mu+L},\frac{2}{\mu_{\bar{f}}+L_{\bar{f}}} \right\}, \end{equation*} where $\mu=\min_i \mu_i$, $L = \max_i L_i$, $\mu_{\bar{f}}=\frac{1}{n}\sum_{i=1}^n\mu_i$ and $L_{\bar{f}}=\frac{1}{n}\sum_{i=1}^n L_i$.
Then the distance of $\bar{x}^t_k$ to the optimal solution $x^\star$ of Problem~(\ref{eq:consensus_prob}) is bounded in expectation for $k=1,2,...$, \begin{equation} \label{eq:thm_xk1_q2} \begin{split} \mathbb{E}\left[\left\|\bar{x}^t_{k+1} - x^\star\right\|^2 \right] &\leq \rho \mathbb{E}\left[ \left\| \bar{x}^t_{k} - x^\star\right\|^2\right] + \frac{\alpha\beta^{2t}\rho L^2D}{n\gamma_{\bar{f}}} + \frac{\alpha^2 \sigma_g^2 }{n} \\ &\quad + \frac{\alpha\beta^{2t}\left(1+\kappa\right)^2\rho\sigma_g^2}{2\gamma_{\bar{f}}} + \frac{t\sigma_c^2}{n} + \frac{\alpha\rho L^2t\sigma_c^2}{\gamma_{\bar{f}}} + \frac{\beta^{2t}\left(1+\kappa\right)^{2}\rho t\sigma_c^2}{2\alpha\gamma_{\bar{f}}}, \end{split} \end{equation} and \begin{equation} \label{eq:thm_xk2_q2} \begin{split} \mathbb{E}\left[\left\|\bar{x}^t_{k} - x^\star\right\|^2 \right] &\leq \rho^k \mathbb{E}\left[ \left\| \bar{x}_0 - x^\star\right\|^2\right] + \frac{\beta^{2t}\rho L^2D}{n\gamma_{\bar{f}}^2} + \frac{\alpha \sigma_g^2 }{n\gamma_{\bar{f}}} \\ &\quad + \frac{\beta^{2t} \left(1+\kappa \right)^2\rho\sigma_g^2}{2\gamma_{\bar{f}}^2} + \frac{t\sigma_c^2}{n\alpha\gamma_{\bar{f}}} + \frac{\rho L^2t\sigma_c^2}{\gamma_{\bar{f}}^2} + \frac{\beta^{2t}\left(1+\kappa\right)^2\rho t\sigma_c^2}{2\alpha^2\gamma_{\bar{f}}^2}, \end{split} \end{equation} where $\bar{x}_0=\frac{1}{n}\sum_{i=1}^n y_{i,0}$, $\rho=1-\alpha\gamma_{\bar{f}}$, $\gamma_{\bar{f}}=\frac{\mu_{\bar{f}}L_{\bar{f}}}{\mu_{\bar{f}}+L_{\bar{f}}}$, $\kappa=L/\mu$ is the condition number of Problem~\eqref{eq:consensus_prob} and the constant $D$ is defined in Lemma~\ref{lem:bounded_iterates_q2}. 
\end{thm} \begin{proof} Applying (\ref{eq:near_dgd_x_q2}) to the $(k+1)^{th}$ iteration and calculating the average iterate $\bar{x}^j_{k+1}=\frac{1}{n}\sum_{i=1}^n x_{i,k+1}^j$ across all $n$ nodes, we obtain, \begin{equation*} \begin{split} \bar{x}^j_{k+1} &= \bar{q}^{j}_{k+1}\text{, }j=1,...,t, \end{split} \end{equation*} where $\bar{q}^{j}_{k+1} = \frac{1}{n}\sum_{i=1}^n q^j_{i,k+1} = \frac{1}{n}\sum_{i=1}^n \left( \epsilon^{j}_{i,k+1} + x^{j-1}_{i,k+1}\right)$ and $x^0_{i,k+1}=y_{i,k+1}$. Hence, we can show that, \begin{equation*} \begin{split} \bar{x}^t_{k+1} - \bar{y}_{k+1} &= \bar{q}^t_{k+1} -\sum_{j=1}^{t-1}\bar{q}^j_{k+1} + \sum_{j=1}^{t-1}\bar{q}^j_{k+1}-\bar{x}^{0}_{k+1} \\ &= \sum_{j=1}^t\left(\bar{q}^j_{k+1} -\bar{x}^{j-1}_{k+1}\right)\\ &= \frac{1}{n}\sum_{j=1}^{t}\sum_{i=1}^n \epsilon^j_{i,k+1}, \end{split} \end{equation*} where we added and subtracted the quantity $\sum_{j=1}^{t-1}\bar{q}^j_{k+1}$ to obtain the first equality. Then the following holds for the distance of $\bar{x}^t_{k+1}$ to $x^\star$, \begin{equation} \label{eq:lem_bounded_min200_q2} \begin{split} \left\|\bar{x}^t_{k+1} - x^\star\right\|^2 &=\left\|\bar{x}^t_{k+1} - \bar{y}_{k+1} + \bar{y}_{k+1}- x^\star\right\|^2\\ &\leq \frac{1}{n^2}\left\|\sum_{j=1}^{t}\sum_{i=1}^n \epsilon^j_{i,k+1}\right\|^2 + \left\| \bar{y}_{k+1} - x^\star\right\|^2 + \frac{2}{n}\left\langle \sum_{j=1}^{t}\sum_{i=1}^n \epsilon^j_{i,k+1}, \bar{y}_{k+1} - x^\star\right \rangle, \end{split} \end{equation} where we added and subtracted $\bar{y}_{k+1}$ to get the first equality. By linearity of expectation and Assumption~\ref{assum:tc_bound} we obtain, \begin{equation*} \mathbb{E}\left[\sum_{j=1}^{t}\sum_{i=1}^n \epsilon^j_{i,k+1}\bigg|\mathcal{F}^0_{k+1}\right] = \mathbf{0}.
\end{equation*} Moreover, observing that $\mathbb{E}\left[\left\langle \epsilon^{j_1}_{i_1,k+1}, \epsilon^{j_2}_{i_2,k+1} \right \rangle\big|\mathcal{F}^0_{k+1}\right]=0$ for $j_1\neq j_2$ or $i_1 \neq i_2$ due to Assumption~\ref{assum:tc_bound}, we obtain, \begin{equation*} \begin{split} \frac{1}{n^2}\mathbb{E}\left[\left\|\sum_{j=1}^{t}\sum_{i=1}^n \epsilon^j_{i,k+1}\right\|^2\Bigg|\mathcal{F}^0_{k+1}\right] &= \frac{1}{n^2}\sum_{j=1}^{t}\sum_{i=1}^n \mathbb{E}\left[\left\| \epsilon^j_{i,k+1}\right\|^2\bigg|\mathcal{F}^0_{k+1}\right]\leq \frac{t\sigma_c^2}{n}, \end{split} \end{equation*} where we applied Assumption~\ref{assum:tc_bound} to get the last inequality. Taking the expectation conditional on $\mathcal{F}^0_{k+1}$ on both sides of~(\ref{eq:lem_bounded_min200_q2}) and substituting the two preceding relations yields, \begin{equation} \label{eq:lem_bounded_min201_q2} \begin{split} \mathbb{E}\left[\left\|\bar{x}^t_{k+1} - x^\star\right\|^2 \Big| \mathcal{F}^0_{k+1}\right] &\leq \left\| \bar{y}_{k+1} - x^\star\right\|^2 + \frac{t\sigma_c^2}{n}. \end{split} \end{equation} Due to the fact that $\mathcal{F}^t_{k} \subseteq \mathcal{F}^0_{k+1}$, taking the expectation conditional on $\mathcal{F}^t_{k}$ on both sides of (\ref{eq:lem_bounded_min201_q2}) and applying Lemma~\ref{lem:descent_avg_iter} and the tower property of expectation yields \begin{equation*} \begin{split} \mathbb{E}\left[\left\|\bar{x}^t_{k+1} - x^\star\right\|^2 \Big| \mathcal{F}^t_{k}\right] &\leq \rho \left\| \bar{x}^t_{k} - x^\star\right\|^2 + \frac{\alpha^2 \sigma_g^2}{n} + \frac{\alpha \rho L^2 \left\|\*x^t_k - \*M \*x^t_k\right\|^2}{n\gamma_{\bar{f}}} + \frac{t\sigma_c^2}{n}.
\end{split} \end{equation*} After taking the full expectation on both sides of the relation above and applying Lemma~\ref{lem:bounded_variance_q2} we obtain, \begin{equation} \label{eq:lem_bounded_min6_q2} \begin{split} \mathbb{E}\left[\left\|\bar{x}^t_{k+1} - x^\star\right\|^2 \right] &\leq \rho \mathbb{E}\left[ \left\| \bar{x}^t_{k} - x^\star\right\|^2\right] + \frac{\alpha\beta^{2t}\rho L^2D}{n\gamma_{\bar{f}}} + \frac{\alpha^2\sigma_g^2 }{n} + \frac{\alpha\beta^{2t}(1+\kappa)^{2}\rho \sigma_g^2}{2\gamma_{\bar{f}}} + \frac{t\sigma_c^2}{n} + \frac{\alpha\rho L^2t\sigma_c^2}{\gamma_{\bar{f}}} + \frac{\beta^{2t}(1+\kappa)^{2}\rho t\sigma_c^2}{2\alpha\gamma_{\bar{f}}}. \end{split} \end{equation} We notice that $\rho<1$ and therefore after applying \eqref{eq:lem_bounded_min6_q2} recursively and using the relation $\sum_{h=0}^{k-1} \rho^h \leq (1-\rho)^{-1}$ we obtain, \begin{equation*} \begin{split} \mathbb{E}\left[\left\|\bar{x}^t_{k} - x^\star\right\|^2 \right] \leq & \rho^k \mathbb{E}\left[ \left\| \bar{x}_0- x^\star\right\|^2\right] + \frac{\alpha\beta^{2t}\rho L^2D}{n\gamma_{\bar{f}}(1-\rho)} + \frac{\alpha^2 \sigma_g^2 }{n(1-\rho)} \\ &\quad + \frac{\alpha\beta^{2t}(1+\kappa)^{2}\rho \sigma_g^2}{2\gamma_{\bar{f}}(1-\rho)} + \frac{t\sigma_c^2}{n(1-\rho)} + \frac{\alpha\rho L^2t\sigma_c^2}{\gamma_{\bar{f}}(1-\rho)} + \frac{\beta^{2t}(1+\kappa)^{2}\rho t\sigma_c^2}{2\alpha\gamma_{\bar{f}}(1-\rho)}. \end{split} \end{equation*} Applying the relation $1-\rho=\alpha \gamma_{\bar{f}}$ completes the proof.
\end{proof} \begin{cor} \textbf{(Convergence of local iterates (Q.2))} \label{cor:local_dist_q2} Let $y_{i,k}$ and $x^t_{i,k}$ be the local iterates generated by~\eqref{eq:near_dgd_y_q2} and~\eqref{eq:near_dgd_x_q2}, respectively, from initial point $\*x_0=\*y_0=[y_{1,0};...;y_{n,0}] \in \mathbb{R}^{np}$ and let the steplength $\alpha$ satisfy \begin{equation*} \alpha < \min \left \{\frac{2}{\mu+L},\frac{2}{\mu_{\bar{f}}+L_{\bar{f}}} \right\}, \end{equation*} where $\mu=\min_i\mu_i$, $L=\max_i L_i$, $\mu_{\bar{f}}=\frac{1}{n}\sum_{i=1}^n\mu_i$ and $L_{\bar{f}}=\frac{1}{n}\sum_{i=1}^n L_i$. Then for $i=1,...,n$ and $k\geq1$ the distance of the local iterates to the solution of Problem~(\ref{eq:consensus_prob}) is bounded, i.e., \begin{equation*} \begin{split} \mathbb{E}\left[\left \|x^t_{i,k} - x^\star \right\|^2 \right] \leq &2\rho^k\mathbb{E} \left[\left\| \bar{x}_0 - x^\star\right\|^2\right] + 2\beta^{2t}\left(1+\frac{C }{n}\right)D +\frac{2\alpha\sigma_g^2}{n\gamma_{\bar{f}}} \\ &\quad + \frac{\beta^{2t}\left(1+\kappa\right)^2\left(n+C\right)\sigma_g^2}{L^2} + 2\left(n+C\right)t\sigma_c^2+ \frac{\beta^{2t}\left(1+\kappa\right)^2\left(n+ C\right)t\sigma_c^2}{\alpha^2L^2} + \frac{2t\sigma_c^2}{n\alpha\gamma_{\bar{f}}}, \end{split} \end{equation*} and \begin{equation*} \begin{split} \mathbb{E}\left[\left \|y_{i,k+1} - x^\star \right\|^2 \right] &\leq 2\rho^{k+1} \mathbb{E}\left[ \left\| \bar{x}_0 - x^\star\right\|^2\right] + 2D+ \frac{2\beta^{2t}CD}{n} + \frac{2\alpha \sigma_g^2}{n \gamma_{\bar{f}}^2} \\ &\quad + \frac{(1+\kappa)^2\left(n+\beta^{2t}C\right)\sigma_g^2}{L^2} + 2Ct\sigma_c^2+ \frac{(1+\kappa)^2 \left(n + \beta^{2t}C\right) t\sigma_c^2}{\alpha^2 L^2} + \frac{2\rho t\sigma_c^2}{n \alpha \gamma_{\bar{f}}}, \end{split} \end{equation*} where $C= \frac{\rho L^2}{\gamma_{\bar{f}}^2}$, $\rho=1-\alpha\gamma_{\bar{f}}$, $\gamma_{\bar{f}}=\frac{\mu_{\bar{f}}L_{\bar{f}}}{\mu_{\bar{f}}+L_{\bar{f}}}$, $\bar{x}_0=\frac{1}{n}\sum_{i=1}^n y_{i,0}$ and the constant $D$ is defined
in Lemma~\ref{lem:bounded_iterates_q2}. \end{cor} \begin{proof} For $i=1,...,n$ and $k\geq1$, for the $x^t_{i,k}$ iterates we have, \begin{equation} \label{eq:lem_dist_min_local1_q2} \begin{split} \left \|x^t_{i,k} - x^\star \right\|^2 &= \left \|x^t_{i,k} -\bar{x}^t_k + \bar{x}^t_k - x^\star \right\|^2 \\ & \leq 2\left \|x^t_{i,k} -\bar{x}^t_k\right\|^2 +2\left\| \bar{x}^t_k - x^\star \right\|^2, \end{split} \end{equation} where we added and subtracted $\bar{x}^t_k$ to get the first equality. Taking the total expectation on both sides of (\ref{eq:lem_dist_min_local1_q2}) and applying Lemma~\ref{lem:bounded_variance_q2} and Theorem~\ref{thm:bounded_dist_min_q2} yields the first result of this corollary. Similarly, for the $y_{i,k}$ local iterates we have, \begin{equation} \label{eq:lem_dist_min_local3_q2} \begin{split} \left \|y_{i,k+1} - x^\star \right\|^2 &= \left \|y_{i,k+1}-\bar{y}_{k+1} + \bar{y}_{k+1} - x^\star \right\|^2 \\ &\leq 2 \left\|y_{i,k+1}-\bar{y}_{k+1}\right\|^2 +2\left\| \bar{y}_{k+1} - x^\star \right\|^2, \end{split} \end{equation} where we derive the first equality by adding and subtracting $\bar{y}_{k+1}$. 
Moreover, taking the total expectation on both sides of the result of Lemma~\ref{lem:descent_avg_iter} yields, \begin{equation*} \label{eq:lem_local_helper} \begin{split} \mathbb{E}\left[ \left\| \bar{y}_{k+1} - x^\star\right\|^2 \right] &\leq \rho \mathbb{E}\left[ \left\| \bar{x}^t_{k} - x^\star\right\|^2\right] + \frac{\alpha^2 \sigma_g^2}{n} + \frac{\alpha \rho L^2 }{n\gamma_{\bar{f}}} \mathbb{E} \left[\left\|\*x^t_k - \*M \*x^t_k\right\|^2\right]\\ &\leq \rho \bigg(\rho^k \mathbb{E}\left[ \left\| \bar{x}_0 - x^\star\right\|^2\right] + \frac{\beta^{2t}CD}{n} + \frac{\alpha \sigma_g^2 }{n\gamma_{\bar{f}}} + \frac{\beta^{2t} \left(1+\kappa \right)^2\rho\sigma_g^2}{2\gamma_{\bar{f}}^2} + \frac{t\sigma_c^2}{n\alpha\gamma_{\bar{f}}} + Ct\sigma_c^2 + \frac{\beta^{2t}\left(1+\kappa\right)^2\rho t\sigma_c^2}{2\alpha^2\gamma_{\bar{f}}^2} \bigg)\\ &\quad + \frac{\alpha^2 \sigma_g^2}{n} + \frac{\alpha C \gamma_{\bar{f}} }{n}\bigg(\beta^{2t}D + \frac{\beta^{2t}(1+\kappa)^2n\sigma_g^2}{2L^2}+ \frac{\beta^{2t}(1+\kappa)^2nt\sigma_c^2}{2\alpha^2 L^2} + nt\sigma_c^2 \bigg)\\ &=\rho^{k+1} \mathbb{E}\left[ \left\| \bar{x}_0 - x^\star\right\|^2\right] + \frac{\beta^{2t}CD}{n}\left(\rho + \alpha \gamma_{\bar{f}}\right) + \frac{\alpha \sigma_g^2}{n}\left(\alpha + \frac{\rho}{\gamma_{\bar{f}}}\right) + \frac{\beta^{2t} \left(1+\kappa \right)^2\sigma_g^2}{2} \left(\frac{\rho^2}{\gamma_{\bar{f}}^2} + \frac{\alpha C \gamma_{\bar{f}}}{L^2} \right) \\ &\quad + t\sigma_c^2 \left(\frac{\rho}{n \alpha \gamma_{\bar{f}}} + \rho C + \alpha C \gamma_{\bar{f}} \right) + \frac{\beta^{2t}(1+\kappa)^2 t \sigma_c^2}{2\alpha^2} \left( \frac{\rho^2}{\gamma_{\bar{f}}^2} + \frac{\alpha C \gamma_{\bar{f}}}{L^2}\right)\\ &=\rho^{k+1} \mathbb{E}\left[ \left\| \bar{x}_0 - x^\star\right\|^2\right] + \frac{\beta^{2t}CD}{n} + \frac{\alpha \sigma_g^2}{n \gamma_{\bar{f}}^2} \\ &\quad + \frac{\beta^{2t} \left(1+\kappa \right)^2 \rho \sigma_g^2}{2\gamma_{\bar{f}}^2} + t\sigma_c^2 \left(\frac{\rho}{n \alpha 
\gamma_{\bar{f}}} + C \right) + \frac{\beta^{2t}(1+\kappa)^2 \rho t \sigma_c^2}{2\alpha^2 \gamma_{\bar{f}}^2}, \end{split} \end{equation*} where we applied Theorem~\ref{thm:bounded_dist_min_q2}, Lemma~\ref{lem:bounded_variance_q2} and the definition of $C$ to get the second inequality. We derive the remaining equalities by algebraic manipulation. Taking the total expectation on both sides of (\ref{eq:lem_dist_min_local3_q2}), applying the inequality above and Lemma~\ref{lem:bounded_variance_q2} completes the proof. \end{proof} \begin{thm}\textbf{(Distance to minimum, S-NEAR-DGD$^+$ (Q.2))} \label{thm:near_dgd_plus_q2} Consider the S-NEAR-DGD$^+$ method under consensus variant Q.2, i.e. $t(k)=k$ in~\eqref{eq:near_dgd_y_q2} and~\eqref{eq:near_dgd_x_q2}. Let $\bar{x}^k_k=\frac{1}{n}\sum_{i=1}^n x^k_{i,k}$ be the average of the local iterates $x_{i,k}^k$ produced by S-NEAR-DGD$^+$ and let the steplength $\alpha$ satisfy, \begin{equation*} \alpha < \min \left \{\frac{2}{\mu+L},\frac{2}{\mu_{\bar{f}}+L_{\bar{f}}} \right\}. \end{equation*} Then the following inequality holds for the distance of $\bar{x}^k_k$ to $x^\star$ for $k=1,2,...$ \begin{equation*} \begin{split} \mathbb{E}\left[\left\|\bar{x}^k_k - x^\star \right\|^2\right] &\leq \rho^k \mathbb{E} \left[ \left\|\bar{x}_0 - x^\star \right\|^2 \right]+ \frac{\eta\theta^k\alpha\rho L^2D}{n\gamma_{\bar{f}}} + \frac{\alpha\sigma_g^2}{n\gamma_{\bar{f}}}\\ &\quad+ \frac{\eta\theta^k\alpha\left(1+\kappa\right)^{2} \rho \sigma_g^2}{2\gamma_{\bar{f}}} + \frac{(k-1) \sigma_c^2}{n\alpha\gamma_{\bar{f}}} + \frac{\rho L^2 (k-1)\sigma_c^2}{\gamma_{\bar{f}}^2} + \frac{\eta\theta^k\left(1+\kappa\right)^{2} \rho (k-1)\sigma_c^2}{2\alpha\gamma_{\bar{f}}}, \end{split} \end{equation*} where $\eta= \left|\beta^2-\rho\right|^{-1}$ and $\theta = \max\left\{\rho,\beta^2\right\}$.
\end{thm} \begin{proof} Replacing $t$ with $k$ in (\ref{eq:thm_xk1_q2}) in Theorem~\ref{thm:bounded_dist_min_q2} yields, \begin{equation*} \begin{split} \mathbb{E}\left[\left\|\bar{x}^{k+1}_{k+1} - x^\star\right\|^2 \right] &\leq \rho \mathbb{E}\left[\left\| \bar{x}^k_{k} - x^\star\right\|^2\right] + \frac{\alpha\beta^{2k}\rho L^2D}{n\gamma_{\bar{f}}} +\frac{\alpha^2\sigma_g^2}{n}\\ &\quad + \frac{\alpha\beta^{2k}\left(1+\kappa\right)^{2}\rho\sigma_g^2}{2\gamma_{\bar{f}}}+ \frac{k\sigma_c^2}{n} + \frac{\alpha\rho L^2k\sigma_c^2}{\gamma_{\bar{f}}} + \frac{\beta^{2k}\left(1+\kappa\right)^{2}\rho k\sigma_c^2}{2\alpha\gamma_{\bar{f}}}. \end{split} \end{equation*} Applying recursively for iterations $1, 2,\ldots, k$, we obtain, \begin{equation} \label{eq:thm_near_dgd_p_1_q2} \begin{split} \mathbb{E}\left[\left\|\bar{x}^k_k - x^\star \right\|^2\right] &\leq \rho^k \mathbb{E} \left[ \left\|\bar{x}_0 - x^\star \right\|^2 \right]+ S_1\left(\frac{\alpha\rho L^2D}{n\gamma_{\bar{f}}} + \frac{\alpha\left(1+\kappa\right)^2\rho \sigma_g^2}{2\gamma_{\bar{f}}} \right) \\ &\quad + S_2\left(\frac{\alpha^2\sigma_g^2}{n}\right) + S_3\left( \frac{\sigma_c^2}{n} + \frac{\alpha\rho L^2\sigma_c^2}{\gamma_{\bar{f}}} \right) + S_4\left(\frac{\left(1+\kappa\right)^2\rho \sigma_c^2}{2\alpha\gamma_{\bar{f}}}\right), \end{split} \end{equation} where \begin{gather*} S_1=\sum_{j=0}^{k-1} \rho^j \beta^{2(k-1-j)}, \quad S_2 = \sum_{j=0}^{k-1}\rho^j\\ S_3 = \sum_{j=0}^{k-1}(k-1-j) \rho^{j}, \quad S_4 = \sum_{j=0}^{k-1} (k-1-j)\rho^j\beta^{2(k-1-j)}. \end{gather*} Let $\psi = \frac{\rho}{\beta^2}$. We can bound the first two sums with $ S_1 = \beta^{2(k-1)} \sum_{j=0}^{k-1} \psi^j = \beta^{2(k-1)}\frac{1-\psi^k}{1-\psi} = \frac{\beta^{2k}-\rho^k}{\beta^2 - \rho} \leq \eta \theta^k$ and $S_2 = \frac{1-\rho^k}{1-\rho} \leq \frac{1}{1-\rho}=\frac{1}{\alpha \gamma_{\bar{f}}}$. For the third sum we obtain $S_3=(k-1)S_2 - \sum_{j=0}^{k-1}j \rho^{j} \leq (k-1)S_2 \leq \frac{k-1}{\alpha \gamma_{\bar{f}}}$.
Finally, we can derive an upper bound for the final sum using $S_4= (k-1)S_1 - \sum_{j=0}^{k-1} j\rho^j\beta^{2(k-1-j)}\leq (k-1)S_1\leq (k-1) \eta \theta^k$. Substituting the sum bounds in (\ref{eq:thm_near_dgd_p_1_q2}) yields the final result of this theorem. \end{proof} \end{document}
\begin{document} \title{On quantification of systemic redundancy in reliable systems} \titlerunning{On quantification of systemic redundancy in reliable systems} \author{Getachew K. Befekadu \and Panos~J.~Antsaklis } \institute{G. K. Befekadu~ ({\large\Letter}\negthinspace) \at Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN 46556, USA. \\ \email{gbefekadu1@nd.edu} \and P. J. Antsaklis \at Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN 46556, USA. \\ \email{antsaklis.1@nd.edu} } \date{February 15, 2015} \maketitle \begin{abstract} In this paper, we consider the problem of quantifying systemic redundancy in reliable systems having multiple controllers with overlapping functionality. In particular, we consider a multi-channel system with multi-controller configurations -- where controllers are required to respond optimally to non-faulty controllers, in the sense of best-response correspondence (a {\it reliable-by-design} requirement), so as to ensure or maintain certain system properties. Here we introduce a mathematical framework, based on the notion of relative entropy of probability measures associated with steady-state solutions of Fokker-Planck equations for a family of stochastically perturbed multi-channel systems, that provides useful information towards a systemic assessment of redundancy in the system.
\keywords{Entropy \and Fokker-Planck equation \and Liouville equation \and quantification of systemic redundancy \and random perturbation \and relative entropy \and reliable systems} \end{abstract} \section{Introduction} \label{S1} The notion of redundancy, which promotes robustness by ``backing up'' important functions of systems, together with its systemic quantification, has long been recognized as an essential design philosophy by researchers in different fields of study (to mention a few, e.g., see \cite{KraP02}, \cite{Tau92} and \cite{Wag05} for related discussions in biological systems; see also \cite{CarD99}, \cite{KanB11}, \cite{SieS98} and \cite{RanLOT11} for related discussions in engineering systems). In this paper, we consider the problem of quantifying systemic redundancy in reliable systems having multiple controllers with overlapping functionality. To be more specific, we consider a multi-channel system with multi-controller configurations -- where controllers are required to respond optimally (a {\it reliable-by-design} requirement) to non-faulty controllers so as to maintain the stability of the system when there is a single failure in any of the control channels. Here we introduce a mathematical framework, based on the notion of relative entropy of probability measures associated with steady-state solutions of the Fokker-Planck equations for a family of stochastically perturbed multi-channel systems, that provides useful information towards a systemic assessment of redundancy in the system. For such steady-state solutions of the Fokker-Planck equations, we establish a quantifiable redundancy measure by using the difference between the average relative entropy of the steady-state probability measures, with respect to any single controller failure in the system, and the entropy of the steady-state probability measure under a nominal operating condition, i.e., without any single controller failure in the system.
Moreover, we determine the asymptotic behavior of the systemic redundancy measure in the multi-channel system as the random perturbation decreases to zero, where such an asymptotic result can be related to the solutions of controlled Liouville equations for the underlying original unperturbed multi-channel system. It is worth mentioning that some interesting studies on systemic measures, based on information theory, have been reported in the literature (e.g., see \cite{ToSE99}, \cite{LiDHKY12} or \cite{EdekG01} in the context of complexity, degeneracy and redundancy measures in biological systems; see \cite{Kit04} or \cite{Kit07} in the context of robustness in biological systems; and see also \cite{Bru83} or \cite{Cru12} for a related discussion on the complexity measure for trajectories in dynamical systems). Moreover, such studies have also provided some useful information in characterizing or understanding the systemic measures (based on mutual information between appropriately partitioned input and output spaces) in systems with multiple subsystems/modules having overlapping functionality. Note that the rationale behind our framework follows in some sense the settings of these papers. However, to our knowledge, this problem has not been addressed in the context of reliable systems with multi-controller configurations having ``overlapping or backing-up'' functionality, and it is important because it provides a framework for quantifying or gauging the systemic redundancy measure in multi-channel systems, for example, when there is a single channel failure in the system. This paper is organized as follows. In Section~\ref{S2}, we present some preliminary results that are useful for our main results. Section~\ref{S3} presents our main results, where we introduce a mathematical framework that provides useful information towards a systemic assessment of redundancy in reliable systems.
This section also contains a result on the asymptotic behavior of the systemic redundancy measure in the multi-channel system as the random perturbation decreases to zero. \section{Preliminary results} \label{S2} Consider the following continuous-time multi-channel system \begin{align} \dot{x}(t) &= A x(t) + \sum\nolimits_{i=1}^N B_i u_i(t), \quad x(0)=x_0, \label{Eq1} \end{align} where $x \in \mathbb{R}^{d}$ is the state and $u_i \in \mathbb{R}^{r_i}$ is the control input to the $i$th channel. Let $S$ be a compact manifold in $\mathbb{R}^{d}$ and let $\rho_0(x) \triangleq \rho(0, x) > 0$, with $\int_{S} \rho_0(x)dx=1$, be an initial density function. Further, let $\psi$ be a smooth function $\psi \colon S \rightarrow \mathbb{R}^{+}$ having compact support; then the expected value of $\psi$ at some future time $t > 0$ is given by \begin{align} E \bigl\{ \psi(x) \bigr\} = \int_{S} \psi(x) \rho(t, x) dx. \label{Eq2} \end{align} Moreover, if we take the time derivative of the above equation and make use of integration by parts, then we have \begin{align} \int_{S} \psi(x) \frac{\partial \rho(t, x)}{\partial t} dx = - \int_{S} \psi(x) \Bigl \langle \frac{\partial }{\partial x}, \Bigl (A x + \sum\nolimits_{i=1}^N B_i u_i\Bigr)\rho(t, x) \Bigr \rangle dx. \label{Eq3} \end{align} Since $\psi(x)$ is an arbitrary function, we can rewrite the above equation as a first-order partial differential equation (which is also known as the Liouville equation) \begin{align} \frac{\partial \rho(t, x)}{\partial t} = - \Bigl \langle \frac{\partial }{\partial x}, \Bigl (A x + \sum\nolimits_{i=1}^N B_i u_i\Bigr)\rho(t, x) \Bigr \rangle. \label{Eq4} \end{align} \begin{remark} \label{R1} Here we remark that the above first-order partial differential equation describes how the density function $\rho(t, x)$ evolves in time (i.e., Equation~\eqref{Eq4} describes an evolution equation on $L_1\bigl(\mathbb{R}^{d}\bigr)$ under a flow defined by the deterministic system of \eqref{Eq1}).
Furthermore, it is easy to verify that \begin{align*} \rho(t, x) > 0, \quad t \ge 0, \end{align*} and further it satisfies \begin{align*} \frac{d}{d t} \int_{S} \rho(t, x) dx = 0. \end{align*} \end{remark} Notice that the partial derivative with respect to $x$ in \eqref{Eq4} depends on whether the input controls $u_i$ are expressed as open-loop functions (i.e., $u_i=u_i(t)$ for $i=1,2, \ldots, N$) or as closed-loop functions (i.e., $u_i=u_i(t,x)$ for $i=1,2, \ldots, N$); and, as a result of this, the solution for $\rho(t, x)$ depends on the type of input controls used in the system. \begin{remark} \label{R2} Recently, using a class of variational problems, the author in \cite{Broc07} has considered the Liouville equations that involve control terms. Moreover, such a formulation is useful for relating the behavior of the solutions of the Liouville equation to that of the behavior of the underlying differential equation of the system. \end{remark} In what follows, we recall some known results that will be used for our main results (e.g., see \cite{Csi67} or \cite{Les14}). \begin{definition}\label{D1} Let $\mu$ be a probability measure on $\mathbb{R}^d$ with respect to the density function $\rho(t, x)$ which satisfies the Liouville equation in \eqref{Eq4} starting from an initial density function $\rho_0(x_0)$. Then, the entropy of $\mu$ (with respect to Lebesgue measure) is defined by \begin{align} H\bigl(\mu\bigr) = - \int_{S} \rho(t, x) \log_2 \rho(t, x) dx, \quad t \ge 0. \label{Eq5} \end{align} \end{definition} \begin{remark} \label{R3} In general, we have the following inequality \begin{align*} - \int_{S} \rho_1(t, x) \log_2 \rho_1(t, x) dx \le - \int_{S} \rho_1(t, x) \log_2 \rho_2(t, x) dx, \end{align*} for any two density functions $\rho_1(t, x)$ and $\rho_2(t, x)$ that satisfy \eqref{Eq4} starting from $\rho_{1,0}(x_0)$ and $\rho_{2,0}(x_0)$, respectively. 
\end{remark} \begin{definition}\label{D2} Let $\mu_1$ and $\mu_2$ be two probability measures on $\mathbb{R}^d$ with respect to the density functions $\rho_1(t, x)$ and $\rho_2(t, x)$ that satisfy the Liouville equation in \eqref{Eq4} starting from initial density functions $\rho_{1,0}(x_0)$ and $\rho_{2,0}(x_0)$, respectively. Then, the relative entropy of $\mu_2$ with respect to $\mu_1$ is defined by \begin{align} D\bigl(\mu_2 \,\Vert\, \mu_1\bigr) & = \int_{S} \rho_2(t, x) \log_2 \left (\frac{\rho_2(t, x)}{\rho_1(t, x)}\right) dx\notag \\ &= \int_{S} \Bigl(\rho_2(t, x) \log_2 \rho_2(t, x) - \rho_2(t, x) \log_2 \rho_1(t, x) \Bigr) dx, \quad t \ge 0. \label{Eq6} \end{align} \end{definition} \begin{remark} \label{R4} Note that the relative entropy (also called the Kullback-Leibler distance) $D\bigl(\mu_2 \,\Vert\, \mu_1\bigr)$, which measures the deviation of $\mu_2$ with respect to the probability measure $\mu_1$, is nonnegative, i.e., $D \bigl(\mu_2 \,\Vert\, \mu_1\bigr) \ge 0$, and $D\bigl(\mu_2 \,\Vert\, \mu_1\bigr) = 0$ if and only if $\mu_2 = \mu_1$. 
\end{remark} Next, consider the following stochastically perturbed multi-channel system \begin{align} dx^{\epsilon}(t) = A x^{\epsilon}(t) dt + \sum\nolimits_{i=1}^N B_i u_i(t) dt + \epsilon \sigma(x^{\epsilon}(t)) dW(t), \quad x^{\epsilon}(0)=x_0, \label{Eq7} \end{align} where \begin{itemize} \item[-] $x^{\epsilon}(\cdot)$ is an $\mathbb{R}^{d}$-valued diffusion process, $\epsilon > 0$ is a small parameter that represents the level of random perturbation in the system, \item[-] $\sigma \colon \mathbb{R}^{d} \rightarrow \mathbb{R}^{d \times m}$ is Lipschitz continuous with the least eigenvalue of $\sigma(\cdot)\sigma^T(\cdot)$ uniformly bounded away from zero, i.e., \begin{align*} \sigma(x)\sigma^T(x) \ge \kappa I_{d \times d} , \quad \forall x \in \mathbb{R}^{d}, \end{align*} for some $\kappa > 0$, \item[-] $W(\cdot)$ (with $W(0)=0$) is an $m$-dimensional standard Wiener process, and \item[-] $u_i(\cdot)$ is a $U_i$-valued measurable control process to the $i$th channel (i.e., an admissible control from the set $U_i \subset \mathbb{R}^{r_i}$) such that, for all $t > s$, $W(t) - W(s)$ is independent of $u_i(\nu)$ for $\nu \le s$. \end{itemize} Note that the time evolution of the density function $\rho^{\epsilon}(t, x)$ associated with \eqref{Eq7} satisfies the following second-order partial differential equation (i.e., the Fokker-Planck equation) \begin{align} \frac{\partial \rho^{\epsilon}(t, x)}{\partial t} = - \Bigl \langle \frac{\partial} {\partial x},\, \Bigl( A x + \sum\nolimits_{i=1}^N B_i u_i\Bigr) \rho^{\epsilon}(t, x) \Bigr \rangle + & \frac{\epsilon^2}{2} \Bigl \langle \frac{\partial^2} {\partial x^2},\, \sigma(x)\sigma^T(x) \rho^{\epsilon}(t, x) \Bigr \rangle, \notag \\ & \quad\quad \rho^{\epsilon}(0, x) \,\, \text{is given}.
\label{Eq8} \end{align} In this paper, among all solutions of the above Fokker-Planck equation, we will only consider the steady-state solution that satisfies the following stationary Fokker-Planck equation\footnote{For example, see \cite{Kif74} or \cite{VenFre70} for additional discussion on the limiting behavior of invariant measures for systems with small random perturbation.} \begin{align} 0 = - \Bigl \langle \frac{\partial} {\partial x},\, \Bigl( A x + \sum\nolimits_{i=1}^N B_i u_i\Bigr) \rho_{\ast}^{\epsilon}(x) \Bigr \rangle + & \frac{\epsilon^2}{2} \Bigl \langle \frac{\partial^2} {\partial x^2},\, \sigma(x)\sigma^T(x) \rho_{\ast}^{\epsilon}(x) \Bigr \rangle, \notag \\ \rho_{\ast}^{\epsilon}(x) > 0, & \quad \int_{\mathbb{R}^d} \rho_{\ast}^{\epsilon}(x) dx = 1. \label{Eq9} \end{align} In the following section, i.e., Section~\ref{S3}, such a steady-state solution, together with the solution of the Liouville equation, will allow us to provide useful information towards a systemic assessment of redundancy in the reliable multi-channel systems with small random perturbation. \section{Main results} \label{S3} In what follows, we consider a particular class of stabilizing state-feedbacks that satisfies\footnote{$\operatorname{Sp}(A)$ denotes the spectrum of a matrix $A \in \mathbb{R}^{d \times d}$, i.e., $\operatorname{Sp}(A) = \bigl\{s \in \mathbb{C}\,\bigl\lvert \,\rank(A - sI) < d \bigr\}$.} \begin{align} \mathcal{K} \subseteq \Biggl\{\bigl(K_1, K_2, \ldots, K_N\bigr) & \in \prod\nolimits_{i=1}^N \mathbb{R}^{r_i \times d} \biggm \lvert \operatorname{Sp}\Bigl(A + \sum\nolimits_{i=1}^N B_{i} K_i\Bigr) \subset \mathbb{C}^{-} \,\, \& \notag \\ & \operatorname{Sp}\Bigl(A + \sum\nolimits_{i \neq j}^N B_{i} K_i\Bigr) \subset \mathbb{C}^{-},\,\, j = 1,2, \ldots, N \Biggr\}. 
\label{Eq10} \end{align} \begin{remark} \label{R5} We remark that the above class of state-feedbacks is useful for maintaining the stability of the closed-loop system both when all of the controllers work together, i.e., $\bigl(A + \sum\nolimits_{i=1}^N B_{i} K_i\bigr)$, as well as when there is a single-channel controller failure in the system, i.e., $\bigl(A + \sum\nolimits_{i \neq j}^N B_{i} K_i\bigr)$ for $j=1,2, \ldots, N$. Moreover, such a class of state-feedbacks falls within the redundant/passive fault tolerant controller configurations with overlapping functionality, i.e., a reliable-by-design requirement in the system (e.g., see \cite{BefGA14} or \cite{FujBe09}). \end{remark} Note that, for such a class of state-feedbacks, the density functions $\rho^{(j)}(t, x)$ for $j = 0,1, \ldots, N$, satisfy the following family of Liouville equations \begin{align} \frac{\partial \rho^{(0)}(t, x)}{\partial t} = - \Bigl \langle \frac{\partial }{\partial x}, \Bigl (Ax + \sum\nolimits_{i=1}^N B_i K_i x \Bigr) \rho^{(0)}(t, x) \Bigr \rangle \label{Eq11} \end{align} and \begin{align} \frac{\partial \rho^{(j)}(t, x)}{\partial t} = - \Bigl \langle \frac{\partial }{\partial x}, \Bigl (Ax + \sum\nolimits_{i \neq j}^N B_i K_i x \Bigr) \rho^{(j)}(t, x) \Bigr \rangle, \,\, j=1,2, \ldots, N, \label{Eq12} \end{align} starting from an initial density function $\rho_0(x_0)$. Moreover, observe that the solution of \begin{align} \dot{x}(t) = \Bigl (A + \sum\nolimits_{i=1}^N B_i K_i \Bigr)x(t), \quad x(0)=x_0, \label{Eq13} \end{align} can be written as \begin{align} x(t) = \exp(A t) x(0)+ \int_{0}^t \exp(A (t - \lambda)) \sum\nolimits_{i=1}^N B_i K_i x(\lambda) d\lambda.
\label{Eq14} \end{align} Then, the density function $\rho^{(0)}(t, x)$ (when $j=0$) corresponding to the Liouville equation in \eqref{Eq11}, with an initial density $\rho_0(x_0)$, is given by \begin{align} \rho^{(0)}(t, x) = \frac{1}{\exp(\operatorname{tr}A t)} \rho_0 \biggl(\exp(- A t)\biggl(x(t) &- \int_{0}^t \exp(A (t - \lambda)) \sum\nolimits_{i=1}^N B_i K_i x(\lambda) d\lambda \biggr ) \biggr). \label{Eq15} \end{align} Similarly, the density functions $\rho^{(j)}(t, x)$ (when $j=1,2, \ldots, N$) corresponding to the Liouville equations in \eqref{Eq12} are given by \begin{align} \rho^{(j)}(t, x) = \frac{1}{\exp(\operatorname{tr}A t)} \rho_0 \biggl(\exp(- A t)\biggl(x(t) &- \int_{0}^t \exp(A (t - \lambda))\sum\nolimits_{i \neq j}^N B_i K_i x(\lambda) d\lambda \biggr ) \biggr). \label{Eq16} \end{align} On the other hand, for a given random perturbation $(\sigma, \epsilon)$, the steady-state density functions $\rho^{(\epsilon, j)}(x) $ for $j = 0,1, \ldots, N$, satisfy the following family of stationary Fokker-Planck equations \begin{align} 0 = - \Bigl \langle \frac{\partial} {\partial x},\, \Bigl ( A x + \sum\nolimits_{i=1}^N B_i K_i x \Bigr) \rho^{(\epsilon, 0)}(x) \Bigr \rangle + & \frac{\epsilon^2}{2} \Bigl \langle \frac{\partial^2} {\partial x^2},\, \sigma(x)\sigma^T(x) \rho^{(\epsilon, 0)}(x) \Bigr \rangle, \notag \\ \rho^{(\epsilon, 0)}(x) > 0, \quad & \int_{\mathbb{R}^d} \rho^{(\epsilon, 0)}(x) dx = 1, \label{Eq17} \end{align} and \begin{align} 0 = - \Bigl \langle \frac{\partial} {\partial x},\, \Bigl ( A x + \sum\nolimits_{i \neq j}^N B_i K_i x \Bigr ) \rho^{(\epsilon, j)}(x) \Bigr \rangle + & \frac{\epsilon^2}{2} \Bigl \langle \frac{\partial^2} {\partial x^2},\, \sigma(x)\sigma^T(x) \rho^{(\epsilon, j)}(x) \Bigr \rangle, \notag \\ \rho^{(\epsilon, j)}(x) > 0, \quad \int_{\mathbb{R}^d} \rho^{(\epsilon, j)}(x) dx = 1,& \quad j = 1, 2, \ldots, N. 
\label{Eq18} \end{align} \begin{remark} \label{R6} Note that $\sigma(x)$ is Lipschitz continuous, with the least eigenvalue of $\sigma(\cdot)\sigma^T(\cdot)$ uniformly bounded away from zero. Further, if $\sigma(x)$ is twice differentiable on $\mathbb{R}^d$, then the uniqueness of smooth steady-state solutions of \eqref{Eq17} and \eqref{Eq18} depends on the behavior of the original unperturbed multi-channel system as well as on the type of input controls used in the system. \end{remark} The following proposition provides a condition on the uniqueness of the solutions for the stationary Fokker-Planck equations. \begin{proposition} \label{P1} Suppose that there exists at least one $N$-tuple of stabilizing state-feedbacks that satisfies the conditions in \eqref{Eq10}. Then, there exist unique smooth density functions $\rho^{(\epsilon, j)}(x)$ for $j = 0,1, \ldots, N$, with respect to $(\sigma, \epsilon)$, corresponding to the stationary Fokker-Planck equations in \eqref{Eq17} and \eqref{Eq18}. Furthermore, the relative entropy of $\mu^{(\epsilon, i)}$ for $i = 1, 2, \ldots, N$, with respect to $\mu^{(\epsilon, 0)}$ (i.e., probability measures associated with the density functions $\rho^{(\epsilon, i)}(x)$ and $\rho^{(\epsilon, 0)}(x)$, respectively) is finite, i.e., \begin{align} D\bigl(\mu^{(\epsilon, i)} \,\Vert\, \mu^{(\epsilon, 0)}\bigr) < +\infty. \label{Eq19} \end{align} \end{proposition} \begin{proof} Suppose that $\sigma(x)$ is twice differentiable on $\mathbb{R}^d$. Let the $N$-tuple of state-feedbacks $\bigl(K_1, K_2, \ldots, K_N \bigr)$ satisfy the conditions in \eqref{Eq10}. Then, the origin is a stable equilibrium point for the original unperturbed multi-channel system in \eqref{Eq1} (with respect to this particular set of state-feedbacks).
Moreover, for sufficiently small $\epsilon > 0$, there exists a Lyapunov function $V(x) > 0$, with $\lim_{\vert x \vert \rightarrow \infty} V(x) = +\infty$, such that\footnote{Note that the assumption of a common Lyapunov function $V(x)$ is not necessary in \eqref{Eq20} and \eqref{Eq21}.} \begin{align} \Bigl \langle \Bigl ( A x + \sum\nolimits_{i=1}^N B_i K_i x\Bigl) \frac{\partial} {\partial x},\, V(x)\Bigr \rangle + \frac{\epsilon^2}{2} \Bigl \langle \sigma(x)\sigma^T(x) \frac{\partial^2} {\partial x^2},\, V(x)\Bigr \rangle < - \eta, \label{Eq20} \end{align} and \begin{align} \Bigl \langle \Bigl ( A x + \sum\nolimits_{i \neq j}^N B_i K_i x \Bigr ) \frac{\partial} {\partial x},\, V(x)\Bigr \rangle + \frac{\epsilon^2}{2} \Bigl \langle \sigma(x)\sigma^T(x) \frac{\partial^2} {\partial x^2},\, V(x)\Bigr \rangle < - \eta, \notag \\ j =1,2, \dots, N, \label{Eq21} \end{align} for all $x \in \mathbb{R}^d \backslash \{0\}$ and for some constant $\eta > 0$. Then, from Theorem~2.1 and Theorem~5.7 in \cite{BogKrB09}, the stationary measures $\mu^{(\epsilon, j)}$ for $j =0, 1, \dots, N$, of the Fokker-Planck equations in \eqref{Eq17} and \eqref{Eq18} uniquely admit smooth density functions $\rho^{(\epsilon, j)} \in C^{\infty}(\mathbb{R}^d)$ for $j =0, 1, \dots, N$, with $\rho^{(\epsilon, j)} > 0$ on $\mathbb{R}^d$, i.e., \begin{align*} \mu^{(\epsilon, j)}(dx) = \rho^{(\epsilon, j)}(x) dx, \quad j =0, 1, \dots, N. \end{align*} Furthermore, the measure $\mu^{(\epsilon, i)}$ for $i \in \{1, 2, \ldots, N\}$, is absolutely continuous with respect to $\mu^{(\epsilon, 0)}$ (i.e., $\mu^{(\epsilon, i)} \ll \mu^{(\epsilon, 0)}$, $i = 1, 2, \ldots, N$) and, as a result, the relative entropy of $\mu^{(\epsilon, i)}(x)$ with respect to $\mu^{(\epsilon, 0)}(x)$ satisfies the following \begin{align*} D\bigl(\mu^{(\epsilon, i)} \,\Vert\, \mu^{(\epsilon, 0)}\bigr) < +\infty, \quad i = 1, 2, \ldots, N. \end{align*} This completes the proof. 
\end{proof} Next, let us define the systemic redundancy $r_{(\sigma, \epsilon)} \in \mathbb{R}$ (with respect to the random perturbation $(\sigma, \epsilon)$) as follows \begin{align} r_{(\sigma, \epsilon)} = \frac{1}{2N} \sum\nolimits_{i=1}^N D\bigl(\mu^{(\epsilon, i)} \,\Vert\, \mu^{(\epsilon, 0)}\bigr) - H\bigl(\mu^{(\epsilon, 0)}\bigr), \label{Eq22} \end{align} where $\bigl(1/(2N)\bigr) \sum\nolimits_{i=1}^N D\bigl(\mu^{(\epsilon, i)} \,\Vert\, \mu^{(\epsilon, 0)}\bigr)$ represents, up to a factor of $1/2$, the average relative entropy of the probability measures with respect to any single failure in the control channels. \begin{remark} \label{R7} Here we remark that $r_{(\sigma, \epsilon)}$ provides useful information in characterizing the systemic redundancy (i.e., with respect to the state-space and partitioned input spaces) in the system with multi-controller configurations having overlapping functionality, i.e., a requirement to maintain the stability of the system when there is a single failure in any of the control channels. \end{remark} Then, we have the following result on the asymptotic property of the systemic redundancy measure $r_{(\sigma, \epsilon)}$ as the random perturbation decreases to zero (i.e., as $\epsilon \rightarrow 0$).
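Before turning to that result, the quantities entering \eqref{Eq22} can be made concrete with a scalar toy example of our own (illustrative only, not from the paper): for $dx = (a + \sum_i b_i k_i)x\,dt + \epsilon\sigma\,dW$ with constant $\sigma$, each stationary Fokker-Planck solution is a zero-mean Gaussian with variance $\epsilon^2\sigma^2/(2\gamma)$, where $-\gamma < 0$ is the stable closed-loop gain, so the entropy and relative entropies in \eqref{Eq22} admit closed forms in bits:

```python
import math

# Toy scalar two-channel system (illustrative parameters, not from the paper):
# dx = (a + sum_i b_i k_i) x dt + eps*sigma dW, with N = 2 control channels.
a, b, k = 1.0, (1.0, 1.0), (-2.0, -2.0)
eps, sigma, N = 0.1, 1.0, 2

def stationary_var(gain):
    # Stationary variance of dx = gain*x dt + eps*sigma dW (requires gain < 0).
    assert gain < 0, "closed loop must be stable"
    return eps**2 * sigma**2 / (-2.0 * gain)

def entropy_bits(v):
    # Differential entropy of N(0, v) in bits, cf. Eq. (5).
    return 0.5 * math.log2(2 * math.pi * math.e * v)

def kl_bits(v_num, v_den):
    # Relative entropy D(N(0, v_num) || N(0, v_den)) in bits, cf. Eq. (6).
    return 0.5 * (v_num / v_den - 1.0 + math.log(v_den / v_num)) / math.log(2)

# Nominal steady-state variance (all controllers active) ...
v0 = stationary_var(a + sum(bi * ki for bi, ki in zip(b, k)))
# ... and steady-state variances under a single failure in channel j.
v_fail = [
    stationary_var(a + sum(bi * ki
                           for m, (bi, ki) in enumerate(zip(b, k))
                           if m != j))
    for j in range(N)
]

# Systemic redundancy measure, cf. Eq. (22).
r = sum(kl_bits(vj, v0) for vj in v_fail) / (2 * N) - entropy_bits(v0)
print(r)
```

In this toy setting the nominal closed loop is faster than either degraded loop, so each failure inflates the stationary variance and contributes a strictly positive relative-entropy term to $r_{(\sigma,\epsilon)}$.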
\begin{corollary}\label{C1} Suppose that Proposition~\ref{P1} holds. Then the redundancy $r_{(\sigma, \epsilon)}$ satisfies the following asymptotic property \begin{align} r_{(\sigma, \epsilon)} \rightarrow r_{\infty} \quad \text{as} \quad \epsilon \rightarrow 0, \label{Eq23} \end{align} where \begin{align} r_{\infty} &= \lim_{ t \rightarrow \infty} \Biggl[\underbrace{\frac{1}{2N} \sum\nolimits_{i=1}^N D\bigl(\mu^{(i)} \,\Vert\, \mu^{(0)}\bigr) - H\bigl(\mu^{(0)}\bigr)}_{\substack{\triangleq \, r_t, ~ t \ge 0}} \Biggr], \label{Eq24} \end{align} and $\mu^{(j)}$, $j=0, 1, \ldots, N$, are probability measures with respect to the density functions $\rho^{(j)}(t, x)$, $j=0, 1, \ldots, N$, respectively, that satisfy the Liouville equations in \eqref{Eq11} and \eqref{Eq12} starting from an initial density function $\rho_0(x_0)$. \end{corollary} \begin{remark} \label{R8} The proof is based on the idea of comparing the solutions of the Liouville equations to that of the steady-state solutions of the Fokker-Planck equations as $\epsilon \rightarrow 0$, and thus it is omitted. \end{remark} \begin{remark} \label{R9} Note that a closer look at Equation~\eqref{Eq22} (see also Equations~\eqref{Eq23} and \eqref{Eq24} above) shows that $r_{(\sigma, \epsilon)} > r_t$, $\forall t \ge 0$, with respect to some perturbation $(\sigma, \epsilon)$; and, further, if $\epsilon_1 \le \epsilon_2$, then $r_{(\sigma, \epsilon_1)} \le r_{(\sigma, \epsilon_2)}$. \end{remark} \end{document}
\begin{document} \title{Foiling covert channels and malicious classical post-processing units in quantum key distribution} \author{Marcos Curty$^1$} \email{mcurty@com.uvigo.es} \author{Hoi-Kwong Lo$^{2}$} \affiliation{$^1$Escuela de Ingenier\'ia de Telecomunicaci\'on, Department of Signal Theory and Communications, University of Vigo, Vigo E-36310, Spain\\ $^2$Center for Quantum Information and Quantum Control, Department of Physics and Department of Electrical \& Computer Engineering, University of Toronto, M5S 3G4 Toronto, Canada} \date{\today} \begin{abstract} Existing security proofs of quantum key distribution (QKD) suffer from two fundamental weaknesses. First, memory attacks have emerged as an important threat to the security of even device-independent quantum key distribution (DI-QKD), whenever QKD devices are re-used. This type of attacks constitutes an example of covert channels, which have attracted a lot of attention in security research in conventional cryptographic and communication systems. Second, it is often implicitly assumed that the classical post-processing units of a QKD system are trusted. This is a rather strong assumption and is very hard to justify in practice. Here, we propose a simple solution to these two fundamental problems. Specifically, we show that by using verifiable secret sharing and multiple optical devices and classical post-processing units, one could re-establish the security of QKD. Our techniques are rather general and they apply to both DI-QKD and non-DI-QKD. \end{abstract} \maketitle \section{Introduction} There has been much interest in the subject of quantum key distribution (QKD) in recent years because it holds the promise of providing information-theoretically secure communications based on the laws of quantum physics~\cite{qkd1,qkd2,qkd3}. 
There is, however, a big gap between the theory~\cite{qkd_t1,qkd_t2} and the practice~\cite{exp1,exp2,exp3,exp4,exp5} of QKD, and the security of QKD implementations is seriously threatened by quantum hacking~\cite{hack4,hack5,hack6,hack7,hack9}. To solve this problem, the ultimate solution is device-independent (DI)-QKD~\cite{diQKD1,diQKD2,diQKD5,diQKD6}, whose security is essentially based on a loophole-free Bell test~\cite{bell1,bell2}. Although no experimental implementation of DI-QKD has been realised yet, the recent demonstrations of loophole-free Bell tests~\cite{belltest1,belltest2,belltest3,belltest4,belltest5} might bring DI-QKD closer to experimental realisation. \begin{figure}\caption{Schematic illustration of a memory attack, in which a hidden memory device planted in Alice's setup stores the key material generated in each QKD session and leaks it to Eve in subsequent QKD runs.}\label{mem_att} \end{figure} Despite its conceptual beauty, DI-QKD is however not foolproof. Indeed, one cannot expect that all QKD users will have expertise in experimental quantum optics and electronics. So, unless Alice and Bob manufacture their own QKD devices themselves, it could be very hard for them to guarantee that the devices are indeed honest. For instance, it was shown in~\cite{mem_attack} that DI-QKD is highly vulnerable to so-called memory attacks. In this type of attacks, a hidden memory device (planted by the eavesdropper, Eve, in say Alice's setup during the manufacturing or initial installation of the QKD system) stores up the key material generated in each QKD session and then leaks this information to Eve in subsequent QKD runs. This situation is illustrated in Fig.~\ref{mem_att}. Importantly, such leakage of key information could be done very slowly over many subsequent QKD runs, and thus it could be very difficult to detect~\cite{mem_attack}. Obviously, this is a fatal loss of security for DI-QKD.
Whenever a QKD system is reused for subsequent QKD sessions, the security of the keys generated in previous QKD runs might be compromised. This is particularly problematic in a network setting with multiple users (who may not all be trustworthy) due to the impostor attack~\cite{mem_attack,counter_mem_attack}. Moreover, note that in principle memory attacks could also work for non-DI-QKD. This is so because, in practice, it could be quite challenging to check whether or not a purchased QKD setup contains such a memory. Therefore, in the following, whenever we refer to a QKD system, it will be implicitly understood that it could be either a DI-QKD scheme or a non-DI-QKD scheme, as our results apply to both frameworks. Our view is that memory attacks constitute an example of covert channels~\cite{covert}, which have attracted considerable attention in conventional cryptography. With covert channels, seemingly innocent communications in a protocol could leak crucial information that is fatal to its security. One main motivation of our work is indeed to counter covert channels, such as memory attacks, in QKD. Another key weakness in standard QKD security proofs is that they all implicitly assume that the classical post-processing units are trusted. These units are supposed to distill a secure secret key from the raw data generated by the QKD modules by applying techniques such as post-selection of data (or so-called sifting), parameter estimation, error correction, error verification and privacy amplification. However, in view of the many hardware~\cite{covert2,covert2b,covert2c} and software~\cite{covert3} Trojan Horse attacks that have been performed recently in conventional cryptographic systems, such trust is a very strong and unjustified assumption. This scenario is illustrated in Fig.~\ref{fig0}.
Indeed, hardware and software Trojans constitute today a key threat to the security of conventional cryptographic devices, and this threat is expected to only rise with time, so it cannot be neglected when analysing the security of a QKD implementation. \begin{figure}\caption{Schematic illustration of hardware and software Trojan-horse attacks against the classical post-processing units of a QKD system.}\label{fig0} \end{figure} And so the key question is: how do we address covert channels and prove security in QKD with untrusted classical post-processing units? The existence of memory attacks in DI-QKD shows that quantum mechanics alone is not enough. Clearly, we need to include some additional assumptions. To solve this problem, we draw inspiration from the idea of verifiable secret sharing (VSS)~\cite{verifiable1,verifiable2} and the existence of secure multiparty computations~\cite{book_smc} in conventional cryptography, where it is known that one can achieve information-theoretic security in an $n$-party cryptographic setup if the number of cheaters is less than $n/3$~\cite{smc1,smc2,maurer_smpc}. Standard VSS schemes, however, assume that all channels are classical, so by using error correction and authentication techniques one can basically make these channels perfect. In contrast, in QKD, owing to the noisy and lossy quantum channels controlled by Eve and the quantum no-cloning theorem, to distill a final key Alice and Bob need to apply several classical post-processing steps to the raw data produced by the QKD modules in a setting where {\it both} the QKD modules and the classical post-processing units might be corrupted. \begin{figure} \caption{Schematic representation of a MDI-QKD network~\cite{mdiQKD} in which each user has multiple QKD transmitter modules and classical post-processing units. Note that the measurement devices are often the most expensive components of an entire QKD system because single photon detection is highly non-trivial.
MDI-QKD allows measurement modules to be totally untrusted, which means that there is no need for redundant measurement modules if our proposal is employed with MDI-QKD. The users just need to have multiple transmitters and classical post-processing units, which, thanks to the development of cheap chip-based QKD systems~\cite{chip1,chip2,chip3}, could, we believe, render our proposal cost-effective in the future. We remark that our approach is also fully compatible with quantum relays and quantum repeaters.} \label{fig_mdi} \end{figure} A key contribution of this paper is thus to show how, despite these obstacles, such a VSS approach could be adapted to QKD to re-establish its security. The price that we pay is that now Alice and Bob have to use a redundant number of QKD modules and classical post-processing units. Fortunately, however, with the recent development of measurement-device-independent QKD (MDI-QKD)~\cite{mdiQKD,mdiQKDe1,mdiQKDe2,mdiQKDe3,mdiQKDe4} and chip-based QKD~\cite{chip1,chip2,chip3}, the cost of QKD modules might decrease dramatically over time; see Fig.~\ref{fig_mdi}. So, it is not unrealistic to consider that each of Alice and Bob could possess a few QKD modules and classical post-processing units, each of them purchased from a different vendor. Now, provided that the majority of the vendors are honest and careful in the manufacturing of their devices, it might not be entirely unreasonable to assume that at least one pair of QKD modules is honest and the number of malicious/flawed classical post-processing units is strictly less than one third of the total number of them. With these assumptions in place, we can then apply techniques in conventional multiparty secure computation to prove security in different QKD scenarios with malicious devices. Importantly, if we disregard the cost of authenticating the classical communications, our protocols are optimal with respect to the resulting secret key rate.
Moreover, the operations involved are based on simple functions in linear algebra such as bit-wise XOR and multiplication of matrices.~So, they are conceptually simple and easy to implement. \section{QKD with malicious devices}\label{mainresults} Let us start by describing the general scenario that we consider in more detail. It is illustrated in Fig.~\ref{fig_gen}(a). Alice and Bob have $n$ pairs of QKD modules, and, in addition, say Alice (Bob) has $s$ ($r$) classical post-processing units at their disposal, each of them ideally purchased from a different provider. Alice's modules QKD$_{{\rm A}i}$, with $i=1,\ldots,n$, are connected to the classical post-processing units CP$_{{\rm A}i'}$, with $i'=1,\ldots,s$, via secure channels ({\it i.e.}, channels that provide both secrecy and authentication). Also, all the units CP$_{{\rm A}i'}$ are connected to each other via secure channels. The same applies to Bob. Importantly, since all these secure channels are located only within Alice and Bob's labs, in practice they could be implemented, for instance, by using physically protected paths ({\it e.g.}, physical wires that are mechanically and electrically protected against damage and intrusion) which connect only the prescribed devices. Furthermore, each QKD$_{{\rm A}i}$ is connected to its partner QKD$_{{\rm B}i}$ via a quantum channel, and each CP$_{{\rm A}i'}$ is connected to all CP$_{{\rm B}i''}$, with $i'=1,\ldots,s$ and $i''=1,\ldots,r$, via authenticated classical channels~\cite{wegman,wegman2}. Moreover, for simplicity, we shall consider a so-called threshold active adversary structure. That is, we will assume that up to $t<n$ pairs of QKD modules, up to $t'<s/3$ units CP$_{{\rm A}i'}$ and up to $t''<r/3$ independent units CP$_{{\rm B}i''}$ could be corrupted. We say that a pair of QKD modules is corrupted when at least one of them is corrupted. 
Also, we conservatively assume that corrupted devices need not follow the prescriptions of the protocol; rather, their behaviour is fully controlled by Eve, who could also access all their internal information. We refer the reader to Appendix~\ref{gen_sec} for the security analysis of QKD against a general mixed adversary structure~\cite{fitzi}. The goal is to generate a composable $\epsilon$-secure key, $k_{\rm A}$ and $k_{\rm B}$. That is, $k_{\rm A}$ and $k_{\rm B}$ should be identical except for a minuscule probability $\epsilon_{\rm cor}$, and say $k_{\rm A}$ should be completely random and decoupled from Eve except for a minuscule probability $\epsilon_{\rm sec}$, with $\epsilon_{\rm cor}+\epsilon_{\rm sec}\leq\epsilon$~\cite{comp1,comp2}. Importantly, since now some QKD modules and classical post-processing units could be corrupted, the secrecy condition also implies that $k_{\rm A}$ and $k_{\rm B}$ must be independent of any information held by the corrupted devices {\it after} the execution of the protocol. Otherwise, such corrupted devices could directly leak $k_{\rm A}$ and $k_{\rm B}$ to Eve. Obviously, at the end of the day, some parties might need to have access to the final key, and thus one must necessarily assume that such parties are trusted and located in secure labs. In this regard, our work suggests that when the classical post-processing units at the key distillation layer are untrusted, they should not output the final key $k_{\rm A}$ and $k_{\rm B}$ but they should output shares of it to the key management layer~\cite{kmang1,kmang2}. There, $k_{\rm A}$ and $k_{\rm B}$ could be either reconstructed by say Alice and Bob in secure labs, or its shares could be stored in distributed memories for later use, or they could be employed for instance for encryption purposes via say the one-time pad. Importantly, however, all the key generation process at the key distillation layer can be performed with corrupted devices.
Also, we note that, if necessary, operations like storage or encryption at the key management layer could also be performed with corrupted devices by using techniques from secure multiparty computation~\cite{book_smc}. In any case, the actual management and storage of the shares of $k_{\rm A}$ and $k_{\rm B}$ generated by the key distillation layer is the responsibility of the key management layer and depends on the particular application. Before we address specific scenarios in detail, let us provide an overview of the general strategy that we follow to achieve our goal, which uses as main ingredients VSS schemes~\cite{verifiable1,verifiable2,maurer_smpc} and privacy amplification techniques (see Appendix~\ref{tool}). The former is employed to defeat corrupted classical post-processing units. Indeed, given that $t'<s/3$ and $t''<r/3$, the use of VSS schemes allows one to post-process the raw keys generated by the QKD modules in a distributed setting by acting only on raw key shares. More importantly, this post-processing of raw key shares can be performed such that no set of corrupted classical post-processing units can reconstruct $k_{\rm A}$ and $k_{\rm B}$ and, moreover, it is also guaranteed that $k_{\rm A}$ and $k_{\rm B}$ is a correct key independently of the misbehaviour of the corrupted units, which might wish to purposely introduce errors. In this regard, a key insight of our paper is to show that, since all the classical post-processing techniques that are typically applied in QKD are ``linear'' in nature ({\it i.e.}, they involve simple functions in linear algebra such as bit-wise XOR and multiplication of matrices), they are easily implementable in a distributed setting. \begin{figure}\label{fig_gen} \end{figure} Let us illustrate this point with a simple example, namely, the error correction step in QKD.
Here, say Bob wants to correct a certain bit string, $k_{\rm B, key}$, to match that of Alice, which we shall denote by $k_{\rm A, key}$. In general, this process requires that both Alice and Bob first apply certain error correction matrices, $M_{\rm EC}$, to $k_{\rm A, key}$ and $k_{\rm B, key}$ to obtain the syndrome information $s_{\rm A}=M_{\rm EC}k_{\rm A, key}$ and $s_{\rm B}=M_{\rm EC}k_{\rm B, key}$, respectively. Afterward, if $s_{\rm A}\neq{}s_{\rm B}$ Bob modifies $k_{\rm B, key}$ accordingly. This process is then repeated a few times until it is guaranteed that $k_{\rm B, key}=k_{\rm A, key}$ with high probability. Let us now consider again the same procedure but now acting on shares, $k_{\rm A{\it j}, key}$ and $k_{\rm B{\it j}, key}$, of $k_{\rm A, key}$ and $k_{\rm B, key}$ respectively. That is, say $k_{\rm A, key}=\oplus_{j=1}^q k_{\rm A{\it j}, key}$ and $k_{\rm B, key}=\oplus_{j=1}^q k_{\rm B{\it j}, key}$, with $q$ being the total number of shares. For this, Alice and Bob first apply $M_{\rm EC}$ to $k_{\rm A{\it j}, key}$ and $k_{\rm B{\it j}, key}$ to obtain $s_{{\rm A}j}=M_{\rm EC}k_{\rm A{\it j}, key}$ and $s_{{\rm B}j}=M_{\rm EC}k_{\rm B{\it j}, key}$, respectively, for all $j$. Next, Alice sends $s_{{\rm A}j}$ to Bob, who obtains $s_{\rm A}=\oplus_{j=1}^qs_{{\rm A}j}$ and $s_{\rm B}=\oplus_{j=1}^qs_{{\rm B}j}$. This is so because $\oplus_{j=1}^qs_{{\rm A}j}=\oplus_{j=1}^qM_{\rm EC}k_{\rm A{\it j}, key}=M_{\rm EC}\oplus_{j=1}^qk_{\rm A{\it j}, key}=M_{\rm EC}k_{\rm A, key}=s_{\rm A}$, and a similar argument applies to $s_{\rm B}$. Finally, if $s_{\rm A}\neq{}s_{\rm B}$ Bob corrects $k_{\rm B, key}$ by acting on its shares $k_{\rm B{\it j}, key}$. This is so because flipping certain bits in $k_{\rm B, key}$ is equivalent to flipping the corresponding bits in one of its shares $k_{\rm B{\it j}, key}$. That is, the error correction step in QKD can be easily performed in a distributed setting by acting only on shares of $k_{\rm A, key}$ and $k_{\rm B, key}$.
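The linearity argument above is easy to check numerically. The following Python sketch is our own illustration, not code from any QKD implementation; a small random binary matrix stands in for a real parity-check matrix $M_{\rm EC}$. It verifies that the XOR of the per-share syndromes equals the syndrome of the full key:

```python
import numpy as np

rng = np.random.default_rng(1)

n_bits, n_syn, q = 16, 8, 3                 # key length, syndrome length, number of shares
M_EC = rng.integers(0, 2, (n_syn, n_bits))  # stand-in for a real parity-check matrix

# Split Alice's sifted key into q random XOR shares (q-out-of-q sharing):
# the first q-1 shares are random, the last one fixes the XOR to equal the key.
k_A = rng.integers(0, 2, n_bits)
shares = [rng.integers(0, 2, n_bits) for _ in range(q - 1)]
shares.append(np.bitwise_xor.reduce(shares) ^ k_A)

# Each classical post-processing unit computes the syndrome of its share only.
share_syndromes = [(M_EC @ s) % 2 for s in shares]

# XORing the per-share syndromes gives the syndrome of the full key,
# because matrix multiplication is linear over GF(2).
s_from_shares = np.bitwise_xor.reduce(share_syndromes)
s_direct = (M_EC @ k_A) % 2
assert np.array_equal(s_from_shares, s_direct)
```

The same pattern extends to any GF(2)-linear post-processing map, which is precisely why sifting, error correction and privacy amplification can all be carried out on shares.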
The same argument applies as well to the other classical post-processing techniques in QKD, as all of them involve only linear operations. To defeat corrupted QKD modules, on the other hand, we use privacy amplification techniques. Suppose, for instance, that each pair of QKD modules, QKD$_{{\rm A}i}$ and QKD$_{{\rm B}i}$, outputs a raw key, $k'_{{\rm A}i}$ and $k'_{{\rm B}i}$. Moreover, suppose for the moment that the classical post-processing units are trusted and they distill a supposedly $(\epsilon/n)$-secure key, $k''_{{\rm A}i}$ and $k''_{{\rm B}i}$, of length $N$ bits from each pair $k'_{{\rm A}i}$ and $k'_{{\rm B}i}$. Then, we have that the $n\times{}N$ bit strings $k'_{\rm A}=[k''_{{\rm A}1},\ldots, k''_{{\rm A}n}]$ and $k'_{\rm B}=[k''_{{\rm B}1},\ldots, k''_{{\rm B}n}]$ are for certain $\epsilon_{\rm cor}$-correct. The secrecy condition, however, only holds if all the QKD modules are trusted. If, say, the pair QKD$_{{\rm A}i'}$ and QKD$_{{\rm B}i'}$ is corrupted, then the key strings $k''_{{\rm A}i'}$ and $k''_{{\rm B}i'}$ are compromised. So, given that $t<n$, the classical post-processing units can apply privacy amplification to $k'_{\rm A}$ and $k'_{\rm B}$ to extract two shorter $(n-t)\times{}N$ bit strings, $k_{\rm A}$ and $k_{\rm B}$, which are for certain $\epsilon_{\rm sec}$-secret and thus $\epsilon$-secure. In the presence of untrusted classical post-processing units, this process can be performed in a distributed manner by acting on data shares, just as we describe above. In short, the general strategy can be decomposed into three main steps, which are illustrated in Fig.~\ref{fig_gen}(b). First, each pair of QKD modules generates a raw key and the protocol information and sends them to the CP units. Then, in a second step, the CP units distill a supposedly $(\epsilon/n)$-secure key from each raw key received and concatenate the resulting keys to form a longer key bit string.
Finally, in the third step, the CP units apply privacy amplification to remove the information that could be known to Eve due to the presence of corrupted QKD modules. Importantly, if the CP units are untrusted, all these steps are performed in a distributed setting by acting on data shares produced by a VSS scheme. Next we evaluate three different scenarios of practical interest in this context. For concreteness, in these examples we use the VSS scheme introduced in~\cite{maurer_smpc} and described in Appendix~\ref{tool}. \subsection{QKD with malicious QKD modules and honest classical post-processing units}\label{ups} We begin by analysing the situation where Alice and Bob have $n$ pairs of QKD modules and up to $t<n$ of them could be corrupted, and each of Alice and Bob has one classical post-processing unit which is assumed to be honest. This scenario is illustrated in Fig.~\ref{fig3} and corresponds to the case $s=r=1$ and $t'=t''=0$ in Fig.~\ref{fig_gen}(a). \begin{figure}\label{fig3} \end{figure} A possible solution to this scenario is rather simple. It is given by {\it Protocol}~$1$ below, which consists of three main steps. \\ \noindent{\it Protocol $1$}: \begin{enumerate} \item {\it Generation of raw keys and protocol information}: Each pair QKD$_{{\rm A}i}$ and QKD$_{{\rm B}i}$ outputs, respectively, the bit strings $k'_{{\rm A}i}$ and $p_{\rm A{\it i},info}$, and $k'_{{\rm B}i}$ and $p_{\rm B{\it i},info}$, or the symbol $\perp_i$ to indicate abort, for all $i=1,\ldots,n$. \item {\it Generation of an $\epsilon_{\rm cor}$-correct key}: The units CP$_{\rm A}$ and CP$_{\rm B}$ use the key distillation procedure prescribed by the QKD protocol to generate an $(\epsilon/n)$-secure key, $k''_{{\rm A}i}$ and $k''_{{\rm B}i}$, from each raw key pair $k'_{{\rm A}i}$ and $k'_{{\rm B}i}$, or they generate the abort symbol $\perp_i$, for all $i=1,\ldots, n$. 
Afterward, CP$_{\rm A}$ (CP$_{\rm B}$) concatenates the $M\leq{}n$ keys $k''_{{\rm A}i}$ ($k''_{{\rm B}i}$) which are different from $\perp_i$ to form the bit string $k'_{\rm A}=[k''_{{\rm A}1},\ldots, k''_{{\rm A}M}]$ ($k'_{\rm B}=[k''_{{\rm B}1},\ldots, k''_{{\rm B}M}]$).~Since the units CP$_{\rm A}$ and CP$_{\rm B}$ are trusted, $k''_{{\rm A}i}$ and $k''_{{\rm B}i}$ are for certain $(\epsilon_{\rm cor}/n)$-correct $\forall i$ and thus $k'_{\rm A}$ and $k'_{\rm B}$ are $\epsilon_{\rm cor}$-correct. The secrecy condition only holds if all $k''_{{\rm A}i}$ and $k''_{{\rm B}i}$ originate from raw keys output by honest QKD modules. For simplicity, we will suppose that the length of $k''_{{\rm A}i}$ and $k''_{{\rm B}i}$ is $N$ bits $\forall i$. \item {\it Generation of an $\epsilon$-secure key}: CP$_{\rm A}$ and CP$_{\rm B}$ apply a randomly selected universal$_2$ hash function to extract from $k'_{\rm A}$ and $k'_{\rm B}$ two shorter bit strings, $k_{\rm A}$ and $k_{\rm B}$, of length $(M-t)\times{}N$ bits. $k_{\rm A}$ and $k_{\rm B}$ are by definition $\epsilon_{\rm sec}$-secret, and thus, from step~$2$, they are $\epsilon$-secure. \end{enumerate} Note that in step~$3$ of {\it Protocol}~$1$ we consider the worst-case scenario where all $k''_{{\rm A}i}$ and $k''_{{\rm B}i}$ generated by corrupted QKD modules contribute to $k'_{\rm A}$ and $k'_{\rm B}$ respectively, as Alice and Bob cannot discard this case. Most importantly, {\it Protocol}~$1$ allows Alice and Bob to defeat covert channels such as memory attacks in QKD, as this protocol guarantees that none of the corrupted QKD modules can access $k_{\rm A}$ or $k_{\rm B}$. Our results are summarised in the following Claim, whose proof is direct from the definition of {\it Protocol}~$1$. \\ \noindent{\bf Claim 1.} {\it Suppose that Alice and Bob have $n$ pairs of QKD modules and up to $t<n$ of them could be corrupted. Also, suppose that they have one trusted classical post-processing unit each. 
Let $M\leq{}n$ denote the number of pairs of QKD modules that do not abort and whose raw key could in principle be transformed into an $(\epsilon/n)$-secure key, and let $N$ bits be the length of such supposedly secure key. Protocol~$1$ allows Alice and Bob to distill an $\epsilon$-secure key of length $(M-t)\times{}N$ bits. Moreover, the re-use of the devices does not compromise the security of the keys distilled in previous QKD runs. } \\ Importantly, we remark that {\it Protocol}~$1$ is optimal with respect to the resulting secret key rate. This is so because of the following. If no pair of QKD modules aborts and its raw data could in principle be transformed into a secure key we have, by definition, that the maximum {\it total} final key length is at most $n\times{}N$ bits. Also, we know that up to $t\times{}N$ bits of such key could be compromised by the $t$ pairs of corrupted QKD modules. That is, the maximum secure key length is at most $(n-t)\times{}N$ bits. Moreover, as discussed above, if some pairs of QKD modules abort we must necessarily assume the worst-case scenario where they are honest. This is so because through her interaction with the quantum signals in the channel, Eve could always force honest QKD modules to abort by simply increasing the resulting QBER or phase error rate. That is, in the scenario considered it is not possible to distill a key length greater than $(M-t)\times{}N$ bits. \subsection{QKD with honest QKD modules and malicious classical post-processing units}\label{ups2} We now consider the situation where Alice and Bob have one trusted QKD module each, and Alice (Bob) has $s$ ($r$) classical post-processing units CP$_{{\rm A}i}$ (CP$_{{\rm B}i'}$), with $i=1,\ldots,s$ ($i'=1,\ldots,r$), and up to $t'<s/3$ ($t''<r/3$) of them could be corrupted. This scenario is illustrated in Fig.~\ref{fig2} and corresponds to the case $n=1$ and $t=0$ in Fig.~\ref{fig_gen}(a). 
\begin{figure}\label{fig2} \end{figure} Since now the units CP$_{{\rm A}i}$ and CP$_{{\rm B}i'}$ could be malicious, we aim to generate shares of an $\epsilon$-secure key, $k_{\rm A}$ and $k_{\rm B}$. A possible solution to this scenario is given by {\it Protocol}~$2$, which uses the VSS protocol introduced in~\cite{maurer_smpc}. These protocols are described in Appendices~\ref{tool} and \ref{app_P2}, respectively. Below we provide a sketch of {\it Protocol}~$2$ where, for ease of presentation, we assume $r=s$. It consists of six main steps. \\ \noindent Sketch of {\it Protocol $2$}: \begin{enumerate} \item {\it Generation and distribution of shares of raw keys and protocol information}: QKD$_{\rm A}$ and QKD$_{\rm B}$ output, respectively, $k'_{\rm A}$ and $p_{\rm A,info}$, and $k'_{\rm B}$ and $p_{\rm B,info}$, or the abort symbol $\perp$. If the output is different from $\perp$, QKD$_{\rm A}$ sends $p_{\rm A,info}$ to all CP$_{{\rm A}i}$ and uses a VSS scheme to distribute shares of $k'_{\rm A}$ between the CP$_{{\rm A}i}$. Likewise, QKD$_{\rm B}$ does the same with $p_{\rm B,info}$, $k'_{\rm B}$ and the units CP$_{{\rm B}i'}$. Let $k'_{{\rm A}ij}$ ($k'_{{\rm B}i'j}$) be the $j$-th share of $k'_{\rm A}$ ($k'_{\rm B}$) received by CP$_{{\rm A}i}$ (CP$_{{\rm B}i'}$). Next, the CP$_{{\rm A}i}$ and CP$_{{\rm B}i'}$ send to each other $p_{\rm A,info}$ and $p_{\rm B,info}$. \item {\it Sifting}: Each CP$_{{\rm A}i}$ uses $p_{\rm A,info}$ and $p_{\rm B,info}$ to obtain two bit strings, $k_{\rm A{\it ij}, key}$ and $k_{\rm A{\it ij}, est}$, from $k'_{{\rm A}ij}$. $k_{\rm A{\it ij}, key}$ ($k_{\rm A{\it ij}, est}$) is used for key generation (parameter estimation). Likewise, Bob's CP$_{{\rm B}i'}$ do the same with $k'_{{\rm B}i'j}$.
\item {\it Parameter estimation}: The CP$_{{\rm A}i}$ and CP$_{{\rm B}i'}$ use the reconstruct protocol of a VSS scheme to obtain the parts of $k'_{\rm A}$ and $k'_{\rm B}$ that are used for parameter estimation, $k_{\rm A, est}$ and $k_{\rm B, est}$, from their shares $k_{\rm A{\it ij}, est}$ and $k_{\rm B{\it i'j}, est}$. Next, each CP$_{{\rm A}i}$ and CP$_{{\rm B}i'}$ locally performs parameter estimation ({\it e.g.}, they estimate the phase error rate). If the estimated values exceed certain tolerated values, they abort. \item {\it Error correction}: The CP$_{{\rm A}i}$ and CP$_{{\rm B}i'}$ correct the parts of $k'_{\rm A}$ and $k'_{\rm B}$ that are used for key distillation, $k_{\rm A, key}$ and $k_{\rm B, key}$, by acting on their shares $k_{\rm A{\it ij}, key}$ and $k_{\rm B{\it i'j}, key}$. Let ${\hat k}_{\rm A{\it ij}, key}$ and ${\hat k}_{\rm B{\it i'j}, key}$ denote the resulting shares of the corrected keys ${\hat k}_{\rm A, key}$ and ${\hat k}_{\rm B, key}$, and let leak$_{\rm EC}$ bits be the syndrome information interchanged during this step. \item {\it Error verification}: The CP$_{{\rm A}i}$ use the RBS scheme (see Appendix~\ref{tool}) to randomly select a universal$_2$ hash function, $h_{\rm V}$, that is sent to all CP$_{{\rm B}i'}$. The CP$_{{\rm A}i}$ (CP$_{{\rm B}i'}$) compute $h_{{\rm A}ij}=h_{\rm V}({\hat k}_{\rm A{\it ij}, key})$ ($h_{{\rm B}i'j}=h_{\rm V}({\hat k}_{\rm B{\it i'j}, key})$) of length $\lceil\log_2{(4/\epsilon_{\rm cor})}\rceil$ bits, and they use the reconstruct protocol of a VSS scheme to obtain both a hash, $h_{\rm A}$, of ${\hat k}_{\rm A, key}$ and a hash, $h_{\rm B}$, of ${\hat k}_{\rm B, key}$ from the shares $h_{{\rm A}ij}$ and $h_{{\rm B}i'j}$. Finally, each CP$_{{\rm A}i}$ and CP$_{{\rm B}i'}$ aborts if $h_{\rm A}\neq{}h_{\rm B}$.
\item {\it Generation of shares of an $\epsilon$-secure key}: The CP$_{{\rm A}i}$ use the RBS scheme to randomly select a universal$_2$ hash function, $h_{\rm P}$, that is sent to all CP$_{{\rm B}i'}$. Each CP$_{{\rm A}i}$ (CP$_{{\rm B}i'}$) obtains the shares, $k_{{\rm A}ij}=h_{\rm P}({\hat k}_{\rm A{\it ij}, key})$ ($k_{{\rm B}i'j}=h_{\rm P}({\hat k}_{\rm B{\it i'j}, key})$), of a key $k_{\rm A}$ ($k_{\rm B}$). \end{enumerate} Given that $t'<M_{\rm A}/3$ and $t''<M_{\rm B}/3$, where $M_{\rm A}$ ($M_{\rm B}$) denotes the number of CP$_{{\rm A}i}$ (CP$_{{\rm B}i'}$) that do not abort, we have that the final key, $k_{\rm A}$ and $k_{\rm B}$, is $\epsilon$-secure (see Appendix~\ref{app_P2}). If they so wish, Alice (Bob) can obtain $k_{\rm A}$ ($k_{\rm B}$) by using the reconstruct protocol of a VSS scheme. That is, Alice (Bob) can use majority voting to obtain first the $j$-th share, $k_{{\rm A}j}$ ($k_{{\rm B}j}$), of $k_{\rm A}$ ($k_{\rm B}$) from $k_{{\rm A}ij}$ ($k_{{\rm B}i'j}$) for all $j=1,\ldots,q$, and then she (he) calculates $k_{\rm A}=\oplus_{j=1}^q k_{{\rm A}j}$ ($k_{\rm B}=\oplus_{j=1}^q k_{{\rm B}j}$), where $q$ is the total number of shares. Our results are summarised in the following Claim, whose proof is direct from the definition of {\it Protocol}~$2$: \\ \noindent{\bf Claim 2.} {\it Suppose that Alice and Bob have one trusted QKD module each, and each of them has, respectively, $s$ and $r$ classical post-processing units. Also, suppose that up to $t'<s/3$ of Alice's units and up to $t''<r/3$ of Bob's units could be corrupted. Then, if we disregard the cost of authenticating the classical channels between Alice and Bob's classical post-processing units, Protocol~$2$ allows them to distill an $\epsilon$-secure key of the same length as would be possible in a completely trusted scenario. Moreover, the re-use of the devices does not compromise the security of the keys distilled in previous QKD runs.
} \\ We remark that if we ignore the cost of authenticating the classical channels between the units CP$_{{\rm A}i}$ and CP$_{{\rm B}i'}$, Claim~$2$ implies directly that {\it Protocol}~$2$ is optimal with respect to the resulting secret key length. Also, we refer the reader to Appendix~\ref{alter} for a simpler but less efficient protocol to achieve the same task. \subsection{QKD with malicious QKD modules and malicious classical post-processing units}\label{ups3} Finally, here we consider the situation where Alice and Bob have $n$ pairs of QKD modules, QKD$_{{\rm A}i}$ and QKD$_{{\rm B}i}$ with $i=1,\ldots,n$, and Alice (Bob) has $s$ ($r$) classical post-processing units CP$_{{\rm A}i'}$ (CP$_{{\rm B}i''}$), with $i'=1,\ldots,s$ ($i''=1,\ldots,r$), and up to $t<n$ pairs of QKD modules, up to $t'<s/3$ units CP$_{{\rm A}i'}$ and up to $t''<r/3$ units CP$_{{\rm B}i''}$ could be corrupted. This scenario is illustrated in Fig.~\ref{fig1} and corresponds to the most general case considered in Fig.~\ref{fig_gen}(a). \begin{figure}\label{fig1} \end{figure} For illustrative purposes, let us discuss first a naive protocol that fails to achieve the goal. In particular, suppose for simplicity that $s=r=n$, and, moreover, we have that up to $t<n$ groups $G_i\equiv\{{\rm QKD}_{{\rm A}i}, {\rm QKD}_{{\rm B}i}, {\rm CP}_{{\rm A}i}, {\rm CP}_{{\rm B}i}\}$ could be corrupted, where we say that a group $G_i$ is corrupted if at least one of its elements is corrupted. Then, if one disregards efficiency issues, a straightforward solution to this scenario might appear to be as follows. Each $G_i$ simply generates a supposedly $\epsilon$-secure key, $k_{{\rm A}i}$ and $k_{{\rm B}i}$, and this key is then considered as the $i$-th share of a final key, $k_{\rm A}$ and $k_{\rm B}$. That is, $k_{\rm A}=\oplus_{i=1}^n k_{{\rm A}i}$ and $k_{\rm B}=\oplus_{i=1}^n k_{{\rm B}i}$. Indeed, given that $t<n$, $k_{\rm A}$ and $k_{\rm B}$ is for certain $\epsilon_{\rm sec}$-secret. 
However, the main problem of this naive approach is that $k_{\rm A}$ and $k_{\rm B}$ might not be {\it correct} because a corrupted $G_i$ could simply output $k_{{\rm A}i}\neq{}k_{{\rm B}i}$ and thus $k_{\rm A}\neq{}k_{\rm B}$. Below we provide a simple solution ({\it Protocol~$3$}) to the general scenario. It builds on {\it Protocols}~$1$ and $2$ above, and it consists of three main steps. \\ \noindent{\it Protocol $3$}: \begin{enumerate} \item {\it Generation and distribution of shares of $(\epsilon/n)$-secure keys}: Each pair QKD$_{{\rm A}i}$ and QKD$_{{\rm B}i}$ uses, say, {\it Protocol}~$2$ to distribute shares of an $(\epsilon/n)$-secure key, $k_{{\rm A}i}$ and $k_{{\rm B}i}$, or the abort symbol $\perp_i$, between CP$_{{\rm A}i'}$ and CP$_{{\rm B}i''}$, respectively. Let ${\tilde k}_{{\rm A}i'ij}$ (${\tilde k}_{{\rm B}i''ij'}$) be the $j$-th ($j'$-th) share of $k_{{\rm A}i}$ ($k_{{\rm B}i}$) obtained by CP$_{{\rm A}i'}$ (CP$_{{\rm B}i''}$). For simplicity, we will suppose that the length of $k_{{\rm A}i}$ and $k_{{\rm B}i}$ is $N$ bits for all $i$. \item {\it Generation of shares of an $\epsilon_{\rm cor}$-correct key}: Let $\vec{0}$ be the $N$-bit zero vector, and $M$ be the number of $k_{{\rm A}i}$ and $k_{{\rm B}i}$ which are different from $\perp_i$. Each CP$_{{\rm A}i'}$ defines $k''_{{\rm A}i'ij}=[\vec{0}_1,\ldots, \vec{0}_{i-1},{\tilde k}_{{\rm A}i'ij},\vec{0}_{i+1},\ldots, \vec{0}_{M}]$. Likewise, the CP$_{{\rm B}i''}$ form $k''_{{\rm B}i''ij'}$ from ${\tilde k}_{{\rm B}i''ij'}$. $k''_{{\rm A}i'ij}$ and $k''_{{\rm B}i''ij'}$ are by definition shares of an $\epsilon_{\rm cor}$-correct key. The secrecy condition only holds if all $k_{{\rm A}i}$ and $k_{{\rm B}i}$ originate from honest QKD modules. \item {\it Generation of shares of an $\epsilon$-secure key}: The CP$_{{\rm A}i'}$ use the RBS scheme (see Appendix~\ref{tool}) to randomly select a universal$_2$ hash function, $h_{\rm P}$, that is sent to all CP$_{{\rm B}i''}$.
Each CP$_{{\rm A}i'}$ (CP$_{{\rm B}i''}$) obtains shares, $k_{{\rm A}i'ij}=h_{\rm P}(k''_{{\rm A}i'ij})$ ($k_{{\rm B}i''ij'}=h_{\rm P}(k''_{{\rm B}i''ij'})$), of length $(M-t)\times{}N$ bits of a final key $k_{\rm A}$ ($k_{\rm B}$). \end{enumerate} Indeed, given that $t'<M_{{\rm A}i}/3$ and $t''<M_{{\rm B}i}/3$ for all $i=1,\ldots,M$, where $M_{{\rm A}i}$ ($M_{{\rm B}i}$) denotes the number of CP$_{{\rm A}i'}$ (CP$_{{\rm B}i''}$) that do not produce $\perp_i$ but output post-processed shares, $k_{{\rm A}i'ij}$ ($k_{{\rm B}i''ij'}$), from $k_{{\rm A}i}$ ($k_{{\rm B}i}$), then the final key, $k_{\rm A}$ and $k_{\rm B}$, is $\epsilon$-secure. Also, Alice (Bob) could obtain $k_{\rm A}$ ($k_{\rm B}$) by using the reconstruct protocol of a VSS (see Appendix~\ref{tool}). That is, Alice (Bob) could use majority voting to obtain the shares $k_{{\rm A}ij}$ and $k_{{\rm B}ij'}$ of $k_{\rm A}$ ($k_{\rm B}$) from $k_{{\rm A}i'ij}$ ($k_{{\rm B}i''ij'}$) for all $i=1,\ldots,M$ and $j=1,\ldots,q$ ($j'=1,\ldots,q'$), and she (he) calculates $k_{\rm A}=\oplus_{i=1}^M\oplus_{j=1}^q k_{{\rm A}ij}$ ($k_{\rm B}=\oplus_{i=1}^M\oplus_{j'=1}^{q'} k_{{\rm B}ij'}$) where $q$ ($q'$) is the total number of shares of $k_{{\rm A}i}$ ($k_{{\rm B}i}$) for each $i$. Our results are summarised in the following Claim, whose proof is direct from the definition of {\it Protocol}~$3$: \\ \noindent{\bf Claim 3.} {\it Suppose that Alice and Bob have $n$ pairs of QKD modules and Alice (Bob) has $s$ ($r$) classical post-processing units. Suppose that up to $t<n$ pairs of QKD modules, up to $t'<s/3$ classical post-processing units of Alice, and up to $t''<r/3$ classical post-processing units of Bob could be corrupted. Let $M\leq{}n$ denote the number of pairs of QKD modules that do not abort and whose raw key can be transformed into a supposedly $(\epsilon/n)$-secure key, and let $N$ bits be the length of such key. 
Then Protocol $3$ allows Alice and Bob to distill an $\epsilon$-secure key of length $(M-t)\times{}N$ bits. Moreover, the re-use of the devices does not compromise the security of the keys distilled in previous QKD runs.} \\ We remark that if we disregard the cost of authenticating the classical channels between Alice and Bob's classical post-processing units, {\it Protocol}~$3$ is optimal with respect to the resulting secret key length. The argument follows directly from that used in Sec.~\ref{ups}, where we showed that if the classical post-processing units are trusted, the secret key rate is upper bounded by $(M-t)\times{}N$ bits. So, in the presence of corrupted classical post-processing units this upper bound also trivially holds. \section{Discussion and conclusions} Security proofs of QKD assume that there are no covert channels and that the classical post-processing units are trusted. Unfortunately, however, both assumptions are very hard, if not impossible, to guarantee in practice. Indeed, memory attacks~\cite{mem_attack} constitute a fundamental practical threat to the security of both DI-QKD and non-DI-QKD. They highlight that quantum mechanics alone is not enough to guarantee the security of practical QKD realisations; for this, one needs to resort to additional assumptions. Also, recent results on Trojan Horse attacks~\cite{covert2,covert2b,covert2c,covert3,military1,military2} against conventional cryptographic systems underline the vulnerabilities of the classical post-processing units, and this threat is expected to only rise with time. In this paper we have introduced a simple solution to overcome these two fundamental security problems and restore the security of QKD. The price to pay is that now Alice and Bob need to have several QKD modules and classical post-processing units at their disposal, bought for example from different vendors.
Given that there is at least one pair of honest QKD modules and that the number of corrupted classical post-processing units is less than one third of their total number, we have shown how VSS schemes, together with privacy amplification techniques, could be used to re-establish the security of QKD. Indeed, VSS and secret sharing techniques have been used previously in quantum information~\cite{extra1,extra1b,extra2}. For instance, the authors of~\cite{extra1,extra1b} proposed a quantum version of VSS to achieve secure multiparty quantum computation, while in~\cite{extra2} classical secret sharing schemes are combined with QKD to achieve information-theoretically secure distributed storage systems. A key insight of our paper is very simple yet potentially very useful: the typical classical post-processing in QKD only involves operations which are ``linear'' in nature, and thus they could be easily implemented in a distributed setting by acting on data shares from say a linear VSS scheme. To illustrate our results, we have proposed specific protocols for three scenarios of practical interest. They assume that either the QKD modules, the classical post-processing units, or both of them together could be corrupted. Remarkably, if we disregard the cost of classical authentication, all these protocols are optimal with respect to the secret key rate. They use the VSS scheme introduced in~\cite{maurer_smpc}, which is very simple to implement. Its main drawback is, however, that, for a given number of corrupted parties, the number of shares grows exponentially with the total number of parties. Nevertheless, for a small number of parties (which is the scenario we are interested in for QKD), the protocol is efficient in terms of computational complexity. Also, we remark that there exist efficient three-round VSS protocols where the computation and communication required is polynomial in the total number of parties~\cite{eff_VSS}.
Moreover, these schemes use a minimum number of communication rounds~\cite{gen}, and they could also be used here. It would be interesting to further investigate the most resource-efficient protocols to be used in the QKD framework. \section{Secure multiparty computation toolbox}\label{tool} Here we briefly introduce some definitions and cryptographic protocols that are used in the main text; they are mainly taken from~\cite{book_smc,maurer_smpc}. We consider a scenario with a dealer and $n$ parties, and we suppose a threshold active adversary structure where Eve can actively corrupt the dealer and up to $t$ of the parties. Active corruption means that Eve can fully control the corrupted parties whose behaviour can deviate arbitrarily from the protocol's prescriptions. We refer the reader to Appendix~\ref{ap_struct} for the modeling of more general adversary structures. For simplicity, we assume that all messages are binary strings and the symbol $\oplus$ below denotes bit-wise XOR or bit-wise addition modulo $2$. We remark, however, that the protocols below work as well over any finite field or ring. In this scenario, a $q$-out-of-$n$ threshold secret sharing (SS) scheme~\cite{ss1,ss2} is a protocol that allows the dealer to split a message $m$ between $n$ parties such that, if he is honest, any group of $q$ or more parties can collaborate to reconstruct $m$ but no group of fewer than $q$ parties can obtain any information about $m$. If $n=q$, this could be achieved by splitting $m$ into a random sum of $q$ shares $m_i$. That is, one selects the first $q-1$ shares $m_i$ of $m$ at random, and then chooses $m_q=m\oplus{}m_1\oplus\ldots\oplus{}m_{q-1}$~\cite{book_smc}. A drawback of SS schemes is that they do not guarantee the consistency of the shares, which is essential to assure the correctness of the keys delivered by the QKD protocols in the main text.
That is, during the reconstruct phase of an SS scheme, corrupted parties could send different $m_i$ to the honest parties such that they obtain different values for $m$. This problem can be solved with verifiable secret sharing (VSS) schemes~\cite{verifiable1,verifiable2}, which distribute $m_i$ in a redundant manner such that the honest parties can use error correction to obtain the correct values. Indeed, provided that the necessary and sufficient condition $t<n/3$ is satisfied, a VSS scheme guarantees that there exists a well-defined $m$ that all honest parties obtain from their shares~\cite{maurer_smpc,smc1,smc2}. The share and reconstruct protocols of a VSS scheme satisfy three conditions. First, independently of whether or not the dealer is honest, if the share protocol is successful then the reconstruct protocol delivers the same $m$ to all the honest parties. Second, if the dealer is honest, the value of the reconstructed $m$ coincides with that provided by the dealer. And third, if the dealer is honest, the information obtained by any set of $t$ or fewer parties after the share (reconstruct) protocol is independent of any previous information that they held before the protocol (is just the reconstructed bit string $m$). Below we present a simple VSS scheme that builds on the $q$-out-of-$q$ threshold SS protocol above~\cite{maurer_smpc}, and which we use in {\it Protocols}~$2$ and $3$ in the main text. Importantly, given that $t<n/3$, this scheme provides information-theoretic security~\cite{maurer_smpc}. See Fig.~\ref{fig4} for a graphical representation of its share and reconstruct protocols. \begin{figure}\label{fig4} \end{figure} \\ \noindent{}Share protocol: \begin{enumerate} \item The dealer uses a $q$-out-of-$q$ SS scheme to split $m$ into $q={n \choose n-t}$ shares $m_i$, with $i=1,\ldots,q$. \item Let $\{\sigma_1,\ldots,\sigma_q\}$ denote all $(n-t)$-combinations of the set of $n$ parties. 
Then, for each $i=1,\ldots,q$, the dealer sends $m_i$ over a secure channel ({\it i.e.}, a channel that provides secrecy and authentication) to each party in $\sigma_i$. If a party does not receive his share, he takes as default share say a zero bit string. \item All pairs of parties in $\sigma_i$ send each other their shares $m_i$ over a secure channel to check if they are indeed equal. If an inconsistency is found, they complain using a broadcast channel. \item If a complaint is raised in $\sigma_i$, the dealer broadcasts $m_i$ to all parties and they accept the share received. If the dealer fails to do so, the protocol aborts. \end{enumerate} \noindent{}Reconstruct protocol: \begin{enumerate} \item All pairs of parties send each other their shares over an authenticated channel. \item Each party uses majority voting to reconstruct the shares $m_i$ $\forall i$, and then obtains $m=\oplus_{i=1}^{q} m_i$. \end{enumerate} From the description above, it is guaranteed that when the share protocol is successful ({\it i.e.}, it does not abort), all the honest parties who received the $i$-th share of $m$ obtain exactly the same bit string $m_i$. Also, this protocol assures that any share $m_i$ of $m$ is distributed to at least $2t+1$ different parties. This is so because each set $\sigma_i$ contains $n-t\geq{}2t+1$ parties whenever $t<n/3$. This means, in particular, that, since the number of corrupted parties is at most $t$, the use of a decision rule based on majority voting in the reconstruct protocol permits all the honest parties to obtain the same fixed $m_i$ for all $i$. Moreover, it is straightforward to show that when the dealer is honest, the reconstructed message $m$ is equal to his original message. Furthermore, we have that $m$ is only revealed to the parties once the reconstruct phase ends. This is so because at least one bit string $m_i$ is only shared by honest parties since there is at least one set $\sigma_i$ which does not contain any corrupted party. 
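To make the counting above concrete, the share and reconstruct protocols can be simulated in a few lines. The following is a simplified, honest-dealer sketch in Python; the helper names and data layout are ours, and secure channels, complaints, and broadcasts are abstracted away:

```python
import itertools
import secrets
from collections import Counter

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(m: bytes, n: int, t: int):
    """Split m into q = C(n, n-t) random XOR shares and give share m_i
    to every party in the i-th (n-t)-combination sigma_i of the n parties."""
    sigmas = list(itertools.combinations(range(n), n - t))
    q = len(sigmas)
    shares = [secrets.token_bytes(len(m)) for _ in range(q - 1)]
    last = m
    for s in shares:
        last = xor(last, s)
    shares.append(last)  # now m = m_1 xor ... xor m_q
    holdings = [{} for _ in range(n)]  # holdings[p]: share index -> m_i
    for i, sigma in enumerate(sigmas):
        for p in sigma:
            holdings[p][i] = shares[i]
    return holdings, q

def reconstruct(holdings, q: int) -> bytes:
    """Majority-vote each share m_i over the parties holding it, then XOR."""
    m = None
    for i in range(q):
        votes = Counter(h[i] for h in holdings if i in h)
        m_i = votes.most_common(1)[0][0]  # majority value of the i-th share
        m = m_i if m is None else xor(m, m_i)
    return m
```

For instance, with $n=4$ and $t=1$ one obtains $q={4 \choose 3}=4$ shares, each held by $n-t=3=2t+1$ parties, so the copies announced by a single corrupted party are always outvoted during reconstruction.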
Also, note that if a complaint is raised in a certain $\sigma_i$ during the share protocol, the fact that the dealer broadcast $m_i$ to all parties does not violate secrecy. This is so because a complaint can only occur if either the dealer is corrupted or $\sigma_i$ contains at least one corrupted player, hence the adversary knew $m_i$ already. We remark that the broadcast channel which is required in steps $3$ and $4$ of the share protocol can be a simulated channel. Indeed, given that $t<n/3$, there exist efficient poly($n$) protocols that can simulate a broadcast channel with information-theoretic security in an optimal number of $t+1$ communication rounds~\cite{garay,fischer}. Furthermore, if a physical broadcast channel is actually available, there exist efficient information-theoretically secure VSS schemes that only require a majority of honest parties ({\it i.e.}, $t<n/2$) and which could also be used in this context~\cite{tal}. Next we present a simple scheme to generate a common perfectly unbiased random $l$-bit string (RBS) $r$ between $n$ parties when up to $t<n/3$ of them could be corrupted. It follows directly from VSS~\cite{maurer_smpc,smc1,smc2}. For convenience, we call it the RBS protocol. We use it to randomly select universal$_2$ hash functions in {\it Protocols}~$2$ and $3$ in the main text, where we cannot assume the existence of an external honest dealer which provides them to the QKD devices. The RBS scheme allows mutually untrusted parties to generate and share random numbers through public discussion. \\ \noindent{}RBS protocol: \begin{enumerate} \item Say each of the first $t+1$ parties produces locally a random $l$-bit string $r_i$ and sends it to all the other parties using the share protocol above. \item Each party uses a broadcast channel to confirm that they have received all their shares from the first $t+1$ parties. If some party did not receive all of them, the protocol aborts. 
\item All parties use the reconstruct protocol above to obtain $r_i$ for all $i=1,\ldots,t+1$. Afterward, each of them calculates locally $r=\oplus_{i=1}^{t+1}r_{i}$. \end{enumerate} It is straightforward to show that this protocol guarantees that all honest parties share a perfectly unbiased random bit string $r$. The use of the share and the reconstruct protocols of a VSS scheme assures that all honest parties reconstruct the same $r_i$ $\forall{}i$ and thus the same $r$. In addition, step $2$ of the protocol guarantees that the first $t+1$ parties generate and distribute their strings $r_i$ before knowing the strings of the other parties. Moreover, since the number of corrupted parties is at most $t$, we have that at least one honest party generates a truly random bit string $r_i$, and thus $r$ is also random. \section{Protocol~$2$}\label{app_P2} Here we present the different steps of {\it Protocol}~$2$ in detail. For concreteness, whenever we refer to the share and reconstruct protocols of a VSS scheme we mean those presented in Appendix~\ref{tool}, which have been introduced in~\cite{maurer_smpc}. Also, to simplify the discussion, in {\it Protocol}~$2$ we consider the case where $p_{\rm A,info}$ and $p_{\rm B,info}$ determine the sifting procedure of the QKD scheme in a deterministic way. That is, there is no {\it random} post-selection of data from the raw key. In addition, we assume that Alice and Bob do not estimate the actual QBER but they apply error correction for a pre-fixed QBER value followed by an error verification step. However, we remark that {\it Protocol}~$2$ could be adapted to cover also these two scenarios. \\ \begin{enumerate} \item {\it Generation and distribution of shares of raw keys and protocol information}: QKD$_{\rm A}$ and QKD$_{\rm B}$ obtain, respectively, the raw keys $k'_{\rm A}$ and $k'_{\rm B}$ and the protocol information $p_{\rm A,info}$ and $p_{\rm B,info}$, or the abort symbol $\perp$. 
If the result is different from $\perp$, QKD$_{\rm A}$ uses the share protocol of a VSS scheme to create $q={s \choose s-t'}$ shares of $k'_{\rm A}$ and distributes them among the CP$_{{\rm A}i}$, with $i=1,\ldots,s$. Likewise, QKD$_{\rm B}$ creates $q'={r \choose r-t''}$ shares of $k'_{\rm B}$ and distributes them among the CP$_{{\rm B}i'}$, with $i'=1,\ldots,r$. Let $k'_{{\rm A}ij}$ ($k'_{{\rm B}i'j'}$) be the $j$-th ($j'$-th) share of $k'_{\rm A}$ ($k'_{\rm B}$) received by CP$_{{\rm A}i}$ (CP$_{{\rm B}i'}$), with $j=1,\ldots,q$ ($j'=1,\ldots,q'$). Also, QKD$_{\rm A}$ (QKD$_{\rm B}$) sends $p_{\rm A,info}$ ($p_{\rm B,info}$) to all CP$_{{\rm A}i}$ (CP$_{{\rm B}i'}$). Since by assumption QKD$_{\rm A}$ (QKD$_{\rm B}$) is honest, all CP$_{{\rm A}i}$ (CP$_{{\rm B}i'}$) receive the same $p_{\rm A,info}$ ($p_{\rm B,info}$) and the shares $k'_{{\rm A}ij}$ ($k'_{{\rm B}i'j'}$) are equal for all $i$ ($i'$). Next, say the first $2t''+1$ CP$_{{\rm B}i'}$ send $p_{\rm B,info}$ to all CP$_{{\rm A}i}$. Likewise, say the first $2t'+1$ CP$_{{\rm A}i}$ send $p_{\rm A,info}$ (for the detected events) to all CP$_{{\rm B}i'}$. Each CP$_{{\rm A}i}$ (CP$_{{\rm B}i'}$) uses majority voting to determine $p_{\rm B,info}$ ($p_{\rm A,info}$) from the information received. Note that since by assumption the number of corrupted units CP$_{{\rm A}i}$ (CP$_{{\rm B}i'}$) is at most $t'$ ($t''$), $2t'+1$ ($2t''+1$) copies of $p_{\rm A,info}$ ($p_{\rm B,info}$) are enough for the honest parties to be able to reconstruct the correct value of these bit strings by using majority voting. \item {\it Sifting}: Each CP$_{{\rm A}i}$ uses $p_{\rm A,info}$ and $p_{\rm B,info}$ to obtain two bit strings, $k_{\rm A{\it ij}, key}$ and $k_{\rm A{\it ij}, est}$, from $k'_{{\rm A}ij}$. The former (latter) bit string is the part of $k'_{{\rm A}ij}$ that is used for key generation (parameter estimation). 
Likewise, Bob's CP$_{{\rm B}i'}$ do the same with $k'_{{\rm B}i'j'}$ and obtain $k_{\rm B{\it i'j'}, key}$ and $k_{\rm B{\it i'j'}, est}$. \item {\it Parameter estimation}: All CP$_{{\rm A}i}$ and CP$_{{\rm B}i'}$ use the reconstruct protocol of a VSS scheme to obtain both $k_{\rm A, est}$ and $k_{\rm B, est}$, which are the parts of $k'_{\rm A}$ and $k'_{\rm B}$ that are used for parameter estimation. For this, they send each other their shares $k_{\rm A{\it ij}, est}$ and $k_{\rm B{\it i'j'}, est}$, and each of them uses majority voting to obtain $k_{\rm A{\it j}, est}$ and $k_{\rm B{\it j'}, est}$ for all $j=1,\ldots,q$ and $j'=1,\ldots,q'$. Afterward, they calculate $k_{\rm A, est}=\oplus_{j=1}^qk_{\rm A{\it j}, est}$ and $k_{\rm B, est}=\oplus_{j'=1}^{q'}k_{\rm B{\it j'}, est}$. With $p_{\rm A,info}$, $p_{\rm B,info}$, $k_{\rm A, est}$ and $k_{\rm B, est}$, each CP$_{{\rm A}i}$ and CP$_{{\rm B}i'}$ performs locally the parameter estimation step of the protocol ({\it e.g.}, they estimate the phase error rate). If the estimated values exceed certain tolerated values, they abort. \item {\it Error correction}: The CP$_{{\rm A}i}$ and CP$_{{\rm B}i'}$ perform error correction (for a pre-fixed QBER value) on the parts of $k'_{\rm A}$ and $k'_{\rm B}$ that are used for key distillation, which we denote by $k_{\rm A, key}$ and $k_{\rm B, key}$, by acting on their shares $k_{\rm A{\it ij}, key}$ and $k_{\rm B{\it i'j'}, key}$ respectively. For this, each CP$_{{\rm A}i}$ (CP$_{{\rm B}i'}$) applies certain matrices $M_{\rm EC}$ to $k_{\rm A{\it ij}, key}$ ($k_{\rm B{\it i'j'}, key}$) to obtain $s_{{\rm A}ij}=M_{\rm EC}k_{\rm A{\it ij}, key}$ ($s_{{\rm B}i'j'}=M_{\rm EC}k_{\rm B{\it i'j'}, key}$). Afterward, they use the reconstruct protocol of a VSS scheme to guarantee that all CP$_{{\rm B}i'}$ obtain $s_{\rm A}=M_{\rm EC}k_{\rm A, key}$ and $s_{\rm B}=M_{\rm EC}k_{\rm B, key}$. 
That is, all CP$_{{\rm A}i}$ and CP$_{{\rm B}i'}$ first send to all the classical post-processing units at Bob's side the bit strings $s_{{\rm A}ij}$ and $s_{{\rm B}i'j'}$. Then, each of Bob's CP units uses majority voting to reconstruct locally $s_{{\rm A}j}$ and $s_{{\rm B}j'}$, for all $j$ and $j'$, from $s_{{\rm A}ij}$ and $s_{{\rm B}i'j'}$. Finally, they obtain $s_{\rm A}=\oplus_{j=1}^qs_{{\rm A}j}$ and $s_{\rm B}=\oplus_{j'=1}^{q'}s_{{\rm B}j'}$. Next, Bob corrects $k_{\rm B, key}$. For this, say all CP$_{{\rm B}i'}$ which have the $j'$-th share $k_{\rm B{\it i'j'}, key}$ for a pre-fixed index $j'=1,\ldots,q'$, flip certain bits of this share depending on the actual values of $s_{\rm A}$ and $s_{\rm B}$. This whole process is repeated until the error correction procedure ends. Let ${\hat k}_{\rm A{\it ij}, key}$ and ${\hat k}_{\rm B{\it i'j'}, key}$ denote the shares $k_{\rm A{\it ij}, key}$ and $k_{\rm B{\it i'j'}, key}$ after error correction, and let leak$_{\rm EC}$ bits be the syndrome information interchanged between Alice and Bob during this step. That is, ${\hat k}_{\rm A{\it ij}, key}$ and ${\hat k}_{\rm B{\it i'j'}, key}$ are actually equal to $k_{\rm A{\it ij}, key}$ and $k_{\rm B{\it i'j'}, key}$ except for the bit strings $k_{\rm B{\it i'j'}, key}$ whose bits have been flipped during error correction. \item {\it Error verification}: All CP$_{{\rm A}i}$ and CP$_{{\rm B}i'}$ check that the error correction step was indeed successful. For this, the CP$_{{\rm A}i}$ use the RBS scheme introduced in Appendix~\ref{tool} to randomly select a universal$_2$ hash function, $h_{\rm V}$. Then, they compute a hash $h_{{\rm A}ij}=h_{\rm V}({\hat k}_{\rm A{\it ij}, key})$ of length $\lceil\log_2{(4/\epsilon_{\rm cor})}\rceil$ bits, and say the first $2t'+1$ CP$_{{\rm A}i}$ send the hash function to all CP$_{{\rm B}i'}$. 
Bob's CP units reconstruct the hash function by using majority voting and then they calculate $h_{{\rm B}i'j'}=h_{\rm V}({\hat k}_{\rm B{\it i'j'}, key})$. Afterward, all CP$_{{\rm A}i}$ and CP$_{{\rm B}i'}$ use the reconstruct protocol of a VSS scheme to obtain $h_{{\rm A}}=\oplus_{j=1}^qh_{{\rm A}j}$ and $h_{{\rm B}}=\oplus_{j'=1}^{q'}h_{{\rm B}j'}$ from $h_{{\rm A}ij}$ and $h_{{\rm B}i'j'}$. That is, they send each other $h_{{\rm A}ij}$ and $h_{{\rm B}i'j'}$ and they use majority voting to determine $h_{{\rm A}j}$ and $h_{{\rm B}j'}$, for all $j$ and $j'$, from $h_{{\rm A}ij}$ and $h_{{\rm B}i'j'}$. Finally, each of them checks locally whether or not $h_{\rm A}=h_{\rm B}$. If they are not equal, they abort. In so doing, we have that the bit strings ${\hat k}_{\rm A, key}=\oplus_{j=1}^q {\hat k}_{\rm A{\it j}, key}$ and ${\hat k}_{\rm B, key}=\oplus_{j'=1}^{q'} {\hat k}_{\rm B{\it j'}, key}$ are equal except for a minuscule probability $\epsilon_{\rm cor}$, where ${\hat k}_{\rm A{\it j}, key}$ (${\hat k}_{\rm B{\it j'}, key}$) are obtained from ${\hat k}_{\rm A{\it ij}, key}$ (${\hat k}_{\rm B{\it i'j'}, key}$) by using majority voting. \item {\it Generation of shares of an $\epsilon$-secure key}: All CP$_{{\rm A}i}$ and CP$_{{\rm B}i'}$ extract from ${\hat k}_{\rm A, key}$ and ${\hat k}_{\rm B, key}$ the shares of an $\epsilon_{\rm sec}$-secret key, $k_{\rm A}$ and $k_{\rm B}$. For this, the CP$_{{\rm A}i}$ use the RBS scheme to randomly select a proper universal$_2$ hash function, $h_{\rm P}$. Next, they obtain $k_{{\rm A}ij}=h_{\rm P}({\hat k}_{\rm A{\it ij}, key})$ and say the first $2t'+1$ CP$_{{\rm A}i}$ send $h_{\rm P}$ to all CP$_{{\rm B}i'}$. Bob's CP units use majority voting to determine $h_{\rm P}$ from the information received and they calculate $k_{{\rm B}i'j'}=h_{\rm P}({\hat k}_{\rm B{\it i'j'}, key})$. 
The function $h_{\rm P}$ removes Eve's information from ${\hat k}_{\rm A, key}$, which includes the syndrome information leak$_{\rm EC}$ disclosed during error correction, the hash value of length $\lceil\log_2{(4/\epsilon_{\rm cor})}\rceil$ bits disclosed during error verification, and Eve's information about the key according to the estimated phase error rate. \end{enumerate} As stated in Sec.~\ref{ups2}, when $t'<M_{\rm A}/3$ and $t''<M_{\rm B}/3$, where $M_{\rm A}$ ($M_{\rm B}$) is the number of CP$_{{\rm A}i}$ (CP$_{{\rm B}i'}$) that do not abort, $k_{{\rm A}ij}$ and $k_{{\rm B}i'j'}$ are shares of an $\epsilon$-secure key, $k_{\rm A}$ and $k_{\rm B}$. This is so because the condition $t'<M_{\rm A}/3$ (or, equivalently, $s-t'-(s-M_{\rm A})>2t'$) guarantees that for all $j=1,\ldots,q$, there are at least $2t'+1$ units CP$_{{\rm A}i}$ which send shares $k_{{\rm A}ij}$ to Alice. To see this, note that each share $k_{{\rm A}j}$, for all $j$, is held by $s-t'$ units CP$_{{\rm A}i}$, and by assumption we have that at most $s-M_{\rm A}$ of them could have aborted. A similar argument applies to the condition $t''<M_{\rm B}/3$. To reconstruct $k_{\rm A}$ and $k_{\rm B}$, Alice and Bob can use majority voting to obtain $k_{{\rm A}j}$ and $k_{{\rm B}j'}$ from $k_{{\rm A}ij}$ and $k_{{\rm B}i'j'}$, respectively, and afterward they calculate $k_{\rm A}=\oplus_{j=1}^q k_{{\rm A}j}$ and $k_{\rm B}=\oplus_{j'=1}^{q'} k_{{\rm B}j'}$. \section{Alternative solution for QKD with honest QKD modules and malicious classical post-processing units}\label{alter} In this Appendix we present a conceptually simple, although less efficient, solution than {\it Protocol}~$2$ for the case where $r=s$. The main idea runs as follows. 
First, QKD$_{\rm A}$ and QKD$_{\rm B}$ perform $s$ independent QKD sessions, each of which is realised with a different pair of units CP$_{{\rm A}i}$ and CP$_{{\rm B}i}$ to generate a supposedly $(\epsilon/s)$-secure key, $k_{{\rm A}i}$ and $k_{{\rm B}i}$, or the abort symbol $\perp_i$. For ease of illustration, we shall assume that the length of each $k_{{\rm A}i}$ and $k_{{\rm B}i}$ is $N$ bits for all $i$. Of course, if say CP$_{{\rm A}i}$ and/or CP$_{{\rm B}i}$ is corrupted then we have that $k_{{\rm A}i}$ and $k_{{\rm B}i}$ could be compromised and known to Eve. Then, in a second step, the keys $k_{{\rm A}i}$ and $k_{{\rm B}i}$ are concatenated to form ${\hat k}_{\rm A}=[k_{{\rm A}1},\ldots,k_{{\rm A}M}]$ and ${\hat k}_{\rm B}=[k_{{\rm B}1},\ldots,k_{{\rm B}M}]$, where $M$ denotes the number of keys $k_{{\rm A}i}$ and $k_{{\rm B}i}$ which are different from the abort symbol. Finally, we apply error verification and privacy amplification to ${\hat k}_{\rm A}$ and ${\hat k}_{\rm B}$ to obtain an $\epsilon$-secure key, $k_{\rm A}$ and $k_{\rm B}$. Importantly, this last step is performed by the classical post-processing units in a distributed setting by acting only on shares of ${\hat k}_{\rm A}$ and ${\hat k}_{\rm B}$. Below we describe the different steps of the protocol in more detail. \\ \noindent{\it Alternative solution to Protocol $2$}: \begin{enumerate} \item {\it Generation of $(\epsilon/s)$-secure keys}: QKD$_{\rm A}$ and QKD$_{\rm B}$ perform $s$ independent QKD sessions, each of which is run with a different pair of units CP$_{{\rm A}i}$ and CP$_{{\rm B}i}$, with $i=1,\ldots,s$, to obtain the bit strings $k_{{\rm A}i}$ and $k_{{\rm B}i}$, which are supposed to be $(\epsilon/s)$-secure, or the abort symbol $\perp_i$. 
\item {\it Distribution of shares of $(\epsilon/s)$-secure keys}: Each CP$_{{\rm A}i}$ sends $k_{{\rm A}i}$ to the other classical post-processing units at Alice's side by using the share protocol of a VSS scheme, and all CP$_{{\rm A}i'}$ confirm to each other that they have received their shares. Let $k'_{{\rm A}i'ij}$ be the $j$-th share of $k_{{\rm A}i}$ received by CP$_{{\rm A}i'}$. Likewise, the units CP$_{{\rm B}i}$ act similarly with $k_{{\rm B}i}$. Let $k'_{{\rm B}i'ij}$ be the $j$-th share of $k_{{\rm B}i}$ received by CP$_{{\rm B}i'}$. Each CP$_{{\rm A}i'}$ defines locally the bit strings $k''_{{\rm A}i'ij}=[\vec{0}_1,\ldots, \vec{0}_{i-1}, k'_{{\rm A}i'ij},\vec{0}_{i+1},\ldots, \vec{0}_{M}]$, where $\vec{0}$ is the $N$-bit zero vector and $M$ is the number of keys $k_{{\rm A}i}$ and $k_{{\rm B}i}$ which are different from $\perp_i$. Likewise, the CP$_{{\rm B}i'}$ form $k''_{{\rm B}i'ij}$ from $k'_{{\rm B}i'ij}$. \item {\it Error verification}: The ${\rm CP}_{{\rm A}i'}$ use the RBS scheme to randomly select a universal$_2$ hash function, $h_{\rm V}$. Then, each of them computes locally a hash $h_{{\rm A}i'ij}=h_{\rm V}(k''_{{\rm A}i'ij})$ of length $\lceil\log_2{(4/\epsilon_{\rm cor})}\rceil$ bits for all its bit strings $k''_{{\rm A}i'ij}$, and say the first $2t'+1$ ${\rm CP}_{{\rm A}i'}$ send the hash function to all CP units at Bob's side. Each ${\rm CP}_{{\rm B}i'}$ reconstructs locally the hash function by using majority voting and obtains $h_{{\rm B}i'ij}=h_{\rm V}(k''_{{\rm B}i'ij})$ for all its bit strings $k''_{{\rm B}i'ij}$. Next, all ${\rm CP}_{{\rm A}i'}$ and ${\rm CP}_{{\rm B}i'}$ use the reconstruct protocol of a VSS scheme to obtain both $h_{{\rm A}}=\oplus_{i=1}^M\oplus_{j=1}^qh_{{\rm A}ij}$ and $h_{{\rm B}}=\oplus_{i=1}^M\oplus_{j=1}^qh_{{\rm B}ij}$. 
That is, they send each other the bit strings $h_{{\rm A}i'ij}$ and $h_{{\rm B}i'ij}$, and each of them uses majority voting to obtain $h_{{\rm A}ij}$ and $h_{{\rm B}ij}$ from $h_{{\rm A}i'ij}$ and $h_{{\rm B}i'ij}$. Finally, each ${\rm CP}_{{\rm A}i'}$ and ${\rm CP}_{{\rm B}i'}$ checks locally if $h_{\rm A}=h_{\rm B}$. If they are not equal, they output the abort symbol $\perp_{i'}$. If they are equal, they proceed to the next step. This error verification step guarantees that $k''_{\rm A}=\oplus_{i=1}^M\oplus_{j=1}^qk''_{{\rm A}ij}$ and $k''_{\rm B}=\oplus_{i=1}^M\oplus_{j=1}^qk''_{{\rm B}ij}$ are equal except for a minuscule probability $\epsilon_{\rm cor}$, where $k''_{{\rm A}ij}$ ($k''_{{\rm B}ij}$) denote the bit strings that could be obtained from $k''_{{\rm A}i'ij}$ ($k''_{{\rm B}i'ij}$) by using majority voting. \item {\it Generation of shares of an $\epsilon$-secure key}: The ${\rm CP}_{{\rm A}i'}$ use the RBS scheme to randomly select a universal$_2$ hash function, $h_{\rm P}$, and they compute $k_{{\rm A}i'ij}=h_{\rm P}(k''_{{\rm A}i'ij})$. Then, say the first $2t'+1$ ${\rm CP}_{{\rm A}i'}$ send $h_{\rm P}$ to all CP at Bob's side which reconstruct locally the hash function by using majority voting, and each ${\rm CP}_{{\rm B}i'}$ computes $k_{{\rm B}i'ij}=h_{\rm P}(k''_{{\rm B}i'ij})$. The function $h_{\rm P}$ maps each $(M\times{}N)$-bit string $k''_{{\rm A}i'ij}$ ($k''_{{\rm B}i'ij}$) to a shorter bit string $k_{{\rm A}i'ij}$ ($k_{{\rm B}i'ij}$) of size $(M-2t')\times{}N-\lceil\log_2{(4/\epsilon_{\rm cor})}\rceil$ bits. \end{enumerate} The reason for reducing the size of $k''_{{\rm A}i'ij}$ and $k''_{{\rm B}i'ij}$ by $2t'\times{}N$ bits in the last step of the protocol is due to the following. In the worst-case scenario, we have that all corrupted CP$_{{\rm A}i}$ could be partnered with honest CP$_{{\rm B}i}$ (and vice versa). 
This means, in particular, that there could be up to $2t'$ keys $k_{{\rm A}i}$ and $k_{{\rm B}i}$ which could be compromised and, more importantly, Alice and Bob cannot rule out that these keys contribute to $k''_{{\rm A}i'ij}$ and $k''_{{\rm B}i'ij}$. Given that $t'<M_{{\rm A}i}/3$ and $t''<M_{{\rm B}i}/3$ for all $i=1,\ldots,M$, where $M_{{\rm A}i}$ ($M_{{\rm B}i}$) denotes the number of CP$_{{\rm A}i'}$ (CP$_{{\rm B}i'}$) that do not produce $\perp_i$ but output post-processed shares, $k_{{\rm A}i'ij}$ ($k_{{\rm B}i'ij}$), from $k_{{\rm A}i}$ ($k_{{\rm B}i}$), then the final key, $k_{\rm A}$ and $k_{\rm B}$, is $\epsilon$-secure. Once again, to reconstruct it, Alice (Bob) can use majority voting to obtain $k_{{\rm A}ij}$ ($k_{{\rm B}ij}$) from $k_{{\rm A}i'ij}$ ($k_{{\rm B}i'ij}$) and then calculate $k_{\rm A}=\oplus_{i=1}^M\oplus_{j=1}^q k_{{\rm A}ij}$ ($k_{\rm B}=\oplus_{i=1}^M\oplus_{j=1}^q k_{{\rm B}ij}$). To conclude this Appendix, let us briefly compare the solution above with that provided by {\it Protocol}~$2$. For this, we first note that the approach above runs $s$ independent QKD sessions while {\it Protocol}~$2$ can distill an $\epsilon$-secure key from one single QKD run. The second main difference is related to the resulting secret key rate. To simplify our discussion, we shall assume, like above, that the length of each $k_{{\rm A}i}$ and $k_{{\rm B}i}$ is $N$ bits for all $i$. After running $s$ QKD sessions, the alternative protocol above can deliver a final key of length roughly $\approx(M-2t')\times{}N$ bits, while if we run {\it Protocol}~$2$ $s$ times the length of the final key would be roughly $s\times{}N$ secret bits. That is, even if we consider the best-case scenario for the alternative approach above ({\it i.e.}, $M=s$), {\it Protocol}~$2$ provides a secret key rate that is $\approx{}s/(s-2t')$ times higher than that provided by this alternative method. 
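The counting behind this comparison is elementary; the following sketch (with hypothetical helper names of our own, not part of the protocols) evaluates both key lengths together with the share count $q={s \choose s-t'}$ handled by the underlying VSS scheme:

```python
from math import comb

def alternative_key_len(M, t_prime, N):
    """Alternative protocol: M non-aborting N-bit session keys, of which
    up to 2*t_prime may be compromised in the worst case."""
    return (M - 2 * t_prime) * N

def protocol2_key_len(s, N):
    """Running Protocol 2 over s QKD sessions yields roughly s*N secret bits."""
    return s * N

def vss_share_count(s, t_prime):
    """Number of shares q = C(s, s - t') created by the VSS share protocol."""
    return comb(s, s - t_prime)
```

For instance, with $s=M=4$, $t'=1$ and $N$-bit session keys, the alternative approach delivers $2N$ bits while {\it Protocol}~$2$ delivers $4N$, in agreement with the ratio $s/(s-2t')=2$.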
\section{General adversary structures}\label{ap_struct} In the main text we have considered the security of QKD against a so-called threshold active adversary structure. As we have already seen, active corruption means that there could exist a central adversary, Eve, who fully controls the behaviour of all the corrupted parties, which do not necessarily have to follow the prescriptions of the protocol. On the other hand, by a threshold adversary structure we refer to an adversary who can corrupt up to $t$ (but not more) of the parties. This is, however, a particular case of what is called a general mixed adversary structure~\cite{fitzi}. By mixed corruption we mean that some of the corrupted parties could also be passively (in contrast to actively) corrupted. Passive corruption indicates that the parties could leak all their information to the adversary, but otherwise they follow all the indications of the protocol correctly. General adversary structures, on the other hand, refer to the fact that the subsets that contain all the potentially corrupted parties could have an arbitrary distribution, {\it i.e.}, they do not need to consist of all possible combinations of up to $t$ parties. To model the corruption capability of a general adversary one can use a so-called $(\Sigma,\Omega)$-adversary structure. This is basically a set that contains all the potentially corruptible subsets of parties. More precisely, let $P$ denote the set of all parties, and let $\Sigma$ and $\Omega$ be structures for $P$ satisfying $\Sigma\subseteq\mathscr{P}(P)$ and $\Omega\subseteq\Sigma$ with $\mathscr{P}(P)$ being the power set of $P$. Here, a structure for $P$ means a subset $\Gamma$ of $\mathscr{P}(P)$ that is closed under taking subsets. That is, if $S\in\Gamma$ and $S'\subseteq{}S$ then $S'\in\Gamma$. 
Then, a $(\Sigma,\Omega)$-adversary is an adversary that can passively (actively) corrupt a set $\sigma$ ($\omega$) of parties with $\sigma\cup{}\omega\in\Sigma$ and $\omega\in\Omega$. Below, for ease of notation, whenever we describe a structure, we only list its maximal sets. That is, it is implicitly understood (even if it is not explicitly written) that its subsets also belong to the structure. Next, we introduce the share and the reconstruct protocols of a VSS scheme~\cite{maurer_smpc} that provides information-theoretic security against a general $(\Sigma,\Omega)$-adversary given that $P\notin\Sigma\sqcup\Omega\sqcup\Omega$, which can be proven to be a necessary and sufficient condition to achieve security in this framework~\cite{maurer_smpc}. That is, the VSS below is optimal in this sense. Here, $\sqcup$ is an operation on structures defined as $\Gamma_1\sqcup\Gamma_2=\{S_1\cup{}S_2:S_1\in\Gamma_1, S_2\in\Gamma_2\}$. That is, $\Gamma_1\sqcup\Gamma_2$ is a structure that contains all unions of one element of $\Gamma_1$ and one element of $\Gamma_2$. If $\Sigma=\Omega$ ({\it i.e.}, when all corrupted parties are active), note that the condition $P\notin\Sigma\sqcup\Omega\sqcup\Omega$ coincides with that introduced in~\cite{smc1,smc2}. Without loss of generality, below we assume that $\Sigma$ contains $q$ maximal sets $\sigma_i$, {\it i.e.}, $\Sigma=\{\sigma_1, \sigma_2, \ldots, \sigma_q\}$. \\ \noindent{}Share protocol: \begin{enumerate} \item The dealer uses a $q$-out-of-$q$ SS scheme to split the message $m$ into $q$ shares $m_i$, with $i=1,\ldots,q$. \item For each $i=1,\ldots,q$, the dealer sends $m_i$ over a secure channel to each party in the set $\sigma'_i$, where $\sigma'_i$ is defined as the complement of $\sigma_i$. If a party does not receive his share, he takes as default share say a zero bit string. \item All pairs of parties in $\sigma'_i$ send each other their shares $m_i$ over a secure channel to check that their shares are indeed equal. 
If an inconsistency is found, they complain using a broadcast channel. \item If there is a complaint in $\sigma'_i$, the dealer broadcasts $m_i$ to all parties and they accept the share received. If the dealer fails to do so, the protocol aborts. \end{enumerate} \noindent{}Reconstruct protocol: \begin{enumerate} \item All parties send their shares to all other parties over an authenticated channel. \item Each party reconstructs locally the shares $m_i$ $\forall i$, and obtains $m=\oplus_{i=1}^{q} m_i$. For this, let $m_{li}$ be the value for $m_i$ sent by the $l$-th party in $\sigma'_i$. Then, each party chooses the unique value $m_i$ such that there exists a $\omega\in\Omega$ satisfying $m_{li}=m_i$ for all $l\in\sigma'_i-\omega$. \end{enumerate} As in the case of the share protocol presented in the main text, the share protocol above also requires the availability of a broadcast channel. Fortunately, however, in this framework it is also possible to simulate such a channel by efficient protocols, with polynomial message and computation complexity, between the different parties~\cite{broad4}. For this, the requirement is that $P\notin\Omega\sqcup\Omega\sqcup\Omega$~\cite{maurer_smpc}. To conclude this Appendix, let us mention that the simple RBS protocol described in the main text can be straightforwardly adapted to be secure also against a general $(\Sigma,\Omega)$-adversary given that $P\notin\Sigma\sqcup\Omega\sqcup\Omega$. For this, we only need to make two modifications. First, we replace the share and reconstruct protocols with the ones described above. And, second, now we do not employ the first $t+1$ parties $P_i\in{}P$ to produce a random bit string each (see step $1$ of the RBS protocol in the main text). Instead, these random bit strings are produced by all the parties in one set $\rho\in\mathscr{P}(P)$ such that $\rho\notin\Sigma$. In so doing, we guarantee that there is at least one honest party that generates a random bit string. 
Here, one could take, for instance, the set $\rho$ with the minimum number of parties. \section{QKD secure against general adversary structures}\label{gen_sec} Finally, here we revisit briefly the three practical scenarios that we have considered in the main text, and we discuss how one could easily adapt {\it Protocols}~$1$, $2$ and $3$ to make them secure against a $(\Sigma,\Omega)$-adversary. For this, we shall assume that the structures $\Sigma$ and $\Omega$ are known to both Alice and Bob, which is a standard assumption in this framework. The case of {\it Protocol}~$1$ is rather simple, as it does not require any change. Since in that scenario one assumes that the classical post-processing units CP$_{\rm A}$ and CP$_{\rm B}$ are both honest, it is guaranteed that the concatenated bit strings $k'_{\rm A}$ and $k'_{\rm B}$ are $\epsilon_{\rm cor}$-correct. Also, we have that Eve could know at most $t\times{}N$ bits of say $k'_{\rm A}$ except with probability $\epsilon_{\rm sec}$, where the parameter $t$ now refers to the size of the biggest set in $\Sigma$. That is, here $t$ denotes the maximum number of pairs of QKD modules that can be passively corrupted. Then, by applying a privacy amplification step to $k'_{\rm A}$ and $k'_{\rm B}$, the units CP$_{\rm A}$ and CP$_{\rm B}$ can directly extract an $\epsilon_{\rm sec}$-secret key, $k_{\rm A}$ and $k_{\rm B}$, of length $(M-t)\times{}N$ bits. In fact, one could even use a tighter bound here by selecting the parameter $t$ as the maximum size of all maximal sets in a structure $\Sigma'$ that is obtained from $\Sigma$ by removing from all of its subsets those pairs of QKD modules which have aborted. The case of {\it Protocol}~$2$, on the other hand, requires a few minor modifications. First, now Alice and Bob have to use the share, reconstruct, and RBS protocols introduced above instead of the protocols presented in the main text. 
The use of these protocols implies that the $(\Sigma,\Omega)$-adversary structure has to satisfy $P\notin\Sigma\sqcup\Omega\sqcup\Omega$ on both Alice's and Bob's sides to guarantee security. Second, in steps $1$, $5$ and $6$ of {\it Protocol}~$2$, now we do not employ the first $2t'+1$ ($2t''+1$) units CP$_{{\rm A}i}$ (CP$_{{\rm B}i'}$) to reveal the information $p_{\rm A,info}$ ($p_{\rm B,info}$) as well as the hash functions $h_{\rm V}$ and $h_{\rm P}$. Instead, this information is revealed by all units in one set $\sigma'_i$, where $\sigma'_i$ is again the complement of $\sigma_i\in\Sigma$, both for Alice and Bob. One could take, for instance, the set $\sigma'_i$ with the minimum number of parties. Afterward, the information is reconstructed by using the second step of the reconstruct protocol introduced above. Note that such a VSS scheme guarantees that, if all the parties in $\sigma'_i$ receive the same information, this information can be reconstructed correctly. And this is indeed guaranteed by {\it Protocol}~$2$, as all units CP$_{{\rm A}i}$ (CP$_{{\rm B}i'}$) obtain precisely the same information $p_{\rm A,info}$ ($p_{\rm B,info}$), $h_{\rm V}$ and $h_{\rm P}$. Lastly, to reconstruct the keys $k_{\rm A}$ and $k_{\rm B}$ from the shares $k_{{\rm A}ij}$ and $k_{{\rm B}i'j'}$, Alice and Bob now use the second step of the reconstruct protocol introduced in Appendix~\ref{ap_struct}. Finally, one could modify {\it Protocol}~$3$ as follows. As in the previous case, Alice and Bob replace the share, reconstruct, and RBS protocols with the protocols introduced above, given, of course, that the condition $P\notin\Sigma\sqcup\Omega\sqcup\Omega$ is fulfilled on both Alice's and Bob's sides. Also, in step~$1$, one obviously uses the version of {\it Protocol}~$2$ described in the previous paragraph. In addition, in step~$3$, one replaces the method to announce the hash function $h_{\rm P}$ with the procedure described in the previous paragraph.
Moreover, also in step $3$, the parameter $t$ that appears in the expression of the secret key length now refers to the size of the largest set in $\Sigma$. That is, $t$ is the maximum number of pairs of QKD modules that can be passively corrupted (or, alternatively, one can select $t$ as the maximum size of all maximal sets in a structure $\Sigma'$ that is obtained from $\Sigma$ by removing from all of its subsets those pairs of QKD modules which have aborted). Finally, Alice and Bob reconstruct $k_{\rm A}$ and $k_{\rm B}$ by using again the second step of the reconstruct protocol introduced above. Similar arguments can be applied as well to the protocol introduced in Appendix~\ref{alter} as an alternative to {\it Protocol}~$2$. \end{document}
\begin{document} \title{Criteria for homotopic maps to be so along monotone homotopies} \author{Sanjeevi Krishnan} \address{Laboratoire d'Informatique de l'\'{E}cole Polytechnique\\Palaiseau, France} \begin{abstract} The state spaces of machines admit the structure of time. A homotopy theory respecting this additional structure can detect machine behavior unseen by classical homotopy theory. In an attempt to bootstrap classical tools into the world of abstract spacetime, we identify criteria for classically homotopic, monotone maps of pospaces to \textit{future homotope}, or homotope along homotopies monotone in both coordinates, to a common map. We show that consequently, a hypercontinuous lattice equipped with its Lawson topology is \textit{future contractible}, or contractible along a future homotopy, if its underlying space has connected CW type. \end{abstract} \maketitle \section{Introduction} The state spaces of machines often admit partial orders which describe the causal relationship between states. For example, the unit interval $\mathbb{I}$ equipped with its standard total order represents the states of a finite, sequential process. Figure \ref{fig:2sem} illustrates the state space $X$ of two sequential processes accessing a binary semaphore. Thinking of the upper corner as the desired end state, we view monotone paths $\mathbb{I}\rightarrow X$ reaching the striped zone as unsafe executions of our binary system, doomed never to terminate successfully. We can thus articulate critical machine behavior in the language of partially ordered spaces. \begin{figure} \caption{State space of a binary semaphore, as in \cite[Figure 7]{fgr:ditop}} \label{fig:2sem} \end{figure} A homotopy theory respecting this additional structure of time potentially can detect machine behavior invisible to classical homotopy theory, as demonstrated in \cite{fgr:ditop}. 
A suitable theory should distinguish between the homotopy equivalent state spaces given in Figure \ref{fig:inequivalent}, for example. In an attempt to exploit classical arguments in a homotopy theory of preordered spaces, we seek criteria under which two homotopic, monotone maps $X\rightarrow Y$ of pospaces are in fact homotopic through monotone maps. Certain cubical approximation results in \cite{fajstrup:approx} implicitly use one such criterion: when $Y$ is a convex sub-pospace of an ordered topological vector space. Lemma \ref{lem:main.result} identifies alternative criteria which do not require vector space structures: when $X$ is a compact pospace whose ``lower'' sets generated by open subsets are open and $Y$ is a continuous lattice equipped with its Lawson topology. We can further refine classical homotopy theory, following \cite{grandis:d}. Consider two monotone maps $f,g:X\rightarrow Y$ of preordered spaces. We say that $f$ \textit{future homotopes to} $g$ if a classical homotopy from $f$ to $g$ defines a monotone map $X\times\mathbb{I}\rightarrow Y$. We call a preordered space \textit{future contractible} if the identity on it future homotopes to a constant map. Lemma \ref{lem:future.homotope} identifies criteria under which two monotone maps $X\rightarrow Y$ homotopic through monotone maps future homotope to a common map: when $X$ is compact Hausdorff and $Y$ is the order-theoretic dual of a continuous lattice $L$ equipped with the dual Lawson topology of $L$. We obtain the following consequence. 
\noindent {\bf Proposition \ref{prop:dicontract}} \hspace{1mm}\textit{A hypercontinuous lattice equipped with its Lawson topology is future contractible if its underlying space has connected CW type.}\\ \begin{figure} \caption{Partially ordered state spaces, as in \cite[Figure 14]{fgr:ditop}} \label{fig:inequivalent} \end{figure} It follows that a hypercontinuous lattice equipped with its Lawson topology is ``past'' contractible if its underlying space has connected CW type, by symmetry. In \S\ref{sec:pospaces}, we review some basic definitions, examples, and properties of preordered spaces. In \S\ref{sec:homotopies} we prove Lemmas \ref{lem:main.result} and \ref{lem:future.homotope}, followed by Proposition \ref{prop:dicontract}. \section{Preordered spaces}\label{sec:pospaces} A \textit{preordered space} is a preordered set equipped with a topology. An example is a \textit{topological sup-semilattice} (\textit{inf-semilattice}), a sup-semilattice (inf-semilattice) equipped with a topology making the binary $\sup$ ($\inf$) operator jointly continuous. A \textit{monotone map} is a continuous, (weakly) monotone function between preordered spaces. The forgetful functor $$U:\mathscr{Q}\rightarrow\mathscr{T}$$ from the category $\mathscr{Q}$ of preordered sets and monotone functions to the category $\mathscr{T}$ of spaces and continuous functions has a left adjoint. We write $\ddot{U}:\mathscr{Q}\rightarrow\mathscr{Q}$ for the composite of $U$ with its left adjoint, and we write $\epsilon:\ddot{U}\rightarrow\mathrm{id}_{\mathscr{Q}}$ for the counit of the adjunction. For each preordered space $X$, we write $\leqslant_X$ for its preorder and $$\upper{X}{A}=\bigcup_{a\in A}\{x\;|\;a\leqslant_Xx\},\quad\downer{X}{A}=\bigcup_{a\in A}\{x\;|\;x\leqslant_Xa\}$$ for the ``upper'' and ``lower'' sets, respectively, generated by a subset $A\subset X$. 
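These constructions are easy to compute in a finite example, as is the set identity $\downer{L}{V}=\pi_2((V\times L)\cap\sup^{-1}(V))$ that drives the proof of Lemma \ref{lem:non.branching} below. The following throwaway check (our code, with topology playing no role) takes $L$ to be the divisors of $36$ ordered by divisibility, a sup-semilattice with $\sup=\operatorname{lcm}$:

```python
from math import lcm

L = [d for d in range(1, 37) if 36 % d == 0]   # divisors of 36
leq = lambda x, y: y % x == 0                  # x <= y  iff  x divides y

def down(A):
    """Lower set generated by A."""
    return {x for x in L if any(leq(x, a) for a in A)}

def up(A):
    """Upper set generated by A."""
    return {x for x in L if any(leq(a, x) for a in A)}

def pi2(V):
    """pi_2((V x L) \\cap sup^{-1}(V)), with sup = lcm."""
    return {x for x in L if any(lcm(v, x) in V for v in V)}

for V in [{6}, {4, 9}, {36}, set(L)]:
    assert down(V) == pi2(V)
```

Openness of $\downer{L}{V}$, the actual content of Lemma \ref{lem:non.branching}, is of course a topological statement; the check above only witnesses the underlying set identity.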
\begin{example} In Figure \ref{fig:lower.open}, $X_1$ is a topological sup-semilattice and $$\downer{X_1}{V_1}=X_1$$ for $V_1$ the circled open subset of $X_1$. \end{example} \begin{example} In Figure \ref{fig:lower.open}, $X_2$ is a topological inf-semilattice and $$\downer{X_2}{V_2}$$ is not open in $X_2$, for $V_2$ the circled open subset of $X_2$. \end{example} \begin{example}[Counterexamples] The pospaces of Figure \ref{fig:lower.open} are neither inf-semilattices nor sup-semilattices, even though their underlying posets are complete lattices. \end{example} Certain preorders are ``continuous'' in the following sense. \begin{definition} A preorder $\leqslant_X$ on (the points of a) space $X$ is \textit{lower open} if $$\downer{X}{V}$$ is open in $X$ for each open subset $V\subset X$. \end{definition} An example of a lower open preorder is the trivial preorder on a space. The class of preordered spaces having lower open preorders is closed under products and coproducts. \begin{lemma}\label{lem:non.branching} All topological sup-semilattices have lower open preorders. \end{lemma} \begin{proof} For each open subset $V$ of a topological $\sup$-semilattice $L$, $$\downer{L}{V}=\pi_2((V\times L)\cap\sup\!^{-1}(V)),$$ where $\pi_2:L\times L\rightarrow L$ denotes projection onto the second factor, is open in $L$ because $\pi_2$ is an open map and $\sup$ is a continuous function $L\times L\rightarrow L$. \end{proof} Recall from \cite{scott:lattices} that a \textit{pospace} is a preordered space $X$ whose partial order $\leqslant_X$ is antisymmetric ($x\leqslant_Xy\leqslant_Xx$ implies $x=y$) and has closed graph in the standard product topology $X\times X$. \begin{example} The preordered spaces in all of the figures are pospaces. \end{example} \begin{figure} \caption{Compact pospaces with and without lower open partial orders.} \label{fig:lower.open} \end{figure} Pospaces are automatically Hausdorff by \cite[Proposition VI-1.4]{scott:lattices}. 
Examples include Hausdorff topological sup-semilattices and Hausdorff topological inf-semilattices by \cite[Proposition VI-1.14]{scott:lattices}. In particular, \textit{continuous lattices equipped with their Lawson topologies}, which \cite[Theorem VI-3.4]{scott:lattices} characterizes as compact Hausdorff, topological inf-semilattices which have a maximum and whose points admit neighborhood bases of sub-semilattices, are pospaces. We can construct the ``free continuous lattice generated by a compact pospace,'' following \cite[Example VI-3.10 (ii)]{scott:lattices}. Let $\mathscr{P}$ denote the full subcategory of $\mathscr{Q}$ consisting of compact pospaces. Inclusion $i:\mathscr{L}\hookrightarrow\mathscr{P}$ from the category $\mathscr{L}$ of continuous lattices equipped with their Lawson topologies and continuous semilattice homomorphisms preserving maxima has a left adjoint $$F:\mathscr{P}\rightarrow\mathscr{L}$$ sending each compact pospace $X$ with topology $\mathcal{T}_X$ to the poset of all closed subsets $C\subset X$ satisfying $C=\upper{X}{C}$, ordered by reverse inclusion and having topology generated by the subsets $$\{A\;|\;A\subset V\},\;\;\{B\;|\;B\cap W\neq\varnothing\},\quad V,W\in\mathcal{T}_X,\;W=\downer{X}{W}.$$ The unit is the natural map $\upsilon_X:X\rightarrow FX$ defined by $x\mapsto\upper{X}{\{x\}}$. The counit is the infinitary infimum operator $\bigwedge:FL\rightarrow L$. \begin{lemma}\label{lem:alternative.criterion} Consider a compact pospace $X$. The inclusion $$FX\hookrightarrow FUX$$ is continuous if $\leqslant_X$ is lower open. \end{lemma} \begin{proof} Consider an open subset $W\subset X$. The set \begin{eqnarray*} \{B\in FX\;|\;B\cap W\neq\varnothing\} &=& \{B\in FX\;|\;\upper{X}{B}\cap W\neq\varnothing\}\\ &=& \{B\in FX\;|\;B\;\cap\downer{X}{W}\neq\varnothing\} \end{eqnarray*} is open in $FX$ if $\downer{X}{W}$ is open in $X$. The claim then follows. 
\end{proof} We can thus give a useful recipe for converting continuous functions into monotone maps. \begin{lemma}\label{lem:retraction} For each compact pospace $X$ having lower open partial order and each continuous lattice $Y$ equipped with its Lawson topology, the function $$U:\mathscr{P}(X,Y)\rightarrow\mathscr{T}(UX,UY)$$ has a retraction $f\mapsto(x\mapsto\bigwedge f(\leqslant_X[\{x\}]))$. \end{lemma} \begin{proof} For a continuous function $f:UX\rightarrow UY$, the composite function $$\xymatrix@C35pt{ X\ar[r]^{\!\!\upsilon_{X}} & FX\ar[r]^{\!\!j} & F\ddot{U}X\ar[r]^{\;\;F(f)} & F\ddot{U}Y\ar[r]^{\;F(\epsilon_Y)} & FY\ar[r]^{\bigwedge} & Y, }$$ where $j$ denotes the inclusion function, is a monotone map by Lemma \ref{lem:alternative.criterion}. This composite sends $x$ to $\bigwedge f(\leqslant_X[\{x\}])$, which equals $f(x)$ if $f$ is monotone. \end{proof} \section{The homotopy theory}\label{sec:homotopies} We refine the classical homotopy relation, first by defining the ``dihomotopy'' relation of \cite{fgr:ditop}. Let $\mathbb{I}$ be the unit interval $[0,1]$ equipped with its standard total order. Fix preordered spaces $X,Y$. For every pair of monotone maps $$f,g:X\rightarrow Y,$$ we write $f\sim g$ if $f$ is homotopic through monotone maps to $g$, or equivalently, if a homotopy $Uf\sim Ug$ defines a monotone map $X\times\ddot{U}\mathbb{I}\rightarrow Y$. Following classical notation, let $[f]$ denote the $\sim$-class of a monotone map $f:X\rightarrow Y$, and let $[X,Y]$ denote the set of all such equivalence classes $[f]$. The forgetful functor $U:\mathscr{P}\rightarrow\mathscr{T}$ to the category $\mathscr{T}$ of spaces induces a natural function \begin{equation} U_*:[X,Y]\rightarrow[UX,UY] \label{eqn:group.completion} \end{equation} to the set of homotopy classes $[UX\rightarrow UY]$ of continuous functions $UX\rightarrow UY$.
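Before turning to examples, the retraction formula of Lemma \ref{lem:retraction} can be illustrated on a finite poset, where $\bigwedge$ over the up-set $\leqslant_X[\{x\}]$ makes sense verbatim: replacing $f$ by $x\mapsto\bigwedge f(\leqslant_X[\{x\}])$ always yields a monotone map and fixes the maps that are already monotone. A toy check (our code; the poset is $3\times 3$, the codomain the chain of integers, and topology plays no role):

```python
from itertools import product

X = [(a, b) for a in range(3) for b in range(3)]     # the poset 3 x 3
leq = lambda p, q: p[0] <= q[0] and p[1] <= q[1]
up = lambda x: [y for y in X if leq(x, y)]           # up-set of x

def monotonize(f):
    """x -> inf of f over the up-set of x, as in Lemma 'retraction';
    here the codomain is a chain of integers, so inf = min."""
    return lambda x: min(f(y) for y in up(x))

f = lambda p: (p[0] - p[1]) % 4                      # not monotone
g = monotonize(f)
assert all(g(p) <= g(q) for p, q in product(X, X) if leq(p, q))

h = lambda p: p[0] + p[1]                            # already monotone
assert all(monotonize(h)(p) == h(p) for p in X)      # the retraction fixes it
```

Monotonicity of the output is automatic: $p\leqslant q$ shrinks the up-set, so the infimum can only grow.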
\begin{example}\label{eg:non.injective} Consider the pospaces given in Figure \ref{fig:not.dihomotopic}. The monotone map $X_3\rightarrow X_4$ surjectively wrapping the lower blue corner around $X_4$ is homotopic, though not through monotone maps, to a monotone map $X_3\rightarrow X_4$ surjectively wrapping the upper red corner around $X_4$. Thus (\ref{eqn:group.completion}) need not be injective. No monotone map $X_3\rightarrow X_4$ has Brouwer degree greater than $1$. Thus (\ref{eqn:group.completion}) need not be surjective. \end{example} \begin{figure} \caption{$U_*:[X_3,X_4]\rightarrow[UX_3,UX_4]$ neither injective nor surjective.} \label{fig:not.dihomotopic} \end{figure} Directed homotopy theory reduces to classical homotopy theory and order-theory precisely when (\ref{eqn:group.completion}) is injective. The following lemma gives us such a case. \begin{lemma}\label{lem:main.result} For each compact pospace $X$ having lower open partial order and each continuous lattice $Y$ equipped with its Lawson topology, the function $$U_*:[X,Y]\rightarrow[UX,UY]$$ has a well-defined retraction $[f]\mapsto[x\mapsto\bigwedge f(\leqslant_X[\{x\}])].$ \end{lemma} \begin{proof} For a compact pospace $A$ such that $\leqslant_A$ is lower open and a continuous lattice $B$ equipped with its Lawson topology, let $R_{A,B}:\mathscr{T}(UA,UB)\rightarrow\mathscr{P}(A,B)$ denote the retraction defined by Lemma \ref{lem:retraction}. The diagram $$ \xymatrix@C50pt{ \mathscr{T}(UX\coprod UX,UY)\ar[r]^{\quad R_{(X\coprod X),Y}}\ar[d]_{(x\mapsto(x,0))\coprod(x\mapsto(x,1))} & \mathscr{P}(X\coprod X,Y)\ar[d]^{(x\mapsto(x,0))\coprod(x\mapsto(x,1))}\\ \mathscr{T}(UX\times U\mathbb{I},UY)\ar[r]_{\quad R_{X\times\ddot{U}\mathbb{I},Y}} & \mathscr{P}(X\times\ddot{U}\mathbb{I},Y),\\ } $$ is commutative and thus $R_{X,Y}$ passes to $\sim$-classes to define our desired retraction. \end{proof} \begin{example} Consider Figure \ref{fig:monotonization}. 
Under the retraction given in Lemma \ref{lem:retraction}, the homotopy through monotone paths in (b) is the image of the classical homotopy of paths in (a). \end{example} \begin{figure} \caption{A classical homotopy (a) and a homotopy (b), obtained from an application of Lemma \ref{lem:retraction} to (a), through monotone paths} \label{fig:monotonization} \end{figure} We refine the dihomotopy relation of \cite{fgr:ditop}, following \cite{grandis:d}. \begin{definition} Given preordered spaces $X,Y$ and monotone maps $$f,g:X\rightarrow Y,$$ we say that $f$ \textit{future homotopes to} $g$ if there exists a monotone map $h:X\times\mathbb{I}\rightarrow Y$ such that $h(-,0)=f$ and $h(-,1)=g$. A preordered space $X$ is \textit{future contractible} if $\mathrm{id}_X:X\rightarrow X$ future homotopes to a constant map. \end{definition} \begin{lemma}\label{lem:future.homotope} Consider a pair of monotone maps $$g_1,g_2:X\rightarrow Y$$ from a compact Hausdorff preordered space $X$ to a Lawson semilattice $Y$, homotopic through monotone maps. There exists a monotone map which future homotopes to both $g_1$ and $g_2$. \end{lemma} \begin{proof} Let $h:g_1\sim g_2$ be a homotopy through monotone maps. The rules $$j(x,t)=\bigwedge h(x,[0,1-t]),\quad k(x,t)=\bigwedge h(x,[t,1])$$ define functions $j,k:X\times\mathbb{I}\rightarrow Y$. The functions $j,k$ are continuous by Lemma \ref{lem:retraction} because $\leqslant_{\ddot{U}X\times\mathbb{I}}$ and its order-theoretic dual are lower open. The functions $j,k$ are monotone because $\bigwedge$ is a monotone operator. Thus $j(-,0)=k(-,0)$ future homotopes to $j(-,1)=h(-,0)=g_1$ and $k(-,1)=h(-,1)=g_2$. \end{proof} \begin{example} On hom-sets $\mathscr{P}(X,Y)$ for which $Y$ is a continuous lattice equipped with its Lawson topology, the dihomotopy relation $\sim$ coincides with the \textit{d-homotopy} relation of \cite{grandis:d}, as a consequence of Lemma \ref{lem:future.homotope}. 
\end{example} Recall that a space has \textit{connected CW type} if it is homotopy equivalent to a connected CW complex. Recall from \cite{scott:lattices} that a hypercontinuous lattice is a continuous lattice whose Lawson and dual Lawson topologies agree. Thus a hypercontinuous lattice equipped with its Lawson topology is precisely a compact Hausdorff (inf- and sup-) topological lattice whose points admit, with respect to each semilattice operation, neighborhood bases of sub-semilattices. \begin{proposition}\label{prop:dicontract} A hypercontinuous lattice equipped with its Lawson topology is future contractible if its underlying space has connected CW type. \end{proposition} \begin{proof} Consider a hypercontinuous lattice $L$ equipped with its Lawson topology, and suppose $UL$ has connected CW type. The space $UL$ is therefore path-connected. Moreover, $UL$ has trivial homotopy groups because the binary $\inf$ operator gives $UL$ the structure of an associative, idempotent $H$-space. The map $\mathrm{id}_L$ is homotopic through monotone maps to a constant map $c$ taking the value $\max L$ by Lemma \ref{lem:main.result}: $\mathrm{id}_{UL}$ is homotopic to $U(c)$ by the Whitehead Theorem, $L$ is a compact pospace, and $\leqslant_L$ is lower open by Lemma \ref{lem:non.branching}. The maps $\mathrm{id}_L$ and $c$ future homotope to $c$ by Lemma \ref{lem:future.homotope} because $L$ is the dual of a continuous lattice equipped with the dual Lawson topology of $L$. \end{proof} \section{Conclusion} The state spaces of machines in nature arise as ``locally partially ordered'' geometric realizations of cubical complexes, as in \cite{fgr:ditop}. Such ``locally partially ordered'' spaces are hypercontinuous lattices precisely when they are continuous lattices, the computational steps of computable partially recursive functions in \cite{scott:outline}.
Thus Proposition \ref{prop:dicontract} and Example \ref{eg:non.injective} suggest that the directed homotopy theories of \cite{fgr:ditop,grandis:d} measure at least some of the failure, undetected by classical homotopy theory, of a state space to represent a deterministic, computable process. \end{document}
\begin{document} \newcommand{1409.0274}{1409.0274} \allowdisplaybreaks \renewcommand{\thefootnote}{$\star$} \renewcommand{110}{110} \FirstPageHeading \ShortArticleName{Demazure Modules, Chari--Venkatesh Modules and Fusion Products} \ArticleName{Demazure Modules, Chari--Venkatesh Modules\\ and Fusion Products\footnote{This paper is a~contribution to the Special Issue on New Directions in Lie Theory. The full collection is available at \href{http://www.emis.de/journals/SIGMA/LieTheory2014.html}{http://www.emis.de/journals/SIGMA/LieTheory2014.html}}} \Author{Bhimarthi RAVINDER} \AuthorNameForHeading{B.~Ravinder} \Address{The Institute of Mathematical Sciences, CIT campus, Taramani, Chennai 600113, India} \Email{\href{mailto:bravinder@imsc.res.in}{bravinder@imsc.res.in}} \ArticleDates{Received September 11, 2014, in f\/inal form December 01, 2014; Published online December 12, 2014} \Abstract{Let $\mathfrak{g}$ be a~f\/inite-dimensional complex simple Lie algebra with highest root~$\theta$. Given two non-negative integers~$m$,~$n$, we prove that the fusion product of~$m$ copies of the level one Demazure module $D(1,\theta)$ with~$n$ copies of the adjoint representation $\ev_0 V(\theta)$ is independent of the parameters and we give explicit def\/ining relations. As a~consequence, for $\mathfrak{g}$ simply laced, we show that the fusion product of a~special family of Chari--Venkatesh modules is again a~Chari--Venkatesh module. We also get a~description of the truncated Weyl module associated to a~multiple of~$\theta$.} \Keywords{current algebra; Demazure module; Chari--Venkatesh module; truncated Weyl module; fusion product} \Classification{17B67; 17B10} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \section{Introduction} Let $\mathfrak{g}$ be a~f\/inite-dimensional complex simple Lie algebra with highest root $\theta$.
The current algebra $\mathfrak{g}[t]$ associated to $\mathfrak{g}$ is equal to $\mathfrak{g}\otimes \mathbb{C}[t]$, where $\mathbb{C}[t]$ is the polynomial ring in one variable. The degree grading on $\mathbb{C}[t]$ gives a~natural $\mathbb{Z}_{\geq 0}$-grading on $\mathfrak{g}[t]$ and the Lie bracket is given in the obvious way such that the zeroth grade piece $\mathfrak{g}\otimes 1$ is isomorphic to $\mathfrak{g}$. Let $\widehat{\mathfrak{g}}$ be the untwisted af\/f\/ine Lie algebra corresponding to~$\mathfrak{g}$. In this paper, we shall be concerned with the~$\mathfrak{g}[t]$-stable Demazure modules of integrable highest weight representations of~$\widehat{\mathfrak{g}}$. The Demazure modules are actually modules for a~Borel subalgebra $\widehat{\mathfrak{b}}$ of $\widehat{\mathfrak{g}}$. The $\mathfrak{g}[t]$-stable Demazure modules are known to be indexed by pairs $(l,\lambda)$, where~$l$ is a~positive integer and~$\lambda$ is a~dominant integral weight of~$\mathfrak{g}$ (see~\cite{FoL,Naoi}). We denote the corresponding module by $D(l, \lambda)$ and call it the level~$l$ Demazure module with highest weight~$\lambda$; it is in fact a~f\/inite-dimensional graded $\mathfrak{g}[t]$-module. The study of the category of f\/inite-dimensional graded $\mathfrak{g}[t]$-modules has been of interest in recent years for a~variety of reasons. An important construction in this category is that of the fusion product. The fusion product of f\/inite-dimensional graded $\mathfrak{g}[t]$-modules~\cite{FL} is, by def\/inition, dependent on the given parameters. Many people have been working in recent years to prove the independence of the parameters for the fusion product of certain~$\mathfrak{g}[t]$-modules, see for instance~\cite{CSVW,CV,FoL,Naoi,V}. These works mostly considered the fusion product of Demazure modules of the same level and gave explicit def\/ining relations for them.
We ask the most natural question: Can one give similar results for the fusion product of dif\/ferent level Demazure modules? In this paper, we answer this question for some important cases; namely we prove (Corollary~\ref{c2}) that the fusion product of~$m$ copies of the level one Demazure module $D(1, \theta)$ with~$n$ copies of the adjoint representation $\ev_0 V(\theta)$ is independent of the parameters, and we give explicit def\/ining relations. We note that $\ev_0 V(\theta)$ may be thought of as a~Demazure module $D(l,\theta)$ of level $l\geq 2$. More generally, the following is the statement of our main theorem (see Section~\ref{section3} for notation). \begin{theorem} \label{MT} Let $k\geq 1$. For $0\leq i \leq k$, we have the following: \begin{enumerate}\itemsep=0pt \item[$1)$] a~short exact sequence of $\mathfrak{g}[t]$-modules, \begin{gather*} 0\rightarrow \tau_{2k+1-i} \big(D(1, k\theta)/\big\langle \big(x^{-}_\theta \otimes t^{2k-i}\big) \overline{w}_{k\theta}\big\rangle\big) \\ \phantom{0} \xrightarrow{\phi^{-}} D\big(1,(k+1)\theta\big)/\big\langle \big(x^{-}_\theta \otimes t^{2k+2-i}\big) \overline{w}_{(k+1)\theta}\big\rangle \\ \phantom{0} \xrightarrow{\phi^+} D\big(1,(k+1)\theta\big)/\big\langle \big(x^{-}_\theta \otimes t^{2k+1-i}\big) \overline{w}_{(k+1)\theta}\big\rangle \rightarrow 0; \end{gather*} \item[$2)$] an isomorphism of $\mathfrak{g}[t]$-modules, \begin{gather*} D\big(1,(k+1)\theta\big)/\langle \big(x^{-}_\theta \otimes t^{2k+2-i}\big) \overline{w}_{(k+1)\theta}\rangle \cong D(1,\theta)^{* (k+1-i)}* \textup{ev}_0 V(\theta)^{*i}. 
\end{gather*} \end{enumerate} \end{theorem} We obtain the following two important corollaries: \begin{corollary} \label{c1} Given $k\geq 1$ and $0\leq i \leq k $, we have the following short exact sequence of $\mathfrak{g}[t]$-modules, \begin{gather*} 0 \rightarrow \tau_{2k+1-i}\big(D(1,\theta)^{*(k-i)} * \textup{ev}_0 V(\theta)^{*i}\big) \rightarrow D(1,\theta)^{*(k+1-i)}* \textup{ev}_0 V(\theta)^{*i} \\ \phantom{0} \rightarrow D(1,\theta)^{*(k-i)} * \textup{ev}_0 V(\theta)^{*(i+1)} \rightarrow 0. \end{gather*} \end{corollary} \begin{corollary} \label{c2} Given $m,n\geq 0$, we have the following isomorphism of $\mathfrak{g}[t]$-modules, \begin{gather*} D(1,\theta)^{*m} * \textup{ev}_0 V(\theta)^{*n} \cong D\big(1,(m+n)\theta\big)/\big\langle \big(x^{-}_\theta \otimes t^{2m+n}\big) \overline{w}_{(m+n)\theta}\big\rangle. \end{gather*} \end{corollary} The Corollary~\ref{c2} generalizes a~result of Feigin (see~\cite[Corollary~2]{F}), where he only considers the case $m=0$. Theorem~\ref{MT}, Corollaries~\ref{c1} and~\ref{c2} are proved in Section~\ref{section4}. In~\cite{CV}, Chari and Venkatesh introduced a~large collection of indecomposable graded $\mathfrak{g}[t]$-modules (which we call Chari--Venkatesh or CV modules) such that all Demazure mo\-du\-les~$D(l, \lambda)$ belong to this collection. In the case when $\mathfrak{g}$ is simply laced, Theorem~\ref{MT} enables us to obtain (see Theorem~\ref{T2}) interesting exact sequences between CV modules and to show that the fusion product of a~special family of CV modules is again a~CV module. Theorem~\ref{T2} generalizes results of Chari and Venkatesh (see~\cite[\S~6]{CV}), where they only consider the case $\mathfrak{g}=\mathfrak{sl}_2$. For $n\geq 1$, let $\mathcal{A}_n=\mathbb{C}[t]/(t^n)$ be the truncated algebra. We consider for $k\geq1$ the local Weyl modules $W_{\mathcal{A}_n}(k\theta)$ for the truncated current algebra $\mathfrak{g}\otimes\mathcal{A}_n$. 
These modules are known to be f\/inite-dimensional, but they are still far from being well understood; even their dimensions are not known. As a~consequence of Theorem~\ref{MT}, we are able to obtain the following description of truncated Weyl modules in terms of local Weyl modules $W(k\theta)$, $k\geq 1$, for the current algebra~$\mathfrak{g}[t]$. The latter modules $W(k\theta)$ are very well understood. \begin{corollary} \label{truncated} Assume that $\mathfrak{g}$ is simply laced. Given $k,n\geq 1$, we have the following isomorphism of $\mathfrak{g}[t]$-modules, \begin{gather*} W_{\mathcal{A}_n}(k\theta) \cong \begin{cases} W(\theta)^{*(n-k)} * \textup{ev}_0 V(\theta)^{*(2k-n)}, & k\leq n < 2k, \\ W(k\theta), & n\geq 2k. \end{cases} \end{gather*} \end{corollary} Corollary~\ref{truncated} is proved in Section~\ref{section5}. \section{Preliminaries}\label{section2} Throughout the paper, $\mathbb{C}$ denotes the f\/ield of complex numbers, $\mathbb{Z}$ the set of integers, $\mathbb{Z}_{\geq 0}$ the set of non-negative integers, $\mathbb{N}$ the set of positive integers and $\mathbb{C}[t]$ the polynomial ring in an indeterminate~$t$. {\bf 2.1.}~Let $\mathfrak{a}$ be a~complex Lie algebra, $\mathbf{U}(\mathfrak{a})$ the corresponding universal enveloping algebra. The current algebra associated to $\mathfrak{a}$ is denoted by $\mathfrak{a}[t]$ and def\/ined as $\mathfrak{a} \otimes \mathbb{C}[t]$, with the Lie bracket \begin{gather*} [a \otimes t^r, b \otimes t^s]=[a, b]\otimes t^{r+s}, \qquad \text{for all} \quad a, b \in \mathfrak{a} \quad \text{and} \quad r, s \in \mathbb{Z}_{\geq 0}. \end{gather*} We let $\mathfrak{a}[t]_{+}$ be the ideal $\mathfrak{a}\otimes t\mathbb{C}[t]$.
The degree grading on $\mathbb{C}[t]$ gives a~natural $\mathbb{Z}_{\geq 0}$-grading on~$\mathbf{U}(\mathfrak{a}[t])$ and the subspace of grade~$s$ is given~by \begin{gather*} \mathbf{U}(\mathfrak{a}[t])[s]= \spn \Big\{(a_1\otimes t^{r_1})\cdots (a_k\otimes t^{r_k}):k\geq 1,\, a_i\in\mathfrak{a},\, r_i\in \mathbb{Z}_{\geq 0}, \sum r_i=s\Big\}, \quad \forall\, s\in \mathbb{N}, \end{gather*} and the subspace of grade zero $\mathbf{U}(\mathfrak{a}[t])[0]=\mathbf{U}(\mathfrak{a})$. {\bf 2.2.}~Let $\mathfrak{g}$ be a~f\/inite-dimensional complex simple Lie algebra, with Cartan subalgebra~$\mathfrak{h}$. Let~$R$ (resp.~$R^+$) be the set of roots (resp.\ positive roots) of $\mathfrak{g}$ with respect to $\mathfrak{h}$ and $\theta \in R^+$ be the highest root in~$R$. There is a~non-degenerate, symmetric, Weyl group invariant bilinear form $(\cdot|\cdot)$ on $\mathfrak{h}^*$, which we assume to be normalized so that the squared length of a~long root is two. For $\alpha\in R$, $\alpha^{\vee}\in\mathfrak{h}$ denotes the corresponding co-root and we set $d_{\alpha}=2/(\alpha|\alpha)$. For $\alpha\in R$, let $\mathfrak{g}_{\alpha}$ be the corresponding root space of $\mathfrak{g}$ and f\/ix non-zero elements $x^{\pm}_{\alpha}\in \mathfrak{g}_{\pm\alpha}$ such that $[x^{+}_{\alpha},x^{-}_{\alpha}]=\alpha^{\vee}$. We set $\mathfrak{n}^{\pm}=\oplus_{\alpha\in R^{+}} \mathfrak{g}_{\pm\alpha}$. Let $P^{+}$ be the set of dominant integral weights of $\mathfrak{g}$. For $\lambda\in P^+$, let $V(\lambda)$ be the corresponding f\/inite-dimensional irreducible $\mathfrak{g}$-module generated~by an element $v_{\lambda}$ with the following def\/ining relations: \begin{gather*} x^{+}_{\alpha}v_\lambda=0, \qquad h v_\lambda = \langle \lambda, h \rangle v_\lambda, \qquad (x^{-}_{\alpha})^{\langle \lambda, \alpha^{\vee} \rangle +1} v_\lambda=0, \qquad \text{for all} \quad \alpha\in R^+, \quad h\in\mathfrak{h}.
\end{gather*} {\bf 2.3.}~A graded $\mathfrak{g}[t]$-module is a~$\mathbb{Z}$-graded vector space \begin{gather*} V=\bigoplus_{r\in\mathbb{Z}} V[r] \qquad \text{such that} \quad (x\otimes t^s) V[r]\subset V[r+s], \quad x\in\mathfrak{g}, \quad r\in\mathbb{Z}, \quad s\in\mathbb{Z}_{\geq0}. \end{gather*} For $\mu\in\mathfrak{h}^*$, an element~$v$ of a~graded $\mathfrak{g}[t]$-module~$V$ is said to be of weight~$\mu$ if $(h\otimes 1)v=\langle \mu, h\rangle v$ for all $h\in \mathfrak{h}$. We def\/ine a~morphism between two graded $\mathfrak{g}[t]$-modules as a~degree zero morphism of $\mathfrak{g}[t]$-modules. For $r\in\mathbb{Z}$, let $\tau_r$ be the grade shift operator: if~$V$ is a~graded $\mathfrak{g}[t]$-module then~$\tau_r V$ is the graded $\mathfrak{g}[t]$-module with the graded pieces shifted uniformly by~$r$ and the action of~$\mathfrak{g}[t]$ remains unchanged. For any graded $\mathfrak{g}[t]$-module~$V$ and a~subset~$S$ of~$V$, $\langle S\rangle$ denotes the submodule of~$V$ generated by~$S$. For $\lambda\in P^+$, let $\ev_0 V(\lambda)$ be the irreducible graded $\mathfrak{g}[t]$-module such that $\ev_0 V(\lambda)[0]\cong_{\mathfrak{g}} V(\lambda)$ and $\ev_0 V(\lambda)[r]=0$ $\forall\, r\in \mathbb{N}$. In particular, $\mathfrak{g}[t]_{+}(\ev_0 V(\lambda))=0$. {\bf 2.4.}~For $r,s\in\mathbb{Z}_{\geq0}$, we denote \begin{gather*} \mathbf{S}(r,s)=\bigg\{(b_p)_{p\geq0}: b_p\in\mathbb{Z}_{\geq0}, \; \sum\limits_{p\geq 0}b_p =r, \; \sum\limits_{p\geq0}pb_p=s\bigg\}. \end{gather*} For $\alpha\in R^{+}$ and $r,s\in\mathbb{Z}_{\geq0}$, we def\/ine an element $\mathbf{x}^{-}_{\alpha}(r,s)\in\mathbf{U}(\mathfrak{g}[t])[s]$~by \begin{gather*} \mathbf{x}^{-}_{\alpha}(r,s)=\sum\limits_{(b_p)\in\mathbf{S}(r,s)} (x^{-}_{\alpha} \otimes 1)^{(b_0)}(x^{-}_{\alpha} \otimes t)^{(b_1)}\cdots(x^{-}_{\alpha} \otimes t^s)^{(b_s)}, \end{gather*} where for any non-negative integer~$b$ and any $x\in\mathfrak{g}[t]$, we understand $x^{(b)}=x^b/b!$.
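The index sets $\mathbf{S}(r,s)$ are f\/inite: an element is determined by a~partition of~$s$ into at most~$r$ positive parts (the positive powers of~$t$ that occur), with $b_0$ absorbing the remaining factors. A~small enumeration helper, useful for counting the monomials of $\mathbf{x}^{-}_{\alpha}(r,s)$ (illustrative code of ours, not from the paper):

```python
def partitions(n, k, cap=None):
    """Partitions of n into at most k parts, weakly decreasing."""
    cap = n if cap is None else cap
    if n == 0:
        yield []
        return
    if k == 0:
        return
    for first in range(min(n, cap), 0, -1):
        for rest in partitions(n - first, k - 1, first):
            yield [first] + rest

def S(r, s):
    """All tuples (b_0, ..., b_s) with sum b_p = r and sum p*b_p = s."""
    out = []
    for parts in partitions(s, r):
        b = [0] * (s + 1)
        for p in parts:
            b[p] += 1
        b[0] = r - len(parts)      # remaining factors are x^-_a tensor 1
        out.append(tuple(b))
    return out
```

For instance, $\mathbf{S}(2,2)=\{(1,0,1),(0,2,0)\}$, matching the two monomials $(x^{-}_{\alpha}\otimes 1)(x^{-}_{\alpha}\otimes t^{2})$ and $(x^{-}_{\alpha}\otimes t)^{(2)}$ of $\mathbf{x}^{-}_{\alpha}(2,2)$.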
The following was proved in~\cite{G} (see also~\cite[Lemma 2.3]{CV}). \begin{lemma}\label{garland} Given $s\in\mathbb{N}$, $r\in\mathbb{Z}_{\geq0}$ and $\alpha\in R^{+}$, we have \begin{gather*} (x^{+}_{\alpha} \otimes t)^{(s)}(x^{-}_{\alpha} \otimes 1)^{(s+r)}-(-1)^s\mathbf{x}^{-}_{\alpha}(r,s)\in\mathbf{U}(\mathfrak{g}[t])\mathfrak{n}^{+}[t]\bigoplus \mathbf{U}(\mathfrak{n}^{-}[t])\mathfrak{h}[t]_{+}. \end{gather*} \end{lemma} \section{Weyl, Demazure modules and fusion product}\label{section3} In this section, we recall the def\/initions of local Weyl modules, level one Demazure modules and fusion products. \subsection{Weyl module}\label{section3.1} The def\/inition of the local Weyl module was given originally in~\cite{CP}, later in~\cite{CFK} and~\cite{FL}. \begin{definition} Given $\lambda \in P^+$, the local Weyl module $W(\lambda)$ is the cyclic $\mathfrak{g}[t]$-module generated by an element $w_\lambda$, with the following def\/ining relations: \begin{gather} \mathfrak{n}^{+}[t] w_\lambda=0, \qquad (h \otimes t^s) w_\lambda = \langle \lambda, h \rangle \delta_{s,0} w_\lambda, \qquad s\geq0, \qquad h\in\mathfrak{h}, \nonumber \\ (x^{-}_\alpha \otimes 1)^{\langle \lambda, \alpha^{\vee} \rangle + 1}w_\lambda=0, \qquad \alpha \in R^{+}. \label{w2} \end{gather} \end{definition} \noindent We note that the relation~\eqref{w2} implies \begin{gather} \label{w2'} \big(x^{-}_\alpha \otimes t^{\langle \lambda, \alpha^{\vee} \rangle}\big) w_\lambda=0, \qquad \alpha \in R^{+}, \end{gather} which is easy to see from Lemma~\ref{garland}. We set the grade of $w_\lambda$ to be zero; then $W(\lambda)$ becomes a~$\mathbb{Z}_{\geq 0}$-graded module with \begin{gather*} W(\lambda)[0] \cong_{\mathfrak{g}} V(\lambda). \end{gather*} Moreover, $\textup{ev}_0 V(\lambda)$ is the unique graded irreducible quotient of $W(\lambda)$. We now specialize to the case $\lambda\in \mathbb{N}\theta$, and obtain some further useful relations that hold in~$W(\lambda)$. 
\begin{lemma} \label{g} Let $k\in\mathbb{N}$. The following relations hold in the local Weyl module $W((k+1)\theta)$: \begin{enumerate}\itemsep=0pt \item[$1)$] $(x^{-}_{\theta}\otimes 1)^{2k+1}\big(x^{-}_{\theta}\otimes t^{2k+1-i}\big)w_{(k+1)\theta}=0, \qquad \forall\, 0\leq i \leq k$; \item[$2)$] $(x^{-}_{\theta}\otimes t^m)(x^{-}_{\theta} \otimes t^{m+1}) w_{(k+1)\theta} \in \big\langle \big(x^{-}_{\theta} \otimes t^{m+2}\big) w_{(k+1)\theta} \big\rangle, \qquad \forall\, m\geq k$. \end{enumerate} \end{lemma} \begin{proof} To prove part (1), consider $(x^{+}_{\theta}\!\otimes t^{2k{+}1{-}i}) (x^{-}_{\theta}\otimes 1)^{2k{+}3}w_{(k{+}1)\theta}$. Since $(x^{+}_{\theta}\!\otimes t^{2k{+}1{-}i})w_{(k{+}1)\theta}$ $=0$, we get \begin{gather*} \big(x^{+}_{\theta}\otimes t^{2k+1-i}\big) (x^{-}_{\theta}\otimes 1)^{2k+3}w_{(k+1)\theta} =\big[x^{+}_{\theta}\otimes t^{2k+1-i}, (x^{-}_{\theta}\otimes 1)^{2k+3}\big] w_{(k+1)\theta} \\ \qquad{} =\sum\limits_{j=1}^{2k+3} (x^{-}_{\theta}\otimes 1)^{j-1}\big(\theta^{\vee}\otimes t^{2k+1-i}\big)(x^{-}_{\theta}\otimes 1)^{2k+3-j}w_{(k+1)\theta}. \end{gather*} Since $(\theta^{\vee}\otimes t^{2k+1-i})w_{(k+1)\theta}=0$, we may replace $(\theta^{\vee}\otimes t^{2k+1-i})(x^{-}_{\theta}\otimes 1)^{2k+3-j}$~by \begin{gather*} \big[\theta^{\vee}\otimes t^{2k+1-i}, (x^{-}_{\theta}\otimes 1)^{2k+3-j}\big] =(-2)(2k+3-j)(x^{-}_{\theta}\otimes 1)^{2k+2-j}\big(x^{-}_{\theta}\otimes t^{2k+1-i}\big). \end{gather*} After simplifying, we get \begin{gather*} \big(x^{+}_{\theta}\otimes t^{2k+1-i}\big) (x^{-}_{\theta}\otimes 1)^{2k+3}w_{(k+1)\theta} \\ \qquad{} =(-1)(2k+2)(2k+3)(x^{-}_{\theta}\otimes 1)^{2k+1}\big(x^{-}_{\theta}\otimes t^{2k+1-i}\big)w_{(k+1)\theta}. \end{gather*} Now, using $(x^{-}_{\theta}\otimes 1)^{2k+3}w_{(k+1)\theta}=0$ in $W((k+1)\theta)$ completes the proof of part~(1). 
Part (2) follows easily by putting $r=2$, $s=2m+1$ and $\alpha=\theta$ in Lemma~\ref{garland}, and using the fact that $(x^{-}_{\theta}\otimes 1)^{2m+3}w_{(k+1)\theta}=0$, $\forall\, m\geq k$ by~\eqref{w2}. \end{proof} \subsection{Level one Demazure module}\label{section3.2} Let $\lambda\in P^{+}$ and $\alpha\in R^{+}$ with $\langle \lambda, \alpha^{\vee}\rangle > 0$. Let $s_\alpha, m_\alpha \in \mathbb{N}$ be the unique positive integers such that \begin{gather*} \langle \lambda, \alpha^{\vee}\rangle = (s_\alpha-1)d_\alpha + m_\alpha, \qquad 0<m_\alpha\leq d_\alpha. \end{gather*} If $\langle \lambda, \alpha^{\vee}\rangle = 0$, set $s_\alpha=0=m_\alpha$. We take the following as a~def\/inition of the level one Demazure module. \begin{definition} \textup{(see~\cite[Corollary 3.5]{CV})} The level one Demazure module $D(1,\lambda)$ is the graded quotient of $W(\lambda)$ by the submodule generated by the union of the following two sets: \begin{gather} \big\{(x^{-}_{\alpha} \otimes t^{s_{\alpha}}) w_{\lambda}: \alpha\in R^{+}~\textup{such that}~d_{\alpha} > 1\big\}, \label{dm1} \\ \big\{\big(x^{-}_{\alpha} \otimes t^{s_{\alpha}-1}\big)^2 w_{\lambda}: \alpha\in R^{+}~\textup{such that}~d_\alpha =3~\textup{and}~m_\alpha=1\big\}. \label{dm2} \end{gather} In particular, for $\mathfrak{g}$ simply laced, $D(1,\lambda)\cong_{\mathfrak{g}[t]} W(\lambda)$. We denote~by $\overline{w}_\lambda$, the image of $w_\lambda$ in~$D(1,\lambda)$. \end{definition} The following proposition gives explicit def\/ining relations for $D(1,k\theta)$. 
\begin{proposition} \label{WvsD} Given $k\geq 1$, the level~$1$ Demazure module $D(1,k\theta)$ is the graded $\mathfrak{g}[t]$-module generated by an element $\overline{w}_{k\theta}$, with the following defining relations: \begin{gather*} \mathfrak{n}^{+}[t]\, \overline{w}_{k\theta}=0, \qquad (h \otimes t^s) \overline{w}_{k\theta} = \langle k\theta, h\rangle \delta_{s,0} \overline{w}_{k\theta}, \qquad s\geq0, \quad h\in\mathfrak{h}, \\ (x^{-}_{\alpha}\otimes 1) \overline{w}_{k\theta}=0, \qquad \alpha\in R^+, \quad (\theta|\alpha)=0, \\ (x^{-}_{\alpha} \otimes 1)^{kd_{\alpha}+1} \overline{w}_{k\theta}=0, \qquad \big(x^{-}_{\alpha} \otimes t^k\big) \overline{w}_{k\theta}=0, \qquad \alpha\in R^+, \quad (\theta|\alpha)=1, \\ (x^{-}_{\theta} \otimes 1)^{2k+1} \overline{w}_{k\theta}=0. \end{gather*} \end{proposition} \begin{proof} Observe that, from the abstract theory of root systems $(\theta|\alpha)= 0$ or~1, $\forall\, \alpha \in R^{+}\setminus \{\theta\}$. This implies that $\langle k\theta, \alpha^{\vee}\rangle = 0$ or $kd_{\alpha}$, $\forall\, \alpha \in R^{+}\setminus \{\theta\}$. Hence the relations~\eqref{dm2} do not occur in $D(1,k\theta)$ and the relations~\eqref{dm1} are \begin{gather*} \big(x^{-}_{\alpha} \otimes t^k\big) \overline{w}_{k\theta}=0, \qquad \alpha\in R^+, \quad \alpha~~\text{short}, \quad (\theta|\alpha)=1. \end{gather*} For a~long root $\alpha\in R^+$ with $(\theta|\alpha)=1$, by~\eqref{w2'} it follows that $(x^{-}_{\alpha} \otimes t^k) \overline{w}_{k\theta}=0$. Now the other relations are precisely the def\/ining relations of $W(k\theta)$. This proves Proposition~\ref{WvsD}. \end{proof} We record below a~well-known fact, for later use: \begin{gather*} D(1, \theta)\cong_{\mathfrak{g}} V(\theta) \oplus \mathbb{C}. \end{gather*} In particular, \begin{gather} \label{dimd} \dim D(1, \theta)= \dim V(\theta)+1. \end{gather} The following is a~crucial lemma, which we use in proving Theorem~\ref{MT}. 
\begin{lemma} \label{crucial} Let $k\geq 1$ and $0\leq i \leq k$. The following relations hold in the module $D(1,(k+1)\theta)$: \begin{enumerate}\itemsep=0pt \item[$1)$] $(x^{-}_{\alpha} \otimes 1)^{kd_{\alpha}+1}\big(x^{-}_{\theta} \otimes t^{2k+1-i}\big) \overline{w}_{(k+1)\theta}=0$, $\forall\, \alpha \in R^{+}$, $(\theta|\alpha)=1$; \item[$2)$] $\big(x^{-}_{\alpha} \otimes t^{k}\big) \big(x^{-}_{\theta} \otimes t^{2k+1-i}\big) \overline{w}_{(k+1)\theta} \in \big\langle \big(x^{-}_{\theta} \otimes t^{2k+2-i}\big) \overline{w}_{(k+1)\theta} \big\rangle$, $\forall\, \alpha \in R^{+}$, $(\theta|\alpha)=1$; \item[$3)$] $\big(x^{-}_{\theta} \otimes t^{2k-i}\big) \big(x^{-}_{\theta} \otimes t^{2k+1-i}\big) \overline{w}_{(k+1)\theta} \in \big\langle \big(x^{-}_{\theta} \otimes t^{2k+2-i}\big) \overline{w}_{(k+1)\theta} \big\rangle$. \end{enumerate} \end{lemma} \begin{proof} Let $\alpha \in R^{+}$ with $(\theta|\alpha)=1$. This implies that $\theta-\alpha$ is also a~root of $\mathfrak{g}$ and $(\theta|\theta-\alpha)=1$. We now prove part (1). Observe that $(x^{-}_{\theta} \otimes t^{2k+1-i}) \overline{w}_{(k+1)\theta}$ is an element of weight $k\theta$. Further $(x^{+}_{\alpha} \otimes 1) (x^{-}_{\theta} \otimes t^{2k+1-i}) \overline{w}_{(k+1)\theta}=0$, since $(x^{+}_{\alpha} \otimes 1)\overline{w}_{(k+1)\theta}=0$ and $(x^{-}_{\theta-\alpha} \otimes t^{2k+1-i}) \overline{w}_{(k+1)\theta}=0$, for all $0\leq i \leq k$. Considering the copy of $\mathfrak{sl}_2$ spanned by $x^{+}_{\alpha} \otimes 1$, $x^{-}_{\alpha} \otimes 1$, $\alpha^{\vee}\otimes 1$, we obtain part (1) by standard $\mathfrak{sl}_2$ arguments. We now prove part (2). 
Putting $r=2$, $s=(3k+1-i)$ and $\alpha=\theta$ in Lemma~\ref{garland}, we get \begin{gather} \big(x^{-}_{\theta} \otimes t^{k}\big) \big(x^{-}_{\theta} \otimes t^{2k+1-i}\big) \overline{w}_{(k+1)\theta} + \sum\limits_{\substack{k+1\leq p\leq q \leq 2k-i\\p+q= 3k+1-i}} \frac{1}{(2\delta_{p, q})!} \big(x^{-}_{\theta} \otimes t^{p}\big) \big(x^{-}_{\theta} \otimes t^{q}\big) \overline{w}_{(k+1)\theta} \nonumber\\ \qquad{} \in \big\langle\big(x^{-}_{\theta}\otimes t^{2k+2-i}\big) \overline{w}_{(k+1)\theta} \big\rangle, \label{e} \end{gather} since $(x^{-}_{\theta}\otimes 1)^{3k+3-i}\overline{w}_{(k+1)\theta}=0$, $\forall\, 0\leq i \leq k$. Now we act on both sides of~\eqref{e} by $x^{+}_{\theta-\alpha}$ and use the relation $(x^{-}_{\alpha} \otimes t^r) \overline{w}_{(k+1)\theta}=0$, for all $r\geq(k+1)$, which gives part~(2). Part~(3) is immediate from the part~(2) of Lemma~\ref{g}. \end{proof} \subsection{Fusion product}\label{section3.3} In this subsection, we recall the def\/inition of the fusion product of f\/inite-dimensional graded cyclic $\mathfrak{g}[t]$-modules given in~\cite{FL} and give some elementary properties. For a~cyclic $\mathfrak{g}[t]$-module~$V$ generated by~$v$, we def\/ine a~f\/iltration $F^{r}V$, $r\in\mathbb{Z}_{\geq 0}$~by \begin{gather*} F^{r}V=\sum\limits_{0\leq s \leq r} \mathbf{U}(\mathfrak{g}[t])[s] v. \end{gather*} We say $F^{-1}V$ is the zero space. The associated graded space $\gr V=\bigoplus_{r\geq 0} F^{r}V/ F^{r-1}V $ naturally becomes a~cyclic $\mathfrak{g}[t]$-module generated by $v+F^{-1}V$, with action given by \begin{gather*} (x\otimes t^s)\big(w+F^{r-1}V\big):= (x\otimes t^s)w+F^{r+s-1}V, \qquad \forall\, x\in \mathfrak{g}, \quad w\in F^{r}V, \quad r, s\in \mathbb{Z}_{\geq0}. \end{gather*} Observe that, $\gr V \cong V$ as $\mathfrak{g}$-modules. The following lemma will be useful. \begin{lemma} \label{f1} Let~$V$ be a~cyclic $\mathfrak{g}[t]$-module. 
For $r,s\in \mathbb{Z}_{\geq0}$, the following equality holds in the quotient space $F^{r+s}V/ F^{r+s-1}V$. \begin{gather*} (x\otimes t^s)\big(w+F^{r-1}V\big)=\big((x\otimes (t-a_1)\cdots (t-a_s))w\big) + F^{r+s-1}V, \end{gather*} for all $a_1,\dots,a_s\in\mathbb{C}$, $x\in \mathfrak{g}$, $w\in F^{r}V$. \end{lemma} Given a~$\mathfrak{g}[t]$-module~$V$ and $z\in\mathbb{C}$, we def\/ine another $\mathfrak{g}[t]$-module action on~$V$ as follows: \begin{gather*} (x\otimes t^s)v=\big(x\otimes (t+z)^s\big)v, \qquad x\in\mathfrak{g}, \qquad v\in V, \qquad s\in \mathbb{Z}_{\geq0}. \end{gather*} We denote this new module by $V^z$. Let $V_i$ be a~f\/inite-dimensional cyclic graded $\mathfrak{g}[t]$-module generated by $v_i$, for $1\leq i\leq m$, and let $z_1,\dots,z_m$ be distinct complex numbers. We denote~by \begin{gather*} \mathbf{V}={V_1}^{z_1}\otimes\dots\otimes {V_m}^{z_m}, \end{gather*} the corresponding tensor product of $\mathfrak{g}[t]$-modules. It is easily checked (see~\cite[Proposition 1.4]{FL}) that $\mathbf{V}$ is a~cyclic $\mathfrak{g}[t]$-module generated by $v_1\otimes\dots\otimes v_m$. The associated graded space $\gr \mathbf{V}$ is called the fusion product of $V_1,\dots,V_m$ w.r.t.\ the parameters $z_1,\dots,z_m$, and is denoted by ${V_1}^{z_1}*\cdots*{V_m}^{z_m}$. We denote $v_1*\cdots*v_m=(v_1\otimes\cdots\otimes v_m)+F^{-1}\mathbf{V}$, a~generator of $\gr \mathbf{V}$. For ease of notation we mostly just write $V_1*\cdots*V_m$ for ${V_1}^{z_1}*\cdots*{V_m}^{z_m}$. But unless explicitly stated otherwise, it is assumed that the fusion product does depend on these parameters. The following lemma will be needed later. \begin{lemma} \label{f2} Given $1\leq i\leq m$, let $V_i$ be a~finite-dimensional cyclic graded $\mathfrak{g}[t]$-module generated by $v_i$, and $s_i\in\mathbb{Z}_{\geq0}$. Let $x\in\mathfrak{g}$. If $(x\otimes t^{s_i})v_i=0$, $\forall\, 1\leq i\leq m$ then $(x\otimes t^{s_1+\dots+s_m}) v_1*\dots*v_m=0$. 
\end{lemma} \begin{proof} Let $z_1,\dots,z_m$ be distinct complex numbers and let $\mathbf{V}$ be as above. By using Lemma~\ref{f1}, we get the following equality in $\gr \mathbf{V}$, \begin{gather*} \big(x\otimes t^{s_1+\cdots+s_m}\big) \big((v_1\otimes\cdots\otimes v_m)+F^{-1}\mathbf{V}\big) \\ \qquad{} =\big(\big(x\otimes (t-z_1)^{s_1}\cdots(t-z_m)^{s_m}\big)v_1\otimes\dots\otimes v_m\big) + F^{s_1+\cdots+s_m-1}\mathbf{V}. \end{gather*} Now the proof follows by the def\/inition of the fusion product. \end{proof} \section{Proof of the main theorem}\label{section4} In this section, we prove the existence of the maps $\phi^{+}$ and $\phi^{-}$ from Theorem~\ref{MT} and then prove our main theorem (Theorem~\ref{MT}). {\bf 4.1.}~Given $k\geq 1$ and $0\leq i \leq k$, we denote~by \begin{gather*} \mathbf{V}_{i,k} = D(1, k\theta)/\big\langle \big(x^{-}_\theta \otimes t^{2k-i}\big) \overline{w}_{k\theta} \big\rangle, \end{gather*} and let $\overline{v}_{i,k}$ be the image of $\overline{w}_{k\theta}$ in $\mathbf{V}_{i,k}$. Using Proposition~\ref{WvsD}, $\mathbf{V}_{i,k}$ is the cyclic graded $\mathfrak{g}[t]$-module generated by the element $\overline{v}_{i,k}$, with the following def\/ining relations: \begin{gather} (x^{+}_{\alpha}\otimes t^s)\overline{v}_{i,k}=0, \qquad s\geq 0,\quad \alpha\in R^+, \label{r1} \\ (h \otimes t^s) \overline{v}_{i,k} = \langle k\theta, h\rangle \delta_{s,0} \overline{v}_{i,k}, \qquad s\geq0, \quad h\in\mathfrak{h}, \label{r2} \\ (x^{-}_{\alpha}\otimes 1) \overline{v}_{i,k}=0, \qquad \alpha\in R^+, \quad (\theta|\alpha)=0, \label{r3} \\ (x^{-}_{\alpha}\otimes 1)^{kd_{\alpha}+1} \overline{v}_{i,k}=0, \qquad \big(x^{-}_{\alpha} \otimes t^k\big) \overline{v}_{i,k}=0, \qquad \alpha\in R^+, \quad (\theta|\alpha)=1, \label{r4} \\ (x^{-}_{\theta}\otimes 1)^{2k+1} \overline{v}_{i,k}=0, \qquad \big(x^{-}_{\theta} \otimes t^{2k-i}\big) \overline{v}_{i,k}=0. \label{r5} \end{gather} The existence of $\phi^{+}$ is trivial, which we record below. 
\begin{proposition} The map $\phi^{+} : \mathbf{V}_{i,k+1} \rightarrow \mathbf{V}_{i+1,k+1}$ which takes $\overline{v}_{i,k+1} \rightarrow \overline{v}_{i+1,k+1}$ is a~surjective morphism of $\mathfrak{g}[t]$-modules with $\ker \phi^{+} = \big\langle \big({x^{-}_\theta} \otimes t^{2k+1-i}\big) \overline{v}_{i,k+1}\big\rangle$. \end{proposition} Now we prove the existence of $\phi^{-}$ in the following proposition. \begin{proposition} There exists a~surjective morphism of $\mathfrak{g}[t]$-modules $\phi^{-} : \tau_{2k+1-i} \mathbf{V}_{i,k} \rightarrow \ker \phi^{+}$, such that $\phi^{-}(\overline{v}_{i,k}) = \big(x^{-}_\theta \otimes t^{2k+1-i}\big) \overline{v}_{i,k+1}$. \end{proposition} \begin{proof} We only need to show that $\phi^{-}(\overline{v}_{i,k})$ satisf\/ies the def\/ining relations of $\mathbf{V}_{i,k}$. We start with the relation~\eqref{r1}. First, for $\alpha=\theta$ it is clear. Let $\alpha \in R^{+}\setminus\{\theta\}$; if $(\theta|\alpha)=0$, then it is also clear. If $(\theta|\alpha)=1$, then $(\theta-\alpha)\in R^{+}\setminus\{\theta\}$ and $(\theta|\theta-\alpha)=1$, and the claim follows from the relations $(x^{-}_{\theta-\alpha}\otimes t^r)\overline{v}_{i,k+1}=0$ for all $r\geq (k+1)$ in $\mathbf{V}_{i,k+1}$. The relations~\eqref{r2},~\eqref{r3} are trivially satisf\/ied by $\phi^{-}(\overline{v}_{i,k})$. Finally the last two relations~\eqref{r4},~\eqref{r5} are also satisf\/ied by $\phi^{-}(\overline{v}_{i,k})$; in fact these are exactly the statements of Lemmas~\ref{g} and~\ref{crucial}. \end{proof} {\bf 4.2.}~The existence of the surjective maps $\phi^{+}$ and $\phi^{-}$ gives the following: \begin{gather} \label{diml} \dim \mathbf{V}_{i,k+1} \leq \dim \mathbf{V}_{i,k} + \dim \mathbf{V}_{i+1,k+1}. \end{gather} The following proposition helps in proving the reverse inequality. 
\begin{proposition} The map $\psi :\mathbf{V}_{i,k+1}\rightarrow D(1,\theta)^{* (k+1-i)} * \textup{ev}_0 V(\theta)^{*i}$ such that $\psi(\overline{v}_{i,k+1}) = \overline{w}_{\theta}^{* (k+1-i)} * v_{\theta}^{*i}$ is a~well-def\/ined and surjective morphism of $\mathfrak{g}[t]$-modules. In particular, \begin{gather} \label{dimg} \dim \mathbf{V}_{i,k+1}\geq (\dim D(1, \theta))^{k+1-i} (\dim V(\theta))^{i}. \end{gather} \end{proposition} \begin{proof} We only need to show that $\psi(\overline{v}_{i,k+1})$ satisf\/ies the def\/ining relations of $\mathbf{V}_{i,k+1}$. But they follow easily from the following relations: \begin{gather*} ((h \otimes 1)-\langle (k+1)\theta, h\rangle)\big(\overline{w}_{\theta}^{\otimes (k+1-i)} \otimes v_{\theta}^{\otimes i}\big)=0, \qquad \forall\, h\in\mathfrak{h}, \\ (x^{-}_{\alpha}\otimes 1)^{\langle (k+1)\theta, \alpha^{\vee} \rangle + 1}\big(\overline{w}_{\theta}^{\otimes (k+1-i)}\otimes v_{\theta}^{\otimes i}\big)=0, \qquad \forall\, \alpha\in R^{+} \end{gather*} (which hold in $D(1,\theta)^{\otimes (k+1-i)} \otimes \textup{ev}_0 V(\theta)^{\otimes i}$) and $(h \otimes t^s)\psi(\overline{v}_{i,k+1})=0, \forall\, s\geq 1$, $h\in\mathfrak{h}$ (which holds in $D(1,\theta)^{* (k+1-i)}*\textup{ev}_0 V(\theta)^{*i}$). Further, the remaining relations follow from Lemma~\ref{f2}, by using the relations \begin{gather*} (x^{+}_{\alpha}\otimes t^s)\overline{w}_{\theta}=0=(x^{+}_{\alpha}\otimes t^s)v_{\theta}, \qquad \forall\, s\geq 0, \quad \alpha\in R^{+}, \\ (x^{-}_{\alpha}\otimes t)\overline{w}_{\theta}=\big(x^{-}_{\theta}\otimes t^{2}\big)\overline{w}_{\theta}=0=(x^{-}_{\theta}\otimes t)v_{\theta}=(x^{-}_{\alpha}\otimes t)v_{\theta}, \qquad \forall\, \alpha \in R^{+}\setminus\{\theta\}, \end{gather*} which hold in $D(1,\theta)$ and $\textup{ev}_0 V(\theta)$. \end{proof} We record below a~result from~\cite{F} and use this in proving our main theorem. 
\begin{proposition}\textup{\cite[Corollary 2]{F}} \label{F} Given $k\geq 1$, the following is an isomorphism of $\mathfrak{g}[t]$-modules, \begin{gather*} \textup{ev}_0 V(\theta)^{* k} \cong D(1, k\theta)/\big\langle \big({x^{-}_\theta} \otimes t^{k}\big) \overline{w}_{k\theta}\big\rangle. \end{gather*} \end{proposition} {\bf 4.3.}~We now prove Theorem~\ref{MT}, proceeding by induction on~$k$. First, for $k=1$, we prove Theorem~\ref{MT} for $0\leq i \leq 1$. For $i=1$, observe that $\mathbf{V}_{1,1} \cong_{\mathfrak{g}[t]} \ev_{0} V(\theta)$. Using Proposition~\ref{F},~\eqref{dimd},~\eqref{diml} and~\eqref{dimg}, this case follows. For $i=0$, observe that $\mathbf{V}_{0,1} \cong_{\mathfrak{g}[t]} D(1,\theta)$. Using part (2) of Theorem~\ref{MT} for $i=1$ and $k=1$,~\eqref{dimd},~\eqref{diml} and~\eqref{dimg}, this case also follows. Now let $k\geq2$, and assume Theorem~\ref{MT} holds for $(k-1)$. We prove the assertion for~$k$, proceeding by induction on~$i$. For $i=k$, it follows from Proposition~\ref{F},~\eqref{dimd},~\eqref{diml} and~\eqref{dimg}. Now let $i\leq (k-1)$, and assume Theorem~\ref{MT} holds for $(i+1)$. We now prove it for~$i$. Using part (2) of Theorem~\ref{MT}, for $(i+1)$ and~$k$, also for~$i$ and $(k-1)$, and~\eqref{diml}, we get \begin{gather*} \dim \mathbf{V}_{i,k+1} \leq (\dim D(1, \theta))^{k-i} (\dim V(\theta))^{i+1} + (\dim D(1, \theta))^{k-i} (\dim V(\theta))^{i}. \end{gather*} Together with~\eqref{dimd}, we see \begin{gather*} \dim \mathbf{V}_{i,k+1} \leq (\dim D(1,\theta))^{k+1-i} (\dim V(\theta))^{i}. \end{gather*} Now the proof of Theorem~\ref{MT} in this case follows by~\eqref{dimg}. This completes the proof of Theorem~\ref{MT}. Combining parts (1) and (2) of Theorem~\ref{MT}, we get Corollary~\ref{c1}. Using part (2) of Theorem~\ref{MT} and Proposition~\ref{F}, we obtain Corollary~\ref{c2}. 
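For clarity, the last dimension count in the inductive step uses only the identity~\eqref{dimd}: factoring out the common term, \begin{gather*} (\dim D(1, \theta))^{k-i} (\dim V(\theta))^{i+1} + (\dim D(1, \theta))^{k-i} (\dim V(\theta))^{i} \\ \qquad{} = (\dim D(1, \theta))^{k-i} (\dim V(\theta))^{i} \big(\dim V(\theta)+1\big) = (\dim D(1, \theta))^{k+1-i} (\dim V(\theta))^{i}. \end{gather*}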
\section{CV modules and truncated Weyl modules}\label{section5} We start this section by recalling the def\/inition of CV modules given in~\cite{CV}. For $\mathfrak{g}$ simply laced, we shall restate Theorem~\ref{MT} in terms of these modules. At the end, we also discuss truncated Weyl modules. {\bf 5.1.}~Given $\lambda\in P^{+}$, we say that $\pmb{\xi}=(\xi(\alpha))_{\alpha\in R^+}$ is a~$\lambda$-compatible $|R^{+}|$-tuple of partitions, if \begin{gather*} \xi(\alpha)=\big(\xi(\alpha)_1\geq\dots \geq\xi(\alpha)_j\geq\dots\geq 0\big), \qquad |\xi(\alpha)|= \sum\limits_{j\geq 1}\xi(\alpha)_j = \langle \lambda, \alpha^{\vee} \rangle, \qquad \forall\, \alpha \in R^+. \end{gather*} \begin{definition}[see~\protect{\cite[\S~2]{CV}}] Let $\lambda\in P^{+}$ and $\pmb{\xi}$ be a~$\lambda$-compatible $|R^{+}|$-tuple of partitions. The Chari--Venkatesh module or CV module $V(\pmb\xi)$ is the graded quotient of $W(\lambda)$ by the submodule generated by the following set \begin{gather*} \bigg\{\mathbf{x}^{-}_{\alpha}(r,s)w_{\lambda}: \alpha\in R^+, s, r \in \mathbb{N}~\textup{such that}~s+r\geq 1+rk+\sum\limits_{j\geq k+1} \xi(\alpha)_j~\textup{for some}~k\in \mathbb{N}\bigg\}. \end{gather*} \end{definition} The following lemma (implicit in the proof of Theorem~1 of~\cite{CV}) is useful in understanding CV modules. \begin{lemma} \label{c-v} Let $\lambda\in P^{+}$, $r\in\mathbb{N}$ and $\pmb\xi=(\xi(\alpha))_{\alpha\in R^+}$ a~$\lambda$-compatible $|R^{+}|$-tuple of partitions. If $r\geq \xi(\alpha)_1$ then $\mathbf{x}^{-}_{\alpha}(r,s)w_{\lambda}=0$ in $W(\lambda)$, for all $\alpha\in R^+$, $s, k \in \mathbb{N}$, $s+r\geq 1+rk+\sum\limits_{j\geq k+1} \xi(\alpha)_j$. \end{lemma} \begin{proof} Let $\alpha\in R^+$ and $s, k \in \mathbb{N}$ such that $s+r\geq 1+rk+\sum\limits_{j\geq k+1} \xi(\alpha)_j$. Given $r\geq \xi(\alpha)_1$, it follows that $s+r \geq 1 + \sum\limits_{j\geq 1} \xi(\alpha)_j=1+\langle \lambda, \alpha^{\vee} \rangle$. 
Now the proof follows by using Lemma~\ref{garland} and~\eqref{w2}. \end{proof} For $\lambda\in P^{+}$, we associate two~$\lambda$-compatible $|R^{+}|$-tuples of partitions as follows: \begin{gather*} \{\lambda\}:=\big(\big(\langle \lambda, \alpha^{\vee} \rangle\big)\big)_{\alpha\in R^{+}}, \qquad \pmb\xi(\lambda):=\big(\big(1^{\langle \lambda, \alpha^{\vee} \rangle}\big)\big)_{\alpha\in R^+}. \end{gather*} Each partition of $\{\lambda\}$ has at most one part, and each part of each partition of $\pmb\xi(\lambda)$ is 1. The CV modules corresponding to these two have nice descriptions, which we record below for later use: \begin{gather} \label{e1} V(\{\lambda\})\cong_{\mathfrak{g}[t]} \ev_0 V(\lambda), \qquad V(\pmb\xi(\lambda)) \cong_{\mathfrak{g}[t]} W(\lambda). \end{gather} The f\/irst isomorphism follows by taking $s=r=k=1$ in the def\/inition of the CV mo\-du\-le~$V(\{\lambda\})$ and the second isomorphism follows from Lemma~\ref{c-v}. {\bf 5.2.}~Given $k\geq 1$ and $0\leq i \leq k$, we def\/ine the following $|R^{+}|$-tuples of partitions: \begin{alignat*}{3} &\pmb\xi_{i}^{-}:=(\xi^{-}_{i}(\alpha))_{\alpha\in R^{+}}, \qquad&& \text{where} \quad \xi^{-}_{i}(\alpha)= \begin{cases} \big(1^{\langle k\theta, \alpha^{\vee} \rangle}\big), & \alpha\neq \theta, \\ \big(2^{i}, 1^{2(k-i)}\big), & \alpha=\theta, \end{cases}& \\ & \pmb\xi_{i}:=\big(\xi_{i}(\alpha)\big)_{\alpha\in R^{+}}, \qquad && \text{where} \quad \xi_{i}(\alpha)= \begin{cases} \big(1^{\langle (k+1)\theta, \alpha^{\vee} \rangle}\big), & \alpha\neq \theta, \\ \big(2^{i}, 1^{2(k+1-i)}\big), & \alpha=\theta, \end{cases}& \\ & \pmb\xi_{i}^{+}:=\big(\xi^{+}_{i}(\alpha)\big)_{\alpha\in R^{+}}, \qquad && \text{where} \quad \xi^{+}_{i}(\alpha)= \begin{cases} \big(1^{\langle (k+1)\theta, \alpha^{\vee} \rangle}\big), & \alpha\neq \theta, \\ \big(2^{i+1}, 1^{2(k-i)}\big), & \alpha=\theta. 
\end{cases}& \end{alignat*} For $\mathfrak{g}$ simply laced, we can restate Theorem~\ref{MT} in terms of CV modules as follows: \begin{theorem} \label{T2} Assume that $\mathfrak{g}$ is simply laced. Given $k\geq 1$ and $0\leq i \leq k $, we have the following: \begin{enumerate}\itemsep=0pt \item[$1)$] a~short exact sequence of $\mathfrak{g}[t]$-modules, \begin{gather*} 0 \rightarrow \tau_{2k+1-i}V(\pmb\xi^{-}_{i}) \rightarrow V(\pmb\xi_{i}) \rightarrow V(\pmb\xi^{+}_{i}) \rightarrow 0; \end{gather*} \item[$2)$] an isomorphism of $\mathfrak{g}[t]$-modules, \begin{gather*} V(\pmb\xi_{i}) \simeq V(\pmb\xi(\theta))^{* (k+1-i)} * V(\{\theta\})^{*i}. \end{gather*} \end{enumerate} \end{theorem} \begin{proof} This follows from Theorem~\ref{MT}, by using Lemma~\ref{c-v} and~\eqref{e1}. \end{proof} {\bf 5.3.}~For $n\geq 1$, we def\/ine $\mathcal{A}_n= \mathbb{C}[t]/(t^{n})$. The {\em truncated current algebra} $\mathfrak{g}\otimes \mathcal{A}_n$, can be thought of as the graded quotient of the current algebra $\mathfrak{g}[t]$: \begin{gather*} \mathfrak{g}\otimes \mathcal{A}_n\cong\mathfrak{g}[t]/\big(\mathfrak{g}\otimes t^n \mathbb{C}[t]\big). \end{gather*} Let $k\geq1$. The local Weyl module $W_{\mathcal{A}_n}(k\theta)$ for the truncated current algebra $\mathfrak{g}\otimes \mathcal{A}_n$ is def\/ined in~\cite{CFK}, and we call it the {\em truncated Weyl module}. It is easy to see that $W_{\mathcal{A}_n}(k\theta)$ naturally becomes a~$\mathfrak{g}[t]$-module and the following is an isomorphism of $\mathfrak{g}[t]$-modules, \begin{gather} \label{trul} W_{\mathcal{A}_n}(k\theta) \cong W(k\theta)/\langle (x^{-}_{\theta}\otimes t^{n}) w_{k\theta}\rangle. \end{gather} Now Corollary~\ref{truncated} is immediate from Corollary~\ref{c2}, by using~\eqref{w2'} and~\eqref{trul}. \LastPageEnding \end{document}
\begin{document} \title{On the Spectrum of the weighted Laplacian operator and its application to uniqueness of K\"ahler Einstein metrics} \author{Long Li} \maketitle{} \begin{abstract} The purpose of this paper is to provide a new proof of Bando-Mabuchi's uniqueness theorem for K\"ahler Einstein metrics on Fano manifolds, based on the convexity of the $Ding$-functional along Chen's weak $\mathcal{C}^{1,\bar{1}}$ geodesic, without using any further regularity. Unlike the smooth case, the lack of regularity of the geodesic forbids us to use the spectral formula of the weighted Laplacian operator directly. However, we can use smooth $\epsilon$-geodesics to approximate the weak one, and then prove that a sequence of eigenfunctions converges into the first eigenspace of the weighted Laplacian operator. \end{abstract} \section{Introduction} The study of K\"ahler Einstein metrics on Fano manifolds is an old but lasting subject in complex geometry: from the geometric point of view, it characterizes the manifolds with constant Ricci curvature, i.e. the K\"ahler metric satisfies \[ Ric( \omega) = \omega; \] from the analytic point of view, the complex Monge-Amp\`ere equations arise from the study of this curvature equation, i.e. the K\"ahler potential $\varphi\in\mathcal{H}$ is the solution of the following equation \[ (\omega_0 + i\partial\bar{\partial}\varphi)^{n} = e^{h - \varphi} \omega_0^n \] where $\mathcal{H}:= \{\omega = \omega_0 + i\partial\bar{\partial}\varphi> 0 \}$. Now as a PDE problem on manifolds, it is natural to ask two questions: existence and uniqueness. After Yau's celebrated work[14] on solving the Calabi conjecture, Tian's $\alpha$-invariant[13] gave a sufficient condition for solving the Monge-Amp\`ere equation on Fano manifolds in the 1980's. Since then, many people have contributed to this problem. 
And quite recently, Chen-Donaldson-Sun's work ([8], [9], [10]) proves that the existence of K\"ahler Einstein metrics on Fano manifolds is equivalent to the K-stability condition. This settles a long standing stability conjecture on K\"ahler Einstein metrics which goes back to Yau. \\ \\ The problem of uniqueness of K\"ahler Einstein metrics on Fano manifolds has also remained attractive over the years. It was first proved by Bando and Mabuchi[1] in 1987, and we will give an alternative proof in this paper. The statement is as follows \begin{thm} Let $X$ be a compact complex manifold with $-K_X>0$. Suppose $\omega_1$ and $\omega_2$ are two K\"ahler Einstein metrics on $X$, then there is a holomorphic automorphism $F$, such that \[ F^*(\omega_2) = \omega_1 \] where this $F$ is generated by a holomorphic vector field $\mathcal{V}$ on $X$. \end{thm} They solved this problem by considering a special energy (the Mabuchi energy) decreasing along a certain continuity path. Then the existence of a weak $\mathcal{C}^{1,\bar{1}}$ geodesic between any two smooth K\"ahler potentials was proved by X.X.Chen[7] in 2000, and this idea turns out to be an important tool in proving uniqueness theorems. For instance, Berman[3] gives a new proof of Bando-Mabuchi's theorem by arguing that the geodesic connecting two K\"ahler Einstein metrics is actually smooth. And Berndtsson[5] proves the uniqueness of possibly singular K\"ahler Einstein metrics along $\mathcal{C}^0$ geodesics. He observes that the $Ding$-functional is convex along these geodesics, from his curvature formula on the Bergman kernel[6]. Moreover, this curvature formula plays a major role in producing a holomorphic vector field when the functional is affine. This method is used by Berman again to prove the uniqueness of Donaldson's equation[2], and generalized to $klt$ pairs in [BBEGZ12]. \\ \\ The idea of this paper is also initiated from the convexity of the $Ding$-functional along geodesics, from a different perspective. 
However, instead of using Berndtsson's curvature formula, we are going to use Futaki's formula (refer to Section 2) for the weighted Laplacian operator to derive the holomorphic vector fields. Unlike the former case, here the main difficulty arises from the change of metrics during the convergence of the Laplacian operators. Fortunately, we have control of the mixed derivatives $\partial_{\alpha}\partial_{\bar{\beta}}\phi$ on the product manifold, i.e. Chen's existence theorem for the weak geodesic[7] guarantees a uniform bound on the mixed second derivatives of the potential in both space and time directions on the geodesic. Moreover, we can perturb the weak geodesic to a sequence of nearby smooth metrics $\{g_{\epsilon}\}$ with mixed second derivatives under control[7]. \\ \\ However, this is not quite enough for our purpose, because we lack a uniform lower bound for these metrics, and the lower bound of the metrics (or the upper bound of the inverse metrics) is crucially involved in the weighted Laplacian operator as \[ \Box_{\phi_g}u = \partial^{\phi_g} (\omega_g \lrcorner \bar{\partial} u) \] where $u$ is a smooth function on $X$. More fundamentally, it plays an important role in the $L^2$ norm of $(0,1)$ forms as \[ \langle \xi,\eta \rangle_g = \int_X g^{i\bar{j}}\xi_{i}\overline{\eta_{j}} d\mu_g \] where $\xi = \xi_{i}d\bar{z}^{i}$ and $\eta= \eta_{j}d\bar{z}^j$. This forbids us to use standard $L^2$ theorems to get a uniform control. We will overcome this difficulty in Section 5 by separating the Laplacian equation into two equations, i.e. \[ v_g \lrcorner\, \omega_g = \bar{\partial} u \] and \[ \partial^{\phi_g} v = \Box_{\phi_g}u. \] This idea is initiated from solving the equation $\partial^{\phi}v= \pi_{\perp}\phi'$ in Berndtsson's work[5]. 
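Note that no information is lost in this splitting: the first equation simply determines the vector field $v_g$ obtained from $\bar{\partial}u$ by raising the index with the metric, i.e. $v_g = \omega_g \lrcorner \bar{\partial} u$, and substituting this into the second equation recovers the weighted Laplacian equation above, \[ \partial^{\phi_g} v_g = \partial^{\phi_g} (\omega_g \lrcorner \bar{\partial} u) = \Box_{\phi_g}u. \] The advantage is that the two equations can then be estimated separately.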
Then a crucial $W^{1,2}$ estimate on the sequence of vector fields $v_{g_{\epsilon}}$ shows that it converges to some vector field $v_{\infty}$ in the strong $L^2$ norm, and a further $L^1$ estimate on $\bar{\partial}v_{g}$ indicates that the vector field $v_{\infty}$ is in fact holomorphic, under certain conditions (refer to Proposition 12). This solves our problem in the fiber direction, but in the time direction we need to argue that the holomorphic vector field remains constant. This is guaranteed since it corresponds to an eigenfunction in the first eigenspace of the weighted Laplacian operator and satisfies the geodesic equation. \\ \\ $\mathbf{Acknowledgement}$: I would like to express my great thanks to Prof. Xiuxiong Chen, who suggested this problem to me and showed me the way of approach when the geodesic is smooth. I would also like to thank Prof. Eric Bedford, Prof. Song Sun, Prof. Weiyong He, and Dr. Kai Zheng for helpful discussions. Especially, I want to thank Prof. Futaki for pointing out an error in an earlier version. Finally, suggestions from Chengjian Yao also helped to make this paper clearer. \section{Futaki's formula and Hessian of $Ding$-functional} The manifold $X$ under our consideration is Fano, so we can assume the K\"ahler class $[\omega] = c_1(X)$, i.e. for each K\"ahler metric $\omega_g$, there exists a smooth function $F_g$ such that \[ Ric(\omega_g) - \omega_g = i\partial\bar{\partial}F_g, \] hence we can define a weighted volume form as $e^F\det g$ (we will write $F_g$ as $F$ when there is no confusion), and a pairing for any $u,v\in \mathcal{C}^{\infty}(X)$ \[ (u,v)_g = \int_X u \bar{v} e^{F}\det g, \] then Futaki[11] considers a weighted Laplacian operator \[ \Delta_{F} u = \Delta_g u - \nabla^j u \nabla_j F. 
\] The reason for this definition is that the new Laplacian operator is well suited to integration by parts under the weighted volume form: \[ \int_X (\Delta_{F} u )\bar{u} e^{F} \det g = -\int_X (\nabla_j\nabla^j u + \nabla^j u \nabla_j F ) \bar{u} e^{F}\det g \] \[ = \int_X \nabla^j u \nabla_j\bar{u}e^{F}\det g \] \[ = \int_X |\bar{\partial} u|^2 e^{F} \det g \] where the norm of the $1$-form is taken with respect to the metric $g$. Hence it is an elliptic operator, and its spectrum is discrete, $0 < \lambda_1 <\lambda_2 < \cdots$. Then for each eigenfunction $\Delta_{F} u = \lambda u$, Futaki [11] writes the following formula \[ \lambda \int_X |\bar{\partial} u|^2 e^{F}\det g = \int_X |\bar{\partial} u |^2 e^{F}\det g + \int_X |L_g u|^2 e^{F}\det g \] where $L_g $ is a second order differential operator defined as \[ L_g u = \nabla_{\bar{j}}\nabla^i u \frac{\partial}{\partial z^i}\otimes d\bar{z}^j. \] Now observe that the LHS of Futaki's formula is in fact $\int_X |\Delta_{F_g}u|^2 e^F\det g$, so we can generalize it to all smooth functions: \begin{lem} For any smooth function $u$ on $X$, we have \[ \int_X | \Delta_{F}u|^2 e^F\det g = \int_X |\bar{\partial} u |^2 e^{F}\det g + \int_X |L_g u|^2 e^{F}\det g. \] \end{lem} \begin{pf} We can decompose $u = \Sigma_0^{\infty} a_i(u) e_i $ into the eigenspaces of the operator $\Delta_{F_g}$, and notice that the eigenfunctions $e_i$ are mutually orthogonal under the weighted volume form and metric $g$. The first two terms in the above equation preserve this orthogonality, i.e. 
choose eigenfunctions $u$ and $w$ of $\Delta_F$ which are orthogonal to each other; then \[ \int_X |\bar{\partial}u+ \bar{\partial}w |^2 e^F \det g = \int_X |\bar{\partial}u|^2 e^F \det g + \int_X |\bar{\partial}w|^2e^F\det g \] and \[ \int_X |\Delta_F u+ \Delta_F w |^2 e^F \det g = \int_X |\Delta_F u|^2 e^F \det g + \int_X |\Delta_F w|^2e^F\det g. \] Moreover, the differential operator $L_g$ also preserves this orthogonality of eigenfunctions. First notice that \[ F_{,\alpha\bar{\beta}} = R_{\alpha\bar{\beta}} - g_{\alpha\bar{\beta}} \] from the definition of $F$; then we compute as follows \[ \int_X \langle L_g u, L_g w \rangle_g e^{F}\det g = \int_X g^{\alpha\bar{\lambda}}g^{\mu\bar{\beta}} u_{,\bar{\lambda}\bar{\beta}} \bar{w}_{,\mu\alpha}e^F\det g \] \[ = -\int_X g^{\alpha\bar{\lambda}}g^{\mu\bar{\beta}} u_{,\bar{\lambda}\bar{\beta}\alpha}\bar{w}_{,\mu} e^F\det g - \int_X g^{\alpha\bar{\lambda}}g^{\mu\bar{\beta}} u_{,\bar{\lambda}\bar{\beta}}\bar{w}_{,\mu} F_{,\alpha} e^F\det g \] \[ = - \int_X g^{\alpha\bar{\lambda}}g^{\mu\bar{\beta}} u_{,\bar{\lambda}\alpha\bar{\beta}}\bar{w}_{,\mu}e^F\det g - \int_X g^{\mu\bar{\beta}} R_{\bar{\beta}}^{\bar{\gamma}} u_{,\bar{\gamma}}\bar{w}_{,\mu} e^F\det g \] \[ + \int_X g^{\alpha\bar{\lambda}}g^{\mu\bar{\beta}} u_{,\bar{\lambda}}\bar{ w}_{,\mu\bar{\beta}} F_{,\alpha} e^F\det g + \int_X g^{\mu\bar{\beta}} u_{,\bar{\lambda}}\bar{w}_{,\mu} F^{, \bar{\lambda}}_{\bar{\beta}} e^F\det g +\int_X g^{\alpha\bar{\lambda}}g^{\mu\bar{\beta}} u_{,\bar{\lambda}}\bar{w}_{,\mu} F_{,\alpha} F_{,\bar{\beta}} e^F\det g \] \[ = \int_X g^{\alpha\bar{\lambda}}g^{\mu\bar{\beta}}u_{,\bar{\lambda}\alpha}\bar{w}_{,\mu\bar{\beta}} e^F\det g + \int_X g^{\alpha\bar{\lambda}}g^{\mu\bar{\beta}} u_{,\bar{\lambda}\alpha}\bar{w}_{,\mu} F_{,\bar{\beta}}e^F\det g \] \[ + \int_X g^{\alpha\bar{\lambda}}g^{\mu\bar{\beta}} u_{,\bar{\lambda}}\bar{ w}_{,\mu\bar{\beta}} F_{,\alpha} e^F\det g +\int_X (g^{\alpha\bar{\lambda}} u_{,\bar{\lambda}}F_{,\alpha} )( g^{\mu\bar{\beta}} 
\bar{w}_{,\mu} F_{,\bar{\beta}}) e^F\det g - \int_X g^{\mu\bar{\beta}}u_{,\bar{\beta}}\bar{w}_{,\mu}e^F\det g \] \[ = \int_X ( g^{\alpha\bar{\lambda}}u_{,\alpha\bar{\lambda}} + g^{\alpha\bar{\lambda}}u_{,\bar{\lambda}}F_{,\alpha})(g^{\mu\bar{\beta}}\bar{w}_{,\mu\bar{\beta}} + g^{\mu\bar{\beta}}\bar{w}_{,\mu}F_{,\bar{\beta}}) e^F \det g \] \[ =(\Delta_F u, \Delta_F w )_g = 0. \] \end{pf} Next let us consider an easy case: according to He [12], the second derivative of the $Ding$-functional on a smooth geodesic equals \[ \frac{\partial^2 \mathcal{D}}{\partial t^2} = (\int_X e^{F_g} \det g)^{-1} \{ \int_X ( |\bar{\partial}\varphi'|^2_g - (\pi_{\perp}\varphi')^2)e^{F_g}\det g \} \] where the metric $g$ is induced by the K\"ahler form $\omega_{\varphi}$, and the projection operator is defined as $\pi_{\perp} u = u- \int_X u e^{F_g} \det g / \int_X e^{F_g}\det g $. This implies the $Ding$-functional is convex along smooth geodesics. Now suppose there is a smooth geodesic connecting two K\"ahler Einstein metrics; the $Ding$-functional must then be constant along it. Hence we get \[ \int_X |\bar{\partial}\varphi'|^2_g e^{F_g}\det g = \int_X (\pi_{\perp}\varphi')^2 e^{F_g}\det g, \] so we see that the first eigenvalue $\lambda_1$ of the weighted Laplacian operator $\Delta_{F_g}$ is $1$, and $\pi_{\perp}\varphi'$ belongs to the first eigenspace, i.e. \[ \Delta_{F_g}(\pi_{\perp}\varphi') = \pi_{\perp}\varphi'. \] Now by Futaki's formula, we see \[ L_g (\pi_{\perp}\varphi') = 0, \] so the induced vector field $V _t = \nabla^i\varphi' \frac{\partial}{\partial z^i}$ is holomorphic on $X$. 
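To spell out the last step (a routine verification, using only that every eigenvalue satisfies $\lambda_i \geqslant 1$, which follows from Futaki's formula): write $\pi_{\perp}\varphi' = \Sigma_i a_i e_i$ in the eigenbasis of $\Delta_{F_g}$, so that \[ \int_X |\bar{\partial}\varphi'|^2_g e^{F_g}\det g = \Sigma_i \lambda_i |a_i|^2 \ \ \mbox{and} \ \ \int_X (\pi_{\perp}\varphi')^2 e^{F_g}\det g = \Sigma_i |a_i|^2; \] the equality of the two integrals then gives $\Sigma_i (\lambda_i -1)|a_i|^2 = 0$, and since every $\lambda_i \geqslant 1$, only the coefficients with $\lambda_i = 1$ can survive. 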
Moreover, let us differentiate this vector field with respect to $t$ along the geodesic: \[ (g^{j\bar{k}}\varphi'_{\bar{k}})' = g^{j\bar{k}}\varphi''_{\bar{k}} - g^{j\bar{q}} \varphi'_{p\bar{q}} g^{p\bar{k}}\varphi'_{\bar{k}} \] \[ = g^{j\bar{k}}(g^{\alpha\bar{\beta}}\varphi'_{\alpha}\varphi'_{\bar{\beta}})_{,\bar{k}} - g^{j\bar{q}} \varphi'_{p\bar{q}} g^{p\bar{k}}\varphi'_{\bar{k}} \] \[ = g^{j\bar{k}} g^{\alpha\bar{\beta}} \varphi'_{\alpha}\varphi'_{,\bar{\beta}\bar{k}} = 0 \] by the geodesic equation and the holomorphicity of $V_t$. Finally, this gives us a holomorphic vector field $\mathcal{V} = V_t - \partial/\partial t$ on $X\times S$, and its induced automorphism gives the uniqueness of the two K\"ahler Einstein metrics. \section{Some $L^2$ theorems } In this section, we are going to use $L^2$ theorems to investigate the weighted Laplacian operator $\Delta_{F_g}$ and its spectrum; we shall then project our target to the first eigenspace in the proof of the uniqueness theorem. First notice that we always have $\lambda_1 \geqslant 1$ by Futaki's formula. Then we are going to introduce some notation. \\ \\ From now on, we shall assume the manifold $X$ admits non-trivial holomorphic vector fields, and $H^{0,1}(X) =0$. Then fix one $t$ and restrict our attention to this fiber $X\times \{t \}$. Since $-K_X = [\omega]$, we can write \[ \omega_g = i\partial\bar{\partial}\phi_g \] where $\phi_g$ is a plurisubharmonic metric on the line bundle $-K_X$. We claim that, as measures, \[ e^{F_g}\det g = e^{-\phi_g}, \] because locally $F_g = -\log\det g - \phi_g$. Then naturally the pairing between functions on $X$ with this weight can be written as \[ (u,v)_g = \int_X u\bar{v} e^{-\phi_g}. \] Here is where the $L^2$ theory comes into play. Let us consider the space of all $L^2$ bounded $-K_X$ valued $(n,0)$ forms under the metric $\phi_g$, i.e. 
it consists of every function $u$ on $X$ such that \[ \int_X |u |^2 e^{-\phi_g} < +\infty; \] we denote this space by $L^2_{(n,0)}(-K_X, \phi_g)$. Similarly we can consider all $L^2$ bounded $-K_X$ valued $(n, 1)$ forms under the weighted norm \[ \int_X g^{\alpha\bar{\beta}}v_{\alpha}\overline{v_{\beta}} e^{-\phi_g} < +\infty, \] and we denote this space by $L^2_{(n,1)}(-K_X, \phi_g)$. Then we can define an unbounded operator $\bar{\partial}$ between them, \[ \bar{\partial}: L^2_{(n,0)}(-K_X, \phi_g) \dashrightarrow L^2_{(n,1)}(-K_X, \phi_g). \] Notice that the domain of this operator is not the whole $L^2$ space. In fact, we can define \[ dom(\bar{\partial}): = \{ u\in L^2_{(n,0)}(-K_X, \phi_g) ; \ \bar{\partial}u\in L^2_{(n,1)}(-K_X, \phi_g) \}, \] but it is not densely defined in the $L^2$ space when $g_{\phi}$ is a $\mathcal{C}^{1,\bar{1}}$ solution of the geodesic equation on a fiber $X\times\{ t\}$. Hence we consider the Hilbert space $\mathcal{H}_1$, the closure of $dom(\bar{\partial})$ in $L^2_{(n,0)}(-K_X, \phi_g)$. We claim that $\mathcal{H}_1$ is non-trivial.\\ \\ First notice that for any non-trivial holomorphic vector field $v \in L^2_{(n-1,0)}(-K_X)$, we can solve the following equation \[ \bar{\partial}u = \omega_{g_{\phi}}\wedge v, \] since $\bar{\partial}( \omega_{g_{\phi}}\wedge v) = 0$ in the sense of distributions, and every $\bar{\partial}$-closed form here is $\bar{\partial}$-exact because $H^{0,1}(X) =0$. Next, consider the subspace $\mathcal{W}$ containing all such $u$, i.e. define \[ \mathcal{W}: = \{u\in L^2_{(n,0)}(-K_X, \phi_g);\ \bar{\partial}u =\omega_{g_{\phi}}\wedge v, \ \forall v\in L^2_{(n-1,0)}(-K_X) \}, \] then it is a non-trivial subspace of $L^2_{(n,0)}(-K_X, \phi_g)$, and it is easy to check \[ \mathcal{W} \subset dom(\bar{\partial}), \] which proves the claim. 
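The $\bar{\partial}$-closedness used above is a direct check; it only uses that $\omega_{g_{\phi}}$ is $\bar{\partial}$-closed and that $v$ is holomorphic: \[ \bar{\partial}(\omega_{g_{\phi}}\wedge v) = \bar{\partial}\omega_{g_{\phi}}\wedge v + \omega_{g_{\phi}}\wedge \bar{\partial}v = 0, \] since $\omega_{g_{\phi}} = i\partial\bar{\partial}\phi_g$ and $\bar{\partial}v = 0$ (there is no sign because $\omega_{g_{\phi}}$ has even degree). 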
Now $\bar{\partial}$ is a densely defined, closed operator on the Hilbert space $\mathcal{H}_1$; it is closed by the continuity of differential operators in the distribution sense. We can then discuss its Hilbert adjoint operator $\bar{\partial}^*_{\phi}$, which is a densely defined, closed operator on $L^2_{(n,1)}(-K_X, \phi_g)$. Moreover, they have closed ranges. \begin{lem} $\bar{\partial}$ and $\bar{\partial}^{*}_{\phi_g}$ are densely defined, closed operators with closed ranges. \end{lem} \begin{pf} We need to estimate the $L^2$ norm of $\bar{\partial}u$. Take $h$ to be a fixed smooth metric with positive Ricci curvature on $X$, and $u\in dom(\bar{\partial})\cap \ker (\bar{\partial})^{\perp}$; we have \[ \int_X |\bar{\partial}u|_g^2 e^{-\phi_g} \geqslant \int_X |\bar{\partial}u|_h^2 \det h \] \[ \geqslant c \int_X |u|^2 \det h \] \[ \geqslant c' \int_X |u|_g^2 e^{-\phi_g}. \] This estimate implies $\bar{\partial}$ has closed range, and hence so does its adjoint $\bar{\partial}^*_{\phi_g}$, by standard functional analysis. \end{pf} Then we can define the Laplacian operator $\Box_{\phi_g} = \bar{\partial}^*_{\phi_g}\bar{\partial}$, which is also an unbounded closed operator, i.e. \[ \Box_{\phi_g}: L^2_{(n,0)}(-K_X, \phi_g)\dashrightarrow L^2_{(n,0)}(-K_X, \phi_g) \] and its domain of definition is \[ dom(\Box_{\phi_g}): =\{ u\in L^2_{(n,0)}(-K_X, \phi_g);\ u\in dom(\bar{\partial})\ and \ \bar{\partial}u \in dom(\bar{\partial}^*_{\phi_g}) \}. \] We claim this operator also has closed range, and \begin{prop} we have \[ \ker \Box_{\phi_g} = coker \Box_{\phi_g}, \] hence they are both finite dimensional. \end{prop} \begin{pf} First note $\ker \Box_{\phi_g} = \ker \bar{\partial}$ is the $1$-dimensional space of constant functions on $X$. In order to prove $coker \Box_{\phi_g}$ also has finite rank, it is enough to prove the weighted Laplacian operator has closed range, since it is self-adjoint: \[ coker \Box_{\phi_g} = R(\Box_{\phi_g})^{\perp} = \ker\Box_{\phi_g}. 
\] Now we are going to prove the closed range property, which follows from the following estimate for $u\in dom(\Box_{\phi_g})\cap \ker (\bar{\partial})^{\perp}$: \[ || u ||_g^2 \leqslant C ||\bar{\partial}u ||_g^2 \] \[ \leqslant C (\Box_{\phi_g} u, u)_g \] \[ \leqslant \frac{C^2}{2} || \Box_{\phi_g} u ||^2_g + \frac{1}{2} || u ||^2_g \] and hence \[ || u ||^2_g \leqslant C' ||\Box_{\phi_g} u ||^2_g, \] which implies the claim. \end{pf} Notice that this is not enough to guarantee the existence of a discrete spectrum, but we have a further estimate. \begin{lem} For all $u\in dom(\Box_{\phi_g})\cap \ker(\bar{\partial})^{\perp}$, there is a uniform constant $C$, such that \[ || u ||_{W^{1,2}} \leqslant C || \Box_{\phi_g} u ||_g. \] \end{lem} \begin{pf} We again compare with some fixed smooth weight (metric) $h$: \[ ||\bar{\partial}u ||_h^2 \leqslant C ||\bar{\partial}u ||_g^2 \] \[ =C (\Box_{\phi_g}u, u )_g \] \[ \leqslant C ||\Box_{\phi_g} u||_g ||u||_g \] \[ \leqslant C' ||\Box_{\phi_g} u||_g ||u||_h \] \[ \leqslant C'' ||\Box_{\phi_g}u||_g ||\bar{\partial}u ||_h, \] so \[ ||\bar{\partial}u ||_h \leqslant C'' ||\Box_{\phi_g}u||_g. \] Finally, an integration by parts gives the desired estimate, since \[ \int_X h^{\alpha\bar{\beta}} u_{,\alpha}\overline{u_{,\beta}} \det h = -\int_X h^{\alpha\bar{\beta}} u_{,\alpha\bar{\beta}} \bar{u}\det h \] \[ = -\int_X h^{\alpha\bar{\beta}} u_{,\bar{\beta}\alpha} \bar{u}\det h \] \[ = \int_X h^{\alpha\bar{\beta}}u_{,\bar{\beta}}\overline{u_{,\bar{\alpha}}} \det h. \] \end{pf} Then we can discuss the spectrum of $\Box_{\phi_g}$, when $\phi_g$ is only of class $\mathcal{C}^{1,\bar{1}}$. Suppose $\lambda$ is an eigenvalue of $\Box_{\phi_g}$, and let $\Lambda$ be the corresponding eigenspace; we claim \begin{prop} $\dim \Lambda < +\infty$ \end{prop} \begin{pf} Let $v_i\in \Lambda$ be a sequence of eigenfunctions with bounded $L^2$ norm, i.e. 
$||v_i ||^2_g =1$. Then since \[ || v_i ||_{W^{1,2}} \leqslant C || \Box_{\phi_g} v_i ||_g \] \[ = C \lambda, \] there exists a $W^{1,2}$ function $v_{\infty}$ such that $v_i\rightarrow v_{\infty}$ in the strong $L^2$ norm, by the compact embedding theorem. And since $\Lambda =\ker (\Box_{\phi_g}- \lambda I)$ is a closed subspace of $L^2$, \[ v_{\infty}\in \Lambda. \] This implies every bounded sequence in $\Lambda$ has a convergent subsequence, i.e. the unit ball in $\Lambda$ is compact, hence $\dim\Lambda$ is finite. \end{pf} Next we are going to discuss some computations when the weight $\phi_g$ is at least $\mathcal{C}^2$. First notice that formally \[ \langle \Box_{\phi_g} u, v \rangle_g \ = \ \langle \bar{ \partial} u, \bar{\partial} v\rangle_g \] for any pair $u, v$. It is easy to see that \[ \Box_{\phi_g}u = \Delta_{\phi_g}u \] for all smooth functions $u$, when the metric $\phi_g$ is smooth. If we look closer at these operators, there is a more computable way to express them. For this purpose, let us assume $\phi_g$ is a $\mathcal{C}^2$ metric; then for any $(n,1)$ form $\alpha $ with value in $-K_X$, \[ \bar{\partial}^*_{\phi_g} \alpha = \partial^{\phi_g} (\omega_g \lrcorner \alpha) \] where $\partial^{\phi} v = e^{\phi}\partial (e^{-\phi} v ) = \partial v - \partial\phi\wedge v$ for any $(n-1,0)$ form $v$ with value in $-K_X$ (that is, a vector field on $X$). Hence if we define \[ v= \omega_g\lrcorner \alpha, \] we will have \[ \bar{\partial}^*_{\phi_g} \alpha = \partial^{\phi_g} v \] and the weighted Laplacian operator can be computed as \[ \Box_{\phi_g} u = \partial^{\phi_g} (\omega_g\lrcorner \bar{\partial} u) \] for $u\in dom \Box_{\phi_g}\cap L^2_{(n,0)}(-K_X, \phi_g)$. 
Notice that there is a commutation relation between the newly defined operator $\partial^{\phi}$ and $\bar{\partial}$, that is, \begin{equation} \partial^{\phi}\bar{\partial} + \bar{\partial} \partial^{\phi} = i\partial\bar{\partial}\phi\wedge\cdot \end{equation} Now if $u$ is any eigenfunction of the weighted Laplacian operator with eigenvalue $\lambda$, i.e. $\Box_{\phi_g} u = \lambda u$, we can split it into two equations \[ \omega_g \lrcorner \bar{\partial}u = v, \ \ \ \partial^{\phi_g} v = \lambda u. \] Here we can write $v = X\lrcorner 1$, where the constant function $1$ is read as an $(n,0)$ form with value in $-K_X$, and $X= X^{\alpha}\frac{\partial}{\partial z^{\alpha}}$ is a vector field in the $(1,0)$ direction on the manifold. Next we are going to prove Futaki's formula using the commutation relation. \begin{lem}(Futaki's formula) Let $u$ be an eigenfunction of the weighted Laplacian with eigenvalue $\lambda$, i.e. $\Box_{\phi_g}u = \lambda u$; then \[ \lambda \int_X |\bar{\partial}u|^2_g e^{-\phi_g} = \int_X (|L_g u|^2 + |\bar{\partial}u|^2_g)e^{-\phi_g}. \] \end{lem} \begin{pf} First notice $u$ can be chosen to be either real or purely imaginary. Hence we give the proof when $u$ is real valued; the case when $u$ is purely imaginary is similar. 
Now by the commutation relation for $\partial^{\phi_g}$, we compute $\bar{\partial}(\lambda u)$: \[ -\partial^{\phi_g}\bar{\partial} v + i\partial\bar{\partial}\phi_g \wedge v = \lambda\bar{ \partial} u; \] notice that $i\partial\bar{\partial} \phi_g = \omega_g$, hence \[ -\partial^{\phi_g}\bar{\partial} v = (\lambda -1)\bar{\partial} u. \] Pairing it with $\bar{\partial}u$, \[ (\lambda -1) \int_X |\bar{\partial}u |^2_ge^{-\phi_g}=-\int_X \langle \partial^{\phi_g}\bar{\partial} v, \bar{\partial}u \rangle_g e^{-\phi_g} \] \[ = \int_X -g^{\lambda\bar{\mu}}\partial_{\alpha} (e^{-\phi_g} \partial_{\bar{\mu}} X^{\alpha}) \overline {\partial_{\bar{\lambda}}u} \] \[ =\int_X \partial_{\bar{\mu}}X^{\alpha} \overline{\partial_{\bar{\alpha}}X^{\mu}} e^{-\phi_g}. \] Now notice that $X^{\alpha} = g^{\alpha\bar{\beta}}u_{,\bar{\beta}}$; in normal coordinates where $g_{i\bar{j}} = \delta_{ij} \Lambda_i$, \[ \partial_{\bar{\mu}}X^{\alpha}\partial_{\alpha}X^{\bar{\mu}} = g^{\alpha\bar{\beta} }u_{,\bar{\beta}\bar{\mu}} g^{\lambda\bar{\mu}}u_{, \lambda\alpha} \] \[ = \frac{1}{\Lambda_{\alpha} \Lambda_{\lambda}}u_{,\bar{\alpha}\bar{\lambda}} u_{,\lambda\alpha} \] \[ = \frac{1}{\Lambda_{\alpha}\Lambda_{\lambda}} u_{,\bar{\alpha}\bar{\lambda}} u_{,\alpha\lambda} \] \[ = g_{\alpha\bar{\beta}}g^{\lambda\bar{\mu}} \partial_{\bar{\mu}}X^{\alpha} \overline{\partial_{\bar{\lambda}}X^{\beta}}, \] hence we have proved Futaki's formula: \[ (\lambda -1)\int_{X}|\bar{\partial}u|^2_ge^{-\phi_g} = \int_X |\bar{\partial}X|^2_g e^{-\phi_g}. \] \end{pf} \section{$Ding$-functionals along the approximation geodesics} Let $X$ be an $n$ dimensional compact complex K\"ahler manifold with K\"ahler metric $\omega$; then we can write the K\"ahler form locally as \[ \omega = g_{\alpha\bar{\beta}}dz^{\alpha}\wedge d\bar{z}^{\beta} \] where $\alpha,\beta = 1,\cdots,n$. Take $S$ to be a cylinder, and let $z^{n+1} = t + \sqrt{-1}s$ be its coordinate. 
Then $z = (z^1,\cdots, z^n, z^{n+1})$ is a point in $X\times S$, and we can define \[ \tilde{\omega} = g_{\alpha\bar{\beta}}dz^{\alpha}\wedge d\bar{z}^{\beta} + dz^{n+1}\wedge d\bar{z}^{n+1} \] as a K\"ahler metric on $X\times S$, with $\tilde{\varphi} = \varphi - |z^{n+1}|^2$ as the new potential. We shall write $\tilde{\omega}$ as $\omega$ and $\tilde{\varphi}$ as $\varphi$ when there is no confusion. Then Chen [7] proves the following two theorems. \begin{thm}(Existence of weak geodesics) Let $\varphi_0, \varphi_1\in \mathcal{H}$; then there exists a unique $C^{1,\bar{1}}$ geodesic connecting them, i.e. the following homogeneous Monge-Amp\`ere equation has a unique weak solution $\varphi\in \overline{\mathcal{H}}$ (the closure is taken under the $C^{1,\bar{1}}$ topology) on $X\times S$ \[ \det (g_{i\bar{j}}+ \partial_{i}\partial_{\bar{j}}\varphi)_{(n+1)\times(n+1)} = 0 \] where $i,j =1,\cdots, n+1$, and on the boundary $\partial(X\times S)$ \[ \varphi(0,s,z) = \varphi_0(z),\ \varphi(1,s,z) =\varphi_1(z), \] with the following estimate \[ ||\varphi||_{\mathcal{C}^1(X\times S)}+ \max\{|\partial_i\partial_{\bar{j}}\varphi | \} < C \] where $C$ is a uniform constant depending only on $\varphi_0$ and $\varphi_1$. \end{thm} \begin{thm}($\epsilon$-approximation geodesics) Given $\varphi_0,\varphi_1\in\mathcal{H}$, we can obtain a sequence of approximation geodesics $\varphi_{\epsilon}(t, z)$ as follows: for each small $\epsilon>0$, there exists a unique solution of the equation \[ (\varphi_{tt} - |\partial_X \varphi'|_{g_{\varphi}}^2)\det(g_{\varphi}) = \epsilon \det h \] such that there exists a uniform constant $C$ with \[ |\varphi'_t| + |\varphi''_t| + |\varphi|_{\mathcal{C}^1} + \max\{ |\partial_{\alpha}\partial_{\bar{\beta}}\varphi| \} < C, \] and $\varphi_{\epsilon}$ converges to the $\mathcal{C}^{1,\bar{1}}$ geodesic $\varphi$ in the weak $\mathcal{C}^{1,\bar{1}}$ topology. 
\end{thm} Notice that for any plurisubharmonic metric $\phi$ on $-K_X$, we can write its potential as $\varphi = \phi - \phi_0$, where $\phi$ and $\phi_0$ are the corresponding metrics on the line bundle $-K_X$. Now suppose $\phi_0, \phi_1$ are two smooth K\"ahler Einstein metrics on $X$, with their K\"ahler forms $\omega_i=i\partial\bar{\partial}\phi_i$, $i =0,1$, satisfying \[ \omega_i^n = \frac{e^{-\phi_i}}{\int_X e^{-\phi_i}}. \] Define the following functionals \[ \mathcal{F}(\phi):= -\log\int_X e^{-\phi} \] and \[ \mathcal{E}(\phi):= \frac{1}{n+1} \Sigma_{j=0}^n \int_X \varphi \omega_0^j\wedge\omega_{\phi}^{n-j} \] where $\omega_{\phi} = i\partial\bar{\partial}\phi$. Then the $Ding$-functional is defined as \[ \mathcal{D} = -\mathcal{E} + \mathcal{F} = - \frac{1}{n+1} \Sigma_{j=0}^n \int_X (\phi-\phi_0) \omega_0^j\wedge\omega_{\phi}^{n-j}-\log\int_X e^{-\phi}. \] Notice that along a curve of metrics $\phi_t$, the derivative of the $Ding$-functional is \[ \frac{\partial\mathcal{D}}{\partial t} = \int_X \phi' ( - \omega_{\phi}^n + \frac{ e^{-\phi}}{\int_X e^{-\phi}}), \] so the critical points of this functional are the K\"ahler Einstein metrics, and its second derivative is \[ \frac{\partial^2 \mathcal{D}}{\partial t^2} = - \int_X (\phi'' - |\partial \phi'|_g^2 )\omega_{\phi}^n+ (\int_X e^{-\phi})^{-1} \{ \int_X(\phi'' - |\partial\phi'|^2_g )e^{-\phi} +\int_X (|\partial\phi'|_g^2 - (\pi_{\perp}\phi')^2)e^{-\phi} \} \] where the metric $g =i\partial\bar{\partial}\phi_t$. If we denote $f = \phi'' - |\partial \phi'|_g^2 $, $c_t = \int_X e^{-\phi}$ and $\delta_t = |\partial\phi'|_g^2 - (\pi_{\perp}\phi')^2$, the equation reads \[ \frac{\partial^2 \mathcal{D}}{\partial t^2} = -\int_X f \omega_{\phi}^n + \int_X ( f+\delta_t) e^{-\phi} /c_t. \] Next we consider the behavior of the $Ding$-functional on the approximation geodesics. 
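For the reader's convenience, the first derivative above can be checked term by term (a routine computation): \[ \frac{d}{dt}\Big(-\log\int_X e^{-\phi_t}\Big) = \frac{\int_X \phi'\, e^{-\phi}}{\int_X e^{-\phi}} \ \ \mbox{and} \ \ \frac{d}{dt}\,\mathcal{E}(\phi_t) = \int_X \phi'\,\omega_{\phi}^n, \] the second identity being the standard first variation of the Monge-Amp\`ere energy; subtracting the two gives the displayed formula for $\partial\mathcal{D}/\partial t$. 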
First, from Chen's theorem we can find a $\mathcal{C}^{1,\bar{1}}$ geodesic $\phi_t$ connecting the two K\"ahler Einstein metrics. Moreover for any small $\epsilon>0$, there is the smooth approximation geodesic $\phi_{\epsilon}(t,z)$ connecting the two end points $\phi_0, \phi_1$, which converges weakly to the $\mathcal{C}^{1,\bar{1}}$ geodesic. Now if we consider the $Ding$-functional on these approximation geodesics, we have the estimate \[ \frac{\partial^2 \mathcal{D}}{\partial t^2} \geqslant -\epsilon \int_X \det h \] from $f = \epsilon \det h / \det g > 0$ and $\int_X \delta_t e^{-\phi} \geqslant 0 $. Letting $\epsilon\rightarrow 0$, we see that the $Ding$-functional remains convex on the $\mathcal{C}^{1,\bar{1}}$ geodesic. Now we can integrate back along $t$: \[ \frac{\partial \mathcal{D}}{\partial t}(1) -\frac{\partial \mathcal{D}}{\partial t}(0) = \int_{X\times I} -f\omega^n_{\phi}dt + \int_{X\times I} f e^{-\phi}/c_t dt + \int_{X\times I} \delta_t e^{-\phi}/c_t dt; \] notice that at the end points $\phi_0,\phi_1$ are both K\"ahler Einstein, hence the first derivatives of the $Ding$-functional vanish. And on the approximation geodesic, we have the equation \[ f \det g = \epsilon \det h \] and $f \leqslant \phi'' < C$ uniformly, independent of $\epsilon$. Then the equation above reads \[ A \epsilon = \int_{X\times I} f e^{-\phi}/c_t dt + \int_{X\times I} \delta_t e^{-\phi}/c_t dt \] \[ \geqslant \int_{X\times I} f e^{-\phi}dt + \int_{X\times I} \delta_t e^{-\phi}dt, \] because we have a uniform $\mathcal{C}^0$ estimate on $\phi_{\epsilon}$, so the constants $c_t$ are uniformly bounded. Now since we want to discuss the eigenfunctions on each fiber, we need a lemma to pull the estimate back to the fibers. 
\begin{lem} Suppose $F_{\epsilon}(t)$ is a sequence of non-negative functions on $[0,1]$, with the integral estimate \[ \int_0^1 F_{\epsilon} dt < A \epsilon; \] then for almost every $t\in[0,1]$, we can find a subsequence (depending on $t$) $F_{\epsilon_j}$, such that \[ F_{\epsilon_j} < C_t\epsilon_j \] where $C_t$ is a constant independent of $\epsilon$. \end{lem} \begin{pf} Let $\tilde{F}_{\epsilon} = F_{\epsilon}/ \epsilon$; then by Fatou's lemma \[ \int_0^1 \liminf_{\epsilon} \tilde{F}_{\epsilon} dt \leqslant \liminf_{\epsilon} \int_0^1 \tilde{F}_{\epsilon}dt \leqslant A, \] hence the function $\liminf_{\epsilon}\tilde{F}_{\epsilon} \in L^1$, i.e. for almost every $t$, there is a subsequence $\tilde{F}_{\epsilon_j}$ and a constant $C_t$ such that \[ \tilde{F}_{\epsilon_j} < C_t, \] hence \[ F_{\epsilon_j} < C_t \epsilon_j. \] \end{pf} Now put $F_{\epsilon} = \int_X f_{\epsilon}e^{-\phi_{\epsilon}} + \int_X\delta_{\epsilon}e^{-\phi_{\epsilon}}$ and notice that the two terms on the RHS are both non-negative; we have proved \begin{prop} Consider the approximation geodesics $\phi_{\epsilon} $ connecting two K\"ahler Einstein metrics. For almost every $t$, there is a constant $C_t$ and a subsequence $\epsilon_j$ such that the estimates \[ \int_X f e^{-\phi} (\epsilon_j) < C_t \epsilon_j \] and \[ \int_X (|\partial\phi'|^2_g - (\pi_{\perp}\phi')^2)e^{-\phi} (\epsilon_j) < C_t \epsilon_j \] hold simultaneously. \end{prop} \section{Convergence in the first eigenspace} In this section, we shall focus our attention on one fiber $X\times\{t \}$, with the subsequence $\phi_{\epsilon_j}$ picked in the above section. Then we can consider the sequence of weighted Laplacian operators $\Box_{\phi_{\epsilon}}$ (we shall omit the subscript $j$ here). 
For each $\epsilon$, we can arrange its eigenvalues as $0<\lambda_1^{\epsilon}\leqslant \lambda_2^{\epsilon}\leqslant\cdots$, each corresponding to an eigenfunction $e_i(\epsilon)$, i.e. \[ \Box_{\phi_{\epsilon}} e_i(\epsilon) = \lambda_i^{\epsilon} e_i(\epsilon). \] Then let $u_{\epsilon}(z)$ be a sequence of smooth functions on $X$, such that $u_{\epsilon}\perp \ker\bar{\partial}$. It decomposes into the eigenspaces of the weighted Laplacian operator $\Box_{\phi_{\epsilon}}$, i.e. \[ u_{\epsilon} = \Sigma_{i=1}^{N_{\epsilon}} a_i(\epsilon) e_{i}(\epsilon) \] where $e_i\in \Lambda_i$, and a priori $N_{\epsilon}$ could equal $+\infty$ in the above notation. Then we can consider the action of the weighted Laplacian operator on this sequence of functions, i.e. we can write $\Box_{\phi_{\epsilon}} u_{\epsilon}$ via \[ v_{\epsilon} = \omega_{g_{\epsilon}}\lrcorner \bar{\partial}u_{\epsilon} \] and \[ \partial^{\phi_{\epsilon}} v_{\epsilon} = \Sigma_{i=1}^{N_{\epsilon}} \lambda^{\epsilon}_i a_i(\epsilon)e_i(\epsilon). \] Under certain constraints, we claim these vector fields $v_{\epsilon}$ converge to a holomorphic one satisfying the same equation. \begin{prop} Let $u_{\epsilon}$ be a sequence of functions as above. Suppose it satisfies the following conditions: 1) $\Sigma_{i=1}^{N_{\epsilon}} |a_i(\epsilon)|^2 < A$ for a uniform constant $A$, and the sums do not converge to zero; 2) there exists a uniform constant $K$, such that $\lambda_{N_{\epsilon}}^{\epsilon} < K$ for each $\epsilon$; 3) the following estimate holds \begin{equation} \int_X (|\bar{\partial} u_{\epsilon}|^2_{g_{\epsilon}} - (\pi_{\perp}u_{\epsilon})^2)e^{-\phi_{\epsilon}} < C \epsilon. \end{equation} Then, by passing to a subsequence, we have \[ u_{\epsilon}\rightarrow u_{\infty} \] in the strong $L^2$ sense, where $u_{\infty}\in W^{1,2}$ is nontrivial. 
Moreover there exists a nontrivial holomorphic $(n-1,0)$ form $v_{\infty}$ with value in $-K_X$, such that \[ v_{\epsilon} \rightarrow v_{\infty} \] in the strong $L^2$ sense, and the equation \[ \omega_g \wedge v_{\infty} = \bar{\partial} u_{\infty} \] holds in the sense of $L^2$ functions, where $g$ is the metric found on the $\mathcal{C}^{1,\bar{1}}$ geodesic. \end{prop} Before proving the proposition, we need a lemma. \begin{lem} Let $f_j, g_j$ be two sequences of $L^2$ functions with $|| f_j g_j ||_{L^p} < C$ for some $p \geqslant 1$. Suppose that $\int_X | f_j |^2 d\mu< C'$ and $g_j\rightarrow g\in L^2$ in the $L^2$ norm; then there exists an $L^2$ function $f$ such that \[ f_j g_j\rightarrow fg \in L^p \] in the sense of distributions. \end{lem} \begin{pf} First note there exists an $L^2$ function $f$ such that $f_j\rightarrow f$ in the weak $L^2$ topology. Then we check \[ \int_X ( fg - f_j g_j ) d\mu = \int_X g(f-f_j)d\mu + \int_X f_j(g-g_j)d\mu; \] the first term on the RHS of the above equation converges to zero by the weak convergence of $f_j$, and the second term converges to zero too, since \[ |\int_X f_j (g-g_j)d\mu|^2 \leqslant (\int_X |f_j|^2d\mu )(\int_X |g-g_j|^2d\mu) \rightarrow 0. \] Hence $f_jg_j$ converges to $fg$ in the sense of distributions. Moreover, from the $L^p$ bound of $f_jg_j$, we have an $L^p$ function $k$ such that $f_jg_j\rightarrow k$ in the weak $L^p$ topology. Then \[ fg = k \] as $L^p$ functions. \end{pf} \begin{rmk} Suppose the sequence $|f_j|$ is uniformly bounded in lemma 13; then the limit $f$ is an $L^{\infty}$ function, and $fg\in L^2$ automatically. 
\end{rmk} \begin{pf}(of proposition 12) First we can write equation (2) as \[ \Sigma_{i=1}^{N_{\epsilon}} (\lambda^{\epsilon}_i -1 )|a_i(\epsilon)|^2 < C\epsilon. \] By Futaki's formula, we know \[ \int_X |L_{g_{\epsilon}} u_{\epsilon}|^2 e^{-\phi_{\epsilon}} = \Sigma_{i=1}^{N_{\epsilon}} \lambda_{i}^{\epsilon}(\lambda_i^{\epsilon} -1)|a_i(\epsilon)|^2 \] \[ \leqslant KC\epsilon \] from conditions (2) and (3). But if we write $v_{\epsilon} = X_{\epsilon}\lrcorner 1$ for some vector field $X_{\epsilon} = X_{\epsilon}^{\alpha}\frac{\partial }{\partial z^{\alpha}}$, then \[ (L_{g} u )_{\bar{j}}^{\ i} = g^{i\bar{k}} u_{, \bar{k}\bar{j}} = \frac{\partial X^i}{\partial\bar{z}^j}, \] hence the norm is \[ |L_{g} u|^2 = g_{\alpha\bar{\beta}}g^{\mu\bar{\lambda}}\frac{\partial X^{\alpha}}{\partial\bar{z}^{\lambda}}\overline{\frac{\partial X^{\beta}}{\partial\bar{z}^{\mu}}} = |\frac{\partial X}{\partial\bar{z}}|^2_g. \] Now we choose a fixed smooth background metric $h$ to estimate \[ |\frac{\partial X}{\partial\bar{z}}|^2_h = h_{\alpha\bar{\beta}}h^{\mu\bar{\lambda}}\frac{\partial X^{\alpha}}{\partial\bar{z}^{\lambda}}\overline{\frac{\partial X^{\beta}}{\partial\bar{z}^{\mu}}} \] \[ = h_{\alpha\bar{\beta}}h^{\mu\bar{\lambda}} g^{\alpha\bar{\eta}}u_{,\bar{\eta}\bar{\lambda}} g^{\gamma\bar{\beta}}u_{, \gamma\mu} = \frac{1}{\Lambda_{\alpha}^2} |u_{,\bar{\alpha}\bar{\lambda}}|^2 \] \[ \leqslant \Sigma(\frac{\Lambda_{\lambda}}{\Lambda_{\alpha}}) \Sigma \frac{1}{\Lambda_{\alpha}\Lambda_{\lambda}} |u_{,\bar{\alpha}\bar{\lambda}}|^2 \] \[ \leqslant C( tr_{g}h) |\frac{\partial X}{\partial\bar{z}}|^2_g \] where we compute in normal coordinates. 
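The middle inequality above is elementary: it is the termwise bound $\Sigma_k x_k y_k \leqslant (\Sigma_k x_k)(\Sigma_k y_k)$ for non-negative numbers, applied with $x = \Lambda_{\lambda}/\Lambda_{\alpha}$ and $y = |u_{,\bar{\alpha}\bar{\lambda}}|^2/(\Lambda_{\alpha}\Lambda_{\lambda})$. Moreover, in these normal coordinates \[ \Sigma_{\alpha,\lambda}\frac{\Lambda_{\lambda}}{\Lambda_{\alpha}} = \Big(\Sigma_{\lambda}\Lambda_{\lambda}\Big)\Big(\Sigma_{\alpha}\frac{1}{\Lambda_{\alpha}}\Big) = (tr_h g)( tr_{g}h) \leqslant C\, tr_{g}h, \] where the uniform bound on $tr_h g$ comes from the uniform control of the mixed second derivatives $\partial_{\alpha}\partial_{\bar{\beta}}\phi$. 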
And correspondingly, the norm of $X$ can be estimated by \[ |X|^2_h = h_{\alpha\bar{\beta}}g^{\alpha\bar{\lambda}}u_{,\bar{\lambda}} \overline{g^{\beta\bar{\eta}}u_{, \bar{\eta}}} \] \[ = \frac{1}{\Lambda_{\alpha}^2}|u_{,\bar{\alpha}}|^2 \] \[ \leqslant \Sigma(\frac{1}{\Lambda_{\alpha}}) \Sigma \frac{1}{\Lambda_{\alpha}} |u_{,\bar{\alpha}}|^2 \] \[ \leqslant (tr_g h) |\bar{\partial} u|^2_g. \] Recall that $f = \phi'' - |\partial\phi'|^2_g $ is bounded from above; then we can estimate the $L^2$ norm of $\bar{\partial}v$ as \[ \int_X |\frac{\partial X}{\partial\bar{z}}|^2_h \det h\leqslant C\int_X |\frac{\partial X}{\partial\bar{z}}|^2_h \frac{1}{f} \det h \] \[ \leqslant\frac{C}{\epsilon} \int_X |\frac{\partial X}{\partial\bar{z}}|^2_g (tr_g h) \det g \] \[ \leqslant \frac{C'}{\epsilon}\int_X |\frac{\partial X}{\partial\bar{z}}|^2_g e^{-\phi_g} \leqslant C''. \] Note that $X$ is a vector field in the $(1,0)$ direction, which means locally its coefficients are functions; hence its full gradient is uniformly bounded in $L^2$ norm, i.e. \[ \int_X |\nabla X_{\epsilon} |^2_h\det h < C \] for some constant independent of $\epsilon$. We claim it is also $L^1$ bounded. Recall from our choice of $\epsilon$ that we have \[ \int_X f e^{-\phi_{\epsilon}} < C_1 \epsilon, \] so we can estimate \[ \int_X e^{F_{\epsilon}} \det h = \frac{1}{\epsilon} \int_X f e^{-\phi_{\epsilon}} < C_1, \] hence \[ (\int_X |X|_h \det h )^2 \leqslant C ( \int_X |X|_h e^{F_{g}}\det g)^2 \] \[ \leqslant C (\int_X |X|^2_h (\det g)^2 e^{F_g})(\int_X e^{F_g}\det h) \] \[ \leqslant C' (\int_X |\bar{\partial}u |^2_g e^{-\phi_g}) < C''. \] Hence it is uniformly $L^1$ bounded, and then by the Poincar\'e inequality, we know $||X||_{L^2} < C $ for some uniform constant. Together these imply that the sequence of vector fields $X_{\epsilon}$ is uniformly $W^{1,2}$ bounded. 
Now by the compact embedding theorem, there exists a vector field $X=X^{\alpha}\frac{\partial}{\partial z^{\alpha}} \in W^{1,2}$ such that $X_{\epsilon}\rightarrow X$ in the strong $L^2$ norm. Moreover, observe that \[ (\int_X |\frac{\partial X}{\partial\bar{z}}|_h e^{-\phi_g} )^2= (\int_X |\frac{\partial X}{\partial\bar{z}}|_h e^{F_g}\det g)^2 \] \[ \leqslant ( \int_X |\frac{\partial X}{\partial\bar{z}}|^2_h (\det g)^2 e^{F_g} )(\int_X e^{F_g}\det h) \] \[ \leqslant (C \int_X |\frac{\partial X}{\partial\bar{z}}|^2_g e^{-\phi_g} )(\int_X e^{F_g}\det h) \] \[ \leqslant C' \epsilon \int_X e^{F_g}\det h = C' \int_X f e^{-\phi_g} \] \[ < C'' \epsilon \rightarrow 0 \] from our choice of the sequence $\epsilon$. Hence $\bar{\partial}X_{\epsilon} \rightarrow 0$ in the weak $L^1$ sense, and this is enough to imply $\bar\partial X = 0 $ in the sense of distributions. Then $X$ is in fact a holomorphic $(1,0)$ vector field on the manifold, and we can define $v_{\infty} = X\lrcorner 1$, which is a $-K_X$ valued holomorphic $(n-1,0)$ form. On the other hand, for the function $u_{\epsilon}$ itself, we have \[ \int_X |\bar{\partial}u_{\epsilon}|^2_h \det h \leqslant C \int_X |\bar{\partial} u_{\epsilon}|^2_{g_{\epsilon}} e^{-\phi_{\epsilon}} \] \[ = C\Sigma_{i=1}^{N_{\epsilon}} \lambda_i^{\epsilon}|a_i(\epsilon)|^2 \leqslant C', \] hence $u_{\epsilon}$ has a uniform $W^{1,2}$ bound, and (after passing to a further subsequence) it converges to a function $u_{\infty}\in W^{1,2}$ in the strong $L^2$ norm. Then by condition (1), the $L^2$ norm of $u_{\infty}$ is non-trivial. Moreover, we know the equation \[ g^{\epsilon}_{\alpha\bar{\beta}}X_{\epsilon}^{\alpha} = u_{\epsilon,\bar{\beta}} \] holds for every $\epsilon$. Now $g^{\epsilon}_{\alpha\bar{\beta}}$ is uniformly bounded from above, hence converges to $g_{\alpha\bar{\beta}}$ in the weak $L^{\infty}$ topology, where $g_{\alpha\bar{\beta}}$ is the weak $\mathcal{C}^{1,\bar{1}}$ solution of the geodesic equation. 
And $X_{\epsilon} \rightarrow X$ in strong $L^2$, hence by the Remark after lemma 13, we see that the equation \[ g_{\alpha\bar{\beta}} X^{\alpha} =\partial_{\bar{\beta}} u_{\infty} \] holds in the sense of $L^2$ functions. In particular, the two sides are equal almost everywhere. Finally, observe that $u_{\infty}\perp \ker\bar{\partial}$, since \[ \int_X u_{\infty} e^{-\phi} = \lim_{\epsilon\rightarrow 0} \int_X u_{\epsilon} e^{-\phi_{\epsilon}} = 0. \] Hence if $v_{\infty} $ is trivial, then $\bar{\partial}u_{\infty} = 0$, i.e. $u_{\infty}\in \ker\bar{\partial}$, which implies $u_{\infty} = 0$, a contradiction. So $v_{\infty}$ is non-trivial too. \end{pf} Notice that before taking the limits, the vector field $v_{\epsilon}$ also satisfies another equation, i.e. \[ \partial^{\phi_{\epsilon}} v_{\epsilon} = \Sigma_{i=1}^{N_{\epsilon}} \lambda_{i}^{\epsilon} a_i(\epsilon) e_i(\epsilon). \] The LHS converges weakly to $\partial^{\phi}v_{\infty}$, since for any smooth test $(n,0)$ form $W$, \[ \int_X v_{\epsilon}\wedge\overline{\bar{\partial}W} e^{-\phi_{\epsilon}} \rightarrow \int_X v_{\infty}\wedge\overline{\bar{\partial}W}e^{-\phi}. \] And the RHS converges to $u_{\infty}$ by condition (3): indeed, \[ || \Sigma_{i=1}^{N_{\epsilon}} \lambda^{\epsilon}_i a_i(\epsilon) e_i(\epsilon) - u_{\epsilon} ||^2 \leqslant K \Sigma_{i=1}^{N_{\epsilon}} (\lambda^{\epsilon}_i -1) |a_i(\epsilon)|^2 \] converges to zero. Hence the equality \[ \partial^{\phi}v_{\infty} = u_{\infty} \] holds in the weak sense. But since both sides of the above equation are $L^2$ functions, the equation actually holds as $L^2$ functions. This suggests that $u_{\infty}$ might be an eigenfunction of the operator $\Box_{\phi}$ with eigenvalue $1$.
In fact, we have \begin{cor} Let $u_{\epsilon}$ be a sequence of functions satisfying conditions (1) - (3) in proposition 12; then there exists a function $u_{\infty}\in W^{1,2}$ such that \[ u_{\epsilon}\rightarrow u_{\infty} \] in the strong $L^2$ sense, and $u_{\infty}$ is a non-trivial eigenfunction of the operator $\Box_{\phi_g}$ with eigenvalue $1$. \end{cor} \begin{pf} First notice that $u_{\infty}\in dom(\Box_{\phi_g})$. This is because $\bar{\partial} u_{\infty} = \omega_g\wedge v_{\infty}$, hence $u_{\infty}\in \mathcal{W} \subset dom(\bar{\partial})$, and $\bar{\partial}u_{\infty} \in dom(\bar{\partial}^*_{\phi_g})$ since $v_{\infty}$ is holomorphic. Now for any smooth test $(n,0)$ form $W$ with value in $-K_X$, we compute \[ \int_X \bar{\partial}^*_{\phi_g}\bar{\partial}u_{\infty} \wedge\overline{W} e^{-\phi_g} = ( \bar{\partial}^*_{\phi_g}\bar{\partial}u_{\infty}, W )_g \] \[ = \langle \bar{\partial}u_{\infty}, \bar{\partial}W \rangle_g \] \[ = \langle \omega_g\wedge v_{\infty}, \bar{\partial}W \rangle_g \] \[ = \int_X v_{\infty}^{\alpha}\overline{\partial_{\bar{\alpha}}W} e^{-\phi_g} \] \[ = ( \partial^{\phi_g}v_{\infty}, W )_g \] \[ = \int_X u_{\infty} \wedge\overline{W} e^{-\phi_g}. \] Hence $\Box_{\phi_g}u_{\infty} = u_{\infty}$ as $L^2$ functions. \end{pf} \section{the eigenspace decomposition of $\phi'$ (the easy case)} In this section, we shall construct a sequence of functions $u_{\epsilon}$ satisfying conditions (1) - (3) in proposition 12 from $\phi'_{\epsilon}$, and then construct a holomorphic vector field from there. However, we need to discuss case by case this time, i.e. let \[ \pi_{\perp}\phi'_{\epsilon} = \Sigma_{i=1}^{+\infty} a_i(\epsilon) e_i(\epsilon), \] then \[ \Box_{\phi_{\epsilon}}(\pi_{\perp}\phi'_{\epsilon}) = \Sigma_{i=1}^{+\infty} \lambda_i^{\epsilon} a_i(\epsilon) e_i(\epsilon).
\] Note that the restriction coming from the vanishing of the Ding functional gives \begin{equation} \Sigma_{i=1}^{+\infty} ( \lambda_i^{\epsilon}-1) | a_i(\epsilon)|^2 < C\epsilon \end{equation} by passing to the chosen subsequence $\epsilon_j$. And notice that \[ \int_X |\bar{\partial}\phi'_{\epsilon}|^2_h \leqslant C \int_X |\bar{\partial}\phi'_{\epsilon}|^2_{g_{\epsilon}} e^{-\phi_{\epsilon}} \] \[ \leqslant C \int_X \phi''_{\epsilon} e^{-\phi_{\epsilon}} < C', \] then there exists a function $\psi \in W^{1,2}$ such that $\phi'_{\epsilon}\rightarrow \psi$ in strong $L^2$ norm. Hence we can assume \begin{equation} \frac{1}{2} < \Sigma_{i=1}^{+\infty} | a_i(\epsilon)|^2 < 2 \end{equation} for $\epsilon$ small enough. \begin{rmk} In fact, we have $|\phi_{\epsilon} |_{\mathcal{C}^1 } < C$, hence $|| \phi_{\epsilon} ||_{W^{1,p}} < C$ for any $p$ large. Then by the compact imbedding theorem, we can assume \[ \phi_{\epsilon}\rightarrow \phi \] in $\mathcal{C}^{0,\alpha}$ norm. \end{rmk} In fact, we are going to prove \begin{thm} There is a holomorphic vector field $v$ on the manifold, such that \[ \omega_g\wedge v = \bar{\partial}\psi \] where $\psi$ is the $L^2$ limit of $\phi'_{\epsilon}$ and $g$ is the $\mathcal{C}^{1,\bar{1}}$ solution of the geodesic equation. Moreover, $\psi$ is an eigenfunction of the operator $\Box_{\phi_g}$ with eigenvalue $1$, i.e. \[ \Box_{\phi_g}\psi = \psi. \] \end{thm} In order to prove this theorem, we shall discuss case by case. First, there are two possibilities for the convergence of the eigenvalues $\lambda_i^{\epsilon}$: \\ \\ $Case\ 1$, there exists a finite integer $k$ such that the following two things hold: i) for each $1\leqslant i \leqslant k$, $\lambda_i^{\epsilon}\rightarrow 1$ as $\epsilon\rightarrow 0$; ii) $\lambda_{k+1}^{\epsilon}$ does not converge to $1$. \\ \\ $Case\ 2$, for each $1\leqslant i < +\infty$, $\lambda_i^{\epsilon}\rightarrow 1$ as $\epsilon\rightarrow 0$. \\ \\ Let's discuss $Case\ 1$ first in this section.
In this case, we shall define \[ u_{\epsilon}: = \Sigma_{i=1}^k a_i(\epsilon)e_i(\epsilon). \] Notice that the non-convergence of $\lambda_{k+1}^{\epsilon}$ implies $\lambda_{k+1}^{\epsilon} > 1+\delta$ for some small $\delta>0$, by passing to a subsequence. Then since $\lambda_i^{\epsilon}$ is a non-decreasing sequence in $i$, we have for all $i>k$ \[ \lambda_i^{\epsilon} > 1+\delta \] along the same subsequence. Now by equation (3), we see \[ C\epsilon > \Sigma_{i=k+1}^{+\infty} (\lambda_i^{\epsilon} - 1) |a_i(\epsilon)|^2 \] \[ \geqslant \Sigma_{i=k+1}^{+\infty} \delta |a_i(\epsilon)|^2, \] hence $\Sigma_{i=k+1}^{+\infty} |a_i(\epsilon)|^2 \rightarrow 0$ when $\epsilon\rightarrow 0$. This gives condition (1), i.e. \[ \Sigma_{i=1}^k |a_i(\epsilon)|^2 > 1/4. \] Condition (2) is satisfied because $\lambda_{k}^{\epsilon} \rightarrow 1$ by the assumption, and condition (3) is automatically satisfied by equation (3). Hence we can generate a holomorphic vector field $v_{\infty}$ from proposition 12. Moreover, we have seen in the above argument that $ ||\pi_{\perp}\phi'_{\epsilon} - u_{\epsilon} ||_{L^2} $ converges to zero, hence we actually have \[ \psi = u_{\infty} \] after taking the limit. Hence $\psi$ is an eigenfunction of $\Box_{\phi_g}$ with eigenvalue $1$, by corollary 14. This proves theorem 15 in this case. \section{the hard case} Now we are going to deal with $Case\ 2$, i.e. we assume \[ \lambda_i^{\epsilon} \rightarrow 1 \] for each $1\leqslant i < +\infty$. Here we still subdivide it into two subcases as follows: \\ \\ $subCase\ 1$, for any $1< k<\infty$, the partial sum $\Sigma_{i=1}^{k-1} |a_{i}(\epsilon)|^2 \rightarrow 0$ when $\epsilon\rightarrow 0$. \\ \\ $subCase\ 2$, there exists a finite number $K$, such that $\Sigma_{i=1}^{K-1} |a_{i}(\epsilon)|^2$ does not converge to zero.
\\ \\ Before going to the subcases, we need a lemma first. \begin{lem} Let $e_i(\epsilon)$ be the eigenfunction of the weighted Laplacian $\Box_{\phi_{\epsilon}}$ with eigenvalue $\lambda_i^{\epsilon}$, i.e. \[ \Box_{\phi_{\epsilon}}e_i(\epsilon) = \lambda_i^{\epsilon} e_i(\epsilon). \] Suppose there exists a uniform constant $C$ such that $\lambda_i^{\epsilon} <1+ C\epsilon$; then $e_i(\epsilon)$ converges to a non-trivial eigenfunction $e_i$ of the operator $\Box_{\phi_g}$ with eigenvalue $1$. Moreover, suppose there is another $j\neq i$ such that $\lambda_j^{\epsilon}$ satisfies the same condition; then $e_i$ and $e_j$ are orthogonal to each other. \end{lem} \begin{pf} We define $u_{\epsilon}= e_i(\epsilon)$; then conditions $(1)$ and $(2)$ hold automatically. And condition (3) is also satisfied because \[ \int_X (|\bar{\partial} u_{\epsilon}|^2_{g_{\epsilon}} - (\pi_{\perp}u_{\epsilon})^2)e^{-\phi_{\epsilon}} = (\lambda_i^{\epsilon}-1) < C\epsilon, \] hence by proposition 12 and corollary 14, we get \[ e_i(\epsilon)\rightarrow e_i \] in the strong $L^2$ sense, where $e_i\in W^{1,2}$ is an eigenfunction of $\Box_{\phi_g}$ with eigenvalue $1$. Now for $j\neq i$, we have similar convergence and an eigenfunction $e_j$, but \[ \int_X e_i \bar{e}_j e^{-\phi_{g}} = \lim_{\epsilon\rightarrow 0}\int_X e_i(\epsilon)\overline {e_j(\epsilon)} e^{-\phi_{\epsilon}} =0 \] by the strong $L^2$ convergence of $e_i(\epsilon)$ and the $L^{\infty}$ convergence of $\phi_{\epsilon}$. \end{pf} Now let's begin to discuss $subCase\ 1$. For any fixed $k$, by equation (4), we can find a large integer $N_{\epsilon, k}$ such that \[ \Sigma_{i=1}^{N_{\epsilon,k}} |a_i(\epsilon)|^2 \geqslant 1/4. \] By the assumption in this subcase, for $\epsilon$ small, \[ \Sigma_{i=k}^{N_{\epsilon,k}}|a_i(\epsilon)|^2 \geqslant 1/8.
\] But then by equation (3), \[ \frac{1}{8}(\lambda_k^{\epsilon} -1)\leqslant \Sigma_{i=k}^{N_{\epsilon,k}} (\lambda_i^{\epsilon} -1)|a_i(\epsilon)|^2 < C\epsilon, \] because the sequence $\lambda_i^{\epsilon}$ is non-decreasing. Hence we have proved that for each $k$, \[ \lambda_k^{\epsilon} < 1+ 8C\epsilon \] for $\epsilon$ small enough. Now by lemma 16, we get an eigenfunction $e_k$ for each $1\leqslant k<\infty$, and they are orthogonal to each other. However, this is impossible since the eigenspace with eigenvalue $1$ of the elliptic operator $\Box_{\phi_g}$ is finite dimensional. Hence $subCase\ 1$ actually never happens. \section{the final case} Let's discuss $subCase\ 2$. Under the assumption in this case, we can find $K_1$, a finite integer, to be the first number such that $\Sigma_{i=1}^{K_1 -1} |a_i(\epsilon)|^2$ does not converge to zero. Then by passing to a subsequence, we can assume $\Sigma_{i=1}^{K_1 -1} |a_i(\epsilon)|^2 > \delta_1 $ for some fixed positive number $\delta_1$. Now consider the truncated sequence \[ \Lambda_1(\phi') = \Sigma_{i=K_1}^{+\infty} a_i(\epsilon) e_i(\epsilon). \] Suppose there exists another integer $K_2 > K_1$, such that $\Sigma_{i=K_1}^{K_2 -1} |a_i(\epsilon)|^2$ does not converge to zero; then we can assume $\Sigma_{i=K_1}^{K_2 -1} |a_i(\epsilon)|^2 > \delta_2 $. We can repeat this argument to find $0< K_1<K_2<K_3<\cdots$, but we claim this process will terminate in finitely many steps. \begin{lem} There exists a finite integer $n$, such that \[ \Sigma_{i=K_n}^{+\infty} |a_i(\epsilon)|^2 \rightarrow 0. \] \end{lem} \begin{pf} Let's define a sequence of sequences of functions $u^{(j)}_{\epsilon}$ as \[ u^{(0)}_{\epsilon}: =\Sigma_{i=1}^{K_1-1}a_i(\epsilon)e_i(\epsilon) \] \[ u^{(1)}_{\epsilon}: =\Sigma_{i=K_1}^{K_2-1}a_i(\epsilon)e_i(\epsilon) \] \[ \cdots \] \[ u^{(j)}_{\epsilon}: =\Sigma_{i=K_{j}}^{K_{j+1}-1}a_i(\epsilon)e_i(\epsilon) \] and so on.
We now claim that $u_{\epsilon}^{(j)}$ satisfies all the conditions (1) - (3) in proposition 12. Condition (1) is satisfied automatically by assumption, and condition (2) is satisfied since $\lambda_k^{\epsilon}\rightarrow 1$ for any fixed $k$. Condition (3) is satisfied too because of equation (3), i.e. \[ \Sigma_{i=K_j}^{K_{j+1}-1}(\lambda_i^{\epsilon} - 1)|a_i(\epsilon)|^2< C\epsilon, \] then by proposition 12 and corollary 14, we see that there exists a non-trivial $W^{1,2}$ function $u^{(j)}$ such that \[ u_{\epsilon}^{(j)}\rightarrow u^{(j)} \] in strong $L^2$ norm. And $u^{(j)}$ is an eigenfunction of the operator $\Box_{\phi_g}$ with eigenvalue $1$. Moreover, notice that $u^{(j)}_{\epsilon}$ and $u_{\epsilon}^{(k)}$ are mutually orthogonal, and by the same argument used in lemma 16, this implies \[ u^{(j)}\perp u^{(k)} \] for all different $j$ and $k$. Now there can be only finitely many such $u^{(j)}$, since they are all in the eigenspace with eigenvalue $1$ of the weighted Laplacian operator $\Box_{\phi_g}$; hence we have proved the lemma. \end{pf} Next we are going to complete the proof of theorem 15. Let's define \[ u_{\epsilon}: =\Sigma_{i=1}^{K_n-1}a_i(\epsilon) e_i(\epsilon) \] where $K_n$ is the number appearing in lemma 17. One can check that the three conditions in proposition 12 are satisfied, and hence there exists a $W^{1,2}$ function $u$ such that \[ u_{\epsilon}\rightarrow u \] in the $L^2$ sense, and $u$ is an eigenfunction with eigenvalue $1$ of the operator $\Box_{\phi_g}$, and there is a holomorphic vector field $v$ such that \[ \omega_g\wedge v = \bar{\partial}u. \] Moreover, the difference in $L^2$ norm satisfies \[ ||\pi_{\perp}\phi'_{\epsilon} - u_{\epsilon} ||^2_{L^2} = \Sigma_{i=K_n}^{+\infty}|a_i(\epsilon)|^2 \rightarrow 0 \] by our choice of $K_n$, hence we have \[ \psi = u. \] This completes the proof.
\begin{rmk} If there is no non-trivial holomorphic vector field on $X$, then {\em proposition 12}, together with the above case-by-case discussion, directly implies $\phi'=0$ almost everywhere on $X\times I$. Without using {\em corollary 14}, we do not need to invoke any eigenfunction of the first eigenspace of the weighted Laplacian operator in the limit. Hence we have proved uniqueness in this case. \end{rmk} \section{Time direction} Up to now, we have constructed a holomorphic vector field $v_t$ on the fiber $X\times\{t\}$ for almost every $t\in [0,1]$. And this vector field can be computed as \[ v_t = \omega_g \lrcorner \bar{\partial}\psi \] where $\phi'_{\epsilon}\rightarrow \psi$ in strong $L^2$ norm at time $t$. Notice that there is more information to use for the convergence of $\phi'_{\epsilon}$. In fact, we know $|\phi'|, |\phi_{t\bar{z}}|$ and $|\phi_{z\bar{t}}|$ are all uniformly bounded on $X\times I$, i.e. \[ |\phi'|_{\mathcal{C}^1}< C, \] then we can assume $\phi'_{\epsilon}\rightarrow \phi' \in \mathcal{C}^1(X\times I)$ in $\mathcal{C}^{0,\alpha}$ norm. Hence the two limits actually agree with each other, i.e. \[ \psi = \phi' \] as $L^2$ functions on $X$. Now the holomorphic vector field can be written as \[ v_t = \omega_g \lrcorner \bar{\partial} \phi'. \] Then we can define the following subset of the unit interval \[ S:= \{ t\in I :\ \text{there is a holomorphic vector field } v_t \text{ on } X\times\{ t\} \text{ satisfying } \omega_{g}\wedge v_t =\bar{\partial}\phi' \}. \] We know the set $I - S$ has measure zero. Next we are going to prove a stronger result. \begin{prop} The subset $S$ coincides with the whole unit interval, i.e. \[ S = I. \] \end{prop} \begin{pf} First recall that $\phi_{\epsilon}\rightarrow \phi$ in $\mathcal{C}^{0,\alpha}(X\times I)$ norm, by the uniform bound on the $\mathcal{C}^1$ norm of $\phi$. Then on each fiber $X\times\{ t\}$, the convergence still holds, i.e.
\[ \phi_{\epsilon}\rightarrow \phi \] in $\mathcal{C}^{0,\alpha}(X)$, and this implies \[ g_{\epsilon,\alpha\bar{\beta}} \rightarrow g_{\alpha\bar{\beta}} \] in the sense of distributions on the fiber $X\times\{t \}$. Pick a point $\underline{t}\in I-S$, and a sequence $t_i \in S$ such that $t_i \rightarrow \underline{t}$. Observe that the space of all holomorphic vector fields is finite dimensional, i.e. let \[ \Gamma(X): = H^0(TX), \] then $\Gamma$ is a finite dimensional vector space. Write $v_{t_i} = X_i\lrcorner 1 $, where $X_i\in \Gamma$ and $v_{t_i}$ is the vector field satisfying the equation in the definition of $S$. Observe that $v_t$ is the unique solution to the following equation \[ \partial^{\phi_t}v_t = \Box_{\phi_t}\phi' = \pi_{\perp}\phi' \] under the condition $H^{0,1}(X)=0$; then the standard $L^2$ estimate (Berndtsson [5]) gives us \[ || v_t ||_h \leqslant C || \pi_{\perp} \phi' ||_h \] for some fixed metric $h$ and a uniform constant $C$ independent of the time $t$. Consider the sequence $\{X_i \}\subset H^0(TX)$: the uniform bound on the $L^2$ norms of the $X_i$ shows that it must converge (after passing to a subsequence) under the fixed metric $h$, i.e. there exists a vector field $X\in \Gamma$ such that \[ || X - X_i ||_h^2 \rightarrow 0. \] Let's write $g_{\alpha\bar{\beta}} = g_{ \alpha\bar{\beta}}(\underline{t})$ and $g_{i, \alpha\bar{\beta}} = g_{\alpha\bar{\beta}}(t_i )$, then \[ || X - X_i ||_g^2 \leqslant C || X -X_i ||_h^2, \] hence it converges to zero too. Now we claim that the equation \[ \omega_g\wedge X = \bar{\partial}\phi' \] holds in the sense of distributions.
Let $\chi(z)$ be any smooth compactly supported test function on $X$ (we can further assume $\chi$ is supported in some coordinate chart). We fix a pair of indices $\alpha, \beta$ and compute \[ \int_X (g_{\alpha\bar{\beta}}X^{\alpha} - g_{i,\alpha\bar{\beta}}X_i^{\alpha} )\chi(z)\det h \] \[ = \int_X \chi (g_{\alpha\bar{\beta}} -g_{i, \alpha\bar{\beta}})X^{\alpha} \det h + \int_X\chi (X^{\alpha} - X_i^{\alpha} )g_{i,\alpha\bar{\beta}} \det h. \] Since $g_{i, \alpha\bar{\beta}}$ is uniformly bounded, the second term in the above equation converges to zero by the strong $L^2$ convergence $X_i \rightarrow X$. As for the first term, we can decompose it as \[ \int_X \chi (g_{\alpha\bar{\beta}} - g_{i, \alpha\bar{\beta}} ) X^{\alpha}\det h \] \[ = \int_X\chi( g_{\alpha\bar{\beta}} - g_{\alpha\bar{\beta}}^{\epsilon} ) X^{\alpha}\det h -\int_X\chi (g_{i, \alpha\bar{\beta}} - g_{i, \alpha\bar{\beta}}^{\epsilon}) X^{\alpha}\det h + \int_X \chi (g_{i, \alpha\bar{\beta}}^{\epsilon} - g_{\alpha\bar{\beta}}^{\epsilon}) X^{\alpha}\det h. \] The first and second terms converge to zero as $\epsilon\rightarrow 0$; for the third term, we integrate by parts: \[ \int_X \chi (g_{i, \alpha\bar{\beta}}^{\epsilon} - g_{\alpha\bar{\beta}}^{\epsilon}) X^{\alpha}\det h = \int_X\chi_{,\bar{\beta}} (\phi^{\epsilon}_{i, \alpha}- \phi^{\epsilon}_{\alpha})X^{\alpha}\det h \] \[ = \int_X\chi_{,\bar{\beta}} (t_i - \underline{t})\phi'_{,\alpha}(t)X^{\alpha}\det h \] \[ \leqslant A |t_i - \underline{t}| \] where $A$ is a constant independent of $\epsilon$. Hence \[ \int_X \chi (g_{\alpha\bar{\beta}} - g_{i, \alpha\bar{\beta}} ) X^{\alpha}\det h \rightarrow 0 \] as $t_i \rightarrow \underline{t}$, and we have proved \[ g_{i,\alpha\bar{\beta}}X_i^{\alpha} \rightarrow g_{\alpha\bar{\beta}}X^{\alpha} \] in the sense of distributions. But we know $\phi'_i\rightarrow \phi'$ in $\mathcal{C}^{0,\alpha}$ norm, hence $\bar{\partial}\phi'_i\rightarrow \bar{\partial}\phi'$ in the sense of distributions too.
Finally, the limit equation \[ g_{\alpha\bar{\beta}}X^{\alpha} = \phi'_{,\bar{\beta}} \] holds in the distribution sense on $X\times \{\underline{t} \}$. Now since both sides in the above equation are $L^{\infty}$ functions, we see the equation actually holds in the sense of $L^2$ functions by the same argument as in $Remark\ 1$. \end{pf} Now it makes sense to talk about the time derivative of the vector field $v_t $ in the distribution sense, i.e. on the $\mathcal{C}^{1,\bar{1}}$ geodesic, we compute in the sense of distributions \[ \phi''_{,\bar{\beta}} = ( g_{\alpha\bar{\beta}} X^{\alpha} )', \] and a computation implies \[ (g^{\alpha\bar{\lambda}}\phi'_{,\alpha}\phi'_{,\bar{\lambda}})_{,\bar{\beta}} = \phi'_{\alpha\bar{\beta}}X^{\alpha} + g_{\alpha\bar{\beta}}(X^{\alpha})'. \] Note that the LHS is in fact equal to \[ \nabla_{\bar{\beta}}(\phi'_{,\alpha}X^{\alpha}) = \phi'_{,\alpha\bar{\beta}}X^{\alpha}+ \phi'_{,\alpha}X^{\alpha}_{,\bar{\beta}} = \phi'_{,\alpha\bar{\beta}}X^{\alpha}; \] here the Leibniz rule applies since $X$ is holomorphic. Hence we get \[ g_{\alpha\bar{\beta}}(X^{\alpha})' = 0 \] which is equivalent to $\frac{\partial}{\partial t}v_t = 0$, i.e. we have a time-independent holomorphic vector field $v$ along the geodesic. \\ \\ We finish the proof of the uniqueness theorem by taking the holomorphic vector field \[ \mathcal{V}:= \frac{\partial}{\partial t} - V, \] then it is easy to check that $\mathcal{L}_{\mathcal{V}} (i\partial\bar{\partial}\phi_t) = 0$ along the flow, hence the induced automorphism $F$ preserves the metric along the geodesic. \\ \\ \end{document}
\begin{document} \twocolumn[ \icmltitle{AdaptDiffuser: Diffusion Models as Adaptive Self-evolving Planners} \icmlsetsymbol{equal}{*} \begin{icmlauthorlist} \icmlauthor{Zhixuan Liang}{hku} \icmlauthor{Yao Mu}{hku} \icmlauthor{Mingyu Ding}{hku,ucb} \icmlauthor{Fei Ni}{tju} \icmlauthor{Masayoshi Tomizuka}{ucb} \icmlauthor{Ping Luo}{hku,shlab} \end{icmlauthorlist} \icmlaffiliation{hku}{Department of Computer Science, The University of Hong Kong, Hong Kong SAR} \icmlaffiliation{ucb}{University of California, Berkeley, USA} \icmlaffiliation{tju}{College of Intelligence and Computing, Tianjin University, Tianjin, China} \icmlaffiliation{shlab}{Shanghai AI Laboratory, Shanghai, China} \icmlcorrespondingauthor{Ping Luo}{pluo.lhi@gmail.com} \icmlkeywords{Machine Learning, ICML} \vskip 0.3in ] \printAffiliationsAndNotice{} \begin{abstract} Diffusion models have demonstrated their powerful generative capability in many tasks, with great potential to serve as a paradigm for offline reinforcement learning. However, the quality of the diffusion model is limited by the insufficient diversity of training data, which hinders the performance of planning and the generalizability to new tasks. This paper introduces AdaptDiffuser\xspace, an evolutionary planning method with diffusion that can self-evolve to improve the diffusion model and hence become a better planner, not only for seen tasks but also for unseen ones. AdaptDiffuser\xspace enables the generation of rich synthetic expert data for goal-conditioned tasks using guidance from reward gradients. It then selects high-quality data via a discriminator to finetune the diffusion model, which improves the generalization ability to unseen tasks. Empirical experiments on two benchmark environments and two carefully designed unseen tasks in KUKA industrial robot arm and Maze2D environments demonstrate the effectiveness of AdaptDiffuser\xspace.
For example, AdaptDiffuser\xspace not only outperforms the previous art Diffuser~\cite{janner2022planning} by 20.8\% on Maze2D and 7.5\% on MuJoCo locomotion, but also adapts better to new tasks, e.g., KUKA pick-and-place, by 27.9\% without requiring additional expert data. More visualization results and demo videos can be found on \href{https://adaptdiffuser.github.io/}{our project page}. \end{abstract} \begin{figure} \caption{Illustration of AdaptDiffuser\xspace.} \label{fig:teaser-a} \caption{Performance comparisons on three benchmarks.} \label{fig:teaser-b} \caption{Overall framework and performance comparison of AdaptDiffuser\xspace. It enables diffusion models to generate rich synthetic expert data using guidance from reward gradients of either seen or unseen goal-conditioned tasks. Then, it iteratively selects high-quality data via a discriminator to finetune the diffusion model for self-evolving, leading to improved performance on seen tasks and better generalizability to unseen tasks. } \label{fig:teaser} \end{figure} \section{Introduction} Offline reinforcement learning (RL)~\cite{levine2020offline,prudencio2022survey} aims to learn policies from previously collected offline data without interacting with the live environment. Traditional offline RL approaches require fitting value functions or computing policy gradients, which are challenging due to limited offline data~\cite{agarwal2020optimistic,kumar2020conservative,wu2019behavior,kidambi2020morel}. Recent advances in generative sequence modeling~\cite{chen2021decision, janner2021offline, janner2022planning} provide effective alternatives to conventional RL problems by modeling the joint distribution of sequences of states, actions, rewards and values.
For example, Decision Transformer~\cite{chen2021decision} casts offline RL as a form of conditional sequence modeling, which allows more efficient and stable learning without the need to train policies via traditional RL algorithms like temporal difference learning~\cite{sutton1988learning}. By treating RL as a sequence modeling problem, it bypasses the need for bootstrapping for long-term credit assignment, avoiding one of the ``deadly triad''~\cite{sutton2018reinforcement} challenges in reinforcement learning. Therefore, devising an excellent sequence modeling algorithm is essential for the new generation of offline RL. The diffusion probabilistic model~\cite{rombach2022high,ramesh2022hierarchical}, with its demonstrated success in generative sequence modeling for natural language processing and computer vision, presents an ideal fit for this endeavor. It also shows great potential as a paradigm for planning and decision-making. For example, diffusion-based planning methods~\cite{janner2022planning, ajay2022conditional, wang2022diffusion} train trajectory diffusion models based on offline data and apply flexible constraints on generated trajectories through reward guidance during sampling. As a consequence, diffusion-based planners show notable performance superiority compared with transformer-based planners like Decision Transformer~\cite{chen2021decision} and Trajectory Transformer~\cite{janner2021offline} on long-horizon tasks, while enabling goal-conditioned rather than reward-maximizing control at the same time. While diffusion-based planners have achieved success in certain areas, their performance is limited by the lack of diversity in their training data. In decision-making tasks, the cost of collecting a diverse set of offline training data may be high, and this insufficient diversity would impede the ability of the diffusion model to accurately capture the dynamics of the environment and the behavior policy.
As a result, diffusion models tend to perform poorly when expert data is insufficient, and particularly when facing new tasks. This raises a natural question: can we use the heterogeneous data generated by the reward-guided diffusion model to improve the diffusion model itself, given its powerful generative sequence modeling capability? Since diffusion-based planners can generate quite diverse ``dream'' trajectories for multiple tasks, which may differ from the original task the training data are sampled from (a capability far beyond Decision Transformer~\cite{chen2021decision}), enabling the diffusion model to be self-evolutionary makes it a stronger planner, potentially benefiting more decision-making requirements and downstream tasks. In this paper, we present AdaptDiffuser\xspace, a diffusion-based planner for goal-conditioned tasks that can generalize to novel settings and scenarios through self-evolution (see Figure~\ref{fig:teaser}). Unlike conventional approaches that rely heavily on specific expert data, AdaptDiffuser\xspace uses gradients of reinforcement learning rewards, directly integrated into the sampling process, as guidance to generate diverse and heterogeneous synthetic demonstration data for both existing and unseen tasks. The generated demonstration data are then filtered by a discriminator, and the high-quality ones are used to fine-tune the diffusion model, resulting in a better planner with significantly improved self-bootstrapping capabilities on previously seen tasks and an enhanced ability to generalize to new tasks. As a consequence, AdaptDiffuser\xspace not only improves the performance of the diffusion-based planner on existing benchmarks, but also enables it to adapt to unseen tasks without the need for additional expert data. It is non-trivial to construct and evaluate AdaptDiffuser\xspace for both seen and unseen tasks.
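The generate-filter-finetune cycle described above can be sketched in a few lines of Python. Everything here is a toy stand-in: \texttt{diffusion\_sample}, \texttt{reward\_fn}, the acceptance threshold, and the mocked fine-tuning step are hypothetical illustrations, not the paper's actual implementation.

```python
import random

def diffusion_sample(goal, n=32, horizon=10):
    """Stand-in for reward-guided sampling from the diffusion planner:
    returns n candidate 1-D trajectories of the given horizon."""
    return [[random.uniform(-1.0, 1.0) for _ in range(horizon)] for _ in range(n)]

def reward_fn(traj, goal):
    """Toy goal-conditioned reward: negative distance of the final state to the goal."""
    return -abs(traj[-1] - goal)

def keep(traj, goal, threshold=-0.5):
    """Discriminator stand-in: keep trajectories whose reward clears a threshold."""
    return reward_fn(traj, goal) >= threshold

def self_evolve(goal, rounds=3):
    """One pass of the generate -> filter -> fine-tune cycle.
    Fine-tuning is mocked: we only accumulate the selected data."""
    finetune_set = []
    for _ in range(rounds):
        candidates = diffusion_sample(goal)
        finetune_set.extend(t for t in candidates if keep(t, goal))
        # here the diffusion model would be fine-tuned on finetune_set
    return finetune_set
```

Calling \texttt{self\_evolve(goal=0.0)} returns only trajectories whose final state lands within 0.5 of the goal, mimicking how the discriminator restricts the fine-tuning data to high-quality samples.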
We first conduct empirical experiments on two widely-used benchmarks (MuJoCo~\cite{todorov2012mujoco} and Maze2D) of D4RL~\cite{fu2020d4rl} to verify the self-bootstrapping capability of AdaptDiffuser\xspace on seen tasks. Additionally, we creatively design new pick-and-place tasks based on previous stacking tasks in the KUKA~\cite{schreiber2010fast} industrial robot arm environment, and introduce novel auxiliary tasks (e.g., collecting gold coins) in Maze2D. The newly proposed tasks and settings provide an effective evaluation of the generalization capabilities of AdaptDiffuser\xspace on unseen tasks. Our contributions are three-fold: \textbf{1)} We present AdaptDiffuser\xspace, allowing diffusion-based planners to self-evolve for offline RL by generating high-quality heterogeneous data with a reward-integrated diffusion model directly and filtering out inappropriate examples with a discriminator. \textbf{2)} We apply our self-evolutionary AdaptDiffuser\xspace to unseen (zero-shot) tasks without any additional expert data, demonstrating its strong generalization ability and adaptability. \textbf{3)} Extensive experiments on two widely-used offline RL benchmarks from D4RL as well as our carefully designed unseen tasks in KUKA and Maze2D environments validate the effectiveness of AdaptDiffuser\xspace. \section{Related Works} \textbf{Offline Reinforcement Learning.} Offline RL~\cite{levine2020offline,prudencio2022survey} is a popular research field that aims to learn behaviors using only offline data such as those collected from previous experiments or human demonstrations, without the need to interact with the live environment at the training stage. However, in practice, offline RL faces a major challenge: standard off-policy RL methods may fail due to the overestimation of values caused by the distributional deviation between the offline dataset and the policy to learn.
Most conventional offline RL methods use action-space constraints or value pessimism~\cite{buckman2020importance} to overcome the challenge~\cite{agarwal2020optimistic, kumar2020conservative, siegel2020keep, wu2019behavior, yang2022regularized}. For example, conservative Q-learning (CQL)~\cite{kumar2020conservative} addresses these limitations by learning a conservative Q-function, ensuring the expected value under this Q-function is lower than its true value. \textbf{Reinforcement Learning as Sequence Modeling.} Recently, a new paradigm for Reinforcement Learning (RL) has emerged, in which RL is viewed as a generic sequence modeling problem. It utilizes transformer-style models to model trajectories of states, actions, rewards and values, and turns their prediction capability into a policy that leads to high rewards. As a representative, Decision Transformer (DT)~\cite{chen2021decision} leverages a causally masked transformer to predict the optimal action autoregressively, conditioned on past states, actions, and expected returns (rewards). This allows the model to consider the long-term consequences of its actions when making a decision. Along similar lines, Trajectory Transformer (TT)~\cite{janner2021offline} utilizes a transformer architecture to model distributions over trajectories, repurposes beam search as a planning algorithm, and shows great flexibility across long-horizon dynamics prediction, imitation learning, goal-conditioned RL, and offline RL. Bootstrapped Transformer~\cite{wang2022bootstrapped} further incorporates the idea of bootstrapping into DT and uses the learned model to self-generate more offline data to further improve sequence model training.
However, Bootstrapped Transformer cannot integrate the RL reward into the data synthesizing process directly and can only amplify homogeneous data for its original task, which can boost the performance but cannot enhance the adaptability to unseen tasks. Besides, such approaches lack flexibility in adapting to new reward functions and tasks in different environments, as the generated data is not suitable for use in new tasks or environments. Diffuser~\cite{janner2022planning} presents a powerful framework for trajectory generation using the diffusion probabilistic model, which allows the application of flexible constraints on generated trajectories through reward guidance during sampling. The subsequent work, Decision Diffuser~\cite{ajay2022conditional}, introduces conditional diffusion with reward or constraint guidance for decision-making tasks, further enhancing Diffuser's performance. Additionally, Diffusion-QL~\citep{wang2022diffusion} adds a regularization term to the training loss of the conditional diffusion model, guiding the model to learn optimal actions. Nevertheless, the performance of these methods is still limited by the quality of offline expert data, leaving room for improvement in adapting to new tasks or settings. \textbf{Diffusion Probabilistic Model.} Diffusion models are a type of generative model that represents the process of generating data as an iterative denoising procedure \cite{sohl2015deep, ho2020denoising}. They have made breakthroughs in multiple tasks such as image generation \cite{song2020denoising}, waveform generation \cite{chen2020wavegrad}, 3D shape generation \cite{zhou2021shape} and text generation \cite{austin2021structured}.
These models, which learn the latent structure of the dataset by modeling the way in which data points diffuse through the latent space, are closely related to score matching \cite{hyvarinen2005score} and energy-based models (EBMs) \cite{lecun06atutorial, du2019implicit, nijkamp2019learning, grathwohl2020stein}, as the denoising process can be seen as a form of parameterizing the gradients of the data distribution \cite{song2019generative}. Moreover, in the sampling process, diffusion models allow flexible conditioning \cite{dhariwal2021diffusion} and can generate compositional behaviors \cite{du2020compositional}. These properties suggest that diffusion models have promising potential to generate effective behaviors from diverse datasets and to plan under different reward functions, including those not encountered during training. \section{Preliminary} Reinforcement Learning is generally modeled as a Markov Decision Process (MDP) with a fully observable state space, denoted as $\mathcal{M}=(\mathcal{S}, \mathcal{A}, \mathcal{T}, \mathcal{R}, \gamma)$, where $\mathcal{S}$ is the state space and $\mathcal{A}$ is the action space. $\mathcal{T}$ is the state transition function, with the discrete-time dynamics $\boldsymbol{s}_{t+1}=\mathcal{T}(\boldsymbol{s}_t, \boldsymbol{a}_t)$ at state $\boldsymbol{s}_t \in \mathcal{S}$ given action $\boldsymbol{a}_t \in \mathcal{A}$. $\mathcal{R}(\boldsymbol{s}_t, \boldsymbol{a}_t)$ defines the reward function and $\gamma\in(0,1]$ is the discount factor for future rewards.
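As a concrete illustration of the MDP components above, the following toy sketch rolls out deterministic dynamics and accumulates a discounted return; the one-dimensional transition and reward functions here are hypothetical stand-ins, not part of any benchmark.

```python
import numpy as np

# Toy MDP: a 1-D point mass whose next state is s + a, with reward equal
# to the negative distance to the origin (both are illustrative choices).
def transition(s, a):
    return s + a

def reward(s, a):
    return -abs(s)

def rollout_return(s0, actions, gamma=0.99):
    """Roll the deterministic dynamics forward and accumulate the
    discounted return sum_t gamma^t * R(s_t, a_t)."""
    s, ret = s0, 0.0
    for t, a in enumerate(actions):
        ret += (gamma ** t) * reward(s, a)
        s = transition(s, a)
    return ret

# Moving toward the origin yields a higher discounted return than staying put.
print(rollout_return(2.0, [-1.0, -1.0], gamma=0.9))  # -(2) - 0.9*(1) = -2.9
```

A planner scores candidate action sequences by exactly this kind of discounted return and searches for the maximizing sequence.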
Viewing offline reinforcement learning as a sequence modeling task, the objective of trajectory optimization is to find the optimal sequence of actions $\boldsymbol{a}_{0:T}^*$ that maximizes the expected return over the planning horizon $T$, i.e., the discounted sum of per-time-step rewards or costs $R(\boldsymbol{s}_t, \boldsymbol{a}_t)$: \begin{equation} \boldsymbol{a}_{0:T}^*=\underset{\boldsymbol{a}_{0:T}}{\arg \max } \mathcal{J}(\boldsymbol{s}_0, \boldsymbol{a}_{0:T})=\underset{\boldsymbol{a}_{0:T}}{\arg \max } \sum_{t=0}^T \gamma^t R(\boldsymbol{s}_t, \boldsymbol{a}_t). \end{equation} Sequence data generation methods based on diffusion probabilistic models \cite{sohl2015deep, ho2020denoising} pose the generation process as an iterative denoising procedure, denoted by $p_\theta(\boldsymbol{\tau}^{i-1} \mid \boldsymbol{\tau}^i)$, where $\boldsymbol{\tau}$ represents a sequence and $i$ indexes the diffusion timestep. The distribution of sequence data is then expanded with the step-wise conditional probabilities of the denoising process, \begin{equation} p_\theta\left(\boldsymbol{\tau}^0\right)=\int p\left(\boldsymbol{\tau}^N\right) \prod_{i=1}^N p_\theta\left(\boldsymbol{\tau}^{i-1} \mid \boldsymbol{\tau}^i\right) \mathrm{d} \boldsymbol{\tau}^{1: N} \end{equation} where $p\left(\boldsymbol{\tau}^N\right)$ is a standard normal distribution and $\boldsymbol{\tau}^{0}$ denotes the original (noiseless) sequence data. The parameters $\theta$ of the diffusion model are optimized by minimizing the evidence lower bound (ELBO) on the negative log-likelihood of $p_\theta\left(\boldsymbol{\tau}^0\right)$, similar to the techniques used in variational Bayesian methods.
\begin{equation} \theta^*=\arg \min _\theta-\mathbb{E}_{\boldsymbol{\tau}^0}\left[\log p_\theta\left(\boldsymbol{\tau}^0\right)\right] \label{eq:theta_optim} \end{equation} Moreover, since the denoising process is the reverse of a forward diffusion process, typically denoted by $q\left(\boldsymbol{\tau}^i \mid \boldsymbol{\tau}^{i-1}\right)$, which corrupts the input data by gradually adding noise, the reverse process can be parameterized as Gaussian provided that the forward process is Gaussian with sufficiently small variance \cite{feller2015theory}. \begin{equation} p_\theta\left(\boldsymbol{\tau}^{i-1} \mid \boldsymbol{\tau}^i\right)=\mathcal{N}\left(\boldsymbol{\tau}^{i-1} \mid \mu_\theta\left(\boldsymbol{\tau}^i, i\right), \Sigma^i\right) \label{eq:p1} \end{equation} in which $\mu_{\theta}$ and $\Sigma$ are the mean and covariance of the Gaussian distribution, respectively. For model training, based on Eqs.~\ref{eq:theta_optim} and \ref{eq:p1}, \cite{ho2020denoising} proposes a simplified surrogate loss: \begin{equation} \label{eq:diff_loss} \mathcal{L}_{\text{denoise}}(\theta) \coloneqq \mathbb{E}_{i, \boldsymbol{\tau}^{0} \sim q, \epsilon \sim \mathcal{N}}[||\epsilon - \epsilon_{\theta}(\boldsymbol{\tau}^i, i)||^{2}] \end{equation} where $i \in \{0, 1, ..., N\}$ is the diffusion timestep, $\epsilon \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{I})$ is the target noise, and $\boldsymbol{\tau}^{i}$ is the trajectory $\boldsymbol{\tau}^{0}$ corrupted by noise $\epsilon$ for $i$ steps. This is equivalent to predicting the mean $\mu_{\theta}$ of $p_\theta\left(\boldsymbol{\tau}^{i-1} \mid \boldsymbol{\tau}^i\right)$, as the mapping from $\epsilon_{\theta}(\boldsymbol{\tau}^i, i)$ to $\mu_{\theta}(\boldsymbol{\tau}^i, i)$ is a closed-form expression. \section{Method} In this section, we first introduce the basic planning-with-diffusion method and its limitations.
Then, we propose AdaptDiffuser\xspace, a novel self-evolved sequence modeling method for decision-making based on diffusion probabilistic models. AdaptDiffuser\xspace is designed to enhance the performance of diffusion models on existing decision-making tasks, especially goal-conditioned tasks, and to further improve their adaptability to unseen tasks without any expert data to supervise the training process. \subsection{Planning with Task-oriented Diffusion Model} Following previous work \cite{janner2022planning}, we re-define the planning trajectory as a special kind of sequence data, with actions stacked as an additional dimension of the states: \begin{equation} \boldsymbol{\tau}=\left[\begin{array}{llll} \boldsymbol{s}_0 & \boldsymbol{s}_1 & ... & \boldsymbol{s}_T \\ \boldsymbol{a}_0 & \boldsymbol{a}_1 & ... & \boldsymbol{a}_T \end{array}\right] \label{eq:tau} \end{equation} Then we can use the diffusion probabilistic model to perform trajectory generation. However, the aim of planning is not to restore the original trajectory but to predict future actions with the highest reward-to-go; offline reinforcement learning should therefore be formulated as a conditional generative problem with guided diffusion models, which have achieved great success in image synthesis \cite{dhariwal2021diffusion}. Hence, we derive the conditional diffusion process: \begin{equation} \label{eq:diff_plan} q(\boldsymbol{\tau}^{i+1} | \boldsymbol{\tau}^i), \;\;\;\; p_{\theta}(\boldsymbol{\tau}^{i-1}|\boldsymbol{\tau}^{i}, \boldsymbol{y}(\boldsymbol{\tau})) \end{equation} where the new term $\boldsymbol{y}(\boldsymbol{\tau})$ is some specific information about the given trajectory $\boldsymbol{\tau}$, such as its reward-to-go (return) $\mathcal{J}(\boldsymbol{\tau}^0)$ or the constraints that the trajectory must satisfy.
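As a minimal numerical sketch of the unconditional surrogate loss in Eq.~\ref{eq:diff_loss}, on which the conditional variants below build, the following corrupts a clean trajectory in closed form and scores a noise predictor by MSE; the linear beta schedule and the zero-output predictor are illustrative assumptions, not the trained U-Net.

```python
import numpy as np

rng = np.random.default_rng(0)

# Closed-form forward diffusion q(tau^i | tau^0) under a standard DDPM
# linear beta schedule (an assumed schedule, for illustration only).
N = 100
betas = np.linspace(1e-4, 2e-2, N)
alpha_bars = np.cumprod(1.0 - betas)

def corrupt(tau0, i, eps):
    """Corrupt tau^0 to diffusion step i with Gaussian noise eps."""
    return np.sqrt(alpha_bars[i]) * tau0 + np.sqrt(1.0 - alpha_bars[i]) * eps

def denoise_loss(eps_theta, tau0, i):
    """One Monte Carlo term of L_denoise: MSE between injected and predicted noise."""
    eps = rng.standard_normal(tau0.shape)
    tau_i = corrupt(tau0, i, eps)
    return float(np.mean((eps - eps_theta(tau_i, i)) ** 2))

# A placeholder predictor that always outputs zeros incurs a loss close to 1,
# the variance of the target Gaussian noise.
zero_predictor = lambda tau_i, i: np.zeros_like(tau_i)
loss = denoise_loss(zero_predictor, np.zeros((16, 8)), i=50)
```

A trained $\epsilon_\theta$ would drive this quantity well below the variance floor of the untrained placeholder.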
On this basis, we can rewrite the optimization objective as \begin{equation} \theta^*=\arg \min _\theta-\mathbb{E}_{\boldsymbol{\tau}^0}\left[\log p_\theta(\boldsymbol{\tau}^{0} | \boldsymbol{y}(\boldsymbol{\tau}^0))\right] \label{eq:cond_gen_model} \end{equation} \begin{figure} \caption{Overall framework of AdaptDiffuser\xspace. To improve the adaptability of the diffusion model to diverse tasks, rich data with distinct objectives is generated, guided by each task’s reward function. During the diffusion denoising process, we utilize a pre-trained denoising U-Net to progressively generate high-quality trajectories. At each denoising time step, we take the task-specific reward of a trajectory to adjust the gradient of the state and action sequence, thereby creating trajectories that align with specific task objectives. Subsequently, the generated synthetic trajectory is evaluated by a discriminator to see if it meets the standards. If so, it is incorporated into a data pool to fine-tune the diffusion model. The procedure iteratively enhances the generalizability of our model for both seen and unseen settings.} \label{fig:main_frame} \end{figure} Therefore, for tasks aiming to maximize the reward-to-go, we let $\mathcal{O}_{t}$ denote the optimality of the trajectory at timestep $t$, where $\mathcal{O}_{t}$ follows a Bernoulli distribution with ${p(\mathcal{O}_t=1) = \exp (\gamma^{t} \mathcal{R}(\boldsymbol{s}_t, \boldsymbol{a}_t))}$.
When $p(\mathcal{O}_{1:T} \mid \boldsymbol{\tau}^{i})$ satisfies certain Lipschitz conditions, the conditional transition probability of the reverse diffusion process can be approximated as \cite{feller2015theory}: \begin{equation} \label{eq:guided} p_\theta(\boldsymbol{\tau}^{i-1} \mid \boldsymbol{\tau}^{i}, \mathcal{O}_{1:T}) \approx \mathcal{N}(\boldsymbol{\tau}^{i-1}; \mu_{\theta} + \alpha\Sigma g, \Sigma) \end{equation} \begin{align*} \text{where, } g &= \nabla_{\boldsymbol{\tau}} \log p(\mathcal{O}_{1:T} \mid \boldsymbol{\tau}) |_{\boldsymbol{\tau} = \mu_{\theta}} \\ &= \sum_{t=0}^{T} \gamma^{t} \nabla_{\boldsymbol{s}_t,\boldsymbol{a}_t} \mathcal{R}(\boldsymbol{s}_t, \boldsymbol{a}_t) |_{(\boldsymbol{s}_t,\boldsymbol{a}_t)=\mu_t} = \nabla_{\boldsymbol{\tau}} \mathcal{J}(\mu_{\theta}). \end{align*} Besides, for tasks aiming to satisfy a single-point conditional constraint (e.g., goal-conditioned tasks), the constraint can be imposed by substituting the conditioned values for the sampled values at all diffusion timesteps $i \in \{0, 1, ..., N\}$. Although this paradigm achieves results competitive with previous planning methods not based on diffusion models, it only performs conditional guidance during the reverse diffusion process and assumes the unconditional diffusion model is trained perfectly over the forward process. However, as depicted in Eq.~\ref{eq:guided}, the quality of a generated trajectory $\boldsymbol{\tau}$ depends not only on the guidance gradient $g$ but even more on the learned mean $\mu_{\theta}$ and covariance $\Sigma$ of the unconditional diffusion model. If the learned $\mu_{\theta}$ deviates far from the optimal trajectory, no matter how strong the guidance $g$ is, the final generated result will be highly biased and of low quality. Moreover, as Eq.~\ref{eq:diff_loss} shows, the quality of $\mu_{\theta}$ hinges on the training data, whose quality is uneven across different tasks, especially unseen ones.
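The guided reverse step of Eq.~\ref{eq:guided} can be sketched as follows; the quadratic reward pulling every state toward a target $s^*$ is a hypothetical stand-in for an actual task reward.

```python
import numpy as np

# One reverse-diffusion step with reward guidance: the learned mean mu_theta
# is shifted by alpha * Sigma * g before sampling, where g is the gradient
# of the return J evaluated at mu_theta (Eq. for the guided posterior).
def guided_step(mu, sigma, grad_J, alpha=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    mean = mu + alpha * sigma * grad_J(mu)   # mu_theta + alpha * Sigma * g
    return mean + np.sqrt(sigma) * rng.standard_normal(mu.shape)

# Hypothetical reward R = -||s - s_star||^2, whose gradient -2 (s - s_star)
# pulls every state of the trajectory toward the target s_star.
s_star = 1.0
grad_J = lambda tau: -2.0 * (tau - s_star)

tau_next = guided_step(np.zeros(4), sigma=0.01, grad_J=grad_J)
```

Increasing $\alpha$ trades fidelity to the learned mean for reward, which is exactly why a poorly learned $\mu_\theta$ cannot be rescued by guidance alone.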
Previous diffusion-based planning methods have not addressed this problem, which limits their performance on both existing and unseen tasks and results in poor adaptation ability. \subsection{Self-evolved Planning with Diffusion} Therefore, to improve the adaptability of these planners, we propose AdaptDiffuser\xspace, a novel self-evolved decision-making approach based on diffusion probabilistic models, which enhances the quality of the learned mean $\mu_{\theta}$ and covariance $\Sigma$ of the reverse diffusion process. AdaptDiffuser\xspace relies on self-evolved synthetic data generation to enrich the training dataset, denoted as $\boldsymbol{\tau}_0$, and on synthetic-data fine-tuning to boost performance. After that, AdaptDiffuser\xspace follows the paradigm depicted in Eq.~\ref{eq:guided} to find the optimal action sequence for the given task under the guidance of reward gradients. As shown in Figure \ref{fig:main_frame}, to implement AdaptDiffuser\xspace, we first generate a large number of synthetic demonstrations for unseen tasks that do not exist in the training dataset, in order to simulate a wide range of scenarios and behaviors that the diffusion model may encounter in the real world. This synthetic data is iteratively generated through the sampling process of the original diffusion probabilistic model $\theta_0^*$ with reward guidance, taking advantage of its strong generation ability. We discuss the details of synthetic data generation in Section \ref{sec:method_gen}; here we abbreviate it as a function $\mathcal{G}(\mu_{\theta}, \Sigma, \nabla_{\boldsymbol{\tau}} \mathcal{J}(\mu_{\theta}))$. Second, we design a rule-based discriminator $\mathcal{D}$, with reward and dynamics-consistency guidance, to select high-quality data from the generated data pool.
Previous sequence modeling methods, which predict the rewards $\mathcal{R}(\boldsymbol{s}, \boldsymbol{a})$ simultaneously with the generated states and actions, cannot resolve the dynamics-consistency problem: the actual next state under the transition model, $\boldsymbol{s}' = \mathcal{T}(\boldsymbol{s}, \boldsymbol{a})$, may deviate greatly from the predicted next state. What is more, such deviated trajectories are treated as feasible solutions under previous settings. To resolve this problem, AdaptDiffuser\xspace takes only the state sequence $\boldsymbol{s} = \left[\boldsymbol{s}_0, \boldsymbol{s}_1, ..., \boldsymbol{s}_T \right]$ of the generated trajectory and then performs state-tracking control using a traditional or neural-network-based inverse dynamics model $\mathcal{I}$ to derive real executable actions, denoted as $\widetilde{\boldsymbol{a}}_t = \mathcal{I}(\boldsymbol{s}_t, \boldsymbol{s}_{t+1})$. This step ensures that the actions do not violate the robot's dynamics constraints. After that, AdaptDiffuser\xspace executes $\widetilde{\boldsymbol{a}}_t$ to obtain the revised next state $\widetilde{\boldsymbol{s}}_{t+1}=\mathcal{T}\left(\widetilde{\boldsymbol{s}}_t, \widetilde{\boldsymbol{a}}_t\right)$, and then filters out the trajectories whose revised state $\widetilde{\boldsymbol{s}}_{t+1}$ deviates too much from the generated $\boldsymbol{s}_{t+1}$ (measured by the MSE $d=||\widetilde{\boldsymbol{s}}_{t+1} - \boldsymbol{s}_{t+1}||_2$). The remaining trajectories $\widetilde{\boldsymbol{s}}$ are then used to predict the reward $\widetilde{\mathcal{R}}=\mathcal{R}(\widetilde{\boldsymbol{s}}, \widetilde{\boldsymbol{a}})$ with the new actions $\widetilde{\boldsymbol{a}}$ and are selected according to this reward. In this way, we derive high-quality synthetic data to fine-tune the diffusion probabilistic model.
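A minimal sketch of this dynamics-consistency filter follows; the one-dimensional dynamics and the bounded-action inverse dynamics model are toy stand-ins for $\mathcal{T}$ and $\mathcal{I}$.

```python
import numpy as np

# Toy inverse dynamics: a tracking controller with bounded actions,
# a~_t = I(s_t, s_{t+1}) (the clipping range is an illustrative assumption).
def inverse_dynamics(s, s_next):
    return np.clip(s_next - s, -0.1, 0.1)

# Toy forward dynamics: s~_{t+1} = T(s~_t, a~_t).
def transition(s, a):
    return s + a

def dynamics_consistent(states, tol=1e-3):
    """Replay a generated state sequence through the inverse and forward
    dynamics; accept only if the replayed states stay within an MSE gate."""
    s = states[0]
    for t in range(len(states) - 1):
        a = inverse_dynamics(s, states[t + 1])
        s = transition(s, a)
        d = np.sum((s - states[t + 1]) ** 2)   # MSE deviation gate
        if d > tol:
            return False
    return True

feasible = np.linspace(0.0, 0.5, 6)        # steps of 0.1, within action limits
teleport = np.array([0.0, 1.0, 1.0])       # a jump no bounded action can track
print(dynamics_consistent(feasible), dynamics_consistent(teleport))  # True False
```

Trajectories like `teleport`, which a reward-agnostic sequence model might happily emit, are exactly the ones this gate removes before fine-tuning.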
We repeat this process multiple times to continually improve the model's performance and adapt it to new tasks, ultimately improving its generalization. This process can be formulated as \begin{equation} \begin{split} &\theta_k^* \quad =\arg \min _\theta-\mathbb{E}_{\hat{\boldsymbol{\tau}}_k}\left[\log p_\theta(\hat{\boldsymbol{\tau}}_k | \boldsymbol{y}(\hat{\boldsymbol{\tau}}_k))\right] \\ &\boldsymbol{\tau}_{k+1} = \mathcal{G}\left(\mu_{\theta_k^*}, \Sigma, \nabla_{\boldsymbol{\tau}} \mathcal{J}(\mu_{\theta_k^*})\right) \\ &\hat{\boldsymbol{\tau}}_{k+1} = [\hat{\boldsymbol{\tau}}_k, \mathcal{D}(\widetilde{\mathcal{R}}(\boldsymbol{\tau}_{k+1})) ] \end{split} \end{equation} where $k \in \{0, 1, ...\}$ is the iteration round and the initial dataset is $\hat{\boldsymbol{\tau}}_0 = \boldsymbol{\tau}_0$. \subsection{Reward-guided Synthetic Data Generation} \label{sec:method_gen} To improve the performance and adaptability of the diffusion probabilistic model on unseen tasks, we need to generate synthetic trajectory data using the diffusion model learned at the current iteration. We achieve this by defining a series of tasks with different goals and reward functions. \textbf{Continuous Reward Function.} For tasks with continuous reward functions, represented by MuJoCo \cite{todorov2012mujoco}, we follow the setting that defines a binary random variable indicating optimality, with probability mapped from a continuous value, to convert reward maximization into a continuous optimization problem; Eq.~\ref{eq:guided} can then be applied directly to generate synthetic results. \textbf{Sparse Reward Function.} For tasks typified by goal-conditioned problems such as Maze2D, the reward function is a unit step function $\mathcal{J}(\boldsymbol{\tau}) = \boldsymbol{\chi}_{\boldsymbol{s}_g }(\boldsymbol{\tau})$ whose value equals 1 if and only if the generated trajectory contains the goal state $\boldsymbol{s}_g$.
The gradient of this reward function is a Dirac delta function~\cite{zhang2021dirac}, which is not a classical function and cannot be used as guidance. However, viewed as a limit, the constraint can be imposed by replacing the corresponding sampled values with the constraint values over all diffusion timesteps. \textbf{Combination.} Many realistic tasks require these two sorts of reward functions simultaneously. For example, an auxiliary task in the Maze2D environment may require the planner not only to find a way from the start point to the goal point but also to collect a gold coin in the maze. This task is more difficult, and it is infeasible to fold this constraint into the sparse reward term because it is unknown at which timestep the generated trajectory should pass the additional reward point (denoted $\boldsymbol{s}_c$). As a solution, we propose to combine the two methods and define an auxiliary reward-guiding function to satisfy the constraint: \begin{equation} \mathcal{J}(\boldsymbol{\tau}) = \sum_{t=0}^{T} ||\boldsymbol{s}_t - \boldsymbol{s}_c||_p \label{eq:newj} \end{equation} where $p$ denotes the $p$-norm. We then plug Eq.~\ref{eq:newj} into Eq.~\ref{eq:guided} as the marginal probability density function and force the last state of the generated trajectory $\boldsymbol{\tau}^0$ to be $\boldsymbol{s}_c$. The generated trajectories that meet the criteria of the discriminator are added to the training set for diffusion model learning as synthetic expert data. This process is repeated until a sufficient amount of synthetic data has been generated. By iteratively generating and selecting high-quality data under the guidance of expected return and dynamics-transition constraints, we boost the performance and enhance the adaptability of the diffusion probabilistic model.
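Putting the pieces together, the iterative self-evolution described in this section can be sketched as the loop below; the generator, discriminator, and fine-tuning step are hypothetical stubs standing in for reward-guided sampling $\mathcal{G}$, the rule-based discriminator $\mathcal{D}$, and the diffusion-model update, respectively.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for reward-guided sampling G(mu, Sigma, grad J): perturb the
# current "model" (here just a parameter vector) to propose candidates.
def generate(model, n=32):
    return model + 0.1 * rng.standard_normal((n,) + model.shape)

# Stand-in for the reward/dynamics-consistency discriminator D:
# accept a candidate only if its (toy) score clears a threshold.
def accept(traj, threshold=0.5):
    return float(np.mean(traj)) > threshold

# Stand-in for the fine-tuning argmin: refit the model to the data pool.
def fine_tune(model, pool):
    return np.mean(pool, axis=0)

model = np.full((4,), 0.6)                 # initial model theta_0^*
pool = [model.copy()]                      # initial dataset tau_0
for k in range(3):                         # iteration rounds k = 0, 1, 2
    candidates = generate(model)
    pool += [c for c in candidates if accept(c)]
    model = fine_tune(model, pool)
```

Each round enlarges the pool only with accepted candidates, so the model is refit on data that passed both gates, mirroring the generate-filter-fine-tune cycle above.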
\section{Experiment} \subsection{Benchmarks} \textbf{Maze2D}: Maze2D \cite{fu2020d4rl} is a navigation environment in which a 2D agent must traverse from a randomly designated location to a fixed goal location, where a reward of 1 is given. No reward shaping is provided at any other location. The objective of this task is to evaluate the ability of offline RL algorithms to stitch together previously collected sub-trajectories in order to find the shortest path to the evaluation goal. Three maze layouts are available: ``umaze'', ``medium'', and ``large''. The expert data for this task is generated by selecting random goal locations and using a planner to produce sequences of waypoints, which are then followed using a PD controller for dynamic tracking. We also provide a method to derive more diverse layouts with ChatGPT in Appendix \ref{appendix:chatgpt}. \textbf{MuJoCo}: MuJoCo \cite{todorov2012mujoco} is a physics engine that allows real-time simulation of complex mechanical systems. It has three typical tasks: Hopper, HalfCheetah, and Walker2d. Each task has four types of datasets to test the performance of an algorithm: ``medium'', ``random'', ``medium-replay'' and ``medium-expert''. The ``medium'' dataset is created by training a policy with a certain algorithm and collecting $1$M samples. The ``random'' dataset is created using a randomly initialized policy. The ``medium-replay'' dataset includes all samples recorded during training until the policy reaches a certain level of performance. The ``medium-expert'' dataset is a mix of expert demonstrations and sub-optimal data. \textbf{KUKA Robot}: The KUKA Robot \cite{schreiber2010fast} benchmark is a standardized evaluation suite that we designed to measure the capabilities of a robot arm equipped with a suction cup at its end. It consists of two tasks: conditional stacking \cite{janner2022planning} and pick-and-place. More details can be found in Sec. \ref{sec:pick_place}.
By successfully completing these tasks, the KUKA Robot benchmark can accurately assess the performance of the robot arm and assist developers in improving its design. \subsection{Performance Enhancement on Existing Tasks} \subsubsection{Experiments on Maze2d Environment} \textbf{Overall Performance.} Navigation in the Maze2D environment takes planners hundreds of steps to reach the goal location; even the best model-free algorithms struggle to perform credit assignment adequately and reliably reach the target. We plan with AdaptDiffuser\xspace using the sparse-reward-function strategy to condition on the start and goal locations. We compare our method with the best model-free algorithms (IQL, \citealt{kostrikov2021offline}, and CQL, \citealt{kumar2020conservative}), the conventional trajectory optimizer MPPI \cite{williams2015model}, and the previous diffusion-based approach Diffuser \cite{janner2022planning} in Table \ref{table:maze2d}. This comparison is fair because model-free methods can also identify the location of the goal point, which is the only state with a non-zero reward. \begin{table}[t] \caption{ \textbf{Offline Reinforcement Learning Performance in Maze2d Environment.} We show the results of AdaptDiffuser\xspace and previous planning methods to validate the bootstrapping effect of our method on a goal-conditioned task.
} \label{table:maze2d} \centering \small \tabcolsep 3pt \begin{tabular}{cccccc} \toprule \multicolumn{1}{c}{\textbf{Environment}} & \textbf{MPPI} & \textbf{CQL} & \textbf{IQL} & \textbf{Diffuser} & \textbf{AdaptDiffuser} \\ \midrule U-Maze & 33.2 & 5.7 & 47.4 & 113.9 & \textbf{135.1} \scriptsize{\raisebox{1pt}{$\pm 5.8$}}\\ Medium & 10.2 & 5.0 & 34.9 & 121.5 & \textbf{129.9} \scriptsize{\raisebox{1pt}{$\pm 4.6$}}\\ Large & 5.1 & 12.5 & 58.6 & 123.0 & \textbf{167.9} \scriptsize{\raisebox{1pt}{$\pm 5.0$}}\\ \midrule \multicolumn{1}{c}{\textbf{Average}} & 16.2 & 7.7 & 47.0 & 119.5 & \textbf{144.3} \hspace{.58cm} \\ \bottomrule \end{tabular} \end{table} \begin{figure}\label{fig:maze-hard} \end{figure} \begin{table*}[t] \caption{\small \textbf{Offline Reinforcement Learning Performance in MuJoCo Environment.} We report normalized average returns of D4RL tasks \citep{fu2020d4rl} in the table. And the mean and the standard error are calculated over 3 random seeds. } \label{table:mujoco} \centering \small \tabcolsep 4.5pt \begin{tabular}{ccccccccccccc} \toprule \textbf{Dataset} & \textbf{Environment} & \textbf{BC} & \textbf{CQL} & \textbf{IQL} & \textbf{DT} & \textbf{TT} & \textbf{MOPO} & \textbf{MOReL} & \textbf{MBOP} & \textbf{Diffuser} & \textbf{AdaptDiffuser}\\ \midrule Med-Expert & HalfCheetah & $55.2$ & $91.6$ & $86.7$ & $86.8$ & $95.0$ & $63.3$ & $53.3$ & $\textbf{105.9}$ & $88.9$ & $89.6$ \scriptsize{\raisebox{1pt}{$\pm 0.8$}} \\ Med-Expert & Hopper & $52.5$ & $105.4$ & $91.5$ & $107.6$ & $\textbf{110.0}$ & $23.7$ & $108.7$ & $55.1$ & $103.3$ & $\textbf{111.6}$ \scriptsize{\raisebox{1pt}{$\pm 2.0$}} \\ Med-Expert & Walker2d & $\textbf{107.5}$ & $\textbf{108.8}$ & $\textbf{109.6}$ & $\textbf{108.1}$ & $101.9$ & $44.6$ & $95.6$ & $70.2$ & $\textbf{106.9}$ & $\textbf{108.2}$ \scriptsize{\raisebox{1pt}{$\pm 0.8$}} \\ \midrule Medium & HalfCheetah & $42.6$ & $44.0$ & $\textbf{47.4}$ & $42.6$ & $\textbf{46.9}$ & $42.3$ & $42.1$ & $44.6$ & $42.8$ & $44.2$ 
\scriptsize{\raisebox{1pt}{$\pm 0.6$}} \\ Medium & Hopper & $52.9$ & $58.5$ & $66.3$ & $67.6$ & $61.1$ & $28.0$ & $\textbf{95.4}$ & $48.8$ & $74.3$ & $\textbf{96.6}$ \scriptsize{\raisebox{1pt}{$\pm 2.7$}} \\ Medium & Walker2d & $75.3$ & $72.5$ & $78.3$ & $74.0$ & $79.0$ & $17.8$ & $77.8$ & $41.0$ & $79.6$ & $\textbf{84.4}$ \scriptsize{\raisebox{1pt}{$\pm 2.6$}} \\ \midrule Med-Replay & HalfCheetah & $36.6$ & $\textbf{45.5}$ & $\textbf{44.2}$ & $36.6$ & $41.9$ & $53.1$ & $40.2$ & $42.3$ & $37.7$ & $38.3$ \scriptsize{\raisebox{1pt}{$\pm 0.9$}} \\ Med-Replay & Hopper & $18.1$ & $95.0$ & $94.7$ & $82.7$ & $91.5$ & $67.5$ & $\textbf{93.6}$ & $12.4$ & $\textbf{93.6}$ & $\textbf{92.2}$ \scriptsize{\raisebox{1pt}{$\pm 1.5$}} \\ Med-Replay & Walker2d & $26.0$ & $77.2$ & $73.9$ & $66.6$ & $82.6$ & $39.0$ & $49.8$ & $9.7$ & $70.6$ & $\textbf{84.7}$ \scriptsize{\raisebox{1pt}{$\pm 3.1$}} \\ \midrule \multicolumn{2}{c}{\textbf{Average}} & 51.9 & 77.6 & 77.0 & 74.7 & 78.9 & 42.1 & 72.9 & 47.8 & 77.5 & \textbf{83.4} \hspace{.58cm} \\ \bottomrule \end{tabular} \end{table*} As shown in Table \ref{table:maze2d}, scores achieved by AdaptDiffuser\xspace are over 125 in all maze sizes and are 20 points higher than those of Diffuser in average, indicating our method's strong effectiveness in goal-conditioned tasks. \textbf{Visualization of Hard Cases.} In order to more intuitively reflect the improvement of our method compared with previous Diffuser \cite{janner2022planning}, we select one difficult planning example of Maze2D-Medium and one of Maze2D-Large respectively for visualization, as shown in Figure \ref{fig:maze-hard}. Among the Maze2D planning paths with sparse rewards, the example with the longest path to be planned is the hardest one. Therefore, in Maze2D-Medium (Fig. \ref{fig:maze-hard} (a) (b)), we designate the start point as (1, 1) with goal point (6, 6), while in Maze2D-Large (Fig. 
\ref{fig:maze-hard} (c) (d)), we specify the start point as (1, 7) with the goal point at (9, 7). It can be observed from Fig. \ref{fig:maze-hard} that in Hard Case 1, AdaptDiffuser\xspace generates a shorter and smoother path than Diffuser and thus achieves a larger reward, while in Hard Case 2, Diffuser fails to plan altogether whereas AdaptDiffuser\xspace derives a feasible path. \subsubsection{Experiments on MuJoCo Environment} \label{sec:mujoco} MuJoCo tasks are employed to test the performance enhancement of AdaptDiffuser\xspace when learning from heterogeneous data of varying quality, using the publicly available D4RL datasets \citep{fu2020d4rl}. We evaluate our approach against a number of existing algorithms covering a variety of data-driven methodologies, including model-free RL algorithms such as CQL \cite{kumar2020conservative} and IQL \cite{kostrikov2021offline}; return-conditioning approaches such as Decision Transformer (DT) \cite{chen2021decision}; and model-based RL algorithms such as Trajectory Transformer (TT) \cite{janner2021offline}, MOPO \cite{yu2020mopo}, MOReL \cite{kidambi2020morel}, and MBOP \cite{argenson2020model}. The results are shown in Table \ref{table:mujoco}. It is also worth noting that in the MuJoCo environment, the state sequence $\widetilde{\boldsymbol{s}}$ obtained by executing the generated actions $\boldsymbol{a}$ is very close to the generated state sequence $\boldsymbol{s}$, so we directly use $\widetilde{\mathcal{R}}(\boldsymbol{s}, \boldsymbol{a})=\mathcal{R}(\boldsymbol{s}, \boldsymbol{a})$ on this dataset. \begin{figure}\label{fig:maze_adapt} \end{figure} As observed from the table, AdaptDiffuser\xspace is either competitive with or outperforms most of the offline RL baselines across all three locomotion settings.
More importantly, compared with Diffuser \cite{janner2022planning}, our method achieves higher rewards on almost all datasets and improves performance greatly, especially in the ``Hopper-Medium'' and ``Walker2d-Medium'' environments. We attribute this to the poor quality of the original data in the ``Medium'' datasets: AdaptDiffuser\xspace markedly improves the quality of the training data and thus significantly enhances the performance of the diffusion-based planner. The results on the ``Medium-Expert'' datasets support this analysis: their original data (especially for the HalfCheetah environment) is already good, so generating new data yields only a small gain in model performance. \subsection{Adaptation Ability on Unseen Tasks} \subsubsection{Maze2d with Gold Coin Picking Task} On top of the existing Maze2D settings, we design a new task that requires the agent to navigate as well as pick up all gold coins in the maze. We show an example with an additional reward at (4, 2) in Figure \ref{fig:maze_adapt}. When there is no additional reward, both Diffuser \cite{janner2022planning} and our method AdaptDiffuser\xspace choose the shorter path at the bottom of the figure to reach the goal point. When an additional reward is placed at position (4, 2) of the maze, both planners switch to the path through the middle of the figure under the guidance of rewards. However, the path generated by Diffuser causes the agent to collide with the wall, while AdaptDiffuser\xspace generates a smoother, collision-free path, reflecting the superiority of our method.
\begin{table}[t] \caption{ \textbf{Adaptation Performance on Pick-and-Place Task} } \label{table:kukapick} \tabcolsep 8pt \centering \small \begin{tabular}{ccc} \toprule \multicolumn{1}{c}{\textbf{Environment}} & \textbf{Diffuser} & \textbf{AdaptDiffuser} \\ \midrule Pick and Place setup 1 & 28.16 \scriptsize{\raisebox{1pt}{$\pm 2.0$}} & \textbf{36.03} \scriptsize{\raisebox{1pt}{$\pm 2.1$}} \\ Pick and Place setup 2 & 35.25 \scriptsize{\raisebox{1pt}{$\pm 1.4$}} & \textbf{39.00} \scriptsize{\raisebox{1pt}{$\pm 1.3$}} \\ \midrule \multicolumn{1}{c}{\textbf{Average}} & 31.71 \hspace{.58cm} & \textbf{37.52} \hspace{.58cm} \\ \bottomrule \end{tabular} \end{table} \begin{figure} \caption{\textbf{Visualization of KUKA Pick-and-Place Task.} We require the KUKA Arm to move the blocks from their random initialized positions on the right side of the table to the left and arrange them in the order of yellow, blue, green, and red (from near to far).} \label{fig:pick_vis} \end{figure} \subsubsection{KUKA Pick and Place Task} \label{sec:pick_place} \textbf{Task Specification.} There are two tasks in the KUKA robot arm environment. One is the conditional stacking task, as defined in \cite{janner2022planning}, where the robot must correctly stack blocks in a predetermined order at a designated location, using blocks that have been randomly placed. The other is the pick-and-place task designed by us, which aims to place the randomly initialized blocks at their own target locations in a predetermined order. The reward functions of both tasks are defined as one upon successful placement and zero otherwise. To test the adaptation capability of AdaptDiffuser\xspace and other baselines, we only provide expert trajectory data for the conditional stacking task, generated by PDDLStream \cite{garrett2020pddlstream}, but we require the planner to generalize to the pick-and-place task without any expert data.
The performance on the pick-and-place task is therefore a good measure of the planner's adaptability. \textbf{Adaptation Performance.} In the KUKA pick-and-place task, we define the guidance of the conditional diffusion model as the gradient of the reward function with respect to the distance between the current location and the target location. The adaptation performance is displayed in Table \ref{table:kukapick}. There are two setups in the KUKA benchmark: in setup 1, the four blocks are initialized randomly on the floor, while in setup 2, the four blocks are stacked at a random location at the beginning. As shown in Table \ref{table:kukapick}, AdaptDiffuser\xspace outperforms Diffuser greatly in both setups, and achieves higher performance in setup 2 because all of the blocks start from the same horizontal position. We visualize a successful case of the KUKA pick-and-place task in Figure \ref{fig:pick_vis}; more visualization results can be seen in Appendix \ref{appedix:vis_kuka}. \subsection{Ablation Study} \subsubsection{Ablation on Iterative Phases} To verify the boosting effect of iterative data generation on planner performance, we conduct an ablation on the number of iterative phases of AdaptDiffuser\xspace in the MuJoCo environment of D4RL. \begin{table}[t] \caption{ \textbf{Ablation on Iterative Phases.} The mean and the standard error are calculated over 3 random seeds.
} \label{table:mujoco_iterphases} \centering \footnotesize \begin{tabular}{cccc} \toprule \textbf{Dataset} & \textbf{Environment} & \textbf{$1^{\text{st}}$ Phase} & \textbf{$2^{\text{nd}}$ Phase} \\ \midrule Medium-Expert & HalfCheetah & $\textbf{89.3}$ \scriptsize{\raisebox{1pt}{$\pm 0.6$}} & $\textbf{89.6}$ \scriptsize{\raisebox{1pt}{$\pm 0.8$}} \\ Medium-Expert & Hopper & $\textbf{110.7}$ \scriptsize{\raisebox{1pt}{$\pm 3.2$}} & $\textbf{111.6}$ \scriptsize{\raisebox{1pt}{$\pm 2.0$}} \\ Medium-Expert & Walker2d & $\textbf{107.7}$ \scriptsize{\raisebox{1pt}{$\pm 0.9$}} & $\textbf{108.2}$ \scriptsize{\raisebox{1pt}{$\pm 0.8$}}\\ \midrule Medium & HalfCheetah & $\textbf{43.8}$ \scriptsize{\raisebox{1pt}{$\pm 0.5$}} & $\textbf{44.2}$ \scriptsize{\raisebox{1pt}{$\pm 0.6$}} \\ Medium & Hopper & $95.4$ \scriptsize{\raisebox{1pt}{$\pm 3.4$}} & $\textbf{96.6}$ \scriptsize{\raisebox{1pt}{$\pm 2.7$}} \\ Medium & Walker2d & $83.2$ \scriptsize{\raisebox{1pt}{$\pm 3.5$}} & $\textbf{84.4}$ \scriptsize{\raisebox{1pt}{$\pm 2.6$}} \\ \midrule \multicolumn{2}{c}{\textbf{Average}} & 88.4 \hspace{.58cm} & \textbf{89.1} \hspace{.58cm} \\ \bottomrule \end{tabular} \end{table} As shown in Table \ref{table:mujoco_iterphases}, on the ``Medium" dataset the quality of the original data is low; although the data generated in the first phase greatly supplements the training dataset and greatly improves the performance (see Sec.\ \ref{sec:mujoco}), the performance achieved after the second phase is still significantly better than that of the first phase. For the ``Medium-Expert" dataset, by contrast, the expert data already covers most of the environment, and the newly generated data is only marginally more suitable for the planner to learn from; hence, after a certain improvement in the first phase, the subsequent growth is limited.
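The multi-phase iterative paradigm evaluated in this ablation can be sketched as a loop of generation, filtering, and fine-tuning. The following minimal Python sketch is illustrative only; all names (`generate_trajectories`, `fine_tune`, the scalar `quality`) are hypothetical stand-ins, not the released implementation:

```python
import random

def generate_trajectories(quality, n):
    # Stand-in for reward-guided sampling from the current diffusion
    # model: better models tend to yield higher-reward trajectories.
    return [random.gauss(quality, 1.0) for _ in range(n)]

def fine_tune(quality, selected):
    # Stand-in for fine-tuning the diffusion model on selected data.
    return quality + 0.1 * len(selected)

def self_evolve(phases=2, n=10, threshold=0.5):
    """Multi-phase loop: generate -> filter (discriminator) -> fine-tune."""
    quality = 0.0
    for _ in range(phases):
        candidates = generate_trajectories(quality, n)
        # Rule-based discriminator: keep only high-reward trajectories.
        selected = [r for r in candidates if r > threshold]
        quality = fine_tune(quality, selected)
    return quality
```

Each additional phase improves the model only through the trajectories that pass the discriminator, which is why the gains taper off once the model is already strong.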
The above experiments verify the effectiveness of the multi-phase iterative paradigm of AdaptDiffuser\xspace, and also show that the boosting effect becomes less pronounced once the algorithm's performance reaches a certain level. \subsubsection{Ablation on Insufficient Data \& Training} \label{ablation:limit_data} To demonstrate the superiority of our method over the previous diffusion-based work Diffuser~\cite{janner2022planning} when the expert data is limited and the training is insufficient, we conducted experiments on the Maze2d-Large dataset using different percentages of expert data (e.g. 20\%, 50\%) and only 25\% of the training steps. The results are shown in Table~\ref{tab:ablation_amount}. The setting 100\%$\mathcal{D}$ denotes the full training setting. AdaptDiffuser\xspace, using only 50\% of the data and 25\% of the training steps, already beats the fully trained Diffuser; it can thus achieve good performance with a small amount of expert data and few training steps. \begin{table}[t] \caption{\textbf{Ablation study on different amounts of expert data.}} \centering \footnotesize \tabcolsep 8pt \begin{tabular}{cccc} \toprule \textbf{Amount of Data} & \textbf{20\% $\mathcal{D}$} & \textbf{50\% $\mathcal{D}$} & \textbf{100\%$\mathcal{D}$} \\ \midrule Diffuser & 105.0 & 107.9 & 123.0 \\ AdaptDiffuser & \textbf{112.5} & \textbf{123.8} & \textbf{167.9} \\ \bottomrule \end{tabular} \label{tab:ablation_amount} \end{table} \subsubsection{Model Size and Running Time} Table \ref{table:model_size} reports the model size of AdaptDiffuser\xspace, measured by the number of parameters. We also analyze the testing-time and training-time performance in Appendix \ref{appendix:time}. From this analysis, we can see that the inference time of AdaptDiffuser\xspace is almost equal to that of Diffuser \citep{janner2022planning}.
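Model size here is simply the total count of learnable parameters. A minimal, framework-agnostic sketch using NumPy arrays as stand-in weight tensors (the layer names and shapes are illustrative, not the actual AdaptDiffuser\xspace architecture; in PyTorch the equivalent one-liner is `sum(p.numel() for p in model.parameters())`):

```python
import numpy as np

def count_parameters(weights):
    """Total number of scalar parameters across all weight tensors."""
    return sum(w.size for w in weights.values())

# Toy temporal block: two 1-D convolutions with biases, as numpy arrays.
weights = {
    "conv1.weight": np.zeros((64, 32, 5)),  # (out_ch, in_ch, kernel)
    "conv1.bias":   np.zeros(64),
    "conv2.weight": np.zeros((64, 64, 5)),
    "conv2.bias":   np.zeros(64),
}
print(f"{count_parameters(weights) / 1e6:.3f} M parameters")  # prints: 0.031 M parameters
```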
\begin{table}[t] \caption{\small \textbf{Model Size of AdaptDiffuser\xspace.}} \label{table:model_size} \centering \small \begin{tabular}{cc} \toprule \textbf{Environment} & \textbf{Total Parameters (Model Size)} \\ \midrule MuJoCo & 3.96 M \\ Maze2D & 3.68 M \\ KUKA Robot & 64.9 M \\ \bottomrule \end{tabular} \end{table} \section{Conclusion} We present AdaptDiffuser\xspace, a method for improving the performance of diffusion-based planners in offline reinforcement learning through self-evolution. By generating diverse, high-quality and heterogeneous expert data using a reward-guided diffusion model and filtering out infeasible data using a rule-based discriminator, AdaptDiffuser\xspace is able to enhance the performance of diffusion models on existing decision-making tasks, especially goal-conditioned tasks, and to further improve their adaptability to unseen tasks without any expert data. Our experiments on two widely-used offline RL benchmarks and our carefully designed unseen tasks in the KUKA and Maze2D environments validate the effectiveness of AdaptDiffuser\xspace. \textbf{Discussion of Limitation.} Our method achieves better performance by generating high-quality synthetic data, at the cost of additional computation during training, but with almost no increase in inference time. Besides, although AdaptDiffuser has proven its effectiveness in several scenarios (e.g. MuJoCo, Maze2d, KUKA), it still faces challenges in tasks with high-dimensional observation spaces. More detailed discussions are given in Appendix \ref{appendix:discuss}. \textbf{Future work.} Further improving the sampling speed and exploring tasks with high-dimensional input are potential directions for future work. In addition, with the help of ChatGPT \cite{ouyang2022training}, using prompts to directly generate diverse maze settings to assist synthetic data generation is also a promising direction. We provide some examples in Appendix \ref{appendix:chatgpt}.
\appendix \onecolumn \section{Classifier-Guided Diffusion Model for Planning} \label{appedix:classifier} In this section, we present a theoretical analysis of the conditional diffusion model in detail. We start with an unconditional diffusion probabilistic model with a standard reverse process $p_{\theta}(\tau^i|\tau^{i+1})$. Given a specific label $y$ (for example, a goal point in Maze2D or a specific reward function in MuJoCo) to be conditioned on, together with a noised trajectory $\tau^i$, the reverse diffusion process can be redefined as $p_{\theta,\phi}(\tau^i|\tau^{i+1},y)$. Apart from the parameters $\theta$ of the original diffusion model, a new parameter $\phi$ is introduced, describing the probability transfer model from the noisy trajectory $\tau^i$ to the specific label $y$, denoted as $p_{\phi}(y \mid \tau^i)$. \begin{lemma} The transition probability of the conditional Markov noising process $q$ conditioned on $y$ is equal to that of the unconditional noising process. \begin{equation} q\left(\tau^{i+1} \mid \tau^{i}\right) = q\left(\tau^{i+1} \mid \tau^{i}, y\right) \end{equation} \end{lemma} \begin{proof} \begin{equation*} \begin{split} q\left(\tau^{i+1} \mid \tau^{i}\right) & =\int_{y} q\left(\tau^{i+1}, y \mid \tau^{i}\right) d y \\ & =\int_{y} q\left(\tau^{i+1} \mid \tau^{i}, y\right) p_{\phi}\left(y \mid \tau^{i}\right) d y \\ & =q\left(\tau^{i+1} \mid \tau^{i}, y\right) \int_{y} p_{\phi}\left(y \mid \tau^{i}\right) d y \\ & =q\left(\tau^{i+1} \mid \tau^{i}, y\right) \end{split} \end{equation*} The third line holds because, by definition of the noising process, $q\left(\tau^{i+1} \mid \tau^{i}, y\right)$ does not depend on $y$ and can therefore be moved out of the integral. \end{proof} \begin{lemma} The probability distribution of the label $y$ conditioned on $\tau^i$ does not depend on $\tau^{i+1}$.
\begin{equation} p_{\theta, \phi}\left(y \mid \tau^{i}, \tau^{i+1}\right) = p_{\phi}\left(y \mid \tau^{i}\right) \end{equation} \end{lemma} \begin{proof} \begin{equation*} \begin{split} p_{\theta, \phi}\left(y \mid \tau^{i}, \tau^{i+1}\right) &=q\left(\tau^{i+1} \mid \tau^{i}, y\right) \frac{p_{\phi}\left(y \mid \tau^{i}\right)}{q\left(\tau^{i+1} \mid \tau^{i}\right)} \\ & =q\left(\tau^{i+1} \mid \tau^{i}\right) \frac{p_{\phi}\left(y \mid \tau^{i}\right)}{q\left(\tau^{i+1} \mid \tau^{i}\right)} \\ & =p_{\phi}\left(y \mid \tau^{i}\right) \end{split} \end{equation*} \end{proof} \begin{theorem} The conditional sampling probability $p_{\theta,\phi}(\tau^i \mid \tau^{i+1},y)$ is proportional to the unconditional transition probability $p_{\theta}(\tau^i \mid \tau^{i+1})$ multiplied by the classifier probability $p_{\phi}(y \mid \tau^i)$. \begin{equation} p_{\theta,\phi}(\tau^i \mid \tau^{i+1},y) = Z p_{\theta}(\tau^i \mid \tau^{i+1})p_{\phi}(y \mid \tau^i) \label{eq:theorem1} \end{equation} Here $Z$ is a normalizing constant independent of $\tau^i$. \end{theorem} \begin{proof} \begin{equation} \begin{split} p_{\theta,\phi}(\tau^i \mid \tau^{i+1},y) &= \frac{p_{\theta, \phi}\left(\tau^{i}, \tau^{i+1}, y\right)}{p_{\theta, \phi}\left(\tau^{i+1}, y\right)} \\ & =\frac{p_{\theta, \phi}\left(\tau^{i}, \tau^{i+1}, y\right)}{p_{\phi}\left(y \mid \tau^{i+1}\right) p_{\theta}\left(\tau^{i+1}\right)} \\ & =\frac{p_{\theta}\left(\tau^{i} \mid \tau^{i+1}\right) p_{\theta, \phi}\left(y \mid \tau^{i}, \tau^{i+1}\right) p_{\theta}\left(\tau^{i+1}\right)}{p_{\phi}\left(y \mid \tau^{i+1}\right) p_{\theta}\left(\tau^{i+1}\right)} \\ & =\frac{p_{\theta}\left(\tau^{i} \mid \tau^{i+1}\right) p_{\theta, \phi}\left(y \mid \tau^{i}, \tau^{i+1}\right)}{p_{\phi}\left(y \mid \tau^{i+1}\right)} \\ & =\frac{p_{\theta}\left(\tau^{i} \mid \tau^{i+1}\right) p_{\phi}\left(y \mid \tau^{i}\right)}{p_{\phi}\left(y \mid \tau^{i+1}\right)} \end{split} \end{equation} The term $p_{\phi}\left(y \mid \tau^{i+1}\right)$ can be treated as a constant since it does not depend on
$\tau^{i}$ at the diffusion timestep $i$. \end{proof} Although exact sampling from this distribution (Equation \ref{eq:theorem1}) is difficult, \cite{sohl2015deep} demonstrates that it can be approximated by a modified Gaussian distribution. We show the derivation here. On the one hand, as Equation \ref{eq:p1} shows, we can formulate the denoising process with a Gaussian distribution: \begin{align} p_{\theta}(\tau^i \mid \tau^{i+1}) &= \mathcal{N}(\mu, \Sigma) \\ \log p_{\theta}(\tau^i \mid \tau^{i+1}) &= -\frac{1}{2}(\tau^i - \mu)^T \Sigma^{-1} (\tau^{i} - \mu) + C \label{eq:tt1} \end{align} On the other hand, the number of diffusion steps is usually large, so the difference between $\tau^i$ and $\tau^{i+1}$ is small. We can therefore apply a Taylor expansion of $\log p_{\phi}(y \mid \tau^i)$ around $\tau^i=\mu$: \begin{equation} \log p_{\phi}\left(y \mid \tau^{i}\right) = \log p_{\phi}\left(y \mid \tau^{i}\right)|_{\tau^{i}=\mu}+\left.\left(\tau^{i}-\mu\right) \nabla_{\tau^{i}} \log p_{\phi}\left(y \mid \tau^{i}\right)\right|_{\tau^{i}=\mu} \label{eq:tt2} \end{equation} Combining Equations \ref{eq:tt1} and \ref{eq:tt2}, we derive \begin{equation} \begin{split} \log p_{\theta,\phi}(\tau^i|\tau^{i+1},y) &= \log p_{\theta}(\tau^i|\tau^{i+1}) + \log p_{\phi}(y|\tau^i)+C_1 \\ & = -\frac{1}{2}\left(\tau^{i}-\mu\right)^{T} \Sigma^{-1}\left(\tau^{i}-\mu\right)+\left(\tau^{i}-\mu\right) \nabla \log p_{\phi}\left(y \mid \tau^{i}\right) +C_{2} \\ & =-\frac{1}{2}\left(\tau^{i}-\mu-\Sigma \nabla \log p_{\phi}\left(y \mid \tau^{i}\right)\right)^{T} \Sigma^{-1}\left(\tau^{i}-\mu-\Sigma \nabla \log p_{\phi}\left(y \mid \tau^{i}\right)\right)+C_{3} \end{split} \end{equation} which means \begin{equation} p_{\theta,\phi}(\tau^i|\tau^{i+1},y) \approx \mathcal{N}(\tau^i; \mu + \Sigma \nabla_{\tau} \log p_{\phi}\left(y \mid \tau^{i}\right), \Sigma) \end{equation} This coincides with Equation \ref{eq:guided}, completing the derivation.
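The modified Gaussian above translates directly into a guided sampling step: shift the predicted mean by $\Sigma \nabla_{\tau} \log p_{\phi}(y \mid \tau^{i})$ before adding noise. A minimal NumPy sketch under the assumption of a diagonal covariance; `grad_log_p` is a hypothetical placeholder for the gradient supplied by the trained guide model:

```python
import numpy as np

def guided_step(mu, sigma_diag, grad_log_p, rng=None):
    """One guided reverse-diffusion step: sample from
    N(mu + Sigma * grad_log_p, Sigma), with diagonal Sigma."""
    if rng is None:
        rng = np.random.default_rng(0)
    shifted_mu = mu + sigma_diag * grad_log_p  # mean shifted by Sigma * gradient
    noise = np.sqrt(sigma_diag) * rng.standard_normal(mu.shape)
    return shifted_mu + noise

# Example: guidance from a quadratic goal-distance reward, whose
# log-probability gradient at mu is -2 * (mu - goal).
goal = np.array([1.0, 2.0])
mu = np.zeros(2)
grad = -2.0 * (mu - goal)
sample = guided_step(mu, np.full(2, 0.01), grad)
```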
\section{Visualization Results of KUKA Pick-and-Place Task} \label{appedix:vis_kuka} In this section, we show more visualization results for the KUKA pick-and-place task. We require the KUKA Robot Arm to pick the green, yellow, blue and red blocks, which have randomly initialized positions on the right side of the table, one by one and move them to the left side in the order of yellow, blue, green and red (from near to far). \subsection{Pick and Place 1st Green Block} \begin{figure} \caption{The Process of Pick and Place Block 1 (Green Block)} \label{fig:kuka_line1} \end{figure} \subsection{Pick and Place 2nd Yellow Block} \begin{figure} \caption{The Process of Pick and Place Block 2 (Yellow Block)} \label{fig:kuka_line2} \end{figure} \subsection{Pick and Place 3rd Blue Block} \begin{figure} \caption{The Process of Pick and Place Block 3 (Blue Block)} \label{fig:kuka_line3} \end{figure} \subsection{Pick and Place 4th Red Block} \begin{figure} \caption{The Process of Pick and Place Block 4 (Red Block)} \label{fig:kuka_line4} \end{figure} \section{Implementation Details and Hyperparameters} \label{appedix:detail} \subsection{Details of Baseline Performances} \textbf{Maze2D Tasks.} We perform two different tasks in the Maze2D environment to validate the performance enhancement and adaptation ability of AdaptDiffuser\xspace on seen and unseen tasks. \begin{itemize} \item \textbf{Overall Performance of Navigation Task}: We report the performance of CQL and IQL on the standard Maze2D environments from Table~2 in the D4RL whitepaper \citep{fu2020d4rl} and follow the hyperparameter settings described in \cite{janner2022planning}. The performance of Diffuser also refers to Table~1 in \cite{janner2022planning}. To reproduce the experimental results, we use the official implementations from the authors of IQL\footnote{\scriptsize\url{https://github.com/ikostrikov/implicit_q_learning} \label{diffuser_code}} and Diffuser\footnote{\scriptsize\url{https://github.com/jannerm/diffuser}}.
\item \textbf{Navigation with Gold Coin Picking Task}: We modified the official code of Diffuser and tuned the hyperparameter $\alpha \in \{-50, -100, -200\}$ (the scalar of the guidance) in Equation \ref{eq:cond_gen_model} to adapt the planner to the newly designed gold coin picking task, which is also the basis of our method AdaptDiffuser\xspace. \end{itemize} \textbf{KUKA Pick and Place Tasks.} Similar to the unseen tasks in the Maze2D environment, we also ran the official implementations of IQL and Diffuser. \textbf{MuJoCo Locomotion Tasks.} We report the scores of BC, CQL and IQL from Table~1 in \cite{kostrikov2021offline}. We take the scores of DT from Table~2 in \cite{chen2021decision}, TT from Table~1 in \cite{janner2021offline}, MOPO from Table~1 in \cite{yu2020mopo}, MOReL from Table~2 in \cite{kidambi2020morel}, MBOP from Table~1 in \cite{argenson2020model} and Diffuser from Table~2 in \cite{janner2022planning}. All baselines are trained using the same offline dataset collected by a specific expert policy. \begin{table*}[h] \caption{\small \textbf{Metric Values for Reward Discriminator in MuJoCo Environment.} The rewards are calculated using the D4RL \cite{fu2020d4rl} locomotion suite. } \label{table:mujoco_setting} \centering \small \begin{tabular}{cccc} \toprule \textbf{Dataset} & \textbf{Environment} & \textbf{1$^{\text{st}}$ Phase} & \textbf{2$^{\text{nd}}$ Phase}\\ \midrule Med-Expert & HalfCheetah & 10840 & 10867 \\ Med-Expert & Hopper & 3639 & 3681 \\ Med-Expert & Walker2d & 4900 & 4950 \\ \midrule Medium & HalfCheetah & 5005 & 5150 \\ Medium & Hopper & 3211 & 3225 \\ Medium & Walker2d & 3700 & 3843 \\ \midrule Med-Replay & HalfCheetah & 4600 & 4800 \\ Med-Replay & Hopper & 3100 & 3136 \\ Med-Replay & Walker2d & 3900 & 3920 \\ \bottomrule \end{tabular} \end{table*} \subsection{Metric Values for Reward Discriminator} \label{appendix:metric_value} \textbf{Maze2D Environment}.
For the three Maze2D settings of different sizes, unlike in MuJoCo, trajectories differ in length and therefore achieve different rewards. Hence, we consider not only the absolute value of the reward $\mathcal{R}$ but also introduce the trajectory length $\mathcal{L}$ and the reward-length ratio into the discrimination criteria. We prefer longer trajectories or those with higher reward-length ratios. We denote the maximum episode steps of the environment by $Max_e$ (Maze2D-UMaze: $300$, Maze2D-Medium: $600$, Maze2D-Large: $800$) and use the following metrics to select high-quality data. \begin{itemize} \item \textbf{Maze2D-UMaze}: The trajectory is required to satisfy $\mathcal{L} > 200$, or both $\mathcal{L} > 50$ and $\mathcal{R} + 1.0 \cdot (Max_e - \mathcal{L}) > 210$; the latter condition is a proxy for the ratio $\mathcal{R}/\mathcal{L}$. \item \textbf{Maze2D-Medium}: The trajectory is required to satisfy $\mathcal{L} > 450$, or both $\mathcal{L} > 200$ and $\mathcal{R} + 1.0 \cdot (Max_e - \mathcal{L}) > 400$. \item \textbf{Maze2D-Large}: The trajectory is required to satisfy $\mathcal{L} > 650$, or both $\mathcal{L} > 270$ and $\mathcal{R} + 1.0 \cdot (Max_e - \mathcal{L}) > 400$. \end{itemize} \textbf{KUKA Robot Arm}. For the KUKA Robot Arm environment, we define a sparse reward function that equals one if and only if the placement is successful and zero otherwise. We therefore take the condition $\mathcal{R} \geqslant 2.0$, which means that at least half of the four placements are successful. \textbf{MuJoCo Environment.} For the MuJoCo locomotion environment, as described in Sec.\ \ref{sec:mujoco}, we directly use the reward computed from the generated state and action sequences to select high-quality synthetic data. The specific values for MuJoCo are shown in Table \ref{table:mujoco_setting}. \subsection{Amount of Synthetic Data for Each Iteration} The amount of synthetic data generated in each iteration is another important hyperparameter of AdaptDiffuser\xspace.
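The Maze2D discrimination rules stated above reduce to a short predicate on the trajectory length $\mathcal{L}$ and reward $\mathcal{R}$. A minimal sketch with the thresholds copied from the three rules (the function and dictionary names are our own, not from the released code):

```python
MAX_EPISODE_STEPS = {"umaze": 300, "medium": 600, "large": 800}
THRESHOLDS = {
    # env: (length accepted outright, min length for ratio test, score threshold)
    "umaze":  (200, 50, 210),
    "medium": (450, 200, 400),
    "large":  (650, 270, 400),
}

def keep_trajectory(env, length, reward):
    """Rule-based discriminator for Maze2D synthetic trajectories."""
    long_enough, min_len, min_score = THRESHOLDS[env]
    max_e = MAX_EPISODE_STEPS[env]
    # Keep long trajectories outright, or shorter ones whose
    # length-adjusted reward (a proxy for R / L) is high enough.
    return length > long_enough or (
        length > min_len and reward + 1.0 * (max_e - length) > min_score
    )
```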
Different tasks use different settings; we give the detailed hyperparameters here. \begin{table*}[htb] \caption{\small \textbf{Amount of Synthetic Data for Each Iteration.} The synthetic data for the KUKA Arm pick-and-place task consist of 1000 generated trajectories and 10000 cross-domain trajectories from the unconditional stacking task. } \label{table:amount_data} \centering \small \begin{tabular}{cccc} \toprule \textbf{Dataset} & \textbf{Task} & \textbf{\# of Expert Data} & \textbf{\# of Synthetic Data} \\ \midrule MuJoCo & Locomotion & $10^6$, $2 \times 10^6$& 50000 \\ \midrule Maze2D & Navigation & $10^6$, $2 \times 10^6$, $4 \times 10^6$ & $10^6$ \\ Maze2D & Gold Coin Picking & 0 & $10^6$\\ \midrule KUKA Robot & Unconditional Stacking & 10000 & - \\ KUKA Robot & Pick-and-Place & 0 & 11000 \\ \bottomrule \end{tabular} \end{table*} \subsection{Other Details} \begin{enumerate} \item A temporal U-Net~\cite{ronneberger2015u} with 6 repeated residual blocks is employed to model the noise $\epsilon_\theta$ of the diffusion process. Each block comprises two temporal convolutions, each followed by group norm \cite{wu2018group}, and a final Mish non-linearity \cite{misra2019mish}. Timestep embeddings are generated by a single fully-connected layer and added to the activation output after the first temporal convolution of each block. \item The diffusion model is trained using the Adam optimizer \citep{kingma2014adam} with a learning rate of $2\times10^{-4}$ and batch size of $32$. \item The training steps of the diffusion model are $1M$ for the MuJoCo locomotion task, $2M$ for tasks on Maze2D and $0.7M$ for the KUKA Robot Arm tasks. \item The planning horizon $T$ is set to 32 in all locomotion tasks, $128$ for KUKA pick-and-place, $128$ in Maze2D-UMaze, $192$ in Maze2D-Medium, and $384$ in Maze2D-Large.
\item We use $K = 100$ diffusion steps for all locomotion tasks, $1000$ for KUKA robot arm tasks, $64$ for Maze2D-UMaze, $128$ for Maze2D-Medium, and $256$ for Maze2D-Large. \item We choose the 2-norm as the auxiliary guidance function in the combination setting of Section \ref{sec:method_gen}, with the guidance scale $\alpha \in \{1, 5, 10, 50, 100\}$, the exact choice of which depends on the specific task. \end{enumerate} \section{Testing-time and Training-time Analysis} \label{appendix:time} \subsection{Testing-time Characteristic of AdaptDiffuser\xspace} AdaptDiffuser\xspace only generates synthetic data during training and performs denoising once during inference to obtain the optimal trajectory. We report the inference time for generating an action with Diffuser \citep{janner2022planning} and with our method in Table \ref{table:test_time_mujoco} and Table \ref{table:test_time_maze}. All results are measured on a single \textit{NVIDIA RTX 3090 GPU}. \begin{table*}[htb] \caption{\small \textbf{Testing Time in D4RL MuJoCo Environment.} The unit in the table is second (s). } \label{table:test_time_mujoco} \centering \small \begin{tabular}{cccc} \toprule \textbf{Dataset} & \textbf{Environment} & \textbf{Diffuser} & \textbf{AdaptDiffuser} \\ \midrule Med-Expert & HalfCheetah & 1.38 s & 1.41 s \\ Med-Expert & Hopper & 1.57 s & 1.59 s \\ Med-Expert & Walker2d & 1.60 s & 1.56 s \\ \midrule Medium & HalfCheetah & 1.40 s & 1.40 s \\ Medium & Hopper & 1.60 s & 1.56 s \\ Medium & Walker2d & 1.57 s & 1.57 s \\ \midrule Med-Replay & HalfCheetah & 1.43 s & 1.37 s \\ Med-Replay & Hopper & 1.59 s & 1.55 s \\ Med-Replay & Walker2d & 1.55 s & 1.58 s \\ \bottomrule \end{tabular} \end{table*} \begin{table*}[htb] \caption{\small \textbf{Testing Time in D4RL Maze2D and KUKA Environments.} The test time of KUKA is derived by dividing the trajectory generation time by the horizon size.
The unit in the table is second (s).} \label{table:test_time_maze} \centering \small \begin{tabular}{ccc} \toprule \textbf{Environment} & \textbf{Diffuser} & \textbf{AdaptDiffuser} \\ \midrule Maze2D U-Maze & 0.70 s & 0.69 s \\ Maze2D Medium & 1.42 s & 1.44 s \\ Maze2D Large & 2.80 s & 2.76 s \\ \midrule KUKA Pick and Place & 0.21 s & 0.21 s \\ \bottomrule \end{tabular} \end{table*} From the tables, we can see that the inference time of AdaptDiffuser is almost equal to that of Diffuser \citep{janner2022planning}. Because the number of denoising steps differs across datasets, the testing times differ between environments. For MuJoCo, the inference time for an action is approximately 1.5 s; for Maze2D it is about 1.6 s (averaged over the three environments); and for KUKA it is about 0.21 s. This inference time is feasible for real-time robot control. Additionally, in Section \ref{ablation:limit_data} of our paper, we have also demonstrated how a limited amount of high-quality expert data affects our method's performance. Moreover, as suggested in Diffuser \citep{janner2022planning}, we can reduce the testing time by warm-starting the state diffusion: we start with the state sequence generated at the previous environment step and then reduce the number of denoising steps. \begin{table*}[b] \caption{\small \textbf{Synthetic Data Generation Time and Training Time in MuJoCo Environment.} The synthetic data generation time listed here is the time to generate one high-quality trajectory. The total training time of AdaptDiffuser\xspace is the sum of the following three parts. The quality standard of the selected trajectories is the same as that stated in Appendix \ref{appendix:metric_value}. The unit in the table is hour (h).} \label{table:train_time_mujoco} \centering \small \begin{tabular}{ccccc} \toprule \textbf{Dataset} & \textbf{Environment} & \textbf{Synthetic Data Gen.
Time} & \textbf{AdaptDiffuser Fine-Tuning} & \textbf{Diffuser Training} \\ \midrule Med-Expert & HalfCheetah & 4.4 h & 6.8 h & 44.2 h \\ Med-Expert & Hopper & 5.7 h & 6.4 h & 37.0 h \\ Med-Expert & Walker2d & 3.0 h & 6.6 h & 43.0 h \\ \midrule Medium & HalfCheetah & 2.4 h & 7.0 h & 45.3 h \\ Medium & Hopper & 4.8 h & 6.2 h & 36.2 h \\ Medium & Walker2d & 4.7 h & 6.4 h & 43.0 h \\ \midrule Med-Replay & HalfCheetah & 15.7 h & 7.4 h & 45.3 h \\ Med-Replay & Hopper & 11.9 h & 6.5 h & 36.1 h \\ Med-Replay & Walker2d & 4.3 h & 6.4 h & 42.8 h \\ \bottomrule \end{tabular} \end{table*} \subsection{Training-time Characteristic of AdaptDiffuser\xspace} The training time of AdaptDiffuser\xspace can be seen as the sum of the synthetic data generation time and the diffusion model training time. The synthetic data generation time depends on the quality standard of the trajectories to be selected. Moreover, to accelerate training, we use a warm-start technique that takes the pre-trained Diffuser model as the basis of AdaptDiffuser and then fine-tunes it on newly generated data with fewer training steps (1/4 in practice). The times of these three parts are shown in Table \ref{table:train_time_mujoco}; all are measured on a single \textit{NVIDIA RTX 3090 GPU}. The table shows that the model training time dominates the total pre-training time, while the extra time spent, for example on synthetic data generation, is a relatively small part. The total time required to pre-train AdaptDiffuser is on average 54 hours (the sum of the three parts), comparable to Diffuser's 41 hours. Besides, the data generation process can be executed in parallel. For example, in our D4RL MuJoCo environment, we generate 10 trajectories for each dataset in each phase. Under parallel settings, the total time to collect all ten synthetic trajectories equals the time to collect one. With more GPUs, the synthetic data generation time can be reduced further.
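The averages quoted above can be recomputed directly from the per-dataset entries of Table \ref{table:train_time_mujoco}; a simple arithmetic check:

```python
# Per-dataset times in hours, read off the table (columns: synthetic
# data generation, AdaptDiffuser fine-tuning, Diffuser training).
data_gen  = [4.4, 5.7, 3.0, 2.4, 4.8, 4.7, 15.7, 11.9, 4.3]
fine_tune = [6.8, 6.4, 6.6, 7.0, 6.2, 6.4, 7.4, 6.5, 6.4]
training  = [44.2, 37.0, 43.0, 45.3, 36.2, 43.0, 45.3, 36.1, 42.8]

def mean(xs):
    return sum(xs) / len(xs)

total = mean(data_gen) + mean(fine_tune) + mean(training)
print(f"AdaptDiffuser: {total:.1f} h, Diffuser: {mean(training):.1f} h")
# prints: AdaptDiffuser: 54.4 h, Diffuser: 41.4 h
```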
\section{Comparison with Decision Diffuser} Decision Diffuser (DD) \citep{ajay2022conditional} is concurrent work with ours and improves the performance of Diffuser \citep{janner2022planning} by introducing planning with classifier-free guidance and acting with inverse dynamics. Generally speaking, our method is a general algorithm that endows diffusion-based planners with a self-evolving ability, allowing them to perform well on existing and unseen (zero-shot) tasks, mainly by generating high-quality synthetic data with reward and dynamics-consistency guidance for diverse tasks simultaneously. Therefore, regardless of which diffusion-based planner is used, there can be an AdaptDiffuser, an AdaptDecisionDiffuser, etc. This means that the method we introduce to make the planner self-evolving does not conflict with the improvements proposed by Decision Diffuser; the improvements of the two works can complement each other to further enhance the performance of diffusion model-based planners. We also compare the performance of Decision Transformer (DT) \citep{chen2021decision}, Trajectory Transformer (TT) \citep{janner2021offline}, Diffuser \citep{janner2022planning}, Decision Diffuser \citep{ajay2022conditional} and our method here. Results for Decision Diffuser are quoted from \cite{ajay2022conditional}. \begin{table*}[htb] \caption{\small \textbf{Performance Comparison with Decision Diffuser in MuJoCo Environment.} We report normalized average returns of D4RL tasks \citep{fu2020d4rl} in the table. The mean and the standard error are calculated over 3 random seeds.
} \label{table:with_DD} \centering \small \tabcolsep 4.5pt \begin{tabular}{ccccccc} \toprule \textbf{Dataset} & \textbf{Environment} & \textbf{DT} & \textbf{TT} & \textbf{Diffuser} & \textbf{Decision Diffuser} & \textbf{AdaptDiffuser}\\ \midrule Med-Expert & HalfCheetah & $86.8$ & $\textbf{95.0}$ & $88.9$ & $90.6$ & $89.6$ \scriptsize{\raisebox{1pt}{$\pm 0.8$}} \\ Med-Expert & Hopper & $107.6$ & $110.0$ & $103.3$ & $\textbf{111.8}$ & $\textbf{111.6}$ \scriptsize{\raisebox{1pt}{$\pm 2.0$}} \\ Med-Expert & Walker2d & $\textbf{108.1}$ & $101.9$ & $106.9$ & $\textbf{108.8}$ & $\textbf{108.2}$ \scriptsize{\raisebox{1pt}{$\pm 0.8$}} \\ \midrule Medium & HalfCheetah & $42.6$ & $46.9$ & $42.8$ & $\textbf{49.1}$ & $44.2$ \scriptsize{\raisebox{1pt}{$\pm 0.6$}} \\ Medium & Hopper & $67.6$ & $61.1$ & $74.3$ & $79.3$ & $\textbf{96.6}$ \scriptsize{\raisebox{1pt}{$\pm 2.7$}} \\ Medium & Walker2d & $74.0$ & $79.0$ & $79.6$ & $82.5$ & $\textbf{84.4}$ \scriptsize{\raisebox{1pt}{$\pm 2.6$}} \\ \midrule Med-Replay & HalfCheetah & $36.6$ & $41.9$ & $37.7$ & $\textbf{39.3}$ & $\textbf{38.3}$ \scriptsize{\raisebox{1pt}{$\pm 0.9$}} \\ Med-Replay & Hopper & $82.7$ & $91.5$ & $93.6$ & $\textbf{100.0}$ & $92.2$ \scriptsize{\raisebox{1pt}{$\pm 1.5$}} \\ Med-Replay & Walker2d & $66.6$ & $82.6$ & $70.6$ & $75.0$ & $\textbf{84.7}$ \scriptsize{\raisebox{1pt}{$\pm 3.1$}} \\ \midrule \multicolumn{2}{c}{\textbf{Average}} & 74.7 & 78.9 & 77.5 & 81.8 & \textbf{83.4} \hspace{.58cm} \\ \bottomrule \end{tabular} \end{table*} From the table, we can see that on most datasets the performance of AdaptDiffuser\xspace is comparable to or better than that of Decision Diffuser. The normalized average return of AdaptDiffuser\xspace, $83.4$, is higher than that of all the other methods (i.e. $74.7$ for DT, $78.9$ for TT, $77.5$ for Diffuser and $81.8$ for Decision Diffuser).
\section{Discussions} \label{appendix:discuss} \subsection{Adapting AdaptDiffuser\xspace to the Maze2D Gold Coin Picking Task with the Coin Located Far from the Optimal Path} AdaptDiffuser\xspace works even when the gold coin is located nowhere near the optimal path. Figure \ref{fig:maze_adapt} of our paper shows one such case. Sub-figure (b) of Figure \ref{fig:maze_adapt} shows the optimal path when there are no gold coins in the maze (the generated route runs along the bottom of the figure). If we then add a gold coin at position (4,2) of the maze, AdaptDiffuser\xspace generates a new path that passes through the gold coin, as shown in sub-figure (d) of Figure \ref{fig:maze_adapt} (the generated route runs through the middle of the figure). In our view, the method works mainly because we change the start and goal points multiple times during training. The diffusion model can generate trajectories that have not been seen in the expert dataset, and as long as the paths generated during training cover the trajectory space as fully as possible, AdaptDiffuser\xspace can generate a path through any location of the gold coin during planning. However, for some extremely hard cases, in which the gold coin is far from the planned path and the agent has to turn back to collect it, the success rate of trajectory generation is lower than in common cases. \subsection{Adapting AdaptDiffuser\xspace to Tasks with High-dimensional Observation Spaces} AdaptDiffuser\xspace is feasible for tasks with high-dimensional observation spaces. One possible and widely used solution is to add an embedding module (e.g. an MLP) after the input to map the data from the high-dimensional space to a latent space, and then to employ AdaptDiffuser\xspace in the latent space.
Stable Diffusion \cite{rombach2022high} has shown the effectiveness of this approach: it deploys an auto-encoder to encode an image into a latent representation and uses a decoder to reconstruct the image from the latent after denoising. MineDoJo \cite{fan2022minedojo} also adopts this technique and achieves outstanding performance in the image-based RL domain. \section{Generate Diverse Maze Layouts with ChatGPT} \label{appendix:chatgpt} Inspired by the remarkable generation capabilities demonstrated by recent advances in large language models (LLMs), exemplified by ChatGPT, we propose a novel approach that harnesses the potential of LLMs to accelerate the process of synthetic data generation. In this section, we focus specifically on utilizing an LLM to assist in generating diverse maze layouts. This objective is driven by the need to create a multitude of distinct maze layouts to facilitate varied path generation, ultimately enhancing the performance and adaptability of AdaptDiffuser\xspace. Traditionally, the manual design of feasible and terrain-complex maze environments is a time-consuming endeavor that requires multiple rounds of trial and adjustment. In light of this challenge, leveraging ChatGPT for maze environment generation emerges as an appealing alternative, streamlining the process and offering enticing advantages. We show generated examples in Fig. \ref{fig:chatgpt_maze}. Besides, we can ask ChatGPT to summarize the rules for generating feasible mazes, as shown in Fig. \ref{fig:rule_chat}. \begin{figure} \caption{\textbf{Generated Maze examples by ChatGPT.} From simple terrain to complex terrain (with multiple dead ends and loops).} \label{fig:chatgpt_maze} \end{figure} \begin{figure} \caption{\textbf{Rules for generating maze layouts summarized by ChatGPT.}} \label{fig:rule_chat} \end{figure} We also give our prompts here.
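Before listing the prompts, note that a layout returned by ChatGPT can be checked automatically against the feasibility rules they state (a fully walled border and 4-connected empty cells). A minimal sketch, assuming the string encoding used in the prompts (`\#' for an obstacle, `O' for empty space); the function name is our own:

```python
from collections import deque

def is_valid_maze(rows):
    """Check the prompt's feasibility rules: rectangular grid, fully
    walled border of '#', and all empty cells 'O' 4-connected."""
    h, w = len(rows), len(rows[0])
    if any(len(r) != w for r in rows):
        return False
    if set(rows[0]) != {"#"} or set(rows[-1]) != {"#"}:
        return False
    if any(r[0] != "#" or r[-1] != "#" for r in rows):
        return False
    empties = [(i, j) for i in range(h) for j in range(w) if rows[i][j] == "O"]
    if not empties:
        return False
    # BFS over 4-neighbors; the walled border keeps indices in range.
    seen, queue = {empties[0]}, deque([empties[0]])
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cell = (i + di, j + dj)
            if cell not in seen and rows[cell[0]][cell[1]] == "O":
                seen.add(cell)
                queue.append(cell)
    return len(seen) == len(empties)

# The example maze given to ChatGPT in Prompt 1, one string per row.
LARGE_MAZE = [
    "############",
    "#OOOO#OOOOO#",
    "#O##O#O#O#O#",
    "#OOOOOO#OOO#",
    "#O####O###O#",
    "#OO#O#OOOOO#",
    "##O#O#O#O###",
    "#OO#OOO#OOO#",
    "############",
]
```

Such a check can serve as the rule-based discriminator for LLM-generated layouts, discarding infeasible mazes before they are used for synthetic data generation.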
We find that providing ChatGPT with a few existing feasible maze examples (few-shot) can effectively improve the quality of the generated mazes, so we design the prompts accordingly. We also find that, from prompt 1 to prompt 2, the terrain of the generated mazes progresses from simple to complex. \textbf{Prompt1:} ``I will give you a legal string expression of a MAZE. In the MAZE, the `\#' represents the obstacles and the `O' represents the empty space. Could you generate one more maze with different terrain obeying to the rules: The MAZE should be 9*12, and the surrounding of the MAZE should be obstacles, that is `\#', and all empty places should be 4-connected. The example maze is \begin{equation*} \begin{aligned} \text{LARGE\_MAZE} = &``\#\#\#\#\#\#\#\#\#\#\#\#\backslash\backslash"+\\ &``\#OOOO\#OOOOO\#\backslash\backslash"+\\ &``\#O\#\#O\#O\#O\#O\#\backslash\backslash"+\\ &``\#OOOOOO\#OOO\#\backslash\backslash"+\\ &``\#O\#\#\#\#O\#\#\#O\#\backslash\backslash"+\\ &``\#OO\#O\#OOOOO\#\backslash\backslash"+\\ &``\#\#O\#O\#O\#O\#\#\#\backslash\backslash"+\\ &``\#OO\#OOO\#OOO\#\backslash\backslash"+\\ &``\#\#\#\#\#\#\#\#\#\#\#\#"\ " \end{aligned} \end{equation*} \textbf{Prompt2:} ``Please generate more complex Maze that has more complex terrains (i.e. more dead ends, loops, and obstacles)". \end{document}
\begin{document} \title{Lectures on the free period Lagrangian action functional} \author{Alberto Abbondandolo} \address{Ruhr Universit\"at Bochum, Fakult\"at f\"ur Mathematik, Geb\"aude NA 4/33, D-44801 Bochum, Germany} \email{alberto.abbondandolo@rub.de} \maketitle \let\thefootnote\relax\footnote{The present work is part of the author's activities within CAST, a Research Network Program of the European Science Foundation.} \centerline{\em To Kazimierz G\c{e}ba on the occasion of his 80th birthday} \begin{abstract} In this expository article we study the question of the existence of periodic orbits of prescribed energy for classical Hamiltonian systems on compact configuration spaces. We use a variational approach, by studying how the behavior of the free period Lagrangian action functional changes when the energy crosses certain values, known as the Ma\~n\'e critical values. \end{abstract} \tableofcontents \section*{Introduction} The main topic of this expository article is the question of the existence of periodic orbits of prescribed energy for classical Hamiltonian systems on compact configuration spaces. More precisely, we consider a connected closed manifold $M$ and a smooth {\em Tonelli Lagrangian} $L$ on the tangent bundle $TM$ of $M$: The Tonelli assumption means that $L$ is fiberwise uniformly convex and superlinear. It is a very natural assumption: For instance, it guarantees that the Legendre transform is well defined and produces a diffeomorphism between the tangent and the cotangent bundle of $M$, and it allows one to prove that every pair of points in $M$ is connected by a curve $\gamma$ which minimizes the Lagrangian action \[ \int_{t_0}^{t_1} L(\gamma(t),\gamma'(t)) \, dt \] and is a solution of the Euler-Lagrange equation associated to $L$, which is unique whenever the two points are sufficiently close to each other (see e.g. \cite{bgh98} or \cite{maz11}). 
Typical examples of Tonelli Lagrangians are {\em electromagnetic Lagrangians}, that is functions of the form \[ L(x,v) = \frac{1}{2} |v|_x^2 + \theta(x)[v] - V(x), \qquad \forall (x,v)\in TM, \] where $|\cdot|_x$ denotes the norm associated to a Riemannian metric on $M$ (the kinetic energy), $\theta$ is a smooth one-form (the magnetic potential) and $V$ is a smooth function (the scalar potential) on $M$. The Euler-Lagrange equations associated to $L$ induce a smooth flow on $TM$ which preserves the energy function $E:TM \rightarrow {\mathbb{R}}$, \[ E(x,v) := d_v L(x,v)[v] - L(x,v), \qquad \forall (x,v)\in TM. \] Given a number $\kappa\in [\min E,+\infty)$, the problem under consideration is the existence of a periodic orbit on the energy level $E^{-1}(\kappa)$. Such a problem has been studied by several authors, for several classes of Tonelli Lagrangians and energy ranges, and by several techniques. For instance, in the case of Lagrangians of the form \[ L(x,v) = \frac{1}{2} |v|_x^2 - V(x), \] one can use the Maupertuis-Jacobi metric as in \cite{ben84} and reduce the problem to the existence of closed Riemannian geodesics either on $M$, if $\kappa$ is larger than the maximum of $V$, or on the domain $\{V\leq \kappa\}\subset M$, which is endowed with a metric which degenerates on the boundary $V^{-1}(\kappa)$, if $\kappa$ is smaller than the maximum of $V$. For a general Tonelli Lagrangian, the role of the maximum of $V$ is played by the {\em Ma\~{n}\'e critical values}. More precisely, important values of the energy are the numbers \[ \min E \leq e_0(L) \leq c_u(L) \leq c_0(L), \] where $e_0(L)$ is the maximal critical value of $E$, $c_u(L)$ is minus the infimum of the mean Lagrangian action \[ \frac{1}{T} \int_0^T L(\gamma(t), \gamma'(t))\, dt \] over all contractible closed curves $\gamma$, and $c_0(L)$ is minus the infimum of the mean Lagrangian action over all null-homologous closed curves. 
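As a worked illustration of these definitions (a standard computation which we add here; it is not part of the original text), consider a purely mechanical Lagrangian, for which the three values coincide:

```latex
% Worked example (our addition): for a mechanical Lagrangian
% L(x,v) = |v|_x^2/2 - V(x) one has e_0(L) = c_u(L) = c_0(L) = \max_M V.
% Indeed, E(x,v) = |v|_x^2/2 + V(x), whose critical values are exactly the
% critical values of V, so e_0(L) = \max_M V. A constant loop \gamma \equiv x
% has mean action L(x,0) = -V(x), whence c_u(L) \geq \max_M V. Conversely,
% for every closed curve \gamma and every \kappa \geq \max_M V,
\[
\frac{1}{T} \int_0^T \Bigl( L\bigl(\gamma(t),\gamma'(t)\bigr) + \kappa \Bigr)\, dt
= \frac{1}{T} \int_0^T \Bigl( \frac{1}{2} |\gamma'(t)|^2
+ \kappa - V\bigl(\gamma(t)\bigr) \Bigr)\, dt \;\geq\; 0,
\]
% so the mean action of every closed curve is at least -\max_M V, giving
% c_0(L) \leq \max_M V and hence equality throughout.
```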
In the case of electromagnetic Lagrangians, $e_0(L)=c_u(L)=c_0(L)$ when the magnetic potential $\theta$ vanishes, but the first and the second values are in general distinct when $\theta$ does not vanish ($c_u(L)$ and $c_0(L)$ can be distinct only when the fundamental group of $M$ is sufficiently non-abelian). The importance of $e_0(L)$ is clear, since it marks a change in the topology of $E^{-1}(\kappa)$: If $\kappa>e_0(L)$, then $E^{-1}(\kappa)$ is diffeomorphic to the unit tangent bundle of $M$, whereas if $\kappa<e_0(L)$ then the projection of $E^{-1}(\kappa)$ to $M$ is not surjective anymore. The {\em lowest} Ma\~{n}\'e critical value $c_u(L)$ directly affects the behavior of the {\em free period Lagrangian action functional} \[ {\mathbb{S}}_{\kappa}(\gamma) := \int_0^T \Bigl( L\bigl(\gamma(t),\gamma'(t)\bigr) + \kappa \Bigr) \, dt, \qquad \gamma: {\mathbb{R}}/T{\mathbb{Z}} \rightarrow M. \] The critical points of this functional, whose domain is a suitable space of closed curves $\gamma$ in $M$ of arbitrary period $T$, are exactly the closed orbits of energy $\kappa$. The functional ${\mathbb{S}}_{\kappa}$ is bounded from below on every connected component of the free loop space whenever $\kappa\geq c_u(L)$, and it is unbounded from below on every such connected component when $\kappa< c_u(L)$. The {\em strict} Ma\~{n}\'e critical value $c_0(L)$ is not directly related to the topology of ${\mathbb{S}}_{\kappa}$, but it has dynamical and geometric significance: For $\kappa>c_0(L)$ the energy surface $E^{-1}(\kappa)$ is of {\em restricted contact type}, and the Euler-Lagrange flow on it is conjugated, up to a time reparametrization, to a {\em Finsler geodesic flow} on $M$, whereas both facts are in general false for $\kappa\leq c_0(L)$. Furthermore, the Ma\~{n}\'e critical values are related to compactness properties of the functional ${\mathbb{S}}_{\kappa}$, such as the {\em Palais-Smale condition}. 
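The unboundedness claim for $\kappa<c_u(L)$ can be seen directly on the contractible component (a one-line argument added here for clarity): by the definition of $c_u(L)$ there is a contractible closed curve $\gamma$ of some period $T$ with mean action smaller than $-\kappa$, that is ${\mathbb{S}}_{\kappa}(\gamma)<0$, and its $k$-fold iterate $\gamma^k$ is still contractible, with

```latex
\[
{\mathbb{S}}_{\kappa}(\gamma^k)
= \int_0^{kT} \bigl( L\bigl(\gamma(t),\gamma'(t)\bigr) + \kappa \bigr)\, dt
= k\, {\mathbb{S}}_{\kappa}(\gamma) \longrightarrow -\infty
\qquad \text{for } k\to\infty.
\]
```

Extending this to the remaining connected components requires joining $\gamma^k$ to a loop in the given component, which we do not carry out here.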
By exploiting these facts, the free period action functional ${\mathbb{S}}_{\kappa}$ can be effectively used as a variational principle for our problem and allows one to prove various results, which we summarize in the following theorem. \begin{Thm*} Let $L$ be a Tonelli Lagrangian on the tangent bundle of the closed manifold $M$. \begin{enumerate} \item If $\kappa>c_u(L)$ and $M$ is not simply connected, then the energy level $E^{-1}(\kappa)$ has a ${\mathbb{S}}_{\kappa}$-minimizing periodic orbit in each non-trivial homotopy class of the free loop space of $M$. \item If $\kappa>c_u(L)$ and $M$ is simply connected, then the energy level $E^{-1}(\kappa)$ has a periodic orbit with positive action ${\mathbb{S}}_{\kappa}$. \item For almost every $\kappa\in (\min E,c_u(L))$ the energy level $E^{-1}(\kappa)$ has a periodic orbit with positive action ${\mathbb{S}}_{\kappa}$. \item If the energy level $E^{-1}(\kappa)$ is stable then $E^{-1}(\kappa)$ has a periodic orbit. \end{enumerate} \end{Thm*} Notice that in (iii) only existence for {\em almost every} energy level in the interval $(\min E,c_u(L))$ (in the sense of Lebesgue measure) is stated: existence for {\em all} energy levels in this range is still unknown, although no counterexamples have been found so far. This issue is related to the fact that the Palais-Smale condition does not hold anymore below $c_u(L)$. The {\em stability condition} which is assumed in (iv) is a weaker form of the contact type condition. The above theorem was first proved in this form by G.~Contreras \cite{con06} (assuming contact type instead of stable in (iv)), building on previous geometric ideas of I.~A.~Taimanov \cite{tai83,tai92b}. Contreras' long paper \cite{con06} contains many other beautiful results, such as the study of the invariant probability measures which one obtains as limits of Palais-Smale sequences which do not converge in the free loop space. 
This article is meant to be a gentle introduction, including some technical simplifications, to the part of \cite{con06} which concerns periodic orbits. Unlike in a typical survey article, we are more concerned with detailed proofs, which we try to make accessible to a large audience including students, than with a systematic overview of the literature, for which we refer to the beautiful survey of Taimanov \cite{tai92b}, to \cite{cmp04} and to the already cited \cite{con06}. In particular, we start by proving well known abstract results, such as the mountain pass theorem, the general minimax principle, and the construction of the infinite-dimensional Hilbert manifold structure on the space of closed loops on $M$ of Sobolev class $W^{1,2}$ (Sections \ref{mpsec} and \ref{hmlsec}). In Section \ref{sec3} we introduce the already mentioned free period action functional ${\mathbb{S}}_{\kappa}$, which plays a fundamental role in this article and gives it its title: Unlike in \cite{con06}, we use it also to get existence of closed orbits for energies below $e_0(L)$. The Ma\~{n}\'e critical values which are relevant for this article are introduced in Section \ref{mcv}, together with some of their characterizations and with the discussion of two geometric properties of an energy level, namely the contact type and the stability condition. The analysis of Palais-Smale sequences is carried out in Section \ref{pss}, and in Section \ref{pohe} we prove statements (i) and (ii) of the above theorem. The topology of the free period action functional for $\kappa<c_u(L)$ is studied in Section \ref{tfpafle}, and in Section \ref{pole} we finally prove statements (iii) and (iv), using {\em Struwe's monotonicity argument}, together with a weaker version of (iii), which is obtained by an alternative argument. 
\paragraph{\bf Acknowledgments} This expository article is the outcome of two series of lectures that the author gave at two summer schools, at the Korea Institute for Advanced Study in Seoul in 2010 and at the Universit\'e de Neuch\^atel in 2011, respectively. I am grateful to Urs Frauenfelder and Felix Schlenk for organizing these two events, and to Gabriel Paternain, who was also a speaker at the first school, for many fruitful discussions. I would like to thank Jungsoo Kang, who participated in the first school, organized the material and typed a first version of the notes which eventually became this article. I would also like to thank Luca Asselle, who participated in the second school and suggested Lemma \ref{PST0} below, which allows us to avoid extra technicalities and the detailed analysis of Palais-Smale sequences with infinitesimal periods which was present in the previous notes. \numberwithin{equation}{section} \section{The minimax principle} \label{mpsec} \paragraph{\bf The mountain pass theorem} Let $H$ be a real Hilbert space and let $f$ be a continuously differentiable real function on $H$. The symbol ${\rm Crit} f$ denotes the set of critical points of $f$. We assume that a certain open sublevel $\{f<a\}$ is not connected, say $\{f<a\}=A\cup B$, with $A$ and $B$ disjoint non-empty open sets. We may think of $A$ and $B$ as two valleys, and consider the set of paths going from one valley to the other one, that is the set $$ \Gamma:=\{\textrm{curves in $H$ with one end in $A$ and the other in $B$}\}. $$ We can define the minimax value of $f$ on $\Gamma$ as $$ c:=\inf_{\gamma\in\Gamma}\max_{x\in\gamma} f(x), $$ and we notice that $a\leq c < +\infty$, because $\Gamma$ is non-empty and each of its elements intersects the set $H\setminus (A\cup B) = \{f\geq a\}$. One would expect this mountain pass level $c$ to be a critical value of $f$. The next simple example shows that this is not always the case. 
\begin{Ex} Consider the smooth function $f$ on ${\mathbb{R}}^2$ defined by \[ f(x,y)=e^x-y^2. \] Then $\{f<0\}$ has two connected components, $c=0$, but $f$ has no critical points. The problem here is that the critical point is pushed to infinity: Indeed, $f(-n,0)=e^{-n}$ converges to the mountain pass level $c=0$ and $df(-n,0)=e^{-n} dx$ tends to zero. \end{Ex} This example suggests the following definition. \begin{Def} A sequence $(x_n)_{n\in{\mathbb{N}}}\subset H$ is called a {\em Palais-Smale sequence} at level $c$ ($(\mathrm{PS})_c$ for short) if $$ \lim_{n\to\infty}f(x_n)=c\quad\mbox{and}\quad \lim_{n\to\infty}df(x_n)=0. $$ The function $f$ is said to satisfy $(\mathrm{PS})_c$ if all $(\mathrm{PS})_c$ sequences are compact. It is said to satisfy $(\mathrm{PS})$ if it satisfies $(\mathrm{PS})_c$ for every $c\in {\mathbb{R}}$. \end{Def} Notice that limit points of $(\mathrm{PS})_c$ sequences are critical points at level $c$. We can now state the celebrated mountain pass theorem of Ambrosetti and Rabinowitz \cite{ar73} in the following form: \begin{Thm}[Mountain Pass Theorem] \label{mp} Let $f\in C^{1,1}(H)$ be such that $\{f<a\}$ is not connected and let $c$ be defined as above. Then $f$ admits a $(\mathrm{PS})_c$ sequence. In particular, if $f$ satisfies $(\mathrm{PS})_c$, then $c$ is a critical value. \end{Thm} Here $C^{1,1}$ denotes the set of functions whose differential is locally Lipschitz-continuous. \begin{proof} By contradiction, suppose that there exists $\epsilon>0$ such that $||df|| \geq\epsilon$ on the set $\{|f-c|\leq\epsilon\}$. We denote by $\nabla f$ the gradient of $f$ and we assume for the sake of simplicity that the locally Lipschitz vector field $-\nabla f$ is positively complete, meaning that its flow $\phi$, that is the solution of \[ \left\{ \begin{aligned} &\frac{\partial}{\partial t}\phi_t (u)=-\nabla f\bigl(\phi_t(u)\bigr),\\[1ex] &\phi_0 (u)=u, \end{aligned} \;\;\right. \] is defined for every $t\geq 0$ and every $u\in H$. 
This holds, for instance, if $\nabla f$ is globally Lipschitz (in this case the flow of $-\nabla f$ is defined on the whole ${\mathbb{R}}\times H$). See Remark \ref{noncomp} below for a hint on how to remove this extra assumption. Notice that \begin{equation} \label{decr} \frac{d}{dt} f\bigl(\phi_t(u)\bigr) = df\bigl( \phi_t(u) \bigr) \bigl[-\nabla f\bigl(\phi_t(u)\bigr) \bigr] = - \bigl\|df\bigl(\phi_t(u)\bigr)\bigr\|^2, \end{equation} so the function $t\mapsto f(\phi_t(u))$ is decreasing. If $|f(\phi_t(u))-c|\leq\epsilon$ for all $t\in[0,T]$, we have \[ 2\epsilon \geq f(u)-f(\phi_T(u)) =-\int_0^T\frac{d}{dt}f(\phi_t(u))dt =\int_0^T \bigl\|d f\bigl(\phi_t(u)\bigr)\bigr\|^2dt \geq\epsilon^2T, \] from which we conclude that $T\leq 2/\epsilon$. Choose $\gamma\in\Gamma$ such that $\max_\gamma f\leq c+\epsilon$ and set $$ \tilde\gamma=\phi_T(\gamma), \qquad\textrm{for some } T>\frac{2}{\epsilon}. $$ The fact that $f$ decreases along the orbits of $\phi$ implies that $\tilde\gamma$ belongs to $\Gamma$. Since $f\leq c+\epsilon$ on $\gamma$, any $x\in\gamma$ satisfies either (i) $|f(x)-c|\leq\epsilon$ or (ii) $f(x)<c-\epsilon$. Let $x\in \gamma$. If (i) holds, then $f(\phi_T(x))<c-\epsilon$ because $T>2/\epsilon$. If (ii) holds, then $f(\phi_T(x))<c-\epsilon$ because $f$ decreases along the orbits of $\phi$. Therefore we conclude that $\tilde\gamma\subset\{f< c-\epsilon\}$, which contradicts the definition of $c$. \end{proof} \begin{Rmk} \label{noncomp} If the vector field $-\nabla f$ is not positively complete, we can replace it by the complete one $-\nabla f/\sqrt{||\nabla f||^2+1}$. The above proof goes through with minor adjustments. \end{Rmk} \begin{Rmk} The mountain pass theorem holds also for $f\in C^{1,1}(\mathcal{M})$ where $(\mathcal{M},g)$ is a Hilbert manifold equipped with a complete Riemannian metric $g$. 
In this case, $(x_n)_{n\in{\mathbb{N}}}\subset\mathcal{M}$ is said to be a $(\mathrm{PS})_c$ sequence if $\lim_{n\to\infty}f(x_n)=c$ and $\lim_{n\to\infty}||df(x_n)||=0$, where $\|\cdot\|$ denotes the dual norm induced by $g$. Notice that the (PS) condition and the completeness of $g$ are somewhat antagonistic requirements: One may always achieve the completeness of an arbitrary Riemannian metric $g$ by multiplying it by a positive function which diverges at infinity (such an operation reduces the set of the Cauchy sequences), while the (PS) condition could be achieved by multiplying $g$ by a positive function which is infinitesimal at infinity (since the dual norm is multiplied by the inverse of this function, this operation reduces the set of the (PS) sequences). \end{Rmk} \begin{Rmk} The mountain pass theorem holds also if $f$ is just continuously differentiable. In this case, its negative gradient vector field is just continuous and may not induce a continuous flow. In order to prove the above theorem, one needs to construct a locally Lipschitz pseudo-gradient vector field for $f$, see for instance \cite[Lemma 3.2]{str00}. The same construction allows one to prove the mountain pass theorem for continuously differentiable functions on Banach spaces, or more generally on Banach manifolds. \end{Rmk} \begin{Rmk} When dealing with functions on manifolds, it is sometimes useful to have a formulation of the mountain pass theorem which does not involve the choice of a metric. Here is such a formulation. Assume that $f$ is a continuously differentiable function on a Hilbert manifold $\mathcal{M}$ and that $V$ is a positively complete locally Lipschitz vector field such that $df[V]<0$ on $\mathcal{M} \setminus{\rm Crit} f$. Then the mountain pass theorem holds, provided that we define $(x_n)_{n\in{\mathbb{N}}}\subset \mathcal{M}$ to be a $(\mathrm{PS})_c$ sequence if $f(x_n)$ tends to $c$ and $df(x_n)[V(x_n)]$ is infinitesimal. 
Now the antagonism is between this form of the (PS) condition and the positive completeness of $V$. \end{Rmk} \paragraph{\bf The general minimax principle} In the proof of Theorem \ref{mp} we have not used the fact that $\Gamma$ is a set of curves, but rather that $\Gamma$ is positively invariant with respect to the negative gradient flow $\phi$ of $f$, meaning that $\phi_t(\gamma)\in\Gamma$ for all $\gamma\in\Gamma$ and $t\geq 0$. Here $\phi$ is either the flow of $-\nabla f$, when this vector field is positively complete, or the flow of some conformally equivalent positively complete vector field, such as $-\nabla f/\sqrt{\|\nabla f\|^2+1}$, in the general case. This simple observation leads to the following powerful generalization of the mountain pass theorem. \begin{Thm}[\bf General Minimax Principle] \label{thm:finite c induces PS} Let $f$ be a $C^{1,1}$ function on the complete Riemannian Hilbert manifold $(\mathcal{M},g)$ and let $\Gamma$ be a set of subsets of $\mathcal{M}$ which is positively invariant with respect to the negative gradient flow of $f$. If the number \[ c= \inf_{\gamma\in\Gamma}\sup_\gamma f \] is finite, then $f$ admits a $(\mathrm{PS})_c$ sequence. In particular, if $f$ satisfies $(\mathrm{PS})_c$, then $c$ is a critical value. \end{Thm} The proof is a straightforward modification of the proof of Theorem \ref{mp}. \begin{Ex} Let $f\in C^{1,1}(H)$, where $H$ is a Hilbert space. If $\pi_k(\{f<a\})\ne 0$ for some $k\geq 0$ and $f$ satisfies $(\mathrm{PS})$, then $f$ has a critical point. Indeed, we can consider the set \[ \begin{split} \Gamma := \bigl\{ z(\overline{B}^{k+1}) \; \Big| \; & z: (\overline{B}^{k+1},\partial B^{k+1}) \rightarrow (H,\{f<a\}) \mbox{ continuous map such that }\\ & [z|_{\partial B^{k+1}}] \neq 0 \mbox{ in } \pi_k(\{f<a\})\Bigr\}, \end{split} \] where $B^{k+1}$ denotes the unit open ball of dimension $k+1$. 
By applying Theorem \ref{thm:finite c induces PS} with such a $\Gamma$ we get the existence of a critical point at level $c\geq a$. The case $k=0$ is precisely the Mountain Pass Theorem \ref{mp}. \end{Ex} \begin{Rmk} \label{mini} If $\Gamma$ is the class of all one-point sets in $\mathcal{M}$, then $c$ is the infimum of $f$. Therefore, the general minimax principle has as a particular case the following existence result for minimizers: Assume that $f\in C^{1,1}(\mathcal{M})$ is bounded from below, has complete sublevels and satisfies $(\mathrm{PS})_c$ at the level $c=\inf f$; then $f$ has a minimizer. \end{Rmk} \begin{Rmk} \label{trunc} It is sometimes useful to replace the negative gradient flow by a flow which fixes a certain sublevel of $f$. Let $\rho:{\mathbb{R}}\longrightarrow{\mathbb{R}}^+$ be a smooth bounded function such that $\rho=0$ on $(-\infty,b]$ and $\rho>0$ on $(b,+\infty)$. Then we consider the vector field $V=-\rho(f)\cdot\nabla f$ (or $V= - \rho(f) \nabla f/\sqrt{\|\nabla f\|^2+1}$ when $-\nabla f$ is not positively complete) and denote its flow by $\phi$. It is a negative gradient flow truncated below level $b$: The function $t\mapsto f(\phi_t(u))$ is constant if $u\in {\rm Crit} f \cup \{f\leq b\}$ and it is strictly decreasing otherwise. If $\Gamma$ is positively invariant with respect to this negative gradient flow truncated below level $b$ and the minimax value $c$ is strictly larger than $b$, then $f$ has a $(\mathrm{PS})_c$ sequence. \end{Rmk} \section{A Hilbert manifold of loops} \label{hmlsec} Let $(M,g)$ be a closed Riemannian manifold of dimension $n$ and consider the Sobolev space of loops $$ W^{1,2}({\mathbb{T}},M):=\Big\{x:{\mathbb{T}}\longrightarrow M\,\Big|\,x\textrm{ is absolutely continuous and } \int_{\mathbb{T}}|x'(s)|^2_{x(s)}ds<\infty\Big\}, $$ where ${\mathbb{T}}:={\mathbb{R}}/{\mathbb{Z}}$ and $|\cdot|_{\cdot}$ denotes the norm induced by $g$. This set of loops is clearly independent of the choice of the Riemannian metric $g$. 
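To make the defining integral concrete, here is a minimal numerical sketch (our addition; it treats a loop sampled in $\mathbb{R}^n$, e.g.\ the image of a loop in an embedded $M\subset{\mathbb{R}}^N$, and the function name is our own) approximating $\int_{\mathbb{T}}|x'(s)|^2\,ds$ by finite differences:

```python
import math

def loop_energy(x):
    """Finite-difference approximation of the Sobolev integral of |x'|^2
    over T = R/Z for a closed loop sampled at K equispaced points;
    x is a list of K points in R^n."""
    K = len(x)
    total = 0.0
    for k in range(K):
        p, q = x[k], x[(k + 1) % K]  # wrap around: the loop is closed
        # (q - p) * K approximates x'(k/K); its squared norm times the
        # step 1/K contributes |x'|^2 * ds, giving a factor K overall
        total += K * sum((qi - pi) ** 2 for pi, qi in zip(p, q))
    return total

# unit circle traversed once: |x'| = 2*pi, so the integral is 4*pi^2
K = 2000
circle = [(math.cos(2 * math.pi * k / K), math.sin(2 * math.pi * k / K))
          for k in range(K)]
```

By the Cauchy-Schwarz inequality the length of the loop satisfies $\ell(x)^2 \leq \int_{\mathbb{T}} |x'|^2\, ds$, an estimate which is used later when bounding the free period action functional from below.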
\paragraph{\bf The smooth structure of $\mathbf{W^{1,2}(T,M)}$} Let us recall the construction of the smooth Hilbert manifold structure on $W^{1,2}({\mathbb{T}},M)$. Fix $x_0\in C^\infty({\mathbb{T}},M)$. Assume for simplicity that $x_0$ preserves the orientation, so that $x_0^*(TM)$ has a trivialization $$ \Phi: {\mathbb{T}}\times{\mathbb{R}}^n\longrightarrow x_0^*(TM). $$ Let $B_r$ be the open ball of radius $r$ about $0$ in ${\mathbb{R}}^n$. Consider a smooth map \[ \varphi:{\mathbb{T}}\times B_r \longrightarrow M, \] such that $\varphi(t,0)=x_0(t)$ and $\varphi(t,\cdot)$ is a diffeomorphism onto an open subset in $M$, for every $t\in {\mathbb{T}}$. For instance, the map \[ \varphi(t,\xi)= \exp_{x_0(t)}\big(\Phi(t,\xi)\big), \] satisfies the above requirements if $r$ is small enough. The map $\varphi$ induces the following parameterization: \begin{equation} \label{eq:chart of the loop space} \varphi_*: W^{1,2}({\mathbb{T}},B_r)\longrightarrow W^{1,2}({\mathbb{T}},M), \quad \zeta\mapsto\varphi\big(\cdot,\zeta(\cdot)\big), \end{equation} where $W^{1,2}({\mathbb{T}},B_r)$ denotes the open subset of the Hilbert space $W^{1,2}({\mathbb{T}},{\mathbb{R}}^n)$ which consists of loops taking values into $B_r$. The collection of all these parameterizations, for every $x_0\in C^{\infty}({\mathbb{T}},M)$ and every $\varphi$ as above, defines a smooth atlas for $W^{1,2}({\mathbb{T}},M)$, which is then a smooth manifold modeled on the Hilbert space $W^{1,2}({\mathbb{T}},{\mathbb{R}}^n)$. Indeed, the smoothness of the transition maps is an immediate consequence of the chain rule. It is worth noticing that the image of the parameterization $\varphi_*$ is $C^0$-open. \begin{Rmk} If $x_0$ is not orientation preserving, the natural model for the connected component of $W^{1,2}({\mathbb{T}},M)$ which contains $x_0$ is the space of $W^{1,2}$ sections of the vector bundle $x_0^*(TM)$. 
Alternatively, one can define a manifold structure on $W^{1,2}([0,1],M)$ without encountering topological problems, and then see $W^{1,2}({\mathbb{T}},M)$ as the inverse image of the diagonal of $M \times M$ by the smooth submersion \[ W^{1,2}([0,1],M) \rightarrow M \times M, \qquad x \mapsto (x(0),x(1)). \] \end{Rmk} The tangent space of $W^{1,2}({\mathbb{T}},M)$ at $x$ is naturally identified with the space of $W^{1,2}$ sections of $x^*(TM)$. Therefore, we can define a Riemannian metric on $W^{1,2}({\mathbb{T}},M)$ by setting \begin{equation} \label{metr} \langle \xi,\eta\rangle_x:=\int_{{\mathbb{T}}} \Bigl( g(\xi,\eta)+g(\nabla_t\xi,\nabla_t\eta) \Bigr) \,dt, \quad \forall \xi,\eta\in T_x W^{1,2}({\mathbb{T}},M), \end{equation} where $\nabla_t$ denotes the Levi-Civita covariant derivative along $x$. The distance induced by this Riemannian metric is compatible with the topology of $W^{1,2}({\mathbb{T}},M)$. The fact that $M$ is compact implies that this metric on $W^{1,2}({\mathbb{T}},M)$ is complete (more generally, this metric is complete whenever $g$ is complete). The gradient of functionals on $W^{1,2}({\mathbb{T}},M)$ is the one which is associated to such a Riemannian metric. \begin{Rmk} If $\varphi$ is the restriction of a smooth map ${\mathbb{T}}\times B_{r'}\rightarrow M$ with the same properties, for some $r'>r$, then the parameterization $\varphi_*$ is bi-Lipschitz. \end{Rmk} See e.g. \cite{kli82} for more details on the Hilbert manifold structure of $W^{1,2}({\mathbb{T}},M)$. \paragraph{\bf The homotopy type of $\mathbf{W^{1,2}(T,M)}$} The inclusions $$ C^\infty({\mathbb{T}},M)\hookrightarrow W^{1,2}({\mathbb{T}},M)\hookrightarrow C({\mathbb{T}},M) $$ are dense homotopy equivalences. These facts can be proved by embedding $M$ into a Euclidean space ${\mathbb{R}}^N$, by regularizing the loops $x:{\mathbb{T}}\rightarrow M\subset {\mathbb{R}}^N$ by convolution, and by projecting the regularized loop back to $M$ using the tubular neighborhood theorem. 
In particular, the connected components of $W^{1,2}({\mathbb{T}},M)$ are in one-to-one correspondence with the conjugacy classes of $\pi_1(M)$. See, e.g., \cite[Chapter 10]{lee03} for more details. \section{The free period action functional} \label{sec3} \paragraph{\bf Tonelli Lagrangians} Let $M$ be a connected closed manifold. A function $L\in C^\infty(TM)$ is called a {\em Tonelli Lagrangian} if: \begin{enumerate} \item $L$ is fiberwise uniformly convex, i.e.\ $d_{vv}L(x,v)>0$ for every $(x,v)\in TM$, where $d_{vv}L$ denotes the fiberwise second differential of $L$; \item $L$ has superlinear growth on each fiber, i.e. \[ \lim_{|v|\to +\infty}\frac{L(x,v)}{|v|_x}=+\infty. \] \end{enumerate} The main example of Tonelli Lagrangians is given by the {\em electromagnetic Lagrangians}, that is functions of the form \begin{equation} \label{elmag} L(x,v)=\frac{1}{2}|v|^2_x+\theta(x)[v]-V(x), \end{equation} where $|\cdot|_x$ denotes the norm associated to a Riemannian metric (the kinetic energy), $\theta$ is a smooth one-form (the magnetic potential) and $V$ is a smooth function (the scalar potential) on $M$. We shall omit the subscript $x$ in $|\cdot|_x$ when the point $x$ is clear from the context. The Tonelli assumptions imply that the Euler-Lagrange equation, which in local coordinates can be written as \begin{equation} \label{EL} \frac{d}{dt}\bigl( \partial_v L (\gamma(t),\gamma'(t)) \bigr)= \partial_x L (\gamma(t),\gamma'(t)), \end{equation} is well-posed and defines a smooth flow on $TM$. This flow preserves the energy \[ E:TM\rightarrow {\mathbb{R}}, \quad E(x,v):=d_v L(x,v)[v]-L(x,v), \] where $d_v$ denotes the fiberwise differential. When $L$ has the form (\ref{elmag}), then \begin{equation} \label{ene} E(x,v) = \frac{1}{2} |v|^2 + V(x). \end{equation} In general, the energy function of a Tonelli Lagrangian satisfies the following properties: \begin{itemize} \item[(i)] $E$ is fiberwise uniformly convex and superlinear. 
\item[(ii)] For any $x\in M$, the restriction of $E$ to $T_x M$ achieves its minimum at $v=0$. \item[(iii)] The point $(\bar x,0)$ is singular for the Euler-Lagrange flow if and only if $(\bar x,0)$ is a critical point of $E$. \end{itemize} We are interested in proving the existence of periodic orbits on a given energy level $E^{-1}(\kappa)$. Since such an energy level is compact, up to the modification of $L$ far away from it, we may assume that the Tonelli Lagrangian $L(x,v)$ is electromagnetic for $|v|$ large. In particular, we have the inequalities \begin{eqnarray} \label{boundsonL} L(x,v) \geq L_0 |v|^2 - L_1, \qquad & \forall (x,v)\in TM,\\ \label{bd2L} d^2_{vv} L(x,v)[u,u] \geq 2 L_0 |u|^2, \qquad &\forall (x,v)\in TM, \; u\in T_x M, \end{eqnarray} for some numbers $L_0>0$ and $L_1\in {\mathbb{R}}$. Moreover, $E$ has the form (\ref{ene}) for $|v|$ large. \paragraph{\bf The free period action functional} We would like to study the Lagrangian action on the space of closed curves of arbitrary period. The latter space can be given a manifold structure by reparametrizing each curve on ${\mathbb{T}}$ and by keeping track of its period as a second variable: Let $\gamma :{\mathbb{R}}/{T{\mathbb{Z}}}\longrightarrow M$ be an absolutely continuous $T$-periodic curve and define $x:{\mathbb{T}} \rightarrow M$ as $x(s):=\gamma(sT)$. The closed curve $\gamma$ is identified with the pair $(x,T)$. The action of $\gamma$ on the time interval $[0,T]$ is the number \[ \int_0^TL\bigl(\gamma(t),\gamma'(t)\bigr)\, dt= T\int_{{\mathbb{T}}} L\bigl(x(s),x'(s)/{T}\bigr)\, ds. \] Fix a real number $\kappa$, the value of the energy for which we would like to find periodic solutions. Consider the {\em free period action functional} corresponding to the energy $\kappa$ \[ {\mathbb{S}}_{\kappa}(\gamma) = {\mathbb{S}}_{\kappa} (x,T) := T\int_{{\mathbb{T}}} \Bigl( L\bigl(x(s),x'(s)/T\bigr) + \kappa \Bigr)\,ds = \int_0^T \Bigl( L\bigl( \gamma(t),\gamma'(t)\bigr)+\kappa\Bigr)\, dt. 
\] The fact that $L$ is electromagnetic outside a compact subset of $TM$ implies that ${\mathbb{S}}_{\kappa}(x,T)$ is well-defined when $x\in W^{1,2}({\mathbb{T}},M)$. Therefore, we obtain a functional \[ {\mathbb{S}}_{\kappa} : W^{1,2}({\mathbb{T}},M) \times (0,+\infty) \rightarrow {\mathbb{R}}. \] The Hilbert manifold $W^{1,2}({\mathbb{T}},M) \times (0,+\infty)$ is denoted by $\mathcal{M}$. \begin{Lemma}(Regularity properties of ${\mathbb{S}}_{\kappa}$) \begin{itemize} \item[(i)] ${\mathbb{S}}_{\kappa}$ is in $C^{1,1}(\mathcal{M})$ and is twice G\^ateaux differentiable at every point. \item[(ii)] ${\mathbb{S}}_{\kappa}$ is twice Fr\'echet differentiable at every point if and only if $L$ is electromagnetic on the whole $TM$. In this case, ${\mathbb{S}}_{\kappa}$ is actually smooth on $\mathcal{M}$. \end{itemize} \end{Lemma} See e.g.\ \cite{as09b} for a detailed proof. If $d_x$ denotes the horizontal differential with respect to some horizontal-vertical splitting of $TTM$, the differential of ${\mathbb{S}}_{\kappa}$ with respect to the first variable at some $(x,T)\in \mathcal{M}$ has the form \begin{equation} \label{diff1} \begin{split} d{\mathbb{S}}_{\kappa} (x,T) \bigl[(\xi,0)\bigr] & = T \int_0^1 \Bigl( d_x L\bigl( x, x'/T \bigr) [ \xi ] + d_v L\bigl( x, x'/T \bigr) \bigl[ \nabla_s \xi/T \bigr] \Bigr)\, ds \\ & = \int_0^T \Bigl( d_x L \bigl(\gamma,\gamma'\bigr) [\zeta] + d_v L \bigl(\gamma,\gamma'\bigr) [\nabla_t \zeta] \Bigr)\, dt, \end{split}\end{equation} where $\xi\in T_x W^{1,2}({\mathbb{T}},M)$, $\gamma(t)= x(t/T)$ and $\zeta(t):=\xi(t/T)$. Let $(x,T)$ be a critical point of ${\mathbb{S}}_{\kappa}$. The above formula and an integration by parts imply that $\gamma$ is a $T$-periodic solution of (\ref{EL}). Moreover \begin{equation} \label{eq:diff A w.r.t. 
T} \begin{split} \frac{\partial {\mathbb{S}}_{\kappa}}{\partial T} (x,T) &= \int_{{\mathbb{T}}} \Bigl( L\bigl( x(s),x'(s)/T\bigr) + \kappa + T \, d_v L\bigl( x(s),x'(s)/T\bigr) \bigl[-x'(s)/T^2\bigr] \Bigr)\, ds \\ &= \int_{{\mathbb{T}}} \Bigl( \kappa - E\bigl( x(s),x'(s)/T\bigr) \Bigr)\, ds = \frac{1}{T}\int_0^T \Bigl( \kappa - E\bigl( \gamma(t),\gamma'(t)\bigr) \Bigr) \, dt. \end{split} \end{equation} Together with the fact that $E$ is constant along the orbits of the Euler-Lagrange flow, the above identity shows that the $T$-periodic orbit $\gamma$ belongs to the energy level $E^{-1}(\kappa)$. We conclude that $(x,T)$ is a critical point of ${\mathbb{S}}_{\kappa}$ on $\mathcal{M}$ if and only if $\gamma(t):= x(t/T)$ is a $T$-periodic orbit of energy $\kappa$ ($T$ is not necessarily the minimal period). \paragraph{\bf Behavior of ${\mathbb{S}}_{\kappa}$ for $\mathbf{T\to 0}$} The Hilbert manifold $\mathcal{M}=W^{1,2}({\mathbb{T}},M)\times (0,+\infty)$ is endowed with the product Riemannian structure of (\ref{metr}) and the Euclidean metric of $(0,+\infty) \subset {\mathbb{R}}$. As such, it is not complete, the non-converging Cauchy sequences being the sequences $(x_h,T_h)$ with $x_h \rightarrow x\in W^{1,2}({\mathbb{T}},M)$ and $T_h \rightarrow 0$. Therefore, we need to understand the behavior of ${\mathbb{S}}_{\kappa}$ on such sequences. We decompose $\mathcal{M}$ as $\mathcal{M} = \mathcal{M}^{\mathrm{contr}} \sqcup \mathcal{M}^{\mathrm{noncontr}}$, where $\mathcal{M}^{\mathrm{contr}}$ denotes the connected component consisting of contractible loops. \begin{Lemma} \label{Lem:3} \begin{enumerate} \item[(i)] On $\mathcal{M}^\mathrm{noncontr}$ the sublevels $\{{\mathbb{S}}_{\kappa}\leq c\}$ are complete. More precisely, if $(x_h,T_h)\in \mathcal{M}^\mathrm{noncontr}$ and $T_h \rightarrow 0$, then ${\mathbb{S}}_{\kappa}(x_h,T_h)\rightarrow +\infty$. \item[(ii)] If $(x_h,T_h)\in\mathcal{M}^\mathrm{contr}$ and $T_h\to 0$, then $\liminf_h{\mathbb{S}}_{\kappa}(x_h,T_h)\geq0$.
\end{enumerate} \end{Lemma} \begin{proof} By (\ref{boundsonL}), we have the chain of inequalities \begin{equation} \label{lowbdonA} \begin{split} {\mathbb{S}}_{\kappa} (x,T) &= T \int_{{\mathbb{T}}} \Bigl( L\bigl(x,x'/T\bigr) + \kappa \Bigr)\, ds \geq T \int_{{\mathbb{T}}} \Bigl( L_0 \frac{|x'|^2}{T^2} -L_1 + \kappa \Bigr)\, ds \\ &= \frac{L_0}{T} \int_{{\mathbb{T}}} |x'|^2\, ds - (L_1-\kappa) T \geq \frac{L_0}{T} \ell(x)^2 - (L_1 - \kappa)T, \end{split} \end{equation} where $\ell(x)$ denotes the length of the loop $x$. The length of the non-contractible loops in $M$ is bounded away from zero. Therefore, the estimate (\ref{lowbdonA}) implies statement (i). Statement (ii) is also an immediate consequence of (\ref{lowbdonA}). \end{proof} Since $\mathcal{M}$ is not complete, we cannot expect the vector field $-\nabla {\mathbb{S}}_{\kappa}$ to be positively complete. However, the only source of non-completeness is the second component approaching zero. The next result says that this may happen only at level zero: \begin{Lemma}\label{Lem:4} Let $(x,T):[0,\sigma^*)\longrightarrow\mathcal{M}^\mathrm{contr}$, $0<\sigma^*<\infty$, be a flow line of $-\nabla {\mathbb{S}}_{\kappa}$ such that \[ \liminf_{\sigma\to\sigma^*}T(\sigma)=0. \] Then \[ \lim_{\sigma\to\sigma^*}{\mathbb{S}}_{\kappa} \big(x(\sigma),T(\sigma)\big)=0. \] \end{Lemma} \begin{proof} Since both $E$ and $L$ are quadratic in $v$ for $|v|$ large, we have the estimate \[ E(x,v) \geq C_0 \, L(x,v) - C_1, \] for some $C_0>0$ and $C_1\in {\mathbb{R}}$. From (\ref{eq:diff A w.r.t.
T}) we obtain the inequality \begin{equation*} \begin{split} \frac{\partial {\mathbb{S}}_{\kappa}}{\partial T} (x,T) &= \frac{1}{T} \int_0^T \bigl( \kappa - E(\gamma,\gamma') \bigr)\, dt \leq \frac{1}{T} \int_0^T \bigl( \kappa - C_0 \,L(\gamma,\gamma') + C_1 \bigr)\, dt \\ &= \kappa + C_1 - \frac{C_0}{T} \int_0^T \bigl( L(\gamma,\gamma')+\kappa \bigr)\, dt + C_0 \kappa = (C_0+1) \kappa + C_1 - \frac{C_0}{T} {\mathbb{S}}_{\kappa} (x,T), \end{split} \end{equation*} which can be rewritten as \begin{equation} \label{boundsondAdt} {\mathbb{S}}_{\kappa} (x,T) \leq \frac{T}{C_0} \Bigl( C - \frac{\partial {\mathbb{S}}_{\kappa}}{\partial T} (x,T) \Bigr), \end{equation} for a suitable constant $C$. By assumption, there exists an increasing sequence $(\sigma_h)$ which converges to $\sigma^*$ and satisfies $T'(\sigma_h)\leq 0$ and $T(\sigma_h)\rightarrow 0$. Since $\sigma\mapsto (x(\sigma),T(\sigma))$ is a flow line of $-\nabla {\mathbb{S}}_{\kappa}$, \[ 0 \geq T'(\sigma_h) = - \frac{\partial {\mathbb{S}}_{\kappa}}{\partial T} \bigl(x(\sigma_h),T(\sigma_h) \bigr), \] and by (\ref{boundsondAdt}) we have \[ {\mathbb{S}}_{\kappa} \bigl(x(\sigma_h),T(\sigma_h) \bigr) \leq \frac{T(\sigma_h)}{C_0} \Bigl( C - \frac{\partial {\mathbb{S}}_{\kappa}}{\partial T} \bigl(x(\sigma_h),T(\sigma_h) \bigr) \Bigr) \leq \frac{C}{C_0} T(\sigma_h). \] Since $T(\sigma_h)$ is infinitesimal, we obtain \[ \limsup_{h\rightarrow \infty} {\mathbb{S}}_{\kappa} \bigl(x(\sigma_h),T(\sigma_h) \bigr) \leq 0. \] Together with statement (ii) of Lemma \ref{Lem:3} and the monotonicity of the function $\sigma \longmapsto {\mathbb{S}}_{\kappa}(x(\sigma),T(\sigma))$, this concludes the proof.
\end{proof} \section{Ma\~n{\'e} critical values, contact type and stability conditions} \label{mcv} \paragraph{\bf The critical values} The following numbers should be interpreted as energy levels and mark important dynamical and geometric changes for the Euler-Lagrange flow induced by the Tonelli Lagrangian $L$: \begin{equation*} \begin{split} c_0(L)&:=\inf\{\kappa\in{\mathbb{R}} \, | \,{\mathbb{S}}_{\kappa}(x,T)\geq 0 \; \forall (x,T)\in \mathcal{M} \mbox{ with }x\mbox{ homologous to zero} \} \\ &= - \inf \Bigl\{ \frac{1}{T} \int_0^T L(\gamma(t),\gamma'(t)) \, dt \, \Big| \, \gamma\in C^{\infty}({\mathbb{R}}/T{\mathbb{Z}},M) \mbox{ homologous to zero, } T>0\Bigr\}, \\ c_u(L)&:=\inf\{\kappa\in{\mathbb{R}} \, | \,{\mathbb{S}}_{\kappa}(x,T)\geq 0 \; \forall (x,T)\in \mathcal{M} \mbox{ with }x\mbox{ contractible} \}\\ &= - \inf \Bigl\{ \frac{1}{T} \int_0^T L(\gamma(t),\gamma'(t)) \, dt \, \Big| \, \gamma\in C^{\infty}({\mathbb{R}}/T{\mathbb{Z}},M) \mbox{ contractible, } T>0 \Bigr\}, \\ e_0(L)&:=\max_{x\in M} E(x,0) = \max \set{E(x,v)}{(x,v) \in {\rm Crit} E}. \end{split} \end{equation*} The number $c_0(L)$ is known as the {\em strict Ma\~n\'e critical value}, while $c_u(L)$ is the {\em lowest Ma\~{n}\'e critical value} (see \cite{man97}). When the fundamental group of $M$ is rich, there are other Ma\~n\'e critical values, which are associated with the different coverings of $M$, but the ones above are the most relevant for the question of existence of periodic orbits. It is easy to see that \[ \min E\leq e_0(L)\leq c_u(L)\leq c_0(L). \] When $L$ has the form \eqref{elmag}, $\min E$ is the minimum of the scalar potential $V$, while $e_0(L)$ is its maximum. When the magnetic potential $\theta$ vanishes, the identities $e_0(L)=c_u(L)=c_0(L)$ hold, but in general $e_0(L)$ is strictly lower than the other two numbers.
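These quantities can be made explicit in a special case. The following sketch assumes that \eqref{elmag} denotes the standard electromagnetic normalization $L(x,v)=\tfrac12|v|_x^2+\theta_x(v)-V(x)$ (an assumption made here for illustration):

```latex
% Sketch, assuming (elmag) is the electromagnetic normalization
%   L(x,v) = (1/2)|v|_x^2 + \theta_x(v) - V(x).
% Since the magnetic term is linear in v, the energy is
\begin{equation*}
  E(x,v) = d_vL(x,v)[v] - L(x,v) = \tfrac{1}{2}|v|_x^2 + V(x),
  \qquad \min E = \min_M V, \quad e_0(L) = \max_M V.
\end{equation*}
% When \theta = 0, every T-periodic curve \gamma satisfies
\begin{equation*}
  \frac{1}{T}\int_0^T L\bigl(\gamma(t),\gamma'(t)\bigr)\, dt
  = \frac{1}{T}\int_0^T \Bigl( \tfrac{1}{2}|\gamma'|^2 - V(\gamma) \Bigr)\, dt
  \geq -\max_M V,
\end{equation*}
% with equality for the constant loop at a maximum point of V; hence
% c_u(L) = c_0(L) = \max_M V = e_0(L), consistently with the text.
```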
The values $c_u(L)$ and $c_0(L)$ coincide when the fundamental group of $M$ is Abelian and, more generally, when it is amenable (see \cite{fm07}). The lowest Ma\~{n}\'e critical value $c_u(L)$ plays a decisive role in the geometry of the action functional ${\mathbb{S}}_{\kappa}$, as the next result shows: \begin{Lemma}\label{Lem:5} If $\kappa\geq c_u(L)$, then ${\mathbb{S}}_{\kappa}$ is bounded from below on every connected component of $\mathcal{M}$. If $\kappa<c_u(L)$, then the functional ${\mathbb{S}}_{\kappa}$ is unbounded from below on each connected component of $\mathcal{M}$. \end{Lemma} \begin{proof} Choose $\gamma:{\mathbb{R}}/T{\mathbb{Z}}\rightarrow M$ in some connected component of the free loop space and let $\tilde\gamma:[0,T]\longrightarrow\widetilde M$ be its lift to the universal covering $\pi:\widetilde M \rightarrow M$. We lift the metric of $M$ to $\widetilde{M}$ and notice that, since the connected component of the free loop space has been fixed, the quantity $\mathrm{dist}\big(\tilde\gamma(T),\tilde\gamma(0)\big)$ is uniformly bounded. Therefore, there exists a path $\tilde{\alpha}:[0,1]\rightarrow \widetilde M$ which joins $\tilde{\gamma}(T)$ to $\tilde{\gamma}(0)$ and has uniformly bounded action \[ \widetilde{{{\mathbb{S}}}}_{\kappa} (\tilde{\alpha}) := \int_0^1 \Bigl( \widetilde L\bigl(\tilde{\alpha}(t), \tilde{\alpha}'(t) \bigr) + \kappa \Bigr) \, dt \leq C, \] where $\widetilde{L}$ denotes the Lagrangian on $T\widetilde M$ which is obtained by lifting $L$. If $\alpha := \pi\circ \tilde{\alpha}$, the juxtaposition $\gamma\# \alpha$ is a contractible loop in $M$. Since $\kappa\geq c_u(L)$, we have \[ 0 \leq {\mathbb{S}}_{\kappa} (\gamma \# \alpha) = {\mathbb{S}}_{\kappa} (\gamma) + {\mathbb{S}}_{\kappa} (\alpha) = {\mathbb{S}}_{\kappa} (\gamma) + \widetilde{{{\mathbb{S}}}}_{\kappa} (\tilde{\alpha}) \leq {\mathbb{S}}_{\kappa} (\gamma)+C, \] from which ${\mathbb{S}}_{\kappa}(\gamma)\geq -C$.
When $\kappa<c_u(L)$, the functional ${\mathbb{S}}_{\kappa}$ is unbounded from below on each connected component of $\mathcal{M}$. In fact, if $\alpha$ is a contractible closed curve with ${\mathbb{S}}_{\kappa}(\alpha)<0$, we can modify any closed curve $\gamma$ within its free homotopy class and make it have arbitrarily low action ${\mathbb{S}}_{\kappa}$: Join $\gamma(0)$ to $\alpha(0)$ by some path, wind around $\alpha$ several times, come back to $\gamma(0)$ by the inverse path, and finally go once around $\gamma$. \end{proof} The strict Ma\~n\'e critical value $c_0(L)$ is not directly related to the geometry of ${\mathbb{S}}_{\kappa}$, but has the following important characterization (see \cite{fat97} and \cite{cipp98}): \begin{equation} \label{cara} c_0(L) = \inf \left\{ \max_{x\in M} H(x,\alpha(x)) \, \Big| \, \alpha \mbox{ smooth closed one-form on } M\right\}, \end{equation} where $H:T^*M \rightarrow {\mathbb{R}}$ is the Hamiltonian associated to the Lagrangian $L$ via Legendre duality: \[ H(x,p) := \max_{v\in T_x M} \bigl( p[v] - L(x,v) \bigr). \] Then $H$ is a Tonelli Hamiltonian, meaning that it is fiberwise superlinear and uniformly convex (see the beginning of Section \ref{sec3}). Let $X_H$ be the induced Hamiltonian vector field on $T^*M$, which is defined by the identity \[ \omega(X_H(z),\zeta) = - dH(z)[\zeta], \quad \forall z\in T^*M, \; \zeta\in T_z T^*M, \] where $\omega=dp\wedge dx$ is the standard symplectic form on $T^*M$. The flow of $X_H$ preserves each level $H^{-1}(\kappa)$, where it is conjugated to the Euler-Lagrange flow of $L$ on $E^{-1}(\kappa)$ by the Legendre transform \[ TM \rightarrow T^*M, \quad (x,v) \mapsto \bigl( x , d_v L(x,v) \bigr). \] Assume that $\kappa$ is a regular value of $H$, so that $\Sigma:= H^{-1}(\kappa)$ is a hypersurface. 
Up to a time reparametrization, the dynamics on $\Sigma$ is determined only by the geometry of $\Sigma$ and not by the Hamiltonian of which $\Sigma$ is an energy level: In fact the nowhere vanishing vector field $X_H|_{\Sigma}$ belongs to the one-dimensional distribution \[ \mathcal{L}_{\Sigma}:=\ker \omega|_{\Sigma}, \] whose integral curves are hence the orbits of $X_H|_{\Sigma}$. The characterization (\ref{cara}) has a dynamical and a geometric consequence. We begin with the dynamical one: \begin{thm} \label{finsler} If $\kappa>c_0(L)$, then the Euler-Lagrange flow on $E^{-1}(\kappa)$ is conjugated up to a time-reparametrization to the geodesic flow which is induced by a Finsler metric on $M$. \end{thm} \begin{proof} Since $\kappa>c_0(L)$, there is a smooth closed one-form $\alpha$ whose image is contained in the sublevel $\{H<\kappa\}$. Since $\alpha$ is closed, the diffeomorphism of $T^*M$ defined by $(x,p) \mapsto (x,p+\alpha(x))$ is symplectic and conjugates the Hamiltonian flow of $H$ to that of $K(x,p):= H(x,p+\alpha(x))$. The energy level $K^{-1}(\kappa)$ is now the boundary of a fiberwise uniformly convex bounded open set which contains the zero section of $T^*M$. Therefore, there exists a fiberwise convex and 2-homogeneous function $F:T^*M \rightarrow [0,+\infty)$ such that $F^{-1}(1)=K^{-1}(\kappa)$. Thus, the Hamiltonian flow of $F$ on $F^{-1}(1)=K^{-1}(\kappa)$ is related to that of $K$, hence to that of $H$ on $H^{-1}(\kappa)$, by a time reparametrization. But the Legendre dual of the fiberwise convex and 2-homogeneous Hamiltonian $F$ is the square of a Finsler structure on $M$. We conclude that the orbits of the Euler-Lagrange flow of $L$ of energy $\kappa$ are reparametrized Finsler geodesics. \end{proof} In order to derive the geometric consequence of (\ref{cara}), we need to recall some notions from symplectic topology.
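Before turning to those notions, we record the classical special case of Theorem \ref{finsler} in which the Finsler metric can be written down explicitly; the purely potential normalization below is an assumption made for illustration.

```latex
% Sketch: take L(x,v) = (1/2)|v|_x^2 - V(x), so that
% H(x,p) = (1/2)|p|_x^2 + V(x) and c_0(L) = \max_M V.
% For \kappa > \max_M V, the fiberwise convex, 2-homogeneous function
\begin{equation*}
  F(x,p) = \frac{|p|_x^2}{2\bigl(\kappa - V(x)\bigr)}
\end{equation*}
% satisfies F^{-1}(1) = H^{-1}(\kappa), and its Legendre dual is the
% Lagrangian (1/2)(\kappa - V(x))|v|_x^2. The orbits of energy \kappa
% are therefore, up to reparametrization, the geodesics of the conformal
% Jacobi-Maupertuis metric g_\kappa = (\kappa - V)g, a Riemannian
% instance of the Finsler metric produced by Theorem \ref{finsler}.
```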
\paragraph{\bf Contact type and stable energy hypersurfaces} The energy level $\Sigma$ is said to be of {\em contact type} if there is a one-form $\eta$ on $\Sigma$ which is a primitive of $\omega|_{\Sigma}$ and is such that $\eta$ does not vanish on $\mathcal{L}_{\Sigma}$. Equivalently, there is a smooth vector field $Y$ in a neighborhood of $\Sigma$ which is transverse to $\Sigma$ and such that $L_Y \omega = \omega$ (the vector field $Y$ and the one-form $\eta$ are related by the identity $\imath_Y \omega|_{\Sigma} = \eta$). The energy level $\Sigma$ is said to be of {\em restricted contact type} if the one-form $\eta$ extends to a primitive of $\omega$ on the whole $T^*M$. If the surface $\Sigma\subset T^*M$ bounds an open fiberwise convex set which contains the zero-section (or more generally an open set which is starshaped with respect to the zero section), then it is of restricted contact type: As $\eta$ one can take the Liouville form $p\, dq$. Therefore, arguing as in the first part of the proof of Theorem \ref{finsler}, we obtain the following geometric consequence of the characterization (\ref{cara}): \begin{thm} \label{contact} If $\kappa>c_0(L)$, then $H^{-1}(\kappa)$ is of restricted contact type. \end{thm} If $c_u(L) \leq \kappa \leq c_0(L)$ and $M$ is not the 2-torus, $H^{-1}(\kappa)$ is not of contact type (see \cite[Proposition B.1]{con06}), and it is conjectured that the same is true for $e_0(L)<\kappa<c_u(L)$. If $\min E < \kappa < e_0(L)$, $H^{-1}(\kappa)$ might or might not be of contact type: For instance, if the one-form $\theta(x)[v]:=d_vL(x,0)[v]$ is closed, then every regular energy level is of contact type (see \cite[Proposition C.2]{con06}, in this case $e_0(L)=c_u(L)=c_0(L)$). 
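The star-shaped criterion invoked above reduces to a one-line computation with Cartan's formula; the following sketch spells it out.

```latex
% On T^*M with \omega = dp \wedge dq, the fiberwise radial vector field
% Y = p\,\partial_p satisfies \imath_Y \omega = p\,dq (the Liouville
% form), hence, since d\omega = 0, Cartan's formula gives
\begin{equation*}
  L_Y \omega = d\,\imath_Y \omega + \imath_Y\, d\omega = d(p\,dq) = \omega .
\end{equation*}
% Y is transverse to every hypersurface bounding a fiberwise star-shaped
% open neighborhood of the zero section, so \eta := p\,dq|_{\Sigma}
% exhibits such a \Sigma as a hypersurface of restricted contact type.
```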
The contact condition has the following dynamical consequence: If $\Sigma$ is a contact type compact hypersurface in a symplectic manifold $(W,\omega)$ (in our case, $W=T^*M$), then there is a diffeomorphism \[ (-\epsilon, \epsilon) \times \Sigma \rightarrow W, \qquad (r,x) \mapsto \psi_r(x), \] onto an open neighborhood of $\Sigma$ such that $\psi_0$ is the identity on $\Sigma$ and \[ \psi_r : \Sigma \rightarrow \Sigma_r := \psi_r(\Sigma) \] induces an isomorphism between the line bundles $\mathcal{L}_{\Sigma}$ and $\mathcal{L}_{\Sigma_r}$ (the hypersurface $\Sigma_r$ is the image of $\Sigma$ by the flow at time $r$ of the vector field $Y$ given by the contact condition, see \cite[page 122]{hz94}). Therefore, if the hypersurfaces $\Sigma_r$ are level sets of a Hamiltonian $K$, the dynamics of $X_K$ on $\Sigma_r$ is conjugate to the one on $\Sigma_0=\Sigma$ up to a time reparametrization. Hypersurfaces with the above property are called {\em stable} (see \cite[page 122]{hz94}). The stability condition is weaker than the contact condition, as the following characterization, which is due to K.\ Cieliebak and K.\ Mohnke \cite[Lemma 2.3]{cm05}, shows: \begin{Prop} Let $\Sigma$ be a compact hypersurface in the symplectic manifold $(W,\omega)$. Then the following facts are equivalent: \begin{enumerate} \item[(i)] $\Sigma$ is stable; \item[(ii)] there is a vector field $Y$ on a neighborhood of $\Sigma$ which is transverse to $\Sigma$ and satisfies $\mathcal{L}_{\Sigma} \subset \ker ( L_Y \omega|_{\Sigma})$; \item[(iii)] there is a one-form $\eta$ on $\Sigma$ such that $\mathcal{L}_{\Sigma} \subset \ker d\eta$ and $\eta$ does not vanish on $\mathcal{L}_{\Sigma}$. \end{enumerate} \end{Prop} \begin{proof} (i) $\Rightarrow$ (ii). By stability, a neighborhood of $\Sigma$ can be identified with $(-\epsilon,\epsilon)\times \Sigma$ in such a way that $\mathcal{L}_{\{r\} \times \Sigma}$ does not depend on $r$. Set $Y:= \partial/\partial r$ and denote by $\phi_t(r,x)=(r+t,x)$ its flow.
Then $\ker ( \phi_t^* \omega|_{\{0\} \times \Sigma} )$ does not depend on $t$ and differentiating in $t$ at $t=0$ we get \[ \mathcal{L}_{\Sigma} = \ker \omega|_{\Sigma} \subset \ker ( L_Y \omega|_{\Sigma}). \] (ii) $\Rightarrow$ (iii). If we set $\eta:= \imath_Y \omega|_{\Sigma}$, by Cartan's identity we have \[ d\eta = d \imath_Y \omega|_{\Sigma} = (L_Y \omega - \imath_Y d \omega)|_{\Sigma} = L_Y \omega|_{\Sigma}, \] so $\mathcal{L}_{\Sigma} \subset \ker(L_Y \omega|_{\Sigma}) = \ker d\eta$. If $\xi\neq 0$ is a vector in $\mathcal{L}_{\Sigma}$, then \[ \eta(\xi) = \omega(Y,\xi) \neq 0, \] because $Y\notin T\Sigma$. \noindent (iii) $\Rightarrow$ (i). Consider the closed two-form on $(-\epsilon,\epsilon) \times \Sigma$ \[ \tilde{\omega} = \omega|_{\Sigma} + d(r\eta) = \omega|_{\Sigma} + rd\eta + dr\wedge \eta. \] If $\epsilon$ is small enough, the form $\omega|_{\Sigma} + rd\eta$ is non-degenerate on $\ker \eta$ for every $r\in (-\epsilon,\epsilon)$, from which we deduce that $\tilde{\omega}$ is a symplectic form. Since $\tilde{\omega}|_{\{0\} \times \Sigma}$ coincides with $\omega|_{\Sigma}$, by the coisotropic neighborhood theorem (see \cite{got82}, or \cite[Exercise 3.36]{ms98} for the particular case of a hypersurface), a neighborhood of $\Sigma$ in $W$ is symplectomorphic to $((-\epsilon,\epsilon) \times \Sigma,\tilde{\omega})$, up to the choice of a smaller $\epsilon$. Since for $\xi\in \mathcal{L}_{\Sigma}(x)$ and $\zeta \in T_{(r,x)} (\{r\} \times \Sigma) = (0) \times T_x \Sigma$ there holds \[ \tilde{\omega}(\xi,\zeta) = \omega(\xi,\zeta) + r d\eta (\xi,\zeta) = 0, \] we deduce that $\ker (\tilde{\omega}|_{\{r\} \times \Sigma}) = \mathcal{L}_{\Sigma}$ does not depend on $r$. Therefore, $\{0\} \times \Sigma$ is stable in $((-\epsilon,\epsilon) \times \Sigma,\tilde{\omega})$ and hence $\Sigma$ is stable in $(W,\omega)$. 
\end{proof} \begin{Rmk} L.~Macarini and G.~P.~Paternain have constructed examples of Tonelli Lagrangians on the tangent bundle of $\mathbb{T}^n$ such that $H^{-1}(\kappa)$ is stable for $\kappa=c_u(L)=c_0(L)$, see \cite{mp10}. \end{Rmk} \section{Palais-Smale sequences} \label{pss} Palais-Smale sequences $(x_h,T_h)$ with $T_h\rightarrow 0$ are a possible source of non-compactness, but the next result shows that they occur only at level zero. \begin{Lemma} \label{PST0} Let $(x_h,T_h)$ be a $(\mathrm{PS})_c$ sequence for ${\mathbb{S}}_{\kappa}$ with $T_h\to0$. Then $c=0$. \end{Lemma} \begin{proof} Set $\gamma_h(t):= x_h(t/T_h)$. Since $(x_h,T_h)$ is a $(\mathrm{PS})$ sequence, the identity (\ref{eq:diff A w.r.t. T}) shows that the sequence \[ \alpha_h := \frac{1}{T_h} \int_0^{T_h} \bigl( E(\gamma_h,\gamma_h') - \kappa \bigr) \, dt = - \frac{\partial {\mathbb{S}}_{\kappa}}{\partial T} (x_h,T_h) \] is infinitesimal and, in particular, bounded. Since $L(x,v)$ is electromagnetic for $|v|$ large, $E(x,v)$ has the form (\ref{ene}) for $|v|$ large and hence satisfies the estimate \[ E(x,v) \geq E_0 |v|^2 - E_1, \qquad \forall (x,v)\in TM, \] for suitable numbers $E_0>0$ and $E_1\in {\mathbb{R}}$. It follows that \[ \alpha_h \geq \frac{1}{T_h} \int_0^{T_h} \bigl( E_0 |\gamma_h'|^2 - E_1 - \kappa \bigr)\, dt = \frac{E_0}{T_h} \int_0^{T_h} |\gamma_h'|^2\,dt - (E_1 + \kappa), \] and hence \[ \int_0^{T_h} |\gamma_h'|^2\,dt \leq \frac{T_h}{E_0} ( \alpha_h + E_1 + \kappa) = O(T_h) \] for $h\rightarrow \infty$. 
From the lower bound (\ref{boundsonL}) and from the analogous upper bound \[ L(x,v) \leq L_2 |v|^2 + L_3, \qquad \forall (x,v)\in TM, \] for suitable positive numbers $L_2,L_3$, we find \[ {\mathbb{S}}_{\kappa}(x_h,T_h) = \int_0^{T_h} \bigl( L(\gamma_h,\gamma_h') + \kappa \bigr)\, dt \geq L_0 \int_0^{T_h} |\gamma_h'|^2\, dt + T_h (\kappa - L_1) = O(T_h) \] and \[ {\mathbb{S}}_{\kappa}(x_h,T_h) = \int_0^{T_h} \bigl( L(\gamma_h,\gamma_h') + \kappa \bigr)\, dt \leq L_2 \int_0^{T_h} |\gamma_h'|^2\, dt + T_h (L_3 + \kappa) = O(T_h) \] for $h\rightarrow \infty$. Since $T_h\rightarrow 0$, we conclude that ${\mathbb{S}}_{\kappa}(x_h,T_h)$ is infinitesimal and hence $c=0$. \end{proof} \begin{Rmk} By choosing a suitable metric on the Hilbert manifold $\mathcal{M}$, one can also obtain that for every Palais-Smale sequence $(x_h,T_h)$ with $T_h\rightarrow 0$ the sequence $(x_h)$ converges to an equilibrium solution of the Euler-Lagrange equations which has energy $\kappa$ (see \cite[Proposition 3.8]{con06}). In particular, such Palais-Smale sequences can exist only when $\kappa$ is a critical value of $E$. \end{Rmk} The next result says that Palais-Smale sequences $(x_h,T_h)$ with $(T_h)$ bounded and bounded away from zero are always compact. \begin{Lemma}\label{Lem;2} Let $(x_h,T_h)$ be a $(\mathrm{PS})_c$ sequence for ${\mathbb{S}}_{\kappa}$ with $0<T_*\leq T_h\leq T^*<\infty$. Then $(x_h,T_h)$ is compact in $\mathcal{M}$. \end{Lemma} \begin{proof} Up to a subsequence, we may assume that $(T_h)$ converges to some $T\in [T_*,T^*]$. By (\ref{boundsonL}) we have \begin{equation} \label{isbd} \begin{split} c+o(1) = {\mathbb{S}}_{\kappa}(x_h,T_h) = T_h \int_0^1 \Bigl( L\bigl(x_h,x_h'/T_h \bigr)+\kappa \Bigr)\, ds \\ \geq T_h \int_0^1\Bigl( L_0 \frac{|x_h'|^2}{T_h^2}-(L_1-\kappa)\Bigr)\, ds \geq \frac{L_0}{T^*}\|x_h'\|_2^2-T^* |L_1-\kappa|, \end{split} \end{equation} where $\|\cdot\|_2$ denotes the $L^2$ norm with respect to the fixed Riemannian metric on $M$.
Therefore, $\|x_h'\|_2$ is uniformly bounded and $(x_h)$ is uniformly $1/2$-H\"older continuous: \[ \mathrm{dist}\big(x_h(s'),x_h(s)\big)\leq\int_s^{s'}|x_h'(r)|\, dr \leq |s'-s|^{1/2} \|x_h'\|_2. \] By the Ascoli-Arzel\`a theorem, up to a subsequence $(x_h)$ converges uniformly to some $x\in C({\mathbb{T}},M)$. In particular, $(x_h)$ eventually belongs to the image of the parameterization $\varphi_*$ induced by a smooth map \[ \varphi: {\mathbb{T}}\times B_r \rightarrow M. \] See \eqref{eq:chart of the loop space} and recall that the image of this parameterization is $C^0$-open. Then $x_h=\varphi_*(\xi_h)$, where $\xi_h$ belongs to $W^{1,2}({\mathbb{T}},B_r)$ and is a $(\mathrm{PS})$ sequence for the functional $$ \widetilde{{\mathbb{S}}}(\xi,T)=T\int_{{\mathbb{T}}}\widetilde L\bigl(s,\xi,\xi'/T \bigr)\,ds, $$ with respect to the standard Hilbert product on $W^{1,2}({\mathbb{T}},{\mathbb{R}}^n)$, where the Lagrangian $\widetilde{L}\in C^{\infty}({\mathbb{T}}\times B_r \times {\mathbb{R}}^n)$ is obtained by pulling back $L+\kappa$ by $\varphi$. Moreover, $(\xi_h)$ converges uniformly and, since $\|\xi_h'\|_2$ is bounded, weakly in $W^{1,2}$ to some $\xi$ in $W^{1,2}({\mathbb{T}},B_r)$. We must prove that this convergence is actually strong in $W^{1,2}$. Since $\widetilde{L}(s,x,v)$ is electromagnetic for $|v|$ large, we have the bounds \begin{equation} \label{bddL} \bigl| d_x \widetilde L (s,x,v) \bigr| \leq C(1+|v|^2),\quad\bigl| d_v \widetilde L (s,x,v)\bigr| \leq C(1+|v|), \end{equation} for a suitable constant $C$. Since $(\xi_h,T_h)$ is a $(\mathrm{PS})$ sequence with $(\xi_h)$ bounded in $W^{1,2}$, we have by (\ref{diff1}) \begin{equation*} \begin{split} o(1) &= d\widetilde{{\mathbb{S}}}(\xi_h,T_h)[(\xi_h-\xi,0)] \\ &= T_h \int_{{\mathbb{T}}} d_x \widetilde L \bigl(s,\xi_h,\xi_h'/T_h\bigr) [\xi_h-\xi]\, ds + T_h \int_{{\mathbb{T}}} d_v \widetilde{L} \bigl(s,\xi_h,\xi_h'/T_h\bigr) \bigl[ (\xi_h'-\xi')/T_h \bigr] \, ds.
\end{split} \end{equation*} By the first bound in (\ref{bddL}) and the uniform convergence $\xi_h\rightarrow \xi$, the first integral is infinitesimal. Together with the fact that $(T_h)$ is bounded away from zero, we obtain \begin{equation} \label{uno} \int_{{\mathbb{T}}} d_v \widetilde{L} \bigl(s,\xi_h,\xi_h'/T_h\bigr) \bigl[ (\xi_h'-\xi')/T_h \bigr] \, ds=o(1). \end{equation} From the fiberwise uniform convexity of $\widetilde{L}$, we have the bound \[ d_{vv} \widetilde{L} (s,x,v) [u, u] \geq \delta |u|^2, \quad \forall (s,x,v)\in {\mathbb{T}}\times B_r \times {\mathbb{R}}^n, \; u\in {\mathbb{R}}^n, \] for a suitable positive number $\delta$. It follows that \begin{equation*} \begin{split} d_v \widetilde L \left(s,\xi_h,\frac{\xi_h'}{T_h}\right) & \left[ \frac{\xi_h'-\xi'}{T_h} \right] - d_v\widetilde L \left(s,\xi_h,\frac{\xi'}{T_h} \right)\left[\frac{\xi_h'-\xi'}{T_h} \right] \\ &= \int_0^1 d_{vv} \widetilde L \left(s,\xi_h,\frac{\xi'}{T_h}+\sigma \frac{\xi_h'-\xi'}{T_h} \right) \left[ \frac{\xi_h'-\xi'}{T_h}, \frac{\xi_h'-\xi'}{T_h} \right] \, d\sigma \geq \frac{\delta}{T_h^2} |\xi_h'- \xi'|^2. \end{split} \end{equation*} By integrating this inequality over $s\in {\mathbb{T}}$ and by (\ref{uno}), we obtain \[ o(1)- \int_{{\mathbb{T}}} d_v \widetilde L \bigl(s,\xi_h,\xi'/T_h\bigr)\bigl[ (\xi_h'-\xi')/T_h \bigr] \, ds \geq \frac{\delta}{T_h^2} \|\xi_h' - \xi'\|_2^2. \] By the second bound in (\ref{bddL}), the sequence \[ d_v \widetilde L \bigl(s,\xi_h,\xi'/T_h\bigr) \] converges strongly in $L^2$. By the weak $L^2$ convergence to $0$ of $(\xi_h'-\xi')$, we deduce that the integral on the left-hand side of the above inequality is infinitesimal. We conclude that $(\xi_h)$ converges to $\xi$ strongly in $W^{1,2}$. \end{proof} In general, ${\mathbb{S}}_{\kappa}$ might have Palais-Smale sequences $(x_h,T_h)$ with $(T_h)$ unbounded.
However, this does not occur when $\kappa$ is larger than the lowest Ma\~{n}\'e critical value $c_u(L)$: \begin{Lemma}\label{Lem:6} If $\kappa>c_u(L)$, then any $(\mathrm{PS})$ sequence $(x_h,T_h)$ in a given connected component of $\mathcal{M}$ with $T_h\geq T_*>0$ is compact. \end{Lemma} \begin{proof} By Lemma \ref{Lem;2}, it is enough to show that $(T_h)$ is bounded from above. Since \[ {\mathbb{S}}_{\kappa} (x,T)={\mathbb{S}}_{c_u(L)} (x,T) + \big(\kappa -c_u(L)\big)T, \] the period \[ T_h = \frac{1}{\kappa -c_u(L)} \bigl({\mathbb{S}}_{\kappa} (x_h,T_h) - {\mathbb{S}}_{c_u(L)} (x_h,T_h)\bigr) \] is bounded from above, because ${\mathbb{S}}_{\kappa}$ is bounded on the $(\mathrm{PS})$ sequence $(x_h,T_h)$ and ${\mathbb{S}}_{c_u(L)}(x_h,T_h)$ is bounded from below by Lemma \ref{Lem:5}. \end{proof} \begin{Rmk} By choosing a suitable metric on $\mathcal{M}$, it is possible to characterize $c_u(L)$ as the infimum of all $\kappa_0$'s such that ${\mathbb{S}}_{\kappa}$ satisfies the Palais-Smale condition for every $\kappa\in [\kappa_0,+\infty)$. See \cite{cipp00}. \end{Rmk} \section{Periodic orbits with high energy} \label{pohe} The following result shows that the energy levels above $c_u(L)$ always carry periodic orbits and proves statements (i) and (ii) of the theorem in the Introduction. \begin{Thm} \label{highthm} Assume that $\kappa>c_u(L)$. Then the following facts hold. \begin{enumerate} \item[(i)] If $M$ is not simply connected, then the energy level $E^{-1}(\kappa)$ has a $\kappa$-action-minimizing periodic orbit in each non-trivial homotopy class of the free loop space. \item[(ii)] If $M$ is simply connected, then the energy level $E^{-1}(\kappa)$ has a periodic orbit with positive action ${\mathbb{S}}_{\kappa}$. \end{enumerate} \end{Thm} \begin{proof} (i) Assume that $M$ is not simply connected.
Let $\alpha \in[{\mathbb{T}},M]$ be a non-trivial homotopy class and let $\mathcal{M}_{\alpha}$ be the connected component of $\mathcal{M}^\mathrm{noncontr}$ corresponding to $\alpha$. By Lemma \ref{Lem:5}, the functional ${\mathbb{S}}_{\kappa}$ is bounded from below on $\mathcal{M}_{\alpha}$. By Lemma \ref{Lem:3} (i), the sublevels \[ \{ (x,T)\in \mathcal{M}_{\alpha} \, | \, {\mathbb{S}}_{\kappa}(x,T)\leq c \} \] are complete. Let $(x_h,T_h)\subset \mathcal{M}_{\alpha}$ be a (PS) sequence for ${\mathbb{S}}_{\kappa}$. By Lemma \ref{Lem:3} (i), $(T_h)$ is bounded away from zero, so Lemma \ref{Lem:6} implies that ${\mathbb{S}}_{\kappa}$ satisfies the (PS) condition on $\mathcal{M}_{\alpha}$. By Remark \ref{mini}, we conclude that ${\mathbb{S}}_{\kappa}$ has a minimizer on $\mathcal{M}_{\alpha}$, as we wished to prove. (ii) Assume that $M$ is simply connected, so that $\mathcal{M}=\mathcal{M}^{\mathrm{contr}}$. In this case, ${\mathbb{S}}_{\kappa}$ is strictly positive everywhere, because $\kappa>c_u(L)$, but the infimum of ${\mathbb{S}}_{\kappa}$ is zero, as one readily checks by looking at sequences of the form $(x_0,T_h)$, with $x_0$ a constant loop and $T_h\rightarrow 0$. So the infimum is not achieved. We will find the periodic orbit by considering the same minimax class which Lusternik and Fet \cite{lf51} considered in their proof of the existence of a closed geodesic on a simply connected compact manifold. Since the closed manifold $M$ is simply connected, there exists $l\geq 2$ such that $\pi_l(M)\ne 0$ (a manifold all of whose homotopy groups vanish is contractible, but closed manifolds are never contractible, for instance because their $n$-dimensional homology group with ${\mathbb{Z}}_2$ coefficients does not vanish). We fix a non-zero homotopy class $\mathcal{G} \in\pi_l(M)$. 
Thanks to the isomorphism $\pi_{l-1}(C({\mathbb{T}},M))\cong\pi_l(M)$, we have an induced non-zero homotopy class \[ \mathcal{H} \in [S^{l-1},C({\mathbb{T}},M)] \cong [S^{l-1},\mathcal{M}], \] and we consider the minimax value \[ c=\inf_{\substack{h:S^{l-1}\to \mathcal{M}\\ h\in \mathcal{H}}} \max_{\xi\in S^{l-1}} {\mathbb{S}}_{\kappa} (h(\xi)). \] Let us show that $c>0$. Since $\mathcal{H}$ is non-trivial, there exists a positive number $a$ such that for every map $h=(x,T): S^{l-1} \rightarrow \mathcal{M}$ belonging to the class $\mathcal{H}$ there holds \[ \max_{\xi \in S^{l-1}} \ell(x(\xi)) \geq a, \] where $\ell(x(\xi))$ denotes the length of the loop $x(\xi)$ (see \cite[Theorem 2.1.8]{kli78}). If $(x,T)$ is an element of $\mathcal{M}$ with $\ell(x)\geq a$, then (\ref{boundsonL}) implies \begin{equation*} \begin{split} {\mathbb{S}}_{\kappa}(x,T) &= T\int_{{\mathbb{T}}} \Bigl(L\bigl(x,x'/T\bigr)+\kappa \Bigr) \,ds \geq T \int_{{\mathbb{T}}} \Bigl( L_0\frac{|x'|^2}{T^2} -L_1+\kappa \Bigr)\, ds \\ & \geq \frac{L_0}{T}\ell(x)^2-T(L_1-\kappa) \geq \frac{L_0}{T}a^2-T(L_1-\kappa). \end{split} \end{equation*} Since $a>0$, the above chain of inequalities implies that there exists $T_0>0$ such that for every $(x,T)\in \mathcal{M}$ with $\ell(x)\geq a$ and ${\mathbb{S}}_{\kappa}(x,T)\leq c+1$, the period $T$ is at least $T_0$. Now let $h\in \mathcal{H}$ be such that the maximum of ${\mathbb{S}}_{\kappa}$ on $h(S^{l-1})$ is less than $c+1$. By the above considerations, there exists $(x,T)$ in $h(S^{l-1})$ with $T\geq T_0$, whence \[ {\mathbb{S}}_{\kappa} (x,T)={\mathbb{S}}_{c_u(L)} (x,T)+ \bigl(\kappa-c_u(L)\bigr)T \geq \bigl(\kappa-c_u(L)\bigr) T_0. \] This shows that the minimax value $c$ is strictly positive. Theorem \ref{thm:finite c induces PS}, together with Remark \ref{trunc} and Lemma \ref{Lem:4}, implies the existence of a $(\mathrm{PS})_c$ sequence $(x_h,T_h)$. 
Lemma \ref{PST0} guarantees that $(T_h)$ is bounded away from zero, so by Lemma \ref{Lem:6} the sequence $(x_h,T_h)$ has a limit point in $\mathcal{M}$, which gives us the required periodic orbit. \end{proof} \begin{Rmk} If $M$ is not simply connected and $\kappa>c_u(L)$, the energy level $E^{-1}(\kappa)$ might have no contractible periodic orbits. Consider for instance the Lagrangian $L(x,v)=|v|^2/2$ on the torus ${\mathbb{T}}^n$, equipped with the flat metric. The corresponding Euler-Lagrange flow is the geodesic flow on $T{\mathbb{T}}^n$, $c_u(L)=0$, and all the non-constant closed geodesics on the flat torus are non-contractible. \end{Rmk} \begin{Rmk} \label{veryhigh} If $\kappa>c_0(L)$, the existence of a periodic orbit on $E^{-1}(\kappa)$ also follows from Theorem \ref{finsler}, because every Finsler metric on a closed manifold has a closed geodesic. \end{Rmk} \begin{Rmk} Theorem \ref{finsler} implies that the multiplicity results for closed Finsler geodesics hold also for Hamiltonian orbits at energy levels $\kappa>c_0(L)$. Actually, most of these results remain true for $\kappa>c_u(L)$. In fact, as the proof of Theorem \ref{highthm} suggests, the (PS) condition and the topology of the sublevels of the functional ${\mathbb{S}}_{\kappa}$ are analogous to the corresponding properties of the Finsler geodesic energy functional (with the notable exception of the zero level). \end{Rmk} \section{Topology of the free period action functional for low energies} \label{tfpafle} As we have seen, when $\kappa<c_u(L)$ the functional ${\mathbb{S}}_{\kappa}$ is unbounded from below on each connected component of $\mathcal{M}$. The aim of this section is to show that, nevertheless, some sublevels of ${\mathbb{S}}_{\kappa}$ have a sufficiently rich topology. We start by proving a simple lemma about the integral of a one-form. The integral of a given one-form on a curve $x$ is clearly bounded by a constant times the length of $x$.
When the support of the curve is contained in a ball of $M$, one may also take the square of the length in this bound, which is a better estimate for short curves. More precisely, we have the following: \begin{Lemma}\label{Lemma:isoper} Let $\theta$ be a smooth one-form on $M$ and let $U\subset M$ be an open set whose closure is diffeomorphic to a closed ball in ${\mathbb{R}}^n$. Then there exists a number $\Theta>0$ such that \[ \bigg| \int_{\mathbb{T}} x^*(\theta) \bigg| \leq \Theta \cdot\ell(x)^2, \] for every closed curve $x:{\mathbb{T}}\rightarrow U$. \end{Lemma} \begin{proof} Up to changing the constant $\Theta$, we may assume that $U=B_r$ is the ball of radius $r$ around the origin in ${\mathbb{R}}^n$, equipped with the Euclidean metric. Given the closed curve $x:{\mathbb{T}}\rightarrow B_r$, we consider the map \[ X:[0,1]\times {\mathbb{T}} \rightarrow B_r, \quad X(s,t) = x(0) + s \bigl( x(t) - x(0) \bigr). \] Then $X(1,t)=x(t)$ and $X(0,t)=x(0)$, hence by Stokes' theorem \begin{equation*} \begin{split} \left| \int_{{\mathbb{T}}} x^*(\theta) \right| &= \bigg| \int_{[0,1] \times {\mathbb{T}}} X^* (d\theta) \bigg| = \left| \int_0^1 ds \int_{{\mathbb{T}}} d\theta\bigl( X(s,t) \bigr) \Bigl[ \frac{\partial X}{\partial s} , \frac{\partial X}{\partial t} \Bigr] \, dt \right| \\ &\leq \|d\theta\|_{\infty} \int_0^1 ds \int_{{\mathbb{T}}} \left| \frac{\partial X}{\partial s} \right| \, \left| \frac{\partial X}{\partial t} \right| \, dt = \|d\theta\|_{\infty} \int_0^1 ds \int_{{\mathbb{T}}} \bigl| x(t) - x(0) \bigr| s \bigl| x'(t) \bigr|\, dt \\ &\leq \frac{1}{2} \|d\theta\|_{\infty}\, \ell(x) \int_0^1 ds \int_{{\mathbb{T}}} s \bigl| x'(t)\bigr|\, dt = \frac{1}{4} \|d\theta\|_{\infty}\, \ell(x)^2, \end{split} \end{equation*} as claimed. \end{proof} \paragraph{\bf The energy range $\mathbf{(e_0(L),c_u(L))}$} If $\kappa<c_u(L)$, there are contractible closed curves with negative action ${\mathbb{S}}_{\kappa}$. 
Since the space of contractible loops is connected, we can consider the following class of continuous paths in $\mathcal{M}$: \begin{equation} \label{curve} \mathcal{Z}_0 := \bigl\{ (x,T):[0,1] \rightarrow \mathcal{M}\, \big| \, x(0) \mbox{ is a constant loop and } {\mathbb{S}}_{\kappa}(x(1),T(1)) <0 \bigr\}. \end{equation} Notice that if $x_0$ is a constant loop and $T>0$, then \begin{equation} \label{constants} {\mathbb{S}}_{\kappa}(x_0,T) = T \bigl( L(x_0,0) + \kappa \bigr) = T \bigl( \kappa - E(x_0,0) \bigr). \end{equation} When $\kappa>e_0(L)=\max_{x\in M} E(x,0)$, the above quantity is strictly positive (and tends to zero for $T\rightarrow 0$). The next lemma shows that when $e_0(L) < \kappa < c_u(L)$, ${\mathbb{S}}_{\kappa}$ has a sort of mountain pass geometry: \begin{Lemma} \label{Lem:mountain pass} Assume that $e_0(L) <\kappa < c_u(L)$. Then there exists $a>0$ such that for every $z\in \mathcal{Z}_0$ there holds \[ \max_{\sigma\in[0,1]} {\mathbb{S}}_{\kappa} \bigl(z(\sigma)\bigr)\geq a. \] \end{Lemma} \begin{proof} Consider the smooth one-form on $M$, \[ \theta(x) [v] := d_v L(x,0)[v]. \] By taking a Taylor expansion and by using the bound (\ref{bd2L}), we get the estimate \begin{equation} \label{bbb} L(x,v) = L(x,0)+d_vL(x,0)[v]+\frac{1}{2}d_{vv}L(x,s v)[v,v] \geq -E(x,0)+\theta(x)[v]+ L_0 |v|^2, \end{equation} where $s\in [0,1]$. Let $\{U_1,\dots,U_N\}$ be a finite covering of $M$ consisting of open sets whose closures are diffeomorphic to closed Euclidean balls, and let $\Theta >0$ be such that the conclusion of Lemma \ref{Lemma:isoper} holds for the one-form $\theta$, for each of the open sets $U_j$'s. Let $r_0$ be a Lebesgue number for this covering, meaning that every ball of radius $r_0$ is contained in one of the $U_j$'s. We claim that if ${\mathbb{S}}_{\kappa}(x,T)<0$ then \begin{equation} \label{claim} \ell(x)>\min\bigg\{r_0,\frac{\sqrt{L_0(\kappa-e_0(L))}}{\Theta}\bigg\}=:r_1. 
\end{equation} In fact, assuming that $\ell(x)\leq r_0$, we have that $x({\mathbb{T}})$ is contained in some $U_j$, for $1\leq j \leq N$. Set as usual $\gamma(t)=x(t/T)$. By Lemma \ref{Lemma:isoper} and by (\ref{bbb}), we obtain the chain of inequalities \begin{equation} \label{stma} \begin{split} 0 &>{\mathbb{S}}_{\kappa}(x,T) ={\mathbb{S}}_{\kappa}(\gamma) = \int_0^T \bigl(L(\gamma,\gamma')+\kappa \bigr)\,dt \\ &\geq \int_0^T\bigl( -E(\gamma,0) + \theta(\gamma)[\gamma'] + L_0 |\gamma'|^2 + \kappa \bigr)\, dt \\ &= \int_0^T \bigl( \kappa - E(\gamma,0)\bigr)\, dt + \int_{{\mathbb{R}}/T{\mathbb{Z}}} \gamma^*(\theta) + L_0 \int_0^T |\gamma'|^2\, dt \\ &\geq \bigl(\kappa-e_0(L)\bigr)T-\Theta\cdot \ell(\gamma)^2+\frac{L_0}{T}\ell(\gamma)^2. \end{split} \end{equation} Since we are assuming $\kappa>e_0(L)$, the above estimate implies that $T> L_0/\Theta$ and that \[ \ell(\gamma)^2>\frac{\bigl(\kappa-e_0(L)\bigr)T}{\Theta-L_0/T}>\frac{\bigl(\kappa-e_0(L)\bigr)T}{\Theta}>\frac{\bigl(\kappa-e_0(L)\bigr)L_0}{\Theta^2}, \] which proves (\ref{claim}). Fix some number $r$ in the open interval $(0,r_1)$. Since $z=(x,T)\in \mathcal{Z}_0$, ${\mathbb{S}}_{\kappa}(x(1),T(1))$ is negative, so by (\ref{claim}) the length of $x(1)$ is larger than $r_1$. By continuity, using the fact that $x(0)$ is a constant loop, we get the existence of $\sigma\in(0,1)$ for which $\ell(x(\sigma))=r$. Then (\ref{stma}) implies \[ {\mathbb{S}}_{\kappa}(x(\sigma),T(\sigma)) \geq \bigl(\kappa-e_0(L)\bigr) T +\Bigl(\frac{L_0}{T}-\Theta\Bigr)r^2. \] Minimization in $T$ (by the inequality between the arithmetic and geometric means, the minimum of $(\kappa-e_0(L))T+L_0r^2/T$ over $T>0$ equals $2r\sqrt{L_0(\kappa-e_0(L))}$) yields \[ {\mathbb{S}}_{\kappa}(x(\sigma),T(\sigma)) \geq 2r\sqrt{L_0(\kappa-e_0(L))}-\Theta r^2 \geq r \bigl(\sqrt{L_0(\kappa-e_0(L))}-\Theta r \bigr)=: a. \] The number $a$ is positive because $r<r_1$. This concludes the proof. 
\end{proof} \paragraph{\bf The energy range $\mathbf{(\min E,e_0(L))}$} When $\kappa<e_0(L)$, the identity (\ref{constants}) shows that ${\mathbb{S}}_{\kappa}$ takes negative values on some constant loops, and the conclusion of Lemma \ref{Lem:mountain pass} cannot hold. Instead of considering the class of paths which go from some constant loop to a loop of negative action, one has to consider the class of deformations of the space of constant loops - which is diffeomorphic to $M$ - into the space of loops with negative action. More precisely, we consider the set of continuous maps \[ \mathcal{Z}_M = \bigl\{ (x,T): [0,1]\times M \rightarrow \mathcal{M} \, \big| \, x(0,x_0) = x_0 \mbox{ and } {\mathbb{S}}_{\kappa}\bigl(x(1,x_0),T(1,x_0)\bigr)<0, \; \forall x_0\in M\bigr\}. \] \begin{Lemma} \label{notempty} If $\kappa<c_u(L)$, then the set $\mathcal{Z}_M$ is not empty. \end{Lemma} We just sketch the proof, referring to \cite{tai83} for more details (see also \cite{tai10}). The argument follows closely Bangert's technique of ``pulling one loop at a time'' (see \cite{ban80} and \cite{bk83}). Let $M_0\subset M_1 \subset \dots \subset M_n = M$ be a CW-complex decomposition of $M$. Since $\kappa<c_u(L)$ and since the 0-skeleton $M_0$ is a finite set, it is easy to construct a continuous map \[ z_0:[0,1]\times M \rightarrow \mathcal{M}, \quad z_0(\sigma,x_0) = \bigl(y_0(\sigma,x_0),T_0(\sigma,x_0)\bigr), \] such that \begin{enumerate} \item $y_0(0,x_0) = x_0$ for every $x_0\in M$; \item ${\mathbb{S}}_{\kappa}\circ z_0(1,x_0)<0$ for every $x_0\in M_0$. \end{enumerate} Given a positive integer $h$, we may iterate each loop $h$ times and obtain the map \[ z_0^h: [0,1]\times M\rightarrow \mathcal{M}, \quad z_0^h(\sigma,x_0) = \bigl(y_0^h(\sigma,x_0),hT_0(\sigma,x_0)\bigr), \] where \[ y_0^h(\sigma,x_0)(s) := y_0(\sigma,x_0)(hs), \quad \forall (\sigma,x_0)\in [0,1]\times M, \; \forall s\in {\mathbb{T}}. \] Consider an edge $S$ in $M_1$ with end-points $x_0,x_1\in M_0$. 
The map $z_0^h(1,\cdot)$ maps the end-points of $S$ into the $h$-th iterates $\alpha^h$ and $\beta^h$ of two loops $\alpha$ and $\beta$ with negative action ${\mathbb{S}}_{\kappa}$. By pulling one of the $h$ loops at a time from $\alpha^h$ to $\beta^h$, one can construct a new map from $S$ into $\mathcal{M}$ with end-points $\alpha^h$ and $\beta^h$ and such that ${\mathbb{S}}_{\kappa}$ is negative on its image, provided that $h$ is large enough. By relying on the map $z_0^h$, this construction can be done globally, and one ends up with a continuous map \[ z_1:[0,1]\times M\rightarrow \mathcal{M}, \quad z_1(\sigma,x_0) = \bigl(y_1(\sigma,x_0),T_1(\sigma,x_0)\bigr), \] such that \renewcommand{\theenumi}{\roman{enumi}} \renewcommand{\labelenumi}{(\theenumi')} \begin{enumerate} \item $y_1(0,x_0) = x_0$ for every $x_0\in M$; \item ${\mathbb{S}}_{\kappa}\circ z_1(1,x_0)<0$ for every $x_0\in M_1$. \end{enumerate} Iterating this process, one can construct continuous maps \[ z_k:[0,1]\times M\rightarrow \mathcal{M}, \quad z_k(\sigma,x_0) = \bigl(y_k(\sigma,x_0),T_k(\sigma,x_0)\bigr), \] such that \renewcommand{\theenumi}{\roman{enumi}} \renewcommand{\labelenumi}{(\theenumi'')} \begin{enumerate} \item $y_k(0,x_0) = x_0$ for every $x_0\in M$; \item ${\mathbb{S}}_{\kappa}\circ z_k(1,x_0)<0$ for every $x_0\in M_k$. \end{enumerate} \renewcommand{\theenumi}{\roman{enumi}} \renewcommand{\labelenumi}{(\theenumi)} The map $z_n$ is an element of $\mathcal{Z}_M$. This concludes our sketch of the proof of Lemma \ref{notempty}. The proof of the following result is analogous to the proof of Lemma \ref{Lem:mountain pass}. \begin{Lemma} \label{mtpass2} Assume that $\min E <\kappa < c_u(L)$. Then there exists $a>0$ such that for every $z\in \mathcal{Z}_M$ there holds \[ \max_{(\sigma,x_0) \in[0,1]\times M} {\mathbb{S}}_{\kappa} \bigl(z(\sigma,x_0)\bigr)\geq a. 
\] \end{Lemma} \section{Periodic orbits with low energy} \label{pole} \paragraph{\bf The Struwe monotonicity argument} When $\kappa\leq c_u(L)$, the periods in a (PS) sequence need not be bounded anymore. Because of this fact, the question of the existence of periodic orbits for every energy $\kappa$ in the interval $[\min E,c_u(L)]$ is open, although no counterexamples are known. The known system which comes closest to being a counterexample is the horocycle flow on a closed surface $M$ with constant negative curvature (see e.g. \cite{man91, cmp04}): Such a flow has no periodic orbits (actually, every orbit is dense) and it is the restriction of a Hamiltonian flow to an energy surface at a Ma\~{n}\'e critical value, but the corresponding Lagrangian is well defined only on the (non compact) universal cover of $M$ (such a system belongs to the family of {\em non-exact magnetic flows}, whereas only {\em exact} magnetic flows can be described by a Tonelli Lagrangian on $TM$). The following argument is a version of an argument of Struwe, which says that when dealing with a minimax value associated to a family of functionals depending on a real parameter in a suitable monotone way, there exist bounded (PS) sequences for almost every value of the parameter. This argument has applications both to Hamiltonian periodic orbits and to semilinear elliptic equations (see \cite{str90}, \cite[section II.9]{str00} and references therein). Let us assume that $\min E<c_u(L)$, otherwise the interval of low energies collapses to a single level and there is nothing to prove. Given $\kappa\in (\min E,c_u(L))$, let $\Gamma$ be the set of the images of the maps either in $\mathcal{Z}_0$ or in $\mathcal{Z}_M$, which were introduced in the previous section: If $e_0(L)< \kappa < c_u(L)$ we may take $\mathcal{Z}_0$, while in general we should take $\mathcal{Z}_M$. 
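In this section we shall repeatedly use the fact that the action depends on the energy parameter in an affine way: for every $(x,T)\in\mathcal{M}$ and all real numbers $\kappa,\kappa'$ there holds \[ {\mathbb{S}}_{\kappa}(x,T) = {\mathbb{S}}_{\kappa'}(x,T) + (\kappa-\kappa')\, T, \] so that $\kappa\mapsto {\mathbb{S}}_{\kappa}(x,T)$ is increasing, with slope equal to the period $T$.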
Let $I$ be either the interval $(e_0(L),c_u(L))$ - if we are dealing with $\mathcal{Z}_0$ - or the interval $(\min E,c_u(L))$ - if we are dealing with $\mathcal{Z}_M$. For every $\kappa\in I$, consider the minimax value \begin{equation} \label{minimax} c(\kappa) := \inf_{K \in \Gamma} \max_{(x,T)\in K} {\mathbb{S}}_{\kappa} (x,T). \end{equation} By Lemmas \ref{Lem:mountain pass}, \ref{notempty}, and \ref{mtpass2}, $c(\kappa)$ is finite and positive, and since ${\mathbb{S}}_{\kappa}$ depends monotonically on $\kappa$, the function \[ c:I \rightarrow (0,+\infty) \] is weakly increasing. By Lebesgue's theorem on the differentiability of monotone functions, the set of points of $I$ at which $c$ has a linear modulus of continuity, that is \[ J := \bigl\{ \bar\kappa\in I\, \big| \, \exists \delta>0, \; \exists M>0 \mbox{ s.t. } |c(\kappa) - c(\bar{\kappa})| \leq M |\kappa - \bar{\kappa}| \mbox{ for every } \kappa \in I \mbox{ with }|\kappa - \bar\kappa| < \delta\bigr\}, \] has full Lebesgue measure in $I$. \begin{Lemma} \label{strarg} If $\bar \kappa\in J$, then ${\mathbb{S}}_{\bar\kappa}$ admits a bounded (PS) sequence at level $c(\bar\kappa)$, which consists of contractible loops. \end{Lemma} \begin{proof} First recall that $\Gamma$ is a class of subsets of $\mathcal{M}^{\mathrm{contr}}$. Let $(\kappa_h)\subset I$ be a strictly decreasing sequence which converges to $\bar\kappa$, and set $\epsilon_h:=\kappa_h-\bar \kappa\downarrow0$. We pick $K_h \in \Gamma$ such that \[ \max_{K_h}{\mathbb{S}}_{\kappa_h}\leq c(\kappa_h)+\epsilon_h. \] Let $z=(x,T)\in K_h$ be such that ${\mathbb{S}}_{\bar \kappa}(z)>c(\bar \kappa)-\epsilon_h$. Since $\bar\kappa$ belongs to $J$, we have \[ T = \frac{{\mathbb{S}}_{\kappa_h}(z)-{\mathbb{S}}_{\bar \kappa}(z)}{\kappa_h-\bar\kappa} \leq \frac{c(\kappa_h)+\epsilon_h-c(\bar\kappa)+\epsilon_h}{\epsilon_h}\leq M+2. \] Moreover, \[ {\mathbb{S}}_{\bar \kappa}(z) \leq {\mathbb{S}}_{\kappa_h}(z) \leq c(\kappa_h)+\epsilon_h \leq c(\bar \kappa)+(M+1)\epsilon_h. 
\] By the above considerations, \[ K_h \subset A_h \cup\big\{{\mathbb{S}}_{\bar \kappa}\leq c(\bar \kappa)-\epsilon_h\big\}, \] where \[ A_h := \big\{(x,T)\,\big|\, T\leq M+2\mbox{ and }{\mathbb{S}}_{\bar \kappa} (x,T) \leq c(\bar \kappa)+(M+1)\epsilon_h\big\}. \] If $(x,T)$ belongs to $A_h$, we have the estimate \[ {\mathbb{S}}_{\bar\kappa}(x,T) \geq \frac{L_0}{M+2}\|x'\|_2^2-(M+2)(L_1-\bar\kappa), \] (see (\ref{isbd})), which shows that $A_h$ is bounded in $\mathcal{M}$, uniformly in $h$. Let $\phi$ be the flow of the vector field obtained by multiplying $-\nabla{\mathbb{S}}_{\bar \kappa}$ by a suitable non-negative function, whose role is to make the vector field bounded on $\mathcal{M}$ and vanishing on the sublevel $\{{\mathbb{S}}_{\bar\kappa} \leq c(\bar\kappa)/4\}$, while keeping the uniform decrease condition \begin{equation} \label{ud} \frac{d}{d\sigma} {\mathbb{S}}_{\bar\kappa} \bigl(\phi_{\sigma}(z)\bigr) \leq - \frac{1}{2} \min \bigl\{ \|d{\mathbb{S}}_{\bar\kappa} (\phi_{\sigma}(z)) \|^2,1 \bigr\}, \quad \mbox{if } {\mathbb{S}}_{\bar\kappa} (\phi_{\sigma}(z)) \geq c(\bar\kappa)/2. \end{equation} See (\ref{decr}) and Remarks \ref{noncomp}, \ref{trunc}. Then Lemma \ref{Lem:4} implies that $\phi$ is well-defined on $[0,+\infty[\times \mathcal{M}$, and the class of sets $\Gamma$ is positively invariant with respect to $\phi$. Since $\phi$ maps bounded sets into bounded sets, we have \begin{equation} \label{dove} \phi([0,1]\times K_h) \subset B_h \cup \bigl\{{\mathbb{S}}_{\bar \kappa}\leq c(\bar \kappa)-\epsilon_h \bigr\}, \end{equation} for some uniformly bounded set \begin{equation} \label{bdd} B_h\subset \bigl\{{\mathbb{S}}_{\bar \kappa}\leq c(\bar \kappa)+(M+1)\epsilon_h \bigr\}. \end{equation} We claim that there exists a sequence $(z_h)\subset \mathcal{M}^{\mathrm{contr}}$ with \[ z_h \in B_h\cap \bigl\{{\mathbb{S}}_{\bar \kappa}\geq c(\bar \kappa)-\epsilon_h \bigr\}, \] and $\| d{\mathbb{S}}_{\bar \kappa}(z_h)\|$ infinitesimal. 
Such a sequence is clearly a bounded (PS) sequence at level $c(\bar\kappa)$. Assume, by contradiction, the above claim to be false. Then there exists $0<\delta<1$ which satisfies \[ \|d{\mathbb{S}}_{\bar \kappa}\| \geq \delta \quad \mbox{on } B_h \cap \bigl\{{\mathbb{S}}_{\bar \kappa}\geq c(\bar \kappa)-\epsilon_h\bigr\}, \] for every $h$ large enough. Together with (\ref{ud}), (\ref{dove}) and (\ref{bdd}), this implies that, for $h$ large enough, for any $z\in K_h$ such that \[ \phi([0,1]\times \{z\}) \subset \bigl\{ {\mathbb{S}}_{\bar\kappa} \geq c(\bar \kappa)-\epsilon_h\bigr\}, \] there holds \[ {\mathbb{S}}_{\bar \kappa} \big(\phi_1(z)\big) \leq {\mathbb{S}}_{\bar \kappa}(z)- \frac{1}{2} \delta^2 \leq c(\bar \kappa)+ (M+1) \epsilon_h- \frac{1}{2} \delta^2. \] It follows that \[ \max_{\phi_1(K_h)} {\mathbb{S}}_{\bar \kappa} \leq c(\bar \kappa)-\epsilon_h, \] for $h$ large enough. Since $\phi_1(K_h)$ belongs to $\Gamma$, this contradicts the definition of $c(\bar\kappa)$ and concludes the proof. \end{proof} \paragraph{\bf Existence of periodic orbits of low energy} We are finally ready to prove the following result, which is statement (iii) in the theorem of the Introduction: \begin{Thm} \label{ae} For almost every $\kappa \in (\min E,c_u(L))$, there is a contractible periodic orbit $\gamma$ of energy $E(\gamma,\gamma')=\kappa$ and positive action ${\mathbb{S}}_{\kappa}(\gamma)=c(\kappa)$. \end{Thm} \begin{proof} Let $\kappa$ be an element of the full measure set $J\subset I$. By Lemma \ref{strarg}, ${\mathbb{S}}_{\kappa}$ admits a (PS)$_{c(\kappa)}$ sequence $(x_h,T_h)\subset \mathcal{M}^{\mathrm{contr}}$ with $(T_h)$ bounded. By Lemma \ref{PST0}, $(T_h)$ is bounded away from zero, because $c(\kappa)>0$. By Lemma \ref{Lem:6}, the sequence $(x_h,T_h)$ has a limiting point in $\mathcal{M}^{\mathrm{contr}}$, which gives us the required contractible periodic orbit. 
\end{proof} \begin{Rmk} The existence of a periodic orbit for almost every energy level in $(\min E,e_0(L))$ can be proved also by an argument from symplectic topology. In fact, let $H:T^*M \rightarrow {\mathbb{R}}$ be the Hamiltonian which is Legendre dual to $L$. The fact that $\kappa< e_0(L)$ implies that the restriction of the projection $T^*M\rightarrow M$ to $H^{-1}(\kappa)$ is not surjective. Therefore, one can build a Hamiltonian diffeomorphism of $T^*M$ which displaces $H^{-1}(\kappa)$ from itself (see \cite[Proposition 8.2]{con06}). Sets which are displaceable by a Hamiltonian diffeomorphism have finite $\pi_1$-sensitive Hofer-Zehnder capacity (see \cite{sch06} and \cite{fs07}) and this fact implies the almost everywhere existence result for periodic orbits (see \cite{hz94}). See \cite[Corollary 8.3]{con06} for more details on such a proof. \end{Rmk} The next result shows that stable energy levels of Tonelli Hamiltonians possess periodic orbits, proving statement (iv) of the theorem in the Introduction. In particular, the same is true for contact type energy levels. \begin{Cor} Assume that $\kappa$ is a regular value of the Tonelli Hamiltonian $H\in C^{\infty}(T^*M)$ and that the hypersurface $\Sigma=H^{-1}(\kappa)$ is stable. Then $\Sigma$ carries a periodic orbit. \end{Cor} \begin{proof} By stability, we can find a diffeomorphism \[ (-\epsilon, \epsilon) \times \Sigma \rightarrow T^*M, \qquad (r,x) \mapsto \psi_r(x), \] onto an open neighborhood of $\Sigma$ such that $\psi_0$ is the identity on $\Sigma$ and \[ \psi_r : \Sigma \rightarrow \Sigma_r := \psi_r(\Sigma) \] induces an isomorphism between the line bundles $\mathcal{L}_{\Sigma}$ and $\mathcal{L}_{\Sigma_r}$. Up to the choice of a smaller $\epsilon$, we may assume that all the hypersurfaces $\Sigma_r$ are levels of a uniformly convex function. Therefore, they are the level sets of a Tonelli Hamiltonian $K\in C^{\infty}(T^* M )$ (see \cite{mp10} for a detailed construction of $K$). 
Since the Legendre transform of $K$ is a Tonelli Lagrangian, Theorems \ref{highthm} and \ref{ae} imply that $K^{-1}(\kappa)$ has periodic orbits for almost every $\kappa$. In particular, $\Sigma_r$ has periodic orbits for almost every $r$, but since the dynamics on $\Sigma_r$ and on $\Sigma$ are conjugate, the same is true for $\Sigma$. \end{proof} \begin{Rmk} The above proof shows the usefulness of having a theory which works with Tonelli Lagrangians, rather than just electromagnetic ones. In fact, even if the stable hypersurface $\Sigma$ is the level set of an electromagnetic Hamiltonian (that is, it is fiberwise an ellipsoid), the hypersurfaces $\Sigma_r$ given by the stability assumption may be more general fiberwise uniformly convex hypersurfaces. \end{Rmk} \begin{Rmk} If $E^{-1}(\kappa)$ is of contact type and $\pi_* : H_1(E^{-1}(\kappa),{\mathbb{R}}) \rightarrow H_1(M,{\mathbb{R}})$ is injective, then ${\mathbb{S}}_{\kappa}$ satisfies the Palais-Smale condition (with a suitable choice of the metric of $\mathcal{M}$). See \cite[Proposition F]{con06}. Therefore, in this case the existence of a periodic orbit can be obtained also without using stability. \end{Rmk} \begin{Rmk} It can be proved that when $M$ is a closed surface and $L$ is of the form (\ref{elmag}) with $V=0$ (that is, in the case of exact magnetic flows), there are periodic orbits on {\em every} energy level below $c_0(L)$ (see \cite{tai92b}, \cite{tai92c}, \cite{tai92} and \cite{cmp04}). In fact, the advantage of dealing with a surface is that when $\kappa<c_0(L)$ one can minimize ${\mathbb{S}}_{\kappa}$ on a suitable space of {\em embedded} closed curves. In the same setting, one can prove that for almost every energy level below $c_u(L)$ there are infinitely many periodic orbits, at least if all periodic orbits are assumed to be non-degenerate (see \cite{amp13}). 
\end{Rmk} \paragraph{\bf The two Lyapunov functions argument} We conclude these notes by discussing an alternative argument to deal with the lack of (PS) which is exhibited by ${\mathbb{S}}_{\kappa}$ when $\kappa<c_u(L)$. It allows one to prove that the set of energy levels $\kappa$ such that the Euler-Lagrange flow has a periodic orbit of energy $\kappa$ is {\em dense} in $(\min E,c_u(L))$, a weaker statement than Theorem \ref{ae}. However, it has some advantages, which are discussed in Remark \ref{adv} below. This argument is used, in a different context, in \cite{ama08}. Here we shall use it in order to prove the following weaker version of Theorem \ref{ae}: \begin{Thm} \label{dense} Let $\min E<\bar \kappa<c_u(L)$ and assume that there are no contractible periodic orbits of energy $\bar\kappa$ and non-negative action ${\mathbb{S}}_{\bar\kappa}$. Then there exists a strictly decreasing sequence $(\kappa_h)$ which converges to $\bar\kappa$ and is such that the Euler-Lagrange flow has a contractible periodic orbit $\gamma_h$ with energy $\kappa_h$ and period $T_h$, which satisfies ${\mathbb{S}}_{\kappa_h}(\gamma_h)/T_h \downarrow 0$. \end{Thm} \begin{proof} We argue by contradiction and assume that there exist $\tilde\kappa>\bar\kappa$ and $\delta>0$ such that for any $\kappa \in[\bar \kappa,\tilde \kappa]$ all the periodic orbits $\gamma$ of energy $\kappa$ and period $T$ satisfy either ${\mathbb{S}}_{\kappa}(\gamma)/T\geq\delta$ or ${\mathbb{S}}_{\kappa} (\gamma)\leq 0$. Fix real numbers $a>c(\bar \kappa)$ and $\kappa^*\in(\bar \kappa,\tilde \kappa]$. Assume that we can find $\lambda\in[0,1]$ and $(x,T)\in\mathcal{M}$ such that \[ \lambda \, d{\mathbb{S}}_{\bar \kappa}(x,T)+(1-\lambda)\, d{\mathbb{S}}_{\kappa^*}(x,T)=0, \quad 0<{\mathbb{S}}_{\bar \kappa}(x,T)\leq a. \] Then $(x,T)$ is a critical point of ${\mathbb{S}}_{\lambda\bar \kappa+(1-\lambda)\kappa^*}$, hence it is a $T$-periodic orbit with energy $\lambda\bar \kappa+(1-\lambda)\kappa^*$. 
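Here we have used once more that the action is affine in the energy parameter, so that the differentials combine accordingly: \[ \lambda\, d{\mathbb{S}}_{\bar \kappa}(x,T)+(1-\lambda)\, d{\mathbb{S}}_{\kappa^*}(x,T) = d{\mathbb{S}}_{\lambda\bar \kappa+(1-\lambda)\kappa^*}(x,T). \]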
By what we have assumed at the beginning, we have \[ \delta \leq \frac{1}{T}{\mathbb{S}}_{\lambda\bar \kappa+(1-\lambda)\kappa^*}(x,T) = \frac{1}{T} {\mathbb{S}}_{\bar \kappa}(x,T)+(1-\lambda)(\kappa^*-\bar \kappa) \leq \frac{a}{T} + \kappa^*-\bar\kappa. \] Up to the choice of a smaller $\kappa^*>\bar\kappa$, we may assume that $\kappa^*-\bar \kappa\leq \delta/2$. Then the above estimate implies that \[ T\leq \frac{2a}{\delta} =: T^*. \] With these choices of $\kappa^*$ and $T^*$, we can restate what we have proved so far as: \begin{Lemma} \label{oppo} If $T>T^*$ and $0<{\mathbb{S}}_{\bar \kappa}(x,T)\leq a$, then the segment \[ \mathrm{conv} \bigl\{ d{\mathbb{S}}_{\bar \kappa}(x,T), d{\mathbb{S}}_{\kappa^*}(x,T) \bigr\} \subset T_{(x,T)}^* \mathcal{M} \] does not contain $0$. \end{Lemma} The above lemma allows us to construct a negative pseudo-gradient vector field for ${\mathbb{S}}_{\bar\kappa}$ which has all the good properties of $-\nabla {\mathbb{S}}_{\bar\kappa}$ and moreover has ${\mathbb{S}}_{\kappa^*}$ as a Lyapunov function on the open set \[ A:= \bigl\{T>T^*\} \cap \{0< {\mathbb{S}}_{\bar\kappa} < a \bigr\}. \] In fact, the only obstruction to finding a vector field $W$ whose flow makes both ${\mathbb{S}}_{\bar\kappa}$ and ${\mathbb{S}}_{\kappa^*}$ decrease in $A$ is that the differentials of ${\mathbb{S}}_{\bar\kappa}$ and ${\mathbb{S}}_{\kappa^*}$ point in opposite directions at some point of $A$, and this is precisely what is excluded by Lemma \ref{oppo}. 
More precisely, one can prove the following: \begin{Lemma} \label{pg} There exists a locally Lipschitz vector field $W$ on $\mathcal{M}$ such that: \begin{enumerate} \item $d{\mathbb{S}}_{\bar\kappa}[W]<0$ on $\{{\mathbb{S}}_{\bar\kappa}>0\}$; \item $W$ is forward complete and vanishes on $\{{\mathbb{S}}_{\bar\kappa}\leq 0\}$; \item let $z_h=(x_h,T_h)$ be a sequence in $\mathcal{M}^{\mathrm{contr}}$ such that \[ 0< \inf {\mathbb{S}}_{\bar{\kappa}}(z_h) \leq \sup {\mathbb{S}}_{\bar{\kappa}}(z_h) < +\infty, \quad \lim_{h\rightarrow \infty} d{\mathbb{S}}_{\bar\kappa}(z_h)[W(z_h)] = 0, \] and $(T_h)$ is bounded from above; then $(z_h)$ has a subsequence which converges in $\mathcal{M}^{\mathrm{contr}}$; \item $d{\mathbb{S}}_{\kappa^*}[W]<0$ on $A$. \end{enumerate} \end{Lemma} In fact, one can choose $W$ to be given by the vector field \[ \nabla{\mathbb{S}}_{\bar \kappa}+\chi \frac{\|\nabla{\mathbb{S}}_{\bar \kappa}\|}{\|\nabla{\mathbb{S}}_{\kappa^*}\|}\nabla{\mathbb{S}}_{\kappa^*} \] multiplied by a suitable non-positive function. Here $\chi$ is a suitable cut-off function. See \cite[Lemmas 5.1 and 5.4]{ama08} for a similar construction. We can now prove Theorem \ref{dense}. By the definition of $c(\bar\kappa)$, there is a set $K$ in $\Gamma$ such that \[ \max_{K} {\mathbb{S}}_{\bar \kappa} <a. \] By Lemma \ref{pg} (i) and (ii), for every $\sigma_0>0$ we have \[ \inf_{\sigma\in [0,\sigma_0]} \Bigl| d{\mathbb{S}}_{\bar\kappa}\bigl(\phi_{\sigma}(z)\bigr)\bigl[W(\phi_{\sigma}(z))\bigr] \Bigr| \leq \frac{1}{\sigma_0} \int_0^{\sigma_0} \Bigl| d{\mathbb{S}}_{\bar\kappa}\bigl(\phi_{\sigma}(z)\bigr)\bigl[W(\phi_{\sigma}(z))\bigr]\Bigr| \, d\sigma = \frac{ {\mathbb{S}}_{\bar\kappa}(z) - {\mathbb{S}}_{\bar\kappa}(\phi_{\sigma_0}(z))}{\sigma_0}, \] and, by the definition of $c(\bar\kappa)$, \[ \max_{z\in K} {\mathbb{S}}_{\bar\kappa} \bigl( \phi_{\sigma_0}(z) \bigr) \geq c(\bar\kappa). 
\] By taking a limit for $\sigma_0\rightarrow +\infty$, thanks to Lemma \ref{pg} (ii), the above facts imply that $\phi_{{\mathbb{R}}^+}(K)\cap \{{\mathbb{S}}_{\bar\kappa}>0\}$ contains a sequence $z_h=(x_h,T_h)$ such that \[ 0< c(\bar\kappa) \leq \inf {\mathbb{S}}_{\bar{\kappa}}(z_h) \leq \sup {\mathbb{S}}_{\bar{\kappa}}(z_h) < a \quad \mbox{and} \quad \lim_{h\rightarrow \infty} d{\mathbb{S}}_{\bar\kappa}(z_h)[W(z_h)] = 0. \] It is enough to show that $(T_h)$ is bounded from above: Indeed, in this case Lemma \ref{pg} (iii) implies that $(z_h)$ has a limiting point, which is a critical point of ${\mathbb{S}}_{\bar\kappa}$ with positive action, contradicting the hypothesis of Theorem \ref{dense}. The upper bound on $(T_h)$ is a consequence of the following claim: the period $T$ is bounded on the set $\phi_{{\mathbb{R}}^+}(K)\cap \{{\mathbb{S}}_{\bar\kappa}>0\}$. In order to prove this claim, we first notice that \begin{equation} \label{hhh} {\mathbb{S}}_{\bar \kappa}(x,T)\leq a, \;\; T\leq T^* \quad \Rightarrow \quad {\mathbb{S}}_{\kappa^*}(x,T)\leq a+(\kappa^*-\bar \kappa)T^*=:b. \end{equation} Since $K$ is compact, we can find $d>b$ such that $K\subset \{{\mathbb{S}}_{\kappa^*} < d\}$. Let $\phi$ be the flow of the vector field $W$. We claim that \begin{equation} \label{cc} \phi_{{\mathbb{R}}^+}(K) \cap \bigl\{{\mathbb{S}}_{\bar\kappa} > 0\bigr\} \subset \bigl\{{\mathbb{S}}_{\kappa^*} < d\bigr\}. \end{equation} In fact, let $z\in K$ and let $\sigma_0>0$ be the first instant such that ${\mathbb{S}}_{\kappa^*}(\phi_{\sigma_0}(z))=d$, while ${\mathbb{S}}_{\bar{\kappa}}(\phi_{\sigma_0}(z))> 0$. By Lemma \ref{pg} (i), ${\mathbb{S}}_{\bar\kappa}(\phi_{\sigma_0}(z)) \leq {\mathbb{S}}_{\bar\kappa}(z) < a$. By Lemma \ref{pg} (iv), the point $\phi_{\sigma_0}(z)$ cannot belong to $A$, so $\phi_{\sigma_0}(z)=(x,T)$ with $T\leq T^*$ and (\ref{hhh}) implies that ${\mathbb{S}}_{\kappa^*}(\phi_{\sigma_0}(z)) \leq b < d$. This contradiction proves (\ref{cc}). 
If ${\mathbb{S}}_{\bar\kappa}(x,T)>0$ and ${\mathbb{S}}_{\kappa^*}(x,T)<d$, then \[ d > {\mathbb{S}}_{\kappa^*}(x,T) = {\mathbb{S}}_{\bar\kappa}(x,T) + (\kappa^*-\bar\kappa) T > (\kappa^*-\bar\kappa) T. \] This shows that the period $T$ is bounded on the set \[ \bigl\{{\mathbb{S}}_{\bar\kappa} > 0\bigr\} \cap \bigl\{{\mathbb{S}}_{\kappa^*} < d\bigr\}, \] and by (\ref{cc}) it is bounded also on \[ \phi_{{\mathbb{R}}^+}(K) \cap \bigl\{{\mathbb{S}}_{\bar\kappa}>0\bigr\}, \] as claimed. \end{proof} \begin{Rmk} \label{adv} In the Struwe monotonicity argument, one gets the existence of bounded (PS) sequences at level $c(\bar{\kappa})$, but has no control on the (PS) sequences at other levels. Therefore, it is not clear whether the space of negative gradient flow lines for ${\mathbb{S}}_{\bar\kappa}$ which connect two given critical points - say with positive action - is bounded. An advantage of the two Lyapunov functions argument is that the latter fact is true for the flow lines of the vector field $W$ constructed in Lemma \ref{pg}: the second Lyapunov function ${\mathbb{S}}_{\kappa^*}$ allows one to exclude the existence of flow lines which go arbitrarily far and come back. This fact would allow one to develop some global critical point theory for ${\mathbb{S}}_{\bar\kappa}$, such as Morse theory or Lusternik-Schnirelmann theory. This is not useful here, because the a priori estimates which lead to the existence of the pseudo-gradient vector field $W$ come from a contradiction argument. However, it might be useful in situations where these a priori bounds have a different origin, such as for example in the case of tame energy levels (see \cite{cfp10} for the definition of tameness and for motivating examples). \end{Rmk} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
\begin{document} \title[Weighted Integral Means of Mixed Areas and Lengths]{Weighted Integral Means of Mixed\\ Areas and Lengths under Holomorphic Mappings} \author{Jie Xiao and Wen Xu} \address{Department of Mathematics and Statistics, Memorial University, NL A1C 5S7, Canada} \email{jxiao@mun.ca; wenxupine@gmail.com} \thanks{JX and WX were in part supported by NSERC of Canada and the Finnish Cultural Foundation, respectively.} \begin{abstract} This note addresses monotonic growths and logarithmic convexities of the weighted ($(1-t^2)^\alpha dt^2$, $-\infty<\alpha<\infty$, $0<t<1$) integral means $\mathsf{A}_{\alpha,\beta}(f,\cdot)$ and $\mathsf{L}_{\alpha,\beta}(f,\cdot)$ of the mixed area $(\pi r^2)^{-\beta}A(f,r)$ and the mixed length $(2\pi r)^{-\beta}L(f,r)$ ($0\le\beta\le 1$ and $0<r<1$) of $f(r\mathbb D)$ and $\partial f(r\mathbb D)$ under a holomorphic map $f$ from the unit disk $\mathbb D$ into the finite complex plane $\mathbb C$. \end{abstract} \maketitle \section{Introduction} From now on, $\mathbb D$ represents the unit disk in the finite complex plane $\mathbb C$, $H(\mathbb D)$ denotes the space of holomorphic mappings $f: \mathbb D\to\mathbb C$, and $U(\mathbb D)$ stands for all univalent functions in $H(\mathbb D)$. For any real number $\alpha$, positive number $r\in (0,1)$ and the standard area measure $dA$, let $$ dA_\alpha(z)=(1-|z|^2)^\alpha dA(z);\quad r\mathbb D=\{z\in\mathbb D: |z|<r\};\quad r\mathbb T=\{z\in\mathbb D: |z|=r\}. $$ In their recent paper \cite{XZ}, Xiao and Zhu have discussed the following area $0<p<\infty$-integral means of $f\in H(\mathbb D)$: $$ {M}_{p,\alpha}(f,r)=\left[\frac{1}{A_\alpha(r\mathbb D)}\int_{r\mathbb D}|f|^p\,dA_\alpha\right]^{\frac1p}, $$ proving that $r\mapsto M_{p,\alpha}(f,r)$ is strictly increasing unless $f$ is a constant, and $\log r\mapsto\log M_{p,\alpha}(f,r)$ is not always convex. 
This last result suggests a conjecture that $\log r\mapsto\log M_{p,\alpha}(f,r)$ is convex or concave when $\alpha\le 0$ or $\alpha>0$. But, motivated by \cite[Example 10, (ii)]{XZ}, we can choose $p=2$, $\alpha=1$, $f(z)=z+c$ and $c>0$ to verify that the conjecture is not true. At the same time, this negative result was also obtained in Wang-Zhu's manuscript \cite{WZ}. So far, it is unknown whether the conjecture is true for $p\not=2$. The foregoing observation has actually inspired the following investigation. We concentrate on the fundamental case $p=1$. To understand this approach, let us take a look at $M_{1,\alpha}(\cdot,\cdot)$ from a differential geometric viewpoint. Note that $$ {M}_{1,\alpha}(f',r)=\frac{\int_{r\mathbb D}|f'|\,dA_\alpha}{A_\alpha(r\mathbb D)}=\frac{\int_0^r \big[(2\pi t)^{-1}\int_{t\mathbb T}|f'(z)||dz|\big](1-t^2)^\alpha\,dt^2}{\int_0^r (1-t^2)^\alpha\,dt^2}. $$ So, if $f\in U(\mathbb D)$, then $$ (2\pi t)^{-1}\int_{t\mathbb T}|f'(z)|\,|dz| $$ is a kind of mean of the length of $\partial f(t\mathbb D)$, and hence the square of this mean dominates a sort of mean of the area of $f(t\mathbb D)$ in the isoperimetric sense: $$ \Phi_{A}(f,t)=(\pi t^2)^{-1}\int_{t\mathbb D}|f'(z)|^2\,dA(z)\le \left[(2\pi t)^{-1}\int_{t\mathbb T}|f'(z)|\,|dz|\right]^2=\big[\Phi_{L}(f,t)\big]^2. $$ According to the P\'olya-Szeg\"o monotone principle \cite[Problem 309]{PS} (or \cite[Proposition 6.1]{BMM}) and the area Schwarz's lemma in Burckel, Marshall, Minda, Poggi-Corradini and Ransford \cite[Theorem 1.9]{BMM}, $\Phi_{L}(f,\cdot)$ and $\Phi_{A}(f,\cdot)$ are strictly increasing on $(0,1)$ unless $f(z)=a_1z$ with $a_1\not=0$. Furthermore, $\log\Phi_{L}(f,r)$ and $\log\Phi_{A}(f,r)$, equivalently, $\log L(f,r)$ and $\log A(f,r)$, are convex functions of $\log r$ for $r\in (0,1)$, due to Hardy's classical convexity theorem and \cite[Section 5]{BMM}. 
It is worthwhile to mention that if $c>0$ is small enough then the universal covering map of $\mathbb D$ onto the annulus $\{e^{-\frac{c\pi}{2}}<|z|< e^{\frac{c\pi}{2}}\}$: $$ f(z)=\exp\Big[ic\log\Big(\frac{1+z}{1-z}\Big)\Big] $$ enjoys the property that $\log r\mapsto \log A(f,r)$ is not convex; see \cite[Example 5.1]{BMM}. In the above and below, we have used the following convention: $$ \Phi_{A}(f,r)=\frac{A(f,r)}{\pi r^2}\quad\&\quad \Phi_{L}(f,r)=\frac{L(f,r)}{2\pi r}, $$ where under $r\in (0,1)$ and $f\in H(\mathbb D)$, $A(f,r)$ and $L(f,r)$ stand respectively for the area of $f(r\mathbb D)$ (the projection of the Riemannian image of $r\mathbb D$ by $f$) and the length of $\partial f(r\mathbb D)$ (the boundary of the projection of the Riemannian image of $r\mathbb D$ by $f$) with respect to the standard Euclidean metric on $\mathbb C$. For our purpose, we choose a shortcut notation $$ d\mu_\alpha(t)=(1-t^2)^\alpha dt^2\quad\&\quad \nu_\alpha(t)=\mu_\alpha([0,t])\quad\forall\quad t\in (0,1), $$ and for $0\le\beta\le 1$ define $$ \Phi_{A,\beta}(f,t)=\frac{A(f,t)}{(\pi t^2)^\beta}\quad\&\quad \Phi_{L,\beta}(f,t)=\frac{L(f,t)}{(2\pi t)^\beta}, $$ and then $$ \mathsf{A}_{\alpha,\beta}(f,r)=\frac{\int_0^r \Phi_{A,\beta}(f,t) \,d\mu_\alpha(t)}{\int_0^r d\mu_\alpha(t)}\quad\&\quad \mathsf{L}_{\alpha,\beta}(f,r)=\frac{\int_0^r \Phi_{L,\beta}(f,t)\, d\mu_\alpha(t)}{\int_0^r d\mu_\alpha(t)} $$ which are called the weighted integral means of the mixed area and the mixed length for $f(r\mathbb D)$ and $\partial f(r\mathbb D)$, respectively. 
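As a hedged numerical aside (an illustrative sketch with hypothetical helper names, not part of the paper's argument), these definitions can be checked by elementary quadrature for the simplest map $f(z)=z$, where $A(f,t)=\pi t^2$ and hence $\Phi_{A,\beta}(f,t)=(\pi t^2)^{1-\beta}$:

```python
import math

# Illustrative numerical sketch (not from the paper): for f(z) = z one has
# A(f,t) = pi t^2, so Phi_{A,beta}(f,t) = (pi t^2)^(1-beta), and
# A_{alpha,beta}(f,r) is the average of Phi_{A,beta}(f,.) over (0,r)
# against d mu_alpha(t) = (1 - t^2)^alpha dt^2 = 2t (1 - t^2)^alpha dt.

def integrate(g, a, b, n=4000):
    # plain midpoint rule, good enough for a sanity check
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

def A_mean(alpha, beta, r):
    phi = lambda t: (math.pi * t * t) ** (1.0 - beta)
    w = lambda t: 2.0 * t * (1.0 - t * t) ** alpha
    return integrate(lambda t: phi(t) * w(t), 0.0, r) / integrate(w, 0.0, r)

# r -> A_{alpha,beta}(z,r) is strictly increasing when beta < 1 ...
vals = [A_mean(1.0, 0.0, 0.1 * k) for k in range(1, 10)]
assert all(a < b for a, b in zip(vals, vals[1:]))
# ... and identically |f'(0)|^2 = 1 when beta = 1 (f is a linear map)
assert abs(A_mean(1.0, 1.0, 0.5) - 1.0) < 1e-9
```

The two assertions match the dichotomy (strict growth versus constancy for linear maps) that recurs throughout the results below.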
In this note, we consider two fundamental properties: monotonic growths and logarithmic convexities of both $\mathsf{A}_{\alpha,\beta}(f,r)$ and $\mathsf{L}_{\alpha,\beta}(f,r)$, thereby producing two special consequences: (i) if $r\mapsto \Phi_{L}(f,r)$ is monotone increasing on $(0,1)$, then so is the isoperimetry-induced function: $$ r\mapsto\frac{\int_0^r \big[\Phi_{L,1}(f,t)\big]^2\,d\mu_\alpha(t)}{\int_0^r d\mu_\alpha(t)}\ge \mathsf{A}_{\alpha,1}(f,r); $$ (ii) the log-convexity for $\mathsf{L}_{\alpha,\beta=1}(f,r)$ essentially settles the above-mentioned conjecture. The details (results and their proofs) are arranged in the forthcoming two sections. \section{Monotonic Growth} In this section, we deal with the monotonic growths of $\mathsf{A}_{\alpha,\beta}(f,r)$ and $\mathsf{L}_{\alpha,\beta}(f,r)$, along with their associated Schwarz type lemmas. In what follows, $\mathbb N$ denotes the set of all natural numbers. \subsection{Two Lemmas} The following two preliminary results are needed. \begin{lemma}\cite[Theorems 1 \& 2]{Ma}\label{l2} Let $f\in H(\mathbb D)$ be of the form $f(z)=a_0+\sum_{k=n}^\infty a_kz^k$ with $n\in\mathbb N$. Then: \item{\rm(i)} $\pi r^{2n}\Big[\frac{|f^{(n)}(0)|}{n!}\Big]^2\le A(f,r)\quad\forall\quad r\in (0,1)$. \item{\rm(ii)} $2\pi r^n \Big[\frac{|f^{(n)}(0)|}{n!}\Big]\le L(f,r)\quad\forall\quad r\in (0,1)$. \noindent Moreover, equality in (i) or (ii) holds if and only if $f(z)=a_0+a_nz^n$. \end{lemma} \begin{proof} This may be viewed as the higher order Schwarz type lemma for area and length. See also the proofs of Theorems 1 \& 2 in \cite{Ma}, and their immediate remarks on equalities. 
Here it is worth noticing three points: (a) $\frac{f^{(n)}(0)}{n!}$ is just $a_n$; (b) \cite[Corollary 3]{J} presents a different argument for the area case; (c) $L(f,r)$ is greater than or equal to the length $l(r,f)$ of the outer boundary of $f(r\mathbb D)$ (defined in \cite{Ma}) which is not less than the length $l^\#(r,f)$ of the exact outer boundary of $f(r\mathbb D)$ (introduced in \cite{Y}). \end{proof} \begin{lemma}\label{l1} Let $0\le\beta\le 1$. \item{\rm(i)} If $f\in H(\mathbb D)$, then $r\mapsto \Phi_{A,\beta}(f,r)$ is strictly increasing on $(0,1)$ unless \[ f=\left\{\begin{array} {r@{\;}l} constant &\quad \hbox{when}\quad \beta<1\\ linear\ map &\quad \hbox{when}\quad \beta=1. \end{array} \right. \] \item{\rm(ii)} If $f\in U(\mathbb D)$ or $f(z)=a_0+a_nz^n$ with $n\in\mathbb N$, then $r\mapsto \Phi_{L,\beta}(f,r)$ is strictly increasing on $(0,1)$ unless \[ f=\left\{\begin{array} {r@{\;}l} constant &\quad \hbox{when}\quad \beta<1\\ linear\ map & \quad \hbox{when}\quad \beta=1. \end{array} \right. \] \end{lemma} \begin{proof} It is enough to handle $\beta<1$ since the case $\beta=1$ has been treated in \cite[Theorem 1.9 \& Proposition 6.1]{BMM}. The monotonic growths in (i) and (ii) follow from $$ \Phi_{A,\beta}(f,r)=(\pi r^2)^{1-\beta}\Phi_{A,1}(f,r)\quad\&\quad \Phi_{L,\beta}(f,r)=(2\pi r)^{1-\beta}\Phi_{L,1}(f,r). $$ To see the strictness, we consider two cases. (i) Suppose that $\Phi_{A,\beta}(f,\cdot)$ is not strictly increasing. Then there are $r_1,r_2\in (0,1)$ such that $r_1<r_2$, and $\Phi_{A,\beta}(f,\cdot)$ is a constant on $[r_1,r_2]$. Hence $$ \frac{d}{dr}\Phi_{A,\beta}(f,r)=0\quad\forall\quad r\in [r_1,r_2]. $$ Equivalently, $$ 2\beta A(f,r)=r\frac{d}{dr}A(f,r)\quad\forall\quad r\in [r_1,r_2]. $$ But, according to \cite[(4.2)]{BMM}: $$ 2A(f,r)\le r\frac{d}{dr} A(f,r)\quad\forall\quad r\in (0,1). $$ Since $\beta<1$, combining the last two displays gives $2(1-\beta)A(f,r)\le 0$, and so $A(f,r)=0$ for all $r\in [r_1,r_2]$, whence $f$ is constant. 
(ii) Now assume that $\Phi_{L,\beta}(f,\cdot)$ is not strictly increasing. There are $r_3,r_4\in (0,1)$ such that $ r_3<r_4$ and $$ 0=\frac{d}{dr}\Phi_{L,\beta}(f,r)=(2\pi r)^{-\beta}\Big[\frac{d}{dr}L(f,r)-\frac{\beta}{r}L(f,r)\Big]\quad\forall\quad r\in [r_3,r_4]. $$ If $f\in U(\mathbb D)$ then $$ L(f,r)=\int_{r\mathbb T}|f'(z)|\,|dz|=r\int_0^{2\pi}|f'(re^{i\theta})|d\theta $$ and hence one has the following ``first variation formula'' $$ \frac{d}{dr}L(f,r)=\int_0^{2\pi}|f'(re^{i\theta})|d\theta+r\frac{d}{dr}\int_0^{2\pi}|f'(re^{i\theta})|d\theta\quad\forall\quad r\in [r_3,r_4]. $$ The previous three equations yield $$ 0=(1-\beta)\int_0^{2\pi}|f'(re^{i\theta})|d\theta+r\frac{d}{dr}\int_0^{2\pi}|f'(re^{i\theta})|d\theta\quad\forall\quad r\in [r_3,r_4] $$ and so, since $\beta<1$ and $r\mapsto\int_0^{2\pi}|f'(re^{i\theta})|d\theta$ is nondecreasing (by the subharmonicity of $|f'|$), both terms on the right-hand side must vanish; in particular, $$ \int_0^{2\pi}|f'(re^{i\theta})|d\theta=0\quad\forall\quad r\in [r_3,r_4]. $$ This ensures that $f$ is a constant, contradicting $f\in U(\mathbb D)$. Therefore, $f(z)$ is of the form $a_0+a_nz^n$. But then $\Phi_{L,\beta}(f,r)=|a_n|(2\pi)^{1-\beta}r^{n-\beta}$ is strictly increasing unless $a_n=0$, so $f$ must be constant. \end{proof} \subsection{Monotonic Growth of $\mathsf{A}_{\alpha,\beta}(f,\cdot)$} This aspect is essentially motivated by the following Schwarz type lemma. \begin{proposition}\label{pr1} Let $-\infty<\alpha<\infty$, $0\le\beta\le 1$, and $f\in H(\mathbb D)$ be of the form $f(z)=a_0+\sum_{k=n}^\infty a_k z^k$ with $n\in\mathbb N$. Then $$ \pi^{1-\beta}\Big[\frac{|f^{(n)}(0)|}{n!}\Big]^2\le \mathsf{A}_{\alpha,\beta}(f,r)\left[\frac{\nu_\alpha(r)}{\int_0^rt^{2(n-\beta)}\,d\mu_\alpha(t)}\right]\quad\forall\quad r\in (0,1) $$ with equality if and only if $f(z)=a_0+a_nz^n$. \end{proposition} \begin{proof} The inequality follows from Lemma \ref{l2} (i) right away. When $f(z)=a_0+a_nz^n$, the last inequality becomes equality due to the equality case of Lemma \ref{l2} (i). Conversely, suppose that the last inequality is an equality. 
If $f$ does not have the form $a_0+a_nz^n$, then the equality in Lemma \ref{l2} (i) fails, and so there are $r_1,r_2\in (0,1)$ such that $r_1<r_2$ and $$ A(f,t)>\pi t^{2n}\Big[\frac{|f^{(n)}(0)|}{n!}\Big]^2\quad\forall\quad t\in [r_1,r_2]. $$ This strict inequality forces that for $r\in [r_2,1)$, \begin{eqnarray*} \pi^{1-\beta}\Big[\frac{|f^{(n)}(0)|}{n!}\Big]^2\int_0^r t^{2(n-\beta)}\,d\mu_\alpha(t)&=&\int_0^r (\pi t^2)^{-\beta}A(f,t)\,d\mu_\alpha(t)\\ &=&\left(\int_0^{r_1}+\int_{r_1}^{r_2}+\int_{r_2}^{r}\right)(\pi t^2)^{-\beta} A(f,t)\,d\mu_\alpha(t)\\ &>&\pi^{1-\beta} \Big[\frac{|f^{(n)}(0)|}{n!}\Big]^2 \int_0^{r} t^{2(n-\beta)}\,d\mu_\alpha(t), \end{eqnarray*} a contradiction. Thus $f(z)=a_0+a_nz^n$. \end{proof} Based on Proposition \ref{pr1}, we find the monotonic growth for $\mathsf{A}_{\alpha,\beta}(\cdot,\cdot)$ as follows. \begin{theorem}\label{th1} Let $-\infty<\alpha<\infty$, $0\le\beta\le 1$, and $f\in H(\mathbb D)$. Then $r\mapsto\mathsf{A}_{\alpha,\beta}(f,r)$ is strictly increasing on $(0,1)$ unless \[ f=\left\{\begin{array} {r@{\;}l} constant &\quad \hbox{when}\quad \beta<1\\ linear\ map &\quad \hbox{when}\quad \beta=1. \end{array} \right. \] Consequently, \item{\rm(i)} \[ \lim_{r\to 0}\mathsf{A}_{\alpha,\beta}(f,r)=\left\{\begin{array} {r@{\;}l} 0\quad & \hbox{when}\quad \beta<1\\ |f'(0)|^2\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] \item{\rm(ii)} If $$ \Phi_{A,\beta}(f,0):=\lim_{r\to 0}\Phi_{A,\beta}(f,r)\quad\&\quad\Phi_{A,\beta}(f,1):=\lim_{r\to 1}\Phi_{A,\beta}(f,r)<\infty, $$ then $$ 0<r<s<1\Rightarrow 0\le \frac{\mathsf{A}_{\alpha,\beta}(f,s)-\mathsf{A}_{\alpha,\beta}(f,r)}{\log\nu_\alpha(s)-\log\nu_\alpha(r)}\leq \Phi_{A,\beta}(f,s)-\Phi_{A,\beta}(f,0) $$ with equality if and only if \[ f=\left\{\begin{array} {r@{\;}l} \hbox{constant}\quad & \hbox{when}\quad \beta<1\\ \hbox{linear\ map}\quad & \hbox{when}\quad \beta=1. \end{array} \right. 
\] In particular, $t\mapsto \mathsf{A}_{\alpha,\beta}(f,t)$ is Lipschitz with respect to $\log\nu_\alpha(t)$ for $t\in (0,1)$. \end{theorem} \begin{proof} Note that $\nu_\alpha(r)=\int_0^r d\mu_\alpha(t)$. So $d\nu_\alpha(r)$, the differential of $\nu_\alpha(r)$ with respect to $r\in (0,1)$, equals $d\mu_\alpha(r)$. By integration by parts we have $$ \Phi_{A,\beta}(f,r)\nu_\alpha(r)-\int_0^r \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t)=\int_0^r\big[\frac{d}{dt}\Phi_{A,\beta}(f,t)\big] \nu_\alpha(t)\,dt. $$ Differentiating the function $\mathsf{A}_{\alpha,\beta}(f,r)$ with respect to $r$ and using Lemma \ref{l1} (i), we get \begin{align*} \frac{d}{dr}\mathsf{A}_{\alpha,\beta}(f,r)&=\frac{\Phi_{A,\beta}(f,r)2r(1-r^2)^\alpha \nu_\alpha(r)-\Big[\int_0^r\Phi_{A,\beta}(f,t)\, d\mu_\alpha(t)\Big]2r(1-r^2)^\alpha}{\nu_\alpha(r)^2}\\ &=\frac{2r(1-r^2)^\alpha \left[\Phi_{A,\beta}(f,r)\nu_\alpha(r)- \int_0^r \Phi_{A,\beta}(f,t)\, d\mu_\alpha(t)\right]}{\nu_\alpha(r)^2}\\ &=\frac{2r(1-r^2)^\alpha\int_0^r\big[\frac{d}{dt}\Phi_{A,\beta}(f,t)\big] \nu_\alpha(t)\, dt}{\nu_\alpha(r)^2}\geq 0. \end{align*} As a result, $r\mapsto\mathsf{A}_{\alpha,\beta}(f,r)$ increases on $(0,1)$. Next suppose that the just-verified monotonicity is not strict. Then there exist two numbers $r_1,r_2\in (0,1)$ such that $r_1<r_2$ and $$ \mathsf{A}_{\alpha,\beta}(f,r_1)=\mathsf{A}_{\alpha,\beta}(f,r)=\mathsf{A}_{\alpha,\beta}(f,r_2)\quad \forall\quad r\in [r_1,r_2]. $$ Consequently, $$ \frac{d}{dr}\mathsf{A}_{\alpha,\beta}(f,r)=0\quad\forall\quad r\in[r_1,r_2] $$ and so $$ \int_0^r \big[\frac{d}{dt}\Phi_{A,\beta}(f,t)\big]\nu_\alpha(t)\, dt=0\quad\forall\quad r\in [r_1,r_2]. $$ Since the integrand here is nonnegative, we must have $$ \frac{d}{dt}\Phi_{A,\beta}(f,t)=0\quad\forall\quad t\in (0,r)\quad\hbox{with}\quad r\in [r_1,r_2], $$ whence, by the argument for the strictness in Lemma \ref{l1} (i), $f$ must be constant if $\beta<1$ and linear if $\beta=1$. It remains to check the rest of Theorem \ref{th1}. 
(i) The monotonic growth of $\mathsf{A}_{\alpha,\beta}(f,\cdot)$ ensures the existence of the limit. An application of L'H\^{o}pital's rule gives $$ \lim_{r\to 0}\mathsf{A}_{\alpha,\beta}(f,r)=\lim_{r\to 0}\Phi_{A,\beta}(f,r)= \left\{\begin{array} {r@{\;}l} 0\quad & \hbox{when}\quad \beta<1\\ |f'(0)|^2\quad & \hbox{when}\quad \beta=1. \end{array} \right. $$ (ii) Again, the above monotonicity formula of $\mathsf{A}_{\alpha,\beta}(f,\cdot)$ plus the given condition yields that for $s\in (0,1)$, $$ \sup_{r\in (0,s)}\mathsf{A}_{\alpha,\beta}(f,r)=\mathsf{A}_{\alpha,\beta}(f,s)<\infty. $$ Integrating by parts twice and using the monotonicity of $\Phi_{A,\beta}(f,\cdot)$, we obtain that under $0<r<s<1$, \begin{eqnarray*} 0&\le&\mathsf{A}_{\alpha,\beta}(f,s)-\mathsf{A}_{\alpha,\beta}(f,r)\\ &=&\int_r^s\frac{d}{dt}\mathsf{A}_{\alpha,\beta}(f,t)\,dt\\ &=&\int_r^s\left(\int_0^t\big[\frac{d}{d\tau}\Phi_{A,\beta}(f,\tau)\big]\nu_\alpha(\tau)\,d\tau\right)\,\Big[\frac{d\nu_\alpha(t)}{\nu_\alpha(t)^2}\Big]\\ &=&\int_r^s\left(\nu_\alpha(t)\Phi_{A,\beta}(f,t)-\int_0^t\Phi_{A,\beta}(f,\tau)\,d\nu_\alpha(\tau)\right)\,\Big[\frac{d\nu_\alpha(t)}{\nu_\alpha(t)^2}\Big]\\ &\le&\Big[\Phi_{A,\beta}(f,s)-\Phi_{A,\beta}(f,0)\Big]\int_r^s\frac{d\nu_\alpha(t)}{\nu_\alpha(t)}. \end{eqnarray*} This gives the desired inequality right away. Furthermore, the above argument plus Lemma \ref{l1} (i) derives the equality case. \end{proof} As an immediate consequence of Theorem \ref{th1}, we get a sort of ``norm'' estimate associated with $\Phi_{A,\beta}(f,\cdot)$. \begin{corollary}\label{pr2} Let $-\infty<\alpha<\infty$, $0\le\beta\le 1$, and $f\in H(\mathbb D)$. \item{\rm(i)} If $-\infty<\alpha\le -1$, then $$ \int_0^1 \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t)=\sup_{r\in (0,1)}\int_0^r \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t)<\infty $$ if and only if $f$ is constant. 
Moreover, $\sup_{r\in (0,1)}\mathsf{A}_{\alpha,\beta}(f,r)=\Phi_{A,\beta}(f,1).$ \item{\rm(ii)} If $-1<\alpha<\infty$, then $$ \mathsf{A}_{\alpha,\beta}(f,r)\le\mathsf{A}_{\alpha,\beta}(f,1):=\sup_{s\in (0,1)}\mathsf{A}_{\alpha,\beta}(f,s)\quad\forall\quad r\in (0,1), $$ where the inequality becomes an equality for all $r\in (0,1)$ if and only if \[ f=\left\{\begin{array} {r@{\;}l} \hbox{constant}\quad & \hbox{when}\quad \beta<1\\ \hbox{linear\ map}\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] \item{\rm(iii)} The following function $\alpha\mapsto\mathsf{A}_{\alpha,\beta}(f,1)$ is strictly decreasing on $(-1,\infty)$ unless \[ f=\left\{\begin{array} {r@{\;}l} \hbox{constant}\quad & \hbox{when}\quad \beta<1\\ \hbox{linear\ map}\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] \end{corollary} \begin{proof} (i) By Theorem \ref{th1}, we have $$ \mathsf{A}_{\alpha,\beta}(f,r)\leq \frac{\int_0^s \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t)}{\nu_\alpha(s)}\quad\forall\quad r\in (0,s). $$ Note that $$\lim_{s\to 1}\nu_\alpha(s)=\infty\quad\&\quad\lim_{s\to 1}\int_0^s\Phi_{A,\beta}(f,t)\, d\mu_\alpha(t)=\int_0^1 \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t). $$ So, the last integral is finite if and only if $$ \Phi_{A,\beta}(f,r)=0\quad\forall\quad r\in (0,1), $$ equivalently, $A(f,r)=0$ holds for all $r\in (0,1)$, i.e., $f$ is constant. For the remaining part of (i), we may assume that $f$ is not a constant map. Due to $\lim_{r\to 1}\nu_\alpha(r)=\infty$, we obtain $$ \lim_{r\to 1}\int_0^r \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t)=\int_0^1 \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t)=\infty. $$ So, an application of L'H\^{o}pital's rule yields $$ \sup_{0<r<1}\mathsf{A}_{\alpha,\beta}(f,r)=\lim_{r\to 1}\frac{\int_0^r \Phi_{A,\beta}(f,t)\, d\mu_\alpha(t)}{\nu_\alpha(r)}=\lim_{r\to 1}\frac{\Phi_{A,\beta}(f,r)r(1-r^2)^\alpha}{ r(1-r^2)^\alpha}=\Phi_{A,\beta}(f,1). 
$$ (ii) Under $-1<\alpha<\infty$, we have $$ \lim_{r\to 1}\nu_\alpha(r)=\nu_\alpha(1)\quad\&\quad \lim_{r\to 1}\int_0^r\Phi_{A,\beta}(f,t)\,d\mu_\alpha(t)=\int_0^1 \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t). $$ Thus, by Theorem \ref{th1} it follows that for $r\in (0,1)$, $$ \mathsf{A}_{\alpha,\beta}(f,r)\le\lim_{s\to 1}\mathsf{A}_{\alpha,\beta}(f,s)=\big[\nu_\alpha(1)\big]^{-1}\int_0^1 \Phi_{A,\beta}(f,t)\, d\mu_\alpha(t)=\sup_{s\in (0,1)}\mathsf{A}_{\alpha,\beta}(f,s). $$ The equality case just follows from a straightforward computation and Theorem \ref{th1}. (iii) Suppose $-1<\alpha_1<\alpha_2<\infty$ and $\mathsf{A}_{\alpha_1,\beta}(f,1)<\infty$, then integrating by parts twice, we obtain \begin{align*} \mathsf{A}_{\alpha_2,\beta}(f,1)&= \big[\nu_{\alpha_2}(1)\big]^{-1}\int_0^1\Phi_{ A,\beta}(f,r)\,d\mu_{\alpha_2}(r)\\ &= \big[\nu_{\alpha_2}(1)\big]^{-1}\int_0^1 (1-r^2)^{\alpha_2-\alpha_1}\frac{d}{dr}\left[\int_0^r \Phi_{A,\beta}(f,t)\, d\mu_{\alpha_1}(t)\right]\, dr\\ &= \big[\nu_{\alpha_2}(1)\big]^{-1}\left[-\int_0^1\left(\int_0^r\Phi_{A,\beta}(f,t)\,d\mu_{\alpha_1}(t)\right)\, d(1-r^2)^{\alpha_2-\alpha_1}\right]\\ &\leq \big[\nu_{\alpha_2}(1)\big]^{-1}\mathsf{A}_{\alpha_1,\beta}(f,1)\int_0^1 \nu_{\alpha_1}(r)\, d\big[-(1-r^2)^{\alpha_2-\alpha_1}\big]\\ &=\mathsf{A}_{\alpha_1,\beta}(f,1)\big[\nu_{\alpha_2}(1)\big]^{-1}\left[\int_0^1 (1-r^2)^{\alpha_2-\alpha_1}\,d\mu_{\alpha_1}(r)\right] \\ &=\mathsf{A}_{\alpha_1,\beta}(f,1), \end{align*} thereby establishing $\mathsf{A}_{\alpha_2,\beta}(f,1)\le \mathsf{A}_{\alpha_1,\beta}(f,1)$. If this last inequality becomes equality, then the above argument forces $$ \int_0^r\Phi_{A,\beta}(f,t)\,d\mu_{\alpha_1}(t)=\mathsf{A}_{\alpha_1,\beta}(f,1) \nu_{\alpha_1}(r)\quad\forall\quad r\in (0,1), $$ whence yielding (via the just-verified (ii)) \[ f=\left\{\begin{array} {r@{\;}l} \hbox{constant}\quad & \hbox{when}\quad \beta<1\\ \hbox{linear\ map}\quad & \hbox{when}\quad \beta=1. \end{array} \right. 
\] \end{proof} \subsection{Monotonic Growth of $\mathsf{L}_{\alpha,\beta}(f,\cdot)$} Correspondingly, we first have the following Schwarz type lemma. \begin{proposition}\label{co1} Let $-\infty<\alpha<\infty$, $0\le\beta\le 1$, and $f\in H(\mathbb D)$ be of the form $f(z)=a_0+\sum_{k=n}^\infty a_kz^k$ with $n\in\mathbb N$. Then $$ (2\pi)^{1-\beta}\Big[\frac{|f^{(n)}(0)|}{n!}\Big]\le \mathsf{L}_{\alpha,\beta}(f,r)\left[\frac{\nu_\alpha(r)}{\int_0^rt^{n-\beta}\,d\mu_\alpha(t)}\right]\quad\forall\quad r\in (0,1) $$ with equality if and only if $f(z)=a_0+a_nz^n$. \end{proposition} \begin{proof} This follows from Lemma \ref{l2} (ii) and its equality case. \end{proof} The next monotonicity result involves a hypothesis stronger than that of Theorem \ref{th1}. \begin{theorem}\label{th2} Let $-\infty<\alpha<\infty$, $0\le\beta\le 1$, and $f\in U(\mathbb D)$ or $f(z)=a_0+a_nz^n$ with $n\in\mathbb N$. Then $r\mapsto\mathsf{L}_{\alpha,\beta}(f,r)$ is strictly increasing on $(0,1)$ unless \[ f=\left\{\begin{array} {r@{\;}l} constant &\quad \hbox{when}\quad \beta<1\\ linear\ map &\quad \hbox{when}\quad \beta=1. \end{array} \right. \] Consequently, \item{\rm(i)} \[ \lim_{r\to 0}\mathsf{L}_{\alpha,\beta}(f,r)=\left\{\begin{array} {r@{\;}l} 0\quad & \hbox{when}\quad \beta<1\\ |f'(0)|\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] \item{\rm(ii)} If $$ \Phi_{L,\beta}(f,0):=\lim_{r\to 0}\Phi_{L,\beta}(f,r)\quad\&\quad\Phi_{L,\beta}(f,1):=\lim_{r\to 1}\Phi_{L,\beta}(f,r)<\infty, $$ then $$ 0<r<s<1\Rightarrow 0\le \frac{\mathsf{L}_{\alpha,\beta}(f,s)-\mathsf{L}_{\alpha,\beta}(f,r)}{\log\nu_\alpha(s)-\log\nu_\alpha(r)}\leq \Phi_{L,\beta}(f,s)-\Phi_{L,\beta}(f,0) $$ with equality if and only if \[ f=\left\{\begin{array} {r@{\;}l} \hbox{constant}\quad & \hbox{when}\quad \beta<1\\ \hbox{linear\ map}\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] In particular, $t\mapsto \mathsf{L}_{\alpha,\beta}(f,t)$ is Lipschitz with respect to $\log\nu_\alpha(t)$ for $t\in (0,1)$. 
\end{theorem} \begin{proof} Similar to that for Theorem \ref{th1}, but this time by Lemma \ref{l1} (ii). \end{proof} Naturally, we can establish the corresponding ``norm'' estimate associated with $\Phi_{L,\beta}(f,\cdot)$. \begin{corollary}\label{co2} Let $0\le\beta\le 1$ and $f\in U(\mathbb D)$ or $f(z)=a_0+a_nz^n$ with $n\in\mathbb N$. \item{\rm(i)} If $-\infty<\alpha\le -1$, then $$ \int_0^1 \Phi_{L,\beta}(f,t)\,d\mu_\alpha(t)=\sup_{r\in (0,1)}\int_0^r \Phi_{L,\beta}(f,t)\,d\mu_\alpha(t)<\infty $$ if and only if $f$ is constant. Moreover, $\sup_{r\in (0,1)}\mathsf{L}_{\alpha,\beta}(f,r)=\Phi_{L,\beta}(f,1).$ \item{\rm(ii)} If $-1<\alpha<\infty$, then $$ \mathsf{L}_{\alpha,\beta}(f,r)\le\mathsf{L}_{\alpha,\beta}(f,1):=\sup_{s\in (0,1)}\mathsf{L}_{\alpha,\beta}(f,s)\quad\forall\quad r\in (0,1), $$ where the inequality becomes an equality for all $r\in (0,1)$ if and only if \[ f=\left\{\begin{array} {r@{\;}l} \hbox{constant}\quad & \hbox{when}\quad \beta<1\\ \hbox{linear\ map}\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] \item{\rm(iii)} $\alpha\mapsto\mathsf{L}_{\alpha,\beta}(f,1)$ is strictly decreasing on $(-1,\infty)$ unless \[ f=\left\{\begin{array} {r@{\;}l} \hbox{constant}\quad & \hbox{when}\quad \beta<1\\ \hbox{linear\ map}\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] \end{corollary} \begin{proof} The argument is similar to that for Corollary \ref{pr2}, but via Lemma \ref{l1} (ii). \end{proof} \section{Logarithmic Convexity} In this section, we treat the convexities of the functions: $\log r\mapsto \log\mathsf{A}_{\alpha,\beta}(f,r)$ and $\log r\mapsto \log\mathsf{L}_{\alpha,\beta}(f,r)$ for $r\in (0,1)$. \subsection{Two More Lemmas} The following are two technical preliminaries. 
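As a hedged numerical aside (illustrative only; the helper names are ours, not the paper's), the $D$-criterion appearing below rests on the identity $\frac{d^2}{du^2}\log f(e^u)=x\,D(f(x))$ at $x=e^u$, where $D(f(x))=\frac{f'(x)}{f(x)}+x\big(\frac{f'(x)}{f(x)}\big)'$; it can be sanity-checked with finite differences for, say, $f(x)=1+x$, where $D(f(x))=(1+x)^{-2}>0$:

```python
import math

# Illustrative sanity check (not from the paper): log x -> log f(x) is
# convex iff D(f(x)) = f'/f + x*(f'/f)' >= 0, because
# d^2/du^2 [log f(e^u)] = x * D(f(x)) with x = e^u.
# We verify this for f(x) = 1 + x, where D(f(x)) = 1/(1+x)^2 > 0.

def second_diff(phi, u, h=1e-3):
    # central second difference in the variable u = log x
    return (phi(u + h) - 2.0 * phi(u) + phi(u - h)) / (h * h)

f = lambda x: 1.0 + x
phi = lambda u: math.log(f(math.exp(u)))

for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    D = 1.0 / (1.0 + x) ** 2                  # closed form of D(1+x)
    numeric = second_diff(phi, math.log(x))   # should equal x * D(f(x))
    assert abs(numeric - x * D) < 1e-4
    assert numeric > 0                        # log-log convexity of 1 + x
```

The same finite-difference probe applies verbatim to the functions $h_\beta$ analyzed in the examples of this section.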
\begin{lemma}\cite[Corollaries 2-3 \& Proposition 7]{WZ}\label{wz} Suppose $f(x)$ and $\{h_k(x)\}_{k=0}^\infty$ are positive and twice differentiable for $x\in (0,1)$ such that the function $H(x)=\sum_{k=0}^\infty h_k(x)$ is also twice differentiable for $x\in (0,1)$. Then: \item{\rm(i)} $\log x\mapsto\log f(x)$ is convex if and only if $\log x\mapsto\log f(x^2)$ is convex. \item{\rm(ii)} The function $\log x\mapsto \log f(x)$ is convex if and only if the $D$-notation of $f$, $$ D(f(x)):=\frac{f'(x)}{f(x)}+ x\left(\frac{f'(x)}{f(x)}\right)', $$ satisfies $D(f(x))\ge 0$ for all $x\in (0,1)$. \item{\rm(iii)} If for each $k$ the function $\log x\mapsto \log h_k(x)$ is convex, then $\log x\mapsto \log H(x)$ is also convex. \end{lemma} \begin{lemma}\label{uni} Let $f\in H(\mathbb D)$. Then $f$ belongs to $U(\mathbb D)$ provided that one of the following two conditions is valid: \item{\rm(i)} \cite{Nu} or \cite[Lemma 2.1]{AlD} $$ f(0)=f'(0)-1=0\quad\&\quad \left|\frac{z^2f'(z)}{f^2(z)}-1\right|<1\quad\forall\quad z\in \mathbb D. $$ \item{\rm(ii)} \cite[Theorem 1]{Ne} or \cite[Theorem 8.12]{Du} $$ \left|\left[\frac{f''(z)}{f'(z)}\right]'-\frac{1}{2}\left[\frac{f''(z)}{f'(z)}\right]^2\right|\leq 2(1-|z|^2)^{-2}\quad\forall\quad z\in \mathbb D. $$ \end{lemma} \subsection{Log-convexity for $\mathsf{A}_{\alpha,\beta}(f,\cdot)$} Such a property is given below. \begin{theorem}\label{th3} Let $0\le\beta\le 1$ and $0<r<1$. \item{\rm(i)} If $\alpha\in (-\infty,-3)$, then there exist $f, g\in H(\mathbb D)$ such that $\log r\mapsto\log\mathsf{A}_{\alpha,\beta}(f,r)$ is not convex and $\log r\mapsto\log \mathsf{A}_{\alpha,\beta}(g,r)$ is not concave. \item{\rm(ii)} If $\alpha\in [-3,0]$, then $\log r\mapsto \log\mathsf{A}_{\alpha,1}(a_nz^n,r)$ is convex for $a_n\not=0$ with $n\in\mathbb N$. Consequently, $$ \log r\mapsto \log\mathsf{A}_{\alpha,1}\big(f,r\big) $$ is convex for all $f\in U(\mathbb D)$. 
\item{\rm(iii)} If $\alpha\in (0,\infty)$, then $\log r\mapsto\log\mathsf{A}_{\alpha,\beta}(a_nz^n,r)$ is not convex for $a_n\not=0$ and $n\in \mathbb N$. \end{theorem} \begin{proof} The key issue is to check whether or not $\log r\mapsto \log\mathsf{A}_{\alpha,\beta}(z^n,r)$ is convex for $r\in (0,1)$. To see this, let us borrow some symbols from \cite{WZ}. For $\lambda\ge 0$ and $0<x<1$ we define $$ f_\lambda (x)=\int_0^x t^\lambda(1-t)^\alpha dt $$ and $$ \Delta (\lambda, x)=\frac{f_\lambda'(x)}{f_\lambda (x)}+x\left(\frac{f_\lambda'(x)}{f_\lambda(x)}\right)'-\left[\frac{f_0'(x)}{f_0(x)}+x\left(\frac{f_0'(x)}{f_0(x)}\right)'\right]. $$ Given $n\in\mathbb N$, a simple calculation shows $\Phi_{A,\beta}(z^n,t)=\pi^{1-\beta} t^{2(n-\beta)}$, and then a change of variables yields \begin{eqnarray*} \mathsf{A}_{\alpha,\beta}(z^n,r)&=&\frac{\int_0^r \Phi_{A,\beta}(z^n,t)\,d\mu_\alpha(t)}{\nu_\alpha(r)}\\ &=&\frac{\pi^{1-\beta}\int_0^{r^2}t^{n-\beta}(1-t)^\alpha \,dt }{\int_0^{r^2} (1-t)^\alpha\, dt}\\ &=& \pi^{1-\beta}\left[\frac{f_{n-\beta}(r^2)}{f_{0}(r^2)}\right]. \end{eqnarray*} In accordance with Lemma \ref{wz} (i)-(ii), it is straightforward to work out that $\log r\mapsto\log\mathsf{A}_{\alpha,\beta}(z^n,r)$ is convex for $r\in (0,1)$ if and only if $\Delta (n-\beta, x)\ge 0$ for any $x\in (0,1)$. (i) Under $\alpha\in (-\infty,-3)$, we follow the argument for \cite[Proposition 6]{WZ} to get $$ \lim_{x\to 1}\Delta(\lambda,x)=\frac{\lambda (\alpha+1)(\lambda+2+\alpha)}{(\alpha+2)^2(\alpha+3)}. $$ Choosing \[ f(z)=z^n=\left\{\begin{array} {r@{\;}l} z\quad & \hbox{when}\quad \beta<1\\ z^2\quad & \hbox{when}\quad \beta=1 \end{array} \right. \] and $\lambda=n-\beta$, we find $\lim_{x\to 1}\Delta(\lambda,x)<0$, whence deriving that $\log r\mapsto \log \mathsf{A}_{\alpha,\beta}(f,r)$ is not convex. 
In the meantime, picking $n\in \mathbb N$ such that $n>\beta-(2+\alpha)$ and putting $g(z)=z^n$, we obtain $$ \lim_{x\to 1}\Delta(n-\beta,x)=\frac{(n-\beta)(\alpha+1)(n-\beta+2+\alpha)}{(\alpha+2)^2(\alpha+3)}>0, $$ whence deriving that $\log r\mapsto \log\mathsf{A}_{\alpha,\beta}(g,r)$ is not concave. (ii) Under $\alpha\in [-3,0]$, we handle the two situations. {\it Situation 1}: $f\in U(\mathbb D)$. Upon writing $f(z)=\sum_{n=0}^\infty a_n z^n$, we compute $$ \Phi_{A,1}\big(f,t\big)=(\pi t^2)^{-1}A(f,t)=\sum_{n=0}^\infty n|a_n|^2 t^{2(n-1)}, $$ and consequently, $$ \mathsf{A}_{\alpha,1}(f,r)=\frac{\sum_{n=0}^\infty n|a_n|^2\int_0^r (\pi t^2)^{-1}A(z^n,t)\, d\mu_\alpha(t)}{\nu_\alpha(r)}=\sum_{n=0}^\infty n |a_n|^2 \mathsf{A}_{\alpha,1}(z^n,r). $$ So, by Lemma \ref{wz} (iii), we see that the convexity of $$ \log r\mapsto\log\mathsf{A}_{\alpha,1}(f,r)\quad\hbox{under}\quad f\in U(\mathbb D) $$ follows from the convexity of $$ \log r\mapsto\log\mathsf{A}_{\alpha,1}(z^n,r)\quad\hbox{under}\quad n\in\mathbb N. $$ So, it remains to verify this last convexity, which is done in the next consideration. {\it Situation 2}: $f(z)=a_nz^n$ with $a_n\not=0$. Three cases need to be handled. {\it Case 1}: $\alpha=0$. An easy computation shows $$ \mathsf{A}_{0,1}(z^n,r)=n^{-1}{r^{2(n-1)}} $$ and so $\log r\mapsto\log\mathsf{A}_{0,1}(z^n,r)$ is convex. {\it Case 2}: $-2\le\alpha<0$. Under this condition, we see from the arguments for \cite[Propositions 4-5]{WZ} that $$ \Delta(n-1,x)\geq 0\quad\forall\quad n-1\geq 0\ \ \&\ \ 0<x<1, $$ and so that $\log r\mapsto\log\mathsf{A}_{\alpha,1}(z^n,r)$ is convex. {\it Case 3}: $-3\leq \alpha<-2$. With the assumption, we also get from the arguments for \cite[Propositions 4-5]{WZ} that $$ \Delta (n-1,x)\geq \Delta(-2-\alpha,x)>0\quad\forall\quad x\in (0,1)\ \ \&\ \ n-1\in [-2-\alpha,\infty) $$ and so that $\log r\mapsto\log\mathsf{A}_{\alpha,1}(z^n,r)$ is convex when $n\ge 2$. 
Here it is worth noting that $\mathsf{A}_{\alpha,1}(z,r)=1$, so the convexity of $\log r\mapsto\log\mathsf{A}_{\alpha,1}(z,r)=0$ is trivial. (iii) Under $0<\alpha<\infty$, from the argument for \cite[Proposition 6]{WZ} we know that $\Delta(n-\beta,x)<0$ as $x$ is sufficiently close to $1$. Thus $\log r\mapsto \log\mathsf{A}_{\alpha,\beta}(a_n z^n,r)$ is not convex under $a_n\not=0$. \end{proof} The following illustrates that the function $\log r\mapsto \log\mathsf{A}_{\alpha,\beta}(f,r)$ is not always concave for $\alpha>0$, $0\le\beta\le 1$, and $f\in U(\mathbb D)$. \begin{example} Let $\alpha=1$, $\beta\in\{0,1\}$, and $f(z)=z+\frac{z^2}{2}$. Then the function $\log r\mapsto \log\mathsf{A}_{\alpha,\beta}(f,r)$ is neither convex nor concave for $r\in (0,1)$. \end{example} \begin{proof} A direct computation shows $$ \left|\frac{z^2f'(z)}{f^2(z)}-1\right|=\left|\frac{z^2(1+z)}{(z+\frac{z^2}{2})^2}-1\right|=\frac{|z|^2}{|z+2|^2}<1 $$ since $$ |z|<1<2-|z|\leq|z+2|\quad\forall\quad z\in \mathbb D. $$ So, $f\in U(\mathbb D)$ owing to Lemma \ref{uni} (i). By $f'(z)=z+1$ we have $$ A(f,t)=\int_{t\mathbb D}|z+1|^2\, dA(z)=\pi \Big(t^2+\frac{t^4}{2}\Big), $$ plus \[ \int_0^r \Phi_{A,\beta}(f,t)\,d\mu_1(t)=\left\{\begin{array} {r@{\;}l} \frac{\pi}{2}\Big(r^4-\frac{r^6}{3}-\frac{r^8}{4}\Big)\quad & \hbox{when}\quad \beta=0\\ r^2-\frac{r^4}{4}-\frac{r^6}{6}\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] Meanwhile, $$ \nu_1(r)=\int_0^r (1-t^2)dt^2=r^2-\frac{r^4}{2}. $$ So, we get \[ \mathsf{A}_{1,\beta}(f,r)=\left\{\begin{array} {r@{\;}l} \frac{\pi(12r^2-4r^4-3r^6)}{12(2-r^2)} \quad & \hbox{when}\quad \beta=0\\ \frac{12-3r^2-2r^4}{6(2-r^2)}\quad & \hbox{when}\quad \beta=1 \end{array} \right. \] and in turn consider the logarithmic convexity of the following function \[ h_\beta(x)=\left\{\begin{array} {r@{\;}l} \frac{12x-4x^2-3x^3}{2-x}\quad & \hbox{when}\quad \beta=0\\ \frac{12-3x-2x^2}{2-x}\quad & \hbox{when}\quad \beta=1 \end{array} \right. \] for $x\in (0,1)$, where $x=r^2$ by Lemma \ref{wz} (i). 
Using the so-called D-notation in Lemma \ref{wz}, we have \[ D(h_\beta(x))=\left\{\begin{array} {r@{\;}l} D(12x-4x^2-3x^3)-D(2-x)\quad & \hbox{when}\quad \beta=0\\ D(12-3x-2x^2)-D(2-x)\quad & \hbox{when}\quad \beta=1 \end{array} \right. \] for $x\in (0,1)$. By an elementary calculation, we get \[ \left\{\begin{array} {r@{\;}l} D(12x-4x^2-3x^3)=\frac{-48-144x+12x^2}{(12-4x-3x^2)^2}\\ D(2-x)=\frac{-2}{(2-x)^2}\\ D(12-3x-2x^2)=\frac{-36-96x+6x^2}{(12-3x-2x^2)^2}. \end{array} \right. \] Consequently, \[ D(h_\beta(x))=\left\{\begin{array} {r@{\;}l} \frac{2g_\beta(x)}{(12-4x-3x^2)^2(2-x)^2}\quad & \hbox{when}\quad \beta=0\\ \frac{2g_\beta(x)}{(12-3x-2x^2)^2(2-x)^2}\quad & \hbox{when}\quad \beta=1, \end{array} \right. \] where \[ g_\beta(x)=\left\{\begin{array} {r@{\;}l} 48-288x+232x^2-72x^3+15x^4\quad & \hbox{when}\quad \beta=0\\ 72-192x+147x^2-48x^3+7x^4\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] Now, under $x\in (0,1)$ we find $$ g_0'(x)=-288+464x-216x^2+60x^3\quad \&\quad g_0''(x)=464-432x+180x^2. $$ Clearly, $g_0''(x)$ is an open-upward parabola with the axis of symmetry $x=\frac{6}{5}>1$. By $g_0''(1)=212>0$ and the monotonicity of $g_0''$ on $(0,1)$, we have $g_0''(x)>0$ for all $x\in (0,1)$. Thus $g_0'$ is increasing on $(0,1)$. The following condition $$ g_0'(0)=-288<0\quad \&\quad g_0'(1)=20>0 $$ yields an $x_1\in (0,1)$ such that $g_0'(x)<0$ for $x\in(0,x_1)$ and $g_0'(x)>0$ for $x\in (x_1,1)$. Since $g_0(0)=48$ and $g_0(1)=-65$, there exists an $x_0\in (0,1)$ such that $g_0(x)>0$ for $x\in (0,x_0)$ and $g_0(x)<0$ for $x\in (x_0,1)$. Thus the function $\log x\mapsto\log h_0(x)$ is neither convex nor concave. Similarly, under $x\in (0,1)$ we have $$ g_1'(x)=-192+294x-144x^2+28x^3\quad \&\quad g_1''(x)=294-288x+84x^2. $$ Obviously, $g_1''(x)$ is an open-upward parabola with the axis of symmetry $x=\frac{12}{7}>1$. By $g_1''(1)=90>0$ and the monotonicity of $g_1''$ on $(0,1)$, we have $g_1''(x)>0$ for all $x\in (0,1)$. 
Thus $g_1'$ is increasing on $(0,1)$. The following condition $$ g_1'(0)=-192<0\quad \&\quad g_1'(1)=-14<0 $$ yields $g_1'(x)<0$ for $x\in(0,1)$. Since $g_1(0)=72$ and $g_1(1)=-14$, there exists an $x_0\in (0,1)$ such that $g_1(x)>0$ for $x\in (0,x_0)$ and $g_1(x)<0$ for $x\in (x_0,1)$. Thus the function $\log x\mapsto\log h_1(x)$ is neither convex nor concave. \end{proof} \subsection{Log-convexity for $\mathsf{L}_{\alpha,\beta}(f,\cdot)$} Analogously, we can establish the expected convexity for the mixed lengths. \begin{theorem}\label{th4} Let $0\le\beta\le 1$ and $0<r<1$. \item{\rm(i)} If $\alpha\in (-\infty,-3)$, then there exist $f, g\in H(\mathbb D)$ such that $\log r\mapsto\log\mathsf{L}_{\alpha,\beta}(f,r)$ is not convex and $\log r\mapsto\log \mathsf{L}_{\alpha,\beta}(g,r)$ is not concave. \item{\rm(ii)} If $\alpha\in [-3,0]$, then $\log r\mapsto \log\mathsf{L}_{\alpha,1}(a_nz^n,r)$ is convex for $a_n\not=0$ with $n\in\mathbb N$. Consequently, $\log r\mapsto \log\mathsf{L}_{\alpha,1}(f,r)$ is convex for $f\in U(\mathbb D)$. \item{\rm(iii)} If $\alpha\in (0,\infty)$, then $\log r\mapsto\log\mathsf{L}_{\alpha,\beta}(a_nz^n,r)$ is not convex for $a_n\not=0$ and $n\in \mathbb N$. \end{theorem} \begin{proof} The argument is similar to that for Theorem \ref{th3}, except that for $\alpha\in [-3,0]$ we use the following statement: if $f\in U(\mathbb D)$, then there exists $g(z)=\sum_{n=0}^\infty b_n z^n$ with $g^2=f'$ on $\mathbb D$ (possible since the derivative $f'$ is zero-free there) and $f'(0)=g^2(0)$, and hence \begin{eqnarray*} \Phi_{L,1}(f,t)&=&(2\pi t)^{-1}\int_{t\mathbb T}|f'(z)||dz|\\ &=&(2\pi t)^{-1} \int_{t\mathbb T} |g(z)|^2|dz|\\ &=&\sum_{n=0}^\infty |b_n|^2 t^{2n}. \end{eqnarray*} \end{proof} Our concluding example shows that under $0<\alpha<\infty$ and $0\le\beta\le 1$ one cannot get that $\log\mathsf{L}_{\alpha,\beta}(f,r)$ is convex or concave in $\log r$ for all functions $f\in U(\mathbb D)$. 
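Before that example, here is a hedged numerical companion (an illustrative sketch; the helper names `L_mean`, `h1`, `d2` are ours, not the paper's). For $f(z)=(z+2)^3$, direct quadrature reproduces the closed form of $\mathsf{L}_{1,1}(f,r)$ computed below, and the discrete second derivative of its logarithm in $\log x$ (with $x=r^2$) changes sign on $(0,1)$:

```python
import math

# Numerical companion (not from the paper): for f(z) = (z+2)^3 one has
# |f'(t e^{i theta})| = 3|t e^{i theta} + 2|^2, so L(f,t) = 6 pi t (t^2 + 4)
# and Phi_{L,1}(f,t) = 3(t^2 + 4).  Averaging against
# d mu_1(t) = (1 - t^2) 2t dt gives L_{1,1}(f,r) = (24 - 9r^2 - 2r^4)/(2 - r^2).

def L_mean(r, n=2000):
    # midpoint quadrature of the mu_1-average of Phi_{L,1}(f,t) = 3(t^2 + 4)
    h = r / n
    num = den = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        w = 2.0 * t * (1.0 - t * t) * h
        num += 3.0 * (t * t + 4.0) * w
        den += w
    return num / den

closed = lambda r: (24.0 - 9.0 * r**2 - 2.0 * r**4) / (2.0 - r**2)
assert all(abs(L_mean(r) - closed(r)) < 1e-4 for r in (0.3, 0.6, 0.9))

# With x = r^2, log-convexity of L_{1,1}(f,r) in log r amounts to convexity
# of u -> log h1(e^u) for h1(x) = (24 - 9x - 2x^2)/(2 - x); its discrete
# second derivative changes sign on (0,1).
h1 = lambda x: (24.0 - 9.0 * x - 2.0 * x * x) / (2.0 - x)
phi = lambda u: math.log(h1(math.exp(u)))
def d2(x, h=1e-3):
    u = math.log(x)
    return (phi(u + h) - 2.0 * phi(u) + phi(u - h)) / (h * h)

assert d2(0.3) > 0 > d2(0.9)  # convex for small x, concave near x = 1
```

This sign change is exactly what the polynomial $g_1$ in the proof below detects analytically.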
\begin{example} Let $\alpha=1$, $\beta\in\{0,1\}$, and $f(z)=(z+2)^3$. Then the function $\log r\mapsto\log\mathsf{L}_{\alpha,\beta}(f,r)$ is neither convex nor concave for $r\in (0,1)$. \end{example} \begin{proof} Clearly, we have $$ f'(z)=3(z+2)^2\ \ \&\ \ f''(z)=6(z+2) $$ as well as the Schwarzian derivative $$ \left[\frac{f''(z)}{f'(z)}\right]'-\frac{1}{2}\left[\frac{f''(z)}{f'(z)}\right]^2=\frac{-4}{(z+2)^2}. $$ It is easy to see that $$ \sqrt{2}(1-|z|^2)\leq 2-|z|\quad\forall\quad z\in \mathbb D. $$ So, $$ \left|\left[\frac{f''(z)}{f'(z)}\right]'-\frac{1}{2}\left[\frac{f''(z)}{f'(z)}\right]^2\right|=\frac{4}{|z+2|^2}\leq \frac{4}{(2-|z|)^2}\leq \frac{2}{(1-|z|^2)^2}. $$ By Lemma \ref{uni} (ii), $f$ belongs to $U(\mathbb D)$. Consequently, $$ L(f,t)=\int_0^{2\pi}|f'(te^{i\theta})|t\, d\theta=6\pi t(t^2+4) $$ and \[ \int_0^r\Phi_{L,\beta}(f,t)\,d\mu_1(t)=\left\{\begin{array} {r@{\;}l} 12\pi\Big(\frac{4}{3}r^3-\frac{3}{5}r^5-\frac{1}{7}r^7\Big) \quad & \hbox{when}\quad \beta=0\\ 12r^2-\frac{9}{2}r^4-r^6\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] Note that $\nu_1(r)=r^2-\frac{r^4}{2}$. So, \[ \mathsf{L}_{1,\beta}(f,r)=\left\{\begin{array} {r@{\;}l} \frac{24\pi(140r-63r^3-15r^5)}{105(2-r^2)} \quad & \hbox{when}\quad \beta=0\\ \frac{24-9r^2-2r^4}{2-r^2} \quad & \hbox{when}\quad \beta=1. \end{array} \right. \] To reach our conclusion, we only need to consider the logarithmic convexity of the function \[ h_\beta(x)=\left\{\begin{array} {r@{\;}l} \frac{140x-63x^3-15x^5}{2-x^2}\quad & \hbox{when}\quad \beta=0\\ \frac{24-9x-2x^2}{2-x}\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] {\it Case 1}: $\beta=0$. Applying the definition of $D$-notation, we obtain $$ D(140x-63x^3-15x^5)=\frac{-35280 x-33600 x^3+3780x^5}{(140-63x^2-15x^4)^2} $$ and $$ D(2-x^2)=\frac{-8x}{(2-x^2)^2}, $$ which gives $$ D\big(h_0(x)\big)=D(140x-63x^3-15x^5)-D(2-x^2)=\frac{4xg_0(x)}{(140-63x^2-15x^4)^2(2-x^2)^2}, $$ where $$ g_0(x)=3920-33600x^2+28098x^4-8400x^6+1395x^8.
$$ Obviously, $$ g_0(0)=3920>0\quad\&\quad g_0(1)=-8587<0. $$ Now letting $s=x^2$, we get $$ g_0(x)=G_0(s)=3920-33600s+28098s^2-8400s^3+1395s^4, $$ and $$ G'_0(s)=-33600+56196s-25200s^2+5580s^3\ \&\ G''_0(s)=56196-50400s+16740s^2. $$ Since the axis of symmetry of $G''_0$ is $s=\frac{140}{93}>1$, $G''_0$ is decreasing on $(0,1)$. Due to $G''_0(1)=22536>0$, we have $G''_0(s)>0$ for all $s\in (0,1)$, i.e., $G'_0(s)$ is increasing on $(0,1)$. By $$ G'_0(0)=-33600<0\quad\&\quad G'_0(1)=2976>0, $$ we conclude that there exists an $s_0\in(0,1)$ such that $G'_0(s)<0$ for $s\in(0,s_0)$ and $G'_0(s)>0$ for $s\in (s_0,1)$. Then there exists an $x_0\in (0,1)$ such that $g_0(x)$ is decreasing for $x\in (0,x_0)$ and $g_0(x)$ is increasing for $x\in(x_0,1)$. Thus there exists an $x_1\in (0,1)$ such that $g_0(x)>0$ for $x\in(0,x_1)$ and $g_0(x)<0$ for $x\in(x_1,1)$. As a result, we find that $\log r\mapsto\log\mathsf{L}_{\alpha,0}(f,r)$ is neither concave nor convex. {\it Case 2}: $\beta=1$. Again using the $D$-notation, we obtain $$ D(24-9x-2x^2)=\frac{-216-192x+18x^2}{(24-9x-2x^2)^2} $$ and $$ D(2-x)=\frac{-2}{(2-x)^2}, $$ which gives $$ D\big(h_1(x)\big)=D(24-9x-2x^2)-D(2-x)=\frac{2g_1(x)}{(24-9x-2x^2)^2(2-x)^2}, $$ where $$ g_1(x)=144-384x+297x^2-96x^3+13x^4. $$ Now we have $$ g'_1(x)=-384+594x-288x^2+52x^3\quad \&\quad g''_1(x)=594-576x+156x^2. $$ Since the axis of symmetry of $g''_1(x)$ is $x=\frac{24}{13}>1$, $g''_1(x)$ is decreasing on $(0,1)$. Due to $g''_1(1)=174>0$, we have $g''_1(x)>0$ for all $x\in (0,1)$, i.e., $g'_1(x)$ is increasing on $(0,1)$. By $$ g'_1(0)=-384<0\quad\&\quad g'_1(1)=-26<0, $$ we conclude that $g'_1(x)<0$ for $x\in (0,1)$. Obviously, $$ g_1(0)=144>0\quad \&\quad g_1(1)=-26<0. $$ Hence there exists an $x_0\in (0,1)$ such that $g_1(x)>0$ for $x\in (0,x_0)$ and $g_1(x)<0$ for $x\in (x_0,1)$. Consequently, we find that $\log r\mapsto\log\mathsf{L}_{\alpha,1}(f,r)$ is neither concave nor convex. \end{proof} \end{document}
\begin{document} \title{Transformation design and nonlinear Hamiltonians} \author{Thomas Brougham\thanks{Corresponding author. Email: thomas.brougham@gmail.com}, Goce Chadzitaskos and Igor Jex\\ Department of Physics, FNSPE, Czech Technical University in Prague, B\v{r}ehov\'{a} 7,\\ 115 19 Praha 1, Czech Republic.} \maketitle \abstract{We study a class of nonlinear Hamiltonians, with applications in quantum optics. The interaction terms of these Hamiltonians are generated by taking a linear combination of powers of a simple `beam splitter' Hamiltonian. The entanglement properties of the eigenstates are studied. Finally, we show how to use this class of Hamiltonians to perform special tasks such as conditional state swapping, which can be used to generate optical cat states and to sort photons. \\ keywords: quantum optics, nonlinear optics, quantum information} \section{Introduction} The consideration of nonlinear optical processes within quantum optics has led to the study of many important physical phenomena. These nonlinear optical processes have found many diverse applications, such as generating entangled photons using parametric down conversion \cite{para} or creating optical routers using intensity-dependent properties of an optical fibre \cite{jensen, chefles}. The growing field of quantum information has added to the interest in nonlinear optics. This is due to the fact that nonlinear optics can be used to perform certain operations that are not possible using only linear optics. For example, a class of nonlinear optical Hamiltonians, which have been used to model four wave mixers \cite{milburn,yurke,tombesi, yurke2}, has been shown to allow one to prepare a macroscopically distinguishable superposition of quantum states. In this paper we will consider a class of nonlinear Hamiltonians. The Hamiltonians will be functions (e.g.
a polynomial) of a quadratic Hamiltonian, $\hat H_0$, that leads to linear differential equations in the field mode operators. Hamiltonians of this type have been used to describe optical beam splitters. The nonlinear Hamiltonians will thus commute with $\hat H_0$. Hence the task of obtaining the eigenvectors and eigenvalues of the nonlinear Hamiltonians will reduce to diagonalising the simple quadratic Hamiltonian. It will be shown that the eigenvalues and eigenvectors are related to a class of orthogonal Krawtchouk polynomials \cite{askey}. We thus establish that this particular class of nonlinear Hamiltonians is exactly solvable. The connection between orthogonal polynomials and nonlinear optical processes has been studied previously and general results have been obtained \cite{Goce2, Goce,Horow}. In particular, the mathematical results that we present in section \ref{sec2} are a special case of the general theory presented in \cite{Goce2}. In \cite{Goce2} a general method is described for solving two-boson systems in quantum optics via systems of orthogonal polynomials. It is possible to study many of these problems using different mathematical techniques. In particular, beam splitters have been studied extensively \cite{klauder, campos, leonhardt}. The benefit of using orthogonal polynomials, however, is that there exists extensive literature on their properties, which one can exploit for any given problem; see for example \cite{askey, orthop}. The organization of the paper will be as follows. In section \ref{sec2} we will diagonalise the quadratic Hamiltonian. This will be achieved by first showing the equivalence between diagonalising the Hamiltonian and solving a particular recurrence relation. The solutions of the recurrence relation are the Krawtchouk polynomials. The properties of these polynomials will be used to obtain the spectrum of the Hamiltonian. In section \ref{sec3} we study the entanglement properties of the energy eigenstates.
The properties of the nonlinear Hamiltonians will be investigated in section \ref{sec4}. In particular we will show how the Hamiltonians can be used to perform a conditional swap operation on the two modes when all of the photons are initially prepared in one mode. We will also show how this enables one to prepare optical Schr\"{o}dinger cat states. The dynamics of the nonlinear Hamiltonians is studied in further detail in section \ref{sec5}. Finally, we discuss our results in section \ref{conc}. \section{Diagonalising the quadratic Hamiltonian} \label{sec2} In this section we apply the method investigated in \cite{Goce2}. Suppose we have an electromagnetic field that has two modes 1 and 2. Associated with these modes are the creation and annihilation operators $\hat a^{\dagger}_j$ and $\hat a_j$, where $j=1,2$. These operators obey the standard canonical commutation relations $[\hat a_j,\hat a^{\dagger}_j]=\hat 1$ and $[\hat a_j,\hat a_k]=0$, for $j\ne k$. Suppose further that the two modes interact. If this interaction is quadratic, then one possible form for the interaction Hamiltonian is \begin{equation} \label{bsh} \hat H_0=\gamma\hat a^{\dagger}_{1}\hat a_{2}+\gamma^*\hat a^{\dagger}_{2}\hat a_{1}, \end{equation} where the free evolution terms are ignored.\footnote{One could imagine that we are working in an interaction picture.} Hamiltonians of the form (\ref{bsh}) are used to describe beam splitters. A simple class of nonlinear interaction Hamiltonians can be constructed by introducing some nonlinear function $f(x)$ and forming a new Hamiltonian $\hat H=f(\hat H_0)$, where $\hat H_0$ is defined in equation (\ref{bsh}). A simple example of this would be taking $f(x)$ to be a polynomial, e.g. \begin{equation} \label{nonlinear} \hat H=\mu\hat H_0+\lambda(\hat H_0)^2.
\end{equation} The nonlinear Hamiltonian described in equation (\ref{nonlinear}) has been studied before in connection with various nonlinear optical phenomena; see for example \cite{milburn,tombesi}. The quadratic Hamiltonian $\hat H_0$ conserves the total number of photons, i.e. \begin{equation} [\hat n,\hat H_0]=0, \end{equation} where $\hat n=\hat n_1+\hat n_2$. The class of nonlinear Hamiltonians $\hat H=f(\hat H_0)$, where $f(x)$ is some polynomial, will also conserve the total number of photons. By construction we see that $[\hat H_0,\hat H]=0$. We can thus obtain the eigenvectors of $\hat H$ by diagonalising $\hat H_0$. The fact that both Hamiltonians conserve the total photon number suggests that we should diagonalise the Hamiltonian (\ref{bsh}) in subspaces with fixed numbers of photons. If we have a subspace where the total number of photons is constant and equals $M$, then the dimension of this subspace will be $M+1$ and any state belonging to the subspace can be written in the form \begin{equation} |\psi^M\rangle_{12}=\sum_{n=0}^{M}{\xi_n|M-n,n\rangle_{12}}, \end{equation} where $|M-n,n\rangle_{12}$ represents the Fock state $|M-n\rangle_1\otimes|n\rangle_2$. Let $|E^M\rangle_{12}$ denote an eigenstate of (\ref{bsh}), hence $\hat H_0|E^M\rangle_{12}=E|E^M\rangle_{12}$. For the sake of convenience we shall express the eigenstate in the following form \begin{equation} \label{evector} |E^M\rangle_{12}=\sum^{M}_{n=0}{c^M_ne^{-ing}|M-n,n\rangle_{12}}, \end{equation} where $g=\arg(\gamma)$, i.e. $\gamma=|\gamma|e^{ig}$. Acting on equation (\ref{evector}) with $\hat H_0$ yields the expression \begin{eqnarray} \hat H_0|E^M\rangle_{12}=\gamma\sum_n{\sqrt{n(M-n+1)}e^{-ing}c^M_{n}|M-n+1,n-1\rangle_{12}}\nonumber\\ +\gamma^*\sum_n{\sqrt{(n+1)(M-n)}e^{-ing}c^M_n|M-n,n+1\rangle_{12}}=E|E^M\rangle_{12}.
\end{eqnarray} By re-arranging this expression one can show that the coefficients $c^M_n$ obey the following three term recurrence relation \begin{equation} \label{3term} \frac{E}{|\gamma|}c^M_n=\sqrt{(n+1)(M-n)}c^M_{n+1}+\sqrt{n(M-n+1)}c^M_{n-1}. \end{equation} The recurrence relation (\ref{3term}) can be solved to find both the eigenvalues and the coefficients that appear in the Fock basis expansion of the energy eigenvectors. The mathematical properties of the recurrence relation (\ref{3term}) enable us to determine many properties of the energy eigenvectors and eigenvalues. For example, if equation (\ref{evector}) is an eigenvector of $\hat H_0$ corresponding to the eigenvalue $E$, then so is the vector $|-E^M\rangle_{12}=\sum_n{(-1)^n\exp(-ing)c^M_n|M-n,n\rangle_{12}}$, which corresponds to the eigenvalue $-E$. The proof of this fact follows simply by replacing the terms $c^M_n$, in (\ref{3term}), with $(-1)^{n}c^M_n$. We thus see that for every positive eigenvalue of energy there must exist a corresponding negative energy eigenvalue. The coefficients $c^M_n$ will be functions of $E$. If we formally solve the recurrence relation (\ref{3term}), then we find that $c^M_n(E)$ is a polynomial of degree $n$, in the variable $E$. The fact that the total number of photons is $M$ means that $c^M_{M+1}(E)=0$, so that the roots of the polynomial $c^M_{M+1}(E)$ correspond to the eigenvalues of (\ref{bsh}).
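As a quick numerical illustration (a sketch in NumPy, not code from the paper), the recurrence (\ref{3term}) can be viewed as an eigenvalue problem for the tridiagonal matrix with off-diagonal entries $\sqrt{(n+1)(M-n)}$; the $\pm E$ pairing noted above is then visible directly in the spectrum.

```python
import numpy as np

# Build the (M+1)x(M+1) matrix of H0/|gamma| on the M-photon subspace:
# tridiagonal, with off-diagonal entries sqrt((n+1)(M-n)) taken from
# the three-term recurrence.
M = 6
T = np.zeros((M + 1, M + 1))
for n in range(M):
    T[n, n + 1] = T[n + 1, n] = np.sqrt((n + 1) * (M - n))

evals = np.sort(np.linalg.eigvalsh(T))
print(np.round(evals, 10))
# every positive eigenvalue is paired with a negative one
paired = np.allclose(evals, -evals[::-1])
print(paired)  # True
```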
To help explain the last fact consider the matrix representation of equation (\ref{3term}) \begin{equation} \frac{E}{|\gamma|}\left(\begin{matrix} c^M_0\\ c^M_1\\ \vdots\\ c^M_M\end{matrix} \right)=\left(\begin{matrix} 0 & \sqrt{M} & 0 & 0 & \ldots & 0 \\ \sqrt{M} & 0 & \sqrt{2(M-1)} & 0 & \ldots & 0\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ 0 & 0 & \ldots & \ldots & \sqrt{M} & 0\end{matrix} \right)\left(\begin{matrix} c^M_0\\ c^M_1\\ \vdots\\ c^M_M\end{matrix}\right) +\left(\begin{matrix} 0\\ 0\\ \vdots\\ \phi_{M+1}c^M_{M+1}\end{matrix}\right), \end{equation} where $\phi_{M+1}$ is the term that we would find in front of $c^M_{M+1}$ in the recurrence relation (\ref{3term}). It can be seen that if $c^M_{M+1}(E)=0$, then the column vector ${\bf c}^M$ will be an eigenvector of the Hamiltonian, corresponding to the eigenvalue $E/|\gamma|$. An important point to note about equation (\ref{3term}) is that the factors that multiply both $c^M_{n+1}$ and $c^M_{n-1}$ are positive. This fact enables us to make use of a theorem of Favard \cite{favard}. For a recurrence relation of the form \begin{equation} xP_n(x)=b_{n+1}P_{n+1}(x)+b_{n-1}P_{n-1}(x), \end{equation} where the sequence $\{b_n\}$ is non-negative, there exists a linear functional $\mathcal{L}$, which has the following properties \begin{eqnarray} \mathcal{L}[1]&=&1,\nonumber\\\mathcal{L}[P_m(x)P_n(x)]&=&0\;\,\text{for}\;m\ne n,\nonumber\\ \mathcal{L}[P^2_n(x)]&\ne& 0. \end{eqnarray} Applying this theorem to (\ref{3term}) leads to the conclusion that the set of polynomials $\{c^M_n(x)\}$, will be orthogonal to each other with respect to the linear functional $\mathcal{L}$. A standard property of orthogonal polynomials is that the roots of an orthogonal polynomial are distinct \cite{mathsphys}.
This last result together with the fact that the eigenvalues of $\hat H_0$ are the roots of the polynomial $c^M_{M+1}(E)=0$, implies that the eigenvalues of $\hat H_0$ are nondegenerate, when we are confined to the $M$ photon subspace\footnote{The spectrum of $\hat H_0$ on the whole Hilbert space will have degeneracies due to the fact that the spectra of $\hat H_0$ on the different subspaces are not disjoint and thus can share eigenvalues.}. There is an extensive literature on orthogonal polynomials and their properties \cite{askey,orthop,mathsphys}. One important example is the Krawtchouk polynomials, which obey the following three term recurrence relation \cite{askey} \begin{eqnarray} \label{krawrec} xK_n(x;p,M)&=&\sqrt{p(1-p)(n+1)(M-n)}K_{n+1}(x;p,M)+\left(p[M-n]+n[1-p]\right)K_{n}(x;p,M)+\nonumber\\ &&+\sqrt{p(1-p)n(M-n+1)}K_{n-1}(x;p,M), \end{eqnarray} where $0 < p < 1 $. If we set $p=1/2$, $2x-M=E/|\gamma|$ and $K_n(x;1/2,M)=c^M_n(E)$, then equation (\ref{krawrec}) reduces to equation (\ref{3term}). The coefficients $c^M_n(E)$ are thus Krawtchouk polynomials. The form of the Krawtchouk polynomials, for this choice of parameters, is given by \begin{eqnarray} \label{krawpol} K_{n} (x;1/2,M) = (-1)^nK_0(x;1/2,M)\sqrt{\frac{M!}{n!(M-n)!}} \sum^{n}_{k=0}\frac{(-n)_k (-x)_k }{(-M)_{k} k!}2^{k}, \end{eqnarray} where $(y)_{k}= y (y+1)(y+2)\ldots(y+k-1)$. The roots of the Krawtchouk polynomials can be determined using (\ref{krawpol}). It can easily be shown that the roots of $K_{M+1}(x;1/2,M)=0$ are $x=0, 1, 2, ..., M$, which implies that the energy eigenvalues are \begin{equation} \label{eenergy} E=-M|\gamma|,(2-M)|\gamma|,(4-M)|\gamma|,...,M|\gamma|. \end{equation} By diagonalising the Hamiltonian (\ref{bsh}) we have solved the system's dynamics and can analyze relevant quantum effects.
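To make the correspondence concrete, the following sketch (an illustration with $|\gamma|=1$ and the overall factor $K_0(x;1/2,M)$ set to $1$) evaluates the explicit sum (\ref{krawpol}) and confirms that the resulting values satisfy the recurrence (\ref{3term}) at each eigenvalue $E=(2x-M)|\gamma|$.

```python
import numpy as np
from math import comb, factorial

# Evaluate K_n(x;1/2,M) from the explicit sum (overall factor K_0 set to 1)
# and check that, under E = 2x - M, the values satisfy
#   E c_n = sqrt((n+1)(M-n)) c_{n+1} + sqrt(n(M-n+1)) c_{n-1}.
def poch(y, k):
    out = 1.0
    for j in range(k):
        out *= y + j
    return out

def K(n, x, M):
    s = sum(poch(-n, k) * poch(-x, k) / (poch(-M, k) * factorial(k)) * 2**k
            for k in range(n + 1))
    return (-1)**n * np.sqrt(comb(M, n)) * s

M = 5
ok = True
for x in range(M + 1):
    E = 2 * x - M                      # the eigenvalues listed in the text
    c = [K(n, x, M) for n in range(M + 1)]
    for n in range(M):
        lhs = E * c[n]
        rhs = np.sqrt((n + 1) * (M - n)) * c[n + 1]
        if n > 0:
            rhs += np.sqrt(n * (M - n + 1)) * c[n - 1]
        ok = ok and abs(lhs - rhs) < 1e-9
print(ok)  # True
```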
\section{Photon distribution and entanglement properties of the energy eigenstates} \label{sec3} In the previous section we found the analytic solution of equation (\ref{3term}), which in turn gives us the form of the eigenvectors of the Hamiltonian (\ref{bsh}). Nevertheless, it is informative to give some simple examples of the eigenvectors of (\ref{bsh}). For the simplest case, when $M=1$, the eigenvalues are just $E=\pm|\gamma|$ and the eigenvectors take the form $|E\pm\rangle=(|10\rangle\pm\exp(-ig)|01\rangle)/\sqrt{2}$. When $M=2$, we find that $E=\pm2|\gamma|, 0$ and \begin{eqnarray} \label{evecm2} |E=\pm2|\gamma|\rangle&=&\frac{1}{2}\left(|20\rangle\pm\sqrt{2}e^{-ig}|11\rangle+ e^{-2ig}|02\rangle\right),\nonumber\\ |E=0\rangle&=&\frac{1}{\sqrt{2}}\left(|20\rangle-e^{-2ig}|02\rangle\right). \end{eqnarray} As $M$ increases the form of the eigenvectors will generally become more complicated. One situation, however, where the eigenvectors have a simple form is when $E=M|\gamma|$, i.e.\ for the eigenvectors corresponding to the maximum eigenvalue of $\hat H_0$. In this case we find \begin{eqnarray} \label{ccc} c^M_n=K_{n} (M;1/2,M)=(-1)^nc^M_0\sqrt{\frac{M!}{n!(M-n)!}}\sum^{n}_{k=0}\frac{(-n)_k }{k!}2^{k}\nonumber\\ =(-1)^nc^M_0\sqrt{\frac{M!}{n!(M-n)!}}\sum^{n}_{k=0}\frac{n!}{k!(n-k)!}(-2)^{k}=c^M_0\sqrt{\frac{M!}{n!(M-n)!}}. \end{eqnarray} If $c^M_0$ is chosen to be $2^{-M/2}$, then $\sum_n{|c^M_n(M|\gamma|)|^2}=1$. A consequence of this result is that the photon probability distribution, $|c^M_n|^2$, will be a binomial distribution when $E=M|\gamma|$. In the limit of large $M$, $|c^M_n(M|\gamma|)|^2$ can be approximated by a Gaussian. The probability distribution $|c^M_n|^2$, of finding $n$ photons in the second mode, depends only on $|E|$. By this we mean that $|c^M_n(E)|^2=|c^M_n(-E)|^2$. This follows directly from the result in section \ref{sec2} that $c^M_n(-E)=(-1)^n c^M_n(E)$, which can be verified using the recurrence relation (\ref{3term}).
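Both facts just stated (the binomial profile at $E=M|\gamma|$ and the parity relation $c^M_n(-E)=(-1)^n c^M_n(E)$) can be checked directly from the recurrence; the sketch below (with $|\gamma|=1$ and $c^M_0$ set to $1$ before normalising) is illustrative only.

```python
import numpy as np
from math import comb

# Generate the coefficients c^M_n(E) from the three-term recurrence with
# c^M_0 = 1, then normalise; units with |gamma| = 1.
def coeffs(E, M):
    c = [1.0]
    for n in range(M):
        prev = c[n - 1] if n > 0 else 0.0
        c.append((E * c[n] - np.sqrt(n * (M - n + 1)) * prev)
                 / np.sqrt((n + 1) * (M - n)))
    c = np.array(c)
    return c / np.linalg.norm(c)

M = 10
# at E = M the photon distribution |c^M_n|^2 is binomial
top = coeffs(M, M)
binom = np.array([comb(M, n) for n in range(M + 1)]) / 2.0**M
is_binomial = np.allclose(top**2, binom)
# the distribution depends only on |E|: c^M_n(-E) = (-1)^n c^M_n(E)
E = 4
parity = np.allclose(coeffs(-E, M), (-1.0)**np.arange(M + 1) * coeffs(E, M))
print(is_binomial, parity)  # True True
```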
The behaviour of the probability distributions $|c^M_n(E)|^2$, for $|E|<M|\gamma|$, can be studied by numerically evaluating (\ref{krawpol}). Figure \ref{fig1} shows $|c^M_n|^2$ plotted with $M=38$, for the four largest eigenvalues of $\hat H_0$. We see that each time $E$ is decreased by $2|\gamma|$, an additional peak appears in the probability distribution. Numerical investigations show that this behaviour is a generic feature and occurs for each value of $M$. Figure \ref{fig1} shows that the probability distribution has the symmetry, $|c^M_n|^2=|c^M_{M-n}|^2$. This means that the probability of finding $n$ photons in the first mode is the same as finding $n$ photons in the second mode. This result is physically reasonable due to the symmetric nature of the Hamiltonian (\ref{bsh}), with regards to the two modes. \begin{figure} \caption{Four different plots of the probability distribution for finding $n$ photons in the second mode. Each plot is for the situation where $M=38$ and $|\gamma|=1$, however in each successive plot the energy is decreased by $2|\gamma|$.} \label{fig1} \end{figure} If we calculate the explicit forms for the energy eigenvectors, then we see that the two modes are entangled, i.e.\ they cannot be expressed in a factorised form. Entanglement is one of the defining characteristics of quantum mechanics and has great importance in the foundations of quantum mechanics and in the field of quantum information \cite{nch}. For this reason it is of interest to quantify the entanglement present in the energy eigenstates. A common approach to quantifying the entanglement in a bipartite pure state is to calculate the von Neumann entropy of the reduced state \cite{nch,steve,steve2}. The von Neumann entropy of a density operator $\hat \rho$ is defined to be \begin{equation} S[\hat\rho]=-\text{Tr}[\hat\rho\log(\hat\rho)], \end{equation} where the logarithms are taken to the base 2.
The reduced states of the bipartite state $|\psi\rangle_{12}$ are defined to be $\hat\rho_1=\text{Tr}_2[|\psi\rangle\langle\psi|]$ and $\hat\rho_2=\text{Tr}_1[|\psi\rangle\langle\psi|]$, where $\text{Tr}_j$ denotes the partial trace over the subsystem $j$. We thus use the following quantity to measure the entanglement \cite{steve} \begin{equation} \label{eform} S_{ent}(|\psi\rangle\langle\psi|)=2S(\hat\rho_1)=2S(\hat\rho_2). \end{equation} $S_{ent}(|\psi\rangle\langle\psi|)=0$ when our bipartite state is separable. In addition to this, we can see that equation (\ref{eform}) assumes its maximum value when our bipartite state is a maximally entangled state, i.e. the reduced states are proportional to the identity operator. If we express the energy eigenvectors in the form given in equation (\ref{evector}), then the eigenstates are automatically in the Schmidt form \cite{nch}, where the basis states $\{|M-n,n\rangle_{12}\}$ now act as our Schmidt basis. From this observation it is straightforward to show that $S_{ent}$ equals twice the entropy of the probability distribution $|c^M_n(E)|^2$, i.e. \begin{equation} \label{Hent} S_{ent}=2S\left(|c^M_n|^2\right)=-2\sum_n{|c^M_n|^2\log(|c^M_n|^2)}, \end{equation} where the logarithms are taken to the base 2. We can now evaluate $S_{ent}$ as a function of the total number of photons $M$. Figure \ref{fig3} shows $S_{ent}$ plotted against $M$, with the energy $E$ either fixed or set equal to the largest possible value given the choice of $M$. We see that as $M$ increases so does $S_{ent}$. The fact that the entanglement increases with $M$ is not surprising as the dimension of our system's Hilbert space is $M+1$. Increasing the number of photons thus gives us more terms in the Schmidt decomposition of the eigenstates $|E^M\rangle_{12}$. We can also see that the zero eigenstate, i.e. the state $|E^M\rangle_{12}$, with $E=0$, is less entangled than the states with $E=1$ or $E=2$.
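As an illustration (a sketch with $|\gamma|=1$, not the paper's own numerics), $S_{ent}$ of equation (\ref{Hent}) can be evaluated directly from the recurrence coefficients, reproducing the two trends just described: growth with $M$, and the reduced entanglement of the zero-energy eigenstate.

```python
import numpy as np

# Compute S_ent = -2 sum_n |c^M_n|^2 log2 |c^M_n|^2 from the normalised
# recurrence coefficients; units with |gamma| = 1.
def coeffs(E, M):
    c = [1.0]
    for n in range(M):
        prev = c[n - 1] if n > 0 else 0.0
        c.append((E * c[n] - np.sqrt(n * (M - n + 1)) * prev)
                 / np.sqrt((n + 1) * (M - n)))
    c = np.array(c)
    return c / np.linalg.norm(c)

def S_ent(E, M):
    p = coeffs(E, M) ** 2
    p = p[p > 1e-15]              # drop the exact zeros of the E = 0 state
    return -2.0 * np.sum(p * np.log2(p))

grows_with_M = S_ent(10, 10) < S_ent(20, 20)   # entanglement grows with M at E = M
zero_is_less = S_ent(0, 10) < S_ent(2, 10)     # E = 0 eigenstate is less entangled
print(grows_with_M, zero_is_less)  # True True
```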
By examining equation (\ref{3term}) it can be seen that for $E=0$, we would require that every term $c^M_n$ equal zero for $n$ odd. The zero eigenvector thus has roughly half the number of Schmidt terms as the other eigenvectors do. Consequently, the zero eigenvector is not as entangled as the eigenvectors that correspond to $E\ne0$. \begin{figure} \caption{A plot of the entanglement, $S_{ent}$, against the number of photons $M$, for different values of energy $E$, with $\gamma=1$.} \label{fig3} \end{figure} It is interesting to see how the entanglement changes as we vary $E$, with the total number of photons now fixed. Figure \ref{fig4} shows $S_{ent}$ plotted against the eigen-energy, $E$, for several different values of $M$. It can be seen that when $E$ is large, the entropy decreases. The fact that there is a general trend for the entanglement to decrease as $E$ increases can be explained by looking at how the probability distribution, $|c^M_n(E)|^2$, changes when $E$ varies. Figure \ref{fig1} shows that as we decrease $E$ additional peaks appear in the probability distribution. This means that the probability becomes more spread out and thus the entropy increases. Thinking in terms of correlations we see that for $E$ equal to $M|\gamma|$, we have a single maximum in the probability distribution $|c^M_n|^2$. The Schmidt basis states $|M-n,n\rangle_{12}$ will only have coefficients with large values for the states with $n$ close to $M/2$. When $E$ is decreased we obtain several maxima in the probability distribution $|c^M_n(E)|^2$ and thus we have more Schmidt basis states with larger coefficients and hence the entanglement increases. \begin{figure} \caption{Entanglement, $S_{ent}$, plotted against the energy $E$, for different values of the photon number $M$.} \label{fig4} \end{figure} An interesting feature of figure \ref{fig4} is that there are points where the entanglement increases slightly as $E$ increases.
One obvious example of this is when $M$ is even and $E$ increases from zero to $2$ ($\gamma=1$ in figure \ref{fig4}). In this case the increase is due to the anomalous nature of the zero energy eigenvector, i.e.\ it has roughly half the number of nonzero Schmidt coefficients as the other energy eigenvectors. The other instances where the entanglement increases occur because the probability distribution $|c^M_n(E)|^2$ can sometimes broaden when $E$ increases. Sometimes this broadening will compensate for the loss of peaks in $|c^M_n(E)|^2$ and thus the probability distribution is more spread out, which leads to a slight increase in the entropy. \section{Nonlinear Hamiltonians and conditional state swapping} \label{sec4} In section \ref{sec2} we obtained the eigenvalues and eigenvectors corresponding to the quadratic Hamiltonian (\ref{bsh}). These results can now be used to determine the properties of a certain class of nonlinear Hamiltonians, which are constructed from $\hat H_0$ [an example of this is given in equation (\ref{nonlinear})]. Let $f(y)$ be a polynomial in the variable $y$; it is clear that $[\hat H_0,f(\hat H_0)]=0$, and thus $f(\hat H_0)|E^M\rangle_{12}=f(E)|E^M\rangle_{12}$. Similarly, the operator $\hat n=\hat n_1+\hat n_2$ obeys $[\hat n,\hat H_0]=0$ and thus $f(\hat n)|E^M\rangle_{12}=f(M)|E^M\rangle_{12}$. From this we see that the general Hamiltonian \begin{equation} \label{genh} \hat H=\sum_k{\omega_k\hat n^k+\alpha_k(\hat H_0)^k}, \end{equation} can be diagonalised using the eigenvectors $\{|E^M\rangle_{12}\}$. We should note that the spectrum of equation (\ref{genh}) will generally be degenerate and thus we can construct several different sets of eigenvectors for the Hamiltonian. One important practical consideration for our nonlinear Hamiltonians is that the magnitude of the terms $\alpha_k$ should decrease as $k$ increases.
In practice, if we want the higher order terms in our Hamiltonian to give a non-negligible contribution to the evolution of our system, we would have to increase the intensity of our light beams. This would correspond to $M$ being large. Even when we increase the intensity of the optical beams that enter the nonlinear media, it will still be difficult to excite higher order terms. For this reason we shall look only at examples where $\hat H$ is at most quadratic in $\hat H_0$, i.e.\ Hamiltonians of the form given in equation (\ref{nonlinear}). Using our previous results we can easily determine the time evolution of a system described by a Hamiltonian of the form given in equation (\ref{genh}). If at time $t=0$ the system is prepared in the state $|\psi_0\rangle_{12}$, then at a later time $t$, it will be in the state $|\psi_t\rangle_{12}=\exp(-i\hat Ht)|\psi_0\rangle_{12}$, where $\hbar=1$. The problem of determining the evolution thus reduces to expressing the state $|\psi_0\rangle_{12}$ in terms of the basis $\{|E^M\rangle_{12}\}$. In quantum optics, states are often expressed in terms of the Fock basis. It is thus essential to be able to express a given Fock state in terms of the energy eigenvectors. This can be achieved using the completeness relation, $\sum_k{|E_k\rangle\langle E_k|}=\hat 1$, which enables us to obtain \begin{equation} \label{nstates} |M-n,n\rangle_{12}=\sum_k{_{12}\langle E_k|M-n,n\rangle_{12}|E_k\rangle_{12}}=e^{ing}\sum_k{c^M_n(E_k)|E_k\rangle_{12}}. \end{equation} We now give examples of how the preceding theory can be used to construct Hamiltonians that perform specific tasks. In order to simplify the following calculations we take $\gamma$ to be real. Suppose we have the Fock state $|M,0\rangle_{12}$, which we want to transform to the state $|0,M\rangle_{12}$, i.e. we swap the two modes. From equation (\ref{nstates}) it is clear that if we want to perform this task, then we must change the coefficient $c^M_0(E_k)$ to $c^M_M(E_k)$.
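Equation (\ref{nstates}) is nothing more than the resolution of a Fock state in the orthonormal energy eigenbasis; a minimal numerical sketch of this expansion (with $\gamma$ real, so $g=0$; not code from the paper) is:

```python
import numpy as np

# Diagonalise H0/|gamma| on the M-photon subspace and expand a Fock state
# in the energy eigenbasis; gamma real, so g = 0.
M = 4
T = np.zeros((M + 1, M + 1))
for n in range(M):
    T[n, n + 1] = T[n + 1, n] = np.sqrt((n + 1) * (M - n))
evals, V = np.linalg.eigh(T)            # columns of V are the |E_k>

# completeness: sum_k |E_k><E_k| = identity
complete = np.allclose(V @ V.T, np.eye(M + 1))

# expand |M-2, 2> over the eigenvectors and reconstruct it
fock = np.eye(M + 1)[2]
recon = sum((V[:, k] @ fock) * V[:, k] for k in range(M + 1))
reconstructed = np.allclose(recon, fock)
print(complete, reconstructed)  # True True
```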
Using equation (\ref{krawpol}) together with some simple algebra leads to the result that \begin{equation} \label{coeftran} c^M_M(E_k)=(-1)^{M+x_{k}}c^M_0(E_k), \end{equation} where $x_{k}=(E_k+M|\gamma|)/(2|\gamma|)$. This result means that when $M+x_{k}$ is odd, $c^M_M(E_k)=-c^M_0(E_k)$, while for $M+x_{k}$ even $c^M_M(E_k)=c^M_0(E_k)$. Suppose that we wish to construct a Hamiltonian that will perform the swap operation $|M,0\rangle_{12}\rightarrow|0,M\rangle_{12}$ for all values of $M$. There are infinitely many different Hamiltonians that achieve this task. An example of a simple Hamiltonian that performs the swap operation is \begin{equation} \label{lswap} \hat H=\frac{\pi}{2|\gamma|\tau}\left(3|\gamma|\hat n+\hat H_0\right), \end{equation} where $\exp(-i\hat H\tau)|M,0\rangle_{12}=|0,M\rangle_{12}$. This Hamiltonian is linear in $\hat H_0$; we could, however, construct alternative nonlinear Hamiltonians that perform the swap operation. An interesting point to note is that if we had some state $|\psi\rangle=\sum_n{\xi_n|n\rangle}$, then the Hamiltonian described in equation (\ref{lswap}) can be used to enact the transformation $|\psi\rangle_1|0\rangle_2\rightarrow|0\rangle_1|\psi\rangle_2$, i.e. $\exp(-i\hat H\tau)|\psi\rangle_1|0\rangle_2=|0\rangle_1|\psi\rangle_2$. In the previous example the swap operation was insensitive to the total number of photons. It is, however, possible to design Hamiltonians that will perform a swap operation only if the number of photons is either even or odd. Suppose now that we want the swap operation to occur only if the total number of photons is even, i.e.\ $|M,0\rangle_{12}\rightarrow|0,M\rangle_{12}$ only when $M$ is even. Let $\epsilon^M_x$ denote the energy eigenvalue of our nonlinear Hamiltonian, corresponding to the eigenvector $|E^M_x\rangle_{12}$. In order to perform the swap at time $t=\tau$ we require that $\exp(-i\epsilon^M_x\tau)=(-1)^{M+x}$, but only when $M$ is even.
A Hamiltonian that achieves this is \begin{equation} \label{evenswap} \hat H=\frac{\pi}{|\gamma|\tau}\left(|\gamma|\hat n+\frac{|\gamma|}{4}\hat n^2+\frac{1}{4|\gamma|}\hat H_0^2\right). \end{equation} One can easily verify that \begin{equation} \label{eventran} \exp(-i\hat H\tau)|M,0\rangle_{12}=\begin{cases} |0,M\rangle_{12},\;\text{for $M$ even},\\ i|M,0\rangle_{12},\;\text{for $M$ odd}. \end{cases} \end{equation} The Hamiltonian (\ref{evenswap}) thus has the property that for a state $|\psi\rangle$, the transformation $|\psi\rangle_1|0\rangle_2\rightarrow|0\rangle_1|\psi\rangle_2$ can only be realized if $|\psi\rangle$ can be expressed as a superposition of Fock states consisting of an even number of photons. We have described how the Hamiltonian given in equation (\ref{evenswap}) can be used to perform a conditional swap operation. The Hamiltonian can also be used to perform other useful tasks, such as preparing two optical cat states from a single coherent state. An optical cat is a state of the form $K(|\alpha\rangle+|-\alpha\rangle)$, where $K$ is a normalisation constant and $|\pm\alpha\rangle$ are coherent states \cite{loudon}. The significance of these states can be seen by recalling that when $|\alpha|$ is large, coherent states behave like classical monochromatic optical fields. Cat states are thus a superposition of two classical objects\footnote{The name `cat state' is a reference to the thought experiment developed by Schr\"{o}dinger \cite{schrodinger}.}. The Fock state representation of a coherent state is \cite{Klauder} \begin{equation} |\alpha\rangle=\exp\left(-\frac{|\alpha|^2}{2}\right)\sum^{\infty}_{n=0}{\frac{\alpha^n}{\sqrt{n!}}|n\rangle}, \end{equation} where $\alpha$ is a complex number.
If we take our initial state to be $|\alpha\rangle_1|0\rangle_2$, then using the Hamiltonian (\ref{evenswap}) will yield the following evolution \begin{eqnarray} \label{cat} \exp(-i\hat H\tau)|\alpha\rangle_1|0\rangle_2=|0\rangle_1\left(\frac{|\alpha\rangle_2+|-\alpha\rangle_2}{2}\right)+i\left(\frac{|\alpha\rangle_1-|-\alpha\rangle_1}{2}\right)|0\rangle_2. \end{eqnarray} We have thus obtained a superposition of two optical cat states. Theoretical schemes for obtaining a superposition of two optical cat states have been proposed previously \cite{milburn, tombesi, yurke2}. In each of the earlier schemes, the Hamiltonians that were used were different from that given in equation (\ref{evenswap}). As we have shown, it is possible to construct Hamiltonians that perform a swap operation conditioned on whether the number of photons is even. It is also interesting to consider a swap operation conditioned on other factors. For instance, we may want to not swap the state if it has a particular number of photons, while allowing the swap to occur if the number of photons differs by one. We now give the form of a Hamiltonian that performs this task. As before, let $\epsilon^M_x$ denote the energy of the nonlinear Hamiltonian. We require that $\exp(-i \epsilon^M_x\tau)=(-1)^{M+x}$ if $M=N\pm 1$, but that $\exp(-i\epsilon^M_x\tau)\ne(-1)^{M+x}$ for $M=N$. It can easily be verified that the following Hamiltonian satisfies the stated conditions \begin{equation} \label{pswap} \hat H=\frac{\pi}{2|\gamma|\tau}\left(3|\gamma|\hat n^2+\hat n\hat H_0-3N|\gamma|\hat n-N\hat H_0\right). \end{equation} This leads to the following dynamics, $\exp(-i\hat H\tau)|N,0\rangle_{12}=|N,0\rangle_{12}$, while $\exp(-i\hat H\tau)|N\pm1,0\rangle_{12}=|0,N\pm1\rangle_{12}$. It is straightforward to modify the Hamiltonian given in equation (\ref{pswap}) for situations where we want to swap states other than $|N\pm1,0\rangle_{12}$. 
An example of this is when we want to swap states such as $|N\pm2,0\rangle_{12}$. To achieve this we can simply multiply equation (\ref{pswap}) by a factor of $1/2$, i.e. we take our Hamiltonian to be $\hat H/2$. We shall now show how one can use the Hamiltonian (\ref{pswap}) together with our previous nonlinear Hamiltonians, so as to discriminate between photon number states. Suppose we have four modes that are initially prepared in the state \begin{equation} \label{nooo} |M\rangle_1|0\rangle_2|0\rangle_3|0\rangle_4, \end{equation} where $M\le 4$. An important problem to address is whether we can determine the value of $M$. The majority of currently available photon detectors are not capable of reliably discriminating between different photon number states. For this reason it would be interesting if we could act on the state (\ref{nooo}) so that the final position of the mode that is not in the vacuum depends on the number of photons. In particular, we could arrange for the state to evolve such that the $M$-th mode contains $M$ photons, while the other modes are in the vacuum state. We shall now outline a scheme that enables one to perform this task. At time $t=0$ our system is prepared in the state (\ref{nooo}). We arrange for modes 1 and 2 to interact in a manner governed by the Hamiltonian (\ref{evenswap}). This means that at time $\tau$ the photons will have been transferred to the second mode if $M$ is even, or remain in the first mode for $M$ odd. We now arrange for the first and third modes to be coupled via a Hamiltonian of the form $\hat H/2$, where $\hat H$ is given by equation (\ref{pswap}) and where the $N$ in equation (\ref{pswap}) equals 1. This will result in the first and third modes being swapped when we have three photons. At the same time we shall also couple the second and fourth modes using a Hamiltonian of the form $\hat H/2$, where $\hat H$ is given by equation (\ref{pswap}), with $N=2$. 
This will result in the second and fourth modes being swapped if the number of photons is four. The effect of coupling the modes in the manner that we have described is that our system will be left in the state \begin{eqnarray} i|1\rangle_1|0\rangle_2|0\rangle_3|0\rangle_4,\;\text{for $M$=1},\nonumber\\ |0\rangle_1|2\rangle_2|0\rangle_3|0\rangle_4,\;\text{for $M$=2},\nonumber\\ i|0\rangle_1|0\rangle_2|3\rangle_3|0\rangle_4,\;\text{for $M$=3},\nonumber\\ |0\rangle_1|0\rangle_2|0\rangle_3|4\rangle_4,\;\text{for $M=4$}, \end{eqnarray} where the phase factor $i$ on the odd-$M$ states is inherited from equation (\ref{eventran}). We could now put photon detectors at each mode so as to determine which mode contains photons. This in turn would allow us to determine the number of photons that we have. Alternatively we could perform further operations on the individual modes. In addition to this, the above scheme can be used to prepare multiphoton entangled states, provided we can prepare a superposition of a finite number of Fock states in a single mode. For example, we could transform the separable state $(a|1\rangle+b|2\rangle)|0\rangle$ into the entangled state $ia|10\rangle+b|02\rangle$. The various procedures that we have outlined could find applications in quantum information processing and quantum control. \section{Dynamics of the nonlinear Hamiltonians} \label{sec5} In the previous section we constructed nonlinear Hamiltonians that allowed us to perform certain types of operations on the two coupled modes. In this section we will investigate further the dynamics induced by the three Hamiltonians given by (\ref{lswap}), (\ref{evenswap}) and (\ref{pswap}). A natural quantity to study is the number of photons in each mode. For this reason we shall investigate how the expectation value for the number of photons in the second mode changes, i.e. $\langle\hat n_2\rangle$. The initial state of our system will be taken to be $|M,0\rangle_{12}$, and thus $\langle\hat n_2\rangle=0$ at time zero. 
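The full four-mode sorting scheme can be simulated by composing the three couplings described above. The sketch below assumes, as before, the standard beam splitter form $\hat H_0=\gamma(\hat a_i^\dagger\hat a_j+\hat a_i\hat a_j^\dagger)$ with $\gamma=\tau=1$, and checks the even-$M$ branches $M=2$ and $M=4$, where no phase factors arise.

```python
import numpy as np
from scipy.linalg import expm

D = 5                                      # enough for at most 4 photons per mode
a = np.diag(np.sqrt(np.arange(1, D)), 1)
I = np.eye(D)

def mode_op(op, k):
    """Embed a single-mode operator as mode k (0-based) of four modes."""
    ops = [I] * 4
    ops[k] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

A = [mode_op(a, k) for k in range(4)]

def pair_ops(i, j):
    """Pair photon number and beam splitter coupling (gamma = 1) for modes i, j."""
    Ai, Aj = A[i], A[j]
    n = Ai.conj().T @ Ai + Aj.conj().T @ Aj
    H0 = Ai.conj().T @ Aj + Ai @ Aj.conj().T
    return n, H0

def U_even(i, j):                          # Hamiltonian (evenswap), |gamma| = tau = 1
    n, H0 = pair_ops(i, j)
    return expm(-1j * np.pi * (n + n @ n / 4 + H0 @ H0 / 4))

def U_half_pswap(i, j, N):                 # Hamiltonian (pswap)/2, evolved for tau
    n, H0 = pair_ops(i, j)
    H = (np.pi / 2) * (3 * n @ n + n @ H0 - 3 * N * n - N * H0)
    return expm(-1j * H / 2)

# Step 1: evenswap on modes 1&2; step 2: pswap/2 on 1&3 (N=1) and on 2&4 (N=2).
U = U_half_pswap(1, 3, 2) @ U_half_pswap(0, 2, 1) @ U_even(0, 1)

def fock(*m):
    v = np.zeros(D ** 4, dtype=complex)
    v[np.ravel_multi_index(m, (D,) * 4)] = 1.0
    return v

# M = 2 photons end up in mode 2, M = 4 photons in mode 4:
assert np.allclose(U @ fock(2, 0, 0, 0), fock(0, 2, 0, 0), atol=1e-7)
assert np.allclose(U @ fock(4, 0, 0, 0), fock(0, 0, 0, 4), atol=1e-7)
```

The two $\hat H/2$ couplings act on disjoint pairs of modes, so they commute and may indeed be applied "at the same time", as in the text.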
We shall now study the dynamics associated with the quadratic Hamiltonian (\ref{lswap}). In figure \ref{oscillate} we see the expectation value $\langle\hat n_2\rangle$ plotted as a function of time, for the initial states $|10,0\rangle_{12}$ and $|11,0\rangle_{12}$, with $\gamma=\tau=1$. The expectation value oscillates with a period of two (i.e. $2\tau$); the period is thus independent of the number of photons. Another interesting feature of figure \ref{oscillate} is that at time $t=1$, $\langle\hat n_2\rangle=M$, which is consistent with the state having been transformed to $|0,M\rangle_{12}$ at that time. The oscillatory dynamics shown in figure \ref{oscillate} is also observed for other values of $M$, and does not depend on whether $M$ is even or odd. One final interesting feature of the Hamiltonian (\ref{lswap}) is that we obtain oscillations for different choices of the initial state. For example, if the state were initially $|M-n,n\rangle$, then we would still observe oscillations of $\langle\hat n_2\rangle$, with period $2\tau$. The minimum and maximum values of $\langle\hat n_2\rangle$ would, however, respectively change to $\langle\hat n_2\rangle=n$ at $t=0$ and $\langle\hat n_2\rangle=M-n$ at $t=\tau$. Such behaviour is consistent with the Hamiltonian (\ref{lswap}) being able to swap the two modes. \begin{figure} \caption{A plot of the expectation value $\langle\hat n_2\rangle$, against time, with $\gamma=\tau=1$ and where the initial state is $|M,0\rangle_{12}$.} \label{oscillate} \end{figure} The Hamiltonian (\ref{evenswap}) was designed to perform the transformation described in equation (\ref{eventran}). This selective transformation has similarities to the behaviour of a two mode nonlinear directional coupler \cite{jensen,chefles}. This is an optical device which consists of two optical fibers that are coupled in such a way that the dynamics depends on the intensity of the light \cite{chefles}. 
One obvious difference between these two nonlinear systems is that the conditional dynamics associated with the Hamiltonian (\ref{evenswap}) depends on whether the number of photons is even or odd, and not on how many photons there are. Figure \ref{selective1} shows a plot of the expectation value $\langle\hat n_2\rangle$, against time, for the initial states $|10,0\rangle_{12}$ and $|11,0\rangle_{12}$. We see that the dynamics is very different for the two different initial states. In particular, $\langle\hat n_2\rangle$ never exceeds the value 5.5, for the case when $M=11$. This is in contrast to the situation for $M=10$, where we find that $\langle\hat n_2\rangle=10$ at $t=1$ (i.e. $t=\tau$). In both cases we see that the behaviour of $\langle\hat n_2\rangle$ is periodic, with period $2\tau=2$. When $M=10$, one can observe that the expectation value $\langle\hat n_2\rangle$ temporarily plateaus at the value of 5, which is half the total number of photons. \begin{figure} \caption{A plot of the expectation value $\langle\hat n_2\rangle$, against time, with $\gamma=\tau=1$ and where the initial state is $|M,0\rangle_{12}$. It can be seen that for $M=11$ the Hamiltonian acts to trap some of the energy in the first mode, while for $M=10$ the energy is fully swapped between the two modes.} \label{selective1} \end{figure} It is interesting to investigate the dynamics of the Hamiltonian (\ref{evenswap}) for initial states that have photons in both modes, i.e. $|M-n,n\rangle_{12}$ with $n\ne 0$. Figure \ref{selective2} shows $\langle\hat n_2\rangle$ plotted against time, for the initial states $|8,2\rangle_{12}$ and $|9,2\rangle_{12}$. In both cases the minimum of $\langle\hat n_2\rangle$ is now 2. For the case when $M=10$, we see that at $t=1$, $\langle\hat n_2\rangle=8$, which is consistent with the state undergoing the transformation $|8,2\rangle_{12}\rightarrow|2,8\rangle_{12}$. 
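The values of $\langle\hat n_2\rangle$ at $t=\tau$ reported in figure \ref{selective1} follow directly from equation (\ref{eventran}) and can be reproduced numerically. The sketch below assumes the standard beam splitter form $\hat H_0=\gamma(\hat a_1^\dagger\hat a_2+\hat a_1\hat a_2^\dagger)$ with $\gamma=\tau=1$.

```python
import numpy as np
from scipy.linalg import expm

D = 12                                     # enough for up to 11 photons
a = np.diag(np.sqrt(np.arange(1, D)), 1)
I = np.eye(D)
A, B = np.kron(a, I), np.kron(I, a)
n2 = B.conj().T @ B                        # photon number in the second mode
n = A.conj().T @ A + n2
H0 = A.conj().T @ B + A @ B.conj().T       # assumed beam splitter form, gamma = 1
H = np.pi * (n + n @ n / 4 + H0 @ H0 / 4)  # Hamiltonian (evenswap), tau = 1

def n2_expectation(M, t):
    """<n_2>(t) for the initial state |M,0>."""
    psi = np.zeros(D * D, dtype=complex)
    psi[M * D] = 1.0                       # index of |M,0>
    psi_t = expm(-1j * H * t) @ psi
    return np.real(psi_t.conj() @ (n2 @ psi_t))

# At t = tau = 1 the even state has fully swapped, the odd one has not:
print(n2_expectation(10, 1.0))             # ~10: state is |0,10>
print(n2_expectation(11, 1.0))             # ~0:  state is i|11,0>
```

Sampling `n2_expectation` over a grid of times reproduces the curves of figure \ref{selective1}, including the period $2\tau$ and the trapping of energy in the first mode for $M=11$.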
When the initial state is $|9,2\rangle_{12}$ and thus $M=11$, $\langle\hat n_2\rangle$ is always less than 9, and thus there is never a point where we transfer the 9 photons that were initially in the first mode to the second mode. \begin{figure} \caption{A plot of the expectation value $\langle\hat n_2\rangle$, against time, with $\gamma=\tau=1$. In plot (a) the initial state is $|8,2\rangle_{12}$, while in plot (b) the initial state is $|9,2\rangle_{12}$.} \label{selective2} \end{figure} The Hamiltonian (\ref{pswap}) was designed so that it would perform the transformation $|N\pm1,0\rangle_{12}\rightarrow|0,N\pm1\rangle_{12}$, while leaving a state with exactly $N$ photons unchanged. It is clear that if our system is initially prepared in the state $|N,0\rangle_{12}$, then the system will stay in that state. In fact, any state with exactly $N$ photons will remain unchanged under the action of (\ref{pswap}). If our system has $N\pm1$ photons then the dynamics of the Hamiltonian (\ref{pswap}) will be identical to that of (\ref{lswap}). \section{Conclusions} \label{conc} We have investigated a class of nonlinear Hamiltonians. These Hamiltonians were constructed from a simple beam splitter Hamiltonian, equation (\ref{bsh}), and from the photon number operator $\hat{n}$. Due to the pivotal role played by the beam splitter Hamiltonian (\ref{bsh}), we investigated its properties thoroughly. The eigenvalues and eigenvectors of equation (\ref{bsh}) were obtained using the theory of orthogonal polynomials. In particular, it was shown that the eigenvalue problem associated with equation (\ref{bsh}) was equivalent to solving the three term recurrence relation associated with a class of orthogonal polynomials known as Krawtchouk polynomials. We were thus able to show that the dynamics of the beam splitter Hamiltonian and the class of nonlinear Hamiltonians are exactly solvable using Krawtchouk polynomials. 
Nonlinear Hamiltonians were then constructed from the beam splitter Hamiltonians and the total photon number operator. The dynamics of these Hamiltonians enabled novel tasks, such as state swapping and conditional state swapping, to be performed. The formalism that we have introduced and the Hamiltonians that we have studied have applications in quantum optics and quantum information. In particular, they can be used in the areas of state preparation and in quantum control. For example, we have shown how the nonlinear Hamiltonian (\ref{evenswap}) can be used to prepare a superposition of Schr\"{o}dinger cat states from a coherent state. In addition to this we have shown how we can control several modes so as to sort photons into one particular mode, where the mode that they are transferred to depends on the total number of photons. This was achieved by turning on couplings between pairs of modes, where the interactions were described using the nonlinear Hamiltonians (\ref{evenswap}) and (\ref{pswap}). While we have discussed this work in terms of optics and photons, the results would apply to any system of bosonic particles. \end{document}
\begin{document} \date{\today} \keywords{Sobolev space, infinitesimal Hilbertianity, weighted Euclidean space, decomposability bundle, closability of the Sobolev norm} \subjclass[2010]{53C23, 46E35, 26B05} \begin{abstract} We provide a quick proof of the following known result: the Sobolev space associated with the Euclidean space, endowed with the Euclidean distance and an arbitrary Radon measure, is Hilbert. Our new approach relies upon the properties of the Alberti--Marchese decomposability bundle. As a consequence of our arguments, we also prove that if the Sobolev norm is closable on compactly-supported smooth functions, then the reference measure is absolutely continuous with respect to the Lebesgue measure. \end{abstract} \title{A short proof of the infinitesimal Hilbertianity of the weighted Euclidean space} \section*{Introduction} In recent years, the theory of weakly differentiable functions over an abstract metric measure space \(({\rm X},{\sf d},\mu)\) has been extensively studied. Starting from the seminal paper \cite{Cheeger00}, several (essentially equivalent) versions of Sobolev space \(W^{1,2}({\rm X},{\sf d},\mu)\) have been proposed in \cite{Shanmugalingam00,AmbrosioGigliSavare11,DiM14a}. The definition we shall adopt in this paper is the one via test plans and weak upper gradients, which has been introduced by L.\ Ambrosio, N.\ Gigli and G.\ Savar\'{e} in \cite{AmbrosioGigliSavare11}. In general, \(W^{1,2}({\rm X},{\sf d},\mu)\) is a Banach space, but it might be non-Hilbert: for instance, consider the Euclidean space endowed with the \(\ell^\infty\)-norm and the Lebesgue measure. Those metric measure spaces whose associated Sobolev space is Hilbert -- which are said to be \emph{infinitesimally Hilbertian}, cf.\ \cite{Gigli12} -- play a very important role. We refer to the introduction of \cite{LP20} for an account of the main advantages and features of this class of spaces. 
The aim of this manuscript is to provide a quick proof of the following result (cf.\ Theorem \ref{thm:Eucl_inf_Hilb}): \begin{equation}\tag{\(\star\)}\label{eq:statement_main_result} (\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\;\text{ is infinitesimally Hilbertian for any Radon measure } \mu\geq 0\text{ on }\mathbb{R}^d, \end{equation} where \({\sf d}_{\rm Eucl}(x,y)\coloneqq|x-y|\) stands for the Euclidean distance on \(\mathbb{R}^d\). This fact was originally proven in \cite{GP16-2}, but it can also be alternatively considered as a special case of the main result in \cite{DMGSP18}. The approach we propose here is more direct and is based upon the differentiability theorem \cite{AM16} for Lipschitz functions in \(\mathbb{R}^d\) with respect to a given Radon measure, as we are going to describe. Let \(\mu\geq 0\) be any Radon measure on \(\mathbb{R}^d\). G.\ Alberti and A.\ Marchese proved in \cite{AM16} that it is possible to select the maximal measurable sub-bundle \(V(\mu,\cdot)\) of \(T\mathbb{R}^d\) -- called the \emph{decomposability bundle} of \(\mu\) -- along which all Lipschitz functions are \(\mu\)-a.e.\ differentiable. This way, any given Lipschitz function \(f\colon\mathbb{R}^d\to\mathbb{R}\) is naturally associated with a gradient \(\nabla_{\!\scriptscriptstyle\rm AM} f\), which is an \(L^\infty\)-section of \(V(\mu,\cdot)\). Since \(\nabla_{\!\scriptscriptstyle\rm AM}\) is a linear operator, its induced Dirichlet energy functional \(\sfE_{\scriptscriptstyle{\rm AM}}\) on \(L^2(\mu)\) is a quadratic form. Hence, the proof of \eqref{eq:statement_main_result} presented here follows along these lines: \begin{itemize} \item[\(\rm a)\)] The maximality of \(V(\mu,\cdot)\) ensures that the curves selected by a test plan \({\mbox{\boldmath\(\pi\)}}\) on \((\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\) are `tangent' to \(V(\mu,\cdot)\), namely, \(\dot\gamma_t\in V(\mu,\gamma_t)\) for \(({\mbox{\boldmath\(\pi\)}}\otimes\mathcal L^1)\)-a.e.\ \((\gamma,t)\). 
See Lemma \ref{lem:pi_tangent_to_V}. \item[\(\rm b)\)] Given any Lipschitz function \(f\colon\mathbb{R}^d\to\mathbb{R}\), we can deduce from item a) that the modulus of the gradient \(\nabla_{\!\scriptscriptstyle\rm AM} f\) is a weak upper gradient of \(f\); cf.\ Proposition \ref{prop:nablaAM_wug}. \item[\(\rm c)\)] Since Lipschitz functions with compact support are dense in energy in \(W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\) -- cf.\ Theorem \ref{thm:density_in_energy} below -- we conclude from b) that the Cheeger energy \({\sf E}_{\rm Ch}\) is the lower semicontinuous envelope of \(\sfE_{\scriptscriptstyle{\rm AM}}\). This grants that \({\sf E}_{\rm Ch}\) is a quadratic form, thus accordingly the space \((\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\) is infinitesimally Hilbertian. See Theorem \ref{thm:Eucl_inf_Hilb} for the details. \end{itemize} Finally, by combining our techniques with a structural result for Radon measures in the Euclidean space by De Philippis--Rindler \cite{DPR}, we eventually prove (in Theorem \ref{thm:no_closable}) the following claim: \[ \text{The Sobolev norm }\|\cdot\|_{W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)} \text{ is closable on }C^\infty_c\text{-functions}\quad\Longrightarrow\quad\mu\ll\mathcal L^d. \] Cf.\ Definition \ref{def:closability} for the notion of closability we are referring to. This result solves a conjecture that has been posed by M.\ Fukushima (according to V.I.\ Bogachev \cite[Section 2.6]{Bogachev10}). {\bf Acknowledgements.} The second and third named authors acknowledge the support by the Academy of Finland, projects 274372, 307333, 312488, and 314789. \section{Preliminaries} \subsection{Sobolev calculus on metric measure spaces} By \emph{metric measure space} \(({\rm X},{\sf d},\mu)\) we mean a complete, separable metric space \(({\rm X},{\sf d})\) together with a non-negative Radon measure \(\mu\neq 0\). 
We denote by \({\rm LIP}({\rm X})\) the space of all real-valued Lipschitz functions on \({\rm X}\), whereas \({\rm LIP}_c({\rm X})\) stands for the family of all elements of \({\rm LIP}({\rm X})\) having compact support. Given any \(f\in{\rm LIP}({\rm X})\), we shall denote by \({\rm lip}(f)\colon{\rm X}\to[0,+\infty)\) its \emph{local Lipschitz constant}, which is defined as \[{\rm lip}(f)(x)\coloneqq\left\{\begin{array}{ll} \varlimsup_{y\to x}\big|f(x)-f(y)\big|/{\sf d}(x,y)\\ 0 \end{array}\quad\begin{array}{ll} \text{ if }x\in{\rm X}\text{ is an accumulation point,}\\ \text{ otherwise.} \end{array}\right.\] The metric space \(({\rm X},{\sf d})\) is said to be \emph{proper} provided its bounded, closed subsets are compact. To introduce the notion of Sobolev space \(W^{1,2}({\rm X},{\sf d},\mu)\) that has been proposed in \cite{AmbrosioGigliSavare11}, we first need to recall some terminology. The space \(C\big([0,1],{\rm X}\big)\) of all continuous curves in \({\rm X}\) is a complete, separable metric space if endowed with the sup-distance \({\sf d}_\infty(\gamma,\sigma)\coloneqq\max\big\{{\sf d}(\gamma_t,\sigma_t)\;\big|\;t\in[0,1]\big\}\). We say that \(\gamma\in C\big([0,1],{\rm X}\big)\) is \emph{absolutely continuous} provided there exists a function \(g\in L^1(0,1)\) such that \({\sf d}(\gamma_s,\gamma_t)\leq\int_s^t g(r)\,{\mathrm d} r\) holds for all \(s,t\in[0,1]\) with \(s<t\). The \emph{metric speed} \(|\dot\gamma|\) of \(\gamma\), defined as \(|\dot\gamma_t|\coloneqq\lim_{h\to 0}{\sf d}(\gamma_{t+h},\gamma_t)/|h|\) for \(\mathcal L^1\)-a.e.\ \(t\in[0,1]\), is the minimal integrable function (in the \(\mathcal L^1\)-a.e.\ sense) that can be chosen as \(g\) in the previous inequality; cf.\ \cite[Theorem 1.1.2]{AmbrosioGigliSavare08}. 
A \emph{test plan} over \(({\rm X},{\sf d},\mu)\) is a Borel probability measure \({\mbox{\boldmath\(\pi\)}}\) on \(C\big([0,1],{\rm X}\big)\), concentrated on absolutely continuous curves, such that the following properties are satisfied: \begin{itemize} \item \textsc{Bounded compression.} There exists \({\rm Comp}({\mbox{\boldmath\(\pi\)}})>0\) such that \(({\rm e}_t)_*{\mbox{\boldmath\(\pi\)}}\leq{\rm Comp}({\mbox{\boldmath\(\pi\)}})\,\mu\) holds for all \(t\in[0,1]\), where \({\rm e}_t\colon C\big([0,1],{\rm X}\big)\to{\rm X}\) stands for the evaluation map \(\gamma\mapsto{\rm e}_t(\gamma)\coloneqq\gamma_t\). \item \textsc{Finite kinetic energy.} It holds that \(\int\!\!\int_0^1|\dot\gamma_t|^2\,{\mathrm d} t\,{\mathrm d}{\mbox{\boldmath\(\pi\)}}(\gamma)<+\infty\). \end{itemize} Let \(f\colon{\rm X}\to\mathbb{R}\) be a given Borel function. We say that \(G\in L^2(\mu)\) is a \emph{weak upper gradient} of \(f\) provided for any test plan \({\mbox{\boldmath\(\pi\)}}\) on \(({\rm X},{\sf d},\mu)\) it holds that \(f\circ\gamma\in W^{1,1}(0,1)\) for \({\mbox{\boldmath\(\pi\)}}\)-a.e.\ \(\gamma\) and that \[\big|(f\circ\gamma)'_t\big|\leq G(\gamma_t)\,|\dot\gamma_t| \quad\text{ for }({\mbox{\boldmath\(\pi\)}}\otimes\mathcal L^1)\text{-a.e.\ }(\gamma,t).\] The minimal such function \(G\) (in the \(\mu\)-a.e.\ sense) is called the \emph{minimal weak upper gradient} of \(f\) and is denoted by \(|Df|\in L^2(\mu)\). \begin{definition}[Sobolev space \cite{AmbrosioGigliSavare11}] The \emph{Sobolev space} \(W^{1,2}({\rm X},{\sf d},\mu)\) is defined as the family of all those functions \(f\in L^2(\mu)\) that admit a weak upper gradient \(G\in L^2(\mu)\). We endow the vector space \(W^{1,2}({\rm X},{\sf d},\mu)\) with the Sobolev norm \(\|f\|_{W^{1,2}({\rm X},{\sf d},\mu)}^2\coloneqq\|f\|_{L^2(\mu)}^2+\big\||Df|\big\|_{L^2(\mu)}^2\). 
\end{definition} The Sobolev space \(\big(W^{1,2}({\rm X},{\sf d},\mu),\|\cdot\|_{W^{1,2}({\rm X},{\sf d},\mu)}\big)\) is a Banach space, but in general it is not a Hilbert space. This fact motivates the following definition, which has been proposed by N.\ Gigli: \begin{definition}[Infinitesimal Hilbertianity \cite{Gigli12}] We say that a metric measure space \(({\rm X},{\sf d},\mu)\) is \emph{infinitesimally Hilbertian} provided its associated Sobolev space \(W^{1,2}({\rm X},{\sf d},\mu)\) is a Hilbert space. \end{definition} Let us define the \emph{Cheeger energy} functional \({\sf E}_{\rm Ch}\colon L^2(\mu)\to[0,+\infty]\) as \begin{equation}\label{eq:def_E_Ch} {\sf E}_{\rm Ch}(f):=\left\{\begin{array}{ll} \frac{1}{2}\int|Df|^2\,{\mathrm d}\mu\\ +\infty \end{array}\quad\begin{array}{ll} \text{ if }f\in W^{1,2}({\rm X},{\sf d},\mu),\\ \text{ otherwise.} \end{array}\right. \end{equation} It holds that the metric measure space \(({\rm X},{\sf d},\mu)\) is infinitesimally Hilbertian if and only if \({\sf E}_{\rm Ch}\) satisfies the \emph{parallelogram rule} when restricted to \(W^{1,2}({\rm X},{\sf d},\mu)\), \emph{i.e.}, \begin{equation}\label{eq:parallelogram_id} {\sf E}_{\rm Ch}(f+g)+{\sf E}_{\rm Ch}(f-g)=2\,{\sf E}_{\rm Ch}(f)+2\,{\sf E}_{\rm Ch}(g) \quad\text{ for every }f,g\in W^{1,2}({\rm X},{\sf d},\mu). \end{equation} Furthermore, we define the functional \({\sf E}_{\rm lip}\colon L^2(\mu)\to[0,+\infty]\) as \begin{equation}\label{eq:def_E_lip} {\sf E}_{\rm lip}(f)\coloneqq\left\{\begin{array}{ll} \frac{1}{2}\int{\rm lip}^2(f)\,{\mathrm d}\mu\\ +\infty \end{array}\quad\begin{array}{ll} \text{ if }f\in{\rm LIP}_c({\rm X}),\\ \text{ otherwise.} \end{array}\right. \end{equation} Given any \(f\in{\rm LIP}_c({\rm X})\), it holds that \(f\in W^{1,2}({\rm X},{\sf d},\mu)\) and \(|Df|\leq{\rm lip}(f)\) in the \(\mu\)-a.e.\ sense. This ensures that the inequality \({\sf E}_{\rm Ch}\leq{\sf E}_{\rm lip}\) is satisfied. 
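We recall that the equivalence between infinitesimal Hilbertianity and \eqref{eq:parallelogram_id} is an instance of the classical Jordan--von Neumann criterion: the polarisation formula \[\langle f,g\rangle\coloneqq\frac{1}{4}\Big(\|f+g\|_{W^{1,2}({\rm X},{\sf d},\mu)}^2-\|f-g\|_{W^{1,2}({\rm X},{\sf d},\mu)}^2\Big)\quad\text{ for every }f,g\in W^{1,2}({\rm X},{\sf d},\mu)\] defines a scalar product inducing the Sobolev norm if and only if the norm satisfies the parallelogram identity; since \(\|f\|_{W^{1,2}({\rm X},{\sf d},\mu)}^2=\|f\|_{L^2(\mu)}^2+2\,{\sf E}_{\rm Ch}(f)\) and the \(L^2(\mu)\)-part always satisfies the parallelogram identity, this amounts exactly to \eqref{eq:parallelogram_id}.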
Actually, \({\sf E}_{\rm Ch}\) is the \(L^2(\mu)\)-relaxation of \({\sf E}_{\rm lip}\): \begin{theorem}[Density in energy \cite{AmbrosioGigliSavare11-3}] \label{thm:density_in_energy} Let \(({\rm X},{\sf d},\mu)\) be a metric measure space, with \(({\rm X},{\sf d})\) proper. Then \({\sf E}_{\rm Ch}\) is the \emph{\(L^2(\mu)\)-lower semicontinuous envelope} of \({\sf E}_{\rm lip}\), \emph{i.e.}, it holds that \[{\sf E}_{\rm Ch}(f)=\inf\varliminf_{n\to\infty}{\sf E}_{\rm lip}(f_n)\quad\text{ for every }f\in L^2(\mu),\] where the infimum is taken among all sequences \((f_n)_n\subseteq L^2(\mu)\) such that \(f_n\to f\) in \(L^2(\mu)\). \end{theorem} \subsection{Decomposability bundle}\label{ss:decomposability_bundle} Let us denote by \({\rm Gr}(\mathbb{R}^d)\) the set of all linear subspaces of \(\mathbb{R}^d\). Given any \(V,W\in{\rm Gr}(\mathbb{R}^d)\), we define the distance \({\sf d}_{\rm Gr}(V,W)\) as the Hausdorff distance in \(\mathbb{R}^d\) between the closed unit ball of \(V\) and that of \(W\). Hence, \(\big({\rm Gr}(\mathbb{R}^d),{\sf d}_{\rm Gr}\big)\) is a compact metric space. \begin{theorem}[Decomposability bundle \cite{AM16}]\label{thm:Alberti-Marchese} Let \(\mu\geq 0\) be a given Radon measure on \(\mathbb{R}^d\). 
Then there exists a \(\mu\)-a.e.\ unique Borel mapping \(V(\mu,\cdot)\colon\mathbb{R}^d\to{\rm Gr}(\mathbb{R}^d)\), called the \emph{decomposability bundle} of \(\mu\), such that the following properties hold: \begin{itemize} \item[\(\rm i)\)] Any function \(f\in{\rm LIP}(\mathbb{R}^d)\) is differentiable at \(\mu\)-a.e.\ \(x\in\mathbb{R}^d\) with respect to \(V(\mu,x)\), \emph{i.e.}, there exists a Borel map \(\nabla_{\!\scriptscriptstyle\rm AM} f\colon\mathbb{R}^d\to\mathbb{R}^d\) such that \(\nabla_{\!\scriptscriptstyle\rm AM} f(x)\in V(\mu,x)\) for all \(x\in\mathbb{R}^d\) and \begin{equation}\label{eq:formula_nablaAM} \lim_{V(\mu,x)\ni v\to 0}\frac{f(x+v)-f(x)-\nabla_{\!\scriptscriptstyle\rm AM} f(x)\cdot v}{|v|}=0 \quad\text{ for }\mu\text{-a.e.\ }x\in\mathbb{R}^d. \end{equation} \item[\(\rm ii)\)] There exists a function \(f_0\in{\rm LIP}(\mathbb{R}^d)\) such that for \(\mu\)-a.e.\ point \(x\in\mathbb{R}^d\) it holds that \(f_0\) is not differentiable at \(x\) with respect to any direction \(v\in\mathbb{R}^d\setminus V(\mu,x)\). \end{itemize} \end{theorem} We refer to \(\nabla_{\!\scriptscriptstyle\rm AM} f\) as the \emph{Alberti--Marchese gradient} of \(f\). It readily follows from \eqref{eq:formula_nablaAM} that \(\nabla_{\!\scriptscriptstyle\rm AM} f\) is uniquely determined (up to \(\mu\)-a.e.\ equality) and that for every \(f,g\in{\rm LIP}(\mathbb{R}^d)\) it holds that \begin{equation}\label{eq:nablaAM_linear} \nabla_{\!\scriptscriptstyle\rm AM}(f\pm g)(x)=\nabla_{\!\scriptscriptstyle\rm AM} f(x)\pm\nabla_{\!\scriptscriptstyle\rm AM} g(x) \quad\text{ for }\mu\text{-a.e.\ }x\in\mathbb{R}^d. \end{equation} \begin{remark}{\rm Theorem \ref{thm:Alberti-Marchese} was actually proven under the additional assumption of \(\mu\) being a finite measure. However, the statement depends only on the null sets of \(\mu\), not on the measure \(\mu\) itself. 
Therefore, in order to obtain Theorem \ref{thm:Alberti-Marchese} as a consequence of the original result in \cite{AM16}, it is sufficient to replace \(\mu\) with the following Borel probability measure on \(\mathbb{R}^d\): \[ \tilde\mu\coloneqq\sum_{j=1}^\infty\frac{\mu|_{B_j(\bar x)}} {2^j\mu\big(B_j(\bar x)\big)},\quad\text{ for some }\bar x\in{\rm spt}(\mu). \] Observe, indeed, that the measure \(\tilde\mu\) satisfies \(\mu\ll\tilde\mu\ll\mu\). \penalty-20\null \(\blacksquare\)}\end{remark} \begin{remark}{\rm Given any function \(f\in{\rm LIP}(\mathbb{R}^d)\), it holds that \begin{equation}\label{eq:nablaAM_leq_lip} \big|\nabla_{\!\scriptscriptstyle\rm AM} f(x)\big|\leq{\rm lip}(f)(x)\quad\text{ for }\mu\text{-a.e.\ }x\in\mathbb{R}^d. \end{equation} Indeed, fix any point \(x\in\mathbb{R}^d\) such that \(f\) is differentiable at \(x\) with respect to \(V(\mu,x)\). Then for all \(v\in V(\mu,x)\setminus\{0\}\) it holds that \(\nabla_{\!\scriptscriptstyle\rm AM} f(x)\cdot v=|v|\,\lim_{h\searrow 0}\big(f(x+hv)-f(x)\big)/|hv| \leq|v|\,{\rm lip}(f)(x)\) by \eqref{eq:formula_nablaAM}, thus accordingly \(\big|\nabla_{\!\scriptscriptstyle\rm AM} f(x)\big|=\sup\big\{\nabla_{\!\scriptscriptstyle\rm AM} f(x)\cdot v\;\big|\; v\in V(\mu,x),\,|v|\leq 1\big\}\leq{\rm lip}(f)(x)\). \penalty-20\null \(\blacksquare\)}\end{remark} \section{Universal infinitesimal Hilbertianity of the Euclidean space} The objective of this section is to show that the Euclidean space is \emph{universally infinitesimally Hilbertian}, meaning that it is infinitesimally Hilbertian when equipped with any Radon measure; cf.\ Theorem \ref{thm:Eucl_inf_Hilb} below. The strategy of the proof we are going to present here is based upon the structure of the decomposability bundle described in Subsection \ref{ss:decomposability_bundle}. 
First of all, we prove that any given test plan over the weighted Euclidean space is `tangent', in a suitable sense, to the Alberti--Marchese decomposability bundle: \begin{lemma}\label{lem:pi_tangent_to_V} Let \(\mu\geq 0\) be a given Radon measure on \(\mathbb{R}^d\). Let \({\mbox{\boldmath\(\pi\)}}\) be a test plan on \((\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\). Then for \({\mbox{\boldmath\(\pi\)}}\)-a.e.\ \(\gamma\) it holds that \[\dot\gamma_t\in V(\mu,\gamma_t)\quad \text{ for }\mathcal L^1\text{-a.e.\ }t\in[0,1].\] \end{lemma} \begin{proof} Let \(f_0\) be an \(L\)-Lipschitz function as in ii) of Theorem \ref{thm:Alberti-Marchese}. Set \(B\subseteq C\big([0,1],\mathbb{R}^d\big)\times[0,1]\) as \[B\coloneqq\Big\{(\gamma,t)\;\Big|\;\gamma\text{ and }f_0\circ\gamma \text{ are differentiable at }t,\text{ and }\dot\gamma_t\notin V(\mu,\gamma_t)\Big\}.\] It can be easily shown that \(B\) is Borel measurable. We can assume that \(\gamma\) is absolutely continuous (since by definition a test plan is concentrated on absolutely continuous curves); in particular, also \(f_0 \circ \gamma\) is absolutely continuous, and thus both \(\gamma\) and \(f_0 \circ \gamma\) are differentiable \(\mathcal{L}^1\)-almost everywhere. Hence, we are done if we can prove that \(({\mbox{\boldmath\(\pi\)}}\otimes\mathcal L^1)(B)=0\). Call \(B_t\coloneqq\big\{\gamma\;\big|\;(\gamma,t)\in B\big\}\) for every \(t\in[0,1]\). Moreover, let \(G\) stand for the set of all \(x\in\mathbb{R}^d\) such that \(f_0\) is not differentiable at \(x\) with respect to any direction \(v\in\mathbb{R}^d\setminus V(\mu,x)\). Thus, \(\mu(\mathbb{R}^d\setminus G)=0\) by Theorem \ref{thm:Alberti-Marchese}. We claim that the inclusion \({\rm e}_t(B_t)\subseteq\mathbb{R}^d\setminus G\) holds for every \(t\in[0,1]\). 
Indeed, for every \(\gamma\in B_t\) one has that \[\begin{split} \bigg|\frac{f_0(\gamma_t+h\dot\gamma_t)-f_0(\gamma_t)}{h}-(f_0\circ\gamma)'_t\bigg| &\leq\bigg|\frac{f_0(\gamma_t+h\dot\gamma_t)-f_0(\gamma_{t+h})}{h}\bigg| +\bigg|\frac{f_0(\gamma_{t+h})-f_0(\gamma_t)}{h}-(f_0\circ\gamma)'_t\bigg|\\ &\leq\,L\,\bigg|\frac{\gamma_{t+h}-\gamma_t}{h}-\dot\gamma_t\bigg| +\bigg|\frac{f_0(\gamma_{t+h})-f_0(\gamma_t)}{h}-(f_0\circ\gamma)'_t\bigg|, \end{split}\] so by letting \(h\to 0\) we conclude that \(f_0\) is differentiable at \(\gamma_t\) in the direction \(\dot\gamma_t\), \emph{i.e.}, \(\gamma_t\notin G\). Therefore, it follows that \({\mbox{\boldmath\(\pi\)}}(B_t)\leq{\mbox{\boldmath\(\pi\)}}\big({\rm e}_t^{-1}(\mathbb{R}^d\setminus G)\big)\leq {\rm Comp}({\mbox{\boldmath\(\pi\)}})\,\mu(\mathbb{R}^d\setminus G)=0\) for all \(t\in[0,1]\). This grants that \(({\mbox{\boldmath\(\pi\)}}\otimes\mathcal L^1)(B)=0\) by Fubini's theorem, whence the statement follows. \end{proof} As a consequence of Lemma \ref{lem:pi_tangent_to_V}, we can readily prove that the modulus of the Alberti--Marchese gradient of a given Lipschitz function is a weak upper gradient of the function itself: \begin{proposition}\label{prop:nablaAM_wug} Let \(\mu\geq 0\) be a Radon measure on \(\mathbb{R}^d\). Let \(f\in{\rm LIP}_c(\mathbb{R}^d)\) be given. Then the function \(|\nabla_{\!\scriptscriptstyle\rm AM} f|\in L^2(\mu)\) is a weak upper gradient of \(f\). \end{proposition} \begin{proof} Let \({\mbox{\boldmath\(\pi\)}}\) be any test plan over \((\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\). We claim that for \({\mbox{\boldmath\(\pi\)}}\)-a.e.\ \(\gamma\) it holds \begin{equation}\label{eq:nablaAM_wug_claim} (f\circ\gamma)'_t=\nabla_{\!\scriptscriptstyle\rm AM} f(\gamma_t)\cdot\dot\gamma_t\quad \text{ for }\mathcal L^1\text{-a.e.\ }t\in[0,1]. 
\end{equation} Indeed, for \(({\mbox{\boldmath\(\pi\)}}\otimes\mathcal L^1)\)-a.e.\ \((\gamma,t)\) we have that \(f\) is differentiable at \(\gamma_t\) with respect to \(V(\mu,\gamma_t)\) and that \(\dot\gamma_t\in V(\mu,\gamma_t)\); this stems from item i) of Theorem \ref{thm:Alberti-Marchese} and Lemma \ref{lem:pi_tangent_to_V}. Hence, \eqref{eq:formula_nablaAM} yields \[\nabla_{\!\scriptscriptstyle\rm AM} f(\gamma_t)\cdot\dot\gamma_t= \lim_{h\searrow 0}\frac{f(\gamma_t+h\dot\gamma_t)-f(\gamma_t)}{h}= \lim_{h\searrow 0}\frac{f(\gamma_{t+h})-f(\gamma_t)}{h} =(f\circ\gamma)'_t,\] which proves the claim \eqref{eq:nablaAM_wug_claim}. In particular, for \({\mbox{\boldmath\(\pi\)}}\)-a.e.\ curve \(\gamma\) it holds \[\big|(f\circ\gamma)'_t\big|\leq\big|\nabla_{\!\scriptscriptstyle\rm AM} f(\gamma_t)\big|\,|\dot\gamma_t| \quad\text{ for }\mathcal L^1\text{-a.e.\ }t\in[0,1].\] Given that \(|\nabla_{\!\scriptscriptstyle\rm AM} f|\in L^2(\mu)\) by \eqref{eq:nablaAM_leq_lip}, we conclude that \(|Df|\leq|\nabla_{\!\scriptscriptstyle\rm AM} f|\) holds in the \(\mu\)-a.e.\ sense. \end{proof} We are now in a position to prove the universal infinitesimal Hilbertianity of the Euclidean space, as an immediate consequence of Proposition \ref{prop:nablaAM_wug} and of the linearity of \(\nabla_{\!\scriptscriptstyle\rm AM}\): \begin{theorem}[Infinitesimal Hilbertianity of weighted \(\mathbb{R}^d\)]\label{thm:Eucl_inf_Hilb} Let \(\mu\geq 0\) be a Radon measure on \(\mathbb{R}^d\). Then the metric measure space \((\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\) is infinitesimally Hilbertian. 
\end{theorem} \begin{proof} First of all, let us define the \emph{Alberti--Marchese energy} functional \(\sfE_{\scriptscriptstyle{\rm AM}}\colon L^2(\mu)\to[0,+\infty]\) as \[\sfE_{\scriptscriptstyle{\rm AM}}(f)\coloneqq\left\{\begin{array}{ll} \frac{1}{2}\int|\nabla_{\!\scriptscriptstyle\rm AM} f|^2\,{\mathrm d}\mu\\ +\infty \end{array}\quad\begin{array}{ll} \text{ if }f\in{\rm LIP}_c(\mathbb{R}^d),\\ \text{ otherwise.} \end{array}\right.\] Since \(|Df|\leq|\nabla_{\!\scriptscriptstyle\rm AM} f|\leq{\rm lip}(f)\) holds \(\mu\)-a.e.\ for any \(f\in{\rm LIP}_c(\mathbb{R}^d)\) by Proposition \ref{prop:nablaAM_wug} and \eqref{eq:nablaAM_leq_lip}, we have that \({\sf E}_{\rm Ch}\leq\sfE_{\scriptscriptstyle{\rm AM}}\leq{\sf E}_{\rm lip}\), where \({\sf E}_{\rm Ch}\) and \({\sf E}_{\rm lip}\) are defined as in \eqref{eq:def_E_Ch} and \eqref{eq:def_E_lip}, respectively. In view of Theorem \ref{thm:density_in_energy}, we deduce that \({\sf E}_{\rm Ch}\) is the \(L^2(\mu)\)-lower semicontinuous envelope of \(\sfE_{\scriptscriptstyle{\rm AM}}\). Thanks to the identities in \eqref{eq:nablaAM_linear}, we also know that \(\sfE_{\scriptscriptstyle{\rm AM}}\) satisfies the parallelogram rule when restricted to \({\rm LIP}_c(\mathbb{R}^d)\), which means that \begin{equation}\label{eq:parallelogram_id_AM} \sfE_{\scriptscriptstyle{\rm AM}}(f+g)+\sfE_{\scriptscriptstyle{\rm AM}}(f-g)=2\,\sfE_{\scriptscriptstyle{\rm AM}}(f)+2\,\sfE_{\scriptscriptstyle{\rm AM}}(g)\quad\text{ for every }f,g\in{\rm LIP}_c(\mathbb{R}^d). \end{equation} Fix \(f,g\in W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\). Let us choose any two sequences $(f_n)_n,(g_n)_n\subseteq{\rm LIP}_c(\mathbb{R}^d)$ such that \begin{itemize} \item \(f_n\to f\) and \(g_n\to g\) in \(L^2(\mu)\), \item \(\sfE_{\scriptscriptstyle{\rm AM}}(f_n)\to{\sf E}_{\rm Ch}(f)\) and \(\sfE_{\scriptscriptstyle{\rm AM}}(g_n)\to{\sf E}_{\rm Ch}(g)\). 
\end{itemize} In particular, observe that \(f_n+g_n\to f+g\) and \(f_n-g_n\to f-g\) in \(L^2(\mu)\). Therefore, it holds that \[\begin{split} {\sf E}_{\rm Ch}(f+g)+{\sf E}_{\rm Ch}(f-g)&\leq\varliminf_{n\to\infty}\big(\sfE_{\scriptscriptstyle{\rm AM}}(f_n+g_n)+\sfE_{\scriptscriptstyle{\rm AM}}(f_n-g_n)\big) \overset{\eqref{eq:parallelogram_id_AM}}=2\lim_{n\to\infty}\big(\sfE_{\scriptscriptstyle{\rm AM}}(f_n)+\sfE_{\scriptscriptstyle{\rm AM}}(g_n)\big)\\ &=2\,{\sf E}_{\rm Ch}(f)+2\,{\sf E}_{\rm Ch}(g). \end{split}\] By replacing $f$ and $g$ with $f+g$ and $f-g$, respectively, and using the $2$-homogeneity of the Cheeger energy (namely, ${\sf E}_{\rm Ch}(2f)=4\,{\sf E}_{\rm Ch}(f)$), we conclude that the converse inequality is verified as well. Consequently, the Cheeger energy \({\sf E}_{\rm Ch}\) satisfies the parallelogram rule \eqref{eq:parallelogram_id}, thus \(W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\) is a Hilbert space. This completes the proof of the statement. \end{proof} \begin{remark}{\rm As a byproduct of the proof of Theorem \ref{thm:Eucl_inf_Hilb}, we see that for all $f\in W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)$ there exists a sequence $(f_n)_n\subseteq{\rm LIP}_c(\mathbb{R}^d)$ such that $f_n\to f$ and $|\nabla_{\!\scriptscriptstyle\rm AM} f_n|\to|Df|$ in $L^2(\mu)$. \penalty-20\null \(\blacksquare\)}\end{remark} \begin{example}\label{ex:example_Cantor}{\rm Given an arbitrary Radon measure \(\mu\) on \(\mathbb{R}^d\), it might happen that \[|Df|\neq|\nabla_{\!\scriptscriptstyle\rm AM} f|\quad\text{ for some }f\in{\rm LIP}_c(\mathbb{R}^d).\] For instance, consider the measure \(\mu\coloneqq\mathcal L^1|_C\) on \(\mathbb{R}\), where \(C\subseteq\mathbb{R}\) is any Cantor set of positive Lebesgue measure. Since the support of \(\mu\) is totally disconnected, one has that every \(f\in L^2(\mu)\) is a Sobolev function with \(|Df|=0\). 
However, it holds \(V(\mu,x)=\mathbb{R}\) for \(\mathcal L^1\)-a.e.\ \(x\in C\) by Rademacher theorem, whence for any \(f\in{\rm LIP}(\mathbb{R})\) we have that \(\nabla_{\!\scriptscriptstyle\rm AM} f(x)=f'(x)\) for \(\mathcal L^1\)-a.e.\ \(x\in C\). \penalty-20\null \(\blacksquare\)}\end{example} \section{Closability of the Sobolev norm on smooth functions} The aim of this concluding section is to address a problem that has been raised by M.\ Fukushima (as reported in \cite[Section 2.6]{Bogachev10}). Namely, we provide a (negative) answer to the following question: {\it Does there exist a singular Radon measure \(\mu\) on \(\mathbb{R}^2\) for which the Sobolev norm \(\|\cdot\|_{W^{1,2}(\mathbb{R}^2,{\sf d}_{\rm Eucl},\mu)}\) is closable on compactly-supported smooth functions (in the sense of Definition \ref{def:closability} below)?} Actually, we are going to prove a stronger result: {\it Given any Radon measure \(\mu\) on \(\mathbb{R}^d\) that is not absolutely continuous with respect to \(\mathscr L^d\), it holds that \(\|\cdot\|_{W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)}\) is not closable on compactly-supported smooth functions.} Cf.\ Theorem \ref{thm:no_closable} below. Let \(f\in C^\infty_c(\mathbb{R}^d)\) be given. Then we denote by \(\nabla f\colon\mathbb{R}^d\to\mathbb{R}^d\) its classical gradient. Note that the identity \(|\nabla f|={\rm lip}(f)\) holds. Given a Radon measure \(\mu\) on \(\mathbb{R}^d\), it is immediate to check that \begin{equation}\label{eq:proj_grad} \nabla_{\!\scriptscriptstyle\rm AM} f(x)=\pi_x\big(\nabla f(x)\big)\quad\text{ for }\mu\text{-a.e.\ }x\in\mathbb{R}^d, \end{equation} where \(\pi_x\colon\mathbb{R}^d\to V(\mu,x)\) stands for the orthogonal projection map. We denote by \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\) the space of all (equivalence classes, up to \(\mu\)-a.e.\ equality, of) Borel maps \(v\colon\mathbb{R}^d\to\mathbb{R}^d\) with \(|v|\in L^2(\mu)\). 
It holds that \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\) is a Hilbert space if endowed with the norm \(v\mapsto\big(\int|v|^2\,{\mathrm d}\mu\big)^{1/2}\). \begin{definition}[Closability of the Sobolev norm on smooth functions] \label{def:closability} Let \(\mu\) be a Radon measure on \(\mathbb{R}^d\). Then the Sobolev norm \(\|\cdot\|_{W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)}\) is \emph{closable on compactly-supported smooth functions} provided the following property is verified: if a sequence \((f_n)_n\subseteq C^\infty_c(\mathbb{R}^d)\) satisfies \(f_n\to 0\) in \(L^2(\mu)\) and \(\nabla f_n\to v\) in \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\) for some element \(v\in L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\), then it holds that \(v=0\). \end{definition} In order to provide some alternative characterisations of the above-defined closability property, we need to recall the following improvement of Theorem \ref{thm:density_in_energy} in the weighted Euclidean space case: \begin{theorem}[Density in energy of smooth functions \cite{GP16-2}]\label{thm:density_in_energy_smooth} Let \(\mu\) be a Radon measure on \(\mathbb{R}^d\). Then \({\sf E}_{\rm Ch}\) is the \(L^2(\mu)\)-lower semicontinuous envelope of the functional \[ L^2(\mu)\ni f\longmapsto\left\{\begin{array}{ll} \frac{1}{2}\int|\nabla f|^2\,{\mathrm d}\mu\\ +\infty \end{array}\quad\begin{array}{ll} \text{ if }f\in C^\infty_c(\mathbb{R}^d),\\ \text{ otherwise.} \end{array}\right. \] \end{theorem} \begin{lemma}\label{lem:equiv_closable} Let \(\mu\) be a Radon measure on \(\mathbb{R}^d\). Then the following conditions are equivalent: \begin{itemize} \item[\(\rm i)\)] The Sobolev norm \(\|\cdot\|_{W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)}\) is closable on compactly-supported smooth functions. \item[\(\rm ii)\)] The functional \({\sf E}_{\rm lip}\) -- see \eqref{eq:def_E_lip} -- is \(L^2(\mu)\)-lower semicontinuous when restricted to \(C^\infty_c(\mathbb{R}^d)\). 
\item[\(\rm iii)\)] The identity \(|Df|=|\nabla f|\) holds \(\mu\)-a.e.\ on \(\mathbb{R}^d\), for every function \(f\in C^\infty_c(\mathbb{R}^d)\). \end{itemize} \end{lemma} \begin{proof}\ \\ {\color{blue}\({\rm i)}\Longrightarrow{\rm ii)}\)} Fix any \(f\in C^\infty_c(\mathbb{R}^d)\) and \((f_n)_n\subseteq C^\infty_c(\mathbb{R}^d)\) such that \(f_n\to f\) in \(L^2(\mu)\). We claim that \begin{equation}\label{eq:equiv_closable_claim} \int|\nabla f|^2\,{\mathrm d}\mu\leq\varliminf_{n\to\infty}\int|\nabla f_n|^2\,{\mathrm d}\mu. \end{equation} Without loss of generality, we may assume the right-hand side in \eqref{eq:equiv_closable_claim} is finite. Therefore, we can find a subsequence \((f_{n_k})_k\) of \((f_n)_n\) and an element \(v\in L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\) such that \(\lim_k\int|\nabla f_{n_k}|^2\,{\mathrm d}\mu=\varliminf_n\int|\nabla f_n|^2\,{\mathrm d}\mu\) and \(\nabla f_{n_k}\rightharpoonup v\) in the weak topology of \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\). By virtue of Banach--Saks theorem, we can additionally require that \(\nabla\tilde f_k\to v\) in the strong topology of \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\), where we set \(\tilde f_k\coloneqq\frac{1}{k}\sum_{i=1}^k f_{n_i}\in C^\infty_c(\mathbb{R}^d)\) for all \(k\in\mathbb{N}\). Since \(\tilde f_k-f\to 0\) in \(L^2(\mu)\) and \(\nabla(\tilde f_k-f)\to v-\nabla f\) in \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\), we deduce from i) that \(v=\nabla f\). Consequently, we have that \(\nabla f_n\rightharpoonup\nabla f\) in the weak topology of \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\), thus proving \eqref{eq:equiv_closable_claim} by semicontinuity of the norm. In other words, it holds that \({\sf E}_{\rm lip}(f)\leq\varliminf_n{\sf E}_{\rm lip}(f_n)\), which yields the validity of item ii).\\ {\color{blue}\({\rm ii)}\Longrightarrow{\rm iii)}\)} Let \(f\in C^\infty_c(\mathbb{R}^d)\) be given. 
Theorem \ref{thm:density_in_energy_smooth} yields existence of a sequence \((f_n)_n\subseteq C^\infty_c(\mathbb{R}^d)\) such that \(f_n\to f\) and \(|\nabla f_n|\to|Df|\) in \(L^2(\mu)\). Therefore, item ii) ensures that \[ \frac{1}{2}\int|\nabla f|^2\,{\mathrm d}\mu ={\sf E}_{\rm lip}(f)\leq\varliminf_{n\to\infty}{\sf E}_{\rm lip}(f_n)=\lim_{n\to\infty} \frac{1}{2}\int|\nabla f_n|^2\,{\mathrm d}\mu=\frac{1}{2}\int|Df|^2\,{\mathrm d}\mu. \] Since \(|Df|\leq|\nabla f|\) holds \(\mu\)-a.e.\ on \(\mathbb{R}^d\), we conclude that \(|Df|=|\nabla f|\), thus proving item iii).\\ {\color{blue}\({\rm iii)}\Longrightarrow{\rm i)}\)} We argue by contradiction: suppose that there exists a sequence \((f_n)_n\subseteq C^\infty_c(\mathbb{R}^d)\) such that \(f_n\to 0\) in \(L^2(\mu)\) and \(\nabla f_n\to v\) in \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\) for some \(v\in L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\setminus\{0\}\). Fix any \(k\in\mathbb{N}\) such that \(\|\nabla f_k-v\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)} \leq\frac{1}{3}\|v\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)}\). In particular, \(\|\nabla f_k\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)} \geq\frac{2}{3}\|v\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)}\). Let us define \(g_n\coloneqq f_k-f_n\in C^\infty_c(\mathbb{R}^d)\) for every \(n\in\mathbb{N}\). Since \(g_n\to f_k\) in \(L^2(\mu)\) and \(\nabla g_n\to\nabla f_k-v\) in \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\) as \(n\to\infty\), we conclude that \[ \|\nabla f_k\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)}\geq\frac{2}{3}\,\|v\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)} >\frac{1}{3}\,\|v\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)}\geq \|\nabla f_k-v\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)}= \lim_{n\to\infty}\|\nabla g_n\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)}, \] whence \({\sf E}_{\rm lip}(f_k)>\lim_n{\sf E}_{\rm lip}(g_n)\). This contradicts the lower semicontinuity of \({\sf E}_{\rm lip}\) on \(C^\infty_c(\mathbb{R}^d)\). Consequently, item i) is proven. 
\end{proof} The last ingredient we need is the following result proven by G.\ De Philippis and F.\ Rindler: \begin{theorem}[Weak converse of Rademacher theorem \cite{DPR}]\label{thm:DPR} Let \(\mu\) be a Radon measure on \(\mathbb{R}^d\). Suppose all Lipschitz functions \(f\colon\mathbb{R}^d\to\mathbb{R}\) are \(\mu\)-a.e.\ differentiable. Then it holds that \(\mu\ll\mathcal L^d\). \end{theorem} We are finally in a position to prove the following statement concerning closability: \begin{theorem}[Failure of closability for singular measures] \label{thm:no_closable} Let \(\mu\geq 0\) be a given Radon measure on \(\mathbb{R}^d\). Suppose that \(\mu\) is not absolutely continuous with respect to the Lebesgue measure \(\mathcal L^d\). Then the Sobolev norm \(\|\cdot\|_{W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)}\) is not closable on compactly-supported smooth functions. \end{theorem} \begin{proof} First of all, Theorem \ref{thm:DPR} grants the existence of a Lipschitz function \(f\colon\mathbb{R}^d\to\mathbb{R}\) and a Borel set \(P\subseteq\mathbb{R}^d\) such that \(\mu(P)>0\) and \(f\) is not differentiable at any point of \(P\). Recalling Theorem \ref{thm:Alberti-Marchese}, we then see that \(V(\mu,x)\neq\mathbb{R}^d\) for \(\mu\)-a.e.\ \(x\in P\). Therefore, we can find a compact set \(K\subseteq P\) and a vector \(v\in\mathbb{R}^d\) such that \(\mu(K)>0\) and \(v\notin V(\mu,x)\) for \(\mu\)-a.e.\ \(x\in K\). Now pick any \(g\in C^\infty_c(\mathbb{R}^d)\) such that \(\nabla g(x)=v\) holds for all \(x\in K\). Then Proposition \ref{prop:nablaAM_wug} and \eqref{eq:proj_grad} yield \[ |D g|(x)\leq|\nabla_{\!\scriptscriptstyle\rm AM}\,g|(x)=\big|\pi_x\big(\nabla g(x)\big)\big| =\big|\pi_x(v)\big|<|v|=|\nabla g|(x) \quad\text{ for }\mu\text{-a.e.\ }x\in K, \] thus accordingly \(\|\cdot\|_{W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)}\) is not closable on compactly-supported smooth functions by Lemma \ref{lem:equiv_closable}. This completes the proof of the statement. 
\end{proof} \begin{remark}{\rm The converse of Theorem \ref{thm:no_closable} might fail. For instance, the measure \(\mu\) described in Example \ref{ex:example_Cantor} is absolutely continuous with respect to \(\mathcal L^1\), but the Sobolev norm \(\|\cdot\|_{W^{1,2}(\mathbb{R},{\sf d}_{\rm Eucl},\mu)}\) is not closable on compactly-supported smooth functions as a consequence of Lemma \ref{lem:equiv_closable}. \penalty-20\null \(\blacksquare\)}\end{remark} \end{document}
\begin{document} \title{Cavity optomechanics with stoichiometric SiN films} \author{D. J. Wilson} \author{C. A. Regal} \author{S. B. Papp} \author{H. J. Kimble} \address{Norman Bridge Laboratory of Physics 12-33, California Institute of Technology, Pasadena, California 91125} \date{November 16, 2009} \begin{abstract} We study high-stress SiN films for reaching the quantum regime with mesoscopic oscillators connected to a room-temperature thermal bath, for which there are stringent requirements on the oscillators' quality factors and frequencies. Our SiN films support mechanical modes with unprecedented products of mechanical quality factor $Q_m$ and frequency $\nu_m$ reaching $Q_{m} \nu_m \simeq 2 \times 10^{13}$ Hz. The SiN membranes exhibit a low optical absorption characterized by Im$(n) \lesssim 10^{-5}$ at 935 nm, representing a 15-fold reduction relative to previously studied SiN membranes. We have developed an apparatus to simultaneously cool the motion of multiple mechanical modes based on a short, high-finesse Fabry-Perot cavity and present initial cooling results along with future possibilities. \end{abstract} \maketitle Progress towards observing quantum fluctuations of a mesoscopic mechanical oscillator has accelerated with the recent successful use of cavity light forces to damp and cool mechanical motion \cite{Braginsky1970}. This use of radiation pressure in optomechanical systems combined with cryogenic precooling will likely soon allow ground state cooling, with a variety of recent experiments demonstrating phonon occupations $\bar{n} < 100$ \cite{Marquardt2009note}. Essential to this effort has been the development of ultra-high-$Q$ mechanical oscillators that are compatible with low-loss optical systems where radiation pressure dominates over photothermal effects. An enabling advance in this field would be to push capabilities of optomechanics to an extreme where quantum limits could be achieved in the presence of a room temperature thermal bath. 
Cryogen-free operation would greatly facilitate the integration of mesoscopic quantum mechanical oscillators into hybrid quantum systems. For example, using cold atoms, mechanical oscillators could be coupled to atomic motional states or spin, thus linking to a rich quantum optics toolbox \cite{Treutlein2007,Hammerer2009b}. Via projective measurements utilizing atomic ensembles, quantum effects could also be recognized without achieving full ground state cooling \cite{Hammerer2009a}. Moreover, the capability for simultaneous coupling to multiple mechanical modes could enable the generation of multipartite entanglement among mechanical and optical modes \cite{Genes2008b}. A promising optomechanical platform introduced at Yale is a geometry in which a flexible SiN membrane with exceptional mechanical properties is coupled to a standard high-finesse Fabry-Perot cavity \cite{Zwickl2007,Thompson2007,Jayich2008}. In this Letter we demonstrate that SiN membranes can be optimized to realize one of the key minimum requirements for approaching ground state cooling from room temperature, namely a mechanical quality factor $Q_m$ larger than the number of room temperature thermal phonons, i.e. $Q_m > \bar{n}_T=k_b T_{{\rm room}} / \hbar \omega_m$ \cite{Marquardt2007a}. We realize an optomechanical system in which cavity cooling can be applied to multiple higher-order modes of these films. This system is based on a short, high-finesse Fabry-Perot cavity with a small mode waist (Fig. \ref{schematic}(a,c)) and displays low optical absorption and scattering at a wavelength of interest for cold-atom systems \cite{Treutlein2007,Hammerer2009b,Hammerer2009a}. We analyze the prospects for accessing quantum effects from room temperature in this new regime. \begin{figure} \caption{(a) Experiment schematic: Membrane window in a short cavity. (b) Illustration of $(j,k)=(3,3)$ membrane mode amplitude $a_{jk}$. 
(c) Optical mode $\psi(x,y)$ (red solid line) compared to membrane modes $(j,k)=(2,2)$ (top) and $(6,6)$ (bottom) for a 0.5 mm $\times$ 0.5 mm square membrane.} \label{schematic} \end{figure} SiN under tensile stress has been recognized for some time for its unusually low mechanical dissipation, particularly among amorphous materials \cite{Verbridge2006,Verbridge2008,Southworth2009,Thompson2007}. Recent optomechanical experiments with SiN membranes used the fundamental mode of a 1 mm low tensile stress film with $Q_m=1.1\times 10^6$ at $\omega_m=2 \pi\times 130$ kHz \cite{Thompson2007}. In our experiments we use a SiN film in its stoichiometric form, Si$_3$N$_4$, which naturally has a large tensile stress of $T\sim1$ GPa. We use sub-mm membranes and focus on the high-order modes (as illustrated in Fig. \ref{schematic}(b)); this along with the increased tension allows us to increase the resonant frequency of our mechanical mode significantly over the mode cooled in Ref. \cite{Thompson2007}. This method of using tension to increase the frequency of mechanical modes while maintaining low dissipation has been recognized as an important tool for seeing quantum effects in a variety of mechanical systems \cite{Thompson2007,Corbitt2007,Verbridge2006,Regal2008}. Specifically, we use 50 nm thick LPCVD nitride membranes from Norcada Inc in a square geometry of size $d\times d=0.5\times0.5$ mm. We characterize the mechanical quality factor by monitoring the ringdown of the mechanical excitation as a function of time (Fig. \ref{Qm} (inset)). To probe the excitation we use an etalon formed between the membrane and a partially reflective mirror. The entire setup is placed under vacuum at $10^{-7}$ Torr. In Fig. 
\ref{Qm} we plot the measured $Q_m$ as a function of mode frequency; these mode frequencies are consistent with $T=0.9$ GPa, a film mass density of $\rho = 2.7$ g/cm$^3$ \cite{Verbridge2006}, and the expectation of $\nu_m(j,k) = \omega_m(j,k)/2\pi=\sqrt{T/4\rho d^2}\sqrt{j^2+k^2}$, where $j$ and $k$ are mode indices. The highest quality factor observed reaches $Q_m=4\times10^6$; the $Q$-frequency product ($Q_{m} \nu_m$) reaches $2\times10^{13}$ Hz and exceeds $1\times10^{13}$ Hz over a wide range of frequencies from 2 to 12 MHz. The three sets of points in Fig. \ref{Qm} represent separate trials in which a new membrane was mounted in the apparatus in specific ways. Variation between trials is likely due to mechanical coupling between the membrane and the mounting structure. In addition to these three representative sets we have mounted and characterized dozens of other membranes. Based upon our body of results, we infer that mounting techniques that utilize the least contact to the frame allow us to realize the highest quality factors. \begin{figure}\label{Qm} \end{figure} Our $Q$-frequency products clearly extend above the grey-shaded region in Fig. \ref{Qm}, which represents a $Q$-frequency product less than $k_b T_{\rm room}/h$. High-stress Si$_3$N$_4$ was previously used to create membranes with $Q$-frequency products reaching $0.08 \times 10^{13}$ Hz and long nanostrings reaching $0.7 \times 10^{13}$ Hz \cite{Verbridge2006,Verbridge2008}. Our measured products surpass these results and achieve a greater than 10-fold improvement over room-temperature optomechanical systems in which cavity cooling has been implemented thus far (see for example Refs. \cite{Thompson2007,Schliesser2008}). Furthermore, our results illustrate that these amorphous oscillators approach the thermoelastic limit \cite{theorynote} and support $Q$-frequency products that rival the best observed at room temperature in single-crystal materials and diamond (see Ref. \cite{Lee2009} for a summary). 
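As a quick numerical cross-check of the quoted figures, the membrane dispersion relation and the room-temperature phonon threshold can be evaluated directly (a sketch; the temperature $T_{\rm room}=295$ K and the CODATA constant values are our assumptions, not stated in the text):

```python
import math

# Physical constants (CODATA values)
k_B = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s

# Film parameters quoted in the text
T_film = 0.9e9       # tensile stress, Pa
rho = 2700.0         # mass density, kg/m^3 (2.7 g/cm^3)
d = 0.5e-3           # membrane side length, m

def nu(j, k):
    """Mode frequency nu_m(j,k) = sqrt(T/(4 rho d^2)) * sqrt(j^2 + k^2), in Hz."""
    return math.sqrt(T_film / (4 * rho * d**2)) * math.hypot(j, k)

# A Q-frequency product above k_B T_room / h implies Q_m exceeds the
# number of room-temperature thermal phonons n_T at any mode frequency.
T_room = 295.0  # K (assumed)
threshold = k_B * T_room / h

print(f"nu(1,1) = {nu(1, 1)/1e6:.2f} MHz")    # fundamental mode
print(f"nu(6,6) = {nu(6, 6)/1e6:.2f} MHz")    # high-order mode used below
print(f"k_B T_room / h = {threshold:.2e} Hz")
print(f"n_T at 5 MHz = {threshold/5e6:.2e}")  # thermal phonon number
```

The numbers come out consistent with the text: $\nu_m(6,6)\approx 4.9$ MHz, $k_b T_{\rm room}/h\approx 6\times10^{12}$ Hz, and $\bar{n}_T\approx 1.2\times10^6$ at 5 MHz, so the measured $Q_m=4\times10^6$ indeed exceeds the thermal phonon number.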
To cool the higher-order modes of these membranes, which have the best $Q$-frequency products, we must be able to selectively probe individual antinodes of these modes within a high-finesse cavity (Fig. \ref{schematic}(a,b)). We create a small mode spot size by using a short Fabry-Perot cavity ($L=0.74$ mm) with small radius of curvature mirrors ($R_c=5$ cm). This results in a TEM$_{00}$ optical mode with a $1/e^2$ diameter of $2w_0=71$ $\mu$m at our chosen wavelength of $\lambda_0=935$ nm, which is a ``magic'' wavelength for trapping of atomic cesium \cite{McKeever2003}. In Fig. \ref{schematic}(c) this mode size is compared to the mechanical excitation, given by $a_{jk} (x,y)=a_z \sin{(j \pi x/d)} \sin{(k \pi y/d)}$. \begin{figure} \caption{Measured cavity finesse $F$ as a function of membrane position $\Delta z$ about the center of the cavity (black circles). Absent the membrane the finesse is $F_0 = 9160$. The shaded regions represent the envelope of the calculated finesse variation for Im$(n)=0.6 \times 10^{-5}$ (red) and Im$(n)=0$ (green) for comparison. (Insets) Examples of the local dependence as a function of $\Delta z$ compared to theoretical curves for Im$(n)=0.6\times10^{-5}$.} \label{absorption} \end{figure} Another important feature of our optomechanical system is the low optical absorption of our Si$_3$N$_4$ films. We measure the absorption by studying the effect of the membrane on the cavity finesse \cite{Jayich2008}. Fig. \ref{absorption} shows the finesse $F$ measured in transmission through the cavity as a function of membrane position $\Delta z$. The measured $F$ is modulated as a function of membrane displacement, $\Delta z$, with a short periodicity of $\lambda_{0}/2$ set by the cavity standing wave (insets to Fig. \ref{absorption}), as well as on the larger scale of $L/2$ with respect to the cavity center ($\Delta z = 0$) with the envelope of modulation of $F$ exceeding the bare cavity finesse $F_{0}=9160$. 
These effects can be understood in terms of the eigenmodes of the composite cavity with Im$(n)=0$, for which the ratio of intracavity field amplitudes $A_{l,r}$ on the left and right of the membrane varies periodically with $\Delta z$. For a membrane with reflectivity $R_{m}=1-\epsilon$ with $\epsilon \ll1$, the limiting values for the envelope of the finesse variation will be approximately $F_{l,r} \simeq F_{0} (1\pm 2\Delta z/L)$, while for $R_{m} \simeq 0$, the variation of finesse approaches zero. Our membrane with $R_{m}\simeq 0.18$ lies between these extremes, for which the numerical solution shown in Fig. \ref{absorption} has been obtained \cite{Wilson2009b}. We have calculated the expected $F$ measured in transmission for arbitrary $\Delta z$ assuming a real index of $n=2.0$. A one-parameter fit of the data to our model (Fig. \ref{absorption}) reveals absorption corresponding to an imaginary part of the index of Im$(n)$ $=0.6 \times10^{-5}$ for our films, assuming scattering losses are negligible (Fig. \ref{absorption}). The data with $|\Delta z| \gtrsim 50$ $\mu$m are more consistent with Im$(n)$ as high as $0.9\times10^{-5}$, which may be a manifestation of alignment effects. These values are $15-25$ times lower than recently observed with low-stress nitride \cite{Zwickl2007,Jayich2008}, but consistent with studies that show that SiN can have absorption as low as Im$(n) < 10^{-6}$ in the near-IR \cite{Barclay2006,Inukai1994}. Decreasing absorption is important for reaching the full quantum limit of motion detection \cite{Caves1981a} and is required for achieving the large linear coupling and high finesse required for strong coupling between one atom and a membrane \cite{Hammerer2009b}. \begin{figure}\label{cooling} \end{figure} In our high-finesse cavity we have realized radiation pressure cooling of a number of high-order membrane modes. 
Figure \ref{cooling} demonstrates simultaneous cooling and damping of the $(1,1)$ and $(6,6)$ modes as a function of the detuning $\Delta=\omega_L-\omega_c$ of an input laser field, where $\omega_L$ and $\omega_c$ are the laser and cavity resonance frequencies, respectively. To cool the membrane we use light from a diode laser at 935 nm. We probe the membrane's displacement by monitoring the amplitude modulation of the cooling light transmitted through the cavity off resonance. We calculate the membrane motion for a mode at $\omega_m$ via $\langle a_z^2 \rangle_{\omega_m}=\frac{\langle i^2 \rangle_{\omega_m}}{i^2(\Delta)} \frac{(\gamma/2)^2}{g^2} \frac{1}{H(\omega_m,\Delta)}$ where $i(\Delta)$ is the dc photocurrent at detuning $\Delta$ and $\sqrt{\langle i^2 \rangle_{\omega_m}}$ is the rms photocurrent fluctuation obtained by integrating the noise spectrum over a Lorentzian peak centered at $\omega_m$. $H(\omega_m,\Delta)=\Delta^2 S_+ S_-$ is a dimensionless factor for the cavity response at arbitrary detuning, where $S_{\pm}=\gamma/[(\omega_m \pm \Delta)^2+(\gamma/2)^2]$. The membrane coupling is described by $g$, which for operation at optimal linear coupling is $0.85 \eta_{jk} \omega_c/L$. The factor of 0.85 is related to the reflectivity of the membrane \cite{Zwickl2007}, whereas $\eta_{jk}=|\int{\int{dx \, dy \, \psi(x,y) \, a_{jk}(x,y)/a_z}}|$ accounts for the mode overlap (Fig. \ref{schematic}(c)) \cite{Gillespie}. Here $\psi(x,y)=(2/\pi w_0^2){\rm exp}(-2r^2/w_0^2)$, and $a_{jk}(x,y)$ is the membrane mode function discussed earlier. We operate near the center of a (6,6) antinode at a distance of $(\delta x,\delta y) \simeq (45 \hspace{4pt} \mu {\rm m}, 120 \hspace{4pt} \mu {\rm m})$ from the center of the (1,1) mode; here $\eta_{66} \simeq 0.63$ and $\eta_{11} \simeq 0.69$, where these values are known at the $10\%$ level. The mode temperature in Fig. 
\ref{cooling}(a) is then extracted from the integrated mechanical response via $\langle a_z^2 \rangle_{\omega_m} = k_b T /\ m_{e} \omega_m^2$, where the effective mass $m_e$ is $\frac{1}{4}$ the physical mass for all modes. The temperature measured at large detuning agrees with our expectation for room temperature operation to within the uncertainty. We can compare the observed cooling to our expectation based upon a simple cavity-cooling model relevant in the limit of weak coupling (lines in Fig. \ref{cooling}). We operate in a regime where the optical linewidth $\gamma=2 \pi \times 25$ MHz (FWHM for $F=8100$) is somewhat larger than $\omega_m$, in between the so-called bad and good cavity limits \cite{finessenote,Marquardt2007a,WilsonRae2007}. Here the full expression for the cooling rate is given by $\gamma_{{\rm opt}} = g^2 x_{zp}^2 |\alpha|^2 (S_+-S_-)$ where $x_{zp}=\sqrt{\hbar/2 m_e \omega_m}$ and $|\alpha|^2$ is the effective intracavity photon number, which we estimate as $\frac{2 P_{{\rm out}}^0}{\hbar \omega_0} \frac{\gamma/2}{(\gamma/2)^2+\Delta^2}$. The calculated effective damping rate $\gamma_{{\rm eff}}=\gamma_m+\gamma_{{\rm opt}}$ and the expected temperature in this regime $T=T_{{\rm room}} \gamma_m/\gamma_{{\rm eff}}$ follow the trend of the data. We have investigated the cooling theoretically achievable for our realized parameters. Strictly, full ground state cooling requires $\bar{n}_T \gamma_m \ll \gamma \ll \omega_m $, i.e. fully resolved sidebands and an oscillator with a large enough $Q_m$ to allow the required damping. For our results it is most relevant to consider instead the case $ \bar{n}_T \gamma_m \lesssim \gamma \sim \omega_m$ and track the results into a regime of strong cooling, characterized by $g_{{\rm eff}}=g x_{zp} \alpha > \gamma$ \cite{Marquardt2007a,Dobrindt,Groblacher2009}. 
We thus calculate the expected phonon occupation $\bar{n}$ based on an exact solution to the coupled equations of motion \cite{Marquardt2007a,Genesnote} and integrate over the spectral function $S_{bb}(\omega)=\int dt e^{i\omega t} \langle \hat{b}^\dagger(t) \hat{b} \rangle$ where $\hat{b}^{\dagger}+\hat{b}=\hat{z}$. \begin{figure}\label{theory} \end{figure} Figure \ref{theory}(a) shows the calculated phonon number as a function of the cooling strength for the realistic parameters: $Q_m=4 \times 10^6$, $\nu_m=5$ MHz, $\Delta=-\omega_m$, $\bar{n}_T=k_b T_{{\rm room}}/\hbar \omega_m=1.2 \times 10^6$, and $\gamma=2 \pi \times 5$ and 1 MHz (see caption). The calculation shows that reaching $\bar{n}\sim 1$ from room temperature is theoretically achievable for demonstrated parameters before reaching the static bistability point ($g_{{\rm eff}}=\omega_m/2$ for $\gamma \ll \omega_m$) \cite{Dorsel1983}. Another outstanding goal is achieving strong coupling between a ground state mechanical resonator and the cavity field \cite{Marquardt2007a,Dobrindt,Groblacher2009}. Figure \ref{theory}(b) illustrates the spectral function $S_{bb}(\omega)$ as a function of the cavity linewidth for $g_{{\rm eff}} = 2 \pi \times 1$ MHz. The normal-mode splitting indicative of strong coupling appears for phonon occupancies near unity. Observing quantum effects of a mesoscopic oscillator coupled to an ambient thermal bath would be a significant advance. Future work should address experimental challenges to achieving the occupations shown in Fig. \ref{theory}, such as reducing laser phase noise and mitigating the thermal noise of the Fabry-Perot cavity substrates \cite{Kimble2002}. However, to implement a full range of quantum protocols with these oscillators one must achieve occupations $\bar{n} \ll 1$. This will require higher $Q$-frequency products or lower initial thermal occupation, as illustrated by the green line in Fig. \ref{theory}(a). 
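The relevant scales in this discussion can be estimated with the textbook weak-coupling sideband-cooling expressions (a simplified sketch, not the exact coupled-mode solution used for the figure; the backaction limit $\bar{n}_{\rm min}=(\gamma/4\omega_m)^2$ at $\Delta=-\omega_m$ is the standard weak-coupling result):

```python
import math

# Parameters quoted in the text
Q_m = 4e6
nu_m = 5e6                    # mechanical frequency, Hz
omega_m = 2 * math.pi * nu_m  # rad/s
gamma_m = omega_m / Q_m       # intrinsic mechanical damping, rad/s
n_T = 1.2e6                   # room-temperature thermal occupation

def n_min(gamma_cav_hz):
    """Quantum backaction limit (gamma/(4 omega_m))^2 at Delta = -omega_m,
    valid in the weak-coupling regime; gamma_cav_hz is the linewidth in Hz."""
    gamma = 2 * math.pi * gamma_cav_hz
    return (gamma / (4 * omega_m)) ** 2

# Optical damping needed to compress the thermal occupation to ~1 phonon:
# n_bar ~ n_T * gamma_m / gamma_opt  =>  gamma_opt ~ n_T * gamma_m
gamma_opt_needed = n_T * gamma_m

print(f"n_min (gamma = 2pi x 1 MHz): {n_min(1e6):.1e}")
print(f"n_min (gamma = 2pi x 5 MHz): {n_min(5e6):.2f}")
print(f"required gamma_opt / 2pi ~ {gamma_opt_needed/(2*math.pi)/1e6:.1f} MHz")
```

The required $\gamma_{\rm opt}/2\pi\approx 1.5$ MHz is comparable to the cavity linewidths considered, which is why the strong-cooling regime $g_{\rm eff}\gtrsim\gamma$ must be tracked with the full coupled-mode solution rather than these weak-coupling formulas.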
Realizing even higher room temperature $Q$-frequency products than those shown in Fig. \ref{Qm} will be a subject of future investigation, where we note one limitation to consider is thermoelastic dissipation (dashed line in Fig. \ref{Qm}). Further, we note that recent results in Ref. \cite{Southworth2009} indicate that, unlike amorphous SiO$_2$, the dissipation in SiN films decreases monotonically from room temperature down to $\sim$100 K, where liquid nitrogen can be used for cooling. \acknowledgements{We thank Oskar Painter for valuable insights, especially into the absorption properties of Si$_3$N$_4$, and D. E. Chang for helpful discussions. This work was supported by the NSF and by IARPA via the ARO. CR acknowledges support from a Millikan Postdoctoral Fellowship; SP acknowledges support as fellow of the Center for the Physics of Information.} \end{document}
\begin{document} \title[On the failure of the first \v{C}ech homotopy group]{\large O\lowercase{n the failure of the first} \v{C}\lowercase{ech homotopy group to register geometrically relevant fundamental group elements}} \author{Jeremy Brazas} \address{Department of Mathematics, West Chester University of Pennsylvania, West Chester, PA 19383, USA} \email{jbrazas@wcupa.edu} \author{Hanspeter Fischer} \address{Department of Mathematical Sciences, Ball State University, Muncie, IN 47306, USA} \email{hfischer@bsu.edu} \subjclass[2010]{55Q07, 55Q05, 57M12, 57M05} \keywords{First \v{C}ech homotopy group; first shape group; strongly homotopically Hausdorff; homotopically path Hausdorff; 1-UV$_0$; generalized covering projection; discrete monodromy property; inverse limit of free monoids} \date{\today} \begin{abstract} We construct a space $\mathbb{P}$ for which the canonical homomorphism $\pi_1(\mathbb{P},p) \rightarrow \check{\pi}_1(\mathbb{P},p)$ from the fundamental group to the first \v{C}ech homotopy group is not injective, although it has all of the following properties: (1) $\mathbb{P}\setminus\{p\}$ is a 2-manifold with connected non-compact boundary; (2) $\mathbb{P}$ is connected and locally path connected; (3) $\mathbb{P}$ is strongly homotopically Hausdorff; (4) $\mathbb{P}$ is homotopically path Hausdorff; (5) $\mathbb{P}$ is 1-UV$_0$; (6) $\mathbb{P}$ admits a simply connected generalized covering space with monodromies between fibers that have discrete graphs; (7) $\pi_1(\mathbb{P},p)$ naturally injects into the inverse limit of finitely generated free monoids otherwise associated with the Hawaiian Earring; (8) $\pi_1(\mathbb{P},p)$ is locally free. 
\end{abstract} \maketitle \section{Introduction} The geometric significance of the elements of the fundamental group $\pi_1(X,x)$ of a connected and locally path-connected metric space $X$ is most prominently on display in the context of a simply connected covering space (if it exists) where these elements comprise the group of deck transformations of the covering projection. In this situation, the canonical homomorphism $\pi_1(X,x) \rightarrow \check{\pi}_1(X,x)$ to the first \v{C}ech homotopy group (also called the first shape group \cite{MS}) is an isomorphism~\cite{FZ2007}. In fact, as long as all fundamental group elements are accounted for, that is, if $\pi_1(X,x) \hookrightarrow \check{\pi}_1(X,x)$ is injective, the standard covering construction yields a {\em generalized} covering projection $p:\widetilde{X}\rightarrow X$ with connected, locally path-connected and simply connected $\widetilde{X}$ \cite{FZ2007}. It is characterized by the usual unique lifting property and we have $\pi_1(X,x)\cong Aut(\widetilde{X} \stackrel{p}{\rightarrow} X)$. Examples of spaces for which $\pi_1(X,x) \hookrightarrow \check{\pi}_1(X,x)$ is injective include all one-dimensional separable metric spaces \cite{CF1959b,EK}, all planar spaces \cite{FZ2005}, the Pontryagin sphere, the Pontryagin surface $\Pi_2$, and similar inverse limits of higher-dimensional manifolds \cite{FGu}. Several weaker properties that quantify the geometric relevance of the elements of $\pi_1(X,x)$ can be found in the literature. The {\em strongly homotopically Hausdorff} property, for example, stipulates that for every essential loop in $X$ there should be a limit to how small it can be made at a particular point by a free homotopy \cite{CMRZZ}. The {\em homotopically path Hausdorff} property, on the other hand, calls for $\pi_1(X,x)$ to be T$_1$ in the quotient topology induced by the compact-open topology on the loop space $\Omega(X,x)$ \cite{BFa}. 
Both of these properties are implied by $\pi_1(X,x) \hookrightarrow \check{\pi}_1(X,x)$ being injective \cite{FRVZ}. Then there are properties that guarantee, in and of themselves, the existence of a simply connected generalized covering space. These include the homotopically path Hausdorff property above and the {\em 1-UV$_0$} property, which requires small null-homotopic loops to contract via small homotopies \cite{BFi,FRVZ}. Even if a generalized covering projection with simply connected domain exists, it might not be a fibration and the monodromies between fibers might not be continuous \cite{FGa}. (Such is the case for the Hawaiian Earring.) However, for all one-dimensional spaces and for all planar spaces, these monodromies have {\em discrete graphs}; a fact implicitly used in the work of Eda \cite{E2002,E2010} and Conner-Kent \cite{CK}. If any two spaces with this property (cf.\@ Definition~\ref{DefDM}) are homotopy equivalent, then their respective wild sets (points at which they are not semilocally simply connected) are homeomorphic (Theorem~\ref{utility}). \begin{figure} \caption{Relationships between some local properties of $\pi_1(X,x)$} \end{figure} As far as the algebraic structure of fundamental groups of low-dimensional spaces is concerned, we recall that the fundamental group of an arbitrary planar Peano continuum (not necessarily homotopy equivalent to a one-dimensional space) is isomorphic to a subgroup of the fundamental group of some one-dimensional planar Peano continuum \cite{CC}. 
In turn, fundamental groups of one-dimensional path-connected separable metric spaces are locally free \cite{CF1959b} and, in the compact case, have structures similar to that of the Hawaiian Earring $\mathbb{H}$, where the injective function $\pi_1(\mathbb{H})\hookrightarrow \check{\pi}_1(\mathbb{H})$ factors through the limit $\mathcal M$ of an inverse system ${\mathcal M}_1\stackrel{R_1}{\longleftarrow} {\mathcal M}_2\stackrel{R_2}{\longleftarrow}{\mathcal M}_3\stackrel{R_3}{\longleftarrow} \cdots$ of free monoids ${\mathcal M}_n$ on $\{\ell_1^{\pm 1}, \ell_2^{\pm 1}, \cdots, \ell_n^{\pm 1}\}$ with $R_{n-1}$ deleting the letters $\ell_n$ and $\ell_n^{-1}$ from every word \cite{DTW,FZ2013b}. This raises the following \noindent {\bf Question:} {\em Is there a space $X$ for which $\pi_1(X)\rightarrow \check{\pi}_1(X)$ is \underline{not} injective, but with all of the other properties discussed above: low-dimensional, strongly homotopically Hausdorff, homotopically path Hausdorff, 1-UV$_0$, admits a generalized universal covering $p:\widetilde{X}\rightarrow X$ whose monodromies between fibers have discrete graphs, admits a natural injective function $\pi_1(X)\rightarrow {\mathcal M}$, and has locally free fundamental group?} We present a relatively simple and prototypical construction of a two-dimensional space that yields a positive answer to this question. Specifically, in Section~\ref{construction}, we define the space $\mathbb{P}$ mentioned in the abstract, by attaching countably many ``pairs of pants'' to the Hawaiian Earring $\mathbb{H}$, and identify its fundamental group as a direct limit of groups each isomorphic to $\pi_1(\mathbb{H})$ with injective bonding homomorphisms. In particular, $\pi_1(\mathbb{P})$ is locally free (Proposition~\ref{lf}). 
We show that $\pi_1(\mathbb{P})\rightarrow \check{\pi}_1(\mathbb{P})$ is not injective (Theorem~\ref{thm:notinj}), but that $\mathbb{P}$ is both strongly homotopically Hausdorff (Theorem~\ref{SHHthm}) and homotopically path Hausdorff (Theorem~\ref{HPH}); as far as the authors know, it is the first such example (Remarks~\ref{Z'} and \ref{Y'}). The proof of the latter property hinges on the fact that $\pi_1(\mathbb{P})$ naturally injects into the inverse limit of monoids associated with the Hawaiian Earring\linebreak (Theorem~\ref{inj}), despite $\pi_1(\mathbb{P})$ not being isomorphic to a subgroup of an inverse limit of free groups (Remark~\ref{noalgebra}). Moreover, we show that $\mathbb{P}$ is 1-UV$_0$ (Theorem~\ref{UVThm}). After a brief review of generalized covering space theory (Section~\ref{review}), we show that the monodromies for the simply connected generalized covering space of $\mathbb{P}$ have discrete graphs (Theorem~\ref{PhasDMP}). We also discuss some general aspects of this property, such as its relationship to the homotopically Hausdorff property relative to a subgroup of the fundamental group (Proposition~\ref{DMP->HH}) and its impact on the stability of wild subsets under homotopy equivalence (Theorem~\ref{utility}). \section{The Hawaiian pants $\mathbb{P}$}\label{construction} Let $C_n\subseteq \mathbb{R}^2$ be the circle of radius $\frac{1}{n}$ centered at $(\frac{1}{n},0)$ and let $\mathbb{H}=\bigcup_{i=1}^\infty C_i\subseteq \mathbb{R}^2$ be the usual Hawaiian Earring with basepoint $b_0=(0,0)$. Define $\ell_n:[0,1]\rightarrow C_n$ by $\ell_n(t)=(\frac{1}{n}(1-\cos 2\pi t), \frac{1}{n}\sin 2\pi t)$. For $n\in \mathbb{N}$, let $D_{n,1}$ and $D_{n,2}$ be two disjoint disks in the interior $D_n^\circ$ of a disk $D_n\subseteq \mathbb{R}^2$ and consider the ``pair of pants'' $P_n=D_n\setminus (D^\circ_{n,1}\cup D^\circ_{n,2})$.
Let $\alpha_n, \beta_n, \gamma_n:[0,1]\rightarrow P_n$ be parametrizations of the boundaries $\partial D_n$, $\partial D_{n,1}$, and $\partial D_{n,2}$, respectively, with clockwise orientation. Let $\mathbb{P}$ be the space obtained from $\mathbb{H}$ by attaching all $P_n$ via maps $f_n:\partial P_n\rightarrow \mathbb{H}$ such that $f_n\circ\alpha_n=\ell_{n}$, $f_n\circ\beta_n=\ell_{2n}$ and $f_n\circ\gamma_n=\ell_{2n+1}$. That is, we put $\mathbb{P}=\mathbb{H}\cup_{f} \left(\coprod_{n\in \mathbb{N}} P_n\right)$ where $f:\coprod_{n\in\mathbb{N}}\partial P_n\rightarrow \mathbb{H}$ and $f|_{\partial P_n}=f_n$. We refer to $\mathbb{P}$ as the ``Hawaiian Pants''. (See Figure~\ref{pants}.) \begin{figure} \caption{ Attaching pairs of pants to the Hawaiian Earring} \label{pants} \end{figure} We observe that exactly one pair of pants is attached to $C_1$, namely $P_1$ via $f_1|_{\partial D_1}$, and that for each $n\geqslant 2$, exactly two pairs of pants are attached to $C_n$, namely $P_n$ via $f_n|_{\partial D_n}$, and either $P_{n/2}$ via $f_{n/2}|_{\partial D_{n/2,1}}$ (if $n$ is even) or $P_{(n-1)/2}$ via $f_{(n-1)/2}|_{\partial D_{(n-1)/2,2}}$ (if $n$ is odd). In particular: \begin{proposition} \hspace{1in} \begin{itemize} \item[(a)] $\mathbb{P}\setminus \{b_0\}$ is a 2-manifold with boundary $C_1\setminus \{b_0\}$. \item[(b)] $\mathbb{P}$ is connected and locally path connected. \end{itemize} \end{proposition} We have $f^{-1}_n(b_0)=\{a_n,b_n,c_n\}$ with $a_n\in \partial D_n$, $b_n\in \partial D_{n,1}$ and $c_n\in \partial D_{n,2}$. Choose arcs $A_{n,1}, A_{n,2}\subseteq P_n$ such that $A_{n,1}\cap \partial P_n=\partial A_{n,1}=\{a_n,b_n\}$, $A_{n,2}\cap \partial P_n=\partial A_{n,2}=\{a_n,c_n\}$, and $A_{n,1}\cap A_{n,2}=\{a_n\}$, configured as in Figure~\ref{pants}. Let $B_{n,j}$ be the image of $A_{n,j}$ in $\mathbb{P}$ when attaching $P_n$. Then $B_{n,j}$ is homeomorphic to a circle. 
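For instance, unwinding the attaching maps for small indices: since $f_n\circ\alpha_n=\ell_{n}$, $f_n\circ\beta_n=\ell_{2n}$ and $f_n\circ\gamma_n=\ell_{2n+1}$, the first few circles of $\mathbb{H}$ receive the following boundary circles:
\[
\begin{array}{c|ccccc}
 & C_1 & C_2 & C_3 & C_4 & C_5\\ \hline
\mbox{attached boundaries} & \partial D_1 & \partial D_2,\ \partial D_{1,1} & \partial D_3,\ \partial D_{1,2} & \partial D_4,\ \partial D_{2,1} & \partial D_5,\ \partial D_{2,2}
\end{array}
\]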
We regard $P^\circ_n=D^\circ_n\setminus(D_{n,1}\cup D_{n,2})$ as a subspace of $\mathbb{P}$. Let $\mathbb{H}_n=\bigcup_{i=n}^\infty C_i$ and let $\mathbb{P}_n$ be the space obtained from $\mathbb{H}_n$ by attaching $P_n,P_{n+1},P_{n+2},\dots$. Define $\mathbb{P}_n^+=\bigcup_{i=1}^{n-1} (B_{i,1}\cup B_{i,2})\cup \mathbb{P}_n$ for $n\geqslant 2$ and put $\mathbb{P}_1^+=\mathbb{P}_1=\mathbb{P}$. Then $\mathbb{P}=\mathbb{P}^+_1 \supseteq \mathbb{P}^+_2 \supseteq \mathbb{P}^+_3 \supseteq \cdots$. Since each $P_n$ deformation retracts onto $\partial D_{n,1}\cup A_{n,1}\cup A_{n,2}\cup \partial D_{n,2}$, there are deformation retractions $\phi_n:\mathbb{P}^+_n\times [0,1] \rightarrow \mathbb{P}^+_n$ such that \begin{itemize} \item[(i)] for all $p\in \mathbb{P}^+_n$, we have $\phi_n(p,0)=p$; \item[(ii)] for all $p\in \mathbb{P}^+_n$, we have $\phi_n(p,1)\in \mathbb{P}^+_{n+1}$; \item[(iii)] for all $p\in \mathbb{P}^+_{n+1}$ and all $t\in [0,1]$, we have $\phi_n(p,t)=p$; \item[(iv)] for all $p\in \mathbb{P}^+_n\setminus \mathbb{P}^+_{n+1}$ and all $t\in[0,1]$, we have that $\phi_n(p,t)$ lies in the image of $P_n$ in the quotient $\mathbb{P}^+_n$. \end{itemize} Put $\mathbb{H}_{n,k}=\bigcup_{i=n}^k C_i$ (where $\mathbb{H}_{n,k}=\emptyset$ for $k<n$) and $\mathbb{H}_{n,k}^+=\bigcup_{i=1}^{n-1}(B_{i,1}\cup B_{i,2})\cup \mathbb{H}_{n,k}$. Likewise, put $\mathbb{H}_n^+=\bigcup_{i=1}^{n-1}(B_{i,1}\cup B_{i,2})\cup \mathbb{H}_n$. Note that $\mathbb{H}^+_{n,n-1}= \bigcup_{i=1}^{n-1}(B_{i,1}\cup B_{i,2})$ and $\mathbb{H}_1^+=\mathbb{H}$. 
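To illustrate the notation in the case $n=2$, we have the increasing chain of finite wedges of circles at $b_0$
\[
\mathbb{H}^+_{2,1}=B_{1,1}\cup B_{1,2}\ \subseteq\ \mathbb{H}^+_{2,2}=B_{1,1}\cup B_{1,2}\cup C_2\ \subseteq\ \mathbb{H}^+_{2,3}=B_{1,1}\cup B_{1,2}\cup C_2\cup C_3\ \subseteq\ \cdots,
\]
while $\mathbb{H}^+_2=B_{1,1}\cup B_{1,2}\cup \mathbb{H}_2$ is homeomorphic to the Hawaiian Earring $\mathbb{H}$.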
Defining $d_n(p)=\phi_n(p,1)$ and letting $r_{n,k}:\mathbb{H}^+_{n}\rightarrow \mathbb{H}^+_{n,k}$ denote the canonical retractions with $r_{n,k}(\bigcup_{i=k+1}^\infty C_i)=\{b_0\}$, we have the following commutative diagrams: \[ \xymatrix{\mathbb{P}^+_1 \ar[r]^{d_1} & \mathbb{P}^+_2 \ar[r]^{d_2} & \cdots \ar[r]^{d_{n-1}} & \mathbb{P}^+_n \ar[r]^{d_n} & \mathbb{P}^+_{n+1} \ar[r]^{d_{n+1}} & \cdots\\ \mathbb{H}^+_1 \ar[r]^{d_1} \ar@{^{(}->}[u] \ar[d]^{r_{1,k}} & \mathbb{H}^+_2 \ar[r]^{d_2} \ar@{^{(}->}[u] \ar[d]^{r_{2,k}} & \cdots \ar[r]^{d_{n-1}} & \mathbb{H}^+_n \ar[r]^{d_n} \ar@{^{(}->}[u] \ar[d]^{r_{n,k}} & \mathbb{H}^+_{n+1} \ar[r]^{d_{n+1}} \ar@{^{(}->}[u] \ar[d]^{r_{n+1,k}} & \cdots\\ \mathbb{H}^+_{1,k} \ar[r]^{d_1} & \mathbb{H}^+_{2,k} \ar[r]^{d_2} & \cdots \ar[r]^{d_{n-1}} & \mathbb{H}^+_{n,k} \ar[r]^{d_n} & \mathbb{H}^+_{n+1,k} & (k\geqslant 2n+1) } \] We may assume that there are parametrizations $\rho_{n,j}:([0,1],\{0,1\})\rightarrow (B_{n,j},b_0)$ such that \begin{equation}\label{peel} d_n\circ \ell_n=\rho_{n,1} \cdot \ell_{2n} \cdot \rho_{n,1}^- \cdot \rho_{n,2}\cdot \ell_{2n+1}\cdot \rho_{n,2}^-. \end{equation} Here, ``$\;\cdot\;$'' denotes the usual concatenation of paths and $\rho_{n,j}^-$ denotes the reverse path of $\rho_{n,j}$, given by $\rho_{n,j}^-(t)=\rho_{n,j}(1-t)$. Taking path homotopy classes, and noting that $[\ell_n]=[d_n\circ \ell_n]\in \pi_1(\mathbb{P},b_0)$, we have \begin{equation}\label{relation} [\ell_n]=[\rho_{n,1}] [ \ell_{2n}] [\rho_{n,1}]^{-1}[ \rho_{n,2}][\ell_{2n+1}][\rho_{n,2}]^{-1}\in \pi_1(\mathbb{P},b_0). \end{equation} \begin{lemma}\label{dn-inj} For every $k\geqslant 2n+1$, $d_{n\#}:\pi_1(\mathbb{H}_{n,k}^+,b_0)\rightarrow \pi_1(\mathbb{H}_{n+1,k}^+,b_0)$ is injective. 
\end{lemma} \begin{proof} For $m\in \{n,n+1\}$, the group $\pi_1(\mathbb{H}_{m,k}^+,b_0)$ is free on the set \[\{[\rho_{i,j}]\mid 1\leqslant i \leqslant m-1, 1\leqslant j \leqslant 2\}\cup\{[\ell_i]\mid m\leqslant i \leqslant k\}.\] So, the claim follows from Equation~(\ref{peel}) and the fact that $d_n\circ \rho_{i,j}=\rho_{i,j}$ for $1\leqslant i \leqslant n-1$, $1\leqslant j \leqslant 2$ and $d_n\circ \ell_i=\ell_i$ for $n+1\leqslant i\leqslant k$. \end{proof} \begin{Notation} We will denote functions $\sigma:\mathcal{L}\rightarrow \mathcal{N}$ into inverse limits \begin{eqnarray*} \displaystyle \mathcal{N} &=& \lim_{\longleftarrow} \left(\mathcal{N}_1\stackrel{\kappa_1}{\longleftarrow} \mathcal{N}_2\stackrel{\kappa_2}{\longleftarrow} \mathcal{N}_3\stackrel{\kappa_3}{\longleftarrow}\cdots\right) \\ &=&\left\{(x_n)_{n\geqslant 1}\in \prod_{n=1}^\infty \mathcal{N}_n\mid \kappa_n(x_{n+1})=x_n \mbox{ for all }n\geqslant 1\right\} \end{eqnarray*} as sequences $\sigma=(\sigma_n)_{n\geqslant 1}$ with $\sigma_n=\mu_n\circ \sigma:{\mathcal L}\rightarrow {\mathcal N}_n$, where $\mu_n:\mathcal{N}\rightarrow \mathcal{N}_n$ are the projections. \end{Notation} \begin{lemma}\label{dn-com-inj} For every $n$, $d_{n\#}:\pi_1(\mathbb{H}_{n}^+,b_0)\rightarrow \pi_1(\mathbb{H}_{n+1}^+,b_0)$ is injective. \end{lemma} \begin{proof} Let $1\not=[\alpha]\in \pi_1(\mathbb{H}_n^+,b_0)$. Since \[ \mathbb{H}^+_n=\lim_{\longleftarrow} \left( \mathbb{H}^+_{n,n-1} \stackrel{r_{n,n-1}}{\longleftarrow} \mathbb{H}^+_{n,n} \stackrel{r_{n,n}}{\longleftarrow} \mathbb{H}^+_{n,n+1} \stackrel{r_{n,n+1}}{\longleftarrow} \cdots \right)\approx \mathbb{H},\] we have that \[((r_{n,k})_\#)_{k\geqslant n-1}:\pi_1(\mathbb{H}_n^+,b_0)\rightarrow \lim_{\longleftarrow}\left( \pi_1(\mathbb{H}_{n,n-1}^+,b_0) \stackrel{(r_{n,n-1})_\#}{\longleftarrow} \pi_1(\mathbb{H}_{n,n}^+,b_0) \stackrel{(r_{n,n})_\#}{\longleftarrow} \cdots\right)\] is injective \cite{CC2000}.
Therefore, there is a $k\geqslant 2n+1$ such that $(r_{n,k})_\#([\alpha])\not=1$. The claim now follows from Lemma~\ref{dn-inj} and applying $\pi_1$ to the diagram above. \end{proof} \begin{lemma}\label{direct} $\pi_1(\mathbb{P},b_0)$ is isomorphic to the direct limit \[\lim_{\longrightarrow} \left( \pi_1(\mathbb{H}_{1}^+,b_0)\stackrel{d_{1\#}}{\longrightarrow} \pi_1(\mathbb{H}_{2}^+,b_0) \stackrel{d_{2\#}}{\longrightarrow} \pi_1(\mathbb{H}_{3}^+,b_0) \stackrel{d_{3\#}}{\longrightarrow} \cdots \right)\] with canonical homomorphisms $\iota_{n\#}:\pi_1(\mathbb{H}_n^+,b_0)\rightarrow \pi_1(\mathbb{P},b_0)$ induced by inclusion $\iota_n:\mathbb{H}_n^+\hookrightarrow \mathbb{P}$. \end{lemma} \begin{proof} For $n\in \mathbb{N}$ and $[\alpha]\in \pi_1(\mathbb{H}_n^+,b_0)$, we have $[d_n\circ \alpha]=[\alpha]\in \pi_1(\mathbb{P},b_0)$. Hence $\iota_{n+1\#}\circ d_{n\#}=\iota_{n\#}$. In order to verify the universal property, let $h_n: \pi_1(\mathbb{H}_n^+,b_0)\rightarrow G$ be homomorphisms with $h_{n+1}\circ d_{n\#}=h_n$. Let $[\alpha]\in \pi_1(\mathbb{P},b_0)$. Since $\alpha([0,1])$ is compact, there is an $n\in \mathbb{N}$ such that $\alpha([0,1])\cap P^\circ_i=\emptyset$ for all $i\geqslant n$. Put $\beta=d_{n-1}\circ d_{n-2} \circ \cdots \circ d_1\circ\alpha$. Then $\beta:([0,1],\{0,1\})\rightarrow (\mathbb{H}_n^+,b_0)$ and $\iota_{n\#}([\beta])=[\alpha]\in \pi_1(\mathbb{P},b_0)$. Moreover, $h_{n+1}([d_n\circ \beta])=h_n([\beta])$. Put $h([\alpha])=h_n([\beta])$. Once we show that $h:\pi_1(\mathbb{P},b_0)\rightarrow G$ is well-defined, it is clear that $h$ is a homomorphism with $h\circ \iota_{n\#}=h_n$ and that $h$ is the unique homomorphism with this property. To this end, suppose that $[\alpha]=[\widetilde{\alpha}]\in \pi_1(\mathbb{P},b_0)$. Let $H:[0,1]\times [0,1]\rightarrow \mathbb{P}$ be a homotopy with $H(t,0)=\alpha(t)$, $H(t,1)=\widetilde{\alpha}(t)$, and $H(0,t)=H(1,t)=b_0$ for all $t\in [0,1]$.
Since $H([0,1]\times [0,1])$ is compact, for $n\in\mathbb{N}$ sufficiently large, we have $d_{n-1}\circ \cdots \circ d_2\circ d_1\circ H([0,1]\times [0,1])\subseteq \mathbb{H}_{n}^+$ so that $[d_{n-1}\circ \cdots \circ d_2\circ d_1\circ \alpha]=[d_{n-1}\circ \cdots \circ d_2\circ d_1\circ \widetilde{\alpha}]\in \pi_1(\mathbb{H}_{n}^+,b_0)$. \end{proof} \begin{lemma}\label{HP-inj} For every $n$, $\iota_{n\#}:\pi_1(\mathbb{H}_n^+,b_0)\rightarrow \pi_1(\mathbb{P},b_0)$ is injective. \end{lemma} \begin{proof} This follows from Lemmas~\ref{dn-com-inj} and \ref{direct}. \end{proof} Recall that a group $G$ is called {\em locally free} if every finitely generated subgroup of $G$ is free. \begin{proposition}\label{lf} $\pi_1(\mathbb{P},b_0)$ is locally free. \end{proposition} \begin{proof} Since $\pi_1(\mathbb{H}^+_n,b_0)$ is isomorphic to a subgroup of an inverse limit of free groups of finite rank, it is locally free \cite[Theorem~1]{CF1959b}. Then, by Lemma~\ref{direct}, $\pi_1(\mathbb{P},b_0)$ is a direct limit of locally free groups, and thus locally free \cite[Lemma~24]{CHM}. \end{proof} \begin{remark}[Metric Hawaiian Pants $\mathbb{P}^\ast \subseteq \mathbb{R}^3$] While $\mathbb{P}$ is not metrizable (it is not first countable at $b_0$), it is naturally homotopy equivalent to the metrizable space formed by attaching the pants $P_n$ to the union $\bigcup_{i=1}^\infty Z_i\subseteq \mathbb{R}^3$ of the cylinders $Z_n=C_n\times[1-n,n-1]$ via identifying $\partial D_n$ with $C_n \times \{1-n\}$, $\partial D_{n,1}$ with $C_{2n} \times \{2n-1\}$, and $\partial D_{n,2}$ with $C_{2n+1} \times \{2n\}$. 
If we change the embedding $\mathbb{H}\subseteq \mathbb{R}^2$, this procedure yields a subspace of $\mathbb{R}^3$ that is homotopy equivalent to $\mathbb{P}$: For each $n\in \mathbb{N}$, choose a triangle $C'_n\subseteq \mathbb{R}^2$ of diameter less than $1/n$ such that for all $i\not=j$, $C'_i\cap C'_j=\{b_0\}$ and the bounded components of $\mathbb{R}^2\setminus C'_i$ and $\mathbb{R}^2\setminus C'_j$ are disjoint. Then $\bigcup_{i=1}^\infty C'_i \subseteq \mathbb{R}^2$ is homeomorphic to $\mathbb{H}$. The adjunction space resulting from attaching the pants $P_n$ to the union $\bigcup_{i=1}^\infty Z_i'\subseteq \mathbb{R}^3$ of the corresponding cylinders $Z_n'$ can now readily be implemented in $\mathbb{R}^3$ by forming the union of $\bigcup_{i=1}^\infty Z_i'$ with appropriate sets $P_n'\subseteq \mathbb{R}^3$ that are homeomorphic to $P_n$. To obtain a subspace $\mathbb{P}^\ast\subseteq\mathbb{R}^3$ which is semilocally simply connected at all but one point and homotopy equivalent to $\mathbb{P}$, slightly deform the cylinders $Z_n'$ so that their only common point of contact is the origin. Then there is a bijective homotopy equivalence $h:\mathbb{P}\rightarrow \mathbb{P}^\ast$ such that $h(C_n)=C_n'\times\{0\}$ for all $n\in \mathbb{N}$ with homotopy inverse $g:\mathbb{P}^\ast\rightarrow \mathbb{P}$ collapsing the cylinders such that $g|_{P'_n}: P_n'\rightarrow P_n$ is a homeomorphism, $g(Z_n')= C_n$, $g|_{C_n'\times\{0\}}=(h|_{C_n})^{-1}$, and $g(h(B_{n,j}))=B_{n,j}$ for all $n\in \mathbb{N}$ and $j\in \{1,2\}$. The resulting deformation retraction of the cylinders allows the proofs in this paper for those properties of $\mathbb{P}$ that are not homotopy invariant (such as the 1-UV$_0$ property and the discrete monodromy property) to go through for $\mathbb{P}^\ast$ with only minor changes. However, working with $\mathbb{P}$ is conceptually simpler. 
\end{remark} \section{Non-injectivity into the first \v{C}ech homotopy group}\label{sec:notinj} Let $X$ be a path-connected topological space and $x_0\in X$. For a point $x\in X$, let ${\mathcal T}_x$ denote the set of all open neighborhoods of $x$ in $X$. For $U\in {\mathcal T}_x$ and a path $\alpha$ in $X$ from $x_0$ to $x$, consider the subgroup \[\pi(\alpha,U)=\{[\alpha\cdot \delta\cdot \alpha^-]\mid \delta \mbox{ a loop in } U\}\leqslant \pi_1(X,x_0).\] Let $\pi(x,U)$ denote the normal closure of $\pi(\alpha,U)$ in $\pi_1(X,x_0)$, i.e., the subgroup generated by all $\pi(\beta,U)$, with $\beta$ a path from $x_0$ to $x$: \[\pi(x,U)=\left<\pi(\alpha,U)\mid \alpha(1)=x\right> \leqslant \pi_1(X,x_0).\] \begin{remark}\label{holes} Suppose $U\in {\mathcal T}_x$ is path connected and let $[\gamma]\in \pi_1(X,x_0)$. Then $[\gamma]\in \pi(x,U)$ if and only if there is a map $g:D\setminus(D^\circ_1\cup D^\circ_2\cup \cdots\cup D^\circ_j)\rightarrow X$ from a ``disk with holes'' to $X$ with $g|_{\partial D}=\gamma$ and $g(\partial D_i)\subseteq U$ for all $i\in\{1,2,\dots,j\}$. (Here, $D_1, D_2, \dots, D_j$ are pairwise disjoint disks in the interior $D^\circ$ of the disk $D$.) \end{remark} Let $Cov(X)$ denote the set of all open covers of $X$. For ${\mathcal U}\in Cov(X)$, let $\pi({\mathcal U},x_0)$ denote the subgroup of $\pi_1(X,x_0)$ generated by all $\pi(x,U)$ with $U\in {\mathcal U}$ and $x\in U$. \begin{definition}[Spanier group \cite{Spanier}] The subgroup \[\pi^s(X,x_0)=\bigcap_{{\mathcal U}\in Cov(X)} \pi({\mathcal U},x_0)=\bigcap_{{\mathcal U}\in Cov(X)} \left<\pi(x,U)\mid x\in U\in {\mathcal U}\right>\] of the fundamental group $\pi_1(X,x_0)$ is called the {\em Spanier group} of $X$. 
\end{definition} \begin{remark}\label{classical} If $X$ is locally path connected, then for a given $H\leqslant \pi_1(X,x_0)$, there is a (classical) covering projection $p:(\widetilde{X},\widetilde{x})\rightarrow (X,x_0)$ with $p_\#\pi_1(\widetilde{X},\widetilde{x})=H$ if and only if there is a ${\mathcal U}\in Cov(X)$ such that $\pi({\mathcal U},x_0)\leqslant H$ \cite{Spanier}. \end{remark} \begin{remark}\label{kernel} The Spanier group $\pi^s(X,x_0)$ is contained in the kernel of the natural homomorphism $\Psi_X:\pi_1(X,x_0)\rightarrow \check{\pi}_1(X,x_0)$ to the first \v{C}ech homotopy group \cite{FZ2007}. If $X$ is locally path connected and metrizable, then $\pi^s(X,x_0)$ equals this kernel \cite{BFa2}. \end{remark} Let $\iota=\iota_1:\mathbb{H}\hookrightarrow \mathbb{P}$ denote inclusion. \begin{lemma}\label{intoS} $\iota_\#\pi_1(\mathbb{H},b_0)\leqslant \pi^s(\mathbb{P},b_0)$. \end{lemma} \begin{proof} Let $[\alpha]\in \pi_1(\mathbb{H},b_0)$ and $\mathcal{U}\in Cov(\mathbb{P})$. Choose $U\in {\mathcal U}$ with $b_0\in U$ and fix $n\in \mathbb{N}$ with $\iota(\mathbb{H}_n)\subseteq U$. Express $[\alpha]=[\gamma_1][\delta_1][\gamma_2][\delta_2]\cdots[\gamma_m][\delta_m]$ with loops $\gamma_j$ in $C_1\cup C_2\cup \cdots \cup C_{n-1}$ and loops $\delta_j$ in $\mathbb{H}_n$. Then $\iota_\#([\delta_j])\in \pi(b_0,U)$ for every $j$. Also, for every $j$, we have $\iota_\#([\gamma_j])\in \left<[\ell_1],[\ell_2],\dots,[\ell_{n-1}]\right>\leqslant \pi_1(\mathbb{P},b_0)$, so that repeated application of Equation~(\ref{relation}) yields $\iota_\#([\gamma_j])\in \pi(b_0,U)$. Hence, $\iota_\#([\alpha])\in \pi^s(\mathbb{P},b_0)$. \end{proof} \begin{theorem}\label{thm:notinj} $\Psi_\mathbb{P}:\pi_1(\mathbb{P},b_0)\rightarrow \check{\pi}_1(\mathbb{P},b_0)$ is not injective. 
\end{theorem} \begin{proof} Combining Lemma~\ref{HP-inj} (with $n=1$), Lemma~\ref{intoS}, and Remark~\ref{kernel}, we obtain $1\not=\iota_\#\pi_1(\mathbb{H},b_0)\leqslant \pi^s(\mathbb{P},b_0)\leqslant \ker \Psi_\mathbb{P}$. \end{proof} Recall that if the fundamental group of a Peano continuum does not (canonically) inject into the first \v{C}ech homotopy group, then it is not residually n-slender \cite{EF}. However, since $\mathbb{P}$ is not a Peano continuum, we verify this separately: \begin{definition}[Noncommutatively slender \cite{E1992}] A group $G$ is called {\em noncommutatively slender} ({\em n-slender} for short) if for every homomorphism $h:\pi_1(\mathbb{H},b_0)\rightarrow G$, there is a $k\in \mathbb{N}$ such that $h([\alpha])=1$ for all loops $\alpha:([0,1],\{0,1\})\rightarrow (\mathbb{H}_k,b_0)$. \end{definition} Recall that a group $G$ is called {\em residually n-slender} (respectively {\em residually free}) if for every $1\not=g\in G$ there is an n-slender (respectively free) group $S$ and a homomorphism $h:G\rightarrow S$ such that $h(g)\not=1$. \begin{proposition}\label{notres} $\pi_1(\mathbb{P},b_0)$ is not residually n-slender. \end{proposition} \begin{proof} Consider $1\not=[\ell_1]\in \pi_1(\mathbb{P},b_0)$. Let $h:\pi_1(\mathbb{P},b_0) \rightarrow S$ be a homomorphism to an n-slender group $S$. It suffices to show that $h([\ell_1])=1$. If we precompose $h$ with the homomorphism $\iota_\#:\pi_1(\mathbb{H},b_0)\rightarrow \pi_1(\mathbb{P},b_0)$, induced by inclusion $\iota:\mathbb{H}\hookrightarrow \mathbb{P}$, and note that $S$ is n-slender, we see that $h([\ell_k])=h\circ \iota_\#([\ell_k])=1$ for all but finitely many $k$. However, by Equation~(\ref{relation}), we have $[\ell_n]=[\rho_{n,1}][\ell_{2n}][\rho_{n,1}]^{-1}[\rho_{n,2}][\ell_{2n+1}][\rho_{n,2}]^{-1}$ in $\pi_1(\mathbb{P},b_0)$ for all $n$. Hence, by downward induction on $n$, $h([\ell_n])=1$ for all $n$. \end{proof} \begin{remark}\label{NRF} Every free group is n-slender \cite{E1992}.
So, $\pi_1(\mathbb{P},b_0)$ is not residually free. \end{remark} \section{The strongly homotopically Hausdorff property}\label{sec:SHH} \begin{definition}[Strongly homotopically Hausdorff \cite{CMRZZ}] \label{SHH} A path-connected space $X$ is called {\em strongly homotopically Hausdorff at} $x\in X$ if for every essential loop $\gamma$ in $X$, there is an open neighborhood $U$ of $x$ in $X$ such that $\gamma$ cannot be freely homotoped into $U$, that is, if \[\bigcap_{U\in {\mathcal T}_x} \bigcup_{\alpha(1)\in U} \pi(\alpha,U)=\{1\}.\] (If $U$ is path connected, we may replace ``$\alpha(1)\in U$'' by ``$\alpha(1)=x$''.) The space $X$ is called {\em strongly homotopically Hausdorff} if it is strongly homotopically Hausdorff at every point $x\in X$. \end{definition} \begin{remark}\label{Z'} If the natural homomorphism $\pi_1(X,x) \hookrightarrow \check{\pi}_1(X,x)$ is injective, then $X$ is {\em strongly homotopically Hausdorff} \cite{FRVZ}. However, the converse does not hold in general \cite[Example $Z'$]{FRVZ}. \end{remark} \begin{remark}[The Hawaiian Mapping Torus] Let $f:\mathbb{H}\rightarrow \mathbb{H}$ be the map given by $f\circ\ell_{n}=\ell_{n+1}$, $n\in\mathbb{N}$. The \textit{Hawaiian Mapping Torus} is the space $M_f=\mathbb{H}\times [0,1]/\sim$, where $(x,0)\sim (f(x),1)$ for all $x\in\mathbb{H}$. Identifying $\mathbb{H}$ with the image of $\mathbb{H}\times \{0\}$ in $M_f$, the inclusion $i:\mathbb{H}\hookrightarrow M_f$ induces an injection $i_\#:\pi_1(\mathbb{H},b_0)\rightarrow \pi_1(M_f,b_0)$. Consider the loop $\rho:([0,1],\{0,1\})\rightarrow (M_f,b_0)$ where $\rho(s)$ is the image of $(b_0,s)$ in $M_f$ and put $t=[\rho]\in \pi_1(M_f,b_0)$. From two applications of van Kampen's Theorem, we get that $\pi_1(M_f,b_0)$ is isomorphic to the quotient of $\pi_1(\mathbb{H},b_0)\ast \left< \,t\, \right>$ by the relations $g=t f_{\#}(g)t^{-1}$, $g\in\pi_1(\mathbb{H},b_0)$ (see \cite{SW}). 
Iterating these relations, we see that each $[\gamma]\in \pi_1(\mathbb{H},b_0)$ factors in $\pi_1(M_f,b_0)$, for every $n\in\mathbb{N}$, as a conjugate $[\rho]^n[f^n\circ\gamma][\rho]^{-n}$, where the diameter of the loop $f^n\circ\gamma$ tends to $0$ as $n\rightarrow\infty$. While $M_f$ is a Peano continuum that embeds into $\mathbb{R}^3$ and has many of the same properties as $\mathbb{P}$, it is not strongly homotopically Hausdorff, since $\ell_1$ is freely homotopic to $\ell_n$ for all $n\in\mathbb{N}$. Our detailed treatment of $\mathbb{P}$ is motivated by the fact that $\pi_1(\mathbb{P},b_0)$ exhibits a somewhat more intricate algebraic phenomenon: in order to write an element $g\in\iota_\#\pi_1(\mathbb{H},b_0)\leqslant \pi_1(\mathbb{P},b_0)$ as a product of conjugates of homotopy classes of arbitrarily small loops (as in the proof of Lemma~\ref{intoS}), it takes an exponentially growing number of distinct conjugating elements, namely products of $[\rho_{i,j}]$. (For example, iterating Equation~(\ref{relation}) twice expresses $[\ell_1]$ as a product of four conjugates of $[\ell_4]$, $[\ell_5]$, $[\ell_6]$, $[\ell_7]$ with the four distinct conjugators $[\rho_{1,1}][\rho_{2,1}]$, $[\rho_{1,1}][\rho_{2,2}]$, $[\rho_{1,2}][\rho_{3,1}]$, $[\rho_{1,2}][\rho_{3,2}]$; after $k$ iterations, there are $2^k$ conjugators, each a product of $k$ distinct elements $[\rho_{i,j}]$.) \end{remark} A path $\alpha:[a,b]\rightarrow X$ is called {\em reduced} if for every $a\leqslant s<t\leqslant b$ with $\alpha(s)=\alpha(t)$, the loop $\alpha|_{[s,t]}$ is not null-homotopic in $X$. For a one-dimensional metric space $X$, every path $\alpha:[a,b]\rightarrow X$ is homotopic (relative to endpoints) within $\alpha([a,b])$ to either a constant path or a reduced path, which is unique up to reparametrization \cite{E2002}. A path $\alpha:[a,b]\rightarrow X$ is called {\em cyclically reduced} if $\alpha\cdot \alpha$ is reduced.
Then there exist $s,t\in [0,1]$ such that $\lambda|_{[0,t]}\circ \phi=\gamma|_{[0,s]}$, for some increasing homeomorphism $\phi:[0,s]\rightarrow [0,t]$, and $\lambda([t,1])\subseteq \delta([0,1])$. \end{lemma} \begin{theorem}\label{SHHthm} $\mathbb{P}$ is strongly homotopically Hausdorff. \end{theorem} \begin{proof}For $n\in \mathbb{N}$, we define $U_n=\mathbb{H}\cap\{(x,y)\in \mathbb{R}^2\mid x<\frac{2n+1}{n(n+1)}\}$. Since $\operatorname{diam}(C_{n+1})=\frac{2}{n+1}<\frac{2n+1}{n(n+1)}<\frac{2}{n}=\operatorname{diam}(C_n)$, we see that the sequence $U_1\supseteq U_2\supseteq U_3 \supseteq \cdots$ forms a neighborhood basis for $\mathbb{H}$ at $b_0$. For every pair $n,k\in \mathbb{N}$, $f_k^{-1}(U_n)$ has three components $L^0_{k,n}$, $L^1_{k,n}$, and $L^2_{k,n}$, each of which is an open arc or a circle (there are four cases based on the position of $n$ relative to $k$, $2k$, and $2k+1$), such that $a_k\in L^0_{k,n}\subseteq \partial D_k$, $b_k\in L^1_{k,n}\subseteq \partial D_{k,1}$ and $c_k\in L^2_{k,n}\subseteq \partial D_{k,2}$. Since, for a given $k$, $L^i_{k,n+1}\subseteq L^i_{k,n}$, we may choose three pairwise disjoint open neighborhoods $N^0_{k,n}$, $N^1_{k,n}$, $N^2_{k,n}$ of $L^0_{k,n}$, $L^1_{k,n}$, $L^2_{k,n}$ in $P_k$, respectively, such that $N^i_{k,n+1}\subseteq N^i_{k,n}$, $N^i_{k,n}\cap \partial P_k=L^i_{k,n}$, and $N^i_{k,n}$ deformation retracts onto $L^i_{k,n}$. Define $V_{k,n}= N^0_{k,n}\cup N^1_{k,n}\cup N^2_{k,n}$. (See Figure~\ref{collar}.) Put $V_n=\bigcup_{k\in\mathbb{N}} q(V_{k,n})$, where $q:\mathbb{H}\sqcup \coprod_{k\in\mathbb{N}} P_k\rightarrow \mathbb{P}$ denotes the quotient map (so that $q|_{\partial P_k}=f_k$). Then $V_n$ is an open neighborhood of $b_0$ in $\mathbb{P}$, $V_{n+1}\subseteq V_n$, and $V_n$ deformation retracts onto $U_n$. (Note that $V_1\supseteq V_2\supseteq V_3\supseteq \cdots$ does not form a neighborhood basis for $\mathbb{P}$ at $b_0$.) \begin{figure} \caption{$V_{k,n}$ (gray region) with $k<2k\leqslant n<2k+1$} \label{collar} \end{figure} Now suppose, to the contrary, that $\mathbb{P}$ is not strongly homotopically Hausdorff.
Then $\mathbb{P}$ is not strongly homotopically Hausdorff at $b_0$, since $\mathbb{P}\setminus\{b_0\}$ is locally contractible. Hence, there is a $1\not=[\gamma]\in\pi_1(\mathbb{P},b_0)$ such that for every $n\in \mathbb{N}$, there is a path $\lambda_n:[0,1]\rightarrow \mathbb{P}$ from $\lambda_n(0)=b_0$ to $\lambda_n(1)\in V_n$ with $[\gamma]\in \pi(\lambda_n,V_n)$. Since each $V_n$ is path connected, we may assume that $\lambda_n(1)=b_0$. Choose loops $\delta_n$ in $V_n$ with $[\gamma]=[\lambda_n][\delta_n][\lambda_n]^{-1}\in \pi_1(\mathbb{P},b_0)$. Since $V_n$ deformation retracts onto $U_n$, we may assume that $\delta_n$ lies in $U_n$. We may also assume that each $\delta_n$ is {\em reduced} in $\mathbb{H}$. This implies that $\delta_n$ lies in $\mathbb{H}_{n+1}$, because $C_m$ is not fully contained in $U_n$ if $m\leqslant n$. Replacing each $\lambda_n$ by $\lambda_1^-\cdot \lambda_n$, we may assume that $\gamma=\delta_1$, which lies in $U_1$. There is a maximal $s_0\in [0,1]$ such that $\gamma|_{[0,s_0]}$ is a reparametrization of $(\gamma|_{[t_0,1]})^-$ for some $t_0\in (s_0,1]$. Then $\gamma(s_0)=\gamma(t_0)=b_0$, $\gamma|_{[s_0,t_0]}$ is cyclically reduced, and $[\gamma]=[\gamma|_{[0,s_0]}][\gamma|_{[s_0,t_0]}][\gamma|_{[0,s_0]}]^{-1}$. Replacing each $\lambda_n$ by $\gamma|_{[0,s_0]}^-\cdot \lambda_n$ and replacing $\gamma$ by $\gamma|_{[s_0,t_0]}$, we may assume that $\gamma$ is {\em cyclically} reduced. Let $m$ be minimal such that $\gamma([0,1])$ intersects $C_m\setminus \{b_0\}$. Then $\gamma$ fully traverses $C_m$ at least once, and $\gamma$ lies in $\mathbb{H}_m$. Let $F$ be a homotopy from $\gamma$ to $\lambda_{m}\cdot \delta_{m}\cdot \lambda_{m}^-$ (relative to endpoints) within $\mathbb{P}$. Choose $n>m$ such that the image of $F$ misses all $P^\circ_i$ with $i>n$.
Then $F'=d_n\circ d_{n-1}\circ \cdots \circ d_1\circ F$ is a homotopy from $\gamma'=d_n\circ d_{n-1}\circ \cdots \circ d_1\circ \gamma$ to $\lambda\cdot\delta\cdot \lambda^-$ (relative to endpoints) within $\mathbb{H}^+_{n+1}$, where $\lambda=d_n\circ d_{n-1}\circ \cdots \circ d_1\circ \lambda_{m}$ and $\delta=d_n\circ d_{n-1}\circ \cdots \circ d_1\circ \delta_{m}$. On one hand, $\gamma'$ is a cyclically reduced loop in $\mathbb{H}^+_{n+1}$ which traverses $B_{m,1}$. On the other hand, the image of $\delta$ is disjoint from $B_{m,1}\setminus\{b_0\}$. (In particular, $\lambda$ is not null-homotopic in $\mathbb{H}^+_{n+1}$.) Therefore, using reduced representatives $\lambda'$ and $\delta'$ of $[\lambda]$ and $[\delta]$, respectively, we have $s,t>0$ in Lemma~\ref{cancellinglemma}. Applying Lemma~\ref{cancellinglemma} to $\gamma'^-$, as well, we see that $\gamma'$ is not cyclically reduced; a contradiction. \end{proof} \section{An inverse limit of finitely generated free monoids}\label{wordsequences} Let ${\mathcal W}^+_{n,k}$ denote the set of finite words (including the empty word) over the alphabet ${\mathcal A}_{n,k}=\{\rho^{\pm 1}_{i,j}\mid 1\leqslant i \leqslant n-1, 1\leqslant j\leqslant 2\}\cup\{\ell_i^{\pm 1}\mid n\leqslant i \leqslant k\}$. Then ${\mathcal W}^+_{n,k}$ forms a free monoid on the set ${\mathcal A}_{n,k}$ under concatenation. The deletion of a subword of the form $\rho_{i,j}^{+1}\rho_{i,j}^{-1}$, $\rho_{i,j}^{-1}\rho_{i,j}^{+1}$, $\ell_i^{+1}\ell_i^{-1}$, or $\ell_i^{-1}\ell_i^{+1}$ from an element of ${\mathcal W}^+_{n,k}$ is called a {\em cancellation}. A word is called {\em reduced} if it does not allow for any cancellation. Recall that, starting with a fixed element of ${\mathcal W}^+_{n,k}$, every maximal sequence of cancellations results in the same reduced word. Let $F^+_{n,k}\subseteq {\mathcal W}^+_{n,k}$ be the subset of all reduced words. 
Then $F^+_{n,k}$ forms a free group on the set $\{\rho_{i,j}\mid 1\leqslant i \leqslant n-1,1\leqslant j\leqslant 2\}\cup\{\ell_i\mid n\leqslant i \leqslant k\}$ under concatenation, followed by maximal cancellation. Moreover, we have isomorphisms $h_{n,k}:\pi_1(\mathbb{H}^+_{n,k},b_0)\rightarrow F^+_{n,k}$, mapping $[\rho_{i,j}]\mapsto \rho_{i,j}$ and $[\ell_i]\mapsto \ell_i$. Let $R_{n,k}: {\mathcal W}^+_{n,k+1}\rightarrow {\mathcal W}^+_{n,k}$ denote the function that deletes every occurrence of the letters $\ell_{k+1}$ and $\ell_{k+1}^{-1}$ from a word. Let $S_{n,k}:{\mathcal W}^+_{n,k}\rightarrow F^+_{n,k}$ be the function that maximally cancels words and put $T_{n,k}=S_{n,k}\circ R_{n,k}|_{F^+_{n,k+1}}:F^+_{n,k+1}\rightarrow F^+_{n,k}$. Then $R_{n,k}$, $S_{n,k}$, and $T_{n,k}$ are monoid/group homomorphisms. Define $g_{n,k}=h_{n,k}\circ r_{n,k\#}: \pi_1(\mathbb{H}^+_n,b_0)\rightarrow F^+_{n,k}\subseteq {\mathcal W}^+_{n,k}$. Since $T_{n,k}\circ g_{n,k+1}=g_{n,k}:\pi_1(\mathbb{H}^+_n,b_0)\rightarrow F^+_{n,k}$ and $T_{n,k}\circ h_{n,k+1}=h_{n,k}\circ r_{n,k\#}:\pi_1(\mathbb{H}^+_{n,k+1},b_0)\rightarrow F^+_{n,k}$, we have, for each $n$, as in the proof of Lemma~\ref{dn-com-inj}, an injective homomorphism into an inverse limit $F^+_n$ of free groups: \[g_n=(g_{n,k})_{k\geqslant n-1}:\pi_1(\mathbb{H}^+_n,b_0)\rightarrow F^+_n=\lim_{\longleftarrow} \left( F^+_{n,n-1}\stackrel{T_{n,n-1}}{\longleftarrow} F^+_{n,n}\stackrel{T_{n,n}}{\longleftarrow}F^+_{n,n+1}\stackrel{T_{n,n+1}}{\longleftarrow}\cdots\right).\] (Note that $g_n$ is not surjective and that $F^+_n$ is not free.) \begin{remark}\label{LEC} The image of $g_n$ equals the locally eventually constant sequences, where $(y_{n,k})_{k\geqslant n-1}\in F^+_n$ is called {\em locally eventually constant} if for every $m\geqslant n-1$, the sequence $(R_{n,m}\circ R_{n,m+1} \circ \cdots \circ R_{n,k-1}(y_{n,k}))_{k\geqslant m}$ is eventually constant \cite{MM}. 
\end{remark} For every $n\in\mathbb{N}$, $x\in \pi_1(\mathbb{H}^+_n,b_0)$, and $m\geqslant n-1$, the sequence $(R_{n,m}\circ R_{n,m+1} \circ \cdots \circ R_{n,k-1}(g_{n,k}(x)))_{k\geqslant m}$ of (unreduced) words is eventually constant, so that we may define functions $\omega_{n,m}:\pi_1(\mathbb{H}^+_n,b_0)\rightarrow \mathcal{W}^+_{n,m}$ by \[\omega_{n,m}(x)=R_{n,m}\circ R_{n,m+1} \circ \cdots \circ R_{n,k-1}(g_{n,k}(x)),\] for sufficiently large $k$, and obtain commutative diagrams \[ \xymatrix{ & {\mathcal W}^+_n \ar[d]^{S_n} & \hspace{-30pt} = \displaystyle \lim_{\longleftarrow} ( {\mathcal W}^+_{n,n-1} \ar[d]^{S_{n,n-1}} & \ar[l]_<(0.2){R_{n,n-1}} \ar[d]^{S_{n,n}} {\mathcal W}^+_{n,n} & \ar[l]_<(0.2){R_{n,n}} \ar[d]^{S_{n,n+1}} {\mathcal W}^+_{n,n+1} & \ar[l]_<(0.2){R_{n,n+1}} \cdots) \\ \pi_1(\mathbb{H}^+_n,b_0) \ar@{^{(}->}[ru]^{\omega_n} \ar@{^{(}->}[r]^<(0.3){g_n} & F^+_n & \hspace{-30pt} = \displaystyle \lim_{\longleftarrow} ( F^+_{n,n-1} & \ar[l]_<(0.2){T_{n,n-1}} F^+_{n,n} & \ar[l]_<(0.2){T_{n,n}} F^+_{n,n+1} & \ar[l]_<(0.2){T_{n,n+1}} \cdots) } \] with injective functions $\omega_n=(\omega_{n,k})_{k\geqslant n-1}$ that output {\em stabilized} word sequences. (See \cite{DTW} or \cite{FZ2013b}, for example.) For $k\geqslant 2n+1$, let $D_{n}: {\mathcal W}^+_{n,k} \rightarrow {\mathcal W}^+_{n+1,k}$ be the monomorphism that replaces every occurrence of the letter $\ell_n$ by $\rho_{n,1}\ell_{2n} \rho_{n,1}^{-1} \rho_{n,2} \ell_{2n+1} \rho_{n,2}^{-1}$ and every occurrence of the letter $\ell_n^{-1}$ by $\rho_{n,2} \ell_{2n+1}^{-1} \rho_{n,2}^{-1}\rho_{n,1}\ell_{2n}^{-1} \rho_{n,1}^{-1}$. 
Then, for every $k\geqslant 2n+1$, we obtain the following commutative diagram: \[ \xymatrix@=8pt{ &&\pi_1(\mathbb{H}^+_{n,k+1},b_0)\ar[rrr]^{d_{n\#}} \ar[ddd]^{r_{n,k\#}} \ar[dl]_<(0.3){h_{n,k+1}} &&&\pi_1(\mathbb{H}^+_{n+1,k+1},b_0) \ar[ddd]^{r_{n+1,k\#}} \ar[dl]_<(0.3){h_{n+1,k+1}}\\ &F^+_{n,k+1}\ar[rrr]^{D_n} \ar[ddd]^{T_{n,k}}&&&F^+_{n+1,k+1}\ar[ddd]^{T_{n+1,k}} &\\ {\mathcal W}^+_{n,k+1}\ar[rrr]^{D_n}\ar[ddd]^{R_{n,k}} \ar[ru]^<(0.0){S_{n,k+1}}&&&{\mathcal W}^+_{n+1,k+1}\ar[ddd]^{R_{n+1,k}} \ar[ru]^<(0.0){S_{n+1,k+1}}&&\\ &&\pi_1(\mathbb{H}^+_{n,k},b_0)\ar[rrr]^{d_{n\#}} \ar[dl]_<(0.3){h_{n,k}}&&&\pi_1(\mathbb{H}^+_{n+1,k},b_0) \ar[dl]^{h_{n+1,k}}\\ &F^+_{n,k}\ar[rrr]^{D_n} &&&F^+_{n+1,k}&\\ {\mathcal W}^+_{n,k}\ar[rrr]^{D_n} \ar[ru]^{S_{n,k}}&&&{\mathcal W}^+_{n+1,k} \ar[ru]_{S_{n+1,k}}&& } \] \noindent Note: For every $\omega\in F^+_{n,k}$, the word $D_n(\omega)$ is reduced. \begin{remark} In view of the above, the direct limit structure of Lemma~\ref{direct} suggests the possibility of labelling the elements of $\pi_1(\mathbb{P}, b_0)$ using sequences of finite words over finite alphabets that gradually exclude all letters $\ell_n^{\pm 1}$ in favor of only using the conjugating letters $\rho_{n,j}^{\pm 1}$. Since such a shift causes conjugating pairs to become adjacent in words at later levels, we use monoid structures to prevent their cancellation. This, in turn, requires us to work with $D_{n}: {\mathcal W}^+_{n,k} \rightarrow {\mathcal W}^+_{n+1,k}$ in what follows, at a level just deep enough to stabilize the appropriate word sequences. \end{remark} Recall that ${\mathcal W}^+_{n+1,n}$ is the set of finite words over $\{\rho_{i,j}^{\pm 1}\mid 1\leqslant i \leqslant n, 1\leqslant j \leqslant 2\}$. In particular, the set ${\mathcal W}^+_{1,0}$ contains only one element: the empty word. 
Let $E_{n-1}:{\mathcal W}^+_{n+1,n}\rightarrow {\mathcal W}^+_{n,n-1}$ denote the epimorphism deleting every occurrence of the letters $\rho^{\pm 1}_{n,1}$ and $\rho^{\pm 1}_{n,2}$. Then, for $k\geqslant 2n+1$, we obtain commutative trapezoids: \begin{equation}\label{trapezoid} \xymatrix{ {\mathcal W}^+_{1,k} \ar[r]^{D_1} \ar[d]_{R_{1,k-1}}& {\mathcal W}^+_{2,k} \ar[r]^{D_2} \ar[d]_{R_{2,k-1}}& \cdots \ar[r]^{D_{n-1}}& {\mathcal W}^+_{n,k} \ar[r]^{D_n} \ar[d]_{R_{n,k-1}}& {\mathcal W}^+_{n+1,k} \ar[d]_{R_{n+1,k-1}}\\ \vdots \ar[d]_{R_{1,n}} & \vdots \ar[d]_{R_{2,n}}& & \vdots\ar[d]_{R_{n,n}} & \vdots\ar[d]_{R_{n+1,n}} \\ {\mathcal W}^+_{1,n} \ar[d]_{R_{1,n-1}}& {\mathcal W}^+_{2,n} \ar[d]_{R_{2,n-1}}& & {\mathcal W}^+_{n,n} \ar[d]_{R_{n,n-1}}& {\mathcal W}^+_{n+1,n} \ar[dl]^{E_{n-1}}\\ & & & {\mathcal W}^+_{n,n-1} \ar[dl]^{E_{n-2}} \\ \vdots\ar[d]_{R_{1,1}} &\vdots \ar[d]_{R_{2,1}}& \iddots \ar[dl]^{E_1}\\ {\mathcal W}^+_{1,1} \ar[d]_{R_{1,0}}& {\mathcal W}^+_{2,1} \ar[dl]^{E_0} \\ {\mathcal W}^+_{1,0} } \end{equation} \begin{theorem}\label{inj} There is a well-defined injective function \[\chi=(\chi_n)_{n\in \mathbb{N}}:\pi_1(\mathbb{P},b_0)\hookrightarrow \lim_{\longleftarrow}\left( {\mathcal W}^+_{2,1}\stackrel{E_1}{\longleftarrow} {\mathcal W}^+_{3,2}\stackrel{E_2}{\longleftarrow} {\mathcal W}^+_{4,3}\stackrel{E_3}{\longleftarrow} \cdots \right)\] defined as follows: for a given $[\alpha]\in \pi_1(\mathbb{P},b_0)$ and sufficiently large $n\geqslant 2$, choose $[\beta]\in\pi_1(\mathbb{H}_n^+,b_0)$ with $\iota_{n\#}([\beta])=[\alpha]$ and put $\chi_{n-1}([\alpha])=\omega_{n,n-1}([\beta])\in {\mathcal W}^+_{n,n-1}$. \end{theorem} \begin{proof} First we show that $\chi$ is well-defined. 
By Lemmas~\ref{direct} and \ref{HP-inj} it suffices to show that for any $[\beta]\in\pi_1(\mathbb{H}_n^+,b_0)$, we have $E_{n-1}(\omega_{n+1,n}(d_{n\#}([\beta])))=\omega_{n,n-1}([\beta])$, making the following diagram commute: \[\xymatrix{ \mathcal{W}_{2,1}^{+} & \mathcal{W}_{3,2}^{+} \ar[l]_-{E_1} & \cdots \ar[l]_-{E_2} & \mathcal{W}_{n,n-1}^{+} \ar[l]_-{E_{n-2}} & \mathcal{W}_{n+1,n}^{+} \ar[l]_-{E_{n-1}} & \cdots \ar[l] \\ \pi_1(\mathbb{H}_{2}^{+},b_0) \ar@{^{(}->}[r]_-{d_{2\#}} \ar[u]_-{\omega_{2,1}} & \pi_1(\mathbb{H}_{3}^{+},b_0) \ar@{^{(}->}[r]_-{d_{3\#}} \ar[u]_-{\omega_{3,2}} & \cdots \ar@{^{(}->}[r]_-{d_{n-1\#}} & \pi_1(\mathbb{H}_{n}^{+},b_0) \ar@{^{(}->}[r]_-{d_{n\#}} \ar[u]_-{\omega_{n,n-1}} & \pi_1(\mathbb{H}_{n+1}^{+},b_0) \ar[u]_-{\omega_{n+1,n}} \ar@{^{(}->}[r] & \cdots \\ }\] (Recall that the underlying set of a direct limit of groups is the direct limit of the underlying sets.) To this end, put $\beta'=d_{n}\circ \beta$. Then $[\beta']\in\pi_1(\mathbb{H}_{n+1}^+,b_0)$. 
By definition of $\omega_{n,n-1}$ and $\omega_{n+1,n}$, respectively, for sufficiently large $k\geqslant 2n+1$, we have (cf.\@ Remark~\ref{LEC}) \[\omega_{n,n-1}([\beta])= R_{n,n-1}\circ R_{n,n}\circ \cdots \circ R_{n,k-1}\circ g_{n,k}([\beta])\] and \[\omega_{n+1,n}([\beta'])= R_{n+1,n}\circ R_{n+1,n+1} \circ \cdots \circ R_{n+1,k-1}\circ g_{n+1,k}([\beta']).\] Noting that $g_{n+1,k}([\beta'])= h_{n+1,k}\circ r_{n+1,k\#}([\beta'])=h_{n+1,k}\circ r_{n+1,k\#}\circ d_{n\#}([\beta]) =h_{n+1,k}\circ d_{n\#}\circ r_{n,k\#}([\beta])=D_{n}\circ h_{n,k}\circ r_{n,k\#}([\beta]) =D_n\circ g_{n,k}([\beta])$ and that \[E_{n-1}\circ R_{n+1,n}\circ R_{n+1,n+1} \circ \cdots \circ R_{n+1,k-1}\circ D_n=R_{n,n-1}\circ R_{n,n}\circ \cdots \circ R_{n,k-1}\] (see Diagram~(\ref{trapezoid})) we obtain the desired equality: \begin{eqnarray*} E_{n-1}(\omega_{n+1,n}([\beta']))&=&E_{n-1}\circ R_{n+1,n}\circ R_{n+1,n+1} \circ \cdots \circ R_{n+1,k-1}\circ g_{n+1,k}([\beta'])\\ &=&R_{n,n-1}\circ R_{n,n}\circ \cdots \circ R_{n,k-1}\circ g_{n,k}([\beta])\\ &=&\omega_{n,n-1}([\beta]). \end{eqnarray*} Now we show that $\chi$ is injective. Suppose $[\alpha^{(1)}]\not=[\alpha^{(2)}]\in \pi_1(\mathbb{P},b_0)$. Choose $n\geqslant 2$ sufficiently large (as in the proof of Lemma~\ref{direct}) so that $\beta^{(s)}([0,1])\subseteq \mathbb{H}_{n}^+$, where $\beta^{(s)}=d_{n-1}\circ d_{n-2} \circ \cdots \circ d_1\circ\alpha^{(s)}$ for $s\in\{1,2\}$. Then $\iota_{n\#}([\beta^{(s)}])=[\alpha^{(s)}]$ and $[\beta^{(1)}]\not=[\beta^{(2)}]\in \pi_1(\mathbb{H}^+_n,b_0)$. Hence, there is an $m\geqslant n$ such that $\omega_{n,m-1}([\beta^{(1)}])\not=\omega_{n,m-1}([\beta^{(2)}])$. We may assume that $m$ is even. Put $\gamma^{(s)}=d_{m-1}\circ d_{m-2}\circ \cdots \circ d_n\circ \beta^{(s)}$. 
Choose $k\geqslant 2(m-1)+1$ sufficiently large, such that for $s\in \{1,2\}$, we have: \[\xymatrix{ g_{n,k}([\beta^{(s)}]) \ar@{|->}_{R_{n,m-1}\circ \cdots \circ R_{n,k-1}}[d] \ar@{|->}^{D_{m-1}\circ D_{m-2}\circ \cdots \circ D_n}[rr] & & g_{m,k}([\gamma^{(s)}]) \ar@{|->}^{R_{m,m-1}\circ \cdots \circ R_{m,k-1}}[d] \\ \omega_{n,m-1}([\beta^{(s)}]) & & \hspace{.8in} \omega_{m,m-1}([\gamma^{(s)}])=\chi_{m-1}([\alpha^{(s)}]) }\] For $n\leqslant j\leqslant m-1$, let $\overline{D}_{j,m-1}:\mathcal{W}_{j,m-1}^{+}\rightarrow \mathcal{W}_{j+1,m-1}^{+}$ be the monomorphism that replaces every occurrence of the letter $\ell_{j}$ (respectively $\ell_j^{-1}$) by $\rho_{j,1}\ell_{2j}\rho_{j,1}^{-1}\rho_{j,2}\ell_{2j+1}\rho_{j,2}^{-1}$ (respectively $\rho_{j,2}\ell_{2j+1}^{-1}\rho_{j,2}^{-1}\rho_{j,1}\ell_{2j}^{-1}\rho_{j,1}^{-1}$) if $2j+1\leqslant m-1$, but instead replaces it by $\rho_{j,1}\rho_{j,1}^{-1}\rho_{j,2}\rho_{j,2}^{-1}$ (respectively $\rho_{j,2}\rho_{j,2}^{-1}\rho_{j,1}\rho_{j,1}^{-1}$) if $2j\geqslant m$. Since each $\overline{D}_{j,m-1}$ with $n\leqslant j\leqslant m-1$ is injective, so is their composition $\overline{D}=\overline{D}_{m-1,m-1}\circ \overline{D}_{m-2,m-1}\circ\cdots \circ \overline{D}_{n,m-1}$. Moreover, the following diagram commutes: \[\xymatrix{ \mathcal{F}_{n,k}^{+} \ar[d]_{R_{n,m-1}\circ \cdots \circ R_{n,k-1}} \ar[rrrr]^-{D_{m-1}\circ D_{m-2}\circ \cdots\circ D_n} &&&&\mathcal{F}_{m,k}^{+} \ar[d]^{R_{m,m-1}\circ \cdots \circ R_{m,k-1}}\\ \mathcal{W}_{n,m-1}^{+} \ar[rrrr]_-{\overline{D}} &&&& \mathcal{W}_{m,m-1}^{+} }\] Hence, for $s\in\{1,2\}$, we have \begin{eqnarray*} \overline{D}(\omega_{n,m-1}([\beta^{(s)}])) &=& \overline{D}(R_{n,m-1}\circ \cdots \circ R_{n,k-1}(g_{n,k}([\beta^{(s)}])))\\ &=& R_{m,m-1}\circ \cdots \circ R_{m,k-1}\circ D_{m-1}\circ D_{m-2}\circ \cdots\circ D_n(g_{n,k}([\beta^{(s)}]))\\ &=& R_{m,m-1}\circ \cdots \circ R_{m,k-1}(g_{m,k}([\gamma^{(s)}]))\\ &=& \omega_{m,m-1}([\gamma^{(s)}])\\ &=& \chi_{m-1}([\alpha^{(s)}]). 
\end{eqnarray*} Since $\omega_{n,m-1}([\beta^{(1)}])\neq \omega_{n,m-1}([\beta^{(2)}])$ and $\overline{D}$ is injective, we have $\chi_{m-1}([\alpha^{(1)}])\neq \chi_{m-1}([\alpha^{(2)}])$. \end{proof} \begin{remark}\label{noalgebra} Although $\bigcup_{i=1}^\infty (B_{i,1}\cup B_{i,2})\subseteq \mathbb{P}$ is a bouquet of circles that is not homeomorphic to $\mathbb{H}$, one can algebraically set up a commutative diagram as follows: \[ \xymatrix{ {\mathcal W}^+_{2,1} \ar[d]_{S_{2,1}}& {\mathcal W}^+_{3,2} \ar[l]_{E_1} \ar[d]_{S_{3,2}} & {\mathcal W}^+_{4,3} \ar[l]_{E_2} \ar[d]_{S_{4,3}} & \cdots \ar[l]_{E_3}\\ F^+_{2,1} & F^+_{3,2} \ar[l]_{J_1} & F^+_{4,3} \ar[l]_{J_2} & \cdots \ar[l]_{J_3} } \] However, there does \underline{not} exist an injective homomorphism \[\pi_1(\mathbb{P},b_0)\hookrightarrow \lim_{\longleftarrow}\left( F^+_{2,1}\stackrel{J_1}{\longleftarrow} F^+_{3,2}\stackrel{J_2}{\longleftarrow} F^+_{4,3}\stackrel{J_3}{\longleftarrow} \cdots \right),\] because $\pi_1(\mathbb{P},b_0)$ is not residually free (Remark~\ref{NRF}). \end{remark} \section{The homotopically path Hausdorff property}\label{sec:HPH} \begin{definition}[Homotopically path Hausdorff \cite{FRVZ}] \label{DefHPH} A path-connected space $X$ is called {\em homotopically path Hausdorff} if for every two paths $\alpha, \beta:[0,1] \rightarrow X$ with $\alpha(0)=\beta(0)$ and $\alpha(1)=\beta(1)$ such that $\alpha\cdot \beta^-$ is not null-homotopic, there is a partition $0=t_0<t_1<\cdots < t_n=1$ of $[0,1]$ and open subsets $U_1, U_2, \dots, U_n$ of $X$ with $\alpha([t_{i-1},t_i])\subseteq U_i$ for all $1\leqslant i\leqslant n$ and with the property that if $\gamma:[0,1]\rightarrow X$ is any path with $\gamma([t_{i-1},t_i])\subseteq U_i$ for all $1\leqslant i\leqslant n$ and with $\gamma(t_i)=\alpha(t_i)$ for all $0\leqslant i \leqslant n$, then $\gamma\cdot \beta^-$ is not null-homotopic. 
\end{definition} \begin{remark}\label{T1} We recall from \cite{BFa} that a connected and locally path-connected space $X$ is homotopically path Hausdorff if and only if $\pi_1(X,x)$ is T$_1$ in the quotient topology induced by the compact-open topology on the loop space $\Omega(X,x)$. \end{remark} \begin{remark} \label{Y'} If the natural homomorphism $\pi_1(X,x) \hookrightarrow \check{\pi}_1(X,x)$ is injective, then $X$ is homotopically path Hausdorff \cite{FRVZ}. However, the converse does not hold in general \cite[Example $Y'$]{FRVZ}. \end{remark} \begin{theorem}\label{HPH} $\mathbb{P}$ is homotopically path Hausdorff. \end{theorem} \begin{proof} Let $1\not=[\alpha]\in \pi_1(\mathbb{P},b_0)$. We wish to find a partition $0=t_0<t_1<\cdots<t_s=1$ and open subsets $U_1, U_2, \dots, U_s\subseteq \mathbb{P}$ with $\alpha([t_{i-1},t_i])\subseteq U_i$ for all $1\leqslant i \leqslant s$ such that the following property holds: if $\gamma:[0,1]\rightarrow \mathbb{P}$ is any loop with $\gamma(t_i)=\alpha(t_i)$ for all $0\leqslant i\leqslant s$ and $\gamma([t_{i-1},t_i])\subseteq U_i$ for all $1\leqslant i \leqslant s$, then $[\gamma]\not=1\in\pi_1(\mathbb{P},b_0)$. (This property is preserved if we add a subdivision point $t'\in (t_{i-1},t_i)$ and choose $U'=U_i$. Therefore, checking this statement for all essential loops $\alpha$ based at $b_0$ validates Definition~\ref{DefHPH} for all essential loops $\alpha\cdot \beta^-$.) By Theorem~\ref{inj}, there is an $n\in \mathbb{N}$ such that for $\beta=d_{n-1}\circ d_{n-2}\circ \cdots \circ d_1 \circ \alpha$, we have $\beta([0,1])\subseteq \mathbb{H}^+_n$ and $\chi_{n-1}([\alpha])=\omega_{n,n-1}([\beta])\in {\mathcal W}^+_{n,n-1}$ is not the empty word. 
Choose $k\in \mathbb{N}$ sufficiently large, so that \[R_{n,n-1}\circ R_{n,n}\circ \cdots \circ R_{n,k-1}\circ g_{n,k}([\beta])=\omega_{n,n-1}([\beta]).\] Consider $\beta:[0,1]\rightarrow \mathbb{H}^+_n\subseteq \mathbb{P}^+_n$ and $r_{n,k}:\mathbb{H}^+_n\rightarrow \mathbb{H}^+_{n,k}$. Since $\mathbb{H}^+_{n,k}$ is a finite bouquet of circles, there is an open cover $\mathcal B$ of $\mathbb{H}^+_{n,k}$ with the following property: if $\eta,\tau:([0,1],\{0,1\})\rightarrow (\mathbb{H}^+_{n,k},b_0)$ are two loops that are $\mathcal B$-close, i.e., if for every $t\in [0,1]$, there is a $B\in {\mathcal B}$ with $\{\eta(t),\tau(t)\}\subseteq B$, then $[\eta]=[\tau]\in \pi_1(\mathbb{H}^+_{n,k},b_0)$. Choose an open cover $\{W_j \mid j\in J\}$ of $\mathbb{H}^+_n$ and a cover $\{V_j\mid j\in J\}$ of $\mathbb{H}^+_n$ by open subsets of $\mathbb{P}^+_n$ such that $\{W_j \mid j\in J\}$ refines $\{(r_{n,k})^{-1}(B)\mid B\in {\mathcal B}\}$ and such that each $V_j$ deformation retracts onto $W_j$. (See proof of Theorem~\ref{SHHthm}.) Choose a partition $0=t_0<t_1<\cdots<t_s=1$ and indices $j_1, j_2, \dots , j_s\in J$ such that $\beta([t_{i-1},t_i])\subseteq V_{j_i}$ for $1\leqslant i \leqslant s$. Put $U_i=(d_{n-1}\circ d_{n-2}\circ \cdots \circ d_1)^{-1}(V_{j_i})$ for $1\leqslant i \leqslant s$. Now, let $\gamma:[0,1]\rightarrow \mathbb{P}$ be a loop with $\gamma(t_i)=\alpha(t_i)$ for all $0\leqslant i\leqslant s$ and $\gamma([t_{i-1},t_i])\subseteq U_i$ for all $1\leqslant i \leqslant s$. Then $\gamma:[0,1]\rightarrow \mathbb{P}$ is homotopic (relative to its endpoints) to a loop $\gamma':[0,1]\rightarrow \mathbb{H}^+_n\subseteq \mathbb{P}$ such that $r_{n,k}\circ \gamma':[0,1]\rightarrow \mathbb{H}^+_{n,k}$ and $r_{n,k}\circ \beta:[0,1]\rightarrow \mathbb{H}^+_{n,k}$ are $\mathcal B$\nobreakdash-close. 
Hence, $[r_{n,k}\circ \gamma']=[r_{n,k}\circ \beta]\in \pi_1(\mathbb{H}^+_{n,k},b_0)$, so that $g_{n,k}([\gamma'])=h_{n,k}([r_{n,k}\circ \gamma'])=h_{n,k}([r_{n,k}\circ \beta])=g_{n,k}([\beta])$. Choose $m\geqslant k$ sufficiently large, so that \[R_{n,n-1}\circ R_{n,n}\circ \cdots \circ R_{n,k-1}\circ R_{n,k}\circ \cdots \circ R_{n,m-1}\circ g_{n,m}([\gamma'])=\omega_{n,n-1}([\gamma']),\] which, as a word in ${\mathcal W}^+_{n,n-1}$, has at least as many letters as \begin{eqnarray*} & & R_{n,n-1}\circ R_{n,n}\circ \cdots \circ R_{n,k-1}\circ T_{n,k}\circ \cdots \circ T_{n,m-1}\circ g_{n,m}([\gamma'])\\&=&R_{n,n-1}\circ R_{n,n}\circ \cdots \circ R_{n,k-1}\circ g_{n,k}([\gamma'])\\&=&R_{n,n-1}\circ R_{n,n}\circ \cdots \circ R_{n,k-1}\circ g_{n,k}([\beta])\\&=&\omega_{n,n-1}([\beta]). \end{eqnarray*} Hence, $\chi_{n-1}([\gamma'])=\omega_{n,n-1}([\gamma'])$ is not the empty word. We conclude that $\chi([\gamma])=\chi([\gamma'])$ is not trivial so that $[\gamma]\not=1\in\pi_1(\mathbb{P},b_0)$. \end{proof} \section{The 1-UV$_0$ property} \begin{definition}[1-UV$_0$ \cite{CMRZZ}]\label{small} We say that $X$ is {\em 1-UV$_0$ at} $x\in X$ if for every neighborhood $U$ of $x$ in $X$, there is an open subset $V$ in $X$ with $x\in V\subseteq U$ such that for every map $f:D^2\rightarrow X$ from the unit disk with $f(S^1)\subseteq V$ there is a map $g:D^2\rightarrow U$ with $f|_{S^1}=g|_{S^1}$. We say that $X$ is {\em 1-UV$_0$} if $X$ is 1-UV$_0$ at every point $x\in X$. \end{definition} \begin{proposition}\label{puv} All one-dimensional spaces and all planar spaces are 1-UV$_0$. \end{proposition} \begin{proof} In a one-dimensional space, every null-homotopic loop contracts within its own image \cite[Lemma~2.2]{CF1959}. In a planar space, every null-homotopic loop has a contraction whose diameter equals that of the image of the loop \cite[Lemma~13]{FZ2005}. \end{proof} \begin{theorem}\label{UVThm} $\mathbb{P}$ is 1-UV$_0$. 
\end{theorem} \begin{proof} It suffices to show that $\mathbb{P}$ is 1-UV$_0$ at $b_0$. Let $U$ be an open neighborhood of $b_0$ in $\mathbb{P}$. Recall $U_n$ and $f^{-1}_k(U_n)=L^0_{k,n}\cup L^1_{k,n} \cup L^2_{k,n}\subseteq \partial P_k$ from the proof of Theorem~\ref{SHHthm}. Fix $m\in \mathbb{N}$ such that $b_0\in U_m\subseteq U\cap \mathbb{H}$. For each $k\in \mathbb{N}$, choose three pairwise disjoint open neighborhoods $M^0_k,M^1_k,M^2_k$ of $L^0_{k,m},L^1_{k,m},L^2_{k,m}$ in $P_k$, respectively, such that, for $i\in \{0,1,2\}$, $M^i_k\cap \partial P_k=L^i_{k,m}$, $M^i_k\cap P^\circ_k\subseteq U\cap P^\circ_k$, and $M^i_k$ deformation retracts onto $L^i_{k,m}$. Define $V'_{k,m}= M^0_k\cup M^1_k\cup M^2_k$ and $V=\bigcup_{k\in\mathbb{N}} f_k(V'_{k,m})$. Then $V$ is an open subset of $\mathbb{P}$ with $b_0\in V\subseteq U$. Let $f:D^2\rightarrow \mathbb{P}$ be a map with $f(S^1)\subseteq V$. We will show that $\alpha=f|_{S^1}$ contracts within $V$. Since $V$ is path connected, we may assume that $\alpha$ is a loop based at $b_0$ and show that $[\alpha]=1\in \pi_1(V,b_0)$. Since $V$ deformation retracts onto $U_m\subseteq\mathbb{H}$, we may assume that $\alpha$ lies in $\mathbb{H}$. Since $[\alpha]=1\in \pi_1(\mathbb{P},b_0)$, we have $[\alpha]=1\in \pi_1(\mathbb{H},b_0)$ by Lemma~\ref{HP-inj}. As $\mathbb{H}$ is one-dimensional, this implies that $\alpha$ contracts within its own image. \end{proof} \section{Generalized covering projections and the homotopically Hausdorff property}\label{review} In this section, we briefly review generalized covering projections. Let $X$ be a path-connected topological space and $H \leqslant \pi_1(X,x_0)$. 
Even if there is no classical covering projection $p:(\widetilde{X},\widetilde{x})\rightarrow (X,x_0)$ with $p_\#\pi_1(\widetilde{X},\widetilde{x})=H$ (see Remark~\ref{classical}), there might be a {\em generalized} one: \begin{definition}[Generalized covering projection \cite{B,FZ2007}] Let $X$ be a path-connected topological space. We call a map $q:\widehat{X}\rightarrow X$ a {\em generalized covering projection} if $\widehat{X}$ is nonempty, connected and locally path connected and if for every $\widehat{x}\in \widehat{X}$, for every connected and locally path-connected space $Y$, and for every map $f:(Y,y)\rightarrow (X,q(\widehat{x}))$ with $f_\#\pi_1(Y,y)\leqslant q_\#\pi_1(\widehat{X},\widehat{x})$, there is a unique map $g:(Y,y)\rightarrow (\widehat{X},\widehat{x})$ such that $q\circ g=f$. \end{definition} \begin{remark} Suppose $q:(\widehat{X},\widehat{x})\rightarrow (X,x_0)$ is a generalized covering projection. Then $q_\#:\pi_1(\widehat{X},\widehat{x})\rightarrow \pi_1(X,x_0)$ is injective. If we put $K=q_\#\pi_1(\widehat{X},\widehat{x})$, then $q:\widehat{X}\rightarrow X$ is characterized as usual, up to equivalence, by the conjugacy class of $K$ in $G=\pi_1(X,x_0)$. Moreover, $Aut(\widehat{X}\stackrel{q}{\rightarrow} X)\cong N_G(K)/K$, where $N_G(K)$ denotes the normalizer of $K$ in $G$. (The standard arguments apply \cite[2.3.5 \& 2.6.2]{Spanier}.) \end{remark} If it exists, a generalized covering projection can be obtained in the standard way: On the set of all paths $\alpha:([0,1],0)\rightarrow (X,x_0)$ consider the equivalence relation $\alpha\sim \beta$ if and only if $\alpha(1)=\beta(1)$ and $[\alpha\cdot \beta^-]\in H$. Denote the equivalence class of $\alpha$ by $\left<\alpha\right>$ and denote the set of all equivalence classes by $\widetilde{X}_H$. Let $\widetilde{x}_0$ denote the class containing the constant path at $x_0$. 
We give $\widetilde{X}_H$ the topology generated by basis elements of the form $\left<\alpha,U\right>=\{\left<\alpha\cdot \gamma \right> \mid \gamma:([0,1],0)\rightarrow (U,\alpha(1))\}$, where $U$ is an open subset of $X$ and $\left<\alpha\right>\in \widetilde{X}_H$ with $\alpha(1)\in U$. Then $\widetilde{X}_H$ is connected and locally path connected and the endpoint projection $p_H:\widetilde{X}_H\rightarrow X$, defined by $p_H(\left<\alpha\right>)=\alpha(1)$, is a continuous surjection. Moreover, the map $p_H:\widetilde{X}_H\rightarrow X$ is open if and only if $X$ is locally path connected. If $p_H:\widetilde{X}_H\rightarrow X$ has unique path lifting, then it is a generalized covering projection and, for every $\left<\alpha\right>\in \widetilde{X}_H$, $(p_H)_\#:\pi_1(\widetilde{X}_H,\left<\alpha\right>)\rightarrow \pi_1(X,\alpha(1))$ is a monomorphism onto $[\alpha^-]H[\alpha]$; in particular $(p_H)_\#\pi_1(\widetilde{X}_H,\widetilde{x}_0)=H$ \cite{FZ2007}. If $X$ admits a generalized covering projection $q:(\widehat{X},\widehat{x})\rightarrow (X,x_0)$ such that $q_\#\pi_1(\widehat{X},\widehat{x})=H$, then there is a homeomorphism $h:(\widehat{X},\widehat{x})\rightarrow (\widetilde{X}_H,\widetilde{x}_0)$ with $p_H\circ h=q$ \cite{B}. \begin{remark}\label{HH} For $p_H:\widetilde{X}_H\rightarrow X$ to have unique path lifting, every fiber $p_H^{-1}(x)$ with $x\in X$ must be $T_1$, but not necessarily discrete \cite{FZ2007}. Moreover, T$_1$ fibers are not sufficient \cite{BFi,VZ}. Note that these fibers are T$_1$ if and only if for every $x\in X$, \[\bigcup_{\alpha(1)=x}\;\; \bigcap_{U\in {\mathcal T}_x} H\pi(\alpha,U)=H.\] \end{remark} \begin{definition}[Homotopically Hausdorff rel.\@ $H$ \cite{FZ2007}] We call $X$ {\em homotopically Hausdorff relative to} $H\leqslant \pi_1(X,x_0)$, if every fiber of $p_H:\widetilde{X}_H\rightarrow X$ is $T_1$. We call $X$ {\em homotopically Hausdorff} if it is homotopically Hausdorff relative to $H=\{1\}$. 
\end{definition} \begin{remark} If $X$ is strongly homotopically Hausdorff, then $X$ is homotopically Hausdorff. (Compare the formula of Remark~\ref{HH} with that in Definition~\ref{SHH}.) \end{remark} \begin{remark}\label{NormalNotHH} There are normal subgroups $H\trianglelefteqslant \pi_1(\mathbb{H},b_0)$, such that $\mathbb{H}$ is not homotopically Hausdorff relative to $H$. For example, $\mathbb{H}$ is not homotopically Hausdorff relative to the commutator subgroup of $\pi_1(\mathbb{H},b_0)$ \cite[Example~3.10]{BFi}. \end{remark} We abbreviate $\widetilde{X}_{\{1\}}$ by $\widetilde{X}$ and $p_{\{1\}}:\widetilde{X}_{\{1\}}\rightarrow X$ by $p:\widetilde{X}\rightarrow X$. Moreover, note that if $H=\{1\}$, then $\left<\alpha\right>=[\alpha]$. \begin{remark}\label{HPH->UPL} If $X$ is homotopically path Hausdorff, then $p:\widetilde{X}\rightarrow X$ has unique path lifting \cite{FRVZ}. \end{remark} \begin{remark}\label{1UV->UPL} If $X$ is path connected, 1-UV$_0$ and metrizable, then $p:\widetilde{X}\rightarrow X$ has unique path lifting \cite{BFi}. \end{remark} \begin{theorem}\label{Ucov} There exists a generalized covering projection $p:\widetilde{\mathbb{P}}\rightarrow \mathbb{P}$ with $\pi_1(\widetilde{\mathbb{P}},\widetilde{b}_0)=\{1\}$. \end{theorem} \begin{proof} By Theorem~\ref{HPH} and Remark~\ref{HPH->UPL}, $p:\widetilde{\mathbb{P}}\rightarrow \mathbb{P}$ has unique path lifting. Hence, $p:\widetilde{\mathbb{P}}\rightarrow \mathbb{P}$ is a generalized covering projection, $p_\#\pi_1(\widetilde{\mathbb{P}},\widetilde{b}_0)=\{1\}$, and $p_\#:\pi_1(\widetilde{\mathbb{P}},\widetilde{b}_0)\rightarrow \pi_1(\mathbb{P},b_0)$ is injective. \end{proof} \ \section{The discrete monodromy property}\label{sec:DMP} Let $X$ be a path-connected topological space and $H \leqslant \pi_1(X,x_0)$. 
Even if the map $p_H:\widetilde{X}_H\rightarrow X$ does not have unique path lifting, for every path $\beta:[0,1]\rightarrow X$ and every $\left<\alpha\right>\in p_H^{-1}(\beta(0))$ there is a continuous {\em standard path lift} $\widetilde{\beta}:([0,1],0)\rightarrow (\widetilde{X}_H,\left<\alpha\right>)$ with $p_H\circ \widetilde{\beta}=\beta$, defined by $\widetilde{\beta}(t)=\left<\alpha\cdot \beta_t\right>$, where $\beta_t(s)=\beta(ts)$. Based on the standard path lift, we may define the {\em standard monodromy} for $p_H:\widetilde{X}_H\rightarrow X$, as follows. For a path $\beta:[0,1]\rightarrow X$ from $\beta(0)=x$ to $\beta(1)=y$, we define $\Phi_\beta:p_H^{-1}(x)\rightarrow p_H^{-1}(y)$ by $\Phi_\beta(\left<\alpha\right>)=\left<\alpha\cdot \beta\right>$. Clearly, $\Phi_\beta:p_H^{-1}(x)\rightarrow p_H^{-1}(y)$ is a bijective function with inverse $\Phi_\beta^{-1}=\Phi_{\beta^-}$. However, $\Phi_\beta$ need not be continuous. (Such is the case for $(X,x_0)=(\mathbb{H},b_0)$ and $H=\{1\}$, although $p:\widetilde{X}\rightarrow X$ has unique path lifting. See \cite{FGa} for a discussion.) \begin{remark} Note that $\Phi_\beta$ depends only on the homotopy class $[\beta]$. Moreover, $\left<\alpha\right>\ast\left<\beta\right>:=\Phi_\beta(\left<\alpha\right>)$ is a well-defined group operation on $p_H^{-1}(x_0)$ if and only if $H$ is a normal subgroup of $\pi_1(X,x_0)$. \end{remark} \begin{definition}[Discrete monodromy]\label{DefDM} We say that $X$ has the {\em discrete mono\-dromy property relative to} $H\leqslant \pi_1(X,x_0)$ if for every $x, y\in X$ and for every path\linebreak $\beta:[0,1]\rightarrow X$ from $\beta(0)=x$ to $\beta(1)=y$, the monodromy $\Phi_\beta:p_H^{-1}(x)\rightarrow p_H^{-1}(y)$ is either the identity function or its graph is a discrete subset of $p_H^{-1}(x)\times p_H^{-1}(y)\subseteq \widetilde{X}_H\times \widetilde{X}_H$. 
We say that $X$ has the {\em discrete monodromy property} if it has the discrete monodromy property relative to $H=\{1\}$. \end{definition} \begin{remark} Clearly, if every fiber of $p_H:\widetilde{X}_H\rightarrow X$ is discrete, then $X$ has the discrete monodromy property relative to $H$. However, the converse does not hold in general: $p:\widetilde{\mathbb{H}}\rightarrow \mathbb{H}$ has the discrete monodromy property (see Proposition~\ref{1dm}), but $p^{-1}(b_0)$ is not discrete. \end{remark} \begin{remark}\hspace{10pt}\label{ID} \begin{itemize} \item[(a)] $\Phi_\beta:p_H^{-1}(x)\rightarrow p_H^{-1}(y)$ is the identity function if and only if $x=y$ and $[\beta]\in [\alpha^-]H[\alpha]$ for all paths $\alpha$ in $X$ from $x_0$ to $x$. \item[(b)] The graph of the identity function $id_{p_H^{-1}(x)} : p_H^{-1}(x)\rightarrow p_H^{-1}(x)$ is discrete if and only if $p_H^{-1}(x)$ is discrete. \item[(c)] $p_H^{-1}(x)$ is discrete if and only if for every path $\alpha$ in $X$ from $x_0$ to $x$, there is a $U\in {\mathcal T}_x$ such that $\pi(\alpha,U)\subseteq H$, i.e., $H\pi(\alpha,U)=H$. \end{itemize} \end{remark} \begin{lemma}\label{discreteEQ} Let $H\leqslant \pi_1(X,x_0)$. The graph of $\Phi_\beta:p_H^{-1}(x)\rightarrow p_H^{-1}(y)$ is discrete if and only if for every path $\alpha$ in $X$ from $x_0$ to $x$, there are $U\in {\mathcal T}_x$ and $V\in {\mathcal T}_y$ such that $H\pi(\alpha,U)\cap H\pi(\alpha\cdot \beta,V)=H$. \end{lemma} \begin{proof} First, observe that if $f:A\rightarrow B$ is an injective function between topological spaces, then its graph $\Gamma=\{(a,b)\in A\times B\mid f(a)=b\}$ is a discrete subset of $A\times B$ if and only if for every $a\in A$ there are $U\in {\mathcal T}_a$ and $V\in {\mathcal T}_{f(a)}$ such that $f(U)\cap V=\{f(a)\}$. Now, suppose that the graph of $\Phi_\beta:p_H^{-1}(x)\rightarrow p_H^{-1}(y)$ is discrete and let $\left<\alpha\right>\in p_H^{-1}(x)$. 
Choose $U\in {\mathcal T}_x$ and $V\in {\mathcal T}_y$ with \[\Phi_\beta(\left<\alpha,U\right>\cap p_H^{-1}(x))\cap \left(\left<\alpha\cdot \beta,V\right>\cap p_H^{-1}(y)\right)=\{\left<\alpha\cdot \beta\right>\}.\] Let $g\in H\pi(\alpha,U)\cap H\pi(\alpha\cdot \beta,V)$. Then $g=h_1[\alpha\cdot \gamma\cdot \alpha^-]$ for some $h_1\in H$ and some loop $\gamma$ in $U$, and $g=h_2[\alpha\cdot \beta \cdot \delta \cdot \beta^-\cdot \alpha^-]$ for some $h_2\in H$ and some loop $\delta$ in $V$. Hence, $\Phi_\beta(\left<\alpha\cdot \gamma\right>)=\left<\alpha\cdot \gamma\cdot \beta\right>=\left<\alpha\cdot \beta\cdot \delta\right>\in \left<\alpha\cdot \beta,V\right>\cap p_H^{-1}(y)$. Also, $\Phi_\beta(\left<\alpha\cdot \gamma\right>)\in \Phi_\beta(\left<\alpha,U\right>\cap p_H^{-1}(x))$. Therefore, $\left<\alpha\cdot\gamma\cdot \beta\right>= \Phi_\beta(\left<\alpha\cdot \gamma\right>)=\left<\alpha\cdot \beta\right>$, so that $g=h_1[\alpha\cdot \gamma\cdot \alpha^-]\in H$. Conversely, let $\left<\alpha\right>\in p_H^{-1}(x)$ and suppose $U\in {\mathcal T}_x$ and $V\in {\mathcal T}_y$ are such that $H\pi(\alpha,U)\cap H\pi(\alpha\cdot \beta,V)= H$. Let \[\widetilde{x}\in \Phi_\beta(\left<\alpha,U\right>\cap p_H^{-1}(x))\cap \left(\left<\alpha\cdot \beta,V\right>\cap p_H^{-1}(y)\right).\] Then $\widetilde{x}=\left<\alpha\cdot \gamma\cdot \beta\right>$ for some loop $\gamma$ in $U$, and $\widetilde{x}=\left<\alpha\cdot \beta\cdot \delta\right>$ for some loop $\delta$ in $V$. Then $[\alpha\cdot \gamma\cdot \alpha^-][\alpha\cdot \beta\cdot \delta^-\cdot \beta^-\cdot \alpha^-]\in H$, so that $[\alpha\cdot \gamma\cdot \alpha^-]\in H\pi(\alpha,U)\cap H\pi(\alpha\cdot \beta,V)=H$. Hence, $\widetilde{x}=\left<\alpha\cdot \gamma\cdot \beta\right>=\left<\alpha\cdot \beta\right>$. 
\end{proof} \begin{remark} \label{enough} In order to apply Lemma~\ref{discreteEQ}, there is no need to check every path $\alpha$: if $\alpha$ and $\alpha'$ are two paths from $x_0$ to $x$ such that $[\alpha'\cdot \alpha^-]H=H[\alpha'\cdot \alpha^-]$, then we have $H\pi(\alpha,U)\cap H\pi(\alpha\cdot \beta,V)=H$ if and only if $H\pi(\alpha',U)\cap H\pi(\alpha'\cdot \beta,V)=H$. \end{remark} \begin{remark}\label{prototypical} The space $(X,x_0)=(\mathbb{H}\times [0,1],(b_0,0))$ is the prototypical example of a space that does not have the discrete monodromy property, although $p:\widetilde{X}\rightarrow X$ has unique path lifting. Observe that for the path $\beta(t)=(b_0,t)$ from $x=(b_0,0)$ to $y=(b_0,1)$, the graph of $\Phi_\beta:p^{-1}(x)\rightarrow p^{-1}(y)$ is not discrete. \end{remark} \begin{definition}[Locally quasinormal \cite{FGa}] A subgroup $H\leqslant \pi_1(X,x_0)$ is called {\em locally quasinormal} if for every $x\in X$, for every path $\alpha$ in $X$ from $\alpha(0)=x_0$ to $\alpha(1)=x$, and for every $U\in {\mathcal T}_x$, there is a $V\in {\mathcal T}_x$ such that $x\in V\subseteq U$ and $H\pi(\alpha,V)=\pi(\alpha,V)H$. \end{definition} \begin{remark} Clearly, every normal subgroup of $\pi_1(X,x_0)$ is locally quasinormal. Combining \cite[Lemma~5.2]{FZ2013} with Remark~\ref{ID}(c), we see that if $X$ is locally path connected, then every open subgroup of $\pi_1(X,x_0)$ (in the topology of Remark~\ref{T1}) is locally quasinormal. For example, the nontrivial subgroup $K\leqslant \pi_1(\mathbb{H},b_0)$ from \cite{FZ2013} is open, while it does not contain any nontrivial normal subgroup of $\pi_1(\mathbb{H},b_0)$. \end{remark} The following is a straightforward variation on \cite[Lemma~3.2]{FGa}: \begin{lemma}\label{LQN-NBH} Let $H\leqslant \pi_1(X,x_0)$, $x\in X$, $\alpha$ be a path in $X$ from $x_0$ to $x$, and $U\in {\mathcal T}_x$. Then the following are equivalent: \begin{itemize} \item[(a)] $H\pi(\alpha,U)=\pi(\alpha,U)H$. 
\item[(b)] For every $\Phi_\beta:p_H^{-1}(x)\rightarrow p_H^{-1}(x)$ with $\Phi_\beta(\left<\alpha\right>)\in \left<\alpha,U\right>\cap p_H^{-1}(x)$, we have $\Phi_{\beta}(\left<\alpha,U\right>\cap p_H^{-1}(x))\subseteq \left<\alpha,U\right>\cap p_H^{-1}(x)$. \end{itemize} \end{lemma} We include the proof for completeness. \begin{proof} (i) First, assume that $H\pi(\alpha,U)=\pi(\alpha,U)H$ and $\Phi_\beta(\left<\alpha\right>)\in \left<\alpha,U\right>\cap p_H^{-1}(x)$. Then $\left<\alpha\cdot \beta\right>=\Phi_\beta(\left<\alpha\right>)=\left<\alpha\cdot \delta\right>$ for some loop $\delta$ in $U$. So, $[\beta]=[\alpha^-]h[\alpha\cdot \delta]$ for some $h\in H$. Let $\left<\gamma\right>\in \left<\alpha,U\right>\cap p_H^{-1}(x)$. Then $[\gamma]=h'[\alpha\cdot \delta']$ for some $h'\in H$ and some loop $\delta'$ in $U$. Since $[\alpha\cdot \delta'\cdot \alpha^-]h\in \pi(\alpha,U)H=H\pi(\alpha,U)$, we have $[\alpha\cdot \delta'\cdot \alpha^-]h=h''[\alpha\cdot \delta''\cdot \alpha^-]$ for some $h''\in H$ and some loop $\delta''$ in $U$. Therefore, we have $[\gamma\cdot \beta]=h'[\alpha\cdot \delta'\cdot\alpha^-]h[\alpha\cdot \delta]=h'h''[\alpha\cdot \delta''\cdot \alpha^-][\alpha\cdot \delta]=h'h''[\alpha\cdot \delta''\cdot \delta]$. Hence, $\Phi_\beta(\left<\gamma\right>)=\left<\gamma\cdot \beta\right>=\left<\alpha\cdot \delta''\cdot\delta\right>\in \left<\alpha,U\right>\cap p_H^{-1}(x)$. (ii) Now, assume that $\Phi_{\beta}(\left<\alpha,U\right>\cap p_H^{-1}(x))\subseteq \left<\alpha,U\right>\cap p_H^{-1}(x)$ whenever $\Phi_\beta(\left<\alpha\right>)\in \left<\alpha,U\right>\cap p_H^{-1}(x)$. It suffices to show that $\pi(\alpha,U)H\subseteq H\pi(\alpha,U)$. Let $[\tau]\in \pi(\alpha,U)H$. Then $[\tau]=[\alpha\cdot \delta\cdot \alpha^-][\gamma]$ for some loop $\delta$ in $U$ and some $[\gamma]\in H$. Put $\beta=\alpha^-\cdot \gamma\cdot \alpha$. Then $\Phi_\beta(\left<\alpha\right>)=\left<\alpha\right>$. 
Hence, $\left<\alpha\cdot \delta\cdot \beta\right>=\Phi_\beta(\left<\alpha\cdot \delta\right>)=\left<\alpha\cdot \delta'\right>$ for some loop $\delta'$ in $U$. Therefore, $[\tau]=[\alpha\cdot \delta \cdot \beta \cdot (\delta')^-\cdot \alpha^- ][\alpha\cdot \delta'\cdot \alpha^-]\in H\pi(\alpha,U).$ \end{proof} \begin{proposition}\label{DMP->HH} Let $H\leqslant \pi_1(X,x_0)$ be locally quasinormal. If $X$ has the discrete monodromy property relative to $H$, then every fiber of $p_H:\widetilde{X}_H\rightarrow X$ is T$_1$. \end{proposition} \begin{proof} Suppose there is an $x\in X$ such that $p_H^{-1}(x)$ is not T$_1$. Then there are $\left<\alpha\right>,\left<\gamma\right>\in p_H^{-1}(x)$ with $\left<\alpha\right>\not=\left<\gamma\right>$ such that for every $W\in {\mathcal T}_x$ we have $\left<\gamma\right>\in \left<\alpha,W\right>\cap p_H^{-1}(x)$. Let any $U\in {\mathcal T}_x$ be given. Choose $V\in {\mathcal T}_x$ with $x\in V\subseteq U$ and $H\pi(\alpha,V)=\pi(\alpha,V)H$. Put $\widetilde{V}=\left<\alpha,V\right>\cap p_H^{-1}(x)$ and $\beta=\alpha^-\cdot \gamma$. Then $\Phi_\beta(\left<\alpha\right>)=\left<\gamma\right>\in \widetilde{V}$. By Lemma~\ref{LQN-NBH}, $\Phi_\beta(\left<\gamma\right>)\in \widetilde{V}$. Hence, $(\left<\alpha\right>,\Phi_\beta(\left<\alpha\right>))\not=(\left<\gamma\right>,\Phi_\beta(\left<\gamma\right>))$ are both elements of $\widetilde{V}\times \widetilde{V}$. We conclude that $\Phi_\beta:p_H^{-1}(x)\rightarrow p_H^{-1}(x)$ is not the identity function and that its graph is not discrete. \end{proof} \begin{corollary} If $X$ has the discrete monodromy property, then $X$ is homotopically Hausdorff. \end{corollary} \begin{proof} Since $H=\{1\}\leqslant \pi_1(X,x_0)$ is locally quasinormal, this follows from Proposition~\ref{DMP->HH}. \end{proof} The proof of the following proposition is modelled on \cite{E2002} and \cite{CK}. 
\begin{proposition} \label{1dm} All one-dimensional metric spaces and all planar spaces have the discrete monodromy property. \end{proposition} \begin{proof} Let $x,y\in X$ and let $\beta:[0,1]\rightarrow X$ be a path from $\beta(0)=x$ to $\beta(1)=y$. If $\beta$ is a loop, we assume that it is essential. Let $\alpha$ be any path from $x_0$ to $x$. We wish to find open neighborhoods $U$ and $V$ of $x$ and $y$, respectively, such that $\pi(\alpha,U)\cap \pi(\alpha\cdot \beta,V)=\{1\}\leqslant \pi_1(X,x_0)$. (a) Suppose $X\subseteq \mathbb{R}^2$. If $x=y$, choose $\epsilon>0$ such that the loop $\beta$ cannot be homotoped within $X$ into $N_\epsilon(x)=\{z\in \mathbb{R}^2\mid \|x-z\|<\epsilon\}$, relative to its endpoints. (Here we use the fact that $X$ is homotopically Hausdorff; see Remark~\ref{HH}.) If $x\not=y$, choose any $\epsilon$ with $0<\epsilon<\|x-y\|/2$. Suppose that $\partial N_r(x)\subseteq X$ for all $0<r<\epsilon$. Then $N_\epsilon(x)\subseteq X$. In this case, taking $U=N_\epsilon(x)$ gives $\pi(\alpha,U)=\{1\}$. So, by making $\epsilon$ smaller, if necessary, we may assume that $X\cap \partial N_\epsilon(x)\not=\partial N_\epsilon(x)$. Put $U=X\cap N_\epsilon(x)$ and $V=X\cap N_\epsilon(y)$. Suppose, to the contrary, that there are essential loops $\delta$ and $\tau$ in $U$ and $V$, respectively, with $[\alpha\cdot \delta\cdot \alpha^-]=[\alpha\cdot \beta \cdot \tau \cdot \beta^-\cdot \alpha^-]$, i.e., $[\delta]=[\beta\cdot \tau \cdot \beta^-]$. Then there is a map $h:A\rightarrow X$ from an annulus $A$ whose boundary components $J_1$ and $J_2$ map to $\delta$ and $\tau$, respectively, along with a diametrical arc $a\subseteq A$ connecting $J_1$ to $J_2$ that maps to $\beta$. If $x\not=y$, then $h^{-1}(X\cap \partial N_\epsilon(x))$ clearly separates $J_1$ from $J_2$ in $A$. 
However, this is also true if $x=y$; for otherwise there is an arc $a'\subseteq A$ connecting $J_1$ to $J_2$ which $h$ maps to a path $\beta'$ in $U$, so that $\beta$ is homotopic within $X$, relative to its endpoints, to the concatenation of an initial subpath $\delta'$ of $\delta$, the path $\beta'$, a terminal subpath $\tau'$ of $\tau$, and a path of the form $\tau\cdot \tau\cdots \tau$ or $\tau^-\cdot \tau^- \cdots \tau^-$, all of which lie in $U$, violating the choice of $\epsilon$. Therefore, as in the proof of \cite[Lemma 5.5]{CK}, the loop $\delta$ contracts within $X$; a contradiction. (b) Suppose $X$ is a one-dimensional metric space. We may assume that $\beta$ is a reduced non-degenerate path (possibly a loop). Choose open neighborhoods $U$ and $V$ of $x$ and $y$ in $X$, respectively, such that $\beta$ is not contained in $U\cup V$. Suppose, to the contrary, that there are essential loops $\delta$ and $\tau$ in $U$ and $V$, respectively, with $[\delta]=[\beta\cdot \tau \cdot \beta^-]$. We may assume that both $\delta$ and $\tau$ are reduced. Then $\beta([0,1])\subseteq \delta([0,1])\cup \tau([0,1])\subseteq U\cup V$ (see Lemma~\ref{cancellinglemma}); a contradiction. \end{proof} \begin{theorem}\label{PhasDMP} $\mathbb{P}$ has the discrete monodromy property. \end{theorem} \begin{proof} Let $1\not=[\beta]\in\pi_1(\mathbb{P},b_0)$. In view of Remark~\ref{ID}(a), Lemma~\ref{discreteEQ} and Remark~\ref{enough}, and since $\mathbb{P}$ is locally contractible at every point other than $b_0$, it suffices to find an open neighborhood $V$ of $b_0$ in $\mathbb{P}$ such that $\pi(c_{b_0},V)\cap \pi(\beta,V)=\{1\}$, where $c_{b_0}$ denotes the constant path at $b_0$. Choose $n\in \mathbb{N}$ such that $\beta([0,1])\cap P^\circ_i=\emptyset$ for all $i>n$. Put $\beta'=d_n\circ d_{n-1}\circ \cdots \circ d_1\circ \beta$. Then $\beta'([0,1])\subseteq \mathbb{H}^+_{n+1}$ and $[\beta]=[\beta']\in\pi_1(\mathbb{P},b_0)$.
So, we may assume that $\beta$ is a reduced loop in $\mathbb{H}^+_{n+1}$. Increasing $n$ if necessary, we may assume that $\beta$ traverses one of the circles $B_{i,j}$ with $i\in \{1,2,\dots, n\}$ and $j\in \{1,2\}$. As in the proof of Theorem~\ref{UVThm}, we may construct an open neighborhood $V$ of $b_0$ in $\mathbb{P}$ that does not fully contain $B_{i,j}$, such that $V$ deformation retracts onto $\mathbb{H}_{n+1}\subseteq \mathbb{H}$. Suppose, to the contrary, that there are essential loops $\delta$ and $\tau$ in $V$ such that $[\delta]=[\beta\cdot \tau\cdot \beta^-]\in\pi_1(\mathbb{P},b_0)$. We may assume that both $\delta$ and $\tau$ are reduced loops in $\mathbb{H}_{n+1}$. Let $F$ be a homotopy from $\delta$ to $\beta\cdot \tau\cdot \beta^-$ (relative to endpoints) within $\mathbb{P}$. Choose $k\geqslant n$ such that the image of $d_k\circ d_{k-1}\circ \cdots \circ d_1\circ F$ is contained in $\mathbb{H}^+_{k+1}$. Let $\beta'$, $\delta'$ and $\tau'$ be the composition of $d_k\circ d_{k-1}\circ \cdots \circ d_1$ with $\beta$, $\delta$ and $\tau$, respectively. Then $\beta'$, $\delta'$ and $\tau'$ are reduced loops in $\mathbb{H}^+_{k+1}$ such that $[\delta']=[\beta'\cdot \tau'\cdot \beta'^-]\in\pi_1(\mathbb{H}^+_{k+1},b_0)$. However, neither $\delta'$ nor $\tau'$ traverses $B_{i,j}$, while $\beta'$ does; a contradiction. (See Lemma~\ref{cancellinglemma}.) \end{proof} Consider the subspace $w(X)$ of ``wild'' points of $X$, defined by \[w(X)=\{x\in X\mid X \mbox{ is not semilocally simply connected at } x\}.\] The following is the main utility for spaces satisfying the discrete monodromy property, as implicitly used in \cite{E2002} and \cite{CK}. The proof is given after Corollary~\ref{hmpty} below. \begin{theorem}\label{utility} Suppose both $X$ and $Y$ have the discrete monodromy property. If $f:X\rightarrow Y$ is a homotopy equivalence with homotopy inverse $g:Y\rightarrow X$, then $f$ maps $w(X)$ homeomorphically onto $w(Y)$, with inverse $g|_{w(Y)}$. 
\end{theorem} \begin{remark} In order to see the necessity of the assumptions in Theorem~\ref{utility}, consider $X=\mathbb{H}$ and $Y=\mathbb{H}\times [0,1]$. Then $X$ has the discrete monodromy property, $X$ and $Y$ are homotopy equivalent, but $w(X)=\{b_0\}$ and $w(Y)=\{b_0\}\times [0,1]$ are not homeomorphic. \end{remark} For a path $\beta:[0,1]\rightarrow X$, we let $\varphi_\beta:\pi_1(X,\beta(0))\rightarrow \pi_1(X,\beta(1))$ be the base point changing isomorphism defined by $\varphi_\beta([\delta])=[\beta^-\cdot \delta\cdot \beta]$. \begin{lemma}\label{agree} Suppose $Y$ has the discrete monodromy property. Let $f,g:X\rightarrow Y$ be maps such that $\varphi_\beta\circ f_\#= g_\#:\pi_1(X,x)\rightarrow \pi_1(Y,g(x))$ for some $x\in X$ and some path $\beta$ in $Y$ from $f(x)$ to $g(x)$. If $f(x)\not=g(x)$, then there is a $W\in {\mathcal T}_x$ such that $f_\#:\pi_1(W,x)\rightarrow \pi_1(Y,f(x))$ is trivial. \end{lemma} \begin{proof} Suppose $f(x)\not=g(x)$. Let $c_{f(x)}$ be the constant path at $f(x)$. By Lemma~\ref{discreteEQ}, there are $U\in {\mathcal T}_{f(x)}$ and $V\in {\mathcal T}_{g(x)}$ such that $\pi(c_{f(x)},U)\cap \pi(\beta,V)=\{1\}\leqslant \pi_1(Y,f(x))$. Choose $W\in {\mathcal T}_x$ with $f(W)\subseteq U$ and $g(W)\subseteq V$. Let $\ell$ be a loop in $W$, based at $x$. Then $f_\#([\ell])=[f\circ\ell]=[\beta\cdot (g\circ\ell)\cdot \beta^-]\in \pi(c_{f(x)},U)\cap \pi(\beta,V)=\{1\}$. \end{proof} \begin{corollary}\label{hmpty} Suppose $X$ is path connected and $Y$ has the discrete monodromy property. If $f,g:X\rightarrow Y$ are homotopic maps and $f_\#:\pi_1(X,x_0)\rightarrow \pi_1(Y,f(x_0))$ is injective, then $f|_{w(X)}=g|_{w{(X)}}$. \end{corollary} \begin{proof} Let $F:X\times [0,1]\rightarrow Y$ be a map with $F(x,0)=f(x)$ and $F(x,1)=g(x)$ for all $x\in X$. Fix $x\in w(X)$ and let $\beta:[0,1]\rightarrow Y$ be given by $\beta(t)=F(x,t)$. Then $\varphi_\beta\circ f_\#= g_\#:\pi_1(X,x)\rightarrow \pi_1(Y,g(x))$. 
Since $x\in w(X)$ and since $f_\#:\pi_1(X,x)\rightarrow \pi_1(Y,f(x))$ is injective, it follows from Lemma~\ref{agree} that $f(x)=g(x)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{utility}] Let $f:X\rightarrow Y$ and $g:Y\rightarrow X$ be a pair of homotopy inverses. Then, for every $x\in X$, $f_\#:\pi_1(X,x)\rightarrow \pi_1(Y,f(x))$ is an isomorphism; in particular, it is injective. Therefore, $f(w(X))\subseteq w(Y)$. Similarly, $g(w(Y))\subseteq w(X)$. Since $g\circ f$ is homotopic to the identity, it follows from Corollary~\ref{hmpty} that $g(f(x))=x$ for all $x\in w(X)$. Similarly, $f(g(y))=y$ for all $y\in w(Y)$. \end{proof} When working with spaces for which homomorphisms between fundamental groups are induced by continuous maps up to base point change, as is the case among all one-dimensional and planar Peano continua \cite{CK, E2010, K}, the following provides additional utility: \begin{corollary} Suppose both $X$ and $Y$ have the discrete monodromy property.\linebreak Let $\phi:\pi_1(X,x_0)\rightarrow \pi_1(Y,y_0)$ be an isomorphism with $\phi=\varphi_\alpha\circ f_\#$ and $\phi^{-1}=\varphi_\beta\circ g_\#$ for some maps $f:X\rightarrow Y$ and $g:Y\rightarrow X$ and some paths $\alpha$ and $\beta$. Then $f$ maps $w(X)$ homeomorphically onto $w(Y)$, with inverse $g|_{w(Y)}$. \end{corollary} \begin{proof} For every $x\in X$, $f_\#:\pi_1(X,x)\rightarrow \pi_1(Y,f(x))$ is injective and for every $y\in Y$, $g_\#:\pi_1(Y,y)\rightarrow \pi_1(X,g(y))$ is injective. Therefore, $f(w(X))\subseteq w(Y)$ and $g(w(Y))\subseteq w(X)$. Let $x\in w(X)$. Choose a path $\gamma$ in $X$ from $x_0$ to $x$. Put $\delta=(g\circ f\circ \gamma)^-\cdot (g\circ \alpha)\cdot \beta \cdot \gamma$. Since $\varphi_{(g\circ \alpha)\cdot \beta}\circ (g\circ f)_\#=\varphi_\beta\circ g_\#\circ \varphi_\alpha \circ f_\#=id:\pi_1(X,x_0)\rightarrow \pi_1(X,x_0)$, we have $\varphi_\delta\circ (g\circ f)_\#=id:\pi_1(X,x)\rightarrow \pi_1(X,x)$. By Lemma~\ref{agree}, $g(f(x))=x$.
Similarly, $f(g(y))=y$ for all $y\in w(Y)$. \end{proof} \end{document}
\begin{document} \begin{frontmatter} \title{Computing Strong Nash Equilibria for Multiplayer Games} \author{No\'{e}mi Gask\'{o}, Rodica Ioana Lung, D. Dumitrescu\footnote{All authors contributed equally.}} \address{Babe\c s-Bolyai University, Cluj-Napoca, Romania} \ead{\{gaskonomi, ddumitr\}@cs.ubbcluj.ro, rodica.lung@econ.ubbcluj.ro} \begin{abstract} A new method for computing strong Nash equilibria in multiplayer games, based on the theoretical framework of generative relations combined with a stochastic search method, is presented. Generative relations provide a means to compare two strategy profiles and to assess their relative quality with respect to an equilibrium type. The stochastic method, called Aumann Crowding Based Differential Evolution (A-CrDE), builds on a Differential Evolution algorithm that has been successfully applied to numerical optimization problems. Numerical experiments illustrate the efficiency of the approach. \end{abstract} \begin{keyword} Non-cooperative games \sep strong Nash (Aumann) equilibrium \sep ge\-ne\-ra\-tive re\-la\-tion \sep differential evolution \end{keyword} \end{frontmatter} \section{Introduction} \newtheorem{definition}{Definition} \newtheorem{remark}{Remark} \newtheorem{example}{Example} \newtheorem{proposition}{Proposition} Strong Nash equilibrium (SNE), or Aumann equilibrium, is one of the most appealing equilibrium concepts in non-cooperative game theory \cite{GTintroducere,aum2,gintis2000game}. Proposed by Aumann \cite{aum} as an alternative to the Nash equilibrium (NE), SNEs take into account the fact that some of the players, although having no unilateral incentive to deviate, may benefit (sometimes substantially) from forming alliances/coalitions with other players. While in a NE no player can improve its payoff by unilateral deviation, in a SNE there is no coalition of players that can improve their payoffs (by collective deviation).
Thus, SNEs offer the advantages of cooperative behavior in a non-cooperative environment. Two major downsides appear when dealing with SNEs: \begin{itemize}[noitemsep,nolistsep] \item SNEs need not exist for all games; however, this paper is concerned only with games that present at least one SNE; \item the computational complexity related to the necessity of considering all possible coalitions among players. \end{itemize} In spite of that, the strong Nash equilibrium is a robust equilibrium concept worth exploring; the importance of SNEs is widely studied for classes of games that allow the characterization of SNEs, such as congestion games \cite{Holzman199785}, network games \cite{Holzman2003193, Matsubayashi2006387}, voting models \cite{Keiding2001117,Moulin1982}, etc. Although the existence and properties of SNEs have been studied \cite{Nessah2014871}, few computational tools are available for computing SNEs. The complexity of computing a SNE is known to be $\mathcal{NP}$-complete \cite{Conitzer2008621, DBLP:conf/atal/GattiRS13}. For pure strategy SNEs there are several algorithms designed for specific classes of games: congestion games \cite{0899.90169, Hayrapetyan:2006:ECC:1132516.1132529, Rozenfeld:2006:SCS:2081411.2081419, Hoefer:2010:CPN:1929237.1929264}, connection games \cite{Epstein200951}, continuous games \cite{Nessah2014871}. An algorithm for detecting strong Nash equilibria in bottleneck congestion games is described in \cite{harks}. Properties, existence conditions, and an analytical algorithm are described in \cite{Nessah2014871}. The aim of this article is to compute SNEs using a heuristic search algorithm. In order to accomplish that, a method is needed to compare two strategy profiles with respect to the characteristics of SNEs and to decide whether one is ``better'' than the other. Such a binary relation has been proposed in \cite{gasko1} and successfully used to approximate SNEs.
This paper studies theoretical aspects related to this relation and furthermore proposes two variants that are less computationally expensive in terms of running time. The rest of the article is organized as follows: Section \ref{sec:theory} presents some basic Game Theory notions (non-cooperative game, Nash equilibrium, strong Nash equilibrium). Section \ref{sec:generativerelations} describes the generative relations necessary for equilibrium detection (the strong Nash and the probabilistic strong Nash non-dominated relations) and Section \ref{sec:evo} the evolutionary approach. In Section \ref{sec:numericalexperiments} numerical experiments are presented. The paper ends with Conclusions. \section{Strong Nash equilibria definitions} \label{sec:theory} A non-cooperative game is described by a system of players, actions and payoffs. Each player has some actions/strategies available and a payoff function that takes into account the actions of all players. Formally, a finite strategic non-cooperative game $\Gamma$ is a system $\Gamma=(N,S,U),$ where: \begin{itemize}[noitemsep,nolistsep] \item $N$ represents the set of players, and $n$ is the number of players; \item for each player $i \in N$, $S_{i}$ is the set of actions available, and $$S=S_{1} \times S_{2} \times ... \times S_{n}$$ is the set of all possible situations of the game. An element $s \in S$, $s=(s_1,s_2,...,s_n)$, is called a strategy (or strategy profile) of the game, with $s_i$ denoting the strategy of player $i$; \item $U=(u_1,...,u_n)$ is the set of payoff functions; for each $i \in N$, $u_{i}:S \rightarrow R$ represents the payoff function of player $i$. \end{itemize} Let $\mathcal P(N)$ be the power set of $N$, containing all possible player coalitions, and let $I$ be a nonempty element of $\mathcal P(N)$. Then $N-I=\{i\in N; i\not\in I\}$ is the set of the rest of the players. If $I=\{i\}$, i.e. it contains only one player, instead of $N-I$ we will write $-i$.
Using these notations, if $s,q\in S$, $(s_I, q_{N-I})$ denotes the strategy in which players from $I$ play their strategies from $s$ and players from $N-I$ their strategies from $q$. If $I=\{i\}$, $(s_i, q_{-i})=(q_1,...,q_{i-1},s_i,q_{i+1},...,q_n)$. The Nash equilibrium \cite{nash} is a strategy profile such that no player can unilaterally change her/his strategy to increase her/his payoff. \begin{definition}[Nash equilibrium] A strategy profile $s^{*} \in S$ is a Nash equilibrium if the inequality $$u_i(s_{i}^{},s_{-i}^{*}) \leq u_i(s^{*})$$ holds $\forall i=1,...,n,$ $\forall s_{i} \in S_i.$ \end{definition} A Pareto efficient (or optimal) strategy is a situation in which no player can improve his/her payoff without decreasing the payoff of someone else. \begin{definition}[Pareto efficiency] A strategy profile $s^* \in S$ is Pareto efficient if there does not exist a strategy $s \in S$ such that $$u_i(s)\geq u_i(s^*), \forall i \in N,$$ with at least one strict inequality. \end{definition} The strong Nash (or Aumann) equilibrium is a strategy for which no coalition of players has a joint deviation that improves the payoff of each member of the coalition. \begin{definition}[Strong Nash equilibrium] The strategy $s^{*}$ is a strong Nash (Aumann) equilibrium if $\forall I \subseteq N, I \neq \emptyset,$ there does not exist any $s_{I}$ such that the inequality $$u_{i}(s_{I}^{},s^{*}_{N-I})> u_i(s^{*})$$ holds $\forall i\in I$. \end{definition} Let us denote by $SNE(\Gamma)$ the set of strong Nash equilibria of the game $\Gamma$ and by $NE(\Gamma)$ the set of Nash equilibria in the game $\Gamma$. The following remarks about SNEs are obvious from the definition.
\begin{remark}\label{rem:observatii} \begin{itemize}[noitemsep] \item If we restrict deviating coalitions to those composed of a single player, the strong Nash equilibrium reduces to the Nash equilibrium, so we can write $$SNE(\Gamma)\subseteq NE(\Gamma).$$ \item The definition of $SNE$ implies that any $SNE$ is Pareto efficient \cite{Nessah2013353}. Moreover, a Nash equilibrium that is also Pareto efficient is a strong Nash equilibrium \cite{Gatti:2013:VCS:2484920.2485034}. \item An $SNE$ need not exist in a given non-cooperative game. \end{itemize} \end{remark} \begin{example}\label{ex:ex1} Let us consider a two-person coordination game with payoffs presented in Table \ref{table:example1}. The game has two NEs in pure form, $(A,A)$ and $(B,B)$, with the corresponding payoffs $(5,5)$ and $(4,4)$, and one NE in mixed form. Only the strategy profile $(A,A)$ is a strong Nash equilibrium. \begin{table}[h] \caption{The payoff matrix for Example \ref{ex:ex1}} \begin{center} \begin{tabular}{ l | c | c | c |} \multicolumn{4}{r}{Player 2} \\ \cline{2-4} & & A & B \\ \cline{2-4} {Player 1} & A & (5,5) & (3,1) \\ \cline{2-4} & B & (2,3) & (4,4) \\ \cline{2-4} \end{tabular} \end{center} \label{table:example1} \end{table} \end{example} \section{Generative relations} \label{sec:generativerelations} \subparagraph{Generative relations} are used to characterize a certain equilibrium type by using the non-dominance concept. A binary relation $R$ is defined on $S$. If $sRq$ holds, with $s,q\in S$, then we say that $s$ dominates $q$ with respect to relation $R$. Conversely, if, for some $s$, $\nexists q$ such that $qRs$, we call $s$ non-dominated with respect to relation $R$. Relation $R$ is called {\em generative} for an equilibrium type if the set of non-dominated strategy profiles with respect to relation $R$ equals the set of equilibria. A generative relation for the Nash equilibrium was introduced in \cite{iccc2008}.
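The non-dominance filtering described above is independent of the particular relation $R$; a minimal sketch (our illustration, with a generic predicate standing in for $R$) is:

```python
def non_dominated(profiles, dominates):
    """Return the strategy profiles s for which no q with q R s exists,
    where the relation R is given by the predicate dominates(q, s)."""
    return [s for s in profiles
            if not any(dominates(q, s) for q in profiles if q != s)]
```

For a generative relation, applying this filter to the whole strategy space yields exactly the corresponding equilibrium set; for instance, with integers and the relation ``strictly greater'', `non_dominated([0, 1, 2, 3], lambda q, s: q > s)` returns `[3]`.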
Other generative relations were defined for the modified strong Nash and coalition proof Nash equilibrium in \cite{MSE_CPN}, and for the strong Berge equilibrium in \cite{SB}. In what follows a generative relation for SNEs is presented. \subsection{Generative relation for strong Nash equilibrium} \label{sec_aumann} In what follows we will assume that the considered game presents at least one strong Nash equilibrium. A relative quality measure of two strategies with respect to the strong Nash equilibrium can be defined as \cite{rel_gen_aum}: $$a(s^*,s)=card[i \in I, \emptyset \neq I \subseteq N, u_{i}(s_{I}^{},s^{*}_{N-I})> u_{i}(s^{*}), s_{i}^{} \neq s^{*}_{i}],$$ where $card[M]$ denotes the cardinality of the multiset $M$ (an element $i$ can appear several times in $M$ and each occurrence is counted in $card[M]$). Thus, $a(s^*,s)$ counts the total number of players that would benefit from collectively switching their strategies from $s^*$ to $s$. \begin{definition} Let $s^*,s \in S$. We say that strategy $s^*$ is better than strategy $s$ with respect to the strong Nash equilibrium (or strong Nash dominates strategy $s$), and we write $s^* \prec_{A} s $, if the following inequality holds: $$a(s^*,s)<a(s,s^*).$$ \end{definition} Thus, strategy $s^*$ is better in the strong Nash sense than a strategy $s$ if there are fewer players that would be able to increase their payoffs by joining a coalition that switches strategies from $s^*$ to $s$ than vice versa. \begin{definition} The strategy profile $s^{*} \in S$ is called strong Nash non-dominated (ANS) if there is no strategy $s \in S, s \neq s^{*},$ such that: $$s\prec_{A} s^{*}.$$ \end{definition} Our assumption is that $ \prec_{A} $ is a generative relation for strong Nash equilibria, i.e. the set of non-dominated strategies with respect to $\prec_{A}$ is equal to the set of \textit{strong Nash equilibria} of the game. In order to prove that, we will use the following property.
\begin{proposition} \label{aum0} A strategy profile $s^{*} \in S$ is a strong Nash equilibrium if and only if the equality $$a(s^{*},s)=0$$ holds for all $s \in S$. \end{proposition} \begin{proof} \textit{(i)} Let $s^{*} \in S$ be a SNE. Suppose there exists a strategy profile $s \in S$ such that $a(s^{*},s)=w$, $w >0$. Therefore there exist a set $I, I \subseteq N, I \neq \emptyset $, and $i \in I$, such that $$u_{i}(s_{I}^{},s^{*}_{N-I})> u_i(s^{*}), s_{I}^{}\neq s^{*}_{I}.$$ This contradicts the definition of the SNE. \textit{(ii)} Let $s^{*} \in S$ be a strategy profile such that $$\forall s \in S, a(s^{*},s)=0.$$ This means that $$u_{i}(s_{I}^{},s^{*}_{N-I}) \leq u_i(s^{*})$$ for all $I\subseteq N,$ $i \in I$, and for any strategy $s \in S.$ Therefore $s^{*}$ is a strong Nash equilibrium. \end{proof} \begin{proposition} \label{eq:aumann1} All SNEs are strong Nash non-dominated solutions, i.e. $$\textrm{SNE}\subseteq ANS.$$ \end{proposition} \begin{proof} Let $s^{*} \in SNE$. Suppose $s^{*}$ is strong Nash dominated. Then there exists a strategy profile $s \in S$ dominating $s^*$: $$s \prec_{A} s^*.$$ From the definition of the relation $\prec_{A}$, we have $$a(s,s^{*})<a(s^{*},s),$$ and from Proposition \ref{aum0}: $$a(s^{*},s)=0.$$ Therefore $$a(s,s^{*})<0.$$ But this is impossible, as $a(s,s^{*})$ denotes the cardinality of a multiset. \end{proof} \begin{proposition} \label{eq:aumann2} All strong Nash non-dominated solutions are strong Nash equilibria, i.e. $$ANS \subseteq SNE.$$ \begin{proof} Let $s^{*}$ be a strong Nash non-dominated strategy profile. Suppose $s^{*}\not\in SNE$. Then there must exist (at least one) non-empty coalition $J$ and a strategy $s \in S$, such that \begin{equation} \label{eq:eqaumann} u_{j}(s_{J}^{},s^{*}_{N-J})> u_{j}(s^{*}), \forall j \in J. \end{equation} We consider the coalition fixed, i.e. $J=\{j_{i_1},j_{i_2},...,j_{i_k}\}.$ Let us denote $q=(s_{J}^{},s^{*}_{N-J}).$ Eq.
(\ref{eq:eqaumann}) can be written as: \begin{equation} \label{eq:eqaumann1} u_{j}(q)> u_{j}(s^{*}), \forall j \in J. \end{equation} We have $$a(s^*,q)=card[i \in I, \emptyset \neq I \subseteq N, u_{i}(q_{I}^{},s^{*}_{N-I})> u_{i}(s^{*}), q_{i}^{} \neq s^{*}_{i}].$$ Taking $I=J$ in this multiset, so that $(q_{J}^{},s^{*}_{N-J})=q$, it follows from (\ref{eq:eqaumann1}) that \begin{equation} \label{eq:eqaumann2} a(s^{*},q) > 0. \end{equation} On the other hand we have: $$a(q,s^*)=card[i \in I, \emptyset \neq I \subseteq N, u_{i}(s^{*}_{I},q_{N-I}^{})> u_{i}(q), s^{*}_{i} \neq q_{i}^{}].$$ But $q_{i}^{} \neq s^{*}_{i}$ holds only for $i \in J.$ Hence $$a(q,s^*)=card[j, u_{j}(s^{*}_{J},q_{N-J}^{})> u_{j}(q)].$$ But $$(s^{*}_{J},q_{N-J}^{})=s^*.$$ Thus $a(q,s^*)$ can be written as follows: $$a(q,s^*)=card[j, u_{j}(s^{*})> u_{j}(q)].$$ From (\ref{eq:eqaumann1}) it follows that \begin{equation} \label{eq:eqaumann3} a(q,s^*)=0. \end{equation} From (\ref{eq:eqaumann2}) and (\ref{eq:eqaumann3}) we have: $$a(q,s^{*})<a(s^{*},q),$$ which means that $q \prec_{A} s^{*}.$ The hypothesis that $s^{*}$ is non-dominated is thus contradicted. \end{proof} \end{proposition} \begin{proposition}\label{prop:totala} Relation $\prec_{A}$ is a generative relation for strong Nash equilibria, i.e. $$ SNE=ANS.$$ \begin{proof} Follows directly from Proposition \ref{eq:aumann1} and Proposition \ref{eq:aumann2}. \end{proof} \end{proposition} Although $\prec_A$ is a generative relation for SNEs, it presents the major disadvantage that up to $n2^{n}$ payoff function evaluations are necessary, with the corresponding computation of all $2^n-1$ possible coalitions. Because this makes $\prec_A$ impractical from a computational point of view when dealing with a large number of players, two alternatives are proposed in what follows.
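The measure $a(\cdot,\cdot)$ and the characterization in Proposition~\ref{aum0} can be illustrated by a brute-force sketch (our illustration, not the authors' implementation), which enumerates all $2^n-1$ coalitions and reproduces Example~\ref{ex:ex1}:

```python
from itertools import chain, combinations

def coalitions(n):
    """All 2^n - 1 nonempty coalitions of players 0..n-1."""
    return chain.from_iterable(combinations(range(n), k) for k in range(1, n + 1))

def a(s_star, s, u):
    """Relative quality measure a(s*, s): multiset count, over all nonempty
    coalitions I, of the members that strictly gain by jointly switching
    from s* to their strategies in s."""
    n = len(s_star)
    total = 0
    for I in coalitions(n):
        dev = tuple(s[i] if i in I else s_star[i] for i in range(n))
        total += sum(1 for i in I
                     if s[i] != s_star[i] and u(dev, i) > u(s_star, i))
    return total

def is_sne(s_star, profiles, u):
    """Proposition 1: s* is a strong Nash equilibrium iff a(s*, s) = 0 for all s."""
    return all(a(s_star, s, u) == 0 for s in profiles)

# The two-person coordination game of Example 1.
table = {('A', 'A'): (5, 5), ('A', 'B'): (3, 1),
         ('B', 'A'): (2, 3), ('B', 'B'): (4, 4)}
u = lambda s, i: table[s][i]
profiles = list(table)
```

Here $a((B,B),(A,A))=2$: both players gain by jointly deviating to $(A,A)$, so $(B,B)$, although a NE, is strong Nash dominated, while $(A,A)$ passes the test of Proposition~\ref{aum0} against every profile.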
\subsection{Probabilistic generative relation for strong Nash equilibrium} \label{sec_aumann_prob} In order to reduce the number of evaluations, a probabilistic model that only takes into account some randomly generated coalitions is proposed. In the case of $n$ players the total number of possible coalitions is $2^n-1$. Consider a percentage $p$ and let $\mathcal A_p\subset \mathcal P(N)$ be a set of nonempty subsets of $N$ (possible coalitions) such that $card \{\mathcal A_p \}=[p (2^n-1)]$, where $[\cdot]$ denotes the integer part. The relative quality measure of two strategy profiles $s$ and $s^*$ with respect to $\mathcal A_p$ is: $$a_p(s^*,s)=card[i \in I, I \in \mathcal A_p, u_{i}(s_{I}^{},s^{*}_{N-I})> u_{i}(s^{*}), s_{i}^{} \neq s^{*}_{i}].$$ \begin{definition} Let $s^*,s \in S$. Strategy $s^*$ is $\mathcal A_p$-better than strategy $s$ with respect to strong Nash equilibrium (or $s^*$ probabilistic strong Nash dominates strategy $s$), and we write $s^* \prec_{A_p} s $, if the following inequality holds: $$a_p(s^*,s)<a_p(s,s^*).$$ \end{definition} Obviously, if $p=100\%$ then $\prec_{A_p}$ is identical to $\prec_A$. \begin{definition}\label{def:pans} For a given $p\neq 0$, strategy $s^{*} \in S$ is called $p$-strong Nash non-dominated ($pANS$) if there does not exist any $s \in S, s \neq s^{*}$ such that: $$s\prec_{A_p} s^{*}$$ for any $\mathcal A_p \subset \mathcal P(N)$ with $card \{\mathcal A_p \}=[p(2^n-1)]\neq 0$. \end{definition} In the following we will show that $ \prec_{A_p} $ is also a generative relation for strong Nash equilibria, i.e. the set of $p$-non-dominated strategies with respect to $\prec_{A_p}$ approximates the set of \textit{strong Nash equilibria} of the game. \begin{proposition} For any $p\neq 0$, $\prec_{A_p}$ is a generative relation for strong Nash equilibria, i.e. $SNE=pANS$. \begin{proof} The first implication is obvious (all SNEs are $p$-strong Nash non-domi\-nated), with the proof analogous to that of Prop.
\ref{eq:aumann1}. For the second one, $pANS \subseteq SNE$, it is enough, based on Prop. \ref{prop:totala}, to show that $pANS\subseteq ANS$. Consider $s\in pANS$ such that $s\not \in ANS$. If $s\not \in ANS$, there exists $q\in S$ such that $q\prec_A s$, i.e. $$a(q,s)<a(s,q).$$ Let $\{I_k\}_{k=1,...,m_q}$ be the family of $m_q$ coalitions such that $$u_i(q_{I_k},s_{N-I_k})>u_i(s)$$ and denote $m=[p(2^n-1)]$, $m\neq 0$. If $m>m_q$ then construct a family $\mathcal A_p$ by including all $I_k$ and any other coalitions. If $m\leq m_q$ then construct $\mathcal A_p$ consisting only of coalitions $I_k$, making sure to include at least one for which the relation $$u_i(s_I, q_{N-I})>u_i(q) \quad \forall i\in I$$ is not satisfied. Such a coalition exists, otherwise $q$ would not strong Nash dominate $s$. Then, for $\mathcal A_p$, we can write: $$a_p(q,s)<a_p(s,q),$$ contradicting the hypothesis that $s\in pANS$. \end{proof} \end{proposition} The probabilistic relation defined above presents the advantage of requiring fewer payoff function evaluations than $\prec_A$. \section{Evolutionary approach} \label{sec:evo} Games, in which players try to simultaneously maximize their own payoffs, are similar to multi-objective optimization problems (MOPs) in many respects. Population based metaheuristics that can deal with MOPs and are capable of finding the Pareto optimal set can easily be adapted, by using an appropriate generative relation, to compute certain game equilibrium types. In this section a new evolutionary algorithm, called the strong Nash Crowding Differential Evolution Algorithm (A-CrDE) and based on the Crowding-based Differential Evolution algorithm for multimodal optimization \cite{thomsen04}, is presented. A-CrDE uses the generative relations defined in Section \ref{sec:generativerelations} to compute the strong Nash equilibria of a game.
\paragraph{A-CrDE population} The individuals from population $P$ represent strategy profiles of the game ($s=(s_1,s_2,...,s_n)$, where $n$ is the number of players) that are randomly initialized in the first generation. \paragraph{Crowding Differential Evolution} CrDE extends the Differential Evolution (DE) algorithm with a crowding scheme \cite{thomsen04}. CrDE is based on conventional DE; the only modification concerns the individual (parent) being replaced. Usually, the parent producing the offspring is substituted, whereas in CrDE the offspring replaces the most similar individual in the population if it is fitter (in A-CrDE, if it strong Nash dominates it). A \emph{DE/rand/1/exp} scheme is used, as described in Algorithm \ref{alg:crde}. \begin{algorithm} \caption{CrDE - the \emph{DE/rand/1/exp} scheme }\label{alg:crde} \textbf{ \textit{Create offspring} $O[l]$ from parent $P [l]$ } \begin{algorithmic}[1] \STATE $O[l] = P [l]$ \STATE randomly select parents $P [i_1 ]$, $P [i_2 ]$, $P [i_3 ]$, where $i_1 \neq i_2 \neq i_3 \neq l$ \STATE $n = U (0, dim)$ \FOR{$j=0$; $j < dim \wedge U (0, 1) < pc$; $j=j+1$} \STATE $O[l][n] = P [i_1 ][n] + F \ast (P [i_2 ][n] - P [i_3 ][n])$ \STATE $n = (n + 1) \bmod{dim}$ \ENDFOR \end{algorithmic} \end{algorithm} While a termination condition is not fulfilled (for example the current number of fitness evaluations performed is below the maximum number of evaluations allowed), for each individual $l$ from the population, an offspring $O[l]$ is created using the scheme presented in Algorithm \ref{alg:crde}, where $U (0, x)$ is a uniformly distributed number between $0$ and $x$, $pc$ denotes the probability of crossover, $F$ is the scaling factor, and $dim$ is the number of problem parameters (problem dimensionality, number of players in this case). \paragraph{The generative relation} Within CrDE the offspring $O[l]$ replaces the most similar parent $P[j]$ if it is fitter.
Otherwise, the parent survives and is passed on to the next generation (iteration of the algorithm). Euclidean distance, or any other similarity measure, can be used. Within A-CrDE an offspring replaces the parent if it is better than it with respect to the strong Nash equilibrium. Three variants of A-CrDE are considered, each one using one of the generative relations presented in Section \ref{sec:generativerelations}. \paragraph{Outline of A-CrDE} The A-CrDE algorithm is outlined in Algorithm \ref{alg:crdemare}. The output of the algorithm consists of the set of strong Nash nondominated solutions in the last iteration, which approximates the set of strong Nash equilibria of the game. \begin{algorithm} \caption{A-CrDE }\label{alg:crdemare} \begin{algorithmic}[1] \STATE Randomly generate initial population $P_0$ of strategies; \WHILE{(not termination condition)} \FOR{each $l=\{1,...,population\: size\}$} \STATE create offspring $O[l]$ from parent $l$; \IF {$O[l]$ \underline{\textit{strong Nash dominates}} the most \underline{similar} parent $j$} \STATE $O[l]$ replaces parent $j$; \ENDIF \ENDFOR \ENDWHILE \end{algorithmic} \end{algorithm} \section{Numerical experiments} \label{sec:numericalexperiments} The detection of strong Nash equilibria using A-CrDE is illustrated for two games that present a SNE. Reported results are averaged over ten runs. Parameter settings used for the numerical experiments are presented in Table \ref{tab:par}. In each run, the algorithm reports the distance to the true SNE of the game. All of the experiments were run on a computer with a 3.07 GHz CPU and 12 GB main memory. Two different variants of A-CrDE that use: \begin{itemize} \item the generative relation proposed in Section \ref{sec_aumann} - A-CrDE; \item the probabilistic generative relation proposed in Section \ref{sec_aumann_prob} - $p$A-CrDE, with $p$ taking values 10\%, 20\%, 30\%, and 40\%; \end{itemize} are studied.
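Before turning to the results, the offspring creation of Algorithm \ref{alg:crde} and the A-CrDE replacement step can be sketched as follows (illustrative Python; the population layout and the dominance callback are assumptions, not code from the paper):

```python
# Sketch of DE/rand/1/exp offspring creation and crowding replacement;
# the population is assumed to be a list of real-valued strategy profiles.
import random

def de_rand_1_exp(pop, l, F=0.5, pc=0.9):
    """DE/rand/1/exp offspring for parent pop[l], mirroring Algorithm 1."""
    dim = len(pop[l])
    i1, i2, i3 = random.sample([i for i in range(len(pop)) if i != l], 3)
    child = list(pop[l])
    n = random.randrange(dim)
    j = 0
    while j < dim and random.random() < pc:  # exponential crossover loop
        child[n] = pop[i1][n] + F * (pop[i2][n] - pop[i3][n])
        n = (n + 1) % dim
        j += 1
    return child

def crowding_replace(pop, child, dominates):
    """Replace the most similar (Euclidean) individual if it is dominated."""
    j = min(range(len(pop)),
            key=lambda k: sum((a - b) ** 2 for a, b in zip(pop[k], child)))
    if dominates(child, pop[j]):
        pop[j] = child
```

Passing the dominance test of $\prec_A$ (or $\prec_{A_p}$) as the \texttt{dominates} callback yields the A-CrDE (respectively $p$A-CrDE) replacement rule.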
\begin{table} \begin{center} \caption{Parameter settings for A-CrDE used for the numerical experiments} \label{tab:par} {\newcommand{\mc}[3]{\multicolumn{#1}{#2}{#3}} \begin{tabular}{lcccc}\hline Parameter & \mc{4}{c}{Value} \\ \hline Pop size & \mc{4}{c}{50}\\ Max no of evaluations (coalitional evaluations) & \mc{4}{c}{$2\times 10^9$}\\ Scaling factor $F$ & \mc{4}{c}{0.5}\\ Crossover rate & \mc{4}{c}{0.9} \\ \hline \end{tabular} } \end{center} \end{table} \subsection{Game 1} The following continuous two-person game \cite{nessah_tian}: $$u_1(s_1,s_2)=3s_1^{2}-s_2^2+4s_2,$$ $$u_2(s_1,s_2)=-s_1^{2}+s_1-2s_2, $$ where $$s_i\in[-1,1],i=1,2,$$ presents one strong Nash equilibrium, the strategy profile $(1,-1)$ \cite{nessah}, located on the Pareto frontier of the game. The algorithm correctly computes the strong Nash equilibrium (with all three proposed generative relations, in all runs). Figure \ref{fig:game2_payoff} presents the strong Nash equilibrium detected by A-CrDE for Game 1. \begin{figure} \caption{Game 1. Strong Nash equilibrium detected by A-CrDE. Game 1 has only one NE that is also a SNE.} \label{fig:game2_payoff} \end{figure} \subsection{Game 2: The minimum effort coordination game} Game $G_2$ is based on a micro-foundation model \cite{bryant}. Another version of this game is presented in \cite{Anderson2001177}. Consider an $n$-person coordination game in which each player $i$ chooses an effort level $s_i$. The common part of the effort is determined by the minimum of the $n$ effort levels. Each player's payoff is equal to the difference between the common payoff and the cost of the player's own effort ($\alpha s_i$): $$u_i(s)=c_i-\alpha s_i,$$ where $$c_i=\min\{s_1,...,s_n\},$$ and $\alpha<1, s_i \in [0,10],i=1,...,n.$ Consider the cost $\alpha=0.5$. The game has an infinite number of Nash equilibria (each $s_i=s$, $i=1,...,n$, $s \in [0,10]$, is a Nash equilibrium of the game), i.e. every common effort level is a Nash equilibrium.
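For the two-player case with $\alpha=0.5$, one can verify by brute force on a discretized effort grid that the maximal common effort profile $(10,10)$ admits no profitable coalitional deviation (an illustrative Python check; the integer grid is an assumption, not from the paper):

```python
# Brute-force check (illustrative): no coalition deviating from (10, 10)
# can strictly improve all of its members in the minimum effort game.
from itertools import product

ALPHA = 0.5
GRID = range(0, 11)  # discretized effort levels

def payoff(s, i):
    return min(s) - ALPHA * s[i]

star = (10, 10)
for I in [(0,), (1,), (0, 1)]:  # all non-empty coalitions for n = 2
    for dev in product(GRID, repeat=len(I)):
        s = list(star)
        for idx, v in zip(I, dev):
            s[idx] = v
        if tuple(s) == star:
            continue
        # a profitable coalitional deviation would improve every member
        assert not all(payoff(s, i) > payoff(star, i) for i in I)
```

Indeed, a deviating member receives at most $\min(s)-\alpha s_i\le(1-\alpha)s_i\le 5=u_i(10,\ldots,10)$, so no coalition can strictly improve all of its members.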
The game has only one strong Nash equilibrium ($s_i=10$, $i=1,...,n$), which is Pareto efficient. The objective space for the two-player version is illustrated in Figure \ref{fig:example2}. \begin{figure} \caption{Payoffs for the minimum effort coordination game for randomly generated strategies} \label{fig:example2} \end{figure} Table \ref{table:sum} presents the distance to the strong Nash equilibrium and standard deviation for 10 different runs for A-CrDE and $p$A-CrDE. For 2, 5 and 10 players the algorithm correctly finds the strong Nash equilibrium. For 15 players a larger number of payoff function evaluations is necessary. \begin{table} \centering \small \caption{Average distance to the strong Nash equilibrium and standard deviation for 10 independent runs} \begin{tabular}{cccccc}\hline {No. of pl.} &{A-CrDE} &\multicolumn{4}{c}{$p$A-CrDE}\\ \hline & & 10\% & 20\% & 30\% & 40\% \\ \hline 2 & $0 \pm 0$ & - & - & - & $0 \pm 0$\\ 5 & $0 \pm 0$ & $0 \pm 0$ &$0 \pm 0$ & $0 \pm 0$ & $0 \pm 0$\\ 10 & $0 \pm 0$ & $0 \pm 0$ & $0 \pm 0$ & $0 \pm 0$&$0 \pm 0$\\ 15 & $5.34 \pm 3.63$ &$2.76 \pm 8.30$ & $6.19 \pm 11.11$ & $ 8.84 \pm 14.52$ & $6.45 \pm 10.74$\\ \end{tabular} \label{table:sum} \end{table} \begin{table} \centering \caption{Average run time (CPU seconds) for strong Nash equilibrium detection with different generative relations (10 independent runs)} \small \begin{tabular}{cccccc}\hline {No. of pl.} &{A-CrDE} &\multicolumn{4}{c}{$p$A-CrDE}\\ \hline & & 10\% & 20\% & 30\% & 40\% \\ \hline 2 & 0.06 & - & - & - & 0.06\\ 5 & 0.1 &0.06 &0.06 &0.07& 0.08\\ 10 & 5.82 & 0.73 & 1.25 & 2.05&2.53\\ 15 & 328.66 &85.11 & 143.6 & 206.43 & 229.05\\ \end{tabular} \label{table:time} \end{table} Table \ref{table:time} presents the average time necessary to correctly compute the strong Nash equilibrium in a single run (with distance 0).
The results confirm that the algorithms are capable of locating the strong Nash equilibria even for 15 players, but they also indicate an exponential increase of the running time with the number of players. To illustrate the evolution of the search, detailed results obtained for five players are depicted in Figures \ref{fig:game_convergence5} and \ref{fig:game_boxplot5}. Figure \ref{fig:game_convergence5} illustrates the convergence to the Aumann equilibrium for A-CrDE and $p$A-CrDE. Figure \ref{fig:game_boxplot5} depicts boxplots for the five cases; in each case the generation number in which the correct strong Nash equilibrium is obtained is reported (for ten runs). The same type of results obtained for ten players are depicted in Figures \ref{fig:game_convergence10} and \ref{fig:game_boxplot10}. The same conclusions can be drawn for both cases: in terms of payoff function evaluations, $p$A-CrDE seems to converge fastest for small values of $p$; A-CrDE has the slowest convergence. Boxplots representing the number of generations necessary to converge indicate that there are no significant differences between methods. A Wilcoxon rank-sum test for assessing the statistical difference between means confirms this assumption. \begin{figure} \caption{Convergence to the strong Nash equilibrium for 5 players. Values are smoothed using a moving average filter (with the MATLAB smooth function)} \label{fig:game_convergence5} \end{figure} \begin{figure} \caption{Convergence to the strong Nash equilibrium for five players (number of generations in 10 runs) } \label{fig:game_boxplot5} \end{figure} \begin{figure} \caption{Convergence to the strong Nash equilibrium for ten players.
The values are smoothed using a moving average filter (with the MATLAB smooth function)} \label{fig:game_convergence10} \end{figure} \begin{figure} \caption{Convergence to the strong Nash equilibrium for ten players (number of generations in 10 runs) } \label{fig:game_boxplot10} \end{figure} \section{Conclusions} This paper presents an efficient approach to the problem of computing strong Nash equilibria by using evolutionary computation and generative relations. A theoretical framework presenting two generative relations and an empirically designed one sets the basis for the evolutionary method. A differential evolution algorithm is adapted to search for SNEs by simply adding a generative relation in the replacement procedure of the method. Numerical examples illustrate the efficiency of the approach. For the minimum effort game, the third variant of the method correctly computes the SNE for instances up to 150 players. \end{document}
\begin{document} \title[Resolutions for monomial curves defined by Arithmetic Sequences] {Minimal graded Free Resolutions for Monomial Curves defined by Arithmetic Sequences} \author{Philippe Gimenez} \address{Department of Algebra, Geometry and Topology, Faculty of Sciences, University of Valladolid, 47005 Valladolid, Spain.} \email{pgimenez@agt.uva.es} \thanks{The first author is partially supported by MTM2010-20279-C02-02, {\it Ministerio de Educaci\'on y Ciencia~-~Espa\~na}.} \author{Indranath Sengupta} \address{Department of Mathematics, Jadavpur University, Kolkata, WB 700 032, India.} \email{sengupta.indranath@gmail.com} \thanks{The second author thanks DST, Government of India for financial support for the project ``Computational Commutative Algebra", reference no. SR/S4/MS: 614/09. } \author{Hema Srinivasan} \address{Mathematics Department, University of Missouri, Columbia, MO 65211, USA.} \email{SrinivasanH@math.missouri.edu} \subjclass[2000]{Primary 13D02; Secondary 13A02, 13C40.} \date{} \begin{abstract} Let ${\bf m}=(m_0,\ldots,m_n)$ be an arithmetic sequence, i.e., a sequence of integers $m_0<\cdots<m_n$ with no common factor that minimally generate the numerical semigroup $\sum_{i=0}^{n}m_i\mathbb N$ and such that $m_i-m_{i-1}=m_{i+1}-m_i$ for all $i\in\{1,\ldots,n-1\}$. The homogeneous coordinate ring $\Gamma_{\bf m}$ of the affine monomial curve parametrically defined by $X_0=t^{m_0},\ldots,X_n=t^{m_n}$ is a graded $R$-module where $R$ is the polynomial ring $k[X_0,\ldots,X_n]$ with the grading obtained by setting $\deg{X_i}:=m_i$. In this paper, we construct an explicit minimal graded free resolution for $\Gamma_{\bf m}$ and show that its Betti numbers depend only on the value of $m_0$ modulo $n$. As a consequence, we prove a conjecture of Herzog and Srinivasan on the eventual periodicity of the Betti numbers of semigroup rings under translation for the monomial curves defined by an arithmetic sequence. 
\end{abstract} \maketitle \section*{Introduction} The study of affine and projective monomial curves has a long history beginning with the classification of space monomial curves in \cite{herzog}. One of the most dramatic results in the subject is the fact that the number of generators for the defining ideal of these curves in the affine space $\mathbb A^{n+1}$ is unbounded, \cite{bresinski}. Much of the study to date has focussed on determining the generators and the first Betti number of the defining ideal for many different classes of monomial curves. In this paper, we study the later Betti numbers as well as the structure of the resolution. Exact generators in the case of curves defined by an arithmetic sequence or an almost arithmetic sequence are known; see \cite{patil}, \cite{malooseng}, \cite{lipatroberts}. In the case of arithmetic sequences, these ideals have another interesting structure as a sum of two determinantal ideals; see \cite{hip}. This provides the main impetus for understanding the resolution of these ideals. In this article, we construct the minimal resolution explicitly for these ideals and compute all the Betti numbers. The main goal of this article is to prove the following conjecture that states that in codimension $n$, there are exactly $n$ distinct patterns for the minimal graded free resolution of a monomial curve defined by an arithmetic sequence: \begin{center} \begin{tabular}{p{11cm}}\it Curves in affine $(n+1)$-space defined by a monomial parametrization $X_0=t^{m_0}$, \ldots, $X_n=t^{m_n}$ where $m_0 < \ldots < m_n$ are positive integers in arithmetic progression have the property that their Betti numbers are determined solely by $m_0$ modulo $n$. \end{tabular} \end{center} The basis for this conjecture came from the observations in \cite{seng}, where an explicit minimal free resolution was constructed for $n=3$, using Gr\"{o}bner basis techniques.
Subsequently, this property was verified to hold for several examples in \cite{eaca}, and it was proved for certain special cases in \cite{hip}, leading us to frame it as a conjecture. We give a complete proof of the conjecture in this article. Let $k$ denote an arbitrary field and $R$ be the polynomial ring $k[X_0,\ldots,X_n]$. Associated to a sequence of positive integers ${\bf m} = (m_0, \ldots, m_n)$, we have the $k$-algebra homomorphism $\varphi:R\rightarrow k[t]$ given by $\varphi(X_i)=t^{m_i}$ for all $i=0,\ldots,n$. The ideal ${\mathcal P}:=\ker{\varphi}\subset R$ is the defining ideal of the monomial curve in $\mathbb A_k^{n+1}$ given by the parametrization $X_0=t^{m_0}$, \ldots, $X_n=t^{m_n}$, which we denote by ${C}_{{\bf m}}$. The $k$-algebra of the numerical semigroup $\Gamma ({\bf m})$ generated by ${\bf m} = (m_0,\ldots,m_n)$ is the semigroup ring $k[\Gamma({\bf m})]:=k[t^{m_0},\ldots,t^{m_n}]\simeq R/{\mathcal P}$ which is one-dimensional. Moreover, ${\mathcal P}$ is a perfect ideal of codimension $n$ and it is well known that it is minimally generated by binomials. Since for any positive integer $t$ the curve ${C}_{t{\bf m}}$ is isomorphic to the curve $C_{{\bf m}}$, as they have the same defining ideal, we may as well assume without loss of generality that the integers $m_0, m_1, \ldots, m_n$ have {\it no common factor}. Further, if the semigroup generated by a proper subset of ${\bf m}$ equals the semigroup $\Gamma ({\bf m})$, then the curve ${C}_{{\bf m}}$ is degenerate and the resolution of its coordinate ring can be studied in a polynomial ring with fewer variables. Hence we can reduce to the case where the integers in ${\bf m}$ {\it minimally generate} the numerical semigroup $\Gamma ({\bf m})$. If, in addition, the integers $m_i$ are in {\it arithmetic progression}, i.e., $m_i-m_{i-1}=m_{i+1}-m_i$ for all $i\in\{1,\ldots,n-1\}$, ${\bf m}$ is said to be an {\bf arithmetic sequence}.
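For instance, for the arithmetic sequence ${\bf m}=(3,4,5)$ (so $n=2$, $d=1$), the binomials $X_1^2-X_0X_2$, $X_0^3-X_1X_2$ and $X_2^2-X_0^2X_1$ all lie in ${\mathcal P}$, since a binomial $X^u-X^v$ vanishes under $X_i\mapsto t^{m_i}$ exactly when the weighted degrees $\langle u,{\bf m}\rangle$ and $\langle v,{\bf m}\rangle$ agree. A quick computational check (illustrative Python, added for this example; it verifies membership in ${\mathcal P}$ only, not that these binomials generate it):

```python
# Pure-Python check: a binomial X^u - X^v vanishes under X_i -> t^{m_i}
# exactly when the weighted degrees <u, m> and <v, m> agree.
m = (3, 4, 5)

def wdeg(expt):
    """Weighted degree of a monomial given by its exponent vector."""
    return sum(e * mi for e, mi in zip(expt, m))

# exponent vectors (X0, X1, X2) of the claimed elements of P:
gens = [((0, 2, 0), (1, 0, 1)),   # X1^2 - X0*X2
        ((3, 0, 0), (0, 1, 1)),   # X0^3 - X1*X2
        ((0, 0, 2), (2, 1, 0))]   # X2^2 - X0^2*X1
assert all(wdeg(u) == wdeg(v) for u, v in gens)
```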
An interesting feature that was revealed in \cite{hip} is that when ${\bf m} = (m_0,\ldots,m_n)$ is an arithmetic sequence, the ideal ${\mathcal P}$ can be written as a sum of two determinantal ideals, ${\mathcal P}=I_2(A) + I_2(B)$, as we shall recall in Section~\ref{defidealsection}. Here, $I_2(A)$ is in fact the defining ideal of the rational normal curve in $\mathbb P^n$. Let us write $m_0$ as $m_0=an+b$ where $a,b$ are positive integers and $b\in[1,n]$. When $b=1$, the sum $I_2(A) + I_2(B)$ is again a determinantal ideal and its resolution is described in \cite[Theorem~2.1 \& Corollary~2.3]{hip}. When $b=n$, it is just one generator away from a determinantal ideal, which is again simple; see \cite[Theorem~2.4 \& Corollary~2.5]{hip}. The case $b=2$ corresponds to the case where $k[\Gamma({\bf m})]$ is Gorenstein. The resolution was described in this case when $n=4$ in \cite[Theorem~2.6]{hip}. This result will be completed in Section~\ref{secgorenstein} where the resolution will be given for an arbitrary $n$. The observation that the ideals $I_2(A)$ and $I_2(B)$ are related by linear quotients (Lemma~\ref{colon}) holds the key to the construction of the resolution of ${\mathcal P}$ in general. We construct a tower of mapping cones, each of which is a cone over an inclusion of a shifted graded Koszul complex into a graded Eagon-Northcott complex. Unfortunately, the above construction of iterated mapping cones does not yield a minimal free resolution for ${\mathcal P}$ and therefore we will have to get rid of the redundancy and make the resolution minimal. The complete graded description of the resolution is given in the main Theorem \ref{mainThm}.
As a consequence, we compute the total Betti numbers $\beta _j$ in Theorem~\ref{ThmIndraConj} as follows: if $m_0\equiv b\mod n$ with $b\in [1,n]$, then $$ \beta_j=j{n\choose j+1}+ \left\{\begin{array}{ll} \displaystyle{(n-b+2-j){n\choose j-1}}&\hbox{if}\ 1\leq j\leq n-b+1,\\ \displaystyle{(j-n+b-1){n\choose j}}&\hbox{if}\ n-b+1<j\leq n. \end{array}\right. $$ These numbers clearly depend only on the remainder of $m_0$ modulo $n$, as conjectured in \cite[Conjecture~1.2]{hip}. As another application of Theorem~\ref{mainThm}, we prove the following conjecture of Herzog and Srinivasan for monomial curves defined by an arithmetic sequence. The strong form of the conjecture says that if ${\bf m}$ is any increasing sequence of nonnegative integers and ${\bf m} +(j)$ denotes the sequence translated by $j$, then the Betti numbers of the semigroup ring $k[\Gamma({\bf m} +(j))]$ are eventually periodic in $j$. We prove in Theorem \ref{ThmPeriodic} that if ${\bf m}$ is an arithmetic sequence, the strong form of the conjecture holds by showing that the Betti numbers are periodic with period $m_n- m_0=nd$, where $d$ is the common difference. We will begin with some preliminaries on the defining ideal of monomial curves associated to arithmetic sequences, followed by some facts about mapping cones that we need in the paper. Section~\ref{secgorenstein} is entirely on Gorenstein monomial curves defined by an arithmetic sequence, where we construct the minimal graded free resolution of the coordinate ring as a direct sum of a resolution and its dual. Section~\ref{secmain} contains the construction of the minimal graded free resolution of the monomial curves defined by an arithmetic sequence in general.
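As an illustrative check of this formula (a computation added for clarity, not taken from the source), let $n=3$ and $b=2$, the Gorenstein case. Since $n-b+1=2$, the first branch applies for $j=1,2$ and the second for $j=3$: $$\beta_1 = 1{3\choose 2}+(3-2+2-1){3\choose 0}=3+2=5,\qquad \beta_2 = 2{3\choose 3}+(3-2+2-2){3\choose 1}=2+3=5,$$ $$\beta_3 = 3{3\choose 4}+(3-3+2-1){3\choose 3}=0+1=1,$$ so, together with $\beta_0=1$, the Betti sequence $(1,5,5,1)$ is symmetric, as one expects from the self-dual resolution of the Gorenstein case.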
The last section has consequences of the main theorem (Theorem~\ref{mainThm}) for some relation between the regularity of the semigroup ring and the Frobenius number of the semigroup, an independent proof of the characterization of Gorenstein curves defined by an arithmetic sequence as well as the proof of the periodicity conjecture of Herzog and Srinivasan for the semigroup rings defined by an arithmetic sequence. \section{Preliminaries}\label{secprelim} Let $({\bf m})= (m_0, \ldots, m_n)$ be an {\it arithmetic sequence}, i.e., a sequence of nonnegative integers such that $m_i=m_0+id$ for some $d\geq 1$ and all $i\in [0,n]$, and such that $m_0, \ldots, m_n$ are relatively prime and minimally generate the numerical semigroup $\displaystyle{\Gamma({\bf m}):=\sum_{0\leq i\leq n}m_i\mathbb N}$. Note that one can always write $m_0$ uniquely as $$m_0=an+b$$ with $a,b$ positive integers and $b\in [1,n]$. The integer $a$ is non-zero because the sequence $({\bf m})= (m_0, \ldots, m_n)$ is minimal. 
\subsection{Defining ideal of monomial curves associated to arithmetic sequences}\label{defidealsection} One knows by \cite[Theorem~2.1]{hip} that the defining ideal ${\mathcal P}$ of the affine monomial curve $C_{\bf m} \subset \mathbb A_k^{n+1}$ is $I_2(A)+I_2(B)$, the sum of two determinantal ideals of maximal minors with $$A= \left(\begin{array}{ccc} X_{0} & \cdots & X_{n-1}\\ X_{1} & \cdots & X_{n} \end{array}\right), \quad B= \left(\begin{array}{cccc} X_{n}^{a} & X_{0} & \cdots & X_{n-b}\\ X_{0}^{a+d} & X_{b} & \cdots & X_{n} \end{array}\right).$$ It is well-known that the ideal $I_2(A)$ is the defining ideal of the rational normal curve in $\mathbb P_k^n$ of degree $n$; see, e.g., \cite[Proposition~6.1]{eis2}. The fact that $I_2(A)$ is contained in ${\mathcal P}$ says that the affine monomial curve $C_{\bf m}$ lies on the affine cone over the rational normal curve. Indeed, the following easy lemma states that arithmetic sequences are precisely the ones whose associated monomial curve lies on this cone. They are also the only sequences that make the ideal $I_2(A)$ homogeneous with respect to the gradation obtained by setting $\deg{X_i}=m_i$ for all $i\in [0,n]$. \begin{lemma}\label{equivGrad} Let ${\mathcal P}\subset k[X_0,\ldots,X_n]$ be the defining ideal of the non-degenerate monomial curve $C_{\bf m} \subset \mathbb A_k^{n+1}$ associated to a strictly increasing sequence of integers ${\bf m}=(m_0,\ldots,m_n)$. The following are equivalent: \begin{enumerate} \item $\exists\, d\in\mathbb Z,$ such that $ m_i=m_0+id,\ \forall i\in [0,n]$; \item $I_2(A)\subset {\mathcal P}$; \item $I_2(A)$ is homogeneous w.r.t.
the weighted gradation on $R$ given by $\deg{X_i}=m_i$ for all $i\in[0,n]$. \end{enumerate} \end{lemma} \begin{proof} As we already recalled, if $m_i=m_0+id$ for some $d\geq 1$ then $I_2(A)\subset {\mathcal P}$ and $I_2(A)$ is homogeneous w.r.t. the weighted gradation, so (1) $\Rightarrow$ (2) and (1) $\Rightarrow$ (3). Conversely, assuming that either (2) or (3) holds, one has that the integers in the sequence $({\bf m})$ satisfy $m_i+m_{j+1}=m_{i+1}+m_j$ for all $i,j$ such that $0\leq i<j\leq n-1$. In particular, for all $j\in [1,n-1]$, one has that $m_0+m_{j+1}=m_{1}+m_j$, i.e., $m_{j+1}=m_j+d$ if one sets $d:=m_1-m_0\geq 1$. \end{proof} In this paper, homogeneous and graded will mean homogeneous and graded with respect to the weighted gradation on $R$ given by $\deg{X_i}=m_i$ for all $i\in[0,n]$. By Lemma~\ref{equivGrad}, $I_2(A)$ is homogeneous, and ${\mathcal P}$ is also homogeneous since ${\mathcal P}:=\ker{\varphi}$ and the map $\varphi:R\rightarrow k[t]$ given by $\varphi(X_i)=t^{m_i}$ is graded of degree 0. \subsection{The weighted graded version of the Eagon-Northcott complex}\label{secENgraded} The minimal resolution of $R/I_2(A)$ is given by the Eagon-Northcott complex of the matrix $A$ because the height of $I_2(A)$ is $n$. Let $F= \oplus _{i=1}^n R e_i$ be a free $R$-module of rank $n$ with basis $e_1, \ldots, e_n$ and $G = Rg_1\oplus Rg_2$ be a free $R$-module of rank 2. The $2\times n$ matrix $A$ represents a map $\varphi: F\longrightarrow G^*$. The Eagon-Northcott complex ${\bf E}$ of the matrix $A$ is $$ {\bf E}:\ 0\longrightarrow E_{n-1} \stackrel{d_{n-1}}{\longrightarrow} E_{n-2} \stackrel{d_{n-2}}{\longrightarrow} \cdots\longrightarrow E_1\stackrel{d_{1}}{\longrightarrow} E_0\,, $$ with $E_0=R$ and, for all $i\in [1,n-1]$, $E_i :=\wedge ^{i+1}F\otimes D_{i-1}G$, where $d_i$ is given by the diagonalization of $\wedge F$ and multiplication of $DG$.
\begin{remark}\label{dualENrmk} Note that $R/I_2(A)$ is Cohen-Macaulay and the complex ${\bf E}^*$ is exact. Thus, we get $$ {\bf E}^*:\ 0\longrightarrow R \stackrel {d_1^*}{\longrightarrow} E_1^* \longrightarrow \cdots \stackrel { d_{n-2}^* }{\longrightarrow}\wedge ^{n-1}F^* \otimes D_{n-3}G^*\stackrel { d_{n-1}^* }{\longrightarrow} \wedge^nF^*\otimes D_{n-2}G^*. $$ One has that $d_1^*:R\to \wedge ^2 F^*$ is the map $\wedge ^2 (\varphi^*)$, and for $i>1$, the map $d_i^*: \wedge ^iF^*\otimes D_{i-2}G^* \longrightarrow \wedge ^{i+1}F^*\otimes D_{i-1}G^*$ is given by $$d_i^*(x\otimes y) = \sum _{t=1}^n x\wedge e_t^* \otimes \varphi(e_t)y.$$ After identifying $\wedge ^nF^*$ with $R$ and $\wedge ^{n-1}F^*$ with $F$, we see that $$d_{n-1}^*(e_i \otimes g_1^{*(r)}g_2^{*(s)})=(-1)^{n-i}[ X_{i-1}g_1^{*(r+1)}g_2^{*(s)}+X_{i}g_1^{*(r)}g_2^{*(s+1)}]. $$ This will be useful in Section~\ref{secgorenstein}. \end{remark} The ideal $I_2(A)$ is homogeneous with respect to the usual grading on $R$ and the Eagon-Northcott complex is indeed a minimal graded free resolution of $R/I_2(A)$. This minimal graded resolution is 2-linear and it is as follows: $$ {\bf E}:\ 0\rightarrow R^{n-1}(-n)\stackrel{d_{n-1}}{\longrightarrow} R^{(n-2){n\choose n-1}}(-n+1) \stackrel{d_{n-2}}{\longrightarrow} \cdots \stackrel{d_{s}}{\longrightarrow} R^{(s-1){n\choose s}}(-s) \stackrel{d_{s-1}}{\longrightarrow} \cdots $$ \begin{flushright} $\displaystyle{\cdots \stackrel{d_{2}}{\longrightarrow} R^{n\choose 2}(-2) \stackrel{d_{1}}{\longrightarrow} R \rightarrow R/I_2(A)\rightarrow 0. }$ \end{flushright} But $I_2(A)$ is also homogeneous with respect to our weighted gradation on $R$ as observed in Lemma~\ref{equivGrad}, and the Eagon-Northcott complex is also a minimal graded free resolution of $R/I_2(A)$ with respect to this weighted gradation. Of course, syzygies are no longer concentrated in one single degree at each step of the resolution as before.
As observed in \cite{hip}, the successive graded free modules in this resolution are $E_0=R$ and, for all $s\in [2,n]$, \begin{eqnarray*} E_{s-1}&=&\bigoplus_{1\le r_1<\ldots < r_s \le n } \left(\bigoplus_{k=1}^{s-1} R(- sm_0 +kd -\sum r_i d)\right)\\ &=& \bigoplus_{k=1}^{s-1} \left(\bigoplus_{1\le r_1<\ldots < r_s \le n} R(- (sm_0 -kd +\sum r_i d))\right)\\ &=& \bigoplus_{k=1}^{s-1} \left(\bigoplus_{0\le r_1<\ldots < r_s \le n-1} R(- (sm_0 +(s-k)d +\sum r_i d))\right)\\ &=& \bigoplus_{k=1}^{s-1} \left(\bigoplus_{0\le r_1<\ldots < r_s \le n-1} R(- (sm_0 +kd +\sum r_i d))\right)\,. \end{eqnarray*} \begin{notation} Given two integers $m\geq t\geq 1$, it will be useful to denote by $\sigma(m,t)$ the collection (with repetitions) of all possible sums of $t$ distinct nonnegative integers which are all strictly smaller than $m$, i.e., $$ \sigma (m,t) = \{ r_1+\cdots+r_t \,:\, 0\le r_1<r_2<\cdots <r_t \le m-1 \}\,. $$ For instance, $\sigma (4,2) = \{1,2,3,3,4,5\}$. Note that for all $t$ and $m$ with $1\leq t\leq m$, $\#\sigma (m,t)={m\choose t}$. \end{notation} The weighted graded free resolution of $R/I_2(A)$ given by the Eagon-Northcott complex can now be written as follows: {\small $$ 0\rightarrow \bigoplus_{k=1}^{n-1}R(- (nm_0+kd + {n\choose 2} d )) \longrightarrow\cdots\longrightarrow \bigoplus_{k=1}^{s-1} \left(\bigoplus_{r\in\sigma(n,s)} R(- (sm_0 +kd+rd))\right)\longrightarrow\cdots $$ \begin{flushright} $ \cdots\longrightarrow\displaystyle{ \bigoplus_{r\in\sigma(n,2)}R(-(2m_0+d+r d)) \longrightarrow R \longrightarrow R/I_2(A)\rightarrow 0\,. }$ \end{flushright} } \subsection{Mapping cone}\label{subsecMappingCone} In this section, we will establish our notation for mapping cones and complexes, and recall some facts on mapping cones that we will need. For any complex ${\bf F} = \oplus F_i$, denote by $(\delta_{\bf F})_i:F_i\rightarrow F_{i-1}$ the boundary maps of ${\bf F}$, and for any $t\geq 1$, let ${\bf F^t}$ be the complex whose $i$th term is $(F^t)_i := F_{i-t}$.
Now if $\mu: {\bf F}\rightarrow {\bf G}$ is a map of complexes, the {\it mapping cone} (or {\it cone}) over $\mu$ is the complex ${\bf G}\oplus {\bf F^1}$ and it is denoted by $\cone{\mu}$. The boundary maps of this complex are $$ \begin{array}{rccl} \left( \begin{array}{cc} (\delta_{\bf G})_i&(-1)^{i}\mu_{i-1}\\ 0&(\delta_{\bf F})_{i-1} \end{array} \right) :& G_i\oplus F_{i-1}&\rightarrow &G_{i-1}\oplus F_{i-2}\,, \end{array} $$ \noindent i.e., $(\delta _{\cone {\mu}})_i(g_i,f_{i-1})=((\delta_{\bf G})_i(g_i)+(-1)^i \mu _{i-1}(f_{i-1}),(\delta_{\bf F})_{i-1}(f_{i-1}))$. Let's recall a few well-known facts on mapping cones that we will need in the sequel. If ${\bf F}$ and ${\bf G}$ are acyclic, i.e., $H_i({\bf F})=H_i({\bf G})= 0$ for $i\ge 1$, then $\cone {\mu}$ is exact up to degree 1, i.e., $H_i(\cone {\mu}) = 0$ for all $i\ge 2$. If moreover the induced map $H_0({\bf F})\rightarrow H_0({\bf G})$ is injective, then $\cone {\mu}$ is acyclic. A situation of special interest is when $\bf F$ is a resolution of $R/J$ and $\bf G$ a resolution of $R/I$ for two ideals $I$ and $J$ in $R$. Then, given a map of complexes $\mu: {\bf F}\rightarrow {\bf G}$, $\cone {\mu}$ resolves $R/I+\mu_0(R)$ provided $\mu_0 (J)$ is contained in $I$. In particular, consider the following situation: let $I$ be an ideal in $R$ and take an element $z\in R$. Then, $$ 0\rightarrow R/(I:z) \stackrel {\mu} \longrightarrow R/I \longrightarrow R/I+(z)\rightarrow 0 $$ is exact, where $\mu$ is the map given by multiplication by $z$. Now if ${\bf F}$ resolves $R/(I:z)$, ${\bf G}$ resolves $R/I$, and $\mu :{\bf F} \rightarrow {\bf G}$ is a map of complexes induced by multiplication by $z$, then $\cone{\mu}$ resolves $R/I+(z)$. Let's consider now the graded version of the previous statements. Assume that $I$ and $J$ are homogeneous ideals in $R$, and consider $\bf F$ a graded resolution of $R/J$, $\bf G$ a graded resolution of $R/I$, and $\mu: {\bf F}\rightarrow {\bf G}$ a graded map of complexes with $\mu_0 (J)\subset I$.
Then, the complex $\cone {\mu}$ is a graded free resolution of $R/I+\mu_0(R)$. In the particular case where $J=(I:z)$ for some homogeneous element $z\in R$ of degree $\delta$, the degree zero map $\mu: R(-\delta)\rightarrow R$ given by multiplication by $z$ induces a graded map of complexes $\mu: {\bf F}(-\delta)\rightarrow {\bf G}$ and $\cone{\mu}$ is a graded free resolution of $R/I+(z)$. Recall that a resolution $\bf F$ of $R/I$ is {\it minimal} if the ranks of the $F_i$'s are minimal or, equivalently, if $\delta _{\bf F}({\bf F})\subset {\bf m}{\bf F}$ where ${\bf m} = (X_0, \ldots, X_n)$ is the irrelevant maximal ideal. Thus, in a minimal graded resolution, the degree zero components of the boundary maps are identically zero. If $\bf F$ and $\bf G$ are minimal graded free resolutions of $R/(I:z)$ and $R/I$ respectively, then the only possible obstructions for $\cone{\mu}$ to be minimal are the degree zero components in $\mu$. In other words, if $\bf F = \oplus F_i $, $\bf G = \oplus G_i$ with $F_i = \bigoplus _{j} R(-d_{ij})$ and $G_i = \bigoplus _j R(-c_{ij})$, and $\mu: {\bf F}\rightarrow {\bf G}$ is the graded map of complexes induced by multiplication by $z$, $R(-\delta)\rightarrow R$, then $\cone {\mu}$ is a minimal graded free resolution of $R/I+(z)$ provided, whenever $d_{ij} = c_{ij'}$ for some $i$, $j$ and $j'$, the projection of the restriction map $\mu _i|_{R(-d_{ij})}$ onto $R(-c_{ij'})$ is identically zero. If it is not zero, we can split off or cancel the two corresponding summands $R(- d_{ij})$ from $F_i$ and $G_i$ in the mapping cone construction to achieve minimality. \begin{definition} The {\it minimized mapping cone} of the map of complexes $\mu:{\bf F}\rightarrow {\bf G}$, denoted by $\mincone{\mu}$, is the complex obtained from $\cone{\mu}$ after splitting off all possible summands.
\end{definition} If $z\in R$ is a homogeneous element of degree $\delta$, $\bf F$ and $\bf G$ are minimal graded free resolutions of $R/(I:z)$ and $R/I$ respectively, and $\mu: {\bf F}(-\delta)\rightarrow {\bf G}$ is the graded map of complexes induced by multiplication by $z$, then $\mincone{\mu}$ is a minimal graded free resolution of $R/I+(z)$. We finally mention the following result that we will need later on. \begin {proposition}\label{sumcomplex} Let $${\bf F} : 0 \rightarrow F_s\stackrel{f_s}{\longrightarrow} F_{s-1} \stackrel{f_{s-1}}{\longrightarrow} \cdots \longrightarrow F_t\stackrel{f_t}{\longrightarrow} \cdots \longrightarrow F_1 \stackrel{f_1}{\longrightarrow} F_0$$ and $${\bf G }: 0 \rightarrow G_s\stackrel{g_s}{\longrightarrow} G_{s-1} \stackrel{g_{s-1}}{\longrightarrow} \cdots \longrightarrow G_t\stackrel{g_t}{\longrightarrow} \cdots \longrightarrow G_1 \stackrel{g_1}{\longrightarrow} G_0$$ be two exact complexes of free modules and $\varphi = \oplus \varphi_t : \bf F \to \bf G$ be a map of complexes. Suppose that the dual $\bf F^*$ is exact. If $\varphi_s = 0$, then there is a homotopy for $\varphi$ given by maps $\psi_i:F_{i-1}\to G_{i}$, $1\le i\le s$, with $\psi_s = 0$ and $\varphi_i = g_{i+1}\circ \psi_{i+1} +(-1)^{s-i}\psi_i \circ f_i $. In particular, $\varphi_0(F_0) \subset g_1(G_1)$. \end{proposition} \begin{proof} Let $\psi_s:F_{s-1}\to G_s$ be the zero map. Since $\varphi_s= 0$, we get $\varphi_{s-1}\circ f_s=0$ and hence $f_s^*\circ \varphi _{s-1}^*=0$. By the exactness of the dual of ${\bf F}$, there exists a map $\psi_{s-1}^*:G_{s-1}^*\to F_{s-2}^*$ such that $f_{s-1}^*\circ \psi_{s-1}^*= \varphi_{s-1}^*$. So we get $\varphi_{s-1} = \psi _{s-1}\circ f_{s-1}-g_s\circ \psi_s$. Suppose that we have constructed $\psi_t$ such that $\varphi_i = g_{i+1}\circ\psi_{i+1} +(-1)^{s-i}\psi_i \circ f_i $ for all $i \ge t$. Now, since $\varphi$ is a map of complexes, we have $ \varphi_{t-1}\circ f_t= g_{t}\circ \varphi_{t}$.
Substituting, we get $ \varphi_{t-1}\circ f_t = g_{t}\circ (g_{t+1}\circ \psi_{t+1}+(-1)^{s-t} \psi_{t}\circ f_{t} )= (-1)^{s-t}g_t \circ \psi_t \circ f_t$. Thus $(\varphi_{t-1}+(-1)^{s-t+1}g_t\circ \psi_t)\circ f_t=0$. That is, $f_t^*\circ (\varphi_{t-1}^*+(-1)^{s-t+1} \psi_t^*\circ g_t^* )=0$. By the exactness of the dual ${\bf F^*}$, we get a map $\psi_{t-1}^*:G_{t-1}^*\to F_{t-2}^*$ such that $\varphi_{t-1}^*+(-1)^{s-t+1} \psi_t^*\circ g_t^* = f_{t-1}^*\circ \psi_{t-1}^*$ and hence $\varphi_{t-1}= g_t \circ \psi_t +(-1)^{s-t+1} \psi_{t-1} \circ f_{t-1} $. This proves the existence of $\psi_i$ for all $i$. Looking at $\psi_1:F_0\to G_1$, we get $\varphi_0(F_0) = g_1 \circ \psi_1(F_0) \subset g_1(G_1)$. \end{proof} \section{Gorenstein ideals}\label{secgorenstein} In this section we deal separately with the case when $k[\Gamma({\bf m})]$ is Gorenstein, and see that an explicit construction of the minimal free resolution is obtained from one single mapping cone construction, using the fact observed in Section~\ref{defidealsection} that ${\mathcal P}=I_2(A)+I_2(B)$. Note that the Gorenstein case will also fit into the general construction of iterated mapping cones given in Section~\ref{secmain}, as we shall see in Remark~\ref{rmkGor}. The proof we provide in this section completes the argument presented in \cite[Theorem~2.6]{hip} for the case $n=4$. The explicit computation of the Cohen-Macaulay type of $k[\Gamma({\bf m})]$ in \cite[Corollary~6.2]{patseng}, under the more general assumption of almost arithmetic sequences, implies that if ${\bf m}$ is an arithmetic sequence then $k[\Gamma({\bf m})]$ is Gorenstein if and only if $b=2$. So let's assume that $m_0 \equiv 2 \mod n$ and write $m_0 = an+2$.
Then, $$B= \left(\begin{array}{cccc} X_{n}^{a} & X_0 & \cdots & X_{n-2}\\[1.5mm] X_{0}^{a+d} & X_2 & \cdots & X_{n} \end{array}\right).$$ The ideals $I_2(A)$ and $I_2(B)$ are both of height $n-1$ and the interesting fact is that the ideal ${\mathcal P} = I_2(A)+I_2(B)$ is Gorenstein of height $n$. We will construct a resolution of $R/{\mathcal P}$ as follows, starting with a preliminary lemma. Consider the map \begin{equation}\label{alpha} \alpha: D_{n-2}G^* \cong R^{n-1}\longrightarrow R\,,\quad g_1^{*(i)}g_2^{*(n-2-i)}\mapsto ( -1)^i [ X_n^{a}X_{i+2}-X_0^{a+d}X_i]\,. \end{equation} \begin{lemma}\label{Alpha1} Let ${\bf E}$ be the Eagon-Northcott complex which is a minimal resolution of $R/I_2(A)$ and ${\bf E}^*$ be as in Section~\ref{secENgraded}. Then the map $\alpha:\,E_{n-1}^* \longrightarrow R$ defined in {\rm(}\ref{alpha}{\rm)} induces a map of complexes $\alpha:\ {\bf E}^*\to \bf E$. \end{lemma} \begin{proof} Consider the basis element $e_t\otimes g_1^{*(i)}g_2^{*(n-3-i)}$ in $\wedge ^{n-1}F^*\otimes D_{n-3}G^*$, where $e_t$ denotes $(-1)^{n-t} e_1^*\wedge \cdots \wedge e_{t-1}^*\wedge e_{t+1}^*\wedge \cdots \wedge e_{n}^*$. Then, up to sign, $${\small \begin{array}{rcl} \alpha (d_{n-1}^* (e_t \otimes g_1^{*(i)}g_2^{*(n-3-i)})) & = & ( -1)^{i+1} X_{t-1}[ X_n^{a}X_{i+3}-X_0^{a+d}X_{i+1}]+(-1)^iX_t [ X_n^{a}X_{i+2}-X_0^{a+d}X_i] \\ & = & (-1)^{i+1} (X_n^a [X_{t-1}X_{i+3}-X_{t }X_{i+2}] -X_0^{a+d}[X_{t-1}X_{i+1}-X_tX_{i}]) \\ & \in & I_2(A)\,.\\ \end{array}}$$ So, the composition $\wedge ^{n-1}F^*\otimes D_{n-3}G^* \stackrel{d_{n-1}^*}{\longrightarrow} D_{n-2}G^* \stackrel{\alpha}{\longrightarrow}R \longrightarrow R/I_2(A)$ is zero. Since ${\bf E}$ is exact, we can lift the map $\alpha$ to $\alpha: {\bf E}^*\to {\bf E}$ as a map of complexes.
\end{proof} \begin{theorem}\label{minresgor} Let ${\bf m}=(m_0, \ldots , m_n)$ be an arithmetic sequence with $m_0 \equiv 2 \mod n$. If ${\mathcal P}\subset R$ is the defining ideal of the monomial curve $C_{\bf m}\subset\mathbb A_k^{n+1}$ associated to the sequence ${\bf m}$, then $R/{\mathcal P}$ is Gorenstein of codimension $n$ with minimal resolution given by ${\bf E} \oplus ({\bf E}^*)^{\bf 1}$ where ${\bf E}$ is the Eagon-Northcott resolution of $R/I_2(A)$. \end{theorem} \begin{proof} As we already recalled in Section~\ref{secprelim}, ${\mathcal P} =I_2(A)+I_2(B)$, and ${\bf E}\to R / I_2(A)\to 0$ is a minimal resolution of $R/I_2(A)$. Let ${\bf E}^*:\ 0\to R \to E_1^* \to \cdots \to E_{n-2}^*\to E_{n-1}^*$ be its dual. If $\alpha: R^{n-1} = E_{n-1}^* \to R$ is the map defined in (\ref{alpha}), one has by Lemma~\ref{Alpha1} that $\alpha$ induces a map of complexes ${\bf \alpha} :\bf E^* \longrightarrow \bf E$. Hence the mapping cone ${\bf E} \oplus ({\bf E}^*)^{\bf 1}$ is acyclic. Note that the image of $\alpha$ is the ideal generated by the $n-1$ principal $2\times 2$ minors of $B$ and $H_0({\bf E} \oplus ({\bf E}^*)^{\bf 1}) = R/(\hbox{Im} (d_1)+\hbox{Im}(\alpha)) = R/(I_2(A)+I_2(B))$ because the rest of the minors of $B$ are already in the ideal $I_2(A)$. So, ${\bf E}\oplus ({\bf E}^*)^{\bf 1}$ resolves $R/{\mathcal P}$ and it is minimal because $\alpha $ has positive degree. \end{proof} Since for $i\in[1,n-1]$, the rank of the free module $E_i$ is $i{n\choose {i+1}}$, we immediately get the Betti numbers of $R/{\mathcal P}$: \begin{corollary}\label{bettiGor} Let ${\bf m} = (m_0, \ldots , m_n)$ be an arithmetic sequence with $m_0 \equiv 2 \mod n$. Then the Betti numbers of $R/{\mathcal P}$, where ${\mathcal P}\subset k[X_0,\ldots,X_n]$ is the defining ideal of the monomial curve $C_{\bf m}$ associated to the sequence ${\bf m}$, are $\beta_0=1$, $\beta_i = i{n\choose {i+1}}+(n-i){n\choose {i-1}}$ for $i\in [1,n-1]$, and $\beta_n=1$.
\end{corollary} \section{Explicit construction of a minimal graded resolution}\label{secmain} Let's go back to the general situation: $m_0=an+b$ with $a,b$ positive integers and $b\in [1,n]$. In this section, we will not really use that ${\mathcal P} = I_2(A)+I_2(B)$ as in the Gorenstein case, but essentially that ${\mathcal P}$ is minimally generated by the $2\times 2$ minors of $A$ and the principal minors of $B$, {\it i.e.}, $\Delta_i:=\Delta_{1,i-b+2}(B)=X_n^aX_{i}-X_0^{a+d}X_{i-b}$ for $i\in [b,n]$. In other words, if one sets $$ \forall\,i\in [b,n],\ I_i:=I_2(A)+ (\Delta _b, \ldots , \Delta _{i})\,, $$ then ${\mathcal P}=I_n$. For all $i\in [b,n]$, a graded free resolution of $R/I_i$ will be obtained by a series of iterated mapping cones. Set $$ \delta _i := \deg \Delta _{i} = m_0(a+d+1)+(i-b)d,\ \forall i\in [b,n] $$ (of course the degrees are with respect to the weighted grading). The following lemma is key to the construction of the minimal resolutions. \begin{lemma}\label {colon} \begin{enumerate} \item\label{firstcolon} $I_2(A):\Delta _ b= I_2(A)$. \item\label{coloninclusion} $\forall\, i\in [b,n-1]$, $(X_0, X_1, \ldots , X_{n-1})\subseteq I_i: \Delta _{i+1}$. \end{enumerate} \end{lemma} \begin{proof} (\ref{firstcolon}) holds because $I_2(A)$ is prime and $\Delta_b\notin I_2(A)$. Moreover, for any $i\in [b,n-1]$ and $j\in [0,n-1]$, one has that $$ \begin{array}{rclcl} X_j\Delta_{i+1} &\equiv& X_n^aX_{i}X_{j+1}-X_0^{a+d}X_{i-b+1}X_j&\mod& (X_jX_{i+1}-X_{j+1}X_i)\\ &\equiv& X_0^{a+d}X_{i-b}X_{j+1}-X_0^{a+d}X_{i-b+1}X_j&\mod&\Delta_i \\ &=&X_0^{a+d}(X_{i-b}X_{j+1}-X_{i-b+1}X_j)&& \end{array} $$ which implies that $X_j\Delta_{i+1}\in I_2(A)+ (\Delta _{i})$ because $X_kX_{l+1}-X_{k+1}X_l\in I_2(A)$ for all $k,l\in [0,n-1]$, and we are done.
\end{proof} \begin{remark}\label{rmkcolon} Observe that Lemma~\ref{colon}~(\ref{coloninclusion}) implies that, for all $i\in [b,n-1]$, either $I_i:\Delta _{i+1}=(X_0, X_1, \ldots , X_{n-1})$ or $I_i:\Delta _{i+1}=(X_0, X_1, \ldots , X_{n-1}, X_n^\ell)$ for some $\ell\ge 1$. Indeed, we will see in (2) of Inductive Step~\ref{induction} that the latter never occurs. \end{remark} We are now ready for our iterated mapping cone construction. Recall from Section~\ref{secENgraded} that the minimal graded free resolution ${\bf E }= \oplus _{i=0}^{n-1} E_i$ of $R/I_2(A)$ given by the Eagon-Northcott complex of the matrix $A$ is $$ 0\rightarrow E_{n-1}\longrightarrow\cdots\longrightarrow E_1\longrightarrow E_0\longrightarrow R/I_2(A)\rightarrow 0 $$ with $E_0=R$ and $\displaystyle{E_{s-1}= \bigoplus_{k=1}^{s-1} \left(\bigoplus_{r\in\sigma(n,s) } R(- (sm_0 +kd +r d))\right)}$ for all $s\in [2,n]$. Let ${\bf C}_b = {\bf E} \oplus {\bf E^1}(-\delta _b)$ denote the mapping cone of the map ${\bf \Delta} _b: {\bf E}(-\delta_b) \rightarrow {\bf E}$ induced by multiplication by $\Delta _b$, where $\delta_b = \deg\Delta_b$. By Lemma~\ref{colon}~(\ref{firstcolon}), together with the fact that all the individual maps in ${\bf \Delta} _b$ are multiplication by $\Delta _b$ and hence not zero (in fact injective), we get that ${\bf C}_b$ is the minimal resolution of $R/I_ b$. \begin{notation}\label{notationL} Set $ L(s,k):=\bigoplus_{r\in\sigma(n,s)}R(-[m_0(a+d+s+1)+kd+rd]) $ for all $s\in [1,n]$ and $k\ge 1$.
Then, for all $s\in [2,n]$, $({\bf C}_{b})_s=E_s\oplus\left(\bigoplus_{k=1}^{s-1}L(s,k)\right)$, and the free modules in ${\bf C}_b$ are \begin{equation}\label{b} \begin{array}{rcl} ({\bf C}_{b})_0&=&E_0=R,\\ ({\bf C}_{b})_1&=&E_1\oplus E_{0}(-\delta_b)=E_1\oplus R(-(m_0(a+d+1))),\\ ({\bf C}_{b})_s&=& E_s\oplus E_{s-1}(-\delta_b)\\ &=&\displaystyle{ E_s\oplus\left(\bigoplus_{k=1}^{s-1}L(s,k)\right),\ \forall s\in[2,n-1], } \\ ({\bf C}_{b})_n&=& \displaystyle{\bigoplus _{k=1}^ {n-1} R(-[m_0(a+d+n+1)+kd+{n\choose 2}d])}\,. \end{array} \end{equation} \end{notation} \begin{remark} If $b = n$, ${\bf C}_b$ is the minimal resolution of $R/{\mathcal P}$. This is indeed the resolution obtained in \cite[Theorem~3.4 and Corollary~3.5]{hip} in the case $b=n$. \end{remark} Let ${\bf K }= \oplus _{i=0}^n K_i$ be the Koszul resolution of $R/(X_0, \ldots , X_{n-1})$, i.e., $$ 0\rightarrow K_{n}\longrightarrow\cdots\longrightarrow K_1\longrightarrow K_0\longrightarrow R/(X_0,\ldots , X_{n-1})\rightarrow 0 $$ with $K_0=R$, $ K_1=\bigoplus_{k=0}^{n-1}R(-(m_0+kd)) $, and $K_{s}= \bigoplus_{r\in\sigma(n,s) }R(-(sm_0+r d)) $ for all $s\in [2,n]$. Note that for all $s\in [2,n]$ and $i\in [b,n]$, $$ K_s(-\delta_i)=K_s(-(m_0(a+d+1)+(i-b)d))=L(s,i-b)\,. $$ For $i\in [b,n]$, we construct inductively two sequences of complexes ${\bf C}_{i}$ and ${\bf M}_i$, both resolving $R/I_i$, with ${\bf M}_i$ being a minimal resolution. For $i=b$, ${\bf C}_{b}={\bf M}_b={\bf E} \oplus {\bf E^1}(-\delta _b)$ is given in (\ref{b}). We now prove the following sequence of steps, which forms the $i$-th step of our construction. \begin{inductive}\label{induction} \begin{enumerate} \item The minimal resolution of $R/I_{i-1}$ has length $n$. \item $I_{i-1}:\Delta _{i}=(X_0, X_1, \ldots , X_{n-1})$. \item Multiplication by $\Delta_i$ on $R$ induces a map of complexes ${\bf \Delta}_i:{\bf K}(-\delta_i)\rightarrow {\bf M}_{i-1}$. \item ${\bf C}_{i}=\cone{{\bf \Delta}_i}$ resolves $R/I_i$.
\item ${\bf M}_{i}$ is the minimized mapping cone of ${\bf \Delta}_i$ and is given by \begin{itemize} \item $({\bf M}_{i})_0=R$, \item $({\bf M}_{i})_1=E_1\oplus \displaystyle{\bigoplus_{k=0}^{i-b}R(-[m_0(a+d+1)+kd])}$, \item $({\bf M}_{i})_s= \left\{\begin{array}{ll} E_s \oplus \left(\bigoplus_{k=s-1}^{i-b} L(s-1,k)\right)& \hbox{ for }s\in [2,i-b+1],\\ E_{s} \oplus \left(\bigoplus_{k=i-b+1}^{s-1} L(s,k)\right)& \hbox{ for }s\in [i-b+2,n]. \end{array}\right.$ \end{itemize} \item The minimal resolution of $R/I_{i}$ has length $n$. \end{enumerate} \end{inductive} \noindent (1) $\Rightarrow$ (2). As observed in Remark~\ref{rmkcolon}, by Lemma~\ref{colon}~ (\ref{coloninclusion}) one has that $I_{i-1}:\Delta _{i}$ is either $(X_0, X_1, \ldots , X_{n-1})$ or $(X_0, X_1, \ldots , X_{n-1}, X_n^\ell)$ for some $\ell\ge 1$, and it is well-known that the Koszul complex provides minimal graded free resolutions for $R/(X_0, \ldots , X_{n-1})$ and $R/(X_0, \ldots , X_{n-1}, X_n^\ell)$. Consider the Koszul resolution ${\bf K'}= \oplus _{i=0}^{n+1}K_i'$ of $R/(X_0, \ldots , X_{n-1}, X_n^\ell)$: $$ 0\rightarrow K'_{n+1}\longrightarrow\cdots\longrightarrow K'_1\longrightarrow K'_0\longrightarrow R/(X_0, \ldots , X_{n-1}, X_n^\ell)\rightarrow 0\,. $$ Assume that $I_{i-1}:\Delta_{i}=(X_0, X_1, \ldots , X_{n-1}, X_n^\ell)$ for some $\ell\ge 1$ and consider the complex map ${\bf\Delta}'_{i}:{\bf K'}(-\delta_{i})\rightarrow {\bf M}_{i-1}$ induced by multiplication by $\Delta_{i}$. Then the mapping cone of ${\bf\Delta}'_{i}$ provides a free resolution of $R/I_{i}$ and since ${\bf K'}(-\delta_{i})$ and ${\bf M}_{i-1}$ have length $n+1$ and $n$ respectively, $\cone{{\bf\Delta}'_{i}}$ has length $n+2$. 
It may not be minimal; nevertheless, since $({\bf\Delta}'_{i})_{n+1}:K'_{n+1}(-\delta_{i})\rightarrow 0$ is the zero map, no cancelation can occur at the last step of $\cone{{\bf\Delta}'_{i}}$, and $R/I_{i}$ would have a minimal resolution of length $n+2$, which is impossible since $R$ is a regular ring of dimension $n+1$. Thus, $I_{i-1}:\Delta _{i}=(X_0, X_1, \ldots , X_{n-1})$. \noindent (2) $\Rightarrow$ (3) $\Rightarrow$ (4) and (5) $\Rightarrow$ (6) are straightforward. It remains to show that (4) $\Rightarrow$ (5). \noindent Set $i=b+t$. The map of complexes ${\bf \Delta} _{b+t}: {\bf K}(-\delta _{b+t})\rightarrow {\bf M}_{b+t-1}$ induced by multiplication by $\Delta_{b+t}$ (which is of degree $\delta_{b+t}=m_0(a+d+1)+td$) is given by the following diagram (the left column is the shifted Koszul complex ${\bf K}(-\delta_{b+t})$ that resolves $R(-[m_0(a+d+1)+td])/(X_0, \ldots , X_{n-1})$ minimally, and the column on the right hand side is ${\bf M}_{b+t-1}$ that resolves $R/I_{b+t-1}$ minimally): $$ \begin{CD} 0 @. 0\\ @VV V @VV V\\ L(n,t) @> ({\bf \Delta}_{b+t})_n>> \bigoplus_{k=t}^ {n-1} L(n,k) \\ @VV V @VV V\\ L(n-1,t) @>({\bf \Delta}_{b+t})_{n-1}>> E_{n-1}\oplus \left(\bigoplus_ {k={t}}^{n-2} L(n-1,k)\right) \\ @VV V @VV V\\ \vdots @. \vdots \\ @VV V @VV V\\ L(t+2,t) @> ({\bf \Delta} _{b+t})_{t+2}>> E_{t+2} \oplus \left(\bigoplus_{k=t}^{t+1} L(t+2,k)\right) \\ @VV V @VV V\\ L(t+1,t) @> ({\bf \Delta} _{b+t})_{t+1}>> E_{t+1} \oplus L(t+1,t) \\ @VV V @ VV V\\ L(t,t) @>({\bf\Delta} _{b+t})_{t}>> E_{t}\oplus L(t-1,t-1) \\ @VV V @ VV V\\ \vdots @.
\vdots \\ @VV V @ VV V\\ L(3,t) @>({\bf\Delta}_{b+t})_3>> E_3 \oplus \left(\bigoplus_{k=2}^{t-1} L(2,k)\right) \\ @VV V @ VV V\\ L(2,t) @>({\bf \Delta} _{b+t})_{2}>> E_{2} \oplus \left(\bigoplus_{k=1}^{t-1} L(1,k) \right) \\ @VV V @ VV V\\ L(1,t) @>({\bf \Delta}_{b+t})_1>> E_1\oplus \bigoplus_{k=0}^{t-1}R(-[m_0(a+d+1)+kd]) \\ @VV V @ VV V\\ R(-[m_0(a+d+1)+td]) @> ({\bf \Delta}_{b+t})_0 >> R \end{CD} $$ Now observe in the previous diagram that, for $s\in [t+1,n]$, the left side $({\bf K}(-\delta_{b+t}))_s$ corresponds to the summand $k=t$ on the right hand side, and we will show that it splits off entirely. The following lemma shows that none of these maps are identically zero. \begin {lemma}\label {positive} Let ${\bf K}$, ${\bf M}$ and ${\bf \Delta} _{b+t}: {\bf K}(-\delta _{b+t})\rightarrow {\bf M}_{b+t-1}$ be as before. Then, $({\bf \Delta} _{b+t})_i\neq 0$ for all $0\le i\le n$. In fact, if we choose a basis for the free modules $K_i$, then we can pick a map of complexes ${\bf \Delta} _{b+t}$ such that $({\bf \Delta} _{b+t})_i $ is not zero on any of the chosen basis elements of $K_i$. \end{lemma} \begin{proof} Since $\bf M$ is minimal of length $n$ and $({\bf \Delta} _{b+t})_0$, given by multiplication by $\Delta_{b+t}$, is injective (not zero), none of the maps $({\bf \Delta} _{b+t})_i$, $0\le i\le {n-1}$, is zero, and they can be chosen to be nonzero on any of the chosen basis elements of $K_i$. The only question is for the last map $({\bf \Delta} _{b+t})_n$. We take care of this by Proposition~\ref{sumcomplex}. Since $\Delta _{b+t}$ is not contained in the ideal $I_{b+t-1}$, we get that $({\bf \Delta} _{b+t})_0(R) = \Delta _{b+t}R$ is not contained in the image of $({\bf M}_{b+t-1})_1$. Thus by Proposition~\ref{sumcomplex}, we see that $({\bf \Delta} _{b+t})_n$ is not equal to zero. Since $K_n= R$, we get that $({\bf \Delta} _{b+t})_n$ is not zero on a basis element of the free module $K_n$.
\end{proof} Next we show that the projection of $({\bf \Delta} _{b+t})_s$ onto $L(s,t)$ is not zero for any $s\geq t+1$. By degree considerations, the projection of $({\bf \Delta} _{b+t})_s$ onto $L(s,k)$ for $k>t$ is certainly zero. What is left follows from the next lemma. \begin {lemma}\label {projection} For all $1\le t\le n-b$, the projection of $({\bf \Delta}_{b+t})_k(R(-[m_0(a+d+k+1)+td+rd]))$ onto $R(-[m_0(a+d+k+1)+td+rd])$ is not zero, and hence is multiplication by a unit, for every $k\in [t+1,n]$ and $r\in \sigma(n,k)$. \end{lemma} \begin{proof} It suffices to show that \begin{equation}\label{p} ({\bf \Delta}_{b+t})_k(R(-[m_0(a+d+k+1)+td+rd]))\not\subset E_k\,, \end{equation} that is, that the projection of $({\bf \Delta}_{b+t})_k(L(k,t))$ onto $L(k,t)$ is not zero. Indeed, if this projection is not zero for some $k$ and $t$, then the projection of $({\bf \Delta}_{b+t})_k(R(-[m_0(a+d+k+1)+td+rd]))$ onto $R(-[m_0(a+d+k+1)+td+rd])$ is not zero for the lowest $r\in\sigma(n,k)$; we can then split it off, go to the next smallest $r$, and so we get the lemma. Now the rest of the proof is by descending induction on $k$. None of these maps $({\bf\Delta}_{b+t})_k$ are identically zero by Lemma~\ref{positive}. If $k=n$, $E_n=0$ and (\ref{p}) holds, hence the lemma is true for all $t$. Assume that the lemma holds for all $k\in [s+1,n]$ for some $s$, and let $k=s$. Suppose that there is an $r= r_1+\cdots +r_s$ such that $({\bf\Delta} _{b+t})_{s}(R(-[m_0(a+d+s+1)+td+rd]))$ is entirely contained in $E_s$. Pick $r_0$ to be the smallest nonnegative integer not in $\{r_1, \ldots, r_s\}$.
Consider the commutative diagram: $$ \begin{array}{ccc} R(-[m_0(a+d+s+2)+td+(r+r_0)d]) &\longrightarrow& ({\bf M}_{b+t-1})_{s+1}\\ \downarrow && \downarrow\\ L(s,t) &\longrightarrow & ({\bf M}_{b+t-1})_s \end{array} $$ Since the lemma is true for $s+1$ and $r+r_0\in \sigma(n,s+1)$, we can take $$({\bf\Delta} _{b+t})_{s+1}(R(-[m_0(a+d+s+2)+td+(r+r_0)d])) = R(-[m_0(a+d+s+2)+td+(r+r_0)d])+\cdots$$ Continuing with the vertical arrow on the right, we see that $R(-[m_0(a+d+s+2)+td+(r+r_0)d])$ maps onto $$\Delta _{b+t-1}E_s+\sum _{i\ge 0}(\pm)X_{r_i}R(-[m_0(a+d+s+1)+td+(r+r_0-r_i)d]).$$ Thus every one of the $X_{r_i}$, and in particular $X_{r_0}$, that makes up the sum $r+r_0$ will appear as part of the image in $({\bf M}_{b+t-1})_s/E_s$. Thus this image is not contained in $E_s\oplus (X_{r_1}, X_{r_2},\ldots , X_{r_s})\bigoplus_{i\ge 0} R(-[m_0(a+d+s+1)+td+(r+r_0-r_i)d])$. If we first come down and then apply $({\bf\Delta} _{b+t})_s$, the image is $({\bf\Delta}_{b+t})_s\left(\sum _{i\ge 0} \pm X_{r_i}\, R(-[m_0(a+d+s+1)+td+(r+r_0-r_i)d])\right)$, which is contained in $E_s\oplus (X_{r_1}, X_{r_2},\ldots , X_{r_s})\bigoplus_{i\ge 0} R(-[m_0(a+d+s+1)+td+(r+r_0-r_i)d])$, a contradiction. This completes the induction and the proof. \end{proof} For $s\in [t+1,n]$, the left side $({\bf K}(-\delta_{b+t}))_s$ splits off entirely with the summand $k=t$ on the right hand side. On the other hand, for $s\in [0,t]$, no cancelation occurs. So the minimal free resolution of $R/I_i$ is as described in (5) of Inductive Step~\ref{induction}. This completes our inductive construction and we have proved the following: \begin{theorem}\label{resIi} Let ${\bf m}=(m_0, \ldots, m_n)$ be an arithmetic sequence with common difference $d$ and write $m_0=an+b$ for $a,b$ two positive integers with $b\in [1,n]$. Consider the polynomial ring $R=k[X_0,\ldots , X_n]$ with $\deg{X_i}=m_i$. Let $J\subset R$ be the defining ideal of the rational normal curve and ${\bf E}$ be its minimal resolution given by the Eagon-Northcott complex.
Set $\Delta_i:=X_n^aX_{i}-X_0^{a+d}X_{i-b}$ for all $i\in [b,n]$, $\delta_i:=\deg{\Delta_i}=m_0(a+d+1)+(i-b)d$, and consider the ideal $I_i:=J+ (\Delta _b, \ldots, \Delta _{i})\subset R$. Then for all $i\in [b,n]$, $R/I_i$ is Cohen-Macaulay of codimension $n$ with minimal graded free resolution ${\bf M}_i$ given by $$ \begin{array}{rcl} {\bf M}_b&=&\mincone{{\bf \Delta}_b:{\bf E}(-\delta_b)\rightarrow {\bf E}}=\cone{{\bf \Delta}_b:{\bf E}(-\delta_b)\rightarrow {\bf E}} \\ {\bf M}_i&=&\mincone{{\bf \Delta}_i:{\bf K}(-\delta_i)\rightarrow {\bf M}_{i-1}}\,,\ \forall i\in [b+1,n] \end{array} $$ where ${\bf K}$ is the Koszul resolution of $(X_0,\ldots,X_{n-1})$. The free modules in ${\bf M}_i$ are explicitly given by $$ 0\rightarrow ({\bf M}_i)_n \longrightarrow ({\bf M}_i)_{n-1} \longrightarrow \cdots \longrightarrow ({\bf M}_i)_1\longrightarrow R\longrightarrow R/I_i\rightarrow 0 $$ where \begin{itemize} \item $({\bf M}_{i})_0=R$, \item $({\bf M}_{i})_1=E_1\oplus \displaystyle{\bigoplus_{k=0}^{i-b}R(-[m_0(a+d+1)+kd])}$, \item $({\bf M}_{i})_s= \left\{\begin{array}{ll} E_s \oplus \left(\bigoplus_{k=s-1}^{i-b} L(s-1,k)\right)& \hbox{ for }s\in [2,i-b+1],\\ E_{s} \oplus \left(\bigoplus_{k=i-b+1}^{s-1} L(s,k)\right)& \hbox{ for }s\in [i-b+2,n]. \end{array}\right.$ \end{itemize} In particular, $({\bf M}_{i})_n= \left\{\begin{array}{ll} L(n-1,n-1)& \hbox{ if $i=n$ and $b=1$},\\ \left(\bigoplus_{k=i-b+1}^{n-1} L(n,k)\right)& \hbox{ otherwise.} \end{array}\right.$ \end{theorem} \begin{corollary}\label{CMtypeIi} Using notations in Theorem~\ref{resIi}, the Cohen-Macaulay type of $R/I_i$ is $\left\{ \begin{array}{ll} n &\hbox{ if } i=n \hbox{ and } b=1,\\ n-1+b-i &\hbox{ otherwise.} \end{array}\right.$ \end{corollary} In particular, since $I_n={\mathcal P}$ is the defining ideal of the monomial curve $C_{\bf m}$, we get the following theorem.
\begin{theorem}\label{mainThm} Let ${\bf m}=(m_0, \ldots, m_n)$ be an arithmetic sequence and write $m_0=an+b$ for $a,b$ two positive integers with $b\in [1,n]$. If ${\mathcal P}\subset R:=k[X_0,\ldots,X_n]$ is the defining ideal of the monomial curve $C_{\bf m}\subset\mathbb A_k^{n+1}$ associated to ${\bf m}$ then $R/{\mathcal P}$ is Cohen-Macaulay of codimension $n$ and its minimal graded free resolution ${\bf M}_n$ is $$ 0\rightarrow F_n \longrightarrow E_{n-1}\oplus F_{n-1} \longrightarrow \cdots \longrightarrow E_1\oplus F_1\longrightarrow R\longrightarrow R/{\mathcal P}\rightarrow 0 $$ where, for all $s\in[2,n]$, $\displaystyle{E_{s-1}= \bigoplus_{k=1}^{s-1} \left(\bigoplus_{r\in\sigma(n,s) } R(- (sm_0+kd+rd))\right)}$, and $$ \begin{array}{rcl} F_1 &=& \displaystyle{ \left(\bigoplus _{k=0}^{n-b}R(-[m_0(a+d+1)+kd])\right)}\\ F_2 &=& \displaystyle{ \left(\bigoplus_{k=1}^{n-b}\left( \bigoplus_{r=0}^{n-1}R(-[m_0(a+d+2)+kd +rd])\right)\right)} \\ F_s &=& \left\{ \begin{array}{ll} \displaystyle{ \left(\bigoplus_{k=s-1}^{n-b}\left( \bigoplus_{r\in \sigma (n,s-1)} R(-[m_0(a+d+s)+kd+rd])\right)\right)}& \hbox{\rm if}\ s\in[3,n-b+1], \\ \displaystyle{ \left(\bigoplus_{k=n-b+1}^{s-1}\left( \bigoplus_{r\in \sigma (n,s)} R(-[m_0(a+d+s+1)+kd+rd])\right)\right)}& \hbox{\rm if}\ s\in[n-b+2,n]. \end{array}\right. \end{array} $$ \end{theorem} \section{Consequences}\label{secconsequences} The first direct consequence of Theorem~\ref{mainThm} shows that \cite[Conjecture~2.2]{hip}, which was first stated in \cite{eaca}, holds: \begin{theorem}\label{ThmIndraConj} With notations as in Theorem~\ref{mainThm}, the Betti numbers of $R/{\mathcal P}$ only depend on $n$ and on the value of $m_0$ modulo $n$.
\end{theorem} \begin{proof} If one reads the Betti numbers $\{\beta_j,\ j\in [0,n]\}$ in the minimal graded free resolution in Theorem~\ref{mainThm}, one gets that $\beta_0=1$ and \begin{equation}\label{bettiGeneral} \beta_j=j{n\choose j+1}+ \left\{\begin{array}{ll} \displaystyle{(n-b+2-j){n\choose j-1}}&\hbox{if}\ 1\leq j\leq n-b+1,\\ \displaystyle{(j-n+b-1){n\choose j}}&\hbox{if}\ n-b+1<j\leq n, \end{array}\right. \end{equation} where $m_0\equiv b\mod n$ and $b\in [1,n]$ and hence the statement holds. \end{proof} \begin{example} For $n=4$, if ${\mathcal P}\subset R=k[X_0,\ldots , X_4]$ is the defining ideal of the monomial curve associated to an arithmetic sequence ${\bf m}=(m_0,\ldots , m_4)$, the $4$ different patterns for the global Betti numbers of $R/{\mathcal P}$ are as follows. They correspond respectively to $b=1$, 2, 3 and 4: $$ \begin{array}{lllllllllllllll} 0 & \rightarrow & R^4 & \longrightarrow & R^{15} & \longrightarrow & R^{20} & \longrightarrow & R^{10} & \longrightarrow & R & \longrightarrow & R/{\mathcal P} & \rightarrow & 0 \\ 0 & \rightarrow & R & \longrightarrow & R^9 & \longrightarrow & R^{16} & \longrightarrow & R^9 & \longrightarrow & R & \longrightarrow & R/{\mathcal P} & \rightarrow & 0 \\ 0 & \rightarrow & R^2 & \longrightarrow & R^7 & \longrightarrow & R^{12} & \longrightarrow & R^8 & \longrightarrow & R & \longrightarrow & R/{\mathcal P} & \rightarrow & 0 \\ 0 & \rightarrow & R^3 & \longrightarrow & R^{11} & \longrightarrow & R^{14} & \longrightarrow & R^7 & \longrightarrow & R & \longrightarrow & R/{\mathcal P} & \rightarrow & 0 \end{array} $$ For example, the two arithmetic sequences ${\bf m}_1=(11,13,15,17,19)$ and ${\bf m}_2=(7,12,17,22,27)$ fit into the third pattern because $11\equiv 7\equiv 3\mod 4$. 
Denoting by ${\mathcal P}_1$ and ${\mathcal P}_2$ the defining ideals of $C_{{\bf m}_1}$ and $C_{{\bf m}_2}$ respectively, the minimal graded free resolutions of $R/{\mathcal P}_1$ and $R/{\mathcal P}_2$ are given by Theorem~\ref{mainThm}, and the result can easily be checked using the computer algebra systems CoCoA, Macaulay2 or Singular: {\footnotesize \begin{eqnarray*} 0\rightarrow R(-115)\oplus R(-117) \longrightarrow \begin{array}{r} R(-58)\oplus R(-60)\\ \oplus R(-62)\oplus R(-98)\\ \oplus R(-100)\oplus R(-102)\\ \oplus R(-104) \end{array} \longrightarrow \begin{array}{r} R(-41)\oplus R(-43)^2\\ \oplus R(-45)^2\oplus R(-47)^2\\ \oplus R(-49)\oplus R(-68)\\ \oplus R(-70)\oplus R(-72)\\ \oplus R(-74) \end{array}\\ \longrightarrow \begin{array}{r} R(-26)\oplus R(-28)\\ \oplus R(-30)^2\oplus R(-32)\\ \oplus R(-34)\oplus R(-55)\\ \oplus R(-57) \end{array} \longrightarrow R\longrightarrow R/{\mathcal P}_1\rightarrow 0. \end{eqnarray*} \begin{eqnarray*} 0\rightarrow R(-117)\oplus R(-122) \longrightarrow \begin{array}{r} R(-63)\oplus R(-68)\\ \oplus R(-73)\oplus R(-95)\\ \oplus R(-100)\oplus R(-105)\\ \oplus R(-110) \end{array} \longrightarrow \begin{array}{r} R(-41)\oplus R(-46)^2\\ \oplus R(-51)^2\oplus R(-56)^2\\ \oplus R(-61)^2\oplus R(-66)\\ \oplus R(-71)\oplus R(-76) \end{array}\\ \longrightarrow \begin{array}{r} R(-24)\oplus R(-29)\\ \oplus R(-34)^2\oplus R(-39)\\ \oplus R(-44)\oplus R(-49)\\ \oplus R(-54) \end{array} \longrightarrow R\longrightarrow R/{\mathcal P}_2\rightarrow 0. \end{eqnarray*} } \end{example} \begin{remark} It is important to note that the phenomenon described in Theorem~\ref{ThmIndraConj} is special to arithmetic sequences. In general, if $0 < m_{0} < \cdots < m_{n}$ is a sequence of integers then the Betti numbers of the monomial curve defined by $x_{i} = t^{m_{i}}$ do not depend only on $n$ and the remainder of $m_{0}$ upon division by $n$. It fails already for almost arithmetic sequences, even in dimension 3.
In dimension 3, a monomial curve is either a complete intersection with $\beta_1=2, \beta_2=1$ or an ideal of $ 2\times 2$ minors of a $3\times 2$ matrix with $\beta_1=3$ and $\beta_2=2$. Now, it is easy to see that for ${\bf m} = (7, 10, 15)$, $C_{\bf m}$ is a complete intersection with ${\mathcal P} = (X_1^3-X_2^2, X_0^5-X_1^2X_2)$, whereas for ${\bf m} =(13,16, 21)$, $C_{\bf m}$ is not a complete intersection. However, both $7$ and $13$ are odd and hence congruent to $1$ modulo $2$. \end{remark} \begin{corollary} If $I_i\subset R$ is as in Theorem~\ref{resIi}, $R/I_i$ is Gorenstein if and only if \begin{itemize} \item $b=2$, $i=n$, \item $b=1$, $i=n-1$, or \item $n=1$. \end{itemize} In particular, we recover the result of \cite[Corollary~6.2]{patseng} that $R/{\mathcal P}$ is Gorenstein if and only if $b=2$. Moreover, the rings $R/I_i$ are never level unless they are Gorenstein. \end{corollary} \begin{proof} If $n=1$, $I_1={\mathcal P}$ is principal and therefore Gorenstein. By Corollary~\ref{CMtypeIi}, the type of $R/I_i$ is $n-1+b-i$. Since $i\leq n$, one has $n-1+b-i\geq b-1>1$ whenever $b>2$, so $R/I_i$ is Gorenstein if and only if either $b=2$ and $i=n$, or $b=1$ and $i=n-1$. The non-levelness of $R/I_i$ when it is not Gorenstein follows directly from the degrees in the resolution. \end{proof} \begin{remark}\label{rmkGor} If $m_0\equiv 2\mod n$, i.e., if $R/{\mathcal P}$ is Gorenstein, then by (\ref{bettiGeneral}) one has that $\beta_0=\beta_n=1$ and $\beta_j=j{n\choose j+1}+(n-j){n\choose j-1}$ for all $j\in [1,n-1]$, which is Corollary~\ref{bettiGor}. Note that this result is obtained here by minimalizing a resolution obtained through an iterated mapping cone construction, while it had been obtained in Section~\ref{secgorenstein} by a direct argument. \end{remark} \begin{remark} If $m_0\equiv 1\mod n$, one gets by (\ref{bettiGeneral}) that for all $j\in [1,n]$, $\beta_j=j{n\choose j+1}+(n+1-j){n\choose j-1}$.
Note that in this case the Betti numbers had already been obtained in \cite[Theorem~3.1]{hip}, where we show that for all $j\in [1,n]$, $\beta_j=j{n+1\choose j+1}$. One can easily check that both numbers coincide. \end{remark} \begin{theorem}\label{CMtype} With notations as in Theorem~\ref{mainThm}, the Cohen-Macaulay type of $R/{\mathcal P}$ is the unique integer $c$ in $[1,n]$ such that $c\equiv m_0-1 \mod n$. \end{theorem} \begin{proof} The Cohen-Macaulay type of $R/{\mathcal P}$, $\beta_n$, is computed by the first formula in (\ref{bettiGeneral}) when $b=1$, and by the second otherwise. Thus, $\beta_n={n\choose n-1}=n$ if $b=1$, and $\beta_n=(b-1){n\choose n}=b-1$ otherwise. \end{proof} Putting together Theorem~\ref{ThmIndraConj} and Theorem~\ref{CMtype}, one gets the following: \begin{corollary}\label{CMtypeAndBetti} With notations as in Theorem~\ref{mainThm}, the Cohen-Macaulay type of $R/{\mathcal P}$ determines all its Betti numbers. \end{corollary} Note that in the previous corollary, the same result holds if one substitutes the minimal number of generators of ${\mathcal P}$ for the Cohen-Macaulay type of $R/{\mathcal P}$. \begin{remark} One can also deduce from the minimal graded free resolution in Theorem~\ref{mainThm} the value of the (weighted) Castelnuovo-Mumford regularity of $R/{\mathcal P}$, namely $$ \hbox{reg}(R/{\mathcal P})= \left\{\begin{array}{ll} {n\choose 2}d+m_0(a+d)+n(m_0-1)&\hbox{if }b=1, \\ ({n\choose 2}+b-1)d+m_0(a+d+1)+n(m_0-1)&\hbox{if }b\geq 2. \end{array}\right. $$ On the other hand, the Frobenius number $g({\bf m})$ of the numerical semigroup $\Gamma({\bf m})$ can be computed using \cite[Theorem~3.2.2]{ramirezbook} and one gets that $g({\bf m})=(a-1)m_0+d(m_0-1)$ if $b=1$, and $g({\bf m})=am_0+d(m_0-1)$ if $b\geq 2$. Thus $ \hbox{reg}(R/{\mathcal P})-g({\bf m})=({n\choose 2}+b)d+m_0+n(m_0-1)\,.
$ In particular, the regularity is always an upper bound for the conductor $g({\bf m})+1$ of the numerical semigroup $\Gamma({\bf m})$ and it is, in general, much bigger. The above relation between the regularity of the semigroup ring and the Frobenius number of the semigroup can be nicely expressed as follows: $$ \hbox{reg}(R/{\mathcal P})=g({\bf m}) +\sum_{i=0}^{n}m_i-(n-b)d-n\,. $$ \end{remark} Finally, we use Theorem \ref{ThmIndraConj} to prove a conjecture of Herzog and Srinivasan on eventual periodicity of Betti numbers of semigroup rings in our context. Given a sequence of positive integers ${\bf m}= (m_0, \ldots, m_n)$ and a positive integer $j$, denote by ${\bf m}+(j) = {\bf m} +(j,j, \ldots, j)$. Herzog and Srinivasan have conjectured the following: \begin{Conjecture}[Herzog and Srinivasan] Let ${\bf m}$ and ${\bf m}+(j)$ be as above. \begin{itemize} \item[HS1] The Betti numbers of the semigroup ring $k[\Gamma({\bf m} +(j))]$ are eventually periodic in $j$. \item[HS2] The number of minimal generators of the defining ideal of the monomial curve $C_{{\bf m}+(j)}$ is eventually periodic in $j$ with period $m_n-m_0$. \item[HS3] The number of minimal generators of the defining ideal of the monomial curve $C_{{\bf m}+(j)}$ is bounded for all $j$. \end{itemize} \end{Conjecture} They prove the conjecture for $n=2$ and A. Marzullo proves it for some cases when $n=4$ in \cite {Ad}. Our Theorem~\ref{ThmPeriodic} proves this periodicity conjecture in its strong form (HS1) for arithmetic sequences. \begin{theorem}\label{ThmPeriodic} If ${\bf m}=(m_0,\ldots,m_n)$ is an arithmetic sequence and ${\bf m}+(j)={\bf m} +(j,j, \ldots, j)$, then the Betti numbers of $C_{{\bf m}+(j)}$ are eventually periodic in $j$ with period $m_n-m_0$. \end{theorem} \begin{proof} Let ${\bf m}=(m_0,\ldots,m_n)$ be an arithmetic sequence and $j\in\mathbb N$. 
Since $(m_0+j, \ldots, m_n+j)$ is in arithmetic progression, $\gcd{(m_0+j,\ldots,m_n+j)}=\gcd{(m_0+j,d)}$ where $d$ is the common difference in ${\bf m}$. We denote by $\widetilde{\,\cdot\,}$ division by $\gcd{(m_0+j,d)}$. Then $\widetilde{m_i+j}=\widetilde{m_0+j}+i\widetilde{d}$ and $\widetilde{{\bf m}+(j)}=(\widetilde{m_0+j},\ldots,\widetilde{m_n+j})$ has gcd 1. Moreover, $C_{{\bf m}+(j)}=C_{\widetilde{{\bf m}+(j)}}$. We claim that for $j\geq nd-m_0$, $\widetilde{{\bf m}+(j)}$ is always an arithmetic sequence. If $\widetilde{m_k+j}=\sum_{i\neq k}r_i(\widetilde{m_i+j})$ for $r_i\in\mathbb N$, then $k\widetilde{d}+\widetilde{m_0+j}=\sum_{i\neq k}r_i(\widetilde{m_0+j}+i\widetilde{d})$ and hence $\widetilde{d}(k-\sum_{i\neq k}ir_i)=(\widetilde{m_0+j})(\sum_{i\neq k}r_i-1)\geq n\widetilde{d}$, and this is not possible since $k-\sum_{i\neq k}ir_i<n$. Now, since $\gcd{(m_0+j+nd,d)}=\gcd{(m_0+j,d)}$, \begin{eqnarray*} \widetilde{m_0+j+nd}&=&\widetilde{m_0+j}+n\widetilde{d}\\ &\equiv&\widetilde{m_0+j}\mod n. \end{eqnarray*} Since, by Theorem~\ref{ThmIndraConj}, the Betti numbers of $C_{\widetilde{{\bf m}+(j)}}$ depend only on $n$ and the residue of $\widetilde{m_0+j}$ modulo $n$, the Betti numbers of $C_{{\bf m}+(j)}$ are periodic in $j$ with period $nd=m_n-m_0$ for $j$ large enough. \end{proof} \end{document}
\begin{document} \begin{abstract} The free-boundary compressible 1-D Euler equations with moving {\it physical vacuum} boundary are a system of hyperbolic conservation laws which are both {\it characteristic} and {\it degenerate}. The {\it physical vacuum singularity} (or rate-of-degeneracy) requires the sound speed $c= \sqrt{\gamma \rho^{ \gamma -1}}$ to scale as the square-root of the distance to the vacuum boundary, and has attracted a great deal of attention in recent years \cite{IMA2009}. We establish the existence of unique solutions to this system on a short time-interval, which are smooth (in Sobolev spaces) all the way to the moving boundary. The proof is founded on a new higher-order Hardy-type inequality in conjunction with an approximation of the Euler equations consisting of a particular degenerate parabolic regularization. Our regular solutions can be viewed as {\it degenerate viscosity solutions}. \end{abstract} \maketitle {\small \tableofcontents} \section{Introduction} \label{sec_introduction} \subsection{Compressible Euler equations and the physical vacuum boundary} This paper is concerned with the evolving vacuum state in a compressible gas flow. For $0 \le t \le T$, the evolution of a one-dimensional compressible gas moving inside a dynamic vacuum boundary is modeled by the one-phase compressible Euler equations: \begin{subequations} \label{ceuler} \begin{alignat}{2} \rho\left[u_t+ uu_x\right] + p(\rho)_x &=0 &&\text{in} \ \ I (t) \,, \label{ceuler.a}\\ \rho_t+ (\rho u)_x&=0 &&\text{in} \ \ I (t) \,, \label{ceuler.b}\\ p &= 0 \ \ &&\text{on} \ \ \Gamma(t) \,, \label{ceuler.c}\\ \mathcal{V} (\Gamma(t))& = u \ \ &&\text{on} \ \ \Gamma(t) \,, \label{ceuler.d}\\ (\rho,u) &= (\rho_0,u_0 ) \ \ &&\text{on} \ \ I(0) \,, \label{ceuler.e}\\ I (0) &= I =\{0<x<1\} \,.
&& \label{ceuler.f} \end{alignat} \end{subequations} The open, bounded interval $I (t) \subset \mathbb{R} $ denotes the changing domain occupied by the gas, $\Gamma(t):= \partial I (t)$ denotes the moving vacuum boundary, $ \mathcal{V} (\Gamma(t))$ denotes the velocity of $\Gamma(t)$. The scalar field $u $ denotes the Eulerian velocity field, $p$ denotes the pressure function, and $\rho$ denotes the density of the gas. The equation of state $p(\rho)$ is given by \begin{equation}\label{eos} p(x,t)= C_\gamma\, \rho(x,t)^\gamma\ \ \text{ for } \ \ \gamma> 1 , \end{equation} where $C_\gamma $ is the adiabatic constant which we set to one, and $$ \rho>0 \ \text{ in } \ I (t) \ \ \ \text{ and } \ \ \ \rho=0 \ \text{ on } \Gamma(t) \,. $$ Equation (\ref{ceuler.a}) is the conservation of momentum; (\ref{ceuler.b}) is the conservation of mass; the boundary condition (\ref{ceuler.c}) states that the pressure (and hence density) must vanish along the vacuum boundary; (\ref{ceuler.d}) states that the vacuum boundary is moving with the fluid velocity, and (\ref{ceuler.e})-(\ref{ceuler.f}) are the initial conditions for the density, velocity, and domain. Using the equation of state (\ref{eos}), (\ref{ceuler.a}) is written as \begin{alignat}{2} \rho [u_t+ uu_x]+ (\rho^{\gamma})_x &=0 \ \ \ &&\text{in} \ \ I (t) \,. \tag{ \ref{ceuler.a}'} \end{alignat} With the sound speed given by $c^2(x,t)= \gamma \rho^{\gamma-1}(x,t)$, and with $c_0 = c(x,0)$, the condition \begin{equation}\label{phys_vac} 0 < \left|\frac{ \partial c_0^2}{ \partial x}\right| < \infty \text{ on } \Gamma \end{equation} defines a {\it physical vacuum} boundary (or singularity) (see \cite{Liu1996}, \cite{LiYa1997}, \cite{LiYa2000}, \cite{XuYa2005}). Since $ \rho_0 >0$ in $ I$, (\ref{phys_vac}) implies that for some positive constant $C$ and $x\in I$ near the vacuum boundary $\Gamma$, \begin{equation}\label{degen} \rho_0^{\gamma-1}(x) \ge C \text{dist}(x, \Gamma) \,. 
\end{equation} Equivalently, the physical vacuum condition (\ref{degen}) implies that for some $ \alpha >0$, \begin{equation}\label{degen1} \left|\frac{ \partial \rho_0^{\gamma-1}}{\partial x}(x)\right|\ge 1 \text{ for any $x$ satisfying $d(x,\partial I)\le\alpha$} \,, \end{equation} and for a constant $C_ \alpha $, depending on $ \alpha $, \begin{equation}\label{degen2} \rho_0^{\gamma-1}(x)\ge C_\alpha>0 \text{ for any $x$ such that $d(x,\partial I)\ge\alpha$} \,. \end{equation} Because of condition (\ref{degen}), the compressible Euler system (\ref{ceuler}) is a {\it degenerate} and {\it characteristic} hyperbolic system to which standard methods of symmetric hyperbolic conservation laws cannot be applied in standard Sobolev spaces. In \cite{CoLiSh2009}, we established a priori estimates for the multi-D compressible Euler equations with physical vacuum boundary. The main result of this paper is the construction of unique solutions in the 1-D case, which are smooth all the way to the moving vacuum boundary on a (short) time-interval $[0,T]$, where $T$ depends on the initial data. We combine the methodology of our a priori estimates \cite{CoLiSh2009}, with a particular degenerate parabolic regularization of Euler, which follows our methodology in \cite{CoSh2006, CoSh2007}, as well as a new higher-order Hardy-type inequality which permits the construction of solutions to this degenerate parabolic regularization. As we describe below in Section \ref{history}, our solutions can be thought of as {\it degenerate viscosity solutions}. The multi-D existence theory is treated in \cite{CoSh2009}. \subsection{Fixing the domain and the Lagrangian variables on the reference interval $ I$} We transform the system (\ref{ceuler}) into Lagrangian variables. We let $\eta(x,t)$ denote the ``position'' of the gas particle $x$ at time $t$. 
Thus, \begin{equation} \nonumber \partial_t \eta = u \circ \eta \ \ \text{ for } \ t>0 \,, \qquad \eta(x,0)=x \,, \end{equation} where $\circ $ denotes composition so that $[u \circ \eta] (x,t):= u(\eta(x,t),t)\,.$ We set \begin{align*} v &= u \circ \eta \text{ (Lagrangian velocity)}, \\ f&= \rho \circ \eta \text{ (Lagrangian density)}. \end{align*} The Lagrangian version of equations (\ref{ceuler.a})-(\ref{ceuler.b}) can be written on the fixed reference domain $ I$ as \begin{subequations} \label{ceuler00} \begin{alignat}{2} f v_t + ( f^ \gamma)_x/\eta_x &=0 \ \ && \text{ in } I \times (0,T] \,, \label{ceuler00.a} \\ f _t + f v_x/\eta_x &=0 \ \ && \text{ in } I \times (0,T] \,,\label{ceuler00.b} \\ f &=0 \ \ && \text{ on } \partial I \times (0,T] \,,\label{ceuler00.c} \\ (f,v,\eta) &=(\rho_0, u_0, e) \ \ \ \ && \text{ in } I \times \{t=0\} \,, \label{ceuler00.d} \end{alignat} \end{subequations} where $e(x)=x$ denotes the identity map on $I $. It follows from solving the equation (\ref{ceuler00.b}) that \begin{equation}\label{J} f =\rho \circ \eta = \rho_0/\eta_x, \end{equation} so that the initial density function $\rho_0$ can be viewed as a parameter in the Euler equations. Let $\Gamma:= \partial I $ denote the initial vacuum boundary; we write the compressible Euler equations (\ref{ceuler00}) as \begin{subequations} \label{ceuler0} \begin{alignat}{2} \rho_0 v_t + (\rho_0^ \gamma /\eta_x^\gamma)_x &=0 \ \ && \text{ in } I \times (0,T] \,, \label{ceuler0.a} \\ (\eta, v) &=( e, u_0) \ \ \ \ && \text{ in } I \times \{t=0\} \,, \label{ceuler0.b} \\ \rho_0^{\gamma-1}& = 0 \ \ &&\text{ on } \Gamma \,, \label{ceuler0.c} \end{alignat} \end{subequations} with $ \rho _0^{ \gamma -1}(x) \ge C \operatorname{dist}( x, \Gamma ) $ for $x \in I$ near $\Gamma$.
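For the reader's convenience, let us record the one-line computation behind (\ref{J}) (this verification is ours and is not part of the original argument): since $\partial_t\eta_x=v_x$, equation (\ref{ceuler00.b}) gives
$$
\partial_t\bigl(f\,\eta_x\bigr)=f_t\,\eta_x+f\,v_x=-\frac{f\,v_x}{\eta_x}\,\eta_x+f\,v_x=0\,,
$$
so that $f\,\eta_x=f(\cdot,0)\,\eta_x(\cdot,0)=\rho_0$, which is precisely (\ref{J}).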
\subsection{Setting $\gamma=2$}We will begin our analysis of (\ref{ceuler0}) by considering the case that $\gamma=2$, in which case, we seek solutions $\eta$ to the following system: \begin{subequations} \label{ce0} \begin{alignat}{2} \rho_0 v_t + (\rho_0^2/\eta_x^2)_x&=0 &&\text{in} \ \ I \times (0,T] \,, \label{ce0.a}\\ (\eta,v)&= (e,u_0 ) \ \ \ &&\text{on} \ \ I \times \{t=0\} \,, \label{ce0.b}\\ \rho_0& = 0 \ \ &&\text{ on } \Gamma \,, \label{ce0.c} \end{alignat} \end{subequations} with $ \rho _0(x) \ge C \operatorname{dist}( x, \Gamma ) $ for $x \in I$ near $\Gamma$. The equation (\ref{ce0.a}) is equivalent to \begin{equation} v_t + 2 \eta_x ^{-1} (\rho_0 \eta_x^{-1} )_x=0 \label{ce_vor} \,, \end{equation} and (\ref{ce_vor}) can be written as \begin{equation} v_t +\rho_0 ( \eta_x^{-2})_x + 2(\rho_0)_x \eta_x^{-2} =0 \label{ce_elliptic} \,. \end{equation} Because of the degeneracy caused by $\rho_0 =0$ on $\Gamma$, all three equivalent forms of the compressible Euler equations are crucially used in our analysis. The equation (\ref{ce0.a}) is used for energy estimates, while (\ref{ce_vor}) and (\ref{ce_elliptic}) are used for additional elliptic-type estimates that rely on our higher-order Hardy-type inequality. \subsection{The reference domain $ I$}\label{subsec_domain} As we have already noted above, the initial domain $ I \subset \mathbb{R} $ at time $t=0$ is given by $$ I = (0,1)\,, $$ and the initial boundary points are given by $\Gamma = \partial I = \{0,1\}$. \subsection{The higher-order energy function for the case $\gamma=2$} We will consider the following higher-order energy function: \begin{align} E(t,v) & =\sum_{s=0}^4 \|\partial_t^sv(t,\cdot)\|^2_{H^{2-\frac{s}{2}}(I)} + \sum_{s=0}^2 \|\rho_0 \partial_t^{2s} v(t,\cdot)\|^2_{H^{3-s}(I)} \nonumber \\ & \qquad \qquad +\|\sqrt{\rho_0}\partial_t \partial_x^{2} v(t,\cdot)\|^2_{L^2(I)} +\|\sqrt{\rho_0}\partial_t^{3}\partial_x v(t,\cdot)\|^2_{L^2(I)}\,. 
\label{E} \end{align} We define the polynomial function $M_0$ by \begin{equation}\label{M0} M_0 = P(E(0,v)) \,. \end{equation} \subsection{The Main Result} \begin{theorem} [Existence and uniqueness for the case $\gamma=2$] \label{theorem_main} Given initial data $(u_0, \rho_0)$ such that $M_0< \infty $ and the physical vacuum condition (\ref{degen}) holds for $\rho_0$, there exists a solution to (\ref{ce0}) (and hence (\ref{ceuler})) on $[0,T]$ for $T>0$ taken sufficiently small, such that $$ \sup_{t \in [0,T]} E(t) \le 2M_0 \,. $$ Moreover, if the initial data satisfies \begin{equation}\label{uniquedata} \sum_{s=0}^3 \|\partial_t^sv(0,\cdot)\|^2_{{3-s}} + \sum_{s=0}^3 \|\rho_0 \partial_t^{2s} v(0,\cdot)\|^2_{{4-s}} < \infty \,, \end{equation} then the solution is unique. \end{theorem} \begin{remark} The case of arbitrary $\gamma >1$ is treated in Theorem \ref{thm_main2} below. \end{remark} \begin{remark} Given the regularity provided by the energy function (\ref{E}), we see that the Lagrangian flow map $\eta \in C([0,T], H^2(I))$. In our estimates for the multi-D problem \cite{CoLiSh2009}, we showed that $\eta$ gains regularity with respect to the velocity field $v$, and this fact is essential to control the geometry of the evolving free-surface. This improved regularity for $\eta$ also holds in the 1-D setting, but it is not necessary for our estimates, as no geometry is involved. \end{remark} \begin{remark} Given $u_0$ and $\rho_0$, and using the fact that $\eta(x,0)=x$, the quantity $v_t|_{t=0}$ is computed using (\ref{ce0.a}): $$ v_t|_{{t=0}} =- \left.\left({\frac{1}{\rho_0}} (\rho_0^2/\eta_x^2)_x \right)\right|_{t=0} = - 2\frac{\partial \rho_0}{\partial x} \,. $$ Similarly, for all $k \in \mathbb{N} $, $$ \partial_t^k v|_{{t=0}} =- \partial_t^{k-1}\left. \left( {\frac{1}{\rho_0}} (\rho_0^2/\eta_x^2)_x \right)\right|_{t=0} \,, $$ so that each $\partial_t^k v|_{t=0}$ is a function of space-derivatives of $u_0$ and $\rho_0$.
\end{remark} \begin{remark}Notice that our functional framework provides solutions which have optimal Sobolev regularity all the way to the boundary. Hence, in the physical case that $c \sim \sqrt{\text{dist}(\partial I)}$, no singular behavior occurs near the vacuum boundary, even though both families of characteristics cross, and in particular, meet tangentially to $\Gamma(t)$ at a point. \end{remark} \begin{remark} Because of the degeneracy of the density function $\rho_0$ at the initial boundary $ \partial I$, no compatibility conditions are required of the initial data. \end{remark} \subsection{History of prior results for the compressible Euler equations with vacuum boundary}\label{history} Some of the early developments in the theory of vacuum states for compressible gas dynamics can be found in \cite{LiSm1980} and \cite{Lin1987}. We are aware of only a handful of previous theorems pertaining to the existence of solutions to the compressible and {\it undamped} Euler equations with moving vacuum boundary. Makino \cite{M1986} considered compactly supported initial data, and treated the compressible Euler equations for a gas as being set on $\mathbb{R}^3 \times (0,T]$. With his methodology, it is not possible to track the location of the vacuum boundary (nor is it necessary); nevertheless, an existence theory was developed in this context, by a variable change that permitted the standard theory of symmetric hyperbolic systems to be employed. Unfortunately, the constraints on the data are too severe to allow for the evolution of the physical vacuum boundary. In \cite{Li2005b}, Lindblad proved existence and uniqueness for the 3-D compressible Euler equations modeling a {\it liquid} rather than a gas. For a compressible liquid, the density $\rho\ge \lambda>0$ is assumed to be a strictly positive constant on the moving vacuum boundary $\Gamma(t)$ and is thus uniformly bounded below by a positive constant. 
As such, the compressible liquid provides a uniformly hyperbolic, but characteristic, system. Lindblad used Lagrangian variables combined with Nash-Moser iteration to construct solutions. More recently, Trakhinin \cite{Tr2008} provided an alternative proof for the existence of a compressible liquid, employing a solution strategy based on symmetric hyperbolic systems combined with Nash-Moser iteration. In the presence of damping, and with mild singularity, some existence results of smooth solutions are available, based on the adaptation of the theory of symmetric hyperbolic systems. In \cite{LiYa1997}, a local existence theory was developed for the case that $c^ \alpha $ (with $0< \alpha \le 1$) is smooth across $\Gamma$, using methods that are not applicable to the local existence theory for the physical vacuum boundary. An existence theory for the small perturbation of a planar wave was developed in \cite{XuYa2005}. See also \cite{LiYa2000} and \cite{Yang2006}, for other features of the vacuum state problem. The only existence theory for the physical vacuum boundary condition that we know of can be found in the recent paper by Jang and Masmoudi \cite{JaMa2009} for the 1-D compressible gas, wherein weighted Sobolev norms are used for the energy estimates. From these weighted norms, the regularity of the solutions cannot be directly determined. Letting $d$ denote the distance function to the boundary $\partial I$, and letting $\| \cdot \|_0 $ denote the $L^2(I)$-norm, an example of the type of bound that is proved for the velocity field in \cite{JaMa2009} is the following: \begin{align} &\| d\, v\|_0^2 + \|d\, v_x\|_0^2 + \|d\, v_{xx} + 2 v_x\|_0^2 + \|d\, v_{xxx} + 2 v_{xx} - 2 d^{-1}\, v_x\|_0^2 \nonumber \\ &\qquad\qquad \qquad \qquad \qquad\qquad \qquad + \|d\, v_{xxxx} + 4 v_{xxx} - 4 d^{-1}\, v_{xx}\|_0^2 < \infty \,. 
\label{jama} \end{align} The problem with inferring the regularity of $v$ from this bound can already be seen at the level of an $H^1(I)$ estimate. In particular, the bound on the norm $\|d\, v_{xx} + 2 v_x\|_0^2$ only implies a bound on $\|d\, v_{xx}\|_0^2$ and $\|v_x\|_0^2$ if the integration by parts on the cross-term, $$ 4\int_I d\, v_{xx} \, v_x = -2\int_I d_x \, |v_x|^2 \,, $$ can be justified, which in turn requires having better regularity for $v_x$ than the a priori bounds provide. Any methodology which seeks regularity in (unweighted) Sobolev spaces for solutions must contend with this type of issue. We avoid this difficulty by constructing (sufficiently) smooth solutions to a degenerate parabolic regularization, for which the integration by parts just described can be justified. One can view our solutions as {\it degenerate viscosity solutions}. The key to their construction is our higher-order Hardy-type inequality that we provide below. \section{Notation and Weighted Spaces}\label{notation} \subsection{Differentiation and norms in the open interval $I$} Throughout the paper the symbol $D$ will be used to denote $\frac{\partial}{\partial x}$. For integers $k\ge 0$, we define the Sobolev space $H^k(I)$ to be the completion of $C^\infty(I)$ in the norm $$\|u\|_k := \left( \sum_{a\le k}\int_I \left| D^a u(x) \right|^2 dx\right)^{1/2}\,.$$ For real numbers $s\ge 0$, the Sobolev spaces $H^s(I)$ and the norms $\| \cdot \|_s$ are defined by interpolation. We use $H^1_0(I)$ to denote the subspace of $H^1(I)$ consisting of those functions $u(x)$ that vanish at $x=0$ and $x=1$.
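The induction in the next section starts from the classical Hardy inequality ($s=1$): for $u\in H^1_0(I)$, the quotient $u/d$ is controlled in $L^2(I)$ by $\|Du\|_0$. As a purely numerical illustration (ours, with the sample choice $u(x)=\sin(\pi x)\in H^1_0(I)$), one can approximate both norms on a midpoint grid and observe that their ratio stays bounded:

```python
import math

def l2(vals, h):
    """Discrete L^2(I) norm on a midpoint grid of spacing h."""
    return math.sqrt(sum(v * v for v in vals) * h)

N = 20000
h = 1.0 / N
xs = [(i + 0.5) * h for i in range(N)]             # midpoints of (0, 1)
u = [math.sin(math.pi * x) for x in xs]            # u vanishes at x = 0 and x = 1
du = [math.pi * math.cos(math.pi * x) for x in xs]
d = [min(x, 1 - x) for x in xs]                    # distance to the boundary

ratio = l2([ui / di for ui, di in zip(u, d)], h) / l2(du, h)
print(ratio)  # stays bounded as N grows, as the Hardy inequality predicts
```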
\subsection{The embedding of a weighted Sobolev space} Using $d$ to denote the distance function to the boundary $\Gamma$, and letting $p=1$ or $2$, the weighted Sobolev space $H^1_{d^p}(I)$, with squared norm given by $\int_I d(x)^p (|F(x)|^2+| DF (x)|^2 )\, dx$ for any $F \in H^1_{d^p}(I)$, satisfies the following embedding: $$H^1_{d^p}(I) \hookrightarrow H^{1 - \frac{p}{2}}(I)\,,$$ so that there is a constant $C>0$ depending only on $I$ and $p$ such that \begin{equation}\label{w-embed} \|F\|_{1-p/2} ^2 \le C \int_I d(x)^p \bigl( |F(x)|^2 + \left| DF(x) \right|^2\bigr) \, dx\,. \end{equation} See, for example, Section 8.8 in Kufner \cite{Kufner1985}. \section{A higher-order Hardy-type inequality} We will make fundamental use of the following generalization of the well-known Hardy inequality to higher-order derivatives: \begin{lemma}[Higher-order Hardy-type inequality]\label{Hardy} Let $s\ge 1$ be a given integer, and suppose that \begin{equation}\nonumber u\in H^s( I)\cap H^1_0( I)\,. \end{equation} Then if $d$ denotes the distance function to $\partial I$, we have that $\displaystyle\frac{u}{d}\in H^{s-1}( I)$ with \begin{equation} \label{Hardys} \displaystyle\left\|\frac{u}{d}\right\|_{s-1}\le C \|u\|_s. \end{equation} \end{lemma} \begin{proof} We use an induction argument. The case $s=1$ is of course the classical Hardy inequality. Let us now assume that the inequality (\ref{Hardys}) holds for a given $s\ge 1$, and suppose that $$u\in H^{s+1}( I)\cap H^1_0( I)\,.$$ Using $D$ to denote $\frac{\partial}{\partial x}$, a simple computation shows that, on $(0,\frac{1}{2}]$ where $Dd=1$ (the right half of $I$ is treated symmetrically), for $m \in \mathbb{N} $, \begin{equation} \label{Hardy1} D^m(\frac{u}{d})=\frac{f}{d^{m+1}}, \end{equation} with $$f=\sum_{k=0}^m C_m^k D^{m-k}u\ k! (-1)^k d^{m-k},$$ where $C_m^k={m\choose k}$ denotes the binomial coefficient. From the regularity of $u$, we see that $f\in H^1_0( I)$. Next, taking $m=s$ in (\ref{Hardy1}) and differentiating $f$ once more, we obtain the identity \begin{align} Df&=\sum_{k=0}^s C_s^k D^{s+1-k}u\ k!
(-1)^k d^{s-k}+\sum_{k=0}^{s-1} C_s^k D^{s-k}u\ k! (-1)^k d^{s-k-1}(s-k)\nonumber\\ &=D^{s+1}u\ d^s +\sum_{k=1}^s C_s^k D^{s+1-k}u\ k! (-1)^k d^{s-k}\nonumber\\ & \qquad\qquad\qquad +\sum_{k=0}^{s-1} C_s^{k+1} D^{s-k}u\ (k+1)! (-1)^k d^{s-k-1}\nonumber\\ &=D^{s+1}u\ d^s \,, \label{cs1} \end{align} where the last equality holds because the two sums cancel after reindexing, thanks to the identity $C_s^k\,k!\,(s-k)=C_s^{k+1}(k+1)!$. Since $f\in H^1_0( I)$, we deduce from (\ref{cs1}) that for any $x\in (0,\frac{1}{2}]$, \begin{equation*} f(x)=\int_0^x D^{s+1}u(y)\ y^s dy, \end{equation*} which by substitution in (\ref{Hardy1}) yields the identity \begin{equation*} \displaystyle D^s(\frac{u}{d})(x)=\frac{\int_0^x D^{s+1}u(y)\ y^s dy}{x^{s+1}}, \end{equation*} which, since $y^s\le x^s$ in the integrand, provides the bound \begin{equation*} \displaystyle \bigl|D^s(\frac{u}{d})(x)\bigr|\le \frac{\psi_1(x)\int_0^x |D^{s+1}u(y)|\ dy}{d(x)}, \end{equation*} where $\psi_1$ is the piecewise affine function equal to $1$ on $[0,\frac{1}{2}]$ and to $0$ on $[\frac{3}{4},1]$. Next, for any $x\in [\frac{1}{2},1)$, we obtain similarly that \begin{equation*} \displaystyle \bigl|D^s(\frac{u}{d})(x)\bigr|\le \frac{\psi_2(x)\int_x^1 |D^{s+1}u(y)|\ dy}{d(x)}, \end{equation*} where $\psi_2$ is the piecewise affine function equal to $0$ on $[0,\frac{1}{4}]$ and to $1$ on $[\frac{1}{2},1]$. It follows that for any $x\in I$: \begin{equation} \label{Hardy2} \displaystyle \bigl|D^s(\frac{u}{d})(x)\bigr|\le \frac{\psi_1(x)\int_0^x |D^{s+1}u(y)|\ dy+\psi_2(x)\int_x^1 |D^{s+1}u(y)|\ dy}{d(x)}. \end{equation} Now, with $g=\psi_1(x)\int_0^x |D^{s+1}u(y)|\ dy+\psi_2(x)\int_x^1 |D^{s+1}u(y)|\ dy$, we notice that $g\in H^1_0( I)$, with $$\|g\|_1\le C\|D^{s+1}u\|_0.$$ Therefore, by the classical Hardy inequality, we infer from (\ref{Hardy2}) that \begin{equation} \label{Hardy3} \bigl\|D^s(\frac{u}{d})\bigr\|_0\le C \|g\|_1\le C \|D^{s+1}u\|_0.
\end{equation} Since we assumed in our induction process that our generalized Hardy inequality is true at order $s$, we then have that $$\bigl\|\frac{u}{d}\bigr\|_{s-1}\le C \|u\|_s,$$ which, together with (\ref{Hardy3}), implies that $$\bigl\|\frac{u}{d}\bigr\|_{s}\le C \|u\|_{s+1},$$ and thus establishes the property at order $s+1$, and concludes the proof. \end{proof} In order to obtain estimates independent of a regularization parameter $\kappa$ defined in Section \ref{statement}, we will also need the following lemma, whose proof can be found in Lemma 1, Section 6 of \cite{CoSh2006}: \begin{lemma}\label{kelliptic} Let $\kappa>0$ and $g\in L^\infty(0,T;H^s(I))$ be given, and let $f\in H^1(0,T;H^s(I))$ be such that $$f+\kappa f_t=g\ \ \ \text{in}\ (0,T)\times I.$$ Then, $$\|f\|_{L^\infty(0,T;H^s(I))}\le C\, \max\{\|f(0)\|_s,\|g\|_{L^\infty(0,T;H^s(I))}\}.$$ \end{lemma} \section{A degenerate parabolic approximation of the compressible Euler equations in vacuum} \label{statement} For $ \kappa >0$, we consider the following nonlinear degenerate parabolic approximation of the compressible Euler system (\ref{ce0}): \begin{subequations} \label{approximate} \begin{alignat}{2} \rho_0 v_t + (\rho_0^2/\eta_x^2)_x&= \kappa [\rho_0^2 v_x]_x &&\text{in} \ \ I \times (0,T] \,, \label{approximate.a}\\ (\eta,v)&= (e,u_0 ) \ \ \ &&\text{on} \ \ I \times \{t=0\} \,, \label{approximate.b}\\ \rho_0& = 0 \ \ &&\text{ on } \Gamma \,, \label{approximate.c} \end{alignat} \end{subequations} with $ \rho _0(x) \ge C \operatorname{dist}( x, \Gamma ) $ for $x \in I$ near $\Gamma$. We will first obtain the existence of a solution to (\ref{approximate}) on a short time interval $[0,T_ \kappa ]$ (with $T_ \kappa $ depending a priori on $\kappa$). We will then perform energy estimates on this solution that will show that the time of existence, in fact, does not depend on $\kappa$, and moreover that our a priori estimates for this sequence of solutions are also independent of $\kappa$.
The existence of a solution to the compressible Euler equations (\ref{ce0}) is obtained as the weak limit as $ \kappa \to 0$ of the sequence of solutions to (\ref{approximate}). \section{Solving the parabolic $\kappa$-problem (\ref{approximate}) by a fixed-point method} For notational convenience, we will write $$ \eta ' = \frac{\partial \eta}{ \partial x}$$ and similarly for other functions. \subsection{Assumption on initial data} Given $u_0$ and $\rho_0$, and using the fact that $\eta(x,0)=x$, the quantity $v_t|_{t=0}$ for the degenerate parabolic $ \kappa $-problem is computed using (\ref{approximate.a}): $$ v_t|_{{t=0}} = \left.\left(\frac{ \kappa }{\rho_0} [\rho_0^2 v']' - {\frac{1}{\rho_0}} (\rho_0^2/\eta'^2)' \right)\right|_{t=0} = \left(\frac{ \kappa }{\rho_0} [\rho_0^2 u_0']' - 2 \rho_0' \right) \,. $$ Similarly, for all $k \in \mathbb{N} $, $$ \partial_t^k v|_{{t=0}} = \left. \frac{\partial^{k-1}}{\partial t^{k-1}}\left(\frac{ \kappa }{\rho_0} [\rho_0^2 v']' - {\frac{1}{\rho_0}} (\rho_0^2/\eta'^2)' \right)\right|_{t=0} \,. $$ These formulae make it clear that each $\partial_t^k v|_{t=0}$ is a function of space-derivatives of $u_0$ and $\rho_0$. For the purposes of constructing solutions to the degenerate parabolic $ \kappa $-problem (\ref{approximate}), it is convenient to smooth the initial data. By standard convolution methods, we assume that $u_0 \in C^ \infty (I)$ and that $\rho_0 \in C^ \infty (I)$ and satisfies (\ref{degen1}) and (\ref{degen2}).
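Note that, despite the factor $1/\rho_0$, the expression for $v_t|_{t=0}$ is smooth up to the boundary, since the product rule gives $\frac{\kappa}{\rho_0}[\rho_0^2 u_0']' = \kappa\,(2\rho_0' u_0' + \rho_0 u_0'')$. A small numerical sketch (with our own illustrative choices $\kappa=1/2$, $\rho_0=x(1-x)$, $u_0=\sin(\pi x)$, none of which come from the text) confirms that the two forms agree:

```python
import math

kappa = 0.5
rho  = lambda x: x * (1 - x)             # illustrative rho_0 with physical vacuum
drho = lambda x: 1 - 2 * x
u0   = lambda x: math.sin(math.pi * x)
du0  = lambda x: math.pi * math.cos(math.pi * x)
d2u0 = lambda x: -math.pi ** 2 * math.sin(math.pi * x)

def vt_raw(x, h=1e-5):
    """(kappa/rho_0)[rho_0^2 u_0']' - 2 rho_0', differentiating by finite differences."""
    g = lambda y: rho(y) ** 2 * du0(y)
    return kappa / rho(x) * (g(x + h) - g(x - h)) / (2 * h) - 2 * drho(x)

def vt_expanded(x):
    """The same quantity after the product rule removes the 1/rho_0 factor."""
    return kappa * (2 * drho(x) * du0(x) + rho(x) * d2u0(x)) - 2 * drho(x)

diffs = [abs(vt_raw(x) - vt_expanded(x)) for x in (0.01, 0.3, 0.5, 0.9)]
print(max(diffs))  # tiny: both forms agree up to finite-difference error
```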
\subsection{Functional framework for the fixed-point scheme and some notational conventions} For $T>0$, we shall denote by $\mathcal{X}_T$ the following Hilbert space: \begin{align} \mathcal{X}_T=\{&v\in L^2(0,T;H^2(I))|\ \partial_t^4 v\in L^2(0,T;H^1(I)), \ \rho_0 \partial_t^4 v\in L^2(0,T;H^2(I)), \\ & \ \ \partial_t^3 v\in L^2(0,T;H^2(I)), \ \rho_0 \partial_t^3 v\in L^2(0,T;H^3(I)) \}\,, \end{align} endowed with its natural Hilbert norm: \begin{align*} \|v\|_{\mathcal{X}_T}^2 & = \| v\|_{L^2(0,T;H^2(I))}^2 + \| \partial_t^4 v\| _{L^2(0,T;H^1(I))}^2 + \| \rho_0 \partial_t^4 v\| _{L^2(0,T;H^2(I))}^2 \\ & \qquad + \| \partial_t^3 v\| _{L^2(0,T;H^2(I))}^2 + \| \rho_0 \partial_t^3 v\| _{L^2(0,T;H^3(I))}^2 \,. \end{align*} For $M>0$ given sufficiently large, we define the following closed, bounded, convex subset of $\mathcal{X}_T$: \begin{align}\label{ctm} \mathcal{C}_T(M)=\{ v\in \mathcal{X}_T \ : \ \|v\|_{\mathcal{X}_T}\le M\}, \end{align} which is indeed nonempty if $M$ is large enough. Henceforth, we assume that $T>0$ is given such that independently of the choice of $v \in \mathcal{C} _T(M)$, $$ \eta(x,t) =x + \int_0^t v(x,s)ds $$ is injective for $t \in [0,T]$, and that ${\frac{1}{2}} \le \eta'(x,t) \le {\frac{3}{2}} $ for $t\in [0,T]$ and $x \in \overline I$. This can be achieved by taking $T>0$ sufficiently small: since $\eta'(x,0)=1$, notice that $$ \| \eta'( \cdot ,t) -1\|_1 = \| \int_0^t v'(\cdot ,s) ds\|_1 \le \sqrt{T}M \,. $$ The space $ \mathcal{X} _T$ will be appropriate for our fixed-point methodology to prove existence of a solution to our $\kappa$-regularized parabolic problem (\ref{approximate}). Finally, we define the polynomial function $ \mathcal{N} _0$ of norms of the initial data as follows: \begin{equation}\label{N0} \mathcal{N} _0 = P( \| \sqrt{\rho_0} \partial_t^5 v'(0)\|_0, \| \partial_t^4 v(0)\|_1, \| \partial_t^3 v(0)\|_2 , \| \partial_t^2 v(0)\|_2 , \| \partial_t v(0)\|_2, \| u_0\|_2) \,.
\end{equation} \subsection{A theorem for the existence and uniqueness of solutions to the parabolic $\kappa $-problem} We will make use of the Tychonoff fixed-point Theorem in our fixed-point procedure (see, for example, \cite{Deimling1985}). Recall that this states that for a reflexive separable Banach space $\mathcal{X}_T$, and $\mathcal{C}_T(M) \subset \mathcal{X}_T$ a closed, convex, bounded subset, if $F : \mathcal{C}_T(M) \to \mathcal{C}_T(M)$ is weakly sequentially continuous into $\mathcal{X}_T$, then $F$ has at least one fixed point. \begin{theorem}[Solutions to the parabolic $ \kappa $-problem]\label{thm_ksoln} Given our smooth initial data, for $T $ taken sufficiently small, there exists a unique solution $v \in \mathcal{X}_T$ to the degenerate parabolic $ \kappa $-problem (\ref{approximate}). \end{theorem} \subsection{Linearizing the degenerate parabolic $ \kappa $-problem} Given $\bar v \in \mathcal{C}_T(M)$, and defining $\bar \eta (x,t) =x + \int_0^t \bar v(x,\tau)d\tau$, we consider the linear equation for $v$: \begin{equation} \label{linear1} \rho_0 v_t+ \left[\frac{\rho_0^2}{\bar\eta'^2}\right]'-\kappa [\rho_0^2 v']'=0 \,. \end{equation} We will prove the following: \begin{enumerate} \item the linear problem (\ref{linear1}) admits a unique solution $v$; \item $v \in \mathcal{C}_T(M)$ for $T$ taken sufficiently small; \item the map $\bar v \mapsto v$ takes $\mathcal{C}_T(M)$ into itself and is sequentially weakly continuous in $\mathcal{X}_T$. \end{enumerate} The solution to our parabolic $ \kappa $-problem (\ref{approximate}) will then be obtained as a fixed-point of the map $\bar v \mapsto v$ in $\mathcal{X}_T$ via the Tychonoff fixed-point Theorem. In order to use our higher-order Hardy-type inequality, Lemma \ref{Hardy}, it will be convenient to introduce the new variable $$X=\rho_0 v'$$ which then belongs to $H_0^1( I)$.
By a simple computation, we see that (\ref{linear1}) is equivalent to \begin{equation*} v_t'+ \left[\frac{2}{\bar\eta'}\Bigl(\frac{\rho_0}{\bar\eta'}\Bigr)'\right]'-\kappa \left[\frac{1}{\rho_0}\Bigl[\rho_0^2 v'\Bigr]'\right]'=0, \end{equation*} and hence that \begin{subequations} \label{div} \begin{alignat}{2} \frac{X_t}{\rho_0}-\kappa \left[\frac{1}{\rho_0} (\rho_0\ X)'\right]'&=-\left[\frac{2}{\bar\eta'}\Bigl(\frac{\rho_0}{\bar\eta'}\Bigr)'\right]' && \ \text{ in }\ [0,T]\times I,\\ X&=0 && \ \text{ on }\ \ [0,T]\times\partial I,\\ X|_{t=0}&=\rho_0 u_0' && \ \text{ on } \ I. \end{alignat} \end{subequations} We shall therefore solve the degenerate linear parabolic problem (\ref{div}) with Dirichlet boundary conditions, which (as we will prove) will, perhaps surprisingly, admit a solution with arbitrarily high space regularity (depending on the regularity of the right-hand side and the initial data, of course), and not just an $H_0^1(I)$-type weak solution. After we obtain the solution $X$, we will then easily find our solution $v$ to (\ref{linear1}). In order to construct our fixed-point, we will need to obtain estimates for $X$ (and hence $v$) with high space regularity; in particular, we will need to study the fifth time-differentiated problem. For this purpose, it is convenient to define the new variable \begin{equation}\label{defineY} Y=\partial_t^5 X =\rho_0 \partial_t^5 v'\,, \end{equation} and consider the following equation for $Y$: \begin{subequations} \label{divt} \begin{alignat}{2} \frac{Y_{t}}{\rho_0}-\kappa \left[\frac{1}{\rho_0} (\rho_0\ Y)'\right]'&=-\partial_t^5\left[\frac{2}{\bar\eta'}\Bigl(\frac{\rho_0}{\bar\eta'}\Bigr)'\right]' && \ \hbox{in}\ [0,T]\times I, \label{divt.a}\\ Y&=0&& \ \hbox{on}\ [0,T]\times\partial I, \label{divt.b} \\ Y|_{t=0}&=Y_{\text{init}} && \ \hbox{in}\ I \,, \label{divt.c} \end{alignat} \end{subequations} where $Y_{\text{init}} = \rho_0 \partial_t^5 v'|_{t=0}$.
\subsection{Existence of a weak solution to the linear problem (\ref{divt}) by a Galerkin scheme} Let $(e_n)_{n\in\mathbb N}$ denote a Hilbert basis of $H_0^1( I)$, with each $e_n$ being of class $H^2( I)$. Such a choice of basis is indeed possible as we can take for instance the eigenfunctions of the Laplace operator on $I$ with vanishing Dirichlet boundary conditions. We then define the Galerkin approximation at order $n\ge 1$ of (\ref{divt}) as being of the form $Y_n=\sum_{i=0}^n \lambda_i^n(t) e_i$ such that: \begin{subequations} \label{divn} \begin{align} \forall k\in\{0,\ldots,n\},\ \left(\frac{{Y_n}_t}{\rho_0} \,, e_k \right)_{L^2( I)} + \kappa \left( (\rho_0 Y_n)' \,, \frac{ e_k' }{\rho_0} \right)_{L^2( I)} &=\left(\partial_t^5\bigl[\frac{2}{\bar\eta'}\bigl[\frac{\rho_0}{\bar\eta'}\bigr]'\bigr] \ ,\ e_k'\right)_{L^2( I)}\ \hbox{in}\ [0,T],\\ \lambda_i^n(0)&=(Y_{\text{init}},e_i)_{L^2( I)}. \end{align} \end{subequations} Since each $e_i$ is in $H^2( I)\cap H_0^1( I)$, we have by our high-order Hardy-type inequality (\ref{Hardy}) that $$\frac{e_i}{\rho_0}\in H^1( I) \,;$$ therefore, each integral written in (\ref{divn}) is well-defined. Furthermore, as the $e_i$ are linearly independent, so are the $\frac{e_i}{\sqrt{\rho_0}}$, and therefore the determinant of the matrix $$\Bigl(\bigl(\frac{e_i}{\sqrt{\rho_0}},\frac{e_j}{\sqrt{\rho_0}}\bigr)_{L^2( I)}\Bigr)_{(i,j)\in\mathbb N_n\times\mathbb N_n,\ \mathbb N_n=\{0,\ldots,n\}}$$ is nonzero. This implies that our finite-dimensional Galerkin approximation (\ref{divn}) is a well-defined first-order differential system of dimension $n+1$, which therefore has a solution on a time interval $[0,T_n]$, where $T_n$ a priori depends on the dimension $n$ of the Galerkin approximation.
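The structure of the Galerkin scheme (\ref{divn}) can be illustrated on a toy, non-degenerate analogue: the heat equation $u_t = \kappa u_{xx}$ on $(0,1)$ with Dirichlet conditions, for which the eigenfunctions $e_k(x)=\sin(k\pi x)$ of the Dirichlet Laplacian diagonalize the Galerkin ODE system. A minimal numerical sketch (all parameter values are illustrative and not taken from the text; the degenerate weight $\rho_0$ is omitted in this toy model):

```python
import numpy as np

kappa, n_modes, dt, n_steps = 0.1, 16, 1e-3, 100

x = np.linspace(0.0, 1.0, 401)
dx = x[1] - x[0]
u0 = x * (1.0 - x)                        # smooth data vanishing on the boundary

k = np.arange(1, n_modes + 1)
basis = np.sin(np.outer(k, np.pi * x))    # rows: e_k sampled on the grid

# L2 projection of u0 onto the basis (||e_k||^2 = 1/2, hence the factor 2);
# a plain Riemann sum suffices since the integrand vanishes at the endpoints.
lam0 = 2.0 * (basis * u0).sum(axis=1) * dx

# In this basis the Galerkin ODE system decouples:
#   lam_k'(t) = -kappa * (k*pi)^2 * lam_k(t).
# Backward (implicit) Euler is unconditionally stable.
lam = lam0.copy()
for _ in range(n_steps):
    lam = lam / (1.0 + dt * kappa * (k * np.pi) ** 2)

u_galerkin = lam @ basis
u_exact = (lam0 * np.exp(-kappa * (k * np.pi) ** 2 * dt * n_steps)) @ basis
```

In the degenerate problem (\ref{divn}) the mass matrix $\bigl(\int_I e_i e_j/\rho_0\bigr)_{ij}$ is no longer diagonal, which is precisely why the text must verify its invertibility via the linear independence of the $e_i/\sqrt{\rho_0}$.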
In order to prove that $T_n=T$, with $T$ independent of $n$, we notice that since $Y_n$ is a linear combination of the $e_i$ ($i\in \mathbb N_n$), we have that \begin{equation} \label{g1} \left(\frac{{Y_n}_t}{\rho_0}-\kappa [\frac{1}{\rho_0} (\rho_0\ Y_n)']',\ Y_n\right)_{L^2( I)} =\left(\partial_t^5\bigl[\frac{2}{\bar\eta'}\bigl[\frac{\rho_0}{\bar\eta'}\bigr]'\bigr]\ ,\ Y_n'\right)_{L^2( I)}. \end{equation} Since ${Y_n}\in H_0^1( I)$ and $[\frac{1}{\rho_0} (\rho_0\ Y_n)']'\in H^1( I)$, integration by parts yields \begin{align} - \int_I [\frac{1}{\rho_0} (\rho_0\ Y_n)']'\ Y_n=\int_I [\frac{1}{\rho_0} (\rho_0\ Y_n)']Y_n' =\int_I {Y_n'}^2+ \int_I \rho_0'\frac{Y_n}{\rho_0} Y_n'. \label{g2} \end{align} Next, using our higher-order Hardy-type inequality, we see that $\frac{Y_n}{\rho_0}\in H^1( I)$, and thus $$ \int_I \rho_0'\frac{Y_n}{\rho_0} Y_n' =-\int_I \rho_0'\frac{1}{\rho_0} Y_n' Y_n+\int_I \frac{\rho_0'^2}{\rho_0^2} {Y_n^2}-\int_I \rho_0''\frac{Y_n^2}{\rho_0}, $$ which implies that $$ \int_I \rho_0'\frac{Y_n}{\rho_0} Y_n' =\frac{1}{2} \int_I \frac{\rho_0'^2}{\rho_0^2} {Y_n^2}-\frac{1}{2} \int_I \rho_0''\frac{Y_n^2}{\rho_0}. $$ Substitution of this identity into (\ref{g1}) and (\ref{g2}) yields \begin{equation*} \frac{1}{2} \frac{d}{dt}\int_I\frac{Y_n^2}{\rho_0}- \frac{\kappa}{2} \int_I \rho_0''\frac{Y_n^2}{\rho_0} +\kappa\int_I Y_n'^2+\frac{1}{2}\kappa\int_I \frac{\rho_0'^2}{\rho_0^2} Y_n^2= \int_I \partial_t^5\bigl[\frac{2}{\bar\eta'}\bigl[\frac{\rho_0}{\bar\eta'}\bigr]'\bigr] Y_n'\ , \end{equation*} which shows that (since our given $\bar v\in \mathcal{C}_T(M)$): \begin{equation*} \frac{d}{dt}\int_I\frac{Y_n^2}{\rho_0}- \kappa \|\rho_0''\|_{L^\infty} \int_I \frac{Y_n^2}{\rho_0}+{\kappa}\int_I Y_n'^2+\kappa\int_I \rho_0'^2\frac{Y_n^2}{\rho_0^2}\le C_\kappa, \end{equation*} for a constant $C_ \kappa $ depending on $ \kappa $.
Consequently, $T_n=T$ with $T$ independent of $n$, and \begin{equation} \sup_{[0,T]} \int_I \frac{Y_n^2}{\rho_0} +{\kappa}\int_0^T\int_I Y_n'^2\le C_\kappa T + C \mathcal{N} _0 \,, \end{equation} where $ \mathcal{N} _0$ is defined in (\ref{N0}). Thus, there exists a subsequence of $(Y_n)$ which converges weakly to some $Y\in L^2(0,T;H_0^1( I))$, which satisfies \begin{equation} \label{g3} \sup_{[0,T]}\int_I \frac{Y^2}{\rho_0} +{\kappa}\int_0^T\int_I Y'^2\le C_\kappa T + C \mathcal{N} _0 \,. \end{equation} With (\ref{defineY}), we see that \begin{equation}\label{pt5v} \|\rho_0 \partial_t^5v' \|_{L^2(0,T; H^1(I))} \le C_\kappa T + C \mathcal{N} _0 \,. \end{equation} Furthermore, it can also be shown using standard arguments that $Y$ is a solution of (\ref{divt}) (where (\ref{divt.a}) is satisfied almost everywhere in $[0,T]\times I$ and holds in a variational sense for all test functions in $L^2(0,T;H_0^1( I))$), and that \begin{equation}\nonumber \frac{Y_t}{\rho_0} \in L^2(0,T; H ^{-1} (I)) \,. \end{equation} Now, with the functions $$ X_i = \rho_0\left.\frac{ \partial^i v'}{ \partial t^i} \right|_{t=0} \text{ for } i=0,1,2,3,4\,, $$ we define \begin{equation} \label{Z} Z(t,x)= \int_0^t Y(\cdot,x) = \rho_0(x) \int_0^t \partial_t^5 v' ( \cdot, x ) = \rho_0(x) \partial_t^4 v'( t, x) - X_4(x) \,, \end{equation} and, by Taylor's theorem with iterated-integral remainder, \begin{equation} \label{X} X(t,x)=\sum_{i=0}^4 \frac{X_i}{i!}\, t^i+\int_0^t\int_0^{t_4}\int_0^{t_3}\int_0^{t_2} Z(t_1,x) dt_1dt_2dt_3dt_4 \,. \end{equation} We then see that $X\in C^0([0,T];H_0^1( I))$ is a solution of (\ref{div}), with $\partial_t^5X=Y$. In order to obtain a fixed-point for the map $\bar v \mapsto v$, we need to establish better space regularity for $Z$, and hence $X$ and $v$.
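The definition of $X$ rests on Taylor's theorem with iterated-integral remainder: four successive time integrations of $\partial_t^4 X - X_4$ recover $X$ up to its fourth-order Taylor polynomial at $t=0$. A symbolic check of this one-dimensional identity (assuming \texttt{sympy}, with an arbitrary smooth stand-in for $t\mapsto X(t,x)$ at fixed $x$):

```python
import sympy as sp

t = sp.symbols('t')
X = sp.exp(t)                      # smooth stand-in for t -> X(t, x) at fixed x
Xi = [sp.diff(X, t, i).subs(t, 0) for i in range(5)]   # X_i = d_t^i X |_{t=0}
Z = sp.diff(X, t, 4) - Xi[4]       # Z = d_t^4 X - X_4, as in (Z)

# Four nested time integrations of Z, as in the definition (X).
s = sp.symbols('t1 t2 t3 t4')
expr = Z.subs(t, s[0])
for lo_var, up in zip(s, list(s[1:]) + [t]):
    expr = sp.integrate(expr, (lo_var, 0, up))

taylor = sum(Xi[i] * t**i / sp.factorial(i) for i in range(5))
assert sp.simplify(taylor + expr - X) == 0
```

The check succeeds for any smooth stand-in; the factorials come from the Taylor polynomial, while the quadruple integral is the standard integral remainder.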
\subsection{Improved space regularity for $Z$} We introduce the variable $\check v$ defined by \begin{equation*} \check v(t,x)=\int_0^x \frac{X(t,\cdot)}{\rho_0(\cdot)} \,, \end{equation*} so that $\check v$ vanishes at $x=0$, which will allow us to employ the Poincar\'{e} inequality with this variable. It is easy to see that \begin{equation*} X=\rho_0 \check v', \end{equation*} and \begin{equation}\label{Z2} Z=\rho_0 \partial_t^4\check v'. \end{equation} Thanks to the standard Hardy inequality, we thus have that \begin{equation*} \|\partial_t^4\check v'\|_0\le C \|Z\|_1, \end{equation*} and hence by Poincar\'e's inequality, \begin{equation} \label{g4} \|\partial_t^4\check v(t,\cdot)\|_1\le C\|Z(t,\cdot)\|_1. \end{equation} With $$ F_0= \frac{Y_{\text{init}}}{\rho_0} + \left. \partial_t^4\bigl[\frac{2}{\bar\eta'}\bigl[\frac{\rho_0}{\bar\eta'}\bigr]'\bigr]' \right|_{t=0}\,, $$ our starting point is the equation \begin{equation*} \frac{Y}{\rho_0}-\kappa [\frac{1}{\rho_0} (\rho_0\ Z)']' =-\partial_t^4\bigl[\frac{2}{\bar\eta'}\bigl[\frac{\rho_0}{\bar\eta'}\bigr]'\bigr]' + F_0 \ \hbox{in}\ [0,T]\times I, \end{equation*} which follows from our definition of $Z$ given in (\ref{Z}) and time-integration of (\ref{divt.a}). From this equation, we infer that \begin{equation*} \kappa \bigl\|[\frac{1}{\rho_0}(\rho_0\ Z)']'\bigr\|_0\le \bigl\|\partial_t^4\bigl[\frac{2}{\bar\eta'}\bigl[\frac{\rho_0}{\bar\eta'}\bigr]'\bigr]'\bigr\|_0 +\bigl\|\frac{Y}{\rho_0}\bigr\|_0 + \|F_0\|_0 \,. \end{equation*} By the standard Hardy inequality and the fact that $\bar v\in \mathcal{C}_T(M)$, we obtain the estimate \begin{equation*} \kappa \bigl\|[\frac{1}{\rho_0}(\rho_0\ Z)']'\bigr\|_0\le C_M \sqrt{T} + C\|{Y}\|_1 + \mathcal{N} _0 \,, \end{equation*} where $C_M$ is a constant that depends on $M$.
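The standard Hardy inequality invoked here states that, for $u\in H^1(0,1)$ with $u(0)=0$, one has $\|u/x\|_{L^2(0,1)} \le 2\,\|u'\|_{L^2(0,1)}$ (the text writes a generic constant $C$). A symbolic spot-check with one admissible test function (assuming \texttt{sympy}; the test function is an illustrative choice, not from the text):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
u = x * (1 - x)        # test function with u(0) = 0 (illustrative choice)

lhs = sp.integrate((u / x) ** 2, (x, 0, 1))           # ||u/x||^2 = 1/3
rhs = 4 * sp.integrate(sp.diff(u, x) ** 2, (x, 0, 1))  # 4 ||u'||^2 = 4/3
assert bool(lhs <= rhs)
```

Of course a single test function proves nothing; the inequality itself follows from integration by parts and Cauchy--Schwarz, exactly as used for the higher-order Hardy-type inequality of Lemma \ref{Hardy}.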
In particular, using (\ref{Z2}), we see that $$ \frac{1}{\rho_0}(\rho_0\ Z)' = \rho_0 \partial_t^4\check v''+2 \rho_0'\partial_t^4\check v' $$ so that \begin{equation*} \kappa \bigl\|\rho_0 \partial_t^4\check v'''+3 \rho_0'\partial_t^4\check v'' + 2 \rho_0'' \partial_t^4 \check v'\bigr\|_0 \le C_M \sqrt{T} + C\|{Y}\|_1 + \mathcal{N} _0 \,, \end{equation*} which implies that \begin{subequations} \begin{align} \kappa \bigl\|(\rho_0 \partial_t^4\check v)'''\bigr\|_0 & \le C_M \sqrt{T} + C\|{Y}\|_1 + \mathcal{N} _0+ \kappa \|\rho_0''' \partial_t^4 \check v\|_0 + \kappa \|3\rho_0''\partial_t^4 \check v'\|_0\nonumber\\ & \le C_M \sqrt{T} + C\|{Y}\|_1+ \mathcal{N} _0+ \kappa( \|\rho_0'''\|_{L^\infty}+ 3 \|\rho_0''\|_{L^\infty} )\|Z\|_1\nonumber \,, \end{align} \end{subequations} where we have used (\ref{g4}) in the second inequality above. Having established in (\ref{g4}) that $(\rho_0\partial_t^4\check v)\in H_0^1( I)$, elliptic regularity shows that \begin{equation} \label{g5} \kappa \|\rho_0 \partial_t^4\check v\|_3 \le C_M \sqrt{T} + C\|{Y}\|_1 + \mathcal{N} _0 + \kappa( \|\rho_0'''\|_{L^\infty}+ 3 \|\rho_0''\|_{L^\infty} )\|Z\|_1\,. \end{equation} Now, thanks to our high-order Hardy-type inequality, we infer from (\ref{g5}) that \begin{equation} \label{g6} \kappa \| \partial_t^4\check v\|_2 \le C_M \sqrt{T} + C\|Y\|_1 + \mathcal{N} _0 + \kappa( \|\rho_0'''\|_{L^\infty}+ 3 \|\rho_0''\|_{L^\infty} )\|Z\|_1\,. \end{equation} Next we see that (\ref{g5}) implies that \begin{equation*} \kappa \|\rho_0 \partial_t^4\check v'+\rho_0' \partial_t^4\check v\|_2 \le C_M \sqrt{T} + C\|Y\|_1+ \mathcal{N} _0 + \kappa( \|\rho_0'''\|_{L^\infty}+ 3 \|\rho_0''\|_{L^\infty} )\|Z\|_1\,, \end{equation*} which thanks to (\ref{g6}) and (\ref{Z2}) implies that \begin{equation} \label{g7} \kappa \|Z\|_2 \le C_M \sqrt{T} + C\|Y\|_1 + \mathcal{N} _0+ \kappa( \|\rho_0'''\|_{L^\infty}+ 3 \|\rho_0''\|_{L^\infty} )\|Z\|_1\,.
\end{equation} \subsection{Definition of $v$ and the existence of a fixed-point} We are now in a position to define $v$ in the following fashion: let us first define on $[0,T]$ \begin{equation*} f(t)=u_0(0)-\int_0^t \frac{1}{\rho_0}\bigl[\frac{\rho_0^2}{\bar\eta'^2}\bigr]'(\cdot,0)+\kappa\int_0^t\frac{1}{\rho_0} [\rho_0 X]'(\cdot,0), \end{equation*} which is well-defined thanks to (\ref{g7}) and (\ref{g3}). We next define \begin{equation} v(t,x)=f(t)+\check v(t,x). \end{equation} We then notice that from (\ref{div}), we immediately have that \begin{equation*} v_t'+ \bigl[\frac{1}{\rho_0} \bigl[\frac{\rho_0^2}{\bar\eta'^2}\bigr]'\bigr]'-\kappa \bigl[\frac{1}{\rho_0} [\rho_0^2 v']'\bigr]'=0, \end{equation*} from which we infer that in $[0,T]\times I$ \begin{equation*} v_t+ \frac{1}{\rho_0} \bigl[\frac{\rho_0^2}{\bar\eta'^2}\bigr]'-\kappa \frac{1}{\rho_0} [\rho_0^2 v']'=g(t), \end{equation*} for some function $g$ depending only on $t$. By taking the trace of this equation on the left end-point $x=0$, we see that \begin{equation*} v_t(t,0)+ \frac{1}{\rho_0} \bigl[\frac{\rho_0^2}{\bar\eta'^2}\bigr]'(t,0)-\kappa \frac{1}{\rho_0} [\rho_0^2 v']'(t,0)=g(t), \end{equation*} which together with the identity $$v_t(t,0)=f_t(t)=-\frac{1}{\rho_0} \bigl[\frac{\rho_0^2}{\bar\eta'^2}\bigr]'(t,0)+\kappa \frac{1}{\rho_0} [\rho_0^2 v']'(t,0)$$ shows that $g(t)=0$. Therefore, $v$ is a solution of (\ref{linear1}), and also satisfies by construction $v(0,\cdot)=u_0(\cdot)$. We can now establish the existence of a fixed-point for the mapping $\bar v\rightarrow v$ in $\mathcal{C}_T(M)$, with $T$ taken sufficiently small and depending a priori on $\kappa$. 
We first notice that, thanks to the estimates (\ref{g7}) and (\ref{g3}), we have the inequality \begin{equation*} \|\partial_t^4 f\|_{L^2(0,T)}\le \mathcal{N} _0+\sqrt{T} C_M, \end{equation*} which together with (\ref{g6}) and (\ref{g3}) provides the estimate \begin{equation} \label{g8} \|\partial_t^4 v\|_{L^2(0,T;H^2( I))}\le \mathcal{N} _0+C_\kappa \sqrt{T} C_M \,. \end{equation} Then, (\ref{g5}) implies that \begin{equation} \label{g8b} \|\rho_0\partial_t^4 v\|_{L^2(0,T;H^3( I))}+ \|\partial_t^4 v\|_{L^2(0,T;H^2( I))}\le \mathcal{N} _0+C_\kappa \sqrt{T} C_M \,, \end{equation} and combining this with (\ref{pt5v}) shows that \begin{equation} \label{g8c} \|\partial_t^5 v\|_{L^2(0,T;H^1( I))}+ \|\rho_0 \partial_t^5 v'\|_{L^2(0,T;H^1( I))}\le \mathcal{N} _0+C_\kappa \sqrt{T} C_M \,. \end{equation} In turn, (\ref{g8b}) shows that for \begin{equation} \label{g9} T\le \frac{\mathcal{N} _0^2}{C_\kappa C_M}, \end{equation} $v\in \mathcal{C}_T(M)$. Moreover, it is clear that there is only one solution $v\in L^2(0,T;H^2( I))$ of (\ref{linear1}) with $v(0)=u_0$ (where this initial condition is well-defined as $\|v_t\|_{L^2(0,T;H^1( I))}\le \mathcal{N} _0 +C_\kappa \sqrt{T} C_M$): if we denote by $w$ another solution with the same regularity, the difference $\delta v=v-w$ satisfies $\delta v(0,\cdot)=0$ and $\rho_0\delta v_t-\kappa [\rho_0^2\delta v']'=0$, so that $$\frac{1}{2}\frac{d}{dt}\int_I \rho_0 \delta v^2 +\kappa \int_I \rho_0^2 \delta v'^2=0,$$ which with $\delta v(0,\cdot)=0$ implies $\delta v=0$. So the mapping $\bar v\mapsto v$ is well-defined, and thanks to (\ref{g8}) is a mapping from $\mathcal{C}_T(M)$ into itself for $T=T_\kappa$ satisfying (\ref{g9}). As it is furthermore clear that it is weakly continuous in the $L^2(0,T_\kappa;H^2( I))$ norm, the Tychonoff fixed-point theorem \cite{Deimling1985} provides us with the existence of a fixed-point to this mapping.
Such a fixed-point, which we denote by $v_\kappa$, is a solution of the nonlinear degenerate parabolic $\kappa$-problem (\ref{approximate}), with initial condition $v_\kappa(0,\cdot)=u_0(\cdot)$. It should be clear that the fixed-point $v_ \kappa $ also satisfies (\ref{pt5v}) and (\ref{g8b}) so that \begin{align} &\|\rho_0 \partial_t^4 v_ \kappa \|_{L^2(0,T;H^3( I))} + \|\partial_t^4 v_ \kappa \|_{L^2(0,T;H^2( I))} \nonumber \\ & \qquad + \|\rho_0 \partial_t^5 v'_ \kappa \|_{L^2(0,T;H^1( I))} + \|\partial_t^5 v_ \kappa \|_{L^2(0,T;H^1( I))} \le M \,. \label{g8d} \end{align} In the next section, we establish $\kappa $-independent estimates for $v_\kappa$ in $L^2(0,T_\kappa;H^2( I))$ (which are indeed possible because our parabolic approximate $ \kappa $-problem respects the structure of the original compressible Euler equations (\ref{ce0})), from which we infer a short time-interval of existence $[0,T]$, with $T$ independent of $\kappa$. These $ \kappa $-independent estimates will allow us to pass to the weak limit of the sequence $v_ \kappa $ as $ \kappa \to 0$ to obtain the solution to (\ref{ce0}). \section{Asymptotic estimates for $v_\kappa$ which are independent of $\kappa$.} \subsection{The higher-order energy function appropriate for the asymptotic estimates as $ \kappa \to 0$.} Our objective in this section is to show that the higher-order energy function $E$ defined in (\ref{E}) satisfies the inequality \begin{equation}\label{poly} \sup_{t \in [0,T]} E(t) \le M_0 + C\,T\, P( \sup_{t \in [0,T]} E(t)) \,, \end{equation} where $P$ denotes a polynomial function, and for $T>0$ taken sufficiently small, with $M_0$ defined in (\ref{M0}). The norms in $E$ are for solutions $v_ \kappa $ to our degenerate parabolic $ \kappa $-problem (\ref{approximate}). According to Theorem \ref{thm_ksoln}, $v_\kappa \in \mathcal{X}_{T_ \kappa }$ with the additional bound $ \|\partial_t^4 v_ \kappa \|_{L^2(0,T_ \kappa ;H^2( I))} < \infty $ provided by (\ref{g8}).
As such, the energy function $E$ is continuous with respect to $t$, and the inequality (\ref{poly}) would thus establish a time interval of existence and a bound which are both independent of $ \kappa $. For the sake of notational convenience, we shall denote $v_\kappa$ by $\tilde v$. \subsection{A $\kappa$-independent energy estimate on the fifth time-differentiated problem} Our starting point shall be the fifth time-differentiated problem of (\ref{approximate}), for which we have, by naturally using $\partial_t^5 \tilde v\in L^2(0,T_\kappa;H^1( I))$ as a test function, the following identity: \begin{equation} \label{a1} \underbrace{\frac{1}{2} \frac{d}{dt}\int_I{\rho_0}|\partial_t^5\tilde v|^2}_{\mathcal{I} _1} \ - \ \underbrace{\int_I \partial_t^5\bigl[\frac{\rho_0^2}{\tilde\eta'^2}\bigr] \partial_t^5\tilde v'}_{ \mathcal{I} _2} \ + \ \underbrace{\kappa\int_I \rho_0^2(\partial_t^5\tilde v')^2=0 }_{ \mathcal{I} _3}\,. \end{equation} In order to form the exact time-derivative in term $ \mathcal{I} _1$, we rely on the fact that the solutions we constructed to (\ref{approximate}) satisfy $ \partial_t^6 \tilde v \in L^2(0,T_ \kappa ; L^2 (I))$, which follows from the relation \begin{equation*} \partial_t^6 \tilde v = -\partial_t^5\left[\frac{2}{\tilde\eta'}\Bigl(\frac{\rho_0}{\tilde\eta'}\Bigr)'\right]+ \frac{\kappa}{\rho_0}\Bigl[\rho_0^2 \partial_t^5 \tilde v'\Bigr]' \,, \end{equation*} and the estimate (\ref{g8d}). Upon integration in time, both terms $ \mathcal{I} _1$ and $ \mathcal{I} _3$ provide sign-definite energy contributions, so we focus our attention on the nonlinear estimates required of the term $ \mathcal{I} _2$.
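The treatment of $\mathcal{I}_2$ rests on the observation that its leading part is an exact time derivative modulo a lower-order remainder: with $w=\partial_t^4\tilde v'$ and $\partial_t\tilde\eta'=\tilde v'$, one has $\frac{d}{dt}\bigl[w^2\rho_0^2/\tilde\eta'^3\bigr] = 2\,w\,w_t\,\rho_0^2/\tilde\eta'^3 - 3\,w^2\,\tilde v'\,\rho_0^2/\tilde\eta'^4$. A symbolic verification of this product-rule identity (assuming \texttt{sympy}; the function names are placeholders for the quantities just listed):

```python
import sympy as sp

t = sp.symbols('t')
rho0 = sp.symbols('rho0', positive=True)   # spatial factor, constant in time
w = sp.Function('w')(t)                    # stands for d_t^4 v'
ep = sp.Function('ep', positive=True)(t)   # stands for eta' (so d_t ep = v')

lhs = sp.diff(w**2 * rho0**2 / ep**3, t)
rhs = (2 * w * sp.diff(w, t) * rho0**2 / ep**3
       - 3 * w**2 * sp.diff(ep, t) * rho0**2 / ep**4)
assert sp.simplify(lhs - rhs) == 0
```

This is exactly the rearrangement used in the computation of $-\mathcal{I}_2$ below, where the remainder term is then bounded by $C\,t\,P(\sup_{[0,t]}E)$ after time integration.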
We see that \begin{align*} - \mathcal{I} _2&=2\int_I \partial_t^4\tilde v'\bigl[\frac{\rho_0^2}{\tilde\eta'^3}\bigr] \partial_t^5\tilde v'+\sum_{a=1}^4 c_a \int_I \partial_t^{5-a}\frac{1}{\tilde\eta'}\partial_t^{a}\frac{1}{\tilde\eta'}\rho_0^2\partial_t^5\tilde v'\nonumber\\ &=\frac{d}{dt}\int_I (\partial_t^4\tilde v')^2\frac{\rho_0^2}{\tilde\eta'^3}+3\int_I (\partial_t^4\tilde v')^2\tilde v'\frac{\rho_0^2}{\tilde\eta'^4}+\sum_{a=1}^4 c_a \int_I \partial_t^{5-a}\frac{1}{\tilde\eta'}\partial_t^{a}\frac{1}{\tilde\eta'}\rho_0^2\partial_t^5\tilde v' \,. \end{align*} Hence, integrating (\ref{a1}) from $0$ to $t\in[0,T_\kappa]$, we find that \begin{align} \label{a2} \frac{1}{2} \int_I{\rho_0}|\partial_t^5\tilde v|^2 (t) &+ \int_I (\partial_t^4\tilde v')^2\frac{\rho_0^2}{\tilde\eta'^3} (t) +\kappa\int_0^t\int_I \rho_0^2(\partial_t^5\tilde v')^2\nonumber\\ &=\frac{1}{2} \int_I{\rho_0}|\partial_t^5\tilde v|^2 (0) + \int_I (\partial_t^4\tilde v')^2\frac{\rho_0^2}{\tilde\eta'^3} (0)-3\int_0^t\int_I (\partial_t^4\tilde v')^2\tilde v'\frac{\rho_0^2}{\tilde\eta'^4}\nonumber\\ &\ \ \ -\sum_{a=1}^4 c_a \int_0^t\int_I \partial_t^{5-a}\frac{1}{\tilde\eta'}\partial_t^{a}\frac{1}{\tilde\eta'}\rho_0^2\partial_t^5\tilde v'\ . \end{align} We next show that all of the error terms comprising the right-hand side of (\ref{a2}) can be bounded by $C t P( \sup_{[0,t]} E)$. For the first spacetime integral appearing on the right-hand side of (\ref{a2}), it is clear that \begin{equation} \label{a3} -3\int_0^t\int_I (\partial_t^4\tilde v')^2\tilde v'\frac{\rho_0^2}{\tilde\eta'^4} \le C t \ P(\sup_{[0,t]} E). \end{equation} We now study the last integrals on the right-hand side of (\ref{a2}).
Using integration-by-parts in time, we have that \begin{equation} \label{a4} \int_0^t\int_I \partial_t^{5-a}\frac{1}{\tilde\eta'}\partial_t^{a}\frac{1}{\tilde\eta'}\rho_0^2\partial_t^5\tilde v'=-\int_0^t\int_I [\partial_t^{5-a}\frac{1}{\tilde\eta'}\partial_t^{a}\frac{1}{\tilde\eta'}]_t\rho_0^2\partial_t^4\tilde v'+\left. \int_I \partial_t^{5-a}\frac{1}{\tilde\eta'}\partial_t^{a}\frac{1}{\tilde\eta'}\rho_0^2\partial_t^4\tilde v'\right|_0^t. \end{equation} We first consider the spacetime integral on the right-hand side of (\ref{a4}). As the sum is taken for $a=1$ to $4$, we see that it can be written as a sum of spacetime integrals of the following types: \begin{align*} I_1&=\int_0^t\int_I \rho_0 \partial_t^4\tilde v' R(\tilde \eta) \rho_0\partial_t^4\tilde v',\\ I_2&=\int_0^t\int_I \rho_0 \partial_t^3\tilde v' \ \tilde v' \ \partial_t \tilde v' R(\tilde \eta) \rho_0\partial_t^4\tilde v',\\ I_3&=\int_0^t\int_I \rho_0 \partial_t^2\tilde v' \ \tilde v' \ \partial_t^2 \tilde v' R(\tilde \eta) \rho_0\partial_t^4\tilde v',\\ I_4&=\int_0^t\int_I \rho_0 \partial_t^2\tilde v' \ \tilde v' R(\tilde \eta) (\partial_t\tilde v')^2\rho_0\partial_t^4\tilde v'\ , \end{align*} where $R(\tilde\eta)$ denotes a rational function of $\tilde\eta'$. We first immediately see that \begin{equation} \label{a5} |I_1|\le C t \ P(\sup_{[0,t]} E). \end{equation} Next, we have that \begin{align} \label{a6} |I_2| &\le C \int_0^t \|\rho_0 \partial_t^3\tilde v'\|_{L^4} \|\tilde v'\|_{L^\infty} \|\partial_t \tilde v'\|_{L^4} \|R(\tilde \eta)\|_{L^\infty} \|\rho_0\partial_t^4\tilde v'\|_0\nonumber\\ &\le C \int_0^t \|\rho_0 \partial_t^3\tilde v'\|_{H^{\frac{1}{2}}} \|\tilde v'\|_{L^\infty} \|\partial_t \tilde v'\|_{H^{\frac{1}{2}}} \|R(\tilde \eta)\|_{L^\infty} \|\rho_0\partial_t^4\tilde v'\|_0\nonumber\\ & \le C t \ P(\sup_{[0,t]} E).
\end{align} Similarly, \begin{align} \label{a6b} |I_3| &\le C \int_0^t \|\rho_0 \partial_t^2\tilde v'\|_{L^\infty} \|\tilde v'\|_{L^\infty} \|\partial_t^2 \tilde v'\|_0\| R(\tilde \eta)\|_{L^\infty} \|\rho_0\partial_t^4\tilde v'\|_0\nonumber\\ &\le C t \ P(\sup_{[0,t]} E), \end{align} and \begin{align} \label{a7} |I_4| &\le C \int_0^t \|\rho_0 \partial_t^2\tilde v'\|_{L^\infty} \|\tilde v'\|_{L^\infty} \|\partial_t \tilde v'\|^2_{L^4}\| R(\tilde \eta)\|_{L^\infty} \|\rho_0\partial_t^4\tilde v'\|_0\nonumber\\ & \le C t \ P(\sup_{[0,t]} E), \end{align} where we have used the fact that in 1-D, $\|\cdot\|_{L^\infty}\le C \|\cdot\|_{H^1}$ and $\|\cdot\|_{L^4}\le C \|\cdot\|_{H^{\frac{1}{2}}}$. Therefore, estimates (\ref{a2})--(\ref{a7}) provide us with \begin{equation*} \frac{1}{2} \int_I{\rho_0}|\partial_t^5\tilde v|^2 (t) + \int_I (\partial_t^4\tilde v')^2\frac{\rho_0^2}{\tilde\eta'^3} (t) +\kappa\int_0^t\int_I \rho_0^2(\partial_t^5\tilde v')^2 \le M_0+ C t \ P(\sup_{[0,t]} E), \end{equation*} and thus, employing the fundamental theorem of calculus, \begin{align} \label{a8} &\frac{1}{2} \int_I{\rho_0}|\partial_t^5\tilde v|^2 (t) + \int_I (\rho_0 \partial_t^4\tilde v')^2 (t) +\kappa\int_0^t\int_I \rho_0^2(\partial_t^5\tilde v')^2\nonumber\\ &\qquad\qquad\qquad \qquad \le M_0+ C t \ P(\sup_{[0,t]} E)+ 3\int_I (\partial_t^4\tilde v')^2(t){\rho_0^2}\int_0^t \frac{\tilde v'}{\tilde\eta'^4}\nonumber\\ &\qquad\qquad\qquad \qquad \le M_0+ C t \ P(\sup_{[0,t]} E). \end{align} \subsection{Elliptic and Hardy-type estimates for $\partial_t^2 v(t)$} Having obtained the energy estimate (\ref{a8}) for the fifth time-differentiated problem, we can begin our bootstrapping argument.
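The bootstrapping that follows repeatedly expands time derivatives of $1/\tilde\eta'^2$ via the chain rule ($\partial_t\tilde\eta'=\tilde v'$); for instance, for the third time derivative, $\partial_t^3\bigl(1/\tilde\eta'^2\bigr) = -2\,\partial_t^2\tilde v'/\tilde\eta'^3 + 18\,(\partial_t\tilde v')\,\tilde v'/\tilde\eta'^4 - 24\,\tilde v'^3/\tilde\eta'^5$, which determines (up to the sign and grouping conventions used below) the otherwise unimportant constants $c_1,c_2$. A symbolic check (assuming \texttt{sympy}):

```python
import sympy as sp

t = sp.symbols('t')
ep = sp.Function('ep', positive=True)(t)   # stands for eta'; d_t ep = v'

lhs = sp.diff(ep**-2, t, 3)
rhs = (-2 * sp.diff(ep, t, 3) / ep**3          # -2 * d_t^2 v' / eta'^3
       + 18 * sp.diff(ep, t, 2) * sp.diff(ep, t) / ep**4   # 18 * (d_t v') v' / eta'^4
       - 24 * sp.diff(ep, t) ** 3 / ep**5)     # -24 * v'^3 / eta'^5
assert sp.simplify(lhs - rhs) == 0
```

The exact values of these constants never enter the estimates, which only use that each term is a product of lower-order time derivatives of $\tilde v'$ against a rational function of $\tilde\eta'$.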
We now consider the third time-differentiated version of (\ref{approximate.a}), \begin{equation*} \bigl[\partial_t^3\frac{\rho_0^2}{\tilde\eta'^2}\bigr]'-\kappa [\rho_0^2 \partial_t^3 \tilde v']'=-\rho_0 \partial_t^4 \tilde v, \end{equation*} which can be written as \begin{equation*} -2\bigl[\frac{\rho_0^2\partial_t^2 \tilde v'}{\tilde\eta'^3}\bigr]'-\kappa [\rho_0^2 \partial_t^3 \tilde v']'=-\rho_0\partial_t^4 \tilde v+c_1\bigl[\frac{\rho_0^2\partial_t \tilde v'\ \tilde v'}{\tilde\eta'^4}\bigr]' +c_2 \bigl[\frac{\rho_0^2 \tilde v'^3}{\tilde\eta'^5}\bigr]', \end{equation*} and finally rewritten as the following identity: \begin{align} -2\bigl[{\rho_0^2\partial_t^2 \tilde v'}\bigr]'-\kappa [\rho_0^2 \partial_t^3 \tilde v']'=&-\rho_0\partial_t^4 \tilde v+c_1\bigl[\frac{\rho_0^2\partial_t \tilde v'\ \tilde v'}{\tilde\eta'^4}\bigr]' +c_2 \bigl[\frac{\rho_0^2 \tilde v'^3}{\tilde\eta'^5}\bigr]' \nonumber\\ &-2\bigl[{\rho_0^2\partial_t^2 \tilde v'}\bigr]'(1-\frac{1}{\tilde\eta'^3})-6\rho_0^2\partial_t^2\tilde v'\frac{\tilde\eta''}{\tilde\eta'^4}\ . \label{cs3} \end{align} Here, $c_1$ and $c_2$ are constants whose exact value is not important. Therefore, using Lemma \ref{kelliptic} together with the fundamental theorem of calculus for the fourth term on the right-hand side of (\ref{cs3}), we obtain that for any $t\in [0,T_\kappa]$, \begin{align} \label{a9} \sup_{[0,t]} \bigl\|\frac{2}{\rho_0}\bigl[{\rho_0^2\partial_t^2 \tilde v'}\bigr]'\bigr\|_0 \le & \sup_{[0,t]}\|\partial_t^4 \tilde v\|_0 +\sup_{[0,t]} \bigl\|\frac{c_1}{\rho_0}\bigl[\frac{\rho_0^2\partial_t \tilde v'\ \tilde v'}{\tilde\eta'^4}\bigr]'\bigr\|_0 +\sup_{[0,t]} \bigl\|\frac{c_2}{\rho_0} \bigl[\frac{\rho_0^2 \tilde v'^3}{\tilde\eta'^5}\bigr]'\bigr\|_0\nonumber\\ &+\sup_{[0,t]} \bigl\|\frac{2}{\rho_0}\bigl[{\rho_0^2\partial_t^2 \tilde v'}\bigr]'\|_0\|3\int_0^\cdot\frac{\tilde v'}{\tilde\eta'^4}\|_{L^\infty}+6\sup_{[0,t]} \bigl\|\rho_0\partial_t^2\tilde v'\frac{\tilde\eta''}{\tilde\eta'^4}\bigr\|_0\ . 
\end{align} We next estimate each term on the right-hand side of (\ref{a9}). For the first term, we will use our estimate (\ref{a8}) from which we infer for each $t\in [0,T_\kappa]$: \begin{equation*} \int_I \rho_0^2 [|\partial_t^4\tilde v|^2+|\partial_t^4\tilde v'|^2] (t) \le M_0+ C t \ P(\sup_{[0,t]} E). \end{equation*} Note that the first term on the left-hand side of this inequality comes from the first term of (\ref{a8}), together with the fact that $\displaystyle\partial_t^4 \tilde v(t,x)=v_4(x)+\int_0^t \partial_t^5 \tilde v(\cdot,x).$ Therefore, the Sobolev weighted embedding estimate (\ref{w-embed}) provides us with the following estimate: \begin{equation} \label{a10} \int_I |\partial_t^4\tilde v|^2 (t) \le M_0+ C t \ P(\sup_{[0,t]} E). \end{equation} The remaining terms will be estimated by simply using the definition of the energy function $E$. For the second term on the right-hand side of (\ref{a9}), we have that \begin{align*} \bigl\|\frac{1}{\rho_0}\bigl[\frac{\rho_0^2\partial_t \tilde v'\ \tilde v'}{\tilde\eta'^4}\bigr]'\bigr\|_0 & \le \|(\rho_0 v_t')'\|_0 \bigl\|\frac{\tilde v'}{\tilde\eta'^4}\bigr\|_{L^\infty}+\bigl\|\tilde v_t'\bigl[\frac{\rho_0\ \tilde v'}{\tilde\eta'^4}\bigr]'\bigr\|_0\\ & \le C \|(\rho_0 \tilde v_t')'\|_0 \|\tilde v'\|_{\frac{3}{4}}+\bigl\|\tilde v_t'\bigl[\frac{\rho_0'\ \tilde v'}{\tilde\eta'^4}\bigr]\bigr\|_0+\bigl\|\tilde v_t'\bigl[\frac{\rho_0\ \tilde v''}{\tilde\eta'^4}\bigr]\bigr\|_0+4\bigl\|\tilde v_t'\bigl[\frac{\rho_0\ \tilde v'\tilde\eta''}{\tilde\eta'^5}\bigr]\bigr\|_0\\ & \le C \|(\rho_0 v_1')'+\int_0^\cdot (\rho_0 v_{tt}')'\|_0 \|\tilde v'\|_1^{\frac{1}{2}} \|v_1'+\int_0^\cdot \tilde v_t'\|_{\frac{1}{2}}^{\frac{1}{2}}+C\|v_1'+\int_0^\cdot \tilde v_{tt}'\|_0 \|\tilde v'\|_{\frac{3}{4}}\\ & \qquad \ \ +C \|\tilde v_t'\|_0 \|\rho_0 v''\|_{\frac{3}{4}} + C \|\tilde v'\|_{\frac{3}{4}}\|\int_0^\cdot \tilde v''\|_0 \|\rho_0 v_1'+\int_0^\cdot\rho_0 \tilde v_{tt}'\|_1\\ & \le \|(\rho_0 v_1')'+\int_0^\cdot (\rho_0 v_{tt}')'\|_0 \|\tilde 
v'\|_1^{\frac{1}{2}} \|v_1' +\int_0^\cdot \tilde v_t'\|_{\frac{1}{2}}^{\frac{1}{2}}\\ & \qquad \ \ +C\|v_1'+\int_0^\cdot \tilde v_{tt}'\|_0 \|\tilde v'\|_1^{\frac{1}{2}} \|v_1'+\int_0^\cdot \tilde v_t'\|_{\frac{1}{2}}^{\frac{1}{2}} \\ & \qquad \ \ +C \|\tilde v_t'\|_0 \|\rho_0 v''\|_1^{\frac{3}{4}} \|\rho_0 v_0''+\int_0^\cdot \rho_0 v_t''\|_0^{\frac{1}{4}}, \end{align*} where we have used the fact that $\|\cdot\|_{L^\infty}\le C \|\cdot\|_{\frac{3}{4}}$. Thanks to the definition of $E$, the previous inequality provides us, for any $t\in [0,T_\kappa]$, with \begin{equation} \label{a11} \sup_{[0,t]}\bigl\|\frac{1}{\rho_0}\bigl[\frac{\rho_0^2\partial_t \tilde v'\ \tilde v'}{\tilde\eta'^4}\bigr]'\bigr\|_0\le C \sup_{[0,t]} E^{\frac{3}{4}}\ (M_0+t P(\sup_{[0,t]}E)). \end{equation} For the third term on the right-hand side of (\ref{a9}), we have similarly that \begin{align*} \bigl\|\frac{1}{\rho_0} \bigl[\frac{\rho_0^2 \tilde v'^3}{\tilde\eta'^5}\bigr]'\bigr\|_0 &\le \|(\rho_0 v')'\|_{L^\infty} \bigl\|\frac{\tilde v'^2}{\tilde\eta'^5}\bigr\|_{L^2}+\bigl\|\tilde v'\bigl[\frac{\rho_0\ \tilde v'^2}{\tilde\eta'^5}\bigr]'\bigr\|_0\\ & \le C \|(\rho_0 \tilde v')'\|_{\frac{3}{4}} \|\tilde v'\|^2_{\frac{1}{2}}+\bigl\|\tilde v'\bigl[\frac{\rho_0'\ \tilde v'^2}{\tilde\eta'^5}\bigr]\bigr\|_0+2\bigl\|\tilde v'\bigl[\frac{\rho_0\ \tilde v''\tilde v'}{\tilde\eta'^5}\bigr]\bigr\|_0+5\bigl\|\tilde v'\bigl[\frac{\rho_0\ \tilde v'^2\tilde\eta''}{\tilde\eta'^6}\bigr]\bigr\|_0\\ & \le C \|(\rho_0 v_0')'+\int_0^\cdot (\rho_0 v_{t}')'\|_0^{\frac{1}{4}} \|(\rho_0 \tilde v')'\|_1^{\frac{3}{4}} \|\tilde v_0'+\int_0^\cdot \tilde v'_t\|_{\frac{1}{2}}^2 +C \|\tilde v_0'+\int_0^\cdot \tilde v'_t\|_{\frac{1}{2}}^3\\ &\qquad \ \ +C \|\tilde v'\|_{\frac{1}{2}}^2 \|\rho_0 v''\|_{L^4} + C \|\tilde v'\|_{\frac{1}{2}}^3\|\rho_0 \tilde\eta''\|_{L^4}\\ & \le C \|(\rho_0 v_0')'+\int_0^\cdot (\rho_0 v_{t}')'\|_0^{\frac{1}{4}} \|(\rho_0 \tilde v')'\|_1^{\frac{3}{4}} \|\tilde v_0'+\int_0^\cdot \tilde 
v'_t\|_{\frac{1}{2}}^2 +C \|\tilde v_0'+\int_0^\cdot \tilde v'_t\|_{\frac{1}{2}}^3\\ &\qquad \ \ +C \|\tilde v_0'+\int_0^\cdot \tilde v_t'\|_{\frac{1}{2}}^2 \|\rho_0 v_0''+\int_0^\cdot \rho_0 v_t''\|_0^{\frac{1}{2}}\|\rho_0 \tilde v''\|_1^{\frac{1}{2}} + C \|\tilde v_0'+\int_0^\cdot \tilde v_t'\|_{\frac{1}{2}}^3 \|\int_0^\cdot \rho_0 v''\|_1, \end{align*} where we have used the fact that $\|\cdot\|_{L^p}\le C_p \|\cdot\|_{\frac{1}{2}}$, for all $1<p<\infty$. Again, using the definition of $E$, the previous inequality provides us for any $t\in [0,T_\kappa]$ with \begin{equation} \label{a12} \sup_{[0,t]}\bigl\| \frac{1}{\rho_0} \bigl[\frac{\rho_0^2 \tilde v'^3}{\tilde\eta'^5}\bigr]' \bigr\|_0\le C \sup_{[0,t]} E^{\frac{1}{2}}\ (M_0+t P(\sup_{[0,t]}E)). \end{equation} For the fourth term of the right-hand side of (\ref{a9}), we see that \begin{align} \label{a13} \bigl\|\frac{2}{\rho_0}\bigl[{\rho_0^2\partial_t^2 \tilde v'}\bigr]'\|_0\|3\int_0^\cdot\frac{\tilde v'}{\tilde\eta'^4}\|_{L^\infty} (t) & \le C [\ \|\rho_0 \partial_t^2 v''\|_0+\|\partial_t v'\|_0\ ] t \sup_{[0,t]} \|\tilde v\|_2\nonumber\\ &\le C t P(\sup_{[0,t]} E)\ . \end{align} Similarly, the fifth term of the right-hand side of (\ref{a9}) yields the following estimate: \begin{align} \label{a14} \bigl\|\rho_0\partial_t^2\tilde v'\frac{\tilde\eta''}{\tilde\eta'^4}\bigr\|_0 (t) &\le C \|\rho_0\partial_t^2\tilde v'\|_{L^\infty}\|\tilde\eta''\|_0\nonumber\\ &\le C \|\rho_0\partial_t^2\tilde v'\|_1 \|\int_0^\cdot\tilde v''\|_0\nonumber\\ & \le C t P(\sup_{[0,t]} E)\ . \end{align} Combining the estimates (\ref{a11})--(\ref{a14}), we obtain the inequality \begin{equation} \label{a15} \sup_{[0,t]} \bigl\|\frac{2}{\rho_0}\bigl[{\rho_0^2\partial_t^2 \tilde v'}\bigr]'\bigr\|_0 \le C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^{\frac{3}{4}}\ (M_0+t P(\sup_{[0,t]}E)). 
\end{equation} At this stage, we remind the reader that the solution $\tilde v$ to our parabolic $ \kappa $-problem is in $X_{T_ \kappa }$, so that for any $t\in [0,T_\kappa]$, $\partial_t^2 \tilde v\in H^2(I)$. Notice that $$ \frac{1}{\rho_0}\bigl[{\rho_0^2\partial_t^2 \tilde v'}\bigr]' = \rho_0 \partial_t^2 \tilde v'' + 2 \rho_0'\partial_t^2\tilde v' \,, $$ so (\ref{a15}) is equivalent to \begin{equation}\label{a15b} \sup_{[0,t]} \bigl\| \rho_0 \partial_t^2 \tilde v'' + 2 \rho_0'\partial_t^2\tilde v' \bigr\|_0 \le C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^{\frac{3}{4}}\ (M_0+t P(\sup_{[0,t]}E)). \end{equation} From this inequality, we would like to conclude that both $\| \partial_t^2 \tilde v '\|_0$ and $\| \rho_0 \partial_t^2 \tilde v'' \|_0$ are bounded by the right-hand side of (\ref{a15b}); the regularity provided by solutions of the $ \kappa $-problem allows us to arrive at this conclusion. By expanding the left-hand side of (\ref{a15b}), we see that \begin{align} \label{a16} \sup_{[0,t]} \bigl\| \rho_0 \partial_t^2 \tilde v'' + 2\rho_0' \partial_t^2\tilde v' \bigr\|_0^2&=\|\rho_0\partial_t^2\tilde v''\|_0^2+4\|\rho_0'\partial_t^2\tilde v'\|_0^2+4\int_I \rho_0 \partial_t^2 \tilde v'' \ \rho_0'\partial_t^2\tilde v' \,.
\end{align} Given the regularity of $\partial_t^2 \tilde v$ provided by our parabolic $ \kappa$-problem, we notice that the cross-term in (\ref{a16}) is an exact derivative, $$ 4\int_I \rho_0 \partial_t^2 \tilde v'' \ \rho_0'\partial_t^2\tilde v' = 2 \int_I \rho_0\rho_0' \frac{\partial}{\partial x} |\partial_t^2 \tilde v'|^2 \,, $$ so that by integrating by parts (there is no boundary contribution, as $\rho_0=0$ on $\partial I$), we find that $$ 4\int_I \rho_0 \partial_t^2 \tilde v'' \ \rho_0'\partial_t^2\tilde v' = -2 \|\rho_0'\partial_t^2\tilde v'\|_0^2 - 2\int_I \rho_0 \rho_0'' |\partial_t^2 \tilde v'|^2 \,, $$ and hence (\ref{a16}) becomes \begin{align} \label{a16b} \sup_{[0,t]} \bigl\| \rho_0 \partial_t^2 \tilde v'' + 2\rho_0' \partial_t^2\tilde v' \bigr\|_0^2&=\|\rho_0\partial_t^2\tilde v''\|_0^2+2\|\rho_0'\partial_t^2\tilde v'\|_0^2 - 2\int_I \rho_0\rho_0'' |\partial_t^2 \tilde v'|^2 \,. \end{align} Since the energy function $E$ contains $\rho_0 \partial_t^2 \tilde v(t) \in H^2(I)$ and $\partial_t^2\tilde v(t) \in H^1(I)$, the fundamental theorem of calculus shows that $$ 2\int_I \rho_0\rho_0'' |\partial_t^2 \tilde v'|^2 \le C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^{\frac{3}{4}}\ (M_0+t P(\sup_{[0,t]}E)) \,.
$$ Combining this inequality with (\ref{a16b}) and (\ref{a15}) yields \begin{equation*} \sup_{[0,t]} [ \|\rho_0\partial_t^2\tilde v''\|_0+\|\rho_0'\partial_t^2\tilde v'\|_0 ] \le C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^{\frac{3}{4}}\ (M_0+t P(\sup_{[0,t]}E)), \end{equation*} and thus \begin{equation*} \sup_{[0,t]} [ \|\rho_0\partial_t^2\tilde v''\|_0+\|\rho_0'\partial_t^2\tilde v'\|_0 + \|\rho_0\partial_t^2\tilde v'\|_0 ] \le M_0+C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^{\frac{3}{4}}\ (M_0+t P(\sup_{[0,t]}E)), \end{equation*} and hence with the physical vacuum conditions on $\rho_0$ given by (\ref{degen1}) and (\ref{degen2}), we have that \begin{equation*} \sup_{[0,t]} [ \|\rho_0\partial_t^2\tilde v''\|_0+\|\partial_t^2\tilde v'\|_0 ] \le M_0+C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^{\frac{3}{4}}\ (M_0+t P(\sup_{[0,t]}E)), \end{equation*} which, together with (\ref{a10}), provides us with the estimate \begin{equation} \label{a17} \sup_{[0,t]} [ \|\rho_0\partial_t^2\tilde v''\|_0+\|\partial_t^2\tilde v\|_1 ] \le M_0+C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^{\frac{3}{4}}\ (M_0+t P(\sup_{[0,t]}E)). \end{equation} \subsection{Elliptic and Hardy-type estimates for $v(t)$.} Having obtained the estimates for $\partial_t^2 \tilde v(t)$ in (\ref{a17}), we can next obtain our estimates for $\tilde v(t)$. To do so, we consider the first time-differentiated version of (\ref{approximate.a}), which yields the equation \begin{equation*} -2\bigl[\frac{\rho_0^2 \tilde v'}{\tilde\eta'^3}\bigr]'-\kappa [\rho_0^2 \partial_t \tilde v']'=-\rho_0\partial_t^2 \tilde v, \end{equation*} which we rewrite as the following identity: \begin{align} -\frac{2}{\rho_0}\bigl[{\rho_0^2\tilde v'}\bigr]'-\frac{\kappa}{\rho_0} [\rho_0^2 \partial_t \tilde v']'=&-\partial_t^2 \tilde v-\frac{2}{\rho_0}\bigl[{\rho_0^2 \tilde v'}\bigr]'(1-\frac{1}{\tilde\eta'^3})\ .
\label{cs11} \end{align} Using Lemma \ref{kelliptic}, we see that for any $t\in [0,T_\kappa]$, \begin{align} \label{a18} \sup_{[0,t]} \bigl\|\frac{1}{\rho_0}\bigl[{\rho_0^2 \tilde v'}\bigr]'\bigr\|_1 \le & C\sup_{[0,t]}\| \partial_t^2 \tilde v\|_1 +C\sup_{[0,t]} \bigl\|\frac{1}{\rho_0}\bigl[{\rho_0^2 \tilde v'}\bigr]'\|_1\|\int_0^\cdot\frac{\tilde v'}{\tilde\eta'^4}\|_{L^\infty} +C \|\frac{1}{\rho_0}[\rho_0^2 \tilde v']'\,\frac{\tilde\eta''}{\tilde\eta'^4}\|_0\ . \end{align} We next estimate each term on the right-hand side of (\ref{a18}). The bound for the first term on the right-hand side of (\ref{a18}) is provided by (\ref{a17}). The second term on the right-hand side of (\ref{a18}) is estimated as follows: \begin{align} \label{a20} \bigl\|\frac{1}{\rho_0}\bigl[{\rho_0^2 \tilde v'}\bigr]'\|_1\|\int_0^\cdot\frac{\tilde v'}{\tilde\eta'^4}\|_{L^\infty} (t) &\le C [ \| \rho_0 \tilde v'''\|_0+\|\tilde v\|_2]\ t \sup_{[0,t]} \|\tilde v\|_2\nonumber\\ &\le C t P(\sup_{[0,t]} E)\ . \end{align} For the third term on the right-hand side of (\ref{a18}), \begin{align} \label{a21} \|\frac{1}{\rho_0}[\rho_0^2 \tilde v']'\,[\frac{\tilde\eta''}{\tilde\eta'^4}]\|_0 (t) & \le C \|\frac{1}{\rho_0}[\rho_0^2 \tilde v']'\|_{L^\infty} \|\tilde\eta''\|_0\nonumber\\ &\le C [ \| \rho_0 \tilde v'''\|_0+\|\tilde v\|_2] \|\int_0^t \tilde v''\|_0\nonumber\\ & \le C t P(\sup_{[0,t]} E)\ . \end{align} Combining these estimates provides the inequality \begin{equation*} \sup_{[0,t]} \bigl\|\frac{1}{\rho_0}\bigl[{\rho_0^2\tilde v'}\bigr]'\bigr\|_1 \le C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^{\frac{3}{4}}\ (M_0+t P(\sup_{[0,t]}E)), \end{equation*} which leads us immediately to: \begin{equation} \label{a22} \sup_{[0,t]} \| \rho_0\tilde v'''+3\rho_0'\tilde v''\|_0 \le C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^{\frac{3}{4}}\ (M_0+t P(\sup_{[0,t]}E)) \,. \end{equation} Now, since for any $t\in [0,T_\kappa]$, the solution $\tilde v$ to our parabolic $ \kappa $-problem is in $H^3(I)$, we infer that $\rho_0 \tilde v'''\in L^2(I)$.
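All of the weighted integration-by-parts computations in this section follow the same schematic identity: if $w$ is sufficiently regular and $\rho_0$ vanishes on $\partial I$, then for any integer $m\ge 1$,
$$
\|\rho_0 w' + m\,\rho_0' w\|_0^2 = \|\rho_0 w'\|_0^2 + (m^2-m)\,\|\rho_0' w\|_0^2 - m\int_I \rho_0\rho_0''\, |w|^2\,,
$$
since the cross-term $2m\int_I \rho_0\rho_0'\, w\, w' = m\int_I \rho_0\rho_0'\, [|w|^2]'$ integrates by parts, with no boundary contribution, to $-m\int_I [\rho_0'^2+\rho_0\rho_0'']\, |w|^2$. The identity below is the case $m=3$, $w=\tilde v''$.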
We can then apply the same integration-by-parts argument as in \cite{CoLiSh2009} to find that \begin{align} \label{a23} \|\rho_0\tilde v'''+3\rho_0'\tilde v''\|^2_0&=\|\rho_0\tilde v'''\|_0^2+9\|{\rho_0'}\tilde v''\|_0^2+3\int_I \rho_0\rho_0' [|\tilde v''|^2]'\nonumber\\ &=\|\rho_0\tilde v'''\|_0^2+9\|\rho_0'\tilde v''\|_0^2-3\int_I [\rho_0\rho_0''+\rho_0'^2] |\tilde v''|^2\nonumber\\ &=\|\rho_0\tilde v'''\|_0^2+6\|\rho_0'\tilde v''\|_0^2-3\int_I \rho_0\rho_0'' |\tilde v''|^2\,. \end{align} Combined with (\ref{a22}), this yields: \begin{align*} \sup_{[0,t]} [ \|\rho_0\tilde v'''\|_0+\|\rho_0'\tilde v''\|_0 ] \le & C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^{\frac{3}{4}}\ (M_0+t P(\sup_{[0,t]}E))\nonumber\\ &+M_0+C\|\int_0^t \sqrt{\rho_0}\tilde v_t''\|_0, \end{align*} and thus \begin{equation*} \sup_{[0,t]} [ \|\rho_0\tilde v'''\|_0+\|\rho_0'\tilde v''\|_0 + \|\rho_0\tilde v''\|_0 ] \le M_0+C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^{\frac{3}{4}}\ (M_0+t P(\sup_{[0,t]}E)). \end{equation*} With (\ref{degen1}) and (\ref{degen2}), it follows that \begin{equation*} \sup_{[0,t]} [ \|\rho_0\tilde v'''\|_0+\|\tilde v''\|_0 ] \le M_0+C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^{\frac{3}{4}}\ (M_0+t P(\sup_{[0,t]}E)), \end{equation*} and hence \begin{equation} \label{a24} \sup_{[0,t]} [ \|\rho_0\tilde v'''\|_0+\|\tilde v\|_2 ] \le M_0+C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^{\frac{3}{4}}\ (M_0+t P(\sup_{[0,t]}E)).
\end{equation} \subsection{Elliptic and Hardy-type estimates for $\partial_t^3v(t)$ and $\partial_t v(t)$} We consider the fourth time-differentiated version of (\ref{approximate.a}): $$ \bigl[\partial_t^4\frac{\rho_0^2}{\tilde\eta'^2}\bigr]'-\kappa [\rho_0^2 \partial_t^4 \tilde v']'=-\rho_0 \partial_t^5 \tilde v \,, $$ which can be rewritten as \begin{equation*} -2\bigl[\frac{\rho_0^2\partial_t^3 \tilde v'}{\tilde\eta'^3}\bigr]'-\kappa [\rho_0^2 \partial_t^4 \tilde v']'=-\rho_0\partial_t^5 \tilde v+c_1\bigl[\frac{\rho_0^2\partial_t^2 \tilde v'\ \tilde v'}{\tilde\eta'^4}\bigr]' +c_2 \bigl[\frac{\rho_0^{2} \partial_t\tilde v'^2}{\tilde\eta'^5}\bigr]', \end{equation*} for some constants $c_1$ and $c_2$. By employing the fundamental theorem of calculus and dividing by $\rho_0^ {\frac{1}{2}} $, we obtain the equation \begin{align*} -\frac{2}{\rho_0^{\frac{1}{2}}}\bigl[{\rho_0^2\partial_t^3 \tilde v'}\bigr]'-\frac{\kappa}{\rho_0^{\frac{1}{2}}} [\rho_0^2 \partial_t^4 \tilde v']'=&-\sqrt{\rho_0}\partial_t^5 \tilde v+\frac{c_1}{{\rho_0}^{\frac{1}{2}}}\bigl[\frac{\rho_0^2\partial_t^2 \tilde v'\ \tilde v'}{\tilde\eta'^4}\bigr]' +\frac{c_2}{\rho_0^{\frac{1}{2}}} \bigl[\frac{\rho_0^2 \partial_t\tilde v'^2}{\tilde\eta'^5}\bigr]'\\ &-\frac{2}{\rho_0^{\frac{1}{2}}}\bigl[{\rho_0^2\partial_t^3 \tilde v'}\bigr]'(1-\frac{1}{\tilde\eta'^3})-6\rho_0^{\frac{3}{2}}\partial_t^3\tilde v'\frac{\tilde\eta''}{\tilde\eta'^4}\ .
\end{align*} For any $t\in [0,T_\kappa]$, Lemma \ref{kelliptic} provides the $ \kappa $-independent estimate \begin{align} \label{a25} \sup_{[0,t]} \bigl\|\frac{2}{\rho_0^{\frac{1}{2}}}\bigl[{\rho_0^2\partial_t^3 \tilde v'}\bigr]'\bigr\|_0 \le & \sup_{[0,t]}\|\sqrt{\rho_0}\partial_t^5 \tilde v\|_0 +\sup_{[0,t]} \bigl\|\frac{c_1}{\rho_0^{\frac{1}{2}}}\bigl[\frac{\rho_0^2\partial_t^2 \tilde v'\ \tilde v'}{\tilde\eta'^4}\bigr]'\bigr\|_0 +\sup_{[0,t]} \bigl\|\frac{c_2}{\rho_0^{\frac{1}{2}}} \bigl[\frac{\rho_0^2 \partial_t\tilde v'^2}{\tilde\eta'^5}\bigr]'\bigr\|_0\nonumber\\ &+\sup_{[0,t]} \bigl\|\frac{2}{\rho_0^{\frac{1}{2}}}\bigl[{\rho_0^2\partial_t^3 \tilde v'}\bigr]'\|_0\|3\int_0^\cdot\frac{\tilde v'}{\tilde\eta'^4}\|_{L^\infty}+6\sup_{[0,t]} \bigl\|\rho_0^{\frac{3}{2}}\partial_t^3\tilde v'\frac{\tilde\eta''}{\tilde\eta'^4}\bigr\|_0\ . \end{align} We estimate each term on the right-hand side of (\ref{a25}). The first term on the right-hand side is bounded by $ M_0+ C t \ P(\sup_{[0,t]} E)$ thanks to (\ref{a8}).
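We note that bounds of the form $M_0+t\,P(\sup_{[0,t]}E)$ recur throughout this section, and always arise from the same use of the fundamental theorem of calculus: for any quantity $F$ controlled by the energy function,
$$
\|F(t)\|\le \|F(0)\|+\int_0^t \|\partial_t F(s)\|\,ds \le M_0+t\,P(\sup_{[0,t]}E)\,,
$$
provided $\|\partial_t F\|$ is itself bounded by a polynomial in $E$; this is, for instance, how the terms of the form $\|\tilde v_1'+\int_0^\cdot \tilde v_t'\|$ appearing below are handled.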
For the second term on the right-hand side of (\ref{a25}), we have that \begin{align*} \bigl\|\frac{1}{\rho_0^{\frac{1}{2}}}\bigl[\frac{\rho_0^2\partial_t^2 \tilde v'\ \tilde v'}{\tilde\eta'^4}\bigr]'\bigr\|_0 &\le \|\sqrt{\rho_0}(\rho_0 \partial_t^2\tilde v')'\|_0 \bigl\|\frac{\tilde v'}{\tilde\eta'^4}\bigr\|_{L^\infty}+\bigl\|\sqrt{\rho_0}\partial_t^2\tilde v'\bigl[\frac{\rho_0\ \tilde v'}{\tilde\eta'^4}\bigr]'\bigr\|_0\\ &\le C \|\sqrt{\rho_0}(\rho_0 \partial_t^2\tilde v')'\|_0 \|\tilde v'\|_{\frac{3}{4}}+\bigl\|\sqrt{\rho_0}\partial_t^2\tilde v'\bigl[\frac{\rho_0'\ \tilde v'}{\tilde\eta'^4}\bigr]\bigr\|_0+\bigl\|\sqrt{\rho_0}\partial_t^2\tilde v'\bigl[\frac{\rho_0\ \tilde v''}{\tilde\eta'^4}\bigr]\bigr\|_0\nonumber\\ &\ \ +4\bigl\|\sqrt{\rho_0}\partial_t^2\tilde v'\bigl[\frac{\rho_0\ \tilde v'\tilde\eta''}{\tilde\eta'^5}\bigr]\bigr\|_0\\ & \le C \|\sqrt{\rho_0}(\rho_0 \tilde v_2')'+\int_0^\cdot \sqrt{\rho_0}(\rho_0 \partial_t^3\tilde v')'\|_0 \|\tilde v'\|_1^{\frac{1}{2}} \|\tilde v_1'+\int_0^\cdot \tilde v_t'\|_{\frac{1}{2}}^{\frac{1}{2}}\nonumber\\ &\ \ +C\|\sqrt{\rho_0}\tilde v_2'+\int_0^\cdot \sqrt{\rho_0}\partial_t^3\tilde v'\|_0 \|\tilde v'\|_{\frac{3}{4}}\\ &\ \ +C \|\sqrt{\rho_0}\partial_t^2\tilde v'\|_0 \|\rho_0 v''\|_{\frac{3}{4}} + C \|\tilde v'\|_{\frac{3}{4}}\|\int_0^\cdot \tilde v''\|_0 \|\rho_0^{\frac{3}{2}} \tilde v_2'+\int_0^\cdot\rho_0^{\frac{3}{2}} \partial_t^3\tilde v'\|_1\\ & \le \|\sqrt{\rho_0}(\rho_0 \tilde v_2')'+\int_0^\cdot \sqrt{\rho_0}(\rho_0 \tilde v_{tt}')'\|_0 \|\tilde v'\|_1^{\frac{1}{2}} \|\tilde v_1' +\int_0^\cdot \tilde v_t'\|_{\frac{1}{2}}^{\frac{1}{2}}\\ &\ \ +C\|\sqrt{\rho_0}\tilde v_2'+\int_0^\cdot \sqrt{\rho_0}\tilde v_{tt}'\|_0 \|\tilde v'\|_1^{\frac{1}{2}} \|\tilde v_1'+\int_0^\cdot \tilde v_t'\|_{\frac{1}{2}}^{\frac{1}{2}} \\ &\ \ +C \|\sqrt{\rho_0}\partial_t^2\tilde v'\|_0 \|\rho_0 v''\|_1^{\frac{3}{4}} \|\rho_0 v_0''+\int_0^\cdot \rho_0 v_t''\|_0^{\frac{1}{4}}\nonumber\\ &\ \ +C \|\tilde v'\|_1^{\frac{1}{2}} \|\tilde 
v_1'+\int_0^\cdot \tilde v_t'\|_{\frac{1}{2}}^{\frac{1}{2}} \|\int_0^\cdot \tilde v''\|_0 \|\rho_0^{\frac{3}{2}} \tilde v_2'+\int_0^\cdot\rho_0^{\frac{3}{2}} \partial_t^3\tilde v'\|_1 \,, \end{align*} where we have again used the fact that $\|\cdot\|_{L^\infty}\le C \|\cdot\|_{\frac{3}{4}}$. Thanks to the definition of $E$, the previous inequality shows that for any $t\in [0,T_\kappa]$, \begin{equation} \label{a27} \sup_{[0,t]}\bigl\|\frac{1}{\sqrt{\rho_0}}\bigl[\frac{\rho_0^2\partial_t^2 \tilde v'\ \tilde v'}{\tilde\eta'^4}\bigr]'\bigr\|_0\le C \sup_{[0,t]} E^{\frac{3}{4}}\ (M_0+t P(\sup_{[0,t]}E)). \end{equation} For the third term on the right-hand side of (\ref{a25}), we have similarly that \begin{align} \label{a28} \bigl\|\frac{1}{\sqrt{\rho_0}} \bigl[\frac{\rho_0^2 \partial_t\tilde v'^2}{\tilde\eta'^5}\bigr]'\bigr\|_0 (t) & \le 2\|(\rho_0 \partial_t\tilde v')'\|_{0} \bigl\|\sqrt{\rho_0}\frac{\partial_t\tilde v'}{\tilde\eta'^5}\bigr\|_{L^\infty}+5\bigl\|\sqrt{\rho_0}\frac{\partial_t\tilde v'^2}{\tilde\eta'^6}\|_{L^1}\|\rho_0 \tilde\eta''\|_{L^\infty}\nonumber\\ & \le C \|(\rho_0\tilde v_1')'+\int_0^t (\rho_0\partial_{tt}\tilde v')'\|_0\|\sqrt{\rho_0}\partial_t\tilde v'\|_{\frac{3}{4}}+C \|\partial_t\tilde v'\|_0^2\|\int_0^t (\rho_0\tilde v'')'\|_0\nonumber\\ & \le C \|(\rho_0\tilde v_1')'+\int_0^t (\rho_0\partial_{tt}\tilde v')'\|_0\|\sqrt{\rho_0} \partial_t\tilde v'\|_{0}^{1-\alpha}\|(\sqrt{\rho_0}\partial_t\tilde v')'\|_{L^{2-a}}^{\alpha}\nonumber\\ &\ \ +C \|\partial_t\tilde v'\|_0^2\|\int_0^t (\rho_0\tilde v'')'\|_0\nonumber\\ & \le C \|(\rho_0\tilde v_1')'+\int_0^t (\rho_0\partial_{tt}\tilde v')'\|_0\|\tilde v'_1+\int_0^t\partial_{tt}\tilde v'\|_{0}^{1-\alpha}\|(\sqrt{\rho_0}\partial_t\tilde v')'\|_{L^{2-a}}^{\alpha}\nonumber\\ &\ \ +C \|\partial_t\tilde v'\|_0^2\|\int_0^t (\rho_0\tilde v'')'\|_0 \,, \end{align} where $0<a<\frac{1}{2}$ is given and $0<\alpha=\frac{3-3a}{4+3a}<1$. 
The only term on the right-hand side of (\ref{a28}) which is not directly contained in the definition of $E$ is $\|(\sqrt{\rho_0}\partial_t\tilde v')'\|_{L^{2-a}}^\alpha$. To this end, we notice that \begin{align} \label{a29} \|(\sqrt{\rho_0}\partial_t\tilde v')'\|_{L^{2-a}} &\le \|\frac{\partial_t\tilde v'}{2\sqrt{\rho_0}}\|_{L^{2-a}}+\|\sqrt{\rho_0}\, \partial_t\tilde v''\|_0\nonumber\\ &\le \|\frac{1}{2\sqrt{\rho_0}}\|_{L^{2-\frac{a}{2}}}\|\partial_t\tilde v'\|_{\frac{1}{2}}+\|\sqrt{\rho_0}\, \partial_t\tilde v''\|_0 \,, \end{align} where we have used the fact that $\|\cdot\|_{L^p}\le C_p \|\cdot\|_{\frac{1}{2}}$, for all $1<p<\infty$. Thanks to the definition of $E$, the previous inequality and (\ref{a28}) provide us, for any $t\in [0,T_\kappa]$, with \begin{equation} \label{a30} \sup_{[0,t]}\bigl\| \frac{1}{\sqrt{\rho_0}} \bigl[\frac{\rho_0^2 \partial_t\tilde v'^2}{\tilde\eta'^5}\bigr]' \bigr\|_0\le C \sup_{[0,t]} E^\alpha\ (M_0+t P(\sup_{[0,t]}E)), \end{equation} where we recall again that $0<\alpha=\frac{3-3a}{4+3a}<1$. The fourth term on the right-hand side of (\ref{a25}) is easily treated: \begin{align} \label{a31} \bigl\|\frac{1}{\sqrt{\rho_0}}\bigl[{\rho_0^2\partial_t^3 \tilde v'}\bigr]'\|_0\|\int_0^t\frac{\tilde v'}{\tilde\eta'^4}\|_{L^\infty} (t) & \le C [\ \|\rho_0^{\frac{3}{2}} \partial_t^3 \tilde v''\|_0+\|\sqrt{\rho_0}\partial_t^3 \tilde v'\|_0\ ] t \sup_{[0,t]} \|\tilde v\|_2\nonumber\\ & \le C t P(\sup_{[0,t]} E)\ . \end{align} Similarly, the fifth term on the right-hand side of (\ref{a25}) is estimated as follows: \begin{align} \label{a32} \bigl\|\rho_0^{\frac{3}{2}}\partial_t^3\tilde v'\frac{\tilde\eta''}{\tilde\eta'^4}\bigr\|_0 (t) & \le C \|\rho_0^{\frac{3}{2}}\partial_t^3\tilde v'\|_{L^\infty}\|\tilde\eta''\|_0\nonumber\\ & \le C [\ \|\rho_0^{\frac{3}{2}} \partial_t^3 \tilde v''\|_0+\|\sqrt{\rho_0}\partial_t^3 \tilde v'\|_0\ ] \bigl\|\int_0^t\tilde v''\bigr\|_0\nonumber\\ & \le C t P(\sup_{[0,t]} E)\ .
\end{align} Combining the estimates (\ref{a25})--(\ref{a32}), we can infer that \begin{equation} \label{a33} \sup_{[0,t]} \bigl\|\frac{1}{\rho_0^{\frac{1}{2}}}\bigl[{\rho_0^2\partial_t^3 \tilde v'}\bigr]'\bigr\|_0 \le C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^\alpha\ (M_0+t P(\sup_{[0,t]}E)). \end{equation} Now, since for any $t\in [0,T_\kappa]$, solutions to our parabolic $ \kappa $-problem have the regularity $\partial_t^3\tilde v\in H^2(I)$, we integrate by parts: \begin{align} \label{a34} \bigl\|\frac{1}{\rho_0^{\frac{1}{2}}}\bigl[{\rho_0^2\partial_t^3 \tilde v'}\bigr]'\bigr\|^2_0&=\|\rho_0^{\frac{3}{2}}\partial_t^3\tilde v''\|_0^2+4\|{\rho_0}^{\frac{1}{2}}\rho_0'\partial_t^3\tilde v'\|_0^2+2\int_I \rho_0'\rho_0^2 [|\partial_t^3\tilde v'|^2]'\nonumber\\ &=\|\rho_0^{\frac{3}{2}}\partial_t^3\tilde v''\|_0^2+4\|\rho_0'\rho_0^{\frac{1}{2}}\partial_t^3\tilde v'\|_0^2-4\int_I \rho_0'^2\rho_0 |\partial_t^3\tilde v'|^2-2\int_I \rho_0''\rho_0^2 |\partial_t^3\tilde v'|^2\nonumber\\ &=\|\rho_0^{\frac{3}{2}}\partial_t^3\tilde v''\|_0^2-2 \int_I \rho_0''\rho_0^2 |\partial_t^3\tilde v'|^2. \end{align} Combined with (\ref{a33}), and the fact that $\rho_0\partial_t^3\tilde v'=\rho_0\tilde v_3'+\int_0^\cdot \rho_0\partial_t^4\tilde v'$ for the second term on the right-hand side of (\ref{a34}), we find that \begin{equation} \label{a35} \sup_{[0,t]} \|\rho_0^{\frac{3}{2}}\partial_t^3\tilde v''\|_0 \le C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^\alpha\ (M_0+t P(\sup_{[0,t]}E)). \end{equation} Now, since $\frac{1}{\rho_0^{\frac{1}{2}}}\bigl[{\rho_0^2\partial_t^3 \tilde v'}\bigr]'=\rho_0^{\frac{3}{2}}\partial_t^3\tilde v''+2 {\rho_0}^{\frac{1}{2}}\rho_0'\partial_t^3\tilde v'$, the estimates (\ref{a33}) and (\ref{a35}) also imply that \begin{equation} \label{a36} \sup_{[0,t]} \|{\rho_0}^{\frac{1}{2}}\rho_0'\partial_t^3\tilde v'\|_0 \le C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^\alpha\ (M_0+t P(\sup_{[0,t]}E)).
\end{equation} Therefore, \begin{align*} \sup_{[0,t]} [ \|\rho_0^{\frac{3}{2}}\partial_t^3\tilde v''\|_0+\|\rho_0'\rho_0^{\frac{1}{2}}\partial_t^3\tilde v'\|_0 +\|\rho_0^{\frac{3}{2}}\partial_t^3\tilde v'\|_0] \le &M_0+C t P(\sup_{[0,t]} E) \\ &+ C \sup_{[0,t]} E^\alpha\ (M_0+t P(\sup_{[0,t]}E)), \end{align*} so that with (\ref{degen1}) and (\ref{degen2}), \begin{equation*} \sup_{[0,t]} [ \|\rho_0^{\frac{3}{2}}\partial_t^3\tilde v''\|_0+\|\rho_0^{\frac{1}{2}}\partial_t^3\tilde v'\|_0 ] \le M_0+C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^\alpha\ (M_0+t P(\sup_{[0,t]}E)) \,. \end{equation*} Together with (\ref{a10}) and the weighted embedding estimate (\ref{w-embed}), the above inequality shows that \begin{equation} \label{a37} \sup_{[0,t]} [ \|\rho_0^{\frac{3}{2}}\partial_t^3\tilde v''\|_0+\|\partial_t^3\tilde v\|_{\frac{1}{2}} ] \le M_0+C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^\alpha\ (M_0+t P(\sup_{[0,t]}E)). \end{equation} By studying the second time-differentiated version of (\ref{approximate.a}) in the same manner, we find that \begin{equation} \label{a38} \sup_{[0,t]} [ \|\rho_0^{\frac{3}{2}}\partial_t\tilde v'''\|_0+\|\partial_t\tilde v\|_{\frac{3}{2}} ] \le M_0+C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^\alpha\ (M_0+t P(\sup_{[0,t]}E)) \,. \end{equation} \section{Proof of Theorem \ref{theorem_main}} \subsection{Time of existence and bounds independent of $ \epsilon $ and existence of solutions to (\ref{ce0})}\label{subsec_finish1} Summing the inequalities (\ref{a8}), (\ref{a17}), (\ref{a24}), (\ref{a37}), (\ref{a38}), we find that $$ \sup_{t \in [0,T]} E(t) \le M_0+C t P(\sup_{[0,t]} E) + C \sup_{[0,t]} E^\alpha\ (M_0+t P(\sup_{[0,t]}E)) \,. $$ As $ \alpha < 1$, by employing Young's inequality and readjusting the constants, we obtain $$ \sup_{t \in [0,T]} E(t) \le M_0 + C \, T\, P({\sup_{t\in[0,T]}} E(t)) \,. 
$$ Just as in Section 9 of \cite{CoSh2006}, this provides us with a time of existence $T_1$ independent of $\kappa$ and an estimate on $(0,T_1)$ independent of $\kappa$ of the type: \begin{equation} \sup_{t \in [0,T_1]} E(t) \le 2 M_0 \,. \label{eq420} \end{equation} In particular, our sequence of solutions $(v_ \kappa )$ satisfies the $ \kappa $-independent bound (\ref{eq420}) on the $ \kappa $-independent time-interval $(0,T_1)$. \subsection{The limit as $ \kappa \rightarrow 0$} By the $\kappa$-independent estimate (\ref{eq420}), there exists a subsequence of $\{v_ \kappa \}$ which converges weakly to $v$ in $L^2(0,T_1; H^2(I))$. With $\eta(t,x) = x + \int_0^t v(s,x)ds$, by standard compactness arguments, we see that further subsequences of $v_ \kappa $ and $\eta'_ \kappa$ converge uniformly to $v$ and $\eta'$, respectively, which shows that $v$ is the solution to (\ref{ce0}) and $v(0,x) = u_0(x)$. \subsection{Uniqueness of solutions to the compressible Euler equations (\ref{ce0})} For uniqueness, we require the initial data to have one space-derivative better regularity than for existence. Given the assumption (\ref{uniquedata}) on the data $(u_0,\rho_0)$, repeating our argument for existence, we can produce a solution $v$ on $[0,T_1]$ which satisfies the estimate $$ \sum_{s=0}^3\left[ \|\partial_t^{2s}v(t,\cdot)\|^2_{H^{3-s}(I)} + \|\rho_0 \partial_t^{2s} v(t,\cdot)\|^2_{H^{4-s}(I)} \right] < \infty \,, $$ and has the flow $\eta(t,x)=x + \int_0^t v(s,x) ds$. For the sake of contradiction, let us assume that $w$ is also a solution on $[0,T_1]$ with initial data $(u_0, \rho_0)$, satisfying the same estimate, with flow $\psi(t,x)=x + \int_0^t w(s,x)ds$.
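Since both flows start from the identity, their difference is controlled by the time integral of the difference of the velocities,
$$
\eta'(t,x)-\psi'(t,x)=\int_0^t \bigl(v'(s,x)-w'(s,x)\bigr)\,ds\,,
$$
which, together with the elementary identity ${\eta'}^{-2}-{\psi'}^{-2}=-\,\frac{(\eta'+\psi')\,(\eta'-\psi')}{\eta'^2\,\psi'^2}$, is what produces the factor of $T_1$ in the inequality below.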
We define $$ \delta v = v -w \,, $$ in which case we have the following equation for $ \delta v$: \begin{subequations} \label{cunique} \begin{alignat}{2} \rho_0 \delta v_t + (\rho_0^2[{\eta' }^{-2} - {\psi'} ^{-2} ])'&=0 &&\text{in} \ \ I \times (0,T_1] \,, \label{cunique.a}\\ \delta v&= 0 \ \ \ &&\text{on} \ \ I \times \{t=0\} \,, \label{cunique.b}\\ \rho_0& = 0 \ \ &&\text{ on } \partial I \,. \label{cunique.c} \end{alignat} \end{subequations} By considering the fifth time-differentiated version of (\ref{cunique.a}) and taking the $L^2(I)$ inner-product with $ \partial_t \delta v$, we obtain the analogue of (\ref{a8}) (with $ \kappa =0$) for $ \delta v$. The additional error terms which arise are easily controlled by the fact that both $v$ and $w$ have one space-derivative better regularity than the energy function $E$. This produces a good bound for $ \partial_t^4 \delta v \in L^ \infty (0,T_1; L^2(I))$. By repeating the elliptic and Hardy-type estimates for $\partial_t^2 \delta v \in L^ \infty (0,T_1; H^1(I))$ and $\delta v \in L^ \infty (0,T_1; H^2(I))$, and using (\ref{cunique.b}), we obtain the inequality \begin{align*} & \sup_{t \in [0,T_1]} (\|\partial_t^4 \delta v(t)\|_0^2 + \|\partial_t^2 \delta v(t)\|_1^2 + \| \delta v(t)\|_2^2 ) \\ & \qquad\qquad\qquad \le C\, T_1 \, P( \sup_{t \in [0,T_1]} (\|\partial_t^4 \delta v(t)\|_0^2 + \|\partial_t^2 \delta v(t)\|_1^2 + \| \delta v(t)\|_2^2 )) \,, \end{align*} which shows that $ \delta v=0$. \subsection{Optimal regularity for initial data} We smoothed our initial data $(u_0,\rho_0)$ in order to construct solutions to our degenerate parabolic $ \kappa $-problem (\ref{approximate}). Having obtained solutions which depend only on $E(0,v)$, a standard density argument shows that the initial data need only satisfy $M_0 < \infty $. \section{The case $\gamma \neq 2$} In this section, we describe the modifications to the energy function and the methodology for the case that $\gamma \neq 2$.
We denote by $a_0$ the integer satisfying the inequality $$ 1 < 1+ {\frac{1}{\gamma -1}} -a_0 \le 2 \,. $$ Letting $$ d(x) = \text{dist}(x, \partial I) \,, $$ we consider the following higher-order energy function: \begin{align*} E_\gamma(t,v) & = \sum_{s=0}^4 \| \partial_t^s v(t, \cdot )\|^2_{2 - {\frac{s}{2}} } + \sum_{s=0}^2 \| d \, \partial_t^{2s} v(t, \cdot )\|^2_{3-s} + \| \sqrt{d} \, \partial_t\partial_x^{2} v(t, \cdot )\|^2_{0} + \| \sqrt{d} \, \partial_t^3 \partial_x v(t, \cdot )\|^2_{0} \\ & \qquad + \sum_{a=0}^{a_0} \| \sqrt{d}^{1+ {\frac{1}{\gamma-1}} - a} \partial_t^{4+a_0-a} v'( t, \cdot )\|_0^2 \,, \end{align*} and define the polynomial function $M_0^\gamma = P( E_ \gamma (0, v))$. Notice that the last sum in $E_ \gamma $ appears whenever $ \gamma < 2$, and the number of time-differentiated problems increases as $\gamma \to 1$. Using the same procedure as we have detailed for the case that $\gamma =2$, we have the following \begin{theorem}[Existence and uniqueness for any $\gamma >1$]\label{thm_main2} Given initial data $(u_0, \rho_0)$ such that $M^\gamma_0< \infty $ and the physical vacuum condition (\ref{degen}) holds for $\rho_0$, there exists a solution to (\ref{ceuler0}) (and hence (\ref{ceuler})) on $[0,T_\gamma]$ for $T_\gamma>0$ taken sufficiently small, such that $$ \sup_{t \in [0,T_\gamma]} E_\gamma(t) \le 2M^\gamma_0 \,. $$ Moreover, if the initial data satisfies $$ \sum_{s=0}^3 \|\partial_t^{2s}v(0,\cdot)\|^2_{H^{3-s}(I)} + \sum_{s=0}^3 \|d\, \partial_t^{2s} v(0,\cdot)\|^2_{H^{4-s}(I)} + \sum_{a=0}^{a_0} \| \sqrt{d}^{1+ {\frac{1}{\gamma-1}} - a} \partial_t^{6+a_0-a} v'( 0, \cdot )\|_0^2< \infty \,, $$ then the solution is unique. \end{theorem} \noindent {\bf Acknowledgments.} SS was supported by the National Science Foundation under grant DMS-0701056. \end{document}
\begin{document} \title[Fourier transform and middle convolution for $\D$-modules]{Fourier transform and middle convolution for irregular $\D$-modules} \author{D.~Arinkin} \date{\today} \begin{abstract} In \cite{BE}, S.~Bloch and H.~Esnault constructed the local Fourier transform for $\D$-modules. We present a different approach to the local Fourier transform, which makes its properties almost tautological. We apply the local Fourier transform to compute the local version of Katz's middle convolution. \end{abstract} \maketitle \section{Introduction} G.~Laumon defined the local Fourier transformations of $l$-adic sheaves in \cite{La}. In the context of $\D$-modules, the local Fourier transform was constructed by S.~Bloch and H.~Esnault in \cite{BE}. One can also view the $\D$-modular local Fourier transform as the formal microlocalization defined by R.~Garc{\'{\i}}a L{\'o}pez in \cite{GL}. In this paper, we present another approach to the local Fourier transform. Roughly speaking, the idea is to consider a $\D$-module on the punctured neighborhood of a point $x\in\A1$ as a $\D$-module on $\A1$ (of course, the resulting $\D$-module is not holonomic). We then claim that the Fourier transform of this non-holonomic $\D$-module is again supported on the formal neighborhood of a point. This yields a transform for $\D$-modules on the formal disk: the local Fourier transform. Thus the local Fourier transform appears as the (global) Fourier transform applied to non-holonomic $\D$-modules of a special kind. The main property of the local Fourier transform is that it relates the singularities of a holonomic $\D$-module and those of its (global) Fourier transform. For instance, if $M$ is a holonomic $\D_{\A1}$-module, the singularity of its Fourier transform $\Four(M)$ at $x\in\A1$ is obtained by the local Fourier transform from the singularity of $M$ at infinity; see Corollary~\ref{co:localfourier} for the precise statement.
(Actually, `singularity' here refers to the formal vanishing cycles functor described in Section~\ref{sc:disk}.) The main property is essentially the formal stationary phase formula of \cite{GL}; in the setting of \cite{BE}, it follows from \cite[Corollary 2.5]{BE}. One advantage of our definition of the local Fourier transform is that the main property becomes a tautology: it follows from adjunctions between natural functors. On the other hand, the direct proof of the formal stationary phase formula (found in \cite{GL}) appears quite complicated. Using the main property, we give a simple conceptual proof of the invariance of the rigidity index under the Fourier transform, which is one of the main results of \cite{BE}. We then develop a similar framework for another transform $\Rad$ of $\D$-modules. $\Rad$ is the Radon transform studied by A.~D'Agnolo and M.~Eastwood in \cite{AE} (we only consider $\D$-modules on $\p1$ in this paper, but \cite{AE} applies to $\p{n}$). One can also view $\Rad$ as a `twisted version' of the transform defined by J.-L.~Brylinski in \cite{Br}; in a sense, $\Rad$ is also a particular case of the Radon transform defined by A.~Braverman and A.~Polishchuk in \cite{BP}. Finally, $\Rad$ can be interpreted as Katz's additive middle convolution with the Kummer local system in the sense of \cite{Ka}. We are going to call the Radon transform for $\D$-modules on $\p1$ the \emph{Katz-Radon transform}. Different approaches to $\Rad$ are summarized in Section~\ref{ssc:radon}. We define the local Katz-Radon transform. It is an auto-equivalence of the category of $\D$-modules on the punctured formal disk. Similarly to the Fourier transform, the local Katz-Radon transform describes the effect of the (global) Katz-Radon transform on the `singularity' of $\D$-modules (see Corollary~\ref{co:localradon}). Finally, we prove an explicit formula for the local Katz-Radon transform.
This answers (in the setting of $\D$-modules) the question posed by N.~Katz in \cite[Section 3.4]{Ka}. \subsection{Acknowledgments} I am very grateful to A.~Beilinson, S.~Bloch, and V.~Drinfeld for stimulating discussions. I would also like to thank the Mathematics Department of the University of Chicago for its hospitality. \section{Main results} \subsection{Notation} We fix a ground field $\kk$ of characteristic zero. Thus, a `variety' is a `variety over $\kk$', `$\p1$' is `${\mathbb P}^1_\kk$', and so on. The algebraic closure of $\kk$ is denoted by $\overline\kk$. For a variety $X$, we denote by $X(\overline\kk)$ the set of points of $X$ over $\overline\kk$. By $x\in X$, we mean that $x$ is a closed point of $X$; equivalently, $x$ is a Galois orbit in $X(\overline\kk)$. We denote the field of definition of $x\in X$ by $\kk_x$. If $X$ is a curve, $A_x$ stands for the completion of the local ring of $x\in X$, and $K_x$ stands for its fraction field. If $z$ is a local coordinate at $x$, we have $A_x=\kk_x[[z]]$, $K_x=\kk_x((z))$. Let $K=\kk((z))$ be the field of formal Laurent series. (The choice of a local coordinate $z$ is not essential.) Denote by $$\D_K=K\left\langle\frac{d}{dz}\right\rangle$$ the ring of differential operators over $K$. Let $\DMod{K}$ be the category of left $\D_K$-modules. The \emph{rank} of $M\in\DMod{K}$ is $\rk M=\dim_KM$. By definition, $M$ is \emph{holonomic} if $\rk M<\infty$. Denote by $\DHol{K}\subset\DMod{K}$ the full subcategory of holonomic $\D_K$-modules. \subsection{Local Fourier transform: example}\label{sc:explicit} The local Fourier transform comes in several `flavors': $\Four(x,\infty)$, $\Four(\infty,x)$, and $\Four(\infty,\infty)$. Here $x\in\A1$ (it is possible to reduce to the case $x=0$, although this is not immediate if $x$ is not $\kk$-rational). To simplify the exposition, we start by focusing on one of the `flavors' and consider $\Four(0,\infty)$. Fix a coordinate $z$ on $\A1$.
Let $K_0=\kk((z))$ be the field of formal Laurent series at $0$. Fix $M\in\DHol{K_0}$. Explicitly, $M$ is a finite-dimensional vector space over $K_0$ equipped with a $\kk$-linear derivation $$\partial_z:M\to M.$$ The inclusion \begin{equation} \kk[z]\hookrightarrow\kk((z))\label{eq:laurent_to_poly} \end{equation} allows us to view $M$ as a $\D$-module on $\A1$. In other words, we consider on $M$ the action of the Weyl algebra $$W=\kk\left\langle z,\frac{d}{dz}\right\rangle$$ of polynomial differential operators. We denote this $\D_\A1$-module by $\jo_{0*}M$, where $\jo_0$ refers to the embedding of the punctured formal neighborhood of $0$ into $\A1$. Of course, $\jo_{0*}M$ is not holonomic. Actually, $\jo_{0*}M$ gives one of the two ways to view $M$ as a $\D_\A1$-module. Indeed, \eqref{eq:laurent_to_poly} is a composition \begin{equation*} \kk[z]\hookrightarrow\kk[[z]]\hookrightarrow\kk((z)), \end{equation*} so $\jo_{0*}M=j_{0*}\jo_*M$, where $j_0$ (resp. $\jo$) is the embedding of the formal disk at $0$ into $\A1$ (resp. the embedding of the punctured formal disk into the formal disk). However, there are two dual ways to extend a $\D$-module across the puncture: $\jo_*$ and $\jo_!$, so we obtain another $\D_\A1$-module $$M_!=j_{0*}\jo_!M.$$ Consider now the Fourier transform $\Four(M_!)$. As a $\kk$-vector space, it coincides with $M_!$, but the Weyl algebra acts on $\Four(M_!)$ through the automorphism \begin{equation} \label{eq:F} \four:W\to W:\qquad\four(z)=-\frac{d}{dz},\four\left(\frac{d}{dz}\right)=z. \end{equation} We claim that $\Four(M_!)$ is actually a holonomic $\D$-module on the punctured formal disk at infinity, extended to $\A1$ as described above. We call this holonomic $\D$-module the \emph{local Fourier transform} of $M$ and denote it by $\Four(0,\infty)M\in\DHol{K_\infty}$. 
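(As a sanity check on \eqref{eq:F}, note that $\four$ preserves the defining relation of the Weyl algebra:
$$
\left[\four\left(\frac{d}{dz}\right),\four(z)\right]=\left[z,-\frac{d}{dz}\right]=\left[\frac{d}{dz},z\right]=1\,,
$$
so \eqref{eq:F} extends to an algebra endomorphism of $W$; since $\four^2(z)=-z$ and $\four^2\bigl(\frac{d}{dz}\bigr)=-\frac{d}{dz}$, that is, $\four^2$ is the sign-change automorphism, $\four$ is invertible and hence an automorphism, as one expects from Fourier inversion.)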
That is, \begin{equation} \Four(M_!)=\jo_{\infty*}(\Four(0,\infty)M),\label{eq:four_0_infty} \end{equation} where $\jo_\infty$ is the embedding of the punctured formal neighborhood at infinity into $\A1$. (Note that $!$-extension across the puncture is meaningless at $\infty$, because $\infty\not\in\A1$.) However, \eqref{eq:four_0_infty} does not completely determine $\Four(0,\infty)$, because the functor $\jo_{\infty*}$ (as well as $\jo_{0*}$ and $j_{0*}\jo_!$) is not fully faithful. In other words, $\Four(M_!)$ has a well-defined action of $W$, but we need to extend it to an action of $\D_{K_\infty}$. To make such an extension unique, we consider the topology on $M_!$. The definition of $\Four(0,\infty)M$ can thus be summarized as follows. $M_!$ has an action of $\kk[[z]]$ and a derivation $\partial_z$. Equip $M_!$ with the $z$-adic topology (see Section~\ref{sc:topology}), and consider on $M_!$ the $\kk$-linear operators \begin{equation} \zeta=-\partial^{-1}_z:M_!\to M_!\qquad\partial_\zeta=-\partial^2_z z:M_!\to M_!, \label{eq:localfourier} \end{equation} where $\zeta=1/z$ is the coordinate at $\infty\in\p1$. We then make the following claims. \begin{enumerate} \item\label{it:first} $\zeta:M_!\to M_!$ is well defined, that is, $\partial_z:M_!\to M_!$ is invertible. \item $\zeta:M_!\to M_!$ is continuous in the $z$-adic topology; moreover, $\zeta^n\to 0$ as $n\to\infty$; in other words, $\zeta$ is $z$-adically contracting. Thus $\zeta$ defines an action of $K_\infty=\kk((\zeta))$ on $M_!$. \item $\dim_{K_\infty}M_!<\infty$, so $M_!$ with derivation $\partial_\zeta$ yields an object $$\Four(0,\infty)M\in\DHol{K_\infty}.$$ This defines a functor $\Four(0,\infty):\DHol{K_0}\to\DHol{K_\infty}$. \item $\Four(0,\infty)$ is an equivalence between $\DHol{K_0}$ and the full subcategory $$\DHol{K_\infty}^{<1}\subset\DHol{K_\infty}$$ of objects whose irreducible components have slopes smaller than $1$.
\item\label{it:last} The $z$-adic topology and the $\zeta$-adic topology on $M_!$ coincide. \end{enumerate} Let us compare this definition of $\Four(0,\infty)$ with that of \cite{BE}. In \cite{BE}, there is an additional restriction that $M$ has no horizontal sections. From our point of view, this restriction guarantees that the two extensions $\jo_{0*}M$ and $j_{0*}\jo_!M$ coincide, which simplifies the above construction. If one defines $\Four(0,\infty)M$ following \cite{BE}, then \cite[Proposition 3.7]{BE} shows that $M\in\DHol{K_0}$ and $\Four(0,\infty)M\in\DHol{K_\infty}$ are equal as $\kk$-vector spaces, while the $\D$-module structures are related by \eqref{eq:localfourier}. The proof of this proposition shows that $\zeta$ is $z$-adically contracting; this implies that the two definitions of the local Fourier transform agree. For the local Fourier transform $\Four(\infty,\infty)$, the corresponding statements are contained in \cite[Proposition 3.12]{BE}. \begin{remark} One can derive the claims \eqref{it:first}--\eqref{it:last} from \cite{BE}, at least assuming $M$ has no horizontal sections. We present a direct proof in Section~\ref{sc:localfourierproof}. \label{rm:BE} \end{remark} \subsection{Local Fourier transform} Consider the infinity $\infty\in\p1$. Write \begin{equation} \label{eq:g1le1} \DHol{K_\infty}=\DHol{K_\infty}^{>1}\oplus\DHol{K_\infty}^{\le 1}, \end{equation} where the two terms correspond to full subcategories of $\D_{K_\infty}$-modules with slopes greater than one and less than or equal to one, respectively. Further, split \begin{equation} \label{eq:le1} \DHol{K_\infty}^{\le1}=\bigoplus_{\alpha\in\A1}\DHol{K_\infty}^{\le 1,(\alpha)}, \end{equation} according to the leading term of the derivation. More precisely, consider the maximal unramified extension $$K^{unr}_\infty=K_\infty\otimes_\kk\overline\kk=\overline\kk((\zeta)),$$ where $\zeta=1/z$ is the coordinate at $\infty$.
For any $\beta\in\overline\kk$, let $\ell_\beta\in\DHol{K_\infty^{unr}}$ be the vector space $K_\infty^{unr}$ equipped with the derivation $$\partial_\zeta=\frac{d}{d\zeta}+\frac{\beta}{\zeta^2}.$$ (Horizontal sections of $\ell_\beta$ are multiples of $\exp(\beta/\zeta)=\exp(\beta z)$, so $\ell_\beta$ is the formal germ at infinity of an exponential local system; for $\beta\ne0$, it has slope one.) Let $\DHol{K^{unr}_\infty}^{<1}\subset\DHol{K^{unr}_\infty}$ be the full subcategory of modules whose components have slopes less than one, and set $$\DHol{K^{unr}_\infty}^{\le 1,(\beta)}=\ell_\beta\otimes\DHol{K^{unr}_\infty}^{<1}.$$ Finally, for $\alpha\in\A1$, we define the full subcategory $\DHol{K_\infty}^{\le 1,(\alpha)}\subset\DHol{K_\infty}$ by $$M\in\DHol{K_\infty}^{\le1,(\alpha)}\text{ if and only if }M\otimes\overline\kk\in\bigoplus_{\beta\in\alpha}\DHol{K_\infty^{unr}}^{\le 1,(\beta)}.$$ Here the direct sum is over all geometric points $\beta\in\A1(\overline\kk)$ corresponding to the closed point $\alpha$. \begin{remark} The local system $\ell_\alpha$ for $\alpha\in\A1$ is defined over $\kk_\alpha$. That is, $\ell_\alpha$ makes sense in $\DHol{K_\infty\otimes\kk_\alpha}$. We can therefore set $$\DHol{K_\infty\otimes\kk_\alpha}^{\le 1,(\alpha)}=\ell_\alpha\otimes\DHol{K_\infty\otimes\kk_\alpha}^{<1}.$$ Then $\DHol{K_\infty}^{\le 1,(\alpha)}$ can be defined as the essential image of $\DHol{K_\infty\otimes\kk_\alpha}^{\le 1,(\alpha)}$ under the restriction of scalars functor $$\DHol{K_\infty\otimes\kk_\alpha}\to\DHol{K_\infty}.$$ \end{remark} \begin{THEOREM} \begin{enumerate} \item For any $x\in\A1$, there is an equivalence $$\Four(x,\infty):\DHol{K_x}\to\DHol{K_\infty}^{\le1,(x)}$$ and a functorial isomorphism $$\Four(j_{x*}\jo_!(M))\iso\jo_{\infty*}(\Four(x,\infty)(M)).$$ The isomorphism is a homeomorphism in the natural topology (defined in Section~\ref{sc:topology}). This determines $\Four(x,\infty)$ up to a natural isomorphism (Lemma~\ref{lm:topology}).
\item For any $x\in\A1$, there is also an equivalence $$\Four(\infty,x):\DHol{K_\infty}^{\le1,(x)}\to\DHol{K_x}$$ and a functorial isomorphism $$\Four(\jo_{\infty*}(M))\iso j_{x*}\jo_!(\Four(\infty,x)(M)).$$ The isomorphism is a homeomorphism in the topology of Section \ref{sc:topology}, which determines $\Four(\infty,x)$ up to a natural isomorphism. \item Finally, there exists an equivalence $$\Four(\infty,\infty):\DHol{K_\infty}^{>1}\to\DHol{K_\infty}^{>1}$$ and a functorial isomorphism $$\Four(\jo_{\infty*}(M))\iso\jo_{\infty*}(\Four(\infty,\infty)(M)).$$ The isomorphism is a homeomorphism in the topology of Section \ref{sc:topology}, which determines $\Four(\infty,\infty)$ up to a natural isomorphism. \end{enumerate} \label{th:localfourier} \end{THEOREM} The equivalences of Theorem~\ref{th:localfourier} are called the \emph{local Fourier transforms}. We prove Theorem~\ref{th:localfourier} in Section~\ref{sc:localfourierproof}. \subsection{Fourier transform and rigidity} The functor $\jo_{\infty *}$ has a left adjoint $$\Psi_\infty=\jo^*_\infty:\DHol{\A1}\to\DHol{K_\infty}:M\mapsto K_\infty\otimes M,$$ where $\DHol{\A1}$ is the category of holonomic $\D$-modules on $\A1$. Similarly, for any $x\in\A1$, the extension functor $j_{x*}\jo_!$ has a left adjoint $$\Phi_x:\DHol{\A1}\to\DHol{K_x},$$ which we call the \emph{formal vanishing cycles functor} (defined in Section~\ref{sc:curve}). For $N\in\DHol{K_\infty}$, denote by $$N^{\le1,(x)}\in\DHol{K_\infty}^{\le 1,(x)}\quad (x\in\A1),\qquad N^{>1}\in\DHol{K_\infty}^{>1}$$ its components with respect to the decompositions \eqref{eq:g1le1}, \eqref{eq:le1}. \begin{corollary}\label{co:localfourier} Fix $M\in\DHol{\A1}$. \begin{enumerate} \item For any $x\in\A1$, there are natural isomorphisms \begin{gather} \Phi_x(\Four(M))=\Four(\infty,x)\left(\Psi_\infty(M)^{\le 1,(x)}\right),\label{eq:ainfty}\\ \Psi_\infty(\Four(M))^{\le 1,(x)}=\Four(x,\infty)\Phi_x(M)\label{eq:inftya}.
\end{gather} \item Similarly, there is a natural isomorphism \begin{equation} \Psi_\infty(\Four(M))^{>1}=\Four(\infty,\infty)\left(\Psi_\infty(M)^{>1}\right)\label{eq:inftyinfty}. \end{equation} \end{enumerate} \end{corollary} \begin{proof} Follows immediately from Theorem~\ref{th:localfourier}. \end{proof} Note that \eqref{eq:inftya} and \eqref{eq:inftyinfty} can be combined as follows: \begin{equation} \label{eq:infty} \Psi_\infty\Four(M)=\Four(\infty,\infty)\left(\Psi_\infty(M)^{>1}\right)\oplus\bigoplus_{x\in\A1}\Four(x,\infty)\Phi_x(M). \end{equation} \begin{remark}\label{rm:phase} Compare \eqref{eq:infty} with the formal stationary phase formula of \cite{GL}: \begin{equation} \Psi_\infty\Four(M)=\bigoplus_{x\in\p1}\cF^{(x,\infty)}M, \label{eq:phase} \end{equation} where for $x\in\A1$, $\cF^{(x,\infty)}M$ is the ordinary microlocalization of $M$ at $x$, while the term at $x=\infty$ is the $(\infty,\infty)$ microlocalization $\cF^{(\infty,\infty)}M$. Actually, the corresponding terms in \eqref{eq:infty} and \eqref{eq:phase} are equal, so that for instance $$\cF^{(x,\infty)}M=\Four(x,\infty)\Phi_x(M),$$ see Section~\ref{sc:Phi}. \end{remark} Because of Corollary~\ref{co:localfourier}, one can relate the `formal type' of $M$ to the `formal type' of $\Four(M)$. Actually, one has to assume that both $M$ and $\Four(M)$ are middle extensions of local systems from open subsets of $\A1$; see Section~\ref{sc:formal type} for the definitions and Section~\ref{sc:fourier and type} for precise statements. In particular, the isotypical (that is, preserving the formal type) deformations of $M$ and those of $\Four(M)$ are in one-to-one correspondence. For instance, $M$ is \emph{rigid} (has no non-constant isotypical deformations) if and only if its Fourier transform is rigid. This statement goes back to N.~Katz in the $l$-adic setting (\cite[Theorem 3.0.3]{Ka}); the version for $\D$-modules is due to S.~Bloch and H.~Esnault (\cite[Theorem 4.3]{BE}).
Corollary~\ref{co:localfourier} provides a conceptual proof of this statement. \subsection{Katz-Radon transform} Consider now the Katz-Radon transform. Fix $\lambda\in\kk-\Z$, and let $\D_\lambda$ be the corresponding ring of twisted differential operators on $\p1$ (see Section~\ref{sc:TDOmodules} for details). Denote by $\DHol{\lambda}$ the category of holonomic $\D_\lambda$-modules on $\p1$. \emph{The Katz-Radon transform} is an equivalence of categories $\Rad:\DHol{\lambda}\to\DHol{-\lambda}$. It is defined in \cite{AE}; we sketch several approaches to $\Rad$ in Section~\ref{ssc:radon}. \begin{THEOREM} For any $x\in\p1$, there is an equivalence $$\Rad(x,x):\DHol{K_x}\to\DHol{K_x}$$ called the local Katz-Radon transform and a functorial isomorphism $$\Rad(j_{x*}\jo_!(M))\iso j_{x*}\jo_!(\Rad(x,x)(M)).$$ The isomorphism is a homeomorphism in the topology of Section \ref{sc:topology}. This determines $\Rad(x,x)$ up to a natural isomorphism (by Lemma \ref{lm:topology}). \label{th:localradon} \end{THEOREM} \begin{remark*} It would be interesting to apply these ideas to other `one-dimensional integral transforms', such as the Mellin transform of \cite{Mellin}. \end{remark*} It turns out that the local Radon transform $\Rad(x,x)$ can be described in simple terms. Fix $x\in\p1$. For $\gamma\in\kk$, denote by $\cK_x^\gamma\in\DHol{K_x}$ the \emph{Kummer local system} with residue $\gamma\in\kk$. Explicitly, $\cK_x^\gamma$ is the vector space $K_x=\kk_x((z))$ equipped with the derivation $$\partial_z=\frac{d}{dz}+\frac{\gamma}{z}.$$ Here $z$ is a local coordinate at $x$. \begin{THEOREM} For $M\in\DHol{K_x}$ and $s\in\Q$ denote by $M^s$ the maximal submodule of $M$ all of whose components have slope $s$. Then $$\Rad(x,x)M\simeq\bigoplus_sM^s\otimes\cK_x^{\lambda(s+1)}.$$ \label{th:computeradon} \end{THEOREM} The problem of computing the local Katz-Radon transform was posed in \cite[Section 3.4]{Ka}. Theorem~\ref{th:computeradon} solves it in the setting of $\D$-modules.
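To illustrate Theorem~\ref{th:computeradon}, here are two immediate specializations, spelled out for the reader; they follow directly from the displayed formula, using that tensoring Kummer modules adds residues.

```latex
% If M is regular, every component has slope s = 0, so only one summand
% survives and the local Katz-Radon transform is a Kummer twist:
\[ \Rad(x,x)\,M \;\simeq\; M\otimes\cK_x^{\lambda} \qquad (M\text{ regular}). \]
% In particular, for a Kummer module itself,
% \cK_x^{\gamma}\otimes\cK_x^{\lambda} = \cK_x^{\gamma+\lambda} gives
\[ \Rad(x,x)\,\cK_x^{\gamma} \;\simeq\; \cK_x^{\gamma+\lambda}. \]
```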
However, the proof does not extend to the $l$-adic setting. \subsection{Organization} The rest of this paper is organized as follows. In Section~\ref{sc:formal disk}, we consider the category of holonomic $\D$-modules on the formal disk. In Section~\ref{sc:curves}, we review the basic functors on holonomic $\D$-modules, and the notion of isotypical deformation of local systems. We study the local Fourier transform in Section~\ref{sc:fourier} and the local Katz-Radon transform in Section~\ref{sc:radon}. Finally, in Section~\ref{sc:computeradon}, we prove the explicit formula for the Katz-Radon transform (Theorem~\ref{th:computeradon}). \section{$\D$-modules on formal disk}\label{sc:formal disk} \subsection{Functors on $\D$-modules} \label{sc:disk} Let $A=\kk[[z]]$ be the ring of formal Taylor series; $K=\kk((z))$ is the field of fractions of $A$. Denote by $$\D_A=A\left\langle\frac{d}{dz}\right\rangle$$ the ring of differential operators over $A$ and by $\DMod{A}$ the category of left $\D_A$-modules. Explicitly, $M\in\DMod{A}$ is an $A$-module equipped with a derivation $\partial_z:M\to M$. The \emph{rank} of $M$ is $$\rk M=\dim_K(K\otimes_AM).$$ By definition, $M\in\DMod{A}$ is \emph{holonomic} if it is finitely generated and has finite rank. Let $\DHol{A}\subset\DMod{A}$ be the full subcategory of holonomic $\D_A$-modules. We work with the following functors (all of them except $\Phi$ are standard). \begin{itemize} \item \emph{Verdier duality}: For $M\in\DHol{A}$, denote its dual by $DM$. For $M\in\DHol{K}$, the dual $DM\in\DHol{K}$ is simply the dual vector space $M^\vee$ equipped with the natural derivation. \item \emph{Restriction}: For $M\in\DMod{A}$, set $$\jo^*(M)=K\otimes_AM\in\DMod{K}.$$ Here $\jo$ is the embedding of the formal punctured disk into the formal disk. Sometimes, we call the restriction functor $\jo^*:\DHol{A}\to\DHol{K}$ the \emph{formal nearby cycles} functor and denote it by $\Psi=\jo^*$.
\item \emph{Extensions}: Any $M\in\DMod{K}$ can be viewed as a $\D_A$-module using the inclusion $\D_A\subset\D_K$; the corresponding object is denoted $\jo_*(M)$. If $M\in\DHol{K}$, we set $$\jo_!(M)=D(\jo_*(DM)).$$ \item \emph{Formal vanishing cycles}: The last functor is $\Phi:\DHol{A}\to\DHol{K}$. It can be defined as the left adjoint of $\jo_!$ (or the right adjoint of $\jo_*$). See Section~\ref{sc:Phi} for a more explicit description. \end{itemize} \begin{proposition} \label{pp:functors} \begin{enumerate} \item The Verdier duality $D$ gives involutive anti-equivalences of $\DHol{K}$ and $\DHol{A}$. \item $\Psi$ and $\Phi$ are exact and commute with the duality. \item $\jo^*\jo_*=\jo^*\jo_!=Id$. \item $\jo_*$ is exact and fully faithful. $M\in\DHol{A}$ belongs to the essential image $\jo_*(\DHol{K})$ if and only if it satisfies the following equivalent conditions: \begin{enumerate} \item The action of $z$ on $M$ is invertible; \item $\Ext^i_{\D_A}(\delta,M)=0$ for $i=0,1$; \item $\Ext^i_{\D_A}(M,A)=0$ for $i=0,1$; \item $\Hom_{\D_A}(\delta,M)=\Hom_{\D_A}(M,A)=0$; \item $i^!M=0$ (in the derived sense). \end{enumerate} Here $\delta\in\DHol{A}$ is the $\D$-module of $\delta$-functions $\D_A/\D_Az$, and $A\in\DHol{A}$ stands for the constant $\D$-module $\D_A/\D_A(d/dz)$. Finally, $i$ is the closed embedding of the special point into the formal disk. \item\label{it:!} $\jo_!$ is exact and fully faithful. $M\in\DHol{A}$ belongs to the essential image $\jo_!(\DHol{K})$ if and only if it satisfies the following equivalent conditions: \begin{enumerate} \item \label{it:!inv} The action of $d/dz$ on $M$ is invertible; \item \label{it:!A} $\Ext^i_{\D_A}(A,M)=0$ for $i=0,1$; \item $\Ext^i_{\D_A}(M,\delta)=0$ for $i=0,1$; \item \label{it:!h} $\Hom_{\D_A}(A,M)=\Hom_{\D_A}(M,\delta)=0$; \item $i^*M=0$ (in the derived sense). \end{enumerate} \item\label{it:adj} The following pairs of functors are adjoint: $(\Psi,\jo_*)$, $(\jo_*,\Phi)$, $(\Phi,\jo_!)$, $(\jo_!,\Psi)$.
\end{enumerate} \end{proposition} We prove Proposition~\ref{pp:functors} in Section~\ref{sc:proofs}. \subsection{Construction of $\Phi$}\label{sc:Phi} Proposition~\ref{pp:functors} can be used to describe $\Phi$. By Proposition~\ref{pp:functors}\eqref{it:!}, we can identify $\DHol{K}$ with its image $\jo_!(\DHol{K})\subset\DHol{A}$. Then $\jo_!$ becomes the inclusion $\jo_!(\DHol{K})\hookrightarrow\DHol{A}$, and $\Phi$ is the left adjoint of the inclusion. Fix $M\in\DHol{A}$. There is an object $M'\in\jo_!(\DHol{K})$, unique up to isomorphism, together with a map $\can:M\to M'$ such that $\ker(\can)$ and $\coker(\can)$ are constant $\D_A$-modules. Namely, let $$M^{hor}=A\otimes\Hom_{\D_A}(A,M)$$ be the maximal constant submodule of $M$, and let $M'$ be the universal extension of $M/M^{hor}$ by a constant $\D_A$-module. We thus get a sequence of $\D_A$-modules: \begin{equation} 0\to A\otimes\Hom_{\D_A}(A,M)\to M\to M'\to A\otimes\Ext^1_{\D_A}(A,M)\to 0. \label{eq:sequence_phi} \end{equation} Note that $\Ext^1_{\D_A}(A,M)=\Ext^1_{\D_A}(A,M/M^{hor})$. By Proposition~\ref{pp:functors}\eqref{it:!A}, $M'\in\jo_!(\DHol{K})$, and we define $\Phi(M)$ by $\jo_!\Phi(M)=M'$. Dually, one can construct $\Phi$ by presenting the right adjoint of the inclusion $\jo_*(\DHol{K})\hookrightarrow\DHol{A}$. We can also interpret $\Phi$ using the formal microlocalization of \cite{GL}. Recall the definitions. Denote by $\mD$ the ring of formal microdifferential operators $$\mD=\left\{\sum_{i=-\infty}^k a_i(z)\left(\frac{d}{dz}\right)^i:a_i(z)\in A,k\text{ is not fixed}\right\}.$$ ($\mD$ does not depend on the choice of the local coordinate $z$.) In \cite{GL}, $\mD$ is denoted by $\cF^{(c,\infty)}$, where $K=K_c$. We have a natural embedding $\D_A\hookrightarrow\mD$. \begin{example} Consider $\jo_!M$ for $M\in\DHol{K}$. The action of $d/dz$ on $\jo_!M$ is invertible.
One can check that it induces an action of $\mD$ on $\jo_!M$ (because $d/dz$ is nicely expanding on $\jo_!M$ in the sense of Section~\ref{sc:localfourierproof}). \label{ex:md} \end{example} \begin{proposition} For any $M\in\DHol{A}$, $$\mD\otimes_{\D_A}M=\jo_!\Phi(M).$$ \label{pp:phi_phase} \end{proposition} \begin{proof} First, note that for the constant $\D$-module $A\in\DHol{A}$, we have $$\mD\otimes_{\D_A}A=0.$$ By \eqref{eq:sequence_phi}, it remains to check that the natural map \begin{equation} \label{eq:m!} \jo_!M\to\mD\otimes_{\D_A}\jo_!M,\quad M\in\DHol{K} \end{equation} is an isomorphism. Note that \eqref{eq:m!} is injective by Example~\ref{ex:md}. We prove surjectivity of \eqref{eq:m!} using the local Fourier transform. Identify $K$ with $K_0$ for $0\in\A1$ (we prefer working at $0$ so that the coordinate $z$ on $\A1$ is also a local coordinate at $0$). The local Fourier transform $\Four(0,\infty)M$ can be described in terms of $\mD$ as follows. Let $\zeta=1/z$ be the coordinate at $\infty$, so $K_\infty=\kk((\zeta))$. Embed $\D_{K_\infty}$ into $\mD$ by \eqref{eq:localfourier} as $$\zeta\mapsto-\left(\frac{d}{dz}\right)^{-1},\qquad\frac{d}{d\zeta}\mapsto-\frac{d^2}{dz^2}z.$$ By Example~\ref{ex:md}, $\jo_!M$ has an action of $\mD$, and $\Four(0,\infty)M$ is obtained by restricting it to $\D_{K_\infty}$. In particular, $\jo_!M$ is holonomic as a $\D_{K_\infty}$-module, and therefore it possesses a cyclic vector. Now the claim follows from the division theorem \cite[Theorem 1.1]{GL}. \end{proof} Since $\jo_!$ is fully faithful, Proposition~\ref{pp:phi_phase} completely describes $\Phi$. It also relates $\Phi$ and the formal microlocalization of \cite{GL}. The formal microlocalization amounts to viewing $\mD\otimes_{\D_A}M$ as a $\D_{K_\infty}$-module; by Proposition~\ref{pp:phi_phase}, this $\D_{K_\infty}$-module is the local Fourier transform of $\Phi(M)$ (cf. Remark~\ref{rm:phase}).
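As a concrete illustration of the construction of $\Phi$, here is how it acts on the two irreducible regular objects of $\DHol{A}$ (a simple computation, consistent with the exact sequence \eqref{eq:sequence_phi} and with Proposition~\ref{pp:phi_phase}):

```latex
% For the constant module A = \D_A/\D_A(d/dz): here M^{hor} = M, so M' = 0 and
\[ \Phi(A)=0,\qquad \Psi(A)=K. \]
% For the delta-module \delta = \D_A/\D_A z: here M^{hor} = 0, and the
% universal extension M' fits into 0 \to \delta \to M' \to A \to 0 with
% M' = \jo_!(K), whence
\[ \Phi(\delta)=K,\qquad \Psi(\delta)=0. \]
% Accordingly, \mD\otimes_{\D_A}A = 0, while
% \mD\otimes_{\D_A}\delta = \mD/\mD z = \jo_!\Phi(\delta) = \jo_!(K).
```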
\subsection{Proof of Proposition~\ref{pp:functors}} \label{sc:proofs} Note that the category $\DHol{K}$ decomposes as a direct sum $$\DHol{K}=\DHol{K}^{reg}\oplus\DHol{K}^{irreg},$$ where $\DHol{K}^{reg}$ (resp. $\DHol{K}^{irreg}$) is the full subcategory of regular (resp. purely irregular) modules. Similarly, there is a decomposition $$\DHol{A}=\DHol{A}^{reg}\oplus\DHol{A}^{irreg}$$ (see \cite[Theorem III.2.3]{Mal}). All of the above functors respect this decomposition. Moreover, $\Psi$ restricts to an equivalence $$\DHol{A}^{irreg}\iso\DHol{K}^{irreg};$$ the inverse equivalence is $\jo_*=\jo_!$. Thus Proposition~\ref{pp:functors} is obvious in the case of purely irregular modules. Let us look at the regular case. It is instructive to start with $\kk=\C$. Then the categories have the following well-known descriptions, which we copied from \cite[Theorem II.1.1, Theorem II.3.1]{Mal}. \begin{itemize} \item $\DHol{K}^{reg}$ is the category of local systems on a punctured disk. It is equivalent to the category of pairs $(V,\rho)$, where $V$ is a finite-dimensional vector space and $\rho\in\Aut(V)$. Geometrically, $V$ is the space of nearby cycles and $\rho$ is the monodromy of a local system. \item $\DHol{A}^{reg}$ is the category of perverse sheaves on a disk that are smooth away from the puncture. It is equivalent to the category of collections $(V, V',\alpha,\beta)$, where $V$ and $V'$ are finite-dimensional vector spaces, and linear operators $\alpha:V\to V'$ and $\beta:V'\to V$ are such that $\alpha\beta+\id$ (equivalently, $\beta\alpha+\id$) is invertible. Geometrically, $V$ and $V'$ are the spaces of nearby and vanishing cycles, respectively.
\end{itemize} Under these equivalences, the functors between $\DHol{A}^{reg}$ and $\DHol{K}^{reg}$ can be described as follows: \begin{equation}\label{eq:PhiPsi} \begin{aligned} \Psi(V,V',\alpha,\beta)&=(V,\beta\alpha+\id)\cr \Phi(V,V',\alpha,\beta)&=(V',\alpha\beta+\id)\cr \jo_*(V,\rho)&=(V,V,\id,\rho-\id)\cr \jo_!(V,\rho)&=(V,V,\rho-\id,\id)\cr D(V,\rho)&=(V^*,(\rho^*)^{-1})\cr D(V,V',\alpha,\beta)&=(V^*,(V')^*,-\beta^*,\alpha^*(\beta^*\alpha^*+\id)^{-1}) \end{aligned} \end{equation} The claims of Proposition~\ref{pp:functors} are now obvious; for instance, $\Psi\jo_*(V,\rho)=(V,(\rho-\id)\id+\id)=(V,\rho)$, as required by $\jo^*\jo_*=Id$. For an arbitrary field $\kk$, this description of regular $\D$-modules fails, because the Riemann-Hilbert correspondence is unavailable. However, the description still holds for $\D$-modules with unipotent monodromies. That is, we consider the decomposition $$\DHol{K}^{reg}=\DHol{K}^{uni}\oplus\DHol{K}^{non-uni},$$ where $M\in\DHol{K}^{uni}$ (resp. $M\in\DHol{K}^{non-uni}$) if and only if all irreducible components of $M$ are constant (resp. non-constant). There is also a corresponding decomposition $$\DHol{A}^{reg}=\DHol{A}^{uni}\oplus\DHol{A}^{non-uni};$$ explicitly, $M\in\DHol{A}^{uni}$ if and only if every irreducible component of $M$ is isomorphic to $A$ or $\delta$. The categories $\DHol{A}^{non-uni}$ and $\DHol{K}^{non-uni}$ are equivalent, and on these categories, Proposition~\ref{pp:functors} is obvious. On the other hand, $\DHol{K}^{uni}$ is equivalent to the category of pairs $(V,\rho)$ with unipotent $\rho$, while $\DHol{A}^{uni}$ is equivalent to the category of collections $(V,V',\alpha,\beta)$ with unipotent $\alpha\beta+\id$. On these categories, we prove Proposition~\ref{pp:functors} by using \eqref{eq:PhiPsi}. \qed \begin{remark*} Proposition~\ref{pp:functors} involves a somewhat arbitrary normalization. Namely, $\Phi$ can be defined as either the left adjoint of $\jo_!$ or the right adjoint of $\jo_*$, so we need a canonical isomorphism between the two adjoints.
Equivalently, one has to construct a canonical commutativity isomorphism \begin{equation} D\Psi(M)\iso\Psi(DM),\quad (M\in\DHol{A}).\label{eq:dpsi} \end{equation} Our proof of Proposition~\ref{pp:functors} amounts to the following normalization of \eqref{eq:dpsi}. For $M\in\DHol{A}^{irreg}\oplus\DHol{A}^{non-uni}$, we have $\Psi(M)=\Phi(M)$, and we use the isomorphism $D\Psi(M)\iso\Psi(DM)$. On the other hand, for $M\in\DHol{A}^{uni}$, the isomorphism is prescribed by \eqref{eq:PhiPsi}. \end{remark*} \subsection{Goresky-MacPherson extension}\label{sc:!*} Define $\jo_{!*}:\DHol{K}\to\DHol{A}$ by $$\jo_{!*}(M)=\im(\jo_!(M)\to\jo_*(M)).$$ Here the functorial morphism $\jo_!\to\jo_*$ is given by the adjunction. \begin{proposition} $\jo_{!*}$ is fully faithful, but not exact. It commutes with the Verdier duality. Also, $\jo^*\jo_{!*}=Id$.\qed \end{proposition} It is easy to relate $\jo_{!*}$ and $\Phi$. \begin{lemma}\label{lm:Phi!*} There is an isomorphism $$\Phi(\jo_{!*}(M))=M/M^{hor},$$ functorial in $M\in\DHol{K}$. Here $M^{hor}$ is the maximal trivial submodule of $M$; in other words, $M^{hor}$ is generated by the horizontal sections of $M$. \qed \end{lemma} \begin{corollary} The isomorphism class of $M\in\DHol{K}$ is uniquely determined by the isomorphism class of $\Phi(\jo_{!*}(M))$ together with $\rk(M)$. \qed \label{co:formaltype} \end{corollary} These statements can be proved by the argument of Section~\ref{sc:proofs}. The counterpart of \eqref{eq:PhiPsi} is \begin{equation} \label{eq:!*} \jo_{!*}(V,\rho)=(V,(\rho-\id)(V),\rho-\id,\id). \end{equation} (Indeed, combining \eqref{eq:!*} with \eqref{eq:PhiPsi} gives $\Phi(\jo_{!*}(V,\rho))=((\rho-\id)(V),\rho)$, and $\rho-\id$ identifies $V/\ker(\rho-\id)$ with $(\rho-\id)(V)$, in agreement with Lemma~\ref{lm:Phi!*}.) \section{$\D$-modules on curves}\label{sc:curves} Fix a smooth curve $X$ over $\kk$ (not necessarily projective). Denote by $\DMod{X}$ the category of quasicoherent left $\D_X$-modules and by $\DHol{X}\subset\DMod{X}$ the full subcategory of holonomic $\D_X$-modules. Recall that $M\in\DMod{X}$ is \emph{holonomic} if it is finitely generated and its rank is finite at every generic point of $X$.
\subsection{Formal nearby and vanishing cycles} \label{sc:curve} Fix a closed point $x\in X$. Recall that $A_x$ and $K_x$ are the ring of Taylor series and the field of Laurent series at $x$, respectively. The map of schemes $j_x:\spec(A_x)\to X$ induces a pair of functors \begin{align*} j_x^*&:\DMod{X}\to\DMod{A_x}\\ j_{x*}&:\DMod{A_x}\to\DMod{X}. \end{align*} \begin{lemma} \begin{enumerate} \item $j_x^*$ and $j_{x*}$ are exact; \item $j_x^*$ is the left adjoint of $j_{x*}$; \item $j_x^*(\DHol{X})\subset\DHol{A_x}$; besides, $j_x^*$ commutes with the Verdier duality. (Of course, $j_{x*}(\DHol{A_x})\not\subset\DHol{X}$.) \end{enumerate} \qed \end{lemma} \begin{corollary} Define $\Psi_x,\Phi_x:\DHol{X}\to\DHol{K_x}$ (the functors of formal nearby and vanishing cycles at $x$) by $\Psi_x=\Psi\circ j_x^*$, $\Phi_x=\Phi\circ j_x^*$. \begin{enumerate} \item $\Psi_x$ and $\Phi_x$ are exact functors that commute with the Verdier duality. \item\label{it:cycles2} $\Psi_x$ and $\Phi_x$ are left adjoints of $j_{x*}\circ\jo_*$ and $j_{x*}\circ\jo_!$, respectively. \end{enumerate} \qed \label{co:cycles} \end{corollary} \begin{remark*} The second claim of the corollary requires some explanation, because the functors $j_{x*}\circ\jo_*$ and $j_{x*}\circ\jo_!$ fail to preserve holonomicity. For instance, for $\Phi$ the claim is that there is a functorial isomorphism $$\Hom_{\D_X}(M,j_{x*}\circ\jo_!(N))=\Hom_{\D_{K_x}}(\Phi_x(M),N),\quad M\in\DHol{X},N\in\DHol{K_x};$$ here all $\D$-modules except for $j_{x*}\circ\jo_!(N)$ are holonomic. (The situation is less confusing for the nearby cycles functor $\Psi$, because one can work with quasi-coherent $\D$-modules throughout.) \end{remark*} Now let us look at a point at infinity. In other words, let $\oX\supset X$ be the smooth compactification of $X$, and let $x\in\oX-X$. We have a natural morphism of schemes $\jo_x:\spec(K_x)\to X$, which induces two functors \begin{align*} \jo_x^*&:\DMod{X}\to\DMod{K_x}\\ \jo_{x*}&:\DMod{K_x}\to\DMod{X}.
\end{align*} We sometimes denote $\jo_x^*$ by $\Psi_x$; it is the left adjoint of $\jo_{x*}$. \subsection{Topology on $\D_A$-modules}\label{sc:topology} Once again, consider $x\in X$. Clearly, the functor $j_{x*}:\DHol{A_x}\to\DMod{X}$ is faithful, but not full. The reason is that the functor forgets the natural topology on $M\in\DHol{A_x}$. Let us make this precise. Recall the definition of the ($z$-adic) topology on $M\in\DHol{A_x}$: \begin{definition*} A subspace $U\subset M$ is open if for any finitely-generated $A_x$-submodule $N\subset M$, there is $k$ such that $U\supset z^k N$. Here $z\in A_x$ is a local coordinate. Open subspaces form a base of neighborhoods of $0\in M$. \end{definition*} We can now view $j_{x*}(M)$ as a topological $\D_X$-module. \begin{lemma} \label{lm:topology} For any $M,N\in\DHol{A_x}$, the map $$\Hom_{\D_{A_x}}(M,N)\to \Hom_{\D_X}(j_{x*}M,j_{x*}N)$$ identifies $\Hom_{\D_{A_x}}(M,N)$ with the subspace of continuous homomorphisms between $j_{x*}M$ and $j_{x*}N$. In other words, the functor $j_{x*}$ is a fully faithful embedding of $\DHol{A_x}$ into the category of topological $\D_X$-modules. \end{lemma} \begin{proof} Clearly, $\Hom_{\D_X}(j_{x*}M,j_{x*}N)$ identifies with the space of homomorphisms $M\to N$ of $\D_{O_x}$-modules. Here $O_x\subset A_x$ is the local ring of $x$, and $\D_{O_x}\subset\D_{A_x}$ is the corresponding ring of differential operators. The lemma follows from the density of $O_x$ in $A_x$ in the $z$-adic topology. \end{proof} Of course, a similar construction can be carried out at infinity. Namely, for $x\in\oX-X$, any module $M\in\DHol{K_x}$ carries a natural topology. This allows us to view $\jo_{x*}(M)$ as a topological $\D_X$-module. The functor $\jo_{x*}$ is a fully faithful embedding of $\DHol{K_x}$ into the category of topological $\D_X$-modules. \subsection{Euler characteristic} Let $M\in\DHol{\oX}$ be a holonomic $\D$-module on a smooth projective curve $\oX$. For simplicity, assume that $\oX$ is irreducible.
Consider the Euler characteristic of $M$ $$\chi_{dR}(M)=\dim H^0_{dR}(\oX,M)-\dim H^1_{dR}(\oX,M)+\dim H^2_{dR}(\oX,M).$$ Here $H_{dR}$ stands for the de Rham cohomology (or, equivalently, the derived direct image for the map $\oX\to\spec(\kk)$). The Euler-Poincar\'e formula due to Deligne expresses $\chi_{dR}(M)$ in local terms as follows: \begin{proposition}\label{pp:EulerPoincare} Let $g$ be the genus of $\oX$. Then \begin{align*} \chi_{dR}(M)&=\rk(M)(2-2g)-\sum_{x\in\oX(\overline\kk)}(\rk\Phi_x(M)+\irreg(\Psi_x(M)))\\ &=\rk(M)(2-2g)-\sum_{x\in\oX}[\kk_x:\kk](\rk\Phi_x(M)+\irreg(\Psi_x(M))). \end{align*} \qed \end{proposition} Here for $N\in\DHol{K_x}$, $\irreg(N)$ is the irregularity of $N$. Note that $$\irreg(\Psi_x(M))=\irreg(\Phi_x(M)).$$ (For example, if $M$ is the middle extension to $\p1$ of the exponential $\D$-module $e^z$ on $\A1$, the only singularity is at infinity, where $\rk\Phi_\infty(M)=\irreg(\Psi_\infty(M))=1$, and the formula gives $\chi_{dR}(M)=1\cdot(2-0)-(1+1)=0$.) \subsection{Formal type and rigidity}\label{sc:formal type} Suppose that $\oX$ is projective, smooth, and irreducible. Let $L$ be a local system (that is, a vector bundle with connection) on a non-empty open subset $U\subset\oX$. \begin{definition} The \emph{formal type} of $L$ is the collection of isomorphism classes $\{[\Psi_x(L)]\}$ of $\Psi_x(L)$ for all closed points $x\in\oX$. \end{definition} If $x\in U$, then $\Psi_x(L)$ is a constant $\D_K$-module, so its isomorphism class is determined by its rank. In other words, the formal type of $L$ can be reconstructed from the collection of isomorphism classes $\{[\Psi_x(L)]\}$ for all $x\in\oX-U$ and $\rk(L)$. Let us study deformations of $L$. Fix an Artinian local ring $R$ whose residue field is $\kk$. Let $L_R$ be an $R$-deformation of $L$. That is, $L_R$ is a local system on $U$ equipped with a flat action of $R$ and an identification $L=\kk\otimes_RL_R$. \begin{definition}[cf. formula~(4.30) in \cite{BE}] The deformation $L_R$ is \emph{isotypical} if for any $x\in\oX$, there is an isomorphism $\Psi_x(L_R)\simeq R\otimes_\kk\Psi_x(L)$ of $R\otimes_\kk\D_{K_x}$-modules. Of course, this condition is automatic for $x\in U$.
\end{definition} Consider now the first-order deformations of $L$, that is, $R=\kk[\epsilon]/(\epsilon^2)$ is the ring of dual numbers. Explicitly, first-order deformations are extensions of $L$ by itself, and therefore the space of first-order deformations of $L$ is $\Ext^1_{\D_U}(L,L)=H^1_{dR}(U,\END(L))$. Here $\END(L)$ stands for the local system of endomorphisms of $L$. \begin{lemma}[\cite{BE}, formula~(4.33)] Let $j_U:U\hookrightarrow\oX$ be the open embedding, and consider the $\D_\oX$-modules $j_{U,!*}(\END(L))\subset j_{U,*}(\END(L))$. The space of isotypical first-order deformations is identified with $$H^1_{dR}(\oX,j_{U,!*}(\END(L)))\subset H^1_{dR}(\oX,j_{U,*}(\END(L)))=H^1_{dR}(U,\END(L)).$$ \label{lm:deformation} \end{lemma} \begin{proof} \cite[Remark~4.1]{BE} yields an exact sequence \begin{multline*} 0\to H^1_{dR}(\oX,j_{U,!*}(\END(L)))\to H^1_{dR}(\oX,j_{U,*}(\END(L)))\to\cr\bigoplus_{x\in\oX-U}H^1_{dR}(K_x,\Psi_x(\END(L)))\to 0. \end{multline*} By definition, $\alpha\in H^1_{dR}(\oX,j_{U,*}(\END(L)))$ is isotypical if and only if its image in $H^1_{dR}(K_x,\Psi_x(\END(L)))$ (which controls deformations of $\Psi_x(L)$) vanishes for all $x$. This implies the statement. \end{proof} \begin{definition} $L$ is \emph{rigid} if any first-order isotypical deformation of $L$ is trivial. The \emph{rigidity index} of $L$ is given by $$\rig(L)=\chi_{dR}(j_{U,!*}(\END(L))).$$ \end{definition} The Euler-Poincar\'e formula shows that $\rig(L)$ depends only on the formal type $\{[\Psi_x(L)]\}$. \begin{remarks*} Clearly, any isotypical deformation of a rigid local system $L$ over any local Artinian base $R$ is trivial. It is well known that $\rig(L)$ is always even, because $\END(L)$ is self-dual. \end{remarks*} \begin{corollary} \begin{enumerate} \item \label{it:def1} $L$ is rigid if and only if $H^1_{dR}(\oX,j_{U,!*}(\END(L)))=0.$ \item \label{it:def2} Assume $L$ is irreducible. Then $\rig(L)\le 2$, and $L$ is rigid if and only if $\rig(L)=2$.
\end{enumerate} \end{corollary} \begin{proof} \eqref{it:def1} follows from Lemma~\ref{lm:deformation}; \eqref{it:def2} follows from \eqref{it:def1} since $$H^0_{dR}(\oX,j_{!*}(\END(L)))=H^2_{dR}(\oX,j_{!*}(\END(L)))=\kk.$$ \end{proof} \begin{remark*} Assume $\kk$ is algebraically closed. Usually, rigidity is defined as follows: a local system $L$ on $U$ is \emph{physically rigid} if every local system $L'$ on $U$ such that $\Psi_x(L)\simeq\Psi_x(L')$ for all $x\in\oX$ satisfies $L\simeq L'$ (\cite{Ka}). In fact, an irreducible $L$ is physically rigid if and only if $\rig(L)=2$ (``physical rigidity and cohomological rigidity are equivalent''). If $L$ has regular singularities, this is \cite[Theorem~1.1.2]{Ka}; for irregular singularities, see \cite[Theorems~4.7 and~4.10]{BE}. If $\kk$ is not algebraically closed, one has to distinguish between `physical rigidity' and `geometric physical rigidity'. More precisely, a geometrically irreducible $L$ satisfies $\rig(L)=2$ if and only if $L\otimes_\kk\kk'$ is physically rigid for any finite extension $\kk\subset\kk'$ (\cite[Theorem~4.10]{BE}). \end{remark*} \section{Fourier transform}\label{sc:fourier} \subsection{Global Fourier transform} In this section, we work with the curve $X=\A1$, and $z$ is the coordinate on $\A1$. Recall that the Weyl algebra $$W=\kk\left\langle z,\frac{d}{dz}\right\rangle$$ is the ring of polynomial differential operators on $\A1$. The category $\DMod{\A1}$ is identified with the category of $W$-modules. $$\Four:\DMod{\A1}\to\DMod{\A1}$$ is the Fourier functor.
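To get oriented, here is the basic example. Let $\delta_0=W/Wz$ be the module of $\delta$-functions at the origin. Since the automorphism $\four$ interchanges $z$ and $\frac{d}{dz}$ up to sign, $$\Four(\delta_0)\simeq W/W\frac{d}{dz}=\cO_{\A1},\qquad\Four(\cO_{\A1})\simeq\delta_0;$$ in particular, $\Four$ need not preserve the rank or the singularities of a $\D$-module.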
The Fourier transform preserves holonomicity: $$\Four(\DHol{\A1})\subset\DHol{\A1}.$$ Besides the description of $\Four$ using an automorphism $\four:W\to W$ (as in Section~\ref{sc:explicit}), we can construct $\Four$ as an integral transform $$\Four(M)=p_{2,*}(p_1^!(M)\otimes\cE),$$ where $$p_i:\A2\to\A1:(z_1,z_2)\mapsto z_i\qquad i=1,2$$ are the projections, $p_{2,*}$ stands for the direct image of $\D$-modules, and $\cE$ is the $\D$-module on $\A2$ with a single generator that we denote $\exp(z_1z_2)$ and defining relations $$\left(\frac{\partial}{\partial z_1}-z_2\right)\exp(z_1z_2)=\left(\frac{\partial}{\partial z_2}-z_1\right)\exp(z_1z_2)=0.$$ \begin{remark*} The algebra of (global) differential operators on $\A2$ equals the tensor product $W\otimes_\kk W$. The global sections of $\cE$ form a module over this algebra. The module is identified with $W$, on which $W\otimes_\kk W$ acts by $$(D_1\otimes D_2)\cdot D=D_1\cdot D\cdot \four(D_2)^*.$$ Here $D_2^*$ is the formal adjoint of $D_2$ given by $$\left(\sum a_i(z)\frac{d^i}{dz^i}\right)^*=\sum\left(-\frac{d}{dz}\right)^i a_i(z).$$ In other words, $D\mapsto D^*$ is the anti-involution of $W$ relating the left and right $\D$-modules. \end{remark*} \subsection{Rank of the Fourier transform} Fix $M\in\DHol{\A1}$. Consider $\Psi_\infty(M)\in\DHol{K_\infty}$. Let us decompose $$\Psi_\infty(M)=\Psi_\infty(M)^{>1}\oplus\Psi_\infty(M)^{\le1},$$ where all slopes of the first (resp. second) summand are greater than one (resp. do not exceed one). \begin{proposition}[{\cite[Proposition V.1.5]{Mal}}]\label{pp:rankfourier} \begin{multline*} \rk(\Four(M))=\irreg(\Psi_\infty(M)^{>1})-\rk(\Psi_\infty(M)^{>1})+\cr\sum_{x\in\A1(\overline\kk)}(\rk(\Phi_x(M))+\irreg(\Psi_x(M))); \end{multline*} equivalently, \begin{multline*} \rk(\Four(M))=\irreg(\Psi_\infty(M)^{>1})-\rk(\Psi_\infty(M)^{>1})+\cr\sum_{x\in\A1}[\kk_x:\kk](\rk(\Phi_x(M))+\irreg(\Psi_x(M))).
\end{multline*} \end{proposition} \begin{proof} Using the description of $\Four$ as an integral transform, we see that the fiber of $\Four(M)$ at $x\in\A1(\overline\kk)$ equals $H^1_{dR}(\A1\otimes\overline\kk,M\otimes\ell)$, where $\ell$ is a rank one local system on $\A1\otimes\overline\kk$ that has a second order pole at infinity with the leading term given by $x$. For generic $x$, $H^0_{dR}(\A1\otimes\overline\kk,M\otimes\ell)=H^2_{dR}(\A1\otimes\overline\kk,M\otimes\ell)=0$, so we have $\rk(\Four(M))=-\chi_{dR}(\A1\otimes\overline\kk,M\otimes\ell)$. The proposition now follows from the Euler-Poincar\'e formula (Proposition~\ref{pp:EulerPoincare}). \end{proof} \subsection{Proof of Theorem~\ref{th:localfourier}}\label{sc:localfourierproof} As pointed out in Remark~\ref{rm:BE}, in many cases Theorem~\ref{th:localfourier} follows from the results of \cite{BE}. Our exposition is independent of \cite{BE}. As we saw in Section~\ref{sc:explicit}, Theorem~\ref{th:localfourier} reduces to relatively simple statements about differential operators over formal power series. Let us make the relevant properties of differential operators explicit. Recall the definition of Tate vector spaces over $\kk$, which we copy from \cite{Dr}. \begin{definition} Let $V$ be a topological vector space over $\kk$, where $\kk$ is equipped with the discrete topology. $V$ is \emph{linearly compact} if it is complete, Hausdorff, and has a base of neighborhoods of zero consisting of subspaces of finite codimension. Equivalently, a linearly compact space is the topological dual of a discrete space. $V$ is a \emph{Tate space} if it has a linearly compact open subspace. \end{definition} Consider now $A$-modules for $A\simeq\kk[[z]]$.
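For example, the field $\kk((z))$ with its $z$-adic topology is a Tate space: the subspace $\kk[[z]]$ is open and linearly compact (being the topological dual of the discrete space $\kk[z^{-1}]$), while $\kk((z))$ itself is neither linearly compact nor discrete.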
\begin{definition} \label{df:tatemodule} An $A$-module $M$ is \emph{of Tate type} if there is a finitely generated submodule $M'\subset M$ such that $M/M'$ is a torsion module that is `cofinitely generated' in the sense that $$\dim_\kk\Ann_z(M/M')<\infty,\qquad\text{where }\Ann_z(M/M')=\{m\in M/M'\st zm=0\}.$$ \end{definition} \begin{lemma} \begin{enumerate} \item\label{it:module1} Any finitely generated $A$-module $M$ is linearly compact in the $z$-adic topology. \item\label{it:module2} Any $A$-module $M$ of Tate type is a Tate vector space in the $z$-adic topology. \end{enumerate} \end{lemma} \begin{proof} \eqref{it:module1} follows from the Nakayama Lemma. \eqref{it:module2}: the submodule $M'$ of Definition~\ref{df:tatemodule} is linearly compact and open. \end{proof} \begin{remark*} The condition that $M$ is of Tate type is not necessary for $M$ to be a Tate vector space; for example, it suffices to require that $M$ has a finitely generated submodule $M'$ such that $M/M'$ is a torsion module. \end{remark*} \begin{proposition} \label{pp:tate} Let $V$ be a Tate space. Suppose an operator $\opz:V\to V$ satisfies the following conditions: \begin{enumerate} \item\label{it:Tate1} $\opz$ is continuous, open, and (linearly) compact. In other words, if $V'\subset V$ is an open linearly compact subspace, then so are $\opz(V')$ and $\opz^{-1}(V')$. \item\label{it:Tate3} $\opz$ is contracting. In other words, $\opz^n\to0$ in the sense that for any linearly compact subspace $V'\subset V$ and any open subspace $U\subset V$, we have $\opz^n(V')\subset U$ for $n\gg0$. \end{enumerate} Then there exists a unique structure of an $A$-module of Tate type on $V$ such that $z\in A$ acts as $\opz$ and the topology on $V$ coincides with the $z$-adic topology. This induces an equivalence between the category of $A$-modules of Tate type and the category of pairs $(V,\opz)$, where $V$ is a Tate space and $\opz$ is an operator satisfying \eqref{it:Tate1} and \eqref{it:Tate3}.
\end{proposition} \begin{proof} The proof is quite straightforward. The action of $A$ on $V$ is naturally defined as $$\left(\sum{c_i}z^i\right)v=\sum c_i\opz^i v,$$ where the right-hand side converges by \eqref{it:Tate3}. Let $V'\subset V$ be a linearly compact open subspace. By \eqref{it:Tate3}, the infinite sum $$M'=\sum_i \opz^i V'$$ stabilizes after finitely many summands, so by \eqref{it:Tate1}, $M'\subset V$ is $\opz$-invariant, open, and linearly compact. Clearly, the subspaces $\opz^iM'$ form a basis of neighborhoods of zero. The Nakayama Lemma now implies that $M'$ is a finitely generated $A$-module. Finally, $V/M'$ is a torsion $A$-module (by \eqref{it:Tate3}) which is cofinitely generated (by \eqref{it:Tate1}). Therefore, $V$ is of Tate type. \end{proof} We use the following terminology. For a Tate space $V$, an operator $\opz:V\to V$ is \emph{nicely contracting} if $\opz$ satisfies the hypotheses of Proposition~\ref{pp:tate}; an operator $\opz$ is \emph{nicely expanding} if it is invertible and $\opz^{-1}$ is nicely contracting. We apply Proposition~\ref{pp:tate} in the following situation: $M\in\DHol{A}$ (with the $z$-adic topology), and $\opz:M\to M$ is a differential operator $\opz\in\D_A$. We determine whether $\opz$ is nicely contracting (or nicely expanding) using the description of bundles with connections on the formal disk (see \cite{Mal}). \begin{examples} \label{ex:contracting} Suppose $M\in\DHol{K}$, where $K=\kk((z))$ is the fraction field of $A$. Fix an integer $\alpha>0$. Then $z^\alpha\partial_z$ is nicely contracting on $\jo_*M$ if and only if the slopes of all components of $M$ are less than $\alpha-1$. In other words, the condition is $M\in\DHol{K}^{<\alpha-1}$. Now consider $\jo_!M\in\DHol{A}$. Then $z^\alpha\partial_z$ is nicely expanding on $\jo_!M$ if and only if the slopes of all components of $M$ are greater than $\alpha-1$. In other words, the condition is $M\in\DHol{K}^{>\alpha-1}$.
Here we work with $\jo_!M$ to guarantee that the operator is invertible. Finally, consider on $M$ the operator $p(z^2\partial_z)$, where $p(z)\in\kk[z]$ is the minimal polynomial of $x\in\A1$. Then $p(z^2\partial_z)$ is nicely contracting on $M$ if and only if $M\in\DHol{K}^{\le 1,(x)}$. \end{examples} \begin{proposition} \begin{enumerate} \item The functor $$M\mapsto j_{0*}\jo_!(M),\quad M\in\DHol{K_0}$$ is an equivalence between $\DHol{K_0}$ and the category of $W$-modules $V$ equipped with a structure of a Tate space such that $d/dz\in W$ is nicely expanding and $z\in W$ is nicely contracting on $V$. \item\label{cat:1} More generally, let $p(z)\in\kk[z]$ be the minimal polynomial of $x\in\A1$. Then $$M\mapsto j_{x*}\jo_!(M),\quad M\in\DHol{K_x}$$ is an equivalence between $\DHol{K_x}$ and the category of $W$-modules $V$ equipped with a structure of a Tate space such that $d/dz$ is nicely expanding and $p(z)$ is nicely contracting on $V$. \item\label{cat:2} Again, let $p(z)$ be the minimal polynomial of $x\in\A1$. Then $\jo_{\infty*}$ is an equivalence between $$\DHol{K_\infty}^{\le 1,(x)}\subset\DHol{K_\infty}$$ and the category of $W$-modules $V$ equipped with a structure of a Tate space such that $z$ is nicely expanding and $p(d/dz)$ is nicely contracting on $V$. \item\label{cat:3} Finally, $\jo_{\infty*}$ is an equivalence between $$\DHol{K_\infty}^{>1}\subset\DHol{K_\infty}$$ and the category of $W$-modules $V$ such that $z$ and $d/dz$ are nicely expanding on $V$. \end{enumerate} \end{proposition} \begin{proof} This follows from Examples~\ref{ex:contracting}.\end{proof} Clearly, the Fourier transform interchanges the categories \eqref{cat:1} and \eqref{cat:2}, and sends the category \eqref{cat:3} to itself. This completes the proof of Theorem~\ref{th:localfourier}.
\qed \subsection{Example: local Fourier transform of the Kummer local system} \label{sc:Kummer} Let $\cK_0^\alpha\subset\DHol{K_0}$ be the Kummer local system at $0$ with residue $\alpha$, as in Theorem~\ref{th:computeradon}. Recall that $\cK_0^\alpha$ is $\kk((z))$ equipped with the derivation $$\partial_z=\frac{d}{dz}+\frac{\alpha}{z}.$$ One can view the generator $1\in\cK_0^\alpha$ as $z^\alpha$; then the derivation is the usual derivative. Up to isomorphism, $\cK_0^\alpha$ depends on $\alpha$ only modulo $\Z$. Let us compute $\Four(0,\infty)\cK_0^\alpha\in\DHol{K_\infty}$ following the recipe of Section~\ref{sc:explicit}. Assume first $\alpha\not\in\Z$. Then $\jo_*\cK_0^\alpha=\jo_!\cK_0^\alpha=\kk((z))$. By \eqref{eq:localfourier}, we see that $$\zeta^k\cdot 1=\frac{\Gamma(-\alpha-k)}{\Gamma(-\alpha)}z^k,$$ so $\kk((z))=\kk((\zeta))\cdot 1$. The derivation $\partial_\zeta$ on $\kk((\zeta))\cdot 1$ is determined by $$\partial_\zeta(1)=-\alpha(\alpha+1)\frac{1}{z}=(\alpha+1)\zeta^{-1}\cdot 1.$$ That is, the resulting $\D_{K_\infty}$-module is $\cK_\infty^{\alpha+1}$. Suppose now $\alpha\in\Z$. Without loss of generality, we may assume that $\alpha=0$. Then $$\jo_!\cK_0^0=\kk[[z]]\oplus\kk[\partial_z]\partial_z(1),\quad z\partial_z(1)=0.$$ One can view $1\in\jo_!\cK_0^0$ as the Heaviside step function; $\partial_z(1)$ is the delta function. Then $$\zeta^k\cdot 1=\begin{cases}\frac{(-1)^k}{k!}z^k,\quad k\ge0\\ (-1)^k\partial_z^{-k}(1),\quad k<0.\end{cases}$$ Again, $\jo_!\cK^0_0=\kk((\zeta))\cdot 1$. The derivation $\partial_\zeta$ satisfies $$\partial_\zeta(1)=-\partial_z(1)=\zeta^{-1}\cdot 1,$$ so as a $\D_{K_\infty}$-module, we get $\cK_\infty^1$. To summarize, \begin{equation} \Four(0,\infty)\cK_0^\alpha\simeq\cK_\infty^{\alpha+1}\text{ for all }\alpha\in\kk. \label{eq:kummer} \end{equation} \subsection{Fourier transform and formal type}\label{sc:fourier and type} Let $L$ be a local system on an open subset $U\subset\A1$, and consider $M=j_{!*}L\in\DHol{\A1}$.
Here $j=j_U:U\hookrightarrow\A1$. As in Section \ref{sc:formal type}, the formal type of $L$ is the collection of isomorphism classes $\{[\Psi_x(L)]\}_{x\in\p1}$. By Corollary~\ref{co:formaltype}, we can instead use the collection $$\left(\{[\Phi_x(M)]\}_{x\in\A1},\Psi_\infty(M)\right).$$ Suppose now that $\Four(M)$ is also a Goresky-MacPherson extension $\Four(M)=\hat\jmath_{!*}\hat L$ for a local system $\hat L$ on an open subset $\hat U\subset\A1$ (here $\hat\jmath:\hat U\hookrightarrow\A1$). Then \eqref{eq:ainfty}, \eqref{eq:infty} determine the formal type of $\hat L$ given the formal type of $L$. This allows us to relate isotypical deformations of $L$ to those of $\hat L$. \begin{corollary} Let $L$ and $\hat L$ be as above. \begin{enumerate} \item \label{it:iso1} For any Artinian local ring $R$ and any isotypical $R$-deformation $L_R$ of $L$, $$\hat L_R=\Four(j_{!*}L_R)|_{\hat U}$$ is an isotypical $R$-deformation of $\hat L$; \item \label{it:iso2} This yields a one-to-one correspondence between isotypical deformations of $L$ and of $\hat L$; \item \label{it:iso3} $L$ is rigid if and only if $\hat L$ is rigid. \end{enumerate} \label{co:isotypicalfourier} \end{corollary} \begin{proof} \eqref{it:iso1} Set $\hat M=\Four(j_{!*}L)$, $\hat M_R=\Four(j_{!*}L_R)$. By assumption, $$\Psi_x(L_R)\simeq\Psi_x(L)\otimes_\kk R\quad (x\in\p1),$$ so Lemma~\ref{lm:Phi!*} implies that $$\Phi_x(j_{!*}L_R)\simeq\Phi_x(j_{!*}L)\otimes_\kk R\quad (x\in\A1).$$ Therefore, $$\Psi_\infty(\hat M_R)\simeq\Psi_\infty(\hat M)\otimes_\kk R,\qquad\Phi_x(\hat M_R)\simeq\Phi_x(\hat M)\otimes_\kk R\quad (x\in\A1)$$ by \eqref{eq:ainfty}, \eqref{eq:infty}. Now note that $j_{!*}L_R$ is an $R$-deformation of $j_{!*}L\in\DHol{\A1}$; that is, $j_{!*}L_R$ is $R$-flat and $j_{!*}L=\kk\otimes_R(j_{!*}L_R)$. Therefore, $\hat M_R$ is a flat deformation of $\hat M$. Finally, $\Psi_x(\hat M_R)$ is a flat deformation of $\Psi_x(\hat M)$ for all $x\in\p1$. 
Now it is easy to see that $$\Psi_x(\hat M_R)\simeq\Psi_x(\hat M)\otimes_\kk R\quad(x\in\p1).$$ Note that the statement is local in the sense that it concerns only the image of $\hat M$ in $\DHol{A_x}$. One can then use the argument of Section~\ref{sc:proofs}: first reduce to the case of unipotent monodromy, and then apply \eqref{eq:!*}. \eqref{it:iso2} It suffices to check that $\hat M_R=\hat\jmath_{!*}(\hat M_R|_{\hat U})$. Again, the claim is essentially local and can be proved using \eqref{eq:!*}. \eqref{it:iso3} Follows from \eqref{it:iso2} applied to first-order deformations. \end{proof} \begin{remark*} Corollary~\ref{co:isotypicalfourier} remains true for isotypical families parametrized by arbitrary schemes. In other words, the Fourier transform gives an isomorphism between the moduli spaces of connections of the corresponding formal types. However, we do not consider families of connections parametrized by schemes in this paper. \end{remark*} \begin{corollary}[{\cite[Theorem 4.3]{BE}, compare \cite[Theorem 3.0.3]{Ka}}] For $L$ and $\hat L$ as above, $$\rig(L)=\rig(\hat L).$$ \label{co:rigidityfourier} \end{corollary} \begin{proof} It suffices to establish a natural isomorphism $$H^i_{dR}(\p1,\overline j_{!*}(\END(L)))\iso H^i_{dR}(\p1,\overline{\hat\jmath}_{!*}(\END(\hat L))),\quad i=0,1,2.$$ Here $\overline j:U\hookrightarrow\p1$ and $\overline{\hat\jmath}:\hat U\hookrightarrow\p1$ are the natural embeddings. For $i=1$, the isomorphism is given by Corollary~\ref{co:isotypicalfourier}\eqref{it:iso2}. For $i=0$, we have $$H^0_{dR}(\p1,\overline j_{!*}(\END(L)))=\End(L)=\End(j_{!*}(L)),$$ and the isomorphism is given by the Fourier functor.
For $i=2$, we use the Verdier duality $$H^2_{dR}(\p1,\overline j_{!*}(\END(L)))=(H^0_{dR}(\p1,\overline j_{!*}(\END(L))))^\vee.$$ \end{proof} \section{Katz-Radon transform}\label{sc:radon} \subsection{Twisted $\D$-modules on $\p1$} \label{sc:TDOmodules} Denote by $\D_1$ the sheaf of rings of twisted differential operators (TDOs) on $\p1$ acting on $\cO(1)$ (see \cite{BB2} for the definition of TDO rings). The TDO rings form a Picard category over $\kk$, so we can scale $\D_1$ by any $\lambda\in\kk$. Denote the resulting TDO ring by $\D_\lambda$. Informally, $\D_\lambda$ is the ring acting on $\cO(1)^{\otimes\lambda}$. Here is an explicit description of $\D_\lambda$. Let us write $\p1=\A1\cup\{\infty\}$. Then $(\D_\lambda)|_\A1$ is identified with $\D_\A1$, while in a neighborhood of $\infty$, $\D_\lambda$ is generated by functions and the vector field $$\frac{\partial}{\partial \zeta}+\frac{\lambda}{\zeta}.$$ As before, $\zeta$ is the coordinate at $\infty$. Denote by $\DMod{\lambda}$ the category of quasicoherent $\D_\lambda$-modules, and by $\DHol{\lambda}$ the full subcategory of holonomic modules. \begin{remark}\label{rm:complex} If $\kk=\C$, we can approach $\D_\lambda$-modules analytically. We view quasi-coherent sheaves on $\p1$ as sheaves of modules over $C^\infty$-functions on $\p1$ equipped with `connections in the anti-holomorphic direction'. In this way, $\D_\p1$-modules can be thought of as $C^\infty({\mathbb P}_\C^1)$-modules equipped with a flat connection. Consider $\lambda\cdot c_1(\cO(1))\in H^2(\p1,\C)$. Let us represent it by a $C^\infty$-differential form $\omega$. We can then view $\D_\lambda$-modules as $C^\infty({\mathbb P}_\C^1)$-modules equipped with a connection whose curvature equals $\omega$. This can also be used to describe the TDO ring $\D_\lambda$ (as holomorphic differential operators acting on such modules).
From this point of view, the explicit description of $\D_\lambda$ presented above corresponds to taking $\omega$ equal to a multiple of the $\delta$-function at $\infty$. \end{remark} From now on, assume $\lambda\not\in\Z$. We then have the following equivalent descriptions of $\DMod{\lambda}$. Let $$W_2=\kk\left\langle z_1,z_2,\frac{\partial}{\partial z_1},\frac{\partial}{\partial z_2}\right\rangle$$ be the algebra of differential operators on $\A2$. Define a grading on $W_2$ by $$\deg(z_1)=\deg(z_2)=1; \quad\deg\left(\frac{\partial}{\partial z_1}\right)=\deg\left(\frac{\partial}{\partial z_2}\right)=-1.$$ This grading corresponds to the natural action of $\gm$ on $\A2$. We denote by $\Mod(W_2)_\lambda$ the category of graded $W_2$-modules $M=\bigoplus_{i\in\Z} M^{(i)}$ such that the Euler vector field $$z_1\pd1+z_2\pd2$$ acts on $M^{(i)}$ as $\lambda+i$. \begin{remark}\label{rm:twistedmodulesgeometrically} Geometrically, $M$ is a $\D$-module on $\A2$. The grading defines an action of $\gm$ on $M$, so $M$ is weakly $\gm$-equivariant. The restriction on the action of the Euler vector field is a twisted version of strong equivariance (untwisted strong equivariance corresponds to $\lambda=0$). Informally, strong equivariance requires that the restriction of $M$ to $\gm$-orbits is constant, while in the twisted version, the restriction is a local system with regular singularities and scalar monodromy $\exp(2\pi\sqrt{-1}\lambda)$. In other words, we work with monodromic $\D$-modules on $\A2$. \end{remark} \begin{proposition} The categories $\DMod{\lambda}$ and $\Mod(W_2)_\lambda$ are naturally equivalent. The equivalence is given by $$M\mapsto\bigoplus_i H^0(\p1,M\otimes_\cO\cO(i)).$$ \label{pp:localization} \end{proposition} \begin{proof} Let us use the geometric description of $\Mod(W_2)_\lambda$ presented in Remark~\ref{rm:twistedmodulesgeometrically}.
It follows from the definition that $\DMod{\lambda}$ can be identified with the category of twisted strongly equivariant $\D$-modules on $\A2-\{0\}$. Therefore, it suffices to show that the categories of twisted strongly equivariant $\D$-modules on $\A2-\{0\}$ and on $\A2$ are equivalent. This is true because there are no non-trivial twisted strongly equivariant $\D$-modules supported by $\{0\}$, as $\lambda\not\in\Z$. \end{proof} Note that the degree zero component $H^0(\p1,M)$ is naturally a module over the quotient $$\left\{D\in W_2\st D\text{ is homogeneous, }\deg(D)=0\right\}\left/\left(z_1\pd1+z_2\pd2-\lambda\right)\right..$$ It is easy to see that this quotient equals $H^0(\p1,\D_\lambda)$. \begin{remark*} $H^0(\p1,\D_\lambda)$ is generated by the operators $$z_i\pd{j}\quad(i,j=1,2).$$ The generators satisfy the commutator relations of $\gl_2$. This allows us to identify $H^0(\p1,\D_\lambda)$ with the quotient of the universal enveloping algebra $U(\gl_2)$ corresponding to a central character of $U(\gl_2)$. \end{remark*} Proposition~\ref{pp:localization} implies the following localization result (as in \cite{BB1}). \begin{corollary} The correspondence \begin{equation} \label{eq:H0} M\mapsto H^0(\p1,M),\quad M\in\DMod{\lambda} \end{equation} is an equivalence between the category $\DMod{\lambda}$ and the category $\Mod(H^0(\p1,\D_\lambda))$ of $H^0(\p1,\D_\lambda)$-modules. In other words, $\p1$ is $\D_\lambda$-affine. \label{co:localization} \end{corollary} \begin{proof} We need to show that \eqref{eq:H0} is exact and that $H^0(\p1,M)=0$ implies $M=0$. Both claims follow from Proposition~\ref{pp:localization}. \end{proof} \subsection{Formal type for twisted $\D$-modules} In a neighborhood of any $x\in\p1$, we can identify the sheaf $\D_\lambda$ with the untwisted sheaf $\D_\p1$. More precisely, consider the restriction $A_x\otimes\D_\lambda$ of $\D_\lambda$ to the formal disk centered at $x$.
\begin{lemma} There is an isomorphism $$A_x\otimes\D_\lambda\iso\D_{A_x}$$ that acts tautologically on the functions $A_x\subset A_x\otimes\D_\lambda$. The isomorphism is unique up to conjugation by an invertible function. \qed \label{lm:twisted formal type} \end{lemma} If we choose an isomorphism as in Lemma~\ref{lm:twisted formal type}, the functors of Section~\ref{sc:curve} can be defined for twisted $\D$-modules. By Lemma~\ref{lm:twisted formal type}, different choices lead to isomorphic functors. \begin{example} Any $M\in\DHol{\A1}$ can be viewed as a $\D_\lambda|_{\A1}$-module using the identification between $\D_\lambda|_{\A1}$ and $\D_\A1$. Therefore, besides the `untwisted' extension $\oM=j_*(M)\in\DHol{\p1}$, we have a twisted version $\oM_\lambda\in\DHol{\lambda}$. Then $\oM_\lambda=\oM\otimes c_\lambda$, where $c_\lambda$ is a rank one $\D_\lambda$-module with a regular singularity at $\infty$ and no other singularities. It is clear from this description that $$\Psi_x(\oM_\lambda)\simeq\Psi_x(\oM)\quad x\in\A1,$$ while $\Psi_\infty(\oM_\lambda)$ is shifted: $$\Psi_\infty(\oM_\lambda)\simeq\Psi_\infty(\oM)\otimes\Psi_\infty(c_\lambda).$$ Note that $\Psi_\infty(c_\lambda)\simeq\cK_\infty^\lambda$. Suppose now that $\kk=\C$, and assume that $M$ has regular singularities. Then instead of $\Psi_x(M)$, we can consider the monodromies $\rho_1,\dots,\rho_n\in GL(M_x)$ around the singularities of $M$ (this involves fixing a base point $x\in\p1$ and loops around the singularities). The monodromies satisfy the relation $$\rho_1\cdots\rho_n=\id.$$ If we now consider $M$ as a $\D_\lambda$-module, we can still define the monodromies $\rho_i$, but the relation is twisted: $$\rho_1\cdots\rho_n=\exp(2\pi\sqrt{-1}\lambda)\id.$$ Of course, this is consistent with Remark~\ref{rm:complex}.
\label{ex:twist} \end{example} \subsection{Katz-Radon transform}\label{ssc:radon} The Katz-Radon transform is an equivalence \begin{equation} \label{eq:rad} \Rad:\DMod{\lambda}\to\DMod{-\lambda} \end{equation} that preserves holonomicity. We give several equivalent definitions below, but essentially there are two approaches. If one works with twisted $\D$-modules, one can define the Radon transform on $\p{n}$ for any $n$ (which is constructed in \cite{AE}); the first three definitions make sense in this context. The remaining two definitions restrict $\D$-modules to $\A1\subset\p1$. The twist is then eliminated, and the integral transform becomes Katz's middle convolution (which is introduced in \cite{Ka}). This approach is specific to $n=1$. Note that up to equivalence, $\DMod{\lambda}$ depends only on the image of $\lambda$ in $\kk/\Z$. {\it Two-dimensional Fourier transform:} Let us use the equivalence of Proposition~\ref{pp:localization}. The Fourier transform gives an automorphism $\four:W_2\to W_2$ that reverses the grading and acts on the Euler field as $$\four\left(z_1\pd1+z_2\pd2\right)=-2-z_1\pd1-z_2\pd2.$$ It induces a functor on graded $W_2$-modules: $$\Four:\Mod(W_2)_\lambda\to\Mod(W_2)_{-2-\lambda}=\Mod(W_2)_{-\lambda}.$$ This yields \eqref{eq:rad}. {\it Involution of global sections:} It is easy to reformulate the above definition using the equivalence of Corollary~\ref{co:localization}. We see that $\Rad$ is induced by the isomorphism $$\rad:H^0(\p1,\D_\lambda)\to H^0(\p1,\D_{-2-\lambda}):z_i\pd{j}\mapsto-\pd{i}z_j\quad(i,j=1,2).$$ {\it Integral transform:} The Fourier transform for $\D_\A2$-modules can be viewed as an integral transform. In the case of twisted strongly $\gm$-equivariant $\D_\A2$-modules, this yields the following description of the Katz-Radon transform.
Consider on $\p1\times\p1$ the TDO ring $$\D_{(-\lambda,-\lambda)}=p_1^\cdot\D_{-\lambda}\odot p_2^\cdot\D_{-\lambda}.$$ Here $p_{1,2}:\p1\times\p1\to\p1$ are the natural projections, and $p_1^\cdot$ (resp. $\odot$) stands for the pullback (resp. the Baer sum) of TDO rings. Let $\cK$ be a rank one $\D_{(-\lambda,-\lambda)}$-module with regular singularities along the diagonal ($\cK$ is smooth away from the diagonal). Actually, $\D_{(-\lambda,-\lambda)}$ is naturally isomorphic to $\D_{\p1\times\p1}$ away from the diagonal; this allows us to define $\cK$ canonically. Then \begin{equation} \Rad(M)=p_{2,*}(p_1^!(M)\otimes\cK). \label{eq:integralradon} \end{equation} {\it Middle convolution:} Suppose $M\in\DHol{\A1}$. Let us describe $\Rad(j_{!*}(M))|_{\A1}\in\DHol{\A1}$, where $j:\A1\hookrightarrow\p1$, and the Goresky-MacPherson extension $j_{!*}(M)$ is taken in the sense of $\D_\lambda$-modules. Here we use the identification $\D_\lambda|_{\A1}\simeq\D_\A1$. \begin{remark*} Note that choosing $\infty\in\p1$ breaks the symmetry. On the other hand, a holonomic $\D_\lambda$-module can be obtained as a Goresky-MacPherson extension from $\p1-\{\infty\}$ for almost all choices of $\infty\in\p1$, so this freedom of choice allows us to determine the Katz-Radon transform of any holonomic $\D_\lambda$-module. \end{remark*} Let $\cK^\lambda\in\DHol{\A1}$ be the Kummer $\D$-module: it is a rank one sheaf whose only singularities are first-order poles at $0$ and $\infty$ with residues $\lambda$ and $-\lambda$, respectively. Consider $m:\A2\to\A1:(x,y)\mapsto y-x$ and let $j_2:\A2\hookrightarrow\p1\times\A1$ be the open embedding.
Then $\cK|_\A2=m^!(\cK^\lambda)$; moreover, $$(p_1^!(M)\otimes \cK)|_{\p1\times\A1}=j_{2,!*}(p_1^!(M)\otimes m^!(\cK^{\lambda})),$$ and \eqref{eq:integralradon} gives \begin{equation} \Rad(j_{!*}M)|_{\A1}=p_{2,*}(j_{2,!*}(p_1^!(M)\otimes m^!(\cK^\lambda))).\label{eq:middleconvolution} \end{equation} The right-hand side of \eqref{eq:middleconvolution} is called \emph{the additive middle convolution} $M\star_{mid}\cK^\lambda$ of $M$ and $\cK^\lambda$. See \cite[Section 2.6]{Ka} for the notion of middle convolution on an arbitrary group; \cite[Proposition 2.8.4]{Ka} shows that in the case of the additive group, the middle convolution can be defined by \eqref{eq:middleconvolution}. \begin{remark} \label{rm:convoluters} Suppose $M\in\DHol{U}$ for an open subset $U\subset\A1$. To extend $M$ to a $\D_\lambda$-module, we use an isomorphism $(\D_\lambda)|_U\simeq\D_U$. Generally speaking, there are many choices of such an isomorphism. We are using the restriction of the isomorphism $(\D_\lambda)|_\A1\simeq\D_\A1$ from Section~\ref{sc:TDOmodules}; however, it depends on the choice of $\infty\in\p1-U$. In other words, there are several natural extension functors from the category of $\D_U$-modules to the category of $(\D_\lambda)|_U$-modules. To $M\in\DHol{U}$, we associate the $(\D_\lambda)|_U$-module $M\otimes c_\lambda$, where $c_\lambda$ is the rank one $\D_\lambda$-module from Example~\ref{ex:twist}. However, we could use any rank one $\D_\lambda$-module $c$ that is smooth on $U$. In the description of the Katz-Radon transform via the middle convolution, different choices of the module $c$ correspond to the `convoluters' of \cite{Si}. \end{remark} \begin{remark} \label{rm:*convolution} Similarly, $\Rad(j_*M)|_\A1$ for $M\in\DMod{\A1}$ can be described using the ordinary convolution (rather than the middle convolution). Namely, $$ \Rad(j_*(M))|_\A1=M\star\cK^\lambda=p_{2,*}(p_1^!(M)\otimes m^!(\cK^\lambda)).
$$ As usual, the convolution can be rewritten using the Fourier transform: \begin{equation} \Rad(j_*(M))|_\A1=\Four^{-1}(\Four(M)\otimes\Four(\cK^\lambda)),\label{eq:fourierradon*} \end{equation} where $\Four^{-1}$ stands for the inverse Fourier transform. Note that $\Four(\cK^\lambda)\simeq\cK^{-\lambda}$. \end{remark} {\it One-dimensional Fourier transform:} Finally, one can rewrite the middle convolution using the Fourier transform, as in \cite[Section 2.10]{Ka}. \begin{lemma}[{cf. \cite[Proposition~2.10.5]{Ka}}] For $M\in\DHol{\A1}$, there is a natural isomorphism $$\Four(\Rad(j_{!*}M)|_\A1)=j_{U,!*}(\Four(M)|_U\otimes\cK^{-\lambda}|_U),$$ where $U=\A1-\{0\}$ and $j_U:U\hookrightarrow\A1$. \label{lm:fourierradon} \end{lemma} \begin{proof} By definition, $j_{!*}M$ is the smallest submodule of $j_*M$ such that the quotient is a direct sum of copies of $\delta_\infty$ (the $\D$-module of $\delta$-functions at infinity). Therefore, $\Rad(j_{!*}M)$ is the smallest submodule of $\Rad(j_*M)$ such that the quotient is a direct sum of copies of $\Rad(\delta_\infty)$. This implies that $\Rad(j_{!*}M)|_\A1$ is the smallest submodule of $\Rad(j_*M)|_\A1$ such that the quotient is a constant $\D$-module (because $\Rad(\delta_\infty)|_\A1$ is constant). Now it suffices to use \eqref{eq:fourierradon*}. \end{proof} \subsection{Properties of Katz-Radon transform} Let us prove the properties of the Katz-Radon transform similar to the properties of the Fourier transform established in Section~\ref{sc:fourier}. 
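The following special case may help fix ideas. Suppose $\kk=\C$ and $M\in\DHol{\lambda}$ is the Goresky-MacPherson extension of a rank one local system with regular singularities and non-trivial local monodromies at two points of $\p1$, as in Example~\ref{ex:twist}. Then $\rk(\Phi_x(M))=1$ at the two singular points, $\Phi_x(M)=0$ elsewhere, and all irregularities vanish, so Proposition~\ref{pp:rankradon} below gives $$\rk(\Rad(M))=(1+0)+(1+0)-1=1,$$ so such $M$ remains of rank one under the Katz-Radon transform.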
\begin{proposition}\label{pp:rankradon} For $M\in\DHol{\lambda}$, \begin{align*} \rk(\Rad(M))&=\sum_{x\in\p1(\overline\kk)}(\rk(\Phi_x(M))+\irreg(\Psi_x(M)))-\rk(M)\\ &=\sum_{x\in\p1}[\kk_x:\kk](\rk(\Phi_x(M))+\irreg(\Psi_x(M)))-\rk(M). \end{align*} \end{proposition} \begin{proof} Using the description of $\Rad$ as an integral transform, we see that the fiber of $\Rad(M)$ at $x\in\p1(\overline\kk)$ equals $H^1_{dR}(\p1\otimes\overline\kk,M\otimes\ell)$, where the rank one local system $\ell\in\Hol{\D_{-\lambda}\otimes\overline\kk}$ is smooth on $\p1\otimes\overline\kk-\{x\}$ and has a first-order pole at $x$. For generic $x$, $H^0_{dR}(\p1\otimes\overline\kk,M\otimes\ell)=H^2_{dR}(\p1\otimes\overline\kk,M\otimes\ell)=0$, so that $$\rk(\Rad(M))=-\chi_{dR}(\p1\otimes\overline\kk,M\otimes\ell).$$ The proposition now follows from the Euler-Poincar\'e formula (Proposition~\ref{pp:EulerPoincare}). \end{proof} Let us construct the local Katz-Radon transform. \begin{proof}[Proof of Theorem~\ref{th:localradon}] We use Remark~\ref{rm:*convolution}. Choose $\infty\in\p1-\{x\}$, and consider on $\A1=\p1-\{\infty\}$ the $\D_\A1$-module $M_!=j_{x*}(\jo_!M)$. For the embedding $j:\A1\hookrightarrow\p1$, we have $$\Four(\Rad(j_*M_!)|_\A1)=\Four(M_!)\otimes\Four(\cK^{\lambda}).$$ However, $$\Four(M_!)=\jo_{\infty*}(\Four(x,\infty)M)$$ by Theorem~\ref{th:localfourier}, and therefore \begin{equation} \label{eq:rad!} \Rad(j_*M_!)|_\A1=j_{x*}\jo_!(\Four(x,\infty)^{-1}(\Four(x,\infty)(M)\otimes\Psi_\infty(\Four(\cK^{\lambda})))). \end{equation} Note that $\Psi_\infty(\Four(\cK^{\lambda}))\simeq\cK^\lambda_\infty\in\DHol{K_\infty}$; recall that $\cK^\lambda_\infty$ stands for a rank one local system with regular singularity and residue $\lambda$ at $\infty$. Since \eqref{eq:rad!} holds for any choice of $\infty\in\p1-\{x\}$, we see that \begin{equation} \Rad(x,x)(M)\simeq\Four(x,\infty)^{-1}(\Four(x,\infty)(M)\otimes\cK^\lambda_\infty).
\label{eq:localradon} \end{equation} \end{proof} \begin{corollary}\label{co:localradon} For $M\in\DHol{\p1}$ and $x\in\p1$, we have a natural isomorphism $$\Rad(x,x)\Phi_x(M)\iso \Phi_x(\Rad(M)).$$ In particular, $\Phi_x(M)=0$ (that is, $M$ is smooth at $x$) if and only if $\Phi_x(\Rad(M))=0$. \end{corollary} \begin{proof} Combine Theorem~\ref{th:localradon} and Corollary~\ref{co:cycles}\eqref{it:cycles2}. Alternatively, we can derive it from the corresponding property of the local Fourier transform (Corollary~\ref{co:localfourier}) by Lemma~\ref{lm:fourierradon}. \end{proof} Let $L$ be a local system on an open subset $U\subset\A1$. Using the identification $ \D_\lambda|_{\A1}\simeq\D_\A1$, we define the twisted Goresky-MacPherson extension $M=j_{!*}(L)\in\DHol{\lambda}$ for $j=j_U:U\hookrightarrow\p1$. The formal type of $L$ is completely described by the collection $$\left(\{[\Phi_x(M)]\}_{x\in\p1},\rk(M)\right).$$ Suppose that $\Rad(M)$ is also a twisted Goresky-MacPherson extension $\Rad(M)=j_{!*} (\tL)$ for a local system $\tL$ defined on the same open set $U$ (see Corollary~\ref{co:localradon}). Using Proposition~\ref{pp:rankradon} and Corollary~\ref{co:localradon}, we can determine the formal type of $\tL$ given the formal type of $L$. This allows us to relate isotypical deformations of $L$ to those of $\tL$. \begin{corollary} Let $L$ and $\tL$ be as above. \begin{enumerate} \item For any local Artinian ring $R$, there is a one-to-one correspondence between isotypical deformations of $L$ and of $\tL$ given by $$L_R\mapsto \Rad(j_{!*}L_R)|_U.$$ \item $L$ is rigid if and only if $\tL$ is rigid. \item $\rig(L)=\rig(\tL).$ \end{enumerate} \label{co:isotypicalradon} \end{corollary} \begin{proof} Analogous to Corollaries \ref{co:isotypicalfourier} and \ref{co:rigidityfourier}. It can also be derived from these corollaries using Lemma~\ref{lm:fourierradon}.
\end{proof} \begin{remarks*} Unlike the Fourier transform, the Katz-Radon transform preserves regularity of singularities (\cite{Ka}). This follows immediately from its description as an integral transform. The definition of the local Katz-Radon transform is not completely canonical, because we needed the isomorphism of Lemma~\ref{lm:twisted formal type} (this is somewhat similar to Remark~\ref{rm:convoluters}). For this reason, $\Rad(x,x)$ is defined only up to a non-canonical isomorphism. Equivalently, $\Rad(x,x)$ is naturally defined as a functor between categories of twisted $\D_{K_x}$-modules, and the twists can be eliminated, but not canonically. The Katz-Radon transform makes sense for twisted $\D$-modules on a twisted form $X$ of $\p1$ (that is, $X$ can be a smooth rational irreducible projective curve without $\kk$-points). In this case, one cannot interpret the Katz-Radon transform using the middle convolution or the Fourier transform without extending scalars. \end{remarks*} \section{Calculation of local Katz-Radon transform}\label{sc:computeradon} In this section, we prove Theorem~\ref{th:computeradon}. \subsection{Powers of differential operators} Informally speaking, we prove Theorem~\ref{th:computeradon} by looking at powers $\partial_z^\alpha$ of the derivation for $\alpha\in\kk$. We start with the following observation about operators $\kk((z))\to\kk((z)).$ Suppose $P:\kk((z))\to\kk((z))$ is a $\kk$-linear operator of the form $$P\left(\sum_\beta c_\beta z^\beta\right)=\sum_\beta c_\beta\sum_{i\ge 0} p_i(\beta) z^{\beta+d+i}.$$ Here $d$ is a fixed integer (the degree of $P$ with respect to the natural filtration). Up to reindexing, $p_i(\beta)\in\kk$ are the entries of the infinite matrix corresponding to $P$.
The powers of $P$ can be written in the same form: \begin{equation} P^\alpha\left(\sum_\beta c_\beta z^\beta\right)=\sum_\beta c_\beta\sum_{i\ge 0}p_i(\alpha,\beta) z^{\beta+\alpha d+i}, \label{eq:powers} \end{equation} where $\alpha$ is a non-negative integer. For instance, $p_i(1,\beta)=p_i(\beta)$, and $p_i(0,\beta)=0$ if $i>0$. Suppose that $P$ satisfies the following two conditions: \begin{enumerate} \item \label{cn:1} $p_0(\beta)=1$; \item \label{cn:2} $p_i(\beta)$ is a polynomial in $\beta$. \end{enumerate} \begin{remark*} If the degrees of polynomials $p_i(\beta)$ are uniformly bounded, $P$ is a differential operator. In general, the second condition means that $P$ is a `differential operator of infinite degree'. \end{remark*} \begin{lemma} If $P$ satisfies \eqref{cn:1} and \eqref{cn:2}, then $p_i(\alpha,\beta)$ is a polynomial in $\alpha$ and $\beta$ for all $i$. \label{lm:polynomialcoefficients} \end{lemma} \begin{proof} Proceed by induction on $i$. The base is $p_0(\alpha,\beta)=1$. Suppose we already know that $p_0(\alpha,\beta),\dots,p_{i-1}(\alpha,\beta)$ are polynomials. The identity $P^\alpha=P\cdot P^{\alpha-1}$ implies that $$p_i(\alpha,\beta)=\sum_{j=0}^i p_{i-j}(\beta+(\alpha-1)d+j)p_j(\alpha-1,\beta).$$ By the induction hypothesis, $p_i(\alpha,\beta)-p_i(\alpha-1,\beta)$ is a polynomial in $\alpha$ and $\beta$. Finally, $p_i(0,\beta)$ is a polynomial in $\beta$, and the lemma follows. \end{proof} Lemma~\ref{lm:polynomialcoefficients} allows us to define powers $P^\alpha$ for all $\alpha\in\kk$ in the following sense. For any $\gamma\in\kk$, consider the one-dimensional vector space $$z^\gamma\kk((z))=\left.\left\{\sum_{i=-k}^\infty c_{\gamma+i} z^{\gamma+i}\right\st \text{$k$ is not fixed}\right\}$$ over $\kk((z))$. Of course, $z^\gamma\kk((z))$ depends only on the image of $\gamma$ in $\kk/\Z$. Note that $\frac{d}{dz}$ acts on $z^\gamma\kk((z))$; the corresponding $\D$-module is the Kummer local system.
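The content of Lemma~\ref{lm:polynomialcoefficients} is easy to probe numerically. The sketch below (an illustration only, with our own helper names, not part of the proof) takes $P=\partial_z+z^{-2}$, so that $d=-2$, $p_0(\beta)=1$, $p_1(\beta)=\beta$ and $p_i(\beta)=0$ for $i\ge 2$; it computes $p_i(\alpha,\beta)$ for non-negative integer $\alpha$ by iterating $P$ on monomials, checks the closed form $p_1(\alpha,\beta)=\alpha\beta-\alpha(\alpha-1)$ obtained from the recursion in the proof, and checks that the fifth finite difference of $\alpha\mapsto p_2(\alpha,\beta)$ vanishes, consistent with $p_2$ being a polynomial of degree $4$ in $\alpha$.

```python
# P = d/dz + z^(-2) acting on integer-exponent Laurent polynomials,
# represented as {exponent: coefficient} dictionaries (exact integer arithmetic).
def apply_P(series):
    out = {}
    for e, c in series.items():
        out[e - 2] = out.get(e - 2, 0) + c        # z^(-2) * z^e       (p_0 = 1)
        out[e - 1] = out.get(e - 1, 0) + c * e    # d/dz z^e = e z^(e-1) (p_1 = beta)
    return out

def p(i, alpha, beta):
    """p_i(alpha, beta): coefficient of z^(beta + alpha*d + i) in P^alpha z^beta, d = -2."""
    s = {beta: 1}
    for _ in range(alpha):
        s = apply_P(s)
    return s.get(beta - 2 * alpha + i, 0)

beta = 5
# closed form for p_1, from the recursion p_1(a,b) = p_1(a-1,b) + b - 2(a-1)
for a in range(7):
    assert p(1, a, beta) == a * beta - a * (a - 1)

# p_2(alpha, beta) has degree 4 in alpha, so its 5th finite difference vanishes
vals = [p(2, a, beta) for a in range(9)]
for _ in range(5):
    vals = [v2 - v1 for v1, v2 in zip(vals, vals[1:])]
assert all(v == 0 for v in vals)
```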
For any $\alpha,\gamma\in\kk$, we define the operator $$P^\alpha:z^\gamma\kk((z))\to z^{\gamma+d\alpha}\kk((z))$$ by \eqref{eq:powers}. We can prove algebraic identities involving $P^\alpha$ by rewriting them in terms of $p_i(\alpha,\beta)$ and then verifying them for integers $\alpha,\beta$. Here is an important example. \begin{corollary} For any $\alpha',\alpha''$, we have $P^{\alpha'+\alpha''}=P^{\alpha'}\cdot P^{\alpha''}$. (The domain of the operators is $z^\gamma\kk((z))$ for any $\gamma\in\kk$.) \label{co:powers} \end{corollary} \begin{proof} In terms of $p_i(\alpha,\beta)$, we have to show that \begin{equation} p_i(\alpha'+\alpha'',\beta)=\sum_{j=0}^i p_{i-j}(\alpha',\beta+\alpha''d+j)p_j(\alpha'',\beta) \label{eq:ppowers} \end{equation} for all $i$. Both sides of \eqref{eq:ppowers} belong to $\kk[\alpha',\alpha'',\beta]$; therefore, it suffices to verify \eqref{eq:ppowers} on the Zariski dense set $$\{(\alpha',\alpha'',\beta)\st\text{$\alpha'$, $\alpha''$, and $\beta$ are integers, $\alpha',\alpha''\ge 0$}\},$$ where it holds by definition. \end{proof} \subsection{Proof of Theorem~\ref{th:computeradon}} Let us start with some simplifying assumptions. First, consider the maximal unramified extension $K^{unr}_x\supset K_x$. If $z$ is a local coordinate at $x$, $K^{unr}_x=\overline{\kk}((z))$. The isomorphism class of an object $M\in\DHol{K_x}$ is determined by the isomorphism class of its image in $\DHol{K^{unr}_x}$. Therefore, we can assume without loss of generality that $\kk$ is algebraically closed. Also, it suffices to prove Theorem~\ref{th:computeradon} for irreducible $M\in\DHol{K_x}$. Choose a coordinate $z$ on $\p1$ such that $x$ is given by $z=0$. By \eqref{eq:localradon}, we need to show that \begin{equation} (\Four(0,\infty)M)\otimes\cK_\infty^\lambda=\Four(0,\infty)\left(M\otimes\cK_0^{\lambda(1+\slope(M))}\right) \label{eq:computeradon} \end{equation} for irreducible $M\in\DHol{K_0}$. Our final assumption is that $M$ is irregular: $\slope(M)>0$.
In the case of regular singularities, Theorem~\ref{th:computeradon} was proved by N.~Katz in \cite{Ka}. Indeed, in this case $M\simeq\cK_0^\alpha$ for some $\alpha\in\kk$, and \eqref{eq:computeradon} follows from \eqref{eq:kummer}. Let us use the well-known description of irreducible local systems on a punctured disk (see for instance \cite[Theorem III.1.2]{Mal}). It implies that there is an isomorphism $$M\simeq\kk((z^{1/r}))$$ for a ramified extension $\kk((z^{1/r}))\supset K_0$ such that the derivation on $M$ is given by \begin{equation} \partial_z=\frac{d}{dz}+f(z) \label{eq:derivation} \end{equation} for some $f(z)$ of the form \begin{equation} f(z)=C z^{-\slope(M)-1}+\dots\in\kk((z^{1/r})),\quad C\in\kk-\{0\}. \label{eq:mu} \end{equation} For any $\gamma\in\kk$, consider the vector space $z^\gamma\kk((z^{1/r}))$. Equip it with the derivation \eqref{eq:derivation}; the resulting $\D_{K_0}$-module is $M\otimes\cK^\gamma_0$. Consider the operator $$P=\frac{1}{C}\partial_z=\frac{1}{C}\left(\frac{d}{dz}+f(z)\right):\kk((z^{1/r}))\to\kk((z^{1/r})),$$ where $C$ is the leading coefficient of $f$ as in \eqref{eq:mu}. Lemma~\ref{lm:polynomialcoefficients} applies to $P$, so we can define powers $$P^\alpha:z^\gamma\kk((z^{1/r}))\to z^{\gamma-(1+\slope(M))\alpha}\kk((z^{1/r})).$$ In particular, for $\gamma=0$, we obtain a $\kk$-linear map $$P^\alpha:M\to M\otimes\cK_0^{-(1+\slope(M))\alpha}.$$ Its properties are summarized below. \begin{proposition} For any $\alpha\in\kk$, \begin{enumerate} \item\label{it:P1} $P^\alpha$ is invertible; \item\label{it:P2} $P^\alpha\partial_z=\partial_z P^\alpha$; \item\label{it:P3} $P^\alpha z=z P^\alpha+\frac{\alpha}{C} P^{\alpha-1}$. \end{enumerate} \label{pp:powers} \end{proposition} \begin{proof} \eqref{it:P1} follows from Corollary~\ref{co:powers}; the inverse map is $P^{-\alpha}$. \eqref{it:P2} follows from Corollary~\ref{co:powers}, because $\partial_z=C\cdot P^1$.
\eqref{it:P3} can be proved by the same method as Corollary~\ref{co:powers} by first verifying it when $\alpha$ is a positive integer. \end{proof} Consider now the local Fourier transform $\Four(0,\infty)M$. Denote by $\zeta=\frac{1}{z}$ the coordinate at $\infty\in\p1$. As described in Section~\ref{sc:explicit}, $\Four(0,\infty)M$ coincides with $M$ as a $\kk$-vector space; the action of $\zeta$ (resp. the derivation $\partial_\zeta$) is given by $-\partial_z^{-1}$ (resp. $-\partial_z^2z$). $P^\alpha$ can thus be viewed as a $\kk$-linear map $$\Four(0,\infty)M\to\Four(0,\infty)(M\otimes\cK_0^{-(1+\slope(M))\alpha}).$$ Proposition~\ref{pp:powers} implies that $P^\alpha$ is a $\kk((\zeta))$-linear isomorphism that satisfies $$\partial_\zeta P^\alpha=-\partial_z^2zP^\alpha= -P^\alpha\partial_z^2z+\frac{\alpha}{C}\partial^2_z P^{\alpha-1}= P^\alpha\partial_\zeta+\alpha P^\alpha\partial_z=P^\alpha\left(\partial_\zeta-\frac{\alpha}{\zeta}\right).$$ In other words, $P^\alpha$ yields an isomorphism of $\D$-modules $$\Four(0,\infty)(M)\otimes\cK_\infty^{-\alpha}\iso\Four(0,\infty)\left(M\otimes\cK_0^{-(1+\slope(M))\alpha}\right).$$ Setting $\alpha=-\lambda$ gives \eqref{eq:computeradon}. This completes the proof of Theorem~\ref{th:computeradon}. \qed \nocite{*} \end{document}
\begin{document} \title{Photonic tensor networks produced by a single quantum emitter} \author{Hannes Pichler} \email[These authors contributed equally to this work\\Corresponding author: ]{hannes.pichler@cfa.harvard.edu} \affiliation{ITAMP, Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138, USA} \affiliation{Physics Department, Harvard University, Cambridge, Massachusetts 02138, USA} \author{Soonwon Choi} \email[These authors contributed equally to this work\\Corresponding author: ]{hannes.pichler@cfa.harvard.edu} \affiliation{Physics Department, Harvard University, Cambridge, Massachusetts 02138, USA} \author{Peter Zoller} \affiliation{Institute for Theoretical Physics, University of Innsbruck, A-6020 Innsbruck, Austria} \affiliation{Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, A-6020 Innsbruck, Austria} \author{Mikhail D. Lukin} \affiliation{Physics Department, Harvard University, Cambridge, Massachusetts 02138, USA} \begin{abstract} We propose and analyze a protocol to generate two-dimensional tensor network states using a single quantum system that sequentially interacts with a 1D string of qubits. This is accomplished by using parts of the string itself as a quantum queue memory. As a physical implementation, we consider a single atom or atom-like system coupled to a 1D waveguide with a distant mirror, where guided photons represent the qubits while the mirror allows the implementation of the queue memory. We identify the class of many-body quantum states that can be produced using this approach. These include universal resources for measurement-based quantum computation and states associated with topologically ordered phases. We discuss an explicit protocol to deterministically create a 2D cluster state in a quantum nanophotonic experiment that allows for a realization of a quantum computer using a single atom coupled to light.
\end{abstract} \date{\today} \maketitle Controlled generation of multi-qubit entanglement is central to quantum information science \cite{Kimble:2008if,Gisin:2007by,Briegel:2009gg}. In particular, quantum communication requires the use of photonic qubits, where information is encoded in the photon number or the polarization degrees of freedom of light. By coupling a single atom or atom-like system to a photonic waveguide, one can deterministically produce photonic qubits as well as atom-photon entanglement. Indeed, a systematic control over such single quantum emitters has been demonstrated in a variety of experimental systems \cite{Mitsch:2014fz,Goban:2014eq,Sollner:2015fc,Sipahigil:2016hy,vanLoo:2013df,Hoi:2015fh,Tiecke:vw,Reiserer:2014hf,He:2013bq,Lodahl:2015fy} and used for fundamental tests of quantum mechanics \cite{Hensen:2015dw}. More generally, it is known that combining quantum emitters with minimal resources such as quantum memories and few-qubit registers can provide a powerful platform for quantum networks \cite{Kimble:2008if,Duan:2001dt,Reiserer:2015en}. Beyond quantum communication applications, it has recently been proposed and demonstrated that individual quantum emitters can be used to produce a sequence of photons that are entangled in a multipartite way \cite{Schon:2005fk,Lindner:2009eu,Economou:2010jt,Schwartz:2016dj}. This can potentially be of interest for quantum computation and simulation \cite{Osborne:2010hr,Barrett:2013ef,Eichler:2015if}. However, the entanglement structure of the resultant many-body state is characterized by so-called matrix product states (MPS) \cite{Schon:2007bw,Verstraete:2010bf,Schollwock:2011gl}, which can be efficiently simulated classically, hence limiting their potential utility. This Letter describes a method to generate quantum states with higher-dimensional entanglement structures, using minimal resources.
The key idea is to employ a quantum memory (such as a delay line for photons) that allows for repeated interactions between a small quantum system (such as a quantum emitter) and a 1D string of individual qubits \cite{Pichler:2016bx,Grimsmo:2015gf}. Specifically, we describe an explicit protocol to create the 2D cluster state \cite{Rausssendorf:2001js}, a universal resource for quantum computation \cite{Raussendorf:2003ca} using photonic qubits interacting with a single few-state quantum system. More generally, we characterize the class of states achievable in this setting in terms of so-called projected entangled pair states (PEPS) \cite{Verstraete:2004cf}. This opens new avenues for quantum information processing \cite{Rausssendorf:2001js} and photonic simulation of quantum many-body physics with currently available experimental techniques \cite{Osborne:2010hr,Barrett:2013ef,Eichler:2015if}. \begin{figure} \caption{Schematic setting. (a) We consider a $d$-dimensional entangling quantum system that sequentially interacts with qubits on a 1D semi-infinite conveyor belt. In each step $k$, the entangling unit can interact with two of these qubits, labelled $k$ and $k+N$, such that qubits $k+1,\dots,k+N$ represent a quantum memory queue. (b) Physical interpretation: Wrapping the tape around a cylinder with the proper circumference, the entangling unit interacts with two qubits that are neighboring along the introduced vertical dimension. As time progresses, the entangling unit moves along the tape, creating a 2D entanglement structure. } \label{fig1} \end{figure} To illustrate the main idea, we first consider a simplified setup independent of any specific experimental implementation, where a small quantum system $\mathcal{Q}$ sequentially interacts with a string of qubits moving on a conveyor belt. In each discrete time step, $\mathcal{Q}$ may write quantum states onto initially uncorrelated qubits by unitary evolution and generate an output state.
If $\mathcal{Q}$ interacts with one qubit (at position $k$) at each time $k$, it ``carries'' correlations from one qubit to another, thereby generating entanglement among them \cite{Schon:2005fk}. In this approach, the size of $\mathcal{Q}$ defines the information capacity and limits the maximum entanglement. In fact, the resulting quantum states can be exactly represented by so-called matrix product states \cite{Schon:2005fk}, which naturally appear in one-dimensional many-body systems \cite{Fannes:1992vq,White:1992zz,Vidal:2003gb,Orus:2014ja}. The key idea of the present work is to allow $\mathcal{Q}$ to interact with qubits repeatedly and non-locally such that the information stored in step $k$ is also available in step $k+N$. This is achieved by additional interactions between $\mathcal{Q}$ and qubit $k+N$ in step $k$ [Fig.~\ref{fig1}(a)]. Such a storage and retrieval of information effectively realizes a quantum queue memory of size $N$. Owing to this memory, the resultant output state in general exhibits a qualitatively different entanglement structure. To visualize it, let us imagine rearranging the qubits such that the string winds around a cylinder with circumference $N$ as in Fig.~\ref{fig1}(b). We interpret the new geometry as a 2D square lattice with shifted periodic boundary conditions. In this picture, $\mathcal{Q}$ can create correlations between neighboring qubits not only in the horizontal direction ($k$ and $k\pm1$) in subsequent steps, but also in the vertical direction ($k$ and $k\pm N$) via simultaneous interactions in each turn of the protocol.
In particular, assuming that the qubits are initialized in the state $\ket{0}$, the unitary time evolution in each time step $k$ reduces to a map \begin{align}\label{eq1} \hat{U}[k] = \sum_{i,a,b,c,d} U[k]_{a,b,c,d}^{i} \ket{i,a,b}\bra{c,0,d}, \end{align} where $\ket{i,a,b} \equiv \ket{i}_k\ket{a}_{k+N}\ket{b}_\mathcal{Q}$ denotes a state with the $k$-th and $(k+N)$-th qubits and $\mathcal{Q}$ in states $i$, $a$, $b$, respectively. Repeated application of such maps produces a quantum state characterized by a 2D network of tensors $U[k]_{a,b,c,d}^{i}$. This kind of representation corresponds to the description of 2D many-body systems within the framework of PEPS \cite{Verstraete:2008ko,Schuch:2007fm,Banuls:2008ch}, which include resources for measurement-based quantum computation (MBQC) as well as topologically ordered systems. \begin{figure}\label{fig2} \end{figure} \textit{2D photonic cluster state.---} We now analyze this scheme in detail starting with the example of the 2D cluster state with a concrete physical realization. Specifically, our scheme can be directly implemented in nanophotonic experiments. We consider an individual quantum emitter (e.g.~an atom) representing $\mc{Q}$, coupled to a one-dimensional semi-infinite waveguide [Fig.~\ref{fig2}(a)]. This atom has two metastable states $\ket{g_1}\equiv\ket{0}_\mc{Q}$ and $\ket{g_2}\equiv \ket{1}_\mc{Q}$, which can be coherently manipulated by a classical field $\Omega_1(t)$ [Fig.~\ref{fig2}(b)]. The state $\ket{g_2}$ can be excited to state $\ket{e_L}$ using a laser with Rabi frequency $\Omega_2(t)$. Following each excitation, the atom will decay to $\ket{g_2}$ emitting a photon into the waveguide. Qubits are encoded via the absence ($\ket{0}_k$) or presence ($\ket{1}_k$) of a photon during the time interval $k$. For now, we assume that the atom-photon coupling is chiral \cite{Lodahl:2017bz} such that all these photons are emitted unidirectionally by the atom into the waveguide, e.g.
to the left in Fig.~\ref{fig2}(a). The waveguide is terminated by a mirror located at a distance $L$ from the atom. Finally, another excited state $\ket{e_R}$, degenerate with $\ket{e_L}$, couples to the right-moving photons reflected from the mirror \cite{Guimond:2016kf}. We denote the corresponding decay rates by $\gamma_L$ and $\gamma_R$ [Fig.~\ref{fig2}(b)]. We note that alternative implementations that do not require chiral atom-photon interactions are also possible, as discussed below. Our protocol starts by first generating 1D cluster states of left-propagating photons \cite{Lindner:2009eu}. To this end, the atom is initially prepared in the state $\ket{0}_\mc{Q}$. Then a rapid $\pi/2$-pulse is applied on the atomic qubit, followed by a $\pi$-pulse on the $\ket{g_2} \rightarrow \ket{e_L}$ transition [Fig.~\ref{fig2}(c)]. The subsequent decay from state $\ket{e_L}$ to $\ket{g_2}$ results in entanglement between the atom and the emitted photon, i.e. $\ket{0}_\mc{Q}\ket{0}_1+\ket{1}_\mc{Q}\ket{1}_1$. When this pulse sequence is repeated $n$ times, one can show that this protocol leads to a train of photonic qubits in the form of a 1D cluster state. We note that an analogous scheme has already been demonstrated in an experiment using a quantum dot \cite{Schwartz:2016dj}, following a proposal by Lindner and Rudolph \cite{Lindner:2009eu}. Interestingly, the 2D cluster state is generated from exactly the same sequence if we take into account the effect of the mirror and the scattering of the right-moving photons from the atom. Each of the left-moving photons is reflected from the mirror and returns to the atom after a time delay $\tau=2L/c$, where $c$ denotes the speed of light. We are interested in the situation where this time delay is large so that the $k$-th photon interacts for the second time with the atom in between the two pulses of the $(k+N)$-th step of the protocol.
This is achieved, for example, by setting $\tau = (N-1/2)T$ where $T$ is the time duration of each time step [see Fig.~\ref{fig2}(c)]. Crucially, when the atom is in the state $\ket{g_2}$, the right-moving photon is resonantly coupled to the $\ket{g_2}\rightarrow \ket{e_R}$ transition, picking up a scattering phase shift of $\pi$ without any reflection \cite{Lodahl:2017bz}. In contrast, when the atom is in state $\ket{g_1}$, or the photon mode is empty, there is no interaction. This process implements a controlled $\sigma^z$ gate \begin{align}\label{eqPhasegate} \hat Z_{\mc{Q},k}=\ket{0}_\mc{Q}\bra{0}\otimes \mathbb{1}_k+\ket{1}_\mc{Q}\bra{1}\otimes \sigma_k^z \end{align} and entangles the atom and the $k$-th photon. In turn, the subsequently generated $(k+N)$-th photon inherits this entanglement, thereby giving rise to the 2D structure described above. Formally, the protocol can be interpreted as a sequential application of gates $\hat X_{\mc{Q},k+N} \hat Z_{\mc{Q},k}\hat H_\mc{Q}$ on the atom and (photonic) qubits $k$ and $k+N$, that are initially prepared in the trivial state $\ket{0}_\mc{Q}\bigotimes_k\ket{0}_k$ [Fig.~\ref{fig2}(d,e)]. Here $\hat H_\mc{Q}=\frac{1}{\sqrt{2}}(\sigma_\mc{Q}^{z}+\sigma_\mc{Q}^{x})$ is a Hadamard rotation of the atom and $\hat X_{\mc{Q},k}=\ket{0}_\mc{Q}\bra{0}\otimes \mathbb{1}_k+\ket{1}_\mc{Q}\bra{1}\otimes \sigma_k^x$ is a flip of the $k$-th qubit state, controlled by the atom. One can show [see supplementary material (SM)] that after $(M+1)\times N$ turns this gives exactly the 2D cluster state on an $M\times N$ square lattice with shifted periodic boundary conditions \footnote{Up to a trivial operation disentangling the atom.}: \begin{align}\label{eq6} \ket{\psi_{\mc{C}_{2D}}}=\lr{\prod_{k=1}^{N(M+1)}\hat X_{\mc{Q},k+N}\hat Z_{\mc{Q},k} \hat H_\mc{Q} }\ket{0}_\mc{Q}\bigotimes_k\ket{0}_k.
\end{align} \begin{figure}\label{fig3} \end{figure} Before proceeding, we examine the necessary conditions for the implementation of our protocol. First, photons generated in this pulsed scheme have a finite bandwidth $\mc{B}$. In order to realize the controlled phase gate \eqref{eqPhasegate}, this bandwidth must be small, i.e., $\mc{B}\ll\gamma_R$ \cite{Lodahl:2017bz}; otherwise, the scattering by the atom not only imprints a phase but also distorts the photon wavepacket, reducing the gate fidelity, $\mc{F}_Z$ [Fig.~\ref{fig3}(c) and SM]. Narrow-bandwidth photons, and, correspondingly, high-fidelity gates, can be obtained by shaping the photon wavepacket, for example, by exciting the atom via a third stable level $\ket{g_3}$ in a Raman-type configuration [Fig.~\ref{fig3}(a)]. In particular, the gate fidelity can be significantly increased by shaping the temporal profiles \cite{Pechal:2014db}, i.e. eliminating the error to first order in $\mc{B}/\gamma_R$ [Fig.~\ref{fig3}(c) and SM]. Moreover, time-symmetric photons simplify the measurement of the qubits in an arbitrary basis, since they can be perfectly absorbed by a second atom acting as the measurement device \cite{Cirac:1997is}. Such techniques allow the full implementation of MBQC using our protocol \cite{Rausssendorf:2001js}. Second, the information capacity of the queue memory is bounded by the number of photons in the delay line. In order to distinguish two consecutive photons well, one can only generate them at a rate $1/T\ll \mc{B}$ [see Fig.~\ref{fig3}]. Therefore, the requirement for narrow-bandwidth photons competes with the effective size of the achievable memory, $N$. Thus, a high-fidelity implementation requires the hierarchy \begin{align} N\sim \tau/T\ll \tau\mc{B}\ll\gamma_R \tau. \end{align} Note that the quantum memory lifetime can be dramatically enhanced e.g. via a dispersive slow-light medium \cite{Lukin:2003ct,Fleischhauer:2005ti}.
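As a cross-check of the gate picture, the 1D limit of the protocol (no mirror, i.e. dropping the gates $\hat Z_{\mc{Q},k}$) can be simulated directly for a few qubits. The following sketch (our own helper functions and qubit ordering; an illustration, not a model of the physical dynamics) applies $\hat H_\mc{Q}\hat X_{\mc{Q},k}$ sequentially to $\ket{+}_\mc{Q}\bigotimes_k\ket{0}_k$ and compares the result with the 1D cluster state defined by controlled-phase gates on a chain in which $\mc{Q}$ plays the role of the last qubit, in accordance with the supplementary material.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
X = np.array([[0, 1], [1, 0]])                  # sigma_x
Z = np.diag([1.0, -1.0])                        # sigma_z
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

def embed(g, pos, n):
    """Single-qubit gate g acting on qubit `pos` of an n-qubit register."""
    out = np.array([[1.0]])
    for q in range(n):
        out = np.kron(out, g if q == pos else np.eye(2))
    return out

def cgate(ctrl, targ, g, n):
    """Apply g to `targ` when `ctrl` is in |1>."""
    return embed(P0, ctrl, n) + embed(P1, ctrl, n) @ embed(g, targ, n)

K = 3                  # number of photonic qubits
n, Q = K + 1, K        # photons at positions 0..K-1, atom Q last

# protocol state: apply H_Q X_{Q,k} for k = 1..K (rightmost factor acts first)
psi = np.array([1.0])
for q in range(n):
    psi = np.kron(psi, np.ones(2) / np.sqrt(2) if q == Q else np.array([1.0, 0.0]))
for k in range(K):
    psi = embed(H, Q, n) @ (cgate(Q, k, X, n) @ psi)

# reference: 1D cluster state = CZ on the chain 0-1-...-(K-1)-Q applied to |+>^n
target = np.ones(2**n) / 2**(n / 2)
for a in range(n - 1):
    target = cgate(a, a + 1, Z, n) @ target

assert np.allclose(psi, target)
```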
Finally, apart from these fundamental considerations, experimental imperfections will eventually limit the achievable size of the cluster state. One of the most important challenges is photon loss, often quantified by the so-called cooperativity $\eta_j=\gamma_j/\Gamma_j$ ($j=L,R$). Here $\Gamma_j$ denotes the effective rate of photon loss arising from emission into unguided modes (from state $\ket{e_j}$) and amplitude attenuation in the waveguide. Large, high-fidelity cluster states can be obtained in the regime $\eta_j\gg 1$, where the achievable system sizes scale as $NM\lesssim (1/\eta_L+2/\eta_R)^{-1}$ [Fig.~\ref{fig3}(e)]. High cooperativities have been demonstrated in nanophotonic experiments with neutral atoms and solid-state emitters \cite{Tiecke:vw}. We also note that our protocol can be adapted for settings that do not have chiral couplings. For example, when an atom is coupled to a waveguide via a one-sided cavity \cite{Tiecke:vw,Reiserer:2014hf}, the delayed feedback can be introduced by a distant, switchable mirror [Fig.~\ref{fig3}(b)]. Proper control of the mirror can ensure that each generated photon interacts exactly twice with the atom before it leaves at the output port, as required for our protocol. Moreover, in such a setting, it is possible to encode qubit states in photon polarizations rather than number degrees of freedom [see Fig.~\ref{fig3}(b)], allowing the detection of photon loss errors. \textit{Generalizations and Outlook}.--- Apart from applications in quantum computing, our scheme can be harnessed to study strongly correlated quantum systems. Output fields of a quantum emitter can be used as a variational class to search for ground states of Hamiltonians of interest.
While this has been discussed previously for 1D problems \cite{Haegeman:2010bc,Eichler:2015if}, where the generated states are limited to MPS and can be simulated classically \cite{Vidal:2008bk}, a 2D tensor network is qualitatively different from an MPS, since an exact contraction, e.g. to calculate correlation functions, is in general intractable on classical computers \cite{Schuch:2007fm}. Moreover, our scheme allows the study of many-body phenomena that are present only in dimensions larger than one, such as topologically non-trivial phases. Indeed, the class of states that can be created in our approach can be completely characterized. More specifically, we are interested in the structure of the wavefunction $\ket{\Psi(k)}$, describing $\mathcal{Q}$ and the string of qubits, after $k$ steps: \begin{align}\label{eq5} \ket{\Psi(k)}&=\hat U[k]\ket{\Psi(k-1)}=\prod_{j=1}^k\hat U[j] \ket{\Psi(0)}, \end{align} where the unitary evolution $\hat U[j]$ acts only on $\mathcal{Q}$ and qubits $j$ and $j+N$ and, for concreteness, we choose $\ket{\Psi(0)}=\ket{0}_\mc{Q}\bigotimes_k\ket{0}_k$ \footnote{Note that we use a convention where the ordering in the product is defined via $\prod_{j=1}^k M_j=M_kM_{k-1}\dots M_1$.}. Given the initial state and the limited support of $\hat{U}$, the wavefunction is entirely specified by $U[j]_{a,b,c,d}^i$ (see Eq.~\eqref{eq1}). In particular, $U[j]_{a,b,c,d}^i$ can be understood as a rank-5 tensor, where the ``physical index'' $i$ denotes the state of the qubit $j$, the ``horizontal bonds'' $b$ and $d$ run over internal degrees of freedom of $\mc{Q}$, and ``vertical bonds'' $a$ and $c$ enumerate the quantum states of input into and output from the queue memory, respectively [see Fig.~\ref{fig4}(a)].
Therefore, we can write $\ket{\Psi(k)}$ as \begin{align}\label{eq3} \ket{\Psi(k)}=\sum_{i_\mc{Q},\{i_j\}} \mc{C}(\{U[j]^{i_j}\})\ket{i_\mc{Q},i_1,i_2,\cdots}, \end{align} where $\mc{C}(\dots)$ denotes the contraction of the 2D tensor networks in Fig.~\ref{fig4}(b) and $\ket{i_\mc{Q},i_1,i_2,\cdots}$ enumerates configurations of $\mathcal{Q}$ and the string of qubits. \begin{figure} \caption{Tensor network representation of the generated states. (a) Graphical representation of the tensors $U[k]$. (b) Representation of the state \eqref{eq5} in terms of the tensors given in (a). Connected lines indicate contractions, red open lines denote physical indices corresponding to the states of qubits in the output. In the top row the tensors are contracted with the initial state of $\mathcal{Q}$ and the first $N$ qubits in the string, as indicated by the black circles. The open legs at the bottom correspond to the state of the qubits in the memory queue after step $k$, and the last open line on the bottom right corresponds to the state of $\mc{Q}$. (c) Circuit model that gives the tensor \eqref{eq8} (up to local unitary operations) for an uncorrelated initial state in which $\mathcal{Q}$ and qubits $k_1,k_4$ ($k=1,2,\dots$) are in state $(\ket{0}+\ket{1})/\sqrt{2}$ and all other qubits are in $\ket{0}$ [cf.~Fig.~\ref{fig2}(d,e)].} \label{fig4} \end{figure} Since the tensors describing the state are given by the matrix elements of a unitary matrix, they have to satisfy \begin{align}\label{eq7} \sum_{i,a,b}U^{i}_{a,b,c,d}(U^{i}_{a,b,c',d'})^\ast=\delta_{c,c'}\delta_{d,d'}. \end{align} Physically, this isometric condition reflects the deterministic and sequential nature of the protocol --- at a given step $k$ the state of qubits $j<k$ cannot be changed. Every tensor network that can be brought into a form respecting \eqref{eq7} can be constructed in our approach. By construction, we showed above that this includes universal resources for MBQC.
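For the cluster-state protocol, the isometry condition \eqref{eq7} can be verified directly from the three-qubit unitary $\hat U=\hat X_{\mc{Q},k+N}\hat Z_{\mc{Q},k}\hat H_\mc{Q}$ by extracting $U^{i}_{a,b,c,d}=\bra{i,a,b}\hat U\ket{c,0,d}$ as in Eq.~\eqref{eq1}. A minimal numerical sketch (the helper functions and register ordering are ours):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

def embed(g, pos, n=3):
    out = np.array([[1.0]])
    for q in range(n):
        out = np.kron(out, g if q == pos else np.eye(2))
    return out

def cgate(ctrl, targ, g, n=3):
    return embed(P0, ctrl, n) + embed(P1, ctrl, n) @ embed(g, targ, n)

# register ordering (qubit k, qubit k+N, atom Q); atom is position 2
U = cgate(2, 1, X) @ cgate(2, 0, Z) @ embed(H, 2)   # X_{Q,k+N} Z_{Q,k} H_Q

# U^i_{a,b,c,d} = <i,a,b| U |c,0,d>: fix the middle input index to 0
T = U.reshape(2, 2, 2, 2, 2, 2)[:, :, :, :, 0, :]

# isometry: sum_{i,a,b} U^i_{abcd} (U^i_{abc'd'})^* = delta_{cc'} delta_{dd'}
G = np.einsum('iabcd,iabef->cdef', T, T.conj())
expected = np.einsum('ce,df->cdef', np.eye(2), np.eye(2))
assert np.allclose(G, expected)
```

The check succeeds for any unitary $\hat U$ of the form above, reflecting the general argument in the text.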
Interestingly, this class of states includes exotic states with topological order such as string-net states \cite{Buerschaper:2009fi} or the ground state of Kitaev's Hamiltonian \cite{Kitaev:2003jw}. In the latter case, the ground state can be represented by a 2D network of translationally invariant tensors \cite{Orus:2014ja,Schuch:2010jp,Schuch:2012cq} \begin{align}\label{eq8} U^{i_1,i_2,i_3,i_4}_{a,b,c,d}=\delta_{a+b,i_1}\delta_{d+a,i_2}\delta_{c+d,i_3}\delta_{b+c,i_4}/4, \end{align} where $i_1, \dots, i_4$ denote the states of four physical qubits in a unit cell and the bond indices $a, b, c, d$ each run over two values. This tensor explicitly satisfies \eqref{eq7} and can immediately be translated into a protocol similar to the one for generating the 2D cluster state. In Fig.~\ref{fig4}(c) we give an explicit circuit representation of the stroboscopic step $\hat U$, where we only utilize gates that are accessible in the photonic systems discussed above, such as controlled single-photon generation $\hat X_{\mc{Q},k}$ and atom-photon phase gates $\hat Z_{\mc{Q},k}$. The four physical qubits of each unit cell are in this case represented by four sequentially generated photons in each step. Our work can be extended in several ways. In analogy to continuous MPS in 1D \cite{Verstraete:2010bf}, our protocol can be adapted to use continuous driving fields, which results in hybrid continuous-discrete 2D tensor network states. Moreover, adding multiple delay lines allows $\mathcal{Q}$ to interact with more than two qubits, and thus provides tensor networks in higher dimensions. This is of relevance for fault-tolerant implementations of MBQC using 3D cluster states \cite{Briegel:2009gg,Raussendorf:2006je}. A promising extension of our nanophotonic protocol is the implementation of MBQC that can tolerate up to 50\% of counterfactual errors due to photon loss \cite{Varnava:2006jd,Dawson:2006eo}.
Finally, besides nanophotonic setups, our `single-atom quantum computer' can be implemented with microwave photons \cite{vanLoo:2013df,Hoi:2015fh}, phonons \cite{Ramos:2014ut}, surface-acoustic waves \cite{Gustafsson:2014gu,Guo:2016ui} or moving tapes of spin qubits \cite{Benito:2016cw}. \onecolumngrid { \center \bf \large Supplemental Material for: \\ Photonic tensor networks produced by a single quantum emitter\vspace*{0.1cm}\\ \vspace*{0.0cm} } \begin{center} Hannes Pichler$^{1,2}$, Soonwon Choi$^{2}$, Peter Zoller$^{3,4}$, and Mikhail D. Lukin $^{2}$\\ \vspace*{0.15cm} \small{\textit{$^1$ITAMP, Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138, USA\\ $^2$Physics Department, Harvard University, Cambridge, Massachusetts 02138, USA\\ $^3$Institute for Theoretical Physics, University of Innsbruck, A-6020 Innsbruck, Austria\\ $^4$ Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, A-6020 Innsbruck, Austria\\}} \vspace*{0.25cm} \end{center} \twocolumngrid \section{2D cluster state representation} In this section we prove that $\ket{\psi_{\mc{C}_{2D}}}$ in Eq.~\eqref{eq6} represents the 2D cluster state. To this end we start with the representation of the 1D cluster state on $K+1$ qubits: \begin{align} \ket{\psi_{\mc{C}_{1D}}}=\prod_{k=1}^{K} Z_{k,k+1}\ket{+}^{\otimes {K+1}}. \end{align} Using the swap operator $S_{i,j}$ that exchanges the quantum states of qubits $i$ and $j$ we can rewrite the 1D cluster state as \begin{align} \ket{\psi_{\mc{C}_{1D}}}=\lr{\prod_{k=1}^{K} S_{\mc{Q},k}Z_{\mc{Q},k} }\ket{+}_\mc{Q}\bigotimes_{k=1}^{K}\ket{+}_k, \end{align} where we used the relation $Z_{b,c}=S_{a,b}Z_{a,c}S_{a,b}$, and identified the $(K+1)$-th qubit with the ancilla $\mc{Q}$. Note that throughout this paper we use a convention where the ordering in the product is defined via $\prod_{j=1}^k M_j=M_kM_{k-1}\dots M_1$. 
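As a small consistency check (an illustrative numpy sketch added here, not part of the original supplement), the two representations of the 1D cluster state can be compared directly for small $K$, with qubit $K+1$ playing the role of $\mc{Q}$ and the rightmost factor of the product acting first, as per the convention above:

```python
import numpy as np

def apply_gate(state, gate, qubits):
    """Apply a k-qubit gate (2^k x 2^k) to the given qubit positions of a
    state stored as an array of shape (2,)*n."""
    k = len(qubits)
    g = gate.reshape((2,) * (2 * k))
    out = np.tensordot(g, state, axes=(list(range(k, 2 * k)), list(qubits)))
    return np.moveaxis(out, list(range(k)), list(qubits))

K = 3                    # string qubits 0..K-1; Q is qubit K (the (K+1)-th)
n = K + 1
CZ = np.diag([1., 1., 1., -1.])
SWAP = np.array([[1., 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
plus = np.full((2,) * n, 2 ** (-n / 2))   # |+> on all K+1 qubits

# Standard definition: phase gates between neighbours k, k+1
psi1 = plus
for k in range(K):
    psi1 = apply_gate(psi1, CZ, (k, k + 1))

# Rewritten form: product of S_{Q,k} Z_{Q,k}, rightmost factor acting first
psi2 = plus
for k in range(K):
    psi2 = apply_gate(psi2, CZ, (n - 1, k))
    psi2 = apply_gate(psi2, SWAP, (n - 1, k))

print(np.allclose(psi1, psi2))  # True
```

The same brute-force comparison also confirms the subsequent rewritings for any $K$ small enough to simulate.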
We now use the relation \begin{align} S_{\mc{Q},v}Z_{\mc{Q},v}\ket{\psi}_\mc{Q}\otimes\ket{+}_v= H_\mc{Q} X_{\mc{Q},v} H_v \ket{\psi}_\mc{Q}\otimes\ket{+}_v, \end{align} where $H_x$ is the Hadamard gate acting on qubit $x$. We note that this equality is not an operator identity but a property of states of the form $\ket{\psi}_\mc{Q}\otimes\ket{+}_v$, where the qubit $v$ must be in the state $\ket{+}_v$ while the ancillary system $\ket{\psi}_\mc{Q}$ may be in an arbitrary state, potentially entangled with other systems. Using these relations, our state can be written as \begin{align} \ket{\psi_{\mc{C}_{1D}}}=\lr{\prod_{k=1}^K H_\mc{Q} X_{\mc{Q},k}} \ket{+}_\mc{Q} \bigotimes_{k=1}^K\ket{0}_k. \end{align} We note that this representation of the 1D cluster state has been also used in Ref.~\cite{Lindner:2009eu}. We now proceed to the construction of the 2D cluster state. From its definition, the 2D cluster state can be obtained from the 1D cluster state above by introducing additional entanglement (via phase gates) between qubits $k$ and $k+N$~\cite{Rausssendorf:2001js}. This gives a 2D cluster state on a square lattice with shifted periodic boundary conditions, where the extent of the (shifted) periodic direction is set by $N$. \begin{align} \ket{\psi_{\mc{C}_{2D}}}&=\lr{\prod_{k=N+1}^{K}Z_{k,k-N}}\lr{\prod_{k=1}^K H_\mc{Q} X_{\mc{Q},k}} \ket{+}_\mc{Q} \bigotimes_{k}\ket{0}_k\nonumber\\ &=\lr{\prod_{k=1}^K H_\mc{Q} Z_{k,k-N}X_{\mc{Q},k}} \ket{+}_\mc{Q} \bigotimes_{k}\ket{0}_k. \end{align} In the second line we used the fact that $[Z_{j,j+N},X_{\mc{Q},k}]=0$ for $j>k$, and $Z_{k,j}\ket{0}_j=\mathbb{1}_k\otimes\mathbb{1}_{j}\ket{0}_j$. Now we make use of the identity \begin{align} Z_{n,m}X_{\mc{Q},n}\ket{\psi}_{\mc{Q},m}\ket{0}_n=X_{\mc{Q},n}Z_{\mc{Q},m}\ket{\psi}_{\mc{Q},m}\ket{0}_n, \end{align} where $\ket{\psi}_{\mc{Q},m}$ is an arbitrary joint state of $\mc{Q}$ and all qubits $m \neq n$, but not of qubit $n$. 
Again, this relation is not an operator identity, but a property of a state where qubit $n$ is in the separable state $\ket{0}_n$. Using this identity we arrive at \begin{align} \ket{\psi_{\mc{C}_{2D}}}=\lr{\prod_{k=1}^{K} H_\mc{Q} X_{\mc{Q},k} Z_{\mc{Q},k-N}}\ket{+}_\mc{Q}\bigotimes_k\ket{0}_k, \end{align} which is (up to a shift of the index and a rotation of $\mc{Q}$) exactly our protocol given in eq.~\eqref{eq6}. \section{Imperfections} \subsection{Phase gate fidelity due to finite bandwidth of photons.} Without shaping the wave packets of the emitted photons, each photon produced in a single step has a Lorentzian spectrum, with temporal profile \begin{align}\label{unshaped_photon} f(t)=\sqrt{\gamma_L}e^{-\gamma_L t/2}\Theta(t)=i \int \frac{d\omega}{2\pi}\,\frac{\sqrt{\gamma_L}}{\omega+i\gamma_L/2}e^{-i\omega t}, \end{align} where we chose a normalization $\int dt|f(t)|^2=1$. The scattering phase shift for the chiral forward scattering (if the atom is in state $\ket{g_2}$) is determined by the transmission \begin{align}\label{eqtransmission} t(\omega)=\frac{\omega-i\gamma_R/2}{\omega+i\gamma_R/2}, \end{align} such that the wave packet $f(t)$ transforms into \begin{align} \tilde f(t)&=i \int \frac{d\omega}{2\pi}\,\sqrt{\gamma_L}\,\frac{\omega-i\gamma_R/2}{\omega+i\gamma_R/2}\frac{1}{\omega+i\gamma_L/2}e^{-i\omega t}\\ &=-\sqrt{\gamma_L}\lr{\frac{\gamma_R+\gamma_L}{\gamma_R-\gamma_L}e^{-\gamma_L t/2}-2\frac{\gamma_R}{\gamma_{R}-\gamma_L}e^{-\gamma_R t/2}}\Theta(t). \end{align} It is straightforward to calculate the overlap \begin{align} \int dt f^\ast(t) \tilde f (t)=-\frac{1-\gamma_L/\gamma_R}{1+\gamma_{L}/\gamma_R}. \end{align} With this we obtain the fidelity \cite{Nielsen:2002ks} of the controlled phase gate: \begin{align} \mathcal{F}_Z=\frac{2 \frac{\gamma_L}{\gamma_{R}} (\frac{\gamma_L}{\gamma_{R}}+3)+5}{5 (\frac{\gamma_L}{\gamma_{R}}+1)^2}=1-\frac{4}{5}\frac{\gamma_L}{\gamma_R}+\mc{O}\lr{\frac{\gamma_L^2}{\gamma_R^2}}. 
\end{align} \subsubsection{Pulse shaping} If we shape the coupling via $\gamma_L(t)=\frac{4 \Omega^2(t)}{\gamma}$ then $f(t)= \sqrt{\gamma_L(t)} \exp\lr{-\int_0^t ds \gamma_L(s)/2} $. Pulse shaping allows one to create photon wave packets that are symmetric in time. A straightforward calculation shows that, for example, the Gaussian wave packet \begin{align} f(t)=\sqrt{\mc{B}/\sqrt{\pi}}e^{-\mc{B}^2 (t-t_0)^2/2} \end{align} can be obtained by \begin{align}\label{shaped_decay}\gamma_L(t)=\frac{2\mc{B} e^{-\mc{B}^2 (t-t_0)^2}}{\sqrt{\pi}(1-{\rm erf}(\mc{B} (t-t_0)))}. \end{align} The corresponding fidelity of the phase gate can be calculated from \begin{align} \int dt f^\ast(t) \tilde f (t)&=1-\frac{\sqrt{\pi } e^{\frac{1}{4 x^2}}\lr{1- \text{erf}\left(\frac{1}{2 x}\right)}}{x}\nonumber\\&=-1 + 4 x^2+\mc{O}(x^4) \end{align} with $x=\mc{B}/\gamma_R$, and the Gaussian error function $\textrm{erf}(z)=\frac{2}{\sqrt{\pi}}\int_0^z dt e^{-t^2}$. This gives the fidelity $\mc{F}_Z=1-\frac{8}{5}x^2+\mathcal{O}(x^4)$ \cite{Nielsen:2002ks}. We note that the linear order in $x$ vanishes in this expression, unlike in the previous case without shaping of the wave packet. This is a consequence of the temporally symmetric wave packet, which can significantly improve the fidelity. \subsection{Fidelity of the controlled not gate} In the proposed implementation to create the 2D cluster state, the gate $\hat X_{\mathcal{Q},k+N}$ is realized by emission of a photon associated with the transition $\ket{e_L} \rightarrow \ket{g_2}$ during the second half of each timestep with period $T$. In order for the gate $\hat X_{\mathcal{Q},k+N}$ to work, we thus require $T/2$ to be much larger than the temporal extent of the emitted photon; otherwise, the next step in our protocol would proceed even before the gate $\hat{X}_{\mathcal{Q},k+N}$ is completed, leading to an error. 
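As a numerical cross-check of the phase-gate analysis above (an illustrative numpy sketch; the values of $\gamma_L$ and $\gamma_R$ are arbitrary choices with $\gamma_L\ll\gamma_R$), the closed-form overlap $\int dt\, f^\ast(t)\tilde f(t)$ for the unshaped Lorentzian photon can be reproduced by direct quadrature of the time-domain wave packets:

```python
import numpy as np

gL, gR = 0.1, 1.0      # illustrative rates with gL << gR

t = np.linspace(0.0, 400.0, 400_001)   # both wave packets have decayed by t = 400
f = np.sqrt(gL) * np.exp(-gL * t / 2)
ft = -np.sqrt(gL) * ((gR + gL) / (gR - gL) * np.exp(-gL * t / 2)
                     - 2 * gR / (gR - gL) * np.exp(-gR * t / 2))

# trapezoidal rule for the overlap integral
g = f * ft
overlap = (t[1] - t[0]) * (g.sum() - 0.5 * (g[0] + g[-1]))

analytic = -(1 - gL / gR) / (1 + gL / gR)   # = -9/11 for these rates
print(abs(overlap - analytic) < 1e-6)       # True
```

The quadrature agrees with the closed form to well below the discretization error, confirming the overlap entering $\mathcal{F}_Z$.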
This $\hat X$-gate fidelity can be computed from the quantity \begin{align} \epsilon=\int_{t_0}^{t_0+T/2} dt \gamma_L(t) \exp\lr{-\int_0^t ds \gamma_L(s)} \end{align} via $\mc{F}_X=1-\frac{2}{3}\epsilon+\frac{1}{6}\epsilon^2$. Without shaping the photon wave packet, i.e., with $\gamma_L(t)=\gamma_L$ as in eq.~\eqref{unshaped_photon}, one gets $\epsilon=e^{-\gamma_L T/2}$, while for the pulse-shaped photon \eqref{shaped_decay} (with $t_0=-T/4$) we find $\epsilon=1-\sqrt{2}\frac{\textrm{erf}(\mc{B} T/4)}{\sqrt{1+\textrm{erf}(\mc{B} T/4)}}$. For large $x=\mc{B} T/4 \gg1$ we have $\epsilon\rightarrow e^{-x^2}/(\sqrt{\pi}x)$. In both cases the gate fidelity approaches 1 exponentially, but for a shaped photon wave packet this approach is again faster. \end{document}
\begin{document} \begin{titlepage} \begin{center} \vspace*{30mm} \textsc{\LARGE -- Diploma Thesis --} \hrule height 2pt {\huge\bfseries \parbox[0pt][1.3em][c]{0cm}{}Representation of Conditional \parbox[0pt][1.3em][c]{0cm}{}Expectations in Gaussian Analysis on \parbox[0pt][1.3em][c]{0cm}{}Sequence Spaces} \hrule height 2pt {\large Handed in: March 31, 2011\\[2mm] Last modification: \today } \begin{minipage}{0.3\textwidth} \begin{flushleft} \large \emph{Author:}\\ \textsc{Felix Riemann} \end{flushleft} \end{minipage} \begin{minipage}{0.493\textwidth} \begin{flushright} \large \emph{Advisor:}\\ \textsc{Prof. Dr. Martin Grothaus} \end{flushright} \end{minipage} \begin{minipage}{0.8\textwidth} {\large\textsc{TU Kaiserslautern, Department of Mathematics}}\linebreak {\large\textsc{Functional Analysis and Stochastic Analysis Group}} \end{minipage} \end{center} \end{titlepage} \tableofcontents \chapter*{Introduction} \addcontentsline{toc}{chapter}{Introduction} In infinite-dimensional analysis, the concept of Gaussian analysis is used to establish a Gaussian measure on a linear space of infinite dimension. While there does not exist a Lebesgue measure (i.e. a translation-invariant and locally finite measure which is non-trivial) in infinite dimensions, Gaussian measures can still be defined in this setting. Since Gaussian systems often occur in stochastic analysis, this theory becomes useful in solving stochastic partial differential equations. Furthermore, the area of White Noise Analysis may be used to approach Feynman path integrals in quantum physics. For a complete nuclear space $\mathcal{N}$ which is densely and continuously embedded in a separable real Hilbert space $\big(\mathcal{H},(\cdot,\cdot)_\mathcal{H}\big)$, a Gaussian measure can be constructed on $\mathcal{N}'$, the topological dual space of $\mathcal{N}$. 
By identifying $\mathcal{H}$ with its topological dual space $\mathcal{H}'$, we obtain a so-called Gel'fand triple $\mathcal{N}\subset\mathcal{H}=\mathcal{H}'\subset\mathcal{N}'$. The Bochner-Minlos theorem states that if we equip $\mathcal{N}'$ with its cylindrical $\sigma$-algebra, for a characteristic function $C:\mathcal{N}\to\mathbb{C}$ there exists a unique measure $\mu$ on $\mathcal{N}'$ such that for all $\eta\in\mathcal{N}$ one has \[ \int_{\mathcal{N}'}\exp(i\langle\eta,\omega\rangle)d\mu(\omega)=C(\eta), \] where $\langle\cdot,\cdot\rangle$ denotes the canonical dual pairing between $\mathcal{N}$ and $\mathcal{N}'$. The characteristic function $C$ is usually chosen to be $\eta\mapsto\exp\left(-\frac{1}{2}(\eta,\eta)_\mathcal{H}\right)$ in order to obtain a standard Gaussian measure. However, our goal is to insert a covariance operator $A$ in this measure. Motivated by applications (see Section~\ref{appsect}) we will build our Gaussian space around the sequence space $\ell^2(\mathcal{H})$. For this purpose we will characterize $\ell^2(\mathcal{H})$ and show that an operator $A\in L(\ell^2(\mathbb{R}))$ can naturally be extended to an element of $L(\ell^2(\mathcal{H}))$. Such operators will be used as a special type of correlation operators which allow a more explicit representation of certain conditional expectations. In the second chapter the correlated Gaussian measure will be constructed. We will define $s(\mathcal{N})$, a dense nuclear subspace of $\ell^2(\mathcal{H})$, and establish the measure $\mu_A$ on its topological dual space $s'(\mathcal{N})$ by means of the Bochner-Minlos theorem, using the characteristic function \[ s(\mathcal{N})\ni\varphi\longmapsto\exp\left(-\frac{1}{2}(\varphi,A\varphi)_{\ell^2(\mathcal{H})}\right)\in\mathbb{R}, \] where the covariance operator $A$ is a self-adjoint and positive definite operator in $L(\ell^2(\mathcal{H}))$. 
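In finite dimensions the role of the characteristic function can be illustrated concretely (an illustrative numpy sketch added here; the dimension $d$, the operator $A$ and the vector $\eta$ are arbitrary choices): for a self-adjoint, positive definite $A$ and a Gaussian vector $\omega\sim\mathcal{N}(0,A)$, the expectation of $\exp(i\langle\eta,\omega\rangle)$ reproduces $\exp\left(-\frac{1}{2}(\eta,A\eta)\right)$, which is the finite-dimensional analogue of the identity provided by the Bochner-Minlos theorem.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4                                   # finite-dimensional stand-in

# A self-adjoint, positive definite "covariance operator"
B = rng.normal(size=(d, d)) / np.sqrt(d)
A = B @ B.T + np.eye(d)

eta = 0.5 * rng.normal(size=d)

# Monte Carlo estimate of the characteristic function at eta
omega = rng.multivariate_normal(np.zeros(d), A, size=2_000_000)
mc = np.exp(1j * omega @ eta).mean()

exact = np.exp(-0.5 * eta @ A @ eta)
print(abs(mc - exact) < 5e-3)  # True (up to Monte Carlo error)
```

In the infinite-dimensional setting the sample average is replaced by integration against the measure $\mu_A$ on $s'(\mathcal{N})$.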
We will rederive the orthogonal decomposition into Wick polynomials in $L^2(\mu_A)$, called chaos decomposition, and see that it slightly differs from the usual decomposition, as the kernels are going to be elements from $\ell_A^2(\mathcal{H})$, the completion of $\ell^2(\mathcal{H})$ with respect to the inner product generated by $A$. Finally, in the third chapter, a representation for the conditional expectation of arbitrary random variables in $L^2(\mu_A)$ will be given, where we condition on a $\sigma$-algebra generated by monomials. Later on we will exploit the structure of the sequence space to obtain a representation of conditional expectations of monomials conditioned on countably many monomials and give an application of the results. Throughout the whole thesis, the Gaussian spaces will only be considered over the field $\mathbb{R}$, even though Gaussian analysis usually deals with the complexification of these spaces. However, the results here may be generalized to cover the complex case. Furthermore, to simplify the proofs, we will consider the Hilbert space $\mathcal{H}$ to satisfy $\dim\mathcal{H}=\infty$, even though the proofs for finite-dimensional $\mathcal{H}$ work similarly. \begin{chapter}{Square Summable Sequences} Throughout all chapters, $\big(\mathcal{H},(\cdot,\cdot)_\mathcal{H}\big)$ will be a real separable Hilbert space with $\dim \mathcal{H}=\infty$. By $(e_k)_{k\in\mathbb{N}}$ we will denote the unit vectors in $\ell^2(\mathbb{R})$, i.e. $e_k=(\delta_{k,l})_{l\in\mathbb{N}}\in\ell^2(\mathbb{R})$ for $k\in\mathbb{N}$. \begin{section}{Characterization} \begin{definition} We denote the Hilbert space of square summable sequences in $\mathcal{H}$ by \[ \ell^2(\mathcal{H}):=\left\{f\in \mathcal{H}^\mathbb{N}:\sum_{k=1}^\infty\|f_k\|_\mathcal{H}^2<\infty\right\} \] together with its inner product \[ \ell^2(\mathcal{H})\times\ell^2(\mathcal{H})\ni(f,g)\longmapsto(f,g)_{\ell^2(\mathcal{H})}:=\sum_{k=1}^\infty(f_k,g_k)_\mathcal{H}\in\mathbb{R}. 
\] As usual, we denote the norm on $\ell^2(\mathcal{H})$ by $\|\cdot\|_{\ell^2(\mathcal{H})}:=\sqrt{(\cdot,\cdot)_{\ell^2(\mathcal{H})}}$. \end{definition} \begin{lemma} The mappings \[ \mathcal{H}\times\ell^2(\mathbb{R})\ni (h,x)\longmapsto h\bullet x:=(hx_k)_{k\in\mathbb{N}}\in\ell^2(\mathcal{H})\quad\text{and} \] \[ \ell^2(\mathcal{H})\times\ell^2(\mathbb{R})\ni (f,x)\longmapsto [f,x]:=\sum_{k=1}^\infty x_kf_k\in\mathcal{H} \] are bilinear and satisfy \begin{enumerate} \item $\|\cdot\bullet x\|_{L(\mathcal{H},\ell^2(\mathcal{H}))}=\|x\|_{\ell^2(\mathbb{R})}$, \item $\|h\bullet\cdot\|_{L(\ell^2(\mathbb{R}),\ell^2(\mathcal{H}))}=\|h\|_{\mathcal{H}}$, \item $\|[\cdot,x]\|_{L(\ell^2(\mathcal{H}),\mathcal{H})}=\|x\|_{\ell^2(\mathbb{R})}$ and \item $\|[f,\cdot]\|_{L(\ell^2(\mathbb{R}),\mathcal{H})}\le\|f\|_{\ell^2(\mathcal{H})}$ \end{enumerate} for all $x\in\ell^2(\mathbb{R})$, $h\in\mathcal{H}$ and $f\in\ell^2(\mathcal{H})$. \begin{proof} The bilinearity of both mappings is clear. For $x\in\ell^2(\mathbb{R})$ and $h\in\mathcal{H}$ we have \[ \|h\bullet x\|_{\ell^2(\mathcal{H})}^2=\sum_{k=1}^\infty x_k^2\|h\|_\mathcal{H}^2=\|x\|_{\ell^2(\mathbb{R})}^2\|h\|_\mathcal{H}^2, \] so $\cdot\bullet\cdot$ is well-defined and both \textit{(i)} and \textit{(ii)} are proven. For $f\in\ell^2(\mathcal{H})$, by the triangle inequality for $\|\cdot\|_\mathcal{H}$ and the Cauchy-Bunyakovsky-Schwarz inequality, we obtain \begin{equation}\label{somelabel0} \left\|\sum_{k=m}^nx_kf_k\right\|_\mathcal{H}\le\sum_{k=m}^n|x_k|\|f_k\|_\mathcal{H}\le\left(\sum_{k=m}^n|x_k|^2\right)^\frac{1}{2}\left(\sum_{k=m}^n\|f_k\|_\mathcal{H}^2\right)^\frac{1}{2} \end{equation} for all $n,m\in\mathbb{N}$. This yields that $\big(\sum_{k=1}^n x_kf_k\big)_{n\in\mathbb{N}}$ is a Cauchy sequence in $\mathcal{H}$, so $[\cdot,\cdot]$ is well-defined. 
By setting $m=1$ in \eqref{somelabel0} and taking the limit $n\to\infty$ we obtain the estimate $\|[f,x]\|_\mathcal{H}\le\|f\|_{\ell^2(\mathcal{H})}\|x\|_{\ell^2(\mathbb{R})}$, which proves \textit{(iv)} and shows $\|[\cdot,x]\|_{L(\ell^2(\mathcal{H}),\mathcal{H})}\le\|x\|_{\ell^2(\mathbb{R})}$. For the proof of \textit{(iii)} it remains to show $\|[\cdot,x]\|_{L(\ell^2(\mathcal{H}),\mathcal{H})}=\|x\|_{\ell^2(\mathbb{R})}$. We compute \[ \big\|[h\bullet x,x]\big\|_\mathcal{H}=\left\|\sum_{k=1}^\infty hx_k^2\right\|_\mathcal{H}=\left|\sum_{k=1}^\infty x_k^2\right|\|h\|_\mathcal{H}=\|x\|_{\ell^2(\mathbb{R})}^2\|h\|_\mathcal{H}=\|x\|_{\ell^2(\mathbb{R})}\|h\bullet x\|_{\ell^2(\mathcal{H})}. \] This also shows that the inequality in \textit{(iv)} is sharp. If $f=(f_1,f_2,0,0,\dots)\in\ell^2(\mathcal{H})$ for some orthonormal $f_1,f_2\in\mathcal{H}$, then $\|f\|_{\ell^2(\mathcal{H})}^2=2$ and for all $x\in\ell^2(\mathbb{R})$ with $x\not=0$ we have \[ \big\|[f,x]\big\|_\mathcal{H}^2=\|x_1f_1+x_2f_2\|_\mathcal{H}^2=x_1^2+x_2^2\le\|x\|_{\ell^2(\mathbb{R})}^2<\|x\|_{\ell^2(\mathbb{R})}^2\|f\|_{\ell^2(\mathcal{H})}^2, \] so the inequality in \textit{(iv)} may even be strict. \end{proof} \end{lemma} \begin{proposition}\label{rechenregeln} Let $f\in\ell^2(\mathcal{H})$, $h,g\in\mathcal{H}$ and $x,y\in\ell^2(\mathbb{R})$. We have the following identities: \begin{enumerate} \item $(h\bullet x,g\bullet y)_{\ell^2(\mathcal{H})}=(h,g)_\mathcal{H}(x,y)_{\ell^2(\mathbb{R})}$. \item $(f,h\bullet x)_{\ell^2(\mathcal{H})}=\big([f,x],h\big)_\mathcal{H}$. \item $[h\bullet x,y]=(x,y)_{\ell^2(\mathbb{R})}h$. \end{enumerate} \begin{proof} This is done by the following straightforward computations: \begin{enumerate} \item $(h\bullet x,g\bullet y)_{\ell^2(\mathcal{H})}=\sum_{k=1}^\infty(hx_k,gy_k)_\mathcal{H}=\sum_{k=1}^\infty x_ky_k(h,g)_\mathcal{H}=(h,g)_\mathcal{H}(x,y)_{\ell^2(\mathbb{R})}$. 
\item $(f,h\bullet x)_{\ell^2(\mathcal{H})}=\sum_{k=1}^\infty(f_k,hx_k)_\mathcal{H}=\big(\sum_{k=1}^\infty x_kf_k,h\big)_\mathcal{H}=\big([f,x],h\big)_\mathcal{H}$. \item $[h\bullet x,y]=\sum_{k=1}^\infty hx_ky_k=(x,y)_{\ell^2(\mathbb{R})}h$. \end{enumerate} \end{proof} \end{proposition} \begin{remark} For the sake of notational simplicity, we will often omit the indexes of the norms, i.e. we will write $\|\cdot\|$ for $\|\cdot\|_{\ell^2(\mathbb{R})}$, $\|\cdot\|_\mathcal{H}$, $\|\cdot\|_{\ell^2(\mathcal{H})}$ et cetera, since there is no risk of confusion. Analogously we will deal with inner products. \end{remark} \begin{proposition}\label{finseqdense} Let $(h_i)_{i\in\mathbb{N}}$ be an orthonormal basis of $\mathcal{H}$. Then $\{h_i\bullet e_k:i,k\in\mathbb{N}\}$ is an orthonormal basis of $\ell^2(\mathcal{H})$. \begin{proof} For $i,i',k,k'\in\mathbb{N}$ we clearly have \[ (h_i\bullet e_k,h_{i'}\bullet e_{k'})=(h_i,h_{i'})(e_k,e_{k'})=\delta_{i,i'}\delta_{k,k'}=\delta_{(i,k),(i',k')} \] by Proposition~\ref{rechenregeln}. Now let $f\in\ell^2(\mathcal{H})$ and $\varepsilon>0$ be arbitrary. Choose $N\in\mathbb{N}$ such that $\sum_{k=N+1}^\infty\|f_k\|^2<\varepsilon/2$. For each $k=1,\dots,N$ there exists $g_k\in\spann\{h_i:i\in\mathbb{N}\}$ with $\|f_k-g_k\|^2<\varepsilon/2N$. Then for $g:=\sum_{k=1}^Ng_k\bullet e_k\in\spann\{h_i\bullet e_k:i,k\in\mathbb{N}\}$ we have \[ \|f-g\|^2=\sum_{k=1}^\infty\|f_k-g_k\|^2=\sum_{k=1}^N\|f_k-g_k\|^2+\sum_{k=N+1}^\infty\|f_k\|^2<\varepsilon. \] \end{proof} \end{proposition} \begin{remark} While it is clear that for $f\in\ell^2(\mathcal{H})$ it holds $f=\sum_{k=1}^\infty f_k\bullet e_k$, it may not be too obvious that such an identity also exists for an arbitrary orthonormal basis $(b_k)_{k\in\mathbb{N}}$ of $\ell^2(\mathbb{R})$, i.e. that there exists a sequence $(f_k^b)_{k\in\mathbb{N}}$ in $\mathcal{H}$ such that $f=\sum_{k=1}^\infty f_k^b\bullet b_k$. 
The rest of this section will deal with the proof that this identity is valid for $f_k^b:=[f,b_k]$. \end{remark} \begin{definition} For an orthonormal basis $b=(b_k)_{k\in\mathbb{N}}$ in $\ell^2(\mathbb{R})$ and $N\in\mathbb{N}$ we define \[ \ell^2(\mathcal{H})_b^{(N)}:=\spann\left\{h\bullet b_k:h\in\mathcal{H},k=1,\dots,N\right\}\subset\ell^2(\mathcal{H}). \] \end{definition} In the following, $b=(b_k)_{k\in\mathbb{N}}$ will always be an arbitrary orthonormal basis of $\ell^2(\mathbb{R})$, if not stated otherwise. \begin{proposition} For $N\in\mathbb{N}$ we have \[ \ell^2(\mathcal{H})_b^{(N)}=\left\{\sum_{k=1}^Nh_k\bullet b_k:h_1,\dots,h_N\in\mathcal{H}\right\}. \] \begin{proof} If $f\in\ell^2(\mathcal{H})_b^{(N)}$ then there exist $m\in\mathbb{N}$ and $\alpha_i\in\mathbb{R}$, $g_i\in\mathcal{H}$ and $k_i\in\{1,\dots,N\}$ for $i=1,\dots,m$ such that \[ f=\sum_{i=1}^m\alpha_ig_i\bullet b_{k_i}=\sum_{k=1}^N\Bigg(\sum_{\substack{i=1,\dots,m:\\k_i=k}}\alpha_ig_i\Bigg)\bullet b_k=\sum_{k=1}^Nh_k\bullet b_k,\quad\text{where }h_k:=\sum_{\substack{i=1,\dots,m:\\k_i=k}}\alpha_ig_i\in\mathcal{H}. \] \end{proof} \end{proposition} \begin{corollary}\label{finseqclosed} The space $\ell^2(\mathcal{H})_b^{(N)}$ is a closed subspace of $\ell^2(\mathcal{H})$ for all $N\in\mathbb{N}$. \begin{proof} For a Cauchy sequence $(f^{(n)})_{n\in\mathbb{N}}$ in $\ell^2(\mathcal{H})_b^{(N)}$, as above there exist $f_1^{(n)},\dots,f_N^{(n)}\in\mathcal{H}$ such that we have $f^{(n)}=\sum_{k=1}^Nf_k^{(n)}\bullet b_k$ for $n\in\mathbb{N}$. Hence \[ \big\|f^{(n)}-f^{(m)}\big\|^2=\left\|\sum_{k=1}^N\big(f_k^{(n)}-f_k^{(m)}\big)\bullet b_k\right\|^2=\sum_{k=1}^N\big\|f_k^{(n)}-f_k^{(m)}\big\|^2\quad\text{for }n,m\in\mathbb{N}, \] which yields that for all $k=1,\dots,N$ the sequence $\big(f_k^{(n)}\big)_{n\in\mathbb{N}}$ is a Cauchy sequence in $\mathcal{H}$ with some limit $f_k\in\mathcal{H}$. 
Then $f:=\sum_{k=1}^Nf_k\bullet b_k\in\ell^2(\mathcal{H})_b^{(N)}$ is the limit of $(f^{(n)})_{n\in\mathbb{N}}$, since \[ \lim_{n\to\infty}\big\|f-f^{(n)}\big\|^2=\lim_{n\to\infty}\sum_{k=1}^N\big\|f_k-f_k^{(n)}\big\|^2=0. \] \end{proof} \end{corollary} \begin{lemma}\label{someorthorelation} Let $f\in\ell^2(\mathcal{H})$ and $N\in\mathbb{N}$. Then $f-\sum_{k=1}^N[f,b_k]\bullet b_k\in\ell^2(\mathcal{H})_b^{{(N)}\perp}$. \begin{proof} Let $h_1,\dots,h_N\in\mathcal{H}$ be arbitrary. By Proposition~\ref{rechenregeln} we have \begin{align*} \left(f-\sum_{k=1}^N[f,b_k]\bullet b_k,\sum_{k=1}^Nh_k\bullet b_k\right) & = \left(f,\sum_{k=1}^Nh_k\bullet b_k\right)-\left(\sum_{k=1}^N[f,b_k]\bullet b_k,\sum_{k=1}^Nh_k\bullet b_k\right)\\ & = \sum_{k=1}^N(f,h_k\bullet b_k)-\sum_{k=1}^N\big([f,b_k],h_k\big)\\ & = \sum_{k=1}^N\big([f,b_k],h_k\big)-\sum_{k=1}^N\big([f,b_k],h_k\big)\\ & = 0. \end{align*} \end{proof} \end{lemma} \begin{lemma} The subspace \[ \ell^2(\mathcal{H})_b^{\infty}:=\bigcup_{N\in\mathbb{N}}\ell^2(\mathcal{H})_b^{(N)} \] is dense in $\ell^2(\mathcal{H})$. \begin{proof} Since the subspace of finite sequences is dense in $\ell^2(\mathcal{H})$, it suffices to approximate $h\bullet e_s$ for some given $h\in\mathcal{H}$ and $s\in\mathbb{N}$. We may assume $h\not=0$. Since $(b_k)_{k\in\mathbb{N}}$ is an orthonormal basis of $\ell^2(\mathbb{R})$ we have $e_s=\sum_{k=1}^\infty(e_s,b_k)b_k$, hence for $\varepsilon>0$ there exists $N\in\mathbb{N}$ such that $\big\|e_s-\sum_{k=1}^N(e_s,b_k)b_k\big\|<\varepsilon\|h\|^{-1}$. We compute \[ \left\|h\bullet e_s-\sum_{k=1}^N(e_s,b_k)h\bullet b_k\right\|=\left\|h\bullet e_s-h\bullet\sum_{k=1}^N(e_s,b_k) b_k\right\|=\|h\|\cdot\left\|e_s-\sum_{k=1}^N(e_s,b_k) b_k\right\|<\varepsilon. \] \end{proof} \end{lemma} \begin{remark}[Theorem of best approximation] Let $H$ be a pre-Hilbert space and $G$ be a complete subspace of $H$. For a fixed $h\in H$ there exists a unique $g\in G$ such that $\|h-g\|=\dist(h,G):=\inf_{u\in G}\|h-u\|$. 
Furthermore $g$ is characterized by $h-g\in G^\perp$. \end{remark} \begin{theorem} For $f\in\ell^2(\mathcal{H})$ we have the identity \[ f=\sum_{k=1}^\infty[f,b_k]\bullet b_k. \] In particular $\|f\|^2=\sum_{k=1}^\infty\|[f,b_k]\|^2$. \begin{proof} By Lemma~\ref{someorthorelation} for all $f\in\ell^2(\mathcal{H})$ and $N\in\mathbb{N}$ it holds \[ f-\sum_{k=1}^N[f,b_k]\bullet b_k\in\ell^2(\mathcal{H})_b^{{(N)}\perp}, \] and since $\ell^2(\mathcal{H})_b^{(N)}$ is closed by Corollary~\ref{finseqclosed}, the theorem of best approximation yields \[ \left\|f-\sum_{k=1}^N[f,b_k]\bullet b_k\right\|\le\|f-g\|\quad\text{for all }g\in\ell^2(\mathcal{H})_b^{(N)}. \] To a given $\varepsilon>0$, we choose $N_0\in\mathbb{N}$ and $g\in\ell^2(\mathcal{H})_b^{(N_0)}$ with $\|f-g\|<\varepsilon$. Note that $g\in\ell^2(\mathcal{H})_b^{(N)}$ for all $N\ge N_0$. We obtain \[ \left\|f-\sum_{k=1}^N[f,b_k]\bullet b_k\right\|\le\|f-g\|<\varepsilon\quad\text{for all }N\ge N_0, \] i.e. $f=\lim_{N\to\infty}\sum_{k=1}^N[f,b_k]\bullet b_k$. This also yields \[ \|f\|^2=\lim_{N\to\infty}\left\|\sum_{k=1}^N[f,b_k]\bullet b_k\right\|^2=\lim_{N\to\infty}\sum_{k=1}^N\big\|[f,b_k]\big\|^2. \] \end{proof} \end{theorem} Using this theorem, we can easily generalize Proposition~\ref{finseqdense}: \begin{corollary} If $(h_i)_{i\in\mathbb{N}}$ is an orthonormal basis of $\mathcal{H}$, then $\{h_i\bullet b_k:i,k\in\mathbb{N}\}$ is an orthonormal basis of $\ell^2(\mathcal{H})$. \end{corollary} \end{section} \begin{section}{Operators with Matrix Representation} \begin{theorem}\label{matrixextension} Let $A\in L(\ell^2(\mathbb{R}))$. Then by abuse of notation we define \begin{equation}\label{extensionformula} \ell^2(\mathcal{H})\ni f\longmapsto Af:=\sum_{k=1}^\infty[f,b_k]\bullet Ab_k\in\ell^2(\mathcal{H}) \end{equation} and $A$ becomes a bounded linear operator on $\ell^2(\mathcal{H})$ with $\|A\|_{L(\ell^2(\mathcal{H}))}=\|A\|_{L(\ell^2(\mathbb{R}))}$. 
This definition does not depend on the particular choice of the orthonormal basis $b=(b_k)_{k\in\mathbb{N}}$. \begin{proof} We first show that $A$ is a bounded linear operator on the dense subspace $\ell^2(\mathcal{H})_b^\infty$ and uniquely extend it to an element of $L(\ell^2(\mathcal{H}))$. To this end, let $(h_i)_{i\in\mathbb{N}}$ be an orthonormal basis of $\mathcal{H}$ and $N\in\mathbb{N}$. For $f\in\ell^2(\mathcal{H})_b^{(N)}$, $k=1,\dots,N$ and $i\in\mathbb{N}$ we denote \[ f_{ki}:=\big([f,b_k],h_i\big)_\mathcal{H}\in\mathbb{R}\quad\text{and}\quad x_i^N:=\sum_{k=1}^Nf_{ki}b_k\in\ell^2(\mathbb{R}). \] Since $(h_i)_{i\in\mathbb{N}}$ is an orthonormal basis of $\mathcal{H}$ we have \begin{equation}\label{fkifliequation} \big([f,b_k],[f,b_l]\big)=\sum_{i=1}^\infty\big([f,b_k],h_i\big)\big(h_i,[f,b_l]\big)=\sum_{i=1}^\infty f_{ki}f_{li} \end{equation} and thus in particular \begin{equation}\label{xinequation} \sum_{i=1}^\infty\left\|x_i^N\right\|^2=\sum_{i=1}^\infty\sum_{k=1}^Nf_{ki}^2=\sum_{k=1}^N\sum_{i=1}^\infty f_{ki}^2\stackrel{\eqref{fkifliequation}}{=}\sum_{k=1}^N\big\|[f,b_k]\big\|^2=\|f\|^2. \end{equation} Using these equations we obtain \begin{align*} \left\|\sum_{k=1}^N[f,b_k]\bullet Ab_k\right\|^2 & = \sum_{k,l=1}^N\big([f,b_k],[f,b_l]\big)(Ab_k,Ab_l)\\ & \stackrel{\eqref{fkifliequation}}{=} \sum_{k,l=1}^N\sum_{i=1}^\infty f_{ki}f_{li}(Ab_k,Ab_l)\\ & = \sum_{i=1}^\infty \left(Ax_i^N,Ax_i^N\right)\\ & \le \|A\|_{L(\ell^2(\mathbb{R}))}^2\sum_{i=1}^\infty\left\|x_i^N\right\|^2\\ & \stackrel{\eqref{xinequation}}{=} \|A\|_{L(\ell^2(\mathbb{R}))}^2\|f\|^2, \end{align*} which gives rise to $A\in L(\ell^2(\mathcal{H}))$ with $\|A\|_{L(\ell^2(\mathcal{H}))}\le\|A\|_{L(\ell^2(\mathbb{R}))}$. 
For an arbitrary $f\in\ell^2(\mathcal{H})$ we then have \[ Af=A\left(\lim_{N\to\infty}\sum_{k=1}^N[f,b_k]\bullet b_k\right)=\lim_{N\to\infty}A\sum_{k=1}^N[f,b_k]\bullet b_k=\lim_{N\to\infty}\sum_{k=1}^N[f,b_k]\bullet Ab_k, \] so our definition in \eqref{extensionformula} makes sense. In order to show equality for the operator norms, we note that for $h\in\mathcal{H}$ and $x\in\ell^2(\mathcal{H})$ by Proposition~\ref{rechenregeln} and continuity of $h\bullet\cdot$ it holds \begin{equation}\label{Ahx=hAx} A(h\bullet x)=\sum_{k=1}^\infty[h\bullet x,b_k] \bullet Ab_k=\sum_{k=1}^\infty(x,b_k)h \bullet Ab_k=h\bullet A\left(\sum_{k=1}^\infty(x,b_k)b_k\right)=h\bullet Ax. \end{equation} Hence to a given $\varepsilon>0$ we choose $x\in\ell^2(\mathbb{R})$ with $\|Ax\|>(\|A\|_{L(\ell^2(\mathbb{R}))}-\varepsilon)\|x\|$ to obtain \[ \|A(h\bullet x)\|=\|h\bullet Ax\|=\|h\|\cdot\|Ax\|\ge\|h\|(\|A\|_{L(\ell^2(\mathbb{R}))}-\varepsilon)\|x\|=(\|A\|_{L(\ell^2(\mathbb{R}))}-\varepsilon)\|h\bullet x\|. \] This gives $\|A\|_{L(\ell^2(\mathcal{H}))}=\|A\|_{L(\ell^2(\mathbb{R}))}$. Finally, for an arbitrary orthonormal basis $(\beta_k)_{k\in\mathbb{N}}$ of $\ell^2(\mathbb{R})$ and $f\in\ell^2(\mathcal{H})$, by continuity of $A$ and Equation~\eqref{Ahx=hAx} we have \[ Af=A\left(\sum_{k=1}^\infty[f,\beta_k]\bullet\beta_k\right)=\sum_{k=1}^\infty A\big([f,\beta_k]\bullet\beta_k\big)=\sum_{k=1}^\infty[f,\beta_k]\bullet A\beta_k. \] \end{proof} \end{theorem} \begin{remark} If $\lim_{n\to\infty} x_n=x$ and $\lim_{n\to\infty} y_n=y$ in $\mathcal{H}$, then $\lim_{n\to\infty}(x_n,y_n)=(x,y)$. \begin{proof} By the triangle inequality for the modulus and the Cauchy-Bunyakovsky-Schwarz inequality for $(\cdot,\cdot)$ we have \[ |(x,y)-(x_n,y_n)|\le |(x,y)-(x_n,y)|+|(x_n,y)-(x_n,y_n)|\le \|x-x_n\|\cdot\|y\|+\|x_n\|\cdot\|y-y_n\|, \] where the right hand side converges to zero since $(x_n)_{n\in\mathbb{N}}$ is bounded. 
\end{proof} \end{remark} \begin{lemma} If $A\in L(\ell^2(\mathbb{R}))$ is (self-adjoint/positive definite), then $A\in L(\ell^2(\mathcal{H}))$ is (self-adjoint/positive definite). \begin{proof} Let $f,g\in\ell^2(\mathcal{H})$. If $A\in L(\ell^2(\mathbb{R}))$ is self-adjoint, then by the remark above we have \begin{align*} (f,Ag)_{\ell^2(\mathcal{H})} & = \lim_{N\to\infty}\left(\sum_{k=1}^N[f,b_k]\bullet b_k,\sum_{l=1}^N[g,b_l]\bullet Ab_l\right)_{\ell^2(\mathcal{H})}\\ & = \lim_{N\to\infty}\sum_{k,l=1}^N\big([f,b_k],[g,b_l]\big)_\mathcal{H}(b_k,Ab_l)_{\ell^2(\mathbb{R})}\\ & = \lim_{N\to\infty}\sum_{k,l=1}^N\big([f,b_k],[g,b_l]\big)_\mathcal{H}(Ab_k,b_l)_{\ell^2(\mathbb{R})}\\ & = \lim_{N\to\infty}\left(\sum_{k=1}^N[f,b_k]\bullet Ab_k,\sum_{l=1}^N[g,b_l]\bullet b_l\right)_{\ell^2(\mathcal{H})}\\ & = (Af,g)_{\ell^2(\mathcal{H})}. \end{align*} Now assume $A\in L(\ell^2(\mathbb{R}))$ to be positive definite and let $(h_i)_{i\in\mathbb{N}}$ be an orthonormal basis of $\mathcal{H}$. We abbreviate $f_{ki}:=\big([f,b_k],h_i\big)_\mathcal{H}\in\mathbb{R}$ for $k,i\in\mathbb{N}$ and set $x_i^N:=\sum_{k=1}^Nf_{ki}b_k\in\ell^2(\mathbb{R})$ for $N\in\mathbb{N}$ as in the proof of Theorem~\ref{matrixextension}. For $i\in\mathbb{N}$ we then have \[ \sum_{k=1}^\infty f_{ki}^2=\sum_{k=1}^\infty\big([f,b_k],h_i\big)^2\le\sum_{k=1}^\infty\big\|[f,b_k]\big\|^2\|h_i\|^2=\sum_{k=1}^\infty\big\|[f,b_k]\big\|^2=\|f\|^2<\infty, \] hence we can define \[ x_i:=\lim_{N\to\infty}x_i^N=\sum_{k=1}^\infty f_{ki}b_k\in\ell^2(\mathbb{R}) \] and obtain $(x_i,Ax_i)_{\ell^2(\mathbb{R})}\ge 0$ by assumption. 
Together with Fatou's Lemma we compute \begin{align*} (f,Af)_{\ell^2(\mathcal{H})} & = \lim_{N\to\infty}\left(\sum_{k=1}^N[f,b_k]\bullet b_k,\sum_{l=1}^N[f,b_l]\bullet Ab_l\right)_{\ell^2(\mathcal{H})}\\ & = \lim_{N\to\infty}\sum_{k,l=1}^N\big([f,b_k],[f,b_l]\big)_\mathcal{H}(b_k,Ab_l)_{\ell^2(\mathbb{R})}\\ & \stackrel{\eqref{fkifliequation}}{=} \lim_{N\to\infty}\sum_{k,l=1}^N\sum_{i=1}^\infty f_{ki}f_{li}(b_k,Ab_l)_{\ell^2(\mathbb{R})}\\ & = \lim_{N\to\infty}\sum_{i=1}^\infty\big(x_i^N,Ax_i^N\big)_{\ell^2(\mathbb{R})}\\ & \ge \sum_{i=1}^\infty\lim_{N\to\infty}\big(x_i^N,Ax_i^N\big)_{\ell^2(\mathbb{R})}\\ & = \sum_{i=1}^\infty\underbrace{\big(x_i,Ax_i\big)_{\ell^2(\mathbb{R})}}_{\ge 0}\\ & \ge 0. \end{align*} If $(f,Af)_{\ell^2(\mathcal{H})}=0$, then $(x_i,Ax_i)_{\ell^2(\mathbb{R})}=0$ for all $i\in\mathbb{N}$ and therefore $f_{ki}=0$ for all $i,k\in\mathbb{N}$, hence $[f,b_k]=0$ for all $k\in\mathbb{N}$ which yields $f=0$. \end{proof} \end{lemma} \begin{corollary} If $A\in L(\ell^2(\mathbb{R}))$ is self-adjoint and positive definite, then \[ \ell^2(\mathcal{H})\times\ell^2(\mathcal{H})\ni(f,g)\longmapsto(f,g)_A:=(f,Ag)_{\ell^2(\mathcal{H})}\in\mathbb{R} \] defines an inner product on $\ell^2(\mathcal{H})$ with corresponding norm $\|\cdot\|_A:=\sqrt{(\cdot,\cdot)_A}$ for which we have $\|\cdot\|_A\le\|A\|^\frac{1}{2}\|\cdot\|$. \end{corollary} \begin{corollary}\label{miniregel} For $f\in\ell^2(\mathcal{H})$, $g,h\in\mathcal{H}$, $x,y\in\ell^2(\mathbb{R})$ and a self-adjoint and positive definite operator $A\in L(\ell^2(\mathbb{R}))$ we have the following identities: \begin{enumerate} \item $(h\bullet x,g\bullet y)_A=(h,g)(x,y)_A$. \item $(f,h\bullet x)_A=\big([f,Ax],h\big)$. \item $[h\bullet x,Ay]=(x,y)_Ah$. \end{enumerate} \begin{proof} These are direct implications of Proposition~\ref{rechenregeln} together with Equation~\eqref{Ahx=hAx}: \begin{enumerate} \item $(h\bullet x,g\bullet y)_A=(h\bullet x,g\bullet Ay)=(h,g)(x,Ay)=(h,g)(x,y)_A$. 
\item $(f,h\bullet x)_A=(f,h\bullet Ax)=\big([f,Ax],h\big)$. \item $[h\bullet x,Ay]=(x,Ay)h=(x,y)_Ah$. \end{enumerate} \end{proof} \end{corollary} \end{section} \end{chapter} \begin{chapter}{The Correlated Gaussian Measure} \begin{section}{Construction} Countably Hilbert spaces, and in particular nuclear spaces, have been studied extensively in the literature. We briefly state the following definition of a \emph{Gel'fand triple}, also known as a \emph{rigged Hilbert space}, see e.g. \cite{GelVil1964}, which serves our purpose of constructing a Gaussian measure by means of the Bochner-Minlos theorem. Within the definition we collect some common facts. \begin{definition}\label{defgelfand} Let $\mathcal{N}$ be a topological vector space and $\mathcal{N}'$ its topological dual space. We call $\mathcal{N}\subset\mathcal{H}\subset\mathcal{N}'$ a \emph{Gel'fand triple} if the following holds: The topology on $\mathcal{N}$ is defined by a family of inner products $((\cdot,\cdot)_p)_{p\in\mathbb{N}_0}$ with corresponding norms $(\|\cdot\|_p)_{p\in\mathbb{N}_0}$, which we assume to be compatible in the sense that if $p,q\in\mathbb{N}_0$ and a sequence $(\xi_n)_{n\in\mathbb{N}}$ in $\mathcal{N}$ converges to zero with respect to $\|\cdot\|_p$ and is a Cauchy sequence with respect to $\|\cdot\|_q$, then $\lim_{n\to\infty}\|\xi_n\|_q=0$. It is easy to show that the topology is induced by the translation-invariant metric \[ \mathcal{N}\times\mathcal{N}\ni(\xi,\zeta)\longmapsto\sum_{p=0}^\infty2^{-p}\frac{\|\xi-\zeta\|_p}{1+\|\xi-\zeta\|_p}\in\mathbb{R}. \] Furthermore assume that $\mathcal{N}$ is complete with respect to this metric. Without loss of generality we may assume $(\cdot,\cdot)_p\le(\cdot,\cdot)_{p+1}$ for $p\in\mathbb{N}_0$, since otherwise we may replace the family of inner products with the family given by $(\cdot,\cdot)_p':=\sum_{k=0}^p(\cdot,\cdot)_k$, which does not alter the topology on $\mathcal{N}$ but is monotonically increasing.
For $p\in\mathbb{N}_0$ let $\mathcal{N}_p$ be the Hilbert space obtained by taking the abstract completion of $\mathcal{N}$ with respect to $\|\cdot\|_p$. Assume $\mathcal{N}_0=\mathcal{H}$, which implies $\mathcal{N}\subset\mathcal{H}$ densely and continuously. Since the family of norms is increasing, by identifying $\mathcal{H}$ with its topological dual space $\mathcal{H}'$, we obtain the chain of spaces \[ \mathcal{N}\subset\cdots\subset\mathcal{N}_2\subset\mathcal{N}_1\subset\mathcal{H}=\mathcal{H}'\subset\mathcal{N}_{-1}\subset\mathcal{N}_{-2}\subset\cdots\subset\mathcal{N}', \] where $\mathcal{N}_{-p}$ is the topological dual space of $\mathcal{N}_p$ for $p\in\mathbb{N}_0$. The completeness of $\mathcal{N}$ is actually equivalent to $\mathcal{N}=\bigcap_{p\in\mathbb{N}_0}\mathcal{N}_p$. It can be shown that $\mathcal{N}'=\bigcup_{p\in\mathbb{N}_0}\mathcal{N}_{-p}$ and we consider the finest topology on $\mathcal{N}'$ such that all inclusions $\mathcal{N}_{-p}\hookrightarrow\mathcal{N}'$ are continuous. The final important assumption is that for each $p\in\mathbb{N}_0$ the inclusion $N_{p+1,p}:\mathcal{N}_{p+1}\hookrightarrow\mathcal{N}_p$ is a Hilbert-Schmidt operator, i.e. for some orthonormal basis $(\eta_k)_{k\in\mathbb{N}}$ in $\mathcal{N}_{p+1}$ we have \[ \|N_{p+1,p}\|_{\text{HS}}^2:=\sum_{k=1}^\infty\|\eta_k\|_p^2<\infty, \] whose value does not depend on the particular choice of the orthonormal basis $(\eta_k)_{k\in\mathbb{N}}$. \end{definition} \begin{definition} For $p\in\mathbb{Z}$ and some Hilbert space $\big(H,(\cdot,\cdot)_H\big)$ we denote \[ \ell_p^2(H):=\left\{f\in H^\mathbb{N}:\sum_{k=1}^\infty k^{2p}\|f_k\|_H^2<\infty\right\}, \] which becomes a Hilbert space itself in an obvious way. \end{definition} The following theorem yields a Gel'fand triple with central Hilbert space $\ell^2(\mathcal{H})$, if a Gel'fand triple with $\mathcal{H}$ as central Hilbert space is already given.
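\begin{example} Before turning to that theorem, we illustrate the weighted spaces just defined. Fix a unit vector $h\in H$ and consider the sequence $f:=(k^{-s}h)_{k\in\mathbb{N}}$ for some $s>0$. Then \[ \sum_{k=1}^\infty k^{2p}\|f_k\|_H^2=\sum_{k=1}^\infty k^{2(p-s)}, \] which is finite if and only if $2(s-p)>1$. Hence $f\in\ell_p^2(H)$ exactly for those $p\in\mathbb{Z}$ with $p<s-\frac{1}{2}$, so membership in the scale $(\ell_p^2(H))_{p\in\mathbb{Z}}$ records the polynomial decay rate of a sequence. \end{example}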
\begin{theorem}\label{seqgelfand} Assume we have a Gel'fand triple $\mathcal{N}\subset\mathcal{H}\subset\mathcal{N}'$ and let the spaces $\mathcal{N}_p$, $p\in\mathbb{N}_0$ be as in Definition~\ref{defgelfand}. Then we obtain a Gel'fand triple $s(\mathcal{N})\subset\ell^2(\mathcal{H})\subset s'(\mathcal{N})$ by defining \[ s(\mathcal{N}):=\bigcap_{p\in\mathbb{N}_0}\ell_p^2(\mathcal{N}_p). \] The topology on $s(\mathcal{N})$ is defined by the family of norms on $\ell_p^2(\mathcal{N}_p)$ for $p\in\mathbb{N}_0$. \begin{proof} Let $\|\cdot\|_{p,p}$ denote the norm on $\ell_p^2(\mathcal{N}_p)$ for $p\in\mathbb{N}_0$. Clearly these norms are compatible in the sense of Definition~\ref{defgelfand} and monotonically increasing. Furthermore $\ell_0^2(\mathcal{N}_0)=\ell^2(\mathcal{H})$ and for $p\in\mathbb{N}_0$ the abstract completion of $s(\mathcal{N})$ with respect to $\|\cdot\|_{p,p}$ yields exactly $\ell_p^2(\mathcal{N}_p)$, since $s(\mathcal{N})$ contains the set of finite sequences in $\mathcal{N}$, which are dense in the complete space $\ell_p^2(\mathcal{N}_p)$. Using $s(\mathcal{N})=\bigcap_{p\in\mathbb{N}_0}\ell_p^2(\mathcal{N}_p)$ yields that the metric \[ s(\mathcal{N})\times s(\mathcal{N})\ni(\varphi,\psi)\longmapsto\sum_{p=0}^\infty2^{-p}\frac{\|\varphi-\psi\|_{p,p}}{1+\|\varphi-\psi\|_{p,p}}\in\mathbb{R} \] is complete, and it clearly induces the topology on $s(\mathcal{N})$. It remains to show that for each $p\in\mathbb{N}_0$ the inclusion $I_{p+1,p}:\ell_{p+1}^2(\mathcal{N}_{p+1})\hookrightarrow\ell_p^2(\mathcal{N}_p)$ is a Hilbert-Schmidt operator. To this end let $p\in\mathbb{N}_0$ and $(\eta_k)_{k\in\mathbb{N}}$ be an orthonormal basis of $\mathcal{N}_{p+1}$.
Then clearly the set \[ \left\{\frac{(\eta_k\delta_{l,m})_{m\in\mathbb{N}}}{\|(\eta_k\delta_{l,m})_{m\in\mathbb{N}}\|_{p+1,p+1}}:k,l\in\mathbb{N}\right\}=\left\{\left(\frac{\eta_k\delta_{l,m}}{l^{p+1}}\right)_{m\in\mathbb{N}}:k,l\in\mathbb{N}\right\} \] is an orthonormal basis of $\ell_{p+1}^2(\mathcal{N}_{p+1})$. We compute \[ \|I_{p+1,p}\|_\text{HS}^2=\sum_{k,l=1}^\infty\frac{\|(\eta_k\delta_{l,m})_{m\in\mathbb{N}}\|_{p,p}^2}{\|(\eta_k\delta_{l,m})_{m\in\mathbb{N}}\|_{p+1,p+1}^2}=\sum_{k,l=1}^\infty\|\eta_k\|_p^2\frac{l^{2p}}{l^{2p+2}}=\|N_{p+1,p}\|_\text{HS}^2\sum_{l=1}^\infty\frac{1}{l^2}<\infty, \] where $N_{p+1,p}:\mathcal{N}_{p+1}\hookrightarrow\mathcal{N}_p$ is the inclusion and $\|\cdot\|_p$ denotes the norm on $\mathcal{N}_p$. This completes the proof. \end{proof} \end{theorem} \begin{example} Consider the \emph{Schwartz space of functions of rapid decrease}, defined by \[ S(\mathbb{R}):=\left\{\eta\in C^\infty(\mathbb{R}):\|\eta\|_{n,m}:=\sup_{x\in\mathbb{R}}\left|x^mD^n\eta(x)\right|<\infty\text{ for all }n,m\in\mathbb{N}_0\right\} \] and equipped with the topology given by the family of seminorms $(\|\cdot\|_{n,m})_{n,m\in\mathbb{N}_0}$. It is well known that this is a completely metrizable dense nuclear subspace of the Hilbert space $L^2(\mathbb{R},\dx)$ and thus yields a Gel'fand triple $S(\mathbb{R})\subset L^2(\mathbb{R},\dx)\subset S'(\mathbb{R})$, which is the standard triple used in White Noise Analysis, see \cite{HKPS1993,ReedSimon1980}. The above theorem can be applied to obtain a Gel'fand triple $s(S(\mathbb{R}))\subset\ell^2(L^2(\mathbb{R},\dx))\subset s'(S(\mathbb{R}))$. \end{example} \begin{notation} We denote the canonical dual pairing between $s(\mathcal{N})$ and $s'(\mathcal{N})$ by \[ s(\mathcal{N})\times s'(\mathcal{N})\ni(\varphi,\omega)\longmapsto\langle\varphi,\omega\rangle:=\omega(\varphi)\in\mathbb{R}.
\] Since we identify $\ell^2(\mathcal{H})$ with its dual, for $\varphi\in s(\mathcal{N})$ and $\omega\in\ell^2(\mathcal{H})\subset s'(\mathcal{N})$ we have \[ \langle\varphi,\omega\rangle=(\varphi,\omega)_{\ell^2(\mathcal{H})}. \] \end{notation} \begin{definition} We equip $s'(\mathcal{N})$ with the $\sigma$-algebra generated by the mappings \[ s'(\mathcal{N})\ni\omega\longmapsto\big(\langle\varphi_1,\omega\rangle,\dots,\langle\varphi_n,\omega\rangle\big)\in\mathbb{R}^n,\quad\text{for }n\in\mathbb{N}\text{ and }\varphi_1,\dots,\varphi_n\in s(\mathcal{N}), \] which is also called the \emph{cylindrical} $\sigma$-algebra. \end{definition} The Bochner-Minlos theorem is the standard tool used to obtain a Gaussian measure on spaces like $s'(\mathcal{N})$, see \cite{Obata1994}. \begin{theorem}[Bochner-Minlos theorem] Let $C:s(\mathcal{N})\to\mathbb{C}$ be a characteristic function, in other words we have $C(0)=1$ and $C$ is continuous and positive semidefinite, i.e. \[ \sum_{i,j=1}^n\alpha_i\overline{\alpha_j}C(\varphi_i-\varphi_j)\ge 0\quad\text{ for all }n\in\mathbb{N}\text{ and }\alpha_i\in\mathbb{C},\,\varphi_i\in s(\mathcal{N})\text{ for }i=1,\dots,n. \] Then there exists a unique measure $\mu$ on $s'(\mathcal{N})$ which fulfills \[ \int_{s'(\mathcal{N})}\exp\big(i\langle\varphi,\omega\rangle\big)d\mu(\omega)=C(\varphi)\quad\text{for all }\varphi\in s(\mathcal{N}). \] Clearly the measure obtained is a probability measure, since \[ \mu(s'(\mathcal{N}))=\int_{s'(\mathcal{N})}1d\mu(\omega)=\int_{s'(\mathcal{N})}\exp(i\langle 0,\omega\rangle)d\mu(\omega)=C(0)=1. \] \end{theorem} \begin{theorem} Let $(\cdot,\cdot)'$ be any inner product on $\ell^2(\mathcal{H})$ which is continuous. Then \[ s(\mathcal{N})\ni\varphi\longmapsto C(\varphi):=\exp\left(-\frac{1}{2}(\varphi,\varphi)'\right)\in\mathbb{C} \] is a characteristic function in the sense of the Bochner-Minlos theorem. \begin{proof} The equality $C(0)=1$ is clear.
Furthermore $C$ is continuous since the embedding $s(\mathcal{N})\subset\ell^2(\mathcal{H})$ is continuous and $(\cdot,\cdot)'$ is continuous on $\ell^2(\mathcal{H})$. Let $n\in\mathbb{N}$ and $\varphi_i\in s(\mathcal{N})$ for $i=1,\dots,n$. Due to the fact \[ \sum_{i,j=1}^n\alpha_i\alpha_j(\varphi_i,\varphi_j)'=\left(\sum_{i=1}^n\alpha_i\varphi_i,\sum_{j=1}^n\alpha_j\varphi_j\right)'\ge 0\quad\text{for all }\alpha\in\mathbb{R}^n,\] the matrix $((\varphi_i,\varphi_j)')_{i,j=1,\dots,n}$ and thus also $(\exp((\varphi_i,\varphi_j)'))_{i,j=1,\dots,n}$ is positive semidefinite by Lemma~\ref{realposdeflemma} and Corollary~\ref{expisposdef}, see page~\pageref{realposdeflemma}. Now let $\alpha\in\mathbb{C}^n$ be arbitrary. We compute \begin{align*} \sum_{i,j=1}^n\alpha_i\overline{\alpha_j}C(\varphi_i-\varphi_j) & = \sum_{i,j=1}^n\alpha_i\overline{\alpha_j}\exp\left(-\frac{1}{2}\big((\varphi_i,\varphi_i)'-2(\varphi_i,\varphi_j)'+(\varphi_j,\varphi_j)'\big)\right)\\ & = \sum_{i,j=1}^n\beta_i\overline{\beta_j}\exp\left((\varphi_i,\varphi_j)'\right)\\ & \ge 0, \end{align*} where $\beta_i=\alpha_i\exp\left(-\frac{1}{2}(\varphi_i,\varphi_i)'\right)$ for $i=1,\dots,n$. \end{proof} \end{theorem} \begin{definition}\label{defmuA} Let $A\in L(\ell^2(\mathcal{H}))$ be self-adjoint and positive definite and denote the inner product it generates on $\ell^2(\mathcal{H})$ by \[ \ell^2(\mathcal{H})\times\ell^2(\mathcal{H})\ni(g,h)\longmapsto(g,h)_A:=(g,Ah)_{\ell^2(\mathcal{H})} \] with corresponding norm $\|\cdot\|_A:=\sqrt{(\cdot,\cdot)_A}$. Since $A$ is continuous, so is $(\cdot,\cdot)_A$. The unique measure $\mu_A$ on $s'(\mathcal{N})$ fulfilling \[ \int_{s'(\mathcal{N})}\exp\left(i\langle\varphi,\omega\rangle\right)d\mu_A(\omega)=\exp\left(-\frac{1}{2}(\varphi,\varphi)_A\right)\quad\text{for all }\varphi\in s(\mathcal{N}), \] which exists due to the above theorem, we call the \emph{Gaussian measure with covariance operator $A$}.
We denote $L^2(\mu_A):=L^2(s'(\mathcal{N}),\mu_A;\mathbb{R})$ and by abuse of notation we will denote the norm on $L^2(\mu_A)$ again by $\|\cdot\|_A$. While Gaussian analysis is usually performed on the complexification of this space, we will stick to the real setting as it suffices for our purposes. However, the results may be transferred to the complex case. To save some space in our equations, we will simply write $s'$ instead of $s'(\mathcal{N})$ when integrating, so $\int_{s'(\mathcal{N})}fd\mu_A$ becomes $\int_{s'}fd\mu_A$ for integrable or non-negative measurable $f$. \end{definition} For the rest of this thesis, $A\in L(\ell^2(\mathcal{H}))$ will be assumed to be self-adjoint and positive definite. \end{section} \begin{section}{Properties} \begin{remark}\label{imagemeasure} If $(\Omega,\mathcal{F},m)$ is a measure space, $(\Omega',\mathcal{F}')$ a measurable space and $T:\Omega\to\Omega'$ a measurable map, then $T(m):=m\circ T^{-1}$ is a measure on $\Omega'$, called the \emph{image measure of $m$ under $T$}, and for any measurable $f:\Omega'\to\mathbb{R}$ which is either integrable or non-negative we have \begin{equation}\label{imagemeasureformula} \int_{\Omega'}f(\omega')dT(m)(\omega')=\int_\Omega f(T(\omega))dm(\omega) \end{equation} in the sense that either both sides are infinite or both sides are finite and take the same value. Clearly if $T=T'$ almost surely for some measurable $T':\Omega\to\Omega'$, then $T(m)=T'(m)$. \end{remark} \begin{definition} By $\mu_n$ we denote the standard Gaussian measure on the measurable space $(\mathbb{R}^n,\mathcal{B}(\mathbb{R}^n))$, i.e. the measure defined by \[ \mu_n(B)=\left(\frac{1}{\sqrt{2\pi}}\right)^n\int_{B}\exp\left(-\frac{1}{2}|x|^2\right)dx\quad\text{for }B\in\mathcal{B}(\mathbb{R}^n). \] It is uniquely characterized by its Fourier transform \[ \mathbb{R}^n\ni p\longmapsto\int_{\mathbb{R}^n}\exp\big(i(p,x)_{\mathbb{R}^n}\big) d\mu_n(x)=\exp\Big(-\frac{1}{2}|p|^2\Big)\in\mathbb{R}.
\] \end{definition} \begin{lemma}\label{orthogauss} Let $n\in\mathbb{N}$ and $\varphi_1,\dots,\varphi_n\in s(\mathcal{N})$ be orthonormal with respect to $(\cdot,\cdot)_A$. Then the image measure of $\mu_A$ under \[ s'(\mathcal{N})\ni\omega\longmapsto T(\omega):=\big(\langle\varphi_1,\omega\rangle,\dots,\langle\varphi_n,\omega\rangle\big)\in\mathbb{R}^n \] is the standard Gaussian measure $\mu_n$ on $\mathbb{R}^n$. \begin{proof} By Formula~\eqref{imagemeasureformula} from Remark~\ref{imagemeasure} for $p\in\mathbb{R}^n$ we have \begin{align*} \int_{\mathbb{R}^n}\exp\big(i(p,x)_{\mathbb{R}^n}\big)dT(\mu_A)(x) & = \int_{s'}\exp\Big(i\big(p,T(\omega)\big)_{\mathbb{R}^n}\Big)d\mu_A(\omega)\\ & = \int_{s'}\exp\bigg(i\sum_{j=1}^n p_j\langle\varphi_j,\omega\rangle\bigg)d\mu_A(\omega)\\ & = \int_{s'}\exp\bigg(i\Big\langle\sum_{j=1}^n p_j\varphi_j,\omega\Big\rangle\bigg)d\mu_A(\omega)\\ & = \exp\bigg(-\frac{1}{2}\Big\|\sum_{j=1}^n p_j\varphi_j\Big\|_A^2\bigg)\\ & = \exp\bigg(-\frac{1}{2}\sum_{j=1}^n p_j^2\bigg)\\ & = \exp\Big(-\frac{1}{2}|p|^2\Big), \end{align*} hence $\mu_n$ and $T(\mu_A)$ have the same Fourier transform, thus $\mu_n=T(\mu_A)$. \end{proof} \end{lemma} \begin{corollary}\label{orthofubini} Let $n\in\mathbb{N}$ and $\varphi_1,\dots,\varphi_n\in s(\mathcal{N})$ be orthonormal with respect to $(\cdot,\cdot)_A$. If for each $i=1,\dots,n$ we have that the measurable function $G_i:\mathbb{R}\to\mathbb{R}$ is non-negative or integrable with respect to the Gaussian measure $\mu_1$, then \[ \int_{s'}\prod_{i=1}^nG_i(\langle\varphi_i,\omega\rangle)d\mu_A(\omega)=\prod_{i=1}^n\int_{s'}G_i(\langle\varphi_i,\omega\rangle)d\mu_A(\omega).
\] \begin{proof} Since $\mu_n$ is the product measure of $n$ one-dimensional measures $\mu_1$, Fubini's Theorem and the lemma above yield \begin{align*} \int_{s'}\prod_{i=1}^nG_i(\langle\varphi_i,\omega\rangle)d\mu_A(\omega) & = \int_{\mathbb{R}^n}\prod_{i=1}^nG_i(x_i)d\mu_n(x_1,\dots,x_n)\\ & = \prod_{i=1}^n\int_{\mathbb{R}}G_i(x_i)d\mu_1(x_i)\\ & = \prod_{i=1}^n\int_{s'}G_i(\langle\varphi_i,\omega\rangle)d\mu_A(\omega). \end{align*} \end{proof} \end{corollary} The following yields an isometry from $s(\mathcal{N})$ to $L^2(\mu_A)$. \begin{lemma}\label{isoembedding} Let $\varphi\in s(\mathcal{N})$. Then $\langle\varphi,\cdot\rangle\in L^2(\mu_A)$ with $\|\langle\varphi,\cdot\rangle\|_A=\|\varphi\|_A$. \begin{proof} For $\varphi=0$ the statement is clear. Otherwise by Lemma~\ref{orthogauss} we have \[ \|\langle\varphi,\cdot\rangle\|_A^2=\int_{s'}\langle\varphi,\omega\rangle^2d\mu_A(\omega)=\|\varphi\|_A^2\int_{s'}\Big\langle\frac{\varphi}{\|\varphi\|_A},\omega\Big\rangle^2d\mu_A(\omega)=\|\varphi\|_A^2\int_\mathbb{R} x^2d\mu_1(x)=\|\varphi\|_A^2, \] where we used the well-known fact $\int_\mathbb{R} x^2d\mu_1(x)=1$. \end{proof} \end{lemma} \begin{definition} We denote the abstract completion of $\ell^2(\mathcal{H})$ with respect to $(\cdot,\cdot)_A$ by $\ell_A^2(\mathcal{H})$ and also denote its norm and inner product by $\|\cdot\|_A$ and $(\cdot,\cdot)_A$, respectively. \end{definition} \begin{corollary}\label{sinellAcontanddense} The inclusion $s(\mathcal{N})\subset\ell_A^2(\mathcal{H})$ is dense. \begin{proof} Given $\varepsilon>0$ and $f\in\ell_A^2(\mathcal{H})$, choose $g\in\ell^2(\mathcal{H})$ with $\|f-g\|_A<\varepsilon$. For this $g$ there exists $\varphi\in s(\mathcal{N})$ with $\|g-\varphi\|_{\ell^2(\mathcal{H})}<\varepsilon$. Then \[ \|f-\varphi\|_A\le\|f-g\|_A+\|g-\varphi\|_A<\varepsilon+\|A\|^\frac{1}{2}\|g-\varphi\|_{\ell^2(\mathcal{H})}<\left(1+\|A\|^\frac{1}{2}\right)\cdot\varepsilon.
\] \end{proof} \end{corollary} \begin{lemma}\label{Hilbertkernels} Let $f\in\ell_A^2(\mathcal{H})$. Since $s(\mathcal{N})\subset\ell_A^2(\mathcal{H})$ is dense, there exists a sequence $(\varphi_k)_{k\in\mathbb{N}}$ in $s(\mathcal{N})$ such that $\lim_{k\to\infty}\varphi_k=f$ in $\ell_A^2(\mathcal{H})$. Then $(\langle\varphi_k,\cdot\rangle)_{k\in\mathbb{N}}$ is a Cauchy sequence in $L^2(\mu_A)$, whose limit is independent of the choice of the approximating sequence $(\varphi_k)_{k\in\mathbb{N}}$. Hence $\langle f,\cdot\rangle:=\lim_{k\to\infty}\langle\varphi_k,\cdot\rangle\in L^2(\mu_A)$ can be defined and for $f\in s(\mathcal{N})$ this definition coincides with the equivalence class of the pointwise defined function $\omega\mapsto\langle f,\omega\rangle$. Furthermore it holds $\|\langle f,\cdot\rangle\|_A=\|f\|_A$. \begin{proof} By Lemma~\ref{isoembedding} we have that $(\langle\varphi_k,\cdot\rangle)_{k\in\mathbb{N}}$ is a Cauchy sequence in $L^2(\mu_A)$ and hence converges. If $(\psi_k)_{k\in\mathbb{N}}$ is another sequence in $s(\mathcal{N})$ approximating $f$, then for all $k\in\mathbb{N}$ we have \[ \|\langle\varphi_k,\cdot\rangle-\langle\psi_k,\cdot\rangle\|_A=\|\varphi_k-\psi_k\|_A \le \|\varphi_k-f\|_A+\|f-\psi_k\|_A, \] so $\lim_{k\to\infty}\|\langle\varphi_k,\cdot\rangle-\langle\psi_k,\cdot\rangle\|_A=0$ and the sequences $(\langle\varphi_k,\cdot\rangle)_{k\in\mathbb{N}}$ and $(\langle\psi_k,\cdot\rangle)_{k\in\mathbb{N}}$ have the same limit, which we denote by $\langle f,\cdot\rangle$. By continuity of the norm it holds \[ \|\langle f,\cdot\rangle\|_A=\lim_{k\to\infty}\|\langle\varphi_k,\cdot\rangle\|_A=\lim_{k\to\infty}\|\varphi_k\|_A=\|f\|_A. \] \end{proof} \end{lemma} \begin{corollary} For $f,g\in\ell_A^2(\mathcal{H})$ we have $(\langle f,\cdot\rangle,\langle g,\cdot\rangle)_A=(f,g)_A$.
\begin{proof} By the well-known polarization identity we have \[ (\langle f,\cdot\rangle,\langle g,\cdot\rangle)_A=\frac{1}{4}\left(\|\langle f+g,\cdot\rangle\|_A^2-\|\langle f-g,\cdot\rangle\|_A^2\right)=\frac{1}{4}\left(\| f+g\|_A^2-\| f-g\|_A^2\right)=(f,g)_A. \] \end{proof} \end{corollary} \begin{notation} Let $(\Omega,\mathcal{F},\nu)$ be a measure space and $(f_n)_{n\in\mathbb{N}}$ be a sequence of real-valued measurable functions on $\Omega$ such that the measurable set $N:=\Omega\setminus\{\omega:\lim_{n\to\infty}f_n(\omega)\text{ exists}\}$ has measure zero. Then we define the function $\lim_{n\to\infty}f_n:\Omega\to\mathbb{R}$ by \[ \left(\lim_{n\to\infty}f_n\right)(\omega):=\lim_{n\to\infty}\mathbf{1}_{\Omega\setminus N}(\omega)f_n(\omega)=\begin{cases}\lim_{n\to\infty}f_n(\omega)&\omega\in\Omega\setminus N\\0&\omega\in N\end{cases},\quad\omega\in\Omega,\] which is measurable. \end{notation} \begin{remark} Let $(\Omega,\mathcal{F},\nu)$ be a measure space and let $\lim_{n\to\infty}[f_n]=[f]$ in $L^p(\Omega)$ for some $p\in[1,\infty)$. Then there exists a subsequence $(n_k)_{k\in\mathbb{N}}$ such that $\lim_{k\to\infty}f_{n_k}(\omega)=f(\omega)$ for almost all $\omega\in\Omega$ (or almost surely, if $\nu$ is a probability measure). \end{remark} \begin{remark}\label{Hoelderprob} Let $(\Omega,\mathcal{F},P)$ be a probability space and $1\le p\le q<\infty$. By H\"older's inequality we have $L^q(\Omega,P)\subset L^p(\Omega,P)$ and $\|\cdot\|_{L^p}\le\|\cdot\|_{L^q}$ on $L^q(\Omega,P)$. \end{remark} \begin{proposition}\label{generalizedcharfn} For $f\in\ell_A^2(\mathcal{H})$ we have \[ \int_{s'}\exp\left(i\langle f,\cdot\rangle\right)d\mu_A=\exp\left(-\frac{1}{2}(f,f)_A\right). \] \begin{proof} Let $(\varphi_k)_{k\in\mathbb{N}}$ be a sequence in $s(\mathcal{N})$ with $\lim_{k\to\infty}\varphi_k=f$ in $\ell_A^2(\mathcal{H})$. Then we have $\lim_{k\to\infty}\langle\varphi_k,\cdot\rangle=\langle f,\cdot\rangle$ in $L^2(\mu_A)$ by definition.
We fix some pointwise defined representative of $\langle f,\cdot\rangle$ and also denote it by $\langle f,\cdot\rangle$. By passing to a subsequence we may assume that we have $\lim_{k\to\infty}\langle\varphi_k,\cdot\rangle=\langle f,\cdot\rangle$ almost surely. Since for all $k\in\mathbb{N}$ it holds $|\exp\left(i\langle\varphi_k,\cdot\rangle\right)|=1\in L^2(\mu_A)\subset L^1(\mu_A)$, we apply Lebesgue's theorem of dominated convergence to obtain \begin{align*} \int_{s'}\exp\left(i\langle f,\omega\rangle\right)d\mu_A(\omega) & = \lim_{k\to\infty}\int_{s'}\exp\left(i\langle\varphi_k,\omega\rangle\right)d\mu_A(\omega)\\ & = \lim_{k\to\infty}\exp\left(-\frac{1}{2}(\varphi_k,\varphi_k)_A\right)\\ & = \exp\left(-\frac{1}{2}(f,f)_A\right). \end{align*} \end{proof} \end{proposition} We may now generalize Lemma~\ref{orthogauss} and Corollary~\ref{orthofubini} in the following ways: \begin{corollary}\label{orthogaussgeneral} Let $n\in\mathbb{N}$ and $f_1,\dots,f_n$ be an orthonormal system in $\ell_A^2(\mathcal{H})$. Then the image measure of $\mu_A$ under $T:=\big(\langle f_1,\cdot\rangle,\dots,\langle f_n,\cdot\rangle\big)$ is the standard Gaussian measure $\mu_n$ on $\mathbb{R}^n$. \begin{proof} The proof works exactly along the same lines as the proof of Lemma~\ref{orthogauss}, where Remark~\ref{imagemeasure} is used to account for the fact that $T$ is only defined up to almost sure equality and Proposition~\ref{generalizedcharfn} is used to express the Fourier transform in terms of the measure's characteristic function as in the proof of Lemma~\ref{orthogauss}. \end{proof} \end{corollary} \begin{corollary}\label{orthofubinigeneral} Let $n\in\mathbb{N}$ and $f_1,\dots,f_n$ be an orthonormal system in $\ell_A^2(\mathcal{H})$.
If for each $i=1,\dots,n$ we have that the measurable function $G_i:\mathbb{R}\to\mathbb{R}$ is non-negative or integrable with respect to the Gaussian measure $\mu_1$ on $\mathbb{R}$, then \[ \int_{s'}\prod_{i=1}^nG_i(\langle f_i,\cdot\rangle)d\mu_A=\prod_{i=1}^n\int_{s'}G_i(\langle f_i,\cdot\rangle)d\mu_A. \] \end{corollary} \end{section} \begin{section}{The Chaos Decomposition} \begin{notation} For two vector spaces $X$ and $Y$ over the same field $\mathbb{K}$, their \emph{algebraic tensor product}, which again is a vector space over $\mathbb{K}$, is denoted by $X\otimes Y$. Up to an isomorphism in the category of linear $\mathbb{K}$-spaces it is uniquely characterized by the following universal property: There exists a bilinear map $B:X\times Y\to X\otimes Y$ such that for any vector space $V$ over $\mathbb{K}$ and any bilinear map $B':X\times Y\to V$ there exists a unique linear map $L:X\otimes Y\to V$ such that \[ B'(x,y)=L(B(x,y))\quad\text{for all }x\in X,\,y\in Y. \] It can be shown that such a space always exists. For $x\in X$, $y\in Y$ we will simply write $x\otimes y$ instead of $B(x,y)$. The algebraic tensor product is associative in the sense that if $Z$ is another vector space over $\mathbb{K}$, then the spaces $(X\otimes Y)\otimes Z$ and $X\otimes(Y\otimes Z)$ are isomorphic in the category of linear $\mathbb{K}$-spaces and we will denote both of the spaces by $X\otimes Y\otimes Z$. \end{notation} \begin{lemma} For two vector spaces $X$ and $Y$ over the same field $\mathbb{K}$ we have \[ X\otimes Y=\spann\{x\otimes y:x\in X,y\in Y\}. \] \begin{proof} Let $S:=\spann\{x\otimes y:x\in X,y\in Y\}$. By Zorn's lemma there exists a subspace $T$ of $X\otimes Y$ such that $X\otimes Y=S\oplus T$. Assume $S\subsetneq X\otimes Y$, so $\dim T\not=0$. Then for $L=\id_{X\otimes Y}$ and $L':X\otimes Y\to X\otimes Y$ defined by $L'(s+t)=s$ for $s\in S$ and $t\in T$, we have $L(x\otimes y)=x\otimes y=L'(x\otimes y)$ for all $x\in X$ and $y\in Y$, but $L\not=L'$.
This is a contradiction to the universal property of the algebraic tensor product in view of $V=X\otimes Y$ and $B'=B=\otimes$. \end{proof} \end{lemma} \begin{remark} Let $v\in X\otimes Y$. We have proven that there exist $m\in\mathbb{N}$ and $\alpha_k\in\mathbb{K}$, $x_k\in X$ and $y_k\in Y$ for $k=1,\dots,m$ such that $v=\sum_{k=1}^m\alpha_kx_k\otimes y_k$. If we set $x_k':=\alpha_kx_k$ for $k=1,\dots,m$, we obtain the easier representation $v=\sum_{k=1}^mx_k'\otimes y_k$. \end{remark} \begin{definition} Let $X$ be a real or complex vector space and $n\in\mathbb{N}$. We define the \emph{symmetrization} of $x^{(n)}=x_1\otimes\cdots\otimes x_n\in X\otimes\cdots\otimes X$ to be \[ \widehat{x^{(n)}}:=x_1\widehat{\otimes}\cdots\widehat{\otimes}x_n:=\frac{1}{n!}\sum_{\sigma\in S_n}x_{\sigma(1)}\otimes\cdots\otimes x_{\sigma(n)}. \] Here $S_n$ stands for the group of permutations on $\{1,\dots,n\}$. One can show that this induces a linear operator on $X\otimes\cdots\otimes X$. For general $x^{(n)}\in X\otimes\cdots\otimes X$, the symmetrization of $\widehat{x^{(n)}}$ again yields $\widehat{x^{(n)}}$. If $x^{(n)}=\widehat{x^{(n)}}$, we call $x^{(n)}$ symmetric and denote the subspace of symmetric elements by \[ X\widehat{\otimes}\cdots\widehat{\otimes}X:=\{x^{(n)}\in X\otimes\cdots\otimes X:x^{(n)}=\widehat{x^{(n)}}\}. \] \end{definition} \begin{notation} Let $X$ be a vector space over $\mathbb{K}\in\{\mathbb{R},\mathbb{C}\}$, $x\in X$ and $n\in\mathbb{N}$. We denote the $n^\text{th}$ tensor power of $x$ by $x^{\otimes n}:=x\otimes\cdots\otimes x\in X\otimes\cdots\otimes X$ and set $x^{\otimes 0}=1\in\mathbb{K}$. 
If $L_1,\dots,L_n:X\to X$ are linear operators, we uniquely define the linear operator $L_1\otimes\cdots\otimes L_n$ on $X\otimes\cdots\otimes X$ by \[ L_1\otimes\cdots\otimes L_n(x_1\otimes\cdots\otimes x_n):=L_1x_1\otimes\cdots\otimes L_nx_n\quad\text{for }x_1,\dots,x_n\in X \] and its symmetrization $L_1\widehat{\otimes}\cdots\widehat{\otimes}L_n$ by \[ L_1\widehat{\otimes}\cdots\widehat{\otimes}L_n(x_1\otimes\cdots\otimes x_n):=L_1x_1\widehat{\otimes}\cdots\widehat{\otimes} L_nx_n\quad\text{for }x_1,\dots,x_n\in X. \] For a linear operator $L:X\to X$ we introduce the notations \[ L^{\otimes n}:=L\otimes\cdots\otimes L,\quad L^{\wot n}:=L\widehat{\otimes}\cdots\widehat{\otimes}L\quad\text{and}\quad L^{\otimes 0}:=\id_\mathbb{K}. \] \end{notation} \begin{corollary} For a real or complex vector space $X$ and $n\in\mathbb{N}$ it holds \[ X\widehat{\otimes}\cdots\widehat{\otimes}X=\spann\left\{x_1\widehat{\otimes}\cdots\widehat{\otimes}x_n:x_1,\cdots,x_n\in X\right\}. \] \begin{proof} Let $x^{(n)}\in X\widehat{\otimes}\cdots\widehat{\otimes}X$. There exist $m\in\mathbb{N}$ and $x_1^k,\dots,x_n^k\in X$ for $k=1,\dots,m$ such that $x^{(n)}=\sum_{k=1}^m x_1^k\otimes\cdots\otimes x_n^k$. Thus \[ x^{(n)}=\widehat{x^{(n)}}=\sum_{k=1}^m x_1^k\widehat{\otimes}\cdots\widehat{\otimes} x_n^k\in\spann\left\{x_1\widehat{\otimes}\cdots\widehat{\otimes}x_n:x_1,\cdots,x_n\in X\right\}. \] \end{proof} \end{corollary} The following can be found in \cite{Obata1994}: \begin{lemma}[Polarization formula]\label{polarizationformula} Let $X$ and $Y$ be real or complex vector spaces, $n\in\mathbb{N}$ and $F:X^n\to Y$ be multilinear and symmetric. Then for $x_1,\dots,x_n\in X$ it holds \[ F(x_1,\dots,x_n)=\frac{1}{2^nn!}\sum_{B\in\{\pm 1\}^n}B_1\cdots B_nA(B_1x_1+\dots+B_nx_n), \] where $A(x):=F(x,\dots,x)$ for $x\in X$. 
\end{lemma} \begin{corollary} For a real or complex vector space $X$ and $n\in\mathbb{N}$ we have \[ X\widehat{\otimes}\cdots\widehat{\otimes}X=\spann\left\{x^{\otimes n}:x\in X\right\}. \] \begin{proof} Clearly $X\widehat{\otimes}\cdots\widehat{\otimes}X$ contains all elements of the form $x^{\otimes n}$, where $x\in X$. For the other inclusion define $F:X^n\to X\otimes\cdots\otimes X$ by $F(x_1,\dots,x_n):=x_1\widehat{\otimes}\cdots\widehat{\otimes} x_n$ for $x_1,\dots,x_n\in X$. Applying the polarization formula yields $x_1\widehat{\otimes}\cdots\widehat{\otimes} x_n\in\spann\{x^{\otimes n}:x\in X\}$. Thus \[ X\widehat{\otimes}\cdots\widehat{\otimes}X=\spann\left\{x_1\widehat{\otimes}\cdots\widehat{\otimes} x_n:x_1,\dots,x_n\in X\right\}=\spann\left\{x^{\otimes n}:x\in X\right\}. \] \end{proof} \end{corollary} \begin{definition} For $n\in\mathbb{N}$ we denote $s(\mathcal{N})^{\otimes n}:=s(\mathcal{N})\otimes\cdots\otimes s(\mathcal{N})$ and its subspace of symmetric elements by $s(\mathcal{N})^{\wot n}:=s(\mathcal{N})\widehat{\otimes}\cdots\widehat{\otimes}s(\mathcal{N})$. We set $s(\mathcal{N})^{\otimes 0}:=s(\mathcal{N})^{{\wh{\otimes}} 0}:=\mathbb{R}$. \end{definition} \begin{definition} Let $n\in\mathbb{N}$. By $\ell_A^2(\mathcal{H})^{\otimes n}$ we denote the abstract completion of the space $\ell_A^2(\mathcal{H})\otimes\cdots\otimes\ell_A^2(\mathcal{H})$ with respect to the unique inner product which fulfills \[ \big(f_1\otimes\cdots\otimes f_n,g_1\otimes\cdots\otimes g_n\big)_A:=\prod_{k=1}^n(f_k,g_k)_A\quad\text{for }f_1,\dots,f_n,g_1,\dots,g_n\in\ell_A^2(\mathcal{H}). \] The space $\ell_A^2(\mathcal{H})^{\wot n}$ is defined to be the closure of $\ell_A^2(\mathcal{H})\widehat{\otimes}\cdots\widehat{\otimes}\ell_A^2(\mathcal{H})$ in $\ell_A^2(\mathcal{H})^{\otimes n}$, i.e. \[ \ell_A^2(\mathcal{H})^{\wot n}=\overline{\spann\left\{f^{\otimes n}:f\in\ell_A^2(\mathcal{H})\right\}}\subset\ell_A^2(\mathcal{H})^{\otimes n}.
\] Further we set $\ell_A^2(\mathcal{H})^{\otimes 0}:=\ell_A^2(\mathcal{H})^{{\wh{\otimes}} 0}:=\mathbb{R}$. \end{definition} \begin{remark} Note that we defined $s(\mathcal{N})^{\otimes n}$ simply as an algebraic tensor product, while $\ell_A^2(\mathcal{H})^{\otimes n}$ is the abstract completion of the algebraic tensor product with respect to some inner product. This may seem confusing, but for our purposes this yields the simplest notation. \end{remark} \begin{remark} If $L_1,\dots,L_n\in L(\ell_A^2(\mathcal{H}))$, then $L_1\otimes\cdots\otimes L_n\in L\left(\ell_A^2(\mathcal{H})\otimes\cdots\otimes\ell_A^2(\mathcal{H})\right)$ with operator norm $\|L_1\otimes\cdots\otimes L_n\|=\|L_1\|\cdots\|L_n\|$ and hence can be extended to an element of $L\left(\ell_A^2(\mathcal{H})^{\otimes n}\right)$, see e.g. \cite{Dixmier1981}. \end{remark} \begin{corollary}\label{densetensors} For all $n\in\mathbb{N}$ the inclusions $s(\mathcal{N})^{\otimes n}\subset\ell_A^2(\mathcal{H})^{\otimes n}$ and $s(\mathcal{N})^{\wot n}\subset\ell_A^2(\mathcal{H})^{\wot n}$ are dense. \begin{proof} For $n=1$ this is Corollary~\ref{sinellAcontanddense}. Assume the density of $s(\mathcal{N})^{\otimes n}\subset\ell_A^2(\mathcal{H})^{\otimes n}$ has been proven for some $n\in\mathbb{N}$ and let $F\in\ell_A^2(\mathcal{H})^{\otimes n}$ and $f\in\ell_A^2(\mathcal{H})$. It suffices to approximate $F\otimes f$, since the linear span of such elements is dense in $\ell_A^2(\mathcal{H})^{\otimes n+1}$. We may assume $F\not=0$ and $f\not=0$ since otherwise $F\otimes f=0\in s(\mathcal{N})^{\otimes n+1}$. For $\varepsilon>0$ choose $\Phi\in s(\mathcal{N})^{\otimes n}$ with $\|F-\Phi\|_A<\varepsilon\|f\|_A^{-1}$. We may choose $\Phi\not=0$ since $F\not=0$. Choose $\varphi\in s(\mathcal{N})$ with $\|f-\varphi\|_A<\varepsilon\|\Phi\|_A^{-1}$.
Then \begin{align*} \|F\otimes f-\Phi\otimes\varphi\|_A & \le \|F\otimes f-\Phi\otimes f\|_A+\|\Phi\otimes f-\Phi\otimes\varphi\|_A\\ & = \|F-\Phi\|_A\|f\|_A+\|\Phi\|_A\|f-\varphi\|_A\\ & < 2\varepsilon. \end{align*} To prove the density of $s(\mathcal{N})^{\wot n}\subset\ell_A^2(\mathcal{H})^{\wot n}$, we observe that for $f\in\ell_A^2(\mathcal{H})$ and a sequence $(\varphi_k)_{k\in\mathbb{N}}$ in $s(\mathcal{N})$ with $\lim_{k\to\infty}\varphi_k=f$ it holds \begin{align*} \lim_{k\to\infty}\|f^{\otimes n}-\varphi_k^{\otimes n}\|_A^2 & = \lim_{k\to\infty}\|f^{\otimes n}\|_A^2+\|\varphi_k^{\otimes n}\|_A^2-2(f^{\otimes n},\varphi_k^{\otimes n})_A\\ & = \lim_{k\to\infty}\|f\|_A^{2n}+\|\varphi_k\|_A^{2n}-2(f,\varphi_k)_A^n\\ & =0. \end{align*} Since $\spann\{f^{\otimes n}:f\in\ell_A^2(\mathcal{H})\}$ is dense in $\ell_A^2(\mathcal{H})^{\wot n}$ by definition, the statement is proven. \end{proof} \end{corollary} \begin{definition} The set of monomials $\mathcal{M}_n$ of order $n\in\mathbb{N}_0$ on $s'(\mathcal{N})$ we define by \[ \mathcal{M}_n:=\left\{s'(\mathcal{N})\ni\omega\mapsto\langle\varphi^{(n)},\omega^{\otimes n}\rangle\in\mathbb{R}:\varphi^{(n)}\in s(\mathcal{N})^{\otimes n}\right\}. \] Since for $\omega\in s'(\mathcal{N})$, $n\in\mathbb{N}_0$ and $\varphi^{(n)}\in s(\mathcal{N})^{\otimes n}$ it holds $\langle\varphi^{(n)},\omega^{\otimes n}\rangle=\langle\widehat{\varphi^{(n)}},\omega^{\otimes n}\rangle$, the polarization formula yields \[ \mathcal{M}_n=\spann\left\{s'(\mathcal{N})\ni\omega\mapsto\langle\varphi^{\otimes n},\omega^{\otimes n}\rangle=\langle\varphi,\omega\rangle^n\in\mathbb{R}:\varphi\in s(\mathcal{N})\right\}. \] Furthermore, we define the set $\mathcal{P}_n$ of polynomials of degree $n\in\mathbb{N}_0$ on $s'(\mathcal{N})$ and the set of polynomials $\mathcal{P}$ on $s'(\mathcal{N})$ by \[ \mathcal{P}_n:=\sum_{k=0}^n\mathcal{M}_k \quad\text{and}\quad \mathcal{P}:=\bigcup_{n\in\mathbb{N}_0}\mathcal{P}_n, \] respectively. 
\end{definition} A proof of the following important result works along the same lines as the corresponding proof in \cite{Obata1994}: \begin{theorem} The set of polynomials is dense in $L^2(\mu_A)$. \end{theorem} \begin{definition} We define $\tau_A:s(\mathcal{N})^{\otimes 2}\to\mathbb{R}$ as the unique linear map satisfying $\tau_A(\varphi\otimes\psi):=\langle\varphi,A\psi\rangle=(\varphi,\psi)_A$ for $\varphi,\psi\in s(\mathcal{N})$, which exists due to the universal property of the tensor product. \end{definition} \begin{definition} For $\omega\in s'(\mathcal{N})$ and $n\in\mathbb{N}_0$ we inductively define $\colon\omega^{\otimes n}\colon\in\big(s(\mathcal{N})^{\otimes n}\big)^*$, the algebraic dual space of $s(\mathcal{N})^{\otimes n}$, by \[ \colon\omega^{\otimes 0}\colon:=\id_\mathbb{R},\quad\colon\omega^{\otimes 1}\colon:=\omega,\quad\text{and} \] \[ \colon\omega^{\otimes n}\colon:=\omega{\wh{\otimes}}\colon\omega^{\otimes n-1}\colon-(n-1)\tau_A{\wh{\otimes}}\colon\omega^{\otimes n-2}\colon\quad\text{for }n\ge2. \] It is clear from the definition that for $\varphi^{(n)}\in s(\mathcal{N})^{\otimes n}$ we have \[ \wickpolyo{\varphi^{(n)}}=\wickpolyo{\widehat{\varphi^{(n)}}}. \] \end{definition} \begin{lemma}\label{wickpolynomial} For $\omega\in s'(\mathcal{N})$ and $n\in\mathbb{N}_0$ one has \[ \colon\omega^{\otimes n}\colon=\sum_{k=0}^{\lfloor\frac{n}{2}\rfloor}\frac{(-1)^kn!}{2^kk!(n-2k)!}\tau_A^{{\wh{\otimes}} k}{\wh{\otimes}}\omega^{\otimes n-2k}\quad\text{and} \] \[ \omega^{\otimes n}=\sum_{k=0}^{\lfloor\frac{n}{2}\rfloor}\frac{n!}{2^kk!(n-2k)!}\tau_A^{{\wh{\otimes}} k}{\wh{\otimes}}\colon\omega^{\otimes n-2k}\colon. \] \begin{proof} This proof uses a straightforward but tedious induction, which will be given in Section~\ref{wickpolynomialproofsection} of the appendix on page~\pageref{wickpolynomialproof}. 
\end{proof} \end{lemma} \begin{corollary}\label{wickhermite} For $\omega\in s'(\mathcal{N})$, $\varphi\in s(\mathcal{N})$, $\varphi\not=0$ and $n\in\mathbb{N}_0$ we have \begin{equation} \wickpolyo{\varphi^{\otimes n}}=\|\varphi\|_A^nH_n\left(\frac{\langle\varphi,\omega\rangle}{\|\varphi\|_A}\right), \end{equation} where $H_n$ is the $n^\text{th}$ Hermite polynomial, see Section~\ref{Hermitepolynomial} in the appendix on page~\pageref{Hermitepolynomial}. \begin{proof} This is a direct implication of the previous lemma using Equation~\eqref{Hermitesumrepresentation}. \end{proof} \end{corollary} \begin{definition} We define the set of Wick ordered polynomials on $s'(\mathcal{N})$ by \[ \mathcal{W}:=\left\{s'(\mathcal{N})\ni\omega\mapsto\sum_{n=0}^m\wickpolyo{\varphi^{(n)}}\in\mathbb{R}:m\in\mathbb{N}_0, \varphi^{(n)}\in s(\mathcal{N})^{\otimes n}\text{ for }n=0,\dots,m\right\}. \] Lemma~\ref{wickpolynomial} yields $\mathcal{W}=\mathcal{P}$, hence the Wick ordered polynomials are dense in $L^2(\mu_A)$. \end{definition} \begin{lemma} Let $\varphi,\psi\in s(\mathcal{N})$ and $n,m\in\mathbb{N}_0$. Then we have \[ \int_{s'}\wickpolyo{\varphi^{\otimes n}}\wickpolyo[m]{\psi^{\otimes m}}d\mu_A(\omega)=\delta_{n,m}n!(\varphi,\psi)_A^n. \] \begin{proof} Without loss of generality we may assume $\|\varphi\|_A=\|\psi\|_A=1$. By $\dim\spann\{\varphi,\psi\}\le 2$ there exists $\eta\in s(\mathcal{N})$ with $\|\eta\|_A=1$ and $(\eta,\psi)_A=0$ such that $\varphi\in\spann\{\eta,\psi\}$. Then for $\alpha:=(\varphi,\psi)_A$ and $\beta:=(\varphi,\eta)_A$ we have $\varphi=\alpha\psi+\beta\eta$ with $\alpha^2+\beta^2=\|\varphi\|_A^2=1$. 
Now for $\omega\in s'(\mathcal{N})$, if necessary using the convention $0^0:=1$, Corollary~\ref{wickhermite} above and Equation~\eqref{Hermitebinomial} from page~\pageref{Hermitebinomial} yield \begin{align} \wickpolyo{\varphi^{\otimes n}}\wickpolyo[m]{\psi^{\otimes m}} & = H_n(\langle\varphi,\omega\rangle)H_m(\langle\psi,\omega\rangle)\notag\\ & = H_n\big(\alpha\langle\psi,\omega\rangle+\beta\langle\eta,\omega\rangle\big)H_m(\langle\psi,\omega\rangle)\label{somelabel1}\\ & = \sum_{k=0}^n\binom{n}{k}\alpha^k\beta^{n-k}H_k(\langle\psi,\omega\rangle)H_{n-k}(\langle\eta,\omega\rangle)H_m(\langle\psi,\omega\rangle).\notag \end{align} By using $\|\eta\|_A=1$, Corollary~\ref{orthofubini} and Equations~\eqref{Hermiteinnerprod} and~\eqref{Hermiteintegraliszero}, for fixed $k\in\{0,\dots,n\}$ we have \begin{align} \int_{s'}H_k(\langle\psi,\omega\rangle)H_{n-k}(\langle\eta,\omega\rangle)H_m(\langle\psi,\omega\rangle)d\mu_A(\omega) & = \int_\mathbb{R} H_k(x)H_m(x)d\mu_1(x)\int_\mathbb{R} H_{n-k}(y)d\mu_1(y)\notag\\ & = (H_k,H_m)_{L^2(\mu_1)}\delta_{n-k,0}\notag\\ & = m!\delta_{k,m}\delta_{k,n}=n!\delta_{k,n,m}.\label{somelabel2} \end{align} Using these equations we obtain \begin{align*} \delta_{n,m}n!(\varphi,\psi)_A^n & = \delta_{n,m}n!\alpha^n = \sum_{k=0}^n\alpha^kn!\delta_{k,n,m} = \sum_{k=0}^n\binom{n}{k}\alpha^k\beta^{n-k}n!\delta_{k,n,m}\\ & \stackrel{\eqref{somelabel2}}{=} \sum_{k=0}^n\binom{n}{k}\alpha^k\beta^{n-k}\int_{s'}H_k(\langle\psi,\omega\rangle)H_{n-k}(\langle\eta,\omega\rangle)H_m(\langle\psi,\omega\rangle)d\mu_A(\omega)\\ & \stackrel{\eqref{somelabel1}}{=} \int_{s'}\wickpolyo{\varphi^{\otimes n}}\wickpolyo[m]{\psi^{\otimes m}}d\mu_A(\omega). \end{align*} \end{proof} \end{lemma} \begin{corollary} Let $n,m\in\mathbb{N}_0$ and $\varphi^{(n)}\in s(\mathcal{N})^{\otimes n}$, $\psi^{(m)}\in s(\mathcal{N})^{\otimes m}$. 
Then \[ \int_{s'}\wickpolyo{\varphi^{(n)}}\wickpolyo[m]{\psi^{(m)}}d\mu_A(\omega)=\delta_{n,m}n!\left(\widehat{\varphi^{(n)}},\widehat{\psi^{(m)}}\right)_A. \] \begin{proof} By the polarization formula there exist $r_1,r_2\in\mathbb{N}$, $\varphi_1,\dots,\varphi_{r_1},\psi_1,\dots,\psi_{r_2}\in s(\mathcal{N})$ and $\alpha_1,\dots,\alpha_{r_1},\beta_1,\dots,\beta_{r_2}\in\mathbb{R}$ such that \[ \widehat{\varphi^{(n)}}=\sum_{i=1}^{r_1}\alpha_i\varphi_i^{\otimes n}\quad\text{and}\quad\widehat{\psi^{(m)}}=\sum_{j=1}^{r_2}\beta_j\psi_j^{\otimes m}. \] Then the previous lemma yields \begin{align*} \int_{s'}\wickpolyo{\varphi^{(n)}}\wickpolyo[m]{\psi^{(m)}}d\mu_A(\omega) & = \int_{s'}\wickpolyo{\widehat{\varphi^{(n)}}}\wickpolyo[m]{\widehat{\psi^{(m)}}}d\mu_A(\omega)\\ & = \sum_{i=1}^{r_1}\sum_{j=1}^{r_2}\alpha_i\beta_j\delta_{n,m}n!(\varphi_i,\psi_j)_A^n\\ & = \delta_{n,m}n!\left(\sum_{i=1}^{r_1}\alpha_i\varphi_i^{\otimes n},\sum_{j=1}^{r_2}\beta_j\psi_j^{\otimes m}\right)_A\\ & = \delta_{n,m}n!\left(\widehat{\varphi^{(n)}},\widehat{\psi^{(m)}}\right)_A. \end{align*} \end{proof} \end{corollary} \begin{proposition} Let $n\in\mathbb{N}_0$ and $f^{(n)}\in\ell_A^2(\mathcal{H})^{\wot n}$. Similarly to Lemma~\ref{Hilbertkernels} we may define $\wickpoly{f^{(n)}}\in L^2(\mu_A)$ as the element $\lim_{k\to\infty}\wickpoly{\varphi_k^{(n)}}$ in $L^2(\mu_A)$, where $(\varphi_k^{(n)})_{k\in\mathbb{N}}$ is an arbitrary sequence in $s(\mathcal{N})^{\wot n}$ with $\lim_{k\to\infty}\varphi_k^{(n)}=f^{(n)}$ in $\ell_A^2(\mathcal{H})^{\otimes n}$, whose particular choice is irrelevant. Furthermore we have $\left\|\wickpoly{f^{(n)}}\right\|_A^2=n!\|f^{(n)}\|_A^2$. \begin{proof} Let $f^{(n)}\in\ell_A^2(\mathcal{H})^{\wot n}$. Due to Corollary~\ref{densetensors} there exists a sequence $(\varphi_k^{(n)})_{k\in\mathbb{N}}$ in $s(\mathcal{N})^{\wot n}$ with $\lim_{k\to\infty}\varphi_k^{(n)}=f^{(n)}$. 
By the above Corollary we have \[ \Big\|\wickpoly{\varphi_k^{(n)}}\Big\|_A^2=n!\Big\|\widehat{\varphi_k^{(n)}}\Big\|_A^2=n!\Big\|\varphi_k^{(n)}\Big\|_A^2, \] hence $\left(\wickpoly{\varphi_k^{(n)}}\right)_{k\in\mathbb{N}}$ is a Cauchy sequence in $L^2(\mu_A)$. If $(\psi_k^{(n)})_{k\in\mathbb{N}}$ is another sequence in $s(\mathcal{N})^{\wot n}$ with limit $f^{(n)}$, then \[ \Big\|\wickpoly{\varphi_k^{(n)}}-\wickpoly{\psi_k^{(n)}}\Big\|_A^2=n!\Big\|\widehat{\varphi_k^{(n)}}-\widehat{\psi_k^{(n)}}\Big\|_A^2=n!\Big\|\varphi_k^{(n)}-\psi_k^{(n)}\Big\|_A^2, \] so $\lim_{k\to\infty}\left\|\wickpoly{\varphi_k^{(n)}}-\wickpoly{\psi_k^{(n)}}\right\|_A=0$ and the sequences $\left(\wickpoly{\varphi_k^{(n)}}\right)_{k\in\mathbb{N}}$ and $\left(\wickpoly{\psi_k^{(n)}}\right)_{k\in\mathbb{N}}$ have the same limit, which we denote by $\wickpoly{f^{(n)}}$. By continuity of the norm it holds \[ \left\|\wickpoly{f^{(n)}}\right\|_A^2=\lim_{k\to\infty}\left\|\wickpoly{\varphi_k^{(n)}}\right\|_A^2=\lim_{k\to\infty}n!\left\|\varphi_k^{(n)}\right\|_A^2=n!\left\|f^{(n)}\right\|_A^2. \] \end{proof} \end{proposition} This directly implies the following: \begin{corollary} Let $n,m\in\mathbb{N}_0$ and $f^{(n)}\in\ell_A^2(\mathcal{H})^{\wot n}$, $g^{(m)}\in\ell_A^2(\mathcal{H})^{\wot m}$. Then \[ \int_{s'}\wickpoly{f^{(n)}}\wickpoly[m]{g^{(m)}}d\mu_A=\delta_{n,m}n!(f^{(n)},g^{(m)})_A. \] \end{corollary} \begin{theorem}[Chaos decomposition] Let $F\in L^2(\mu_A)$. Then for each $n\in\mathbb{N}_0$ there exists a unique $f^{(n)}\in\ell_A^2(\mathcal{H})^{\wot n}$ such that $F=\sum_{n=0}^\infty\wickpoly{f^{(n)}}$ in the $L^2(\mu_A)$ sense. We then have $\|F\|_A^2=\sum_{n=0}^\infty n!\|f^{(n)}\|_A^2$. \begin{proof} Let $(F_k)_{k\in\mathbb{N}}$ be a sequence of Wick ordered polynomials with $\lim_{k\to\infty}F_k=F$ in $L^2(\mu_A)$. 
For each $k\in\mathbb{N}$ there exists $m_k\in\mathbb{N}_0$, $\varphi_k^{(n)}\in s(\mathcal{N})^{\wot n}$ for $n=0,\dots,m_k$ such that \[ F_k=\sum_{n=0}^\infty\wickpoly{\varphi_k^{(n)}}, \] where we set $\varphi_k^{(n)}=0$ for $n>m_k$. For fixed $n\in\mathbb{N}_0$, we have \[ \big\|\varphi_k^{(n)}-\varphi_l^{(n)}\big\|_A^2\le\sum_{m=0}^\infty m!\big\|\varphi_k^{(m)}-\varphi_l^{(m)}\big\|_A^2=\|F_k-F_l\|_A^2 \] for any choice of $k,l\in\mathbb{N}$, hence the sequence $\big(\varphi_k^{(n)}\big)_{k\in\mathbb{N}}$ is a Cauchy sequence with some limit $f^{(n)}\in\ell_A^2(\mathcal{H})^{\wot n}$. We define \[ \tilde{F}_j:=\sum_{n=0}^j\wickpoly{f^{(n)}} \] for $j\in\mathbb{N}$. For $\varepsilon>0$ there exists $l\in\mathbb{N}$ with $\|F-F_l\|_A^2<\varepsilon/2$. Keeping in mind that $\varphi_l^{(n)}=0$ for $n>m_l$, we see that for all $i,j\ge m_l$ it holds \begin{align*} \|\tilde{F}_j-\tilde{F}_i\|_A^2 & = \sum_{n=i+1}^j n!\|f^{(n)}\|_A^2\\ & = \lim_{k\to\infty}\sum_{n=i+1}^j n!\big\|\varphi_k^{(n)}\big\|_A^2\\ & \le \lim_{k\to\infty}\sum_{n=i+1}^j 2n!\big\|\varphi_k^{(n)}-\varphi_l^{(n)}\big\|_A^2+\sum_{n=i+1}^j 2n!\big\|\varphi_l^{(n)}\big\|_A^2\\ & \le \lim_{k\to\infty}2\|F_k-F_l\|_A^2\\ & = 2\|F-F_l\|_A^2\\ & < \varepsilon, \end{align*} where we used the estimate $(a+b)^2\le2a^2+2b^2$ for $a,b\in\mathbb{R}$. Hence $(\tilde{F}_j)_{j\in\mathbb{N}}$ is a Cauchy sequence in $L^2(\mu_A)$ and thus we can define \[ \tilde{F}:=\lim_{j\to\infty}\tilde{F}_j=\sum_{n=0}^\infty\wickpoly{f^{(n)}}\in L^2(\mu_A). \] It remains to show $F=\tilde{F}$. 
To this end let $p\in\mathcal{P}$ be arbitrary with representation \[ p=\sum_{n=0}^m\wickpoly{\psi^{(n)}}\quad\text{for some }m\in\mathbb{N}_0\text{ and }\psi^{(n)}\in s(\mathcal{N})^{\otimes n}\text{ for }n=0,\dots,m. \] Then \[ (F,p)_A=\lim_{k\to\infty}(F_k,p)_A=\lim_{k\to\infty}\sum_{n=0}^mn!\big(\varphi_k^{(n)},\widehat{\psi^{(n)}}\big)_A=\sum_{n=0}^mn!\big(f^{(n)},\widehat{\psi^{(n)}}\big)_A=(\tilde{F},p)_A \] and hence $F-\tilde{F}\in\mathcal{P}^\perp=\{0\}$, i.e. $F=\tilde{F}$. It clearly follows \[ \|F\|_A^2=\lim_{j\to\infty}\|\tilde{F}_j\|_A^2=\lim_{j\to\infty}\sum_{n=0}^j n!\|f^{(n)}\|_A^2. \] \end{proof} \end{theorem} \end{section} \end{chapter} \begin{chapter}{Conditional Expectations} \begin{section}{Representation} \begin{definition} Let $(\Omega,\mathcal{F},P)$ be a probability space and $\mathcal{G}$ be a sub-$\sigma$-algebra of $\mathcal{F}$. Let $X:\Omega\to\mathbb{R}$ be a non-negative or integrable random variable. A random variable $Y:\Omega\to\mathbb{R}$ is called \emph{conditional expectation of $X$ given $\mathcal{G}$}, if $Y$ is $\mathcal{G}$-measurable and $\mathbb{E}[\mathbf{1}_GX]=\mathbb{E}[\mathbf{1}_GY]$ holds for all $G\in\mathcal{G}$. We denote the set of all conditional expectations of $X$ given $\mathcal{G}$ by $\mathbb{E}[X|\mathcal{G}]$. If $Z$ is another random variable we denote $\mathbb{E}[X|Z]:=\mathbb{E}[X|\sigma(Z)]$. \end{definition} \begin{remark}\label{augmentationremark} Let $(\Omega,\mathcal{F},P)$ be a probability space, $\mathcal{G}$ be a sub-$\sigma$-algebra of $\mathcal{F}$ and $p\ge 1$. One can show that the conditional expectation defines a contractive operator from $L^p(\Omega,\mathcal{F},P)$ onto $L^p(\Omega,\mathcal{G},P)$. In particular, the conditional expectation is independent of representatives. 
Via the isometry \[ L^p(\Omega,\mathcal{G},P)\ni[g]\longmapsto[g]_\mathcal{F}:=\big\{f:f\text{ is }\mathcal{F}\text{-measurable and } P(f=g)=1\big\}\in L^p(\Omega,\mathcal{F},P) \] we may consider $L^p(\Omega,\mathcal{G},P)$ as a closed subspace of $L^p(\Omega,\mathcal{F},P)$. For $[X]\in L^p(\Omega,\mathcal{F},P)$ we especially consider $\mathbb{E}[X|\mathcal{G}]$ as an element of $L^p(\Omega,\mathcal{F},P)$. \end{remark} \begin{remark} Let $(\Omega,\mathcal{F},P)$ be a probability space and $\mathcal{G}$ be a sub-$\sigma$-algebra of $\mathcal{F}$. Then for $[X]\in L^2(\Omega,\mathcal{F},P)$ we have $\mathbb{E}[X|\mathcal{G}]=\mathcal{P}_\mathcal{G}([X])$, where \[ \mathcal{P}_\mathcal{G}:L^2(\Omega,\mathcal{F},P)\to L^2(\Omega,\mathcal{G},P) \] is the orthogonal projection. In particular, since the orthogonal projection is continuous, for a Cauchy sequence $([X_n])_{n\in\mathbb{N}}$ in $L^2(\Omega)$ we have $\lim_{n\to\infty}\mathbb{E}\big[[X_n]|\mathcal{G}\big]=\mathbb{E}\big[\lim_{n\to\infty}[X_n]\big|\mathcal{G}\big]$ in $L^2(\Omega)$. \end{remark} The well-known factorisation lemma will be very useful in our proofs later on: \begin{lemma}[Factorisation lemma]\label{factorisationlemma} Let $(\Omega,\mathcal{F})$ be a measurable space and $Y:\Omega\to\mathbb{R}$ be measurable. If $X:\Omega\to\mathbb{R}$ is $\mathcal{G}:=\sigma(Y)$-measurable, then there exists a measurable $g:\mathbb{R}\to\mathbb{R}$ such that $X=g(Y)$. \begin{proof} If $X$ is an elementary function, there exists $n\in\mathbb{N}$ and $G_1,\dots,G_n\in\mathcal{G}$, $\alpha_1,\dots,\alpha_n\in\mathbb{R}$ with $X=\sum_{i=1}^n\alpha_i\mathbf{1}_{G_i}$. Since $\mathcal{G}=\sigma(Y)$, there exist Borel sets $B_1,\dots,B_n$ with $G_i=Y^{-1}(B_i)$ for $i=1,\dots,n$. Thus \[ X=\sum_{i=1}^n\alpha_i\mathbf{1}_{G_i}=\sum_{i=1}^n\alpha_i\mathbf{1}_{Y^{-1}(B_i)}=\sum_{i=1}^n\alpha_i\mathbf{1}_{B_i}(Y)=g(Y)\quad\text{for } g:=\sum_{i=1}^n\alpha_i\mathbf{1}_{B_i}. 
\] If $X$ is non-negative, there exist elementary functions $(X_n)_{n\in\mathbb{N}}$ with $\sup_{n\in\mathbb{N}}X_n(\omega)=X(\omega)$ for all $\omega\in\Omega$. For each $n\in\mathbb{N}$ there exists a measurable $g_n$ with $X_n=g_n(Y)$ as above. Then the pointwise defined function $g:=\sup_{n\in\mathbb{N}}g_n$ again is a measurable function and we have $X=\sup X_n=\sup g_n(Y)=g(Y)$. For some arbitrary measurable $X$ we use the decomposition $X=X^+-X^-$, where $X^+:=\max\{X,0\}\ge 0$ and $X^-:=\max\{-X,0\}\ge0$, to obtain two measurable functions $g^+$ and $g^-$ with $X^+=g^+(Y)$ and $X^-=g^-(Y)$ which yields $X=g(Y)$ for $g:=g^+-g^-$. \end{proof} \end{lemma} \begin{remark} Let $(\Omega,\mathcal{F},P)$ be a probability space and $Z_1$ and $Z_2$ be two random variables with $Z_1=Z_2$ almost surely. In general, a random variable $X$ which is $\sigma(Z_1)$-measurable is not necessarily measurable with respect to $\sigma(Z_2)$, but since $X=g(Z_1)$ for some measurable $g$ by the factorisation lemma, it holds that $X$ almost surely equals the $\sigma(Z_2)$-measurable variable $g(Z_2)$. Hence for $p\ge 1$ we have $L^p(\Omega,\sigma(Z_1),P)=L^p(\Omega,\sigma(Z_2),P)$ as subspaces of $L^p(\Omega,\mathcal{F},P)$ as in Remark~\ref{augmentationremark}. This allows us to define $\mathbb{E}\big[X\big|[Z]\big]:=\mathbb{E}[X|Z]$, where $[Z]$ is an equivalence class of random variables with respect to almost sure equality. \end{remark} The following, sometimes called \emph{L\'evy's zero-one law}, is an implication of Doob's well-known martingale convergence theorem, see e.g. \cite{Bogachev2007,Oksendal2003}. \begin{theorem}[L\'evy's zero-one law] Let $(\Omega,\mathcal{F},P)$ be a probability space, $(\mathcal{F}_n)_{n\in\mathbb{N}}$ a filtration, $\mathcal{F}_\infty:=\sigma(\bigcup_{n\in\mathbb{N}}\mathcal{F}_n)$ and $X\in L^1(\Omega)$. Then $\lim_{n\to\infty}\mathbb{E}[X|\mathcal{F}_n]=\mathbb{E}[X|\mathcal{F}_\infty]$ in $L^1(\Omega)$. 
\end{theorem} Since we focus on the space of $L^2$-functions, we need the following proposition: \begin{proposition}\label{expectconv} Let $(\Omega,\mathcal{F},P)$ be a probability space, $(\mathcal{F}_n)_{n\in\mathbb{N}}$ a filtration and assume $X\in L^2(\Omega)$. If $(\mathbb{E}[X|\mathcal{F}_n])_{n\in\mathbb{N}}$ is a Cauchy sequence in $L^2(\Omega)$, then for $\mathcal{F}_\infty:=\sigma(\bigcup_{n\in\mathbb{N}}\mathcal{F}_n)$ we have $\lim_{n\to\infty}\mathbb{E}[X|\mathcal{F}_n]=\mathbb{E}[X|\mathcal{F}_\infty]$ in $L^2(\Omega)$. \begin{proof} Let $Y\in L^2(\Omega)$ be the limit of $(\mathbb{E}[X|\mathcal{F}_n])_{n\in\mathbb{N}}$. Since we have $\|\cdot\|_{L^1}\le\|\cdot\|_{L^2}$ on $L^2(\Omega)$ by Remark~\ref{Hoelderprob}, together with L\'evy's zero-one law we get \begin{align*} \|Y-\mathbb{E}[X|\mathcal{F}_\infty]\|_{L^1} & \le\inf_{n\in\mathbb{N}}\big(\|Y-\mathbb{E}[X|\mathcal{F}_n]\|_{L^1}+\|\mathbb{E}[X|\mathcal{F}_n]-\mathbb{E}[X|\mathcal{F}_\infty]\|_{L^1}\big)\\ & \le\inf_{n\in\mathbb{N}}\big(\|Y-\mathbb{E}[X|\mathcal{F}_n]\|_{L^2}+\|\mathbb{E}[X|\mathcal{F}_n]-\mathbb{E}[X|\mathcal{F}_\infty]\|_{L^1}\big)=0, \end{align*} since both summands tend to $0$ as $n\to\infty$. So $Y=\mathbb{E}[X|\mathcal{F}_\infty]$ in $L^1(\Omega)$. Since $Y\in L^2(\Omega)$, we also get $Y=\mathbb{E}[X|\mathcal{F}_\infty]$ in $L^2(\Omega)$. \end{proof} \end{proposition} \begin{remark} We note that in the above proposition the assumption that $(\mathbb{E}[X|\mathcal{F}_n])_{n\in\mathbb{N}}$ is a Cauchy sequence in $L^2$ is actually redundant, since $\left\|\mathbb{E}[X|\mathcal{F}_n]\right\|_{L^2}\le\|X\|_{L^2}$ for all $n\in\mathbb{N}$, hence $(\mathbb{E}[X|\mathcal{F}_n])_{n\in\mathbb{N}}$ is a bounded martingale in $L^2$, and one can show that a martingale which is bounded in $L^p$ for some $p\in(1,\infty)$ already converges in $L^p$, see e.g. \cite{Bogachev2007}. 
\end{remark} \begin{theorem}\label{condexpectfinite} Let $n\in\mathbb{N}_0$, $m\in\mathbb{N}$, $f^{(n)}\in\ell_A^2(\mathcal{H})^{\wot n}$ and let $\{\psi_1,\dots,\psi_m\}$ be an orthonormal system in $\ell_A^2(\mathcal{H})$. Then for $\mathcal{G}:=\sigma\big(\langle\psi_k,\cdot\rangle,k=1,\dots,m\big)$ we have \begin{equation}\label{condexpectfiniteformula} \mathbb{E}\left[\wickpoly{f^{(n)}}\Big|\mathcal{G}\right]=\wickpoly{\mathcal{P}_\psi^{\otimes n} f^{(n)}}, \end{equation} where $\mathcal{P}_\psi:\ell_A^2(\mathcal{H})\to\spann\{\psi\}:=\spann\{\psi_1,\dots,\psi_m\}$ is the orthogonal projection, i.e. \[ \mathcal{P}_\psi f=\sum_{k=1}^m(f,\psi_k)_A\psi_k\quad\text{for }f\in\ell_A^2(\mathcal{H}). \] \begin{proof} For $n=0$ we have $\wickpoly{f^{(n)}}=\wickpoly{\mathcal{P}_\psi^{\otimes n} f^{(n)}}\in\mathbb{R}$, hence the statement is clear in that case and we may assume $n\not=0$. We will first prove the assertion for $f^{(n)}=\varphi^{\otimes n}$ for some $\varphi\in s(\mathcal{N})$ only and afterwards derive that the property transfers to arbitrary $f^{(n)}\in\ell_A^2(\mathcal{H})^{\wot n}$ by density arguments. So let $\varphi\in s(\mathcal{N})$. First note that $\wickpoly{\mathcal{P}_\psi^{\otimes n}\varphi^{\otimes n}}=\wickpoly{(\mathcal{P}_\psi\varphi)^{\otimes n}}$ is $\mathcal{G}$-measurable by Corollary~\ref{wickhermite} and we have $(\psi_k,\varphi-\mathcal{P}_\psi\varphi)_A=0$ for $k=1,\dots,m$. If $\varphi\in\spann\{\psi\}$, then $\wickpoly{\varphi^{\otimes n}}$ is $\mathcal{G}$-measurable and hence \[ \mathbb{E}\left[\wickpoly{\varphi^{\otimes n}}\Big|\mathcal{G}\right]=\wickpoly{\varphi^{\otimes n}}=\wickpoly{\mathcal{P}_\psi^{\otimes n}\varphi^{\otimes n}}. \] Otherwise $\varphi\not\in\spann\{\psi\}$, so $\varphi-\mathcal{P}_\psi\varphi\not=0$. For $G\in\mathcal{G}$ there exists a measurable function $g$ with $\mathbf{1}_G=g(\langle\psi_1,\cdot\rangle,\dots,\langle\psi_m,\cdot\rangle)$ by Lemma~\ref{factorisationlemma}. 
To shorten the notation we write $g(\psi)=g(\langle\psi_1,\cdot\rangle,\dots,\langle\psi_m,\cdot\rangle)$. We distinguish two cases: If $\mathcal{P}_\psi\varphi=0$, then by Corollary~\ref{wickhermite} we have \[ \wickpoly{\varphi^{\otimes n}}=\|\varphi\|_A^nH_n\left(\frac{\langle\varphi,\cdot\rangle}{\|\varphi\|_A}\right)=\|\varphi\|_A^nH_n\left(\frac{\langle\varphi-\mathcal{P}_\psi\varphi,\cdot\rangle}{\|\varphi-\mathcal{P}_\psi\varphi\|_A}\right), \] which together with Corollary~\ref{orthofubinigeneral}, Equation~\eqref{Hermiteintegraliszero} from page~\pageref{Hermiteintegraliszero} and $n\not=0$ yields \begin{align*} \int_G\wickpoly{\varphi^{\otimes n}}d\mu_A & = \|\varphi\|_A^n\int_{s'}g(\psi)H_n\left(\frac{\langle\varphi-\mathcal{P}_\psi\varphi,\cdot\rangle}{\|\varphi-\mathcal{P}_\psi\varphi\|_A}\right)d\mu_A\\ & = \|\varphi\|_A^n\int_{s'}g(\psi)d\mu_A\int_{\mathbb{R}}H_n(x)d\mu_1(x)\\ & = \|\varphi\|_A^n\mu_A(G)\delta_{0,n}\\ & = 0\\ & = \int_G\wickpoly{\mathcal{P}_\psi^{\otimes n}\varphi^{\otimes n}}d\mu_A. \end{align*} In the other case, $\mathcal{P}_\psi\varphi\not=0$, we recall the assumption $\varphi-\mathcal{P}_\psi\varphi\not=0$ and use the Pythagorean theorem to observe \[ 0<\|\mathcal{P}_\psi\varphi\|_A^2 < \|\mathcal{P}_\psi\varphi\|_A^2+\|\varphi-\mathcal{P}_\psi\varphi\|_A^2=\|\varphi\|_A^2, \] hence for $\beta:=\|\mathcal{P}_\psi\varphi\|_A\cdot\|\varphi\|_A^{-1}$ we have $\beta\in(0,1)$, thus $\alpha:=\sqrt{1-\beta^2}\in(0,1)$ and it holds $\alpha^2+\beta^2=1$. 
Corollary~\ref{wickhermite} and Equation~\eqref{Hermitebinomial} yield \begin{align} \wickpoly{\varphi^{\otimes n}} & = \|\varphi\|_A^nH_n\left(\frac{\langle\varphi,\cdot\rangle}{\|\varphi\|_A}\right)\notag\\ & = \|\varphi\|_A^nH_n\left(\frac{\langle\varphi-\mathcal{P}_\psi\varphi,\cdot\rangle}{\|\varphi\|_A}+\frac{\langle \mathcal{P}_\psi\varphi,\cdot\rangle}{\|\varphi\|_A}\right)\notag\\ & = \|\varphi\|_A^nH_n\left(\frac{\alpha\langle\varphi-\mathcal{P}_\psi\varphi,\cdot\rangle}{\alpha\|\varphi\|_A}+\frac{\beta\langle\mathcal{P}_\psi\varphi,\cdot\rangle}{\beta\|\varphi\|_A}\right)\notag\\ & = \|\varphi\|_A^n\sum_{k=0}^n\binom{n}{k}\alpha^k\beta^{n-k}\underbrace{H_k\left(\frac{\langle\varphi-\mathcal{P}_\psi\varphi,\cdot\rangle}{\alpha\|\varphi\|_A}\right)H_{n-k}\left(\frac{\langle \mathcal{P}_\psi\varphi,\cdot\rangle}{\beta\|\varphi\|_A}\right)}_{=:I_k(\cdot)}\label{somelabel3}. \end{align} For $k\in\{0,\dots,n\}$ by Corollary~\ref{orthofubinigeneral} and Equation~\eqref{Hermiteintegraliszero} we have \begin{align} \int_{s'}g(\psi)I_kd\mu_A & = \int_{s'}g(\psi)H_{n-k}\left(\frac{\langle\mathcal{P}_\psi\varphi,\cdot\rangle}{\beta\|\varphi\|_A}\right)d\mu_A\cdot\int_{s'}H_k\left(\frac{\langle\varphi-\mathcal{P}_\psi\varphi,\cdot\rangle}{\alpha\|\varphi\|_A}\right)d\mu_A\notag\\ & = \int_{s'}g(\psi)H_{n-k}\left(\frac{\langle\mathcal{P}_\psi\varphi,\cdot\rangle}{\|\mathcal{P}_\psi\varphi\|_A}\right)d\mu_A\cdot\delta_{0,k},\label{somelabel4} \end{align} where we used $\alpha\|\varphi\|_A=\|\varphi-\mathcal{P}_\psi\varphi\|_A$. 
Finally we obtain \begin{align*} \int_G\wickpoly{\varphi^{\otimes n}}d\mu_A & = \int_{s'}g(\psi)\wickpoly{\varphi^{\otimes n}}d\mu_A\\ & \stackrel{\eqref{somelabel3}}{=} \|\varphi\|_A^n\sum_{k=0}^n\binom{n}{k}\alpha^k\beta^{n-k}\int_{s'}g(\psi)I_kd\mu_A\\ & \stackrel{\eqref{somelabel4}}{=} \int_{s'}g(\psi)\|\mathcal{P}_\psi\varphi\|_A^nH_n\left(\frac{\langle\mathcal{P}_\psi\varphi,\cdot\rangle}{\|\mathcal{P}_\psi\varphi\|_A}\right)d\mu_A\\ & = \int_G\wickpoly{\mathcal{P}_\psi^{\otimes n}\varphi^{\otimes n}}d\mu_A. \end{align*} We have established $\mathbb{E}\left[\wickpoly{\varphi^{\otimes n}}|\mathcal{G}\right]=\wickpoly{\mathcal{P}_\psi^{\otimes n}\varphi^{\otimes n}}$ for $\varphi\in s(\mathcal{N})$. Now for some $\varphi^{(n)}\in s(\mathcal{N})^{\wot n}$ with representation \[ \varphi^{(n)}=\sum_{k=1}^r\alpha_k\varphi_k^{\otimes n}\quad\text{for some }r\in\mathbb{N},\,\alpha\in\mathbb{R}^r\text{ and }\varphi_1,\dots,\varphi_r\in s(\mathcal{N}) \] by linearity of the conditional expectation we have \[ \mathbb{E}\left[\wickpoly{\varphi^{(n)}}|\mathcal{G}\right]=\sum_{k=1}^r\alpha_k\wickpoly{\mathcal{P}_\psi^{\otimes n}\varphi_k^{\otimes n}}=\wickpoly{\mathcal{P}_\psi^{\otimes n} \varphi^{(n)}}. \] Since $s(\mathcal{N})^{\wot n}$ is dense in $\ell_A^2(\mathcal{H})^{\wot n}$ and both $\mathcal{P}_\psi^{\otimes n}$ and the conditional expectation are continuous, we also have \eqref{condexpectfiniteformula} for $f^{(n)}\in\ell_A^2(\mathcal{H})^{\wot n}$. \end{proof} \end{theorem} \begin{corollary} Let $F\in L^2(\mu_A)$ with chaos decomposition $F=\sum_{n=0}^\infty\wickpoly{f^{(n)}}$. For $\{\psi_1,\dots,\psi_m\}$, $\mathcal{G}$ and $\mathcal{P}_\psi$ as in the previous theorem we have \begin{equation} \mathbb{E}[F|\mathcal{G}]=\sum_{n=0}^\infty\wickpoly{\mathcal{P}_\psi^{\otimes n} f^{(n)}}. 
\end{equation} \begin{proof} The conditional expectation operator is linear and continuous, hence \[ \mathbb{E}[F|\mathcal{G}]=\sum_{n=0}^\infty\mathbb{E}\left[\wickpoly{f^{(n)}}|\mathcal{G}\right]=\sum_{n=0}^\infty\wickpoly{\mathcal{P}_\psi^{\otimes n} f^{(n)}}. \] \end{proof} \end{corollary} \begin{lemma}\label{samespansamesigmaalgebra} Let $(h_i)_{i\in\mathbb{N}}$ and $(h'_j)_{j\in\mathbb{N}}$ be two orthonormal bases in $\mathcal{H}$ and assume the sets $\{x_1,\dots,x_n\}$ and $\{y_1,\dots,y_m\}$ span the same subspace in $\ell^2(\mathbb{R})$ for some $n,m\in\mathbb{N}$. Then for the $\sigma$-algebras \[ \mathcal{G}_1:=\sigma\big(\langle h_i\bullet x_k,\cdot\rangle:i\in\mathbb{N},k=1,\dots,n\big)\quad\text{and}\quad\mathcal{G}_2:=\sigma\big(\langle h'_j\bullet y_l,\cdot\rangle:j\in\mathbb{N},l=1,\dots,m\big) \] we have that $L^2(s'(\mathcal{N}),\mathcal{G}_1,\mu_A)=L^2(s'(\mathcal{N}),\mathcal{G}_2,\mu_A)$ as subspaces of $L^2(\mu_A)$, i.e. in the sense of Remark~\ref{augmentationremark}. \begin{proof} By symmetry it suffices to show $L^2(s'(\mathcal{N}),\mathcal{G}_1,\mu_A)\subset L^2(s'(\mathcal{N}),\mathcal{G}_2,\mu_A)$. For $i\in\mathbb{N}$ and $k\in\{1,\dots,n\}$ we have \[ h_i\bullet x_k=\left(\sum_{j=1}^\infty(h_i,h'_j)h'_j\right)\bullet\left(\sum_{l=1}^m\alpha_ly_l\right)=\sum_{j=1}^\infty\sum_{l=1}^m\alpha_l(h_i,h'_j)h'_j\bullet y_l\quad\text{for some }\alpha\in\mathbb{R}^m. \] Hence $\langle h_i\bullet x_k,\cdot\rangle$ is the limit of $\left(\sum_{j=1}^N\sum_{l=1}^m\alpha_l(h_i,h'_j)\langle h'_j\bullet y_l,\cdot\rangle\right)_{N\in\mathbb{N}}$ in the closed subspace $L^2(s'(\mathcal{N}),\mathcal{G}_2,\mu_A)$ and thus an element in the latter itself. \end{proof} \end{lemma} \begin{definition} Let $n\in\mathbb{N}$ and $x_1,\dots,x_n\in\ell^2(\mathbb{R})$. 
For $F\in L^2(\mu_A)$ we define \[ \mathbb{E}[F|x_1,\dots,x_n]:=\mathbb{E}[F|\mathcal{G}],\quad\text{where }\mathcal{G}:=\sigma\big(\langle h_i\bullet x_k,\cdot\rangle:i\in\mathbb{N},k=1,\dots,n\big) \] for some orthonormal basis $(h_i)_{i\in\mathbb{N}}$ of $\mathcal{H}$. This notation makes sense since $\mathbb{E}[F|\mathcal{G}]$ does not depend on the particular choice of $(h_i)_{i\in\mathbb{N}}$, as proven in the lemma above. \end{definition} \begin{corollary}\label{samespansamecondexpect} Let $F\in L^2(\mu_A)$ and let $\{x_1,\dots,x_n\}$ and $\{y_1,\dots,y_m\}$ span the same subspace in $\ell^2(\mathbb{R})$ for some $n,m\in\mathbb{N}$. Then \[ \mathbb{E}[F|x_1,\dots,x_n]=\mathbb{E}[F|y_1,\dots,y_m]. \] \end{corollary} From this point we consider $A\in L(\ell^2(\mathcal{H}))$ to be induced by some self-adjoint and positive definite operator $A\in L(\ell^2(\mathbb{R}))$. \begin{remark} Let $(f_n)_{n\in\mathbb{N}}$ be a family of measureable functions on some measureable space and let $\mathcal{F}_m:=\sigma(f_i:i=1,\dots,m)$ and $\mathcal{F}_\infty:=\sigma(f_n:n\in\mathbb{N})$. Then $\mathcal{F}_\infty=\sigma\left(\bigcup_{m\in\mathbb{N}}\mathcal{F}_m\right)$. \end{remark} \begin{theorem}\label{condexpect} Let $f\in \ell^2(\mathcal{H})$, $n\in\mathbb{N}$ and $x_1,\dots,x_n\in\ell^2(\mathbb{R})$ be orthonormal with respect to $(\cdot,\cdot)_A$. Then we have \[ \mathbb{E}[\langle f,\cdot\rangle|x_1,\dots,x_n]=\sum_{k=1}^n\big\langle[f,Ax_k]\bullet x_k,\cdot\big\rangle=\langle Pf,\cdot\rangle, \] where $P:\ell^2(\mathbb{R})\to\spann\{x_1,\dots,x_n\}$ is the orthogonal projection with respect to $(\cdot,\cdot)_A$. In particular $\mathbb{E}[\langle f,\cdot\rangle|x_1,\dots,x_n]=\sum_{k=1}^n\mathbb{E}[\langle f,\cdot\rangle|x_k]$. Note that we require $f\in\ell^2(\mathcal{H})$, since $[f,\cdot]$ is not defined for general $f\in\ell_A^2(\mathcal{H})$, as we shall see in Example~\ref{noextension}. 
\begin{proof} Let $(h_i)_{i\in\mathbb{N}}$ be some orthonormal basis of $\mathcal{H}$ and note that then $(h_i\bullet x_k)_{i\in\mathbb{N},k=1,\dots,n}$ is an orthonormal system in $\ell_A^2(\mathcal{H})$ by Corollary~\ref{miniregel}. We use Proposition~\ref{expectconv} together with the remark above, Theorem~\ref{condexpectfinite} and Proposition~\ref{rechenregeln} to obtain \begin{align*} \mathbb{E}[\langle f,\cdot\rangle|x_1,\dots,x_n] & = \lim_{N\to\infty}\mathbb{E}[\langle f,\cdot\rangle|\langle h_i\bullet x_k,\cdot\rangle,i=1,\dots,N,k=1,\dots,n]\\ & = \lim_{N\to\infty}\left\langle\sum_{k=1}^n\sum_{i=1}^N(f,h_i\bullet x_k)_Ah_i\bullet x_k,\cdot\right\rangle\\ & = \sum_{k=1}^n\left\langle\lim_{N\to\infty}\sum_{i=1}^N\big([f,Ax_k],h_i\big)h_i\bullet x_k,\cdot\right\rangle\\ & = \sum_{k=1}^n\big\langle[f,Ax_k]\bullet x_k,\cdot\big\rangle, \end{align*} which proves the first equality. For the orthogonal projection $P:\ell^2(\mathbb{R})\to\spann\{x_1,\dots,x_n\}$ it clearly holds $Px=\sum_{k=1}^n(x,x_k)_Ax_k$ for $x\in\ell^2(\mathbb{R})$. The identity $f=\sum_{l=0}^\infty f_l\bullet e_l$, continuity of $[\cdot,\cdot]$ and Corollary~\ref{miniregel} give \begin{align*} \sum_{k=1}^n[f,Ax_k]\bullet x_k & = \sum_{k=1}^n\left[\sum_{l=0}^\infty f_l\bullet e_l,Ax_k\right]\bullet x_k\\ & = \sum_{k=1}^n\sum_{l=0}^\infty[f_l\bullet e_l,Ax_k]\bullet x_k\\ & = \sum_{k=1}^n\sum_{l=0}^\infty(e_l,x_k)_Af_l\bullet x_k\\ & = \sum_{l=0}^\infty f_l\bullet\left(\sum_{k=1}^n(e_l,x_k)_Ax_k\right)\\ & = \sum_{l=0}^\infty f_l\bullet Pe_l\\ & = Pf, \end{align*} where we view $P$ as an operator also defined on $\ell^2(\mathcal{H})$ as in Theorem~\ref{matrixextension}. Hence \[ \mathbb{E}[\langle f,\cdot\rangle|x_1,\dots,x_n]=\sum_{k=1}^n\big\langle[f,Ax_k]\bullet x_k,\cdot\big\rangle=\langle Pf,\cdot\rangle. 
\] \end{proof} \end{theorem} \end{section} \begin{section}{Examples and Application}\label{appsect} \begin{example}\label{noextension} In this example we will show that $[\cdot,\cdot]$ does not necessarily possess a continuous extension to $\ell_A^2(\mathcal{H})\times\ell^2(\mathbb{R})$. Let $A\in L(\ell^2(\mathbb{R}))$ be the operator uniquely given by \[ Ae_n=\frac{1}{n^2}e_n\quad\text{for }n\in\mathbb{N}, \] which exists since $\|Ax\|\le\|x\|$ for $x\in\spann\{e_1,e_2,\dots\}$. Clearly $A$ is self-adjoint and positive definite. Let $h\in\mathcal{H}$ with $\|h\|=1$ be arbitrary and define $f_n:=\sum_{k=1}^nh\bullet e_k\in\ell^2(\mathcal{H})$ for $n\in\mathbb{N}$. Then $(f_n)_{n\in\mathbb{N}}$ is a Cauchy sequence in $\ell^2(\mathcal{H})$ with respect to $(\cdot,\cdot)_A$, since for $n,m\in\mathbb{N}$ it holds \[ \|f_n-f_m\|_A^2=\left(\sum_{k=m+1}^nh\bullet e_k,\sum_{k=m+1}^nh\bullet\frac{1}{k^2}e_k\right)=\sum_{k=m+1}^n\frac{1}{k^2}. \] Let $x\in\ell^2(\mathbb{R})$ be given by $x_k=k^{-1}$ for $k\in\mathbb{N}$. Then \[ \big\|[f_n,x]\big\|=\left\|\sum_{k=1}^n\frac{1}{k}h\right\|=\sum_{k=1}^n\frac{1}{k}\quad\text{for }n\in\mathbb{N}, \] so the sequence $\big([f_n,x]\big)_{n\in\mathbb{N}}$ is unbounded and hence does not converge in $\mathcal{H}$. Thus no continuous extension of $[\cdot,\cdot]$ onto $\ell_A^2(\mathcal{H})\times\ell^2(\mathbb{R})$ exists, since otherwise we would have $\lim_{n\to\infty}[f_n,x]=[f,x]\in\mathcal{H}$, where $f$ is the limit of $(f_n)_{n\in\mathbb{N}}$ in $\ell_A^2(\mathcal{H})$. \end{example} \begin{example} Let the linear operator $A:\ell^2(\mathbb{R})\to\ell^2(\mathbb{R})$ be given by \[ Ae_1=e_1+\frac{1}{2}e_2,\quad Ae_2=\frac{1}{2}e_1+e_2\quad\text{and}\quad Ae_n=e_n\,\text{for }n\ge 3. 
\] With respect to $(e_k)_{k\in\mathbb{N}}$ the matrix representation of $A$ becomes \[ \big((e_k,e_l)_A\big)_{k,l\in\mathbb{N}}=\left(\begin{array}{c|c} \begin{matrix}1&\frac{1}{2}\\[1mm]\frac{1}{2}&1\end{matrix} & 0 \\[3mm]\hline 0 & \Id \end{array}\right), \] which can easily be seen to be bounded, self-adjoint and positive definite. Since only finitely many off-diagonal entries are distinct from zero, we clearly have $\ell_A^2(\mathcal{H})=\ell^2(\mathcal{H})$. For $f\in\ell^2(\mathcal{H})$ Theorem~\ref{condexpect} yields \begin{enumerate} \item $\mathbb{E}[\langle f,\cdot\rangle|e_1]=\big\langle(f_1+\frac{1}{2}f_2)\bullet e_1,\cdot\big\rangle$, \item $\mathbb{E}[\langle f,\cdot\rangle|e_2]=\big\langle(\frac{1}{2}f_1+f_2)\bullet e_2,\cdot\big\rangle$ and \item $\mathbb{E}[\langle f,\cdot\rangle|e_n]=\big\langle f_n\bullet e_n,\cdot\big\rangle$ for $n\ge 3$. \end{enumerate} However, we cannot directly apply the theorem to compute $\mathbb{E}[\langle f,\cdot\rangle|e_1,e_2]$, since $e_1$ and $e_2$ are not orthogonal with respect to $(\cdot,\cdot)_A$. By defining \[ \widehat{e_2}:=\frac{e_2-(e_2,e_1)_Ae_1}{\|e_2-(e_2,e_1)_Ae_1\|_A}=\sqrt{\frac{4}{3}}\Big(e_2-\frac{1}{2}e_1\Big) \] we have $(e_1,\widehat{e_2})_A=0$, $\|e_1\|_A=\|\widehat{e_2}\|_A=1$ and $\spann\{e_1,e_2\}=\spann\{e_1,\widehat{e_2}\}$, hence Corollary~\ref{samespansamecondexpect} makes Theorem~\ref{condexpect} applicable: \begin{align*} \mathbb{E}[\langle f,\cdot\rangle|e_1,e_2]=\mathbb{E}[\langle f,\cdot\rangle|e_1,\widehat{e_2}] & = \big\langle[f,Ae_1]\bullet e_1,\cdot\big\rangle+\big\langle[f,A\widehat{e_2}]\bullet\widehat{e_2},\cdot\big\rangle\\ & = \left\langle\left(f_1+\frac{1}{2}f_2\right)\bullet e_1,\cdot\right\rangle+\left\langle f_2\bullet\left(e_2-\frac{1}{2}e_1\right),\cdot\right\rangle\\ & = \langle f_1\bullet e_1,\cdot\rangle+\langle f_2\bullet e_2,\cdot\rangle. 
\end{align*} \end{example} \begin{application} In \cite{FrankSeibold2009}, the authors studied the partial differential equation of radiative transfer, \begin{equation}\label{rteq} \partial_tI(x,\mu,t)+\mu\partial_xI(x,\mu,t)+(\sigma(x)+\kappa(x))I(x,\mu,t)=\frac{\sigma(x)}{2}\int_{-1}^1I(x,\mu',t)d\mu'+q(x,t) \end{equation} with $t>0$, $x\in(a,b)$ and $\mu\in[-1,1]$. The following approach was used: Set \[ I_l(x,t):=\int_{-1}^1I(x,\mu,t)P_l(\mu)d\mu=(I(x,\cdot,t),P_l)_{L^2([-1,1])}\quad\text{for }l=0,1,2,\dots, \] where the $P_l$ are the Legendre polynomials, which form a complete orthogonal system in the Hilbert space $L^2([-1,1])$ and satisfy $\|P_l\|_{L^2([-1,1])}^2=\frac{2}{2l+1}$. Using the recursion relation for the Legendre polynomials it was proven that \eqref{rteq} is equivalent to the infinite tridiagonal system of first-order partial differential equations \begin{equation}\label{rteq2} \partial_tI_k+b_{k,k-1}\partial_xI_{k-1}+b_{k,k+1}\partial_xI_{k+1}=-c_kI_k+q_k,\quad k=0,1,2,\dots, \end{equation} where \[ b_{k,l}=\frac{k+1}{2k+1}\delta_{k+1,l}+\frac{k}{2k+1}\delta_{k-1,l},\quad c_k=\begin{cases}\kappa & k=0\\\kappa+\sigma & k>0\end{cases},\quad\text{and }q_k=\begin{cases}2\kappa q & k=0\\0 & k>0\end{cases}. \] To start numerical computations, only the first $N$ equations in \eqref{rteq2} can be considered. The problem is to decide how to replace the dependence on $I_{N+1}$ in the equation for $I_N$. A simple approach would be to truncate the system by setting $I_l=0$ for $l>N$, which is called the $P_N$ closure. The approach pursued in \cite{FrankSeibold2009} was the method of \emph{optimal prediction}: Assume one is aware of some correlation between the moments $I_l$, $l=0,1,2,\dots$, via a correlation matrix $A$. Instead of simply neglecting $I_{N+1}$, the information in $I_0,\dots,I_N$ could be used to compute the mean solution for $I_{N+1}$, given $I_0,\dots,I_N$.
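The coefficients $b_{k,l}$ stem from Bonnet's recursion $\mu P_k(\mu)=\frac{k+1}{2k+1}P_{k+1}(\mu)+\frac{k}{2k+1}P_{k-1}(\mu)$. The following small numerical sketch (ours, not taken from \cite{FrankSeibold2009}) checks this recursion and the stated normalization $\|P_l\|_{L^2([-1,1])}^2=\frac{2}{2l+1}$ with NumPy:

```python
import numpy as np
from numpy.polynomial import legendre

mu = np.linspace(-1.0, 1.0, 201)

def P(k, x):
    # Evaluate the k-th Legendre polynomial at x.
    return legendre.legval(x, [0.0] * k + [1.0])

# Bonnet's recursion mu*P_k = ((k+1)*P_{k+1} + k*P_{k-1})/(2k+1),
# the source of b_{k,k-1} = k/(2k+1) and b_{k,k+1} = (k+1)/(2k+1):
for k in range(1, 8):
    lhs = mu * P(k, mu)
    rhs = ((k + 1) * P(k + 1, mu) + k * P(k - 1, mu)) / (2 * k + 1)
    assert np.allclose(lhs, rhs)

# Orthogonality with ||P_l||^2 = 2/(2l+1) on [-1,1], via Gauss-Legendre quadrature:
nodes, weights = legendre.leggauss(40)
for l in range(6):
    norm2 = np.sum(weights * P(l, nodes) ** 2)
    assert np.isclose(norm2, 2.0 / (2 * l + 1))
```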
The formula derived and used in \cite{FrankSeibold2009} was \begin{equation}\label{finalformula} \mathbb{E}[I|I_C]=\mathbb{E}\left[\left.\begin{pmatrix}I_C\\I_F\end{pmatrix}\right|I_C\right]=\begin{pmatrix}I_C\\A_{FC}A_{CC}^{-1}I_C\end{pmatrix}=\begin{pmatrix}\Id_{CC}&0\\A_{FC}A_{CC}^{-1}&0\end{pmatrix}I, \end{equation} where $C=\{0,\dots,N\}$, $F=\{N+1,N+2,\dots\}$ and the correlation matrix $A$ and the sequence $I$ are split into corresponding blocks \[ A=\begin{pmatrix}A_{CC}&A_{CF}\\A_{FC}&A_{FF}\end{pmatrix}\quad\text{and}\quad I=\begin{pmatrix}I_C\\I_F\end{pmatrix}. \] We are going to justify this formula with the results on conditional expectations derived above, provided of course that all necessary assumptions are fulfilled. Let $\mathcal{N}\subset\mathcal{H}\subset\mathcal{N}'$ be a Gel'fand triple, which gives rise to a Gel'fand triple $s(\mathcal{N})\subset\ell^2(\mathcal{H})\subset s'(\mathcal{N})$ by Theorem~\ref{seqgelfand}. Let a self-adjoint and positive definite operator $A\in L(\ell^2(\mathbb{R}))$ be given and consider the Gaussian measure $\mu_A$ on $s'(\mathcal{N})$ as in Definition~\ref{defmuA}. In keeping with the rest of this thesis, we stick to the agreement $0\not\in\mathbb{N}$, so $C=\{1,\dots,N\}$ and $F=\{N+1,N+2,\dots\}$. We identify $A$ with the infinite matrix $\big((e_k,Ae_l)\big)_{k,l\in\mathbb{N}}$ and note that applying $A$ to a sequence $x\in\ell^2(\mathbb{R})$ simply becomes usual infinite-dimensional matrix multiplication. Note that $A_{CC}$ is positive definite and thus bijective on $\mathbb{R}^C$ with inverse $A_{CC}^{-1}$. Consider the matrix \[ P:=\begin{pmatrix}\Id_{CC}&A_{CC}^{-1}A_{CF}\\0&0\end{pmatrix}=\begin{pmatrix}A_{CC}^{-1}&0\\0&0\end{pmatrix}\begin{pmatrix}A_{CC}&A_{CF}\\0&0\end{pmatrix} \] which defines a linear operator $P:\spann\{e_1,e_2,\dots\}\to\spann\{e_1,\dots,e_N\}$ by infinite matrix multiplication.
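In finite dimensions the algebraic identities that make this construction work, namely $P^2=P$, $AP=P^TA$ and the $A$-orthogonality of $x-Px$ to the first $N$ coordinate directions, can be checked numerically. A quick NumPy sketch with a generic positive definite matrix of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 6, 2                      # full dimension and cutoff: C = {1,...,N}
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)      # a generic symmetric positive definite matrix

Acc, Acf = A[:N, :N], A[:N, N:]
Afc = A[N:, :N]

# P = [[I, Acc^{-1} Acf], [0, 0]] and P^T = [[I, 0], [Afc Acc^{-1}, 0]]
P = np.zeros((n, n))
P[:N, :N] = np.eye(N)
P[:N, N:] = np.linalg.solve(Acc, Acf)
PT = np.zeros((n, n))
PT[:N, :N] = np.eye(N)
PT[N:, :N] = Afc @ np.linalg.inv(Acc)

assert np.allclose(PT, P.T)           # P^T really is the transpose of P
assert np.allclose(P @ P, P)          # P is idempotent
assert np.allclose(A @ P, P.T @ A)    # the identity AP = P^T A behind (x, Py)_A = (Px, y)_A

# x - Px is A-orthogonal to span{e_1, ..., e_N}:
x = rng.standard_normal(n)
assert np.allclose((A @ (x - P @ x))[:N], 0.0)
```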
It can easily be verified that $P$ has a continuous linear extension on $\ell^2(\mathbb{R})$ with operator norm $\|P\|_{L(\ell^2(\mathbb{R}))}\le\|A_{CC}^{-1}\|_{L(\mathbb{R}^C)}\|A\|_{L(\ell^2(\mathbb{R}))}$. Similarly for \[ P^T:=\begin{pmatrix}\Id_{CC}&0\\A_{FC}A_{CC}^{-1}&0\end{pmatrix}=\begin{pmatrix}A_{CC}&0\\A_{FC}&0\end{pmatrix}\begin{pmatrix}A_{CC}^{-1}&0\\0&0\end{pmatrix} \] we have $P^T:\ell^2(\mathbb{R})\to\ell^2(\mathbb{R})$ with $\|P^T\|_{L(\ell^2(\mathbb{R}))}\le\|A\|_{L(\ell^2(\mathbb{R}))}\|A_{CC}^{-1}\|_{L(\mathbb{R}^C)}$. Note that the operators $P$ and $P^T$ are adjoint to each other with respect to $(\cdot,\cdot)_{\ell^2(\mathbb{R})}$. The obvious identity $AP=P^TA$ yields that for all $x,y\in\ell^2(\mathbb{R})$ we have \begin{equation}\label{PselfadjforA} (x,Py)_A=(x,APy)=(x,P^TAy)=(Px,Ay)=(Px,y)_A. \end{equation} For $x\in\ell^2(\mathbb{R})$ this equation, together with the fact $P^2=P$, yields \[ \|Px\|_A^2=(Px,Px)_A=(x,P^2x)_A=(x,Px)_A\le\|x\|_A\|Px\|_A, \] hence $P$ can be extended to a bounded linear operator $P:\ell_A^2(\mathbb{R})\to\spann\{e_1,\dots,e_N\}$, where $\ell_A^2(\mathbb{R})$ denotes the completion of $\ell^2(\mathbb{R})$ with respect to $(\cdot,\cdot)_A$. Then Equation~\eqref{PselfadjforA} extends to hold for $x,y\in\ell_A^2(\mathbb{R})$. One easily sees that $P$ is surjective, hence for $x\in\ell_A^2(\mathbb{R})$ and $y\in\spann\{e_1,\dots,e_N\}$ it holds $y=Py$ and thus \[ (x-Px,y)_A=(x-Px,Py)_A=(Px-P^2x,y)_A=(Px-Px,y)_A=0, \] so $P$ is the orthogonal projection from $\ell_A^2(\mathbb{R})$ onto $\spann\{e_1,\dots,e_N\}$. Let $\widehat{e_1},\dots,\widehat{e_N}$ be an orthonormal basis of $\spann\{e_1,\dots,e_N\}$ with respect to $(\cdot,\cdot)_A$. For $f\in\ell^2(\mathcal{H})$ by Corollary~\ref{samespansamecondexpect} and Theorem~\ref{condexpect} we have \[ \mathbb{E}[\langle f,\cdot\rangle|e_1,\dots,e_N]=\mathbb{E}[\langle f,\cdot\rangle|\widehat{e_1},\dots,\widehat{e_N}]=\langle Pf,\cdot\rangle.
\] We are going to justify Equation~\eqref{finalformula} in the sense that \[ \langle P\varphi,\omega\rangle=\langle\varphi,P^T\omega\rangle\quad\text{for }\varphi\in s(\mathcal{N}),\ \omega\in s'(\mathcal{N}). \] To this end, we prove the following three steps: \begin{enumerate} \item $P\varphi\in s(\mathcal{N})$ for $\varphi\in s(\mathcal{N})$, so $\langle P\varphi,\cdot\rangle$ is defined pointwise. \item $P^T\omega\in s'(\mathcal{N})$ for $\omega\in s'(\mathcal{N})$, so the expression $\langle\varphi,P^T\omega\rangle$ makes sense for $\varphi\in s(\mathcal{N})$. \item $\langle P\varphi,\omega\rangle=\langle\varphi,P^T\omega\rangle$ for $\varphi\in s(\mathcal{N})$ and $\omega\in s'(\mathcal{N})$. \end{enumerate} For (i) we show that, for all $p\in\mathbb{N}_0$, $P\varphi\in\ell_p^2(\mathcal{N}_p)$ for every $\varphi\in\ell_p^2(\mathcal{N}_p)$, and that the map $P:\ell_p^2(\mathcal{N}_p)\to\ell_p^2(\mathcal{N}_p)$ is bounded. Let $p\in\mathbb{N}_0$ and $\varphi\in\ell_p^2(\mathcal{N}_p)$. For $\psi:=\left(\begin{smallmatrix}A_{CC}&A_{CF}\\0&0\end{smallmatrix}\right)\varphi$ we have \[ \psi_k=\sum_{i=1}^\infty A_{ki}\varphi_i\quad\text{for }k=1,\dots,N\quad\text{and}\quad\psi_k=0\quad\text{for }k>N. \] Let $\|\cdot\|_p$ and $(\cdot,\cdot)_p$ denote the norm and inner product on $\mathcal{N}_p$, respectively, and let $(\eta_l)_{l\in\mathbb{N}}$ be an orthonormal basis of $\mathcal{N}_p$.
Then for $k\in\{1,\dots,N\}$ and all $n,m\in\mathbb{N}$ we have \begin{align*} \left\|\sum_{i=m+1}^n A_{ki}\varphi_i\right\|_p^2 & = \sum_{i,j=m+1}^nA_{ki}A_{kj}(\varphi_i,\varphi_j)_p\\ & = \sum_{l=1}^\infty\sum_{i,j=m+1}^n(e_k,Ae_i)(e_k,Ae_j)(\varphi_i,\eta_l)_p(\varphi_j,\eta_l)_p\\ & = \sum_{l=1}^\infty\left(e_k,A\sum_{i=m+1}^n(\varphi_i,\eta_l)_pe_i\right)^2\\ & \le \|A\|^2\sum_{l=1}^\infty\sum_{i=m+1}^n(\varphi_i,\eta_l)_p^2\\ & = \|A\|^2\sum_{i=m+1}^n\|\varphi_i\|_p^2\\ & \le \|A\|^2\sum_{i=m+1}^ni^{2p}\|\varphi_i\|_p^2, \end{align*} thus $\left(\sum_{i=1}^n A_{ki}\varphi_i\right)_{n\in\mathbb{N}}$ is a Cauchy sequence in the complete space $\mathcal{N}_p$ with limit $\psi_k\in\mathcal{N}_p$. We have established that $\psi$ is a finitely supported sequence in $\mathcal{N}_p$, hence an element of $\ell_p^2(\mathcal{N}_p)$. Since the matrix $\left(\begin{smallmatrix}A_{CC}^{-1}&0\\0&0\end{smallmatrix}\right)$ only has finitely many non-zero entries, it also defines a bounded linear operator on $\ell_p^2(\mathcal{N}_p)$. Thus $P\varphi=\left(\begin{smallmatrix}A_{CC}^{-1}&0\\0&0\end{smallmatrix}\right)\psi\in\ell_p^2(\mathcal{N}_p)$ with $\|P\varphi\|_{\ell_p^2(\mathcal{N}_p)}\le K\|\varphi\|_{\ell_p^2(\mathcal{N}_p)}$ for some constant $K$. This yields that $\varphi\in s(\mathcal{N})$ implies $P\varphi\in s(\mathcal{N})$, since $s(\mathcal{N})=\bigcap_{p\in\mathbb{N}_0}\ell_p^2(\mathcal{N}_p)$. Furthermore the map $P:s(\mathcal{N})\to s(\mathcal{N})$ is continuous. Then, for $\varphi\in s(\mathcal{N})$, the conditional expectation $\mathbb{E}[\langle\varphi,\cdot\rangle|e_1,\dots,e_N]$ is even defined pointwise by \[ \mathbb{E}[\langle\varphi,\cdot\rangle|e_1,\dots,e_N](\omega)=\langle P\varphi,\omega\rangle\quad\text{for }\omega\in s'(\mathcal{N}). \] Similarly, for (ii) we show that, for all $p\in\mathbb{N}_0$, $P^T\omega\in\ell_{-p}^2(\mathcal{N}_{-p})$ for every $\omega\in\ell_{-p}^2(\mathcal{N}_{-p})$.
For $p=0$ this has already been established, so let $p\ge 1$ and $\omega\in\ell_{-p}^2(\mathcal{N}_{-p})$. Again, since $\left(\begin{smallmatrix}A_{CC}^{-1}&0\\0&0\end{smallmatrix}\right)$ only has finitely many non-zero entries, it defines a bounded linear operator on $\ell_{-p}^2(\mathcal{N}_{-p})$, so $\omega':=\left(\begin{smallmatrix}A_{CC}^{-1}&0\\0&0\end{smallmatrix}\right)\omega\in\ell_{-p}^2(\mathcal{N}_{-p})$. Then $\omega'':=\left(\begin{smallmatrix}A_{CC}&0\\A_{FC}&0\end{smallmatrix}\right)\omega'=P^T\omega$ is a sequence in $\mathcal{N}_{-p}$, since for $k\in\mathbb{N}$ it holds \[ \omega''_k=\sum_{i=1}^NA_{ki}\omega'_i\in\mathcal{N}_{-p}. \] We now check that $\omega''$ lies in $\ell_{-p}^2(\mathcal{N}_{-p})$. Let $\|\cdot\|_{-p}$ and $(\cdot,\cdot)_{-p}$ denote the norm and inner product on $\mathcal{N}_{-p}$, respectively, and let $(\gamma_l)_{l\in\mathbb{N}}$ be an orthonormal basis of $\mathcal{N}_{-p}$. We have the norm estimate \begin{align*} \|\omega''_k\|_{-p}^2 & = \sum_{i,j=1}^NA_{ki}A_{kj}(\omega'_i,\omega'_j)_{-p}\\ & = \sum_{l=1}^\infty\sum_{i,j=1}^N(e_k,Ae_i)(e_k,Ae_j)(\omega'_i,\gamma_l)_{-p}(\omega'_j,\gamma_l)_{-p}\\ & = \sum_{l=1}^\infty\left(e_k,A\sum_{i=1}^N(\omega'_i,\gamma_l)_{-p}e_i\right)^2\\ & \le \|A\|^2\sum_{l=1}^\infty\sum_{i=1}^N(\omega'_i,\gamma_l)_{-p}^2\\ & = \|A\|^2\sum_{i=1}^N\|\omega'_i\|_{-p}^2\\ & \le \|A\|^2N^{2p}\sum_{i=1}^Ni^{-2p}\|\omega'_i\|_{-p}^2\\ & \le \|A\|^2N^{2p}\|\omega'\|_{\ell_{-p}^2(\mathcal{N}_{-p})}^2. \end{align*} Then by the assumption $p\ge 1$ it holds \[ \|\omega''\|_{\ell_{-p}^2(\mathcal{N}_{-p})}^2=\sum_{k=1}^\infty k^{-2p}\|\omega''_k\|_{-p}^2\le\|A\|^2N^{2p}\|\omega'\|_{\ell_{-p}^2(\mathcal{N}_{-p})}^2\sum_{k=1}^\infty k^{-2p}<\infty, \] so $P^T\omega=\omega''\in\ell_{-p}^2(\mathcal{N}_{-p})$ with $\|P^T\omega\|_{\ell_{-p}^2(\mathcal{N}_{-p})}\le K'\|\omega\|_{\ell_{-p}^2(\mathcal{N}_{-p})}$ for some constant $K'$.
This yields that $\omega\in s'(\mathcal{N})$ implies $P^T\omega\in s'(\mathcal{N})$, since $s'(\mathcal{N})=\bigcup_{p\in\mathbb{N}_0}\ell_{-p}^2(\mathcal{N}_{-p})$. Furthermore the map $P^T:s'(\mathcal{N})\to s'(\mathcal{N})$ is continuous. For (iii) we note that $P$ and $P^T$ are adjoint to each other with respect to the usual inner product on $\ell^2(\mathbb{R})$. Then they are also adjoint to each other as operators on $\ell^2(\mathcal{H})$, since for $f,g\in\ell^2(\mathcal{H})$ we have \begin{align*} (f,Pg)_{\ell^2(\mathcal{H})} & = \lim_{N\to\infty}\left(\sum_{k=1}^Nf_k\bullet e_k,\sum_{l=1}^Ng_l\bullet Pe_l\right)_{\ell^2(\mathcal{H})}\\ & = \lim_{N\to\infty}\sum_{k,l=1}^N(f_k,g_l)_\mathcal{H}(e_k,Pe_l)_{\ell^2(\mathbb{R})}\\ & = \lim_{N\to\infty}\sum_{k,l=1}^N(f_k,g_l)_\mathcal{H}(P^Te_k,e_l)_{\ell^2(\mathbb{R})}\\ & = \lim_{N\to\infty}\left(\sum_{k=1}^Nf_k\bullet P^Te_k,\sum_{l=1}^Ng_l\bullet e_l\right)_{\ell^2(\mathcal{H})}\\ & = (P^Tf,g)_{\ell^2(\mathcal{H})}. \end{align*} Since in the chain $s(\mathcal{N})\subset\ell^2(\mathcal{H})\subset s'(\mathcal{N})$ we identified $\ell^2(\mathcal{H})$ with its topological dual space, the dual pairing of $\varphi\in s(\mathcal{N})$ and $\omega\in\ell^2(\mathcal{H})\subset s'(\mathcal{N})$ is realized as $\langle\varphi,\omega\rangle=(\varphi,\omega)_{\ell^2(\mathcal{H})}$, so \[ \langle P\varphi,\omega\rangle=(\varphi,P^T\omega)_{\ell^2(\mathcal{H})}=\langle\varphi,P^T\omega\rangle. \] For general $\omega\in s'(\mathcal{N})$, there exists $p\in\mathbb{N}_0$ such that $\omega\in\ell_{-p}^2(\mathcal{N}_{-p})$. Let $(\omega_n)_{n\in\mathbb{N}}$ be a sequence in $\ell^2(\mathcal{H})$ approximating $\omega$ with respect to $\|\cdot\|_{\ell_{-p}^2(\mathcal{N}_{-p})}$. Then \[ \langle P\varphi,\omega\rangle=\lim_{n\to\infty}\langle P\varphi,\omega_n\rangle=\lim_{n\to\infty}\langle\varphi,P^T\omega_n\rangle=\langle\varphi,P^T\omega\rangle. 
\] Now that (i), (ii) and (iii) are proven, we have \[ \mathbb{E}[\langle\varphi,\cdot\rangle|e_1,\dots,e_N](\omega)=\langle\varphi,P^T\omega\rangle\quad\text{for } \varphi\in s(\mathcal{N}),\ \omega\in s'(\mathcal{N}), \] and have thus established \eqref{finalformula} in the weak sense \[ \mathbb{E}[\omega|\omega_C]=P^T\omega=\begin{pmatrix}\Id_{CC}&0\\A_{FC}A_{CC}^{-1}&0\end{pmatrix}\omega. \] \end{application} \end{section} \end{chapter} \appendix \begin{chapter}{Appendix} \begin{section}{Positive Semidefinite Matrices} In this section, $(\lambda_{kl})_{k,l=1,\dots,n}$ is assumed to be a Hermitian matrix for some $n\in\mathbb{N}$, i.e. for $k,l=1,\dots,n$ we have $\lambda_{kl}=\overline{\lambda_{lk}}\in\mathbb{C}$. \begin{definition} The matrix $(\lambda_{kl})_{k,l=1,\dots,n}$ is called \emph{positive semidefinite}, if we have \begin{equation}\label{defposdef} \sum_{k,l=1}^n\alpha_k\overline{\alpha_l}\lambda_{kl}\ge 0\quad\text{for all }\alpha\in\mathbb{C}^n. \end{equation} \end{definition} \begin{lemma}\label{realposdeflemma} If $\lambda_{kl}\in\mathbb{R}$ for $k,l=1,\dots,n$, then \eqref{defposdef} is equivalent to \begin{equation}\label{realposdef} \sum_{k,l=1}^n\alpha_k\alpha_l\lambda_{kl}\ge 0\quad\text{for all }\alpha\in\mathbb{R}^n. \end{equation} \begin{proof} Clearly \eqref{defposdef} implies \eqref{realposdef}, so assume \eqref{realposdef} holds and let $\alpha\in\mathbb{C}^n$. For $k=1,\dots,n$ denote $a_k:=\Re(\alpha_k)$ and $b_k:=\Im(\alpha_k)$.
Then \begin{align*} \sum_{k,l=1}^n\alpha_k\overline{\alpha_l}\lambda_{kl} & = \sum_{k,l=1}^n (a_k+ib_k)(a_l-ib_l)\lambda_{kl}\\ & = \underbrace{\sum_{k,l=1}^n a_ka_l\lambda_{kl}}_{\ge 0\text{ by }\eqref{realposdef}}+\underbrace{i\sum_{k,l=1}^n b_ka_l\lambda_{kl}-i\sum_{k,l=1}^n a_kb_l\lambda_{kl}}_{=0\text{ since }\lambda_{kl}=\lambda_{lk}}+\underbrace{\sum_{k,l=1}^n b_kb_l\lambda_{kl}}_{\ge 0\text{ by }\eqref{realposdef}}\ge0.\\ \end{align*} \end{proof} \end{lemma} By a theorem in \cite{Schur1911} we have the following: \begin{theorem} If $(\nu_{kl})_{k,l=1,\dots,n}$ is another Hermitian matrix and both $(\lambda_{kl})_{k,l=1,\dots,n}$ and $(\nu_{kl})_{k,l=1,\dots,n}$ are positive semidefinite, then so is their pointwise product $(\lambda_{kl}\nu_{kl})_{k,l=1,\dots,n}$. \end{theorem} \begin{corollary}\label{expisposdef} If $(\lambda_{kl})_{k,l=1,\dots,n}$ is positive semidefinite, then so is $(\exp(\lambda_{kl}))_{k,l=1,\dots,n}$. \begin{proof} Let $\alpha\in\mathbb{C}^n$. For $m=0$ we have \[ \sum_{k,l=1}^n\alpha_k\overline{\alpha_l}\lambda_{kl}^m=\sum_{k,l=1}^n\alpha_k\overline{\alpha_l}=\left|\sum_{k=1}^n\alpha_k\right|^2\ge 0 \] and for $m\ge 1$, an obvious inductive use of Schur's theorem above yields \[ \sum_{k,l=1}^n\alpha_k\overline{\alpha_l}\lambda_{kl}^m\ge 0. \] Hence \[ \sum_{k,l=1}^n\alpha_k\overline{\alpha_l}\exp(\lambda_{kl})=\sum_{k,l=1}^n\alpha_k\overline{\alpha_l}\sum_{m=0}^\infty\frac{1}{m!}\lambda_{kl}^m=\sum_{m=0}^\infty\frac{1}{m!}\sum_{k,l=1}^n\alpha_k\overline{\alpha_l}\lambda_{kl}^m\ge0. \] \end{proof} \end{corollary} \end{section} \begin{section}{Hermite Polynomials}\label{Hermitepolynomial} \begin{definition} For $n\in\mathbb{N}_0$ define the $n^\text{th}$ Hermite polynomial $H_n\in L^2(\mathbb{R},\mu_1)$ by \[ \mathbb{R}\ni x\longmapsto H_n(x):=(-1)^n\exp\left(\frac{1}{2}x^2\right)\frac{\text{d}^n}{\text{d}x^n}\exp\left(-\frac{1}{2}x^2\right)\in\mathbb{R}, \] where $\mu_1$ is the standard Gaussian measure on $\mathbb{R}$. 
Then $H_n$ is a polynomial of degree $n$ with \begin{equation}\label{Hermiteinnerprod} (H_n,H_m)_{L^2(\mu_1)}=n!\delta_{n,m}\quad\text{for }n,m\in\mathbb{N}_0. \end{equation} In particular $\|H_n\|_{L^2(\mu_1)}^2=n!$, and by $H_0\equiv 1$ we have \begin{equation}\label{Hermiteintegraliszero} \int_{\mathbb{R}}H_n(x)d\mu_1(x)=(H_n,H_0)_{L^2(\mu_1)}=\delta_{0,n}. \end{equation} Since the set of polynomials is dense in $L^2(\mathbb{R},\mu_1)$, the Hermite polynomials form a complete orthogonal system in $L^2(\mathbb{R},\mu_1)$. All these results can be found in \cite{Bogachev1998}, where the Hermite polynomials are introduced in a slightly different way. \end{definition} \begin{remark} In the literature one also finds the definition of the $n^\text{th}$ Hermite polynomial to be \[ \mathbb{R}\ni x\longmapsto \widehat{H}_n(x):=(-1)^n\exp(x^2)\frac{\text{d}^n}{\text{d}x^n}\exp(-x^2)\in\mathbb{R}. \] A sum representation for these can be found in \cite{Obata1994}: \begin{equation}\label{physicistshermitesum} \widehat{H}_n(x)=\sum_{k=0}^{\lfloor n/2\rfloor}\frac{(-1)^kn!}{k!(n-2k)!}(2x)^{n-2k}\quad\text{for }x\in\mathbb{R}. \end{equation} These polynomials, further called \emph{physicists' Hermite polynomials}, do \emph{not} form an orthogonal system in $L^2(\mathbb{R},\mu_1)$, but are orthogonal with respect to the probability measure on $\mathbb{R}$ given by the $\dx$-density \[ \mathbb{R}\ni x\longmapsto\frac{1}{\sqrt{\pi}}\exp(-x^2)\in\mathbb{R}. \] We can link $H_n$ and $\widehat{H}_n$ by the identities \begin{gather} \label{Hermiterelation1} H_n(x)=2^{-\frac{n}{2}}\widehat{H}_n\left(\frac{x}{\sqrt{2}}\right)\quad\text{and}\\ \label{Hermiterelation2} \widehat{H}_n(x)=2^{\frac{n}{2}}H_n\left(\sqrt{2}x\right).
\end{gather} This also yields a representation similar to Equation~\eqref{physicistshermitesum} for our Hermite polynomials: \begin{equation}\label{Hermitesumrepresentation} H_n(x)=\sum_{k=0}^{\lfloor n/2\rfloor}\frac{(-1)^kn!}{2^kk!(n-2k)!}x^{n-2k}\quad\text{for }x\in\mathbb{R}. \end{equation} If one considers the analytical extension to $\mathbb{C}$ of the physicists Hermite polynomials, then for $\alpha,\beta\in\mathbb{C}$ with $\alpha^2+\beta^2=1$ one has an expansion of binomial type \[ \widehat{H}_n(\alpha x+\beta y)=\sum_{k=0}^n\binom{n}{k}\alpha^k\beta^{n-k}\widehat{H}_k(x)\widehat{H}_{n-k}(y)\quad\text{for }x,y\in\mathbb{R}, \] see \cite{Obata1994}. By straightforward use of Equations~\eqref{Hermiterelation1} and~\eqref{Hermiterelation2} we obtain the analogous formula \begin{equation}\label{Hermitebinomial} H_n(\alpha x+\beta y)=\sum_{k=0}^n\binom{n}{k}\alpha^k\beta^{n-k}H_k(x)H_{n-k}(y)\quad\text{for }x,y\in\mathbb{R} \end{equation} for the Hermite polynomials, where we follow the convention $0^0:=1$. \end{remark} \end{section} \begin{section}{Proof of Lemma \ref{wickpolynomial}}\label{wickpolynomialproofsection} \begin{proof}\label{wickpolynomialproof} For $n=0,1$ the assertion is clear by definition. Now let $n\ge 2$ and assume the claim has been proven for all natural numbers $0,\dots,n-1$. 
One computes \begin{align*} \colon\omega^{\otimes n}\colon & = \omega{\wh{\otimes}}\colon\omega^{\otimes n-1}\colon-(n-1)\tau_A{\wh{\otimes}}\colon\omega^{\otimes n-2}\colon\\ & = \omega{\wh{\otimes}}\sum_{k=0}^{\lfloor\frac{n-1}{2}\rfloor}\frac{(n-1)!(-1)^k}{2^kk!(n-1-2k)!}\tau_A^{{\wh{\otimes}} k}{\wh{\otimes}}\omega^{\otimes n-1-2k}\\ & \qquad\qquad - (n-1)\tau_A{\wh{\otimes}}\sum_{k=0}^{\lfloor\frac{n-2}{2}\rfloor}\frac{(n-2)!(-1)^k}{2^kk!(n-2-2k)!}\tau_A^{{\wh{\otimes}} k}{\wh{\otimes}}\omega^{\otimes n-2-2k}\\ & = \sum_{k=0}^{\lfloor\frac{n-1}{2}\rfloor}\frac{(n-1)!(-1)^k(n-2k)}{2^kk!(n-2k)!}\tau_A^{{\wh{\otimes}} k}{\wh{\otimes}}\omega^{\otimes n-2k}\\ & \qquad\qquad - \sum_{k=0}^{\lfloor\frac{n-2}{2}\rfloor}(-1)2\frac{(n-1)!(-1)^{k+1}(k+1)}{2^{k+1}(k+1)!(n-2(k+1))!}\tau_A^{{\wh{\otimes}} k+1}{\wh{\otimes}}\omega^{\otimes n-2(k+1)}\\ & = \sum_{k=0}^{\lfloor\frac{n-1}{2}\rfloor}\frac{n!(-1)^k}{2^kk!(n-2k)!}\tau_A^{{\wh{\otimes}} k}{\wh{\otimes}}\omega^{\otimes n-2k} - 2\sum_{k=1}^{\lfloor\frac{n-1}{2}\rfloor}\frac{(n-1)!(-1)^kk}{2^kk!(n-2k)!}\tau_A^{{\wh{\otimes}} k}{\wh{\otimes}}\omega^{\otimes n-2k}\\ & \qquad\qquad + 2\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor}\frac{(n-1)!(-1)^kk}{2^kk!(n-2k)!}\tau_A^{{\wh{\otimes}} k}{\wh{\otimes}}\omega^{\otimes n-2k}\\ & = \underbrace{\sum_{k=0}^{\lfloor\frac{n-1}{2}\rfloor}\frac{n!(-1)^k}{2^kk!(n-2k)!}\tau_A^{{\wh{\otimes}} k}{\wh{\otimes}}\omega^{\otimes n-2k}}_{=:A} + \underbrace{2\sum_{k=\lfloor\frac{n-1}{2}\rfloor+1}^{\lfloor\frac{n}{2}\rfloor}\frac{(n-1)!(-1)^kk}{2^kk!(n-2k)!}\tau_A^{{\wh{\otimes}} k}{\wh{\otimes}}\omega^{\otimes n-2k}}_{=:B}. \end{align*} If $n$ is odd, then $\lfloor\frac{n-1}{2}\rfloor=\lfloor\frac{n}{2}\rfloor$, so $B=0$ and the sum in $A$ runs over $k=0,\dots,\lfloor\frac{n}{2}\rfloor$, which is exactly the claim. 
If $n$ is even, then $\lfloor\frac{n-1}{2}\rfloor+1=\lfloor\frac{n}{2}\rfloor=\frac{n}{2}$ and hence \begin{align*} A+B & = A+2\frac{(n-1)!(-1)^{(n/2)}(n/2)}{2^{(n/2)}(n/2)!(n-2(n/2))!}\tau_A^{{\wh{\otimes}} (n/2)}{\wh{\otimes}}\omega^{\otimes n-2(n/2)}\\ & = A+\frac{n!(-1)^{(n/2)}}{2^{(n/2)}(n/2)!(n-2(n/2))!}\tau_A^{{\wh{\otimes}} (n/2)}{\wh{\otimes}}\omega^{\otimes n-2(n/2)}\\ & =\sum_{k=0}^{\lfloor\frac{n}{2}\rfloor}\frac{n!(-1)^k}{2^kk!(n-2k)!}\tau_A^{{\wh{\otimes}} k}{\wh{\otimes}}\omega^{\otimes n-2k}. \end{align*} We spare the reader the proof of the second equation claimed in Lemma~\ref{wickpolynomial}, as it proceeds along similar lines. \end{proof} \end{section} \end{chapter} \end{document}
\begin{document} \makeatletter \def\subsection{\@startsection{subsection}{3} \z@{.5\linespacing\@plus.7\linespacing}{.1\linespacing} {\rm\bf}} \makeatother \newtheorem{definition}{Definition}[subsection] \newtheorem{definitions}[definition]{Definitions} \newtheorem{deflem}[definition]{Definition and Lemma} \newtheorem{lemma}[definition]{Lemma} \newtheorem{pro}[definition]{Proposition} \newtheorem{theorem}[definition]{Theorem} \newtheorem{cor}[definition]{Corollary} \newtheorem{cors}[definition]{Corollaries} \theoremstyle{remark} \newtheorem{remark}[definition]{Remark} \theoremstyle{remark} \newtheorem{remarks}[definition]{Remarks} \theoremstyle{remark} \newtheorem{notation}[definition]{Notation} \theoremstyle{remark} \newtheorem{example}[definition]{Example} \theoremstyle{remark} \newtheorem{examples}[definition]{Examples} \theoremstyle{remark} \newtheorem{dgram}[definition]{Diagram} \theoremstyle{remark} \newtheorem{fact}[definition]{Fact} \theoremstyle{remark} \newtheorem{illust}[definition]{Illustration} \theoremstyle{remark} \newtheorem{rmk}[definition]{Remark} \theoremstyle{definition} \newtheorem{que}[definition]{Question} \theoremstyle{definition} \newtheorem{conj}[definition]{Conjecture} \newcommand{\stac}[2]{\genfrac{}{}{0pt}{}{#1}{#2}} \newcommand{\stacc}[3]{\stac{\stac{\stac{}{#1}}{#2}}{\stac{}{#3}}} \newcommand{\staccc}[4]{\stac{\stac{#1}{#2}}{\stac{#3}{#4}}} \newcommand{\stacccc}[5]{\stac{\stacc{#1}{#2}{#3}}{\stac{#4}{#5}}} \renewenvironment{proof}{\noindent {\bf{Proof.}}}{\hspace*{3mm}{$\Box$}{ }} \title{Grothendieck Rings of Theories of Modules} \keywords{Grothendieck ring; model theory; module; positive primitive formula; abstract simplicial complex, monoid ring} \subjclass[2010]{03C60, 55U05, 16Y60, 20M25, 06A12} \maketitle \begin{center} AMIT KUBER\footnote{Email \texttt{amit.kuber@postgrad.manchester.ac.uk}. 
Research partially supported by a School of Mathematics, University of Manchester Scholarship.}, School of Mathematics, \ University of Manchester, \\ Manchester M13 9PL, \ England. \end{center} \begin{abstract} The model-theoretic Grothendieck ring of a first order structure, as defined by Kraj\'{\i}\v{c}ek and Scanlon, captures some combinatorial properties of the definable subsets of finite powers of the structure. In this paper we compute the Grothendieck ring, $K_0(M_\mathcal R)$, of a right $\mathcal R$-module $M$, where $\mathcal R$ is any unital ring. As a corollary we prove a conjecture of Prest that $K_0(M)$ is non-trivial whenever $M$ is non-zero. The main proof uses various techniques from the homology theory of simplicial complexes. \end{abstract} \section{Introduction} In \cite{Kra}, Kraj\'{\i}\v{c}ek and Scanlon introduced the concept of the model-theoretic Grothendieck ring of a structure. Amongst many other results, they proved that such a Grothendieck ring is nontrivial if and only if the definable subsets of the structure satisfy a version of the combinatorial pigeonhole principle, called the ``onto pigeonhole principle'' ($ontoPHP$). Grothendieck rings have been studied for various rings and fields considered as models of a first order theory (see \cite{Kra}, \cite{Clu}, \cite{CluHask}, \cite{DenLoes1} and \cite{DenLoes2}) and they have been found to be trivial in many cases (see \cite{Clu}, \cite{CluHask}). Prest conjectured that, in stark contrast to the case of rings, for any ring $\mathcal R$ the Grothendieck ring of a nonzero right $\mathcal R$-module $M_\mathcal R$, denoted $K_0(M_\mathcal R)$, is nontrivial. Perera \cite{Perera} investigated the problem in his doctoral thesis and found that elementarily equivalent modules have isomorphic Grothendieck rings, which is not the case for general structures.
He computed the Grothendieck ring for modules over semisimple rings and showed that they are polynomial rings in finitely many variables over the ring of integers. In this paper we compute the Grothendieck ring for arbitrary modules and show that they are quotients of monoid rings $\mathbb Z[\mathcal X]$, where $\mathcal X$ is the multiplicative monoid of isomorphism classes of the fundamental definable subsets of the module, the $pp$-definable subgroups. This is the content of the main theorem, theorem \ref{FINALgeneral}, which also describes the `invariants ideal', the ideal of the monoid ring that codes the indices of pairs of $pp$-definable subgroups. We further show (corollary \ref{MAINRESULTgeneral}) that there is a split embedding $\mathbb Z\rightarrow K_0(M)$ whenever the module $M$ is nonzero, proving Prest's conjecture. The proof of the main theorem uses inputs from various mathematical areas such as model theory, algebra, combinatorics and algebraic topology. A special case of the main theorem (theorem \ref{FINAL}) is proved at the end of section \ref{spcasemult}. The special case assumes that the theory $T$ of the module $M$ satisfies the model-theoretic condition $T=T^{\aleph_0}$. This condition is equivalent to the statement that the invariants ideal is trivial. The reader should note that the proof of the general case of the main theorem is not given in full detail since it develops along lines similar to the special case and uses only a few modifications to incorporate the invariants ideal. The fundamental theorem of the model theory of modules (theorem \ref{PPET}) states that every definable set is a boolean combination of $pp$-definable sets, but such a boolean combination is far from being unique. We achieve a `uniqueness' result as a by-product of the theory we develop.
We call this result the `cell decomposition theorem' (Theorem \ref{CDT1}) which states that definable sets can be represented uniquely using $pp$-definable sets provided the theory $T$ of the module satisfies $T=T^{\aleph_0}$. Though this theorem is not used directly in any other proof, its underlying idea is one of the most important ingredients of the main proof. Based on this idea, we define various classes of definable sets of increasing complexity, namely $pp$-sets, convex sets, blocks and cells. Our strategy to prove every result about a general definable set is to prove it first for convex sets, then blocks and then cells. An important theme of the paper is the use of geometric and topological ideas in the setting of definable sets. We use the idea of a `neighbourhood' and `localization' to understand the structure of definable sets. We develop a notion of `connectedness' of a definable set in \ref{C} and prove theorem \ref{topconn} which clearly shows the analogy with its topological counterpart. The main proof takes place at two different levels, which we name `local' and `global' following geometric intuition. We try to describe the ``shape'' of each definable set in terms of integer valued functions called `local characteristics'. These numbers are computed using Euler characteristics of various abstract simplicial complexes which code the ``local geometry'' of the given set. The local data is combined to get a family of integer valued functions, each of which is called a `global characteristic'. The global characteristics enjoy the property of being preserved under definable bijections. The family of such functions is indexed by the elements of the monoid $\mathcal X$ and the functions collate to give the necessary monoid ring. The rest of the paper is organized as follows. Section \ref{prelim} contains the background material on Grothendieck rings and the model theory of modules. 
It also describes some important theorems in the homology theory of simplicial complexes. The core part of the proof of the special case is the content of section \ref{spcaseadd}. It introduces the terminology that we use and proves important facts about local and global characteristics. The highlights of this section are theorems \ref{t1} and \ref{t4}. Section \ref{spcasemult} contains proofs of the multiplicative properties of the global characteristics, completing the proof of the special case. Section \ref{gencase} introduces new terminology and the modifications in the proof of the special case necessary to handle the general case. Some applications of the main theorem are discussed in section \ref{appl}. The maps between modules which fit with model theory are called pure embeddings. We study their effect on Grothendieck rings in \ref{pure}. We also show the existence of Grothendieck rings containing nontrivial torsion elements in \ref{tors}. The cell decomposition theorem is proved in \ref{CDT}, whereas the discussion on connectedness is included in \ref{C}. We conclude the paper with section \ref{rmkcom} which contains further remarks on the technique of the proof and mentions some directions for further research in this area. \section{Preliminaries}\label{prelim} \subsection{Semirings and Grothendieck rings}\label{SGR} We recall the notion of a semiring and how to construct a ring in a canonical fashion from a given semiring. A detailed exposition on this material can be found in \cite{Lee}. Let $L_{ring}=\langle0,1,+,\cdotp\rangle$ be the language of rings. \begin{definitions} Any $L_{ring}$ structure $S$ satisfying the following conditions is a commutative \textbf{semiring} with unity. 
\begin{itemize} \item $(S,+,0)$ is a commutative monoid \item $(S,\cdotp,1)$ is a commutative monoid \item $a\cdotp 0=0$ for all $a\in S$ \item multiplication ($\cdotp$) distributes over addition ($+$) \end{itemize} A \textbf{semiring homomorphism} is an $L_{ring}$-homomorphism. A semiring $S$ is said to be \textbf{cancellative} if $a+c=b+c\ \Rightarrow\ a=b$ for all $a,b,c\in S$. \end{definitions} In \cite{Lee}, a cancellative semiring is called a \emph{halfring}. All the semirings considered here are commutative semirings with unity, allowing the possibility $0=1$. \begin{definition} A binary relation $\thicksim$ on a semiring $S$ is said to be a \textbf{congruence relation} if the following properties hold. \begin{itemize} \item $\thicksim$ is an equivalence relation \item $a\thicksim b,c\thicksim d$ for $a,b,c,d\in S$ $\Rightarrow (a+c)\thicksim(b+d),a\cdotp c\thicksim b\cdotp d$ \end{itemize} \end{definition} There is a canonical way of constructing a cancellative semiring from any semiring $S$ as stated in the following theorem. \begin{theorem}\label{QUOCONST}\textbf{Quotient construction}: Let $S$ be a semiring and let $\thicksim$ be the binary relation defined as follows. \begin{equation}\label{CANCEL} For\ a,b\in S,\ a\thicksim b\ \Leftrightarrow\ \exists c\in S,\ a+c=b+c \end{equation} Then $\thicksim$ is a congruence relation. If $\tilde{a}$ denotes the $\thicksim$ equivalence class of $a\in S$, then $\tilde{S}:=\{\tilde{a}:a\in S\}$ is a cancellative semiring with respect to the induced addition and multiplication operations. There is a surjective semiring homomorphism $q:S\rightarrow\tilde{S}$ given by $a\mapsto \tilde{a}$. Furthermore, given any cancellative semiring $T$ and a semiring homomorphism $f:S\rightarrow T$, there exists a unique semiring homomorphism $\tilde{f}:\tilde{S}\rightarrow T$ such that the diagram $\xymatrix{{S}\ar[rr] ^{q}\ar[dr]^f & & {\tilde{S}}\ar@{->}[dl]^{\exists !\tilde{f}} \\ & {T}}$ commutes. 
\end{theorem} One can embed a cancellative semiring in a ring in a canonical fashion as stated in the following theorem. \begin{theorem}\label{GRCONSTR} \textbf{Ring of Differences for a Cancellative Semiring}: Let $R$ denote a cancellative semiring and let $E$ denote the binary relation on the set $R\times R$ of ordered pairs of elements from $R$ defined as follows. \begin{equation}\label{NEGADD} For\ (a,b),(c,d)\in R\times R,\ (a,b) E (c,d)\ \Leftrightarrow\ a+d=b+c \end{equation} Then $E$ is an equivalence relation. If $(a,b)_E$ denotes the $E$-equivalence class of $(a,b)$, then the quotient structure $(R\times R)/E:=\{(a,b)_E:(a,b)\in R\times R\}$ is a ring with respect to the operations given by \begin{eqnarray}\label{RINGOPER} (a,b)_E+(c,d)_E&:=&(a+c,b+d)_E\\ (a,b)_E\cdotp(c,d)_E&:=&(a\cdotp c+b\cdotp d,a\cdotp d+b\cdotp c)_E\\ -(a,b)_E&:=&(b,a)_E \end{eqnarray} for $(a,b)_E,(c,d)_E\in(R\times R)/E$. We denote the ring $(R\times R)/E$ by $K_0(R)$ following the conventions of K-theory. The semiring $R$ can be embedded into the ring $K_0(R)$ by the semiring homomorphism $i$ given by $a\mapsto (a,0)_E$. Furthermore, given any ring $T$ and a semiring homomorphism $g:R\rightarrow T$, there exists a unique ring homomorphism $\overline g:K_0(R)\rightarrow T$ such that the diagram $\xymatrix{{R}\ar[rr] ^{i}\ar[dr]^g & & {K_0(R)}\ar@{->} [dl]^{\exists !\overline{g}} \\ & {T}}$ commutes. \end{theorem} Note that each of the $E$-equivalence classes of the elements from $R\times R$, as constructed in the previous theorem, contains a pair of the form $(a,0)$ or $(0,a)$ for some $a\in R$. For a semiring $S$, let $K_0(S)$ denote the ring $K_0(\tilde S)$ for simplicity, where $\tilde S$ is the cancellative semiring obtained from $S$ as stated in theorem \ref{QUOCONST} and let the canonical map $S\rightarrow K_0(S)$ be denoted by $\eta_S$. We finally note the following result which combines the previous two theorems.
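Both canonical constructions above can be checked by brute force on small examples. The following Python sketch is purely illustrative and forms no part of the development (all function names are ours): it computes the congruence (\ref{CANCEL}) on the two-element Boolean semiring, where saturating addition forces a total collapse, and checks that the relation (\ref{NEGADD}) on pairs of (truncated) natural numbers recovers the familiar construction of $\mathbb Z$ from $\mathbb N$.

```python
from itertools import product

# Part 1: the congruence (CANCEL) a ~ b iff there exists c with a+c = b+c,
# computed by brute force on a finite semiring.
def cancellative_classes(elems, add):
    classes, seen = [], set()
    for a in elems:
        if a in seen:
            continue
        cls = {b for b in elems
               if any(add(a, c) == add(b, c) for c in elems)}
        seen |= cls
        classes.append(cls)
    return classes

# In the Boolean semiring ({0,1} with saturating addition 1+1=1),
# 0 ~ 1 via c = 1, so everything collapses and K_0 is the zero ring.
assert cancellative_classes([0, 1], lambda a, b: min(a + b, 1)) == [{0, 1}]

# Part 2: the relation (NEGADD) on pairs from N (truncated to 0..9):
# the E-class of (a, b) is determined by the difference a - b,
# recovering the construction of Z from N.
def E(p, q):
    return p[0] + q[1] == p[1] + q[0]

pairs = list(product(range(10), repeat=2))
reps = {next(q for q in pairs if E(p, q)) for p in pairs}
assert len(reps) == 19            # one class per value of a - b in -9..9

# The operations (RINGOPER) mirror integer arithmetic:
def add(p, q): return (p[0] + q[0], p[1] + q[1])
def mul(p, q): return (p[0]*q[0] + p[1]*q[1], p[0]*q[1] + p[1]*q[0])

assert E(add((2, 5), (7, 1)), (9, 6))    # (-3) + 6 = 3
assert E(mul((2, 5), (7, 1)), (0, 18))   # (-3) * 6 = -18
```

The truncation of $\mathbb N$ to $\{0,\hdots,9\}$ is only so that the search spaces are finite; the assertions are instances of the general statements in the two theorems.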
\begin{cor}\label{GrRngAdj} A semiring $S$ can be embedded in a ring if and only if $S$ is cancellative. Given any ring $T$ and a semiring homomorphism $g:S\rightarrow T$, there exists a unique ring homomorphism $\overline g:K_0(S)\rightarrow T$ such that the diagram $\xymatrix{S\ar[rr] ^{\eta_S}\ar[dr]^g & & {K_0(S)}\ar@{->}[dl]^{\exists !\overline{g}} \\ & {T}}$ commutes. \end{cor} This result can be stated in category theoretic language as follows. Let $\mathbf{CSemiRing}$ denote the category of commutative semirings with unity and semiring homomorphisms preserving unity. Let $\mathbf{CRing}$ denote its full subcategory consisting of commutative rings with unity and let $I:\mathbf{CRing}\rightarrow\mathbf{CSemiRing}$ be the inclusion functor. Then $I$ admits a left adjoint, namely $K_0:\mathbf{CSemiRing}\rightarrow\mathbf{CRing}$. For each semiring $S$, the ring $K_0(S)$ is called the Grothendieck Ring constructed from $S$. If $\eta$ is the unit of the adjunction, the diagram in the previous corollary represents the universal property of the adjunction. \subsection{Grothendieck rings of first order structures}\label{GRFOS} We aim to introduce the notion of the model theoretic Grothendieck ring of a first order structure in this section. This account is based on \cite{Kra}. After setting some background in model theory, we state how to construct the semiring of definable isomorphism classes of definable subsets of finite cartesian powers of the given structure $M$. Following the method described in the previous section, we then construct the Grothendieck ring $K_0(M)$. Let $L$ denote any language and $M$ denote any first order $L$-structure. The term definable will always mean definable with parameters from $M$. \begin{definitions} For every $n\geq 1$, we define $\mathrm{Def}(M^n)$ to be the collection of all definable subsets of $M^n$. We define $\overline{\mathrm{Def}}(M):=\bigcup_{n\geq 1}\mathrm{Def}(M^n)$. 
\end{definitions} \begin{definition}\label{defiso} We say that two definable sets $A,B\in\overline{\mathrm{Def}}(M)$ are \textbf{definably isomorphic} if there exists a definable bijection between them, i.e., a bijection $f:A\rightarrow B$ such that the graph $Gr(f)\in\overline{\mathrm{Def}}(M)$. This is an equivalence relation on $\overline{\mathrm{Def}}(M)$ and the equivalence class of a set $A$ is denoted by $[A]$. We use $\widetilde{\mathrm{Def}}(M)$ to denote the set of all equivalence classes with respect to this relation. We use $[-]:\overline{\mathrm{Def}}(M)\rightarrow\widetilde{\mathrm{Def}}(M)$ to denote the surjective map defined by $A\mapsto[A]$. \end{definition} We can regard $\widetilde{\mathrm{Def}}(M)$ as an $L_{ring}$-structure. In fact, it is a semiring with respect to the operations defined as follows. \begin{itemize} \item $0 := [\emptyset]$ \item $1 := [\{*\}]$ for any singleton subset $\{*\}$ of $M$ \item $[A]+[B] := [A'\sqcup B']$ for $A'\in[A],B'\in[B]$ such that $A'\cap B'=\emptyset$ \item $[A]\cdotp[B] := [A\times B]$ \end{itemize} (NB: We use $\sqcup$ to denote disjoint unions.) Now we are ready to give the most important definition. \begin{definition} We define the \textbf{model-theoretic Grothendieck ring of the first order structure} $M$, denoted by $K_0(M)$, to be the ring $K_0(\widetilde{\mathrm{Def}}(M))$ obtained from corollary \ref{GrRngAdj}, where the semiring structure on $\widetilde{\mathrm{Def}}(M)$ is as defined above. \end{definition} This ring captures the definable combinatorics of the structure $M$. We are interested in knowing whether $K_0(M)=\{0\}$. It is useful to consider some definable combinatorial aspects to tackle this problem. \begin{definition} We say that an infinite structure $M$ satisfies the \textbf{pigeonhole principle} if for each $A\in\overline{\mathrm{Def}}(M)$, each definable injection $f:A\rightarrowtail A$ is an isomorphism. We write this as $M\vDash PHP$.
\end{definition} This condition is too strong to hold in many structures. As an example, consider the additive group of integers $\mathbb Z$ in the language of abelian groups. The function $\mathbb Z\xrightarrow{(-)\times 2}\mathbb Z$ is a definable injection but not an isomorphism. So it is useful to consider some weaker forms. Though there are several of them (see \cite{Kra}), we note the one important for us. \begin{definition} We say that an infinite structure $M$ satisfies the \textbf{onto pigeonhole principle} if for each $A\in\overline{\mathrm{Def}}(M)$ and each definable injection $f:A\rightarrowtail A$, we have $f(A)\neq A\setminus\{a\}$ for any $a\in A$. We write this as $M\vDash ontoPHP$. \end{definition} The following proposition gives a necessary and sufficient condition for $K_0(M)$ to be nontrivial (i.e. $0\neq 1$ in $K_0(M)$). We include a proof for the sake of completeness. \begin{pro} Given any infinite structure $M$, $K_0(M)\neq\{0\}$ if and only if $M\vDash ontoPHP$. \end{pro} \begin{proof} Recall the construction of the cancellative semiring from (\ref{CANCEL}). The condition $0=1$ in $K_0(M)$ is thus equivalent to the statement that for some $A\in\overline{\mathrm{Def}}(M)$, we have $0+[A]=1+[A]$. This is precisely the statement that $M\nvDash ontoPHP$. \end{proof} \textbf{A brief survey of known Grothendieck Rings:} Very few examples of Grothendieck rings are known in general. If $M$ is a finite structure, then $K_0(M)\cong\mathbb Z$. Kraji\v{c}ek and Scanlon have shown in \cite[Example\,3.6]{Kra} that $K_0(\mathbb R)\cong\mathbb Z$ using the dimension theory and cell decomposition theorem for o-minimal structures, where $\mathbb R$ denotes the real closed field. Cluckers and Haskell (\cite{Clu}, \cite{CluHask}) proved that the fields of p-adic numbers have trivial Grothendieck rings, by constructing definable bijections from a set to the same set minus a point.
Denef and Loeser (\cite{DenLoes1},\cite{DenLoes2}) have found that the Grothendieck ring $K_0(\mathbb C)$ of the field $\mathbb C$ of complex numbers regarded as an $L_{ring}$-structure admits the ring $\mathbb Z[X,Y]$ as a quotient. Kraji\v{c}ek and Scanlon have strengthened this result and shown that $K_0(\mathbb C)$ contains an algebraically independent set of size continuum, and hence the ring $\mathbb Z[X_i:i\in\mathfrak{c}]$ embeds into $K_0(\mathbb C)$. Perera showed in \cite[Theorem\,4.3.1]{Perera} that $K_0(M)\cong\mathbb Z[X]$ whenever $M$ is an infinite module over an infinite division ring. Prest conjectured \cite[Ch.\,8,\,Conjecture A]{Perera} that $K_0(M)$ is nontrivial for all nonzero right $\mathcal R$-modules $M$. We prove that $K_0(M)$ is actually a quotient of a monoid ring and furthermore it is nontrivial. Most of the paper is devoted to the proof of this statement. \subsection{Euler characteristic of simplicial complexes}\label{ECSC} We introduce the concept of an abstract simplicial complex and a couple of ways to calculate its Euler characteristic. We also state some important results in the homology theory of simplicial complexes. The material on homology and relative homology presented in this section is taken from \cite[II.4]{FerPic}. This theory provides the basis for the analysis of `local characteristics' in \ref{LC}. \begin{definition} An \textbf{abstract simplicial complex} is a pair $(X,\mathcal{K})$ where $X$ is a finite set and $\mathcal{K}$ is a collection of subsets of $X$ satisfying the following properties. \begin{itemize} \item $\emptyset\notin\mathcal{K}$ \item $\{x\}\in\mathcal{K}$ for each $x\in X$ \item if $F\in\mathcal{K}$ and $\emptyset\neq F'\subsetneq F$, then $F'\in\mathcal{K}$ \end{itemize} \end{definition} We usually identify the simplicial complex $(X,\mathcal{K})$ with $\mathcal{K}$.
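The three defining properties are straightforward to check mechanically. The following Python sketch is illustrative only (the helper `is_complex` is our own name, not part of the development):

```python
from itertools import chain, combinations

def is_complex(X, K):
    """Check the three defining properties of an abstract simplicial complex."""
    faces = {frozenset(F) for F in K}
    if frozenset() in faces:
        return False                          # the empty set is not a face
    if not all(frozenset({x}) in faces for x in X):
        return False                          # every point of X is a vertex
    # downward closure: every nonempty proper subset of a face is again a face
    for F in faces:
        subsets = chain.from_iterable(
            combinations(F, r) for r in range(1, len(F)))
        if not all(frozenset(G) in faces for G in subsets):
            return False
    return True

# The path 1 - 2 - 3 (three vertices, two edges) is a simplicial complex ...
assert is_complex({1, 2, 3}, [{1}, {2}, {3}, {1, 2}, {2, 3}])
# ... but a 2-dimensional face without its edges violates downward closure
assert not is_complex({1, 2, 3}, [{1}, {2}, {3}, {1, 2, 3}])
```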
The elements $F\in\mathcal{K}$ are called the \textbf{faces} of the complex and the singleton faces are called the \textbf{vertices} of the complex. We use $\mathcal V(\mathcal K)$ to denote the set of vertices of $\mathcal K$. Let $\Delta^k:=\mathbb{P}([k+1])\setminus\{\emptyset\}$ denote the \textbf{standard $k$-simplex}, where $\mathbb P$ denotes the power set operator and $[k+1]=\{1,2,\hdots,k+1\}$ for $k\geq 0$. We define the \textbf{geometric realization} of the standard $k$-simplex, denoted $|\Delta^k|$, to be the set of all points of $\mathbb R^{k+1}$ which can be expressed as a convex linear combination of the standard basis vectors of $\mathbb R^{k+1}$. In fact we can associate to every abstract simplicial complex a topological space $|\mathcal K|$, called its geometric realization. This topological space is constructed by `gluing together' the geometric realizations of its simplices. We assign dimension to every face $F\in\mathcal{K}$ by stating $\mathrm{dim} F:=|F|-1$ and we say that the \textbf{dimension of the complex} is the maximum of the dimensions of its faces. \begin{definition}\label{Euler} We define the \textbf{Euler characteristic} of the complex $\mathcal{K}$, denoted $\chi(\mathcal{K})$, to be the integer $\Sigma_{n=0}^{\mathrm{dim}\mathcal{K}}(-1)^n v_n$ where $v_n$ is the number of faces in $\mathcal{K}$ with dimension $n$. \end{definition} It is easy to check that $\chi(\Delta^k)=1$ for each $k\geq 0$. Since we also allow our complex to be empty, we define $\chi(\emptyset):=0$ though $\mathrm{dim}\emptyset$ is undefined. There is yet another way to obtain the Euler characteristics of simplicial complexes, via homology. The word homology will always mean simplicial homology with integer coefficients in this paper. If $b_n$ denotes the $n^{th}$ Betti number of the simplicial complex $\mathcal{K}$ (i.e. 
the rank of the $n^{th}$ homology group $H_n(\mathcal{K})$), then we have the identity $\chi(\mathcal{K})=\Sigma_{n=0}^\infty(-1)^n b_n$ where the sum on the right hand side is finite. We use the notation $C_*(\mathcal K)$ to denote the chain complex $(C_n(\mathcal K))_{n\geq 0}$ and $H_*(\mathcal K)$ to denote the chain complex $(H_n(\mathcal K))_{n\geq0}$, where $C_n(\mathcal K)$ is the free abelian group generated by the set of $n$-simplices in $\mathcal K$. The following result states that homology is a homotopy invariant. It will be useful in proving a key result (proposition \ref{p1}). \begin{theorem}\label{HTPYINV} If $\mathcal K_1$ and $\mathcal K_2$ (meaning, their geometric realizations) are homotopy equivalent, then $H_*(\mathcal K_1)\cong H_*(\mathcal K_2)$. \end{theorem} The definition of Euler characteristic in terms of Betti numbers gives the following corollary. \begin{cor}\label{EULHTPY} If $\mathcal K_1$ and $\mathcal K_2$ are homotopy equivalent, then $\chi(\mathcal K_1)=\chi(\mathcal K_2)$. \end{cor} The homology groups $H_n(\mathcal K)$, for $n\geq 1$, calculate the number of ``$n$-dimensional holes'' in the geometric realization of the complex $\mathcal K$. But sometimes it is important to ignore the data present in a smaller part of the given structure. This can be done in two ways, viz. using the cone construction for a subcomplex or by using relative homology. Given a complex $\mathcal{K}$ and a subcomplex $\mathcal{Q}\subseteq\mathcal{K}$, we write $\mathcal K\cup\mathrm{Cone}(\mathcal{Q})$ for the simplicial complex whose vertex set is $\mathcal V(\mathcal K)\cup\{x\}$, where $x\notin\mathcal V(\mathcal K)$, and the faces are $\mathcal{K}\cup\{\{x\}\}\cup\{\{x\}\cup F:F\in\mathcal{Q}\}$. We say that $x$ is the \textbf{apex} of the cone. In the same situation, we use the notation $H_n(\mathcal K;\mathcal{Q})$ to denote the $n^{th}$ homology of $\mathcal K$ relative to $\mathcal{Q}$.
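The counting definition \ref{Euler} is easy to verify mechanically on small complexes. A short Python sketch (illustrative only; the helpers `simplex` and `euler` are our own names) confirms that $\chi(\Delta^k)=1$ for all small $k$ and that $\chi(\partial\Delta^2)=0$, the latter consistent with the realization of $\partial\Delta^2$ being a circle:

```python
from itertools import chain, combinations

def simplex(k):
    """The standard k-simplex: all nonempty subsets of {1, ..., k+1}."""
    verts = range(1, k + 2)
    return [frozenset(F) for F in chain.from_iterable(
        combinations(verts, r) for r in range(1, k + 2))]

def euler(K):
    """Counting definition: the alternating sum (-1)^dim(F) over the faces F."""
    return sum((-1) ** (len(F) - 1) for F in K)

# chi(Delta^k) = 1 for every k
assert all(euler(simplex(k)) == 1 for k in range(6))

# The boundary of Delta^2 (drop the unique 2-dimensional face) has
# chi = 3 - 3 = 0, the Euler characteristic of the circle.
boundary = [F for F in simplex(2) if len(F) < 3]
assert euler(boundary) == 0
```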
The following theorem connects the relative homologies with the homologies of the original complexes. \begin{theorem}(see \cite[Theorem\,2.16]{Hatcher})\label{LONGEXACT} Given a pair of simplicial complexes $\mathcal{Q}\subset\mathcal K$, we have the following long exact sequence of homologies.\\ \begin{equation*} \cdots\rightarrow H_n(\mathcal{Q})\rightarrow H_n(\mathcal{K})\rightarrow H_n(\mathcal K;\mathcal{Q})\rightarrow H_{n-1}(\mathcal{Q})\rightarrow\cdots\rightarrow H_0(\mathcal K;\mathcal{Q})\rightarrow 0 \end{equation*} \end{theorem} We shall also make use of the following result. \begin{theorem} Given a pair of simplicial complexes $\mathcal{Q}\subseteq\mathcal{K}$, we have $H_n(\mathcal K;\mathcal{Q}) \cong H_n(\mathcal K\cup\mathrm{Cone}(\mathcal{Q}))$ for $n\geq 1$ and $H_0(\mathcal K\cup \mathrm{Cone}(\mathcal{Q}))\cong H_0(\mathcal K;\mathcal{Q})\oplus \mathbb Z$. \end{theorem} \begin{illust} Let $\mathcal K=\{\{1\},\{2\},\{3\},\{1,2\},\{2,3\}\}$ and $\mathcal Q$ denote the subcomplex $\{\{1\},\{3\}\}$. Then \begin{eqnarray*} H_n(\mathcal K)&=&\begin{cases} \mathbb Z, &\mbox{if } n=0,\\ 0, &\mbox{otherwise} \end{cases}\\ H_n(\mathcal Q)&=&\begin{cases} \mathbb Z\oplus\mathbb Z, &\mbox{if } n=0,\\ 0, &\mbox{otherwise} \end{cases}\\ H_n(\mathcal K;\mathcal Q)&=&\begin{cases} \mathbb Z, &\mbox{if } n=1,\\ 0, &\mbox{otherwise} \end{cases}\\ H_n(\mathcal K\cup\mathrm{Cone}(\mathcal Q))&=&\begin{cases} \mathbb Z, &\mbox{if } n=0,1,\\ 0, &\mbox{otherwise} \end{cases} \end{eqnarray*} \end{illust} Combining the previous two results with the definition of Euler characteristic, we get \begin{cor}\label{HMLGCONE} For a pair of simplicial complexes $\mathcal{Q}\subseteq\mathcal K$, $\chi(\mathcal K\cup \mathrm{Cone}(\mathcal{Q}))+\chi(\mathcal{Q})=\chi(\mathcal K)+1$. \end{cor} \subsection{Products of simplicial complexes}\label{PRODSIMPCOMP} We define various products of simplicial complexes and study their interrelations. 
The inclusion-exclusion principle stated in lemma \ref{ecdpl} is equivalent to the statement that `local characteristics are multiplicative' (lemma \ref{localcharmult}). Let $\mathcal K$ and $\mathcal Q$ be two simplicial complexes with vertex sets $\mathcal{V}(\mathcal K)$ and $\mathcal{V}(\mathcal Q)$ respectively and let $\pi_1:\mathcal{V}(\mathcal K)\times\mathcal{V}(\mathcal Q)\rightarrow \mathcal{V}(\mathcal K)$ and $\pi_2:\mathcal{V}(\mathcal K)\times\mathcal{V}(\mathcal Q)\rightarrow \mathcal{V} (\mathcal Q)$ denote the projection maps. We define two simplicial complexes with the vertex set $\mathcal V(\mathcal K)\times\mathcal V(\mathcal Q)$. The following product is defined in \cite[\S 3]{EilZil}. \begin{definition} The \textbf{simplicial product} $\mathcal K\vartriangle\mathcal Q$ of two simplicial complexes $\mathcal K$ and $\mathcal Q$ is a simplicial complex with vertex set $\mathcal V(\mathcal K)\times\mathcal V(\mathcal Q)$ where a nonempty set $F\subseteq\mathcal V(\mathcal K)\times\mathcal V(\mathcal Q)$ is a face of $\mathcal K\vartriangle\mathcal Q$ if and only if $\pi_1(F)\in\mathcal K$ and $\pi_2(F)\in\mathcal Q$. \end{definition} \begin{definition} The \textbf{disjunctive product} $\mathcal K\boxtimes\mathcal Q$ of two simplicial complexes $\mathcal K$ and $\mathcal Q$ is a simplicial complex with vertex set $\mathcal V(\mathcal K)\times\mathcal V(\mathcal Q)$ where a nonempty set $F\subseteq\mathcal V(\mathcal K)\times\mathcal V(\mathcal Q)$ is a face of $\mathcal K\boxtimes\mathcal Q$ if and only if $\pi_1(F)\in\mathcal K$ or $\pi_2(F)\in\mathcal Q$. \end{definition} Observe that the previous two definitions are identical except that the word `and' in the former is replaced by the word `or' in the latter. Thus the simplicial product $\mathcal K\vartriangle\mathcal Q$ is always contained in the disjunctive product $\mathcal K\boxtimes\mathcal Q$.
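Both products can be generated directly from their definitions. The following Python sketch (illustrative only; the helper names are ours) builds both products of the two-point complex with itself and confirms the containment just noted:

```python
from itertools import chain, combinations

def product_complex(K, Q, disjunctive):
    """Faces of the simplicial ('and') or disjunctive ('or') product of K, Q."""
    VK = set().union(*K)
    VQ = set().union(*Q)
    verts = [(x, y) for x in VK for y in VQ]
    def is_face(F):
        p1 = frozenset(x for x, _ in F)       # the projection pi_1(F)
        p2 = frozenset(y for _, y in F)       # the projection pi_2(F)
        return (p1 in K or p2 in Q) if disjunctive else (p1 in K and p2 in Q)
    candidates = chain.from_iterable(
        combinations(verts, r) for r in range(1, len(verts) + 1))
    return {frozenset(F) for F in candidates if is_face(F)}

K = {frozenset({1}), frozenset({2})}            # two isolated vertices
tri = product_complex(K, K, disjunctive=False)  # simplicial product
box = product_complex(K, K, disjunctive=True)   # disjunctive product

assert tri <= box                               # the containment noted above
assert all(len(F) == 1 for F in tri)            # only the four vertices
assert len(box) == 8                            # four vertices and four edges
```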
\begin{illust}\label{spdp} Let $\mathcal K=\{\{1\},\{2\}\}$ denote the complex consisting precisely of two vertices. Then $\mathcal K\vartriangle\mathcal K$ contains only the vertices of the `square' $\mathcal K\boxtimes\mathcal K$ given by $\{\{(1,1)\},\{(1,2)\},\{(2,1)\},\{(2,2)\},\{(1,1),(1,2)\},\{(2,1),(2,2)\},\{(1,1),(2,1)\},\\\{(1,2),(2,2)\}\}$. For each $k\geq 0$ the complex $\mathcal K\vartriangle\Delta^k$ is the union of two disjoint copies of $\Delta^k$, whereas the complex $\mathcal K\boxtimes\Delta^k$ is a copy of $\Delta^{2k+1}$. \end{illust} The main aim of this section is to prove the following lemma about the Euler characteristic of the disjunctive product. \begin{lemma}\label{ecdpl} The Euler characteristics of two simplicial complexes $\mathcal K$ and $\mathcal Q$ satisfy \begin{equation}\label{ecdp} \chi(\mathcal K\boxtimes\mathcal Q)=\chi(\mathcal K)+\chi(\mathcal Q)-\chi(\mathcal K)\chi(\mathcal Q). \end{equation} \end{lemma} \begin{illust} Let $\mathcal K$ be as defined in \ref{spdp}. Then we observe that $\chi(\mathcal K)=2$. Since $\mathcal K\boxtimes\mathcal K$ contains $4$ vertices and $4$ edges, we get $\chi(\mathcal K\boxtimes\mathcal K)=0=2\chi(\mathcal K)-\chi(\mathcal K)^2$ verifying equation (\ref{ecdp}) in this case. \end{illust} The proof of the lemma uses tensor products of chain complexes. \begin{definition} Let $C_*=\{C_n,\partial_n\}_{n\geq 0}$ and $D_*=\{D_n,\delta_n\}_{n\geq 0}$ denote two bounded chain complexes of abelian groups. The \textbf{tensor product complex} $C_*\otimes D_*=\{(C_*\otimes D_*)_n,d_n\}_{n\geq 0}$ is defined by \begin{eqnarray*} (C_*\otimes D_*)_n&=&\bigoplus_{i+j=n}C_i\otimes D_j,\\ d_n(a_i\otimes b_j)&=&\partial_i(a_i)\otimes b_j+(-1)^{i}a_i\otimes\delta_j(b_j). \end{eqnarray*} \end{definition} \begin{illust}\label{tensor} We compute the tensor product $C_*(\partial\Delta^2)\otimes C_*(\Delta^1)$ as an example, where $\partial\Delta^2$ denotes the boundary of $\Delta^2$.
\begin{eqnarray*} C_n(\partial\Delta^2)&=&\begin{cases} \mathbb Z\oplus\mathbb Z\oplus\mathbb Z, &\mbox{if } n=0,1,\\ 0, &\mbox{otherwise} \end{cases}\\ C_n(\Delta^1)&=&\begin{cases} \mathbb Z\oplus\mathbb Z, &\mbox{if } n=0,\\ \mathbb Z, &\mbox{if } n=1,\\ 0, &\mbox{otherwise} \end{cases}\\ (C_*(\partial\Delta^2)\otimes C_*(\Delta^1))_n&=&\begin{cases} \oplus_{i=1}^6\mathbb Z, &\mbox{if } n=0,\\ \oplus_{i=1}^9\mathbb Z, &\mbox{if } n=1,\\ \oplus_{i=1}^3\mathbb Z, &\mbox{if } n=2,\\ 0, &\mbox{otherwise} \end{cases} \end{eqnarray*} \end{illust} There is yet one more product of simplicial complexes, viz., the cartesian product, defined in the literature (see \cite{EilZil}). We avoid its use by dealing with the product of geometric realizations (with the product topology). The homology of such (finite) product spaces is easily computed using triangulation. We first note that the Euler characteristic is multiplicative. \begin{pro}\label{euprod}(see \cite[p.205,\,Ex.\,B.4]{Spa}) Let $\mathcal K$ and $\mathcal Q$ be any simplicial complexes. Then \begin{equation*}\chi(|\mathcal K|\times|\mathcal Q|)=\chi(\mathcal K)\chi(\mathcal Q).\end{equation*} \end{pro} A famous theorem of Eilenberg and Zilber (see \cite{EilZil}) connects the homologies of two semi-simplicial complexes (a term used in 1950 that includes the class of simplicial complexes) with that of their cartesian product. We state this result below using the cartesian product of their geometric realizations. More details can be found in \cite[\S 2.1]{Hatcher} and \cite[\S III.6]{FerPic}. \begin{theorem}\label{EZT}(see \cite[\S III.6.2]{FerPic}) Let $\mathcal K$ and $\mathcal Q$ be any two simplicial complexes. Then we have \\ $H_*(|\mathcal K|\times|\mathcal Q|)\cong H_*(C_*(\mathcal K)\otimes C_*(\mathcal Q))$. \end{theorem} Furthermore, Eilenberg and Zilber state the following corollary of the previous theorem in \cite{EilZil}.
\begin{cor}\label{simpprod} Let $\mathcal K$ and $\mathcal Q$ be any two simplicial complexes. Then\\ $H_*(\mathcal K\vartriangle\mathcal Q)\cong H_*(C_*(\mathcal K)\otimes C_*(\mathcal Q))$. \end{cor} \begin{illust} We continue the example in \ref{tensor}. The computation of the boundary operators yields \begin{equation*} H_n(C_*(\partial\Delta^2)\otimes C_*(\Delta^1))=\begin{cases} \mathbb Z, &\mbox{if } n=0,1,\\ 0, &\mbox{otherwise} \end{cases} \end{equation*} The space $|\partial\Delta^2|\times|\Delta^1|$ is a cylinder which is homotopy equivalent to $S^1$. Hence $H_n(|\partial\Delta^2|\times|\Delta^1|)=\mathbb Z$ for $n=0,1$ and is zero for other values of $n$. This completes the illustration of theorem \ref{EZT}. Furthermore the complex $\partial\Delta^2\vartriangle\Delta^1$ is the union of three copies of $\Delta^3$ each of which shares exactly one edge (i.e. a copy of $\Delta^1$) with every other copy and these three edges are pairwise disjoint. It can easily be seen that this complex (i.e. its geometric realization) is homotopy equivalent to the circle and hence the conclusions of corollary \ref{simpprod} hold. \end{illust} \begin{proof} (Lemma \ref{ecdpl}) We first observe that there is an embedding of simplicial complexes $\iota_1:\mathcal K\vartriangle(\Delta^{|\mathcal V(\mathcal Q)|-1})\rightarrow\mathcal K\boxtimes\mathcal Q$ induced by some fixed enumeration of $\mathcal V(\mathcal Q)$. Similarly there is an embedding $\iota_2:(\Delta^{|\mathcal V(\mathcal K)|-1})\vartriangle\mathcal Q\rightarrow\mathcal K\boxtimes\mathcal Q$ induced by some fixed enumeration of $\mathcal V(\mathcal K)$. Furthermore, the intersection $\iota_1(\mathcal K\vartriangle(\Delta^{|\mathcal V(\mathcal Q)|-1}))\cap\iota_2((\Delta^{|\mathcal V(\mathcal K)|-1})\vartriangle\mathcal Q)$ is precisely the complex $\mathcal K\vartriangle\mathcal Q$.
Using the counting definition of the Euler characteristic, we see that the identity \begin{equation}\label{ecintermediate} \chi(\mathcal K\boxtimes\mathcal Q)=\chi(\mathcal K\vartriangle(\Delta^{|\mathcal V(\mathcal Q)|-1}))+\chi((\Delta^{|\mathcal V(\mathcal K)|-1})\vartriangle\mathcal Q)-\chi(\mathcal K\vartriangle\mathcal Q) \end{equation} holds. If we can prove that $\chi(\mathcal K\vartriangle\mathcal Q)=\chi(\mathcal K)\chi(\mathcal Q)$ for any simplicial complexes $\mathcal K$ and $\mathcal Q$, then (\ref{ecintermediate}) will yield (\ref{ecdp}) since $\chi(\Delta^k)=1$ for each $k\geq 0$. Now we have $H_*(\mathcal K\vartriangle\mathcal Q)\cong H_*(C_*(\mathcal K)\otimes C_*(\mathcal Q))\cong H_*(|\mathcal K|\times|\mathcal Q|)$, where the first isomorphism is by \ref{simpprod} and the second by \ref{EZT}. Hence we have $\chi(\mathcal K\vartriangle\mathcal Q)=\chi(|\mathcal K|\times|\mathcal Q|)=\chi(\mathcal K)\chi(\mathcal Q)$ by \ref{euprod} as required. This completes the proof. \end{proof} \subsection{Model theory of modules}\label{MTM} We introduce the terminology and some basic results in the model theory of modules in this section. A detailed exposition can be found in \cite{PreBk}. Instead of working with formulas all the time, we fix a structure and work with the definable subsets of finite cartesian powers of that structure. Let $\mathcal{R}$ be a fixed ring with unity. Then every right $\mathcal{R}$-module $M$ is a structure for the first order language $L_\mathcal{R}=\langle 0,+,-,m_r:r\in\mathcal{R}\rangle$, where each $m_r$ is a unary function symbol representing the action of right multiplication by the element $r$. When we are working in a fixed module $M$, we usually write the element $m_r(a)$ in formulas as $ar$ for each $a\in M$. First we note the following result of Perera which states that the Grothendieck ring of a module is an invariant of its theory.
A proof of this proposition can be found at the end of section \ref{gencase} as a corollary of theorem \ref{FINALgeneral}. \begin{pro}(see \cite[Corollary\,5.3.2]{Perera})\label{eleequivmod} Let $M$ and $N$ be two right $\mathcal R$-modules such that $M\equiv N$. Then $K_0(M)\cong K_0(N)$. \end{pro} Let us fix a right $\mathcal{R}$-module $M$. Then every definable subset of $M^n$ for any $n\geq 1$ can be expressed in terms of certain fundamental definable subsets of $M^n$. In order to state this partial quantifier elimination result, we first define the formulas which define these fundamental subsets. \begin{definition} A \textbf{positive primitive formula} (\textbf{pp-formula} for short) is a formula in the language $L_\mathcal{R}$ which is equivalent to one of the form \begin{equation*} \phi(x_1,x_2,\hdots,x_n)=\exists y_1\exists y_2\hdots\exists y_m\bigwedge_{i=1}^t\left(\sum_{j=1}^n x_j r_{ij}+\sum_{k=1}^m y_ks_{ik}+c_i=0\right), \end{equation*} where $r_{ij},s_{ik}\in\mathcal R$ and the $c_i$ are parameters from $M$. \end{definition} A subset of $M^n$ which is defined by a $pp$-formula (with parameters) will be called a $pp$-set. If a subgroup of $M^n$ is $pp$-definable, then its cosets are also $pp$-definable. The following lemma is well known and a proof can be found in \cite[Corollary\,2.2]{PreBk}. \begin{lemma} Every parameter-free $pp$-formula $\phi(\overline x)$ defines a subgroup of $M^n$, where $n$ is the length of $\overline x$. If $\phi(\overline x)$ contains parameters from $M$, then it defines either the empty set or a coset of a $pp$-definable subgroup of $M^n$. Furthermore, the conjunction of two $pp$-formulas is (equivalent to) a $pp$-formula. \end{lemma} Let $\mathcal{L}_n$ denote the meet-semilattice of all $pp$-subsets of $M^n$ ordered by the inclusion relation $\subseteq$. We will use the notation $\mathcal{L}_n(M_\mathcal{R})$, specifying the module, when we work with more than one module at a time.
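The lemma can be checked concretely in a small module. The following Python sketch takes $M=\mathbb Z/12\mathbb Z$ as a $\mathbb Z$-module (an illustrative choice; the helper names are ours) and verifies that each one-variable $pp$-formula $\exists y\,(xr+ys+c=0)$ defines either the empty set or a coset of a subgroup:

```python
# Brute-force check of the lemma in the finite Z-module M = Z/12Z.
n = 12

def pp_set(r, s, c):
    """The set defined by the pp-formula: exists y, x*r + y*s + c = 0."""
    return {x for x in range(n)
            if any((x * r + y * s + c) % n == 0 for y in range(n))}

def is_empty_or_coset(S):
    if not S:
        return True                    # the lemma allows the empty set
    a = min(S)
    H = {(x - a) % n for x in S}       # translate a representative to 0
    # a finite subset containing 0 and closed under + is a subgroup
    return 0 in H and all((u + v) % n in H for u in H for v in H)

examples = [(2, 3, 1), (4, 6, 2), (3, 0, 5), (1, 5, 7), (6, 4, 3)]
assert all(is_empty_or_coset(pp_set(r, s, c)) for r, s, c in examples)

print(sorted(pp_set(2, 3, 1)))   # [1, 4, 7, 10] = 1 + <3>, a coset of 3Z/12Z
```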
\begin{definition} Let $M$ be a right $\mathcal R$-module and let $A,B\in\mathcal L_n$ be subgroups. We define the invariant $\mathrm{Inv}(M,A,B)$ to be the index $[A:A\cap B]$ if this is finite or $\infty$ otherwise. \end{definition} An \textbf{invariants condition} is a statement that a given invariant is greater than or equal to, or less than, a certain number. These invariants conditions can be expressed as sentences in $L_\mathcal R$. An \textbf{invariants statement} is a finite boolean combination of invariants conditions. We are now ready to state the promised fundamental theorem of the model theory of modules. \begin{theorem}(see \cite{Baur})\label{PPET} Let $T$ be the theory of right $\mathcal R$-modules and $\phi(\overline x)$ be an $L_\mathcal R$ formula (possibly with parameters). Then we have \begin{equation*} T\vDash \forall \overline x (\phi(\overline x)\leftrightarrow\bigvee_{i=1}^m\left(\psi_i(\overline x)\wedge\bigwedge_{j=1}^{l_i}\neg\chi_{ij}(\overline x)\right)\wedge I), \end{equation*} where $I$ is an invariants statement and $\psi_i(\overline x),\chi_{ij}(\overline x)$ are $pp$-formulas. \end{theorem} We may assume that $\chi_{ij}(M)\subseteq\psi_i(M)$ for each value of $i$ and $j$, otherwise we redefine $\chi_{ij}$ as $\chi_{ij}\wedge\psi_i$. When we work in a complete theory, the invariants statements will vanish and hence we get the following form. \begin{theorem} For each $n\geq 1$, every definable subset of $M^n$ can be expressed as a finite boolean combination of $pp$-subsets of $M^n$. \end{theorem} Using this result together with the meet-semilattice structure of $\mathcal{L}_n$, we can express each definable subset of $M^n$ in a ``disjunctive normal form'' of $pp$-sets. Expressing a definable set as a disjoint union helps to break it down into certain low-complexity fragments, each of which has a specific shape given by the normal form. A proof of this result can be found in \cite[Lemma\,3.2.1]{Perera}.
\begin{lemma}\label{REP} Every definable subset of $M^n$ can be written as $\bigsqcup_{i=1}^t (A_i\setminus(\bigcup_{j=1}^{s_i} B_{ij}))$ for some $A_i,B_{ij}\in\mathcal{L}_n$. \end{lemma} The following lemma is one of the important tools in our analysis. \begin{lemma}\label{NL} \textbf{Neumann's Lemma} (see \cite[Theorem\,2.12]{PreBk})\\ If $H$ and $G_i$ are subgroups of some group $(K,+)$ and a coset of $H$ is covered by a finite union of cosets of the $G_i$, then this coset of $H$ is in fact covered by the union of just those cosets of $G_i$ where $G_i$ is of finite index in $H$, i.e. where $[H:G_i]:=[H:H\cap G_i]$ is finite. \begin{equation*} c+H\subseteq \bigcup_{i\in I}c_i+G_i\ \ \ \Rightarrow\ \ \ c+H\subseteq \bigcup_{i\in I_0}c_i+G_i, \end{equation*} where $I_0=\{i\in I:[H:G_i]<\infty\}$. \end{lemma} \section{Special Case: Additive Structure}\label{spcaseadd} \subsection{The condition $\mathrm{T=T^{\aleph_0}}$}\label{TATA} Let $M$ be a fixed right $\mathcal R$-module. For brevity we denote $Th(M)$ by $T$. We work with this fixed module throughout this section. If $X,Y\subseteq M^n$, $X,Y\neq\emptyset$, then we use the Minkowski sum notation $X+Y$ to denote the set $\{x+y:x\in X,y\in Y\}$. In case $X=\{x\}$, we use $x+Y$ to denote $X+Y$. \begin{pro}\label{EC} The following conditions are equivalent for a module $M$. \begin{enumerate} \item $\mathrm{Inv}(M,A,B)$ is either equal to $1$ or $\infty$ for each $A,B\in\mathcal L_n$ such that $0\in A\cap B$, for each $n\geq 1$, \item $M\equiv M\oplus M$, \item $M\equiv M^{(\aleph_0)}$. \end{enumerate} \end{pro} \begin{definition} The theory $T=Th(M)$ is said to satisfy the condition $T=T^{\aleph_0}$ if any (and hence all) of the conditions of proposition \ref{EC} hold. \end{definition} We wish to add yet one more condition to the list. The rest of this section is devoted to formulating the condition and deriving its consequences. We need to introduce some new notation to do this.
Let us denote the set of all finite subsets of $\mathcal L_n\setminus\{\emptyset\}$ by $\mathcal P_n$ and the set of all finite antichains in $\mathcal L_n\setminus\{\emptyset\}$ by $\mathcal A_n$. Clearly $\mathcal A_n\subseteq\mathcal P_n$ for each $n\geq 1$. We use lowercase Greek letters to denote elements of $\mathcal A_n$ and $\mathcal P_n$. \begin{definition} A definable subset $A$ of $M^n$ will be called $pp$-\textbf{convex} if there is some $\alpha\in\mathcal P_n$ such that $A=\bigcup\alpha$. \end{definition} Neumann's lemma (\ref{NL}) takes the following simple form if we add the equivalent conditions of \ref{EC} to our hypotheses. \begin{cor}\label{NLU} Suppose $T=T^{\aleph_0}$. If $A\in\mathcal L_n$ and $\mathcal F\subseteq\mathcal L_n$ such that $A\subseteq\bigcup\mathcal F$, then $A\subseteq F$ for at least one $F\in\mathcal F$. \end{cor} Under the same hypotheses, we want to show that for every $\alpha\in\mathcal P_n$ the $pp$-convex set $\bigcup\alpha$ is uniquely determined by the antichain $\beta\subseteq\alpha$ of all maximal elements in $\alpha$. \begin{pro}\label{UNIQUE1} Suppose that $T=T^{\aleph_0}$ holds. Let $A\subseteq M^n$ be a $pp$-convex set for some $n\geq 1$. Then there is a unique $\beta\in\mathcal{A}_n$ such that $A=\bigcup\beta$. \end{pro} \begin{proof} Let $\alpha_1,\alpha_2\in\mathcal P_n$ be such that $A=\bigcup\alpha_1=\bigcup\alpha_2$. Without loss of generality we may assume $\alpha_1,\alpha_2\in\mathcal{A}_n$. Let $\alpha_1=\{C_1,C_2,\hdots,C_l\}$ and $\alpha_2=\{D_1,D_2,\hdots,D_m\}$. We have $D_j\subseteq\bigcup_{i=1}^lC_i$ for each $1\leq j\leq m$. Then by \ref{NLU}, we have $D_j\subseteq C_i$ for at least one $i$. By symmetry we also get that each $C_i$ is contained in a $D_j$. Now if $D_j\subseteq C_i\subseteq D_{j'}$, then $j=j'$ since $\alpha_2$ is an antichain, and hence $D_j=C_i$. Thus every element of $\alpha_2$ belongs to $\alpha_1$ and, symmetrically, every element of $\alpha_1$ belongs to $\alpha_2$, so $\alpha_1=\alpha_2$.
\end{proof} This proposition shows that under the hypothesis $T=T^{\aleph_0}$ the set of $pp$-convex subsets of $M^n$ is in bijection with $\mathcal A_n$ for each $n\geq 1$. We shall often use this correspondence without mention. For $\alpha\in\mathcal A_n$, we define the \textbf{rank} of the $pp$-convex set $\bigcup\alpha$ to be the integer $|\alpha|$. The set $\mathcal{A}_n$ can be given the structure of a poset by introducing the relation $\prec_n$ defined by $\beta\prec_n\alpha$ if and only if for each $B\in\beta$, there is some $A\in\alpha$ such that $B\subsetneq A$. \begin{deflem}\label{UNIQUE2} Assume that $T=T^{\aleph_0}$. We say that a definable subset $C$ of $M^n$ is a \textbf{cell} if there are $\alpha,\beta\in\mathcal A_n$ with $\beta\prec_n\alpha$ such that $C=\bigcup\alpha\setminus\bigcup\beta$. We denote the set of all cells contained in $M^n$ by $\mathcal C_n$. The antichains $\alpha$ and $\beta$, denoted by $P(C)$ and $N(C)$ respectively, are uniquely determined by the cell $C$. In other words, there is a bijection between the set $\mathcal C_n$ and the set of pairs of antichains strictly related by $\prec_n$. In case $|P(C)|=1$, we say that $C$ is a \textbf{block}. We denote the set of all blocks in $\mathcal C_n$ by $\mathcal B_n$. \end{deflem} \begin{proof} Given any $\alpha,\beta\in\mathcal A_n$ such that $\beta\prec_n\alpha$ and $C=\bigcup\alpha\setminus\bigcup\beta$, the $pp$-convex set $\bigcup(\alpha\cup\beta)$ is determined by $C$. But this set is uniquely determined by the set of maximal elements in $\alpha\cup\beta$ by \ref{UNIQUE1}. Since $\beta\prec_n\alpha$, the required set of maximal elements is precisely $\alpha$. Furthermore, the set $\bigcup\alpha\setminus C=\bigcup\beta$ is $pp$-convex and thus is uniquely determined by $\beta$ by \ref{UNIQUE1} and this finishes the proof. \end{proof} We know from \ref{REP} that any definable subset of $M^n$ can be represented as a disjoint union of blocks. 
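A simple example of a block, worked in a vector-space setting where the condition $T=T^{\aleph_0}$ of \ref{EC} holds, may help to fix ideas. \begin{illust} Consider the real vector space $\mathbb R_{\mathbb R}$; its theory satisfies the condition $T=T^{\aleph_0}$. Both $\mathbb R$ (defined by $x=x$) and $\{0\}$ (defined by $x=0$) are $pp$-subsets of $\mathbb R$, so $C=\mathbb R\setminus\{0\}$ is a cell with $P(C)=\{\mathbb R\}$ and $N(C)=\{\{0\}\}$; since $|P(C)|=1$, it is in fact a block. \end{illust}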
So it will be important for us to understand the structure of blocks in detail. A block is always nonempty since any finite union of proper $pp$-subsets cannot cover the given $pp$-set by \ref{UNIQUE1}. For each $B\in\mathcal B_n$, we use the notation $\overline B$ to denote the unique element of $P(B)$. \begin{theorem}\label{MINK} Let $M$ be an $\mathcal{R}$-module. Then $Th(M)=Th(M)^{\aleph_0}$ if and only if for each $B\in\mathcal B_n,\ n\geq 1$, we have $B+B-B=\overline B$. Under these conditions, we also get $B-B=\overline{B}$ whenever $\overline{B}$ is a subgroup. \end{theorem} \begin{proof} Assume that $Th(M)=Th(M)^{\aleph_0}$ holds. Let $B\in\mathcal B_n$ be such that $N(B)=\{D_1,D_2,\hdots,D_l\}\prec P(B)=\{A\}$. Let $D=\bigcup N(B)$. We want to show that $B+B-B=A$. But clearly $B\subseteq B+B-B$. So it suffices to show that $D\subseteq B+B-B$. First assume that $A$ is a subgroup of $M^n$. Let $d\in D$. Since $A\setminus (D-d)\neq\emptyset$, we can choose some $x\in A\setminus (D-d)$ by \ref{NLU}. Then $x+d\in (A+d)\setminus D=A\setminus D=B$, since $A$ is a subgroup. Again choose some $y\in A\setminus((D-d)\cup(D-d-x))$. Then $y+d\in (A+d)\setminus(D\cup(D-x))$ for similar reasons. Thus $y+d, y+x+d\in A\setminus D=B$. Now $d=(d+x)+(d+y)-(d+x+y)\in B+B-B$ and hence the conclusion follows. In the case when $A$ is a coset of a $pp$-definable subgroup $G$, say $A=a+G$, let $C=D-a$. Then, by the first case, $G=C+C-C$. Now if $d\in A$, then $d-a\in G$. Hence there are $x,y,z\in C$ such that $d-a=x+y-z$. Thus $d=(x+a)+(y+a)-(z+a)\in B+B-B$ and this completes the proof in one direction. For the converse, suppose that $Th(M)\neq Th(M)^{\aleph_0}$. Then there are two $pp$-definable subgroups $G,H$ of $M^n$ for some $n\geq 1$ such that $H\leq G$ and $1<[G:H]<\infty$. Let $[G:H]=k$ and let $H_1, H_2,\cdots, H_k$ be the distinct cosets of $H$ in $G$. Since $H$ is a $pp$-set, all the cosets $H_i$ are $pp$-sets as well. Now let $B=H_k=G\setminus\bigcup_{i=1}^{k-1}H_i$. 
Then $B$ is a nonempty block since $k>1$. But, since $B$ is a coset, $B+B-B=B\neq G$ which proves the result in the other direction. Now we prove the last statement under the hypothesis $Th(M)=Th(M)^{\aleph_0}$. Let $B,A,D$ be as defined in the first paragraph of the proof and assume that $A$ is a subgroup of $M^n$. Given any $a\in A$, we choose $x\in A\setminus (D\cup(D-a))$, which is possible by \ref{NLU}. Then $x,x+a\in B$ and hence $a=(x+a)-x\in (B-B)$. This shows the inclusion $A\subseteq B-B$. We clearly have $(B-B)\subseteq (A-A)$ and $A-A=A$ since $A$ is a subgroup. This completes the proof. \end{proof} A map $f:B\rightarrow M^n$ is \textbf{linear} if $f(x+y-z)=f(x)+f(y)-f(z)$ for all $x,y,z\in B$ such that $x+y-z\in B$. We use the previous theorem to show that any linear map on $B$ can be extended uniquely to a linear map on $\overline B$. \begin{lemma}\label{COLOURINJ} Suppose that $T=T^{\aleph_0}$ holds. Then for each $n\geq 1$, each $B\in\mathcal B_n$ and each injective linear map $f:B\rightarrowtail M^n$, there exists a unique injective linear extension $\overline{f}:\overline B\rightarrowtail M^n$. \end{lemma} \begin{proof} Let $B=A\setminus\bigcup_{i=1}^mD_i$ be a block and assume that $A$ is a $pp$-definable subgroup. Let $D=\bigcup_{i=1}^mD_i$ and, for each $i$, let $H_i$ denote the subgroup of $A$ whose coset is $D_i$. Let $H=\bigcup_{i=1}^m H_i$. We choose and fix elements $x_i\in B$ sequentially depending on the earlier choices as follows. We choose $x_1\in A\setminus(D\cup H)$ and for each $1<i\leq m$, choose $x_i\in A\setminus(D\cup H\cup \bigcup_{j=1}^{i-1}(D+x_j))$. We can choose $x_i$ at each step by \ref{NLU}. Then we define $\overline{f}(d_i)=f(x_i+d_i)-f(x_i)$ for $d_i\in D_i$ and $\overline{f}(b)=f(b)$ for $b\in B$. $\overline{f}$ is well-defined: Let $d\in D_i\cap D_j$ for some $j<i$. Then by the choice of $x_i$, $(x_i-x_j)\in B$. Also $x_i,x_i+d,x_j,x_j+d\in B$. 
Hence $f(x_i-x_j)$ is defined and is equal to both $f(x_i)-f(x_j)$ and $f(x_i+d)-f(x_j+d)$. Hence we see that $f(x_i+d)-f(x_i)=f(x_j+d)-f(x_j)$, which proves that $\overline{f}(d)$ is well-defined for each $d\in D$. $\overline{f}$ is linear: Let $b\in B$ and $d\in D_i$. Then there are two possibilities, namely $b+d\in B$ or $b+d\in D_j$ for some $j$. In the former case we have $\overline{f}(b+d)=f(b+d)=f(b+x_i+d-x_i)=f(b)+f(x_i+d)-f(x_i)=\overline{f}(b)+\overline{f}(d)$ since $x_i+d,x_i,b\in B$, while in the latter case we have $\overline{f}(b+d)=f(b+d+x_j)-f(x_j)=f(b+d+x_j-x_i+x_i)-f(x_j)=f(b)+f(x_i+d)-f(x_i)+f(x_j)-f(x_j)= f(b)+f(x_i+d)-f(x_i)=\overline{f}(b)+\overline{f}(d)$ since $b,x_i,x_j,x_i+d,x_j+d\in B$. Let $b\in D_k$ and $d\in D_i$. Then there are two possibilities, namely $b+d\in B$ or $b+d\in D_j$ for some $j$. In the former case we have $\overline{f}(b+d)=f(b+d)=f(b+x_k-x_k+d+x_i-x_i)= f(b+x_k)-f(x_k)+f(d+x_i)-f(x_i)=\overline{f}(b)+\overline{f}(d)$ since $b+x_k,x_k,x_i,x_i+d\in B$, while in the latter case we have $\overline{f}(b+d)=f(b+d+x_j)-f(x_j)=f(b+x_k-x_k+d+x_i-x_i+x_j)-f(x_j)= f(b+x_k)-f(x_k)+f(d+x_i)-f(x_i)+f(x_j)-f(x_j)=\overline{f}(b)+\overline{f}(d)$ since $b+x_k,x_k,x_i,x_i+d,x_j\in B$. In the case when $b,d\in B$ and $b+d\in B$, the linearity of $\overline{f}$ follows from the linearity of $f$. When $b+d\in D_i$, $\overline{f}(b+d)=f(b+d+x_i)-f(x_i)=f(b)+f(d)+f(x_i)-f(x_i)= \overline{f}(b)+\overline{f}(d)$ since $b,d,x_i\in B$. So we have shown in each case that $\overline{f}$ is linear. $\overline{f}$ is injective: Without loss of generality we may assume that $\overline f(0)=0$, otherwise we may consider the function $\overline f(-)-\overline f(0)$. Let $a\in A$ be such that $\overline{f}(a)=0$. If $a\in B$, then $f(a)=0$ and hence $a=0$ by injectivity of $f$. If $a\in D_i$, then $f(x_i+a)-f(x_i)=0$ and hence $f(x_i+a)=f(x_i)$. But then $x_i+a=x_i$ by injectivity of $f$ since both $x_i+a,x_i\in B$. This again implies that $a=0$.
$\overline{f}$ is unique: Let $h:A\rightarrow M^n$ be any linear injective extension of $f$. Then $h(b)=f(b)=\overline{f}(b)$ for each $b\in B$ and hence, for $d\in D_i$, we have $\overline{f}(d)=f(x_i+d)-f(x_i)=h(x_i+d)-h(x_i)=h(d)$ since $x_i+d,x_i\in B$ and $h$ is linear. If $A$ is a nontrivial coset of some $pp$-definable subgroup $G$ of $M^n$, $D\subsetneq A$, $B=A\setminus D$ and we are given some linear map $f:B\rightarrowtail M^n$, we choose and fix some $b\in B$. Then clearly $A-b=G$ and we take $C=B-b$. Define $g:C\rightarrow M^n$ by setting $g(c)=f(c+b)-f(b)$. Now whenever $x,y\in C$ are such that $x+y\in C$, we have $g(x+y)=f(x+y+b)-f(b)=f((x+b)-b+(y+b))-f(b)= f(x+b)-f(b)+f(y+b)-f(b)=g(x)+g(y)$ since $x+b,y+b,b\in B$ and $f$ is linear. Hence $g$ is linear on $C$. Therefore by the subgroup case, we have a unique linear injective extension of $g$ to $G$, say $\overline{g}$. Then we define $\overline{f}:A\rightarrow M^n$ by setting $\overline{f}(a)=\overline{g}(a-b)+f(b)$. It can be easily seen that $\overline{f}$ is indeed an extension of $f$. The uniqueness, linearity and injectivity of $\overline{f}$ follow from the uniqueness of $\overline{g}$. This argument completes the proof of this case and hence that of the lemma. \end{proof} \subsection{Local characteristics}\label{LC} We fix some $n\geq 1$ and drop all the subscripts $n$. We also assume hereafter that $Th(M)=Th(M)^{\aleph_0}$ holds for some fixed right $\mathcal R$-module $M$. For brevity, we denote the sets $\mathcal L\setminus\{\emptyset\},\mathcal A\setminus\{\emptyset\},\hdots$ by $\mathcal L^*,\mathcal A^*,\hdots$ respectively. \begin{definition} Let $\mathcal D$ be a finite subset of $\mathcal L^*$. The smallest sub-meet-semilattice of $\mathcal L$ containing $\mathcal D$ will be called the $pp$-nest (or simply nest) corresponding to $\mathcal D$ and will be denoted by $\hat{\mathcal D}$. Note that $\hat{\mathcal D}$ is finite.
In general, any finite sub-meet-semilattice of $\mathcal L$ will also be referred to as a $pp$-nest. \end{definition} \begin{definition} For each finite subset $\mathcal F$ of $\mathcal L^*$ and $F\in\mathcal F$, we define the \textbf{$\mathcal F$-core} of $F$ to be the block $\mathrm{Core}_\mathcal F(F):=F\setminus\bigcup\{G: G\in\mathcal F, G\cap F\subsetneq F\}$. \end{definition} Let $D\subseteq M^n$ be definable. Then $D=\bigsqcup_{i=1}^m B_i$ for some $B_i\in\mathcal B$ by \ref{REP}. We say that $\mathcal D$ is the nest corresponding to this partition of $D$ if it is the nest corresponding to the finite family $\bigcup_{i=1}^m(P(B_i)\cup N(B_i))$. Every definable set can be partitioned canonically given a suitable nest, which is the content of the following lemma whose proof is omitted. \begin{deflem}\label{CH1} Suppose $D\subseteq M^n$ is definable and $\mathcal D$ is the nest corresponding to a given partition $D=\bigsqcup_{i=1}^m B_i$. For every nonempty $F\in\mathcal D$, $\mathrm{Core}_\mathcal D(F)\cap~D\neq\emptyset$ if and only if $\mathrm{Core}_\mathcal D(F)\subseteq D$. We define the \textbf{characteristic function} of the nest $\mathcal D$, $\delta_\mathcal D:\mathcal D\rightarrow \{0,1\}$, by $\delta_\mathcal D(F)=1$ if and only if $F\neq\emptyset$ and $\mathrm{Core}_\mathcal D(F)\subseteq D$. We denote the sets $\delta_\mathcal D^{-1}(1)$ and $\delta_\mathcal D^{-1}(0)$ by $\mathcal D^+$ and $\mathcal D^-$ respectively. Then $D=\bigcup_{F\in\mathcal D^+}\mathrm{Core}_\mathcal D(F)$. \end{deflem} \begin{illust} If $B$ is a block with $P(B)=\{A\}$ and $N(B)=\{D_1,D_2\}$ such that $D_1\cap D_2\neq\emptyset$, then $\mathcal D=\{A,D_1,D_2,D_1\cap D_2\}$ is the corresponding nest. Clearly $B=\mathrm{Core}_\mathcal D(A)$ and hence $\mathcal D^+=\{A\}$. \end{illust} We will sometimes use another family of characteristic functions stated in the following definition.
\begin{definition}\label{CH2} Given any $C\in\mathcal C$, we define the \textbf{characteristic function} of the cell $C$, $\delta(C):\mathcal L^*\rightarrow\{0,1\}$, as $\delta(C)(P)=1$ if $P\subseteq C$ and $\delta(C)(P)=0$ otherwise, for each $P\in\mathcal{L}^*$. When $P=\{a\}$, we write the expression $\delta(C)(a)$ instead of $\delta(C)(\{a\})$. \end{definition} The set $\mathcal A$ of antichains is ordered by the relation $\prec$ but can also be considered as a poset with respect to the natural inclusion ordering on the set of all $pp$-convex sets. For $\alpha,\beta\in\mathcal{A}$, we define $\alpha\wedge\beta$ to be the antichain corresponding (in the sense of \ref{UNIQUE1}) to $(\bigcup\alpha)\cap(\bigcup\beta)$ and $\alpha\vee\beta$ to be the antichain corresponding to $(\bigcup\alpha)\cup(\bigcup\beta)$. Since the intersection and union of two $pp$-convex sets are again $pp$-convex, the binary operations $\wedge,\vee:\mathcal A\times\mathcal A\rightarrow\mathcal A$ are well-defined. It can be easily seen that $\mathcal{A}$ is a distributive lattice with respect to these operations. We want to understand the structure of any definable set ``locally'' in a neighbourhood of a point in $M^n$. The following lemma defines a class of sub-lattices of $\mathcal A$ which provides the necessary framework to define the concept of localization. The proof is an easy verification of an adjunction and is not given here. \begin{deflem} Fix some $a\in M^n$. Let $\mathcal{L}_a:=\{A\in\mathcal{L}:a\in A\}$ and $\mathcal{A}_a$ denote the set of all antichains in the meet-semilattice $\mathcal{L}_a$. Then $\mathcal{A}_a$ is a sub-lattice of $\mathcal{A}$. We denote the inclusion $\mathcal{A}_a\rightarrow\mathcal{A}$ by $\mathcal{I}_a$. We also consider the map $\mathcal{N}_a:\mathcal{A}\rightarrow\mathcal{A}_a$ defined by $\alpha\mapsto \alpha\cap\mathcal{L}_a$. We call the antichain $\mathcal{N}_a(\alpha)$ the \textbf{localization} of $\alpha$ at $a$. 
Then $\mathcal{N}_a$ is a right adjoint to $\mathcal{I}_a$ if we consider the posets $\mathcal A$ and $\mathcal A_a$ as categories in the usual way, and the composite $\mathcal{N}_a\circ\mathcal{I}_a$ is the identity on $\mathcal{A}_a$. This in particular means that $\mathcal{A}_a$ is a reflective subcategory of $\mathcal{A}$. Furthermore, the map $\mathcal{N}_a$ not only preserves the meets of antichains, being a right adjoint, but it also preserves the joins of antichains. \end{deflem} Fix some $a\in M^n$. Let us denote the set of all finite subsets of $\mathcal{L}_a$ by $\mathcal P_a$ and let $\alpha\in\mathcal{P}_a$. We construct a simplicial complex $\mathcal{K}^a(\alpha)$ which determines the ``geometry'' of the intersection of elements of $\alpha$ around $a$. This construction is similar to the construction of the nerve of an open cover, except for the meaning of the ``triviality'' of the intersection. We know that a $pp$-set is finite if and only if it has at most $1$ element. We also know that $\bigcap\alpha\supseteq\{a\}$. \begin{definition}\label{SIMPCOMP} We associate an abstract simplicial complex $\mathcal K^a(\alpha)$ to each $\alpha\in\mathcal P_a$ by taking the vertex set $\mathcal V(\mathcal K^a(\alpha)):=\alpha\setminus\{\{a\}\}$. We say that a nonempty set $\beta\subseteq\alpha$ is a face of $\mathcal{K}^a(\alpha)$ if and only if $\bigcap\beta$ is infinite (i.e., properly contains $\{a\}$). If the only element of $\alpha$ is $\{a\}$ or if $\alpha=\emptyset$, then we set $\mathcal{K}^a(\alpha)=\emptyset$, the empty complex. \end{definition} \begin{illust} Consider the real vector space $\mathbb R_{\mathbb R}$. The theory of this vector space satisfies the condition $\rm T=T^{\aleph_0}$. We consider subsets of $\mathbb R^3$. If $\alpha$ denotes the antichain corresponding to the union of $3$ coordinate planes and $a$ is the origin, then $\mathcal K^a(\alpha)$ is a copy of $\partial\Delta^2$.
The $2$-dimensional face of $\Delta^2$ is absent since the intersection of the three coordinate planes is exactly the origin and hence does not properly contain it. \end{illust} Since $\beta_1\subseteq\beta_2\ \Rightarrow\ \bigcap\beta_2\subseteq\bigcap\beta_1$, $\mathcal{K}^a(\alpha)$ is indeed a simplicial complex. We drop the superscript $a$ when it is clear from the context. To extend this definition to arbitrary elements of $\mathcal P$, we extend the notion of the localization operator (at $a$) to $\mathcal{P}$ by setting $\mathcal{N}_a(\alpha)=\alpha\cap\mathcal{L}_a$ for each $\alpha\in\mathcal P$. Now we are ready to define a family of numerical invariants for $pp$-convex subsets of $M^n$, which we call ``local characteristics''. \begin{definition} We define the function $\kappa_a:\mathcal{P}\rightarrow\mathbb{Z}$ by setting $\kappa_a(\alpha):= \chi(\mathcal{K}(\mathcal{N}_a(\alpha)))- \delta(\alpha)(a)$, where $\chi(\mathcal{K})$ denotes the Euler characteristic of the complex $\mathcal{K}$ as defined in \ref{Euler} and $\delta(\alpha)$ is the characteristic function of the set $\bigcup\alpha$ as defined in \ref{CH2}. The value $\kappa_a(\alpha)$ will be called the \textbf{local characteristic} of the antichain $\alpha$ at $a$. \end{definition} If we view the local characteristic $\kappa_a(\alpha)$ as a function of $a$ for a fixed antichain $\alpha$, the correction term $\delta(\alpha)(a)$ makes sure that $\kappa_a(\alpha)=0$ for all but finitely many values of $a$. This fact will be useful in the next section. We want to show that the local characteristic satisfies the inclusion-exclusion principle for antichains. \begin{theorem}\label{t1} For $\alpha,\beta\in\mathcal{A}$, we have $\kappa_a(\alpha\vee\beta)+\kappa_a(\alpha\wedge\beta)=\kappa_a(\alpha)+\kappa_a(\beta)$. \end{theorem} The rest of this section is devoted to the proof of this theorem. First we observe that it is sufficient to prove this theorem for $\alpha,\beta\in\mathcal{A}_a$.
We also observe that it is sufficient to prove this theorem in the case when $\kappa_a$ is replaced by the function $\chi(\mathcal K(-))$ because $\kappa_a(\alpha)=\chi(\mathcal{K}(\alpha))-1$ whenever $a\in\bigcup\alpha$ and the cases when either $a\notin\bigcup\alpha$ or $a\notin\bigcup\beta$ are trivial. We write $\kappa_a$ as $\kappa$ for simplicity of notation. The following proposition is the first step in this direction, which states that $\kappa(\alpha)$ is actually determined by the $pp$-convex set $\bigcup\alpha$. \begin{pro}\label{p1} Let $\alpha\in\mathcal{A}_a$ and $\beta\in\mathcal{P}_a$. If $\bigcup\alpha=\bigcup\beta$, then $\kappa(\alpha)=\kappa(\beta)$. \end{pro} \begin{proof} First note that $\beta\supseteq\alpha$: by \ref{NLU} each $A\in\alpha$ is contained in some $B\in\beta$, and $B$ in turn is contained in some $A'\in\alpha$, so the antichain property of $\alpha$ forces $A=B$. In particular $\alpha$ is the set of maximal elements of $\beta$. Hence $\mathcal{K}(\alpha)$ is a full sub-complex of $\mathcal{K}(\beta)$ (i.e. if $\beta'\in \mathcal K(\beta)$ and $\beta'\subseteq\alpha$, then $\beta'\in\mathcal K(\alpha)$). We can also assume that $\{a\}\notin\beta$. Note that every element of $\beta\setminus\alpha$ is properly contained in at least one element of $\alpha$. We use induction on the size of $\beta\setminus\alpha$ to prove this result. If $\beta\setminus\alpha=\emptyset$, then the conclusion is trivially true. For the inductive case, suppose $\alpha\subseteq\beta'\subsetneq\beta$ and the result has been proved for $\beta'$. Let $B\in\beta\setminus\beta'$. Since $\alpha$ is the set of maximal elements of $\beta$, there is some $A\in\alpha$ such that $A\supsetneq B$. Consider the complex $\mathcal{K}_1=\{F\in\mathcal{K}(\beta'):(F\cup\{B\})\in\mathcal{K}(\beta'\cup\{B\})\}$ as a full sub-complex of $\mathcal{K}(\beta')$. Observe that whenever $B\in F\in\mathcal{K}(\beta'\cup\{B\})$, we have $(F\cup\{A\})\setminus\{B\}\in\mathcal{K}(\beta')$. As a consequence, $\mathcal K_1=\mathrm{Cone}(\mathcal K(\beta'\setminus\{A\}))$ where the apex of the cone is $A$. In particular, $\mathcal K_1$ is contractible.
Also note that $\mathcal{K}(\beta'\cup\{B\})=\mathcal{K}(\beta')\cup\mathrm{Cone}(\mathcal{K}_1)$, where the apex of the cone is $B$. Now we compare the pair $\mathcal K_1\subseteq\mathcal{K}(\beta')$ with another pair $\mathrm{Cone}(\mathcal K_1)\subseteq\mathcal K(\beta'\cup\{B\})$ of simplicial complexes. Observe the set equality $\mathcal{K}(\beta')\setminus\mathcal K_1=\mathcal K(\beta'\cup\{B\})\setminus \mathrm{Cone}(\mathcal K_1)$. Also both $\mathcal K_1$ and $\mathrm{Cone}(\mathcal K_1)$ are contractible. Thus we conclude that $\mathcal{K}(\beta'\cup\{B\})$ and $\mathcal{K}(\beta')$ are homotopy equivalent. Finally, an application of \ref{EULHTPY} completes the proof. \end{proof} Note that this result is very helpful for the computation of local characteristics as we get the equalities $\kappa(\alpha\vee\beta)=\kappa(\alpha\cup\beta)$ and $\kappa(\alpha\wedge\beta)=\kappa(\alpha\circ\beta)$ for all $\alpha,\beta\in\mathcal{A}_a$, where $\alpha\circ\beta=\{A\cap B: A\in\alpha, B\in\beta\}$. The vertices of $\mathcal{K}(\alpha\circ\beta)$ will be denoted by the elements from $\alpha\times\beta$. We use induction twice, first on $|\beta|$ and then on $|\alpha|$, to prove the main theorem of this section. The following lemma is the first step of this induction. \begin{lemma} For $\alpha,\beta\in\mathcal{A}_a$ and $\left|\alpha\right|\leq 1$, we have $\kappa(\alpha\vee\beta)+\kappa(\alpha\wedge\beta)=\kappa(\alpha)+\kappa(\beta)$. \end{lemma} \begin{proof} The cases $\left|\alpha\right|=0$ and $\alpha=\{\{a\}\}$ are trivial. So we assume that $\alpha=\{A\}$ where $A$ is infinite. We can make similar non-triviality assumptions on $\beta$, namely there is at least one element in $\beta$ and all the elements of $\beta$ are infinite. There are only two possible cases when $|\beta|=1$ and the conclusion holds true in both these cases. 
For example when $\beta=\{B\}$ and $A\cap B=\{a\}$, we have $\mathcal K(\alpha)\cong\mathcal K(\beta)\cong\Delta^0$, $\mathcal K(\alpha\circ\beta)$ is empty and $\mathcal K(\alpha\cup\beta)$ is the disjoint union of two copies of $\Delta^0$. Hence the identity in the statement of the lemma takes the form $1+(-1)=0+0$. Suppose for the inductive case that the result is true for $\beta$, i.e. $\kappa(\alpha\vee\beta)+\kappa(\alpha\wedge\beta)=\kappa(\alpha)+\kappa(\beta)$ holds. We want to show that the result holds for $\beta\cup\{B\}$, i.e. $\kappa(\alpha\vee(\beta\cup\{B\}))+\kappa(\alpha\wedge(\beta\cup\{B\}))=\kappa(\alpha)+\kappa(\beta\cup\{B\})$. We introduce some superscript and subscript notations to denote new simplicial complexes obtained from the original. The following list describes them and also explains the rules to handle two or more scripts at a time. \begin{itemize} \item Let $\mathcal K_0$ denote the complex $\mathcal K(\alpha)$, i.e. the complex consisting of only one vertex, and let $\mathcal{K}$ denote the complex $\mathcal{K}(\beta)$. \item Let $\mathcal{K}^S$ denote the complex $\mathcal{K}(\beta\cup S)$ for any finite $S\subseteq\mathcal{L}_a$ which contains only infinite elements. Also, $\mathcal K^{A,B}$ is shorthand for $\mathcal K^{\{A,B\}}$. \item Whenever $C$ is a vertex of $\mathcal{Q}$, the notation $\mathcal{Q}_C$ denotes the sub-complex $\{F\in\mathcal{Q}:C\notin F, F\cup\{C\}\in\mathcal{Q}\}$ of $\mathcal{Q}$. \item If $\mathcal Q=\mathcal K(\gamma)$ for some antichain $\gamma$ and $A\notin\gamma$, then the notation $^A\mathcal{Q}$ denotes the complex $\mathcal{K}(\{A\}\circ\gamma)$. \item The notation $^C\mathcal K^S_B$ means $^C((\mathcal K^S)_B)$. This describes the order of the scripts. \item The Euler characteristic of $^C\mathcal K^S_B$ will be denoted by $^C\chi^S_B$.
\end{itemize} Using this notation, the inductive hypothesis is \begin{equation}\label{1}\chi^A+\,^A\chi=\chi_0+\chi\end{equation} and our claim is \begin{equation}\label{2}\chi^{B,A}+\,^A\chi^B=\chi_0+\chi^B.\end{equation} \textbf{Case I: $(A\cap B)=\{a\}.$} In this case, the faces of $\mathcal{K}^{A,B}$ not present in $\mathcal{K}^A$ are the faces of $\mathcal{K}^B$. Hence $b_n(\mathcal{K})-b_n(\mathcal{K}^B)=b_n(\mathcal{K}^A)-b_n(\mathcal{K}^{A,B})$ for all $n\geq 0$, where $b_n$ denotes the $n^{th}$ Betti number. Hence we get \begin{equation*}\chi^{B,A}-\chi^A=\chi^B-\chi\end{equation*} Also note that the hypothesis $(A\cap B)=\{a\}$ yields $H_*(^A\mathcal{K})=H_*(^A\mathcal{K}^B)$ since only infinite elements matter for the computations. It follows that equation (\ref{2}) holds in this case. \textbf{Case II: $A\cap B\supsetneq\{a\}.$} Note that whenever $C$ is not a vertex of $\mathcal Q$, we have $\mathcal Q^C_C\subseteq\mathcal Q$ and $\mathcal Q\cup\mathrm{Cone}(\mathcal Q^C_C)=\mathcal Q^C$, where the apex of the cone is $C$. Hence corollary \ref{HMLGCONE} can be restated in this notation as the following identity. \begin{equation}\label{3}\chi(\mathcal{Q})+1=\chi(\mathcal{Q}^C)+\chi(\mathcal{Q}^C_C)\end{equation} As particular cases of (\ref{3}), we get the following equalities. \begin{equation}\label{4}\chi+1=\chi^B+\chi^B_B.\end{equation} \begin{equation}\label{5}\chi^A+1=\chi^{A,B}+\chi^{A,B}_B\end{equation} \begin{equation}\label{6}\chi^B_B+1=\chi^{A,B}_B+\chi^{A,B}_{A,B}\end{equation} It can be checked that $\mathcal{K}^{A,B}_{A,B}\cong\ ^A\mathcal{K}^B_B$ via the map $F\mapsto \{\{C,A\}: C\in F\}$. This gives us the following equation. \begin{equation}\label{8}\chi^{A,B}_{A,B}=\ ^A\chi^B_B\end{equation} If we combine equations (\ref{1}),(\ref{4}),(\ref{5}),(\ref{6}) and (\ref{8}), it remains to prove the following to get equation (\ref{2}) in the claim. 
\begin{equation}\label{7}^A\chi+1=\,^A\chi^B+\,^A\chi^B_B\end{equation} Observe that the natural inclusion maps $i_1:\mathcal{K}_0\rightarrow\mathcal{K}^A$ and $i_2:\mathcal{K}\rightarrow\mathcal{K}^A$ are inclusions of sub-complexes and their images are disjoint. Furthermore, the set theoretic map $g:\mathcal{K}^A\setminus(Im(i_1)\sqcup Im(i_2))\rightarrow\,^A\mathcal{K}$ defined by $F\mapsto\{\sigma\subseteq F:A\in\sigma,|\sigma|=2\}$ is a bijection. Now consider the composition $^A\mathcal{K}^B\cong\mathcal{K}^{A,B}\setminus(i_1(\mathcal{K}_0)\sqcup i_2(\mathcal{K}^B))\xrightarrow{\pi_B}\mathcal{K}^A\setminus(i_1(\mathcal{K}_0)\sqcup i_2(\mathcal{K}))\cong\, ^A\mathcal{K}$, where $\pi_B(F)=F\setminus\{B\}$. The union of images (under this composition of maps) of those faces in $^A\mathcal K^B$ which contain $A\cap B$ is the sub-complex $^A\mathcal{K}^B_B$ of $^A\mathcal{K}$. Hence $(^A\mathcal{K}\cup \mathrm{Cone}(^A\mathcal{K}^B_B))\cong\,^A\mathcal{K}^B$, where the apex of the cone is $\{A,B\}$. An application of \ref{HMLGCONE} gives the required identity in equation (\ref{7}). \end{proof} We use definition \ref{Euler} of Euler characteristic to prove the second step in the proof of the main theorem since we do not have a proof using homological techniques. In this step, we allow the size of $\beta$ to be an arbitrary fixed positive integer and we use induction on the size of $\alpha$. The lemma just proved is the base case for this induction. Let $A$ be a new element of $\mathcal L_a$ to be added to $\alpha$ and assume the result is true for $\alpha$. Again we may assume that $A$ is infinite. We construct the complex $\mathcal{K}(\alpha\cup\beta\cup\{A\})$ in steps starting with the complex $\mathcal{K}(\alpha\cup\beta)$ and the conclusion of the theorem holds for the latter by the inductive hypothesis. We do this in such a way that at each step $\mathcal{K}_1$ of the construction, the following identity is satisfied. 
\begin{equation}\label{count} \chi(\mathcal K(\alpha\cup\{A\})\cap\mathcal K_1)+\chi(\mathcal K(\beta))=\chi(\mathcal K(\alpha\cup\{A\}\cup\beta)\cap\mathcal K_1)+\chi(\mathcal K((\alpha\cup\{A\})\circ\beta)\cap\mathcal K_1) \end{equation} In this expression, $\mathcal K((\alpha\cup\{A\})\circ\beta)\cap\mathcal K_1$ denotes the subcomplex of $\mathcal K((\alpha\cup\{A\})\circ\beta)$ whose faces are appropriate projections of the faces of $\mathcal K_1$. For the first step, we construct all the faces of $\mathcal{K}(\alpha\cup\{A\})$ not in $\mathcal K(\alpha)$. Let $\mathcal K_1'$ denote the resulting complex. No new faces of the complex $\mathcal K((\alpha\cup\{A\})\circ\beta)$ are constructed in this process. Hence, for each $n\geq 0$, we have \begin{equation*} v_n(\mathcal K_1')-v_n(\mathcal K(\alpha\cup\beta\cup\{A\}))=v_n(\mathcal K_1'\cap\mathcal{K}(\alpha\cup\{A\}))-v_n(\mathcal{K}(\alpha\cup\{A\})), \end{equation*} where $v_n(\mathcal Q)$ denotes the number of $n$-dimensional faces of $\mathcal Q$. Hence equation (\ref{count}) is satisfied for $\mathcal K_1'$. For the second step, we further construct all the faces corresponding to $\{A\}\circ\beta$. The conclusion in this case follows from the previous lemma. Finally we construct the faces containing $A$ and intersecting both $\alpha$ and $\beta$. We construct a face $F$ of size $m+k$ whenever all the proper sub-faces of $F$ have already been constructed, where $F\cap(\alpha\cup\{A\})$ and $F\cap\beta$ have sizes $m$ and $k$ respectively. It is clear that $m,k\geq 1$. Let the sub-complex of $\mathcal{K}(\alpha\cup\beta\cup\{A\})$ consisting of the already constructed faces be denoted by $\mathcal{K}$. We assume, for induction, that equation (\ref{count}) is true for $\mathcal K$. Let $g(F')=\{\sigma\subseteq F':|\sigma\cap(\alpha\cup\{A\})|=1,|\sigma\cap\beta|=1\}$ for $F'\in\mathcal{K}$. Let $\mathcal{K}_3=\bigcup_{F'\subsetneq F}g(F')$ and $\mathcal{K}_2=\bigcup_{F'\in\mathcal{K}}g(F')$.
Note the inclusions $\mathcal K\subseteq\mathcal K_2\subseteq\mathcal{K}((\alpha\cup\{A\})\circ\beta)$. The construction of the face $F$ changes $\chi(\mathcal{K})$ by $(-1)^{\mathrm{dim} F}=(-1)^{m+k-1}$, while the numbers $\chi(\mathcal{K}(\alpha\cup\{A\}))$ and $\chi(\mathcal{K}(\beta))$ are unaltered. We calculate the change in the value of $\chi(\mathcal{K}_3)$ after the addition of $F$. The complex $g(F)$ is contractible. Hence its Euler characteristic is equal to $1$ by \ref{EULHTPY}. Let $w_n$ denote the number of $n$-dimensional faces of $\mathcal{K}_3$. Recall that $\mathcal V(\mathcal K_3)= (\alpha\cup\{A\})\times\beta$. If $\mathrm{dim} F'=n+2$ for some $F'\in\mathcal K(\alpha\cup\beta\cup\{A\})$ such that $|F'\cap(\alpha\cup\{A\})|\geq 1,|F'\cap\beta|\geq 1$, then $\mathrm{dim}(g(F'))=n$. Therefore to calculate $w_n$, we choose a total of $n+2$ vertices from $F$, making sure that we choose at least one vertex from both $\alpha\cup\{A\}$ and $\beta$. Hence $w_n=\Sigma_{j=1}^{n+1}\binom{m}{j}\binom{k}{n+2-j}$. This number can be easily shown to be equal to $\binom{m+k}{n+2}-\binom{m}{n+2}-\binom{k}{n+2}$. The maximum dimension of a face of $\mathcal{K}_3$ is equal to $m+k-3$. Hence the required change in the value of $\chi(\mathcal{K}_3)$ is $1-\Sigma_{n=0}^{m+k-3}(-1)^n[\binom{m+k}{n+2}-\binom{m}{n+2}-\binom{k}{n+2}]$. To obtain equation (\ref{count}) for $\mathcal K\cup\{F\}$, we need to show that there is no change in the value of $\chi(\mathcal{K})+\chi(\mathcal K_3)$ after construction of $F$. But we know that $\Sigma_{n=0}^{m+k}(-1)^n[\binom{m+k}{n}-\binom{m}{n}-\binom{k}{n}]=0$ since each of the three alternating sums is zero. This equation rearranges to give the required cancellation equation and completes the proof. \subsection{Global characteristic}\label{GCDS} Let the function $\kappa:\mathcal{A}\times M^n\rightarrow\mathbb{Z}$ be defined by $\kappa(\alpha,a)=\kappa_a(\alpha)$. Suppose $\alpha$ is a singleton.
If $\bigcup\alpha$ is infinite, then $\kappa(\alpha,-)$ is the constant $0$ function and if $\bigcup\alpha=\{a\}$, then $\kappa(\alpha,b)=0$ for all $b\neq a$ and $\kappa(\alpha,a)=-1$. For arbitrary $\alpha\in\mathcal{A}$, if $a\notin\bigcup\alpha$, then $\kappa(\alpha,a)=0$. \begin{definitions} For $\alpha\in\mathcal{A}$, we define the \textbf{set of singular points} of $\alpha$ to be the set $\mathrm{Sing}(\alpha):=\{a\in M^n: \kappa(\alpha,a)\neq 0\}$. $\mathrm{Sing}(\alpha)$ is always finite since all the singular points appear as singletons in the nest corresponding to $\alpha$. Using finiteness of $\mathrm{Sing}(\alpha)$, we define the \textbf{global characteristic} of $\alpha$ to be the sum $\Lambda(\alpha):=-\Sigma_{a\in M^n} \kappa(\alpha,a)$, which in fact is equal to the finite sum $\Lambda(\alpha)=-\Sigma_{a\in \mathrm{Sing}(\alpha)}\kappa(\alpha,a)$. \end{definitions} Fix some $a\in M^n$. Let $\alpha,\beta\in\mathcal{A}$ be such that $\beta\prec\alpha$. Then either $\mathcal{N}_a(\alpha)=\mathcal{N}_a(\beta)=\emptyset$ or $\mathcal{N}_a(\beta)\prec\mathcal{N}_a(\alpha)$. If $C:=\bigcup\alpha\setminus\bigcup\beta$ is a cell, we define the homology $H_*(C)$ to be the relative homology $H_*(\mathcal{K}(\mathcal{N}_a(\alpha\cup\beta)),\mathcal{K}(\mathcal{N}_a(\beta)))$. In particular, the alternating sum of the Betti numbers of $H_*(C)$, denoted by $\chi_a(C)$, is equal to the difference $\chi(\mathcal{K}(\mathcal{N}_a(\alpha)))-\chi(\mathcal{K}(\mathcal{N}_a(\beta)))$ by \ref{p1} and \ref{LONGEXACT}. We also have the equation $\delta(C)=\delta(\alpha)-\delta(\beta)$. Hence if we define the local characteristic of $C$ as $\kappa_a(C):=\chi_a(C)-\delta(C)(a)$, we get the identity $\kappa_a(C)=\kappa_a(P(C))-\kappa_a(N(C))$. We define the extension of the function $\kappa$ to include all cells by setting $\kappa(C,a):=\kappa_a(C)$ for $a\in M^n,C\in\mathcal{C}$.
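As a quick consistency check (an illustrative computation added here; the set $F$ and the points $a_i$ are ours, not part of the development above), the global characteristic counts the points of a finite set: a finite $F=\{a_1,\hdots,a_p\}\subseteq M^n$ is a disjoint union of singleton $pp$-sets, so the additivity established in theorem \ref{t2} below together with the singleton values recorded above gives

```latex
% Illustrative check: \Lambda counts the points of a finite set.
\begin{equation*}
\Lambda(F)=\Sigma_{i=1}^{p}\Lambda(\{a_i\})
          =\Sigma_{i=1}^{p}\bigl(-\kappa(\{a_i\},a_i)\bigr)
          =\Sigma_{i=1}^{p}1
          =p,
\end{equation*}
```

while $\Lambda(M^n)=0$ because $\kappa(\alpha,-)$ vanishes identically whenever $\bigcup\alpha$ is infinite. Both values are used in the proof of corollary \ref{MAINRESULT} below.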
\begin{definitions} We define the set of singular points $\mathrm{Sing}(C)$ for $C\in\mathcal{C}$ analogously by setting $\mathrm{Sing}(C):=\{a\in M^n:\kappa_a(C)\neq 0\}$. This set is finite since $\mathrm{Sing}(C)\subseteq \mathrm{Sing}(P(C))\cup \mathrm{Sing}(N(C))$. We also extend the definition of global characteristic for cells by setting $\Lambda(C):=-\Sigma_{a\in M^n} \kappa(C,a)$. \end{definitions} It is immediate that $\Lambda(C)=\Lambda(P(C))-\Lambda(N(C))$ for every $C\in\mathcal{C}$. The main aim of this section is to prove that the global characteristic is additive in the following sense. \begin{theorem}\label{t2} If $\{B_i:1\leq i\leq l\}, \{B_j':1\leq j\leq m\}$ are two finite families of pairwise disjoint blocks such that $\bigsqcup_{i=1}^l B_i=\bigsqcup_{j=1}^m B_j'$, then $\Sigma_{i=1}^l\Lambda(B_i)=\Sigma_{j=1}^m\Lambda(B_j')$. \end{theorem} The proof of this theorem follows at once from the following local version. \begin{lemma}\label{l1} If $a\in M^n$ and $\{B_i:1\leq i\leq l\}, \{B_j':1\leq j\leq m\}$ are two finite families of pairwise disjoint blocks such that $\bigsqcup_{i=1}^l B_i=\bigsqcup_{j=1}^m B_j'$, then $\Sigma_{i=1}^l\kappa_a(B_i)=\Sigma_{j=1}^m\kappa_a(B_j')$. \end{lemma} \begin{proof} It will be sufficient to show that both these numbers are equal to the sum $\Sigma_{B\in\mathcal{F}}\,\kappa_a(B)$ where $\mathcal{F}$ is any finite family of blocks finer than both the given families. We can in particular choose a finite $pp$-nest $\mathcal{D}$ containing all the elements in $\bigcup_{i=1}^l(P(B_i)\cup N(B_i))\cup\bigcup_{j=1}^m(P(B_j')\cup N(B_j'))$ and set $\mathcal{F}=\{\mathrm{Core}_\mathcal{D}(D):D\in\mathcal{D}^+\}$. This involves partitioning every $B_i$ and $B_j'$ into smaller blocks of the form $\mathrm{Core}_\mathcal D(D)$ for $D\in\mathcal D^+$.
Thus it will be sufficient to show that if $\mathcal{F}$ is a finite family of blocks corresponding to cores of a $pp$-nest $\mathcal{D}$ such that $B=\bigcup \mathcal{F}\in\mathcal{B}$, then $\kappa_a(B)=\Sigma_{F\in\mathcal{F}}\kappa_a(F)$. Consider the sub-poset $\mathcal{H}$ of $\mathcal{L}$ containing all the elements of $\bigcup_{F\in\mathcal{F}}(P(F)\cup N(F))$. Then we construct the antichains $\{\alpha_s\}_{s\geq 0}$ in such a way that $\alpha_s$ is the set of all minimal elements of $\mathcal{H}\setminus\bigcup_{0\leq t<s}\alpha_t$. Since $\mathcal H$ is finite, this process stops after finitely many steps; say the final antichain $\alpha_v$ is $P(B)$. Then we have a chain of antichains $\alpha_0\prec\alpha_1\prec\cdots\prec\alpha_v$. Now $\kappa_a(B)=\kappa_a(\alpha_v)-\kappa_a(\alpha_0)= \Sigma_{t=1}^v[\kappa_a(\alpha_t)-\kappa_a(\alpha_{t-1})]$. In other words, if $C_t$ denotes the cell $\bigcup\alpha_t\setminus\bigcup\alpha_{t-1}$ for $1\leq t\leq v$, then $\kappa_a(B)=\Sigma_{t=1}^v\kappa_a(C_t)$. Now it remains to show that for each $1\leq t\leq v$, $\kappa_a(C_t)=\Sigma_{F\in\alpha_t} \kappa_a(\mathrm{Core}_\mathcal{D}(F))$. This follows from the following proposition by first choosing the $A_j$ to be the elements of $\alpha_t$ and then choosing them to be the elements of $\alpha_{t-1}$. Then by our construction of the chain and the definition of $\kappa_a(C_t)$, we get the required result. \end{proof} \begin{pro}\label{pr2} For any $\alpha_j\in\mathcal{A},A_j=\bigcup\alpha_j,j\in[k]=\{1,2,\hdots,k\}$ where $k\geq 2$, we have $\kappa_a(\bigcup_{j\in[k]}A_j)=\Sigma_{S\subseteq[k],S\neq\emptyset}\kappa_a(\bigcap_{s\in S}A_s\setminus\bigcup_{t\notin S}A_t)$. \end{pro} \begin{proof} We observe that all the arguments on the right hand side of the above expression are cells or possibly empty sets and that they form a partition of the cell appearing on the left hand side.
Then we restate theorem \ref{t1} as $\kappa_a((\bigcup\alpha)\cup(\bigcup\beta))= \kappa_a((\bigcup\alpha)\setminus(\bigcup\beta))+ \kappa_a((\bigcup\beta)\setminus(\bigcup\alpha))+ \kappa_a((\bigcup\alpha)\cap(\bigcup\beta))$. Since the set of $pp$-convex sets is closed under taking unions and intersections, a simple induction proves the proposition with \ref{t1} being the base case. \end{proof} Theorem \ref{t2} allows us to define the global characteristic for arbitrary definable sets. \begin{definition} Let $D\subseteq M^n$ be definable. Then we define the \textbf{global characteristic} $\Lambda(D)$ as the sum of global characteristics of any finite family of blocks partitioning $D$. \end{definition} \subsection{Preservation of global characteristics}\label{PTGC} The aim of this section is to show that the global characteristic is preserved under definable isomorphisms. \begin{theorem}\label{t3} Suppose $D\in \mathrm{Def}(M^n)$ and $f:D\rightarrow M^n$ is a definable injection. Then $\Lambda(D)=\Lambda(f(D))$. \end{theorem} \begin{proof} We first prove the local version, which states that for any $a\in M^n$ and $B\in\mathcal{B}$, if $g:B\rightarrow M^n$ is a $pp$-definable injection, then $\kappa_a(B)=\kappa_{g(a)}(g(B))$. We observe that $\delta(B)(a)=\delta(g(B))(g(a))$. Lemma \ref{COLOURINJ} shows that the complex $\mathcal{K}(\mathcal{N}_a(\alpha))$ is isomorphic to the complex $\mathcal{K}(\mathcal{N}_{g(a)}(g[\alpha]))$ where $g[\alpha]=\{g(A):A\in\alpha\}$ and $\alpha$ is either $P(B)$ or $N(B)$. We conclude that $g(\mathrm{Sing}(B))=\mathrm{Sing}(g(B))$. Hence $\Lambda(B)=-\Sigma_{a\in \mathrm{Sing}(B)}\kappa_a(B)=-\Sigma_{a\in \mathrm{Sing}(B)}\kappa_{g(a)}(g(B))=-\Sigma_{a\in \mathrm{Sing}(g(B))}\kappa_a(g(B))=\Lambda(g(B))$. To prove the theorem, we consider any partition of $D$ into finitely many blocks $B_i,1\leq i\leq m$ such that $f\upharpoonright B_i$ is $pp$-definable.
This is possible by an application of lemma \ref{REP} to the set $Graph(f)$, followed by projection of the finitely many blocks onto the first $n$ coordinates. Note that $D=\bigsqcup_{i=1}^m B_i \Rightarrow f(D)=\bigsqcup_{i=1}^m f(B_i)$ since $f$ is injective. Hence $\Lambda(f(D))=\Sigma_{i=1}^m\Lambda(f(B_i))=\Sigma_{i=1}^m\Lambda(B_i)=\Lambda(D)$, where the first and third equalities follow from theorem \ref{t2} and the second equality follows from the previous paragraph. \end{proof} Now we are ready to prove a special case of the result promised at the end of section \ref{GRFOS}, which states that the Grothendieck ring of a right $\mathcal R$-module $M$ satisfying $M\equiv M^{(\aleph_0)}$ contains $\mathbb Z$ as a subgroup. This shows, in particular, that $K_0(M)$ is nontrivial in this case. \begin{cor}\label{MAINRESULT} Suppose $D\subseteq M^n$ is definable and $f:D\rightarrowtail D$ is a definable injection whose image is cofinite in the codomain. Then $f$ is an isomorphism. \end{cor} \begin{proof} We extend the function $f$ to an injective function $g:M^n\rightarrowtail M^n$ by setting $g(a)=f(a)$ if $a\in D$ and $g(a)=a$ otherwise. Now $F:=M^n\setminus Im(g)$ is finite; say it has $p$ elements. Further $\Lambda(Im(g))=\Lambda(M^n\setminus F)=\Lambda(M^n)-\Lambda(F)=-p$, since $\Lambda(M^n)=0$ and $\Lambda(F)=p$. By theorem \ref{t3}, we get $\Lambda(M^n)=\Lambda(Im(g))$ since $g$ is a definable injection. Hence $p=0$ and thus $g$ is an isomorphism. Since $g$ is the identity function outside $D$, we conclude that $f$ is a definable isomorphism. \end{proof} \subsection{Coloured global characteristics}\label{LCCC} Let $P\in\mathcal{L}^*$ be fixed for this section. We develop the notion of localization at $P$ and local characteristic at $P$; we have developed these ideas earlier when $P$ is a singleton. After stating what we mean by a colour, we define the notion of a ``coloured global characteristic'' and outline the proof that these invariants are preserved under definable isomorphisms.
\begin{definition} We use $\mathcal{L}_P$ to denote the meet-semilattice of all upper bounds of $P$ in $\mathcal{L}$, i.e. $\mathcal{L}_P:=\{A\in\mathcal{L}:A\supseteq P\}$. As usual, we denote the set of all finite antichains in this semilattice by $\mathcal{A}_P$. \end{definition} Since every element of $\mathcal{L}_P$ contains $P$, we may as well quotient out $P$ from each such element. Such a process is consistent with our earlier definition of localization since taking quotient with respect to a singleton set gives an isomorphic copy of the original set. \begin{definitions} We define the operator $\mathcal{Q}_P$ on the elements of $\mathcal{L}_P$ by setting $\mathcal{Q}_P(A):=p+\frac{A-p}{P-p}=\{a+(P-p):a\in A\}$ for any $p\in P$. We can clearly extend this operator to finite subsets of $\mathcal{L}_P$. Now let $\mathcal{L}_{(P)}:=\mathcal{Q}_P[\mathcal{L}_P]$. We use $\mathcal{A}_{(P)}$ to denote the set of all finite antichains in this semilattice. \end{definitions} It is easy to see that $\mathcal{A}_{(P)}=\mathcal{Q}_P[\mathcal{A}_P]$. The appropriate analogue of the localization operator $\mathcal N_a:\mathcal A\rightarrow\mathcal A_a$ is a function $\mathcal{N}_P:\mathcal{A}\rightarrow\mathcal{A}_{(P)}$. \begin{deflem} For $\alpha\in\mathcal{A}$, we define $\mathcal{N}_P(\alpha):=\mathcal{Q}_P(\alpha\cap\mathcal{L}_P)$. As an operator on $pp$-convex sets, $\mathcal{N}_P$ preserves both unions and intersections. \end{deflem} The proof is easy and thus omitted. Recall from definition \ref{SIMPCOMP} of $\mathcal K^a(\alpha)$ that the ``trivial intersections'' were precisely those which were empty or a singleton. On the other hand, ``nontrivial intersections'' were precisely those which contained the $pp$-set $\{a\}$ properly. As $\mathcal N_P$ takes values in $\mathcal A_{(P)}$, we get the correct notion of non-trivial intersections followed by the quotient operation so that the techniques developed for a singleton $P$ still remain valid. 
Now we are ready to state the analogue of definition \ref{SIMPCOMP}. \begin{definition} For $\alpha\in\mathcal{A}$, we define the \textbf{simplicial complex} of $\alpha$ \textbf{in the neighbourhood of} $P$ as the complex $\mathcal{K}(\mathcal{N}_P(\alpha))=\{\beta\subseteq\mathcal{N}_P(\alpha):|\bigcap\beta|=\infty\}$. For simplicity of notation, we denote this complex by $\mathcal{K}^P(\alpha)$. \end{definition} We can easily extend the notion of local characteristic at $P$ as follows. \begin{definition} We define the \textbf{local characteristic} of $\alpha$ at $P$ by $\kappa_P(\alpha):=\chi(\mathcal{K}^P(\alpha))-\delta(\alpha)(P)$. \end{definition} It can be observed that we recover the definition of the local characteristic at a point $a\in M^n$ by choosing $P=\{a\}$. The proofs of theorem \ref{t1} and lemma \ref{l1} go through if we replace $\kappa_a$ by $\kappa_P$. Thus we can define $\kappa_P(D)$ for arbitrary definable sets $D\subseteq M^n$. We define the function $\kappa:\mathrm{Def}(M^n)\times\mathcal{L}^*\rightarrow\mathbb{Z}$ by setting $\kappa(D,P):=\kappa_P(D)$. \begin{definition} The \textbf{set of $\mathcal{L}$-singular elements} of a definable set $D\subseteq M^n$ is defined as the set $\mathrm{Sing}_\mathcal{L}(D):=\{P\in\mathcal{L}:\kappa(D,P)\neq0\}$. \end{definition} Fixing any partition of $D$ into blocks, it can be checked that the set $\mathrm{Sing}_\mathcal L(D)$ is contained in the nest corresponding to that partition and hence is finite. This finiteness will be used to define analogues of the global characteristic, which we call ``coloured global characteristics''. \begin{definition} For a given $P\in\mathcal{L}$, we define the \textbf{colour} of $P$ to be the set $\{A\in\mathcal L:$ there is a bijection $f:A\cong P$ such that $Graph(f)$ is $pp$-definable$\}$. We denote the colour of $P$ by $[[P]]$. \end{definition} Note the significance of this definition.
Theorem \ref{PPET} describes the $pp$-sets as fundamental definable sets and we are trying to classify definable sets up to definable isomorphism (definition \ref{defiso}). In fact it is sufficient to classify $pp$-sets up to $pp$-definable isomorphisms, which is the motivation behind the definition of a colour. Let $\mathcal{X}$ denote the set of colours of elements from $\mathcal L$. We use letters $\mathfrak{A},\mathfrak{B},\mathfrak{C}$ etc. to denote the colours. It can be observed that $[[\emptyset]]$ is a singleton. We denote the colour of any singleton by $\mathfrak{U}$. We use $\mathcal{X}^*$ to denote $\mathcal{X}\setminus\{[[\emptyset]]\}$. The global characteristic $\Lambda(D)$ is equal to $-\Sigma_{P\in\mathfrak{U}}\kappa_P(D)$ for each definable set $D$. This observation can be used to extend the notion of global characteristic. \begin{definition} For $\mathfrak{A}\in\mathcal{X}^*$, we define the \textbf{coloured global characteristic} with respect to $\mathfrak A$ for a definable set $D$ to be the integer $\Lambda_{\mathfrak{A}}(D):=-\Sigma_{P\in\mathfrak{A}}\kappa_P(D)$. This integer is well defined as it is equal to the finite sum $-\Sigma\{\kappa_P(D):P\in(\mathfrak{A}\cap \mathrm{Sing}_\mathcal{L}(D))\}$. \end{definition} The property of coloured global characteristics that we are looking for is stated in the following analogue of theorem \ref{t3}. The proof is analogous to that of \ref{t3} and thus is omitted. \begin{theorem}\label{t4} If $f:D\rightarrow D'$ is a definable bijection between definable sets $D,D'$, then $\Lambda_\mathfrak{A}(D)=\Lambda_\mathfrak{A}(D')$ for each $\mathfrak{A}\in\mathcal{X}^*$. \end{theorem} \section{Special Case: Multiplicative Structure}\label{spcasemult} \subsection{Monoid rings}\label{MRSK} We need the notion of an algebraic structure called a \emph{monoid ring}. \begin{definition} Let $(A,\star,1)$ be a commutative monoid and $S$ be a commutative ring with unity. 
Then we define an $L_{ring}$-structure $(S[A],0,1,+,\cdotp)$ called a \textbf{monoid ring} as follows. \begin{itemize} \item $S[A]:=\{\phi:A\rightarrow S :$ the set $\mathrm{Supp}(\phi)=\{a:\phi(a)\neq 0\}$ is finite$\}$ \item $(\phi+\psi)(a):=\phi(a)+\psi(a)$ for $a\in A$ \item $(\phi\cdotp\psi)(a):=\Sigma_{b\star c=a}\phi(b)\psi(c)$ for $a\in A$ \end{itemize} An element $\phi$ of $S[A]$ can be represented as a formal sum $\Sigma_{a\in A}s_a a$ where $s_a=\phi(a)$. \end{definition} As an example, let $A=\mathbb N$, or equivalently the monoid $\{X^n\}_{n\geq 0}$ considered multiplicatively. Then the monoid ring $S[A]=S[\mathbb N]\cong S[X]$, the polynomial ring in one variable with coefficients from $S$. Let the symbols $\overline{\mathcal L},\overline{\mathcal A},\overline{\mathcal X},\hdots$ etc. denote the unions $\bigcup_{n=1}^\infty\mathcal L_n$, $\bigcup_{n=1}^\infty\mathcal A_n$, $\bigcup_{n=1}^\infty\mathcal X_n,\hdots$ respectively. We shall be especially concerned with the sets $\overline{\mathcal L}^*:=\overline{\mathcal L}\setminus\{\emptyset\}$ and $\overline{\mathcal X}^*:=\overline{\mathcal X}\setminus\{[[\emptyset]]\}$. There is a binary operation $\times:\overline{\mathcal L}^*\times\overline{\mathcal L}^*\rightarrow \overline{\mathcal L}^*$ which maps a pair $(A,B)$ to the cartesian product $A\times B$. This map commutes with the operation $[[-]]$ of taking colour, i.e., whenever $[[A_1]]=[[A_2]]$ and $[[B_1]]=[[B_2]]$, we have $[[A_1\times B_1]]=[[A_2\times B_2]]$. This allows us to define a binary operation $\star:\overline{\mathcal X}^* \times \overline{\mathcal X}^* \rightarrow \overline{\mathcal X}^*$ which takes a pair of colours $(\mathfrak A,\mathfrak B)$ to $[[A\times B]]$ for any $A\in\mathfrak A,B\in\mathfrak B$. The colour $\mathfrak U$ of singletons acts as the identity element for the operation $\star$. Hence $(\overline{\mathcal X}^*,\star,\mathfrak U)$ is a monoid.
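To make the convolution product concrete, here is a minimal Python sketch (purely illustrative and not part of the formal development; the dictionary encoding and the names `add` and `mul` are ours). Elements of $S[A]$ are stored as finitely supported maps $A\rightarrow S$, and for $A=(\mathbb N,+)$ the product recovers ordinary polynomial multiplication, matching $S[\mathbb N]\cong S[X]$.

```python
# Monoid ring S[A] sketch: an element phi is a dict {a: phi(a)} with finite
# support (zero coefficients are dropped).  Here S is modelled by Python ints.

def add(phi, psi):
    """Pointwise sum: (phi + psi)(a) = phi(a) + psi(a)."""
    out = {a: phi.get(a, 0) + psi.get(a, 0) for a in set(phi) | set(psi)}
    return {a: s for a, s in out.items() if s != 0}  # keep the support finite

def mul(phi, psi, star=lambda b, c: b + c):
    """Convolution: (phi . psi)(a) = sum of phi(b)*psi(c) over b * c = a."""
    out = {}
    for b, s in phi.items():
        for c, t in psi.items():
            a = star(b, c)          # the monoid operation, by default (N, +)
            out[a] = out.get(a, 0) + s * t
    return {a: s for a, s in out.items() if s != 0}

# With A = N this is polynomial arithmetic: (1 + X)(1 - X) = 1 - X^2.
print(mul({0: 1, 1: 1}, {0: 1, 1: -1}))  # {0: 1, 2: -1}
```

Passing a different `star` yields other monoid rings; the same shape, with the monoid $(\overline{\mathcal X}^*,\star,\mathfrak U)$ in place of $\mathbb N$, is exactly how $\mathbb Z[\overline{\mathcal X}^*]$ arises below.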
Consider the maps $\Lambda_\mathfrak A:\widetilde{\mathrm{Def}}(M)\rightarrow\mathbb Z$ for $\mathfrak A\in\overline{\mathcal X}^*$ defined by $[D]\mapsto\Lambda_\mathfrak A(D')$ for any $D'\in[D]$. These maps are well defined due to theorem \ref{t4}. We can fix some $[D]\in\widetilde{\mathrm{Def}}(M)$ and look at the set $\mathrm{Supp}([D]):=\{\mathfrak A\in\overline{\mathcal X}^*:\Lambda_\mathfrak A(D)\neq 0\}$. This set is finite since it is contained in the finite set $\{[[P]]:P\in \mathrm{Sing}_{\overline{\mathcal L}}(D)\}$. This shows that the evaluation map $ev_{[D]}:\overline{\mathcal X}^*\rightarrow\mathbb Z$ defined by $\mathfrak A\mapsto\Lambda_{\mathfrak A}([D])$ for each $[D]\in\widetilde{\mathrm{Def}}(M)$ is an element of the monoid ring $\mathbb Z[\overline{\mathcal X}^*]$. Let us consider an example. We take $\mathcal R$ to be an infinite skew-field (i.e. a (possibly non-commutative) ring in which every nonzero element has a two-sided multiplicative inverse) and $M$ to be any nonzero $\mathcal R$-vector space. This example has been studied in detail in \cite{Perera}. In this case, we have $Th(M)=Th(M)^{\aleph_0}$. Using the notion of affine dimension, it can be shown that $\overline{\mathcal X}^*\cong\mathbb N$. It has been shown that $K_0(M)\cong\mathbb Z[X]\cong\mathbb Z[\mathbb N]$. The proof in \cite{Perera} explicitly shows that the semiring $\widetilde{\mathrm{Def}}(M)$ is cancellative and is isomorphic to the semiring of polynomials in $\mathbb Z[X]$ with non-negative leading coefficients. We will prove that a similar fact holds for an arbitrary module $M$, i.e., the structure of the Grothendieck ring $K_0(M)$ is entirely determined by the monoid $\overline{\mathcal X}^*$. \begin{theorem}\label{FINAL} Let $M$ be a right $\mathcal R$-module satisfying $Th(M)=Th(M)^{\aleph_0}$. Then $K_0(M)\cong\mathbb Z[\overline{\mathcal X}^*]$. In particular, $K_0(M)$ is nontrivial for every such nonzero module $M$.
\end{theorem} The proof of this theorem will occupy the next two sections. \subsection{Multiplicative structure of $\widetilde{\mathrm{Def}}(M)$}\label{MULT} Given $D_1\in \mathrm{Def}(M^n)$ and $D_2\in \mathrm{Def}(M^m)$, their cartesian product $D_1\times D_2\in \mathrm{Def}(M^{n+m})$. This shows that $\overline{\mathrm{Def}}(M)$ is closed under cartesian products. We want to show that the sets $\overline{\mathcal L}$, $\overline{\mathcal A}$, $\overline{\mathcal B}$ and $\overline{\mathcal C}$ are all closed under multiplication. Let $P\in\mathcal L_n$ and $Q\in\mathcal L_m$. Then there are $pp$-formulas $\phi(\overline{x})$ and $\psi(\overline{y})$ defining those sets respectively. Without loss of generality, we may assume that $\overline{x}\cap\overline{y}=\emptyset$. Now the formula $\rho(\overline{x},\overline{y})=\phi(\overline{x})\wedge\psi(\overline{y})$ is again a $pp$-formula and it defines the set $P\times Q\in\mathcal L_{n+m}$. This shows that the set $\overline{\mathcal L}$ is closed under multiplication. Now we want to show that the product of two antichains $\alpha\in\mathcal A_n$ and $\beta\in\mathcal A_m$ is again an antichain in $\mathcal A_{n+m}$. We have natural projection maps $\pi_1:M^{n+m}\rightarrow M^n$ and $\pi_2:M^{n+m}\rightarrow M^m$ which project onto the first $n$ and the last $m$ coordinates respectively. First we observe that $(\bigcup\alpha)\times(\bigcup\beta)=\bigcup_{A\in\alpha}\bigcup_{B\in\beta}A\times B$. If either $A_1,A_2\in\alpha$ are distinct or $B_1,B_2\in\beta$ are distinct, then all the distinct elements from $\{A_i\times B_j\}_{i,j=1}^2$ are incomparable with respect to the inclusion ordering since at least one of their projections is so. Hence $\{A\times B:A\in\alpha,B\in\beta\}$ is indeed an antichain of cardinality $|\alpha|\cdot|\beta|$. We will denote this antichain by $\alpha\times\beta$.
Given $C_1,C_2\in\overline{\mathcal C}$, we have $C_1\times C_2=\bigcup(\alpha_1\times\alpha_2)\setminus (\bigcup(\alpha_1\times\beta_2)\cup\bigcup(\beta_1\times\alpha_2))$ where $\alpha_i=P(C_i)$ and $\beta_i=N(C_i)$ for $i=1,2$. This shows that $C_1\times C_2\in\overline{\mathcal C}$ since $\overline{\mathcal A}$ is closed under both products and unions. Furthermore, we observe that $P(C_1\times C_2)=P(C_1)\times P(C_2)$. This in particular shows that the set $\overline{\mathcal B}$ of blocks is also closed under products. \begin{lemma}\label{localcharmult} Let $P,Q\in\overline{\mathcal L}$ and $\alpha,\beta\in\overline{\mathcal A}$. Then $\kappa_{P\times Q}(\alpha\times \beta)=-\kappa_P(\alpha)\kappa_Q(\beta)$. \end{lemma} \begin{proof} First assume that $\delta(\alpha)(P)=\delta(\beta)(Q)=1$. Then observe that \begin{equation}\label{discomp}\mathcal K^{P\times Q}(\alpha\times\beta)\cong\mathcal K^P(\alpha)\boxtimes\mathcal K^Q(\beta).\end{equation} Hence we have \begin{eqnarray*} \kappa_{P\times Q}(\alpha\times\beta)&=&\chi(\mathcal K^{P\times Q}(\alpha\times\beta))-1\\ &=&\chi(\mathcal K^P(\alpha))+\chi(\mathcal K^Q(\beta))-\chi(\mathcal K^P(\alpha))\chi(\mathcal K^Q(\beta))-1\\ &=&(\kappa_P(\alpha)+1)+(\kappa_Q(\beta)+1)-(\kappa_P(\alpha)+1)(\kappa_Q(\beta)+1)-1\\ &=&-\kappa_P(\alpha)\kappa_Q(\beta) \end{eqnarray*} The first and third equalities hold by the definition of the local characteristic and the second by equation (\ref{ecdp}) of lemma \ref{ecdpl} applied to (\ref{discomp}). In the remaining case when either $\delta(\alpha)(P)$ or $\delta(\beta)(Q)$ is $0$, we have $\delta(\alpha\times\beta)(P\times Q)=0$. Hence $\kappa_{P\times Q}(\alpha\times\beta)=0$ and either $\kappa_P(\alpha)$ or $\kappa_Q(\beta)$ is $0$. This gives the necessary identity and thus completes the proof in all cases. \end{proof} The aim of this section is to prove the following theorem.
\begin{theorem}\label{t5} The map $ev:\widetilde{\mathrm{Def}}(M)\rightarrow\mathbb Z[\overline{\mathcal X}^*]$ defined by $[D]\mapsto ev_{[D]}$ is a semiring homomorphism. \end{theorem} \begin{proof} We have already seen that $ev$ is additive, since each $\Lambda_{\mathfrak A}$ is. So it remains to show that it is multiplicative. We have observed that the set $[\overline{\mathcal A}]$ is a monoid with respect to cartesian product, the isomorphism class of a singleton being the identity for the multiplication. So we will first show that $ev:[\overline{\mathcal A}]\rightarrow\mathbb Z[\overline{\mathcal X}^*]$ is a multiplicative monoid homomorphism. Let $\alpha,\beta\in\overline{\mathcal A}$ be fixed. Note that \begin{equation}\label{singincl}S:=\mathrm{Sing}_{\overline{\mathcal L}}(\alpha\times\beta)\subseteq\{P\times Q:P\in \mathrm{Sing}_{\overline{\mathcal L}}(\alpha),Q\in \mathrm{Sing}_{\overline{\mathcal L}}(\beta)\}.\end{equation} We need to show that $ev_{[\alpha]}\cdotp ev_{[\beta]}=ev_{[\alpha\times\beta]}$ as maps on $\overline{\mathcal X}^*$. This is equivalent to $ev_{[\alpha\times\beta]}(\mathfrak C)=\sum_{\mathfrak A\star\mathfrak B=\mathfrak C}ev_{[\alpha]}(\mathfrak A) ev_{[\beta]}(\mathfrak B)$ for each $\mathfrak C\in\overline{\mathcal X}^*$. Using the definition of the evaluation map, it is enough to check that $\Lambda_{\mathfrak C}([\alpha\times\beta])=\sum_{\mathfrak A\star\mathfrak B=\mathfrak C}\Lambda_{\mathfrak A}([\alpha]) \Lambda_{\mathfrak B}([\beta])$ for each $\mathfrak C\in\overline{\mathcal X}^*$. 
The left hand side of the above equation is \begin{eqnarray*} \Lambda_{\mathfrak C}([\alpha\times\beta]) &=& -\sum_{R\in\mathfrak C}\kappa_R(\alpha\times\beta)\\ &=& -\sum_{R\in(\mathfrak C\cap S)}\kappa_R(\alpha\times\beta)\\ &=& \sum_{R\in(\mathfrak C\cap S)}\kappa_{\pi_1(R)}(\alpha)\kappa_{\pi_2(R)}(\beta) \end{eqnarray*} The last equality is given by lemma \ref{localcharmult} since, by (\ref{singincl}), every $R\in\mathfrak C\cap S$ can be written as $R=\pi_1(R)\times\pi_2(R)$. The right hand side is \begin{eqnarray*} \sum_{\mathfrak A\star\mathfrak B=\mathfrak C}\Lambda_{\mathfrak A}([\alpha])\Lambda_{\mathfrak B}([\beta]) &=& \sum_{\mathfrak A\star\mathfrak B=\mathfrak C}\left(-\sum_{P\in\mathfrak A}\kappa_P(\alpha)\right)\left(-\sum_{Q\in\mathfrak B}\kappa_Q(\beta)\right) \\ &=& \sum_{\mathfrak A\star\mathfrak B=\mathfrak C}\sum_{P\in\mathfrak A,Q\in\mathfrak B}\kappa_P(\alpha)\kappa_Q(\beta) \end{eqnarray*} Using the definition of $\mathrm{Sing}_{\overline{\mathcal L}}(-)$, we observe that the final expressions on both sides are equal. This completes the proof that $ev$ is a multiplicative monoid homomorphism on $[\overline{\mathcal A}]$. Now we will show that $ev$ is also multiplicative on the monoid $[\overline{\mathcal C}]$. Let $C_1,C_2$ be cells with $\alpha_i=P(C_i)$ and $\beta_i=N(C_i)$ for each $i=1,2$. Then $C_1\times C_2=\bigcup(\alpha_1\times\alpha_2)\setminus (\bigcup(\alpha_1\times\beta_2)\cup\bigcup(\beta_1\times\alpha_2))$. We also know that $ev_{[C]}=ev_{[P(C)]}-ev_{[N(C)]}$ for each cell $C$. We need to show that $\Lambda_{\mathfrak C}(C_1\times C_2)=\sum_{\mathfrak A\star\mathfrak B=\mathfrak C}\Lambda_{\mathfrak A}([C_1])\Lambda_{\mathfrak B}([C_2])$ for each $\mathfrak C\in\overline{\mathcal X}^*$.
Now we have \begin{equation*} \Lambda_{\mathfrak C}(C_1\times C_2)=\Lambda_{\mathfrak C}(\alpha_1\times\alpha_2)-\Lambda_{\mathfrak C}((\alpha_1\times\beta_2)\vee(\beta_1\times\alpha_2)) \end{equation*} and we also have \begin{eqnarray*} \sum_{\mathfrak A\star\mathfrak B=\mathfrak C}\Lambda_{\mathfrak A}([C_1])\Lambda_{\mathfrak B}([C_2]) &=& \sum_{\mathfrak A\star\mathfrak B=\mathfrak C}(\Lambda_{\mathfrak A}([\alpha_1])-\Lambda_{\mathfrak A}([\beta_1]))(\Lambda_{\mathfrak B}([\alpha_2])-\Lambda_{\mathfrak B}([\beta_2])) \\ &=& \Lambda_{\mathfrak C}(\alpha_1\times\alpha_2)+\Lambda_{\mathfrak C}(\beta_1\times\beta_2)-\Lambda_{\mathfrak C}(\beta_1\times\alpha_2)-\Lambda_{\mathfrak C}(\alpha_1\times\beta_2) \end{eqnarray*} Therefore we need to show \begin{equation*} \Lambda_{\mathfrak C}((\alpha_1\times\beta_2)\vee(\beta_1\times\alpha_2))+\Lambda_{\mathfrak C}(\beta_1\times\beta_2)=\Lambda_{\mathfrak C}(\alpha_1\times\beta_2)+\Lambda_{\mathfrak C}(\beta_1\times\alpha_2). \end{equation*} This is true by theorem \ref{t1} since we have $(\alpha_1\times\beta_2)\wedge(\beta_1\times\alpha_2)=(\beta_1\times\beta_2)$. In the last step, we show that $ev_{[D_1\times D_2]}=ev_{[D_1]}\cdotp ev_{[D_2]}$ for arbitrary definable sets $D_1,D_2$. Let $[D_1]=\sum_{i=1}^k[B_{1i}]$ and $[D_2]=\sum_{j=1}^l[B_{2j}]$ be obtained from any decompositions of $D_1$ and $D_2$ into blocks. Then $[D_1\times D_2]=\sum_{i=1}^{k}\sum_{j=1}^{l}[B_{1i}\times B_{2j}]$.
For each $\mathfrak C\in\overline{\mathcal X}^*$, we have \begin{eqnarray*} ev_{[D_1]}\cdotp ev_{[D_2]}(\mathfrak C) &=& \sum_{\mathfrak A\star\mathfrak B=\mathfrak C}\Lambda_{\mathfrak A}([D_1])\Lambda_{\mathfrak B}([D_2]) \\ &=& \sum_{\mathfrak A\star\mathfrak B=\mathfrak C}\left(\sum_{i=1}^{k}\Lambda_{\mathfrak A}([B_{1i}])\right)\left(\sum_{j=1}^{l}\Lambda_{\mathfrak B}([B_{2j}])\right) \\ &=& \sum_{i=1}^{k}\sum_{j=1}^{l}\sum_{\mathfrak A\star\mathfrak B=\mathfrak C}\Lambda_{\mathfrak A}([B_{1i}])\Lambda_{\mathfrak B}([B_{2j}]) \\ &=& \sum_{i=1}^{k}\sum_{j=1}^{l}\Lambda_{\mathfrak C}([B_{1i}\times B_{2j}]) \\ &=& \Lambda_{\mathfrak C}(\sum_{i=1}^{k}\sum_{j=1}^{l}[B_{1i}\times B_{2j}]) \\ &=& ev_{[D_1\times D_2]}(\mathfrak C). \end{eqnarray*} This completes the proof that $ev$ is a semiring homomorphism. \end{proof} \subsection{Computation of the Grothendieck ring}\label{COMPUTATION} In the previous section, we showed that $ev:\widetilde{\mathrm{Def}}(M)\rightarrow\mathbb Z[\overline{\mathcal X}^*]$ is a semiring homomorphism. Since the codomain of this map is a ring, it factorizes through the unique homomorphism of cancellative semirings $\widetilde{ev}:\widetilde{\widetilde{\mathrm{Def}}(M)}\rightarrow\mathbb Z[\overline{\mathcal X}^*]$ where $\widetilde{\widetilde{\mathrm{Def}}(M)}$ is the quotient semiring of $\widetilde{\mathrm{Def}}(M)$ obtained as in theorem \ref{QUOCONST}. Our next aim is to prove the following lemma. \begin{lemma}\label{INJEV} The map $\widetilde{ev}:\widetilde{\widetilde{\mathrm{Def}}(M)}\rightarrow\mathbb Z[\overline{\mathcal X}^*]$ is injective. \end{lemma} \begin{proof} We will prove this lemma in several steps. First we will identify a subset of $\overline{\mathrm{Def}}(M)$ where the restriction of the evaluation function is injective. Let $\mathcal U=\{\alpha\in\overline{\mathcal A}:A_1\cap A_2=\emptyset$ for all distinct $A_1,A_2\in\alpha\}$.
Then it can be easily checked that $\Lambda_{\mathfrak A}(\alpha)=|\alpha\cap\mathfrak A|$ for each $\mathfrak A\in\overline{\mathcal X}^*$ and $\alpha\in\mathcal U$. Hence if $ev_{[\alpha]}=ev_{[\beta]}$ for any $\alpha,\beta\in\mathcal U$, then we have $[\alpha]=[\beta]$. This proves that the map $ev$ is itself injective on $\mathcal U$. Given any $[D_1],[D_2]\in\widetilde{\mathrm{Def}}(M)$ such that $ev_{[D_1]}=ev_{[D_2]}$, we will find some $[X]\in\widetilde{\mathrm{Def}}(M)$ such that $[D_1]+[X]=[\alpha']$ and $[D_2]+[X]=[\beta']$ for some $\alpha',\beta'\in\mathcal U$. Then we get $ev_{[\alpha']}=ev_{[D_1]}+ev_{[X]}=ev_{[D_2]}+ev_{[X]}=ev_{[\beta']}$ and hence we will be done by the previous paragraph. \textbf{Claim:} It is sufficient to assume $[D_1],[D_2]\in[\overline{\mathcal A}]$. Let $[D_1]=\sum_{i=1}^k[B_{1i}]$ and $[D_2]=\sum_{j=1}^l[B_{2j}]$ be obtained from any decompositions of $D_1$ and $D_2$ into blocks. We have $[P(B)]=[B]+[N(B)]$ for any $B\in\overline{\mathcal B}$. Therefore if we choose $[Y]=\sum_{i=1}^k[N(B_{1i})]+\sum_{j=1}^l[N(B_{2j})]$, we get $[D_1]+[Y]=\sum_{i=1}^k[P(B_{1i})]+ \sum_{j=1}^l[N(B_{2j})]$ and $[D_2]+[Y]=\sum_{i=1}^k[N(B_{1i})]+\sum_{j=1}^l[P(B_{2j})]$. Hence both $[D_1]+[Y],[D_2]+[Y]\in[\overline{\mathcal A}]$. This finishes the proof of the claim. Now let $\alpha,\beta\in\overline{\mathcal A}$ be such that $ev_{[\alpha]}=ev_{[\beta]}$. We describe an algorithm which terminates in finitely many steps and yields some $[X]$ such that $[\alpha]+[X],[\beta]+[X]\in[\mathcal U]$. Before stating the algorithm, we define a \textbf{complexity function} $\Gamma:\overline{\mathcal A}\rightarrow\mathbb N$. For each antichain $\alpha$, the complexity $\Gamma(\alpha)$ is defined to be the maximum of the lengths of chains in the smallest nest corresponding to $\alpha$, where the length of a chain is the number of elements in it. Note that $\Gamma(\alpha)\leq 1$ if and only if $\alpha\in\mathcal U$.
Let $\alpha=\{A_1,A_2,\hdots,A_k\}$ be any enumeration and let $\alpha_i=\{A_1,A_2,\hdots,A_i\}$ for each $1\leq i\leq k$. Similarly choosing an enumeration $\beta=\{B_1,B_2,\hdots,B_l\}$, we define $\beta_j$ for each $1\leq j\leq l$. Then we observe that $\bigcup\alpha=\bigsqcup_{i=1}^k \mathrm{Core}_{\alpha_i}(A_i)$ and $\bigcup\beta=\bigsqcup_{j=1}^l \mathrm{Core}_{\beta_j}(B_j)$. Now each $\mathrm{Core}_{\alpha_i}(A_i)$ is a block, which can be completed to a $pp$-set if we take its (disjoint) union with $N(\mathrm{Core}_{\alpha_i}(A_i))$. This can be written as the equation $[A_i]=[\mathrm{Core}_{\alpha_i}(A_i)]+ [N(\mathrm{Core}_{\alpha_i}(A_i))]$. If $\bigcup\alpha\subseteq M^n$, we consider $M^{nk}$ and inject $\mathrm{Core}_{\alpha_i}(A_i)$ in the obvious way into the $i^{th}$ copy of $M^n$ in $M^{nk}$ for each $i$. This gives us a definable set definably isomorphic to $\bigcup\alpha$. The advantage of this decomposition is that we can also add an isomorphic copy of $N(\mathrm{Core}_{\alpha_i}(A_i))$ at the appropriate place for each $i$ and obtain a new antichain representing $\sum_{i=1}^k [A_i]$. Repeating the same procedure for $\beta$ yields a representative of $\sum_{j=1}^l [B_j]$. In order to maintain the evaluation function on both sides, we add disjoint copies of the antichains $N(\mathrm{Core}_{\alpha_i}(A_i))$, $N(\mathrm{Core}_{\beta_j}(B_j))$ to both sides. So we choose $[W]=\sum_{i=1}^k[N(\mathrm{Core}_{\alpha_i}(A_i))]+ \sum_{j=1}^l [N(\mathrm{Core}_{\beta_j}(B_j))]$; hence $[\alpha]+[W]$ and $[\beta]+[W]$ both lie in $[\overline{\mathcal A}]$, and the particular antichains $\alpha',\beta'$ in these classes constructed above satisfy $\Gamma((\bigcup\alpha')\sqcup(\bigcup\beta'))<\Gamma((\bigcup\alpha)\sqcup(\bigcup\beta))$. The inequality holds since we isolate the maximal elements of the nest corresponding to $(\bigcup\alpha)\cup(\bigcup\beta)$ in the process.
We repeat this process, inducting on the complexity of the antichains, until the disjoint union of the pair of antichains in the output lies in $\mathcal U$. Since the complexity decreases at each step, this algorithm terminates in finitely many steps. The required $[X]$ is the sum of the $[W]$'s obtained at each step. This finishes the proof of the injectivity of the map $\widetilde{ev}$. \end{proof} Finally we are ready to prove theorem \ref{FINAL} regarding the structure of the Grothendieck ring $K_0(M)$. \begin{proof} (of Theorem \ref{FINAL}) It is easy to observe that the image of $\mathcal U$ under the evaluation map is the monoid semiring $\mathbb N[\overline{\mathcal X}^*]$. The Grothendieck ring $K_0(\mathbb N[\overline{\mathcal X}^*])$ is clearly isomorphic to the monoid ring $\mathbb Z[\overline{\mathcal X}^*]$. Since the map $\widetilde{ev}$ is injective by lemma \ref{INJEV} and $\mathbb N[\overline{\mathcal X}^*]\subseteq Im(\widetilde{ev})\subseteq\mathbb Z[\overline{\mathcal X}^*]$, we have $K_0(M)=K_0(Im(\widetilde{ev}))\cong\mathbb Z[\overline{\mathcal X}^*]$ by the universal property of $K_0$ in theorem \ref{GRCONSTR}. \end{proof} \section{General Case}\label{gencase} \subsection{Finite indices of $pp$-pairs}\label{TNTA} So far we have considered the Grothendieck ring of a right $\mathcal R$-module $M$ whose theory $T:=Th(M)$ satisfies $T=T^{\aleph_0}$. From this section onwards we remove this condition and work with an arbitrary right $\mathcal R$-module $M$. We continue to use the notations $\mathcal L_n,\mathcal P_n,\mathcal A_n,\mathcal X_n$ to denote the set of all $pp$-subsets of $M^n$, the set of all finite subsets of $\mathcal L_n$, the set of all finite antichains in $\mathcal L_n$ and the set of all $pp$-isomorphism classes (colours) in $\mathcal L_n$ respectively. We still use the representation theorem \ref{REP}, but lemma \ref{NLU} is no longer available to establish the uniqueness statement, proposition \ref{UNIQUE1}.
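The monoid ring $\mathbb Z[\overline{\mathcal X}^*]$ appearing in theorem \ref{FINAL} is the ring of finite formal $\mathbb Z$-linear combinations of colours, multiplied by convolution. A minimal sketch of this multiplication for an arbitrary commutative monoid follows; the dictionary representation and the function name are our own choices, not notation from the text.

```python
from collections import Counter

def monoid_ring_mul(f, g, op):
    """Product in Z[S]: elements are dicts {monoid element: integer coefficient},
    op is the monoid operation of S; coefficients of equal products are summed."""
    h = Counter()
    for s, a in f.items():
        for t, b in g.items():
            h[op(s, t)] += a * b
    return {k: v for k, v in h.items() if v != 0}
```

For the free monoid $(\mathbb N,+)$ this recovers polynomial multiplication, e.g. $(X+1)(X-1)=X^2-1$.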
As a result we do not have a bijection between the set of all $pp$-convex sets, which we denote by $\mathcal O_n$, and the set $\mathcal A_n$. The elements of the set $\mathcal C_n:=\{(\bigcup\alpha)\setminus(\bigcup\beta)\mid \alpha,\beta\in\mathcal A_n,\ \bigcup\beta\subsetneq\bigcup\alpha\}$ will be called cells. The cells admitting a representation of the form $P\setminus\bigcup\beta$ for some $P\in\mathcal L_n$ and $\beta\in\mathcal A_n$ such that $\bigcup\beta\subsetneq P$ will be called blocks, and the set of all blocks in $\mathcal C_n$ is denoted by $\mathcal B_n$. Let $(-)^\circ:\mathcal L_n\rightarrow \mathcal L_n$ denote the function which takes a coset $P$ to the subgroup $P^\circ:=P-p$, where $p\in P$ is any element. We use $\mathcal L_n^\circ$ to denote the image of this function, i.e. the set of all $pp$-definable subgroups. Let $\sim_n$ denote the relation on $\mathcal L_n^\circ$ defined by $P\sim_nQ$ if and only if $[P:P\cap Q]+[Q:P\cap Q]<\infty$. This is the \textbf{commensurability relation} and it can be easily checked to be an equivalence relation. We can extend this relation to all elements of $\mathcal L_n$ using the same definition if we set the index $[P:Q]:=[P^\circ:P^\circ\cap Q^\circ]$ for all $P,Q\in\mathcal L_n$. Let $\mathcal Y_n$ denote the set of all commensurability equivalence classes of $\mathcal L_n$ (\textbf{bands} for short). We use capital bold letters $\mathbf{P},\mathbf{Q},\cdots$ etc. to denote bands. The equivalence class (band) of $P$ will be denoted by the corresponding bold letter $\mathbf{P}$. Now we fix some $n\geq 1$ and drop all the subscripts as usual. Note that, in the special case, a band is just the collection of all cosets of a $pp$-subgroup. In particular any two distinct elements of a band are disjoint. This `discreteness' has been exploited heavily in all the proofs for the special case. We need to work hard to set up the technical machinery for defining the local characteristics.
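As a toy illustration of the commensurability relation, one can take coordinate-wise subgroups $a_1\mathbb Z\oplus a_2\mathbb Z\leq\mathbb Z^2$, with $a_i=0$ encoding the zero subgroup in that coordinate. The sketch below (our own encoding, not notation from the text) computes the index $[P:P\cap Q]$ and checks commensurability.

```python
from math import lcm

def coord_index(a, b):
    """Index [aZ : aZ ∩ bZ] inside Z; None encodes an infinite index.
    The value a = 0 encodes the zero subgroup."""
    if a == 0:
        return 1            # aZ = 0, so aZ ∩ bZ = 0 and the index is 1
    if b == 0:
        return None         # [aZ : 0] is infinite for a != 0
    return lcm(a, b) // a   # aZ ∩ bZ = lcm(a, b)Z

def index(P, Q):
    """Index [P : P ∩ Q] for coordinate-wise subgroups given as tuples."""
    total = 1
    for a, b in zip(P, Q):
        i = coord_index(a, b)
        if i is None:
            return None
        total *= i
    return total

def commensurable(P, Q):
    """P ~ Q iff both [P : P ∩ Q] and [Q : P ∩ Q] are finite."""
    return index(P, Q) is not None and index(Q, P) is not None
```

Here $2\mathbb Z\oplus 3\mathbb Z$ and $4\mathbb Z\oplus 5\mathbb Z$ are commensurable, while $2\mathbb Z\oplus 0$ and $2\mathbb Z\oplus\mathbb Z$ are not, since $[\mathbb Z:0]$ is infinite.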
The proofs for the general case will be similar to those for the special case once we obtain the required discreteness condition. Let $\mathbf{P}\in\mathcal Y$. It can be easily checked that if $P,Q\in\mathbf{P}$ and $P\cap Q\neq\emptyset$ then $P\cap Q\in\mathbf{P}$, i.e. $\mathbf{P}$ is closed under nonempty intersections. By definition of the index, it is also clear that if $P\in\mathbf{P}$ and $a\in M^n$, then $a+P\in\mathbf{P}$. Let $\mathcal A(\mathbf{P}),\mathcal P(\mathbf{P})$ and $\mathcal O(\mathbf{P})$ denote the sets of all finite antichains in $\mathbf{P}$, finite subsets of $\mathbf{P}$ and unions of finite subsets of $\mathbf{P}$ respectively. We have the following analogue of proposition \ref{UNIQUE1} for $pp$-convex sets. The proof is omitted as it is similar to the $\mathrm{T=T^{\aleph_0}}$ case. \begin{pro} Let $X\in\mathcal O$. Then the set $S(X):=\{\mathbf{P}\in\mathcal Y: \exists \alpha\in\mathcal A\ (\alpha\cap\mathbf{P}\neq\emptyset,\ \bigcup\alpha=X)\}$ is finite. Furthermore for any two $\alpha,\beta\in\mathcal A$ such that $\bigcup\alpha=\bigcup\beta=X$ and each $\mathbf{P}\in S(X)$, we have $\bigcup(\alpha\cap\mathbf{P})=\bigcup(\beta\cap\mathbf{P})$. Thus $X$ is uniquely determined by the family $\{X_\mathbf{P}:=\bigcup(\alpha\cap\mathbf{P}) \in\mathcal O(\mathbf{P})\mid\mathbf{P}\in S(X)\}$ for any $\alpha\in\mathcal A$ such that $\bigcup\alpha=X$. \end{pro} Given some $X\in\mathcal O(\mathbf{P})$ there could be two different $\alpha,\beta\in\mathcal A(\mathbf{P})$ such that $\bigcup\alpha=\bigcup\beta=X$. The nests corresponding to such antichains could have entirely different (semilattice) structures. The following proposition gives us a way to obtain an antichain $\alpha$ representing $X$ such that if $A,B\in\alpha$ and $A\neq B$, then $A\cap B=\emptyset$. \begin{pro} Let $X\in\mathcal O(\mathbf{P})$.
Then for any $\alpha\in\mathcal A(\mathbf{P})$ such that $\bigcup\alpha=X$, there is some $\mathbf{P}(\alpha)\in\mathbf{P}^\circ$, where $\mathbf{P}^\circ:=\mathbf{P}\cap\mathcal L^\circ$ denotes the set of subgroups lying in the band $\mathbf{P}$, such that $X$ is a finite union of distinct cosets of $\mathbf{P}(\alpha)$. \end{pro} \begin{proof} Choose $\mathbf{P}(\alpha)=\bigcap\{Q^\circ:Q\in\alpha\}$ and observe that $\mathbf{P}(\alpha)\in\mathbf{P}^\circ$ since $\mathbf{P}$ is closed under finite nonempty intersections. \end{proof} The previous two propositions together imply that we can always find a `nice' antichain representing the given $pp$-convex set. The following definition describes what we mean by this. \begin{definition} A finite set $\alpha\in\mathcal P$ is said to be in \textbf{discrete form} if, for each $\mathbf{P}\in\mathcal Y$ with $\alpha\cap\mathbf{P}\neq\emptyset$, the set $\alpha\cap\mathbf{P}$ consists of finitely many cosets of a fixed element of $\mathbf{P}^\circ$, denoted $\mathbf{P}(\alpha)$. The set of all finite sets $\alpha\in\mathcal P$ in discrete form will be denoted by $\mathcal P^d$ and the set of all antichains in discrete form will be denoted by $\mathcal A^d$. \end{definition} We would like to define the local characteristics for the elements of $\mathcal P^d$ as before and show that they satisfy the conclusion of theorem \ref{t1}. We restrict our attention to those $\alpha\in\mathcal P^d$ such that $\alpha=\hat{\alpha}$ (i.e. the nest corresponding to $\alpha$ is $\alpha$ itself). We denote the set of all such finite sets by $\hat{\mathcal P}^d$. Since we will deal with finite index pairs of subgroups in $\mathcal L^\circ$, we need further compatibility conditions on $P$ and $\alpha$, as stated in the following definition. \begin{definition} A finite family $\mathcal F$ of elements of $\mathcal P$ is called \textbf{compatible} if $\mathcal F\subseteq\hat{\mathcal P}^d$ and for all $\alpha,\beta\in\mathcal F$ and $\mathbf{P}\in\mathcal Y$, we have $\mathbf{P}(\alpha)=\mathbf{P}(\beta)$ whenever $\mathbf{P}\cap\alpha,\mathbf{P}\cap\beta\neq\emptyset$.
Furthermore, we say that $P\in\mathcal L$ is \textbf{compatible with} a finite family $\mathcal F$ of elements of $\mathcal P$ if $\mathcal F$ is compatible and $P\in\bigcup\mathcal F$. \end{definition} It is easy to observe that given any finite family $\{X_1,X_2,\hdots,X_k\}$ of $pp$-convex sets, we can obtain a compatible family $\{\alpha_1,\alpha_2,\hdots,\alpha_k\}$ of antichains such that $\bigcup\alpha_i=X_i$ for each $i$. Finally we are ready to define the local characteristics in this set-up. \begin{definition} Let $P\in\mathcal L$ be compatible with a family $\mathcal F$ and let $\alpha\in\mathcal F$. We associate an abstract simplicial complex $\mathcal K^P(\alpha)$ with the pair $(\alpha,P)$ by setting $\mathcal K^P(\alpha):=\{\beta\subseteq\alpha: \beta\neq\emptyset,\,\bigcap\beta\supsetneq P\}$. We define the \textbf{local characteristic} $\kappa_P$ by the formula $\kappa_P(\alpha):=\chi(\mathcal K^P(\alpha))-\delta(\alpha)(P)$. \end{definition} Now we are ready to state the analogue of theorem \ref{t1}; it has essentially the same proof, since the notion of a compatible family was developed precisely to avoid finite index pairs of $pp$-subgroups. Since we achieve discreteness simultaneously for any finite family of antichains, no changes in the proof of theorem \ref{t1} are necessary. \begin{theorem}\label{t1general} Let $X,Y\in\mathcal O$. Then $X\cup Y,X\cap Y\in\mathcal O$. For any compatible family $\mathcal F:=\{\alpha_1,\alpha_2,\beta_1,\beta_2\}$ such that $\bigcup\alpha_1=X$, $\bigcup\alpha_2=Y$, $\bigcup\beta_1=X\cup Y$ and $\bigcup\beta_2=X\cap Y$ and any $P\in\mathcal L$ compatible with $\mathcal F$, we have \begin{equation*} \kappa_P(\alpha_1)+\kappa_P(\alpha_2)=\kappa_P(\beta_1)+\kappa_P(\beta_2).
\end{equation*} \end{theorem} We observe that the set $\overline{\mathcal A^d}$ is closed under cartesian products and thus we have the following analogue of lemma \ref{localcharmult} with the same proof. \begin{lemma}\label{localcharmultgeneral} Let $P,Q\in\overline{\mathcal L}$ be compatible with $\{\alpha,\beta\}\subseteq\overline{\mathcal A^d}$. Then \begin{equation*} \kappa_{P\times Q}(\alpha\times \beta)=-\kappa_P(\alpha)\kappa_Q(\beta). \end{equation*} \end{lemma} \subsection{The invariants ideal}\label{II} Once again, we use the notations $\overline{\mathcal L},\overline{\mathcal X}$ to denote the unions $\bigcup_{n=1}^{\infty}\mathcal L_n,$ $\bigcup_{n=1}^{\infty}\mathcal X_n$ etc. and set $\overline{\mathcal L}^*=\overline{\mathcal L}\setminus\{\emptyset\}, \overline{\mathcal X}^*=\overline{\mathcal X}\setminus\{[[\emptyset]]\}$ where $[[-]]:\overline{\mathcal L}\rightarrow\overline{\mathcal X}$ is the map taking a $pp$-set to its colour. Now, $\overline{\mathcal X}^*$ is a multiplicative monoid and we consider the monoid ring $\mathbb Z[\overline{\mathcal X}^*]$. In the case when $\mathrm{T\neq T^{\aleph_0}}$, for each $n\geq 1$ there are $P,Q\in\mathcal L_n$ such that $1<\mathrm{Inv}(M;P,Q)<\infty$. We may assume without loss of generality that $0\in Q\subseteq P$. Now we define an ideal of the monoid ring, called \textbf{the invariants ideal}, which encodes these invariants. The following proposition is the motivation. \begin{pro}\label{partitionfurther} Let $\mathbf{P}\in\mathcal Y_n$ and $X\in\mathcal O(\mathbf{P})$.
For any $\alpha,\beta\in\mathcal A^d_n$ with $\bigcup\alpha=\bigcup\beta=X$, we have \begin{equation*} [\mathbf{P}(\alpha):\mathbf{P}(\beta)]\,|\alpha\cap\mathbf{P}| =[\mathbf{P}(\beta):\mathbf{P}(\alpha)]\,|\beta\cap\mathbf{P}|. \end{equation*} \end{pro} \begin{proof} Partition the cosets of $\mathbf{P}(\alpha)$ and of $\mathbf{P}(\beta)$ which are contained in $X$ into cosets of $\mathbf{P}(\alpha)\cap\mathbf{P}(\beta)$; counting these cosets in the two ways gives the required equality. \end{proof} \begin{definition} Let $\delta_{\mathfrak A}:\overline{\mathcal X}^*\rightarrow\mathbb Z$ denote the characteristic function of the colour $\mathfrak A$ for each $\mathfrak A\in\overline{\mathcal X}^*$. We define \textbf{the invariants ideal $\mathcal J$} of the monoid ring $\mathbb Z[\overline{\mathcal X}^*]$ to be the ideal generated by the set \begin{equation*} \{\delta_{[[P]]}-[P:Q]\delta_{[[Q]]}: P,Q\in\overline{\mathcal L},\ P\supseteq Q,\ \mathrm{Inv}(M;P,Q)<\infty\}. \end{equation*} \end{definition} The main aim of this section is to prove the following theorem. \begin{theorem}\label{FINALgeneral} For every right $\mathcal R$-module $M$, we have \begin{center} $K_0(M)\cong\mathbb Z[\overline{\mathcal X}^*]/\mathcal J$. \end{center} \end{theorem} We have proved this theorem when $\mathrm{T=T^{\aleph_0}}$ since the invariants ideal is trivial in that case. Let $\overline{\mathcal Y}=\bigcup_{n=1}^\infty\mathcal Y_n$. Given $\mathfrak A\in\overline{\mathcal X}^*$, we define $\mathcal Y(\mathfrak A):=\{\mathbf{P}\in\overline{\mathcal Y}:\mathbf{P}\cap\mathfrak A\neq\emptyset\}$. In order to define the global characteristics in this case, we need to find the set over which they vary. Let $\mathfrak A,\mathfrak B\in\overline{\mathcal X}^*$. We say that $\mathfrak A\approx\mathfrak B$ if and only if $\mathcal Y(\mathfrak A)\cap\mathcal Y(\mathfrak B)\neq\emptyset$. This relation is reflexive and symmetric. We use $\approx$ again to denote its transitive closure.
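The counting identity of proposition \ref{partitionfurther} can be checked concretely in a finite quotient: take $X\subseteq\mathbb Z/N\mathbb Z$ written both as a union of cosets of $d\mathbb Z/N\mathbb Z$ and as a union of cosets of $e\mathbb Z/N\mathbb Z$, with $d,e\mid N$. The following sketch is a toy verification under this simplification; all names are ours.

```python
from math import lcm

def cosets_covering(X, d, N):
    """The cosets of the subgroup dZ/NZ (with d | N) whose union is X;
    raises if X is not a union of such cosets."""
    reps = sorted({x % d for x in X})
    cosets = [frozenset((r + k * d) % N for k in range(N // d)) for r in reps]
    if set().union(*cosets) != set(X):
        raise ValueError("X is not a union of cosets of dZ/NZ")
    return cosets

def rel_index(d, e):
    """[dZ/NZ : dZ/NZ ∩ eZ/NZ]; the intersection is lcm(d, e)Z/NZ."""
    return lcm(d, e) // d
```

For instance $X=\mathbb Z/12\mathbb Z$ is the union of $2$ cosets of $2\mathbb Z/12\mathbb Z$ and of $3$ cosets of $3\mathbb Z/12\mathbb Z$, and indeed $[2\mathbb Z:6\mathbb Z]\cdot 2=[3\mathbb Z:6\mathbb Z]\cdot 3=6$.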
The $\approx$-equivalence class of $\mathfrak A$ will be denoted by $\widetilde{\mathfrak A}$. \begin{definition} Let $\mathfrak A\in\overline{\mathcal X}^*$. Define the \textbf{colour class group} $\mathcal R(\widetilde{\mathfrak A})$ as the quotient of the free abelian group $\mathbb Z\langle \delta_{\mathfrak B}:\mathfrak B\in\widetilde{\mathfrak A}\rangle$ by the subgroup $\mathcal J(\widetilde{\mathfrak A})$ generated by the elements $\{\delta_{[[P]]}-[P:Q]\delta_{[[Q]]}: P,Q\in\bigcup\widetilde{\mathfrak A},\ P\supseteq Q\}$. \end{definition} It can be observed that the underlying abelian group of the monoid ring $\mathbb Z[\overline{\mathcal X}^*]$ is the direct sum of the free abelian groups $\mathbb Z\langle \delta_{\mathfrak B}:\mathfrak B\in\widetilde{\mathfrak A}\rangle$, one for each equivalence class of colours, with the multiplication given by that of the monoid $\overline{\mathcal X}^*$. Furthermore, the set $\bigcup\{\mathcal J(\widetilde{\mathfrak A}):\widetilde{\mathfrak A}\in\overline{\mathcal X}^*/\approx\}$ generates the ideal $\mathcal J$ in this ring. The discussion in the previous paragraph suggests isolating the information in the evaluation map into different global characteristics, one for each colour class. These maps take values in the corresponding colour class group. We define the \textbf{global characteristic} $\Lambda_{\widetilde{\mathfrak A}}$ corresponding to $\widetilde{\mathfrak A}$ as the function $\overline{\hat{\mathcal P}^d}\rightarrow\mathcal R(\widetilde{\mathfrak A})$ given by $\alpha\mapsto-\sum_{\mathfrak B\in\widetilde{\mathfrak A}}\left(\sum_{P\in\mathfrak B}\kappa_P(\alpha)\right)\delta_{\mathfrak B}$. The following result is an easy corollary of proposition \ref{partitionfurther}. It states that the global characteristics depend only on the $pp$-convex sets and not on their representations as antichains.
\begin{cor}\label{glogeneralconvex} Let $X\in\overline{\mathcal O}$ and $\alpha,\beta\in\overline{\hat{\mathcal P}^d}$ be such that $\bigcup\alpha=\bigcup\beta=X$. Then $\Lambda_{\widetilde{\mathfrak A}}(\alpha)=\Lambda_{\widetilde{\mathfrak A}}(\beta)$ for each $\mathfrak A\in\overline{\mathcal X}^*$. \end{cor} This finishes the technical setup for the general case, when the theory $T$ of the module $M$ does not necessarily satisfy $T=T^{\aleph_0}$. The antichains in discrete form behave as if the theory satisfies $T=T^{\aleph_0}$; the bands allow us to go down (via intersections) so that any finite family can be converted to a compatible family; and the notion of compatibility allows us to do the appropriate local analysis. The local data can be pasted together using the information coded in the colour class groups. Now we give some important definitions and state results from the special case $\mathrm{T=T^{\aleph_0}}$ in a form compatible with the general case. The proofs of these results are omitted since they are similar to their special counterparts; the basic ingredients are provided by lemma \ref{NLU}, theorem \ref{t1general}, lemma \ref{localcharmultgeneral} and corollary \ref{glogeneralconvex}. The necessary change is to deal only with antichains which are in discrete form. Since cells are the difference sets of two $pp$-convex sets, we can obtain a compatible family $\{\alpha,\beta\}$ for any $C\in\overline{\mathcal C}$ such that $C=\bigcup\alpha\setminus\bigcup\beta$. \begin{definition} Let $C\in\overline{\mathcal C}$ and $\mathfrak A$ be a colour. We define the global characteristic $\Lambda_{\widetilde{\mathfrak A}}(C):=\Lambda_{\widetilde{\mathfrak A}}(\alpha)-\Lambda_{\widetilde{\mathfrak A}}(\beta)\in\mathcal R(\widetilde{\mathfrak A})$ for any compatible family $\{\alpha,\beta\}$ representing $C$. \end{definition} The following theorem is the analogue of theorem \ref{t2} and uses the inductive version of theorem \ref{t1general} in its proof.
\begin{theorem}\label{t2general} If $\{B_i:1\leq i\leq l\}, \{B_j':1\leq j\leq m\}$ are two finite families of pairwise disjoint blocks such that $\bigsqcup_{i=1}^l B_i=\bigsqcup_{j=1}^m B_j'$, then $\sum_{i=1}^l\Lambda_{\widetilde{\mathfrak A}} (B_i)=\sum_{j=1}^m\Lambda_{\widetilde{\mathfrak A}}(B_j')$ for every $\mathfrak A\in\overline{\mathcal X}^*$. \end{theorem} This theorem allows us to extend the definition of global characteristics to all sets in $\overline{\mathrm{Def}}(M)$. Moreover the following theorem, the proof of which is an easy adaptation of that of theorem \ref{t3}, states that each of them is preserved under definable bijections. \begin{theorem}\label{t3general} Suppose $D\in \mathrm{Def}(M^n)$ and $f:D\rightarrow M^n$ is a definable injection. Then $\Lambda_{\widetilde{\mathfrak A}}(D)=\Lambda_{\widetilde{\mathfrak A}}(f(D))$ for each colour class $\widetilde{\mathfrak A}$. \end{theorem} Let $ev:\overline{\mathrm{Def}}(M)\rightarrow\mathbb Z[\overline{\mathcal X}^*]/\mathcal J$ be the map defined by $D\mapsto \sum\{\Lambda_{\widetilde{\mathfrak A}}(D):\widetilde{\mathfrak A}\in\overline{\mathcal X}^*/\approx\}$. This map is well defined since the sum is finite for every $D$ for reasons similar to those for the special case. Furthermore $ev_{D_1}=ev_{D_2}$ whenever $D_1$ and $D_2$ are definably isomorphic since $\Lambda_{\widetilde{\mathfrak A}}(D_1)=\Lambda_{\widetilde{\mathfrak A}}(D_2)$ for each colour class $\widetilde{\mathfrak A}$. In fact $ev$ is a semiring homomorphism. The proof of the following theorem is analogous to that of theorem \ref{t5}. \begin{theorem}\label{t5general} The map $ev:\widetilde{\mathrm{Def}}(M)\rightarrow\mathbb Z[\overline{\mathcal X}^*]/\mathcal J$ defined by $[D]\mapsto ev_{[D]}$ is a semiring homomorphism. \end{theorem} The final step in the proof of theorem \ref{FINALgeneral} is the following analogue of lemma \ref{INJEV}.
\begin{lemma}\label{INJEVgeneral} The map $\widetilde{ev}:\widetilde{\mathrm{Def}}(M)\rightarrow\mathbb Z[\overline{\mathcal X}^*]/\mathcal J$ is injective. \end{lemma} \begin{proof} The proof of this lemma needs some modification of the first paragraph of the proof of lemma \ref{INJEV} in order to incorporate the invariants ideal. Let $\mathcal U:= \{\alpha\in\overline{\mathcal A^d}: A_1\cap A_2=\emptyset$ for all distinct $A_1,A_2\in\alpha\}$. If $ev_{[\alpha]}=ev_{[\beta]}$ for some $\alpha,\beta\in\mathcal U$, then we can obtain two antichains $\alpha'\in[\alpha]\cap\mathcal U,\beta'\in[\beta]\cap\mathcal U$ such that $\bigcup\alpha=\bigcup\alpha',\bigcup\beta=\bigcup\beta'$ and $\{\alpha',\beta'\}$ is compatible. Hence we have $\Lambda_{\widetilde{\mathfrak A}}(\alpha)=\Lambda_{\widetilde{\mathfrak A}}(\alpha')$, $\Lambda_{\widetilde{\mathfrak A}}(\beta)=\Lambda_{\widetilde{\mathfrak A}}(\beta')$ for each colour class $\widetilde{\mathfrak A}$. Observe that the equalities, if considered in the codomain ring, are modulo the invariants ideal. Now $\Lambda_{\widetilde{\mathfrak A}}(\alpha')=|\alpha'\cap(\bigcup\widetilde{\mathfrak A})|\delta_{[[\mathbf{P}(\alpha')]]}$, where $\mathbf{P}$ is the unique band (if it exists) such that $\mathbf{P}\cap\alpha'\cap(\bigcup\widetilde{\mathfrak A})\neq\emptyset$. Since $\mathbf{P}(\alpha')=\mathbf{P}(\beta')$ for each such colour class by the definition of compatibility, we get $|\alpha'\cap(\bigcup\widetilde{\mathfrak A})|=|\beta'\cap(\bigcup\widetilde{\mathfrak A})|$ for each colour class $\widetilde{\mathfrak A}$. A definable isomorphism can be easily constructed between the $pp$-convex sets represented by $\alpha'$ and $\beta'$, which are the sets represented by $\alpha$ and $\beta$ respectively. The rest of the proof is similar to the proof of lemma \ref{INJEV}. \end{proof} \begin{proof} (Theorem \ref{FINALgeneral}) We have shown that the map $\widetilde{ev}$ is injective in the previous lemma.
Then we observe that the sets of the form $\bigcup\alpha$ for some $\alpha\in\mathcal U$ are capable of producing every element of the quotient ring $\mathbb Z[\overline{\mathcal X}^*]/\mathcal J$ of the form $\sum n_{\mathfrak A}\delta_{\mathfrak A}+\mathcal J$, where the nonzero coefficients are positive. This completes the proof by an argument similar to the proof of theorem \ref{FINAL}. \end{proof} Since the Grothendieck ring is presented as a quotient ring, it is not immediately clear whether it is nontrivial. The following corollary of theorem \ref{FINALgeneral} shows that it is, proving Prest's conjecture in full generality. \begin{cor}\label{MAINRESULTgeneral} If $M$ is a nonzero right $\mathcal R$-module, then there is a split embedding $\mathbb Z\rightarrowtail K_0(M)$. \end{cor} \begin{proof} Consider the colour class $\widetilde{\mathfrak U}$, where $\mathfrak U$ is the identity element of the monoid $\overline{\mathcal X}^*$. A $pp$-set $P$ is an element of $\bigcup\widetilde{\mathfrak U}$ if and only if $P$ is finite. Finite sets enjoy the special property that two finite sets are isomorphic to each other if and only if their cardinalities are equal. Furthermore, every such isomorphism is definable. In particular, $\mathcal R(\widetilde{\mathfrak U})\cong\mathbb Z$ if $M$ is a nonzero module. Next we observe that the set $\bigcup\widetilde{\mathfrak U}$ is closed under multiplication and hence the colour class group $\mathcal R(\widetilde{\mathfrak U})$ can be given the structure of a quotient of the monoid ring $\mathbb Z[\bigcup\widetilde{\mathfrak U}]$ with certain relations, where the multiplicative relations of the monoid ring are finitary and hence already present in the relations for $\mathcal R(\widetilde{\mathfrak U})$. We have thus described the ring structure of $\mathcal R(\widetilde{\mathfrak U})$ and this ring is naturally a subring of $K_0(M)$.
To complete the proof, we show that the map $\pi_0:K_0(M)\rightarrow\mathcal R(\widetilde{\mathfrak U})$ given by $\sum_{\widetilde{\mathfrak A}\in(\overline{\mathcal X}^*/\approx)} n_{\widetilde{\mathfrak A}}\delta_{\widetilde{\mathfrak A}}\mapsto n_{\widetilde{\mathfrak U}}\delta_{\widetilde{\mathfrak U}}$ is a surjective ring homomorphism. The map $\pi_0$ is clearly an additive group homomorphism. Note that the multiplicative monoid $\bigcup\widetilde{\mathfrak U}$ is a sub-monoid of $\overline{\mathcal X}^*$. Also note that $\mathcal J(\widetilde{\mathfrak A})\cap\mathcal J(\widetilde{\mathfrak B})=\{0\}$ if $\widetilde{\mathfrak A}\neq\widetilde{\mathfrak B}$. Furthermore, $\mathfrak A\star\mathfrak B\in\widetilde{\mathfrak U}$ if and only if $\mathfrak A,\mathfrak B\in\widetilde{\mathfrak U}$. Thus the coefficient of $\delta_{\widetilde{\mathfrak U}}$ in the product of two elements of $K_0(M)$ is determined by the coefficients of $\delta_{\widetilde{\mathfrak U}}$ in the individual elements. Hence $\pi_0$ is also multiplicative. The surjectivity is clear. This completes the proof. \end{proof} Now we can give a proof that the Grothendieck ring of a module is an invariant of its theory. \begin{proof} (Proposition \ref{eleequivmod}) Elementarily equivalent modules have isomorphic lattices of $pp$-sets and they also satisfy the same invariant conditions (see \cite[Corollary\,2.18]{PreBk}). Hence theorem \ref{FINALgeneral} yields the result. \end{proof} \section{Applications}\label{appl} \subsection{Pure embeddings and Grothendieck rings}\label{pure} We will investigate some categorical properties of Grothendieck rings of modules in this section. The main aim is to prove the following theorem. \begin{theorem}\label{puresurj} Let $i:N\rightarrow M$ be a pure embedding of right $\mathcal R$-modules such that the theory of $M$ satisfies $Th(M)=Th(M)^{\aleph_0}$. Then $i$ induces a surjective ring homomorphism $I:K_0(M)\twoheadrightarrow K_0(N)$.
\end{theorem} This theorem will be proved using a series of results of functorial nature. We begin with the definition of a pure embedding. \begin{definition} Let $M$ be a right $\mathcal R$-module. A submodule $N\leq M$ is called a \textbf{pure submodule} if, for each $n$, $A\cap N^n\in\mathcal L_n^\circ(N)$ for every $A\in\mathcal L_n^\circ(M)$.\\ A monomorphism $i:N\rightarrow M$ is said to be a \textbf{pure monomorphism} if $iN$ is a pure submodule of $M$. \end{definition} The following lemma states that a pure embedding induces a map of lattices of $pp$-formulas. \begin{lemma}\label{purelat}(see \cite[Lemma\,3.2.2]{PrePSL}) If $i:N\rightarrow M$ is a pure embedding then, for each $n$, the natural map $\overline{i}:\mathcal L_n^\circ(M)\rightarrow\mathcal L_n^\circ(N)$ given by $\overline{i}(A)=A\cap N^n$ is a surjection of lattices. \end{lemma} Now we state the following result about integral monoid rings. \begin{pro}\label{monringfunc}(see \cite[II,\,Proposition\,3.1]{Lang}) Let $\Phi:A\rightarrow B$ be a homomorphism of monoids. Then there exists a unique homomorphism $h:\mathbb Z[A]\rightarrow\mathbb Z[B]$ such that $h(x)=\Phi(x)$ for all $x\in A$ and $h(1)=1$. Furthermore, $h$ is surjective if $\Phi$ is so. \end{pro} \begin{cor} A pure embedding $i:N\rightarrow M$ induces a surjective homomorphism $\mathfrak i:\mathbb Z[\overline{\mathcal X}^*(M)]\twoheadrightarrow\mathbb Z[\overline{\mathcal X}^*(N)]$ of rings. \end{cor} \begin{proof} Observe that every colour $\mathfrak A\in\overline{\mathcal X}^*$ has a representative in $\overline{\mathcal L}^\circ:=\bigcup_{n=1}^\infty\mathcal L_n^\circ$. Thus we get an induced surjective homomorphism $\overline{\mathcal X}^*(M)\twoheadrightarrow\overline{\mathcal X}^*(N)$ of the colour monoids using lemma \ref{purelat}. Then proposition \ref{monringfunc} yields the required surjective map of the integral monoid rings. 
\end{proof} \begin{proof} (Theorem \ref{puresurj}) Observe that since $Th(M)=Th(M)^{\aleph_0}$ holds, theorem \ref{FINAL} gives $K_0(M)\cong\mathbb Z[\overline{\mathcal X}^*(M)]$. By theorem \ref{FINALgeneral}, we have $K_0(N)\cong\mathbb Z[\overline{\mathcal X}^*(N)]/\mathcal J(N)$. Let $\pi:\mathbb Z[\overline{\mathcal X}^*(N)]\twoheadrightarrow K_0(N)$ denote the natural quotient map. Take $I=\pi\circ\mathfrak i$, where $\mathfrak i$ is the map from the previous corollary, to finish the proof. \end{proof} At the end of the next section we will see an example showing that theorem \ref{puresurj} fails if $Th(M)\neq Th(M)^{\aleph_0}$. Recall that the notation $M^{(\aleph_0)}$ denotes the direct sum of countably many copies of a module $M$. It follows immediately from \cite[Lemma\,2.23(c)]{PreBk} that the lattices $\mathcal L_1(M)$ and $\mathcal L_1(M^{(\aleph_0)})$ are isomorphic and $T:=Th(M^{(\aleph_0)})$ satisfies $T=T^{\aleph_0}$. We summarize these observations in the following corollary of theorem \ref{puresurj}. \begin{cor} Let $i_n:M\rightarrow M^{(\aleph_0)}$ denote the natural embedding of $M$ onto the $n^{th}$ component of $M^{(\aleph_0)}$. Then $i_n$ induces the natural quotient map $K_0(M^{(\aleph_0)})=\mathbb Z[\overline{\mathcal X}^*(M)]\twoheadrightarrow\mathbb Z[\overline{\mathcal X}^*(M)]/\mathcal J(M)=K_0(M)$. \end{cor} For a ring $\mathcal R$, let $\rm Mod\mbox{-}\mathcal R$ denote the category of right $\mathcal R$-modules. The theory $Th({\rm Mod\mbox{-}\mathcal R})$ is not a complete theory. But we may take a canonical complete theory extending it as follows. Recall that Grothendieck rings of elementarily equivalent modules are isomorphic by proposition \ref{eleequivmod}. Equivalently, $K_0(M)$ is determined by $Th(M)$ which, in turn, is determined by its invariants conditions (theorem \ref{FINALgeneral}). \begin{definition} Let $P$ be a direct sum of one model of each complete theory of right $\mathcal R$-modules.
Then $T^*=Th(P)$ is referred to as \textbf{the largest complete theory of right $\mathcal R$-modules}. \end{definition} Thus every right $\mathcal R$-module is elementarily equivalent to a direct summand of some model of $Th(P)$. Now we note the following result without proof and define the Grothendieck ring of the module category. \begin{deflem}(see \cite[6.1.1,\,6.1.2]{Perera})\label{GrRngModCat} Let $T^*$ denote the largest complete theory of right $\mathcal R$-modules. Then $T^*=(T^*)^{\aleph_0}$. Furthermore if $P_1$ and $P_2$ are both direct sums of one model of each complete theory of right $\mathcal R$-modules, then $K_0(P_1)\cong K_0(P_2)$. We define the \textbf{Grothendieck ring of the module category}, denoted $K_0({\rm Mod\mbox{-}\mathcal R})$, to be the Grothendieck ring of the largest complete theory of right $\mathcal R$-modules. \end{deflem} As a consequence of theorem \ref{puresurj}, we state a result connecting Grothendieck rings of individual modules with that of the module category. \begin{cor} Let $M$ be a right $\mathcal R$-module. Then $K_0(M)$ is a quotient of $K_0({\rm Mod\mbox{-}\mathcal R})$. \end{cor} \begin{proof} Let $T^*$ be the largest complete theory of right $\mathcal R$-modules. Then lemma \ref{GrRngModCat} gives that, for any $P\models T^*$, $Th(P)=T^*$ satisfies $T^*=(T^*)^{\aleph_0}$ and we also have $K_0(P)\cong K_0({\rm Mod\mbox{-}\mathcal R})$. By the definition of $T^*$, there is a module $M'$ elementarily equivalent to $M$ such that $M'$ is a direct summand of $P$. Since the embedding $M'\rightarrowtail P$ is pure, we get a surjective homomorphism $K_0(P)\twoheadrightarrow K_0(M')$. Thus the required quotient map is the composite $K_0(\mathrm{Mod\mbox{-}\mathcal R})\cong K_0(P)\twoheadrightarrow K_0(M')\cong K_0(M)$, where the last isomorphism is obtained from proposition \ref{eleequivmod}. 
\end{proof} \subsection{Torsion in Grothendieck rings}\label{tors} As an application of the structure theorem for Grothendieck rings, theorem \ref{FINALgeneral}, we provide an example of a module whose Grothendieck ring contains a nonzero torsion element (i.e. a nonzero element $a$ such that $na=0$ for some $n\geq 1$). We also calculate the Grothendieck ring $K_0(\mathbb Z_\mathbb Z)$. \begin{definition} The \textbf{ring of $p$-adic integers}, denoted $\mathbb Z_p$, is the inverse limit of the system $\hdots\twoheadrightarrow\mathbb Z/p^n\mathbb Z\twoheadrightarrow\hdots\twoheadrightarrow\mathbb Z/p^2\mathbb Z\twoheadrightarrow\mathbb Z/p\mathbb Z\twoheadrightarrow 0$. \end{definition} The ring $\mathbb Z_p$ is a commutative local PID with the ideal structure given by \begin{equation*} \mathbb Z_p\supsetneq p\mathbb Z_p\supsetneq\hdots\supsetneq p^n\mathbb Z_p\supsetneq\hdots\supsetneq 0. \end{equation*} In particular, $\mathbb Z_p$ is a commutative noetherian ring and hence satisfies the hypothesis of the following proposition. \begin{pro}(see \cite[p.19,\,Ex.\,2(ii)]{PreBk}) If $\mathcal R$ is a commutative noetherian ring then the $pp$-definable subgroups of the module $\mathcal R_\mathcal R$ are precisely the finitely generated ideals of $\mathcal R$. \end{pro} It can be observed that the maps $t_n:\mathbb Z_p\rightarrow p^n\mathbb Z_p$ which are `multiplication by $p^n$' are $pp$-definable isomorphisms for each $n\geq 1$. Thus a simple computation shows that the monoid of colours, $\overline{\mathcal X}^*(\mathbb Z_p)$, is isomorphic to the monoid $\mathbb N$. If $X$ denotes the class of $\mathbb Z_p$ in $K_0(\mathbb Z_p)$, then the invariants ideal $\mathcal J(\mathbb Z_p)$ is generated by the relations $\{X=p^nX:n\geq 1\}$. The relation $(p^n-1)X=0$ is an integral multiple of the relation $(p-1)X=0$ for each $n\geq 1$. Thus $\mathcal J(\mathbb Z_p)$ is principal and generated by the single relation $(p-1)X=0$.
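The reduction of the relations $(p^n-1)X=0$ to the single relation $(p-1)X=0$ rests on the factorization $p^n-1=(p-1)(p^{n-1}+\cdots+p+1)$, which can be spot-checked directly (the function name is ours):

```python
def relation_coefficient(p, n):
    """Coefficient of X in the relation X = p^n X, i.e. in (p^n - 1)X = 0."""
    return p ** n - 1

# (p^n - 1) = (p - 1)(p^{n-1} + ... + p + 1), so every generating relation
# of J(Z_p) is an integer multiple of (p - 1)X = 0.
for p in (2, 3, 5, 7):
    for n in range(1, 8):
        q, r = divmod(relation_coefficient(p, n), p - 1)
        assert r == 0 and q == sum(p ** i for i in range(n))
```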
We summarize this discussion as the following corollary to theorem \ref{FINALgeneral}. \begin{cor}\label{p-adic} Let $\mathbb Z_p$ denote the ring of $p$-adic integers. Then \begin{center}$K_0(\mathbb Z_p)\cong\mathbb Z[X]/\langle(p-1)X\rangle$.\end{center} \end{cor} Consider the split (hence pure) embedding $i:\mathbb Z_p^{(2)}\rightarrowtail\mathbb Z_p^{(3)}$ of $\mathbb Z_p$-modules given by $(a,b)\mapsto(a,b,0)$, where $M^{(k)}$ denotes the direct sum of $k$ copies of $M$. We want to show that this embedding witnesses that the conclusion of theorem \ref{puresurj} can fail when its hypothesis does not hold: the theory $T:=Th(\mathbb Z_p^{(3)})$ of the target module does not satisfy the condition $T=T^{\aleph_0}$. The following proposition is helpful for the calculation of Grothendieck rings. \begin{pro}(see \cite[Lemma\,2.23]{PreBk}) If $\phi(x)$ and $\psi(x)$ denote $pp$-formulas, then \begin{enumerate} \item $\phi(M\oplus N)=\phi(M)\oplus\phi(N)$, \item $\mathrm{Inv}(M\oplus N;\phi,\psi)=\mathrm{Inv}(M;\phi,\psi)\mathrm{Inv}(N;\phi,\psi)$. \end{enumerate} \end{pro} It is clear that the induced map $\mathfrak{i}:\mathbb Z[\overline{\mathcal X}^*(\mathbb Z_p^{(3)})]\rightarrow\mathbb Z[\overline{\mathcal X}^*(\mathbb Z_p^{(2)})]$ is the identity map on $\mathbb Z[X]$ since $\mathbb Z[\overline{\mathcal X}^*(\mathbb Z_p^{(k)})]\cong K_0(\mathbb Z_p^{(\aleph_0)})\cong\mathbb Z[X]$ for any $k\geq 1$. Furthermore, the previous proposition shows that $\mathcal J(\mathbb Z_p^{(k)})=\langle(p^k-1)X\rangle$ for any $k\geq 1$. Since $\mathcal J(\mathbb Z_p^{(3)})\nsubseteq\mathcal J(\mathbb Z_p^{(2)})$, there is no surjective map $K_0(\mathbb Z_p^{(3)})\twoheadrightarrow K_0(\mathbb Z_p^{(2)})$. \textbf{The abelian group of integers}: Since the ring $\mathbb Z$ is a commutative PID, the $pp$-definable subgroups of the module $\mathbb Z_\mathbb Z$ are precisely the ideals $n\mathbb Z$ for $n\geq 0$. Thus the monoid $\overline{\mathcal X}^*(\mathbb Z)$ is isomorphic to $\mathbb N$. 
Furthermore, if $X$ denotes the class of $\mathbb Z$ in $K_0(\mathbb Z)$, the invariants ideal is generated by the relations $X=nX$ for each $n\geq 1$. This forces $\mathcal J(\mathbb Z)=\langle X\rangle$ and thus $K_0(\mathbb Z_\mathbb Z)\cong\mathbb Z$. \subsection{Representing definable sets uniquely}\label{CDT} We fix some $\mathcal R$-module $M$ whose theory $T$ satisfies the condition $T=T^{\aleph_0}$ and some $n\geq 1$. As usual we drop all the subscripts $n$ and write $\mathcal L\setminus\{\emptyset\},\mathcal A\setminus\{\emptyset\},\hdots$ as $\mathcal L^*,\mathcal A^*,\hdots$ respectively. The $pp$-elimination theorem for the model theory of modules (theorem \ref{PPET}) states that every definable set can be written as a finite disjoint union of blocks. But this representation is far from being unique in any sense. On the other hand, we have unique representations for $pp$-convex sets (proposition \ref{UNIQUE1}) and cells (lemma \ref{UNIQUE2}). We exploit these ideas to achieve a unique representation for every definable set: an expression as a disjoint union of cells. This result will be called the `cell decomposition theorem'. We begin by defining some terms useful to describe the cell decomposition theorem. \begin{definition} Let $\mathcal F=\{C_j\}_{j=1}^l\subseteq\mathcal C$ be a family of pairwise disjoint cells. If there is a permutation $\sigma$ of $[l]$ such that $P(C_{\sigma(j+1)})\prec N(C_{\sigma(j)})$ for $1\leq j\leq l-1$, then we say that the family $\mathcal F$ is a \textbf{tower of cells}. We call the number $l$ the \textbf{height} of the tower. We denote the set of all finite towers of cells by $\mathcal T$. We define a function $\zeta:\mathcal T\rightarrow \mathbb N$ which assigns to each tower its height. \end{definition} \begin{definition} Let $\alpha_i\in\mathcal A^*$ for $1\leq i\leq k$. If $\alpha_{i+1}\prec\alpha_{i}$ for each $1\leq i\leq k-1$, we say that $\overline\alpha=\{\alpha_i\}_{i=1}^k$ is a $\prec$-\textbf{chain}. 
We denote the set of all finite $\prec$-chains in $\mathcal A^*$ by $\mathcal W$. We define a function $\omega:\mathcal W\rightarrow \mathbb N$, which assigns \textbf{height} to each $\prec$-chain, by $\omega(\overline\alpha)=\left\lceil\frac{|\overline\alpha|}{2}\right\rceil$, where $\lceil q\rceil$ is the smallest integer larger than or equal to $q$. \end{definition} The following proposition states that towers and chains are two different ways of expressing the same kind of object. \begin{pro} There is a bijection $\Phi:\mathcal T\rightarrow\mathcal W$ preserving height, i.e., $\omega(\Phi(\mathcal F))=\zeta(\mathcal F)$ for every $\mathcal F\in\mathcal T$. \end{pro} \begin{proof} Let $\mathcal F=\{C_j\}_{j=1}^l$ be a tower of cells with height $l$. Without loss, we may assume that $P(C_{j+1})\prec N(C_{j})$ for $1\leq j\leq l-1$, i.e., the associated permutation is the identity. We first define a non-negative integer $k$ as follows. \begin{math} k=\begin{cases} 0, & \mbox{if } l=0, \\ 2l-1, &\mbox{if } l>0\mbox{ and } N(C_l)=\emptyset,\\ 2l, &\mbox{if } l>0\mbox{ and } N(C_l)\neq\emptyset. \end{cases} \end{math} For each $1\leq i\leq k$, we define an antichain $\alpha_i\in\mathcal A^*$ as follows. \begin{math} \alpha_i=\begin{cases} P(C_j), & \mbox{if } i=2j-1,\\ N(C_j), &\mbox{if } i=2j. \end{cases} \end{math} Then $\overline\alpha=\{\alpha_i\}_{i=1}^k$ is clearly a $\prec$-chain and the map $\Phi(\mathcal F):=\overline{\alpha}$ can be easily checked to be injective. To prove surjectivity, let $\overline\beta\in\mathcal W$. We modify $\overline\beta$ to obtain a $\prec$-chain $\overline\beta'=\{\beta_i'\}_{i=1}^{2\omega(\overline\beta)}$ in $\mathcal A$ as follows. \begin{math} \beta_i'=\begin{cases} \beta_i, & \mbox{if } 1\leq i\leq |\overline\beta|,\\ \emptyset, &\mbox{if } |\overline\beta|\neq 2\omega(\overline\beta)\mbox{ and } i=2\omega(\overline\beta). \end{cases} \end{math} Then $|\overline\beta'|$ is an even integer. 
We define $C'_j=\bigcup\beta'_{2j-1}\setminus\bigcup\beta'_{2j}$ for $1\leq j\leq |\overline\beta'|/2$. The family $\mathcal F':=\{C'_j\}_{j=1}^{|\overline\beta'|/2}$ clearly satisfies $\Phi(\mathcal F')=\overline{\beta}$. The height preservation property is easy to check from the explicit constructions above. \end{proof} \begin{pro} Let $\{A_i\}_{i=1}^m\in\mathcal P$ and $B\in\mathcal B$ be such that $B\subseteq\bigcup_{i=1}^m A_i$. Then $\overline B\subseteq\bigcup_{i=1}^m A_i$. \end{pro} \begin{proof} We have $\overline B=B\cup\bigcup N(B)$. Hence $\overline B\subseteq\bigcup_{i=1}^m A_i\cup\bigcup N(B)$. By \ref{NLU}, $\overline B\subseteq A_i$ for some $i$, or $\overline B\subseteq D$ for some $D\in N(B)$. The latter case is not possible since $N(B)\prec P(B)=\{\overline B\}$. Hence the result. \end{proof} \begin{lemma}\label{CHAINANTITOWER} Let $D\in \mathrm{Def}(M^n)$. Then there is a unique $pp$-convex set $\overline D$ which satisfies $D\subseteq\bigcup\alpha\ \Rightarrow\ \overline D\subseteq\bigcup\alpha$ for every $\alpha\in\mathcal A$. \end{lemma} \begin{proof} Let $D=\bigsqcup_{i=1}^m B_i=\bigsqcup_{j=1}^l B'_j$ be any two representations of $D$ as disjoint unions of blocks. \textbf{Claim}: $\bigcup_{i=1}^m \overline{B_i}=\bigcup_{j=1}^l \overline{B'_j}$. Proof of the claim: We have $B_i\subseteq\bigsqcup_{i=1}^m B_i=\bigsqcup_{j=1}^l B'_j\subseteq\bigcup_{j=1}^l \overline{B'_j}$ for each $i$. Hence $\overline{B_i}\subseteq\bigcup_{j=1}^l \overline{B'_j}$ by the previous proposition. Therefore $\bigcup_{i=1}^m \overline{B_i}\subseteq\bigcup_{j=1}^l \overline{B'_j}$. The reverse containment is by symmetry and hence the claim. Now we define $\overline D=\bigcup_{i=1}^m \overline{B_i}$. By the claim, this $pp$-convex set is uniquely defined. Let $\alpha\in\mathcal A$ be such that $D\subseteq\bigcup\alpha$. But $D=\bigsqcup_{i=1}^m B_i$. Hence $B_i\subseteq\bigcup\alpha$ for each $i$. 
By arguments similar to the proof of the claim, we get $\bigcup_{i=1}^m \overline{B_i}\subseteq\bigcup\alpha$ i.e., $\overline D\subseteq\bigcup\alpha$. \end{proof} The assignment $D\mapsto\overline{D}$, where $\overline{D}$ is the $pp$-convex set obtained from the lemma, defines a closure operator $\mathrm{Def}(M^n)\rightarrow\mathcal A_n$. This closure operation is extremely useful in proving the cell decomposition theorem. \begin{theorem}\label{CDT1} \textbf{Cell Decomposition Theorem}: There is a bijection between the set $\mathrm{Def}(M^n)$ of all definable subsets of $M^n$ and the set $\mathcal T$ of towers of cells. \end{theorem} \begin{proof} Let $D\in \mathrm{Def}(M^n)$. We construct a tower $\mathcal F$ of cells by defining a nested sequence $\{D_j\}_{j\geq 0}$ of definable subsets of $D$ as follows. We set $D_0:=D$ and, for each $j>0$, we set $D_j:=D_{j-1}\setminus C_j$, where $C_j:=\overline{D_{j-1}}\setminus (\overline{\overline{D_{j-1}}\setminus D_{j-1}})$ is a cell. We stop this process when we obtain $D_j=\emptyset$ for the first time. This process must terminate because the elements of the antichains involved in this process belong to some finite nest containing a fixed decomposition of $D$ into blocks. In the converse direction, we assign $\bigcup\mathcal F\in \mathrm{Def}(M^n)$ to $\mathcal F\in\mathcal T$. It is easy to verify that the two assignments defined above are actually inverses of each other. \end{proof} The following corollary combines theorem \ref{CDT1} with lemma \ref{CHAINANTITOWER} and gives a combinatorial representation theorem for $\mathrm{Def}(M^n)$, which roughly states that every definable subset of $M^n$ can be represented uniquely as a finite $\prec$-chain in the free distributive lattice $\mathcal A$ over the meet semilattice $\mathcal L$. \begin{cor} There is a bijection between the set $\mathcal W_n$ of finite chains in $\mathcal A_n^*$ and $\mathrm{Def}(M^n)$. 
\end{cor} \subsection{Connectedness}\label{C} We fix a right $\mathcal R$-module $M$ satisfying $Th(M)=Th(M)^{\aleph_0}$ and some $n\geq 1$. We drop all the subscripts $n$ as usual. Recall that every global characteristic of a definable set is preserved under definable isomorphisms (theorem \ref{t4}). In this section we describe what we mean by the statement that a definable subset of a (finite power of a) module is connected. The property of being connected is not preserved under definable isomorphisms. We prove a (topological) property of connected sets which states that a definable connected set $A$ contained in another definable set $B$ is in fact contained in a connected component of $B$. Let $\mathcal{F},\mathcal{F}'\subseteq\mathcal{B}$ be two finite families of disjoint blocks such that $\bigcup\mathcal{F}=\bigcup\mathcal{F}'$. Then we say that $\mathcal{F}'$ is a \textbf{refinement} of $\mathcal{F}$ if for each $F'\in\mathcal{F}'$, there is a unique $F\in\mathcal{F}$ such that $F'\subseteq F$. Recall from \ref{CH1} that if $\bigcup\mathcal{F}\in\mathcal{B}$ and if $\mathcal{D}$ is the corresponding nest, then $\{\mathrm{Core}_\mathcal{D}(D)\}_{D\in\mathcal{D}^+}$ is a refinement of $\mathcal{F}$, where $\mathcal{D}^+$ is the set $\delta_{\mathcal{D}}^{-1}\{1\}$. We use this property of nests to attach a digraph to each of them. \begin{definition} Let $\mathcal D$ be a nest corresponding to a fixed finite family of pairwise disjoint blocks. We define a \textbf{digraph structure} $\mathcal{H}(\mathcal{D}^+)$ on the set $\mathcal{D}^+$. The pair $(F_1, F_2)$ of elements of $\mathcal{D}^+$ constitutes an arrow in the digraph if $F_1\subsetneq F_2$ and, for every $F\in\mathcal{D}^+$, the containments $F_1\subseteq F\subseteq F_2$ imply $F=F_1$ or $F=F_2$. 
\end{definition} If $\bigcup_{F\in\mathcal{D}^+}\mathrm{Core}_\mathcal{D}(F)\in\mathcal{B}$, then $\mathcal{D}^+$ is an upper set and in particular $\mathcal{H}(\mathcal{D}^+)$ is \textbf{weakly connected}, i.e., its underlying undirected graph is connected. It seems natural to use this property to define the connectedness of a definable set. \begin{definition}\label{conn} Let $D\in\mathrm{Def}(M^n)$ be represented as $D=\bigcup\mathcal F$, where $\mathcal F\subseteq\mathcal B$ is a finite family of pairwise disjoint blocks, and let $\mathcal{D}$ denote the nest corresponding to $\mathcal F$. We say that $D$ is \textbf{connected} if and only if the digraph $\mathcal{H}(\mathcal{D}'^+)$ is weakly connected for some nest $\mathcal{D}'$ containing $\mathcal{D}$. \end{definition} Note the existential clause in this definition. Let $\mathcal F,\mathcal F'$ be two finite families of pairwise disjoint blocks with $\bigcup\mathcal F=\bigcup\mathcal F'$ and let $\mathcal D,\mathcal D'$ denote the nests corresponding to them. If $\mathcal F'$ refines $\mathcal F$, then the number of weakly connected components of $\mathcal H(\mathcal D'^+)$ is at most the number of weakly connected components of $\mathcal H(\mathcal D^+)$. This observation allows us to define the following invariant. \begin{definition} We define the \textbf{number of connected components} of $D$, denoted $\lambda(D)$, for every nonempty definable set $D$ to be the least number of weakly connected components of $\mathcal H(\mathcal D^+)$, where $\mathcal D$ varies over nests refining a fixed partition of $D$ into disjoint blocks. We set $\lambda(\emptyset)=0$. \end{definition} In the discussion on connectedness, we have treated blocks as if they are the basic connected sets. Note that a definable set $D$ is connected if and only if $\lambda(D)=1$. We denote the set of all connected definable subsets of $M^n$ by $\mathbf{Con}_n$. We tend to drop the suffix $n$ if it is clear from the context. 
We have $\mathbf B_n\subseteq\mathbf{Con}_n$ as expected. \begin{illust} Consider the vector space $\mathbb R_\mathbb R$. The $pp$-definable subsets of the plane, $\mathbb R^2$, are points, lines, and the plane itself. Note that if a definable subset of $\mathbb R^2$ is topologically connected, then it is connected according to definition \ref{conn}. But the converse is not true. The set $B=\{(x,0):x\neq 0\}$ is not topologically connected, but $B\in\mathbf{Con}$ since $B$ is a block. If $D$ denotes the union of two coordinate axes with the origin removed, then the number of topologically connected components of $D$ is $4$, whereas $\lambda(D)=2$. \end{illust} \begin{rmk} If $X$ is a `nice' topological space (e.g., a manifold), then the rank $\beta_0$ of the homology group $H_0(X)$ is the number of (path) connected components of $X$. To note the analogy, consider $P\in \mathcal L_n$ and $\alpha\in\mathcal L_P$. If $\alpha\neq\emptyset$, then $\beta_0(\mathcal K^P(\alpha))=\lambda(\bigcup\alpha\setminus P)$. Note that the `deleted neighbourhood' of $P$ in $\alpha$, i.e., the set $\bigcup\alpha\setminus P$, occurs in this correspondence since the `non-deleted neighbourhood' $\bigcup\alpha$ is connected. \end{rmk} Topologically connected sets satisfy the following property. If a connected set $A$ is contained in another set $B$, then $A$ is actually contained in a connected component of $B$. We have a similar result here. \begin{theorem}\label{topconn} Let $A,B_i\in\mathbf{Con}$ for $1\leq i\leq m$ be such that $\lambda(\bigcup_{i=1}^mB_i)=m$. If $A\subseteq\bigcup_{i=1}^mB_i$, then $A\subseteq B_i$ for a unique $i$. \end{theorem} \begin{proof} Let $\mathcal D$ be a nest containing the nests corresponding to some fixed families of blocks partitioning $A$ and all the $B_i$. The restriction of the digraph $\mathcal H(\mathcal D^+)$ to $A$ is a subdigraph of $\mathcal H(\mathcal D^+)$. 
Since the former is weakly connected, it is a subdigraph of exactly one of the $m$ weakly connected components of the latter. \end{proof} \subsection{Remarks and questions}\label{rmkcom} Consider the structure of the proof of the special case of the main theorem. Manipulation of different lattice-like structures is one of the important themes in this paper. The partial quantifier elimination result for theories of modules (theorem \ref{PPET}) makes the meet-semi-lattice $\mathcal L_n$, of $pp$-definable sets, the basic object of study. The lattice of antichains $\mathcal A_n$ is the free distributive lattice on $\mathcal L_n$ and simplicial methods are natural for studying the `set-theoretic geometry' associated with antichains. The local processes in $\mathrm{Def}(M^n)$ are similar to, but independent of, the local processes in $\mathrm{Def}(M^m)$ when $n\neq m$, and these different `dimensions' start to interact with each other only when we are concerned with the multiplicative structure. The fact that the $pp$-sets are closed under projections is not directly relevant to the technique. Note that the model-theoretic condition $T=T^{\aleph_0}$ is equivalent to the lattice-theoretic statement that every element of $\mathcal L_n$ considered as an element of the lattice $\mathcal A_n$ is `join-irreducible'. The unique representation theorem (theorem \ref{CDT1}) relies solely on this fact and in particular this is a statement about lattices of sets. We would like to know if this idea can be expressed in some more abstract setting. The algebraic K-theory functor ${\rm K_0:Ring\rightarrow Ab}$ is covariant, whereas the model-theoretic Grothendieck ring functor $K_0$ is contravariant on pure embeddings (theorem \ref{puresurj}). Note that $K_0(M)$ depends on $\overline{\mathcal L}(M)$ in a covariant way and the assignment $M\mapsto\overline{\mathcal L}(M)$ is contravariant. This strongly suggests that the answer to the following question is positive. 
\begin{que} Is there a way to define the Grothendieck ring for a sequence $(L_n)_{n\geq 0}$ of meet-semi-lattices (with inclusion and projection maps) under certain conditions in a way that is abstractly similar to the technique used in the proof of theorem \ref{FINAL}? \end{que} A more specific question could be asked for model-theoretic Grothendieck rings. \begin{que} Are there any structures admitting some form of quantifier elimination, whose Grothendieck rings can be computed using a similar technique? \end{que} Though there are modules with additive torsion elements in Grothendieck rings (corollary \ref{p-adic}), we believe that there are no examples with non-trivial multiplicative torsion elements (i.e. elements $a\in K_0(M)$ such that $a^n=1$ for some $n>1$). \begin{conj} There are precisely two units (namely $\pm 1$) in the Grothendieck ring $K_0(M)$ of a nonzero module $M$. \end{conj} \end{document}
\begin{document} \RUNTITLE{Risk-Averse Equilibrium for Autonomous Vehicles in Stochastic Congestion Games} \RUNAUTHOR{Yekkehkhany and Nagi} \TITLE{Risk-Averse Equilibrium for Autonomous Vehicles in Stochastic Congestion Games} \ARTICLEAUTHORS{ \AUTHOR{Ali Yekkehkhany} \AFF{Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, \EMAIL{yekkehk2@illinois.edu} } \AUTHOR{Rakesh Nagi} \AFF{Industrial and Enterprise Systems Engineering, University of Illinois at Urbana-Champaign, \EMAIL{nagi@illinois.edu} } } \ABSTRACT{ The fast-growing market of autonomous vehicles, unmanned aerial vehicles, and fleets in general necessitates the design of smart and automatic navigation systems considering the stochastic latency along different paths in the traffic network. The longstanding shortest path problem in a deterministic network, whose counterpart in a congestion game setting is Wardrop equilibrium, has been studied extensively, but it is well known that finding the notion of an optimal path is challenging in a traffic network with stochastic arc delays. In this work, we propose three classes of risk-averse equilibria for an atomic stochastic congestion game in its general form where the arc delay distributions are load dependent and not necessarily independent of each other. The three classes are risk-averse equilibrium (RAE), mean-variance equilibrium (MVE), and conditional value at risk level $\alpha$ equilibrium (CVaR$_\alpha$E) whose notions of risk-averse best responses are based on maximizing the probability of taking the shortest path, minimizing a linear combination of mean and variance of path delay, and minimizing the expected delay at a specified risky quantile of the delay distributions, respectively. We prove that for any finite stochastic atomic congestion game, the risk-averse, mean-variance, and CVaR$_\alpha$ equilibria exist. 
We show that for risk-averse travelers, the Braess paradox may not occur to the extent presented originally since players do not necessarily travel along the shortest path in expectation, but they take the uncertainty of travel time into consideration as well. We show through some examples that the price of anarchy can be improved when players are risk-averse and travel according to one of the three classes of risk-averse equilibria rather than the Wardrop equilibrium. } \KEYWORDS{Stochastic Congestion Games, Autonomous Vehicles, Risk-Aversion, Risk-Averse Equilibrium.} \maketitle \section{Introduction} \label{introduction} Intelligent transportation systems are growing faster than ever with the rapid emergence of autonomous vehicles, unmanned aerial vehicles, Amazon delivery robots, Uber/Lyft self-driving cars, and the like. One of the principal components of such systems is the navigation system, whose goal is to provide travelers with fast and reliable paths from their sources to destinations. In a fleet of vehicles, an equilibrium is achieved when no traveler has an incentive, in a certain sense, to change routes unilaterally. In the classical Wardrop equilibrium \citep{wardrop1952correspondence, wardrop1952road}, travelers have incentives to change routes if they have an alternative route that has lower expected travel time. In other words, the optimality metric is based on minimizing the expected travel time in the Wardrop equilibrium. In the context of transportation though, collisions, weather conditions, road works, traffic signals, and varying traffic conditions can cause deviations in travel times \citep{ordonez2010wardrop}. As a result, the path with the minimum expected travel time may not be reliable due to its high variability. 
Similarly, in the context of telecommunication networks, noise, signal degradation, interference, re-transmission, and malfunctioning equipment can cause variability in transmission time from source to destination \citep{ordonez2010wardrop}. The empirical works by \cite{abdel1995investigating}, \cite{kazimi2000van}, \cite{lam2001value}, and \cite{small1999valuation} also support the fact that taking travel time uncertainty into account is indeed an essential criterion in navigation systems. As mentioned above, minimizing the expected travel time is inadequate in scenarios involving risk due to variability of travel times. In order to address this issue, we study a richer class of congestion games called stochastic congestion games in an atomic setting, where the travel times along different arcs of the network are random variables that are not necessarily independent of each other. In this framework, we introduce probability statements regarding the risk-averse best response of a traveler given the choice of the rest of travelers in the network. We propose three classes of risk-averse equilibria for stochastic congestion games: risk-averse equilibrium (RAE), mean-variance equilibrium (MVE), and conditional value at risk level $\alpha$ equilibrium (CVaR$_\alpha$E), whose notions of risk-averse best responses are based on maximizing the probability of traveling along the shortest path (also known as Risk-Averse Best Action Decision with Incomplete Information (R-ABADI)), minimizing a linear combination of mean and variance of path delay, and minimizing the expected delay at a specified risky quantile of the delay distributions, respectively. We prove that the risk-averse, mean-variance, and CVaR$_\alpha$ equilibria exist for any finite stochastic atomic congestion game. 
Note that two equilibria similar to the mean-variance and CVaR equilibria exist in the literature and are discussed in the related work section, but the probability distributions of travel times are load independent or link delays are considered to be independent in the literature, which is not the case in this article. It is noteworthy that most studies on stochastic congestion games make use of simplifying assumptions such as considering the arc delay distributions to be independent of their loads or adding independent and identically distributed errors to nominal delays of arcs neglecting their differences. In the Braess paradox \citep{braess1968paradoxon, murchland1970braess}, which is known to be a counterintuitive example rather than a paradox, the risk-neutral/selfish travelers select the path with the shortest expected travel time, which increases the social delay/cost incurred by the whole society. Although the focus of this article is not on deriving bounds on price of anarchy, we study the Braess paradox in a stochastic setting under the three proposed risk-averse equilibria and show that the risk-averse behavior of travelers results in improving the social delay/cost incurred by the society; and as a result, the price of anarchy is improved if travelers are risk-averse. Consequently, the Braess paradox may not occur to the extent presented originally if travelers are risk-averse. Furthermore, we study the Pigou network \citep{pigou1920economics} in a stochastic setting and observe that the price of anarchy is also improved if travelers are risk-averse in the senses discussed above. Note that Pigou networks are prevalent in traffic/telecommunication networks. Hence, providing travelers with risk-averse navigation can decrease the social delay/cost in real-world applications. The article is structured in the following way. The related work is discussed in Section \ref{related_work_RAE_Congestion_Games}. 
The stochastic congestion game is formally defined in Section \ref{problem_statement_congestion_game}. The three proposed classes of equilibria, i.e., risk-averse, mean-variance, and CVaR$_\alpha$ equilibria, are presented in Section \ref{section_risk_averse_equilibrium_congestion_game} and their existence in any finite stochastic congestion game is proven; detailed proofs can be found in the Appendix. Numerical results including the study of the Pigou and Braess networks as well as notes for practitioners are provided in Section \ref{numerical_results_congestion_game}. Finally, conclusions and discussion of opportunities for future work are provided in Section \ref{conclusion_future_congestion_game}. \section{Related Work} \label{related_work_RAE_Congestion_Games} In this section, the literature on navigation for both deterministic and stochastic networks is presented first, then the literature on deterministic and stochastic congestion games is discussed in detail. The main focus of the literature review is to motivate the necessity of risk-averse algorithms for navigation and congestion games in a stochastic setting. The problem of finding the shortest path in a transportation/telecommunication traffic network is one of the main parts of in-vehicle navigation systems. This problem has been studied extensively in deterministic networks, resulting in many efficient algorithms, e.g., the algorithms developed by \cite{dijkstra1959note}, and \cite{dreyfus1969appraisal}; also see \citep{schrijver2012history, fu2006heuristic, dial1969algorithm, tarjan1983data, lawler1976combinatorial, pierce1975bibliography, orda1990shortest, kaufman1993fastest, hall1993time, chabini1997new}, and \citep{hosseini2017mobile}. Although the shortest path problem is well understood in deterministic networks, the definition of an optimal path and how to identify such a path is more challenging in the stochastic version of the problem. 
There have been multiple approaches to define the optimal path in stochastic networks as summarized below. The least expected travel time is studied by \cite{loui1983optimal} and is equivalent to the deterministic case from a computational point of view. The path with the least expected time may be sub-optimal for risk-averse travelers due to its high variability and uncertainty; as a result, the probability distributions of link travel times need to be considered explicitly to find the most reliable path. In this manner, \cite{frank1969shortest} proposed the optimal path to be the one that maximizes the probability of realizing a travel time less than a threshold, \cite{sigal1980stochastic} proposed the optimal path to be the one that maximizes the probability of realizing the shortest time, and \cite{chen2005path} proposed the optimal path to be the one with the minimum travel time budget required to meet a travel time reliability constraint. For more variants of the mentioned algorithms, refer to \citep{nie2006arriving, nie2009shortest, zeng2015application, xing2011finding, howard2012dynamic, hall1986fastest, fu1998expected, waller2002online, miller2000least, mirchandani1986routes, mirchandani1976shortest, murthy1996relaxation, fan2003optimal, xiao2013adaptive, bell2009hyperstar, chen2010risk}, and \citep{lo2006degradable}. In the context of route selection in a fleet of vehicles, a game emerges between all travelers where the action of each traveler affects the travel time of the other travelers, which creates a competitive situation forcing travelers to strategize their decisions. In a deterministic network, the mentioned game is formalized by \cite{wardrop1952correspondence}, \cite{neumann1928theorie, von1947theory}, and \cite{nash1950equilibrium}. However, it is not realistic to consider the link delays to be known prior to making a decision due to external factors that make the travel times uncertain. 
In order to put this in perspective, several approaches have been adopted by researchers to capture the stochastic behavior of the traffic networks. For example, \cite{harsanyi1967games, harsanyi1968games} proposed Bayesian games that consider the incomplete information of payoffs, \cite{ordonez2010wardrop} modeled the risk-averse behavior of travelers by padding the expected travel time along paths with a safety margin, \cite{watling2006user} proposed an equilibrium based on the optimality measure of minimizing the probability of being late or maximizing the probability of being on time, \cite{szeto2006risk} associated a cost with the travel time uncertainty based on travelers' risk-averse behavior, \cite{chen2010alpha} proposed an equilibrium based on the optimality measure of minimizing the conditional expectation of travel time beyond a travel time budget, and \cite{bell2002risk} proposed to play out all possible scenarios before making a choice. For more details in the context of traffic networks, we refer readers to \citep{aashtiani1981equilibria, aghassi2006robust, altman2006survey, hayashi2005robust, mirchandani1987generalized, nie2011multi, connors2009network, schmocker2009game, fonzone2012link, angelidakis2013stochastic, nikolova2011stochastic, nikolova2015burden}, and \citep{correa2019network}. \section{Problem Statement} \label{problem_statement_congestion_game} Consider a directed graph (network) $G = (\mathcal{N}, \mathcal{E})$ with a node set $\mathcal{N} = [N] \coloneqq \{1, 2, \dots, N\}$ and directed link (edge) set $\mathcal{E}$ with cardinality $|\mathcal{E}|$, where the pair $(i, j) \in \mathcal{E}$ indicates a directed link from node $i \in \mathcal{N}$ to node $j \in \mathcal{N}$ in the directed graph. 
Denote the set of source-destination (SD) pairs with $\mathcal{K} \subseteq \mathcal{N} \times \mathcal{N}$, where for the SD pair $k = (s_k, d_k) \in \mathcal{K}$, $s_k \neq d_k$, the set of simple directed paths from $s_k$ to $d_k$ in $G$ is denoted by $\mathcal{P}_k$, and let $n_k$ be the number of players (travelers, vehicles, or data packets) associated with source-destination $k$. Let $\mathcal{P} \coloneqq \cup_{k \in \mathcal{K}} \mathcal{P}_k$ be the set of all paths. A feasible assignment $\boldsymbol{m} \coloneqq \{ m^p : p \in \mathcal{P} \}$ allocates a non-negative number of players to every path $p \in \mathcal{P}$ such that $\sum_{p \in \mathcal{P}_k} m^p = n_k$ for all $k \in \mathcal{K}$. As a result, the number of players along link $e \in \mathcal{E}$, denoted by $m_e$, is given by $m_e = \sum_{ \{ p \in \mathcal{P}: e \in p \} } m^p$. The latency (delay or travel time) along link $e$ is load-dependent and is denoted by the non-negative continuous random variable $L_e(m_e)$ with marginal probability density function (pdf) $f_e(x | m_e)$ and mean $l_e(m_e)$. Note that the number of players along an edge is determined by an assignment $\boldsymbol{m}$, so $L_e(\boldsymbol{m})$, $f_e(x | \boldsymbol{m})$, and $l_e(\boldsymbol{m})$ can be used instead of $L_e(m_e)$, $f_e(x | m_e)$, and $l_e(m_e)$, respectively. Furthermore, the latency along links of the graph can be dependent, in which case, the joint pdf of latency over all links is denoted by $f_{e_1, e_2, \dots, e_{|\mathcal{E}|}}(x_1, x_2, \dots, x_{|\mathcal{E}|} | m_1, m_2, \dots, m_{|\mathcal{E}|})$, which can be denoted as $f_\mathcal{E}(x_1, x_2, \dots, x_{|\mathcal{E}|} | \boldsymbol{m})$. 
Given the link latency defined above, the nominal latency of player $i$ along path $p_i \in \mathcal{P}$ under a given assignment $\boldsymbol{m}$ is simply $L^i(\boldsymbol{m}) \coloneqq \sum_{e \in p_i} L_e(\boldsymbol{m})$ with pdf $f^i(x | \boldsymbol{m}) = \partial \left ( \int\int\dots\int_{ \{ \sum_{e \in p_i} x_e \leq x \} } \ f_\mathcal{E}(x_1, x_2, \dots, x_{|\mathcal{E}|} | \boldsymbol{m}) \ dx_1 dx_2 \dots dx_{|\mathcal{E}|} \right ) \Big \slash \partial x$ and mean $l^i(\boldsymbol{m}) = \sum_{e \in p_i} l_e(\boldsymbol{m})$. The stochastic congestion game consists of $n \coloneqq \sum_{k \in \mathcal{K}} n_k$ players (travelers), where player $i \in [n] \coloneqq \{ 1, 2, \dots, n \}$ is associated with the corresponding source-destination pair $k(i) \in \mathcal{K}$. As a result, $\mathcal{P}_{k(i)}$ is the set of possible pure strategies (actions or paths) for player $i$. The pure strategy profile of all $n$ players is denoted by $\boldsymbol{p} \coloneqq (p_1, p_2, \dots, p_n)$, where $p_i \in \mathcal{P}_{k(i)}$; a pure strategy profile fully specifies all actions in the game. The set of all pure strategy profiles is the Cartesian product of the pure strategy sets of all players, which is denoted by $\boldsymbol{\mathcal{P}} \coloneqq \mathcal{P}_{k(1)} \times \mathcal{P}_{k(2)} \times \dots \times \mathcal{P}_{k(n)}$. Let $\boldsymbol{p}_{-i} \coloneqq (p_1, p_2, \dots, p_{i - 1}, p_{i + 1}, \dots, p_n)$ be the pure strategies of all players except player $i$, so $\boldsymbol{p} = (p_i, \boldsymbol{p}_{-i})$. Given the pure strategy profile $\boldsymbol{p}$, the number of players on a path $p \in \mathcal{P}$ is given by $m^p = \sum_{i = 1}^n \mathbbm{1} \{ p_i = p \}$, and the number of players on a link $e \in \mathcal{E}$ is given by $m_e = \sum_{ \{ p \in \mathcal{P}: e \in p \} } \sum_{i = 1}^n \mathbbm{1} \{ p_i = p \}$. Let $\boldsymbol{m}(\boldsymbol{p})$ denote the number of players on all paths, which is fully determined by the pure strategy profile $\boldsymbol{p}$.
As a result, given the pure strategy profile $\boldsymbol{p} = (p_i, \boldsymbol{p}_{-i})$, the latency of player $i$ by choosing the path $p_i$ is the random variable $L^{i} (\boldsymbol{m}(\boldsymbol{p})) = \sum_{e \in p_i} L_e(\boldsymbol{m}(\boldsymbol{p}))$ with pdf $f^{i}(x | \boldsymbol{m}(\boldsymbol{p}))$ and mean $l^{i}(\boldsymbol{m}(\boldsymbol{p})) = \sum_{e \in p_i} l_e(\boldsymbol{m}(\boldsymbol{p}))$. For simplicity, instead of using $L^{i} (\boldsymbol{m}(\boldsymbol{p}))$, $f^{i}(x | \boldsymbol{m}(\boldsymbol{p}))$, and $l^{i}(\boldsymbol{m}(\boldsymbol{p}))$, we use $L^{i} (\boldsymbol{p})$, $f^{i}(x | \boldsymbol{p})$, and $l^{i}(\boldsymbol{p})$, respectively. The mixed strategy of player $i$ is denoted by $\sigma_i \in \Sigma_i$, where $\Sigma_i$ is the set of all probability distributions over the set of pure strategies $\mathcal{P}_{k(i)}$, and $\sigma_i(p)$ is the probability that player $i$ selects path $p$. The mixed strategy profile of all $n$ players is denoted by $\boldsymbol{\sigma} \coloneqq (\sigma_1, \sigma_2, \dots, \sigma_n)$, where $\sigma_i \in \Sigma_i$. The set of all mixed strategy profiles is the Cartesian product of the mixed strategy sets of all players, which is denoted by $\boldsymbol{\Sigma} \coloneqq \Sigma_1 \times \Sigma_2 \times \dots \times \Sigma_n$. Let $\boldsymbol{\sigma}_{-i} \coloneqq (\sigma_1, \sigma_2, \dots, \sigma_{i - 1}, \sigma_{i + 1}, \dots, \sigma_n)$ be the mixed strategies of all players except player $i$, so $\boldsymbol{\sigma} = (\sigma_i, \boldsymbol{\sigma}_{-i})$.
The latency of player $i$ by selecting path $p_i$ when the other $[n] \setminus i$ players select paths according to a mixed strategy $\boldsymbol{\sigma}_{-i}$ is denoted by the random variable $\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i})$ that has the following pdf using the law of total probability: \begin{equation} \label{dist_mixed} \bar{f}^{i}(x | (p_i, \boldsymbol{\sigma}_{-i})) = \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \Big ( f^{i}(x | (p_i, \boldsymbol{p}_{-i})) \cdot \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \Big ), \end{equation} where $\boldsymbol{\sigma}(\boldsymbol{p}_{-i}) = \prod_{j \in [n] \setminus i} \sigma_j (p_j)$ and $p_j$ is the corresponding strategy of player $j$ in $\boldsymbol{p}_{-i}$, and the mean of the random variable is given as \begin{equation} \label{eq_mean_mixed} \overline{l}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \coloneqq \mathbbm{E}[\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i})] = \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \Big ( l^{i}(p_i, \boldsymbol{p}_{-i}) \cdot \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \Big ). \end{equation} The expected average delay (latency) incurred by the $n$ players in the stochastic congestion game under the pure strategy profile $\boldsymbol{p}$, also known as the \textit{social cost} or \textit{social delay} in this context, is denoted by $D(\boldsymbol{p}) \coloneqq \frac{1}{n} \sum_{i = 1}^n l^{i}(\boldsymbol{p})$. The social delay under the mixed strategy $\boldsymbol{\sigma}$ is $D(\boldsymbol{\sigma}) \coloneqq \frac{1}{n} \sum_{\boldsymbol{p} \in \boldsymbol{\mathcal{P}}} \sum_{i = 1}^n \boldsymbol{\sigma}(\boldsymbol{p}) \cdot l^{i}(\boldsymbol{p})$, where $\boldsymbol{\sigma}(\boldsymbol{p}) = \prod_{i \in [n]} \sigma_i (p_i)$ and $p_i$ is the corresponding strategy of player $i$ in $\boldsymbol{p}$. 
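The mixed-strategy mean above can be evaluated by enumerating the opponents' pure profiles and weighting by $\boldsymbol{\sigma}(\boldsymbol{p}_{-i})$. The sketch below does this for a hypothetical two-player, two-link (Pigou-style) mean-latency function; the function `l` and the strategies are illustrative assumptions, not part of the model definition.

```python
from itertools import product

def mixed_mean_latency(i, p_i, sigma, mean_latency):
    """Evaluate bar{l}^i(p_i, sigma_{-i}) by enumerating opponents' profiles.

    sigma: list of dicts, sigma[j][path] = probability that player j picks path.
    mean_latency(i, profile): l^i for a full pure profile (user-supplied model).
    """
    others = [j for j in range(len(sigma)) if j != i]
    total = 0.0
    for choices in product(*(sigma[j].keys() for j in others)):
        prob = 1.0
        profile = {i: p_i}
        for j, p_j in zip(others, choices):
            prob *= sigma[j][p_j]
            profile[j] = p_j
        total += prob * mean_latency(i, profile)
    return total

# Hypothetical 2-player Pigou-style mean latency: the top link costs
# m_top / n in expectation, the bottom link costs 1 regardless of load.
def l(i, profile):
    if profile[i] == 'bottom':
        return 1.0
    m_top = sum(1 for path in profile.values() if path == 'top')
    return m_top / len(profile)

sigma = [{'top': 1.0}, {'top': 0.5, 'bottom': 0.5}]
print(mixed_mean_latency(0, 'top', sigma, l))  # 0.5 * 1.0 + 0.5 * 0.5 = 0.75
```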
The (pure) optimal load assignment, denoted by $\boldsymbol{o}$, minimizes the social delay among all possible (pure) load assignments, which may conflict with the selfish behavior of players. The (pure) \textit{price of anarchy} (PoA) of a congestion game is the maximum ratio $D(\boldsymbol{p}) \slash D(\boldsymbol{o})$ over all equilibria $\boldsymbol{p}$ of the game. Throughout the article, we follow the convention that $y \leq \boldsymbol{x}$ means that $y$ is less than or equal to all elements of the vector $\boldsymbol{x}$. \section{Risk-Averse Equilibrium for Stochastic Congestion Games} \label{section_risk_averse_equilibrium_congestion_game} In the following subsection, we provide illustrative examples and analyze their equilibria in the classical and risk-averse frameworks; these examples motivate the novel risk-averse approach for stochastic congestion games presented in this article. \subsection{Illustrative Examples} \label{sub_section_illustrative_example} The Pigou network \citep{pigou2013economics} is one of the simplest networks studied in congestion games. In the first example, we use the Pigou network to clearly state the motivation of the current work. In the second example, we study the more controversial network used by \cite{braess1968paradoxon} in the famous Braess's paradox. The two examples below lay the groundwork for the risk-averse equilibrium for congestion games proposed in this article. \begin{example} \label{example1} Consider the Pigou network with two parallel links between source and destination, as shown in Figure \ref{figure_example1_Pigou}. There are $n$ players (vehicles or data packages) that travel from source to destination. The top and bottom links are labeled as $1$ and $2$ with loads $m_1$ and $m_2 = n - m_1$, respectively.
The travel times on links $1$ and $2$ are respectively independent random variables $L_1(m_1)$ and $L_2(m_2)$ with expected values $l_1(m_1) = \frac{m_1}{n}$ and $l_2(m_2) = 1$ and pdfs \[ \begin{aligned} f_1(x | m_1) = \ & \alpha \Bigg ( 2\exp \bigg ( -100 \Big (x - \frac{m_1}{4n} \Big )^2 \bigg ) \cdot \mathbbm{1} \left \{ 0 \leq x \leq \frac{m_1}{2n} \right \} \\ & \ \ \ \ + 3\exp \bigg ( -100 \Big (x - \frac{3m_1}{2n} \Big )^2 \bigg ) \cdot \mathbbm{1} \left \{ \frac{5m_1}{4n} \leq x \leq \frac{7m_1}{4n} \right \} \Bigg ), \\ f_2(x | m_2) = \ & \beta \exp \left ( -100 \left (x - 1 \right )^2 \right ) \cdot \mathbbm{1} \left \{ \frac34 \leq x \leq \frac54 \right \}, \end{aligned} \] where $\alpha$ and $\beta$ are constants for which each of the two distributions integrates to one and $\mathbbm{1}\{.\}$ is the indicator function. \begin{figure} \caption{The Pigou network in Example \ref{example1} with the load-dependent latency pdfs and the corresponding means of links.} \label{figure_example1_Pigou} \end{figure} \end{example} The well-known Wardrop equilibrium \citep{wardrop1952road, wardrop1952correspondence}, also the Nash equilibrium \citep{vonNeumann1947}, for the Pigou network in Example \ref{example1} is that all $n$ players travel along the top link since it is the weakly dominant strategy for any player, as the expected latency incurred along the top link is always less than or equal to the expected latency incurred along the bottom link, $l_1(m_1) = \frac{m_1}{n} \leq 1 = l_2(m_2)$. As a result, the Wardrop equilibrium for the Pigou network is $\boldsymbol{p}_W^* = (1, 1, \dots, 1)$ with social delay $D_W(\boldsymbol{p}_W^*) = 1$.
However, although the expected latency along the top link is less than or equal to that of the bottom link, $l_1(m_1) \leq l_2(m_2)$, the variance of travel time along the top link at full capacity is larger than that along the bottom link, which increases the risk and uncertainty of traveling along the top link \citep{yekkehkhany2020risk, yekkehkhanycost, 9142286}. In fact, the bottom link with higher expected travel time is more likely to have a lower delay than the top link at full capacity, i.e., $P \big ( L_2(0) \leq L_1(n) \big ) = 0.6 > 0.5$. As a result, a risk-averse player selects the bottom link for the commute when the top link is at full capacity, especially if it is a one-time trip. As will be shown later, the risk-averse behavior of players decreases the social delay for this example. As an example, consider a traveler who wants to go from a hotel to the airport and has two options for this trip: taking the highway, which has a lower expected travel time but is more likely to get congested due to traffic jams and crashes (the top link in the Pigou network), or taking the urban streets, with a higher expected travel time and lower congestion (the bottom link in the Pigou network). A risk-neutral player travels along the top link with the lower expected latency, but a risk-averse player travels along the bottom link to avoid incurring a long delay and missing the flight. Even in everyday commutes between home and work, the expected delay over many days may not be a desirable objective to minimize. No one wants to arrive at work early on some days but late on others, and to be penalized accordingly. The Braess network, studied in the next example, further reinforces the fact that minimizing the expected delay is not desirable for risk-averse players.
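The value $P \big ( L_2(0) \leq L_1(n) \big ) = 0.6$ in Example \ref{example1} can be checked by simulation. The sketch below (our own illustrative check, not part of the formal analysis) draws from the truncated Gaussian-shaped bumps of the example via rejection sampling; since the bump around $\frac14$ lies entirely below the support of $f_2$ and the bump around $\frac32$ lies above it, the probability equals the $\frac35$ mixture weight of the second bump.

```python
import random

def truncated_bump(mu, lo, hi, sd=(1 / 200) ** 0.5):
    # Rejection sampling from a density proportional to exp(-100 (x - mu)^2)
    # restricted to [lo, hi]; sd = 1/sqrt(200) matches the exponent.
    while True:
        x = random.gauss(mu, sd)
        if lo <= x <= hi:
            return x

def sample_L1_full():
    # f_1 at m_1 = n: a 2:3 mixture of bumps centered at 1/4 and 3/2.
    if random.random() < 2 / 5:
        return truncated_bump(0.25, 0.0, 0.5)
    return truncated_bump(1.5, 1.25, 1.75)

def sample_L2():
    # f_2 at m_2 = 0: a single bump centered at 1 on [3/4, 5/4].
    return truncated_bump(1.0, 0.75, 1.25)

random.seed(0)
trials = 100_000
hits = sum(sample_L2() <= sample_L1_full() for _ in range(trials))
print(hits / trials)  # close to 0.6
```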
\begin{figure} \caption{The Braess network in Example \ref{example2} with the load-dependent latency pdfs and the corresponding means of links.} \label{figure_example2_Braess} \end{figure} \begin{example} \label{example2} Consider the Braess network depicted in Figure \ref{figure_example2_Braess}. There are $n$ players (vehicles or data packages) that travel from source to destination. Other than the source and destination, there are two nodes $A$ and $B$ in the network. The directed links $(S, A)$, $(A, D)$, $(S, B)$, $(B, D)$, and $(A, B)$ are referred to as links $1$, $2$, $3$, $4$, and $5$ with loads $m_1$, $m_2$, $m_3$, $m_4$, and $m_5$, respectively. The travel times on links $1$, $2$, $3$, $4$, and $5$ are respectively independent random variables $L_1(m_1)$, $L_2(m_2)$, $L_3(m_3)$, $L_4(m_4)$, and $L_5(m_5)$ with expected values $l_1(m_1) = \frac{m_1}{n}$, $l_2(m_2) = 1$, $l_3(m_3) = 1$, $l_4(m_4) = \frac{m_4}{n}$, and $l_5(m_5) = 0$ and pdfs \[ \begin{aligned} f_1(x | m_1) = \ & \gamma \Bigg ( \exp \bigg ( -100 \Big (x - \frac{m_1}{2n} \Big )^2 \bigg ) \cdot \mathbbm{1} \left \{ 0 \leq x \leq \frac{m_1}{n} \right \} \\ & \ \ \ \ + \exp \bigg ( -100 \Big (x - \frac{3m_1}{2n} \Big )^2 \bigg ) \cdot \mathbbm{1} \left \{ \frac{m_1}{n} < x \leq \frac{2m_1}{n} \right \} \Bigg ), \\ f_2(x | m_2) = \ & \zeta \exp \left ( -100 \left (x - 1 \right )^2 \right ) \cdot \mathbbm{1} \left \{ \frac12 \leq x \leq \frac32 \right \}, \\ f_3(x | m_3) = \ & \zeta \exp \left ( -100 \left (x - 1 \right )^2 \right ) \cdot \mathbbm{1} \left \{ \frac12 \leq x \leq \frac32 \right \}, \\ f_4(x | m_4) = \ & \gamma \Bigg ( \exp \bigg ( -100 \Big (x - \frac{m_4}{2n} \Big )^2 \bigg ) \cdot \mathbbm{1} \left \{ 0 \leq x \leq \frac{m_4}{n} \right \} \\ & \ \ \ \ + \exp \bigg ( -100 \Big (x - \frac{3m_4}{2n} \Big )^2 \bigg ) \cdot \mathbbm{1} \left \{ \frac{m_4}{n} < x \leq \frac{2m_4}{n} \right \} \Bigg ), \end{aligned} \] where $\gamma$ and $\zeta$ are constants for which the distributions
integrate to one, $\mathbbm{1}\{.\}$ is the indicator function, and $P \big ( L_5(m_5) = 0 \big ) = 1$. There are three paths from source to destination, $(S, A, D)$, $(S, A, B, D)$, and $(S, B, D)$, that are referred to as paths $1, 2$, and $3$ with loads $m^1, m^2$, and $m^3$, respectively, where the difference between links and paths should be clear from the context. Note that the link loads are related to path loads as $m_1 = m^1 + m^2$, $m_2 = m^1$, $m_3 = m^3$, $m_4 = m^2 + m^3$, and $m_5 = m^2$, and the delays along paths are related to link delays as $L^1(\boldsymbol{m}) = L_1(m_1) + L_2(m_2)$, $L^2(\boldsymbol{m}) = L_1(m_1) + L_5(m_5) + L_4(m_4) = L_1(m_1) + L_4(m_4)$, and $L^3(\boldsymbol{m}) = L_3(m_3) + L_4(m_4)$. \end{example} The Wardrop (Nash) equilibrium for the Braess network in Example \ref{example2} is that all the $n$ players travel along path $2$ since it is the weakly dominant path for any player as the expected latency incurred along path $2$ is always less than or equal to the expected latency incurred along the other two paths $1$ and $3$, \[ l^2(\boldsymbol{m}) = l_1(m_1) + l_5(m_5) + l_4(m_4) = \frac{m_1}{n} + \frac{m_4}{n} \begin{cases} \leq \frac{m_1}{n} + 1 = l_1(m_1) + l_2(m_2) = l^1(\boldsymbol{m}), \\ \leq 1 + \frac{m_4}{n} = l_3(m_3) + l_4(m_4) = l^3(\boldsymbol{m}). \end{cases} \] As a result, the Wardrop equilibrium for Braess network is $\boldsymbol{p}_W^* = (2, 2, \dots, 2)$ with social delay $D_W(\boldsymbol{p}_W^*) = 2$. However, although path $2$ has latency less than or equal to that of paths $1$ and $3$, $l^2(\boldsymbol{m}) \leq \big (l^1(\boldsymbol{m}), l^3(\boldsymbol{m}) \big)$, the variance of travel time along path $2$ at full capacity is larger than that along paths $1$ and $3$, which increases the risk and uncertainty of traveling along path $2$. 
In fact, path $1$ (or $3$), with higher expected travel time, is more likely to have a lower delay than the rest of the paths, i.e., $P \Big ( L^1(0) \leq \big ( L^2(n), L^3(0) \big ) \Big ) = \frac38 > \frac14 = P \Big ( L^2(n) \leq \big ( L^1(0), L^3(0) \big ) \Big )$. As a result, a risk-averse player selects path $1$ or $3$ for the commute when path $2$ is at full capacity, and as is shown later, the risk-averse behavior of players decreases the social delay for this example. \subsection{Risk-Averse Equilibrium} \label{sub_section_RAE} In the classical Wardrop (Nash) equilibrium, the best response of player $i \in [n]$ to the mixed strategy $\boldsymbol{\sigma}_{-i}$ of the other $[n] \setminus i$ players is defined as the set \[ \underset{p_i \in \mathcal{P}_i}{\argmin} \ \overline{l}^{i}(p_i, \boldsymbol{\sigma}_{-i}). \] In other words, the best response for player $i$ given $\boldsymbol{\sigma}_{-i}$ is the path that minimizes the expected travel time. However, as Examples \ref{example1} and \ref{example2} illustrate, the path with the minimum expected latency may also have high volatility, which exposes travelers to risky scenarios. As a result, the classical Wardrop (Nash) equilibrium, which ignores the distribution of path latency except for its expectation, and thus carries no information about the variance or the shape of the distribution, falls short in addressing the risk-averse behavior of players. In this article, motivated by Examples \ref{example1} and \ref{example2}, we propose the Risk-Averse Best Action Decision with Incomplete Information (R-ABADI) response of a player to the strategy of the other players in a stochastic congestion game as follows.
\begin{definition} \label{def_best_response_mixed_RAE} Given the mixed strategy profile $\boldsymbol{\sigma}_{-i}$ of players $[n] \setminus i$, the set of mixed strategy risk-averse best responses of player $i$ is the set of all probability distributions over the set \begin{equation} \label{eq_mixed_best_response} \underset{p_i \in \mathcal{P}_i}{\argmax} \ P \left ( \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus p_i, \boldsymbol{\sigma}_{-i}) \right ), \end{equation} where, when $\mathcal{P}_i \setminus p_i \neq \oldemptyset$, the event $\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus p_i, \boldsymbol{\sigma}_{-i})$ means that $\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i})$ is less than or equal to $\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i})$ for all $p_i' \in \mathcal{P}_i \setminus p_i$; otherwise, if $\mathcal{P}_i \setminus p_i = \oldemptyset$, player $i$ has only a single option that can be played. The same randomness on the actions of players $[n] \setminus i$ is considered in $\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i})$ for all $p_i \in \mathcal{P}_i$. Given the mixed strategy $\boldsymbol{\sigma}_{-i}$ of players $[n] \setminus i$, the risk-averse best response set of player $i$'s strategies is denoted by $RB(\boldsymbol{\sigma}_{-i})$, which is in general a set-valued function. \end{definition} The risk-averse equilibrium for stochastic congestion games is defined as follows. \begin{definition} \label{def_mixed_strategy_RAE} A strategy profile $\boldsymbol{\sigma}^* = (\sigma_1^*, \sigma_2^*, \dots, \sigma_n^*)$ is a risk-averse equilibrium if and only if $\sigma_i^* \in RB(\boldsymbol{\sigma}_{-i}^*)$ for all $i \in [n]$.
\end{definition} The following theorem proves the existence of a risk-averse equilibrium for any stochastic congestion game with a finite number of players and pure strategy sets $\mathcal{P}_i$, $i \in [n]$, of finite cardinality. \begin{theorem} \label{theorem_existence_RAE} For any finite $n$-player stochastic congestion game, a risk-averse equilibrium exists. \end{theorem} The proof of Theorem \ref{theorem_existence_RAE} is provided in Appendix \ref{proof_theorem_RAE}. As a direct result of Definitions \ref{def_best_response_mixed_RAE} and \ref{def_mixed_strategy_RAE}, the pure strategy risk-averse best response and the pure strategy risk-averse equilibrium are defined as follows. The pure strategy risk-averse best response of player $i$ to the pure strategy $\boldsymbol{p}_{-i}$ of players $[n] \setminus i$ is the set \begin{equation} \label{pure_best_response} \left\{ \begin{array}{ll} \argmax_{p_i \in \mathcal{P}_i} P \Big ( L^{i} \left ( p_i, \boldsymbol{p}_{-i} \right ) \leq \boldsymbol{L}^{i} \left ( \mathcal{P}_i \setminus p_i, \boldsymbol{p}_{-i} \right ) \Big ), \ \ \ \text{ if } \mathcal{P}_i \setminus p_i \neq \oldemptyset, \\ p_i, \ \ \ \text{ if } \mathcal{P}_i \setminus p_i = \oldemptyset. \end{array} \right. \end{equation} Given the pure strategy $\boldsymbol{p}_{-i}$ of players $[n] \setminus i$, the risk-averse best response set of player $i$ in Equation \eqref{pure_best_response} is denoted by $RB(\boldsymbol{p}_{-i})$ (overloading notation, $RB(.)$ is used for both pure and mixed strategy risk-averse best responses). As a result, a pure strategy profile $\boldsymbol{p}^* = (p_1^*, p_2^*, \dots, p_n^*)$ is a pure strategy risk-averse equilibrium if and only if $p_i^* \in RB(\boldsymbol{p}_{-i}^*)$ for all $i \in [n]$. Strict dominance in the classical Wardrop (Nash) equilibrium is defined as follows.
A pure strategy $p_i \in \mathcal{P}_i$ of player $i$ strictly dominates a second pure strategy $p_i' \in \mathcal{P}_i$ of the player if \[ l^{i}(p_i, \boldsymbol{p}_{-i}) < l^{i}(p_i', \boldsymbol{p}_{-i}), \ \forall \boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}. \] The solution concept of iterated elimination of strictly dominated strategies can also be applied to the risk-averse equilibrium using the following definition. \begin{definition} \label{def_strict_dominance_RAE} A pure strategy $p_i \in \mathcal{P}_i$ of player $i$ strictly dominates a second pure strategy $p_i' \in \mathcal{P}_i$ of the player in the risk-averse equilibrium if \begin{equation} \label{eq_strict_dominance_RAE} P \Big ( L^{i} \left ( p_i, \boldsymbol{p}_{-i} \right ) \leq \boldsymbol{L}^{i} \left ( \mathcal{P}_i \setminus p_i, \boldsymbol{p}_{-i} \right ) \Big ) > P \Big ( L^{i} \left ( p_i', \boldsymbol{p}_{-i} \right ) \leq \boldsymbol{L}^{i} \left ( \mathcal{P}_i \setminus p_i', \boldsymbol{p}_{-i} \right ) \Big ), \ \forall \boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}. 
\end{equation} \end{definition} Suppose path $p_i \in \mathcal{P}_i$ strictly dominates path $p_i' \in \mathcal{P}_i$ for player $i$; then, for any $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$, \begin{equation} \label{eq_proof_strict_dominance_RAE} \begin{aligned} & P \left ( \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus p_i, \boldsymbol{\sigma}_{-i}) \right ) \\ \overset{(a)}{=} & \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \Bigg ( P \left ( L^{i}(p_i, \boldsymbol{p}_{-i}) \leq \boldsymbol{L}^{i}(\mathcal{P}_i \setminus p_i, \boldsymbol{p}_{-i}) \right ) \cdot \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \Bigg ) \\ \overset{(b)}{>} & \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \Bigg ( P \left ( L^{i}(p_i', \boldsymbol{p}_{-i}) \leq \boldsymbol{L}^{i}(\mathcal{P}_i \setminus p_i', \boldsymbol{p}_{-i}) \right ) \cdot \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \Bigg ) \\ = & \ P \left ( \overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus p_i', \boldsymbol{\sigma}_{-i}) \right ), \end{aligned} \end{equation} where $(a)$ is true by the law of total probability, with $\boldsymbol{\sigma}(\boldsymbol{p}_{-i}) = \prod_{j \in [n] \setminus i} \sigma_j (p_j)$ and $p_j$ the corresponding strategy of player $j$ in $\boldsymbol{p}_{-i}$, and $(b)$ follows from Equation \eqref{eq_strict_dominance_RAE} in Definition \ref{def_strict_dominance_RAE}. By Equation \eqref{eq_proof_strict_dominance_RAE} and Equation \eqref{eq_mixed_best_response} in Definition \ref{def_best_response_mixed_RAE}, a strictly dominated pure strategy cannot be a best response to any mixed strategy profile $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$, so it can be removed from the set of strategies of player $i$. In order to find the risk-averse equilibrium for a stochastic congestion game, we use support enumeration.
Hypothesize that $\boldsymbol{\mathcal{P}}' \coloneqq \{ \mathcal{P}_1', \mathcal{P}_2', \dots, \mathcal{P}_n' \}$ is the \textit{support} of a risk-averse equilibrium, where $\mathcal{P}_i'$ is the set of pure strategies of player $i$ that are played with non-zero probability and $\sigma_i(p_i)$ for $p_i \in \mathcal{P}_i'$ indicates the probability mass function on the support. At equilibrium, player $i \in [n]$ must be indifferent among the strategies in the set $\mathcal{P}_i'$ and have no incentive to deviate to the rest of the strategies in the set $\mathcal{P}_i \setminus \mathcal{P}_i'$, and the probability mass function over the support must sum to one. As a result, if there is a risk-averse equilibrium with the mentioned support, it is the solution of the following set of equations for $\boldsymbol{\sigma} \in \boldsymbol{\Sigma}$: \begin{equation} \label{eq_find_mixed_strategy_RAE} \left\{ \begin{array}{ll} P \left ( \overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus p_i', \boldsymbol{\sigma}_{-i}) \right ) \geq P \left ( \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus p_i, \boldsymbol{\sigma}_{-i}) \right ), \forall p_i\in \mathcal{P}_i, p_i' \in \mathcal{P}_i', \forall i \in [n],\\ \\ \sum_{p_i \in \mathcal{P}_i'} \sigma_i(p_i) = 1, \forall i \in [n],\\ \\ \sigma_i(p_i) = 0, \forall p_i \in \mathcal{P}_i \setminus \mathcal{P}_i', \forall i \in [n]. \end{array} \right.
\end{equation} As mentioned earlier in Equation \eqref{eq_proof_strict_dominance_RAE}, using the law of total probability, we have \begin{equation} \label{eq_LTP_RAE} \begin{aligned} & P \left ( \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus p_i, \boldsymbol{\sigma}_{-i}) \right ) \\ = & \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \Bigg ( P \left ( L^{i}(p_i, \boldsymbol{p}_{-i}) \leq \boldsymbol{L}^{i}(\mathcal{P}_i \setminus p_i, \boldsymbol{p}_{-i}) \right ) \cdot \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \Bigg ) \\ = & \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \Big ( t_i(p_i, \boldsymbol{p}_{-i}) \cdot \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \Big ), \end{aligned} \end{equation} where $t_i(p_i, \boldsymbol{p}_{-i}) \coloneqq P \left ( L^{i}(p_i, \boldsymbol{p}_{-i}) \leq \boldsymbol{L}^{i}(\mathcal{P}_i \setminus p_i, \boldsymbol{p}_{-i}) \right )$ is the $i$-th element of an $n$-dimensional vector called $\boldsymbol{t}(p_i, \boldsymbol{p}_{-i})$. Construct a \textit{risk-averse probability tensor} of rank $n$ where $\mathcal{P}_i$ forms the $i$-th dimension of the tensor. Let the element associated with $(p_i, \boldsymbol{p}_{-i})$ in the tensor be the vector $\boldsymbol{t}(p_i, \boldsymbol{p}_{-i})$. Equations \eqref{eq_find_mixed_strategy_RAE} and \eqref{eq_LTP_RAE} along with the definition of the risk-averse probability tensor provide us with an alternative approach for deriving the risk-averse equilibrium, which is to find the Wardrop (Nash) equilibrium on the risk-averse probability tensor. The mean-variance (MV) and conditional value at risk level $\alpha$ (CVaR$_\alpha$) methods are two well-known frameworks to consider risk in statistics. In the next two sub-sections, two new risk-averse equilibria based on these two concepts are proposed. 
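As a small illustration of this alternative approach, the sketch below enumerates the pure profiles of a two-player game and checks the pure strategy risk-averse equilibrium condition $p_i^* \in RB(\boldsymbol{p}_{-i}^*)$ against a hypothetical risk-averse probability tensor; the $t_i$ values are made up for illustration and are not derived from any network.

```python
from itertools import product

paths = [['A', 'B'], ['A', 'B']]  # pure strategy sets of the two players

def t(i, profile):
    # Hypothetical tensor entries t_i(p_i, p_{-i}): the probability that
    # player i's chosen path beats its alternative. Here each player's path
    # wins more often when the two players do not share a path.
    return 0.4 if profile[0] == profile[1] else 0.7

def is_pure_rae(profile):
    # p_i must maximize t_i(., p_{-i}) over player i's own pure strategies.
    for i in range(len(profile)):
        best = max(t(i, profile[:i] + (q,) + profile[i + 1:]) for q in paths[i])
        if t(i, profile) < best:
            return False
    return True

equilibria = [p for p in product(*paths) if is_pure_rae(p)]
print(equilibria)  # [('A', 'B'), ('B', 'A')]
```

The two anti-coordination profiles survive the check, while the profiles in which the players share a path admit a profitable risk-averse deviation.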
\subsection{Mean-Variance Equilibrium} \label{sec_mean_variance_equilibrium} As seen in Examples \ref{example1} and \ref{example2}, the high variance of paths with lower expected travel time can result in uncertainty and impose high latency on travelers. The mean-variance framework in statistics addresses this issue by keeping a balance between low latency and low variance. Applying this method to the proposed stochastic congestion game setting, the mean-variance best response and the mean-variance equilibrium are defined as follows. \begin{definition} \label{def_best_response_mixed_MV} Given the mixed strategy profile $\boldsymbol{\sigma}_{-i}$ of players $[n] \setminus i$, the set of mixed strategy mean-variance best responses of player $i$ is the set of all probability distributions over the set \begin{equation} \label{eq_mixed_best_response_MV} \underset{p_i \in \mathcal{P}_i}{\argmin} \ \mathrm{Var} \left (\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ) + \rho \cdot \overline{l}^{i}(p_i, \boldsymbol{\sigma}_{-i}), \end{equation} where the variance $\mathrm{Var} \left (\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right )$ can be calculated using the pdf of $\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i})$ provided in Equation \eqref{dist_mixed} and $\rho \geq 0$ is a hyper-parameter capturing the absolute risk tolerance. Given the mixed strategy $\boldsymbol{\sigma}_{-i}$ of players $[n] \setminus i$, the mean-variance best response set of player $i$'s strategies is denoted by $MB(\boldsymbol{\sigma}_{-i})$, which is in general a set-valued function. \end{definition} \begin{definition} \label{def_mixed_strategy_MV} A strategy profile $\boldsymbol{\sigma}^* = (\sigma_1^*, \sigma_2^*, \dots, \sigma_n^*)$ is a mean-variance equilibrium if and only if $\sigma_i^* \in MB(\boldsymbol{\sigma}_{-i}^*)$ for all $i \in [n]$. \end{definition} The existence of the mean-variance equilibrium is discussed in the following theorem.
\begin{theorem} \label{theorem_existence_MV} For any finite $n$-player stochastic congestion game, a mean-variance equilibrium exists. \end{theorem} The proof of Theorem \ref{theorem_existence_MV} is provided in Appendix \ref{proof_theorem_MV}. The pure strategy mean-variance best response of player $i$ to the pure strategy $\boldsymbol{p}_{-i}$ of players $[n] \setminus i$ is the set \begin{equation} \label{pure_best_response_MV} \underset{p_i \in \mathcal{P}_i}{\argmin} \ \mathrm{Var} \left (L^{i}(p_i, \boldsymbol{p}_{-i}) \right ) + \rho \cdot l^{i}(p_i, \boldsymbol{p}_{-i}), \end{equation} where $\mathrm{Var} \left (L^{i}(p_i, \boldsymbol{p}_{-i}) \right ) = \mathrm{Var} \left ( \sum_{e \in p_i} L_e(p_i, \boldsymbol{p}_{-i}) \right ) = \sum_{e \in p_i} \sum_{e' \in p_i} \mathrm{Cov} \left ( L_e(p_i, \boldsymbol{p}_{-i}), L_{e'}(p_i, \boldsymbol{p}_{-i}) \right )$. Given the pure strategy $\boldsymbol{p}_{-i}$ of players $[n] \setminus i$, the mean-variance best response set of player $i$ in Equation \eqref{pure_best_response_MV} is denoted by $MB(\boldsymbol{p}_{-i})$ (overloading notation, $MB(.)$ is used for both pure and mixed strategy mean-variance best responses). As a result, a pure strategy profile $\boldsymbol{p}^* = (p_1^*, p_2^*, \dots, p_n^*)$ is a pure strategy mean-variance equilibrium if and only if $p_i^* \in MB(\boldsymbol{p}_{-i}^*)$ for all $i \in [n]$. Strict dominance among pure strategy profiles in the mean-variance equilibrium is straightforward and is defined as follows.
A pure strategy $p_i \in \mathcal{P}_i$ of player $i$ strictly dominates a second pure strategy $p_i' \in \mathcal{P}_i$ of the player in the pure strategy mean-variance equilibrium if \begin{equation} \label{eq_strict_dominance_MV} \mathrm{Var} \left (L^{i}(p_i, \boldsymbol{p}_{-i}) \right ) + \rho \cdot l^{i}(p_i, \boldsymbol{p}_{-i}) < \mathrm{Var} \left (L^{i}(p_i', \boldsymbol{p}_{-i}) \right ) + \rho \cdot l^{i}(p_i', \boldsymbol{p}_{-i}), \ \forall \boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}. \end{equation} However, because variance is not a linear operator, strict dominance may not be derived from Equation \eqref{eq_strict_dominance_MV} for the mixed strategy mean-variance equilibrium, as shown below. \begin{equation} \label{eq_strict_dominance_MV_general} \begin{aligned} & \mathrm{Var} \left (\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ) + \rho \cdot \overline{l}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \\ \overset{(a)}{=} & \ \mathrm{E} \left [ \left ( \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right)^2 \right ] - \left ( \overline{l}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right )^2 + \rho \cdot \overline{l}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \\ \overset{(b)}{=} & \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \left ( \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \cdot \mathrm{E} \left [ \left ( L^{i}(p_i, \boldsymbol{p}_{-i}) \right)^2 \right ] \right ) - \left ( \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \Big ( \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \cdot l^{i}(p_i, \boldsymbol{p}_{-i}) \Big ) \right )^2 \\ & + \rho \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \Big ( \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \cdot l^{i}(p_i, \boldsymbol{p}_{-i}) \Big ) \\ \overset{(c)}{=} & \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \left ( \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \cdot \mathrm{E} \left [ \left ( L^{i}(p_i, \boldsymbol{p}_{-i}) \right)^2 \right ] \right ) \\ & - \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \left ( \sum_{\boldsymbol{p}_{-i}' \in \boldsymbol{\mathcal{P}}_{-i}} \Big ( \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \cdot \boldsymbol{\sigma}(\boldsymbol{p}_{-i}') \cdot l^{i}(p_i, \boldsymbol{p}_{-i}) \cdot l^{i}(p_i, \boldsymbol{p}_{-i}') \Big ) \right ) + \rho \cdot \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \Big ( \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \cdot l^{i}(p_i, \boldsymbol{p}_{-i}) \Big ) \\ \overset{(d)}{=} & \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \cdot \left ( \mathrm{E} \left [ \left ( L^{i}(p_i, \boldsymbol{p}_{-i}) \right)^2 \right ] - l^{i}(p_i, \boldsymbol{p}_{-i}) \cdot \sum_{\boldsymbol{p}_{-i}' \in \boldsymbol{\mathcal{P}}_{-i}} \Big ( \boldsymbol{\sigma}(\boldsymbol{p}_{-i}') \cdot l^{i}(p_i, \boldsymbol{p}_{-i}') \Big ) + \rho \cdot l^{i}(p_i, \boldsymbol{p}_{-i}) \right ) \\ = & \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \cdot \left ( \mathrm{E} \left [ \left ( L^{i}(p_i, \boldsymbol{p}_{-i}) \right)^2 \right ] - l^{i}(p_i, \boldsymbol{p}_{-i}) \cdot \left ( \sum_{\boldsymbol{p}_{-i}' \in \boldsymbol{\mathcal{P}}_{-i}} \Big ( \boldsymbol{\sigma}(\boldsymbol{p}_{-i}') \cdot l^{i}(p_i, \boldsymbol{p}_{-i}') \Big ) - \rho \right ) \right ), \end{aligned} \end{equation} where $(a)$ is true by the definition of variance, $(b)$ follows from Equation \eqref{eq_mean_mixed}, $(c)$ is derived by expanding the second term, and $(d)$ is true by combining the summations over $\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}$ and factoring $\boldsymbol{\sigma}(\boldsymbol{p}_{-i})$.
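The variance expansion used in step $(b)$ can be sanity-checked numerically: for discrete conditional latency distributions, the variance of the mixture computed directly matches $\sum_{\boldsymbol{p}_{-i}} \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \, \mathrm{E}[(L^i)^2] - \big( \sum_{\boldsymbol{p}_{-i}} \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \, l^i \big)^2$. The weights and distributions below are made up for illustration.

```python
# Hypothetical: two opponent profiles with weights sigma; conditional latency
# distributions are discrete lists of (value, probability) pairs.
sigma = [0.3, 0.7]
dists = [
    [(1.0, 0.5), (3.0, 0.5)],  # E[L] = 2.0, E[L^2] = 5.0
    [(2.0, 1.0)],              # E[L] = 2.0, E[L^2] = 4.0
]

def moment(dist, k):
    return sum(p * x ** k for x, p in dist)

# Direct route: moments of the full mixture distribution.
mix = [(x, w * p) for w, dist in zip(sigma, dists) for x, p in dist]
var_direct = moment(mix, 2) - moment(mix, 1) ** 2

# Expanded route, as in the derivation: weighted second moments minus the
# square of the weighted means.
var_expanded = (sum(w * moment(d, 2) for w, d in zip(sigma, dists))
                - sum(w * moment(d, 1) for w, d in zip(sigma, dists)) ** 2)

print(var_direct, var_expanded)  # both equal 0.3 (up to float rounding)
```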
As can be seen in Equation \eqref{eq_strict_dominance_MV_general}, because variance is a non-linear operator, it is not clear whether Equation \eqref{eq_strict_dominance_MV} implies $\mathrm{Var} \left (\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ) + \rho \cdot \overline{l}^{i}(p_i, \boldsymbol{\sigma}_{-i}) < \mathrm{Var} \left (\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \right ) + \rho \cdot \overline{l}^{i}(p_i', \boldsymbol{\sigma}_{-i})$ for all $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$. We therefore advise against relying on strict dominance for the mixed strategy mean-variance equilibrium. In certain circumstances, however, sufficient conditions for strict dominance can be stated; e.g., when $l^{i}(\boldsymbol{p}) \leq \frac{\rho}{2}$ for all $\boldsymbol{p} \in \boldsymbol{\mathcal{P}}$ and for all $i \in [n]$, which is treated in the following definition, or, symmetrically, when $l^{i}(\boldsymbol{p}) \geq \frac{\rho}{2}$ for all $\boldsymbol{p} \in \boldsymbol{\mathcal{P}}$ and for all $i \in [n]$. \begin{definition} \label{def_strict_dominance_MV_new} If $l^{i}(\boldsymbol{p}) \leq \frac{\rho}{2}$ for all $\boldsymbol{p} \in \boldsymbol{\mathcal{P}}$ and for all $i \in [n]$, pure strategy $p_i \in \mathcal{P}_i$ of player $i$ strictly dominates a second pure strategy $p_i' \in \mathcal{P}_i$ of the player in the mean-variance equilibrium if \begin{equation} \label{eq_strict_dominance_MV_sufficient_mean} l^{i} \left ( p_i, \boldsymbol{p}_{-i} \right ) < l^{i} \left ( p_i', \boldsymbol{p}_{-i} \right ), \ \forall \boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}, \end{equation} and \begin{equation} \label{eq_strict_dominance_MV_sufficient_second_moment} \mathrm{E} \left [ \Big ( L^{i} \left ( p_i, \boldsymbol{p}_{-i} \right ) \Big )^2 \right ] < \mathrm{E} \left [ \Big ( L^{i} \left ( p_i', \boldsymbol{p}_{-i} \right ) \Big )^2 \right ], \ \forall \boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}.
\end{equation} \end{definition} Consider that path $p_i \in \mathcal{P}_i$ strictly dominates path $p_i' \in \mathcal{P}_i$ for player $i$ as defined in Definition \ref{def_strict_dominance_MV_new}; then, using Equation \eqref{eq_strict_dominance_MV_sufficient_mean}, for any $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$, \begin{equation} \overline{l}^{i}(p_i, \boldsymbol{\sigma}_{-i}) = \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \Big ( \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \cdot l^{i}(p_i, \boldsymbol{p}_{-i}) \Big ) < \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \Big ( \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \cdot l^{i}(p_i', \boldsymbol{p}_{-i}) \Big ) = \overline{l}^{i}(p_i', \boldsymbol{\sigma}_{-i}). \end{equation} Note that $\overline{l}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \leq \frac{\rho}{2}$ for all $p_i \in \mathcal{P}_i$, for all $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$, and for all $i \in [n]$ as a result of $l^{i}(\boldsymbol{p}) \leq \frac{\rho}{2}$ for all $\boldsymbol{p} \in \boldsymbol{\mathcal{P}}$ and for all $i \in [n]$. Hence, using the fact that the function $-f^2 + \rho \cdot f$ is increasing for $f \leq \frac{\rho}{2}$, for any $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$ we have \begin{equation} \label{eq_intermediate_mean_strict_dominance_MV} - \left ( \overline{l}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right )^2 + \rho \cdot \overline{l}^{i}(p_i, \boldsymbol{\sigma}_{-i}) < - \left ( \overline{l}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \right )^2 + \rho \cdot \overline{l}^{i}(p_i', \boldsymbol{\sigma}_{-i}). 
\end{equation} On the other hand, using Equation \eqref{eq_strict_dominance_MV_sufficient_second_moment}, we have \begin{equation} \label{eq_intermediate_second_moment_strict_dominance_MV} \begin{aligned} & \mathrm{E} \left [ \left ( \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right)^2 \right ] = \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \left ( \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \cdot \mathrm{E} \left [ \left ( L^{i}(p_i, \boldsymbol{p}_{-i}) \right)^2 \right ] \right ) \\ < & \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \left ( \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \cdot \mathrm{E} \left [ \left ( L^{i}(p_i', \boldsymbol{p}_{-i}) \right)^2 \right ] \right ) = \mathrm{E} \left [ \left ( \overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \right)^2 \right ]. \end{aligned} \end{equation} Finally, summing the inequalities in Equations \eqref{eq_intermediate_mean_strict_dominance_MV} and \eqref{eq_intermediate_second_moment_strict_dominance_MV} yields $\mathrm{Var} \left (\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ) + \rho \cdot \overline{l}^{i}(p_i, \boldsymbol{\sigma}_{-i}) < \mathrm{Var} \left (\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \right ) + \rho \cdot \overline{l}^{i}(p_i', \boldsymbol{\sigma}_{-i})$ for all $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$. In order to find the mean-variance equilibrium for a stochastic congestion game, we use support enumeration. Suppose, for example, that $\boldsymbol{\mathcal{P}}' \coloneqq \{ \mathcal{P}_1', \mathcal{P}_2', \dots, \mathcal{P}_n' \}$ is the support of a mean-variance equilibrium, where $\mathcal{P}_i'$ is the set of pure strategies of player $i$ that are played with non-zero probability and $\sigma_i(p_i)$ for $p_i \in \mathcal{P}_i'$ denotes the probability mass function on the support.
At equilibrium, player $i \in [n]$ should be indifferent between the strategies in the set $\mathcal{P}_i'$ and should have no incentive to deviate to any of the remaining strategies in $\mathcal{P}_i \setminus \mathcal{P}_i'$, and the probability mass function over the support should sum to one. As a result, if a mean-variance equilibrium with the hypothesized support exists, it is a solution of the following set of equations for $\boldsymbol{\sigma} \in \boldsymbol{\Sigma}$: \begin{equation} \label{eq_find_mixed_strategy_MV} \left\{ \begin{array}{ll} \mathrm{Var} \left (\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \right ) + \rho \cdot \overline{l}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \leq \mathrm{Var} \left (\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ) + \rho \cdot \overline{l}^{i}(p_i, \boldsymbol{\sigma}_{-i}), \forall p_i\in \mathcal{P}_i, p_i' \in \mathcal{P}_i', \forall i \in [n],\\ \\ \sum_{p_i \in \mathcal{P}_i'} \sigma_i(p_i) = 1, \forall i \in [n],\\ \\ \sigma_i(p_i) = 0, \forall p_i \in \mathcal{P}_i \setminus \mathcal{P}_i', \forall i \in [n]. \end{array} \right. \end{equation} \subsection{CVaR$_\alpha$ Equilibrium} \label{CVaR_sub_section} The conditional value at risk at level $\alpha$ (CVaR$_\alpha$) is another statistical framework for measuring risk and modeling risk-averse behavior. Applying it to the proposed stochastic congestion game setting, the CVaR$_\alpha$ best response and the CVaR$_\alpha$ equilibrium are defined below.
\begin{definition} \label{def_best_response_mixed_CVaR_alpha} Given the mixed strategy profile $\boldsymbol{\sigma}_{-i}$ of players $[n] \setminus i$, the set of mixed strategy CVaR$_\alpha$ best responses of player $i$ is the set of all probability distributions over the set \begin{equation} \label{eq_mixed_best_response_CVaR_alpha} \underset{p_i \in \mathcal{P}_i}{\argmin} \ CVaR_\alpha \left (\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ) = \underset{p_i \in \mathcal{P}_i}{\argmin} \ \mathrm{E} \left [\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \Big | \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ], \end{equation} where $v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i})$ is a constant derived by solving the equality $P \left ( \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ) = \alpha$ and the constant $0 < \alpha \leq 1$ is a hyper-parameter depicting the risk level. Given the mixed strategy $\boldsymbol{\sigma}_{-i}$ of players $[n] \setminus i$, the CVaR$_\alpha$ best response set of player $i$'s strategies is denoted by $CB(\boldsymbol{\sigma}_{-i})$, which is in general a set-valued function. \end{definition} \begin{definition} \label{def_mixed_strategy_CVaR_alpha} A strategy profile $\boldsymbol{\sigma}^* = (\sigma_1^*, \sigma_2^*, \dots, \sigma_n^*)$ is a CVaR$_\alpha$ equilibrium if and only if $\sigma_i^* \in CB(\boldsymbol{\sigma}_{-i}^*)$ for all $i \in [n]$. \end{definition} The existence of the CVaR$_\alpha$ equilibrium is established in the following theorem. \begin{theorem} \label{theorem_existence_CVaR_alpha} For any finite $n$-player stochastic congestion game, a CVaR$_\alpha$ equilibrium exists. \end{theorem} The proof of Theorem \ref{theorem_existence_CVaR_alpha} is provided in Appendix \ref{proof_theorem_CVaR_alpha}.
The pure strategy CVaR$_\alpha$ best response of player $i$ to the pure strategy $\boldsymbol{p}_{-i}$ of players $[n] \setminus i$ is the set \begin{equation} \label{pure_best_response_CVaR_alpha} \underset{p_i \in \mathcal{P}_i}{\argmin} \ CVaR_\alpha \left (L^{i}(p_i, \boldsymbol{p}_{-i}) \right ) = \underset{p_i \in \mathcal{P}_i}{\argmin} \ \mathrm{E} \left [L^{i}(p_i, \boldsymbol{p}_{-i}) \Big | L^{i}(p_i, \boldsymbol{p}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{p}_{-i}) \right ], \end{equation} where $v_\alpha^{i}(p_i, \boldsymbol{p}_{-i})$ is a constant derived by solving the equality $P \left ( L^{i}(p_i, \boldsymbol{p}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{p}_{-i}) \right ) = \alpha$ and the constant $0 < \alpha \leq 1$ is the hyper-parameter depicting risk level. Given the pure strategy $\boldsymbol{p}_{-i}$ of players $[n] \setminus i$, the CVaR$_\alpha$ best response set of player $i$ in Equation \eqref{pure_best_response_CVaR_alpha} is denoted by $CB(\boldsymbol{p}_{-i})$ (overloading notation, $CB(.)$ is used for both pure and mixed strategy CVaR$_\alpha$ best responses). As a result, a pure strategy profile $\boldsymbol{p}^* = (p_1^*, p_2^*, \dots, p_n^*)$ is a pure strategy CVaR$_\alpha$ equilibrium if and only if $p_i^* \in CB(\boldsymbol{p}_{-i}^*)$ for all $i \in [n]$. 
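For intuition, the conditional expectation defining CVaR$_\alpha$ in Equation \eqref{pure_best_response_CVaR_alpha} can be estimated from Monte Carlo samples: sort the sampled latencies, keep the top $\alpha$ fraction, and average. The sketch below is illustrative only; the latency distributions in the usage note and the helper names (\texttt{empirical\_cvar}, \texttt{pure\_cvar\_best\_response}) are our own assumptions, not part of the model.

```python
import numpy as np

def empirical_cvar(samples, alpha):
    """Estimate CVaR_alpha(L) = E[L | L >= v_alpha], with
    P(L >= v_alpha) = alpha, as the mean of the top alpha
    fraction of i.i.d. samples of L."""
    samples = np.sort(np.asarray(samples))
    k = max(1, int(np.ceil(alpha * len(samples))))
    return samples[-k:].mean()

def pure_cvar_best_response(latency_samplers, alpha, n_samples=100_000, seed=0):
    """latency_samplers[p] draws samples of L^i(p, p_{-i}) for each candidate
    path p of player i, with the other players' pure strategies held fixed.
    Returns the set of paths minimizing the empirical CVaR_alpha, together
    with all estimated values."""
    rng = np.random.default_rng(seed)
    cvars = {p: empirical_cvar(draw(rng, n_samples), alpha)
             for p, draw in latency_samplers.items()}
    best = min(cvars.values())
    return {p for p, c in cvars.items() if np.isclose(c, best, rtol=1e-2)}, cvars
```

For instance, with a latency $L \sim \mathrm{Uniform}(0,1)$ the estimate approaches $1 - \alpha/2$, the closed-form CVaR$_\alpha$ of a uniform random variable.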
A pure strategy $p_i \in \mathcal{P}_i$ of player $i$ strictly dominates a second pure strategy $p_i' \in \mathcal{P}_i$ of the player in pure strategy CVaR$_\alpha$ equilibrium if \begin{equation} \label{eq_strict_dominance_CVaR_alpha_pure} \mathrm{E} \left [L^{i}(p_i, \boldsymbol{p}_{-i}) \Big | L^{i}(p_i, \boldsymbol{p}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{p}_{-i}) \right ] < \mathrm{E} \left [L^{i}(p_i', \boldsymbol{p}_{-i}) \Big | L^{i}(p_i', \boldsymbol{p}_{-i}) \geq v_\alpha^{i}(p_i', \boldsymbol{p}_{-i}) \right ], \ \forall \boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}, \end{equation} where $v_\alpha^{i}(p_i, \boldsymbol{p}_{-i})$ and $v_\alpha^{i}(p_i', \boldsymbol{p}_{-i})$ are constants derived by solving $P \left ( L^{i}(p_i, \boldsymbol{p}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{p}_{-i}) \right ) = \alpha$ and $P \left ( L^{i}(p_i', \boldsymbol{p}_{-i}) \geq v_\alpha^{i}(p_i', \boldsymbol{p}_{-i}) \right ) = \alpha$, and the constant $0 < \alpha \leq 1$ is the risk level hyper-parameter. However, similar to the mean-variance equilibrium, strict dominance may not be derived from Equation \eqref{eq_strict_dominance_CVaR_alpha_pure} for mixed strategy CVaR$_\alpha$ equilibrium as described below. Using Equation \eqref{dist_mixed} and $P \left ( \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ) = \alpha$, the distribution of the random variable $\left ( \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \Big | \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right )$ is \begin{equation} \label{dist_mixed_CVaR_alpha} \left ( \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \Big ( f^{i}(x | (p_i, \boldsymbol{p}_{-i})) \cdot \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \Big ) \bigg / \alpha \right ) \cdot \mathbbm{1} \left \{ x \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right \}. 
\end{equation} As a result, \begin{equation} \label{eq_strict_dominance_CVaR_alpha} \begin{aligned} & \mathrm{E} \left [\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \Big | \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ] \\ \overset{(a)}{=} & \ \frac{1}{\alpha} \cdot \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \left ( \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \cdot \int_{-\infty}^\infty \Big ( x \cdot f^{i}(x | (p_i, \boldsymbol{p}_{-i})) \cdot \mathbbm{1} \left \{ x \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right \} \Big ) dx \right ) \\ \overset{(b)}{=} & \ \frac{1}{\alpha} \cdot \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \Bigg ( \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \cdot P \big ( L^{i}(p_i, \boldsymbol{p}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \big ) \times \\ & \hspace{2.53cm} \int_{v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i})}^\infty \Big ( x \cdot \frac{f^{i}(x | (p_i, \boldsymbol{p}_{-i}))}{P \big ( L^{i}(p_i, \boldsymbol{p}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \big )} \Big ) dx \Bigg ) \\ = & \ \frac{1}{\alpha} \cdot \sum_{\boldsymbol{p}_{-i} \in \boldsymbol{\mathcal{P}}_{-i}} \left ( \boldsymbol{\sigma}(\boldsymbol{p}_{-i}) \cdot P \big ( L^{i}(p_i, \boldsymbol{p}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \big ) \cdot \mathrm{E} \left [ L^{i}(p_i, \boldsymbol{p}_{-i}) \Big | L^{i}(p_i, \boldsymbol{p}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ] \right ), \end{aligned} \end{equation} where $(a)$ is true by using the pdf of the corresponding random variable in Equation \eqref{dist_mixed_CVaR_alpha} and switching the order of summation and integral and $(b)$ is true by multiplying and dividing by the term $P \big ( L^{i}(p_i, \boldsymbol{p}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \big )$. 
As can be seen in Equation \eqref{eq_strict_dominance_CVaR_alpha}, it is not clear whether Equation \eqref{eq_strict_dominance_CVaR_alpha_pure} implies $\mathrm{E} \left [\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \Big | \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ] < \mathrm{E} \left [\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \Big | \overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(p_i', \boldsymbol{\sigma}_{-i}) \right ]$ for all $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$. Due to this complication, we advise against relying on strict dominance for the mixed strategy CVaR$_\alpha$ equilibrium. In order to find the CVaR$_\alpha$ equilibrium for a stochastic congestion game, we use support enumeration. Suppose, for example, that $\boldsymbol{\mathcal{P}}' \coloneqq \{ \mathcal{P}_1', \mathcal{P}_2', \dots, \mathcal{P}_n' \}$ is the support of a CVaR$_\alpha$ equilibrium, where $\mathcal{P}_i'$ is the set of pure strategies of player $i$ that are played with non-zero probability and $\sigma_i(p_i)$ for $p_i \in \mathcal{P}_i'$ denotes the probability mass function on the support. At equilibrium, player $i \in [n]$ should be indifferent between the strategies in the set $\mathcal{P}_i'$ and should have no incentive to deviate to any of the remaining strategies in $\mathcal{P}_i \setminus \mathcal{P}_i'$, and the probability mass function over the support should sum to one.
As a result, if a CVaR$_\alpha$ equilibrium with the hypothesized support exists, it is a solution of the following set of equations for $\boldsymbol{\sigma} \in \boldsymbol{\Sigma}$: \begin{equation} \label{eq_find_mixed_strategy_CVaR_alpha} \left\{ \begin{array}{ll} \mathrm{E} \left [\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \Big | \overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(p_i', \boldsymbol{\sigma}_{-i}) \right ] \\ \leq \mathrm{E} \left [\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \Big | \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ], \forall p_i\in \mathcal{P}_i, p_i' \in \mathcal{P}_i', \forall i \in [n],\\ \\ \sum_{p_i \in \mathcal{P}_i'} \sigma_i(p_i) = 1, \forall i \in [n],\\ \\ \sigma_i(p_i) = 0, \forall p_i \in \mathcal{P}_i \setminus \mathcal{P}_i', \forall i \in [n]. \end{array} \right. \end{equation} \begin{remark} It is noteworthy that the polynomial terms in Equation \eqref{eq_find_mixed_strategy_RAE} for the risk-averse equilibrium are of degree $n - 1$, while the polynomial terms in Equation \eqref{eq_find_mixed_strategy_MV} for the mean-variance equilibrium are of degree $2 (n - 1)$, when there are $n$ players. Equation \eqref{eq_find_mixed_strategy_CVaR_alpha}, on the other hand, is more complicated to solve, as the top $\alpha$ quantile of the distributions must be calculated. \end{remark} \section{Numerical Results} \label{numerical_results_congestion_game} The risk-averse, mean-variance, and CVaR$_\alpha$ equilibria are numerically analyzed for Examples \ref{example1} and \ref{example2} in this section. The price of anarchy for each of these equilibria is calculated as well. Finally, extra examples are presented to shed light on corner cases of each of the equilibria and to provide insight into how to tackle such circumstances.
In order to find any of the three types of pure equilibria for the Pigou network in Example \ref{example1} with $n$ players, hypothesize that $m_1$ players choose link $1$ and $m_2 = n - m_1$ players choose link $2$, and check whether any player has an incentive, in the sense of the equilibrium of interest, to change route given the pure strategy of the other players. If none of the players has an incentive to change route given the pure strategy of the rest of the players, $(m_1, n - m_1)$ is a pure equilibrium, where $(m_1, m_2)$ denotes that $m_1$ players select link $1$ and $m_2$ players select link $2$. By varying $m_1$ from zero to $n$ and applying the above procedure, a pure equilibrium is found if one exists. Given a fixed number of players $m_1$ that choose link $1$, it is clear that they all have the same incentive to switch to link $2$ or stay on link $1$, and all of the $m_2 = n - m_1$ players have the same incentive to switch to link $1$ or stay on link $2$. As a result, if a specific player out of the $m_1$ players has no incentive to switch to link $2$ given the pure strategy of the other players, and a specific player out of the $m_2$ players has no incentive to switch to link $1$ given the pure strategy of the other players, $(m_1, m_2 = n - m_1)$ is a pure equilibrium. In other words, $(m_1, m_2 = n - m_1)$ is a pure risk-averse equilibrium if \begin{equation} \label{RAE_Pigou_numerical} \left\{ \begin{array}{lll} P \big (L_1(m_1) \leq L_2(m_2 + 1) \big ) \geq 0.5, \\ \\ P \big (L_2(m_2) \leq L_1(m_1 + 1) \big ) \geq 0.5, \end{array} \right.
\end{equation} where the first inequality is true since each player has two options, link $1$ and link $2$, so $P \big (L_1(m_1) \leq L_2(m_2 + 1) \big ) \geq P \big (L_2(m_2 + 1) \leq L_1(m_1) \big )$, and since random variables are continuous we have $P \big (L_1(m_1) \leq L_2(m_2 + 1) \big ) + P \big (L_2(m_2 + 1) \leq L_1(m_1) \big ) = 1$, which results in $P \big (L_1(m_1) \leq L_2(m_2 + 1) \big ) \geq 0.5$. The second inequality is true due to a similar reasoning. By varying $m_1$ from zero to $n$, if Equation \eqref{RAE_Pigou_numerical} holds for $(m_1, m_2 = n - m_1)$, it is a pure risk-averse equilibrium. Similar to the above approach, $(m_1, m_2 = n - m_1)$ is a pure mean-variance equilibrium if \begin{equation} \label{MV_Pigou_numerical} \left\{ \begin{array}{lll} \mathrm{Var} \big (L_1(m_1) \big ) + \rho \cdot l_1(m_1) \leq \mathrm{Var} \big (L_2(m_2 + 1) \big ) + \rho \cdot l_2(m_2 + 1), \\ \\ \mathrm{Var} \big (L_2(m_2) \big ) + \rho \cdot l_2(m_2) \leq \mathrm{Var} \big (L_1(m_1 + 1) \big ) + \rho \cdot l_1(m_1 + 1). \end{array} \right. \end{equation} Again, by varying $m_1$ from zero to $n$, if Equation \eqref{MV_Pigou_numerical} holds for $(m_1, m_2 = n - m_1)$, it is a pure mean-variance equilibrium. Similarly, $(m_1, m_2 = n - m_1)$ is a pure CVaR$_\alpha$ equilibrium if \begin{equation} \label{CVaR_Pigou_numerical} \left\{ \begin{array}{lll} \mathrm{E} \big [ L_1(m_1) \big | L_1(m_1) \geq v_\alpha^1(m_1) \big ] \leq \mathrm{E} \big [ L_2(m_2 + 1) \big | L_2(m_2 + 1) \geq v_\alpha^2(m_2 + 1) \big ], \\ \\ \mathrm{E} \big [ L_2(m_2) \big | L_2(m_2) \geq v_\alpha^2(m_2) \big ] \leq \mathrm{E} \big [ L_1(m_1 + 1) \big | L_1(m_1 + 1) \geq v_\alpha^1(m_1 + 1) \big ], \end{array} \right. \end{equation} where $P \big ( L_1(m_1) \geq v_\alpha^1(m_1) \big ) = P \big ( L_2(m_2 + 1) \geq v_\alpha^2(m_2 + 1) \big ) = P \big ( L_2(m_2) \geq v_\alpha^2(m_2) \big ) = P \big ( L_1(m_1 + 1) \geq v_\alpha^1(m_1 + 1) \big ) = \alpha$. 
By varying $m_1$ from zero to $n$, if Equation \eqref{CVaR_Pigou_numerical} holds for $(m_1, m_2 = n - m_1)$, it is a pure CVaR$_\alpha$ equilibrium. Note that the equilibrium in the Pigou network in Example \ref{example1} is characterized by $m_1$, since $m_2$ can be derived given $m_1$. The pure risk-averse, mean-variance ($\rho = 1$), and CVaR$_\alpha$ ($\alpha = 0.1$) equilibria are found for the mentioned Pigou network and the proportion of players who select link $1$, i.e., $\frac{m_1}{n}$, is depicted in Figure \ref{figure_Pigou_equilibrium} for different values of $n$. Under the Nash equilibrium, no matter what the probability distributions of latency over links look like, all players select link $1$ as it has less or equal latency in expectation. Hence, $(n, 0)$ is the Nash equilibrium for all $n$, which corresponds to $\frac{m_1}{n} = 1$ as depicted in Figure \ref{figure_Pigou_equilibrium}. \begin{figure} \caption{The pure risk-averse, mean-variance ($\rho = 1$), CVaR$_\alpha$ ($\alpha = 0.1$), and Nash equilibria of the Pigou network in Example \ref{example1} are denoted for different numbers of players.} \label{figure_Pigou_equilibrium} \end{figure} The social delay/latency defined as the expected average delay/latency incurred by the $n$ players in the Pigou network in Example \ref{example1} under the pure strategy $(m_1, m_2)$ is $D(m_1) = \frac{1}{n} \left ( m_1 \cdot \frac{m_1}{n} + (n - m_1) \right ) = \left ( \frac{m_1}{n} \right )^2 - \frac{m_1}{n} + 1$, which is minimized when $m_1 = \frac{n}{2}$ for an even $n$, and $m_1 = \lfloor \frac{n}{2} \rfloor$ and $m_1 = \lceil \frac{n}{2} \rceil$ for an odd $n$. As a result, it is socially optimal that about half of the players take the top link and the rest take the bottom link to travel from source to destination in the Pigou network, which results in a social latency close to $\frac34$ for $n \gg 1$. 
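The scan over $m_1$ described above is straightforward to mechanize. The sketch below checks the two conditions of Equation \eqref{RAE_Pigou_numerical} by Monte Carlo. Since the distributions of Example \ref{example1} are not restated here, it uses stand-ins chosen purely for illustration (an exponential latency with mean $m/n$ on link $1$ and a deterministic latency of $1$ on link $2$, matching the expected latencies used for the social delay); the function name \texttt{pigou\_rae\_scan} is ours as well.

```python
import numpy as np

def pigou_rae_scan(n, n_samples=50_000, seed=0):
    """Scan m1 = 0..n and return every profile (m1, n - m1) satisfying the
    pure risk-averse equilibrium conditions of Equation (RAE_Pigou_numerical),
    with the probabilities estimated by Monte Carlo.  Stand-in distributions,
    not those of Example 1: link-1 latency ~ Exp(mean = m/n) under load m,
    link-2 latency deterministically equal to 1.  A stay condition is vacuous
    when the corresponding link is empty."""
    rng = np.random.default_rng(seed)
    L1 = lambda m: rng.exponential(m / n, n_samples)  # risky link, mean m/n
    L2 = lambda m: np.ones(n_samples)                 # safe link, constant 1
    equilibria = []
    for m1 in range(n + 1):
        m2 = n - m1
        # Players on link 1 prefer staying if switching (facing load m2 + 1)
        # is not better with probability more than one half, and symmetrically.
        stay1 = m1 == 0 or (L1(m1) <= L2(m2 + 1)).mean() >= 0.5
        stay2 = m2 == 0 or (L2(m2) <= L1(m1 + 1)).mean() >= 0.5
        if stay1 and stay2:
            equilibria.append((m1, m2))
    return equilibria
```

Under these stand-ins the probabilities are available in closed form, $P(L_1(m_1) \leq 1) = 1 - e^{-n/m_1} > \frac12$ and $P(1 \leq L_1(m_1 + 1)) = e^{-n/(m_1+1)} < \frac12$ for $m_1 \leq n - 1$, so the scan returns only $(n, 0)$; with the distributions of Example \ref{example1} the equilibrium is interior, underscoring that the outcome depends on the full latency distributions and not only on their means.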
If players are risk-neutral and seek to minimize their expected latency given the strategy of the rest of the players, which is how the Nash equilibrium models games, the social latency in the mentioned Pigou network equals one for the Nash equilibrium $(n, 0)$. In contrast, if players are risk-averse in the different senses discussed in this article, the social latency decreases compared to when players are risk-neutral; as a result, the price of anarchy decreases as depicted in Figure \ref{figure_Pigou_PoA}. In this example, it benefits society if players are risk-averse, which is plausible, as numerous studies in prospect theory document that players in the real world often behave in a risk-averse manner. Considering the Pigou network in a non-atomic setting, which corresponds to the case of an infinite number of players, the socially optimal strategy is $(0.5, 0.5)$ with a social latency of $\frac34$, where $(u_1, u_2)$ corresponds to $u_1$ fraction of players traveling along link $1$ and $u_2 = 1 - u_1$ fraction of players traveling along link $2$. We numerically calculate that the risk-averse equilibrium is $(0.7303, 0.2697)$ with $\text{PoA} = 1.0707$, the mean-variance equilibrium with $\rho = 1$ is $(0.7750, 0.2250)$ with $\text{PoA} = 1.1008$, the CVaR$_\alpha$ equilibrium with $\alpha = 0.1$ is $(0.6822, 0.3178)$ with $\text{PoA} = 1.0442$, and the Nash equilibrium is $(1, 0)$ with $\text{PoA} = \frac43$. \begin{figure} \caption{The prices of anarchy for the risk-averse, mean-variance ($\rho = 1$), CVaR$_\alpha$ ($\alpha = 0.1$), and Nash equilibria of the Pigou network in Example \ref{example1} are plotted for different numbers of players.} \label{figure_Pigou_PoA} \end{figure} In the Braess network in Example \ref{example2}, there are three paths from source to destination, $p_1 = (1, 2)$, $p_2 = (1, 5, 4)$, $p_3 = (3, 4)$, where links $SA, AD, SB, BD$, and $AB$ are denoted by $1, 2, 3, 4$, and $5$, respectively.
In order to find the three types of pure equilibria for the Braess network with $n$ players, hypothesize that $m^1$ players select path $p_1$, $m^2$ players select path $p_2$, and $n - m^1 - m^2$ players select path $p_3$, then check whether any player has an incentive, in the sense of the equilibrium of interest, to change route given the pure strategy of the other players. If none of the players has an incentive to change route given the pure strategy of the rest of the players, $(m^1, m^2, n - m^1 - m^2)$ is a pure equilibrium. As a result, $(m^1, m^2, n - m^1 - m^2)$ is a pure risk-averse equilibrium if \begin{equation} \label{RAE_Braess_numerical} \left\{ \begin{array}{llllllll} P \big ( L^1 \leq \{ L^2, L^3 \} \big ) \geq \big \{ P \big ( L^2 \leq \{ L^1, L^3 \} \big ), P \big ( L^3 \leq \{ L^1, L^2 \} \big ) \big \}, \\ \text{where } L^1 = L_1(m^1 + m^2) + L_2(m^1), L^2 = L_1(m^1 + m^2) + L_4(n - m^1 + 1), \text{and } \\ L^3 = L_3(n - m^1 - m^2 + 1) + L_4(n - m^1 + 1), \\ \\ P \big ( L^2 \leq \{ L^1, L^3 \} \big ) \geq \big \{ P \big ( L^1 \leq \{ L^2, L^3 \} \big ), P \big ( L^3 \leq \{ L^1, L^2 \} \big ) \big \}, \\ \text{where } L^1 = L_1(m^1 + m^2) + L_2(m^1 + 1), L^2 = L_1(m^1 + m^2) + L_4(n - m^1), \text{and } \\ L^3 = L_3(n - m^1 - m^2 + 1) + L_4(n - m^1), \\ \\ P \big ( L^3 \leq \{ L^1, L^2 \} \big ) \geq \big \{ P \big ( L^1 \leq \{ L^2, L^3 \} \big ), P \big ( L^2 \leq \{ L^1, L^3 \} \big ) \big \}, \\ \text{where } L^1 = L_1(m^1 + m^2 + 1) + L_2(m^1 + 1), L^2 = L_1(m^1 + m^2 + 1) + L_4(n - m^1), \text{and } \\ L^3 = L_3(n - m^1 - m^2) + L_4(n - m^1). \end{array} \right. \end{equation} By varying $m^1$ from zero to $n$ and $m^2$ from zero to $n - m^1$, if Equation \eqref{RAE_Braess_numerical} holds for $(m^1, m^2, m^3 = n - m^1 - m^2)$, it is a pure risk-averse equilibrium.
Similar to the above approach, $(m^1, m^2, n - m^1 - m^2)$ is a pure mean-variance equilibrium if \begin{equation} \label{MV_Braess_numerical} \left\{ \begin{array}{llllllll} \mathrm{Var}(L^1) + \rho \cdot \mathrm{E}(L^1) \leq \big \{ \mathrm{Var}(L^2) + \rho \cdot \mathrm{E}(L^2), \mathrm{Var}(L^3) + \rho \cdot \mathrm{E}(L^3) \big \}, \\ \text{where } L^1 = L_1(m^1 + m^2) + L_2(m^1), L^2 = L_1(m^1 + m^2) + L_4(n - m^1 + 1), \text{and } \\ L^3 = L_3(n - m^1 - m^2 + 1) + L_4(n - m^1 + 1), \\ \\ \mathrm{Var}(L^2) + \rho \cdot \mathrm{E}(L^2) \leq \big \{ \mathrm{Var}(L^1) + \rho \cdot \mathrm{E}(L^1), \mathrm{Var}(L^3) + \rho \cdot \mathrm{E}(L^3) \big \}, \\ \text{where } L^1 = L_1(m^1 + m^2) + L_2(m^1 + 1), L^2 = L_1(m^1 + m^2) + L_4(n - m^1), \text{and } \\ L^3 = L_3(n - m^1 - m^2 + 1) + L_4(n - m^1), \\ \\ \mathrm{Var}(L^3) + \rho \cdot \mathrm{E}(L^3) \leq \big \{ \mathrm{Var}(L^1) + \rho \cdot \mathrm{E}(L^1), \mathrm{Var}(L^2) + \rho \cdot \mathrm{E}(L^2) \big \}, \\ \text{where } L^1 = L_1(m^1 + m^2 + 1) + L_2(m^1 + 1), L^2 = L_1(m^1 + m^2 + 1) + L_4(n - m^1), \text{and } \\ L^3 = L_3(n - m^1 - m^2) + L_4(n - m^1). \end{array} \right. \end{equation} By varying $m^1$ from zero to $n$ and $m^2$ from zero to $n - m^1$, if Equation \eqref{MV_Braess_numerical} holds for $(m^1, m^2, m^3 = n - m^1 - m^2)$, it is a pure mean-variance equilibrium.
Similar to the above approach, $(m^1, m^2, n - m^1 - m^2)$ is a pure CVaR$_\alpha$ equilibrium if \begin{equation} \label{CVaR_Braess_numerical} \left\{ \begin{array}{llllllll} \mathrm{E} \big [ L^1 \big | L^1 \geq v_\alpha^1 \big ] \leq \big \{ \mathrm{E} \big [ L^2 \big | L^2 \geq v_\alpha^2 \big ], \mathrm{E} \big [ L^3 \big | L^3 \geq v_\alpha^3 \big ] \big \}, \\ \text{where } L^1 = L_1(m^1 + m^2) + L_2(m^1), L^2 = L_1(m^1 + m^2) + L_4(n - m^1 + 1), \\ L^3 = L_3(n - m^1 - m^2 + 1) + L_4(n - m^1 + 1), \text{ and } P \big ( L^1 \geq v_\alpha^1 \big ) = P \big ( L^2 \geq v_\alpha^2 \big ) = P \big ( L^3 \geq v_\alpha^3 \big ) = \alpha \\ \\ \mathrm{E} \big [ L^2 \big | L^2 \geq v_\alpha^2 \big ] \leq \big \{ \mathrm{E} \big [ L^1 \big | L^1 \geq v_\alpha^1 \big ], \mathrm{E} \big [ L^3 \big | L^3 \geq v_\alpha^3 \big ] \big \}, \\ \text{where } L^1 = L_1(m^1 + m^2) + L_2(m^1 + 1), L^2 = L_1(m^1 + m^2) + L_4(n - m^1), \\ L^3 = L_3(n - m^1 - m^2 + 1) + L_4(n - m^1), \text{ and } P \big ( L^1 \geq v_\alpha^1 \big ) = P \big ( L^2 \geq v_\alpha^2 \big ) = P \big ( L^3 \geq v_\alpha^3 \big ) = \alpha\\ \\ \mathrm{E} \big [ L^3 \big | L^3 \geq v_\alpha^3 \big ] \leq \big \{ \mathrm{E} \big [ L^1 \big | L^1 \geq v_\alpha^1 \big ], \mathrm{E} \big [ L^2 \big | L^2 \geq v_\alpha^2 \big ] \big \}, \\ \text{where } L^1 = L_1(m^1 + m^2 + 1) + L_2(m^1 + 1), L^2 = L_1(m^1 + m^2 + 1) + L_4(n - m^1), \\ L^3 = L_3(n - m^1 - m^2) + L_4(n - m^1), \text{ and } P \big ( L^1 \geq v_\alpha^1 \big ) = P \big ( L^2 \geq v_\alpha^2 \big ) = P \big ( L^3 \geq v_\alpha^3 \big ) = \alpha. \end{array} \right. \end{equation} By varying $m^1$ from zero to $n$ and $m^2$ from $0$ to $n - m^1$, if Equation \eqref{CVaR_Braess_numerical} holds for $(m^1, m^2, m^3 = n - m^1 - m^2)$, it is a pure CVaR$_\alpha$ equilibrium. Note that the equilibrium in the Braess network in Example \ref{example2} is characterized by $m^1$ and $m^2$, since $m^3$ can be derived given $m^1$ and $m^2$. 
The pure risk-averse, mean-variance ($\rho = 1$), and CVaR$_\alpha$ ($\alpha = 0.1$) equilibria are found for the mentioned Braess network and the proportions of players who select paths $1$ and $2$, i.e., $\frac{m^1}{n}$ and $\frac{m^2}{n}$, are depicted in Figure \ref{figure_Braess_equilibrium} for different values of $n$. Under the Nash equilibrium, no matter what the probability distributions of latency over links look like, all players select path $2$ as it has less or equal latency in expectation. Hence, $(0, n, 0)$ is the Nash equilibrium for all $n$, which corresponds to $\frac{m^2}{n} = 1$ and $\frac{m^1}{n} = \frac{m^3}{n} = 0$ as depicted in Figure \ref{figure_Braess_equilibrium}. \begin{figure} \caption{The pure risk-averse, mean-variance ($\rho = 1$), CVaR$_\alpha$ ($\alpha = 0.1$), and Nash equilibria of the Braess network in Example \ref{example2} are denoted for different numbers of players.} \label{figure_Braess_equilibrium} \end{figure} The social delay/latency defined as the expected average delay/latency incurred by the $n$ players in the Braess network in Example \ref{example2} under the pure strategy $(m^1, m^2, m^3 = n - m^1 - m^2)$ is $D(m^1, m^2) = \frac{1}{n} \cdot \left ( (m^1 + m^2) \cdot \frac{(m^1 + m^2)}{n} + m^1 + (n - m^1 - m^2) + (n - m^1) \cdot \frac{(n - m^1)}{n} \right ) = \frac{1}{n^2} \cdot \left ( 2 \left ( m^1 \right )^2 + \left ( m^2 \right )^2 + 2 m^1 m^2- 2 n m^1 - n m^2 + 2n^2 \right )$, which is minimized when $\left ( m^1 = \lfloor \frac{n}{2} \rfloor, m^2 = 0, m^3 = n - m^1 \right )$ or $\left ( m^1 = \lceil \frac{n}{2} \rceil, m^2 = 0, m^3 = n - m^1 \right )$. As a result, it is socially optimal that about half of players take path $p_1$ and the rest take path $p_3$ to travel from source to destination in the Braess network, which results in a social latency close to $\frac32$ for $n \gg 1$. 
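The closed-form social delay derived above is easy to sanity-check by brute force. The sketch below (illustrative only; the function names are ours) evaluates $D(m^1, m^2)$ over all feasible splits and confirms that the minimizer places about half of the players on $p_1$, none on $p_2$, and the rest on $p_3$.

```python
def braess_social_delay(m1, m2, n):
    """Expected average latency under the pure strategy (m1, m2, n - m1 - m2)
    in the Braess network of Example 2, per the closed form in the text."""
    return (2 * m1**2 + m2**2 + 2 * m1 * m2 - 2 * n * m1 - n * m2 + 2 * n**2) / n**2

def braess_social_optimum(n):
    """Brute-force minimizer of the social delay over all feasible (m1, m2)
    with m1 + m2 <= n."""
    return min(((m1, m2) for m1 in range(n + 1) for m2 in range(n + 1 - m1)),
               key=lambda split: braess_social_delay(*split, n))
```

For even $n$ the unique minimizer is $(n/2, 0)$ with a social delay of exactly $\frac32$; for odd $n$ the minimum is attained at both $(\lfloor n/2 \rfloor, 0)$ and $(\lceil n/2 \rceil, 0)$, slightly above $\frac32$.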
If players are risk-neutral and seek to minimize their expected latency given the strategy of the rest of the players, which is how the Nash equilibrium models games, the social latency in the mentioned Braess network equals two for the Nash equilibrium $(0, n, 0)$. In contrast, if players are risk-averse in the different senses discussed in this article, the social latency decreases compared to when players are risk-neutral; as a result, the price of anarchy decreases as depicted in Figure \ref{figure_Braess_PoA}. In this example, it is again to the benefit of society if players are risk-averse. \begin{figure} \caption{The prices of anarchy for the risk-averse, mean-variance ($\rho = 1$), CVaR$_\alpha$ ($\alpha = 0.1$), and Nash equilibria of the Braess network in Example \ref{example2} are plotted for different numbers of players.} \label{figure_Braess_PoA} \end{figure} Considering the Braess network in a non-atomic setting, which corresponds to the case of an infinite number of players, the socially optimal strategy is $(0.5, 0, 0.5)$ with a social latency of $\frac32$, where $(u^1, u^2, u^3)$ corresponds to $u^1$ fraction of players traveling along path $p_1$, $u^2$ fraction of players traveling along path $p_2$, and $u^3 = 1 - u^1 - u^2$ fraction of players traveling along path $p_3$. We numerically calculate that the risk-averse equilibrium is $(0.2655, 0.4690, 0.2655)$ with $\text{PoA} = 1.0733$, the mean-variance equilibrium with $\rho = 1$ is $(0.1716, 0.6568, 0.1716)$ with $\text{PoA} = 1.1438$, the CVaR$_\alpha$ equilibrium with $\alpha = 0.1$ is $(0.3045, 0.3910, 0.3045)$ with $\text{PoA} = 1.0509$, and the Nash equilibrium is $(0, 1, 0)$ with $\text{PoA} = \frac43$. Although it is more prevalent to use pure equilibria for congestion games, we also analyze the mixed equilibria of the Pigou network in Example \ref{example1} for two players.
The underlying stochastic congestion game with the probability distributions of players' delays, together with the pure and mixed Nash, risk-averse, mean-variance, and CVaR equilibria, is depicted in Figure \ref{figure_Pigou_mixed_equilibria}. Recall that the (pure) price of anarchy of a congestion game is the maximum ratio $D(\boldsymbol{p}) \slash D(\boldsymbol{o})$ over all equilibria $\boldsymbol{p}$ of the game, where $\boldsymbol{o}$ is the socially optimal strategy. As mentioned earlier, the optimal strategy for the Pigou network with two players is for one player to travel along the top link and the other to travel along the bottom link, which corresponds to a social delay of $\frac34$. As a result, the (pure) price of anarchy for the Nash equilibria is $\frac43$. On the other hand, the pure price of anarchy for the risk-averse, mean-variance, and CVaR equilibria is equal to one. Furthermore, the price of anarchy over both pure and mixed equilibria for the risk-averse, mean-variance, and CVaR equilibria is $1.2405$, $1.1689$, and $1.2897$, respectively. \begin{figure} \caption{The pure and mixed risk-averse, mean-variance ($\rho = 1$), CVaR$_\alpha$ ($\alpha = 0.1$), and Nash equilibria of the Pigou network in Example \ref{example1} for two players.} \label{figure_Pigou_mixed_equilibria} \end{figure} In the following, we present additional examples with the purpose of shedding light on the drawbacks of the different equilibria in different scenarios and motivating further work on a unified risk-averse framework. Furthermore, the following examples suggest that careful consideration should be given to the choice of the equilibrium that best fits the application of interest. \subsection{Notes for Practitioners} The intention of this subsection is to direct the attention of practitioners planning to implement risk-averse in-vehicle navigation to cases in which each of the proposed risk-averse equilibria may provide travelers with counterintuitive guidance.
To this end, three examples are discussed in the following to shed light on the implications of the three classes of risk-averse equilibria. The examples are deliberately simple to convey the idea in a straightforward manner. \begin{example} \label{example3} Consider a Pigou network with two parallel links, $1$ and $2$, between source and destination. The travel times on links $1$ and $2$ are respectively independent random variables $L_1$ and $L_2$ with pdfs \[ \begin{aligned} f_1(x) = \ & \alpha \bigg ( \exp \left ( -100 \left (x - 14 \right )^2 \right ) \cdot \mathbbm{1} \left \{ 13 \leq x \leq 15 \right \} + \exp \left ( -100 \left (x - 19 \right )^2 \right ) \cdot \mathbbm{1} \left \{ 18 \leq x \leq 20 \right \} \bigg ), \\ f_2(x) = \ & \beta \exp \left ( -100 \left (x - 20 \right )^2 \right ) \cdot \mathbbm{1} \left \{ 19 \leq x \leq 21 \right \}, \end{aligned} \] where $\alpha$ and $\beta$ are normalizing constants for which each of the two densities integrates to one. \end{example} In Example \ref{example3}, the means and variances of travel times along links $1$ and $2$ are $l_1 = 16.5$, $\mathrm{Var}(L_1) = 6.255$, $l_2 = 20.0$, $\mathrm{Var}(L_2) = 0.005$, respectively, and $P(L_1 \leq L_2) = 1.0$. As a result, although link $1$ has a higher variance than link $2$, link $1$ is not only shorter than link $2$ in expectation but also shorter almost surely. Hence, a rational traveler would take link $1$ although its variance is higher than the variance of link $2$. However, the mean-variance framework seeks a balance between a lower expected travel time and lower uncertainty in travel time, implicitly assuming that higher variance is against the spirit of risk-averse travelers. In Example \ref{example3}, the mean-variance framework guides travelers to travel along link $2$ if $\rho < 1.7857$, which is not optimal from the perspective of a risk-averse traveler.
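The stated moments and the mean-variance threshold can be reproduced with a short sketch of ours, under the assumption that the truncation of each narrow Gaussian component is negligible (each component then has variance $1/200 = 0.005$):

```python
# Sketch: verify Example 3's moments and the mean-variance threshold.
# Assumption (ours): truncation of each narrow Gaussian component is
# negligible, so each component has variance 1/200 = 0.005.

def mixture_moments(centers, weights):
    """Mean and variance of a mixture of narrow components at `centers`."""
    w = [wi / sum(weights) for wi in weights]
    mean = sum(wi * c for wi, c in zip(w, centers))
    var = sum(wi * (c - mean) ** 2 for wi, c in zip(w, centers)) + 0.005
    return mean, var

l1, v1 = mixture_moments([14, 19], [1, 1])  # link 1: mean 16.5, variance 6.255
l2, v2 = mixture_moments([20], [1])         # link 2: mean 20.0, variance 0.005
# Mean-variance cost of a link is Var + rho * mean, so link 2 is preferred
# exactly when rho < (v1 - v2) / (l2 - l1) = 6.25 / 3.5, about 1.7857.
print(l1, v1, l2, v2, (v1 - v2) / (l2 - l1))
```

The computed threshold matches the $\rho < 1.7857$ figure quoted above.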
Note that both the risk-averse equilibrium and the CVaR$_\alpha$ equilibrium for any $\alpha \in [0, 1]$ guide travelers to traverse along link $1$ in this example. \begin{example} \label{example4} Consider a Pigou network with two parallel links, $1$ and $2$, between source and destination. The travel times on links $1$ and $2$ are respectively independent random variables $L_1$ and $L_2$ with pdfs \[ \begin{aligned} f_1(x) = \ & \alpha \bigg ( 4 \exp \left ( -100 \left (x - 5 \right )^2 \right ) \cdot \mathbbm{1} \left \{ 4 \leq x \leq 6 \right \} + \exp \left ( -100 \left (x - 10 \right )^2 \right ) \cdot \mathbbm{1} \left \{ 9 \leq x \leq 11 \right \} \bigg ), \\ f_2(x) = \ & \beta \bigg ( 4 \exp \left ( -100 \left (x - 8 \right )^2 \right ) \cdot \mathbbm{1} \left \{ 7 \leq x \leq 9 \right \} + \exp \left ( -100 \left (x - 10 \right )^2 \right ) \cdot \mathbbm{1} \left \{ 9 \leq x \leq 11 \right \} \bigg ), \end{aligned} \] where $\alpha$ and $\beta$ are normalizing constants for which each of the two densities integrates to one. \end{example} In Example \ref{example4}, the means and variances of travel times along links $1$ and $2$ are $l_1 = 6.0$, $\mathrm{Var}(L_1) = 4.005$, $l_2 = 8.4$, $\mathrm{Var}(L_2) = 0.645$, respectively, and $P(L_1 \leq L_2) = 0.82$. Note that both distributions are identical over the interval $[9, 11]$; however, the traveler has a better chance of experiencing a shorter travel time over the lower $0.8$ quantile of the distribution of link $1$ than over that of link $2$. Hence, a rational traveler would take link $1$ although its variance is higher than the variance of link $2$. Furthermore, $\mathrm{E} \left [ L_1 \mid L_1 \geq v_\alpha(L_1) \right ] = \mathrm{E} \left [ L_2 \mid L_2 \geq v_\alpha(L_2) \right ]$ for $\alpha \in [0, 0.2]$, where $v_\alpha(L_j)$ denotes the value at risk (the upper $\alpha$-quantile) of $L_j$; hence, the CVaR$_\alpha$ framework is indifferent between the two links when $\alpha \in [0, 0.2]$, which can result in a counterintuitive route selection in Example \ref{example4}.
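Example \ref{example4}'s figures can likewise be checked with a sketch of ours. For well-separated narrow components, $P(L_1 \leq L_2)$ decomposes over component pairs, each pair contributing $1$, $0$, or $\frac12$ according to the order of the component centers:

```python
# Sketch: verify Example 4's moments, P(L1 <= L2), and the rho threshold.
# Assumptions (ours): components are narrow (variance 0.005) and, except for
# the shared component at 10, effectively non-overlapping.

def mixture_moments(centers, weights):
    w = [wi / sum(weights) for wi in weights]
    mean = sum(wi * c for wi, c in zip(w, centers))
    return mean, sum(wi * (c - mean) ** 2 for wi, c in zip(w, centers)) + 0.005

def p_first_shorter(c1, w1, c2, w2):
    """P(L1 <= L2): each component pair contributes 1, 0, or 1/2 by center order."""
    s1, s2 = sum(w1), sum(w2)
    return sum((wa / s1) * (wb / s2) * (1.0 if a < b else 0.5 if a == b else 0.0)
               for a, wa in zip(c1, w1) for b, wb in zip(c2, w2))

l1, v1 = mixture_moments([5, 10], [4, 1])  # link 1: mean 6.0, variance 4.005
l2, v2 = mixture_moments([8, 10], [4, 1])  # link 2: mean 8.4, variance 0.645
print(p_first_shorter([5, 10], [4, 1], [8, 10], [4, 1]))  # P(L1 <= L2), stated as 0.82
print((v1 - v2) / (l2 - l1))  # threshold for "link 2 preferred", stated as rho < 1.4
```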
The mean-variance framework also guides travelers to traverse along link $2$ if $\rho < 1.4$, which is not optimal from the perspective of a risk-averse traveler. Note that the risk-averse equilibrium guides travelers to traverse along link $1$ in this example since $P(L_1 \leq L_2) = 0.82$. \begin{example} \label{example5} Consider a Pigou network with two parallel links, $1$ and $2$, between source and destination. The travel times on links $1$ and $2$ are respectively independent random variables $L_1$ and $L_2$ with pdfs \[ \begin{aligned} f_1(x) = \ & \beta \exp \left ( -100 \left (x - 7 \right )^2 \right ) \cdot \mathbbm{1} \left \{ 6 \leq x \leq 8 \right \}, \\ f_2(x) = \ & \alpha \bigg ( 7 \exp \left ( -100 \left (x - 5 \right )^2 \right ) \cdot \mathbbm{1} \left \{ 4 \leq x \leq 6 \right \} + 3 \exp \left ( -100 \left (x - 10 \right )^2 \right ) \cdot \mathbbm{1} \left \{ 9 \leq x \leq 11 \right \} \bigg ), \end{aligned} \] where $\alpha$ and $\beta$ are normalizing constants for which each of the two densities integrates to one. \end{example} In Example \ref{example5}, the means and variances of travel times along links $1$ and $2$ are $l_1 = 7.0$, $\mathrm{Var}(L_1) = 0.005$, $l_2 = 6.5$, $\mathrm{Var}(L_2) = 5.255$, respectively, and $P(L_2 \leq L_1) = 0.7$. Although the expected travel time along link $2$ is less than that along link $1$, and the travel time along link $2$ is more likely to be the shorter of the two, the travel time along link $2$ is concentrated around $10$ with probability $0.3$, which is noticeably longer than the travel time of about $7$ along link $1$. Hence, a risk-averse traveler may prefer to take link $1$, although its expected travel time is higher than that of link $2$, to avoid an occasional long travel time. However, the risk-averse equilibrium guides travelers to traverse along link $2$, which may not be optimal from the perspective of a risk-averse traveler.
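The quantities in Example \ref{example5} admit the same style of check (a sketch of ours, under the same narrow-component assumptions as before):

```python
# Sketch: verify Example 5's moments and the mean-variance threshold rho < 10.5.
# Assumption (ours): each narrow component contributes variance 1/200 = 0.005.

def mixture_moments(centers, weights):
    w = [wi / sum(weights) for wi in weights]
    mean = sum(wi * c for wi, c in zip(w, centers))
    return mean, sum(wi * (c - mean) ** 2 for wi, c in zip(w, centers)) + 0.005

l1, v1 = mixture_moments([7], [1])         # link 1: mean 7.0, variance 0.005
l2, v2 = mixture_moments([5, 10], [7, 3])  # link 2: mean 6.5, variance 5.255
# P(L2 <= L1) = 0.7: link 2's component at 5 (weight 7/10) beats link 1's 7,
# while its component at 10 (weight 3/10) loses.
# Link 1 is mean-variance preferred iff v1 + rho*l1 < v2 + rho*l2:
print((v2 - v1) / (l1 - l2))  # threshold, stated as rho < 10.5
```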
Note that the CVaR$_\alpha$ equilibrium for $\alpha < 0.748$ and the mean-variance equilibrium for $\rho < 10.5$ guide travelers to traverse along link $1$ in this example. \section{Conclusion and Future Work} \label{conclusion_future_congestion_game} A stochastic atomic congestion game with incomplete information on travel times along the arcs of a traffic/telecommunication network is studied in this work from a risk-averse perspective. Risk-averse travelers make decisions based on probability statements about their travel options rather than simply taking the average travel time into account. To capture this behavior, we propose three classes of equilibria, i.e., the risk-averse equilibrium (RAE), the mean-variance equilibrium (MVE), and the CVaR$_\alpha$ equilibrium (CVaR$_\alpha$E). The MV and CVaR$_\alpha$ equilibria have been studied in the literature under simplifying assumptions, e.g., that the probability distributions of link delays are load independent or that link delays are mutually independent; neither assumption is made in this article. The notions of best responses in the risk-averse, mean-variance, and CVaR$_\alpha$ equilibria are based on maximizing the probability of traveling along the shortest path, minimizing a linear combination of the mean and variance of path delay, and minimizing the expected delay beyond a specified risky quantile of the delay distribution, respectively. We prove that the risk-averse, mean-variance, and CVaR$_\alpha$ equilibria exist for any finite stochastic atomic congestion game. Although proving bounds on the price of anarchy (PoA) is not the focus of this work, we numerically study the impact of risk-averse equilibria on the PoA and observe that the Braess paradox may not occur to the extent originally presented and that the PoA may improve under any of the proposed equilibria in both the Braess and Pigou networks.
Promising future directions are to study non-atomic, instead of atomic, stochastic congestion games under the proposed three classes of equilibria in their general case, where the arc delay distributions are load dependent and not necessarily independent of each other; to find bounds on the price of anarchy for the proposed three classes of equilibria; and to find a unified class of equilibria that captures risk aversion for a broader class of travel time distributions in traffic/telecommunication networks. \begin{appendices} \label{appendix} \section{Proof of Theorem \ref{theorem_existence_RAE}} \label{proof_theorem_RAE} Let $\boldsymbol{RB}: \boldsymbol{\Sigma} \rightarrow \boldsymbol{\Sigma}$ be the risk-averse best response function where $\boldsymbol{RB}(\boldsymbol{\sigma}) = \big (RB(\boldsymbol{\sigma}_{-1}), RB(\boldsymbol{\sigma}_{-2}),$ $\dots, RB(\boldsymbol{\sigma}_{-n}) \big )$. It is easy to see that the existence of a fixed point $\boldsymbol{\sigma}^* \in \boldsymbol{\Sigma}$ for the risk-averse best response function, i.e., $\boldsymbol{\sigma}^* \in \boldsymbol{RB}(\boldsymbol{\sigma}^*)$, proves the existence of a risk-averse equilibrium. We show that the function $\boldsymbol{RB}(\boldsymbol{\sigma})$ satisfies the following four conditions of Kakutani's Fixed Point Theorem, which establishes the existence of such a fixed point. \begin{enumerate}[leftmargin=*] \item The domain of the function $\boldsymbol{RB}(.)$ is a non-empty, compact, and convex subset of a finite-dimensional Euclidean space: $\boldsymbol{\Sigma}$ is the Cartesian product of non-empty simplices, as each player has at least one strategy to play; furthermore, each of the elements of $\boldsymbol{\Sigma}$ is between zero and one, so $\boldsymbol{\Sigma}$ is non-empty, convex, bounded, and closed, containing all its limit points.
\item $\boldsymbol{RB}(\boldsymbol{\sigma}) \neq \oldemptyset$, $\forall \boldsymbol{\sigma} \in \boldsymbol{\Sigma}$: The set in Equation \eqref{eq_mixed_best_response} is non-empty as the maximum is taken over a finite number of values. As a result, $RB(\boldsymbol{\sigma}_{-i})$ is non-empty for all $i \in [n]$ since it is the set of all probability distributions over this non-empty set. \item The image $\boldsymbol{RB}(\boldsymbol{\sigma})$ is a convex set for all $\boldsymbol{\sigma} \in \boldsymbol{\Sigma}$: It suffices to prove that $RB(\boldsymbol{\sigma}_{-i})$ is a convex set for all $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$ and for all $i \in [n]$. For any $i \in [n]$, if $\sigma_i, \sigma_i' \in RB(\boldsymbol{\sigma}_{-i})$, we need to prove that $\lambda \sigma_i + (1 - \lambda) \sigma_i' \in RB(\boldsymbol{\sigma}_{-i})$ for any $\lambda \in [0, 1]$ and for any $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$. Let the supports of $\sigma_i$ and $\sigma_i'$ be defined as $supp(\sigma_i) = \{p_i \in \mathcal{P}_i: \sigma_i(p_i) > 0\}$ and $supp(\sigma_i') = \{p_i \in \mathcal{P}_i: \sigma_i'(p_i) > 0\}$, respectively. It is concluded from the definition of the risk-averse best response in Definition \ref{def_best_response_mixed_RAE} that $supp(\sigma_i), supp(\sigma_i') \subseteq \underset{p_i \in \mathcal{P}_i}{\argmax} \ P \left ( \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus p_i, \boldsymbol{\sigma}_{-i}) \right )$, which results in $supp(\sigma_i) \cup supp(\sigma_i') \subseteq \underset{p_i \in \mathcal{P}_i}{\argmax} \ P \left ( \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus p_i, \boldsymbol{\sigma}_{-i}) \right )$.
As a result, using the definition of risk-averse best response, any probability distribution over the set $supp(\sigma_i) \cup supp(\sigma_i')$ is a risk-averse best response to $\boldsymbol{\sigma}_{-i}$. The mixed strategy $\lambda \sigma_i + (1 - \lambda) \sigma_i'$ is clearly a valid probability distribution over the set $supp(\sigma_i) \cup supp(\sigma_i')$ for any $\lambda \in [0, 1]$, so $\lambda \sigma_i + (1 - \lambda) \sigma_i' \in RB(\boldsymbol{\sigma}_{-i})$ for any $\lambda \in [0, 1]$ and for any $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$, which completes the convexity proof of the set $RB(\boldsymbol{\sigma}_{-i})$. \item $\boldsymbol{RB}(\boldsymbol{\sigma})$ has a closed graph: $\boldsymbol{RB}(\boldsymbol{\sigma})$ has a closed graph if for any sequence $\{\boldsymbol{\sigma}^m, \widehat{\boldsymbol{\sigma}}^m\} \rightarrow \{\boldsymbol{\sigma}, \widehat{\boldsymbol{\sigma}}\}$ with $\widehat{\boldsymbol{\sigma}}^m \in \boldsymbol{RB}(\boldsymbol{\sigma}^m)$ for all $m \in \mathbb{N}$, we have $\widehat{\boldsymbol{\sigma}} \in \boldsymbol{RB}(\boldsymbol{\sigma})$. Proof by contradiction is used to show that $\boldsymbol{RB}(\boldsymbol{\sigma})$ has a closed graph. Suppose, for contradiction, that $\boldsymbol{RB}(\boldsymbol{\sigma})$ does not have a closed graph, so there exists a sequence $\{\boldsymbol{\sigma}^m, \widehat{\boldsymbol{\sigma}}^m\} \rightarrow \{\boldsymbol{\sigma}, \widehat{\boldsymbol{\sigma}}\}$ with $\widehat{\boldsymbol{\sigma}}^m \in \boldsymbol{RB}(\boldsymbol{\sigma}^m)$ for all $m \in \mathbb{N}$, but $\widehat{\boldsymbol{\sigma}} \notin \boldsymbol{RB}(\boldsymbol{\sigma})$. As a result, there exists some $i \in [n]$ such that $\widehat{\sigma}_i \notin RB(\boldsymbol{\sigma}_{-i})$.
Using the definition of risk-averse best response in Definition \ref{def_best_response_mixed_RAE}, there exists $p_i' \in supp(RB(\boldsymbol{\sigma}_{-i}))$, $\widehat{p}_i \in supp(\widehat{\sigma}_i)$, and some $\epsilon > 0$ such that \begin{equation} \label{eq_contradiction_1} P \left ( \overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus p_i', \boldsymbol{\sigma}_{-i}) \right ) > P \left ( \overline{L}^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus \widehat{p}_i, \boldsymbol{\sigma}_{-i}) \right ) + 3 \epsilon. \end{equation} Since the latencies over edges are continuous random variables and $\boldsymbol{\sigma}_{-i}^m \rightarrow \boldsymbol{\sigma}_{-i}$, for any $\epsilon > 0$, there exists a sufficiently large $m_1$ such that we have the following for $m \geq m_1$: \begin{equation} \label{eq_contradiction_2} P \left ( \overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus p_i', \boldsymbol{\sigma}_{-i}^m) \right ) > P \left ( \overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus p_i', \boldsymbol{\sigma}_{-i}) \right ) - \epsilon. \end{equation} By adding inequalities with the same direction in Equations \eqref{eq_contradiction_1} and \eqref{eq_contradiction_2}, for $m \geq m_1$ we have \begin{equation} \label{eq_contradiction_3} P \left ( \overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus p_i', \boldsymbol{\sigma}_{-i}^m) \right ) > P \left ( \overline{L}^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus \widehat{p}_i, \boldsymbol{\sigma}_{-i}) \right ) + 2 \epsilon. 
\end{equation} By the same argument as for Equation \eqref{eq_contradiction_2}, for any $\epsilon > 0$, there exists a sufficiently large $m_2$ such that we have the following for $m \geq m_2$: \begin{equation} \label{eq_contradiction_4} P \left ( \overline{L}^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus \widehat{p}_i, \boldsymbol{\sigma}_{-i}) \right ) > P \left ( \overline{L}^{i}(\widehat{p}_i^m, \boldsymbol{\sigma}_{-i}^m) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus \widehat{p}_i^m, \boldsymbol{\sigma}_{-i}^m) \right ) - \epsilon, \end{equation} where $\widehat{p}_i^m \in supp(RB(\boldsymbol{\sigma}_{-i}^m))$. By adding the inequalities with the same direction in Equations \eqref{eq_contradiction_3} and \eqref{eq_contradiction_4}, for $m \geq \max \{m_1, m_2\}$ we have \begin{equation} \label{eq_contradiction_5} P \left ( \overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus p_i', \boldsymbol{\sigma}_{-i}^m) \right ) > P \left ( \overline{L}^{i}(\widehat{p}_i^m, \boldsymbol{\sigma}_{-i}^m) \leq \overline{\boldsymbol{L}}^{i}(\mathcal{P}_i \setminus \widehat{p}_i^m, \boldsymbol{\sigma}_{-i}^m) \right ) + \epsilon. \end{equation} Equation \eqref{eq_contradiction_5} contradicts the fact that $\widehat{p}_i^m \in supp(RB(\boldsymbol{\sigma}_{-i}^m))$, which completes the proof that $\boldsymbol{RB}(\boldsymbol{\sigma})$ has a closed graph. \end{enumerate} As listed above, the risk-averse best response function $\boldsymbol{RB}(\boldsymbol{\sigma})$ satisfies the four conditions of Kakutani's Fixed Point Theorem. As a direct result, for any finite $n$-player stochastic congestion game, there exists $\boldsymbol{\sigma}^* \in \boldsymbol{\Sigma}$ such that $\boldsymbol{\sigma}^* \in \boldsymbol{RB}(\boldsymbol{\sigma}^*)$, which completes the existence proof of a risk-averse equilibrium for such games.
\Halmos \section{Proof of Theorem \ref{theorem_existence_MV}} \label{proof_theorem_MV} Let $\boldsymbol{MB}: \boldsymbol{\Sigma} \rightarrow \boldsymbol{\Sigma}$ be the mean-variance best response function where $\boldsymbol{MB}(\boldsymbol{\sigma}) = \big (MB(\boldsymbol{\sigma}_{-1}), MB(\boldsymbol{\sigma}_{-2}),$ $\dots, MB(\boldsymbol{\sigma}_{-n}) \big )$. It is easy to see that the existence of a fixed point $\boldsymbol{\sigma}^* \in \boldsymbol{\Sigma}$ for the mean-variance best response function, i.e., $\boldsymbol{\sigma}^* \in \boldsymbol{MB}(\boldsymbol{\sigma}^*)$, proves the existence of a mean-variance equilibrium. We show that the function $\boldsymbol{MB}(\boldsymbol{\sigma})$ satisfies the following four conditions of Kakutani's Fixed Point Theorem, which establishes the existence of such a fixed point. \begin{enumerate}[leftmargin=*] \item The domain of the function $\boldsymbol{MB}(.)$ is a non-empty, compact, and convex subset of a finite-dimensional Euclidean space: $\boldsymbol{\Sigma}$ is the Cartesian product of non-empty simplices, as each player has at least one strategy to play; furthermore, each of the elements of $\boldsymbol{\Sigma}$ is between zero and one, so $\boldsymbol{\Sigma}$ is non-empty, convex, bounded, and closed, containing all its limit points. \item $\boldsymbol{MB}(\boldsymbol{\sigma}) \neq \oldemptyset$, $\forall \boldsymbol{\sigma} \in \boldsymbol{\Sigma}$: The set in Equation \eqref{eq_mixed_best_response_MV} is non-empty as the minimum is taken over a finite number of values. As a result, $MB(\boldsymbol{\sigma}_{-i})$ is non-empty for all $i \in [n]$ since it is the set of all probability distributions over this non-empty set.
\item The image $\boldsymbol{MB}(\boldsymbol{\sigma})$ is a convex set for all $\boldsymbol{\sigma} \in \boldsymbol{\Sigma}$: It suffices to prove that $MB(\boldsymbol{\sigma}_{-i})$ is a convex set for all $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$ and for all $i \in [n]$. For any $i \in [n]$, if $\sigma_i, \sigma_i' \in MB(\boldsymbol{\sigma}_{-i})$, we need to prove that $\lambda \sigma_i + (1 - \lambda) \sigma_i' \in MB(\boldsymbol{\sigma}_{-i})$ for any $\lambda \in [0, 1]$ and for any $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$. Let the supports of $\sigma_i$ and $\sigma_i'$ be defined as $supp(\sigma_i) = \{p_i \in \mathcal{P}_i: \sigma_i(p_i) > 0\}$ and $supp(\sigma_i') = \{p_i \in \mathcal{P}_i: \sigma_i'(p_i) > 0\}$, respectively. It is concluded from the definition of the mean-variance best response in Definition \ref{def_best_response_mixed_MV} that $supp(\sigma_i), supp(\sigma_i') \subseteq \underset{p_i \in \mathcal{P}_i}{\argmin} \ \mathrm{Var} \left (\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ) + \rho \cdot \overline{l}^{i}(p_i, \boldsymbol{\sigma}_{-i})$, which results in $supp(\sigma_i) \cup supp(\sigma_i') \subseteq \underset{p_i \in \mathcal{P}_i}{\argmin} \ \mathrm{Var} \left (\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ) + \rho \cdot \overline{l}^{i}(p_i, \boldsymbol{\sigma}_{-i})$. As a result, using the definition of mean-variance best response, any probability distribution over the set $supp(\sigma_i) \cup supp(\sigma_i')$ is a mean-variance best response to $\boldsymbol{\sigma}_{-i}$.
The mixed strategy $\lambda \sigma_i + (1 - \lambda) \sigma_i'$ is clearly a valid probability distribution over the set $supp(\sigma_i) \cup supp(\sigma_i')$ for any $\lambda \in [0, 1]$, so $\lambda \sigma_i + (1 - \lambda) \sigma_i' \in MB(\boldsymbol{\sigma}_{-i})$ for any $\lambda \in [0, 1]$ and for any $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$, which completes the convexity proof of the set $MB(\boldsymbol{\sigma}_{-i})$. \item $\boldsymbol{MB}(\boldsymbol{\sigma})$ has a closed graph: $\boldsymbol{MB}(\boldsymbol{\sigma})$ has a closed graph if for any sequence $\{\boldsymbol{\sigma}^m, \widehat{\boldsymbol{\sigma}}^m\} \rightarrow \{\boldsymbol{\sigma}, \widehat{\boldsymbol{\sigma}}\}$ with $\widehat{\boldsymbol{\sigma}}^m \in \boldsymbol{MB}(\boldsymbol{\sigma}^m)$ for all $m \in \mathbb{N}$, we have $\widehat{\boldsymbol{\sigma}} \in \boldsymbol{MB}(\boldsymbol{\sigma})$. Proof by contradiction is used to show that $\boldsymbol{MB}(\boldsymbol{\sigma})$ has a closed graph. Suppose, for contradiction, that $\boldsymbol{MB}(\boldsymbol{\sigma})$ does not have a closed graph, so there exists a sequence $\{\boldsymbol{\sigma}^m, \widehat{\boldsymbol{\sigma}}^m\} \rightarrow \{\boldsymbol{\sigma}, \widehat{\boldsymbol{\sigma}}\}$ with $\widehat{\boldsymbol{\sigma}}^m \in \boldsymbol{MB}(\boldsymbol{\sigma}^m)$ for all $m \in \mathbb{N}$, but $\widehat{\boldsymbol{\sigma}} \notin \boldsymbol{MB}(\boldsymbol{\sigma})$. As a result, there exists some $i \in [n]$ such that $\widehat{\sigma}_i \notin MB(\boldsymbol{\sigma}_{-i})$.
Using the definition of mean-variance best response in Definition \ref{def_best_response_mixed_MV}, there exists $p_i' \in supp(MB(\boldsymbol{\sigma}_{-i}))$, $\widehat{p}_i \in supp(\widehat{\sigma}_i)$, and some $\epsilon > 0$ such that \begin{equation} \label{eq_contradiction_1_MV} \mathrm{Var} \left (\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \right ) + \rho \cdot \overline{l}^{i}(p_i', \boldsymbol{\sigma}_{-i}) < \mathrm{Var} \left (\overline{L}^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) \right ) + \rho \cdot \overline{l}^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) - 3 \epsilon. \end{equation} Since the latencies over edges are continuous random variables and $\boldsymbol{\sigma}_{-i}^m \rightarrow \boldsymbol{\sigma}_{-i}$, for any $\epsilon > 0$, there exists a sufficiently large $m_3$ such that we have the following for $m \geq m_3$: \begin{equation} \label{eq_contradiction_2_MV} \mathrm{Var} \left (\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) \right ) + \rho \cdot \overline{l}^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) < \mathrm{Var} \left (\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \right ) + \rho \cdot \overline{l}^{i}(p_i', \boldsymbol{\sigma}_{-i}) + \epsilon. \end{equation} By adding inequalities with the same direction in Equations \eqref{eq_contradiction_1_MV} and \eqref{eq_contradiction_2_MV}, for $m \geq m_3$ we have \begin{equation} \label{eq_contradiction_3_MV} \mathrm{Var} \left (\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) \right ) + \rho \cdot \overline{l}^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) < \mathrm{Var} \left (\overline{L}^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) \right ) + \rho \cdot \overline{l}^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) - 2 \epsilon. 
\end{equation} By the same argument as for Equation \eqref{eq_contradiction_2_MV}, for any $\epsilon > 0$, there exists a sufficiently large $m_4$ such that we have the following for $m \geq m_4$: \begin{equation} \label{eq_contradiction_4_MV} \mathrm{Var} \left (\overline{L}^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) \right ) + \rho \cdot \overline{l}^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) < \mathrm{Var} \left (\overline{L}^{i}(\widehat{p}_i^m, \boldsymbol{\sigma}_{-i}^m) \right ) + \rho \cdot \overline{l}^{i}(\widehat{p}_i^m, \boldsymbol{\sigma}_{-i}^m) + \epsilon, \end{equation} where $\widehat{p}_i^m \in supp(MB(\boldsymbol{\sigma}_{-i}^m))$. By adding the inequalities with the same direction in Equations \eqref{eq_contradiction_3_MV} and \eqref{eq_contradiction_4_MV}, for $m \geq \max \{m_3, m_4\}$ we have \begin{equation} \label{eq_contradiction_5_MV} \mathrm{Var} \left (\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) \right ) + \rho \cdot \overline{l}^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) < \mathrm{Var} \left (\overline{L}^{i}(\widehat{p}_i^m, \boldsymbol{\sigma}_{-i}^m) \right ) + \rho \cdot \overline{l}^{i}(\widehat{p}_i^m, \boldsymbol{\sigma}_{-i}^m) - \epsilon. \end{equation} Equation \eqref{eq_contradiction_5_MV} contradicts the fact that $\widehat{p}_i^m \in supp(MB(\boldsymbol{\sigma}_{-i}^m))$, which completes the proof that $\boldsymbol{MB}(\boldsymbol{\sigma})$ has a closed graph. \end{enumerate} As listed above, the mean-variance best response function $\boldsymbol{MB}(\boldsymbol{\sigma})$ satisfies the four conditions of Kakutani's Fixed Point Theorem. As a direct result, for any finite $n$-player stochastic congestion game, there exists $\boldsymbol{\sigma}^* \in \boldsymbol{\Sigma}$ such that $\boldsymbol{\sigma}^* \in \boldsymbol{MB}(\boldsymbol{\sigma}^*)$, which completes the existence proof of a mean-variance equilibrium for such games.
\Halmos \section{Proof of Theorem \ref{theorem_existence_CVaR_alpha}} \label{proof_theorem_CVaR_alpha} Let $\boldsymbol{CB}: \boldsymbol{\Sigma} \rightarrow \boldsymbol{\Sigma}$ be the CVaR$_\alpha$ best response function where $\boldsymbol{CB}(\boldsymbol{\sigma}) = \big (CB(\boldsymbol{\sigma}_{-1}), CB(\boldsymbol{\sigma}_{-2}),$ $\dots, CB(\boldsymbol{\sigma}_{-n}) \big )$. It is easy to see that the existence of a fixed point $\boldsymbol{\sigma}^* \in \boldsymbol{\Sigma}$ for the CVaR$_\alpha$ best response function, i.e., $\boldsymbol{\sigma}^* \in \boldsymbol{CB}(\boldsymbol{\sigma}^*)$, proves the existence of a CVaR$_\alpha$ equilibrium. We show that the function $\boldsymbol{CB}(\boldsymbol{\sigma})$ satisfies the following four conditions of Kakutani's Fixed Point Theorem, which establishes the existence of such a fixed point. \begin{enumerate}[leftmargin=*] \item The domain of the function $\boldsymbol{CB}(.)$ is a non-empty, compact, and convex subset of a finite-dimensional Euclidean space: $\boldsymbol{\Sigma}$ is the Cartesian product of non-empty simplices, as each player has at least one strategy to play; furthermore, each of the elements of $\boldsymbol{\Sigma}$ is between zero and one, so $\boldsymbol{\Sigma}$ is non-empty, convex, bounded, and closed, containing all its limit points. \item $\boldsymbol{CB}(\boldsymbol{\sigma}) \neq \oldemptyset$, $\forall \boldsymbol{\sigma} \in \boldsymbol{\Sigma}$: The set in Equation \eqref{eq_mixed_best_response_CVaR_alpha} is non-empty as the minimum is taken over a finite number of values. As a result, $CB(\boldsymbol{\sigma}_{-i})$ is non-empty for all $i \in [n]$ since it is the set of all probability distributions over this non-empty set.
\item The image $\boldsymbol{CB}(\boldsymbol{\sigma})$ is a convex set for all $\boldsymbol{\sigma} \in \boldsymbol{\Sigma}$: It suffices to prove that $CB(\boldsymbol{\sigma}_{-i})$ is a convex set for all $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$ and for all $i \in [n]$. For any $i \in [n]$, if $\sigma_i, \sigma_i' \in CB(\boldsymbol{\sigma}_{-i})$, we need to prove that $\lambda \sigma_i + (1 - \lambda) \sigma_i' \in CB(\boldsymbol{\sigma}_{-i})$ for any $\lambda \in [0, 1]$ and for any $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$. Let the supports of $\sigma_i$ and $\sigma_i'$ be defined as $supp(\sigma_i) = \{p_i \in \mathcal{P}_i: \sigma_i(p_i) > 0\}$ and $supp(\sigma_i') = \{p_i \in \mathcal{P}_i: \sigma_i'(p_i) > 0\}$, respectively. It is concluded from the definition of the CVaR$_\alpha$ best response in Definition \ref{def_best_response_mixed_CVaR_alpha} that $supp(\sigma_i), supp(\sigma_i') \subseteq \underset{p_i \in \mathcal{P}_i}{\argmin} \ \mathrm{E} \left [\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \Big | \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ]$, which results in $supp(\sigma_i) \cup supp(\sigma_i') \subseteq \underset{p_i \in \mathcal{P}_i}{\argmin} \ \mathrm{E} \left [\overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \Big | \overline{L}^{i}(p_i, \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(p_i, \boldsymbol{\sigma}_{-i}) \right ]$. As a result, using the definition of CVaR$_\alpha$ best response, any probability distribution over the set $supp(\sigma_i) \cup supp(\sigma_i')$ is a CVaR$_\alpha$ best response to $\boldsymbol{\sigma}_{-i}$.
The mixed strategy $\lambda \sigma_i + (1 - \lambda) \sigma_i'$ is clearly a valid probability distribution over the set $supp(\sigma_i) \cup supp(\sigma_i')$ for any $\lambda \in [0, 1]$, so $\lambda \sigma_i + (1 - \lambda) \sigma_i' \in CB(\boldsymbol{\sigma}_{-i})$ for any $\lambda \in [0, 1]$ and for any $\boldsymbol{\sigma}_{-i} \in \boldsymbol{\Sigma}_{-i}$, which completes the convexity proof of the set $CB(\boldsymbol{\sigma}_{-i})$. \item $\boldsymbol{CB}(\boldsymbol{\sigma})$ has a closed graph: $\boldsymbol{CB}(\boldsymbol{\sigma})$ has a closed graph if for any sequence $\{\boldsymbol{\sigma}^m, \widehat{\boldsymbol{\sigma}}^m\} \rightarrow \{\boldsymbol{\sigma}, \widehat{\boldsymbol{\sigma}}\}$ with $\widehat{\boldsymbol{\sigma}}^m \in \boldsymbol{CB}(\boldsymbol{\sigma}^m)$ for all $m \in \mathbb{N}$, we have $\widehat{\boldsymbol{\sigma}} \in \boldsymbol{CB}(\boldsymbol{\sigma})$. Proof by contradiction is used to show that $\boldsymbol{CB}(\boldsymbol{\sigma})$ has a closed graph. Suppose, for contradiction, that $\boldsymbol{CB}(\boldsymbol{\sigma})$ does not have a closed graph, so there exists a sequence $\{\boldsymbol{\sigma}^m, \widehat{\boldsymbol{\sigma}}^m\} \rightarrow \{\boldsymbol{\sigma}, \widehat{\boldsymbol{\sigma}}\}$ with $\widehat{\boldsymbol{\sigma}}^m \in \boldsymbol{CB}(\boldsymbol{\sigma}^m)$ for all $m \in \mathbb{N}$, but $\widehat{\boldsymbol{\sigma}} \notin \boldsymbol{CB}(\boldsymbol{\sigma})$. As a result, there exists some $i \in [n]$ such that $\widehat{\sigma}_i \notin CB(\boldsymbol{\sigma}_{-i})$.
Using the definition of CVaR$_\alpha$ best response in Definition \ref{def_best_response_mixed_CVaR_alpha}, there exists $p_i' \in supp(CB(\boldsymbol{\sigma}_{-i}))$, $\widehat{p}_i \in supp(\widehat{\sigma}_i)$, and some $\epsilon > 0$ such that \begin{equation} \label{eq_contradiction_1_CVaR_alpha} \mathrm{E} \left [\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \Big | \overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(p_i', \boldsymbol{\sigma}_{-i}) \right ] < \mathrm{E} \left [\overline{L}^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) \Big | \overline{L}^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) \right ] - 3 \epsilon. \end{equation} Since the latencies over edges are continuous random variables and $\boldsymbol{\sigma}_{-i}^m \rightarrow \boldsymbol{\sigma}_{-i}$, for any $\epsilon > 0$, there exists a sufficiently large $m_5$ such that we have the following for $m \geq m_5$: \begin{equation} \label{eq_contradiction_2_CVaR_alpha} \mathrm{E} \left [\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) \Big | \overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) \geq v_\alpha^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) \right ] < \mathrm{E} \left [\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \Big | \overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(p_i', \boldsymbol{\sigma}_{-i}) \right ] + \epsilon. 
\end{equation} By adding the inequalities with the same direction in Equations \eqref{eq_contradiction_1_CVaR_alpha} and \eqref{eq_contradiction_2_CVaR_alpha}, for $m \geq m_5$ we have \begin{equation} \label{eq_contradiction_3_CVaR_alpha} \mathrm{E} \left [\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) \Big | \overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) \geq v_\alpha^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) \right ] < \mathrm{E} \left [\overline{L}^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) \Big | \overline{L}^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) \right ] - 2 \epsilon. \end{equation} For the same reason as for Equation \eqref{eq_contradiction_2_CVaR_alpha}, for any $\epsilon > 0$, there exists a sufficiently large $m_6$ such that we have the following for $m \geq m_6$: \begin{equation} \label{eq_contradiction_4_CVaR_alpha} \mathrm{E} \left [\overline{L}^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) \Big | \overline{L}^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) \geq v_\alpha^{i}(\widehat{p}_i, \boldsymbol{\sigma}_{-i}) \right ] < \mathrm{E} \left [\overline{L}^{i}(\widehat{p}_i^m, \boldsymbol{\sigma}_{-i}^m) \Big | \overline{L}^{i}(\widehat{p}_i^m, \boldsymbol{\sigma}_{-i}^m) \geq v_\alpha^{i}(\widehat{p}_i^m, \boldsymbol{\sigma}_{-i}^m) \right ] + \epsilon, \end{equation} where $\widehat{p}_i^m \in supp(CB(\boldsymbol{\sigma}_{-i}^m))$.
By adding the inequalities with the same direction in Equations \eqref{eq_contradiction_3_CVaR_alpha} and \eqref{eq_contradiction_4_CVaR_alpha}, for $m \geq \max \{m_5, m_6\}$ we have \begin{equation} \label{eq_contradiction_5_CVaR_alpha} \mathrm{E} \left [\overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) \Big | \overline{L}^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) \geq v_\alpha^{i}(p_i', \boldsymbol{\sigma}_{-i}^m) \right ] < \mathrm{E} \left [\overline{L}^{i}(\widehat{p}_i^m, \boldsymbol{\sigma}_{-i}^m) \Big | \overline{L}^{i}(\widehat{p}_i^m, \boldsymbol{\sigma}_{-i}^m) \geq v_\alpha^{i}(\widehat{p}_i^m, \boldsymbol{\sigma}_{-i}^m) \right ] - \epsilon. \end{equation} Equation \eqref{eq_contradiction_5_CVaR_alpha} contradicts the fact that $\widehat{p}_i^m \in supp(CB(\boldsymbol{\sigma}_{-i}^m))$, which completes the proof that $\boldsymbol{CB}(\boldsymbol{\sigma})$ has a closed graph. \end{enumerate} As listed above, the CVaR$_\alpha$ best response function $\boldsymbol{CB}(\boldsymbol{\sigma})$ satisfies the four conditions of Kakutani's Fixed Point Theorem. As a direct result, for any finite $n$-player stochastic congestion game, there exists $\boldsymbol{\sigma}^* \in \boldsymbol{\Sigma}$ such that $\boldsymbol{\sigma}^* \in \boldsymbol{CB}(\boldsymbol{\sigma}^*)$, which completes the existence proof of a CVaR$_\alpha$ equilibrium for such games. \Halmos \end{appendices} \end{document}
\begin{document} \title{Searching for Robustness: Loss Learning for Noisy Classification Tasks} \author{Boyan~Gao\textsuperscript{\rm 1}\qquad~Henry~Gouk\textsuperscript{\rm 1}\qquad~Timothy~M.~Hospedales\textsuperscript{\rm 1, 2}\\ \textsuperscript{\rm 1}School of Informatics, University of Edinburgh, United Kingdom\\ \textsuperscript{\rm 2}Samsung AI Centre, Cambridge, United Kingdom\\ \small{\texttt{\{boyan.gao,henry.gouk,t.hospedales\}@ed.ac.uk}} } \maketitle \begin{abstract} We present a ``learning to learn'' approach for automatically constructing white-box classification loss functions that are robust to label noise in the training data. We parameterize a flexible family of loss functions using Taylor polynomials, and apply evolutionary strategies to search for noise-robust losses in this space. To learn re-usable loss functions that can apply to new tasks, our fitness function scores their performance in aggregate across a range of training dataset and architecture combinations. The resulting white-box loss provides a simple and fast ``plug-and-play'' module that enables effective noise-robust learning in diverse downstream tasks, without requiring a special training procedure or network architecture. The efficacy of our method is demonstrated on a variety of datasets with both synthetic and real label noise, where we compare favorably to previous work. \end{abstract} \section{Introduction} The success of modern deep learning is largely predicated on large amounts of accurately labelled training data. However, training with large quantities of gold-standard labelled data is often not achievable. 
This is often because professional annotation is too costly to achieve at scale and machine learning practitioners resort to less reliable crowd-sourcing, web-crawled incidental annotations~\cite{chen2015weblyCNN}, or imperfect machine annotation~\cite{kuznetsova2020openImages}; while in other situations the data is hard to classify reliably even by human experts, and label-noise is inevitable. These considerations have led to a large and rapidly progressing body of work focusing on developing noise-robust learning approaches~\cite{ren2018learning,han2018co}. Diverse solutions have been studied including those that modify the training algorithm through teacher-student~\cite{jiang2018mentornet,han2018co} learning, or identify and down-weight noisy instances~\cite{ren2018learning}. Much simpler, and therefore more widely applicable, are attempts to define noise-robust loss functions that provide drop-in replacements for standard losses such as cross-entropy~\cite{wang2019symmetric,zhang2018generalized,ghosh2017robust}. These studies hand engineer robust losses, motivated by different considerations including risk minimisation~\cite{ghosh2017robust} and information theory~\cite{xu2019l_dmi}. In this paper we explore an alternative data-driven approach~\cite{hutter2019autoMLbook} to loss design, and search for a simple white-box function that provides a general-purpose noise-robust drop-in loss. \begin{figure} \caption{Schematic of our robust loss search framework. (1) We train a robust loss function so as to optimize validation performance of a CNN trained with synthetic label noise using this loss. (2) Thanks to dataset and architecture randomization, our learned loss is reusable and can be deployed to new tasks, including those without clean validation set to drive robust learning.} \label{fig:teaser} \end{figure} We perform evolutionary search on a space of loss functions parameterised as Taylor polynomials. 
Every function in this space is smooth and differentiable, and thus provides a valid loss that can be easily plugged into existing deep learning frameworks. Meanwhile, this search space provides a good trade-off between the flexibility to represent non-trivial losses, and a low-dimensional white-box parameterisation that is efficient to search and reusable across tasks without overfitting. To score a given loss during our search, we use it to train neural networks on noisy data, and then evaluate the clean validation performance of the trained model. To learn a general purpose loss, rather than one that is specific to a given architecture or dataset, we explore domain randomisation~\cite{tobin2017domain} in the space of architectures and datasets. Scoring losses according to their average validation performance in diverse conditions leads to reusable functions that can be applied to new datasets and architectures, as illustrated in Figure~\ref{fig:teaser}. We apply our learned loss function to train various MLP and CNN architectures on several benchmarks including MNIST, FashionMNIST, USPS, CIFAR-10, and CIFAR-100 with different types of simulated label noise. We also test our loss on a large real-world noisy label dataset, Clothing1M. The results verify the re-usability of our learned loss and its efficacy compared to state-of-the-art in a variety of settings. An important advantage of our approach compared to previous work that makes use of AutoML techniques to learn noise-robust loss functions is transferability. Previous methods for tackling noisy classification tasks often require a clean (i.e., noiseless) validation dataset to use as a meta-supervision signal~\cite{ren2018learning,shu2019metaWeightNet}. In contrast, we formulate our learning algorithm in such a way that we can instead make use of a clean validation set on an arbitrary auxiliary domain that can be completely different from the domain of interest. 
\begin{figure*} \caption{Existing hand-designed robust losses and our meta-learned robust loss. Top left: Conventional Cross-Entropy (CE); Top middle: Generalised Cross Entropy (GCE)~\cite{zhang2018generalized}; Top right: Mean Absolute Error (MAE)~\cite{ghosh2017robust}; Bottom left: label-smoothing~\cite{pereyra2017regularizing}. Bottom middle: Symmetric Cross Entropy~\cite{wang2019symmetric}. Bottom right: Our learned loss.} \label{learned_loss_fuc_binary} \end{figure*} \section{Related Work} \textbf{Label Noise}\quad Learning with label noise is now a large research area due to its practical importance. Song \etal~\cite{song2020labelNoiseSurvey} present a detailed survey explaining the variety of approaches previously studied, including designing noise-robust neural network architectures \cite{chen2015weblyCNN}, regularisers such as label-smoothing~\cite{szegedy2016rethinking,pereyra2017regularizing}, sample selection methods that attempt to filter out noisy samples -- often by co-teaching or student-teacher learning with multiple neural networks~\cite{jiang2018mentornet,han2018co,wei2020combating}, various meta-learning approaches that often aim to down-weight noisy samples using meta-gradients from a validation set~\cite{ren2018learning,shu2019metaWeightNet}, and robust loss design. Among these families of approaches, we are motivated to focus on robust loss design due to simplicity and general applicability -- we wish to use standard architectures and standard learning algorithms. Major robust losses include MAE, shown to be theoretically robust in \cite{ghosh2017robust}, but hard to train in~\cite{zhang2018generalized}; GCE, which attempts to be robust yet easy to train \cite{zhang2018generalized}; and symmetric cross-entropy~\cite{wang2019symmetric}. These losses are all hand-designed based on various motivations. Instead, we take a data-driven AutoML approach and search for a robust loss function.
This draws upon meta-learning techniques but, differently from existing meta-robustness work, focuses on general-purpose white box loss discovery. Incidentally, we note that our resulting Taylor loss covers all six desiderata for noise-robust learning outlined in~\cite{song2020labelNoiseSurvey}. Finally, we mention one recent study \cite{yao2020searching} that also applied an AutoML approach to noisy label learning. In contrast to our approach, this method meta-learns a sample-selection technique to separate clean and noisy data, which must be conducted on a per-dataset basis. In our case, once trained, our loss is ready for plug-and-play deployment on diverse target problems with no further meta-training or dataset-specific optimization required (see Figure~\ref{fig:teaser}). \textbf{Meta-learning and Loss Learning}\quad Meta-learning, also known as learning to learn, has been applied for a wide variety of purposes as summarized in~\cite{hospedales2020metaSurvey}. Of particular relevance is meta-learning of loss functions, which has been studied for various purposes including providing differentiable surrogates of non-differentiable objectives~\cite{huang2019addressing}, optimizing efficiency and asymptotic performance of learning \cite{jenni2018deep,bechtle2019meta,houthooft2018evolved,wu2018learning,gonzalez2019baikalLoss,gonzalez2020taylorGLO}, and improving robustness to train/test domain-shift \cite{balaji2018metareg,li2019feature}. We are particularly interested in learning \emph{white-box} losses for efficiency and improved task-transferability compared to neural network alternatives \cite{bechtle2019meta,houthooft2018evolved,balaji2018metareg,li2019feature}. Meta-learning of white-box learner components has been demonstrated for optimizers \cite{wichrowska2017learned}, activation functions \cite{ramachandran2017searching} and losses for accelerating conventional supervised learning \cite{gonzalez2019baikalLoss,gonzalez2020taylorGLO}. 
We are the first to demonstrate the value of automatic loss function discovery for general purpose label-noise robust learning. \section{Method} We aim to learn a loss function for multi-class classification problems that is robust to noisy labels in the training set. We consider the task of learning a loss function as a bilevel optimisation problem, where solutions generated by the upper objectives (in the outer loop) are conditioned on the response of the lower objectives (in the inner loop). In our loss function learning setting, the upper and lower problems are defined, respectively, as optimising the parameters of an adaptive loss function and training neural networks, $f_{\omega}$, with the learned loss function. The upper level optimisation problem uses as its supervision signal the average performance, measured across a variety of domains, of models trained with a prospective loss function. The lower level optimisation problem consists of optimising a collection of models, each being trained to minimise the prospective loss function on a different domain that has been subjected to artificial label noise.
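The artificial label corruption applied to each training domain can be described by a row-stochastic transition matrix over classes. The following is a minimal plain-Python sketch of the two schemes used later in the experiments (symmetric noise flips the true label uniformly to any other class; asymmetric, pair-flip noise sends all corrupted mass to a single confusable class); the helper names are illustrative, not taken from the paper's code:

```python
import random

def symmetric_noise_matrix(num_classes, noise_ratio):
    # Keep the true label with probability 1 - noise_ratio; otherwise
    # flip uniformly to one of the remaining num_classes - 1 labels.
    off = noise_ratio / (num_classes - 1)
    return [[1.0 - noise_ratio if j == i else off
             for j in range(num_classes)]
            for i in range(num_classes)]

def asymmetric_noise_matrix(num_classes, noise_ratio):
    # Pair-flip noise: all corrupted mass goes to one designated
    # confusable class (here, the next class index, cyclically).
    T = [[0.0] * num_classes for _ in range(num_classes)]
    for i in range(num_classes):
        T[i][i] = 1.0 - noise_ratio
        T[i][(i + 1) % num_classes] = noise_ratio
    return T

def corrupt_label(label, T, rng=random):
    # Sample a (possibly flipped) label from the row T[label].
    u = rng.random()
    for j, p in enumerate(T[label]):
        if u < p:
            return j
        u -= p
    return label  # guard against floating-point round-off
```

With 6 classes and a 40\% noise ratio, for instance, the symmetric matrix keeps 0.6 on the diagonal and spreads 0.08 over each off-diagonal entry.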
\begin{algorithm}[t] \caption{Offline Taylor CMA-ES} \label{general_algorithm} \begin{algorithmic}[1] \STATE {\bfseries Input: } $\mathcal{D}, F, \mu^{(0)}, \Sigma^{(0)}$ \STATE {\bfseries Output: } $p(\theta; \mu^{*}, \Sigma^{*})$ \STATE $t = 0$ \WHILE{not converged or reached max steps} \STATE Sample $\Theta = \{\theta_1, \theta_2, ..., \theta_n\} \sim p(\theta; \mu^{(t)}, \Sigma^{(t)})$ \COMMENT{Sample losses for exploration} \STATE $G = F\times\mathcal{D}\times\Theta$ \COMMENT{Assign datasets and architectures to losses} \STATE $\vec s = \textup{zeros} \in \mathbb{R}^n$ \FORALL{$(f^{(k)}, D_j, \theta_i) \in G$} \STATE $(D^{train}_j, D^{val}_j) = \textup{split}(D_j)$ \COMMENT{Construct train/val splits} \STATE $\omega^{*} = \argmin_\omega \mathcal{L}_{\theta_i}(f_\omega^{(k)}, D^{train}_j)$ \COMMENT{Train the network} \STATE $\vec s_{i} = \vec s_i + \frac{1}{|F||D|}\mathcal{M}(f_{\omega^{*}}^{(k)}, D^{val}_j)$ \COMMENT{Evaluate on validation data} \ENDFOR \STATE $(\mu^{(t+1)}, \Sigma^{(t+1)}) = \text{CMA-ES}(\mu^{(t)}, \Sigma^{(t)}, \Theta, \vec s)$ \COMMENT{Update $\mu$ and $\Sigma$ according to CMA-ES} \STATE $t = t + 1$ \ENDWHILE \end{algorithmic} \end{algorithm} The prospective loss functions are represented by their parameters, $\theta$, which correspond to the coefficients of an $n$-th order polynomial. These polynomials can be viewed as a Taylor expansion of the ideal loss function. The bilevel optimisation problem is given by \begin{align} \max_{\theta} \, \mathbb{E}_{D, f} \lbrack \mathcal{M}(f_{\omega^{*}_D}, D^{val}) \rbrack \label{eq:bilevel_op} \\ s.t. \quad \omega^{*}_D = \argmin_{\omega} \, \mathcal{L}_{\theta}( f_\omega, D^{train}),\nonumber \end{align} \noindent where $\mathcal{M}(\cdot, \cdot)$ is a fitness function measuring network performance, $D$ is a random variable representing a domain, with $D^{val}$ and $D^{train}$ representing the validation and training sets respectively, and $f$ is a neural network parameterised by $\omega$. 
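The outer loop of Algorithm~\ref{general_algorithm} can be sketched in a few lines. The snippet below is a deliberately simplified evolution strategy, not full CMA-ES: it keeps an isotropic Gaussian search distribution, moves the mean towards the best-scoring samples, and decays the step size geometrically in place of CMA-ES's covariance and path adaptation. The \texttt{fitness} callback stands in for the entire inner loop (training networks with $\mathcal{L}_\theta$ and returning mean validation performance); here it is only a toy quadratic:

```python
import random

def es_search(fitness, dim, sigma=1.0, pop_size=16, elite_frac=0.25,
              generations=50, rng=random):
    # Simplified (mu, lambda) evolution strategy over loss parameters
    # theta: sample candidates from an isotropic Gaussian, score each
    # with the fitness callback, and move the mean toward the elites.
    mu = [0.0] * dim
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(generations):
        pop = [[m + rng.gauss(0.0, sigma) for m in mu]
               for _ in range(pop_size)]
        elites = sorted(pop, key=fitness, reverse=True)[:n_elite]
        mu = [sum(e[d] for e in elites) / n_elite for d in range(dim)]
        sigma *= 0.95  # crude step-size decay instead of CMA adaptation
    return mu

# Toy stand-in for the inner loop: maximise a quadratic whose optimum
# is at theta = (2, 2, 2); a real fitness would train and validate nets.
best = es_search(lambda th: -sum((t - 2.0) ** 2 for t in th), dim=3)
```

Because the fitness is treated as a black box, non-differentiable metrics such as validation accuracy can be plugged in directly, which is the property the method exploits.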
The performance of $f_{\omega^{*}_D}$, as measured by $\mathcal{M}$, reflects the quality of the supervision provided by the candidate loss function $\mathcal{L}_{\theta}$ on dataset $D$. During meta-learning, the training set and validation set are not identically distributed: the validation set contains clean labels, while the training set is assumed to have some form of label corruption. We use the Covariance Matrix Adaptation Evolution Strategy (CMA-ES)~\cite{hansen1996adapting} to solve the upper-level problem, and standard stochastic gradient-based optimisation approaches to solve the lower-level problems. A general overview of our algorithm for solving the optimisation problem in Equation~\ref{eq:bilevel_op} is given in Algorithm~\ref{general_algorithm}. \subsection{CMA-ES for Loss Function Learning} We use CMA-ES to solve the upper optimisation problem, and any variant of stochastic gradient descent for the lower problem. CMA-ES finds a Gaussian distribution defined over the search space that places most of its mass on high-quality solutions to the optimisation problem. One of the benefits of using CMA-ES is that it does not require the performance measurement to be differentiable, which means the learned loss function can be evaluated using informative metrics, such as accuracy. Each generation consists of a set, $\Theta$, of loss functions obtained by sampling multiple individuals from the parameter distribution, $p(\theta; \vec \mu, \Sigma) = \mathcal{N}(\vec \mu, \Sigma)$.
Each of the individuals, $\theta_i \in \Theta$, is evaluated according to \begin{align} \label{eq:fitness} \mathbb{E}_{D,f} \lbrack \mathcal{M}(f_{\omega^{*}_D}, D^{val}) \rbrack \approx \frac{1}{N} \sum_{j=1}^N \mathcal{M}(f_{\omega_j}^{(j)}, D_j^{val}) \\ \textup{s.t.} \quad \omega_j = \argmin_{\omega} \, \mathcal{L}_{\theta_i}(f_\omega^{(j)}, D_j^{train}) \nonumber, \end{align} \noindent where $f^{(j)}_\omega$ and $D_j$ are different network architectures and datasets, respectively. \subsection{Taylor Polynomial Representation} The space of potential loss functions in which CMA-ES searches is a crucial design parameter. From a practical point of view, we must limit ourselves to a space that can be parameterised by a small number of values. However, this must be balanced with the ability to represent a wide enough variety of functions that a good solution can be found. Moreover, by selecting a low-dimensional space with a well-understood nonlinear form, it becomes possible to transfer the learned loss to new problems without having to repeat the meta-training. The function space that we choose is the set of order-$\beta$ Taylor series approximations of smooth functions, $\mathcal{L}: \mathbb{R}^m \to \mathbb{R}$, \begin{align} \mathcal{L}(\vec x) =& \sum_{n=0}^{\beta} \frac{1}{n!} \nabla^n \mathcal{L}(\vec x_0)^T(\vec x- \vec x_0)^{n}, \label{eq:taylor_raw} \end{align} where each $\nabla^n \mathcal{L}(\vec x_0)$ is the $n$-th order gradient of $\mathcal{L}$ evaluated at a fixed point, $\vec x_0$. We make the simplifying assumption that the loss function should be class-wise separable. That is, each potential class is considered in isolation, and we learn a loss function that measures the divergence between a noisy binary label and the probability predicted by the network.
We then sum over the different possible classes, \begin{equation*} \mathcal{L}_\theta(\vec{\hat{y}}, \vec y) = \frac{1}{C}\sum_{i=1}^C \mathcal{L}_\theta^{(i)}(\vec{\hat{y}}_i, \vec y_i), \end{equation*} where $\vec{\hat{y}}$ and $\vec y$ are the vectors of predicted probabilities and (possibly noisy) ground-truth labels, respectively. The result of performing the simplification is that the loss function can be used in a variety of settings with different numbers of classes, and we can fix $m=2$. We found that $\beta=4$ is a good trade-off between modelling capacity and meta-training efficiency. Note that $\nabla^n \mathcal{L}(\vec x_0)$ does not depend on $\vec x$, meaning these values can be computed during a meta-training period before regular training commences. As such, we can reinterpret the task of meta-learning $\mathcal{L}$ as inferring $\theta = \{\nabla^n \mathcal{L}(\vec x_0)\}_{n=0}^\beta$. The resulting loss function is represented as \begin{align} \label{eq:taylorm} \mathcal{L}_{\theta}^{(i)}&(\vec{\hat{y}_i}, \vec{y_i}) = \vec{\theta}_2(\vec{\hat{y}}_i - \vec{\theta}_0) + \frac{1}{2}\vec{\theta}_3(\vec{\hat{y}}_i - \vec{\theta}_0)^2 \\ &+\frac{1}{6}\vec{\theta}_4(\vec{\hat{y}}_i - \vec{\theta}_0)^3 +\frac{1}{24}\vec{\theta}_5(\vec{\hat{y}}_i-\vec{\theta}_0)^{4} \nonumber \\ & + \vec{\theta}_6(\vec{\hat{y}}_i - \vec{\theta}_0)(\vec{y}_i - \vec{\theta}_1) \nonumber \\ &+ \frac{1}{2}\vec{\theta}_7(\vec{\hat{y}}_i-\vec{\theta}_0)(\vec{y}_i -\vec{\theta}_1)^2 + \frac{1}{2} \vec{\theta}_8(\vec{\hat{y}}_i - \vec{\theta}_0)^2(\vec{y}_i - \vec{\theta}_1) \nonumber \\ & + \frac{1}{6}\vec{\theta}_9(\vec{\hat{y}}_i - \vec{\theta}_0)^3(\vec{y}_i - \vec{\theta}_1) + \frac{1}{6}\vec{\theta}_{10}(\vec{\hat{y}}_i - \vec{\theta}_0)(\vec{y}_i - \vec{\theta}_1)^3 \nonumber\\ &+ \frac{1}{4}\vec{\theta}_{11}(\vec{\hat{y}}_i - \vec{\theta}_0)^2(\vec{y}_i - \vec{\theta}_1)^2. 
\nonumber \end{align} The fixed point where the gradients are evaluated is also left as a learned parameter, $(\vec{\theta}_{0}, \vec{\theta}_{1})$. Note that we have omitted terms where $\vec{\hat{y}}$ does not appear, as these do not impact the solution of the optimisation problem. In total there are only 12 parameters to fit, which is considerably fewer than the number of parameters found in a typical neural-network-parameterised loss function~\cite{li2019feature,bechtle2019meta,kirsch2020Improving}. \subsection{Generalisation Across Architectures} To enable the learned loss function to generalise to different architectures, we extend the domain randomisation strategy \cite{tobin2017domain} by evaluating the expected performance across a range of architectures during meta-learning. Specifically, we use a set, $F$, of $m$ architectures containing a variety of common neural network designs. The total population for evolutionary optimisation is then given by the Cartesian product $F \times \Theta$. The fitness function can then be computed as shown in Equation~\ref{eq:fitness}, where a mean is taken over all of the different architectures trained with the same loss. \subsection{Generalisation Across Datasets} We also explore another method for improving the generality of the loss function. To enable a loss function to be applied to an unseen dataset, the loss function should be exposed to several datasets during training so as not to overfit to the prediction distributions encountered for a specific machine learning problem. In our method, a loss function sampled from the current search distribution is deployed to train several models with the same architecture and initial weights, but on different datasets. Similarly to architecture generalisation, we use a set of datasets, $\mathcal{D}$, and take the Cartesian product, $\mathcal{D} \times \Theta$, to generate a population to be evaluated.
The performance of the loss functions is evaluated by the mean performance of all the networks on their corresponding datasets. In principle, one could perform both dataset and architecture randomisation simultaneously. However, due to the implied three-way Cartesian product, we found this computationally infeasible. \subsection{Normalisation \label{sec:minmax}} We make use of a normalisation approach to prevent the learned loss functions from exhibiting an arbitrary output range, \begin{equation} \hat{f} = \eta \frac{f-f_{min}}{f_{max}-f_{min}}, \label{eq:min_max} \end{equation} where $f_{min}$ and $f_{max}$ denote the minimum and maximum of the loss function, and $\eta$ is a hyperparameter determining its dynamic range. Both $f_{min}$ and $f_{max}$ are easily approximated by sampling random points satisfying $\{(\vec{\hat{y}}, \vec{y})| \vec{\hat{y}}_i \geq 0, \sum_{i}\vec{\hat{y}}_i = 1; \vec{y}_i \in \{0,1\}, \sum_{i}\vec{y}_i = 1\}$, which defines the domain of the loss function. \section{Experiments} In this section we evaluate our learned loss function on various noisy label learning tasks. In particular, we aim to answer three questions: (Q1) Can we learn a robust loss function that generalises across different datasets and architectures? (Q2) How well does our learned loss function generalise across different noise levels? (Q3) Can our learned loss function scale to large real-world noisy-label tasks? \textbf{Datasets}\quad We use seven datasets in our experiments: MNIST~\cite{lecun-mnisthandwrittendigit-2010}, CIFAR-10, CIFAR-100~\cite{krizhevsky2009learning}, KMNIST~\cite{clanuwat2018deep}, USPS~\cite{hull1994database}, FashionMNIST~\cite{xiao2017fashion} and Clothing1M~\cite{xiao2015learning}. Clothing1M is a dataset containing 1 million clothing images in 14 classes: T-shirt, Shirt, Knitwear, Chiffon, Sweater, Hoodie, Windbreaker, Jacket, Down Coat, Suit, Shawl, Dress, Vest, and Underwear.
The images are collected from shopping websites and the labels are generated from the text surrounding the images, thus providing a realistic noisy label setting. \begin{figure} \caption{Example noise matrix in the symmetric (left) and asymmetric (right) conditions for 6 classes and a noise ratio of 40\%.} \label{fig:noise_matrix} \end{figure} \textbf{Noise types}\quad For loss learning, we consider simulating two types of noise: symmetric noise and asymmetric noise (pair-flip noise). Symmetric noisy labels are generated by flipping the true label uniformly at random to one of the other classes, while asymmetric noisy labels are produced to simulate the more realistic scenario where particular pairs of categories are more easily confused than others. {For example, in the case of MNIST it is conceivable that label noise could manifest in such a way that a 7 is more likely to be mislabelled as a 1 than as a 6, or a 3 as an 8 rather than as a 4. We give an example of both symmetric and asymmetric label noise transition matrices in Figure \ref{fig:noise_matrix}.} \textbf{Architectures}\quad We train and evaluate our learned loss with a range of neural networks from very shallow ones, including 2-layer MLP, 3-layer MLP, and 4-layer CNN, to deeper ones, such as VGG-11~\cite{simonyan2014very} and Resnet-18~\cite{he2016deep}. We also use the medium-size architecture considered in~\cite{wei2020combating}, which we term JoCoR-Net (see supplemental material for details). For a fair comparison, we train the 2-layer MLP, 3-layer MLP, and 4-layer CNN with the SGD optimiser and set the learning rate to $0.01$ and momentum to $0.9$. For the training of JoCoR-Net, we apply the Adam optimiser~\cite{kingma2014adam} and the learning rate is set to $0.001$. When training Resnet-18 and VGG-11, we follow the training protocol in~\cite{zhang2019lookahead}. \textbf{Taylor Polynomial Order Selection}\quad We perform a preliminary experiment to select the order of the Taylor loss function.
We train a linear classifier in the inner loop of the dataset randomization algorithm (on MNIST, KMNIST, and CIFAR-10), and evaluate performance for polynomial orders 2, 3, 4 and 5. From the results in Figure~\ref{fig:shared}(left), we can see that the impact of the specific polynomial order is small compared to the impact of loss learning overall. Nevertheless, we pick order 4 for the subsequent experiments, as this order achieved the best performance. \textbf{Competitors}\quad We compare our learned loss functions with the standard cross-entropy (CE) baseline, as well as several strong alternative losses hand-designed for label-noise robustness: \textbf{MAE:} Mean Absolute Error, theoretically shown to be robust in \cite{ghosh2017robust}; \textbf{GCE:} \cite{zhang2018generalized} analysed MAE as hard to train, and proposed generalised cross-entropy to provide the best of CE and MAE; \textbf{FW:}~\cite{patrini2017making} iteratively estimates the label noise transition matrix, and trains the model corrected by the label noise estimate; \textbf{SCE:}~\cite{wang2019symmetric} argued that symmetrizing cross-entropy by adding reverse cross-entropy (RCE) improves label-noise robustness; \textbf{Bootstrap:} a classic method of replacing the noisy labels in training by the convex combination of the prediction and the given labels~\cite{reed2014training}; \textbf{LSR:} label-smoothing is an effective general purpose regularizer \cite{pereyra2017regularizing,szegedy2016rethinking,muller2019labelSmooth} whose properties in promoting noise robustness have been studied \cite{wang2019symmetric}. \subsection{Training a general-purpose robust loss function \label{se:generality}} \begin{figure*} \caption{Left: A preliminary experiment on hyperparameter selection. The performance of a linear model trained by the Taylor loss function with different orders vs training with cross-entropy (CE).
Middle/Right: Example learning curves of test accuracy vs iterations when using different robust losses. Middle: USPS/VGG-11/80\% symmetric noise. Right: USPS/ResNet-18/40\% asymmetric noise. } \label{fig:shared} \end{figure*} \textbf{Experimental setup}\quad We consider two domain generalisation protocols for training a general purpose loss function, namely architecture and dataset generalisation. In architecture generalisation, we build a pool of training architectures including 2-layer MLP, 3-layer MLP, and 4-layer CNN, and solely use MNIST as the training set. In dataset generalisation, we solely use the 4-layer CNN as the backbone and build a dataset pool from MNIST, KMNIST, and CIFAR-10. We also consider two label-noise conditions during meta-learning: symmetric label noise with 80\% noise, and asymmetric label noise with 40\% noise. Losses are trained under each domain generalisation protocol, and each noise distribution, using the normalisation trick introduced in Section~\ref{sec:minmax}. It is important to note that we are not restricted to deploying our loss on the same tasks used during training. After loss function learning we can deploy the losses to train fresh models from scratch on a suite of evaluation tasks unseen during training. For evaluation, we compare the accuracy at convergence, and summarise via the average rank of each method across different datasets and architectures~\cite{demvsar2006statistical}. \begin{table*}[t] \centering \caption{Accuracy (\%) of different robust learning strategies. 80\% symmetric noise condition. Our loss trained under architecture randomization (AR) and dataset randomization (DR) conditions has the best average rank. Grey columns indicate datasets seen during DR training.
White columns are totally novel datasets.} \label{tab:_gen} \resizebox{\textwidth}{!}{ \begin{tabular}{ laaacccacccc} \toprule Architecture type & 2layer MLP & 4layer CNN & VGG11 & VGG11 & VGG11 & VGG11 & Resnet18 & Resnet18 & Resnet18 & Resnet18 &Avg.Rank \\ dataset & MNIST & MNIST & Cifar10 & Cifar100 & FashionMNIST & USPS & Cifar10 & Cifar100 & FashionMNIST & USPS & \\ \midrule CE & 22.10$\pm$0.68 & 28.48$\pm$0.35 & 18.38$\pm$0.21 & 4.25$\pm$0.28 & 20.55 $\pm$ 0.93 & 51.42 $\pm$ 0.94 & 18.44$\pm$0.34 & 8.86$\pm$0.10 & 21.92 $\pm$ 0.74 & 57.05 $\pm$ 0.42 & 6.4\\ GCE & 40.57$\pm$0.43 & 9.80$\pm$0.58 & 16.56$\pm$0.54 & 1.04$\pm$0.47 & 25.10 $\pm$ 0.68 & 63.45 $\pm$ 0.86 & 31.69$\pm$0.36 & 11.98$\pm$0.18 & 42.62 $\pm$ 0.89 & \bfs{79.52 $\pm$ 0.63} & 5.1 \\ SCE & 31.23$\pm$0.70 & 28.53$\pm$1.02 & 28.61$\pm$0.64 & 2.31$\pm$0.80 & 36.64 $\pm$ 0.59 & 63.68 $\pm$ 0.56 & \bfs{45.34$\pm$0.40} & 8.16$\pm$0.07 & 59.93 $\pm$ 0.75 & 58.35 $\pm$ 0.76 & 4.6 \\ FW & 54.01$\pm$0.89 & 80.34$\pm$1.21 & 16.97$\pm$0.44 & 1.41$\pm$0.07 & 22.57 $\pm$ 0.76 & 53.66 $\pm$ 0.40 & 10.15$\pm$0.68 & 1.16$\pm$0.04 & 13.18 $\pm$ 0.35 & 42.80 $\pm$ 0.77 & 6.5 \\ Bootstrap & 23.46$\pm$1.31 & 28.78$\pm$1.03 & 17.58$\pm$0.82 & 4.18$\pm$0.72 & 20.40 $\pm$ 0.31 & 64.58 $\pm$ 0.21 & 12.10$\pm$0.32 & 8.67$\pm$0.61 & 22.36 $\pm$ 1.76 & 72.17 $\pm$ 1.24 & 5.7 \\ MAE & \bfs{85.40$\pm$3.39} & 78.70$\pm$11.49 & 14.20$\pm$0.42 & 1.01$\pm$0.11 & 63.40 $\pm$ 0.16 & 30.94 $\pm$ 0.35 & 22.95$\pm$1.25 & 0.82$\pm$0.17 & 68.20 $\pm$ 1.87 & 37.17 $\pm$ 0.93 & 6.0 \\ Label-smooth & 24.31$\pm$1.25 & 27.02$\pm$0.48 & 17.74$\pm$0.46 & 4.47$\pm$0.12 & 21.19 $\pm$ 0.39 & 54.26 $\pm$ 0.19 & 17.67$\pm$0.35 & 7.66$\pm$1.52 & 20.99 $\pm$ 0.83 & 59.94 $\pm$ 0.54 & 6.3 \\ \midrule Taylor Loss (AR) & 34.27$\pm$0.34 & 37.08$\pm$0.44 & \bfs{41.36$\pm$0.47} & \bfs{5.63$\pm$0.24} & \bfs{70.16 $\pm$ 0.87} & \bfs{78.71 $\pm$ 0.90} & 29.50$\pm$0.30 & \bfs{14.94$\pm$0.26} & 71.96 $\pm$ 0.89 & 68.80 $\pm$ 0.92 & 2.4 \\ Taylor Loss (DR) & 
48.57$\pm$0.11 & \bfs{85.31$\pm$0.12} & 31.12$\pm$0.23 & 5.04$\pm$0.14 & 67.29 $\pm$ 1.01 & 77.34 $\pm$ 1.34 & 35.23$\pm$0.23 & 13.36$\pm$0.63 & \bfs{71.97 $\pm$ 0.87} & 70.17 $\pm$ 0.64 & \bfs{2.0} \\ \bottomrule \end{tabular} } \end{table*} \begin{table*}[t] \centering \caption{Accuracy (\%) of different robust learning strategies. 40\% asymmetric noise condition. Our loss trained under architecture randomization (AR) and dataset randomization (DR) conditions has the best average rank. Grey columns indicate datasets seen during DR training. White columns are totally novel datasets.} \label{tab:pairnoise_gen} \resizebox{\textwidth}{!}{ \begin{tabular}{ laaacccacccc} \toprule Architecture type & 2layer MLP & 4layer CNN & VGG11 & VGG11 & VGG11 & VGG11 & Resnet18 & Resnet18 & Resnet18 & Resnet18 & Avg.Rank\\ dataset & MNIST & MNIST & Cifar10 & Cifar100 & FashionMNIST & USPS & Cifar10 & Cifar100 & FashionMNIST & USPS & \\ \midrule CE & 78.73$\pm$1.16 & 84.01$\pm$0.34 & 56.43$\pm$0.12 & 30.20$\pm$0.18 & 50.34 $\pm$ 1.23 & 77.74 $\pm$ 0.74 & 58.69$\pm$0.43 & 44.14$\pm$0.15 & 58.68 $\pm$ 0.63 & 73.84 $\pm$ 0.85 & 5.0\\ GCE & 81.94$\pm$1.22 & 9.80$\pm$0.10 & 56.42$\pm$0.54 & 22.39$\pm$0.35 & 53.57 $\pm$ 0.47 & 78.72 $\pm$ 0.72 & 57.90$\pm$0.31 & 40.76$\pm$0.24 & 58.51 $\pm$ 0.70 & 80.77 $\pm$ 0.35 & 5.3\\ SCE & 79.87$\pm$0.78 & 84.09$\pm$0.62 & 78.23$\pm$0.55 & 25.33$\pm$0.73 & 64.47 $\pm$ 0.97 & 85.50 $\pm$ 0.43 & 63.22$\pm$0.22 & 40.90$\pm$0.37 & 59.63 $\pm$ 0.96 & 81.57 $\pm$ 0.17 & 3.2\\ FW & 90.14$\pm$0.67 & 69.98$\pm$0.49 & 54.42$\pm$0.79 & 5.21$\pm$0.39 & 45.18 $\pm$ 0.84 & 76.41 $\pm$ 0.81 & 48.40$\pm$0.08 & 3.83$\pm$0.23 & 49.46 $\pm$ 0.73 & 46.04 $\pm$ 0.18 & 7.6\\ Bootstrap & 78.31$\pm$2.34 & 83.68$\pm$1.27 & 57.69$\pm$0.11 & \bfs{31.07$\pm$1.09} & 53.23 $\pm$ 1.53 & 77.81 $\pm$ 0.61 & 57.69$\pm$0.76 & \bfs{45.78$\pm$0.15} & 54.60 $\pm$ 0.85 & 75.67 $\pm$ 0.56 & 5.0\\ MAE & 71.61$\pm$4.50 & 69.91$\pm$0.49 & 49.06$\pm$0.22 & 0.96$\pm$0.10 & 49.02 $\pm$ 0.27 & 
62.38 $\pm$ 0.89 & 55.67$\pm$3.05 & 1.02$\pm$0.14 & 56.31 $\pm$ 1.21 & 70.05 $\pm$ 0.35 & 8.2\\ Label-smooth & 59.66$\pm$1.16 & 68.14$\pm$0.61 & 57.76$\pm$0.37 & 20.64$\pm$0.18 & 51.12 $\pm$ 1.03 & 77.49 $\pm$ 0.11 & 59.69$\pm$0.36 & 39.92$\pm$0.49 & 57.53 $\pm$ 0.73 & 78.97 $\pm$ 0.46 & 6.1\\ \midrule Taylor Loss (AR) & \bfs{97.16$\pm$0.20} & \bfs{96.88$\pm$0.66} & 74.30$\pm$0.20 & 22.50$\pm$0.33 & \bfs{87.23 $\pm$ 1.22} & \bfs{90.67$\pm$1.21} & \bfs{86.70$\pm$0.12} & 44.47$\pm$0.48 & \bfs{89.24 $\pm$ 0.25} & \bfs{91.17 $\pm$ 0.25} & \bfs{1.6} \\ Taylor Loss (DR) & 85.77$\pm$0.33 & 93.47$\pm$0.28 & \bfs{79.09$\pm$0.51} & 18.30$\pm$0.27 & 81.18 $\pm$ 0.80 & 89.78$\pm$0.46 & 68.88$\pm$0.41 & 31.47$\pm$0.65 & 88.22 $\pm$ 0.97 & 89.59 $\pm$ 1.05 & 3.0\\ \bottomrule \end{tabular} } \end{table*} \textbf{Benchmark Results}\quad The results for symmetric and asymmetric noise are shown in Tables~\ref{tab:_gen} and \ref{tab:pairnoise_gen}, respectively. We can see that our learned losses perform favourably compared to hand-designed alternatives across a variety of benchmarks, with our learned loss achieving a higher average rank than the competitors in both experiments. However, there is no clear winner between the architecture (AR) and dataset (DR) randomization conditions for meta-learning. We conjecture that the best performance would be obtained by performing both simultaneously, but as this experiment is computationally costly, we leave it to future work. Note that during deployment, all methods have a similar computational cost, except for FW, which requires training the network twice for noise estimation. \textbf{Analysis of Learning Curves}\quad The plots in Figure~\ref{fig:shared}(right) compare the learning curves of test accuracy for USPS/VGG-11 and USPS/ResNet-18 with 80\% symmetric and 40\% asymmetric noise, respectively.
It is important to note that because we target the situation where we do not have a clean validation set for the target domain to drive model selection, one cannot rely on early stopping to cherry-pick a good iteration. It is therefore important that a robust loss maintains good performance over longer training, and on this metric our Taylor-AR and Taylor-DR are the clear winners. \cut{ \begin{figure} \caption{Test accuracy over training epochs. Right: ResNet-18 trained on USPS with 40\% asymmetric noise. Left: VGG11 trained on USPS with 80\% symmetric noise.} \label{fig:acc_curve} \end{figure} } \textbf{Real-world Clothing1M results}\quad The previous experiment reported performance of the learned model after training on manually corrupted labels. In this section, we follow the setting for ResNet-18 described in~\cite{wei2020combating} to apply our learned loss to the real-world Clothing1M noisy-label benchmark. As this is a real-world noisy-label problem, we apply our model from the asymmetric-40\% condition above. Note that neither Clothing1M nor ResNet-18 was seen during meta-learning. We train with the Adam optimiser, with learning rates $8\times10^{-4}$, $5\times10^{-4}$, and $5\times10^{-5}$ for 5 epochs each, in sequence. We report the mean accuracy of each model after ten trials in Table~\ref{tab:clothing1mNew}. Among the competitors, JoCoR is the state-of-the-art method in the broader range of noise-robust learners. It uses a complex co-distillation scheme with multiple network branches, while the other listed competitors and ours are simple plug-in robust losses applied to vanilla ResNet training. Nevertheless, our method obtains the highest performance. \cut{ \begin{table}[t] \centering \caption{Test accuracy ($\%$) of robust learners on Clothing1M with ResNet18. $^*$JoCoR is a multi-network co-distillation training framework.
The others are simple plug-in robust losses.} \label{tab:clothing1m} \resizebox{1\columnwidth}{!}{ \begin{tabular}{ c c c c c c } \toprule \midrule CE & Bootstrap & GCE & FW & SCE & JoCoR$^*$ \\ 66.88 & 67.28 & 66.63 & 68.33 & 67.63 & 69.79 \\ \midrule & Taylor (AR-A40) & Taylor (DR-A40) & Taylor (AR-S80) & Taylor (DR-S80)&\\ & 69.14 & \textbf{70.09} & 68.85 & 69.34 &\\ \bottomrule \end{tabular} } \end{table} } \cut{\begin{table*}[t] \centering \caption{Test accuracy ($\%$) of robust learners on Clothing1M with ResNet18. $^*$JoCoR is a multi-network co-distillation training framework. The others are simple plug-in robust losses.} \label{tab:clothing1m} \resizebox{\textwidth}{!}{ \begin{tabular}{ c c c c c c c c c c } \toprule CE & Bootstrap & GCE & FW & SCE & JoCoR$^*$ & Taylor (AR-A40) & Taylor (DR-A40) & Taylor (AR-S80) & Taylor (DR-S80)\\ \midrule 66.88 & 67.28 & 66.63 & 68.33 & 67.63 & 69.79 & 69.14 & \textbf{70.09} & 68.85 & 69.34 \\ \bottomrule \end{tabular} } \end{table*}} \begin{table}[t] \centering \caption{Test accuracy ($\%$) of robust learners on Clothing1M with ResNet18. $^*$JoCoR is a multi-network co-distillation training framework. The others are simple plug-in robust losses.}\label{tab:clothing1mNew} \begin{tabular}{lc} \toprule Method & Accuracy \\ \midrule CE & 66.88 \\ Bootstrap & 67.28 \\ GCE & 66.63 \\ FW & 68.33 \\ SCE & 67.63 \\ JoCoR$^*$ & 69.79 \\ \midrule Taylor (AR-A40) & 69.14 \\ Taylor (DR-A40) & \textbf{70.09} \\ Taylor (AR-S80) & 68.85 \\ Taylor (DR-S80) & 69.34 \\ \bottomrule \end{tabular} \end{table} \begin{figure*} \caption{t-SNE visualisation of penultimate-layer ResNet-18 features after learning on CIFAR-10 with 40\% symmetric label noise. Top left: model trained by CE. Top middle: model trained by Bootstrap. Top right: model trained by FW. Bottom left: model trained by GCE. Bottom middle: model trained by SCE.
Bottom right: model trained by our learned loss.} \label{tsne_plot} \end{figure*} \subsection{Additional Analysis\label{se:ablation}} \textbf{Generalisation across noise-levels}\quad We trained our main losses on high levels of label noise ($80\%$-symmetric, $40\%$-asymmetric) as detailed previously, conjecturing that training on a difficult task would be sufficient for generalisation to other tasks with diverse noise conditions, as shown on Clothing1M. To evaluate this more systematically, we apply our $80\%$-symmetric loss to problems with a range of noise levels. From the results in Figure~\ref{fig:noiseStrength} we can see that our loss does tend to provide competitive performance across a range of operating points. \begin{figure*} \caption{Generalisation of learned loss to varying noise-levels. Left: 2MLP-MNIST, Middle: 4CNN-MNIST, Right: VGG11-CIFAR10.} \label{fig:noiseStrength} \end{figure*} \textbf{Qualitative analysis of representations}\quad We visualise the feature representation learned by our loss when applied to CIFAR-10 under 40\% symmetric label noise in Figure~\ref{tsne_plot}. We can see that conventional CE applied to noisy labels leads to a very mixed distribution of instances, while our loss leads to quite cleanly separable clusters despite the high degree of label noise. \begin{table*}[t] \centering \caption{Accuracy (\%) of different robust learners. The medium-sized CNN from JoCoR is used throughout. Taylor Loss is trained specifically for the target problem.
} \label{tab:mnist} \resizebox{\textwidth}{!}{ \begin{tabular}{ cccccccccc} \toprule & Noise Type & CE (ours) & CE (JoCoR) & GCE & SCE & FW & Bootstrap & JoCoR & Ours \\ \midrule \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{MNIST}}} & Sym-20\% & 81.21$\pm$0.53 & 79.56$\pm$0.44 & 97.64$\pm$0.65 & 89.50$\pm$0.44 & 96.85$\pm$0.67 & 76.18$\pm$0.98 & \textbf{98.06$\pm$0.04} & 97.90$\pm$0.12 \\ & Sym-50\% & 59.51$\pm$0.70 & 52.66$\pm$0.43 & 94.14$\pm$1.32 & 67.38$\pm$0.53 & 94.25$\pm$0.43 & 51.53$\pm$1.56 & 96.64$\pm$0.12 & \textbf{96.71$\pm$0.21} \\ & Sym-80\% & 22.43$\pm$1.21 & 23.43$\pm$0.31 & 40.57$\pm$0.72 & 31.23$\pm$0.89 & 54.01$\pm$1.82 & 23.46$\pm$0.46 & 84.89$\pm$4.55 & \textbf{89.88$\pm$0.34} \\ & Asy-40\% & 78.73$\pm$1.16 & 79.00$\pm$0.28 & 81.94$\pm$1.22 & 79.87$\pm$0.78 & 90.14$\pm$0.67 & 78.31$\pm$2.34 & 95.24$\pm$0.10 & \textbf{97.38$\pm$0.17} \\ \midrule \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{CIFAR-100}}} & Symm-20\% & 39.19$\pm$0.58 & 35.14$\pm$0.44 & 34.66$\pm$0.76 & 35.09$\pm$0.50 & 38.18$\pm$0.76 & 3.53$\pm$0.18 & \textbf{53.01$\pm$0.04} & 51.34$\pm$0.10 \\ & Symm-50\% & 19.50$\pm$0.43 & 16.97$\pm$0.40 & 10.29$\pm$0.53 & 18.54$\pm$0.29 & 3.25$\pm$0.15 & 18.36$\pm$0.63 & \textbf{43.49$\pm$0.46} & 42.18$\pm$0.27 \\ & Symm-80\% & 5.56$\pm$0.24 & 4.41$\pm$0.14 & 2.03$\pm$0.36 & 5.75$\pm$0.39 & 6.12$\pm$0.27 & 2.33$\pm$0.13 & 15.49$\pm$0.98 & \textbf{20.20$\pm$0.42} \\ & Asym-40\% & 30.16$\pm$0.44 & 27.29$\pm$0.25 & 1.32$\pm$0.23 & 27.07$\pm$0.42 & 4.23$\pm$0.51 & 31.72$\pm$0.74 & 32.70$\pm$0.35 & \textbf{36.01$\pm$0.39} \\ \midrule Avg.Rank & & 5.25 & 6.13 & 5.62 & 5.00 & 4.38 & 6.63 & 1.63 & \textbf{1.38} \\ \bottomrule \end{tabular} } \end{table*} \textbf{Dataset-specific loss learning}\quad Our main goal in this paper has been to learn a general purpose robust loss. 
In this section we examine an alternative use case of applying our framework to train a \emph{dataset-specific} robust loss, in which case better performance could be achieved by customising the loss for the target problem. To achieve this, we now additionally assume a clean subset of data for the target problem is available (unlike the previous experiments, but similarly to several alternative methods in this area~\cite{wei2020combating}) in order to drive loss learning. For this experiment we focus on comparison with JoCoR~\cite{wei2020combating}, since this is the current state-of-the-art model, and medium-sized networks are used in its experiments. We use the same medium-sized CNN architecture as JoCoR for fair comparison, and train our loss to optimize the validation performance. From the results in Table~\ref{tab:mnist}, we can see that our method provides comparable or better performance than the state-of-the-art competitor JoCoR. However, this comes at significantly greater cost, since the cost of dataset-specific loss training is not amortizable over multiple tasks as before. \textbf{Qualitative Analysis and Intuition of Learned Loss}\quad To gain some intuition about our loss functions' efficacy, we compare popular standard and robust losses in Figure~\ref{learned_loss_fuc_binary}. Comparing our robust loss with the alternatives, we conjecture that there are two properties that impact label-noise robustness in practice: feedback in response to perceived major prediction errors by the network, and the location of the minima where network predictions maximally satisfy the loss. In the case of a noisily labelled example that the network actually classifies correctly (e.g., $y_{true}=1$, $y_{label}=0$, $y_{pred}\approx1$), conventional CE aggressively ``corrects'' the network by reporting an exponentially large loss. This aggressive feedback can lead to fast training on clean data, but overfitting in noisy data~\cite{zhang2018generalized}.
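This feedback asymmetry can be made concrete with a small numeric sketch (our own illustration, not the paper's code; the GCE exponent $q=0.7$ is a commonly used value and an assumption here):

```python
import math

def ce(p_label):
    """Cross-entropy in the probability assigned to the given label:
    unbounded as p_label -> 0, i.e. an aggressive 'correction'."""
    return -math.log(p_label)

def mae(p_label):
    """MAE loss of Ghosh et al.: bounded above by 2."""
    return 2.0 * (1.0 - p_label)

def gce(p_label, q=0.7):
    """Generalized cross-entropy: bounded by 1/q, interpolating
    between CE (q -> 0) and a multiple of MAE (q = 1)."""
    return (1.0 - p_label ** q) / q

# A noisily labelled example the network actually classifies correctly:
# it assigns only p_label = 0.01 to the (wrong) given label.
for loss in (ce, mae, gce):
    print(loss.__name__, round(loss(0.01), 3))
```

Here `ce(0.01)` is already about $4.6$ and diverges as the prediction grows more confident, whereas `mae` and `gce` saturate below their bounds $2$ and $1/q$, respectively: the softened correction discussed above.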
Existing robust alternatives MAE~\cite{ghosh2017robust} and GCE~\cite{zhang2018generalized} are explicitly motivated by softening this aggressive ``correction'' compared to CE. Although not explicitly motivated by this, SCE also softens the feedback, as shown in the figure. Meanwhile, in terms of the minima that best satisfy the loss, conventional CE, as well as SCE, GCE and MAE, leads to maximally confident predictions (minima at $0$ or $1$), which, if applied to a noisy label, leads to overfitting. In contrast, label smoothing~\cite{pereyra2017regularizing,wang2019symmetric} improves robustness by inducing softer minima at $[0+\epsilon,1-\epsilon]$ compared to the others' $[0,1]$. However, LS issues the same aggressive correction of large errors as CE, and thus suffers from this accordingly. Only our Taylor loss has learned to exploit both strategies: less aggressive ``corrections'' and softer targets. \section{Conclusion} In this work, we leverage CMA-ES to discover novel loss functions in the defined Taylor series function space. A framework based on domain randomization is developed and instantiated with two variations to enable transferability and generality of the learned loss functions. In order to demonstrate the efficacy of the learned loss functions, we deploy them to a variety of tasks where the dataset and architecture are different from those seen during the meta-learning process. Comparison with recent work demonstrates the strength of our method empirically. In addition, we show that the proposed method is also able to produce well-behaved models trained with noisy data, and these models outperform state-of-the-art models on a range of tasks.
One of the key benefits of our approach is the ability to learn a loss function that can operate on domains with label noise, despite having no clean validation set for that domain---a trait that prominent meta-learning approaches to noise-robust loss function learning do not share~\cite{ren2018learning,shu2019metaWeightNet}. \cut{We will provide an implementation of our method and experiments online.\footnote{Removed for anonymization}} \end{document}
\begin{document} \title{Strong edge-colorings for $k$-degenerate graphs} \begin{abstract} We prove that the strong chromatic index of each $k$-degenerate graph with maximum degree $\Delta$ is at most $(4k-2)\Delta-k(2k-1)+1$. \end{abstract} A {\em strong edge-coloring} of a graph $G$ is an edge-coloring in which no edge is adjacent to two edges of the same color. So in a strong edge-coloring, every color class forms an induced matching. The strong chromatic index $\chi_s'(G)$ is the minimum number of colors needed to color $E(G)$ strongly. This notion was introduced by Fouquet and Jolivet (1983, \cite{FJ83}). During a seminar in Prague in 1985, Erd\H{o}s and Ne\v{s}et\v{r}il proposed several open problems, one of which is the following. \begin{conjecture}[Erd\H{o}s and Ne\v{s}et\v{r}il, 1985] If $G$ is a simple graph with maximum degree $\Delta$, then $\chi_s'(G)\le 5\Delta^2/4$ if $\Delta$ is even, and $\chi_s'(G)\le (5\Delta^2-2\Delta+1)/4$ if $\Delta$ is odd. \end{conjecture} This conjecture is true for $\Delta\le 3$ (\cite{A92, HHT93}). Cranston \cite{C06} showed that $\chi_s'(G)\le 22$ for $\Delta=4$. Chung, Gy\'arf\'as, Trotter, and Tuza (1990, \cite{CGTT90}) showed that the conjectured upper bounds are exactly the numbers of edges in $2K_2$-free graphs. Molloy and Reed \cite{MR97} proved that graphs with sufficiently large maximum degree $\Delta$ have strong chromatic index at most $1.998\Delta^2$. For more results see \cite{SSTM} (Chapter 6, problem 17). A graph is {\em $k$-degenerate} if every subgraph has minimum degree at most $k$. Chang and Narayanan (2012, \cite{CN12}) recently proved that a $2$-degenerate graph with maximum degree $\Delta$ has strong chromatic index at most $10\Delta-10$. Luo and the author in \cite{LY12} improved the upper bound to $8\Delta-4$.
In~\cite{CN12}, the following conjecture was made. \begin{conjecture}[Chang and Narayanan, \cite{CN12}] There exists an absolute constant $c$ such that for any $k$-degenerate graph $G$ with maximum degree $\Delta$, $\chi_s'(G)\le ck^2\Delta$. Furthermore, the $k^2$ may be replaced by $k$. \end{conjecture} In this paper, we prove a stronger form of the conjecture. Unlike the priming processes in~\cite{CN12, LY12}, we find a special ordering of the edges and, by using a greedy coloring, obtain the following result. \begin{theorem} The strong chromatic index of each $k$-degenerate graph with maximum degree $\Delta$ is at most $(4k-2)\Delta-k(2k-1)+1$. \end{theorem} Thus, $2$-degenerate graphs have strong chromatic index at most $6\Delta-5$. \begin{proof} By the definition of $k$-degenerate graphs, after the removal of all vertices of degree at most $k$, the remaining graph either has no edges or has new vertices of degree at most $k$; thus we have the following simple fact on $k$-degenerate graphs (see also \cite{CN12}). {\em Let $G$ be a $k$-degenerate graph. Then there exists $u\in V(G)$ so that $u$ is adjacent to at most $k$ vertices of degree more than $k$. Moreover, if $\Delta(G)>k$, then the vertex $u$ can be selected with degree more than $k$.} We call a vertex $u$ a {\em special vertex} if $u$ is adjacent to at most $k$ vertices of degree more than $k$. An edge is a {\em special edge} if it is incident to a special vertex and a vertex with degree at most $k$. The above fact implies that every $k$-degenerate graph has a special edge, and if $\Delta\le k$, then every vertex and every edge are special. We order the edges of $G$ as follows. First we find in $G$ a special edge, put it at the beginning of the list, and then remove it from $G$. Repeat this step in the remaining graph. When the process ends, we have an ordered list of the edges of $G$, say $e_1, e_2, \ldots, e_m$, where $m=|E(G)|$. So $e_m$ is the special edge we first chose and placed in the list.
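The ordering-and-greedy procedure just described is effectively an algorithm; the following Python sketch (an illustration under the definitions above; the data structures and tie-breaking are incidental choices, and the input is assumed to be a $k$-degenerate simple graph) mirrors it.

```python
from itertools import count

def find_special_edge(adj, k):
    """Return a special edge (u, w): u is adjacent to at most k vertices
    of degree more than k, and w has degree at most k."""
    deg = {v: len(nb) for v, nb in adj.items()}
    for u, nbrs in adj.items():
        if sum(1 for w in nbrs if deg[w] > k) <= k:   # u is special
            for w in nbrs:
                if deg[w] <= k:
                    return u, w
    return None  # cannot happen for a k-degenerate graph

def strong_edge_coloring(edges, k):
    """Order the edges as in the proof, then color them greedily so that
    edges within distance one receive different colors."""
    full = {}
    for u, v in edges:
        full.setdefault(u, set()).add(v)
        full.setdefault(v, set()).add(u)
    # Each special edge found goes to the *front* of the list, so the
    # first one found becomes e_m and the last one found becomes e_1.
    adj = {v: set(nb) for v, nb in full.items()}
    order = []
    while any(adj.values()):
        u, w = find_special_edge(adj, k)
        order.insert(0, frozenset((u, w)))
        adj[u].discard(w)
        adj[w].discard(u)
    # Greedy coloring in the order e_1, ..., e_m: forbid the colors of
    # already-colored edges sharing a vertex with e or with a neighbor.
    color = {}
    for e in order:
        forbidden = set()
        for x in e:
            for y in full[x]:
                if frozenset((x, y)) in color:
                    forbidden.add(color[frozenset((x, y))])
                for z in full[y]:
                    if frozenset((y, z)) in color:
                        forbidden.add(color[frozenset((y, z))])
        color[e] = next(c for c in count(1) if c not in forbidden)
    return color
```

On the $5$-cycle with $k=2$, every pair of edges is within distance one, so the greedy pass uses exactly $5$ colors, within the bound $6\Delta-5=7$.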
Let $G_i$ be the graph induced by the first $i$ edges in the list, $i=1,2,\ldots, m$. Then $e_i$ is a special edge in $G_i$. We now count the edges of $G_i$ within distance one to $e_i$ in $G$. We may call the edges in $G_i$ blue edges and the edges in $G-G_i$ yellow edges. Let $u_i,v_i$ be the endpoints of $e_i$ with $u_i$ being a special vertex in $G_i$. We first count the blue edges incident to $u_i$ and its neighbors. The vertex $u_i$ has three kinds of neighbors: the neighbors in $X_1$ sharing blue edges with $u_i$ and having degree more than $k$, the neighbors in $X_2$ sharing blue edges with $u_i$ and having degree at most $k$ (thus $v_i\in X_2$), and the neighbors in $X_3$ sharing yellow edges with $u_i$. By definition, $|X_1|\le k$, so at most $|X_1|\Delta+k(|X_2|-1)$ blue edges are incident to $X_1\cup (X_2-\{v_i\})$. For each vertex $u$ in $X_3$, $uu_i$ is a yellow edge in $G_i$ but will be a special edge in $G_j$ for some $j>i$. So either $u$ or $u_i$ has degree at most $k$ in $G_j$ (thus also in $G_i$); moreover, if $u_i$ has degree at least $k$ in $G_m$ for some $m$, then the endpoints other than $u_i$ of all yellow edges incident to $u_i$ in $G_m$ must have degree at most $k-1$ in $G_m$, in order for those yellow edges to become special later. Then among the vertices in $X_3$, at most $x=\max\{0,k-|X_1|-|X_2|\}$ vertices have degree more than $k$ in $G_i$, and all other vertices have degree at most $k-1$ in $G_i$. Therefore at most $x\Delta+(|X_3|-x)(k-1)$ blue edges are incident to $X_3$. Since $d(u_i)\le \Delta$, $|X_2|\le \Delta$ and $|X_1|+x\le k$, at most $$|X_1|\Delta+k(|X_2|-1)+x\Delta+(|X_3|-x)(k-1)=(|X_1|+x)\Delta+(k-1)(d(u_i)-|X_1|-x-1)+|X_2|-1\le 2k\Delta-k^2$$ blue edges are within distance one to $e_i$ from the $u_i$ side (not including the edges incident to $v_i$). We also count the blue edges incident to $v_i$ and its neighbors.
Similarly, $v_i$ has two kinds of neighbors: the neighbors in $Y_1$ sharing blue edges with $v_i$, and the neighbors in $Y_2$ sharing yellow edges with $v_i$. From the fact that $e_i$ is a special edge, $|Y_1|\le k$, so at most $(|Y_1|-1)\Delta$ blue edges are incident to $Y_1-\{u_i\}$. For each vertex $v$ in $Y_2$, $vv_i$ is a yellow edge in $G_i$ but will be a special edge in $G_s$ for some $s>i$. Similarly to the above, at most $k-|Y_1|$ vertices in $Y_2$ have degree more than $k$ in $G_i$, and all other vertices in $Y_2$ have degree at most $k-1$ in $G_i$. So at most $(k-|Y_1|)(\Delta-1)+(|Y_2|-(k-|Y_1|))(k-1)$ blue edges are incident to $Y_2$. In total, at most $$(|Y_1|-1)\Delta+(k-|Y_1|)(\Delta-1)+(|Y_2|-(k-|Y_1|))(k-1)\le (2k-2)\Delta-k(k-1)$$ blue edges are within distance one to $e_i$ from the $v_i$ side. So in $G_i$, the number of blue edges within distance one to $e_i$ is at most $$2k\Delta-k^2+(2k-2)\Delta-k(k-1)= (4k-2)\Delta-k(2k-1).$$ Now color the edges in the list one by one greedily. For each $i$, when it is the turn to color $e_i$, only the edges in $G_i$ (the blue edges) have been colored. Since there are at least $(4k-2)\Delta-k(2k-1)+1$ colors available, we can always color $e_i$ so that edges within distance one receive different colors. \end{proof} We note that the above result holds not only for simple graphs but also for multigraphs. \end{document}
\begin{document} \title[Multilinear pseudo-differential operators] {Boundedness of multilinear pseudo-differential operators with $S_{0,0}$ class symbols on Besov spaces} \author[N. Shida]{Naoto Shida} \date{\today} \address[N. Shida] {Graduate School of Mathematics, Nagoya University, Furocho, Chikusaku, Nagoya, Aichi, 464-8602, Japan} \email[N. Shida]{naoto.shida.c3@math.nagoya-u.ac.jp} \keywords{Multilinear pseudo-differential operators, multilinear H\"ormander symbol classes, Besov space} \thanks{This work was supported by Grant-in-Aid for JSPS KAKENHI Fellows, Grant Numbers 23KJ1053.} \subjclass[2020]{35S05, 42B15, 42B35} \begin{abstract} We consider multilinear pseudo-differential operators with symbols in the multilinear H\"ormander class $S_{0, 0}$. The aim of this paper is to discuss the boundedness of these operators in the settings of Besov spaces. \end{abstract} \maketitle \section{Introduction} For a bounded function $\sigma = \sigma(x, \xi_1, \dots, \xi_N)$ on $(\mathbb{R}^n)^{N+1}$, the ($N$-fold) multilinear pseudo-differential operator is defined by \[ T_\sigma(f_1, \dots, f_N)(x) = \frac{1}{(2\pi)^{Nn}} \int_{(\mathbb{R}^n)^N} e^{i x \cdot (\xi_1 + \dots + \xi_N)} \sigma(x, \xi_1, \dots, \xi_N) \prod_{j=1}^N \widehat{f}_j(\xi_j) \, d\xi_1 \dots d\xi_N, \] where $x \in \mathbb{R}^n$, $f_j \in \mathcal{S}(\mathbb{R}^n)$, $j=1, \dots, N$, and $\widehat{f}_j$ denotes the Fourier transform of $f_j$. For $m \in \mathbb{R}$, the symbol class $S^m_{0, 0}(\mathbb{R}^n, N)$ denotes the set of all $\sigma = \sigma(x, \xi_1, \dots, \xi_N) \in C^{\infty}((\mathbb{R}^n)^{N+1})$ satisfying \[ | \partial^\alpha_x \partial^{\beta_1}_{\xi_1} \dots \partial^{\beta_N}_{\xi_N} \sigma(x, \xi_1, \dots, \xi_N) | \le C_{\alpha, \beta_1, \dots, \beta_N} (1+|\xi_1|+ \dots + |\xi_N|)^{m} \] for all multi-indices $\alpha, \beta_1, \dots, \beta_N \in \mathbb{N}^n_0 = \{0, 1, 2, \dots\}^n$. 
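For orientation (a standard example, added here for illustration), the symbol
\[
\sigma(x, \xi_1, \dots, \xi_N) = \bigl(1+|\xi_1|^2+\dots+|\xi_N|^2\bigr)^{m/2}
\]
belongs to $S^m_{0,0}(\mathbb{R}^n, N)$: each derivative $\partial^{\beta_1}_{\xi_1} \cdots \partial^{\beta_N}_{\xi_N}$ of it is $O\bigl((1+|\xi_1|+\dots+|\xi_N|)^{m-(|\beta_1|+\dots+|\beta_N|)}\bigr) = O\bigl((1+|\xi_1|+\dots+|\xi_N|)^{m}\bigr)$. Note that the defining estimates ask for no gain of decay under differentiation in the frequency variables; this is the characteristic feature of the class $S_{0,0}$.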
The subject of this paper is to study the boundedness of multilinear pseudo-differential operators. We will use the following notation. Let $X_1, \dots, X_N$, and $Y$ be function spaces on $\mathbb{R}^n$ equipped with quasi-norms $\|\cdot\|_{X_j}$ and $\|\cdot\|_{Y}$, respectively. We say that $T_{\sigma}$ is bounded from $X_1 \times \dots \times X_N$ to $Y$ if there exists a positive constant $C$ such that the inequality \begin{equation}\label{bdd-dfn} \|T_{\sigma}(f_1, \dots, f_N)\|_{Y} \le C \prod_{j=1}^N \|f_j\|_{X_j} \end{equation} holds for all $f_j \in \mathcal{S} \cap X_j$, $j=1, \dots, N$. The smallest constant $C$ in \eqref{bdd-dfn} is denoted by $\|T_{\sigma}\|_{X_1 \times \dots \times X_N \to Y}$. If $T_{\sigma}$ is bounded from $X_1 \times \dots \times X_N$ to $Y$ for all $\sigma \in S^m_{0,0}(\mathbb{R}^n, N)$, then we write \[ \mathop{\mathrm{Op}}(S^m_{0,0}(\mathbb{R}^n, N)) \subset B(X_1 \times \dots \times X_N \to Y). \] In the linear case, the class $S^m_{0,0}(\mathbb{R}^n, 1)$ is well known as the H\"ormander symbol class $S_{0,0}$, and the boundedness of linear pseudo-differential operators with symbols in this class has been well studied. More precisely, the following is widely known (for the definition of the function spaces $h^p$ and $bmo$, see Section \ref{preliminaries}). \begin{thmA}[\cite{CV, CM-Asterisque, Miyachi-MathNachr, PS, KMT-JFA}] Let $0< p \le \infty$. Then the boundedness \[ \mathop{\mathrm{Op}}( S^{m}_{0, 0} ) \subset B(h^{p} \to h^{p}) \] holds if and only if \[ m \le -n\left|\frac{1}{p} - \frac{1}{2} \right|, \] where $h^{p}$ should be replaced by $bmo$ if $p = \infty$. \end{thmA} In Theorem A, the ``if'' part was proved by Calder\'on--Vaillancourt \cite{CV} for the case $p=2$, Coifman--Meyer \cite{CM-Asterisque} for the case $1 < p < \infty$, and Miyachi \cite{Miyachi-MathNachr} and P\"aiv\"arinta--Somersalo \cite{PS} for the case $0 < p \le \infty$. The proof of the ``only if'' part can be found in \cite{KMT-JFA}.
As a generalization of Theorem A, the boundedness of linear pseudo-differential operators of class $S_{0,0}$ was studied in the setting of Besov spaces $B^s_{p, q}$ (for the definition of Besov spaces, see Section \ref{preliminaries}). \begin{thmB}[\cite{Marschall, Sugimoto, Park}] Let $0< p \le \infty$, $0< q \le \infty$, and $s, t \in \mathbb{R}$. Let \[ m = -n\left|\frac{1}{p} - \frac{1}{2} \right| +s-t. \] Then \[ \mathop{\mathrm{Op}}(S^{m}_{0, 0}) \subset B(B^s_{p, q} \to B^{t}_{p, q}). \] \end{thmB} The above result was given by Marschall \cite{Marschall} for the case $p=q=\infty$, and by Sugimoto \cite{Sugimoto} for the case $1 \le p, q \le \infty$. Recently, Theorem B in the full range of exponents was proved by Park \cite{Park}. In the multilinear setting, the class $S^m_{0,0}(\mathbb{R}^n, N)$ was first studied by B\'enyi--Torres \cite{BT-MRL} for the case $N=2$, that is, the bilinear case. The authors proved that, for $1 \le p, p_1, p_2 < \infty$ satisfying $1/p = 1/p_1 + 1/p_2$, there exist $x$-independent symbols in $S^0_{0,0}(\mathbb{R}^n, 2)$ such that the corresponding bilinear operators are not bounded from $L^{p_1} \times L^{p_2}$ to $L^p$. In particular, they pointed out that the class $S^0_{0,0}(\mathbb{R}^n, 2)$ does not assure the $L^2 \times L^2 \to L^1$ boundedness, in contrast to the Calder\'on--Vaillancourt theorem for linear pseudo-differential operators. Then, the number $m$ which assures the $L^{p_1} \times \dots \times L^{p_N} \to L^p$ boundedness of these operators was investigated by B\'enyi--Bernicot--Maldonado--Naibo--Torres \cite{BBMNT} and Michalowski--Rule--Staubach \cite{MRS}, and after that, a complete description of $m$ was given by Miyachi--Tomita \cite{MT-IUMJ} for $N = 2$ (for the case $1/p = 1/p_1+1/p_2$) and Kato--Miyachi--Tomita \cite{KMT-JFA} for $N \ge 2$. \begin{thmC}[\cite{MT-IUMJ, KMT-JFA}] Let $N \ge 2$, $0< p, p_1, \dots, p_N \le \infty$, $1/p \le 1/p_1 + \dots + 1/p_N$ and $m \in \mathbb{R}$.
Then the boundedness \[ \mathop{\mathrm{Op}}(S^m_{0, 0}(\mathbb{R}^n, N)) \subset B(h^{p_1} \times \dots \times h^{p_N} \to h^p) \] holds if and only if \[ m \le \min \left\{ \frac{n}{p}, \frac{n}{2} \right\} - \sum_{j=1}^N \max \left\{ \frac{n}{p_j}, \frac{n}{2} \right\}, \] where $h^{p_j}$ can be replaced by $bmo$ if $p_j= \infty$ for some $j=1, \dots, N$. \end{thmC} In the scale of Besov spaces, some partial boundedness results for multilinear pseudo-differential operators with symbols in $S^m_{0,0}(\mathbb{R}^n, N)$ were given for the bilinear case. In Hamada--Shida--Tomita \cite{HST}, it was proved that all bilinear pseudo-differential operators with symbols in $S^{-n/2}_{0,0}(\mathbb{R}^n, 2)$ are bounded from $L^2 \times L^2$ to $B^0_{p, q}$ if and only if $1 \le p \le 2$ and $1 \le q \le \infty$. Since $B^0_{1, 1} \hookrightarrow h^1$, this result improves the $L^2 \times L^2 \to h^1$ boundedness given by \cite{MT-IUMJ}. In \cite{Shida-PAMS}, the following result was given. \begin{thmD}[\cite{Shida-PAMS}] Let $1 \le p \le 2 \le p_1, p_2 \le \infty$ be such that $1/p \le 1/p_1 + 1/p_2$, let $0 < q_1, q_2, q \le \infty$ be such that $1/q \le 1/q_1 + 1/q_2$, and let $s_1, s_2, s \in \mathbb{R}$ be such that $s_1+s_2=s$. If $s_1$, $s_2$ and $s$ satisfy \begin{equation} \label{condi-PAMS} s_1 < \frac{n}{2}, \quad s_2 < \frac{n}{2}, \quad \text{and} \quad s > - \frac{n}{2}, \end{equation} then \begin{align*} \mathop{\mathrm{Op}}(S^{-n/2}_{0, 0}(\mathbb{R}^n, 2)) \subset B(B^{s_1}_{p_1, q_1} \times B^{s_2}_{p_2, q_2} \to B^{s}_{p, q}). \end{align*} \end{thmD} \noindent In \cite{Shida-PAMS}, the sharpness of the condition \eqref{condi-PAMS} is also considered. The purpose of this paper is to extend the partial results in the bilinear case stated in Theorem D to the multilinear case in the full range $0< p, p_1, \dots, p_N \le \infty$. Our main result reads as follows.
\begin{thm}\label{main1} Let $N \ge 2$, $0< p, p_1, \dots, p_N \le \infty$, $1/p \le 1/p_1 + \dots + 1/p_N$, $0 < q, q_1, \dots, q_N \le \infty$, $1/q \le 1/q_1 + \dots + 1/q_N$, and $s, s_1, \dots, s_N \in \mathbb{R}$. Let \begin{equation}\label{criticalorder} m = \min \left\{ \frac{n}{p}, \frac{n}{2} \right\} - \sum_{j=1}^N \max \left\{ \frac{n}{p_j}, \frac{n}{2} \right\} + \sum_{j=1}^N s_j -s. \end{equation} If $s_1$, \dots, $s_N$ and $s$ satisfy \begin{align}\label{sassum} \begin{split} &s_j < \max \left\{ \frac{n}{p_j}, \frac{n}{2} \right\}, \quad j=1, \dots, N, \quad \text{and} \quad s > -\max \left\{ \frac{n}{p^{\prime}}, \frac{n}{2} \right\}, \end{split} \end{align} then \begin{equation}\label{boundedness_1} \mathop{\mathrm{Op}}(S^{m}_{0,0}(\mathbb{R}^n, N)) \subset B(B^{s_1}_{p_1, q_1} \times \dots \times B^{s_N}_{p_N, q_N} \to B^s_{p, q}). \end{equation} \end{thm} The condition \eqref{sassum} is sharp in the following sense. \begin{thm}\label{thmnec} Let $0 < p, p_1, \dots, p_N \le \infty$, $0< q, q_1, \dots, q_N \le \infty$ and $s, s_1, \dots, s_N \in \mathbb{R}$. If the boundedness \eqref{boundedness_1} holds with $m$ given in \eqref{criticalorder}, then $s_j \le \max \{n/p_j, n/2\}$, $j= 1, \dots, N$, and $s \ge -\max \{n/p^{\prime}, n/2\}$. \end{thm} We shall explain some connection between our main results and previous results. Firstly, Theorem \ref{main1} yields that, for $0< p, p_1, \dots, p_N \le \infty$, $1/p \le 1/p_1 + \dots + 1/p_N$ and $0< q, q_1, \dots, q_N \le \infty$, $1/q \le 1/q_1 + \dots +1/q_N$, the boundedness \begin{align}\label{bdd-main1-part} \mathop{\mathrm{Op}}(S^{m}_{0,0}(\mathbb{R}^n, N)) \subset B(B^{0}_{p_1, q_1} \times \dots \times B^{0}_{p_N, q_N} \to B^{0}_{p, q}) \end{align} holds with \begin{align*} m = \min\left\{\frac{n}{p}, \frac{n}{2}\right\} - \sum_{j=1}^N \max\left\{\frac{n}{p_j}, \frac{n}{2}\right\}. 
\end{align*} If $0< p < \infty$ and $0 < p_1, \dots, p_N \le \infty$ satisfy \begin{align}\label{addcondforexpo} \frac{1}{\min\{p, 2\}} \le \sum_{j=1}^N \frac{1}{\max\{p_j, 2\}}, \end{align} then the boundedness \eqref{bdd-main1-part} improves the boundedness result in Theorem C. In fact, for $0< p < \infty$ and $0< p_1, \dots, p_N \le \infty$ satisfying \eqref{addcondforexpo}, we can choose $q_j = \max\{p_j, 2\}$, $j=1, \dots, N$, and $q = \min\{p, 2\}$ in \eqref{bdd-main1-part}, and hence we obtain \[ \mathop{\mathrm{Op}}(S^{m}_{0,0}(\mathbb{R}^n, N)) \subset B(B^{0}_{p_1, \max\{p_1, 2\}} \times \dots \times B^{0}_{p_N, \max\{p_N, 2\}} \to B^{0}_{p, \min\{p, 2\}}). \] Since we have the embedding relations $B^{0}_{r, \min\{r, 2\}} \hookrightarrow h^r \hookrightarrow B^{0}_{r, \max\{r, 2\}}$ for $0 < r < \infty$, and $bmo \hookrightarrow B^{0}_{\infty, \infty}$, this boundedness gives an improvement of the corresponding $h^{p_1} \times \dots \times h^{p_N} \to h^p$ boundedness given in Theorem C. For the case $p= \infty$, if $0< p_1, \dots, p_N \le \infty$ satisfy \[ 1 \le \sum_{j=1}^N \frac{1}{\max\{p_j, 2\}}, \] then we can take $q_j = \max\{p_j, 2\}$, $j=1, \dots, N$ and $q =1$, and consequently we obtain \begin{align*} &\mathop{\mathrm{Op}}(S^m_{0,0}(\mathbb{R}^n, N)) \subset B(B^0_{p_1, \max\{p_1, 2\}} \times \dots \times B^0_{p_N, \max\{p_N, 2\}} \to B^0_{\infty, 1}). \end{align*} This improves the corresponding boundedness results given in Theorem C since $h^r \hookrightarrow B^{0}_{r, \max\{r, 2\}}$, $0< r < \infty$, $bmo \hookrightarrow B^0_{\infty, \infty}$ and $B^0_{\infty, 1}\hookrightarrow L^{\infty}$. Secondly, the condition \eqref{sassum} is peculiar to the multilinear case. In fact, we can take any $s$ and $t$ in Theorem B; however, in the multilinear setting, Theorem \ref{thmnec} says that the conditions \eqref{sassum} are (almost) necessary to assure the boundedness on Besov spaces.
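As a consistency check (a worked specialization, added for illustration), Theorem \ref{main1} recovers Theorem D in the bilinear case: for $N=2$, $1 \le p \le 2 \le p_1, p_2 \le \infty$ and $s = s_1 + s_2$, we have
\[
\min\Bigl\{\frac{n}{p}, \frac{n}{2}\Bigr\} = \frac{n}{2},
\qquad
\max\Bigl\{\frac{n}{p_j}, \frac{n}{2}\Bigr\} = \frac{n}{2}, \quad j=1,2,
\]
so that \eqref{criticalorder} gives
\[
m = \frac{n}{2} - \frac{n}{2} - \frac{n}{2} + s_1 + s_2 - s = -\frac{n}{2},
\]
and, since $p \le 2$ implies $p' \ge 2$ and hence $\max\{n/p', n/2\} = n/2$, the condition \eqref{sassum} reduces exactly to \eqref{condi-PAMS}.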
We also notice that the condition \eqref{sassum} can be found in the author's paper \cite{Shida-Sobolev}. In \cite{Shida-Sobolev}, it is proved that bilinear pseudo-differential operators with symbols in $S^m_{0, 0}(\mathbb{R}^n, 2)$ with the critical $m$ are bounded on Sobolev spaces under the assumption \eqref{sassum} with $N=2$. The organization of this paper is as follows. In Section 2, we give some notations and recall the definitions of some function spaces and embedding relations between them. In Section 3, we give the proof of Theorem \ref{main1}. In Section 4, we prove Theorem \ref{thmnec}. \section{Preliminaries}\label{preliminaries} For two nonnegative quantities $A$ and $B$, the notation $A \lesssim B$ means that $A \le CB$ for some unspecified constant $C>0$, and $A \approx B$ means that $A \lesssim B$ and $B \lesssim A$. For $0 < p \le \infty$, $p^{\prime}$ is the conjugate exponent of $p$, that is, $p^{\prime}$ is defined by $1/p+1/p^{\prime}=1$ if $1 < p \le \infty$ and $p^{\prime} = \infty$ if $0 < p \le 1$. For a finite set $\Lambda$, $|\Lambda|$ denotes the number of the elements of $\Lambda$. Let $\mathcal{S}(\mathbb{R}^n)$ and $\mathcal{S}'(\mathbb{R}^n)$ be the Schwartz space of rapidly decreasing smooth functions on $\mathbb{R}^n$ and its dual, the space of tempered distributions, respectively. We define the Fourier transform $\mathcal{F} f$ and the inverse Fourier transform $\mathcal{F}^{-1}f$ of $f \in \mathcal{S}(\mathbb{R}^n)$ by \[ \mathcal{F} f(\xi) =\widehat{f}(\xi) =\int_{\mathbb{R}^n}e^{-i \xi \cdot x} f(x)\, dx \quad \text{and} \quad \mathcal{F}^{-1}f(x) =\frac{1}{(2\pi)^n} \int_{\mathbb{R}^n}e^{i x \cdot \xi} f(\xi)\, d\xi. \] For $m \in L^{\infty}(\mathbb{R}^n)$, the Fourier multiplier operator $m(D)$ is defined by $m(D)f=\mathcal{F}^{-1}[m\widehat{f}]$ for $f \in \mathcal{S}(\mathbb{R}^n)$. 
For a countable set $J$, the sequence space $\ell^q(J)$, $0 < q \le \infty$, is defined to be the set of all complex sequences $a = \{a_j\}_{j \in J}$ such that \begin{align*} \|a\|_{\ell^q(J)} = \begin{cases} \left(\sum_{j \in J} |a_j|^q \right)^{1/q} &\text{if $0 < q < \infty$}, \\ \sup_{j \in J} |a_j| &\text{if $q = \infty$} \end{cases} \end{align*} is finite. For $a = \{a_j\}_{j \in J}$, we will use the notation $\|a_j\|_{\ell^q_j(J)}$ instead of $\|a\|_{\ell^q(J)}$ when we indicate the variable explicitly. Let $\phi \in \mathcal{S}(\mathbb{R}^n)$ be such that $\int_{\mathbb{R}^n} \phi(x)\, dx \neq 0$. For $0< p \le \infty$, the local Hardy space $h^p = h^p(\mathbb{R}^n)$ consists of all $f \in \mathcal{S}^\prime(\mathbb{R}^n)$ such that \begin{equation*} \|f\|_{h^p} = \left\| \sup_{0< t < 1} |\phi_t * f| \right\|_{L^p} < \infty, \end{equation*} where $\phi_t(x) = t^{-n} \phi(t^{-1} x)$. It is known that the definition of $h^p$ is independent of the choice of the function $\phi$ up to equivalence of quasi-norms. It is also known that $h^p = L^p$ for $1 < p \le \infty$ and $h^1 \hookrightarrow L^1$. The space $bmo = bmo(\mathbb{R}^n)$ consists of all locally integrable functions $f$ on $\mathbb{R}^n$ such that \begin{equation*} \|f\|_{bmo} = \sup_{|Q| \le 1} \frac{1}{|Q|} \int_{Q} |f(x) -f_Q| \, dx + \sup_{|Q| \ge 1} \frac{1}{|Q|} \int_{Q} |f(x)| \, dx < \infty, \end{equation*} where $Q$ ranges over all cubes in $\mathbb{R}^n$. It is known that $L^\infty \subset bmo$. It is also known that the dual space of $h^1$ coincides with $bmo$. We recall the definition of Besov spaces. 
Let $\psi_k \in \mathcal{S}(\mathbb{R}^n), k \ge 0,$ be such that \begin{align} \label{partition} \begin{split} &\mathop{\mathrm{supp}} \psi_0 \subset \{\xi \in \mathbb{R}^n : |\xi| \le 2\}, \quad \mathop{\mathrm{supp}} \psi_k \subset \{\xi \in \mathbb{R}^n : 2^{k-1} \le |\xi| \le 2^{k+1}\}, \quad k \ge 1, \\ & \|\partial^\alpha \psi_k\|_{L^\infty} \le C_\alpha 2^{-k |\alpha|}, \quad \alpha \in \mathbb{N}^n_0,\ k \ge 0, \\ & \sum_{k = 0}^\infty \psi_{k}(\xi) = 1, \quad \xi \in \mathbb{R}^n. \end{split} \end{align} The Besov space $B^s_{p, q} = B^s_{p, q}(\mathbb{R}^n)$, $0< p, q \le \infty$, $s \in \mathbb{R}$, is defined to be the set of all $f \in \mathcal{S}^\prime(\mathbb{R}^n)$ such that \begin{align*} \|f\|_{B^s_{p, q}} = \left\| 2^{k s} \left\| \psi_k(D)f(x) \right\|_{L^p_x(\mathbb{R}^n)} \right\|_{\ell^q_k(\mathbb{N}_0)} < \infty. \end{align*} It is known that the definition of Besov spaces is independent of the choice of $\psi_k$, $k=0, 1, 2, \dots$, up to the equivalence of quasi-norms. If $1\le p, q < \infty$, then the dual space of $B^s_{p, q}$ coincides with $B^{-s}_{p^\prime, q^\prime}$. The following embedding relations are well known; \begin{align} &B^s_{p, q_1} \hookrightarrow B^s_{p, q_2}, \label{Bq1Bq2} \quad \text{if} \quad q_1 \le q_2, \\ & B^0_{p, \min\{p, 2\}} \hookrightarrow h^p \hookrightarrow B^0_{p, \max\{p, 2\}}, \quad \text{if} \quad 0<p<\infty, \label{BhB} \\ & B^{0}_{\infty, 1} \hookrightarrow L^{\infty} \hookrightarrow B^{0}_{\infty, \infty}, \label{BLBinfty} \\ & bmo \hookrightarrow B^{0}_{\infty, \infty}. \notag \end{align} As a consequence of \eqref{Bq1Bq2}, \eqref{BhB} and \eqref{BLBinfty}, we have $h^p \hookrightarrow B^0_{p, \infty}$, $0 < p \le \infty$, which means \begin{equation} \label{embd-hpB0pinfty} \sup_{k \in \mathbb{N}_0} \|\psi_k(D)f\|_{L^p} \lesssim \|f\|_{h^p}. \end{equation} For more basic properties about Besov spaces, see, e.g., Triebel \cite{Triebel-ToFS}. 
It is known that the $L^p$-norm in the definition of $B^s_{p, q}$-norm can be replaced by the $h^p$-norm. More precisely, the following proposition was given by Qui \cite{Qui}. \begin{prop}[\cite{Qui}]\label{propQui} Let $0< p, q \le \infty$ and $s \in \mathbb{R}$. Then, \begin{equation}\label{Bhpequiv} \|f\|_{B^s_{p, q}} \approx \left\| 2^{k s} \left\| \psi_k(D)f(x) \right\|_{h^p_x(\mathbb{R}^n)} \right\|_{\ell^q_{k}(\mathbb{N}_0)}. \end{equation} \end{prop} We end this section by recalling the definition and some properties of the Wiener amalgam space. Let $\kappa \in \mathcal{S}(\mathbb{R}^n)$ be such that $\mathop{\mathrm{supp}} \kappa$ is compact and \begin{align} \label{part-Wiener} \left| \sum_{\mu \in \mathbb{Z}^n} \kappa(\xi - \mu) \right| \ge 1, \quad \xi \in \mathbb{R}^n. \end{align} The Wiener amalgam space $W^{p, q}_s =W^{p, q}_s(\mathbb{R}^n)$, $0< p, q \le \infty$, $s \in\mathbb{R}$, consists of all $f \in \mathcal{S}^\prime(\mathbb{R}^n)$ such that \[ \|f\|_{W^{p, q}_s} = \left\| \left\| \langle \mu \rangle^{s} \Box_{\mu}f(x) \right\|_{\ell^q_{\mu}(\mathbb{Z}^n)} \right\|_{L^p_x(\mathbb{R}^n)} < \infty, \] where $\Box_{\mu}f= \kappa(D-\mu)f = \mathcal{F}^{-1}[\kappa(\cdot-\mu) \widehat{f}]$. We simply write $W^{p, q} = W^{p, q}_{0}$. The space $W^{p, q}_{s}$ does not depend on the choice of $\kappa$ up to equivalence of quasi-norms. For the definition of the Wiener amalgam space, see Triebel \cite{Triebel-ZAA}. The embedding relations between Lebesgue, local Hardy spaces and Wiener amalgam spaces are well investigated as follows. \begin{lem}[\cite{CKS, GWYZ}] \label{embd} Let $0< p, p_1, p_2, q_1, q_2 \le \infty$. 
Then, \begin{align} &W^{p_1, q_1}_s \hookrightarrow W^{p_2, q_2}_s \quad \text{if} \quad p_1 \le p_2, \quad \text{and} \quad q_1 \le q_2; \label{monotone} \\ &W^{p, 2}_{\alpha(p)} \hookrightarrow h^p \quad \text{if} \quad 0< p< \infty, \quad \text{where} \quad \alpha(p) = (n/2) - \min \{n/2, n/p\}; \label{Wh} \\ &h^p \hookrightarrow W^{p, 2}_{\beta(p)} \quad \text{if} \quad 0< p< \infty, \quad \text{where} \quad \beta(p) = (n/2) - \max \{n/2, n/p\}; \label{hW} \\ &W^{\infty, 1} \hookrightarrow L^\infty. \label{WLinfty} \end{align} \end{lem} The embedding relations \eqref{Wh} and \eqref{hW} are given by Cunanan-Kobayashi-Sugimoto \cite{CKS} for the case $1< p< \infty$ and by Guo-Wu-Yang-Zhao \cite{GWYZ} for the case $0< p \le 1$. The embedding \eqref{WLinfty} follows from the definition of the Wiener amalgam space. The proof of \eqref{monotone} can be found in Kato-Miyachi-Tomita \cite{KMT-JFA}. We will use these embedding relations in the proof of Theorem \ref{main1}. The idea of using the Wiener amalgam spaces comes from the recent works of T. Kato, A. Miyachi and N. Tomita (see \cite{KMT-JPDOA, KMT-JMSJ, KMT-JFA}). In the rest of this section, we shall show the following proposition. \begin{prop} \label{Keyprop} Let $N \ge 2$, $0 < p_0, p_1, \dots, p_N \le \infty$, and $1/p_0 = 1/p_1 + \dots + 1/p_N$. Let $R, R_1, \dots, R_N \ge 1$. Let $\Lambda, \Lambda_1, \dots, \Lambda_N$ be subsets of $\mathbb{Z}^n$ satisfying \begin{align*} \Lambda = \{\nu \in \mathbb{Z}^n : |\nu| \lesssim R\}, \quad \Lambda_j = \{\nu \in \mathbb{Z}^n : |\nu| \approx R_j\}, \quad j=1, \dots, N. \end{align*} Suppose that $R_1 = \max_{1 \le j \le N} R_j$ and that $R_2= \max_{2 \le j \le N} R_j$.
Then the estimate \begin{align*} & \left\| \left\| \sum_{\substack{ \boldsymbol{\nu} \in \Lambda_1 \times \dots \times \Lambda_N \\ \nu_1 + \dots + \nu_N = \tau} } \prod_{j=1}^N |\Box_{\nu_j} f_j| \right\|_{\ell^2_\tau(\Lambda)} \right\|_{L^{p_0}} \lesssim \min \{R_2^{n/2}, R^{n/2}\} \prod_{j=3}^N R_j^{n/2} \prod_{j=1}^N R_j^{-\beta(p_j)} \|f_j\|_{h^{p_j}} \end{align*} holds with the implicit constant independent of $R_1, \dots, R_N$ and $R$. \end{prop} \begin{proof} Notice that $|\Lambda| \lesssim R^n$ and $|\Lambda_j| \approx R_j^n$, $j=1, \dots, N$. First, we have by Young's inequality and H\"older's inequality \begin{align*} \left\| \sum_{\substack{ \boldsymbol{\nu} \in \Lambda_1 \times \dots \times \Lambda_N \\ \nu_1 + \dots + \nu_N = \tau }} \prod_{j=1}^N |\Box_{\nu_j} f_j| \right\|_{\ell^2_\tau(\Lambda)} &\le \left\| \sum_{\substack{ \boldsymbol{\nu} \in \Lambda_1 \times \dots \times \Lambda_N \\ \nu_1 + \dots + \nu_N = \tau }} \prod_{j=1}^N |\Box_{\nu_j} f_j| \right\|_{\ell^2_\tau(\mathbb{Z}^n)} \\ &\le \left\| \Box_{\nu_1} f_1 \right\|_{\ell^2_{\nu_1}(\Lambda_1)} \prod_{j=2}^N \left\| \Box_{\nu_j} f_j \right\|_{\ell^1_{\nu_j}(\Lambda_j)} \\ &\lesssim \left\| \Box_{\nu_1} f_1 \right\|_{\ell^2_{\nu_1}(\Lambda_1)} \prod_{j=2}^N R_j^{n/2} \left\| \Box_{\nu_j} f_j \right\|_{\ell^2_{\nu_j}(\Lambda_j)}. \end{align*} Hence, this estimate and H\"older's inequality yield that \begin{align} \label{est-R2} \left\| \left\| \sum_{\substack{ \boldsymbol{\nu} \in \Lambda_1 \times \dots \times \Lambda_N \\ \nu_1 + \dots + \nu_N = \tau }} \prod_{j=1}^N |\Box_{\nu_j} f_j| \right\|_{\ell^2_\tau(\Lambda)} \right\|_{L^{p_0}} \lesssim \prod_{j = 2}^N R_j^{n/2} \prod_{j=1}^N \left\| \left\| \Box_{\nu_j} f_j \right\|_{\ell^2_{\nu_j}(\Lambda_j)} \right\|_{L^{p_j}}. 
\end{align} On the other hand, it follows from H\"older's inequality that \begin{align*} \sum_{\substack{ \boldsymbol{\nu} \in \Lambda_1 \times \dots \times \Lambda_N \\ \nu_1 + \dots + \nu_N = \tau }} \prod_{j=1}^N |\Box_{\nu_j} f_j| &= \sum_{\nu_1 \in \Lambda_1} |\Box_{\nu_1} f_1| \sum_{ \substack{ (\nu_2, \dots, \nu_N) \in \Lambda_2 \times \dots \times \Lambda_N \\ \nu_2 + \dots + \nu_N = \tau -\nu_1} } \prod_{j=2}^N |\Box_{\nu_j} f_j| \\ &\le \left\| \Box_{\nu_1} f_1 \right\|_{\ell^2_{\nu_1}(\Lambda_1)} \left\| \sum_{ \substack{ (\nu_2, \dots, \nu_N) \in \Lambda_2 \times \dots \times \Lambda_N \\ \nu_2 + \dots + \nu_N = \tau - \nu_1} } \prod_{j=2}^N |\Box_{\nu_j} f_j| \right\|_{\ell^2_{\nu_1}(\mathbb{Z}^n)} \\ &= \left\| \Box_{\nu_1} f_1 \right\|_{\ell^2_{\nu_1}(\Lambda_1)} \left\| \sum_{ \substack{ (\nu_2, \dots, \nu_N) \in \Lambda_2 \times \dots \times \Lambda_N \\ \nu_2 + \dots + \nu_N = \mu} } \prod_{j=2}^N |\Box_{\nu_j} f_j| \right\|_{\ell^2_{\mu}(\mathbb{Z}^n)}. \end{align*} By Young's inequality and H\"older's inequality, we have \begin{equation*} \left\| \sum_{ \substack{ (\nu_2, \dots, \nu_N) \in \Lambda_2 \times \dots \times \Lambda_N \\ \nu_2 + \dots + \nu_N = \mu} } \prod_{j=2}^N |\Box_{\nu_j} f_j| \right\|_{\ell^2_{\mu}(\mathbb{Z}^n)} \lesssim \prod_{j=3}^N R_j^{n/2} \prod_{j=2}^N \left\| \Box_{\nu_j} f_j \right\|_{\ell^2_{\nu_j}(\Lambda_j)}. \end{equation*} Hence we obtain by H\"older's inequality \begin{align} \label{est-R} \left\| \left\| \sum_{\substack{ \boldsymbol{\nu} \in \Lambda_1 \times \dots \times \Lambda_N \\ \nu_1 + \dots + \nu_N = \tau }} \prod_{j=1}^N |\Box_{\nu_j} f_j| \right\|_{\ell^2_\tau(\Lambda)} \right\|_{L^{p_0}} &\lesssim R^{n/2} \prod_{j=3}^N R_j^{n/2} \prod_{j=1}^N \left\| \left\| \Box_{\nu_j} f_j \right\|_{\ell^2_{\nu_j}(\Lambda_j)} \right\|_{L^{p_j}}. 
\end{align} Therefore, combining \eqref{est-R2} and \eqref{est-R}, we obtain \[ \left\| \left\| \sum_{\substack{ \boldsymbol{\nu} \in \Lambda_1 \times \dots \times \Lambda_N \\ \nu_1 + \dots + \nu_N = \tau }} \prod_{j=1}^N |\Box_{\nu_j} f_j| \right\|_{\ell^2_\tau(\Lambda)} \right\|_{L^{p_0}} \lesssim \min \{R_2^{n/2}, R^{n/2}\} \prod_{j=3}^N R_j^{n/2} \prod_{j=1}^N \left\| \left\| \Box_{\nu_j} f_j \right\|_{\ell^2_{\nu_j}(\Lambda_j)} \right\|_{L^{p_j}}. \] Since $\langle \nu_j \rangle \approx R_j$ if $\nu_j \in \Lambda_{j}$, it follows from the embedding $h^{p_j} \hookrightarrow W^{p_j, 2}_{\beta(p_j)}$ that \begin{align*} \left\| \left\| \Box_{\nu_j} f_j \right\|_{\ell^2_{\nu_j}(\Lambda_j)} \right\|_{L^{p_j}} &\lesssim R_j^{-\beta(p_j)} \left\| \left\| \langle \nu_j \rangle^{\beta(p_j)} \Box_{\nu_j} f_j \right\|_{\ell^2_{\nu_j}(\mathbb{Z}^n)} \right\|_{L^{p_j}} \\ & \lesssim R_j^{-\beta(p_j)} \|f_j\|_{h^{p_j}}, \quad j=1, \dots, N. \end{align*} The proof is complete. \end{proof} \begin{rem} \label{Keyrem} By the Cauchy-Schwarz inequality $\|a\|_{\ell^1(\Lambda)} \le |\Lambda|^{1/2} \|a\|_{\ell^2(\Lambda)} \lesssim R^{n/2} \|a\|_{\ell^2(\Lambda)}$ and Proposition \ref{Keyprop}, we also have \begin{align*} \left\| \left\| \sum_{\substack{ \boldsymbol{\nu} \in \Lambda_1 \times \dots \times \Lambda_N \\ \nu_1 + \dots + \nu_N = \tau} } \prod_{j=1}^N |\Box_{\nu_j} f_j| \right\|_{\ell^1_\tau(\Lambda)} \right\|_{L^{p_0}} \lesssim R^{n/2} \min \{R_2^{n/2}, R^{n/2}\} \prod_{j=3}^N R_j^{n/2} \prod_{j=1}^N R_j^{-\beta(p_j)} \|f_j\|_{h^{p_j}}. \end{align*} We also use this estimate in the proof of Theorem \ref{main1}. \end{rem} \section{Proof of Theorem \ref{main1}} In this section, we shall prove Theorem \ref{main1}. Let $0< p, p_j, q, q_j \le \infty$ and $s, s_j \in \mathbb{R}$, $j=1, \dots, N$, be the same as in Theorem \ref{main1}. Throughout this section, we always assume that $\sigma \in S^{m}_{0,0}(\mathbb{R}^n, N)$ with $m$ given by \eqref{criticalorder}. We use the notation $\boldsymbol{\xi} = (\xi_1, \dots, \xi_N) \in (\mathbb{R}^n)^N$.
The following method using the Fourier series expansion goes back at least to Coifman-Meyer \cite{CM-Asterisque, CM-AIF}. Let $\varphi, \widetilde{\varphi} \in \mathcal{S}(\mathbb{R}^n)$ be such that \begin{align*} &\mathop{\mathrm{supp}} \varphi \subset [-1, 1]^n, \quad \sum_{\nu \in \mathbb{Z}^n} \varphi(\xi-\nu) = 1, \quad \xi \in \mathbb{R}^n, \\ &\mathop{\mathrm{supp}} \widetilde{\varphi} \subset [-3, 3]^n, \quad 0 \le \widetilde{\varphi} \le 1, \quad \widetilde{\varphi} = 1 \quad \text{on} \quad [-1, 1]^n. \end{align*} We remark that $\varphi$ and $\widetilde{\varphi}$ satisfy \eqref{part-Wiener}. We decompose the symbol $\sigma = \sigma(x, \boldsymbol{\xi})$ as \begin{align*} \sigma(x, \boldsymbol{\xi}) &= \sum_{\boldsymbol{\nu} = (\nu_1, \dots, \nu_N) \in (\mathbb{Z}^n)^N} \sigma(x, \boldsymbol{\xi}) \prod_{j=1}^N \varphi(\xi_j-\nu_j) = \sum_{\boldsymbol{\nu} \in (\mathbb{Z}^n)^N} \sigma_{\boldsymbol{\nu}}(x, \boldsymbol{\xi}), \end{align*} where $\sigma_{\boldsymbol{\nu}}(x, \boldsymbol{\xi}) = \sigma(x, \boldsymbol{\xi}) \prod_{j=1}^N \varphi(\xi_j-\nu_j)$. We define \[ S_{\boldsymbol{\nu}}(x, \boldsymbol{\xi}) = \sum_{\boldsymbol{\ell} \in (\mathbb{Z}^n)^N} \sigma_{\boldsymbol{\nu}} (x, \boldsymbol{\xi}-2\pi \boldsymbol{\ell}). 
\] Since $S_{\boldsymbol{\nu}}(x, \boldsymbol{\xi}) = \sigma_{\boldsymbol{\nu}}(x, \boldsymbol{\xi})$ if $\boldsymbol{\xi} \in \boldsymbol{\nu} + [-3, 3]^{Nn}$, we have \[ \sigma_{\boldsymbol{\nu}}(x, \boldsymbol{\xi}) = S_{\boldsymbol{\nu}}(x, \boldsymbol{\xi}) \prod_{j=1}^N \widetilde{\varphi}(\xi_j-\nu_j) \] Furthermore, since $S_{\boldsymbol{\nu}}$ is a $2\pi \mathbb{Z}^{Nn}$-periodic function with respect to the $\boldsymbol{\xi}$-variable, the Fourier series expansion yields that \[ \sigma_{\boldsymbol{\nu}}(x, \boldsymbol{\xi}) = \sum_{\boldsymbol{\mu} = (\mu_1, \dots, \mu_N) \in (\mathbb{Z}^n)^N} P_{\boldsymbol{\nu}, \boldsymbol{\mu}}(x) \prod_{j=1}^N e^{i \mu_j \cdot \xi_j} \widetilde{\varphi}(\xi_j-\nu_j), \] where \begin{align*} P_{\boldsymbol{\nu}, \boldsymbol{\mu}}(x) = \frac{1}{(2\pi)^{Nn}} \int_{\boldsymbol{\nu} + [-\pi, \pi]^{Nn}} e^{-i \boldsymbol{\mu} \cdot \boldsymbol{y}} \sigma_{\boldsymbol{\nu}}(x, \boldsymbol{y}) \, d\boldsymbol{y}. \end{align*} It follows from integration by parts that \begin{align*} &P_{\boldsymbol{\nu}, \boldsymbol{\mu}}(x) = \langle \boldsymbol{\mu} \rangle^{-2M} Q_{\boldsymbol{\nu}, \boldsymbol{\mu}}(x), \end{align*} where \begin{align*} &Q_{\boldsymbol{\nu}, \boldsymbol{\mu}}(x) = \frac{1}{(2\pi)^{Nn}} \int_{\boldsymbol{\nu} + [-\pi, \pi]^{Nn}} e^{-i \boldsymbol{\mu} \cdot \boldsymbol{y}} (I-\Delta_{\boldsymbol{y}})^{M} \sigma_{\boldsymbol{\nu}}(x, \boldsymbol{y}) \, d\boldsymbol{y}. 
\end{align*} We remark that, for $\alpha \in \mathbb{N}^n_0$, \begin{equation}\label{estPnu} |\partial^\alpha_x Q_{\boldsymbol{\nu}, \boldsymbol{\mu}}(x)| \lesssim \langle \boldsymbol{\nu} \rangle^{m} \end{equation} holds for all $\boldsymbol{\nu}, \boldsymbol{\mu} \in (\mathbb{Z}^n)^N$, since $\sigma_{\boldsymbol{\nu}}$ satisfies \[ | \partial_x^{\alpha} \partial_{\boldsymbol{\xi}}^{\boldsymbol{\beta}} \sigma_{\boldsymbol{\nu}}(x, \boldsymbol{\xi}) | \le C_{\alpha, \boldsymbol{\beta}} \langle \boldsymbol{\nu} \rangle^m, \quad \alpha \in \mathbb{N}_0^n,\ \ \boldsymbol{\beta} \in (\mathbb{N}_0^n)^N. \] Thus we can write $T_{\sigma}$ as \[ T_{\sigma}(f_1, \dots, f_N)(x) = \sum_{\boldsymbol{\mu} \in (\mathbb{Z}^n)^N} \langle \boldsymbol{\mu} \rangle^{-2M} \sum_{\boldsymbol{\nu} \in (\mathbb{Z}^n)^N} Q_{\boldsymbol{\nu}, \boldsymbol{\mu}}(x) \prod_{j=1}^N \Box_{\nu_j}f_j(x+ \mu_j). \] Choosing the integer $M$ so large that $2M \min \{1, p, q\}>Nn$, we obtain \[ \left\| T_{\sigma}(f_1, \dots, f_N) \right\|_{B^{s}_{p, q}} \lesssim \sup_{\boldsymbol{\mu} \in (\mathbb{Z}^n)^N} \left\| \sum_{\boldsymbol{\nu} \in (\mathbb{Z}^n)^N} Q_{\boldsymbol{\nu}, \boldsymbol{\mu}} \prod_{j=1}^N \Box_{\nu_j}f_j(\cdot + \mu_j) \right\|_{B^s_{p, q}}. \] Let $\psi_{\ell_j} \in \mathcal{S}(\mathbb{R}^n),\ \ell_j \in \mathbb{N}_0$, $j=0, 1, \dots, N$, be the same partition of unity as in the definition of Besov spaces.
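We remark how the supremum over $\boldsymbol{\mu}$ in the estimate above arises: since $B^s_{p, q}$ satisfies the $r$-triangle inequality $\|f+g\|_{B^s_{p, q}}^r \le \|f\|_{B^s_{p, q}}^r + \|g\|_{B^s_{p, q}}^r$ with $r = \min\{1, p, q\}$, we have \[ \left\| T_{\sigma}(f_1, \dots, f_N) \right\|_{B^{s}_{p, q}}^r \le \sum_{\boldsymbol{\mu} \in (\mathbb{Z}^n)^N} \langle \boldsymbol{\mu} \rangle^{-2Mr} \left\| \sum_{\boldsymbol{\nu} \in (\mathbb{Z}^n)^N} Q_{\boldsymbol{\nu}, \boldsymbol{\mu}} \prod_{j=1}^N \Box_{\nu_j}f_j(\cdot + \mu_j) \right\|_{B^s_{p, q}}^r, \] and the sum $\sum_{\boldsymbol{\mu} \in (\mathbb{Z}^n)^N} \langle \boldsymbol{\mu} \rangle^{-2Mr}$ converges because $2Mr = 2M\min\{1, p, q\} > Nn$.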
We further decompose the sum on the right hand side above as follows: \begin{align*} &\sum_{\boldsymbol{\nu} \in (\mathbb{Z}^n)^N} Q_{\boldsymbol{\nu}, \boldsymbol{\mu}}(x) \prod_{j=1}^N \Box_{\nu_j}f_j(x+ \mu_j) \\ &= \sum_{\boldsymbol{\ell} = (\ell_0, \ell_1, \dots, \ell_N) \in (\mathbb{N}_0)^{N+1}} \sum_{\boldsymbol{\nu} \in (\mathbb{Z}^n)^N} \psi_{\ell_0}(D)Q_{\boldsymbol{\nu}, \boldsymbol{\mu}}(x) \prod_{j=1}^N \Box_{\nu_j} \psi_{\ell_j}(D)f_j(x+ \mu_j) \\ &= \sum_{\boldsymbol{\ell} \in (\mathbb{N}_0)^{N+1}} \sum_{\boldsymbol{\nu} \in (\mathbb{Z}^n)^N} Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}}(x) \prod_{j=1}^N \Box_{\nu_j} F^{j}_{\ell_j, \mu_j}(x), \end{align*} where we set $Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}} = \psi_{\ell_0}(D)Q_{\boldsymbol{\nu}, \boldsymbol{\mu}}$ and $F^{j}_{\ell_j, \mu_j} = \psi_{\ell_j}(D)f_j(\cdot + \mu_j)$, $j=1, \dots, N$. Now, we divide the sum with respect to the variable $\boldsymbol{\ell}$ into the following $N$ parts: \begin{align*} & \Lambda_1 = \left\{\boldsymbol{\ell} \in (\mathbb{N}_0)^{N+1} : \ell_j \le \ell_1, \ j=2, \dots, N \right\}, \\ & \Lambda_k = \left\{\boldsymbol{\ell} \in (\mathbb{N}_0)^{N+1} : \begin{array}{l} \ell_j < \ell_k, \ j= 1, \dots, k-1, \\ \ell_j \le \ell_k, \ j = k+1, \dots, N \end{array} \right\}, \quad k=2, \dots, N-1, \\ & \Lambda_{N} = \left\{\boldsymbol{\ell} \in (\mathbb{N}_0)^{N+1} : \ell_j < \ell_N, \ j=1, \dots, N-1 \right\}. \end{align*} By symmetry, it is sufficient to deal with the sum over $\Lambda_1$. Furthermore, we divide the set $\Lambda_1$ into the following three parts: \begin{align*} \Lambda_1 = &\big\{ \boldsymbol{\ell} \in \Lambda_1 : \ell_0 \ge \ell_1-3 \big\} \\ &\cup \big\{ \boldsymbol{\ell} \in \Lambda_1 : \ell_0 \le \ell_1-4, \quad \max \{\ell_2, \dots, \ell_N\} \le \ell_1-N-2 \big\} \\ &\cup \big\{ \boldsymbol{\ell} \in \Lambda_1 : \ell_0 \le \ell_1-4, \quad \max\{ \ell_2, \dots, \ell_N \} \ge \ell_1-N-1 \big\}.
\end{align*} By symmetry, it is sufficient to consider the case $\max\{\ell_2, \dots, \ell_N\} = \ell_2$. In particular we may assume that $\ell_j \le \ell_2$, $j=3, \dots, N$. Summarizing the above observations, it is sufficient to prove that the estimates \begin{equation}\label{GOAL!!!!} S_i := \left\| \sum_{\boldsymbol{\ell} \in D_{i}} \sum_{\boldsymbol{\nu} \in (\mathbb{Z}^n)^N} Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}} \prod_{j=1}^N \Box_{\nu_j} F^j_{\ell_j, \mu_j} \right\|_{B^{s}_{p, q}} \lesssim \prod_{j=1}^N \|f_{j}\|_{B^{s_j}_{p_j, q_j}}, \quad i=1, 2, 3, \end{equation} hold with the implicit constant independent of $\boldsymbol{\mu}$, where \begin{align*} & D_{1} = \big\{ \boldsymbol{\ell} \in (\mathbb{N}_0)^{N+1} : \ell_0 \ge \ell_1-3, \quad \ell_j \le \ell_1, \ j=2, \dots, N \big\}, \\ & D_{2} = \left\{ \boldsymbol{\ell} \in (\mathbb{N}_0)^{N+1} : \ell_0 \le \ell_1-4, \quad \ell_2 \le \ell_1-N-2, \quad \ell_j \le \ell_2 \le \ell_1, \ j=3, \dots, N \right\}, \\ & D_{3} = \left\{ \boldsymbol{\ell} \in (\mathbb{N}_0)^{N+1} : \ell_0 \le \ell_1-4, \quad \ell_2 \ge \ell_1-N-1, \quad \ell_j \le \ell_2 \le \ell_1,\ j=3, \dots, N \right\}. \end{align*} \begin{lem}\label{EST-x} Let $m \in \mathbb{R}$ and $L \in \mathbb{N}_0$. If $\sigma \in S^{m}_{0,0}(\mathbb{R}^n, N)$, then \[ \|Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}}\|_{L^\infty} \lesssim 2^{-\ell_0L} \langle \boldsymbol{\nu} \rangle^{m} \] holds for all $\boldsymbol{\nu}, \boldsymbol{\mu} \in (\mathbb{Z}^n)^N$ and $\ell_0 \in \mathbb{N}_0$. \end{lem} \begin{proof} We first consider the case $\ell_0 \ge 1$. 
Since $\mathcal{F}^{-1}\psi_{\ell_0}$ has the moment condition, Taylor's expansion gives that \begin{align*} Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}}(x) &= \int_{\mathbb{R}^n} \mathcal{F}^{-1}\psi_{\ell_0}(y) Q_{\boldsymbol{\nu}, \boldsymbol{\mu}}(x-y) \, dy \\ &= \int_{\mathbb{R}^n} \mathcal{F}^{-1}\psi_{\ell_0}(y) \left( Q_{\boldsymbol{\nu}, \boldsymbol{\mu}}(x-y) - \sum_{|\alpha| \le L-1} \frac{(-y)^\alpha}{\alpha !} \partial^\alpha Q_{\boldsymbol{\nu}, \boldsymbol{\mu}}(x) \right) \, dy \\ &= \int_{\mathbb{R}^n} \mathcal{F}^{-1}\psi_{\ell_0}(y) \left( L \sum_{|\alpha| = L} \frac{(-y)^\alpha}{\alpha !} \int_0^1 (1-t)^{L-1} [\partial^\alpha Q_{\boldsymbol{\nu}, \boldsymbol{\mu}}](x-ty) \, dt \right) \, dy \end{align*} Hence, it follows from \eqref{estPnu} that \begin{align*} |Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}}(x)| &\lesssim \int_{\mathbb{R}^n} |\mathcal{F}^{-1}\psi_{\ell_0}(y)| \left( \sum_{|\alpha| = L} |(-y)^\alpha| \int_0^1 |\partial^\alpha Q_{\boldsymbol{\nu}, \boldsymbol{\mu}}(x-ty)| \, dt \right) \, dy \\ &\lesssim \langle \boldsymbol{\nu} \rangle^{m} \int_{\mathbb{R}^n} \frac{2^{\ell_0 n}|y|^L}{(1+2^{\ell_0}|y|)^{L+n+\epsilon}} \, dy \\ &\lesssim 2^{-\ell_0L} \langle \boldsymbol{\nu} \rangle^{m}. \end{align*} Here, we used the estimate $|\mathcal{F}^{-1} \psi_{\ell_0}(x)| \lesssim 2^{\ell_0n}(1+2^{\ell_0}|x|)^{-(L+n+\epsilon)}$ in the second inequality. We obtain the same estimate for $\ell_0=0$ without using the moment condition. The proof is complete. \end{proof} Now, we shall prove the estimate \eqref{GOAL!!!!}. In what follows, we use the notation $f_{j, k} = \psi_k(D)f_j$ for $j = 1, \dots, N$ and $k \in \mathbb{N}_0$. 
Since \begin{align} &\mathop{\mathrm{supp}} \mathcal{F} [Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}}] \subset \{|\xi| \le 2^{\ell_0+1}\}, \label{suppP} \\ &\mathop{\mathrm{supp}} \mathcal{F} [\Box_{\nu_j} F^{j}_{\ell_j, \mu_j}] \subset \nu_j + [-3, 3]^n, \quad j=1, \dots, N, \label{suppF} \end{align} we have \begin{equation} \label{suppunif} \mathop{\mathrm{supp}} \mathcal{F} \left[ Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}} \prod_{j=1}^N \Box_{\nu_j} F^{j}_{\ell_j, \mu_j} \right] \subset \nu_1 + \dots + \nu_N + \left[ -2^{\ell_0+d}, 2^{\ell_0+d} \right]^n \end{equation} for some $d = d_N > 0$ depending on $N$. Thus we have \begin{align} \label{diag-rest} \begin{split} &\psi_k(D) \left[ \sum_{\boldsymbol{\ell} \in D_{i}} \sum_{\boldsymbol{\nu} \in (\mathbb{Z}^n)^N} Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}} \prod_{j=1}^N \Box_{\nu_j} F^j_{\ell_j, \mu_j} \right] \\ &= \psi_k(D) \left[ \sum_{\boldsymbol{\ell} \in D_i} \sum_{\boldsymbol{\nu} : \nu_1 + \dots + \nu_N \in \Lambda_{k, \ell_0}} Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}} \prod_{j=1}^N \Box_{\nu_j} F^{j}_{\ell_j, \mu_j} \right] \end{split} \end{align} with \begin{align*} \Lambda_{k, \ell_0} = \{ \nu \in \mathbb{Z}^n \, : \, \mathop{\mathrm{supp}} \psi_k \cap (\nu + [-2^{\ell_0+d}, 2^{\ell_0+d}]^{n}) \neq \emptyset \}. \end{align*} We remark that $|\nu| \lesssim 2^{\ell_0+k}$ if $ \nu \in \Lambda_{k, \ell_0}$, and consequently $|\Lambda_{k, \ell_0}| \lesssim 2^{(\ell_0+k)n}$. We set \begin{align*} R_{\boldsymbol{\ell}, k} = R_{\boldsymbol{\ell}, k, \boldsymbol{\mu}} = \sum_{\boldsymbol{\nu} : \nu_1 + \dots + \nu_N \in \Lambda_{k, \ell_0}} Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}} \prod_{j=1}^N \Box_{\nu_j} F^{j}_{\ell_j, \mu_j}. 
\end{align*} For $M_0 \in \mathbb{R}$, we now prove that the following estimate holds for all $\boldsymbol{\ell} = (\ell_0, \ell_1, \dots, \ell_N) \in (\mathbb{N}_0)^{N+1}$ and $k \in \mathbb{N}_0$: \begin{align}\label{Hulk} &\left\| R_{\boldsymbol{\ell}, k} \right\|_{h^p} \lesssim 2^{-\ell_0 M_0} 2^{k \alpha(p)} 2^{\ell_1m} 2^{\min\{\ell_2, k \}n/2} \prod_{j=3}^{N} 2^{\ell_j n/2} \prod_{j=1}^N 2^{-\ell_j\beta(p_j)} \|f_{j, \ell_j}\|_{h^{p_j}}. \end{align} Here the implicit constant does not depend on $\boldsymbol{\mu}$. Firstly, we prove that the estimate \eqref{Hulk} holds with $0< p< \infty$. By the embedding $W^{p_0, 2}_{\alpha(p)} \hookrightarrow W^{p, 2}_{\alpha(p)} \hookrightarrow h^p$, we have \begin{align*} \|R_{\boldsymbol{\ell}, k}\|_{h^p} &\lesssim \|R_{\boldsymbol{\ell}, k}\|_{W^{p_0, 2}_{\alpha(p)}} = \left\| \left\| \langle \tau \rangle^{\alpha(p)} \Box_{\tau} R_{\boldsymbol{\ell}, k} \right\|_{\ell^2_{\tau}(\mathbb{Z}^n)} \right\|_{L^{p_0}}. \end{align*} Recalling \eqref{suppP} and \eqref{suppF}, and that the function $\kappa$ has compact support (see the definition of $\Box_{\tau}$), we write \begin{align*} &\|R_{\boldsymbol{\ell}, k}\|_{h^{p}} \lesssim \left\| \left\| \langle \tau \rangle^{\alpha(p)} \Box_{\tau} \left[ \sum_{\substack{\boldsymbol{\nu} : \nu_1 + \dots + \nu_N \in \Lambda_{k, \ell_0} \\ |\nu_1 + \dots + \nu_N - \tau| \lesssim 2^{\ell_0} } } Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}} \prod_{j=1}^N \Box_{\nu_j} F^{j}_{\ell_j, \mu_j} \right] \right\|_{\ell^2_\tau(\mathbb{Z}^n)} \right\|_{L^{p_0}} \\ &\lesssim 2^{\ell_0 n/p_0} \sum_{|\lambda| \lesssim 2^{\ell_0}} \left\| \left\| \langle \tau-\lambda \rangle^{\alpha(p)} \Box_{\tau-\lambda} \left[ \sum_{\boldsymbol{\nu} : \nu_1 + \dots + \nu_N = \tau} Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}} \prod_{j=1}^N \Box_{\nu_j} F^{j}_{\ell_j, \mu_j} \right] \right\|_{\ell^2_\tau(\Lambda_{k, \ell_0})} \right\|_{L^{p_0}} \\ &\lesssim 2^{\ell_0 (\alpha(p) + n/p_0)} 2^{k \alpha(p)}
\sum_{|\lambda| \lesssim 2^{\ell_0}} \left\| \left\| \Box_{\tau-\lambda} R_\tau \right\|_{\ell^2_\tau(\Lambda_{k, \ell_0})} \right\|_{L^{p_0}}, \end{align*} where we set \begin{equation} \label{Nanjakore} R_\tau = \sum_{\boldsymbol{\nu} : \nu_1 + \dots + \nu_N = \tau} Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}} \prod_{j=1}^N \Box_{\nu_j} F^{j}_{\ell_j, \mu_j}, \end{equation} and where we used the inequality $\langle \tau -\lambda \rangle \lesssim 2^{\ell_0+k}$ for $|\lambda| \lesssim 2^{\ell_0}$ and $\tau \in \Lambda_{k, \ell_0}$. Now, by recalling \eqref{suppunif}, we have $\mathop{\mathrm{supp}} \mathcal{F}[\Box_{\tau-\lambda}R_\tau] \subset \{|\zeta - \tau| \lesssim 2^{\ell_0}\}$. Hence Nikol'skij's inequality (see, e.g., \cite[Remark 1.3.2/1]{Triebel-ToFS}) gives that \begin{align} \label{pointwiseNikolskij} |\Box_{\tau-\lambda} R_\tau(x)| \le \|\Phi(\cdot)R_\tau(x-\cdot)\|_{L^1} \lesssim 2^{\ell_0 n (1/r_0 -1)} \|\Phi(\cdot)R_\tau(x-\cdot)\|_{L^{r_0}}, \end{align} where $\Phi = \mathcal{F}^{-1} \kappa$ and $r_0 = \min \{1, p_0\}$. By Minkowski's inequality, we have \begin{align*} \left\| \left\| \Box_{\tau-\lambda} R_\tau(x) \right\|_{\ell^2_\tau(\Lambda_{k, \ell_0})} \right\|_{L^{p_0}_x} &\lesssim 2^{\ell_0 n (1/r_0-1)} \left\| \left\| \left\| \Phi(y)R_\tau(x-y) \right\|_{L^{r_0}_y} \right\|_{\ell^2_\tau(\Lambda_{k, \ell_0})} \right\|_{L^{p_0}_x} \\ &\lesssim 2^{\ell_0 n (1/r_0-1)} \left\| \left\| R_\tau(x) \right\|_{\ell^2_\tau(\Lambda_{k, \ell_0})} \right\|_{L^{p_0}_x}. \end{align*} Hence, by applying Lemma \ref{EST-x} with $L$ replaced by $M_0+\alpha(p) + n/p_0 +n/r_0 +n/2$, we obtain \begin{align*} \|R_{\boldsymbol{\ell}, k}\|_{h^{p}} &\lesssim 2^{-\ell_0(M_0+n/2)} 2^{k \alpha(p)} \left\| \left\| \sum_{\boldsymbol{\nu} : \nu_1 + \dots + \nu_N = \tau} \langle \boldsymbol{\nu} \rangle^{m} \prod_{j=1}^N \left| \Box_{\nu_j} F^{j}_{\ell_j, \mu_j} \right| \right\|_{\ell^2_\tau(\Lambda_{k, \ell_0})} \right\|_{L^{p_0}}.
\end{align*} If $\mathop{\mathrm{supp}} \varphi(\cdot-\nu_j) \cap \mathop{\mathrm{supp}} \psi_{\ell_j} = \emptyset$, then $\Box_{\nu_j} F^{j}_{\ell_j, \mu_j} = 0$. Hence the sum over the $\nu_j$-variable can be restricted to \[ \Lambda_{\ell_j} = \{\nu_j \in \mathbb{Z}^n : \mathop{\mathrm{supp}} \varphi(\cdot-\nu_j) \cap \mathop{\mathrm{supp}} \psi_{\ell_j} \neq \emptyset \}, \quad j=1, \dots, N. \] Notice that $|\nu_j| \approx 2^{\ell_j}$ if $\nu_j \in \Lambda_{\ell_j}$. Furthermore, since $\langle \boldsymbol{\nu} \rangle \approx 2^{\ell_1}$ for all $\boldsymbol{\nu} = (\nu_1, \dots, \nu_N) \in \Lambda_{\ell_1} \times \dots \times \Lambda_{\ell_N}$ if $\ell_1 \ge \ell_j$, $j=2, \dots, N$, we have \begin{align*} \sum_{\boldsymbol{\nu} : \nu_1 + \dots + \nu_N = \tau} \langle \boldsymbol{\nu} \rangle^{m} \prod_{j=1}^N \left| \Box_{\nu_j} F^{j}_{\ell_j, \mu_j} \right| \lesssim 2^{\ell_1 m} \sum_{\substack{\boldsymbol{\nu} \in \Lambda_{\ell_1} \times \dots \times \Lambda_{\ell_N} \\ \nu_1 + \dots + \nu_N = \tau}} \prod_{j=1}^N \left| \Box_{\nu_j} F^{j}_{\ell_j, \mu_j} \right|, \end{align*} Recalling that $|\nu_j| \approx 2^{\ell_j}$ if $\nu_j \in \Lambda_{\ell_j}$ and $|\tau| \lesssim 2^{\ell_0 + k}$ if $\tau \in \Lambda_{k, \ell_0}$, we have by Proposition \ref{Keyprop} \begin{align*} \left\| \left\| \sum_{\substack{\boldsymbol{\nu} \in \Lambda_{\ell_1} \times \dots \times \Lambda_{\ell_N} \\ \nu_1 + \dots + \nu_N = \tau}} \prod_{j=1}^N \left| \Box_{\nu_j} F^{j}_{\ell_j, \mu_j} \right| \right\|_{\ell^2_\tau(\Lambda_{k, \ell_0})} \right\|_{L^{p_0}} &\lesssim 2^{\min\{\ell_2, \ell_0+k\}n/2} \prod_{j=3}^N 2^{\ell_jn/2} \prod_{j=1}^N 2^{-\ell_j\beta(p_j)} \|F^{j}_{\ell_j, \mu_j}\|_{h^{p_j}} \\ &\le 2^{\ell_0 n/2} 2^{\min\{\ell_2, k \}n/2} \prod_{j=3}^N 2^{\ell_jn/2} \prod_{j=1}^N 2^{-\ell_j\beta(p_j)} \left\|f_{j, \ell_j}\right\|_{h^{p_j}}, \end{align*} where we used the fact that the $h^{p_j}$-norms are translation invariant in the last inequality. 
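For the reader's convenience, we note that the powers of $2$ obtained in the preceding estimates combine as \[ 2^{-\ell_0(M_0+n/2)} 2^{k \alpha(p)} \times 2^{\ell_1 m} \times 2^{\ell_0 n/2} 2^{\min\{\ell_2, k\}n/2} = 2^{-\ell_0 M_0} 2^{k \alpha(p)} 2^{\ell_1 m} 2^{\min\{\ell_2, k\}n/2}, \] which, multiplied by $\prod_{j=3}^{N} 2^{\ell_j n/2} \prod_{j=1}^N 2^{-\ell_j\beta(p_j)} \|f_{j, \ell_j}\|_{h^{p_j}}$, is exactly the right hand side of \eqref{Hulk}.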
Collecting the above estimates, we obtain \eqref{Hulk} with $0 < p < \infty$. Next we shall show that the estimate \eqref{Hulk} holds with $p=\infty$. Notice that $\alpha(\infty) = n/2$. By the embedding relation $W^{p_0, 1} \hookrightarrow W^{\infty, 1} \hookrightarrow L^{\infty}$, we have \[ \left\| R_{\boldsymbol{\ell}, k} \right\|_{L^\infty} \lesssim \left\| R_{\boldsymbol{\ell}, k} \right\|_{W^{p_0, 1}} = \left\| \left\| \Box_\tau R_{\boldsymbol{\ell}, k} \right\|_{\ell^1_\tau} \right\|_{L^{p_0}}. \] By the same argument as in the case $0< p < \infty$, it holds that \begin{align*} \left\| \left\| \Box_\tau R_{\boldsymbol{\ell}, k} \right\|_{\ell^1_\tau} \right\|_{L^{p_0}} &= \left\| \left\| \Box_\tau \left[ \sum_{\substack{\boldsymbol{\nu} : \nu_1 + \dots + \nu_N \in \Lambda_{k, \ell_0} \\ |\nu_1 + \dots + \nu_N - \tau| \lesssim 2^{\ell_0} }} Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}} \prod_{j=1}^N \Box_{\nu_j} F^{j}_{\ell_j, \mu_j} \right] \right\|_{\ell^1_\tau} \right\|_{L^{p_0}} \\ &\lesssim 2^{\ell_0 n/p_0} \sum_{|\lambda| \lesssim 2^{\ell_0}} \left\| \left\| \Box_{\tau-\lambda} R_\tau \right\|_{\ell^1_\tau(\Lambda_{k, \ell_0})} \right\|_{L^{p_0}} \end{align*} with $R_\tau$ given by \eqref{Nanjakore}. Furthermore, it follows from \eqref{pointwiseNikolskij} that \begin{align*} \left\| \left\| \Box_{\tau-\lambda} R_\tau \right\|_{\ell^1_\tau(\Lambda_{k, \ell_0})} \right\|_{L^{p_0}} \lesssim 2^{\ell_0 n (1/r_0-1)} \left\| \left\| R_\tau \right\|_{\ell^1_\tau(\Lambda_{k, \ell_0})} \right\|_{L^{p_0}}.
\end{align*} Combining these estimates with Lemma \ref{EST-x} with $L$ replaced by $M_0+ n/p_0 +n/r_0 +n$, we obtain \begin{align*} \left\| R_{\boldsymbol{\ell}, k} \right\|_{L^\infty} &\lesssim 2^{-\ell_0(M_0+n)} \left\| \left\| \sum_{\boldsymbol{\nu} : \nu_1 + \dots + \nu_N = \tau} \langle \boldsymbol{\nu} \rangle^{m} \prod_{j=1}^N \left| \Box_{\nu_j} F^{j}_{\ell_j, \mu_j} \right| \right\|_{\ell^1_\tau(\Lambda)} \right\|_{L^{p_0}} \\ &\lesssim 2^{-\ell_0(M_0+n)} 2^{\ell_1 m} \left\| \left\| \sum_{\substack{\boldsymbol{\nu} \in \Lambda_{\ell_1} \times \dots \times \Lambda_{\ell_N} \\ \nu_1 + \dots + \nu_N = \tau}} \prod_{j=1}^N \left| \Box_{\nu_j} F^{j}_{\ell_j, \mu_j} \right| \right\|_{\ell^1_\tau(\Lambda)} \right\|_{L^{p_0}}. \end{align*} Hence, it follows from the estimate in Remark \ref{Keyrem} that the right hand side just above can be estimated by \begin{align*} & 2^{-\ell_0(M_0+n)} 2^{\ell_1 m} \times 2^{(\ell_0+k)n/2} 2^{\min \{\ell_2, \ell_0+k\} n/2} \times 2^{-\ell_1\beta(p_1)} \|f_{1, \ell_1}\|_{h^{p_1}} \times 2^{-\ell_2\beta(p_2)} \|f_{2, \ell_2}\|_{h^{p_2}} \\ &\le 2^{-\ell_0 M_0} 2^{kn/2} 2^{\ell_1 m} 2^{\min \{\ell_2, k\}n/2} \times 2^{-\ell_1\beta(p_1)} \|f_{1, \ell_1}\|_{h^{p_1}} \times 2^{-\ell_2\beta(p_2)} \|f_{2, \ell_2}\|_{h^{p_2}}. \end{align*} The proof of \eqref{Hulk} is complete. Now, we shall return to the proof of the estimate \eqref{GOAL!!!!}. By the embedding relation \eqref{Bq1Bq2}, it is sufficient to prove \eqref{GOAL!!!!} with $0 < q, q_1, \dots, q_N \le \infty$ satisfying $1/q = \sum_{j=1}^N 1/q_j$. We set $r=\min\{1, p, q\}$. \noindent \textbf{Estimate for $S_1$ :} If $\ell_0 \ge \ell_1-3$ and $\ell_1 \ge \ell_j$, $j=2, \dots, N$, then we have \begin{equation*} \mathop{\mathrm{supp}} \mathcal{F} \left[ Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}} \prod_{j=1}^N \Box_{\nu_j} F^j_{\ell_j, \mu_j} \right] \subset \big\{ |\zeta| \le 2^{\ell_0+a_N} \big\} \end{equation*} with some positive integer $a_N$ depending only on $N$.
Hence \[ \psi_k(D) R_{\boldsymbol{\ell}, k} = 0 \quad \text{if} \quad \ell_0 \le k-1-a_N, \] and consequently, \begin{align} \label{psildel} \left\| \psi_k(D) \sum_{\boldsymbol{\ell} \in D_1} R_{\boldsymbol{\ell}, k} \right\|_{L^{p}}^r \le \sum_{\substack{\boldsymbol{\ell} \in D_1 \\ \ell_0 \ge k-a_N}} \left\| \psi_k(D) R_{\boldsymbol{\ell}, k} \right\|_{L^{p}}^r \lesssim \sum_{ \substack{ \boldsymbol{\ell} \in D_1 \\ \ell_0 \ge k-a_N }} \left\| R_{\boldsymbol{\ell}, k} \right\|_{h^p}^r, \end{align} where we used the estimate \eqref{embd-hpB0pinfty} in $\lesssim$. Then, by \eqref{Hulk}, the right hand side above is estimated by \begin{align*} & \sum_{ \substack{ \boldsymbol{\ell} \in D_1 \\ \ell_0 \ge k-a_N }} \Big( 2^{-\ell_0 M_0} 2^{k \alpha(p)} 2^{\ell_1 m} \prod_{j=2}^N 2^{\ell_j n/2} \prod_{j=1}^N 2^{-\ell_j\beta(p_j)} \left\| f_{j, \ell_j} \right\|_{h^{p_j}} \Big)^{r} = 2^{k \alpha(p) r} U_k, \end{align*} where \[ U_k = \sum_{\ell_0 : \ell_0 \ge k - a_N} 2^{-\ell_0 M_0 r} \sum_{\ell_1 : \ell_1 \le \ell_0+3} 2^{\ell_1 (m - \beta(p_1)) r} \|f_{1, \ell_1}\|_{h^{p_1}}^{r} \prod_{j=2}^N \sum_{\ell_j : \ell_j \le \ell_1} 2^{\ell_j\theta(p_j) r} \left\| f_{j, \ell_j} \right\|_{h^{p_j}}^{r} \] and \begin{align}\label{MsMarvel} \theta(p_j) = (n/2) - \beta(p_j) = \max\{n/p_j, n/2\}. \end{align} Notice that we have $\left\| f_{j, \ell_j} \right\|_{h^{p_j}} = \|\psi_{\ell_j}(D)f_j\|_{h^{p_j}} \lesssim 2^{-\ell_j s_j} \|f_j\|_{B^{s_j}_{p_j, q_j}}$ by \eqref{Bhpequiv} and \eqref{Bq1Bq2}. Hence we have \begin{align*} U_k &\lesssim \left( \sum_{\ell_0 : \ell_0 \ge k - a_N} 2^{-\ell_0 M_0 r} \sum_{\ell_1 : \ell_1 \le \ell_0+3} 2^{\ell_1(m -\beta(p_1)-s_1) r} \prod_{j=2}^N \sum_{\ell_j : \ell_j \le \ell_1} 2^{\ell_j(\theta(p_j) - s_j)r} \right) \prod_{j=1}^N \|f_{j}\|_{B^{s_j}_{p_j, q_j}}^r \\ &\lesssim \left( \sum_{\ell_0 : \ell_0 \ge k-a_N} 2^{-\ell_0(M_0-C) r} \right) \prod_{j=1}^N \|f_{j}\|_{B^{s_j}_{p_j, q_j}}^r. 
\end{align*} Here $C > 0$ is a sufficiently large constant satisfying \[ \sum_{\ell_1 : \ell_1 \le \ell_0+3} 2^{\ell_1(m -\beta(p_1)-s_1) r} \prod_{j=2}^N \sum_{\ell_j : \ell_j \le \ell_1} 2^{\ell_j(\theta(p_j) - s_j)r} \le 2^{\ell_0 C r}. \] Choosing $M_0$ sufficiently large, we obtain \begin{align*} S_1 &\lesssim \left\| 2^{k s} \left\| \psi_k(D) \sum_{\boldsymbol{\ell} \in D_1} R_{\boldsymbol{\ell}, k} \right\|_{L^p_{}} \right\|_{\ell^q_k } \\ &\lesssim \left\| 2^{k (s +\alpha(p))} \left( \sum_{\ell_0 : \ell_0 \ge k-a_N} 2^{-\ell_0(M_0-C) r} \right)^{1/r} \right\|_{\ell^q_k} \prod_{j=1}^N \|f_{j}\|_{B^{s_j}_{p_j, q_j}} \lesssim \prod_{j=1}^N \|f_{j}\|_{B^{s_j}_{p_j, q_j}}. \end{align*} The estimate for $S_1$ is complete. \noindent \textbf{Estimate for $S_2$ :} If $\ell_0 \le \ell_1-4$ and $\ell_j \le \ell_1-N-2$ for all $j =2, \dots, N$, then \begin{equation*} \mathop{\mathrm{supp}} \mathcal{F} \left[ Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}} \prod_{j=1}^N \Box_{\nu_j} F^j_{\ell_j, \mu_j} \right] \subset \big\{ 2^{\ell_1-b_N} \le |\zeta| \le 2^{\ell_1+b_N} \big\} \end{equation*} with some positive integer $b_N$ depending only on $N$. Thus we see that \[ \psi_k(D) R_{\boldsymbol{\ell}, k} = 0 \quad \text{if} \quad |\ell_1 - k| \ge b_N + 1. \] Hence it follows from the same argument as in \eqref{psildel} that \begin{align*} \left\| \psi_k(D) \sum_{\boldsymbol{\ell} \in D_2} R_{\boldsymbol{\ell}, k} \right\|_{L^{p}}^r &\lesssim \sum_{ \substack{ \boldsymbol{\ell} \in D_{2} \\ |\ell_1-k| \le b_N } } \left\| R_{\boldsymbol{\ell}, k} \right\|_{h^{p}}^r.
\end{align*} By using \eqref{Hulk} and taking the sum over $\ell_0$, we have \begin{align*} \sum_{ \substack{ \boldsymbol{\ell} \in D_{2} \\ |\ell_1-k| \le b_N } } \left\| R_{\boldsymbol{\ell}, k} \right\|_{h^{p}}^r &\lesssim 2^{k \alpha(p) r} V_k, \end{align*} where \begin{align*} V_k &= \sum_{\ell_1 : |\ell_1-k| \le b_N} 2^{\ell_1(m - \beta(p_1)) r} \|f_{1, \ell_1}\|_{h^{p_1}}^{r} \prod_{j=2}^N \sum_{\ell_j : \ell_j \le \ell_1} 2^{\ell_j\theta(p_j) r} \left\| f_{j, \ell_j} \right\|_{h^{p_j}}^{r} \\ &= \sum_{\ell : |\ell|\le b_N} 2^{(k+\ell)(m-\beta(p_1))r} \|f_{1, k+\ell}\|_{h^{p_1}}^{r} \prod_{j=2}^N \sum_{\ell_j : \ell_j \le k+\ell} 2^{\ell_j \theta(p_j) r} \left\| f_{j, \ell_j} \right\|_{h^{p_j}}^{r}. \end{align*} Thus, we have \begin{align*} S_2 &\lesssim \left\| 2^{k s} \left\| \psi_k(D) \sum_{\boldsymbol{\ell} \in D_2} R_{\boldsymbol{\ell}, k} \right\|_{L^{p}_{}} \right\|_{\ell^q_k} \\ &\lesssim \sum_{\ell : |\ell|\le b_N} \left\| 2^{k (s +\alpha(p))} 2^{(k+\ell)(m-\beta(p_1))} \|f_{1, k+\ell}\|_{h^{p_1}} \prod_{j=2}^N \left( \sum_{\ell_j : \ell_j \le k+\ell} 2^{\ell_j \theta(p_j) r} \left\| f_{j, \ell_j} \right\|_{h^{p_j}}^{r} \right)^{1/r} \right\|_{\ell^q_k} \\ &\le \sum_{\ell : |\ell|\le b_N} 2^{-\ell (s +\alpha(p))} \left\| 2^{k(m-\beta(p_1) +s +\alpha(p))} \|f_{1, k}\|_{h^{p_1}} \prod_{j=2}^N \left( \sum_{\ell_j : \ell_j \le k} 2^{\ell_j \theta(p_j) r} \left\| f_{j, \ell_j} \right\|_{h^{p_j}}^{r} \right)^{1/r} \right\|_{\ell^q_k} \\ &\lesssim \left\| 2^{k(m-\beta(p_1) +s +\alpha(p))} \|f_{1, k}\|_{h^{p_1}} \prod_{j=2}^N \left( \sum_{\ell_j : \ell_j \le k} 2^{\ell_j \theta(p_j) r} \left\| f_{j, \ell_j} \right\|_{h^{p_j}}^{r} \right)^{1/r} \right\|_{\ell^q_k}, \end{align*} where we shifted the summation variable $k$ by $\ell$ in the third inequality. Since $\alpha(p) - n/2 =-\min\{n/p, n/2\},$ we have $m -\beta(p_1) + s + \alpha(p) = s_1 + \sum_{j=2}^N (s_j -\theta(p_j))$.
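For the reader's convenience, this last identity can be checked directly, using that $m$ is the critical order $m = \min \{ n/p, n/2 \} - \sum_{j=1}^N \max \{ n/p_j, n/2 \} + \sum_{j=1}^N s_j - s$ (see \eqref{mcrit} below): since $\theta(p_j) = \max\{n/p_j, n/2\}$ and $\beta(p_j) + \theta(p_j) = n/2$ by \eqref{MsMarvel}, and since $\alpha(p) + \min\{n/p, n/2\} = n/2$, we have
\begin{align*}
m - \beta(p_1) + s + \alpha(p)
&= \Big( \alpha(p) + \min \Big\{ \frac{n}{p}, \frac{n}{2} \Big\} \Big) - \big( \beta(p_1) + \theta(p_1) \big) - \sum_{j=2}^N \theta(p_j) + \sum_{j=1}^N s_j \\
&= s_1 + \sum_{j=2}^N \big( s_j - \theta(p_j) \big).
\end{align*}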
Hence we obtain \begin{align*} S_2 &\lesssim \left\| 2^{k s_1} \|f_{1, k}\|_{h^{p_1}} \prod_{j=2}^N \left( \sum_{\ell_j : \ell_j \le k} 2^{(k-\ell_j)(s_j - \theta(p_j)) r} 2^{\ell_j s_j r} \left\| f_{j, \ell_j} \right\|_{h^{p_j}}^{r} \right)^{1/r} \right\|_{\ell^q_k} \\ &\le \left\| 2^{k s_1} \|f_{1, k}\|_{h^{p_1}_{}} \right\|_{\ell^{q_1}_k} \prod_{j=2}^N \left\| \left( \sum_{\ell_j : \ell_j \le k} 2^{(k-\ell_j) (s_j -\theta(p_j)) r} 2^{\ell_j s_j r} \left\| f_{j, \ell_j} \right\|_{h^{p_j}_{}}^{r} \right)^{1/r} \right\|_{\ell^{q_j}_k} \\ &\approx \|f_{1}\|_{B^{s_1}_{p_1, q_1}} \prod_{j=2}^N \left\| \sum_{\ell_j : \ell_j \le k} 2^{(k-\ell_j) (s_j -\theta(p_j)) r} 2^{\ell_j s_j r} \left\| f_{j, \ell_j} \right\|_{h^{p_j}}^{r} \right\|_{\ell^{q_j/r}_k}^{1/r}, \end{align*} where we used \eqref{Bhpequiv} in the last inequality. Since $q_j \ge q \ge r$ and $s_j -\theta(p_j) <0$, $j=2, \dots, N$, it follows from Young's inequality that \begin{align*} \left\| \sum_{\ell_j : \ell_j \le k} 2^{(k-\ell_j) (s_j -\theta(p_j)) r} 2^{\ell_j s_j r} \left\| f_{j, \ell_j} \right\|_{h^{p_j}}^{r} \right\|_{\ell^{q_j/r}_k}^{1/r} &\le \left\| 2^{k (s_j -\theta(p_j))r} \right\|_{\ell^1_k}^{1/r} \left\| 2^{k s_j r} \left\|f_{j, k}\right\|_{h^{p_j}_{}}^r \right\|_{\ell^{q_j/r}_k}^{1/r} \\ &\lesssim \left\| 2^{k s_j} \left\|f_{j, k}\right\|_{h^{p_j}_{}} \right\|_{\ell^{q_j}_k} \approx \|f_{j}\|_{B^{s_j}_{p_j, q_j}}, \quad j=2, \dots, N, \end{align*} where we used Proposition \ref{propQui} in the last inequality. Combining these estimates, we obtain the desired estimate. \noindent \textbf{Estimate for $S_3$ :} If $\ell_0 \le \ell_1-4$ and $\ell_1 \ge \ell_j$, $j =2, \dots, N$, then we have \begin{equation*} \mathop{\mathrm{supp}} \mathcal{F} \left[ Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}} \prod_{j=1}^N \Box_{\nu_j} F^j_{\ell_j, \mu_j} \right] \subset \big\{ |\zeta| \le 2^{\ell_1 + c_N} \big\} \end{equation*} with some positive integer $c_N$ depending only on $N$.
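This support property can be seen as follows (a sketch, with constants depending on the supports of the cutoff functions fixed earlier). Since the Fourier transform of a product is the convolution of the Fourier transforms, the Fourier support of each summand is contained in the sum of the Fourier supports of its factors. Each $\Box_{\nu_j} F^j_{\ell_j, \mu_j}$ has Fourier support in a set of bounded diameter around $\nu_j$, and only $\nu_j \in \Lambda_{\ell_j}$ give nonzero contributions, so that $|\nu_j| \lesssim 2^{\ell_j} \le 2^{\ell_1}$; the remaining factor $Q_{\ell_0, \boldsymbol{\nu}, \boldsymbol{\mu}}$ contributes frequencies of size $O(2^{\ell_0})$ with $\ell_0 \le \ell_1 - 4$. Summing these $N+1$ sets yields a ball $\{|\zeta| \le C_N 2^{\ell_1}\}$, so it suffices to take $c_N$ with $2^{c_N} \ge C_N$.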
Hence \[ \psi_k(D) R_{\boldsymbol{\ell}, k} = 0 \quad \text{if} \quad \ell_1 \le k-1-c_N, \] and, by the same argument as above, \begin{align*} \left\| \psi_k(D) \sum_{\boldsymbol{\ell} \in D_3} R_{\boldsymbol{\ell}, k} \right\|_{L^{p}}^r &\lesssim \sum_{ \substack{ \boldsymbol{\ell} \in D_{3} \\ \ell_1 \ge k-c_N } } \left\| R_{\boldsymbol{\ell}, k} \right\|_{h^{p}}^r. \end{align*} Since $ \alpha(p) + (n/2) = \max\{n/p^{\prime}, n/2\} = \theta(p^{\prime}), $ it follows from \eqref{Hulk} and \eqref{MsMarvel} that \begin{align*} &\left\| \psi_k(D) \sum_{\boldsymbol{\ell} \in D_3} R_{\boldsymbol{\ell}, k} \right\|_{L^{p}}^r \lesssim 2^{k \theta(p^{\prime}) r} W_k, \end{align*} where \begin{align*} W_k= &\sum_{\ell_1 : \ell_1 \ge k-c_N} 2^{\ell_1(m-\beta(p_1)) r} \|f_{1, \ell_1}\|_{h^{p_1}}^{r} \\ &\qquad\qquad\qquad \times \sum_{\ell_2 : \ell_1-N-1 \le \ell_2 \le \ell_1} 2^{-\ell_2\beta(p_2) r} \|f_{2, \ell_2}\|_{h^{p_2}}^{r} \prod_{j=3}^N \sum_{\ell_j : \ell_j \le \ell_1} 2^{\ell_j\theta(p_j) r} \left\| f_{j, \ell_j} \right\|_{h^{p_j}}^{r}. \end{align*} By a change of variable with respect to $\ell_2$, we have \begin{align*} W_k = W_{k, 0} + W_{k, 1} + \dots + W_{k, N+1} = \sum_{i=0}^{N+1} W_{k, i} \end{align*} with \begin{align*} W_{k, i} = \sum_{\ell_1 : \ell_1 \ge k-c_N} 2^{\ell_1 (m-\beta(p_1)) r} 2^{-(\ell_1 -i) \beta(p_2) r} \|f_{1, \ell_1}\|_{h^{p_1}}^{r} \|f_{2, \ell_1 -i}\|_{h^{p_2}}^{r} \prod_{j=3}^N \sum_{\ell_j : \ell_j \le \ell_1} 2^{\ell_j \theta(p_j) r} \left\| f_{j, \ell_j} \right\|_{h^{p_j}}^{r}. \end{align*} Then we have \begin{align*} S_3 \lesssim \left\| 2^{k s} \left\| \psi_k(D) \sum_{\boldsymbol{\ell} \in D_3} R_{\boldsymbol{\ell}, k} \right\|_{L^p_{}} \right\|_{\ell^q_k} \lesssim \sum_{i=0}^{N+1} \left\| 2^{k (s + \theta(p^{\prime}))} W^{1/r}_{k, i} \right\|_{\ell^q_k}. \end{align*} It is sufficient to prove the estimate for $W_{k,0}$; the same argument below works for the other terms.
By using the notation $\theta(p_j)$ given in \eqref{MsMarvel}, we can write $m -\beta(p_1) -\beta(p_2) = -s -\theta(p^{\prime}) +s_1+s_2 +\sum_{j=3}^N (s_j - \theta(p_j) ) $. Hence \begin{align*} W_{k, 0} &= \sum_{\ell_1 : \ell_1 \ge k-c_N} 2^{-\ell_1(s+\theta(p^{\prime}))r} \prod_{j=1}^2 2^{\ell_1 s_j r} \|f_{j, \ell_1}\|_{h^{p_j}}^r \prod_{j=3}^N \sum_{\ell_j : \ell_j \le \ell_1} 2^{(\ell_1-\ell_j)(s_j - \theta(p_j)) r} 2^{\ell_j s_j r} \|f_{j, \ell_j}\|_{h^{p_j}}^{r} \\ &= \sum_{\ell_1 : \ell_1 \ge k-c_N} 2^{-\ell_1(s+\theta(p^{\prime}))r} \widetilde{W}_{\ell_1}, \end{align*} where \begin{align*} \widetilde{W}_{\ell_1} = \prod_{j=1}^2 2^{\ell_1 s_j r} \|f_{j, \ell_1}\|_{h^{p_j}}^r \prod_{j=3}^N \sum_{\ell_j : \ell_j \le \ell_1} 2^{(\ell_1-\ell_j)(s_j - \theta(p_j)) r} 2^{\ell_j s_j r} \|f_{j, \ell_j}\|_{h^{p_j}}^{r}. \end{align*} Since $q \ge r$, it follows from Young's inequality that \begin{align*} \left\| 2^{k (s +\theta(p^{\prime}))} W_{k, 0}^{1/r} \right\|_{\ell^q_k} &= \left\| \sum_{\ell_1 : \ell_1 \ge k-c_N} 2^{-(\ell_1-k)(s+\theta(p^{\prime}))r} \widetilde{W}_{\ell_1} \right\|_{\ell^{q/r}_k}^{1/r} \\ &\lesssim \left\| \sum_{\ell_1 \in \mathbb{N}_0} 2^{-|\ell_1-k|(s+\theta(p^{\prime}))r} \widetilde{W}_{\ell_1} \right\|_{\ell^{q/r}_k}^{1/r} \\ &\le \left\| 2^{-k(s+\theta(p^{\prime}))r} \right\|_{\ell^1_k}^{1/r} \| \widetilde{W}_{k} \|_{\ell^{q/r}_k}^{1/r} \lesssim \| \widetilde{W}_{k} \|_{\ell^{q/r}_k}^{1/r}, \end{align*} where we used the assumption $s+ \theta(p^{\prime}) > 0$ in the last inequality. 
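For completeness, the identity $m - \beta(p_1) - \beta(p_2) = -s - \theta(p^{\prime}) + s_1 + s_2 + \sum_{j=3}^N (s_j - \theta(p_j))$ used at the beginning of this step can also be verified directly from the critical order $m$ (see \eqref{mcrit} below): since $\beta(p_j) + \theta(p_j) = n/2$ and $\theta(p^{\prime}) = \alpha(p) + n/2 = n - \min\{n/p, n/2\}$, we have
\begin{align*}
m - \beta(p_1) - \beta(p_2)
= \min \Big\{ \frac{n}{p}, \frac{n}{2} \Big\} - n - \sum_{j=3}^N \theta(p_j) + \sum_{j=1}^N s_j - s
= -s - \theta(p^{\prime}) + s_1 + s_2 + \sum_{j=3}^N \big( s_j - \theta(p_j) \big).
\end{align*}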
Furthermore, by H\"older's inequality and the same argumet as in the estimate for $S_2$, we obtain \begin{align*} \| \widetilde{W}_{k} \|_{\ell^{q/r}_k}^{1/r} &\lesssim \prod_{j=1}^2 \left\| 2^{k s_j r} \left\|f_{j, k}\right\|_{h^{p_j}_{}}^r \right\|_{\ell^{q_j/r}_k}^{1/r} \prod_{j=3}^N \left\| \sum_{\ell_j : \ell_j \le k} 2^{(k-\ell_j)(s_j - \theta(p_j)) r} 2^{\ell_j s_j r} \left\| f_{j, \ell_j} \right\|_{h^{p_j}_{}}^{r} \right\|_{\ell^{q_j/r}_k}^{1/r} \\ &\lesssim \prod_{j=1}^N \|f_j\|_{B^{s_j}_{p_j, q_j}}. \end{align*} Thus we obtain the estimate \eqref{GOAL!!!!}. The proof of Theorem \ref{main1} is complete. \section{Proof of Theorem \ref{thmnec}} In this section, we shall show the sharpness of conditions of $s_1, \dots, s_N$ and $s$. In particular, we shall give the proof of Theorem \ref{thmnec}. We now assume that the boundedness \begin{equation} \label{nec-bdd} \mathop{\mathrm{Op}}(S^m_{0,0}(\mathbb{R}^n, N)) \subset B(B^{s_1}_{p_1, q_1} \times \dots \times B^{s_N}_{p_N, q_N} \to B^s_{p, q}) \end{equation} holds with \begin{equation}\label{mcrit} m = \min \left\{ \frac{n}{p}, \frac{n}{2} \right\} - \sum_{j=1}^N \max\left\{ \frac{n}{p_j}, \frac{n}{2} \right\} + \sum_{j=1}^N s_j -s. \end{equation} By the closed graph theorem, the assumption \eqref{nec-bdd} implies that there exists $M \in \mathbb{N}$ such that \begin{align}\label{operator-norm} \begin{split} &\|T_{\sigma}\|_{B^{s_1}_{p_1, q_1} \times \dots \times B^{s_N}_{p_N, q_N} \to B^{s}_{p, q}} \\ &\quad \lesssim \max_{|\alpha|, |\beta_1|, \dots, |\beta_N| \le M} \| \langle (\xi_1, \dots, \xi_N)\rangle^{-m} \partial_x^{\alpha}\partial_{\xi_1}^{\beta_1} \dots \partial_{\xi_N}^{\beta_N} \sigma(x, \xi_1, \dots, \xi_N) \|_{L^\infty_{x, \xi_1, \dots, \xi_N}} \end{split} \end{align} for all $\sigma \in S^{m}_{0, 0}(\mathbb{R}^n, N)$. For the argument using the closed graph theorem, see \cite[Lemma 2.6]{BBMNT}. We recall the following fact given by Wainger \cite{Wainger} and Miyachi-Tomita \cite{MT-IUMJ}. 
\begin{lem}[\cite{Wainger, MT-IUMJ}]\label{lemWainger} Let $0< a < 1$, $0< b < n$, $1 \le p \le \infty$ and $\varphi \in \mathcal{S}(\mathbb{R}^n)$. For $\epsilon > 0$, we set \[ f_{a, b, \epsilon}(x) = \sum_{\nu \in \mathbb{Z}^n \setminus \{0\}} e^{-\epsilon|\nu|} |\nu|^{-b} e^{i|\nu|^a} e^{i\nu \cdot x} \varphi(x). \] If $b > (1-a)(n/2-n/p) +n/2$, then $\sup_{\epsilon >0}\|f_{a, b, \epsilon}\|_{L^p} < \infty$. \end{lem} In this section, we will use the following partition of unity $\psi_\ell$, $\ell= 0, 1, 2, \dots$, in the definition of Besov spaces: \begin{align*} &\mathop{\mathrm{supp}} \psi_0 \subset \{|\xi| \le 2^{3/4}\}, \quad \psi_0 = 1 \quad \text{on} \quad \{|\xi| \le 2^{1/4}\}, \\ &\mathop{\mathrm{supp}} \psi_\ell \subset \{2^{\ell-3/4} \le |\xi| \le 2^{\ell+3/4}\}, \quad \psi_\ell= 1 \quad \text{on} \quad \{2^{\ell -1/4} \le |\xi| \le 2^{\ell + 1/4}\}, \quad \ell \ge 1, \\ &\|\partial^{\alpha}\psi_\ell\|_{L^\infty} \le C_{\alpha} 2^{-\ell |\alpha|}, \quad \alpha \in \mathbb{N}_0^n, \\ &\sum_{\ell = 0}^\infty \psi_{\ell}(\xi) = 1, \quad \xi \in \mathbb{R}^n. \end{align*} We take functions $\widetilde{\psi}_\ell \in \mathcal{S}(\mathbb{R}^n)$, $\ell = 0, 1, 2, \dots$, such that \begin{align*} &\mathop{\mathrm{supp}} \widetilde{\psi}_0 \subset \{|\xi| \le 2^{1/4}\} \quad \text{and} \quad \widetilde{\psi}_0 = 1 \quad \text{on} \quad \{|\xi| \le 2^{1/8}\}, \\ &\mathop{\mathrm{supp}} \widetilde{\psi}_\ell \subset \{2^{\ell-1/4} \le |\xi| \le 2^{\ell+1/4}\}, \quad \widetilde{\psi}_\ell = 1 \quad \text{on} \quad \{2^{\ell -1/8} \le |\xi| \le 2^{\ell + 1/8}\}, \quad \ell \ge 1, \\ &\sup_{\ell \in \mathbb{N}_0}\|\mathcal{F}^{-1} \widetilde{\psi}_{\ell}\|_{L^1} < \infty.
\end{align*} We also use the functions $\varphi, \widetilde{\varphi} \in \mathcal{S}(\mathbb{R}^n)$ satisfying the following: \begin{align*} &\mathop{\mathrm{supp}} \varphi \subset [-1/4, 1/4]^n, \quad |\mathcal{F}^{-1}\varphi(x)| \ge 1 \quad \text{on} \quad [-1, 1]^n, \\ & \mathop{\mathrm{supp}} \widetilde{\varphi} \subset [-1/2, 1/2]^n, \quad \widetilde{\varphi} = 1 \quad \text{on} \quad [-1/4, 1/4]^n. \end{align*} \begin{lem}\label{lemNecessity1} Let $1< p_1, \dots, p_N < \infty$, $0< p< \infty$, $0 < q, q_1, \dots, q_N < \infty$ and $s, s_1, \dots, s_N \in \mathbb{R}$. If the boundedness \eqref{nec-bdd} holds with $m$ given in \eqref{mcrit}, then $s \ge -\max\{n/p^{\prime}, n/2\}$. \end{lem} \begin{proof} Let $\{c_{\boldsymbol{\mu}}\}_{\boldsymbol{\mu} \in (\mathbb{Z}^n)^{N}}$ be a sequence of complex numbers satisfying $\sup_{\boldsymbol{\mu}} |c_{\boldsymbol{\mu}}| \le 1$. We take a sufficiently large number $L > 0$. For sufficiently large $\ell \in \mathbb{N}$ satisfying $\ell > L$, we set \begin{align*} &\sigma_{\ell}(\xi_1, \dots, \xi_N) = \sum_{\boldsymbol{\mu} = (\mu_1,\dots, \mu_N) \in D_\ell} c_{\boldsymbol{\mu}} \langle \boldsymbol{\mu} \rangle^{m} \prod_{j=1}^N \varphi(\xi_j - \mu_j), \\ & \widehat{f_{1, \ell}}(\xi_1) = \widetilde{\psi}_{\ell}(\xi_1) \widehat{f_{a_1, b_1, \epsilon}}(\xi_1), \quad \widehat{f_{2, \ell}}(\xi_2) = \widetilde{\psi}_{\ell}(\xi_2) \widehat{f_{a_2, b_2, \epsilon}}(\xi_2), \\ & \widehat{f_{j, \ell}}(\xi_j) = \widetilde{\psi}_{\ell- L}(\xi_j) \widehat{f_{a_j, b_j, \epsilon}}(\xi_j), \quad j=3, \dots, N,
\end{align*} where \begin{align*} &D_{\ell} = \left\{ \boldsymbol{\mu} = (\mu_1, \dots, \mu_N) \in (\mathbb{Z}^n)^N : \begin{array}{l} \mu_1+ \mu_2 + \dots + \mu_N = 0, \quad 2^{\ell -\delta} \le |\mu_2| \le 2^{\ell + \delta}, \\ 2^{\ell -L-\delta} \le |\mu_j| \le 2^{\ell -L + \delta}, \quad j=3, \dots, N \end{array} \right\} \end{align*} with sufficiently small $\delta >0$ and \[ 0< a_j < 1, \quad b_j = (1-a_j) \left( n/2-n/p_j \right) + n/2 + \epsilon_j, \quad \epsilon_j > 0, \quad j=1, \dots, N. \] We choose the number $L$ large enough to satisfy \begin{equation}\label{setassum1} \boldsymbol{\mu} \in D_{\ell} \implies 2^{\ell -2 \delta} \le |\mu_2 + \dots + \mu_N| \le 2^{\ell +2\delta}. \end{equation} Then, since $(1+|\xi_1|+\dots +|\xi_N|) \approx (1+|\mu_1|+\dots +|\mu_N|)$ if $(\xi_1, \dots, \xi_N) \in \mathop{\mathrm{supp}} \sigma_\ell$ and since $\sup_{\boldsymbol{\mu}} |c_{\boldsymbol{\mu}}| \le 1$, we have $\sigma_{\ell} \in S^m_{0,0}(\mathbb{R}^n, N)$. Furthermore, since $\psi_{k} \widetilde{\psi}_{\ell} = \widetilde{\psi}_{\ell}$ if $k = \ell$ and $0$ otherwise, Lemma \ref{lemWainger} and Young's inequality yield that \begin{align}\label{f1} \begin{split} \|f_{j, \ell}\|_{B^{s_j}_{p_j, q_j}} &=2^{\ell s_j}\|\widetilde{\psi}_{\ell}(D)f_{a_j, b_j, \epsilon}\|_{L^{p_j}} \\ &\le 2^{\ell s_j}\|\mathcal{F}^{-1}\widetilde{\psi}_{\ell}\|_{L^1} \|f_{a_j, b_j, \epsilon}\|_{L^{p_j}} \lesssim 2^{\ell s_j}, \quad j=1, 2, \end{split} \end{align} and similarly, \begin{align}\label{f3N} \|f_{j, \ell}\|_{B^{s_j}_{p_j, q_j}} \lesssim 2^{\ell s_j}, \quad j=3, \dots, N. \end{align} Here the implicit constants are independent of $\ell$ and $\epsilon$.
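For completeness, let us indicate why $L$ can be chosen so that \eqref{setassum1} holds. For $\boldsymbol{\mu} \in D_{\ell}$, the triangle inequality gives
\[
|\mu_2| - \sum_{j=3}^N |\mu_j| \le |\mu_2 + \dots + \mu_N| \le |\mu_2| + \sum_{j=3}^N |\mu_j|,
\]
and $\sum_{j=3}^N |\mu_j| \le (N-2) 2^{\ell - L + \delta}$ by the definition of $D_{\ell}$. Hence \eqref{setassum1} follows once $L$ is so large that $(N-2) 2^{-L+\delta} \le \min\{2^{-\delta} - 2^{-2\delta}, \, 2^{2\delta} - 2^{\delta}\}$.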
Now, since $\ell$ is sufficiently large and $\delta$ is small, we have \begin{align*} & \mathop{\mathrm{supp}} \varphi (\cdot - \mu_j) \subset \{2^{\ell-1/8} \le |\xi_j| \le 2^{\ell + 1/8}\}, \quad j=1, 2, \\ & \mathop{\mathrm{supp}} \varphi (\cdot - \mu_j) \subset \{2^{\ell-L-1/8} \le |\xi_j| \le 2^{\ell-L + 1/8}\}, \quad j=3, \dots, N, \end{align*} for $\boldsymbol{\mu} = (\mu_1, \dots, \mu_N) \in D_{\ell}$, and consequently we have \begin{align*} & \varphi(\xi_j - \mu_j) \widetilde{\psi}_{\ell}(\xi_j) = \varphi(\xi_j - \mu_j), \quad j=1,2, \\ & \varphi(\xi_j - \mu_j) \widetilde{\psi}_{\ell-L}(\xi_j) = \varphi(\xi_j - \mu_j), \quad j=3, \dots, N. \end{align*} Hence, we obtain \begin{align*} T_{\sigma_\ell}(f_{1, \ell}, \dots, f_{N, \ell})(x) &= \sum_{\boldsymbol{\mu} \in D_{\ell}} c_{\boldsymbol{\mu}} \langle \boldsymbol{\mu} \rangle^{m} \prod_{j=1}^N e^{-\epsilon|\mu_j|} |\mu_j|^{-b_j} e^{i|\mu_j|^{a_j}} \mathcal{F}^{-1}[\varphi(\cdot - \mu_j)](x) \\ &= \{\Phi(x)\}^N \sum_{\boldsymbol{\mu} \in D_{\ell}} c_{\boldsymbol{\mu}} \langle \boldsymbol{\mu} \rangle^{m} \prod_{j=1}^N e^{-\epsilon|\mu_j|} |\mu_j|^{-b_j} e^{i|\mu_j|^{a_j}} \end{align*} with $\Phi = \mathcal{F}^{-1} \varphi$, where we used the fact that $\mu_1 + \dots + \mu_N = 0$ for $\boldsymbol{\mu} = (\mu_1, \dots, \mu_N) \in D_{\ell}$. Thus, if we choose $c_{\boldsymbol{\mu}} = \prod_{j=1}^N e^{-i|\mu_j|^{a_j}}$, then \[ T_{\sigma_\ell}(f_{1, \ell}, \dots, f_{N, \ell})(x) = \{\Phi(x)\}^N \sum_{\boldsymbol{\mu} \in D_{\ell}} \langle \boldsymbol{\mu} \rangle^{m} \prod_{j=1}^N e^{-\epsilon|\mu_j|} |\mu_j|^{-b_j}. \] Hence we obtain \begin{equation}\label{normest33} \|T_{\sigma_\ell}(f_{1, \ell}, \dots, f_{N, \ell})\|_{B^s_{p, q}} \approx \sum_{\boldsymbol{\mu} \in D_{\ell}} \langle \boldsymbol{\mu} \rangle^{m} \prod_{j=1}^N e^{-\epsilon|\mu_j|} |\mu_j|^{-b_j}. 
\end{equation} Combining \eqref{nec-bdd}, \eqref{f1}, \eqref{f3N}, and \eqref{normest33}, we obtain \begin{align*} \sum_{\boldsymbol{\mu} \in D_{\ell}} \langle \boldsymbol{\mu} \rangle^{m} \prod_{j=1}^N e^{-\epsilon|\mu_j|} |\mu_j|^{-b_j} \lesssim 2^{\ell \sum_{j=1}^N s_j} \end{align*} where the implicit constant does not depend on $\epsilon$. Hence, taking the limit $\epsilon \to 0$, we have \begin{align*} \sum_{\boldsymbol{\mu} \in D_{\ell}} \langle \boldsymbol{\mu} \rangle^{m} \prod_{j=1}^N |\mu_j|^{-b_j} \lesssim 2^{\ell \sum_{j=1}^N s_j}. \end{align*} By \eqref{setassum1}, the left hand side above can be written as \begin{align*} & \sum_{\boldsymbol{\mu} \in D_{\ell}} \langle \boldsymbol{\mu} \rangle^{m} \prod_{j=1}^N |\mu_j|^{-b_j} \\ &= \sum_{\substack{ 2^{\ell-\delta} \le |\mu_2| \le 2^{\ell+\delta} \\ 2^{\ell-L-\delta} \le |\mu_j| \le 2^{\ell-L+\delta}, \ j=3, \dots, N }} \langle (\mu_2 + \dots + \mu_N, \mu_2, \dots, \mu_N) \rangle^m |\mu_2+ \dots + \mu_N|^{-b_1} \prod_{j=2}^N |\mu_j|^{-b_j} \\ &\approx 2^{\ell(m-\sum_{j=1}^N b_j)} \\ &\qquad\qquad \times \mathrm{card} \left\{ (\mu_2, \dots, \mu_N) \in (\mathbb{Z}^n)^{N-1} : \begin{array}{l} 2^{\ell-\delta} \le |\mu_2| \le 2^{\ell+\delta}, \\ 2^{\ell-L-\delta} \le |\mu_j| \le 2^{\ell-L+\delta}, \ j=3, \dots, N \end{array} \right\} \\ &\approx 2^{\ell(m-\sum_{j=1}^N b_j+(N-1)n)}. \end{align*} Thus we obtain \begin{equation*} 2^{\ell(m-\sum_{j=1}^N b_j+(N-1)n)} \lesssim 2^{\ell \sum_{j=1}^N s_j} \end{equation*} with the implicit constant independent of $\ell$. Since $\ell$ is arbitrarily large, we obtain \begin{align*} \sum_{j=1}^N s_j &\ge m-\sum_{j=1}^N b_j +(N-1)n = m -\sum_{j=1}^N \left\{ (1-a_j)\left(\frac{n}{2}-\frac{n}{p_j}\right) +\frac{n}{2} + \epsilon_j \right\} +(N-1)n. 
\end{align*} Taking the limits as $a_j \to 0$ if $1 < p_j \le 2$, $a_j \to 1$ if $ 2 < p_j < \infty$, and $\epsilon_j \to 0$, we conclude that \begin{align*} \sum_{j=1}^N s_j \ge m - n + \sum_{j =1}^N \max \left\{ \frac{n}{p_j}, \frac{n}{2} \right\} = -n +\min \left\{ \frac{n}{p}, \frac{n}{2} \right\} + \sum_{j=1}^N s_j - s, \end{align*} which means $s \ge -\max \{n/p^{\prime}, n/2 \}$. The proof is complete. \end{proof} \begin{lem}\label{lemNecessity2} Let $1< p_1, \dots, p_N < \infty$, $0< p< \infty$, $1 < q, q_1, \dots, q_N < \infty$ and $s, s_1, \dots, s_N \in \mathbb{R}$. If the boundedness \eqref{nec-bdd} holds with $m$ given in \eqref{mcrit}, then $s_j \le \max \{n/p_j, n/2\}$, $j=1, \dots, N$. \end{lem} \begin{proof} It is sufficient to prove that $s_1 \le \max\{n/p_1, n/2\}$ by symmetry. \textbf{Case I : $2 < p < \infty$.} It is well known that if the multilinear pseudo-differential operator $T_{\sigma}$ with $\sigma \in S^m_{0, 0}(\mathbb{R}^n, N)$ is bounded from $B^{s_1}_{p_1, q_1} \times B^{s_2}_{p_2, q_2} \times \dots \times B^{s_N}_{p_N, q_N}$ to $B^{s}_{p, q}$, then $T_{\sigma^{*1}}$ is bounded from $B^{-s}_{p^\prime, q^\prime} \times B^{s_2}_{p_2, q_2} \times \dots \times B^{s_N}_{p_N, q_N}$ to $B^{-s_1}_{p_1^\prime, q_1^\prime}$. Here $\sigma^{*1}$ is a multilinear symbol defined by \[ \int_{\mathbb{R}^n} T_{\sigma}(f_1, f_2, \dots, f_N)(x) g(x) \, dx = \int_{\mathbb{R}^n} T_{\sigma^{*1}}(g, f_2, \dots, f_N)(x) f_1(x) \, dx. \] Since $\sigma \in S^m_{0, 0}(\mathbb{R}^n, N)$ if and only if $\sigma^{*1} \in S^{m}_{0,0}(\mathbb{R}^n, N)$, the boundedness \eqref{nec-bdd} holds if and only if \[ \mathop{\mathrm{Op}}(S^m_{0, 0}(\mathbb{R}^n, N)) \subset B(B^{-s}_{p^\prime, q^\prime} \times \dots \times B^{s_N}_{p_N, q_N} \to B^{-s_1}_{p_1^\prime, q_1^\prime}). \] Thus it follows from Lemma \ref{lemNecessity1} that $-s_1 \ge -\max\{n/(p_1^\prime)^{\prime}, n/2 \}$, that is, $s_1 \le \max\{n/p_1, n/2\}$.
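We remark that the critical order \eqref{mcrit} is preserved under this duality, so that Lemma \ref{lemNecessity1} is indeed applicable at the critical order in Case I. Indeed, using $\min\{n/p, n/2\} + \max\{n/p^{\prime}, n/2\} = n$ for $1 < p < \infty$, together with the analogous identity for $p_1$, the critical order associated with the dual boundedness is
\begin{align*}
&\min \Big\{ \frac{n}{p_1^{\prime}}, \frac{n}{2} \Big\} - \max \Big\{ \frac{n}{p^{\prime}}, \frac{n}{2} \Big\} - \sum_{j=2}^N \max \Big\{ \frac{n}{p_j}, \frac{n}{2} \Big\} + \Big( -s + \sum_{j=2}^N s_j \Big) - (-s_1) \\
&\qquad = \min \Big\{ \frac{n}{p}, \frac{n}{2} \Big\} - \sum_{j=1}^N \max \Big\{ \frac{n}{p_j}, \frac{n}{2} \Big\} + \sum_{j=1}^N s_j - s = m.
\end{align*}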
\textbf{Case I\hspace{-0.1em}I : $0< p \le 2$.} Let $L \in \mathbb{N}_0$ be sufficiently large. For $\ell \in \mathbb{N}$ satisfying $\ell > L$, we set \begin{align*} &\sigma_{\ell}(\xi_1, \dots, \xi_N) = \varphi(\xi_1) \left( \sum_{\boldsymbol{\mu} = (\mu_2, \dots, \mu_N) \in D_{\ell}} c_{\boldsymbol{\mu}} \langle \boldsymbol{\mu} \rangle^{m} \prod_{j=2}^{N} \varphi(\xi_j-\mu_j) \right), \\ & \widehat{f}_1(\xi_1) = \widetilde{\varphi}(\xi_1), \quad \widehat{f_{2, \ell}}(\xi_2) = \widetilde{\psi}_{\ell}(\xi_2) \widehat{f_{a_2, b_2, \epsilon}}(\xi_{2}), \\ &\widehat{f_{j, \ell}}(\xi_j) = \widetilde{\psi}_{\ell-L}(\xi_j) \widehat{f_{a_j, b_j, \epsilon}}(\xi_j), \quad j=3, \dots, N, \end{align*} where $\{c_{\boldsymbol{\mu}}\}_{\boldsymbol{\mu}}$ is a sequence of complex numbers such that $\sup_{\boldsymbol{\mu}} |c_{\boldsymbol{\mu}}| \le 1$, and \begin{align*} D_{\ell} = \left\{ \boldsymbol{\mu} = (\mu_2, \dots, \mu_{N}) \in (\mathbb{Z}^n)^{N-1} : \begin{array}{l} 2^{\ell - \delta} \le |\mu_2 + \dots + \mu_{N}| \le 2^{\ell + \delta}, \\ 2^{\ell - L - \delta} \le |\mu_j| \le 2^{\ell - L + \delta}, \quad j=3, \dots, N \end{array} \right\} \end{align*} with sufficiently small $\delta > 0$. We take the number $L$ sufficiently large so that \begin{align}\label{murange2} \boldsymbol{\mu} \in D_{\ell} \implies 2^{\ell - 2\delta} \le |\mu_{2}| \le 2^{\ell + 2\delta}. \end{align} We see that $\sigma_{\ell} \in S^m_{0,0}(\mathbb{R}^n, N)$ by the same argument as in the proof of Lemma \ref{lemNecessity1}, and hence \eqref{operator-norm} yields that \begin{equation}\label{operator-norm-2} \|T_{\sigma_\ell}\|_{B^{s_1}_{p_1, q_1} \times \dots \times B^{s_N}_{p_N, q_N} \to B^{s}_{p, q}} \lesssim 1 \end{equation} with the implicit constant independent of $c_{\boldsymbol{\mu}}$ and $\ell$. We also have \begin{align}\label{f2} \|f_1\|_{B^{s_1}_{p_1,q_1}} \lesssim 1, \quad \|f_{j, \ell}\|_{B^{s_j}_{p_j, q_j}} \lesssim 2^{\ell s_j}, \quad j=2, \dots, N.
\end{align} Moreover, since $\varphi\widetilde{\varphi} = \varphi$ and \begin{align*} &\varphi(\xi_2 - \mu_2) \widetilde{\psi}_{\ell}(\xi_2) = \varphi(\xi_2 - \mu_2), \\ &\varphi(\xi_j-\mu_j)\widetilde{\psi}_{\ell-L}(\xi_j) = \varphi(\xi_j-\mu_j), \quad j=3, \dots, N, \end{align*} if $\boldsymbol{\mu} \in D_{\ell}$, then we have \begin{align*} T_{\sigma_{\ell}}(f_{1}, f_{2, \ell}, \dots, f_{N, \ell})(x) &= \{\Phi(x)\}^N \sum_{\boldsymbol{\mu} \in D_{\ell}} c_{\boldsymbol{\mu}} \langle \boldsymbol{\mu} \rangle^{m} \prod_{j=2}^{N} e^{-\epsilon |\mu_j|} |\mu_j|^{-b_j} e^{i|\mu_j|^{a_j}} e^{i \mu_j \cdot x}. \end{align*} Let $\{r_{\mu}(\omega)\}_{\mu \in \mathbb{Z}^n}$, $\omega \in [0, 1]^n$, be a sequence of the Rademacher functions enumerated in such a way that their index set is $\mathbb{Z}^n$ (for the definition of the Rademacher function, see, e.g., \cite[Appendix C]{Grafakos-Classical}). If we choose $\{c_{\boldsymbol{\mu}}\}_{\boldsymbol{\mu}}$ as \[ c_{\boldsymbol{\mu}} = r_{\mu_2+ \dots + \mu_{N}}(\omega) \prod_{j=2}^{N} e^{-i |\mu_j|^{a_j}}, \] then \begin{align*} T_{\sigma_{\ell}}(f_{1}, f_{2, \ell}, \dots, f_{N, \ell})(x) &= \{\Phi(x)\}^N \sum_{\boldsymbol{\mu} \in D_{\ell}} r_{\mu_2 + \dots + \mu_N}(\omega) \langle \boldsymbol{\mu} \rangle^{m} \prod_{j=2}^{N} e^{-\epsilon |\mu_j|} |\mu_j|^{-b_j} e^{i \mu_j \cdot x} \\ &= \{\Phi(x)\}^N \sum_{\nu : 2^{\ell-\delta} \le |\nu| \le 2^{\ell + \delta}} r_{\nu}(\omega) e^{i \nu \cdot x} \sum_{\substack{\boldsymbol{\mu} \in D_{\ell} \\ \mu_2 + \dots + \mu_{N} = \nu}} \langle \boldsymbol{\mu} \rangle^{m} \prod_{j=2}^{N} e^{-\epsilon |\mu_j|} |\mu_j|^{-b_j}.
\end{align*} Since $\ell$ is sufficiently large, we have \begin{align*} \mathop{\mathrm{supp}} \mathcal{F}[T_{\sigma_{\ell}}(f_1, f_{2, \ell}, \dots, f_{N, \ell})] & \subset \bigcup_{\nu : 2^{\ell-\delta} \le |\nu| \le 2^{\ell + \delta}} \mathop{\mathrm{supp}}[\varphi * \dots * \varphi](\cdot - \nu) \subset \{ 2^{\ell-1/4} \le |\zeta| \le 2^{\ell+1/4} \}, \end{align*} and consequently, we obtain $\|T_{\sigma_{\ell}}(f_1, f_{2, \ell}, \dots, f_{N, \ell})\|_{B^{s}_{p,q}}= 2^{\ell s}\|T_{\sigma_{\ell}}(f_1, f_{2, \ell}, \dots, f_{N, \ell})\|_{L^p}$. By the assumption $|\Phi| \ge 1$ on $[-1, 1]^n$, \begin{align*} \int_{[0, 1]^n} \|T_{\sigma_{\ell}}(f_1, f_{2, \ell}, \dots, f_{N, \ell})\|_{B^{s}_{p,q}}^p \, d\omega &= 2^{\ell s p} \int_{[0, 1]^n} \|T_{\sigma_{\ell}}(f_1, f_{2, \ell}, \dots, f_{N, \ell})\|_{L^p}^p \, d\omega \\ &\gtrsim 2^{\ell s p} \int_{[-1, 1]^n} \left( \int_{[0, 1]^n} \left| \sum_{\nu : 2^{\ell-\delta} \le |\nu| \le 2^{\ell + \delta}} r_{\nu}(\omega) e^{i \nu \cdot x} d_{\nu, \epsilon} \right|^p \, d\omega \right) dx, \end{align*} where $d_{\nu, \epsilon}$ is defined by \[ d_{\nu, \epsilon} = \sum_{\substack{\boldsymbol{\mu} \in D_{\ell} \\ \mu_2 + \dots + \mu_{N} =\nu} } \langle \boldsymbol{\mu} \rangle^{m} \prod_{j=2}^{N} e^{-\epsilon |\mu_j|} |\mu_j|^{-b_j}. 
\] Hence, by Khintchine's inequality (see, e.g., \cite[Appendix C]{Grafakos-Classical}), we have \begin{align*} \int_{[-1, 1]^n} \left( \int_{[0, 1]^n} \left| \sum_{\nu : 2^{\ell-\delta} \le |\nu| \le 2^{\ell + \delta}} r_{\nu}(\omega) e^{i \nu \cdot x} d_{\nu, \epsilon} \right|^p \, d\omega \right) dx &\approx \int_{[-1, 1]^n} \left( \sum_{\nu : 2^{\ell-\delta} \le |\nu| \le 2^{\ell + \delta}} |d_{\nu, \epsilon}|^{2} \right)^{p/2} dx \\ &\approx \left( \sum_{\nu : 2^{\ell-\delta} \le |\nu| \le 2^{\ell + \delta}} |d_{\nu, \epsilon}|^{2} \right)^{p/2}. \end{align*} Combining \eqref{nec-bdd}, \eqref{operator-norm-2} and \eqref{f2}, we obtain \begin{align*} \left( \sum_{\nu : 2^{\ell-\delta} \le |\nu| \le 2^{\ell + \delta}} |d_{\nu, \epsilon}|^{2} \right)^{1/2} &\lesssim 2^{-\ell s} \times \left( \int_{[0, 1]^n} \|T_{\sigma_{\ell}}(f_1, f_{2, \ell}, \dots, f_{N, \ell})\|_{B^{s}_{p,q}}^p \, d\omega \right)^{1/p} \\ &\lesssim 2^{-\ell s} \times \|f_1\|_{B^{s_1}_{p_1, q_1}} \prod_{j=2}^N \|f_{j, \ell}\|_{B^{s_j}_{p_j, q_j}} \\ &\lesssim 2^{\ell (\sum_{j=2}^{N} s_j -s)}. \end{align*} Since the sums over $\nu$ and $\boldsymbol{\mu}$ are finite sums, we have by taking the limit $\epsilon \to 0$ \begin{equation} \label{estfordnu} \left( \sum_{\nu : 2^{\ell-\delta} \le |\nu| \le 2^{\ell + \delta}} |d_{\nu}|^{2} \right)^{1/2} \lesssim 2^{\ell (\sum_{j=2}^{N} s_j -s)} \end{equation} with \[ d_{\nu} = \sum_{\substack{\boldsymbol{\mu} \in D_{\ell} \\ \mu_2 + \dots + \mu_{N} =\nu} } \langle \boldsymbol{\mu} \rangle^{m} \prod_{j=2}^{N} |\mu_j|^{-b_j}. \] Recalling \eqref{murange2}, for $\nu \in \mathbb{Z}^n$ satisfying $2^{\ell-\delta} \le |\nu| \le 2^{\ell + \delta}$, we have \begin{align*} d_{\nu} &\approx 2^{\ell (m-\sum_{j=2}^{N} b_j)} \\ &\qquad \times \mathrm{card} \left\{ (\mu_3, \dots, \mu_N) \in (\mathbb{Z}^n)^{N-2} : 2^{\ell - L - \delta} \le |\mu_j| \le 2^{\ell - L + \delta}, \ j=3, \dots, N \right\} \\ &\approx 2^{\ell (m-\sum_{j=2}^{N} b_j+(N-2)n)}.
\end{align*} Combining the above estimates, we obtain \[ 2^{\ell(m -\sum_{j=2}^{N} b_j +(N-2)n + n/2)} \lesssim 2^{\ell (\sum_{j=2}^{N} s_j -s)} \] with the implicit constant independent of $\ell$. Since this holds for arbitrarily large $\ell$, we obtain \begin{align*} \sum_{j=2}^{N} s_j -s &\ge m -\sum_{j=2}^{N} b_j +(N-2)n + \frac{n}{2} \\ &= m - \sum_{j=2}^{N} \left\{ (1-a_j) \left( \frac{n}{2}-\frac{n}{p_j} \right) +\frac{n}{2} + \epsilon_j \right\} +(N-2)n + \frac{n}{2}. \end{align*} Taking the limits $a_j \to 0$ if $1 < p_j \le 2$, $a_j \to 1$ if $2 < p_j < \infty$, and $\epsilon_j \to 0$, we obtain \begin{align*} \sum_{j=2}^{N} s_j -s \ge m + \sum_{j=2}^{N} \max\left\{ \frac{n}{p_j}, \frac{n}{2}\right\} - \frac{n}{2} = -\max\left\{\frac{n}{p_1}, \frac{n}{2}\right\} + \sum_{j=1}^N s_j -s, \end{align*} which means $s_1 \le \max\{n/p_1, n/2\}$. The proof is complete. \end{proof} In the rest of this section, we prove Theorem \ref{thmnec}. The basic ideas go back to \cite{KMT-JFA} and \cite{Shida-Sobolev}. \begin{proof}[Proof of Theorem \ref{thmnec}] Let $0 < p, p_1, \dots, p_N \le \infty$, $0 < q, q_1, \dots, q_N \le \infty$ and $s, s_1, \dots, s_N \in \mathbb{R}$. We assume that the boundedness \begin{align*} & \mathop{\mathrm{Op}}(S^m_{0, 0}(\mathbb{R}^n, N)) \subset B(B^{s_1}_{p_1, q_1} \times \dots \times B^{s_N}_{p_N, q_N} \to B^{s}_{p, q}) \end{align*} holds with \begin{align*} m = \min \left\{ \frac{n}{p}, \frac{n}{2} \right\} - \sum_{j=1}^N \max\left\{ \frac{n}{p_j}, \frac{n}{2} \right\} + \sum_{j=1}^N s_j -s. \end{align*} It was already proved in Theorem \ref{main1} that the boundedness \[ \mathop{\mathrm{Op}}(S^{-(N-1)(n/2-t)}_{0, 0}(\mathbb{R}^n, N)) \subset B(B^{t}_{2, 2} \times \dots \times B^{t}_{2, 2} \to B^{t}_{2, 2}) \] holds for any $t \in \mathbb{R}$ satisfying $-n/2 < t < n/2$.
Then, by complex interpolation, we have \[ \mathop{\mathrm{Op}}(S^{\widetilde{m}}_{0, 0}(\mathbb{R}^n, N)) \subset B(B^{\widetilde{s}_1}_{\widetilde{p}_1, \widetilde{q}_1} \times \dots \times B^{\widetilde{s}_N}_{\widetilde{p}_N, \widetilde{q}_N} \to B^{\widetilde{s}}_{\widetilde{p}, \widetilde{q}}), \] where $0< \theta < 1$, \begin{align*} &1/\widetilde{p}_j = (1-\theta)/2 + \theta/p_j, \quad 1/\widetilde{q}_j = (1-\theta)/2 + \theta/q_j, \quad \widetilde{s}_j = (1-\theta)t + \theta s_j, \quad j=1, \dots, N, \\ &1/\widetilde{p} = (1-\theta)/2 + \theta/p, \quad 1/\widetilde{q} = (1-\theta)/2 + \theta/q, \quad \widetilde{s} = (1-\theta)t + \theta s, \end{align*} and \begin{align*} \widetilde{m} &= - (N-1)\left(\frac{n}{2} - t\right) \times (1-\theta) + \left( \min \left\{ \frac{n}{p}, \frac{n}{2} \right\} - \sum_{j=1}^N \max \left\{ \frac{n}{p_j}, \frac{n}{2} \right\} +\sum_{j=1}^N s_j -s \right) \times \theta \\ &= \min \left\{ \frac{n}{\widetilde{p}}, \frac{n}{2} \right\} - \sum_{j=1}^N \max \left\{ \frac{n}{\widetilde{p}_j}, \frac{n}{2} \right\} + \sum_{j=1}^N \widetilde{s}_j -\widetilde{s}. \end{align*} Since $1 < \widetilde{p}_j, \widetilde{q}_j, \widetilde{q} < \infty$ for sufficiently small $\theta$, it follows from Lemma \ref{lemNecessity2} that \begin{align*} &\widetilde{s}_j \le \max \left\{ \frac{n}{\widetilde{p}_j}, \frac{n}{2} \right\}, \quad j=1, \dots, N, \end{align*} which means that \begin{align*} &s_j \le \max \left\{ \frac{n}{p_j}, \frac{n}{2} \right\} + \frac{1-\theta}{\theta} \left(\frac{n}{2} - t \right), \quad j=1, \dots, N. \end{align*} Taking the limit $t \to n/2$, we obtain the desired conclusion. On the other hand, since $1 < \widetilde{p}_j < \infty$ if $\theta$ is sufficiently small, Lemma \ref{lemNecessity1} yields that \begin{align*} &\widetilde{s} \ge - \max \left\{ \frac{n}{{\widetilde{p}}^{\prime}}, \frac{n}{2} \right\}.
\end{align*} We remark that $1< \widetilde{p} < \infty$ for sufficiently small $\theta$, and hence we can write the above inequality as \[ s \ge - \max \left\{ \frac{n}{p^{\prime}}, \frac{n}{2} \right\} - \frac{1-\theta}{\theta} \left( t + \frac{n}{2} \right). \] We conclude that $s \ge -\max\{n/p^\prime, n/2\}$ by taking the limit $t \to -n/2$. The proof of Theorem \ref{thmnec} is complete. \end{proof} \end{document}
\begin{document} \title{Cubic graphical regular representations of $\PSL_2(q)$} \author[Xia]{Binzhou Xia} \address{Beijing International Center for Mathematical Research\\ Peking University\\ Beijing, 100871\\ P. R. China} \email{binzhouxia@pku.edu.cn} \author[Fang]{Teng Fang} \address{Beijing International Center for Mathematical Research\\ Peking University\\ Beijing, 100871\\ P. R. China} \email{tengfang@pku.edu.cn} \maketitle \begin{abstract} We study cubic graphical regular representations of the finite simple groups $\PSL_2(q)$. It is shown that such graphical regular representations exist if and only if $q\neq7$, and the generating set must consist of three involutions. \end{abstract} \textbf{\noindent Keywords: Cayley graph; cubic graph; graphical regular representation; projective special linear group.} \section{Introduction} Given a group $G$ and a subset $S\subset G$ such that $1\notin S$ and $S=S^{-1}:=\{g^{-1}\mid g\in S\}$, the \emph{Cayley graph} $\Cay(G,S)$ of $G$ is the graph with vertex set $G$ such that two vertices $x,y$ are adjacent if and only if $yx^{-1}\in S$. It is easy to see that $\Cay(G,S)$ is connected if and only if $S$ generates the group $G$. If one identifies $G$ with its right regular representation, then $G$ is a subgroup of $\Aut(\Cay(G,S))$. We call $\Cay(G,S)$ a \emph{graphical regular representation} (\emph{GRR} for short) of $G$ if $\Aut(\Cay(G,S))=G$. The problem of seeking graphical regular representations for given groups has been investigated for a long time. A major accomplishment for this problem is the determination of finite groups without a GRR, see~\cite[16g]{Biggs1993}. It turns out that most finite groups admit at least one GRR. For instance, every finite unsolvable group has a GRR~\cite{Godsil1981}. In contrast to unrestricted GRRs, the question of whether a group has a GRR of prescribed valency is largely open. Research on this subject has focused on small valencies~\cite{FLWX2002,Godsil1983,XX2004}.
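To make the definition above concrete, here is a minimal, self-contained sketch (our own illustration, not part of the paper; the encoding of group elements and the helper names are ours) that builds $\Cay(G,S)$ for $G=\mathrm{Sym}(3)$, with $S$ a symmetric set of two involutions, and checks the connectedness criterion stated above.

```python
# Build Cay(G, S) for G = Sym(3), with permutations encoded as tuples of
# images, and verify that x ~ y iff y x^{-1} in S gives a connected graph
# exactly when S generates G.
from itertools import permutations

def compose(g, h):
    # (g h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(len(h)))

def inverse(g):
    inv = [0] * len(g)
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

def cayley_graph(G, S):
    assert all(inverse(s) in S for s in S)  # S = S^{-1}
    # adjacency: x ~ y iff y x^{-1} in S
    return {x: [y for y in G if compose(y, inverse(x)) in S] for x in G}

def is_connected(adj, start):
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == len(adj)

G = list(permutations(range(3)))
S = [(1, 0, 2), (0, 2, 1)]  # the transpositions (0 1) and (1 2): involutions
adj = cayley_graph(G, S)
```

Since the two transpositions generate $\mathrm{Sym}(3)$, the resulting Cayley graph is connected (it is a 6-cycle), and every vertex has degree $|S|=2$.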
In~2002, Fang, Li, Wang and Xu~\cite{FLWX2002} posed the following conjecture. \begin{conjecture}\label{conj1} (\cite[Remarks on Theorem~1.3]{FLWX2002}) Every finite nonabelian simple group has a cubic GRR. \end{conjecture} Note that any GRR of a finite simple group must be connected, for otherwise its full automorphism group would be a wreath product. Hence if $\Cay(G,S)$ is a GRR of a finite simple group $G$, then $S$ is necessarily a generating set of $G$. Apart from a few small groups, Conjecture~\ref{conj1} was only known to be true for the alternating groups~\cite{Godsil1983} and the Suzuki groups~\cite{FLWX2002}, and no counterexample has been found so far. In this paper, we study cubic GRRs for finite projective special linear groups of dimension two. In particular, Theorem~\ref{thm4} shows that Conjecture~\ref{conj1} fails for $\PSL_2(7)$ but holds for all $\PSL_2(q)$ with $q\neq7$. For any subset $S$ of a group $G$, denote by $\Aut(G,S)$ the group of automorphisms of $G$ fixing $S$ setwise. Each element in $\Aut(G,S)$ is an automorphism of $\Cay(G,S)$ fixing the identity of $G$. Hence a necessary condition for $\Cay(G,S)$ to be a GRR of $G$ is that $\Aut(G,S)=1$. In~\cite{FLWX2002}, the authors showed that this condition is also sufficient for many cubic Cayley graphs of finite simple groups. We state their result for simple groups $\PSL_2(q)$ as follows, which is the starting point of the present paper. \begin{theorem}\label{thm3} \emph{(\cite{FLWX2002})} Let $G=\PSL_2(q)$ be a simple group, where $q\neq11$ is a prime power, and let $S$ be a generating set of $G$ with $S^{-1}=S$ and $|S|=3$. Then $\Cay(G,S)$ is a GRR of $G$ if and only if $\Aut(G,S)=1$. \end{theorem} The following are our three main results. \begin{theorem}\label{thm4} For any prime power $q\geqslant5$, $\PSL_2(q)$ has a cubic GRR if and only if $q\neq7$.
\end{theorem} \begin{theorem}\label{thm2} For each prime power $q$ there exist involutions $x$ and $y$ in $\PSL_2(q)$ such that the probability for a randomly chosen involution $z$ to make $$ \Cay(\PSL_2(q),\{x,y,z\}) $$ a cubic GRR of $\PSL_2(q)$ tends to $1$ as $q$ tends to infinity. \end{theorem} \begin{proposition}\label{thm1} Let $q\geqslant5$ be a prime power and $G=\PSL_2(q)$. If $\Cay(G,S)$ is a cubic GRR of $G$, then $S$ is a set of three involutions. \end{proposition} Theorem~\ref{thm2} shows that it is easy to make GRRs for $\PSL_2(q)$ from three involutions. On the other hand, Proposition~\ref{thm1} says that one can only make GRRs for $\PSL_2(q)$ from three involutions, which is a response to~\cite[Problem~1.2]{Godsil1983} as well. (Note that for a cubic Cayley graph $\Cay(G,S)$, the set $S$ either consists of three involutions, or has the form $\{x,y,y^{-1}\}$ with $o(x)=2$ and $o(y)>2$.) The proof of Theorem~\ref{thm2} is at the end of Section~\ref{sec1}, and the proofs of Theorem~\ref{thm4} and Proposition~\ref{thm1} are in Section~\ref{sec2}. We also pose two problems concerning cubic GRRs for other families of finite simple groups at the end of this paper. \section{Preliminaries} The following result is well known, see for example~\cite[II~\S7 and~\S8]{Huppert1967}. \begin{lemma}\label{lem3} Let $q\geqslant5$ be a prime power and $d=\gcd(2,q-1)$. Then $\PGL_2(q)$ has a maximal subgroup $M=\mathrm{D}_{2(q+1)}$. Moreover, $M\cap\PSL_2(q)=\mathrm{D}_{2(q+1)/d}$, and for $q\notin\{7,9\}$ it is maximal in $\PSL_2(q)$. \end{lemma} The next lemma collects facts about involutions in two-dimensional linear groups that are needed in the sequel. \begin{lemma}\label{lem1} Let $q=p^f\geqslant5$ for some prime $p$ and $G=\PSL_2(q)$. Then the following hold. \begin{itemize} \item[(a)] There is only one conjugacy class of involutions in $G$.
\item[(b)] For any involution $g$ in $G$, $$ \Cen_G(g)= \begin{cases} \mathrm{C}_2^f,\quad\textup{if $p=2$},\\ \mathrm{D}_{q-1},\quad\textup{if $q\equiv1\pmod{4}$},\\ \mathrm{D}_{q+1},\quad\textup{if $q\equiv3\pmod{4}$}. \end{cases} $$ \item[(c)] If $p>2$, then for any involution $\alpha$ in $\PGL_2(q)$, the number of involutions in $\Cen_G(\alpha)$ is at most $(q+3)/2$. \end{itemize} \end{lemma} \begin{proof} Parts~(a) and~(b) can be found in~\cite[Lemma~A.3]{GZ2010}. To prove part~(c), assume that $p>2$ and $\alpha$ is an involution in $\PGL_2(q)$. By \cite[Lemma~A.3]{GZ2010} we have $\Cen_G(\alpha)=\mathrm{D}_{q+\varepsilon}$ with $\varepsilon=\pm1$. As a consequence, the number of involutions in $\Cen_G(\alpha)$ is at most $1+(q+\varepsilon)/2\leqslant(q+3)/2$. This completes the proof. \end{proof} \section{GRRs from three involutions}\label{sec1} Recall from Lemma~\ref{lem3} that $\PSL_2(q)$ has a maximal subgroup $\mathrm{D}_{2(q+1)/d}$, where $d=\gcd(2,q-1)$. The following proposition plays the central role in this paper. \begin{proposition}\label{prop1} Let $q=p^f\geqslant11$ for some prime $p$, $d=\gcd(2,q-1)$, $G=\PSL_2(q)$, and $H=\mathrm{D}_{2(q+1)/d}$ be a maximal subgroup of $G$. Then for any two involutions $x,y$ with $\langle x,y\rangle=H$, there are at least $$ \frac{q^2-4d^2fq-(d+2)q-4d^2f-3d^2+2d-1}{d} $$ involutions $z\in G$ such that $\langle x,y,z\rangle=G$ and $\Aut(G,\{x,y,z\})=1$. \end{proposition} \begin{proof} Fix involutions $x,y$ in $H$ such that $\langle x,y\rangle=H$. Identify the elements in $G$ with their induced inner automorphisms of $G$. In this way, $G$ is a normal subgroup of $A:=\Aut(G)$, and the elements of $A$ act on $G$ by conjugation.
Denote by $V$ the set of involutions in $G$, and \begin{eqnarray*} L&=&\{y^\alpha\mid\alpha\in A,x^\alpha=x\}\cup\{y^\alpha\mid\alpha\in A,x^\alpha=y\}\\ &&\cup\ \{x^\alpha\mid\alpha\in A,y^\alpha=x\}\cup\{x^\alpha\mid\alpha\in A,y^\alpha=y\}. \end{eqnarray*} By Lemma~\ref{lem1}, $x$ and $y$ are conjugate in $A$. Hence \begin{eqnarray}\label{eq1} |L|&\leqslant&|\{\alpha\in A\mid x^\alpha=x\}|+|\{\alpha\in A\mid x^\alpha=y\}|\\ \nonumber&&+\ |\{\alpha\in A\mid y^\alpha=x\}|+|\{\alpha\in A\mid y^\alpha=y\}|\\ \nonumber&=&4|\Cen_A(x)|\leqslant4|\mathrm{Out}(G)||\Cen_G(x)|=4df|\Cen_G(x)|\leqslant4df(q+1) \end{eqnarray} by virtue of Lemma~\ref{lem1}. Denote by $I$ the set of involutions $\alpha\in A$ such that $x^\alpha=y$ and $y^\alpha=x$. Take an arbitrary $\alpha\in I$. Then $H^\alpha=\langle x^\alpha,y^\alpha\rangle=\langle y,x\rangle=H$, i.e., $\alpha\in\mathbf{N}_A(H)$. Write $\mathbf{N}_A(H)=\langle a\rangle\rtimes\langle b\rangle$ with $\langle a\rangle=\mathrm{C}_{q+1}<\PGL_2(q)$ and $\langle b\rangle=\mathrm{C}_{2f}$, so that $H=\mathbf{N}_A(H)\cap G=\langle a^d\rangle\rtimes\langle b^f\rangle$. Since $x$ and $y$ are two involutions generating $H$, we have $x=a^{di}b^f$ and $y=a^{dj}b^f$ for some integers $i$ and $j$. Moreover, either $\alpha=a^kb^f$ for some integer $k$, or $\alpha=a^{(q+1)/2}$ with $q$ odd, because $\alpha$ is an involution in $\mathbf{N}_A(H)$. However, if $\alpha=a^{(q+1)/2}$ with $q$ odd, then $\alpha$ would fix both $x$ and $y$, contrary to $x^\alpha=y\neq x$. Thus $\alpha=a^kb^f$ for some integer $k$, and in particular, $\alpha\in\PGL_2(q)$. In view of $a^{b^f}=a^{-1}$, one computes that $$ x^\alpha=(a^{di}b^f)^{a^kb^f}=(a^{di})^{a^kb^f}(b^f)^{b^fa^{-k}}=(a^{di})^{b^f}(b^f)^{a^{-k}}=a^{2k-di}b^f.
$$ This together with the assumption $x^\alpha=y=a^{dj}b^f$ gives $(a^k)^2=a^{d(i+j)}$. As a consequence, $|I|\leqslant d$. If $p=2$, then $\alpha\in\PGL_2(q)=G$, and we see from Lemma~\ref{lem1} that $|V\cap\Cen_G(\alpha)|=q-1$. If $p$ is odd, then Lemma~\ref{lem1} asserts that $|V\cap\Cen_G(\alpha)|\leqslant(q+3)/2$. To sum up, we have \begin{equation*} |\bigcup\limits_{\alpha\in I}(V\cap\Cen_G(\alpha))|\leqslant|I|\cdot\frac{q+3d-3}{d}\leqslant d\cdot\frac{q+3d-3}{d}=q+3d-3. \end{equation*} Due to Lemma~\ref{lem1} and the orbit-stabilizer theorem, $|V|\geqslant|G|/(q+1)=q(q-1)/d$, whence \begin{eqnarray}\label{eq2} &&|(V\setminus H)\setminus\bigcup\limits_{\alpha\in I}\Cen_G(\alpha)|\geqslant|V|-|V\cap H|-|\bigcup\limits_{\alpha\in I}(V\cap\Cen_G(\alpha))|\\ \nonumber&\geqslant&\frac{q(q-1)}{d}-\left(\frac{q+1}{d}+1\right)-(q+3d-3)=\frac{q^2-(d+2)q-3d^2+2d-1}{d}. \end{eqnarray} Suppose that $z$ is an involution of $G$ outside $H$. It follows that $\langle x,y,z\rangle=G$. If $\alpha$ is a nonidentity element of $\Aut(G,\{x,y,z\})$, then either $z^\alpha=z$ or $z\in L$. In the former case, $\alpha$ is an involution as $\alpha$ interchanges $x$ and $y$, and hence $z\in\bigcup_{\alpha\in I}\Cen_G(\alpha)$. This implies that for any $z$ in $(V\setminus H)\setminus\bigcup_{\alpha\in I}\Cen_G(\alpha)$ but not in $L$, one has $\langle x,y,z\rangle=G$ and $\Aut(G,\{x,y,z\})=1$. Now combining~\eqref{eq1} and~\eqref{eq2} we deduce that the number of choices of such $z$ is at least \begin{eqnarray*} |(V\setminus H)\setminus\bigcup\limits_{\alpha\in I}\Cen_G(\alpha)|-|L|&\geqslant&\frac{q^2-(d+2)q-3d^2+2d-1}{d}-4df(q+1)\\ &=&\frac{q^2-4d^2fq-(d+2)q-4d^2f-3d^2+2d-1}{d}, \end{eqnarray*} as the proposition asserts. \end{proof} As an immediate consequence of Proposition~\ref{prop1}, we give a proof for Theorem~\ref{thm2}.
\vskip0.1in \noindent\textbf{Proof of Theorem~\ref{thm2}:} Assume without loss of generality that $q\geqslant11$, and take $x,y$ to be any two involutions generating the maximal subgroup $\mathrm{D}_{2(q+1)/d}$ of $\PSL_2(q)$, where $d:=\gcd(2,q-1)$. Let $J=J_{q,x,y}$ be the set of involutions $z$ such that $\Cay(\PSL_2(q),\{x,y,z\})$ is a GRR of $\PSL_2(q)$. By Theorem~\ref{thm3}, $J$ equals the set of involutions $z$ such that $\langle x,y,z\rangle=\PSL_2(q)$ and $\Aut(\PSL_2(q),\{x,y,z\})=1$. Hence applying Proposition~\ref{prop1} we obtain \begin{eqnarray*} |J|&\geqslant&\frac{q^2-4d^2fq-(d+2)q-4d^2f-3d^2+2d-1}{d}\\ \nonumber&\geqslant&\frac{q^2-16fq-4q-16f-9}{d}\geqslant\frac{q^2-16q\log_2q-4q-16\log_2q-9}{d}. \end{eqnarray*} Moreover, combining Lemma~\ref{lem1} and the orbit-stabilizer theorem shows that the number of involutions in $\PSL_2(q)$ is at most $|\PSL_2(q)|/(q-1)=q(q+1)/d$. Thus for a randomly chosen involution $z$, the probability that $\Cay(\PSL_2(q),\{x,y,z\})$ is a cubic GRR of $\PSL_2(q)$ is at least $$ \frac{|J|}{q(q+1)/d}\geqslant\frac{q^2-16q\log_2q-4q-16\log_2q-9}{q(q+1)}, $$ which tends to $1$ as $q$ tends to infinity. \qed \section{Conclusion}\label{sec2} \begin{lemma}\label{lem5} Let $q=p^f$ for some prime $p$, and $d=\gcd(2,q-1)$. If $q\geqslant29$ or $q=23$, then $$ q^2-4d^2fq-(d+2)q-4d^2f-3d^2+2d-1>0. $$ \end{lemma} \begin{proof} It is direct to verify the conclusion for $q\in\{32,64,81,128,256\}$. Hence we only need to prove the lemma with $q\notin\{32,64,81,128,256\}$. Note that under this assumption, $q>22f$. Since $q>25/2\geqslant(16f+9)/(6f-4)$, or equivalently $4q+16f+9<6fq$, we have \begin{eqnarray*} &&q^2-4d^2fq-(d+2)q-4d^2f-3d^2+2d-1\\ &\geqslant& q^2-16fq-4q-16f-9>q^2-16fq-6fq=q(q-22f)>0 \end{eqnarray*} as desired. \end{proof} \begin{lemma}\label{lem6} Let $q$ be an odd prime power, and $x$ be an involution of $\PSL_2(q)$.
If $q\equiv3\pmod{4}$, then for any $y\in\PSL_2(q)$, there exists $\alpha\in\Aut(\PSL_2(q))$ such that $x^\alpha=x$ and $y^\alpha=y^{-1}$. \end{lemma} \begin{proof} We appeal to the isomorphism $\PSL_2(q)\cong\PSU_2(q)$. Let $i$ be an element of order four in $\mathbb{F}_{q^2}^\times$, $G=\PSU_2(q)$, $A=\Aut(G)$ and $\psi$ be the homomorphism from $\GU_2(q)$ to $G$ modulo $\mathbf{Z}(\GU_2(q))$. In view of Lemma~\ref{lem1} we may assume that $$ x= \begin{pmatrix} i&0\\ 0&-i \end{pmatrix} ^\psi $$ ($x$ is indeed an element of $\PSL_2(q)$ since $i^2=-1$). By~\cite[II~8.8]{Huppert1967} one has $$ y= \begin{pmatrix} a&b\\ -b^q&a^q \end{pmatrix} ^\psi, $$ where $a,b\in\mathbb{F}_{q^2}$ such that $a^{q+1}+b^{q+1}=1$. Then set $c=b^{q-1}$ if $b\neq0$, and $c=1$ if $b=0$. Define $$ \alpha:g\mapsto \begin{pmatrix} 0&c^q\\ 1&0 \end{pmatrix} ^\psi g \begin{pmatrix} 0&1\\ c&0 \end{pmatrix} ^\psi $$ for any $g\in G$. It is straightforward to verify that $\alpha\in A$, $x^\alpha=x$ and $y^\alpha=y^{-1}$. Thus the lemma is true. \end{proof} We remark that if $q\not\equiv3\pmod{4}$ then the conclusion of Lemma~\ref{lem6} may not hold. For example, when $q=8$ or $q=13$, there exist $x,y\in\PSL_2(q)$ with $o(x)=2$ and $o(y)>2$ such that $\{x,y\}$ does not generate $\PSL_2(q)$ and $\Aut(\PSL_2(q),\{x,y,y^{-1}\})=1$. \vskip0.1in Now we are able to prove Theorem~\ref{thm4} and Proposition~\ref{thm1}. \vskip0.1in \noindent\textbf{Proof of Theorem~\ref{thm4}:} For $q\in\{8,9,11,13,16,17,19,25,27\}$, computation in \magma~\cite{magma} shows that there exist involutions $x,y,z\in\PSL_2(q)$ such that $\langle x,y,z\rangle=\PSL_2(q)$ and $\Aut(\PSL_2(q),\{x,y,z\})=1$ (one can further take $x,y$ to be the generators of $\mathrm{D}_{2(q+1)/\gcd(2,q-1)}$).
This together with Proposition~\ref{prop1} and Lemma~\ref{lem5} indicates that whenever $q\geqslant8$, there exist involutions $x,y,z\in\PSL_2(q)$ such that $\langle x,y,z\rangle=\PSL_2(q)$ and $\Aut(\PSL_2(q),\{x,y,z\})=1$. Hence by Theorem~\ref{thm3}, $\PSL_2(q)$ has a cubic GRR if $q\geqslant8$ and $q\neq11$. Moreover, the existence of cubic GRRs for $\PSL_2(5)$ and $\PSL_2(11)$ was proved in~\cite[Remarks on Theorem~1.3]{FLWX2002}. Therefore, $\PSL_2(q)$ has a cubic GRR provided $q\neq7$. It remains to show that $\PSL_2(7)$ has no cubic GRR. Suppose on the contrary that $\Cay(\PSL_2(7),S)$ is a cubic GRR of $\PSL_2(7)$. Then according to Lemma~\ref{lem6}, $S$ must be a set of three involutions. However, computation in \magma~\cite{magma} shows that for any involutions $x,y,z$ with $\langle x,y,z\rangle=\PSL_2(7)$, there exists an involution $\alpha\in\PGL_2(7)$ such that $\{x^\alpha,y^\alpha,z^\alpha\}=\{x,y,z\}$. This contradiction completes the proof. \qed \vskip0.1in \noindent\textbf{Proof of Proposition~\ref{thm1}:} By contradiction, suppose that $S$ does not consist of three involutions. Then since $1\notin S$ and $S=S^{-1}$, we deduce that $S=\{x,y,y^{-1}\}$ with $o(x)=2$ and $o(y)>2$. It follows that $\langle x,y\rangle=G$. First assume that $q$ is even. By virtue of Lemma~\ref{lem1}, $x$ can be taken as any involution in $G=\mathrm{SL}_2(q)$, whence we may assume that $$ x= \begin{pmatrix} 1&1\\ 0&1 \end{pmatrix}. $$ Suppose $$ y= \begin{pmatrix} a&b\\ c&d \end{pmatrix}, $$ where $a,b,c,d\in\mathbb{F}_q$ such that $ad-bc=1$.
If $c=0$, then $\langle x,y\rangle$ is contained in the group of upper triangular matrices in $\mathrm{SL}_2(q)$, which is impossible because $\langle x,y\rangle=G$. Consequently, $c\neq0$. Set $$ h= \begin{pmatrix} c&a+d\\ 0&c \end{pmatrix}, $$ and define $\alpha:g\mapsto h^{-1}gh$ for any $g\in G$. Then $\alpha\in A$, and one sees easily that $x^\alpha=x$ and $y^\alpha=y^{-1}$. This implies $\Aut(G,S)\neq1$, contrary to the condition that $\Cay(G,S)$ is a GRR of $G$. Next assume that $q\equiv1\pmod{4}$. Let $\omega$ be an element of order four in $\mathbb{F}_q^\times$, and $\varphi$ be the homomorphism from $\GL_2(q)$ to $\PGL_2(q)$ modulo $\mathbf{Z}(\GL_2(q))$. Due to Lemma~\ref{lem1}, we may assume that $$ x= \begin{pmatrix} \omega&0\\ 0&-\omega \end{pmatrix} ^\varphi $$ ($x$ is indeed an element of $\PSL_2(q)$ since $\omega^2=-1$). Suppose $$ y= \begin{pmatrix} a&b\\ c&d \end{pmatrix} ^\varphi, $$ where $a,b,c,d\in\mathbb{F}_q$ such that $ad-bc=1$. If $bc=0$, then the preimage of $\langle x,y\rangle$ under $\varphi$ is contained either in the group of upper triangular matrices or in the group of lower triangular matrices in $\mathrm{SL}_2(q)$. Hence $bc\neq0$ as $\langle x,y\rangle=G$. Set $$ h= \begin{pmatrix} 0&b\\ -c&0 \end{pmatrix} ^\varphi, $$ and define $\alpha:g\mapsto h^{-1}gh$ for any $g\in G$. Then $\alpha\in A$, and it is direct to verify that $x^\alpha=x$ and $y^\alpha=y^{-1}$. This implies $\Aut(G,S)\neq1$, contrary to the condition that $\Cay(G,S)$ is a GRR of $G$. Finally assume that $q\equiv3\pmod{4}$.
Then we derive from Lemma~\ref{lem6} that $\Aut(G,S)\neq1$, again a contradiction to the condition that $\Cay(G,S)$ is a GRR of $G$. The proof is thus completed. \qed We conclude with two problems on cubic GRRs for other families of finite simple groups. First, in view of Theorem~\ref{thm4}, a natural problem is to determine which finite nonabelian simple groups have no cubic GRR. Such groups would be very rare, and we conjecture that there are only finitely many of them. \begin{conjecture} There are only finitely many finite nonabelian simple groups that have no cubic GRR. \end{conjecture} \begin{problem} Classify the finite nonabelian simple groups that have no cubic GRR. \end{problem} Second, since Proposition~\ref{thm1} shows that for the finite simple groups $\PSL_2(q)$ a GRR can only be made from three involutions, one may ask what the situation is for other finite nonabelian simple groups, which is the problem below. \begin{problem} Classify the finite nonabelian simple groups $G$ having no GRR of the form $\Cay(G,\{x,y,y^{-1}\})$, where $o(x)=2$ and $o(y)>2$. \end{problem} \noindent\textsc{Acknowledgements.} The first author acknowledges the support of China Postdoctoral Science Foundation Grant 2014M560838 and National Science Foundation of China grant 11501011. The authors would like to thank the anonymous referees for their comments to improve the presentation. \end{document}
\begin{document} \title{A Simple Framework for Finding Balanced Sparse Cuts via APSP} \author{ Li Chen\thanks{Li Chen was supported by NSF Grant CCF-2106444.}\\ Georgia Tech\\ lichen@gatech.edu \and Rasmus Kyng\thanks{The research leading to these results has received funding from the grant ``Algorithms and complexity for high-accuracy flows and convex optimization'' (no. 200021 204787) of the Swiss National Science Foundation.}\\ ETH Zurich \\ kyng@inf.ethz.ch \and Maximilian Probst Gutenberg\footnotemark[2]\\ ETH Zurich\\ maxprobst@ethz.ch \and Sushant Sachdeva\thanks{Sushant Sachdeva's research is supported by an NSERC (Natural Sciences and Engineering Research Council of Canada) Discovery Grant. } \\ University of Toronto \\ sachdeva@cs.toronto.edu } \date{} \maketitle \begin{abstract} We present a very simple and intuitive algorithm to find balanced sparse cuts in a graph via shortest-paths. Our algorithm combines a new multiplicative-weights framework for solving unit-weight multi-commodity flows with standard ball growing arguments. Using Dijkstra's algorithm for computing the shortest paths afresh every time gives a very simple algorithm that runs in time $\O(m^2/\phi)$ and finds an $\O(\phi)$-sparse balanced cut, when the given graph has a $\phi$-sparse balanced cut. Combining our algorithm with known deterministic data-structures for answering approximate All Pairs Shortest Paths (APSP) queries under increasing edge weights (decremental setting), we obtain a simple deterministic algorithm that finds $m^{o(1)}\phi$-sparse balanced cuts in $m^{1+o(1)}/\phi$ time. Our deterministic almost-linear time algorithm matches the state-of-the-art in randomized and deterministic settings up to subpolynomial factors, while being significantly simpler to understand and analyze, especially compared to the only almost-linear time deterministic algorithm, a recent breakthrough by Chuzhoy-Gao-Li-Nanongkai-Peng-Saranurak (FOCS 2020). 
\end{abstract} \section{Introduction} Graph partitioning is a fundamental algorithmic primitive that has been studied extensively. There are several ways to formalize the question. We focus on the question of finding balanced separators in a graph. More precisely, given an $m$-edge graph $G=(V,E)$, the conductance of a cut is defined by $\Phi_G(S) = \frac{|E_G(S, \overline{S})|}{\min\{{{\textsf{vol}}}(S),{{\textsf{vol}}}{({V\setminus S})}\}}$ where $E_G(S, \overline{S})$ is the set of edges with exactly one endpoint in $S,$ and the volume of $S,$ denoted ${{\textsf{vol}}}(S)$ is the sum of the degrees of vertices in $S.$ We say that a cut $(S, V\setminus S)$ is $b$-balanced if ${{\textsf{vol}}}(S),{{\textsf{vol}}}(V \setminus S) \ge b\cdot {{\textsf{vol}}}(V).$ The objective in the Balanced Separator problem is \begin{quote} Given parameters $b, \phi \le 1,$ either find a cut $(S, V \setminus S)$ that is $b$-balanced and has conductance $\Phi_G(S) \le \phi,$ or certify that every $\Omega(b)$-balanced\footnote{Note that we allow the algorithm to return an $\Omega(b)$-balanced sparse cut when the graph has a $b$-balanced sparse cut. Such an algorithm is known as a pseudo-approximation algorithm. All known efficient algorithms for balanced cut find pseudo-approximations.} cut has conductance at least $\alpha\phi.$ \end{quote} The Balanced Separator problem is a classic NP-hard problem and under the Small-Set-Expansion hypothesis, even NP-hard to approximate to within an arbitrary constant~\cite{RaghavendraST12}. Thus, the above formulation allows for $\alpha$-approximation for some $\alpha < 1.$ This problem has been studied extensively due to its application to divide-and-conquer on graphs, and theoretical connections to random walks, spectral graph theory, and metric embeddings. \paragraph{Our Results.} In this paper, we present a very simple and intuitive algorithm for Balanced Separator. 
Our algorithm gives a simple framework based on (scalar) multiplicative weights that reduces the problem to computing approximate shortest paths in a graph under increasing lengths for the edges (decremental setting). Our framework either finds a balanced cut with small conductance, or certifies that every balanced cut has large conductance (\Cref{thm:fineGrainedReduction}). If one simply uses Dijkstra's algorithm to compute the necessary shortest paths afresh each time, our algorithm gives an $\O(m^2/\phi)$ time algorithm that achieves approximation $\alpha = \Omega(1/\log^2 n)$ for cuts of constant balance, and $\alpha = \Omega(1/(\log n \cdot \log\log n))$ for cuts of constant balance and conductance (\Cref{theorem:balanceCutCorNaive}). If we instead use known $n^{o(1)}$-approximate deterministic dynamic algorithms for decremental All-Pairs-Shortest-Paths (APSP), we obtain an algorithm that runs in time $m^{1+o(1)}/\phi$ and achieves an approximation of $\alpha = n^{o(1)}$ (\Cref{theorem:balanceCutCorAPSP}). Our algorithm can be described very simply. We attempt to embed an explicit expander $H$ as a multi-commodity flow using paths of length $\O(\phi^{-1})$ in $G,$ while ensuring that the congestion on the edges in $G$ is at most $\O(\phi^{-1}).$ If the endpoints of an edge $e \in H$ are connected in $G$ using a short path, we use the path in $G$ to route $e.$ Further, we increase the length of each edge on this path by a multiplicative factor. This increased length makes it less likely that this path will be used in the future. A simple multiplicative-weights argument here now allows us to bound the congestion over the course of the entire algorithm.
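The embedding loop just described can be sketched as follows. This is a simplified illustration under our own choices of length threshold and update factor, not the paper's exact algorithm or parameters: each demand edge of $H$ is routed along a shortest path in $G$ if one of length at most $1/\phi$ exists, and the lengths of the used edges are multiplied so that congested edges become expensive.

```python
# Multiplicative-weights routing sketch: embed demand pairs (edges of an
# expander H) into G along short paths, penalizing used edges.
import heapq
import math

def dijkstra(adj, lengths, src):
    # Standard Dijkstra over an undirected graph; edges keyed by frozenset.
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue
        for v in adj[u]:
            nd = d + lengths[frozenset((u, v))]
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def embed_expander(adj, demands, phi, eps=0.5):
    # Lengths start uniform; a demand is routed only if its endpoints are
    # within distance 1/phi, mirroring the O(1/phi) path-length budget above.
    lengths = {frozenset((u, v)): 1.0 for u in adj for v in adj[u]}
    congestion = {e: 0 for e in lengths}
    failed = []  # demand pairs left at distance > 1/phi
    for (s, t) in demands:
        dist, prev = dijkstra(adj, lengths, s)
        if dist.get(t, math.inf) <= 1.0 / phi:
            v = t
            while v != s:  # walk the path back, bump lengths multiplicatively
                u = prev[v]
                e = frozenset((u, v))
                congestion[e] += 1
                lengths[e] *= (1.0 + eps)
                v = u
        else:
            failed.append((s, t))
    return failed, congestion
```

On success (`failed` is empty) the routing certifies expansion; the failed pairs are exactly the far-apart endpoint pairs that the ball-growing step below turns into a low-conductance cut.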
If our algorithm succeeds in embedding most edges of $H$ in $G,$ this provides us with a certificate that all balanced cuts in $G$ have expansion $\widetilde{\Omega}(\phi).$ If our algorithm fails, we find several edges of $H$ such that the endpoints of these edges are at distance $\widetilde{\Omega}(\phi^{-1})$ as measured by the lengths of the edges computed by the algorithm. Now, we can apply a simple ball-growing argument to recover a balanced cut of conductance $\phi.$ \paragraph{Applications.} While finding the Balanced Sparsest Cut is a crucial ingredient in Divide-And-Conquer frameworks for many algorithms (see \cite{shmoys1997cut} for an introduction), and has various applications ranging from VLSI Design and Image Segmentation \cite{shi2000normalized} to PRAM emulation, we want to point out in particular that our algorithm can be used to replace the use of the Cut-Matching framework~\cite{KhandekarRV09} in the work of Saranurak-Wang~\cite{SaranurakW18} (see \Cref{rem:extractExpanderSW19} in \Cref{sec:seporcert}). Together, this gives an elegant framework for computing expander decompositions which in turn have been pivotal in various recent breakthroughs in algorithmic graph theory with applications to computing Electric Flows \cite{SpielmanT04}, Maximum Flows and Min-Cost Flows \cite{chen2022maximum}, Gomory-Hu Trees \cite{abboud2021subcubic, abboud2022breaking}, finding Global Min-Cuts deterministically \cite{kawarabayashi2018deterministic, li2020deterministic, li2021deterministic}, and many, many more. \paragraph{Comparison to Previous Works.} There has been a lot of work on algorithms for Balanced Separator. The celebrated work of Leighton and Rao \cite{LeightonRao} showed that one could achieve an $O(\log n)$ approximation to Balanced Separator by repeatedly solving a linear program that computes a fractional multi-commodity flow.
Several works give a faster implementation of this approach via multiplicative-weights algorithms for multi-commodity flow~\cite{PlotkinST95, Young95, GargK07, Fleischer00}, using the Leighton-Rao result as a black box to deduce that they compute an $O(\log n)$ approximation. However, the running time they achieved for Balanced Separator was $\Omega(nm^2)$ since they repeatedly find and remove low-conductance cuts, each of which might be highly unbalanced, possibly introducing a factor of $n.$ In contrast, our algorithm works directly with balanced cuts, rather than multi-commodity flows. Our algorithm is in the same spirit as the Garg-K\"{o}nemann and Fleischer frameworks from \cite{GargK07, Fleischer00}, but directly incorporates the Leighton-Rao algorithm for finding low conductance cuts. The groundbreaking work of Spielman and Teng on solving Laplacian linear systems~\cite{SpielmanT04} introduced the notion of local algorithms for finding low-conductance cuts, where the running time of the algorithm scales almost-linearly with the smaller size of the output cut. Thus the algorithm can be applied repeatedly to find balanced cuts in almost-linear time. Inspired by this work, multiple local algorithms were proposed~\cite{AndersenCL07, AndersenP09}. While all these algorithms are fast, with almost-linear running time, they are inherently randomized, and the balanced cut found is only guaranteed to have conductance $\widetilde{O}(\sqrt{\phi})$. In contrast, our algorithm is deterministic, and finds a cut of conductance at most $\phi\cdot m^{o(1)}.$ Another line of work develops fast SDP algorithms based on matrix-multiplicative weights. The most popular of these is the Cut-Matching framework of Khandekar-Rao-Vazirani~\cite{KhandekarRV09}. Inspired by~\cite{KhandekarRV09}, several works~\cite{AroraK07, OrecchiaSVV08, OrecchiaV11, OrecchiaSV12} obtained almost-linear time algorithms for Balanced Separator building on the matrix-multiplicative weights framework.
While the cut-matching framework and the resulting algorithms are elegant, they rely on rather involved techniques that are non-intuitive. The celebrated work of Arora-Rao-Vazirani~\cite{AroraRV09} obtained an $O(\sqrt{\log n})$ approximation for Balanced Separator via an SDP-based algorithm. Faster algorithms built on their ideas~\cite{AroraHK10, Sherman09} achieved almost-linear running time with $O(\sqrt{\log n})$ approximation. However, these algorithms are quite involved: they are based on matrix-multiplicative weights, randomized, and rely on near-linear time (approximate) max-flow. Our algorithm and analysis work with scalar multiplicative weights and are very simple to understand. Further, our algorithm only needs to invoke approximate shortest-path oracles under increasing edge weights. The only previous deterministic, almost-linear time approximation algorithm for Balanced Separator was given recently by Chuzhoy-Gao-Li-Nanongkai-Peng-Saranurak~\cite{ChuzhoyGLNPS19}. Their algorithm relies on a rather intricate recursive scheme that implicitly uses, at each recursion level, a reduction to decremental APSP. Even the analysis of a single level relies on the rather involved expander-pruning framework. In contrast to their work, the simplicity of our algorithm and analysis stands out. We also point out that a generalization of \cite{ChuzhoyGLNPS19} to weighted graphs was given by Li and Saranurak \cite{li2021deterministic}. This algorithm implicitly uses \cite{ChuzhoyGLNPS19} and is therefore even more involved. \section{Main Result} We formally state our results in this section. Our main result is the following theorem.
\begin{theorem}\label{thm:conductanceVersion} Given an $n$-vertex, $m$-edge graph $G$, an $\alpha_{\texttt{APSP}}$-approximate decremental APSP algorithm, a conductance parameter $\phi$, and a balance parameter $b \in [1/n, 1/4]$, the algorithm $\textsc{LowConductanceCutOrCertify}(G, \phi, b)$ either \begin{enumerate} \item\label{case:balanceLCCut} Returns a cut $(S, \overline{S})$ with ${{\textsf{vol}}}_G(S), {{\textsf{vol}}}_G(\overline{S}) \ge b \cdot {{\textsf{vol}}}(G)$ and conductance $\Phi_G(S) \leq \phi$, or \item\label{case:noBalanceLCCut} Certifies that every cut $(X, \overline{X})$ with ${{\textsf{vol}}}_G(X), {{\textsf{vol}}}_G(\overline{X}) = \Omega( b \cdot {{\textsf{vol}}}(G))$ has conductance at least $\phi \cdot \Omega\left(\frac{1}{\alpha_{\texttt{APSP}} \log n \cdot \log(1/b) \cdot \log(\log(n)\alpha_{\texttt{APSP}}/(b\phi))}\right) = \phi \cdot \Omega\left(\frac{1}{\alpha_{\texttt{APSP}} \log^3(n)}\right).$ \end{enumerate} The algorithm is deterministic, requires the APSP data structure to undergo $O( \alpha_{\texttt{APSP}} \cdot m \phi^{-1} \log^3 n)$ updates, queries it $O(m)$ times, and spends an additional $O( \alpha_{\texttt{APSP}} \cdot m \phi^{-1} \log^3 n)$ time. \end{theorem} \begin{remark} APSP data structures often answer queries in time proportional to the number of edges on the approximate shortest path that they return. Our algorithm ensures that the total number of such edges over all paths is bounded by $O( \alpha_{\texttt{APSP}} \cdot m \phi^{-1} \log^3 n)$. \end{remark} We note that for computing balanced cuts (i.e.\ cuts where $b$ is constant), which is arguably the most interesting case, our approximation guarantee becomes $\Omega\left(\frac{1}{\alpha_{\texttt{APSP}} \log n \cdot \log (\alpha_{\texttt{APSP}} \phi^{-1} \log n)}\right)$. For a decremental APSP data structure with constant approximation and $\phi \ge \Omega(1/\log^{O(1)} n)$, this further simplifies to $\Omega\left(\frac{1}{\log n \log\log n}\right)$.
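For concreteness, this simplification can be traced step by step (a routine calculation we spell out, under the stated assumptions $b = \Theta(1)$, $\alpha_{\texttt{APSP}} = O(1)$, and $\phi \ge 1/\log^{O(1)} n$):

```latex
\frac{1}{\alpha_{\texttt{APSP}} \log n \cdot \log(1/b) \cdot \log\big(\log(n)\,\alpha_{\texttt{APSP}}/(b\phi)\big)}
\;\overset{b = \Theta(1)}{=}\;
\Omega\left(\frac{1}{\alpha_{\texttt{APSP}} \log n \cdot \log\big(\alpha_{\texttt{APSP}}\,\phi^{-1} \log n\big)}\right)
\;\overset{\alpha_{\texttt{APSP}} = O(1)}{=}\;
\Omega\left(\frac{1}{\log n \cdot \log\big(\phi^{-1} \log n\big)}\right),
```

and since $\phi^{-1} \le \log^{O(1)} n$, the argument of the remaining logarithm is $\log^{O(1)} n$, whose logarithm is $O(\log\log n)$; this yields the claimed $\Omega\left(\frac{1}{\log n \log\log n}\right)$.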
Using the efficient $n^{o(1)}$-approximate decremental APSP data structure from \cite{BGS21} or \cite{Chuzhoy21}, we obtain the following result\footnote{We remark that both data structures \cite{BGS21, Chuzhoy21} implicitly rely on the framework of Chuzhoy-Gao-Li-Nanongkai-Peng-Saranurak~\cite{ChuzhoyGLNPS19}; thus, our reduction in combination with these data structures does not yield a simpler algorithm in itself. We are, however, optimistic that simpler data structures for the decremental APSP problem, not necessarily relying on expander techniques, will become available in the future.}: \begin{theorem} \label{theorem:balanceCutCorAPSP} Given an $n$-vertex, $m$-edge graph $G$, a conductance parameter $\phi$ and a balance parameter $b \in [1/n, 1/4]$, there is an algorithm $\textsc{LowConductanceCutOrCertify}(G, \phi, b)$ that can either \begin{enumerate} \item\label{casecor:balanceCut} Find a cut $(S, \overline{S})$ with ${{\textsf{vol}}}_G(S), {{\textsf{vol}}}_G(\overline{S}) \ge b \cdot {{\textsf{vol}}}(G)$ and conductance $\Phi_G(S) \leq \phi$, or \item\label{casecor:noBalanceCut} Certify that every cut $(X, \overline{X})$ with ${{\textsf{vol}}}_G(X), {{\textsf{vol}}}_G(\overline{X}) = \Omega( b \cdot {{\textsf{vol}}}(G))$ has conductance at least $\phi/n^{o(1)}.$ \end{enumerate} The algorithm is deterministic and runs in $m^{1+o(1)} / \phi$ time.
\end{theorem} On the other hand, one can run Dijkstra's shortest-path algorithm for every query and obtain the following: \begin{theorem} \label{theorem:balanceCutCorNaive} Given an $n$-vertex, $m$-edge graph $G$, a conductance parameter $\phi$ and a balance parameter $b \in [1/n, 1/4]$, there is a deterministic algorithm $\textsc{LowConductanceCutOrCertify}(G, \phi, b)$ that can either \begin{enumerate} \item\label{caseCor2:balanceCut} Find a cut $(S, \overline{S})$ with ${{\textsf{vol}}}_G(S), {{\textsf{vol}}}_G(\overline{S}) \ge b \cdot {{\textsf{vol}}}(G)$ and conductance $\Phi_G(S) \leq \phi$, or \item\label{casecor2:noBalanceCut} Certify that every cut $(X, \overline{X})$ with ${{\textsf{vol}}}_G(X), {{\textsf{vol}}}_G(\overline{X}) = \Omega( b \cdot {{\textsf{vol}}}(G))$ has conductance at least $\phi \cdot \Omega\left(\frac{1}{\log n \cdot \log(1/b) \cdot \log(\log(n)/(\phi b))}\right).$ \end{enumerate} The algorithm runs in $\O(m^2 / \phi)$ time. \end{theorem} \section{Preliminaries} \paragraph{Sparsity and Expanders.} In this article, we consider an undirected $n$-vertex graph $G = (V,E)$. For such a graph, we define the sparsity of a cut $\emptyset \subsetneq S \subsetneq V$ by $\Psi_G(S) = \frac{|E_G(S, \overline{S})|}{\min\{|S|, |\overline{S}|\}}$ where $E_G(S, \overline{S})$ is the set of edges with exactly one endpoint in $S$. The sparsity of the graph $G$ is defined as $\Psi(G) = \min_{\emptyset \subsetneq S \subsetneq V} \Psi_G(S)$. If $G$ contains no $\psi$-sparse cut, we say that $G$ is a $\psi$-expander. \paragraph{Conductance vs. Sparsity.} Via a simple reduction replacing each vertex of degree $d$ with an explicit expander graph on $d$ vertices (see~\cref{sec:condViaSpars}), we can reduce to the case where every vertex has degree at most 10.
In such a graph, for any set $S \subseteq V,$ $|S| \le {{\textsf{vol}}}(S) \le 10|S|,$ and thus, instead of conductance $\Phi_G(S) = \frac{|E_G(S, \overline{S})|}{\min\{{{\textsf{vol}}}(S),{{\textsf{vol}}}{({V\setminus S})}\}},$ we can work with sparsity $\Psi_G(S) = \frac{|E_G(S, \overline{S})|}{\min\{|S|,|V\setminus S|\}}.$ Throughout the rest of the article, we therefore work with sparsity instead of conductance. \paragraph{Expander Constructions.} For any $n$, there is a deterministic construction of an $\Omega(1)$-expander on $n$ vertices of bounded degree. This is an essential tool in our proof, and we use $\psi_0$ to denote the universal lower bound on the sparsity of this family of expanders. \begin{theorem}[See Thm. 2.4 of \cite{ChuzhoyGLNPS19} based on Thm 2 of \cite{GabberG81}.]\label{thm:constExp} There is a universal constant $\psi_0 \in (0, 1)$ and an algorithm $\textsc{ConstDegExpander}(n)$ that returns a $\psi_0$-expander $H$ on a vertex set of size $n$ with maximum degree $9$. The algorithm runs in time $O(n)$. \end{theorem} \begin{remark}\label{rmk:simpleExpander} While deterministic algorithms to construct a constant-degree, constant-sparsity expander require rather involved proof techniques, we give in \Cref{sec:randExpand} a simple randomized algorithm that constructs an $O(\log n)$-degree $\Omega(\log n)$-expander $H$ in $O(n \log n)$ time. Using this randomized algorithm in place of the above theorem only affects the guarantees of our overall algorithm by polylogarithmic factors. \end{remark} \paragraph{Graph Embeddings.} Given graphs $H$ and $G$ defined over the same vertex set, we say that a function $\Pi_{H \mapsto G}$ is an \emph{embedding} if it maps each edge $(u,v) \in H$ to a $u$-to-$v$ path $P_{u,v} = \Pi_{H \mapsto G}(u,v)$ in $G$.
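To keep the two cut measures concrete, here is a minimal sketch (the toy graph, function name, and cut are ours) computing the sparsity and the conductance of a cut of an unweighted graph:

```python
from collections import defaultdict

def cut_quantities(edges, S, V):
    """Return (sparsity, conductance) of the cut (S, V \\ S) of an
    unweighted, undirected graph given as an edge list."""
    S, Sbar = set(S), set(V) - set(S)
    deg = defaultdict(int)
    crossing = 0
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        if (u in S) != (v in S):   # exactly one endpoint in S
            crossing += 1
    sparsity = crossing / min(len(S), len(Sbar))              # Psi_G(S)
    vol = lambda T: sum(deg[x] for x in T)
    conductance = crossing / min(vol(S), vol(Sbar))           # Phi_G(S)
    return sparsity, conductance

# 4-cycle 0-1-2-3-0, cut {0,1} vs {2,3}: two crossing edges, each side
# has 2 vertices and volume 4, so Psi = 1 and Phi = 1/2.
print(cut_quantities([(0, 1), (1, 2), (2, 3), (3, 0)], {0, 1}, range(4)))
```

In a graph of maximum degree 10, as above, $|S| \le {{\textsf{vol}}}(S) \le 10|S|$, so the two quantities differ by at most a constant factor.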
We say that the \emph{congestion} of $\Pi_{H \mapsto G}$ is the maximum number of times that any edge $e \in E(G)$ appears on any embedding path: \[ \congestion(\Pi_{H \mapsto G}) = \max_{e \in E(G)} |\{ e' \in E(H) \;|\; e \in \Pi_{H \mapsto G}(e') \}|. \] \paragraph{Certifying Expander Graphs via Embeddings.} Graph embeddings are useful because, if we can embed a graph $H$ that is known to be an expander into a graph $G$, we can lower-bound the sparsity of $G$, as shown below. \begin{lemma}\label{lma:folklore_embeddingEasy} Given a $\psi$-expander graph $H$ and an embedding of $H$ into $G$ with congestion $C$, $G$ must be an $\Omega\left(\frac{\psi}{C}\right)$-expander. \end{lemma} \begin{proof} Consider any cut $(S, V \setminus S)$ with $|S| \leq |V \setminus S|$. Since $H$ is a $\psi$-expander, we have that $|E_H(S, V \setminus S)| \geq \psi|S|$. We also know, by the embedding of $H$ into $G$, that for each edge $(u,v) \in E_H(S, V \setminus S)$, we can find a path $P_{u,v}$ in $G$ that also has to cross the cut $(S, V \setminus S)$ at least once. But since each edge in $G$ is on at most $C$ such paths, we can conclude that at least $|E_H(S, V \setminus S)|/ C \geq \psi|S|/C$ edges in $G$ cross the cut $(S, V \setminus S)$. \end{proof} We use the following generalization of this folklore result to balanced sparse cuts. \begin{restatable}{lemma}{folkloreEmbed}\label{lma:folklore_embedding} Given a $\psi$-expander graph $H$, a subgraph $H' \subseteq H$ with $|E(H \setminus H')| \leq \frac{\psi}{2}bn$ for some $b \in [0,1]$, and an embedding $\Pi_{H' \mapsto G}$ of $H'$ into $G$ with congestion $C$, for all cuts $(S, \overline{S})$ where $bn \leq |S| \leq n/2$, we have $\Psi_G(S) = \Omega\left(\frac{\psi}{C}\right)$.
\end{restatable} \begin{proof} Observe that for each such $(S, \overline{S})$, we have $|E_{H'}(S, \overline{S})| \geq |E_H(S, \overline{S})| - |E(H \setminus H')| \geq \psi|S| - \frac{\psi}{2}bn \geq \frac{\psi}{2}|S|$. Using the same argument as above, we obtain $\Abs{E_{G}(S, \overline{S})} \ge \Abs{E_{H'}(S, \overline{S})} / C \ge \psi |S| / 2C.$ \end{proof} \paragraph{Decremental All-Pairs Shortest-Paths (APSP).} A decremental $\alpha_{\texttt{APSP}}$-approximate All-Pairs Shortest-Paths (APSP) data structure (abbreviated $\alpha_{\texttt{APSP}}$-APSP) is a data structure that is initialized on an $m$-edge, $n$-vertex graph $G$ and supports the following operations: \begin{itemize} \item $\textsc{IncreaseEdgeWeight}(u,v, \Delta)$: increases the edge weight of $(u,v)$ by $\Delta$. \item $\textsc{QueryDistance}(u,v)$: for any $u,v \in V$, returns a distance estimate $\tilde{d}(u,v)$ that $\alpha_{\texttt{APSP}}$-approximates the distance from $u$ to $v$ in the current graph $G$, denoted $d_G(u,v)$, i.e. $\tilde{d}(u,v) \in [d_G(u,v), \alpha_{\texttt{APSP}} \cdot d_G(u,v)]$. \item $\textsc{QueryPath}(u,v)$: returns a path $\pi$ from $u$ to $v$ in the current graph $G$ of total weight $\tilde{d}(u,v)$ (that is, the value of the distance estimate if queried). \end{itemize} We denote by $T_{APSP}(q,u)$ the total time required by the data structure to execute a series of $q$ queries and $u$ update operations on an $n$-vertex constant-degree graph. Recently, deterministic $n^{o(1)}$-approximate APSP data structures have been developed (see \cite{Chuzhoy21,BGS21}) that process any sequence of $\tilde{O}(m)$ edge weight increases in total time $m^{1+o(1)}$, answer distance queries in $n^{o(1)}$ time, and, for a path query, return the path in time near-linear in its number of edges (i.e.\ if the returned path is $P$, the query takes time at most $|P| \cdot n^{o(1)}$).
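As a baseline, every operation of this interface can be served exactly ($\alpha_{\texttt{APSP}} = 1$) by running Dijkstra afresh per query, as in \Cref{theorem:balanceCutCorNaive}; the sketch below (class and method names are ours) deliberately provides none of the amortization of a real decremental structure:

```python
import heapq

class NaiveAPSP:
    """Exact (alpha_APSP = 1) oracle: store the weighted graph and answer
    every query with a fresh Dijkstra run -- no amortization at all."""

    def __init__(self, n, weighted_edges):
        # weighted_edges: iterable of (u, v, w) for an undirected graph.
        self.adj = {u: {} for u in range(n)}
        for u, v, w in weighted_edges:
            self.adj[u][v] = w
            self.adj[v][u] = w

    def increase_edge_weight(self, u, v, delta):
        self.adj[u][v] += delta
        self.adj[v][u] += delta

    def _dijkstra(self, s):
        dist, prev, heap = {s: 0}, {}, [(0, s)]
        while heap:
            d, x = heapq.heappop(heap)
            if d > dist[x]:
                continue  # stale heap entry
            for y, w in self.adj[x].items():
                if d + w < dist.get(y, float("inf")):
                    dist[y], prev[y] = d + w, x
                    heapq.heappush(heap, (d + w, y))
        return dist, prev

    def query_distance(self, u, v):
        return self._dijkstra(u)[0].get(v, float("inf"))

    def query_path(self, u, v):
        _, prev = self._dijkstra(u)
        path = [v]
        while path[-1] != u:
            path.append(prev[path[-1]])
        return path[::-1]

# Path 0-1-2 with unit weights: dist(0,2) = 2; after raising w(0,1) by 3,
# the reported distance grows accordingly.
apsp = NaiveAPSP(3, [(0, 1, 1), (1, 2, 1)])
print(apsp.query_distance(0, 2), apsp.query_path(0, 2))
apsp.increase_edge_weight(0, 1, 3)
print(apsp.query_distance(0, 2))
```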
We conjecture that, in the near future, $O(\log n)$-APSP data structures will be found that implement edge weight increases in total time $\tilde{O}(m)$, answer distance queries in time $\tilde{O}(1)$, and answer path queries in time $\tilde{O}(|P|)$. \section{Our Algorithm} In this section, we present an algorithm that either finds a sparse cut in a constant-degree graph $G$ or embeds an expander into $G$. By standard reductions (given in \Cref{sec:condViaSpars} and \Cref{sec:constDegAssump}), one can translate between sparsity and conductance and remove the bounded-degree assumption, both with only a constant loss in quality. Thus, by proving the theorem below, we directly establish our main result, \Cref{thm:conductanceVersion}. \begin{restatable}{theorem}{APSPreduction}\label{thm:fineGrainedReduction} \label{theorem:balanceCut} Given a graph $G$ of degree at most 10, an $\alpha_{\texttt{APSP}}$-approximate decremental APSP algorithm, a sparsity parameter $\psi$, and a balance parameter $b \in [1/n, 1/4]$, there is an algorithm \\ $\textsc{SparseCutOrCertify}(G, \psi, b)$ (\Cref{alg:mainAlgo}) that can either \begin{enumerate} \item\label{case:balanceCut} Find a cut $(S, \overline{S})$ with $|S|, |\overline{S}| \ge bn$ of sparsity $\leq \psi$, or \item\label{case:noBalanceCut} Certify that every cut $(X, \overline{X})$ with $|X|, |\overline{X}| = \Omega(bn)$ has sparsity \\ $\psi \cdot \Omega\left(\frac{1}{\alpha_{\texttt{APSP}} \log n \cdot \log(1/b) \cdot \log(\log(n)\alpha_{\texttt{APSP}}/(b\psi))}\right).$ \end{enumerate} The algorithm is deterministic, requires the APSP data structure to undergo $O( \alpha_{\texttt{APSP}} \cdot n \psi^{-1} \log^3 n)$ updates, queries it $O(n)$ times, and spends an additional $O( \alpha_{\texttt{APSP}} \cdot n \psi^{-1} \log^3 n)$ time. \end{restatable} \begin{remark} Our algorithm ensures that the total number of edges summed across all queried paths is bounded by $O( \alpha_{\texttt{APSP}} \cdot n \psi^{-1} \log^3 n)$.
\end{remark} The algorithm consists of two phases. The first phase tries to embed an $\Omega(1)$-expander into the input graph $G$ with congestion $\O(1/\psi).$ Let $F$ be the subset of expander edges the algorithm cannot embed. If $|F| = O(b n)$, i.e. the algorithm embeds all but $O(b n)$ edges, \Cref{lma:folklore_embedding} ensures that every $b$-balanced cut has sparsity $\widetilde{\Omega}(\psi).$ Otherwise, $|F| = \Omega(b n)$ and the algorithm outputs edge weights $\bw$ such that the endpoints of every edge $(u, v) \in F$ are far apart w.r.t. $\bw.$ In this case, the second phase is initiated to extract a sparse $\Omega(b)$-balanced cut from these far-apart pairs of vertices. \subsection{An Algorithm to Separate Or Certify} \label{sec:seporcert} First, we present the algorithm for the first phase, which either embeds a large portion of an expander or finds a large set of far-apart vertex pairs w.r.t. some edge weights $\bw.$ \begin{lemma} \label{lemma:SeparateOrCertifyBalanced} Given an $\alpha_{\texttt{APSP}}$-APSP data structure, two graphs $G$ and $H$ over the same vertex set $V$, a congestion parameter $C \in [1, n]$, and a balance parameter $b \in [1/n, 1/2]$, the algorithm $\textsc{SeparateOrCertify}(G, H, C, b)$ (\Cref{alg:embedOrSep}) outputs either \begin{enumerate} \item A set of weights $\bwInt \in \mathbb{R}_{\geq 1}^{E(G)}$ with $\|\bwInt\|_1 \leq 20n$, a number $b' \in [b, 1/2]$, and a subset of edges $F \subseteq E(H)$ with $|F| > 10 b' n$ such that \[ \forall (u,v) \in F, \quad \dist_{\bwInt}(u,v) > \frac{C}{b'}, \text{ or }\] \item A graph $H' \subseteq H$ with $|E(H) \setminus E(H')| \leq 10bn$ and an embedding $\Pi_{H' \mapsto G}$ that maps each edge $(u,v)$ in $H'$ to a $uv$-path in $G$ with congestion $O(C \cdot \alpha_{\texttt{APSP}} \cdot \log(1/b) \cdot \log(C \cdot \alpha_{\texttt{APSP}} / b))$.
\end{enumerate} The algorithm is deterministic and requires the APSP data structure to undergo $O(C \alpha_{\texttt{APSP}} n \log^2 n)$ edge updates and $O(n)$ distance queries, along with additional $O(C \alpha_{\texttt{APSP}} n \log^2 n)$ time. \end{lemma} \begin{algorithm} \DontPrintSemicolon $H' = (V, \emptyset)$; $\Pi_{H' \mapsto G} \gets \emptyset$; $\bw \gets \mathbf{1}^{|E(G)|}$; $\eta \gets \frac{1}{ 4 C \alpha_{\texttt{APSP}} \log_2(10/b)}$.\label{lne:init}\\ Maintain an $\alpha_{\texttt{APSP}}$-approximate APSP data structure on $G$ weighted by $\bw$.\\ \For(\label{lne:forLoop}){$i = 0, 1, \ldots, \lfloor \log_2(1 / b) \rfloor$}{ \ForEach(\label{lne:foreachUnrouted}){$e = (u,v) \in E(H) \setminus E(H')$}{ \If(\label{lne:ifReallyEmbed}){$\textsc{APSP}.\textsc{QueryDist}(u,v) \leq 2^{i} \cdot C \alpha_{\texttt{APSP}}$}{ Add $e$ to $H'$; $\Pi_{H' \mapsto G}(e) \gets \textsc{APSP}.\textsc{QueryPath}(u,v)$. \\ \ForEach{$f \in \Pi_{H' \mapsto G}(e)$}{ $\textsc{APSP}.\textsc{IncreaseWeight}(f, \eta \bw_f)$;\label{lne:increaseWeights} $\bw_f \gets (1+\eta) \bw_f$. } } } \lIf{$|E(H) \setminus E(H')| > 10 n / 2^i$}{ \Return $(\bw, 2^{-i}, E(H) \setminus E(H'))$. \label{lne:terminateCase2} \label{lne:bPrimeDefn} } } \Return $(H', \Pi_{H' \mapsto G})$. \label{lne:terminateEmbed} \caption{$\textsc{SeparateOrCertify}(G, H, C, b)$}\label{alg:embedOrSep} \end{algorithm} \paragraph{The Algorithm.} \Cref{alg:embedOrSep} implements $\textsc{SeparateOrCertify}(G, H, C, b)$. Here, the task of finding an embedding of $H$ into $G$ is interpreted as a multicommodity flow problem; that is, each edge $(u,v) \in H$ gives rise to the demand to route one unit of flow from $u$ to $v$. Later, we use a $\psi_0$-expander in place of $H$. The goal of the algorithm is to find such an embedding/multicommodity flow with small congestion, which, combined with our choice of $H$, certifies that $G$ is a good (almost) expander (i.e. contains no balanced sparse cut).
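The round structure of \Cref{alg:embedOrSep} can be condensed into the following sketch, where an exact Dijkstra computation stands in for the decremental APSP oracle (so $\alpha_{\texttt{APSP}} = 1$); all names and the toy invocation are ours:

```python
import heapq
from math import floor, log2

def shortest_path(adj, w, s, t):
    """Exact Dijkstra standing in for QueryDist/QueryPath (alpha_APSP = 1).
    Returns (distance, list of edges on a shortest s-t path)."""
    dist, prev, heap = {s: 0.0}, {}, [(0.0, s)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue
        for y in adj[x]:
            nd = d + w[frozenset((x, y))]
            if nd < dist.get(y, float("inf")):
                dist[y], prev[y] = nd, x
                heapq.heappush(heap, (nd, y))
    if t not in dist:
        return float("inf"), None
    path, x = [], t
    while x != s:
        path.append(frozenset((x, prev[x])))
        x = prev[x]
    return dist[t], path

def separate_or_certify(adj, H_edges, C, b):
    """MWU rounds of SeparateOrCertify: the distance threshold doubles
    each round; every edge used on a path is scaled up by (1 + eta)."""
    w = {frozenset((u, v)): 1.0 for u in adj for v in adj[u]}
    n = len(adj)
    eta = 1.0 / (4 * C * log2(10 / b))          # as in the initialization line
    unrouted = {frozenset(e) for e in H_edges}
    embedding = {}
    for i in range(floor(log2(1 / b)) + 1):
        for e in sorted(unrouted, key=sorted):  # deterministic order
            u, v = sorted(e)
            d, path = shortest_path(adj, w, u, v)
            if d <= 2 ** i * C:                 # embed along a short path
                embedding[e] = path
                unrouted.discard(e)
                for f in path:
                    w[f] *= 1 + eta             # multiplicative weight update
        if len(unrouted) > 10 * n / 2 ** i:     # too many edges left: separate
            return ("separate", w, 2.0 ** -i, unrouted)
    return ("certify", embedding)

# Triangle graph; H consists of two of its edges, so embedding succeeds.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
outcome = separate_or_certify(adj, [(0, 1), (1, 2)], C=4, b=0.25)
print(outcome[0], len(outcome[1]))
```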
Here, we guess the congestion to be roughly $C$ and want to enforce $\Cong(\Pi_{H \mapsto G}) \leq C$. In fact, we even provide a slightly tighter analysis. To achieve this goal, we use a technique which is an instance of the \emph{Multiplicative Weight Update (MWU)} framework. Initially, we define a uniform weight function $\bw$ with unit weights over the edges of $G$. We try to embed each edge $(u,v) \in E(H)$ using a short $uv$-path $P_{uv}$ in $G$ with respect to $\bw$. Whenever we embed an edge $(u,v)$ in such a way and the path $P_{uv}$ contains an edge $e \in E(G)$, we increase the weight $\bw_e$ by a multiplicative factor $(1+\eta)$. Naturally, after $t$ edges have been embedded by using the edge $e$, we have scaled up the weight of $e$ by a factor of $(1+\eta)^t$. Using $e^x \le 1 + 2x$ for $x \in [0, 1]$ and setting $\eta \approx 1/C$ ensures that the weight $\bw_e$ exceeds a large polynomial in $n$ once $t \gg (2 \log n)/\eta$ (which again is $\approx C$, up to logarithmic factors). At the same time, the algorithm only embeds edges $(u,v) \in E(H)$ if the distance between the endpoints in $G$ w.r.t. $\bw$ is small. This ensures that $\|\bw\|_1 = O(n \log(1/b))$ and that we never use an edge $e$ into which many embedding paths are already routed. More precisely, we proceed in rounds to embed edges in $H$. At later rounds (i.e. when $i$ is large), we have already embedded a large number of edges in $H$. Since the number of remaining edges is small, we allow them to be embedded along slightly longer paths, which still lets us argue that $\|\bw\|_1$ increases by at most $O(n)$ in the current round. If, in any round, it is not possible to embed many of the remaining edges with paths of weight at most the current threshold, we can simply return these edges and end up in the first scenario. \paragraph{Correctness (Returning in \Cref{lne:terminateCase2}).} We start by proving the following claim, which then immediately establishes correctness if \Cref{alg:embedOrSep} terminates at \Cref{lne:terminateCase2} (i.e.
in the second scenario). \begin{invariant}\label{inv:totalWeight} After the $i$-th iteration of the for-loop in \Cref{lne:forLoop}, we have $\|\bw\|_1 \leq 10n (1 + 2 \eta C \alpha_{\texttt{APSP}} \cdot (i+1)) \leq 20n$. \end{invariant} \begin{proof} Initially, $\|\bw\|_1 = \|\mathbf{1}^{|E(G)|}\|_1 \leq 10n$. To gauge the increase in $\|\bw\|_1$ during the $i$-th iteration of the for-loop, consider the effect of embedding a new edge $e$ in the foreach-loop starting in \Cref{lne:foreachUnrouted} (we only consider such iterations if the if-statement in \Cref{lne:ifReallyEmbed} evaluates true, as otherwise $\bw$ does not change). Let $\bw^{OLD}$ denote $\bw$ just before the foreach-loop iteration and $\bw^{NEW}$ just after. We clearly have that $\|\bw^{NEW}\|_1 = \|\bw^{OLD}\|_1 + \eta \cdot \bw^{OLD}(\Pi_{H' \mapsto G}(e))$ from \Cref{lne:increaseWeights}. But since the if-statement was true, we have that $\bw^{OLD}(\Pi_{H' \mapsto G}(e)) \leq 2^i \cdot C\alpha_{\texttt{APSP}}$. We conclude that each edge that is newly embedded increases $\|\bw\|_1$ by at most $\eta \cdot 2^i \cdot C\alpha_{\texttt{APSP}}$. At the beginning of the $i$-th iteration of the for-loop, there are at most $10n/2^{i-1}$ edges in $E(H) \setminus E(H')$. Indeed, at the very first iteration $i = 0$, $|E(H)| \le 20 n$ as the max degree of $H$ is at most $10.$ Later, $|E(H) \setminus E(H')| \le 10n / 2^{i-1}$ holds, as otherwise the algorithm would have terminated after the $(i-1)$-th iteration in \Cref{lne:terminateCase2}. Thus, during the $i$-th iteration, the foreach-loop in \Cref{lne:foreachUnrouted} iterates over at most $10n / 2^{i-1}$ edges as well. We can bound the total increase of $\|\bw\|_1$ during the $i$-th iteration by \[\frac{10n}{2^{i-1}} \cdot \eta \cdot 2^i \cdot C\alpha_{\texttt{APSP}} = 20n \cdot \eta C\alpha_{\texttt{APSP}}.\] The total number of iterations is at most $\lfloor \log_2(1/b) \rfloor + 1.$ This establishes the second inequality, using the definition of $\eta$.
\end{proof} Note that for every edge $(u,v)$ that is in $E(H) \setminus E(H')$ when the algorithm returns in \Cref{lne:terminateCase2}, the preceding foreach-loop iterated over $(u,v)$ and found that $\textsc{APSP}.\textsc{QueryDist}(u,v) > 2^i \cdot C\alpha_{\texttt{APSP}}$ (as otherwise $(u,v)$ would have been added to $E(H')$). But since $\textsc{APSP}.\textsc{QueryDist}(u,v) \leq \alpha_{\texttt{APSP}} \cdot \dist_{\bw}(u,v)$, this implies that $\dist_{\bw}(u,v) > 2^i \cdot C = C / b'$ by our choice of $b'$. To establish correctness, it only remains to use the if-condition preceding \Cref{lne:terminateCase2} and observe that the condition does not hold when $i = 0$. \paragraph{Correctness (Returning in \Cref{lne:terminateEmbed}).} It is straightforward to see from \Cref{alg:embedOrSep} that $\Pi_{H' \mapsto G}$ is a correct embedding from $H'$ to $G$ and that $|E(H) \setminus E(H')| \leq 10bn$. It thus only remains to bound the congestion of $\Pi_{H' \mapsto G}$. \begin{lemma}\label{lma:congestion} The congestion of $\Pi_{H' \mapsto G}$ is at most $\frac{2\log(2 C \alpha_{\texttt{APSP}}/ b)}{\eta}$. \end{lemma} \begin{proof} Let us fix any edge $e \in E(G)$. Note that each time we add an embedding path in the foreach-loop starting in \Cref{lne:foreachUnrouted} that contains $e$, we increase the weight $\bw_e$ to $(1+\eta)\bw_e$. Since initially $\bw_e = 1$, after the edge $e$ has been used $t$ times to embed an edge in the foreach-loop, we have that $\bw_e = (1+\eta)^t \geq e^{t\eta/2}$, using $e^x \leq 1+2x$ for $x \in [0,1]$. In particular, if the algorithm embeds $t$ times into $e$ for $t > \frac{2\log(2 C \alpha_{\texttt{APSP}} / b)}{\eta}$, then at the end of the algorithm, we would have $\bw_e > \frac{2 C \alpha_{\texttt{APSP}}}{b}$. However, note that by the if-condition in \Cref{lne:ifReallyEmbed}, we never embed into an edge $e$ that has weight more than $2^{\lfloor \log_2(1/b) \rfloor} \cdot C \alpha_{\texttt{APSP}} \leq \frac{C \alpha_{\texttt{APSP}}}{b}$, since otherwise the path using this edge would have higher weight.
We can thus conclude that at the end of the algorithm, $\bw_e \leq (1+\eta)\frac{C \alpha_{\texttt{APSP}}}{b} \leq \frac{2C \alpha_{\texttt{APSP}}}{b}$, which leads to a contradiction. \end{proof} \paragraph{Runtime Analysis.} The for-loop of the algorithm runs at most $O(\log (1/b))$ iterations, and in the $i$-th iteration at most $O(n/2^i)$ edges are iterated over in the foreach-loop starting in \Cref{lne:foreachUnrouted}. Thus, the total number of queries to the APSP data structure can be bounded by $O(\sum_i n/2^i) = O(n)$. The time the algorithm spends updating the weights in \Cref{lne:increaseWeights} can be bounded by observing that each edge $e$ has its weight increased only after an additional embedding path was added through $e$; but the congestion is bounded by $O(\log n/\eta)$ by \Cref{lma:congestion}, thus the foreach-loop is executed at most $O(n\log n/\eta)$ times over the entire course of the algorithm. This concludes our analysis of the number of updates to the APSP data structure. The runtime analysis of the algorithm follows along the same line of reasoning. \begin{remark} \label{rem:extractExpanderSW19} Our algorithm can be extended to compute expander decompositions, following the approach of \cite{SaranurakW18}. We refer the reader to this paper for additional background and the necessary definitions. For readers familiar with \cite{SaranurakW18}, we briefly describe the key step we need to implement: when $\textsc{SeparateOrCertify}(G,H,C,b)$ certifies that most edges in the expander $H$ can be embedded into $G$ (and hence, by \Cref{lma:folklore_embedding}, there are no sparse balanced cuts in $G$), we need to be able to extract a large expander from $G$ so that we only need to recurse on a small, potentially non-expanding part. To find an induced subgraph with large expansion, we first produce a new graph $G'$ by adding the edges $E(H)\setminus E(H')$ to $G$. This ensures that $G'$ is a good expander.
We then use the expander pruning of \cite{SaranurakW18} to delete the same edges $E(H)\setminus E(H')$ from $G'$, resulting in a large leftover expander $G''$ with vertex set $V''$. By construction, $G[V'']$ is now a large expander. \end{remark} \subsection{Extracting the Sparsest Cut} In order to prove \Cref{theorem:balanceCut}, we now have to show how to extract a sparse cut from the weight function that is returned in case no embedding is found. We point out that, in order to do so, it is significantly more convenient to work with an integral weight function $\bw$. We therefore round up the weight function obtained from \Cref{lemma:SeparateOrCertifyBalanced}, which might result in $\|\bw\|_1$ being at most twice as large as stated. We use the following auxiliary algorithm that, given any two vertices at large distance, finds a cut with few crossing edges. \begin{restatable}{claim}{sepLem}\label{clm:thinLayer} The procedure $\textsc{FindThinLayer}(G, \bwInt, u, v, D)$ takes a graph $G$ weighted by $\bwInt \in \mathbb{N}_{\geq 1}^{E(G)}$ and two vertices $u,v$ such that $\dist_{\bwInt}(u,v) > D$ for some integer $D > 4 \log_2 \|\bwInt\|_1$. It returns a set of vertices $S \neq \emptyset$ such that $|S| \leq |V|/2$ and $|E_G(S, V \setminus S)| \leq \frac{4 \bwInt(S) \log_2 \|\bwInt\|_1 }{D}$. The algorithm runs in time $O(|E_G(S)| \log |E_G(S)|)$. \end{restatable} Given this auxiliary algorithm, we can state the final algorithm and prove our main result, \Cref{theorem:balanceCut}. As described before, we use the algorithm $\textsc{SeparateOrCertify}(G, H, C, \hat{b})$ with a constant-degree, constant-sparsity expander $H$. It is straightforward to conclude that $G$ contains no balanced sparse cuts if the procedure can embed $H$. Otherwise, we take the weight function and repeatedly find a separator between the endpoints of edges in $F$ that are far from each other (using the auxiliary algorithm).
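The ball-growing idea behind $\textsc{FindThinLayer}$ can be sketched for the special case of unit edge weights, growing from a single endpoint (a simplification of ours; the actual procedure handles general integral weights and grows from both endpoints in parallel):

```python
from math import log2

def find_thin_layer_unit(adj, z, D, total_weight):
    """Grow B(z, r) one hop at a time; return it at the first radius whose
    boundary is 'thin', i.e. at most (4 log2 ||w||_1 / D) times the number
    of edges already counted inside the ball (unit weights throughout)."""
    thin = 4 * log2(total_weight) / D
    ball, frontier = {z}, {z}
    inside_edges = 0   # lower bound on edges with both endpoints in the ball
    for _ in range(D // 2):
        boundary = [(x, y) for x in frontier for y in adj[x] if y not in ball]
        if inside_edges > 0 and len(boundary) <= thin * inside_edges:
            return ball         # boundary is thin relative to the ball's weight
        inside_edges += len(boundary)   # these edges fall inside next step
        frontier = {y for _, y in boundary}
        ball |= frontier
    return ball

# Path graph 0-1-...-9: growing from vertex 0 with D = 8, the boundary
# stays a single edge while the ball keeps growing, so a thin layer is
# found after the first expansion.
path_adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
print(find_thin_layer_unit(path_adj, 0, D=8, total_weight=10))
```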
Note that if there are roughly $b'n$ edges in $F$ at distance roughly $C/b'$, then repeatedly using the auxiliary algorithm with $D \approx C/b'$ produces a cut whose smaller side has $\Omega(|F|) = \Omega(b'n)$ vertices. Using the guarantees from the auxiliary procedure, we further have that the number of edges in the induced cut is at most $\tilde{O}(b'n/ C)$. Thus, the sparsity of the cut must be $\tilde{O}(1/C)$ where $C \approx 1/\psi$ by our choice of parameters. \begin{algorithm} $H \gets \textsc{ConstDegExpander}(|V(G)|)$; $C \gets 320\log n/\psi$; \\ \If{$\textsc{SeparateOrCertify}(G, H, C, 2b)$ returns $(H', \Pi_{H' \mapsto G})$}{ \Return $(H', \Pi_{H' \mapsto G})$.\label{lne:retTrivial} }\Else(\tcp*[h]{i.e. if it returns $(\bwInt, b', F)$}){ $\widehat{\bw} \gets \lceil \bwInt \rceil$. \\ $X \gets V(G)$.\\ $D \gets 2C / b'$.\\ \While{$\exists (u,v) \in H[X] \cap F$ and $|V \setminus X| \le n / 4$}{ \tcp*[h]{$\dist_{\widehat{\bw}}(u, v) > D$} \\ $S \gets \textsc{FindThinLayer}(G[X], \widehat{\bw}, u, v, D)$.\\ $X \gets X \setminus S$. } \Return $V \setminus X$. \label{lne:returnSparseCut} } \caption{$\textsc{SparseCutOrCertify}(G, \psi, b)$}\label{alg:mainAlgo} \end{algorithm} \APSPreduction* \begin{proof} The case where \Cref{alg:mainAlgo} returns in \Cref{lne:retTrivial} follows directly from \Cref{lemma:SeparateOrCertifyBalanced}, \Cref{thm:constExp} and \Cref{lma:folklore_embedding}. Let us therefore analyze the remaining case where the algorithm returns in \Cref{lne:returnSparseCut} (the while-loop can be seen to terminate since each iteration shrinks the set $X$ by \Cref{clm:thinLayer}, and $X = \emptyset$ trivially has no two vertices at far distance). We first prove that the final set $V \setminus X$ has size $b'n \leq |V \setminus X| \leq \frac{3}{4}n$: \begin{itemize} \item $b'n \leq |V \setminus X|$: Initially, $H[X] = H$, and $F \subseteq H$ contains more than $10b' n$ edges by \Cref{lemma:SeparateOrCertifyBalanced}.
Every edge $(u, v) \in F$ has $\dist_{\widehat{\bw}}(u,v) \ge \dist_{\bwInt}(u,v) > C / b'.$ Since the maximum degree of $H$ is $10$, as long as $|V \setminus X| < b'n$, $H[X]$ is missing fewer than $10b'n$ edges of $H.$ Thus, $H[X] \cap F$ is not empty and the while-loop continues. We conclude that $b'n \leq |V \setminus X|$ holds. \item $|V \setminus X| \leq \frac{3}{4}n$: Since the while-loop condition allows invocations of $\textsc{FindThinLayer}$ only while $|V \setminus X| \leq n/4$, and since this procedure returns the smaller side of the cut it produces by \Cref{clm:thinLayer} (which is found on $G[X]$), we can conclude that at the end of the algorithm $|V \setminus X| \leq n/4 + n/2 \leq \frac{3}{4}n$. \end{itemize} This implies that $|X| \ge n / 4 \ge b'n / 2 \ge bn$ since $2b \le b' \le \frac{1}{2}.$ Next, we bound the sparsity of the cut $V \setminus X.$ Let $S_1, S_2, \ldots, S_k$ be the sets returned by the procedure $\textsc{FindThinLayer}$, one after another, over the course of the while-loop, so that $V \setminus X = \bigcup_i S_i$. We first observe that these sets are vertex-disjoint, since after the $i$-th iteration, the procedure $\textsc{FindThinLayer}$ is invoked on the graph $G_i = G[V \setminus (S_1 \cup \ldots \cup S_i)]$ to find $S_{i+1}$. Further, the final cut $(X, V \setminus X)$ contains only edges that were previously in a thin layer, i.e. \[ E_G(X, V \setminus X) \subseteq \bigcup_i E_{G_i}(V \setminus (S_1 \cup \ldots \cup S_i), S_i).
\] It remains to use the guarantee of \Cref{clm:thinLayer} that for each $S_i$, we have $|E_{G_i}(S_i, V \setminus (S_1 \cup \ldots \cup S_i))| \leq \frac{4 \widehat{\bw}(S_i) \log_2 \|\bwInt\|_1 }{D}$; by the vertex-disjointness of $S_1, S_2, \ldots, S_k$, we thus have that \begin{align*} |E_G(X, V \setminus X)| &\leq | \bigcup_i E_{G_i}(S_i, V \setminus (S_1 \cup \ldots \cup S_i))| \leq \sum_i \frac{4 \widehat{\bw}(S_i) \log \|\widehat{\bw}\|_1 }{D} \\ &\leq \frac{4 \|\widehat{\bw}\|_1 \log \|\widehat{\bw}\|_1 }{D} = \frac{8 n \cdot b' \log n}{C} \end{align*} where we use $\|\bwInt\|_1 \leq 20n$ from \Cref{lemma:SeparateOrCertifyBalanced}, that $\widehat{\bw}$ is obtained by rounding up $\bwInt$, and our choice of $D$. Since we have shown that $|X|, |V \setminus X| \geq b'n / 2 \geq bn$, choosing $C = 320\log n/\psi$, we have $\Psi_G(V \setminus X) = \Psi_G(X) \leq \psi$, as desired. We use the disjointness of $S_1, S_2, \ldots, S_k$ to argue that the total time spent in the procedure $\textsc{FindThinLayer}$ can be bounded by $O(n \log n)$. The remainder of the runtime analysis is trivial given \Cref{lemma:SeparateOrCertifyBalanced}. \end{proof} It remains to provide an implementation of $\textsc{FindThinLayer}(G, \bwInt, u, v, D)$ and prove \Cref{clm:thinLayer}. The algorithm follows a simple ball-growing procedure: it grows balls from both endpoints $u$ and $v.$ Because the distance between $u$ and $v$ is guaranteed to be large, this growing process runs for many steps. However, the two balls cannot grow larger than the entire graph, so at some step one of the balls must grow by only a thin layer. \sepLem* \begin{proof} Since $\dist_{\bwInt}(u,v) > D$ by assumption, at least one of $u$ and $v$ has a ball of radius $D/2$ containing at most half the vertices in $G$. More formally, for some $z \in \{u,v\}$, $|B_{G, \bwInt}(z, D/2)| \leq |V|/2$. We claim that there is a radius $0 < r \leq D/2$ such that taking $S = B(z, r)$ satisfies the above guarantees.
For this proof, it is convenient to define the following auxiliary function $\Phi(z,r) = \sum_{e \in E} \Phi(z,r,e)$ where the latter functions are defined for all edges $e = (x,y) \in E$ by \[ \Phi(z, r, e) = \begin{cases} |\dist_{\bwInt}(z,x) - \dist_{\bwInt}(z,y)| & \text{if } \dist_{\bwInt}(z,x) \leq r \text{ and } \dist_{\bwInt}(z,y) \leq r\\ r - \dist_{\bwInt}(z,x) & \text{if } \dist_{\bwInt}(z,x) \leq r < \dist_{\bwInt}(z,y)\\ r - \dist_{\bwInt}(z,y) & \text{if } \dist_{\bwInt}(z,y) \leq r < \dist_{\bwInt}(z,x)\\ 0 & \text{otherwise} \end{cases} \] Here, an edge $e = (x,y) \in E(G)$ contributes the difference of the distances from $z$ to its two endpoints $x$ and $y$ (which is at most $\bwInt_e$) to $\Phi(z,r,e)$ if both endpoints are fully contained in the ball $B(z,r)$. If neither endpoint is contained, it contributes $0$. Otherwise, $e = (x,y)$ contributes the distance from the endpoint closer to $z$ to the boundary of the ball. In all cases, $0 \leq \Phi(z, r, e) \leq \bwInt_e$. This means in particular that the weight of the edges incident to $B(z,r)$, denoted by $\bwInt(E(B(z,r)))$, is always at least $\Phi(z,r)$, i.e. $\bwInt(E(B(z,r))) \geq \Phi(z,r)$ for all $r$. Note further that $\Phi(z, r+1) - \Phi(z,r)$ is exactly $|E_G(B(z, r), V \setminus B(z, r))|$, the number of edges that leave $B(z,r)$. To see this, observe that an edge $e = (x, y)$ contributes $1$ to the difference if $\dist_{\bwInt}(x, z) \le r < r+1 \le \dist_{\bwInt}(y, z)$ holds, i.e. $e$ leaves $B(z, r)$. Otherwise, the contribution of $e$ is identical in both $\Phi(z, r)$ and $\Phi(z, r+1).$ Here we use that $\bwInt$ is integral and so are distances in $G$. Given this set-up, assume for contradiction that for all $0 < r < D/2$, we have \[\Phi(z,r+1) > \left(1+ \frac{4 \log_2 \|\bwInt\|_1}{D}\right) \Phi(z,r).\] By induction, we have that \[\Phi(z, D/2) \geq \left(1+ \frac{4 \log_2 \|\bwInt\|_1}{D}\right)^{D/2 - 1} \Phi(z,1) >\|\bwInt\|_1\] where we use that $1+x \geq 2^x$ for $x \in [0,1]$.
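To spell out this last step (an added verification; it assumes, consistently with the parameters above, that $D > 4$, that $D \geq 4 \log_2 \|\bwInt\|_1$ so that $x = \frac{4 \log_2 \|\bwInt\|_1}{D}$ lies in $[0,1]$, and that $\Phi(z,1) \geq 1$, which holds since $z$ has an incident edge of integral weight at least $1$): applying $1+x \geq 2^x$ yields
\[
\left(1+ \frac{4 \log_2 \|\bwInt\|_1}{D}\right)^{D/2 - 1} \geq 2^{x \left(\frac{D}{2} - 1\right)} = \|\bwInt\|_1^{\frac{4}{D}\left(\frac{D}{2} - 1\right)} = \|\bwInt\|_1^{2 - \frac{4}{D}} > \|\bwInt\|_1,
\]
where the final inequality uses $2 - \frac{4}{D} > 1$ and $\|\bwInt\|_1 > 1$.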
This would give a contradiction since $\|\bwInt\|_1 \ge \bwInt(E(B(z,D/2))) \geq \Phi(z,D/2) > \|\bwInt\|_1$. Therefore, there must be some radius $0 < r < D/2$ such that \begin{align*} \Phi(z,r+1) \le \left(1+ \frac{4 \log_2 \|\bwInt\|_1}{D}\right) \Phi(z,r). \end{align*} Combining with our previous discussion yields that \begin{align*} |E(B(z,r), V \setminus B(z,r))| &= \Phi(z, r+1) - \Phi(z, r) \\ &\le \frac{4 \log_2 \|\bwInt\|_1}{D}\Phi(z,r) \\ &\le \frac{4 \bwInt(E(B(z,r))) \log_2 \|\bwInt\|_1}{D}. \end{align*} We can therefore take $S = B(z,r)$, as desired. Finally, to compute this cut, we run Dijkstra's algorithm from $u$ and $v$ in parallel and check for the earliest radius $r$ at which the inequality above holds for either of them. Thus, the algorithm runs in time $O(|E_G(S)| \log |E_G(S)|)$. \end{proof} \printbibliography \appendix \section{Reducing Conductance to Sparsity} \label{sec:condViaSpars} Here, we prove \Cref{thm:conductanceVersion}. The proof is an adaptation of Lemma 5.4 and Theorem 5.5 of \cite{ChuzhoyGLNPS19}. \paragraph{The Transformation Algorithm.} Our algorithm is essentially a wrapper around our main result, \Cref{thm:fineGrainedReduction}. That is, we first construct a bounded-degree graph $\widehat{G}$ from $G$, then run the algorithm from \Cref{thm:fineGrainedReduction} on $\widehat{G}$. If the algorithm certifies that $\widehat{G}$ has no balanced sparse cuts, we prove that $G$ has no balanced low-conductance cuts. Otherwise, if the algorithm returns a sparse cut in $\widehat{G},$ we recover a balanced low-conductance cut in $G.$ We first describe the construction of $\widehat{G}$ given $G = (V, E)$. Let us assume an arbitrary ordering of the edges incident to each vertex $v \in V$.
$\widehat{G} = (\widehat{V}, \widehat{E})$ is constructed as follows: \begin{enumerate} \item For each vertex $v \in V$, create a set of vertices $X_v = \{v_{1}, v_2, \ldots, v_{\deg(v)}\}$, and a $\psi_0$-expander $H_v$ on $X_v$ using \Cref{thm:constExp}. Add $H_v$ to $\widehat{G}.$ \item For each edge $e = (u, v) \in E$, we add $(u_i, v_j)$ to $\widehat{E}$ if $e$ is the $i^{th}$ (and $j^{th}$) edge incident to $u$ (and $v$, respectively). \end{enumerate} Clearly, $\widehat{G}$ has ${{\textsf{vol}}}(G) = 2m$ vertices and each vertex has at most $10$ incident edges, at most $9$ from the expander and $1$ from the corresponding edge in $G$. We now run the algorithm from our main result, \Cref{thm:fineGrainedReduction}, on the graph $\widehat{G}$. If the algorithm certifies that no $b$-balanced $\psi$-sparse cut exists in $\widehat{G},$ we return the same result for $G$. Otherwise, we run \Cref{alg:transform} on the returned cut $(A, \overline{A})$ in $\widehat{G}$ to obtain an $\Omega(b)$-balanced cut $(S, \overline{S})$ of conductance $O(\psi)$ in $G$. It is straightforward to check that \Cref{alg:transform} is deterministic and runs in time linear in the number of edges of $G$, so the runtimes stated in \Cref{thm:fineGrainedReduction} are asymptotically unaffected. \begin{algorithm} \Return $S = \Set{u \in V}{\Abs{X_u \cap A} \ge \Abs{X_u \setminus A}}$.\label{lne:addS} \caption{$\textsc{Transform}(G, \widehat{G}, A \subseteq V(\widehat{G}))$}\label{alg:transform} \end{algorithm} \paragraph{Certifying $G$.} We start by showing that if no $\Omega(b)$-balanced $O(\psi)$-sparse cut is found on $\widehat{G}$, then no such cut exists in $G$ either.
\begin{lemma} \label{lemma:reduceToConstDegNoCut} Given a balance parameter $b \in (0, 1/4)$, if every cut $(X, \overline{X})$ in $\widehat{G}$ with $\Abs{X}, \Abs{\overline{X}} \ge b \cdot |V(\widehat{G})|$ has $\Psi_{\widehat{G}}(X) \geq \psi$, then every cut $(S, \overline{S})$ in $G$ with ${{\textsf{vol}}}_G(S), {{\textsf{vol}}}_G(\overline{S}) \ge b \cdot {{\textsf{vol}}}(G)$ has $\Phi_{G}(S) \geq \psi$. \end{lemma} \begin{proof} Let $(S, \overline{S})$ be any cut in $G$ with ${{\textsf{vol}}}_G(S), {{\textsf{vol}}}_G(\overline{S}) \ge b \cdot {{\textsf{vol}}}(G)$. Define $X_S = \cup_{u \in S} X_u$ and $\overline{X_S} = \cup_{u \not\in S} X_u = X_{\overline{S}}.$ Observe that $|E_G(S, \overline{S})| = |E_{\widehat{G}}(X_S, \overline{X_S})|$ because the $\psi_0$-expander edges in $\widehat{G}$ do not appear in the cut and every cut edge $(u_i, v_j)$ in $\widehat{G}$ corresponds to the cut edge $(u, v) \in G.$ By construction of $\widehat{G}$, we have that $|X_S| = {{\textsf{vol}}}_G(S)$ and $|\overline{X_S}| = {{\textsf{vol}}}_G(\overline{S})$ and therefore $|X_S|, |\overline{X_S}| \geq b \cdot {{\textsf{vol}}}(G) = b |V(\widehat{G})|$ by assumption on $(S, \overline{S})$. Thus $(X_S, \overline{X_S})$ is balanced in $\widehat{G}$ and we can use the guarantee that $\Psi_{\widehat{G}}(X_S) \geq \psi$. This yields \begin{align*} |E_G(S, \overline{S})| = |E_{\widehat{G}}(X_S, \overline{X_S})| \ge \psi \cdot \min\{\Abs{X_S}, \Abs{\overline{X_S}}\} = \psi \cdot \min\{{{\textsf{vol}}}_G(S), {{\textsf{vol}}}_G(\overline{S})\}, \end{align*} which is exactly $\Phi_G(S) \geq \psi$. \end{proof} \paragraph{Returning a Sparse Cut.} It remains to prove that the above algorithm transforms any balanced sparse cut in $\widehat{G}$ to a balanced low conductance cut in $G.$ We prove this claim in two steps. We first show that the number of edges in the cut $(S, \overline{S})$ in $G$ is comparable to the number of edges in $(A, \overline{A})$ in $\widehat{G}$.
\begin{claim}\label{clm:numCrossingRemainsLow} $|E_G(S, \overline{S})| = O\left(\Abs{E_{\widehat{G}}(A, \overline{A})}\right)$. \end{claim} \begin{proof} Define $X_S = \cup_{u \in S} X_u$. For any vertex $u \in V$, the graph $H_u$ contributes at least $\psi_0 \cdot \min\{|X_u \cap A|, |X_u \setminus A|\}$ edges to the cut $(A, \overline{A})$. But in $\widehat{G}$, the number of edges incident to $X_u$ that are in the cut $(X_S, \overline{X_S})$ but were previously not in the cut $(A, \overline{A})$ can be at most $\min\{|X_u \cap A|, |X_u \setminus A|\}$ since $H_u$ is contained entirely in $X_S$ or $\overline{X_S}$ and only one additional edge is incident to each vertex in $X_u$. Thus, we can charge each edge in $E_{H_u}(A, \overline{A})$ with at most $1/\psi_0$ edges from $E_{\widehat{G}}(X_S, \overline{X_S}) \setminus E_{\widehat{G}}(A, \overline{A})$ incident to $X_u$ and cover all such edges. We conclude that $|E_{\widehat{G}}(X_S, \overline{X_S})| \le |E_{\widehat{G}}(A, \overline{A})| + |E_{\widehat{G}}(A, \overline{A})|/\psi_0$, and finally use that $|E_G(S, \overline{S})| = |E_{\widehat{G}}(X_S, \overline{X_S})|$ as observed in \Cref{lemma:reduceToConstDegNoCut}. \end{proof} Next, we prove that $(S, \overline{S})$ is a balanced cut. \begin{claim}\label{clm:balancedProd} If $\Psi_{\widehat{G}}(A) \leq \psi_0/2$, we have ${{\textsf{vol}}}_G(S) \geq \frac{1}{2}|A|$ and ${{\textsf{vol}}}_G(\overline{S}) \geq \frac{1}{2}|\overline{A}|$. \end{claim} \begin{proof} We prove ${{\textsf{vol}}}_G(S) \geq \frac{1}{2}|A|$ (the proof of ${{\textsf{vol}}}_G(\overline{S}) \geq \frac{1}{2}|\overline{A}|$ is symmetric). Let us assume for the sake of contradiction that ${{\textsf{vol}}}_G(S) < \frac{1}{2}|A|$. We argued before that for every $u \in V$, we have $|E_{H_u}(A, \overline{A})| \geq \psi_0 \cdot \min\{|A \cap X_u|, |X_u \setminus A|\}$.
We again define $X_S = \cup_{u \in S} X_u$ and observe that $\sum_{u \in S} |A \cap X_u| \le |X_S| = {{\textsf{vol}}}_G(S) < \frac{1}{2}|A|$ implies that $\sum_{u \not\in S} |A \cap X_u| \geq |A| - |X_S| > \frac{1}{2}|A|$. The definition of $S$ also yields that $|A \cap X_u| \le |X_u \setminus A|$ for every $u \not\in S.$ Combining these insights, we conclude \[ |E_{\widehat{G}}(A, \overline{A})| \geq \sum_u |E_{H_u}(A, \overline{A})| \geq \sum_u \psi_0 \cdot \min\{|A \cap X_u|, |X_u \setminus A|\} \ge \sum_{u \not\in S} \psi_0 \cdot |A \cap X_u| > \frac{\psi_0}{2}|A| \] which implies that $\Psi_{\widehat{G}}(A) > \psi_0 /2$, contradicting our assumption. \end{proof} Finally, we combine our insights to prove \Cref{thm:conductanceVersion}. \begin{proof}[Proof of \Cref{thm:conductanceVersion}.] We have from the algorithm that $|A|, |\overline{A}| \geq b \cdot |V(\widehat{G})| = 2bm$. Therefore, by \Cref{clm:balancedProd}, we produce a cut $(S, \overline{S})$ in $G$ with ${{\textsf{vol}}}_G(S), {{\textsf{vol}}}_G(\overline{S}) \ge b/2 \cdot {{\textsf{vol}}}(G)$. By \Cref{clm:numCrossingRemainsLow}, we further have that $|E_G(S, \overline{S})| \leq O(\Abs{E_{\widehat{G}}(A, \overline{A})})$ and therefore $\Phi_G(S) = \frac{|E_G(S, \overline{S})|}{\min\{{{\textsf{vol}}}_G(S), {{\textsf{vol}}}_G(\overline{S})\}} = O\left( \frac{\Abs{E_{\widehat{G}}(A, \overline{A})}}{\min\{|A|, |\overline{A}|\}}\right) = O(\phi)$ where the last equality stems from the fact that $(A, \overline{A})$ has $\Psi_{\widehat{G}}(A) \leq \phi$ by \Cref{thm:fineGrainedReduction}. \end{proof} \section{The Constant-Degree Assumption} \label{sec:constDegAssump} In this section, we prove that the following assumption is without loss of generality. \begin{assumption}\label{assump:constDegree} When computing a sparse cut with respect to sparsity, we may assume, at a cost of a constant factor in the output quality, that the input graph $G$ has maximum degree 10.
\end{assumption} \begin{proof} Consider obtaining the graph $\widehat{G}$ from $G$ by adding $\lceil m/n \rceil$ self-loops to each vertex in $G$. We then invoke \Cref{thm:conductanceVersion} on $\widehat{G}$ with $\psi$ and parameter $b$. Note first that in a connected graph $G$, we have that ${{\textsf{vol}}}(\widehat{G}) \leq 4m$. Further, note that since self-loops do not appear in cuts, we have $E_G(S, \overline{S}) = E_{\widehat{G}}(S, \overline{S})$ for all $S$. Now, if the algorithm certifies that $\widehat{G}$ has no balanced low-conductance cut, we have for each $(S, \overline{S})$ in $G$ where $|S|, |\overline{S}| \geq 4b \cdot n$ that ${{\textsf{vol}}}_{\widehat{G}}(S) \geq |S| \lceil m/n \rceil \geq 4b m \geq b \cdot {{\textsf{vol}}}(\widehat{G})$, and symmetrically for $\overline{S}$. It follows that $|E_G(S, \overline{S})| = |E_{\widehat{G}}(S, \overline{S})| \geq \psi \min\{{{\textsf{vol}}}_{\widehat{G}}(S), {{\textsf{vol}}}_{\widehat{G}}(\overline{S})\} \geq \frac{1}{3} \psi \min\{|S|, |\overline{S}|\}$. Thus every $4b$-balanced cut has sparsity at least $\frac{1}{3}\psi$. Otherwise, the algorithm returns a cut $(S, \overline{S})$ of conductance at most $\phi$ in $\widehat{G}$. But we have $\Phi_{\widehat{G}}(S) \geq \Psi_{\widehat{G}}(S) = \Psi_{G}(S)$ for all $S$. \end{proof} \section{A Simple Randomized Algorithm to Construct Low-Degree Expanders} \label{sec:randExpand} \begin{algorithm} Construct an empty graph $H$ on $n$ vertices.\\ \ForEach{$v \in V(H)$}{ \For{$i = 1, 2, \ldots, k = 80 \log n$}{ Sample a vertex $u$ from $V(H)$ uniformly at random, independently of all previous samples.\\ Add edge $(u,v)$ to $H$. } } \Return $H$ \caption{$\textsc{RandConstDegExpander}(n)$} \label{alg:randExpa} \end{algorithm} \paragraph{The Algorithm.} Here, we provide \Cref{alg:randExpa} which implements the algorithm mentioned in \Cref{rmk:simpleExpander}. \paragraph{Analysis.} Before we start our analysis, we recall the following Chernoff bound. \begin{theorem} Given i.i.d.
$\{0,1\}$-random variables $X_1, X_2, \ldots, X_k$, $X = \sum_i X_i$ and any $\delta \geq 0$, we have $P[X \geq (1+\delta) \mathbb{E}[X]] \leq e^{- \frac{\delta^2 \mathbb{E}[X]}{(2+\delta)}}$ and $P[X \leq (1-\delta)\mathbb{E}[X]] \leq e^{- \frac{\delta^2\mathbb{E}[X]}{2}}$. \end{theorem} Let us first prove that $H$ has bounded degree. \begin{claim} \Cref{alg:randExpa} returns $H$ such that w.h.p., the maximum degree is $O(\log n)$. \end{claim} \begin{proof} Each vertex $u$ is selected as the second endpoint of an edge added to $H$ in the inner for-loop with probability $1/n$ per iteration. As there are $nk$ iterations of this for-loop, and each iteration is independent, we have by the Chernoff bound (applied with $\delta = 2$) that each vertex $u$ is selected at most $3k$ times with probability at least $1 - e^{- \frac{4 k}{4}} = 1 - e^{-k} \geq 1- n^{-32}$. Since each vertex $u$ has degree equal to $k$ plus the number of times it is sampled, we have that its degree is at most $4k = O(\log n)$ with probability at least $1 - n^{-32}$. We obtain our result over all vertices in $H$ by applying a union bound. \end{proof} \begin{claim} \Cref{alg:randExpa} returns an $\Omega(\log n)$-expander $H$ w.h.p. \end{claim} \begin{proof} Consider any set $S$ with $ |S| \leq n/2$. Then, we have that $\mathbb{E}[|E_H(S, \overline{S})|] \geq \frac{1}{2}|S| k$ since each edge $(u,v)$ sampled when the foreach-loop iterates over a vertex $v \in S$ has $u \not\in S$ with probability at least $\frac{1}{2}$ and there are $|S| k$ such sampling events. Since they are independent, we further have from the Chernoff bound that $P[|E_H(S, \overline{S})| \leq \frac{1}{4}|S|k] \leq e^{-\frac{|S|k}{16}} = n^{-5|S|}$. It is clear that if $|E_H(S, \overline{S})| > \frac{1}{4}|S|k$ then $\Psi_H(S) \geq \frac{k}{4} = \Omega(\log n)$. The remaining difficulty is that there are an exponential number of cuts so a union bound seems at first hard to apply.
However, we observe that for each $\alpha \geq 1$ there are at most ${n \choose \alpha} \leq \left(\frac{ne}{\alpha}\right)^{\alpha} \leq n^{3\alpha}$ cuts whose smaller side contains $\alpha$ vertices. As we have proven that a fixed such cut satisfies $|E_H(S, \overline{S})| \leq \frac{1}{4}\alpha k$ with probability at most $n^{- 5\alpha}$, we can thus conclude by a simple union bound argument that $H$ is not an $\Omega(\log n)$-expander with probability at most $\sum_{\alpha \geq 1} {n \choose \alpha} \cdot n^{- 5\alpha} \leq \sum_{\alpha \geq 1} n^{3\alpha} \cdot n^{- 5\alpha} \leq 1/n$. \end{proof} \end{document}
\begin{document} \title{Volume and topology of bounded and closed hyperbolic $3$-manifolds.} \author{Jason DeBlois} \address{Department of Mathematics, Statistics, and Computer Science (M/C 249)\\ University of Illinois at Chicago\\ 851 S. Morgan St.\\ Chicago, IL 60607-7045} \email{jdeblois@math.uic.edu} \thanks{Partially supported by NSF grant DMS-0703749} \author{Peter B.~Shalen} \address{Department of Mathematics, Statistics, and Computer Science (M/C 249)\\ University of Illinois at Chicago\\ 851 S. Morgan St.\\ Chicago, IL 60607-7045} \email{shalen@math.uic.edu} \thanks{Partially supported by NSF grants DMS-0204142 and DMS-0504975} \begin{abstract} Let $N$ be a compact, orientable hyperbolic $3$-manifold with $\partial N$ a connected totally geodesic surface of genus $2$. If $N$ has Heegaard genus at least $5$, then its volume is greater than $6.89$. The proof of this result uses the following dichotomy: either $N$ has a long \textit{return path} (defined by Kojima-Miyamoto), or $N$ has an embedded codimension-$0$ submanifold $X$ with incompressible boundary $T \sqcup \partial N$, where $T$ is the frontier of $X$ in $N$, which is not a book of $I$-bundles. As an application of this result, we show that if $M$ is a closed, orientable hyperbolic $3$-manifold with $ \mathrm{dim}_{\mathbb{Z}_2} H_1(M; \mathbb{Z}_2) \geq 5$, and if the cup product map $H^1 (M;\mathbb{Z}_2) \otimes H^1(M;\mathbb{Z}_2) \rightarrow H^2(M;\mathbb{Z}_2)$ has image of dimension at most one, then $M$ has volume greater than $3.44$. \end{abstract} \maketitle \section{Introduction} The results of this paper support an old theme in the study of hyperbolic $3$-manifolds, that the volume of a hyperbolic $3$-manifold increases with its topological complexity. The first main result, Theorem \ref{vol6.89} below, reflects this theme in the context of hyperbolic manifolds with totally geodesic boundary. We denote the Heegaard genus of a $3$-manifold $N$ by $\Hg(N)$. 
\newcommand\volsixeightnine{ Let $N$ be a compact, orientable hyperbolic $3$-manifold with $\partial N$ a connected totally geodesic surface of genus $2$. If $\Hg(N) \geq 5$, then $N$ has volume greater than $6.89$. } \begin{theorem}\label{vol6.89} \volsixeightnine \end{theorem} Our second main result concerns closed manifolds: \newcommand\volthreefourfour{ Let $M$ be a closed, orientable hyperbolic $3$-manifold with $$ \mathrm{dim}_{\mathbb{Z}_2} H_1(M; \mathbb{Z}_2) \geq 5, $$ and suppose that the cup product map $H^1 (M;\mathbb{Z}_2) \otimes H^1(M;\mathbb{Z}_2) \rightarrow H^2(M;\mathbb{Z}_2)$ has image of dimension at most one. Then $M$ has volume greater than $3.44$. } \begin{theorem}\label{vol3.44} \volthreefourfour \end{theorem} Theorem \ref{vol6.89} builds on work by Kojima and Miyamoto. Miyamoto proved that the minimal--volume compact hyperbolic $3$-manifolds with totally geodesic boundary of genus $g$ decompose into $g$ regular truncated tetrahedra, each with dihedral angle $\pi/(3g)$ \cite[Theorem 5.4]{Miy}. Their volumes increase with $g$, taking values $6.452...$ for $g=2$ and $10.428...$ for $g=3$. Miyamoto's theorem implies in particular that the minimal volume compact hyperbolic manifolds with totally geodesic boundary of genus $g$ have Heegaard genus equal to $g+1$. Prior to the work of Miyamoto, Kojima-Miyamoto established the universal lower bound of $6.452...$ for the volume of compact hyperbolic $3$-manifolds with totally geodesic boundary and described the minimal--volume examples \cite{KM}. In fact their result is slightly stronger: if $N$ is not ``simple'', then $\mathrm{vol}(N) > 6.47$. In the terminology of \cite{KM}, a hyperbolic manifold with geodesic boundary is simple if it admits a decomposition into truncated polyhedra with one internal edge. Such manifolds are classified in \cite[Lemma 2.2]{KM}, and include those with minimal volume. Theorem \ref{vol6.89} can be regarded as an extension of Kojima-Miyamoto's theorem.
Experimental results of Frigerio-Martelli-Petronio \cite{FMP} suggest that the next smallest manifolds with geodesic boundary after those of minimal volume have volume greater than $7.1$, so it is likely that Theorem \ref{vol6.89} is not close to sharp. Nonetheless it seems to be the only result of its kind in the literature. Theorem \ref{vol3.44} will be deduced by combining Theorem \ref{vol6.89} with results from \cite{CDS} and \cite{CS_vol}. The transition from these results to Theorem \ref{vol3.44} involves two other results, Theorems \ref{closedvol6.89} and \ref{genus2or3} below, which are of independent interest. \newcommand\closedvolsixeightnine{ Let $M$ be a closed, orientable hyperbolic $3$-manifold containing a closed, connected incompressible surface of genus 2 or 3, and suppose that $\Hg(M) \geq 8$. Then $M$ has volume greater than $6.89$. } \begin{theorem} \label{closedvol6.89} \closedvolsixeightnine \end{theorem} Theorem \ref{closedvol6.89} is analogous to \cite[Theorem 6.5]{CDS}, and follows in a similar way: we apply Theorem \ref{vol6.89} and the results of Miyamoto and Kojima-Miyamoto discussed above, using work of Agol-Storm-Thurston \cite{ASTD}, to the output of the topological theorem below. The notation in the statement is taken from \cite{CDS}. In particular, below and in the remainder of this paper, we will use the term ``simple" as it is defined in \cite[Definitions 1.1]{CDS}, which differs from its usage in \cite{KM} mentioned above. We also recall the definitions of $M \,\backslash\backslash\, S$ from the first sentence of \cite{CDS}, and ``$\kish$'' from Definition 1.1 there. \newcommand\genustwoorthree { Suppose that $M$ is a closed, simple $3$-manifold which contains a connected closed incompressible surface of genus 2 or 3, and that $\Hg(M) \geq 8$. 
Then $M$ contains a connected closed incompressible surface $S$ of genus at most 4, such that either $\chibar(\kish(M\,\backslash\backslash\, S)) \geq 2$, or $S$ is separating and $M \,\backslash\backslash\, S$ has an acylindrical component $N$ with $\Hg(N) \geq 7$. } \begin{theorem}\label{genus2or3} \genustwoorthree \end{theorem} Theorem \ref{genus2or3} follows by application of \cite[Theorem 5.8]{CDS} jointly with \cite[Theorem 3.1]{CDS}. It is the analog of \cite[Corollary 5.9]{CDS} for manifolds possessing an incompressible surface of genus 3. The proof of Theorem \ref{vol3.44} follows the outline of \cite[Theorem 6.8]{CDS}. In place of the results concerning 3--free groups used in \cite{CDS}, the proof uses results of \cite{CS_vol} concerning 4-free groups, and Theorem \ref{closedvol6.89} above replaces \cite[Theorem 6.5]{CDS}. All the theorems stated above are proved in Section \ref{sec:closed}. Sections \ref{sec:prelim}--\ref{sec:111} constitute preparation for the proof of Theorem \ref{vol6.89}. We introduce \textit{return paths}, defined by Kojima \cite{Ko}, and \textit{$(i,j,k)$ hexagons} in Section \ref{sec:prelim}. (The analysis of $(i,j,k)$ hexagons is a crucial element of \cite{KM}, but we have borrowed our notation for them from Gabai, Meyerhoff and Milley's paper \cite{GMM}, which is set in a different context.) Lemmas \ref{borbounds} and \ref{l2vsl1}, which are due to Kojima-Miyamoto \cite{KM}, respectively give an absolute lower bound on $\ell_1$, and a lower bound for $\ell_2$ in terms of $\ell_1$. Lemma \ref{KLM} refines Lemma \ref{l2vsl1}, giving a bound for $\ell_2$ which improves that of \cite{KM} when $\ell_1$ is in a certain interval. Section \ref{sec:KM} describes Kojima-Miyamoto's volume bounds. The main result rigorously establishes a lower bound which is apparent from inspection of \cite[Graph 4.1]{KM}. 
\newtheorem*{l1cosh1.215Prop}{Proposition \ref{l1cosh1.215}} \newcommand\ellonecoshonetwo{ Let $N$ be a hyperbolic $3$-manifold with $\partial N$ connected, totally geodesic, and of genus $2$, satisfying $\cosh \ell_1 \geq 1.215$. Then $N$ has volume greater than $6.89$. } \begin{l1cosh1.215Prop} \ellonecoshonetwo \end{l1cosh1.215Prop} \newtheorem*{no(1,1,1)Prop}{Proposition \ref{no(1,1,1)}} \newcommand\nooneoneone{ Let $N$ be a hyperbolic $3$-manifold with $\partial N$ connected, totally geodesic, and of genus $2$, such that there is no $(1,1,1)$ hexagon in $\widetilde{N}$. Then $\cosh \ell_1 \geq 1.215$. } Also in Section \ref{sec:KM}, Proposition \ref{no(1,1,1)} shows that any manifold with no $(1,1,1)$ hexagon has a shortest return path satisfying $\cosh \ell_1 \geq 1.215$, thus has volume greater than $6.89$ by the above. In the remaining sections, we explore the topological consequences of the presence of a $(1,1,1)$ hexagon in $\widetilde{N}$, when $\cosh \ell_1 \leq 1.215$. In Section \ref{sec:111}, we show that under these circumstances, $N$ contains a submanifold $X$ which is a \textit{nondegenerate trimonic manifold relative to $\partial N$} (see Definition \ref{i'm now sir murgatroyd}). Section \ref{sec:boibs} develops results from the theory of books of $I$--bundles (which were introduced in \cite{ACS}) that we use in Section \ref{sec:trimonic}. There we introduce trimonic manifolds and prove, in Proposition \ref{it can't happen here}, that such a manifold $X$ does not have the structure of a book of $I$--bundles. It follows that $X$ has \textit{kishkes} (or \textit{guts}, cf. \cite{ASTD}) with negative Euler characteristic. Using volume bounds due to Agol-Storm-Thurston, we obtain the following result. \newtheorem*{hg4orvol7.32Thm}{Theorem \ref{hg4orvol7.32}} \newcommand\hgfourorvolseventhreetwo{ Let $N$ be a compact, orientable hyperbolic $3$-manifold with $\partial N$ a connected totally geodesic surface of genus 2. 
If $\cosh \ell_1 \leq 1.215$ and there is a $(1,1,1)$ hexagon in $\widetilde{N}$, then $\Hg(N) \leq 4$ or $\mathrm{vol}(N) > 7.32$. } \begin{hg4orvol7.32Thm} \hgfourorvolseventhreetwo \end{hg4orvol7.32Thm} Together with the results of Section \ref{sec:KM}, this implies Theorem \ref{vol6.89}. \section{Geometric preliminaries} \label{sec:prelim} Suppose $N$ is a hyperbolic $3$-manifold with totally geodesic boundary. Its universal cover $\widetilde{N}$ may be identified with a convex subset of $\mathbb{H}^3$ bounded by a collection of geodesic hyperplanes. The following terminology was introduced in \cite{Ko} and used extensively in \cite{KM}; we will use it here as well. \begin{definition} Let $N$ be a hyperbolic $3$-manifold with totally geodesic boundary, and let $\widetilde{N} \subset \mathbb{H}^3$ be its universal cover. A \textit{short cut} in $\widetilde{N}$ is a geodesic arc joining the closest points of two distinct components of $\partial \widetilde{N}$. A \textit{return path} in $N$ is the projection of a short cut under the universal covering map. \end{definition} It is an easy consequence of the definitions that each return path is a homotopically nontrivial geodesic arc properly immersed in $N$, perpendicular to $\partial N$ at each of its endpoints. Corollary 3.3 of \cite{Ko} asserts that for a fixed hyperbolic manifold $N$ with geodesic boundary and $K \in \mathbb{R}$, there are only finitely many return paths in $N$ with length less than $K$. Thus the collection of return paths may be enumerated as $\{\lambda_1,\lambda_2,\hdots\}$, where for each $i \in \mathbb{N}$, the length of $\lambda_i$ is less than or equal to the length of $\lambda_{i+1}$. Fixing such an arrangement, we will denote by $\ell_i$ the length of $\lambda_i$. It will prove important to understand the distance in $\partial N$, properly interpreted, between endpoints of return paths of $N$. 
\begin{definition} Let $N$ be a compact hyperbolic $3$-manifold with connected totally geodesic boundary, and suppose $\lambda$ is a short cut in $\widetilde{N}$ projecting to $\lambda_i$. Fix an endpoint $x$ of $\lambda$, and let $\Pi$ be the component of $\partial \widetilde{N}$ containing $x$. For $j \in \mathbb{N}$ define $d_{ij}$ to be the minimum, taken over all short cuts $\lambda'$ projecting to $\lambda_j$ such that $\lambda'$ has an endpoint $y \in \Pi$ and $\lambda' \neq \lambda$, of $d(x,y)$. \end{definition} The requirement above that $\lambda'$ be distinct from $\lambda$ ensures that $d_{ii} > 0$. In general, $d_{ij}$ is the length of the shortest geodesic arc in $\partial N$ joining an endpoint of $\lambda_i$ to an endpoint of $\lambda_j$. A crucial tool for understanding the relationships between the lengths $\ell_i$ and distances $d_{ij}$ is a class of totally geodesic hexagons in $\widetilde{N}$ which have short cuts as edges. The two lemmas below describe the relevant hexagons. \begin{lemma} \label{geodhex} Suppose that $\Pi_1$, $\Pi_2$ and $\Pi_3$ are mutually disjoint geodesic planes in $\HH^3$. For each two-element subset $\{i,j\}$ of $\{1,2,3\}$, let $\lambda_{ij}$ denote the common perpendicular to $\Pi_i$ and $\Pi_j$. Then $\lambda_{12}$, $\lambda_{13}$ and $\lambda_{23}$ lie in a common plane $\Pi$. \end{lemma} \Proof We may assume that the three lines $\lambda_{12}$, $\lambda_{13}$ and $\lambda_{23}$ do not all coincide, and so by symmetry we may assume that $\lambda_{12}\ne \lambda_{13}$. For $i=2,3$ the line $\lambda_{1i}$ meets $\Pi_1$ orthogonally at some point $p_i$. Let $L\subset\Pi_1$ denote the line joining $p_2$ and $p_3$, and let $\Sigma$ denote the plane which meets $\Pi_1$ perpendicularly along $L$. It is clear that $\Sigma$ contains $\lambda_{12}$ and $\lambda_{13}$. This implies that $\Sigma$ meets the planes $\Pi_2$ and $\Pi_3$ perpendicularly. For $i=2,3$, let $X_i$ denote the line $\Pi_i\cap\Sigma$.
Since $\Pi_2\cap\Pi_3=\emptyset$, the lines $X_2,X_3\subset\Sigma$ are disjoint. Hence the common perpendicular to $X_2$ and $X_3$ is a line $Y\subset\Sigma$. For $i=2,3$, the line $Y$ meets the line $X_i\subset\Pi_i$ perpendicularly, and $Y$ is contained in the plane $\Sigma$ which is perpendicular to $\Pi_i$; hence $Y$ is itself perpendicular to $\Pi_i$. It follows that $Y=\lambda_{23}$. Thus the plane $\Sigma$ contains $\lambda_{12}$, $\lambda_{13}$ and $\lambda_{23}$. \EndProof \begin{lemma} \label{rtanghex} Let $N$ be a compact hyperbolic $3$-manifold with totally geodesic boundary, and suppose $\Pi_1$, $\Pi_2$, and $\Pi_3$ are distinct components of $\partial \widetilde{N}$. Let $\Pi$ be the plane containing the short cuts $\lambda_{12}$, $\lambda_{23}$ and $\lambda_{13}$, which exists by Lemma \ref{geodhex}. Let $C$ be the right--angled hexagon in $\Pi$ with edges $\lambda_{ij}$ and the geodesic arcs in the $\Pi_i$ joining their endpoints. Then $C \subset \widetilde{N}$, and $C \cap \partial \widetilde{N} = \cup_i (C \cap \Pi_i)$. \end{lemma} \begin{proof} $\Pi \cap \widetilde{N}$ is a convex subset of $\Pi$ bounded by the family of disjoint geodesics $\Pi \cap \partial \widetilde{N}$, which includes $\Pi \cap \Pi_i$, $i \in \{1,2,3\}$. If $\gamma$ is another component of $\Pi \cap \partial \widetilde{N}$, then by definition $\Pi \cap \Pi_1$, $\Pi \cap \Pi_2$, and $\Pi \cap \Pi_3$ are all contained in the component of $\Pi - \gamma$ intersecting $\widetilde{N}$. Thus there is a single component of $\partial_{\infty} \Pi - \left( \cup_{i =1}^3 \partial_{\infty} \Pi_i \right)$ containing both endpoints of $\gamma$. Since $\gamma$ is a component of $\Pi \cap \partial \widetilde{N}$, its endpoints are between those of two different geodesics $\Pi \cap \Pi_i$, say $\Pi \cap \Pi_1$ and $\Pi \cap \Pi_2$. 
Then since the geodesic containing $\lambda_{12}$ intersects $\Pi_1$ and $\Pi_2$ perpendicularly, it is disjoint from $\gamma$ and contained in the component of $\Pi - \gamma$ intersecting $\widetilde{N}$. The remainder of $C$ is on the other side of $\lambda_{12}$ from $\gamma$. Since $\gamma$ was arbitrary, the lemma follows. \end{proof} \begin{definition} Let $N$ be a compact hyperbolic $3$-manifold with totally geodesic boundary, and let $C$ be a right-angled hexagon supplied by Lemma \ref{rtanghex}. We call the edges of $C$ which are short cuts \textit{internal}, and the remaining edges \textit{external}. If the internal edges project to $\lambda_i$, $\lambda_j$, and $\lambda_k$, we call $C$ an \textit{$(i,j,k)$ hexagon}. \end{definition} This terminology matches that defined in \cite{GMM} in the context of horospheres and cusped hyperbolic manifolds. As we will see below, $(i,j,k)$ hexagons were used extensively in the analysis of \cite{KM}, although not by name. The isometry class of a right-angled hexagon is determined by the lengths of three of its pairwise nonadjacent sides. If $\{\ell, \ell', \ell''\}$ is a collection of such lengths, and $d$ is the length of a side abutting those with lengths $\ell$ and $\ell'$, the \textit{right-angled hexagon rule} (cf. eg. \cite[Theorem 3.5.13]{Ratcliffe}) describes $d$ in terms of the other lengths. \begin{align} \cosh d = \frac{\cosh \ell \cosh \ell' + \cosh \ell''}{\sinh \ell \sinh \ell'} \label{hexrule} \end{align} A prototypical application of the right--angled hexagon rule is the following initial lemma, proved in \cite{KM} during the proof of Lemma 3.2. \begin{lemma}[Kojima-Miyamoto] \label{d11vsl1} Suppose $N$ is a compact hyperbolic $3$-manifold with connected totally geodesic boundary, and let $R$ be the function of $\ell_1$ defined by the following formula. \begin{align} \cosh R = \sqrt{1+ \frac{1}{2\cosh \ell_1 -2}} \label{R} \end{align} Then $d_{11} \geq 2R$. 
\end{lemma} \begin{proof} Let $\lambda$ and $\lambda'$ be short cuts in $\widetilde{N}$ with length $\ell_1$ whose feet are at distance $d_{11}$ on some boundary component. The short cut $\lambda''$ joining the boundary components containing the other feet of $\lambda$ and $\lambda'$ has length $\ell_k$ for some $k \geq 1$. Applying the right--angled hexagon rule to the $(1,1,k)$ hexagon containing $\lambda, \lambda'$, and $\lambda''$ yields the following inequality. \begin{align} \cosh d_{11} = \frac{\cosh^2 \ell_1 + \cosh \ell_k}{\sinh^2 \ell_1} \geq \frac{\cosh^2 \ell_1 + \cosh \ell_1}{\sinh^2 \ell_1} = 1 + \frac{1}{\cosh \ell_1 - 1} \label{KMd11} \end{align} The lemma now follows upon applying the ``half--angle formula'' for hyperbolic cosine, $\cosh R = \sqrt{(\cosh (2R) + 1)/2}$. \end{proof} Note that $R$ is decreasing as a function of $\ell_1$. Hence using Lemma \ref{d11vsl1}, an upper bound on $d_{11}$ implies a lower bound on $\ell_1$. An upper bound for $d_{11}$ is obtained from area considerations. \begin{lemma}[\cite{KM}, Corollary 3.5] \label{borbounds} Let $N$ be a compact hyperbolic $3$-manifold with $\partial N$ connected, totally geodesic, and of genus $2$. Then the following bounds hold for $d_{11}$ and $\ell_1$. \begin{align*} & \cosh d_{11} \leq 3+2\sqrt{3} & & \cosh \ell_1 \geq \frac{3+\sqrt{3}}{4} \end{align*} \end{lemma} \begin{proof} By Lemma \ref{d11vsl1}, there is a disk of radius $R$ embedded on $\partial N$ around each endpoint of $\lambda_1$, and these two disks do not overlap. They lift to a radius $R$ disk packing on a component $\Pi$ of $\partial \widetilde{N}$, invariant under the action of $\pi_1 \partial N$. Bor\"oczky's Theorem \cite{Bor} gives an upper bound $d(R)$ on the local density of a radius $R$ disk packing of $\mathbb{H}^2$.
Since the packing in question is invariant under the action by covering transformations for the compact surface $\partial N$, $d(R)$ bounds the global density of the packing there, yielding the following inequality. $$ \frac{4\pi(\cosh R -1)}{4\pi} \leq d(R) $$ The numerator on the left hand side of the inequality above is twice the area of a hyperbolic disk of radius $R$, and the denominator is the area of $\partial N$. (This follows from the Gauss-Bonnet theorem and the fact that $\partial N$ has genus 2.) Let $\alpha$ be the angle at a vertex of a hyperbolic equilateral triangle $T(R)$ with side length $2R$. $T(R)$ has area $\pi - 3\alpha$, and the intersection with $T(R)$ of disks of radius $R$ centered at its vertices occupies a total area of $3\left( \frac{\alpha}{2\pi} \right) 2\pi(\cosh R - 1)$. Bor\"oczky's bound $d(R)$ is defined as the ratio of these areas; thus after simplifying the inequality above we obtain the one below. $$ \cosh R -1 \leq \frac{3\alpha(\cosh R -1)}{\pi - 3\alpha} $$ Solving for $\alpha$ yields $\alpha \geq \pi/6$. The hyperbolic law of cosines describes the relationship between $\alpha$ and the side length of $T(R)$: $$ \cos \alpha = \frac{\cosh^2 (2R) - \cosh (2R)}{\sinh^2 (2R)} = \frac{\cosh (2R)}{\cosh (2R) +1} = \frac{2\cosh^2 R - 1}{2\cosh^2 R} $$ Using the fact that $\cos \alpha \leq \sqrt{3}/2$ and solving for $\cosh R$ yields $\cosh R \leq (1+\sqrt{3})/\sqrt{2}$. The inequality for $\cosh d_{11}$ follows using the ``hyperbolic double angle formula'', and the inequality for $\cosh \ell_1$ follows upon solving the formula of Lemma \ref{d11vsl1}. \end{proof} The following lemma combines Lemmas 4.2 and 4.3 of \cite{KM} with the discussion below them. \begin{lemma}[Kojima-Miyamoto] \label{l2vsl1} Let $N$ be a compact hyperbolic $3$-manifold with $\partial N$ totally geodesic, connected, and of genus $2$. Define quantities $R'$, $E$, and $F$, depending on $\ell_1$, by the following equations.
\begin{align} & \cosh R' = 3 - \cosh R \label{R'} \\ & \cosh E = \frac{2}{\cosh^2 (R+R') \cdot \tanh^2 \ell_1 - 1} + 1 \label{E} \\ & \cosh F = \sqrt{ \frac{\cosh \ell_1 + 1}{\cosh 2R' -1} +1} \label{F} \end{align} Then $\ell_2 \geq \max\{\ell_1,\min \{E,F\}\}$. \end{lemma} \begin{proof} The right--angled hexagon rule can be used to obtain lower bounds on $\ell_2$ depending on values for $\ell_1$ and $d_{12}$ or $d_{22}$, respectively. \begin{align} & \cosh \ell_2 \geq \frac{2}{\cosh^2 d_{12} \tanh^2 \ell_1 - 1} + 1 \label{d12} \\ & \cosh \ell_2 \geq \sqrt{ \frac{\cosh \ell_1+1}{\cosh d_{22} - 1} + 1} \label{d22} \end{align} This is recorded in Lemma 4.2 of \cite{KM}. Therefore an upper bound for $d_{12}$ or $d_{22}$ gives a lower bound for $\ell_2$ in terms of $\ell_1$. Recall from above that $d_{11}$, the shortest distance between feet of two different shortest short cuts on (any) component of $\partial \widetilde{N}$, is bounded below by $2R(\ell_1)$. Hence disks $U$ and $U'$ of radius $R$ in $\partial N$, each centered at a foot of the shortest return path, are embedded and nonoverlapping. Area considerations imply that $R'$ is an upper bound for the radii of two equal--size nonoverlapping disks in $\partial N - \mathrm{int}(U \cup U')$ \cite[Lemma 4.3]{KM}. It follows that at least one of $d_{12} \leq R + R'$ or $d_{22} \leq 2R'$ holds, since otherwise there would be disks of radius $R'$ embedded around the feet of the second--shortest return path without overlapping $U$ and $U'$. The inequalities (\ref{d12}) and (\ref{d22}) thus imply that $\ell_2 \geq \min \{E,F\}$. Note that by definition $\ell_2 \geq \ell_1$, which gives the lemma.\end{proof} The following lemma contains a new observation improving on the bound of Lemma \ref{l2vsl1} for values of $\ell_1$ with hyperbolic cosine near $1.4$. \begin{lemma} \label{KLM} Let $N$ be a compact hyperbolic $3$-manifold with $\partial N$ totally geodesic, connected, and of genus $2$.
Let $R''$ be determined by the following equation. \begin{align} & \cosh R'' = \frac{1}{\sqrt{2(1-\cos (2\pi/9))}} = 1.4619\ldots \label{R''} \end{align} Define quantities $L$ and $M$ depending on $\ell_1$ by \begin{align} & \cosh L = \frac{2}{\cosh^2(2R'')\tanh^2 \ell_1 - 1} + 1 \label{L} \\ & \cosh M = \sqrt{ \frac{\cosh \ell_1+1}{\cosh(2R'') - 1} + 1} \label{M} \end{align} For any value of $\ell_1$ with \begin{align} & \cosh \ell_1 \leq \frac{\cos (2\pi/9)}{2\cos (2\pi/9) - 1} = 1.439\ldots, \label{ell1forR'} \end{align} $\ell_2$ is bounded below by $\max \{ \ell_1,\min\{E,F\},\min\{L,M\}\}$. \end{lemma} \begin{proof} Applying Bor\"oczky's theorem as in Lemma \ref{borbounds}, but this time to four disks of equal radius packed on $\partial N$, we find that the radius is bounded above by the quantity $R''$ specified by the formula above. For $\ell_1$ satisfying the bound of (\ref{ell1forR'}), the inequality (\ref{KMd11}) implies that $d_{11}$ is at least $2R''$. If each of $d_{12}$ and $d_{22}$ were also larger than $2R''$, then disks of radius $R''$ around the feet of both the shortest and second--shortest return paths would be embedded and nonoverlapping, a contradiction. Thus $\min\{d_{12},d_{22}\} \leq 2R''$. Plugging into inequalities (\ref{d12}) and (\ref{d22}) gives the result. \end{proof} The quantities $L$ and $M$ defined above offer a better lower bound on $\ell_2$ than $E$ and $F$, for values of $\ell_1$ with $1.367 \leq \cosh \ell_1 \leq 1.439$. This is because $E$ and $F$ use the quantity $R'$ of equation (\ref{R'}), and by definition twice the area of a hyperbolic disk of radius $R'$ plus twice the area of a hyperbolic disk of radius $R$ is $4\pi$. However it is impossible to entirely cover a compact surface of genus 2 with four nonoverlapping embedded disks. Bor\"oczky's theorem bounds the proportion of the area which can be covered by four disks of equal radius, and this supplies $R''$. The quantity $R''$ is less than $R'$ precisely when $R$ satisfies the inequality below.
$$ \cosh R \leq 3 - \frac{1}{\sqrt{2(1-\cos (2\pi/9))}} $$ Solving equation (\ref{R}) for $\cosh \ell_1$, we find that this occurs when $\cosh \ell_1 \geq 1.366\ldots$. It is at this point that the upper bound $2R''$ for $d_{22}$ becomes better than the upper bound $2R'$. The upper bound $2R''$ on $d_{12}$ becomes better than the bound $R + R'$ somewhat earlier, at least for values of $\ell_1$ with $\cosh \ell_1 \geq 1.25$. \section{Volume with a long return path} \label{sec:KM} By definition, in a hyperbolic manifold $N$ with totally geodesic boundary, $\ell_1/2$ is the height of a maximal embedded collar of $\partial N$. The volume of such a collar bounds the volume of $N$ below, but leaves much of $N$ unaccounted for. The volume bounds of \cite{KM} are obtained by taking a larger collar of $\partial N$ and using separate means to understand the region where it overlaps itself. \begin{definition} The \textit{muffin of height $\ell$}, here denoted $\mathit{Muf}_{\ell}$, is the hyperbolic solid obtained by rotating a hyperbolic pentagon with base of length $\ell$, opposite angle $2\pi/3$, and all other angles $\pi/2$, about its base (see \cite[Figure 3.1]{KM}). \end{definition} It is a standard fact of hyperbolic trigonometry (see \cite[Theorem 3.5.14]{Ratcliffe}) that positive real numbers $a$, $b$, and $c$ determine a right-angled hexagon in $\mathbb{H}^2$, unique up to isometry, with alternate sides of lengths $a$, $b$, and $c$. For $\ell > 0$, the hexagon specified by $a=b=c=\ell$ has an orientation preserving symmetry group of order three which cyclically permutes the sides of length $\ell$. The hyperbolic pentagon mentioned in the definition above is a fundamental domain for this symmetry group. For a compact hyperbolic $3$-manifold $N$, a copy of $\mathit{Muf}_{\ell_1}$ sits in $\widetilde{N}$ around each short cut of length $\ell_1$, and Lemma 3.2 of \cite{KM} asserts that this copy embeds in $N$ under the universal covering projection.
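To see concretely how the quantity $R$ of Lemma \ref{d11vsl1} enters this construction (a quick check, not needed in the sequel), apply the right--angled hexagon rule (\ref{hexrule}) to the hexagon with $\ell=\ell'=\ell''=\ell_1$: each of the three remaining sides has length $d$ with
\begin{align*}
\cosh d = \frac{\cosh^2 \ell_1 + \cosh \ell_1}{\sinh^2 \ell_1} = 1 + \frac{1}{\cosh \ell_1 - 1} = 2\cosh^2 R - 1 = \cosh (2R),
\end{align*}
where $R$ is given by (\ref{R}). The order--three symmetry cuts each such side into two arcs of length $R$; these arcs are the sides of the pentagon adjacent to its base.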
Let $A$ be the length of a side joining a vertex with angle $\pi/2$ to the vertex with angle $2\pi/3$ of the pentagon rotated to construct the muffin. In terms of $\ell_1$, $A$ is given by the formula below. \begin{align} \cosh A = \sqrt{\frac{2}{3}(\cosh \ell_1 + 1)} \label{A} \end{align} A collar of $\partial N$ with height less than both $A$ and $\ell_2/2$ has its region of self--overlap entirely contained in $\mathit{Muf}_{\ell_1}$. This yields the fundamental volume inequality of \cite{KM}, stated there in the proof of Proposition 4.1, which we formulate in the following lemma. \begin{lemma}[Kojima-Miyamoto] Let $N$ be a compact hyperbolic $3$-manifold with $\partial N$ connected, totally geodesic, and of genus $2$, and let $H = \min\{A,\ell_2/2\}$. With $\mathit{Muf}_{\ell_1}$ as defined above and $R$ as in Lemma \ref{d11vsl1}, we have the following bound. \begin{align} \label{KMvol} \mathrm{vol}(N) \geq \mathrm{vol}(\mathit{Muf}_{\ell_1}) + \pi(2 - \cosh R)(2H + \sinh(2H)) \end{align} \end{lemma} \begin{proof} The intersection of $\mathit{Muf}_{\ell_1}$ with $\partial N$ is the union of disks $U$ and $U'$ of radius $R$, the quantity defined in Lemma \ref{d11vsl1}. This is because the pentagon rotated to construct the muffin is a fundamental domain for the orientation--preserving symmetry group of a $(1,1,1)$ hexagon, thus its sides adjacent to the base each have length $R$ (again see \cite[Figure 3.1]{KM}). It follows from Lemma \ref{d11vsl1} that $U$ and $U'$ are embedded in $\partial N$ without overlapping. The area of $\partial N - (U \cup U')$ is $4\pi - 4\pi(\cosh R - 1)$. A collar of $\partial N - (U \cup U')$ of height $H$ is embedded in $N$ without overlapping $\mathit{Muf}_{\ell_1}$, and the bound of the lemma is obtained by adding their volumes. This uses the following well-known formula for the volume $V$ of a collar of height $H$ in $\mathbb{H}^3$ of a set in a plane with area $a$: $V = a \cdot (2H + \sinh 2H)/4$.
\end{proof} A formula for the volume of $\mathit{Muf}_{\ell_1}$ is recorded in \cite[Lemma 3.3]{KM}. The output of inequality (\ref{KMvol}) is recorded as a function of $\ell_1$ in \cite[Graph 4.1]{KM}. Lower bounds for $H$ are determined at various points on the graph by $A$, the quantities $E$ and $F$ of Lemma \ref{l2vsl1}, and $\ell_1$ itself. The point on Graph 4.1 of \cite{KM} above $\cosh \ell_1 = 1.215$ is just to the left of the intersection of the curves labeled $H=A$ and $H = F/2$. A computation gives a volume bound of $7.007\hdots$ here. To the right of $\cosh \ell_1 = 1.215$, inspection of Graph 4.1 reveals a single local minimum at about $\cosh \ell_1 = 1.4$. Numerical experimentation indicates that the volume bound at this minimum is just larger than $6.89$. In this section, our main task is to prove this rigorously. \begin{l1cosh1.215Prop} \ellonecoshonetwo \end{l1cosh1.215Prop} \begin{remark} \label{vol6.94} Although Proposition \ref{l1cosh1.215} probably follows solely from results of \cite{KM}, our proof uses Lemma \ref{KLM}. In fact, numerical experimentation indicates that the volume bound above could be improved to about $6.94$ with this result. Since it is easier and suffices for applications, we prove only the bound of $6.89$ here. \end{remark} The lemma below proves useful for several results in this section. \begin{lemma} For $\frac{3+\sqrt{3}}{4} \leq \cosh \ell_1 \leq 1.4$, the quantity $E$ of (\ref{E}) is decreasing in $\ell_1$. \label{monotone} \end{lemma} \begin{proof} The formula (\ref{R'}) defines $R'$ in terms of $R$ by $$R' = \cosh^{-1}(3- \cosh R) = \log \left[(3-\cosh R) + \sqrt{(3-\cosh R)^2-1}\right]$$ Taking a derivative with respect to $R$, one finds that $R+R'$ increases with $R$ for $1 < \cosh R < 3/2$, reaching a maximum at $\cosh R = 3/2$, and decreases when $\cosh R > 3/2$. 
This implies that $\cosh (R+R')$ is an increasing function of $\ell_1$ on the interval $\frac{3+\sqrt{3}}{4} \leq \cosh \ell_1 \leq 1.4$, since values of $\ell_1$ in this interval give values of $\cosh R$ between $\frac{3}{2}$ and $\frac{1+\sqrt{3}}{\sqrt{2}}$ (so that $R+R'$ is decreasing in $R$ there), and $R$ is a decreasing function of $\ell_1$. Since the hyperbolic tangent is an increasing function, the lemma follows from the definition of $E$ in Lemma \ref{l2vsl1}. \end{proof} In the proof of Proposition \ref{l1cosh1.215}, we divide the interval $1.215 \leq \cosh \ell_1 < \infty$ into subintervals: $$[1.215, \infty) = [1.215,1.367] \cup [1.367,1.439] \cup [1.439, \infty)$$ We address each subinterval separately. The first is below. \begin{lemma} \label{1.215vol1.367} A compact hyperbolic $3$-manifold $N$ with geodesic boundary satisfying $1.215 \leq \cosh \ell_1 \leq 1.367$ has volume greater than $6.89$. \end{lemma} \begin{table} \begin{center} \begin{tabular}{lllll} $\cosh \ell_1$ & muffin volume & $\mathrm{area}(\partial N - (U \cup U'))$ & $H$ & volume \\ \hline $[1.215,1.220]$ & $5.304$ & $2.216$ & $.629$ ($E$) & $6.899$ \\ $[1.220,1.226]$ & $5.236$ & $2.399$ & $.611$ ($E$) & $6.899$ \\ $[1.226,1.233]$ & $5.159$ & $2.609$ & $.592$ ($E$) & $6.900$ \\ $[1.233,1.241]$ & $5.076$ & $2.844$ & $.574$ ($E$) & $6.901$ \\ $[1.241,1.250]$ & $4.988$ & $3.097$ & $.556$ ($E$) & $6.901$ \\ $[1.250,1.260]$ & $4.895$ & $3.367$ & $.539$ ($F$) & $6.898$ \\ $[1.260,1.270]$ & $4.808$ & $3.648$ & $.524$ ($F$) & $6.908$ \\ $[1.270,1.281]$ & $4.717$ & $3.911$ & $.510$ ($F$) & $6.898$ \\ $[1.281,1.292]$ & $4.632$ & $4.182$ & $.498$ ($F$) & $6.900$ \\ $[1.292,1.303]$ & $4.551$ & $4.436$ & $.488$ ($F$) & $6.898$ \\ $[1.303,1.314]$ & $4.475$ & $4.675$ & $.479$ ($F$) & $6.894$ \\ $[1.314,1.324]$ & $4.409$ & $4.899$ & $.471$ ($F$) & $6.899$ \\ $[1.324,1.334]$ & $4.346$ & $5.092$ & $.464$ ($F$) & $6.891$ \\ $[1.334,1.343]$ & $4.292$ & $5.275$ & $.459$ ($F$) & $6.893$ \\ $[1.343,1.351]$ & $4.245$ & $5.432$ & $.454$ ($F$) & $6.894$ \\
$[1.351,1.358]$ & $4.206$ & $5.565$ & $.451$ ($F$) & $6.895$ \\ $[1.358,1.364]$ & $4.173$ & $5.678$ & $.448$ ($F$) & $6.896$ \\ $[1.364,1.367]$ & $4.157$ & $5.772$ & $.447$ ($F$) & $6.917$ \end{tabular} \end{center} \caption{Volume bounds from (\ref{KMvol}) for $1.215 \leq \cosh \ell_1 \leq 1.367$, with $H$ bounded using $E$ and $F$.} \label{table:volEF} \end{table} \begin{proof} The strategy of proof is to break the interval in question into subintervals, bound the constituent quantities in the formula (\ref{KMvol}) on each subinterval, and from this obtain a coarse lower bound for the right hand side of the inequality. Table \ref{table:volEF} records this computation. We explain its entries below. The leftmost column specifies the subinterval of values of $\cosh \ell_1$. The second column records a lower bound on this interval for the volume of the muffin. This is attained at the right endpoint, according to \cite[Lemma 3.3]{KM}. The third column records a lower bound for the area on $\partial N$ of the complement of the base of the muffin. This is attained at the left endpoint of the subinterval, since $R$ is decreasing in $\ell_1$. The fourth column records a lower bound for $H$. This is obtained by computing minima for each of $A$, $E/2$, and $F/2$ on the subinterval and taking the minimum (in each case this lower bound is greater than $\ell_1/2$). From its definition (\ref{A}) one easily finds that $A$ is increasing in $\ell_1$, hence a minimum for $A$ is obtained at the left endpoint of each subinterval. By Lemma \ref{monotone} above, $E$ is decreasing in $\ell_1$ on each subinterval in question, so its minimum is obtained at the right endpoint. The monotonicity of $F$ is not apparent from its definition, so to find a minimum on each subinterval, we plug the value of $\cosh \ell_1$ at the left endpoint and the value of $\cosh 2R'$ at the right endpoint into the formula (\ref{F}). In addition to the resulting minimum, we record which of $A$, $E$, and $F$ supplies it in the fourth column of Table \ref{table:volEF}.
The final column assembles these bounds to give a volume bound. In each column after the first, the decimal approximation has been truncated after three places. \end{proof} Using the bounds of Lemma \ref{KLM} for $\ell_2$ in the inequality (\ref{KMvol}) yields the lemma below. \begin{lemma} \label{1.367vol1.439} A compact hyperbolic $3$-manifold with geodesic boundary satisfying $1.367 \leq \cosh \ell_1 \leq 1.439$ has volume greater than $6.89$. \end{lemma} \begin{proof} We assemble a table as in the proof of Lemma \ref{1.215vol1.367}, using $L$ and $M$ to bound $H$ below, instead of $E$ and $F$. Since $L$ is decreasing in $\ell_1$, its minimum occurs at the right endpoint of each subinterval, whereas the minimum of $M$ occurs at the left endpoint. The results of the computation are recorded in Table \ref{table:volLM}. \end{proof} \begin{table}[ht] \begin{center} \begin{tabular}{lllll} $\cosh \ell_1$ & muffin volume & $\mathrm{area}(\partial N - (U \cup U'))$ & $H$ & volume \\ \hline $[1.367,1.377]$ & $4.105$ & $5.818$ & $.447$ ($M$) & $6.892$ \\ $[1.377,1.392]$ & $4.031$ & $5.966$ & $.448$ ($M$) & $6.894$ \\ $[1.392,1.416]$ & $3.920$ & $6.176$ & $.449$ ($M$) & $6.893$ \\ $[1.416,1.439]$ & $3.823$ & $6.485$ & $.451$ ($M$) & $6.959$ \end{tabular} \end{center} \caption{Volume bounds from (\ref{KMvol}) for $1.367 \leq \cosh \ell_1 \leq 1.439$, with $H$ bounded using $L$ and $M$.} \label{table:volLM} \end{table} We now prove Proposition \ref{l1cosh1.215}. Below we recall its statement. \begin{proposition}\label{l1cosh1.215} \ellonecoshonetwo \end{proposition} \begin{proof} Given Lemmas \ref{1.215vol1.367} and \ref{1.367vol1.439}, we need only concern ourselves with hyperbolic $3$-manifolds $N$ with totally geodesic boundary satisfying $1.439 \leq \cosh \ell_1$. The union of the muffin and a collar of height $H = \ell_1/2$ has volume given by the following formula.
$$ V(\ell_1) = \pi \left[ \ell_1 + 2\sinh \ell_1 + \sqrt{\frac{2\cosh \ell_1 -1}{2\cosh \ell_1 -2}}\left( \cosh^{-1}\left(\frac{4\cosh \ell_1 +1}{3} \right) - \ell_1 - \sinh \ell_1 \right) \right] $$ When $\cosh \ell_1 = 1.439$, this yields a volume of $7.1...$ We claim that $V(\ell_1)$ is increasing with $\ell_1$ (as is suggested by Graph 4.1 in \cite{KM}). Once established, this will complete the proof. It is clear that $\displaystyle{\sqrt{\frac{2\cosh \ell_1 - 1}{2\cosh \ell_1 - 2}}=\cosh R}$ (recall equation (\ref{R})) is decreasing in $\ell_1$, asymptotically approaching $1$ from above. The derivative of $\cosh^{-1}((4\cosh \ell_1+1)/3)$ is $1/\cosh R$, bounded above by one. Hence the quantity in parentheses above, $$ Q = \cosh^{-1}\left( \frac{4\cosh \ell_1+1}{3} \right) - \ell_1 - \sinh \ell_1, $$ is decreasing in $\ell_1$. Its value is negative when $\cosh \ell_1 = 1.439$, and so also on the entire interval in question. Taking the derivative of $V(\ell_1)$ yields the following. $$ V'(\ell_1) = \pi \left[ 1 + 2\cosh \ell_1 + \cosh R\left(\frac{1}{\cosh R} - 1 - \cosh \ell_1\right) + \frac{d}{d\ell_1}(\cosh R) \cdot Q \right] $$ Since $\frac{d}{d\ell_1}(\cosh R)$ and $Q$ are both negative, the above is the sum of a positive number with the product of $1 + \cosh \ell_1$ and $2 - \cosh R$. When $\cosh \ell_1 = 1.439$, $\cosh R = 1.46... < 2$; since $\cosh R$ is decreasing as a function of $\ell_1$, the derivative of the volume formula is positive for $\cosh \ell_1 \geq 1.439$. \end{proof} \begin{lemma} Let $N$ be a compact hyperbolic $3$-manifold with $\partial N$ connected, totally geodesic, and of genus $2$. If $\cosh \ell_1 \leq 1.215$, then $\ell_1$ and $A$ are each less than $\ell_2/2$. \label{l2twicel1} \end{lemma} \begin{proof} Lemma \ref{l2vsl1} implies that $\ell_2 \geq \min \{E,F\}$. The values of $\ell_1$ in question here have hyperbolic cosine between $\frac{3+\sqrt{3}}{4} \cong 1.186$ and $1.215$. 
By Lemma \ref{monotone}, $E$ is monotone decreasing in $\ell_1$ on this interval; thus a lower bound for $\cosh E$, obtained by plugging in $1.215$, is $1.961\ldots$. The quantity $F$ is not necessarily monotone in $\ell_1$, since both $\cosh \ell_1$ and $\cosh 2R'$ are increasing in $\ell_1$. However, a coarse lower bound may be found by substituting $\cosh^{-1}(\frac{3+\sqrt{3}}{4})$ for $\ell_1$ in $\cosh \ell_1$ and $\cosh^{-1}(1.215)$ for $\ell_1$ in $\cosh 2R'$. The lower bound for $\cosh F$ obtained in this way is $1.960\ldots$. When $\cosh \ell_1 \leq 1.215$, $\ell_1 \leq .645$. Taking the inverse hyperbolic cosine of $1.960$, we find $\ell_2 \geq 1.293$. The quantity $A$ is clearly increasing in $\ell_1$. When $\cosh \ell_1 = 1.215$, the value obtained is $.644\ldots$ This establishes the lemma. \end{proof} \begin{proposition}\label{no(1,1,1)} \nooneoneone \end{proposition} \begin{proof} Suppose $N$ is a hyperbolic $3$-manifold with totally geodesic boundary and no $(1,1,1)$ hexagons. According to Lemma \ref{l2twicel1}, if $\cosh \ell_1$ is at most $1.215$, then $\ell_2$ is more than twice $\ell_1$. Since $N$ has no $(1,1,1)$ hexagons, in the right--angled hexagon in $\widetilde{N}$ realizing $d_{11}$, the opposite side has length at least $\ell_2$; thus at least twice $\ell_1$. Using the right--angled hexagon rule as in inequality (\ref{KMd11}), we obtain the following inequality. \begin{align*} \cosh d_{11} & \geq \frac{\cosh^2 \ell_1 + \cosh (2\ell_1)}{\sinh^{2} \ell_1} = \frac{3\cosh^2 \ell_1 - 1}{\cosh^2 \ell_1 -1} = 3 + \frac{2}{\sinh^2 \ell_1} \end{align*} We recall from Lemma \ref{borbounds} that $\cosh d_{11} \leq 3 + 2\sqrt{3}$. Putting this together with the inequality above implies $$ \sinh^2 \ell_1 \geq \frac{1}{\sqrt{3}}. $$ This gives $\cosh \ell_1 > 1.255$, contradicting the assumption that $\cosh \ell_1 \leq 1.215$. \end{proof} \section{Normal books of $I$-bundles} \label{sec:boibs} \Number\label{ridin'} The results of this section and the next are topological.
We work in the PL category in these sections, and follow the conventions of \cite{CDS}. In particular, we say that a subset $Y$ of a space $X$ is {\it $\pi_1$-injective} if for all path components $A$ of $Y$ and $B$ of $X$ such that $A\subset B$, the inclusion homomorphism $\pi_1(A)\to\pi_1(B)$ is injective. We denote the Euler characteristic of a finite polyhedron $X$ by $\chi(X)$, and we write $\chibar(X)=-\chi(X)$. \Proposition\label{easy come}Let $X$ be a compact, orientable $3$-manifold, and let $S$ and $T$ be components of $\partial X$. Suppose that for some field $F$ the inclusion homomorphism $H_1(T;F)\to H_1(X;F)$ is surjective. Then $S$ is $\pi_1$-injective in $X$. \EndProposition \Proof Assume that $S$ is not $\pi_1$-injective. Then there is a properly embedded disk $D\subset X$ such that $\partial D$ is a non-trivial simple closed curve in $S$. Let $N$ be a regular neighborhood of $D$ in $X$, let $A$ denote the component of $\overline{X-N}$ containing $T$, let $\star$ be a base point in $T$, and let $G\le\pi_1(X,\star)$ denote the image of $\pi_1(A,\star)$ under the inclusion homomorphism. Then $\pi_1(X,\star)$ is the free product of $G$ with another subgroup $K$, where $K\cong\ZZ$ if $D$ does not separate $X$, and $K\cong\pi_1(\overline{X-A})$ if $D$ does separate $X$. In the latter case, $\partial D$ is a separating, non-trivial simple closed curve in $S$, and hence $\overline{X-A}$ has a boundary component of strictly positive genus. Thus in either case we have $H_1(K;F)\ne0$. Hence the inclusion homomorphism $H_1(A;F)\to H_1(X;F)$ is not surjective. Since $T\subset A$, this contradicts the hypothesis. \EndProof \Lemma\label{how many people cried} Let $\tau:F\to F$ be a free involution of a compact orientable surface $F$, and let $C\subset F$ be a homotopically non-trivial simple closed curve. Suppose that $\tau(C)$ is isotopic to a curve which is disjoint from $C$.
Then $C$ is isotopic to a curve $C_1$ such that either (i) $\tau(C_1)\cap C_1=\emptyset$ or (ii) $\tau(C_1)= C_1$. Furthermore, if $\tau$ reverses orientation we may always choose $C_1$ so that (i) holds. \EndLemma \Proof We fix a metric with convex boundary on $F/\langle\tau\rangle$. Then $F$ inherits a metric such that $\tau:F \to F$ is an isometry. Since $C$ is a homotopically non-trivial simple closed curve in $F$, it is homotopic to a curve $C_1$ with shortest length in its homotopy class, which is a simple closed geodesic by \cite[Theorem 2.1]{lilfhs}. Let $C'$ be homotopic to $\tau(C)$ and disjoint from $C$. Then since the shortest closed geodesics $C_1$ and $\tau(C_1)$ are respectively homotopic to the disjoint curves $C$ and $C'$, it follows from \cite[Corollary 3.4]{lilfhs} that they either are disjoint or coincide. (The results we have quoted from \cite{lilfhs} are stated there for the case of a closed surface, but it is pointed out in the first paragraph of \cite[\S4]{lilfhs} that they hold in the case of a compact surface with convex boundary.) It follows from \cite[Theorem 2.1]{epstein} that $C$ is isotopic to $C_1$. This proves the first assertion. To prove the second assertion, suppose that $\tau$ reverses orientation and that $C$ is isotopic to a curve $C_1$ such that (ii) holds. Let $A$ be an invariant annular neighborhood of $C_1$. Since $\tau$ is a free involution it must preserve an orientation of the invariant curve $C_1$; since it reverses an orientation of $F$, it must therefore interchange the components of $\partial A$. Hence (i) holds if $C_1$ is replaced by one of the components of $\partial A$. \EndProof If a $3$-manifold $X$ has the structure of an $I$-bundle over a surface $T$ and $p: X \rightarrow T$ is the bundle projection, we will call $\partial_v X \doteq p^{-1}(\partial T)$ the \textit{vertical} boundary of $X$ and $\partial_h X \doteq \overline{\partial X - \partial_v X}$ the \textit{horizontal} boundary of $X$.
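For example, if $X = T\times I$ is a product $I$-bundle and $p$ is the projection to the first factor, then $\partial_v X = \partial T\times I$ and $\partial_h X = T\times\partial I = T\times\{0,1\}$.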
Note that $\partial_v X$ inherits the structure of an $I$-bundle over $\partial T$, and $\partial_h X$ the structure of a $\partial I$-bundle over $T$, from the original $I$-bundle structure on $X$. We call an annulus $A \subset X$ \textit{vertical} if $A = p^{-1}(p(A))$. Let $M$ be an orientable, irreducible $3$-manifold, and let $F$ be a $\pi_1$-injective $2$-dimensional submanifold of $\partial M$. By an {\it essential annulus or torus in $M$ relative to $F$} we mean a properly embedded annulus or torus in $M$ which is $\pi_1$-injective, has its boundary contained in $F$, and is not parallel to a subsurface of $F$. In the case where $M$ is boundary-irreducible, we shall define an {\it essential annulus or torus in $M$} to be an essential annulus or torus in $M$ relative to $\partial M$. \EndNumber \Proposition\label{one bundle at a time} Let $X$ be an $I$-bundle over a compact surface, and suppose that $A$ is an essential annulus in $X$ relative to $\partial_hX$. Then $A$ is isotopic, by an ambient isotopy of $X$ which is constant on $\partial_v X$, to a vertical annulus in $X$. \EndProposition \Proof Let $j:A\to X$ denote the inclusion. If $j$ is homotopic to a map of $A$ into $\partial_hX$ then by \cite[Lemma 5.3]{Waldhausen}, $A$ is boundary parallel, contradicting essentiality relative to $\partial_hX$. Hence $j$ is not homotopic to a map of $A$ into $\partial_hX$. It follows that if $X$ is a trivial $I$-bundle then $A$ has one boundary component on each component of $\partial_h X$. Lemma 3.4 of \cite{Waldhausen} then asserts that $A$ is ambiently isotopic in $X$ to a vertical annulus, by an isotopy fixing $\partial_v X$ and one component of $\partial_h X$. This gives the conclusion in the case where $X$ is a trivial $I$-bundle. We turn to the case where $X$ is a twisted $I$-bundle.
Then for some compact orientable surface $F$ and some orientation-reversing free involution $\tau:F\to F$ we may write $X=(F\times I)/\hat\tau$, where $\hat\tau:F\times I\to F\times I$ is defined by $\hat\tau(x,t)=(\tau(x),1-t)$. The quotient map $p:F\times I\to X$ is a two-sheeted covering map, and $p$ maps $F\times\{i\}$ homeomorphically onto $\partial_h X$ for $i=0,1$. Hence $j:A\to X$ may be lifted to an embedding $\tj:A\to\tX$. Since $j$ is not homotopic to a map of $A$ into $\partial_hX$, the annulus $\tj(A)$ is essential. By the case of the proposition already proved, $\tj$ is ambiently isotopic in $\tX$ to an embedding $\tj'$ such that $\tj'(A)$ is vertical. For $i=0,1$ let $C_i$ denote the component of $\partial A$ with $\tj'(C_i)\subset F\times\{i\}$. Define a simple closed curve $C\subset F$ by $C\times\{0\}=\tj'(C_0)$. Then $C\times\{0\}$ is isotopic to $\tj(C_0)$. Since $\tj'(A)$ is vertical, $\tj'(C_1) = C \times \{1\}$; hence by definition we have $\tau(C)\times\{0\}= \hat\tau(\tj'(C_1))$, and so $\tau(C) \times \{0\}$ is isotopic to $\hat\tau(\tj(C_1))$. But since $j$ is an embedding, $\tj(C_0)$ and $\hat\tau(\tj(C_1))$ are disjoint curves in $F\times\{0\}$. Thus $C$ and $\tau(C)$ are isotopic in $F$ to disjoint curves, and it follows from Lemma \ref{how many people cried} that $C$ is isotopic in $F$ to a curve $C'$ such that $\tau(C')\cap C'=\emptyset$. This implies that $\tj'$ is ambiently isotopic to an embedding $\tj'_1$ such that $\tj'_1(A)$ is a vertical annulus and $\hat\tau(\tj'_1(A))\cap \tj'_1(A)=\emptyset$. Hence $p\circ\tj'_1:A\to X$ is an embedding, and $A_1\doteq p\circ\tj'_1(A)$ is a vertical annulus in $X$. Since $\tj'_1$ is ambiently isotopic in $F \times I$ to $\tj$, the homeomorphism $p\circ\tj'_1:A\to A_1$ is homotopic to $j$ by a boundary-preserving homotopy in $X$. It then follows from \cite[Corollary 5.5]{Waldhausen} that $A$ is isotopic to $A_1$ by an ambient isotopy fixing $\partial_v X$.
\EndProof \Number\label{oshkosh bighash} Let $W$ be a compact, orientable, irreducible and boundary-irreducible $3$-manifold with $\partial W\ne\emptyset$. We recall the definition of the characteristic submanifold $\Sigma$ of $W$ relative to $\partial W$. Up to ambient isotopy, $\Sigma$ is the unique compact submanifold of $W$ with the following properties. \begin{enumerate} \item\label{thermometer} Every component of $\Sigma$ is either an $I$-bundle $P$ over a surface such that $P\cap\partial W=\partial_hP$, or a Seifert fibered space $S$ such that $S\cap\partial W$ is a saturated $2$-manifold in $\partial S$. \item\label{sushi} Every component of the frontier of $\Sigma$ is an essential annulus or torus in $W$. \item\label{cheesequake} No component of $\Sigma$ is ambiently isotopic in $W$ to a submanifold of another component of $\Sigma$. \item\label{big ol' hash} If $\Sigma_1$ is a compact submanifold of $W$ such that (\ref{thermometer}) and (\ref{sushi}) hold with $\Sigma_1$ in place of $\Sigma$, then $\Sigma_1$ is ambiently isotopic in $W$ to a submanifold of $\Sigma$. \end{enumerate} For further details, see \cite{Jo} and \cite{JaS}. It follows from Property (\ref{sushi}) of $\Sigma$ that $\Sigma$ is $\pi_1$-injective in $W$. Hence if $W$ is simple, the fundamental group of a component of $\Sigma$ cannot have a rank-$2$ free abelian subgroup. It follows that if $W$ is simple then every Seifert-fibered component of $\Sigma$ is a solid torus. In particular, all the components of the frontier of $\Sigma$ are essential annuli in this case. \EndNumber The rest of this section is concerned with books of $I$-bundles. We shall follow the conventions of \cite{CDS}, and we refer the reader to \cite[\rhumba]{CDS} for the definition of a book of $I$-bundles.
As in \cite{CDS} we shall denote the union of the pages of a book of $I$-bundles by $\calp_\calw$ and the union of its bindings by $\calb_\calw$, and we shall set $|\calw|=\calp_\calw\cup\calb_\calw$ and $\cala_\calw=\calp_\calw\cap\calb_\calw$. \Definition Let $\calw$ be a book of $I$-bundles, set $W=|\calw|$, and let $C$ denote a regular neighborhood of $\cala_\calw$ in $W$. We shall say that $\calw$ is {\it normal} if (i) $W$ is a simple $3$-manifold, and (ii) $\overline{W-C}$ is ambiently isotopic in $W$ to the characteristic submanifold of the pair $(W,\partial W)$. \EndDefinition \Proposition\label{all god's chillun got guts} Let $W$ be a simple $3$-manifold. Let $\Sigma$ denote the characteristic submanifold of the pair $(W,\partial W)$, and suppose that $\chi(\overline{W-\Sigma})\ge0$. Then $\overline{W-\Sigma}$ is a regular neighborhood of a properly embedded submanifold $\cala$ of $W$, each component of which is an annulus. Furthermore, there is a normal book of $I$-bundles $\calw$ such that $|\calw|=W$ and $\cala_\calw=\cala$. \EndProposition \Proof Set $\calc=\overline{W-\Sigma}$. As we observed in \ref{oshkosh bighash}, the components of the frontier of $\Sigma$ are $\pi_1$-injective annuli. In particular, no component of $\partial\calc$ is a $2$-sphere. Since $\chi(\calc)\ge0$, it follows that every component of $\partial\calc$ is a torus. On the other hand, the property (\ref{sushi}) of $\Sigma$ stated in \ref{oshkosh bighash} implies that $\calc$ is $\pi_1$-injective in $W$. Since $W$ is simple, it follows that the fundamental group of a component of $\calc$ cannot have a rank-$2$ free abelian subgroup. Hence the components of $\calc$ are boundary-reducible, and in view of the irreducibility of $W$ they must be solid tori. Let $C$ be any component of $\calc$. The frontier components of $C$ are among the frontier components of $\Sigma$, and we observed in \ref{oshkosh bighash} that these are essential annuli in $W$. 
In particular it follows that the components of $C\cap\partial W$ are $\pi_1$-injective annuli on the solid torus $C$. Hence $C$ may be given the structure of a Seifert fibered space in such a way that $C\cap\partial W$ is saturated. It now follows from Property (\ref{big ol' hash}) of $\Sigma$ that $C$ is ambiently isotopic in $W$ to a submanifold of $\Sigma$. Since $C$ is also ambiently isotopic to a submanifold of $W-\Sigma$, it must be ambiently isotopic to the regular neighborhood of a frontier component of $\Sigma$. This proves: \Claim\label{spitzer} $\calc$ is a regular neighborhood of a properly embedded $2$-manifold $\cala\subset W$ whose components are annuli. In particular, each component of $\calc$ has two frontier annuli. \EndClaim In particular this gives the first assertion of the proposition. Let $\Sigma_0$ denote the union of all components of $\Sigma$ that are solid tori, and set $\Sigma_-=\Sigma-\Sigma_0$. Since every component of $\Sigma$ is a solid torus or an $I$-bundle $P$ with $P\cap\partial W=\partial_hP$, each component of $\Sigma_-$ is an $I$-bundle $P$ over a surface of negative Euler characteristic with $P\cap\partial W=\partial_hP$. We now claim: \Claim\label{morally triangulated} Each component of $\calc$ has one frontier annulus contained in $\Sigma_0$ and one contained in $\Sigma_-$. \EndClaim Let $C$ be any component of $\calc$. According to \ref{spitzer}, $C$ has two frontier annuli $A_1$ and $A_2$. Let $Q_i$ denote the component of $\Sigma$ containing $A_i$ (where a priori we might have $Q_1=Q_2$). To prove \ref{morally triangulated} we must show that $Q_1$ and $Q_2$ cannot both be contained in $\Sigma_-$ or both be contained in $\Sigma_0$. First suppose that the $Q_i$ are both contained in $\Sigma_-$. Then $Q\doteq Q_1\cup C\cup Q_2$ may be given the structure of an $I$-bundle over a surface in such a way that $\partial_hQ=Q\cap\partial W$. 
It therefore follows from the property (\ref{big ol' hash}) of $\Sigma$ stated in \ref{oshkosh bighash} that $Q$ is ambiently isotopic to a submanifold of $\Sigma$. But since $\chi(Q_i)<0$ for $i=1,2$ we have $\chibar(Q)>\chibar(Q_i)$ for $i=1,2$. Hence $Q$ cannot be ambiently isotopic to a submanifold of $Q_i$ for $i=1,2$. Furthermore, if $Q'$ is a component of $\Sigma$ distinct from $Q_1$ and $Q_2$, then since $Q\cap Q'=\emptyset$ and $\chi(Q)<0$, the $I$-bundle $Q$ cannot be ambiently isotopic to a submanifold of $Q'$. This is a contradiction. Now suppose that the $Q_i$ are both contained in $\Sigma_0$. Then the $Q_i$ are solid tori, and the image of the inclusion homomorphism $\pi_1(A_i)\to\pi_1(Q_i)$ has some finite index $m_i$ in $\pi_1(Q_i)$ for $i=1,2$. By van Kampen's theorem, applied to the decomposition $Q\doteq Q_1\cup C\cup Q_2$ in which $C$ is glued to $Q_i$ along the annulus $A_i$, the fundamental group of $Q$ has presentation $\langle x_1,x_2: x_1^{m_1}=x_2^{m_2}\rangle$. But since $W$ is simple and the frontier annuli of $Q$ are essential, $\pi_1(Q)$ has no rank-$2$ free abelian subgroup. Hence at least one of the $m_i$ must be equal to $1$, and we may assume that $m_2=1$. But this implies that $Q_2$ is ambiently isotopic to a submanifold of $Q_1$, which contradicts the property (\ref{cheesequake}) of $\Sigma$ stated in \ref{oshkosh bighash}. This completes the proof of \ref{morally triangulated}. It follows from \ref{spitzer} and \ref{morally triangulated} that $\calb\doteq\Sigma_0\cup\calc$ is a regular neighborhood of $\Sigma_0$ in $W$, and that the frontier $\cala'$ of $\calb$ is ambiently isotopic to $\cala$. If we set $\calp=\Sigma_-$, it follows from the definition given in \cite[\rhumba]{CDS} that $\calw'=(W,\calb,\calp)$ is a book of $I$-bundles. Normality is immediate from the construction. Since $\cala_{\calw'}=\cala'$ is ambiently isotopic to $\cala$, there is a normal book of $I$-bundles $\calw$ with $|\calw|=|\calw'|=W$ and $\cala_\calw=\cala$. \EndProof Requiring that a book of $I$-bundles structure be normal rules out certain degeneracies. 
For example, if $W^\flat$ is an $I$-bundle over a closed surface of negative Euler characteristic, we may write $W^\flat=|\calw^\flat|$ for some book of $I$-bundles $\calw^\flat$ with $\calb_{\calw^\flat}\ne\emptyset$. Such a book of $I$-bundles $\calw^\flat$ is not normal, and the following proposition would become false if the normal book of $I$-bundles $\calw$ were replaced by $\calw^\flat$. \Proposition\label{i once was as meek} Let $\calw$ be a normal book of $I$-bundles, and suppose that $A$ is an essential annulus in $|\calw|$. Then $A$ is ambiently isotopic in $|\calw|$ to either a vertical annulus in a page of $\calw$, or an annulus contained in a binding. \EndProposition \Proof Set $W=|\calw|$. Let $\calc$ be a regular neighborhood of $\cala_\calw$ in $W$. The definition of normality implies that (up to isotopy) $\Sigma\doteq\overline{W-\calc}$ is the characteristic submanifold of $W$ relative to $\partial W$. Let $V$ be a regular neighborhood of $A$ in $W$. Then $V$ may be given the structure of a Seifert fibered space in such a way that $V\cap\partial W$ is a saturated $2$-manifold in $\partial V$. The components of the frontier of $V$ in $W$ are essential annuli in $W$. Hence the property (\ref{big ol' hash}) of $\Sigma$ stated in \ref{oshkosh bighash} implies that $V$ is ambiently isotopic in $W$ to a submanifold of $\Sigma$. In particular, $A$ is ambiently isotopic in $W$ to an annulus $A'\subset W-\cala_\calw$. Thus $A'$ is contained in either a page or a binding of $\calw$. If $A'$ is contained in a page $P$, then $P$ is an $I$-bundle over a surface. Since $A'$ is essential in $W$ (relative to $\partial W$), it is in particular essential in $P$ relative to $\partial_hP$. It therefore follows from Proposition \ref{one bundle at a time} that $A'$ is isotopic in $P$, by an ambient isotopy of $P$ which fixes $\partial_v P$, to a vertical annulus. The conclusion of the proposition follows. 
\EndProof \Lemma\label{as a new-born lamb} Let $\calw$ be a normal book of $I$-bundles. Let $Y$ be a compact $3$-dimensional submanifold of $|\calw|$. Suppose that the following conditions hold. \begin{enumerate} \item\label{i'm now employed}Each component of the frontier of $Y$ in $|\calw|$ is an essential properly embedded annulus in $|\calw|$. \item\label{the devil may take him} The $2$-manifold $Y\cap \partial |\calw|$ has two components $Z_0$ and $Z_1$, with $\chibar(Z_0)=\chibar(Z_1)=1$. \item\label{i'll never forsake him}The inclusion homomorphism $H_1(Z_0;\ZZ_2)\to H_1(Y;\ZZ_2)$ is injective. \item\label{the full treatment}For every solid torus $L\subset Y$ such that $L\cap Z_0$ is an annulus which is homotopically non-trivial in $|\calw|$, the inclusion homomorphism $H_1(L\cap Z_0;\ZZ)\to H_1(L;\ZZ)$ is surjective. \end{enumerate} Then the inclusion homomorphism $\pi_1(Z_0)\to\pi_1(Y)$ is surjective. \EndLemma \Proof We set $W=|\calw|$, $\calp=\calp_\calw$, $\calb=\calb_\calw$ and $\cala=\cala_\calw$. Since the frontier components of $Y$ relative to $W$ are annuli, we have $$\chibar(Y)=\frac12\chibar(\partial Y)=\frac12(\chibar(Z_0)+\chibar(Z_1))=1.$$ By Proposition \ref{i once was as meek} we may assume that each frontier component of $Y$ is either a vertical annulus contained in $\calp$ and disjoint from $\cala$, or an annulus in $\calb$. Then each component of $Y\cap\calp$ is an $I$-sub-bundle of a page of $\calw$, whose frontier is a disjoint union of vertical annuli in the page; each of these vertical annuli is either contained in, or disjoint from, the vertical boundary of the page. Furthermore, each component of $Y\cap\calb$ is a solid torus in a binding, whose frontier is a disjoint union of $\pi_1$-injective annuli in the binding. In particular we have $\chibar(C)\ge0$ for every component $C$ of $Y\cap\calp$, and $\chibar(C)=0$ for every component $C$ of $Y\cap\calb$. 
Since the frontier components of $Y\cap\calb$ and $Y\cap\calp$ relative to $Y$ are annuli, we have $$1=\chibar(Y)=\chibar(Y\cap\calb)+\chibar(Y\cap\calp)=\chibar(Y\cap\calp)=\sum_U\chibar(U),$$ where $U$ ranges over the components of $Y\cap\calp$. Hence there is a component $U_1$ of $Y\cap\calp$ with $\chibar(U_1)=1$, and $\chibar(U)=0$ for every component $U\ne U_1$ of $Y\cap\calp$. Since $U_1$ is a sub-bundle of a page of $\calw$, it is an $I$-bundle over a surface $S$ with $\chibar(S)=1$, and the frontier of $U_1$ relative to $W$ is its vertical boundary. We set $\calk=\overline{Y-U_1}$. Each component of the frontier of $U_1$ or $\calk$ relative to $W$ is a component of either the frontier $\cala_\calw$ of $\calp_\calw$ relative to $W$, or the frontier of $Y$ relative to $W$. According to \cite[Lemma 5.2]{CDS}, the components of $\cala_\calw$ are $\pi_1$-injective annuli in $W$. The components of the frontier of $Y$ are $\pi_1$-injective by hypothesis. Hence: \Claim\label{becomes perforce} Each component of the frontier of $U_1$ or $\calk$ relative to $W$ is a $\pi_1$-injective annulus in $W$. \EndClaim The horizontal boundary of $U_1$ is a two-sheeted covering space of $S$, which is connected if and only if $U_1$ is a twisted $I$-bundle. In particular we have $\chibar(\partial_hU_1)=2\chibar(S)=2$. On the other hand, we have $\partial_hU_1= U_1\cap\partial W\subset Y\cap\partial W=Z_0\cup Z_1$, so that each component of $\partial_hU_1$ is contained in $Z_0$ or in $Z_1$. It follows from \ref{becomes perforce} that $\partial_hU_1$ is $\pi_1$-injective in $W$ and hence in $Z_0\cup Z_1$. Since $\chibar(Z_0)=\chibar(Z_1)=1$, whereas $\chibar(\partial_hU_1)=2$, the surface $\partial_hU_1$ cannot be contained in $Z_0$ or in $Z_1$; in particular $\partial_hU_1$ is not connected, and so $U_1$ is not a twisted $I$-bundle. Hence: \Claim\label{you'll never regret it} $U_1$ is a trivial $I$-bundle over $S$, and its horizontal boundary has one component contained in $Z_0$ and one in $Z_1$. \EndClaim For $i=0,1$, let us denote by $\Delta_i$ the component of $\partial_hU_1$ contained in $Z_i$. 
Since $W$ is simple by the definition of a normal book of $I$-bundles, it follows from \ref{becomes perforce} that each component of $\calk$ is simple. If $V$ is any component of $\calk$, the components of $V\cap\calp$ are components of $Y\cap\calp$ distinct from $U_1$, and the components of $V\cap\calb$ are components of $Y\cap\calb$. Hence all components of $V\cap\calp$ and $V\cap\calb$ have Euler characteristic $0$. As the components of $(V\cap\calp)\cap(V\cap\calb)$ are annuli it follows that $\chi(V)=0$. But the only simple $3$-manifold with non-empty boundary having Euler characteristic $0$ is a solid torus. This shows: \Claim\label{i readily bet it}Every component of $\calk$ is a solid torus. \EndClaim Let us consider any component $A$ of $U_1\cap\calk$, and let $V$ denote the component of $\calk$ containing $A$. According to \ref{i readily bet it}, $V$ is a solid torus. Since $A$ is a component of the frontier of $U_1$ in $W$, it is a $\pi_1$-injective annulus in $W$ by \ref{becomes perforce}, and hence in $V$. On the other hand, it follows from \ref{you'll never regret it} that some component $c$ of $\partial A$ is contained in $Z_0$. A small non-ambient isotopy of $V$ gives a solid torus $L$ such that $L\cap Z_0$ is a regular neighborhood $R$ of $c$. Since $A$ is $\pi_1$-injective in $W$, the annulus $R$ is homotopically non-trivial in $W$. Hypothesis (\ref{the full treatment}) then implies that the inclusion homomorphism $H_1(L\cap Z_0;\ZZ)\to H_1(L;\ZZ)$ is an isomorphism, and hence that the inclusion homomorphism $\pi_1(A)\to\pi_1(V)$ is an isomorphism. This shows: \Claim\label{for duty, duty must be done}For every component $A$ of $U_1\cap\calk$, if $V$ denotes the component of $\calk$ containing $A$, the inclusion homomorphism $\pi_1(A)\to\pi_1(V)$ is an isomorphism. \EndClaim Now suppose that $A_1$ and $A_2$ are two components of $U_1\cap\calk$ contained in a single component $V$ of $\calk$. 
For $i=1,2$ it follows from \ref{you'll never regret it} that some component $c_i$ of $\partial A_i$ is contained in $Z_0$. Since $c_1$ and $c_2$ are disjoint, non-trivial simple closed curves on the torus $\partial V\subset W$, they represent the same element of $H_1(Y;\ZZ_2)$. Since by hypothesis (\ref{i'll never forsake him}) the inclusion homomorphism $H_1(Z_0;\ZZ_2)\to H_1(Y;\ZZ_2)$ is injective, $c_1$ and $c_2$ cobound a subsurface $Q$ of $Z_0$. Since $c_1$ and $c_2$ are homotopically non-trivial, $Q$ is $\pi_1$-injective in $Z_0$, and hence $\chibar(Q)\le\chibar(Z_0)=1$. As $Q$ is orientable and has exactly two boundary curves, it must be an annulus. On the other hand, $Q$ and $\Delta_0$ are subsurfaces of $Z_0$, and we have $\partial Q\subset\partial \Delta_0$. Hence either $\Delta_0\subset Q$ or $Q$ is a component of $\overline{Z_0-\Delta_0}$. But it follows from \ref{becomes perforce} that $\Delta_0$ is $\pi_1$-injective in $W$ and hence in $Z_0$; since $\chibar(\Delta_0)=1>0=\chibar(Q)$, we cannot have $\Delta_0\subset Q$. Thus we have proved: \Claim\label{the rule applies}If $A^{(0)}$ and $A^{(1)}$ are two components of $U_1\cap\calk$ contained in the same component of $\calk$, then some component of $\overline{Z_0-\Delta_0}$ is an annulus $Q$ having one boundary component contained in $\partial A^{(0)}$ and one contained in $\partial A^{(1)}$. \EndClaim In particular, if $Q$ is an annulus having the properties stated in \ref{the rule applies}, then $\partial Q\subset \partial\Delta_0$. Since $\chibar(\Delta_0)=1$, the surface $\Delta_0$ has at most three boundary curves (if $\Delta_0$ has genus $g$ and $b$ boundary curves, then $\chibar(\Delta_0)=2g+b-2=1$ gives $b=3-2g\le3$). Hence there is at most one unordered pair $\{A^{(0)},A^{(1)}\}$ of components of $U_1\cap\calk$ such that $A^{(0)}$ and $A^{(1)}$ are contained in the same component of $\calk$. 
Equivalently: \Claim\label{to everyone} There is no component of $\calk$ whose boundary contains more than two components of $U_1\cap\calk$, and there is at most one component of $\calk$ whose boundary contains two components of $U_1\cap\calk$. \EndClaim In view of \ref{to everyone}, the argument now divides into two cases. \noindent{\bf Case I: The boundary of each component of $\calk$ contains only one component of $U_1\cap\calk$.} In this case, let $A_1,\ldots,A_m$ denote the components of $U_1\cap\calk$. (Since $\chibar(S)=1$ we have $m\le3$.) Let $V_i$ denote the component of $\calk$ containing $A_i$; thus the solid tori $V_1,\ldots,V_m$ are all distinct. We have $Y=U_1\cup V_1\cup\cdots\cup V_m$. By \ref{for duty, duty must be done} the inclusion homomorphism $\pi_1(A_i)\to\pi_1(V_i)$ is an isomorphism for $i=1,\ldots,m$. Hence the inclusion homomorphism $\pi_1(U_1)\to\pi_1(Y)$ is an isomorphism. But by \ref{you'll never regret it}, the inclusion homomorphism $\pi_1(\Delta_0)\to\pi_1(U_1)$ is an isomorphism. Hence the inclusion homomorphism $\pi_1(\Delta_0)\to\pi_1(Y)$ is an isomorphism, and in particular the inclusion homomorphism $\pi_1(Z_0)\to\pi_1(Y)$ is surjective. This establishes the conclusion of the lemma in this case. \noindent{\bf Case II: There is a component $V_0$ of $\calk$ whose boundary contains more than one component of $U_1\cap\calk$.} By \ref{to everyone}, this component $V_0$ is unique and contains exactly two components of $U_1\cap\calk$, which we denote $A_0^{(0)}$ and $A_0^{(1)}$. According to \ref{i readily bet it}, $V_0$ is a solid torus, and according to \ref{for duty, duty must be done} the inclusion homomorphism $\pi_1(A_0^{(i)})\to\pi_1(V_0)$ is an isomorphism for $i=0,1$. Hence there is a homeomorphism $h:V_0\to S^1\times[0,1]\times[0,1]$ such that $h(A_0^{(i)})=S^1\times\{i\}\times[0,1]$ for $i=0,1$. This allows us to extend the $I$-bundle structure of $U_1$ to an $I$-bundle structure for $L\doteq U_1\cup V_0$. 
(Note, however, that the horizontal boundary of $L$ need not be contained in $\partial W$.) According to \ref{the rule applies}, some component of $\overline{Z_0-\Delta_0}$ is an annulus $Q$ having one boundary component contained in $\partial A_0^{(0)}$ and one contained in $\partial A_0^{(1)}$. This implies that the $I$-bundle $L$ is trivial, and that one of its horizontal boundary components, which we shall denote by $\Theta$, is contained in $Z_0$. Now let $A_1,\ldots,A_m$ denote the components of $U_1\cap\calk$ distinct from $A_0^{(0)}$ and $A_0^{(1)}$. (Since $\chibar(S)=1$ one may show that $m\le1$, but this will not be used.) Let $V_i$ denote the component of $\calk$ containing $A_i$; thus the solid tori $V_1,\ldots,V_m$ are all distinct. We have $Y=L\cup V_1\cup\cdots\cup V_m$. By \ref{for duty, duty must be done} the inclusion homomorphism $\pi_1(A_i)\to\pi_1(V_i)$ is an isomorphism for $i=1,\ldots,m$. Hence the inclusion homomorphism $\pi_1(L)\to\pi_1(Y)$ is an isomorphism. But since $L$ is a trivial $I$-bundle and $\Theta$ is one component of its horizontal boundary, the inclusion homomorphism $\pi_1(\Theta)\to\pi_1(L)$ is an isomorphism. Hence the inclusion homomorphism $\pi_1(\Theta)\to\pi_1(Y)$ is an isomorphism, and in particular the inclusion homomorphism $\pi_1(Z_0)\to\pi_1(Y)$ is surjective. Thus the conclusion of the lemma is established in this case as well. \EndProof \section{Trimonic manifolds} \label{sec:trimonic} \Number\label{troglodyte} Let $V$ be a point of an oriented surface $S$. By an {\it ordered triod} based at $V$ we shall mean an ordered triple $(A_0,A_1,A_2)$ of closed topological arcs in $S$, each having $V$ as an endpoint, such that $A_i\cap A_j=\{V\}$ whenever $i\ne j$. Suppose that $(A_0,A_1,A_2)$ is an ordered triod based at $V$. For $i=0,1,2$ let $x_i$ denote the endpoint of $A_i$ that is distinct from $V$. 
Then there is a disk $\delta \subset S$ such that $A_0\cup A_1\cup A_2\subset \delta $ and $(A_0\cup A_1\cup A_2)\cap\partial \delta =\{x_0,x_1,x_2\}$. We shall express this by saying that the triod $(A_0,A_1,A_2)$ is {\it properly embedded} in $\delta$. The orientation of $S$ restricts to an orientation of $\delta $, which in turn induces an orientation of $\partial \delta $. We shall say that the ordered triod $(A_0,A_1,A_2)$ is {\it positive} if the ordered triple $(x_0,x_1,x_2)$ is in counterclockwise order on $\partial \delta $, and {\it negative} otherwise. \EndNumber \Number Suppose that $\theta$ is an oriented open arc. We denote by $\theta'$ the same arc with the opposite orientation. By a {\it terminal segment} of $\theta$ we mean a subset $A$ of $\theta$ which has the form $h((t,1))$ for some orientation-preserving homeomorphism $h:(0,1)\to \theta$ and some point $t\in(0,1)$. By an {\it initial segment} of $\theta$ we mean a terminal segment of $\myprime \theta$. \EndNumber \Number\label{terminator} Now suppose that $\Gamma$ is a graph (i.e. a $1$-dimensional CW complex) contained in an oriented surface $S$. By an {\it oriented edge} of $\Gamma$ we mean simply an (open) edge which is equipped with an orientation. Let $V$ be a vertex of $\Gamma$, and let $(e_0,e_1,e_2)$ be an ordered triple of oriented edges of $\Gamma$ with terminal vertex $V$. Assume that the $e_i$ are distinct as oriented edges (although two of them may be opposite orientations of the same underlying edge). We may choose terminal segments $A_i$ of the $e_i$ in such a way that $(\bar{A}_0,\bar{A}_1,\bar{A}_2)$ is an ordered triod based at $V$ in $S$. We shall say that the ordered triple $(e_0,e_1,e_2)$ is {\it positive} if the ordered triod $(\bar{A}_0,\bar{A}_1,\bar{A}_2)$ is positive, and {\it negative} otherwise. \EndNumber \Number Let $Z$ be a planar surface with three boundary curves, and let $\star\in\inter Z$ be a base point. 
An ordered pair $(z_1,z_2)$ of elements of $\pi_1(Z,\star)$ will be called a {\it geometric basis} for $\pi_1(Z,\star)$ if the boundary curves of $Z$ may be indexed as $(C_i)_{1\le i\le3}$ in such a way that for $i=1,2,3$ there exist a point $p_i\in C_i$, a closed path $\gamma_i$ in $C_i$ based at $p_i$ and an oriented (embedded) arc $\tau_i$ from $\star$ to $p_i$, such that \begin{itemize} \item the $\tau_i$ have pairwise disjoint interiors; \item $\tau_i\cap C_i=\{p_i\}$ for each $i$; \item $[\gamma_i]$ generates $\pi_1(C_i,p_i)$ for each $i$; \item $z_i=[\tau_i*\gamma_i*\overline{\tau_i}]$ for $i=1,2$; and \item $z_1^{-1}z_2= [\tau_3*\gamma_3*\overline{\tau_3}]$. \end{itemize} Note that if $(z_1,z_2)$ is a geometric basis for $\pi_1(Z,\star)$ then $z_1$ and $z_2$ freely generate $\pi_1(Z,\star)$. \EndNumber \Number\label{but go} Let $\Gamma$ be a theta graph contained in an oriented surface $S$. Let $W$ and $V$ denote the vertices of $\Gamma$, and let $\beta_0$, $\beta_1$ and $\beta_2$ denote the oriented edges having initial vertex $W$ and terminal vertex $V$. Suppose that the ordered triple $(\beta_0,\beta_1,\beta_2)$ of edges terminating at $V$ is positive, and that the ordered triple $(\myprime \beta_0,\myprime \beta_1,\myprime \beta_2)$ of oriented edges terminating at $W$ is negative. Then a regular neighborhood $Z$ of $\Gamma$ in $S$ is a planar surface with three boundary curves, and $([\myprime \beta_0* \beta_1],[\myprime \beta_0* \beta_2])$ is a geometric basis of $\pi_1(Z,V)$. \EndNumber \Number\label{go in peace} Let $\Gamma$ be an eyeglass graph contained in an oriented surface $S$. Let $W$ and $V$ denote the vertices of $\Gamma$, let $\beta_0$ denote the oriented edge having initial vertex $W$ and terminal vertex $V$, and let $\beta_1$ and $\beta_2$ be oriented loops based at $W$ and $V$ respectively. 
Suppose that the ordered triple $(\beta_0,\beta_2',\beta_2)$ of edges terminating at $V$ is positive, and that the ordered triple $(\beta_0',\beta_1',\beta_1)$ of edges terminating at $W$ is negative. Then a regular neighborhood $Z$ of $\Gamma$ in $S$ is a planar surface with three boundary curves, and $([ \beta_2],[\myprime \beta_0* \beta_1*\beta_0])$ is a geometric basis of $\pi_1(Z,V)$. \EndNumber \Definition\label{i'm now sir murgatroyd} Let $X$ be a compact orientable $3$-manifold, and let $S$ be a component of $\partial X$. We shall say that $X$ is a {\it trimonic manifold relative to $S$} if there exists a properly embedded arc $\alpha\subset X$ and a PL map $f$ of a PL $2$-disk $D$ into $X$, such that the following conditions hold: \begin{enumerate} \item\label{of steven sondheim} $f^{-1}(\alpha)$ is a union of three disjoint arcs in $\partial D$; \item\label{and leonard bernstein} $f$ maps each component of $f^{-1}(\alpha)$ homeomorphically onto $\alpha$; \item\label{everything's free} $f|(\inter D\cup((\partial D)-f^{-1}(\alpha)))$ is one-to-one; \item\label{in america} $f(\inter D)\subset\inter X$; \item\label{for a small fee} $f((\partial D)-f^{-1}(\alpha))\subset S$; \item\label{maybe i go back} $X$ is a semi-regular neighborhood of $S\cup f(D)$. \end{enumerate} Note that condition (\ref{for a small fee}) implies that the endpoints of $\alpha$ lie in $S$. A PL map $f$ of a $2$-disk $D$ into $X$ such that (\ref{of steven sondheim})--(\ref{maybe i go back}) hold for some properly embedded arc $\alpha$ in $X$ will be called a {\it defining hexagon} for the trimonic manifold $X$ relative to $S$. \EndDefinition \Notation Suppose that $f:D\to X$ is a defining hexagon for a trimonic manifold $X$ relative to $S$. Note that the arc $\alpha$ appearing in Definition \ref{i'm now sir murgatroyd} is uniquely determined by $f$. We shall denote this arc by $\alpha_f$. 
Furthermore, we shall denote by $\Gamma_f$ the PL set $\overline{f((\partial D)-f^{-1}(\alpha_f))}=f(D)\cap\partial X\subset S$. \EndNotation \Lemma\label{the magic} Suppose that $X$ is a trimonic manifold relative to $S$. Let $f:D\to X$ be a defining hexagon for $X$, and set $\alpha=\alpha_f$ and $\Gamma=\Gamma_f$. Then $\Gamma$ is homeomorphic to either a theta graph (the ``theta case'') or an eyeglass graph (the ``eyeglass case''), and a regular neighborhood $Z$ of $\Gamma$ in $S$ is a planar surface with three boundary curves. Furthermore, for some (and hence for any) base point $\star\in\inter Z$, there is an ordered basis $(t,u_1,u_2)$ of the rank-$3$ free group $\pi_1(Z\cup\alpha,\star)$ with the following properties: \begin{itemize} \item the inclusion homomorphism $\pi_1(Z,\star)\to\pi_1(Z\cup\alpha,\star)$ maps some geometric basis of $\pi_1(Z,\star)$ to the pair $(u_1,u_2)$; and \item $\partial D$ may be oriented so that the conjugacy class in $\pi_1(Z\cup\alpha,\star)$ represented by $f|\partial D:\partial D\to Z\cup\alpha$ is $t^2u_1tu_2$ in the theta case, or $t^2u_1t^{-1}u_2$ in the eyeglass case. \end{itemize} \EndLemma \begin{figure} \caption{Some objects defined in the proof of Lemma \ref{the magic}} \label{hexnT} \end{figure} \Proof We denote the components of $f^{-1}(\alpha)$ by $a_0$, $a_1$ and $a_2$, and we denote the components of $\overline{(\partial D)-f^{-1}(\alpha)}$ by $b_0$, $b_1$ and $b_2$. We take the $a_i$ and $b_i$ to be labeled in such a way that $a_i$ and $b_j$ share an endpoint if and only if $i$ is congruent to either $j$ or $j+1$ modulo $3$. For $i=0,1,2$ we set $\beta_i=f(\inter b_i)$. Then $\beta_i$ is an open arc in $S-\partial\alpha $, and $\overline{\beta_i}-\beta_i\subset\partial\alpha $. Hence $\Gamma=\overline{\beta_0\cup\beta_1\cup\beta_2}$ may be given the structure of a graph whose vertices are the endpoints of $\alpha$, and whose edges are $\beta_0$, $\beta_1$ and $\beta_2$. 
Since $\alpha$ is an embedded arc, $f$ maps the two endpoints of each arc $a_i$ to distinct vertices of $\Gamma$. In particular, each vertex of $\Gamma$ has valence $3$. Since $\Gamma$ has three edges and two vertices, it is either a theta graph or an eyeglass graph. For $i=0,1,2$ we denote by $P_i$ the common endpoint of the arcs $a_i$ and $b_{i-1}$ in $\partial D$ (where subtraction is interpreted modulo $3$), and by $Q_i$ the common endpoint of $a_i$ and $b_{i}$. If we fix an orientation of $\partial D$ then each of the arcs $a_i$ and $b_i$ inherits an orientation. We choose the orientation of $\partial D$ in such a way that $P_i$ and $Q_i$ are respectively the initial and terminal points of $a_i$, while $Q_i$ and $P_{i+1}$ are respectively the initial and terminal points of $b_i$. According to Definition \ref{i'm now sir murgatroyd}, $f$ restricts to a homeomorphism $\phi_i:a_i\to\alpha$ for $i=0,1,2$. Let $T$ be a PL tubular neighborhood of $\alpha$ in $X$. We may choose $T$ in such a way that $f$ is transverse to the frontier of $T$ and $f^{-1}(T)$ is a regular neighborhood of $f^{-1}(\alpha)=a_0\cup a_1\cup a_2$ in $D$. For each $i\in\{0,1,2\}$, let $\nu_i$ denote the component of $f^{-1}(T)$ containing $a_i$. Then $\nu_i\cap\partial D$ has the form $a_i\cup s_i\cup r_i$, where $ s_i$ is the closure of a terminal segment of $b_{i-1}$ and $ r_i$ is the closure of an initial segment of $b_i$. It follows from Definition \ref{i'm now sir murgatroyd} that $f$ maps each $\nu_i$ homeomorphically onto a PL disk $J_i\subset T$, and that $J_i\cap J_j=\alpha$ when $i\ne j$. The intersection of $T$ with $\partial X$ consists of two PL disks; for each $i\in\{0,1,2\}$, one of these disks meets $J_i$ in the arc $\sigma_i\doteq f( s_i)$, and the other meets $J_i$ in the arc $\rho_i\doteq f( r_i)$. 
Hence we may identify $T$ by a PL homeomorphism with $\delta\times\alpha$, where $\delta$ is a PL disk, in such a way that $\alpha=\{o\}\times\alpha$ for some interior point $o$ of $\delta$; and so that for $i=0,1,2$ we have $J_i=t_i\times\alpha$ for some arc $t_i\subset\delta$. Each $t_i$ has one endpoint in $\partial\delta$ and one at $o$, and we have $t_i\cap t_j=\{o\}$ for $i\ne j$. The components of $t_i\times\partial\alpha$ are $\sigma_i$ and $\rho_i$. We may orient $\alpha$ in such a way that at least two of the homeomorphisms $\phi_i:a_i\to\alpha$ are orientation-preserving. Hence, after a possible cyclic relabeling of the $a_i$ (and the $b_i$), we may assume that $\phi_0$ and $\phi_1$ are orientation-preserving. We denote by $V$ and $W$, respectively, the initial and terminal endpoints of $\alpha$ with respect to the orientation that we have chosen. Thus for $i=0,1$ we have $f(P_i)=V$ and $f(Q_i)=W$. We shall distinguish two cases according to whether the homeomorphism $\phi_2:a_2\to\alpha$ (I) preserves or (II) reverses orientation. In case I we have $f(P_2)=V$ and $f(Q_2)=W$, while in case II we have $f(P_2)=W$ and $f(Q_2)=V$. Hence we may define ordered triods (see \ref{troglodyte}) $\calt_V$ and $\calt_W$ based at $V$ and $W$ respectively by setting $\calt_V=(\sigma_0,\sigma_1,\sigma_2)$ and $\calt_W=(\rho_0,\rho_1,\rho_2)$ in case I, and $\calt_V=(\sigma_0,\sigma_1,\rho_2)$ and $\calt_W=(\rho_0,\rho_1,\sigma_2)$ in case II. In each case, if we denote by $\delta_V$ and $\delta_W$ the components of $T\cap\partial X$ containing $V$ and $W$ respectively, the triods $\calt_V$ and $\calt_W$ are properly embedded (see \ref{troglodyte}) in $\delta_V$ and $\delta_W$. If we identify $T$ as above with $\delta\times\alpha$, we have $\delta_V=\delta\times\{V\}$ and $\delta_W=\delta\times\{W\}$. We may then define a homeomorphism $\psi:\delta_V\to\delta_W$ by $\psi(x,V)=(x,W)$. 
We orient $S$ in such a way that $\calt_V$ is a positive ordered triod (see \ref{troglodyte}). Each of the disks $\delta_V$ and $\delta_W$ inherits an orientation from $S$. The orientability of the $3$-manifold $X$ implies that $\psi:\delta_V\to\delta_W$ is an orientation-reversing homeomorphism. By the properties of the identification stated above, we have $\psi(\sigma_i)=\rho_i$ for $i=0,1$; and in Case I we have $\psi(\sigma_2)=\rho_2$, while in Case II we have $\psi(\rho_2)=\sigma_2$. Since $\calt_V$ is a positive ordered triod, and since $\psi$ reverses orientation, the ordered triod $\calt_W$ is negative. We orient each open arc $\beta_i$ in such a way that $f|\inter b_i:\inter b_i\to\beta_i$ is orientation-preserving. Since $\alpha$, $\beta_0$, $\beta_1$ and $\beta_2$ are now equipped with orientations, their closures define elements of the fundamental groupoid $\Pi(f(D))$ which we denote by $[\alpha]$, $[\beta_0]$, $[\beta_1]$ and $[\beta_2]$. We let $c$ denote the closed path $a_0*b_0*a_1*b_1*a_2*b_2$ in $\partial D$, based at $P_0$. Then $[c]$ generates $\pi_1( \partial D,P_0)$. We set $\gamma= f\circ c$. We have $$[\gamma] =[\alpha][\beta_0][\alpha][\beta_1][\alpha]^\epsilon[\beta_2] \in\pi_1(f(D),V)\subset\Pi(f(D)),$$ where $\epsilon=1$ in Case I, and $\epsilon=-1$ in Case II. To prove the conclusions of the lemma in Case I, we note that since $f(P_i)=V$ and $f(Q_i)=W$ for each $i$, each $\beta_i$ has $W$ as initial vertex and $V$ as terminal vertex. Thus $\Gamma$ is a theta graph, and Case I is the ``theta case'' referred to in the statement of the Lemma. Since $\calt_V=(\sigma_0,\sigma_1,\sigma_2)$ is a positive ordered triod based at $V$, and the interior of $\sigma_i$ is a terminal segment of $\beta_{i-1}$, the triple $(\beta_2,\beta_0,\beta_1)$ of edges terminating at $V$ is positive in the sense of \ref{terminator}. Hence the triple $(\beta_0,\beta_1,\beta_2)$ is positive. 
Likewise, since $\calt_W=(\rho_0,\rho_1,\rho_2)$ is a negative ordered triod based at $W$, and since the interior of $\rho_i$ is an initial segment of $\beta_i$---and therefore a terminal segment of $\beta_i'$---the triple $(\beta_0',\beta_1',\beta_2')$ of edges terminating at $W$ is negative. It now follows from \ref{but go} that a regular neighborhood $Z$ of $\Gamma$ in $S$ is a planar surface with three boundary curves, and that if we set $z_1=[\myprime \beta_0* \beta_1]$ and $z_2=[\myprime \beta_0* \beta_2]$, then $(z_1,z_2)$ is a geometric basis of $\pi_1(Z,V)$. In particular $\pi_1(Z,V)$ is freely generated by $z_1$ and $z_2$, and hence $\pi_1(Z\cup\alpha,V)$ is generated by $t$, $u_1$ and $u_2$, where $u_i$ denotes the image of $z_i$ under the inclusion homomorphism $\pi_1(Z,V)\to\pi_1(Z\cup\alpha,V)$, and $t=[\alpha*\beta_0]$. We have $$\begin{aligned} \empty[ \gamma ] &=[\alpha*\beta_0*\alpha*\beta_1*\alpha*\beta_2]\\ &=[\alpha*\beta_0*\alpha*\beta_0*\myprime\beta_0*\beta_1*\alpha*\beta_0*\myprime\beta_0*\beta_2]\\ &=t^2u_1tu_2. \end{aligned} $$ This establishes the conclusions of the lemma in the theta case. In Case II we have $f(P_i)=V$ and $f(Q_i)=W$ for $i=0,1$, while $f(P_2)=W$ and $f(Q_2)=V$. It follows that the closures of $\beta_1$ and $\beta_2$ are loops based at $W$ and $V$ respectively, whereas $\beta_0$ has initial point $W$ and terminal point $V$. Thus $\Gamma$ is an eyeglass graph, and Case II is the ``eyeglass case'' referred to above. Since $\calt_V=(\sigma_0,\sigma_1,\rho_2)$ is a positive ordered triod based at $V$, and since the interiors of $\sigma_0$, $\sigma_1$ and $\rho_2$ are respectively terminal segments of $\beta_{2}$, $\beta_0$ and $\beta_2'$, the triple $(\beta_2,\beta_0,\beta_2')$ of edges terminating at $V$ is positive. Hence the triple $(\beta_0,\beta_2',\beta_2)$ is positive. 
Likewise, since $\calt_W=(\rho_0,\rho_1,\sigma_2)$ is a negative ordered triod based at $W$, and since the interiors of $\rho_0$, $\rho_1$ and $\sigma_2$ are respectively terminal segments of $\beta_{0}'$, $\beta_1'$ and $\beta_1$, the triple $(\beta_0',\beta_1',\beta_1)$ of edges terminating at $W$ is negative. It now follows from \ref{go in peace} that a regular neighborhood $Z$ of $\Gamma$ in $S$ is a planar surface with three boundary curves, and that if we set $z_1=[\myprime \beta_0* \beta_1*\beta_0]$ and $z_2=[\beta_2]$, then $(z_1,z_2)$ is a geometric basis of $\pi_1(Z,V)$. In particular $\pi_1(Z,V)$ is freely generated by $z_1$ and $z_2$, and hence $\pi_1(Z\cup\alpha,V)$ is generated by $t$, $u_1$ and $u_2$, where $u_i$ denotes the image of $z_i$ under the inclusion homomorphism $\pi_1(Z,V)\to\pi_1(Z\cup\alpha,V)$, and $t=[\alpha*\beta_0]$. We have $$\begin{aligned} \empty [\gamma ] &=[\alpha*\beta_0*\alpha*\beta_1*\myprime\alpha*\beta_2]\\ &=[\alpha*\beta_0*\alpha*\beta_0*\myprime\beta_0*\beta_1*\beta_0*\myprime\beta_0*\myprime\alpha*\beta_2]\\ &=t^2u_1t^{-1}u_2. \end{aligned} $$ This establishes the conclusions of the lemma in the eyeglass case. \EndProof \Definition \label{non-degenerate} Let $X$ be a trimonic manifold relative to $S$. We shall say that $X$ is {\it non-degenerate} if there is a defining hexagon $f:D\to X$ such that no component of $S-\Gamma_f$ is an open disk. \EndDefinition \Lemma\label{as steward} Suppose that $X$ is a non-degenerate trimonic manifold relative to a component $S$ of $\partial X$. Then $\Hg(X)\le 1+\genus(S)$. Furthermore, $X$ contains a compact connected $3$-dimensional submanifold $Y$ with the following properties. \begin{enumerate} \item\label{for greater precision} Each component of the frontier of $Y$ in $X$ is an essential annulus in $X$, joining distinct components of $\partial X$. 
\item\label{without the elision} The $2$-manifold $Y\cap \partial X$ has two components $Z_0$ and $Z_1$, where $Z_0\subset S$ and $Z_1\not\subset S$, and each $Z_i$ is a planar surface with three boundary curves. \item\label{and i who was} The group $\pi_1(Y)$ is free of rank $2$. \item\label{his valet de sham} For any base point $\star \in\inter Z_0$, there exists an ordered pair of generators $(x,y)$ of $\pi_1(Y,\star )$, such that either the pair $(x,y x^{-1}y^2)$ or the pair $(x,y^{-1} x^{-1}y^2)$ is the image of some geometric basis of $\pi_1(Z_0,\star )$ under the inclusion homomorphism $\pi_1(Z_0,\star )\to\pi_1(Y,\star )$. \item\label{oh baby}Each component $Q$ of $\overline{X-Y}$ may be given the structure of a trivial $I$-bundle over a $2$-manifold, in such a way that $Y\cap Q$ is the vertical boundary of $Q$. \end{enumerate} \EndLemma \Proof We fix a defining hexagon $f:D\to X$, and set $\alpha=\alpha_f$ and $\Gamma=\Gamma_f$. Since $X$ is non-degenerate, we may choose $f$ in such a way that: \Claim\label{when an innocent heart} No component of $S-\Gamma$ is an open disk. \EndClaim Let $T$ be a PL tubular neighborhood of $\alpha$ in $X$, and let $\delta$ be a properly embedded disk in $T\cap\inter X$ which crosses $\alpha$ transversally in one point. We may choose $T$ in such a way that $f^{-1}(T)$ is a regular neighborhood of $f^{-1}(\alpha)=a_0\cup a_1\cup a_2$ in $D$. Let $D'$ denote the disk $\overline{D-f^{-1}(T)}$. Then the frontier of $D'$ is the union of three arcs $a_0'$, $a_1'$ and $a_2'$, where $a_i'$ is the frontier in $D$ of the component of $f^{-1}(T)$ containing $a_i$. According to Definition \ref{i'm now sir murgatroyd}, $f$ maps each $a_i$ homeomorphically onto $\alpha$. Hence we may choose $\delta$ so that for each $i$, the arc $f(a_i')$ meets $\partial\delta$ transversally in exactly one point. 
In particular: \Claim\label{your father takes bubble baths} The simple closed curve $f(\partial D')$ meets $\partial\delta$ in exactly three points, and these are transversal points of intersection within the frontier annulus of $T$. \EndClaim Let $\eta$ denote a regular neighborhood in $\overline{X-T}$ of the properly embedded disk $f(D')\subset\overline{X-T}$. Then $K\doteq T\cup\eta$ is a regular neighborhood of $f(D)$ in $X$. Let $X'$ denote the manifold obtained from $X$ by attaching a collar $\calc$ along the boundary component $S$. We identify $\calc$ with $\Sigma\times I$, where $\Sigma$ is a surface homeomorphic to $S$, in such a way that $S=\Sigma\times\{1\}$ and $S'\doteq\Sigma \times\{0\}\subset\partial X'$. We set $X''=\calc\cup K\subset X'$. Since $K$ is a regular neighborhood of $f(D)$ in $X$, the manifold $X'$ is a semi-regular neighborhood of its $3$-dimensional submanifold $X''$. It follows that the pair $(X'',S')$ is homeomorphic to $(X',S')$, and hence to $(X,S)$. It therefore suffices to show that the conclusions of the lemma are true when $X$ and $S$ are replaced by $X''$ and $S'$. We first note that the manifold $V \doteq(\Sigma\times I)\cup T$ is a compression body with $\partial_- V=S'$. If we set $g=\genus (S)$, the genus of $\partial_+ V$ is $g+1$. In particular $V$ admits a Heegaard splitting of genus $g+1$. We have $X''=V\cup\eta$, so that $X''$ is obtained from $V$ by adding a $2$-handle. It therefore follows from \cite[\easyHeegaard]{CDS} that $\Hg(X'')\le g+1$. This gives the first assertion of the lemma. We must now construct a compact connected $3$-dimensional submanifold $Y$ of $X''$ such that Properties (\ref{for greater precision})--(\ref{oh baby}) hold for $Y$ when $X$ and $S$ are replaced by $X''$ and $S'$. We take $Z$ to be a regular neighborhood of $K\cap S$ in $S$, and observe that $Z$ is a regular neighborhood of $f(D)\cap S=\Gamma$ in $S$. 
It therefore follows from Lemma \ref{the magic} that $Z$ is a planar surface with three boundary curves. We have $Z=R\times\{1\}$ for some $R\subset\Sigma$. We set $Y=(R\times I)\cup K\subset(\Sigma \times I)\cup K=X''$. By construction we have $\overline{X''-Y}=\overline{\Sigma-R}\times I$ and $\overline{X''-Y}\cap S'=\overline{\Sigma-R}\times \{0\}$. This implies Property (\ref{oh baby}) for $Y$. To show that $Y$ has Property (\ref{without the elision}), we first observe that $Y=(R\times I)\cup K=(R\times I)\cup T\cup\eta$, and that $T$ and $\eta$ are disjoint from $S'=\Sigma\times\{0\}$ (since their intersection with $\Sigma\times I$ is contained in $\Sigma\times\{1\}$). Hence $Y\cap\partial X''$ is the disjoint union of $Z_0\doteq R\times\{0\}$ and $Z_1\doteq ((R\times \{1\})\cup T\cup\eta)\cap\partial X''\not\subset S'$. The $2$-manifold $Z_0$ is homeomorphic to $Z$, and is therefore a planar surface with three boundary curves. To describe the $2$-manifold $Z_1$, we first consider the $2$-manifold $Z^*\doteq((R\times \{1\})\cup T)\cap\partial((\Sigma\times I)\cup T)$. We may obtain $Z^*$ from $R\times\{1\}$ by removing the interior of $T\cap(R\times\{1\})$, which is a union of two disjoint disks, and attaching the annulus $\overline{(\partial T)-(T\cap(R\times\{1\}))}$. Since $R\times\{1\}\cong R$ is connected and $\chibar(R\times\{1\})=1$, it follows that $Z^*$ is connected and that $\chibar(Z^*)=3$. The surface $Z^*$ contains the simple closed curves $f(\partial D')$ and $\partial\delta$, which by \ref{your father takes bubble baths} meet transversally in exactly three points. In particular, their mod $2$ homological intersection number in $Z^*$ is equal to $1$. Hence $f(\partial D')$ does not separate the connected surface $Z^*$. The $2$-manifold $Z_1$ is obtained from $Z^*$ by removing the interior of the annulus $Z^*\cap\eta$ and attaching the two components of $\overline{(\partial\eta)-(Z^*\cap\eta)}$, which are disks. 
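We record the Euler characteristic bookkeeping behind the equality $\chibar(Z^*)=3$ above (here $\chibar$ denotes $-\chi$, consistently with $\chibar(R\times\{1\})=1$): removing the interior of each of the two disjoint disks raises $\chibar$ by $1$, while attaching an annulus, which has $\chibar=0$, leaves it unchanged, so that
$$\chibar(Z^*)=\chibar(R\times\{1\})+2=1+2=3.$$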
Since $\eta$ is a regular neighborhood of $f(D')$ in $\overline{X-T}$, the annulus $Z^*\cap\eta$ is a regular neighborhood of the non-separating curve $f(\partial D')$ in $Z^*$. Hence $Z_1$ is connected, and so $Z_0$ and $Z_1$ are the components of $Y\cap\partial X''$. Furthermore, we have $\partial Z_1=\partial(R\times\{1\})$, and since $R\cong Z$ is a planar surface with three boundary curves, $Z_1$ has three boundary curves. Since in addition we have $\chibar(Z_1)=\chibar(Z^*)-2=1$, the surface $Z_1$ must be planar. This establishes Property (\ref{without the elision}) for $Y$. We now turn to the verification of Properties (\ref{and i who was}) and (\ref{his valet de sham}). By construction the pair $(Y,Z_0)$ is homotopy equivalent to $(Z\cup f(D),Z)$. Hence it suffices to show that if $\star\in\inter Z$ is a base point, then $\pi_1(Z\cup f(D),\star )$ is free of rank $2$ and has an ordered basis $(x,y)$, such that the image of some geometric basis of $\pi_1(Z,\star )$ under the inclusion homomorphism is either $(x,y x^{-1}y^2)$ or $(x,y^{-1} x^{-1}y^2)$. We fix a base point $\star\in\inter Z$ and an ordered basis $(t,u_1,u_2)$ of the rank-$3$ free group $\pi_1(Z\cup\alpha,\star)$ having the properties stated in the conclusion of Lemma \ref{the magic}. In particular, $\pi_1(Z\cup\alpha,\star)$ is free on the generators $t$, $u_1$ and $u_2$. Furthermore, $Z\cup f(D)$ is obtained from $Z\cup\alpha$ by attaching a $2$-cell, and the attaching map realizes either the conjugacy class of $t^2u_1tu_2$ or that of $t^2u_1t^{-1}u_2$ in $\pi_1(Z\cup\alpha,\star)$. Hence $\pi_1(Z\cup f(D),\star)$ is given by either the presentation \Equation\label{a boring young reverend} |t,u_1,u_2:t^2u_1tu_2=1|, \EndEquation in the theta case, or the presentation \Equation\label{they thought he would never end} |t,u_1,u_2:t^2u_1t^{-1}u_2=1|, \EndEquation in the eyeglass case. 
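In either case the relator can be solved for $u_2$ (a routine manipulation, recorded here for the reader's convenience):
$$u_2=t^{-1}u_1^{-1}t^{-2}\quad\text{in the theta case},\qquad u_2=t\,u_1^{-1}t^{-2}\quad\text{in the eyeglass case}.$$
In particular each presentation presents a free group of rank $2$, freely generated by the images of $t$ and $u_1$. 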
Let $\bar t$ and $\bar u_i$ denote the respective images in $\pi_1(Z\cup f(D),\star)$ of the generators $t$ and $u_i$ of the free group on $t$, $u_1$ and $u_2$. From the properties of the basis $(t,u_1,u_2)$ stated in the conclusion of Lemma \ref{the magic}, it follows that $\bar u_1$ and $\bar u_2$ are the images of the elements of a geometric basis of $\pi_1(Z,\star)$ under the inclusion homomorphism $\pi_1(Z,\star)\to\pi_1(Z\cup f(D),\star)$. On the other hand, it is clear from the presentation (\ref{a boring young reverend}) or (\ref{they thought he would never end}) that $\pi_1(Z\cup f(D),\star)$ is free on the generators $x\doteq \bar{u}_1$ and $y\doteq \bar{t}^{-1}$. Furthermore, in the theta case, we have $\bar u_1=x$ and $\bar u_2=y x^{-1}y^2$; while in the eyeglass case, we have $\bar u_1=x$ and $\bar u_2=y^{-1} x^{-1}y^2$. Thus Properties (\ref{and i who was}) and (\ref{his valet de sham}) of $Y$ are established. To prove that $Y$ has Property (\ref{for greater precision}), we first observe that by construction the frontier of $Y$ is $\partial R\times I$. Hence each component $A$ of the frontier has the form $c\times I$, where $c$ is a component of $\partial R$. Thus $A$ is an annulus having one boundary curve in the component $\Sigma \times\{0\}$ of $\partial X''$. The other boundary curve of $A$ is contained in $\Sigma\times\{1\}$ and is therefore disjoint from $S'=\Sigma\times\{0\}$. In particular, $A$ has its boundary curves in distinct components of $\partial X''$. To prove that $A$ is essential, it therefore suffices to prove that it is $\pi_1$-injective in $X''$. This in turn reduces to showing that $A$ is $\pi_1$-injective (\ref{ridin'}) in $Y$ and in $\overline{X''-Y}$. To prove $\pi_1$-injectivity of $A$ in $Y$, we observe that since the surface $R$ is homeomorphic to a regular neighborhood of $\Gamma$ in $S$, we have $\chibar(R)=1$. Hence the component $c$ of $\partial R$ is $\pi_1$-injective in $R$. 
This implies that $c\times\{0\}$ is $\pi_1$-injective in $Z_0=R\times\{0\}$. Since $Z_0$ is $\pi_1$-injective in $Y$ by Property (\ref{his valet de sham}), it follows that $c\times\{0\}$ is $\pi_1$-injective in $Y$, and hence that $A$ is $\pi_1$-injective in $Y$. To prove $\pi_1$-injectivity of $A$ in $\overline{X''-Y}$, we observe that by construction we have $\overline{X''-Y}=\overline{\Sigma -R}\times I \subset(\Sigma \times I)\cup K=X''$. Since $A=c\times I$, it suffices to prove that the boundary component $c$ of $R$ is $\pi_1$-injective in $\overline{\Sigma -R}$, or equivalently that no component of $\overline{\Sigma -R}$ is a disk. But this follows immediately from \ref{when an innocent heart}. Thus Property (\ref{for greater precision}) of $Y$ is established. \EndProof \Remark\label{so's your aunt tilly} It follows from Properties (\ref{and i who was}) and (\ref{his valet de sham}) of $Y$, as stated in the conclusion of Lemma \ref{as steward}, that the inclusion homomorphism $H_1(Z_0;\ZZ_2)\to H_1(Y;\ZZ_2)$ is an isomorphism. \EndRemark \Lemma\label{one last kiss} Suppose that $X$ is a non-degenerate trimonic manifold relative to $S$. Then $\partial X$ has exactly one component $T\ne S$, whose genus is equal to that of $S$, and $T$ is $\pi_1$-injective in $X$. \EndLemma \Proof Let us fix a compact, connected $3$-dimensional submanifold $Y$ of $X$ having Properties (\ref{for greater precision})--(\ref{oh baby}) stated in the conclusion of Lemma \ref{as steward}. Define $Z_0$ and $Z_1$ as in the statement of Property (\ref{without the elision}) of that lemma. According to Property (\ref{without the elision}), we have $Z_0\subset S$, and $Z_1$ is contained in a component $T\ne S$ of $\partial X$. If $Q$ is a component of $\overline{X-Y}$, then by Property (\ref{oh baby}) of $Y$, $Q$ is a trivial $I$-bundle. By Property (\ref{for greater precision}), each component of the frontier of $Q$ is an essential annulus with one boundary component in $Z_0$ and one in $Z_1$. 
Thus the components of $\partial_hQ$ may be labeled $F_0$ and $F_1$ in such a way that $F_0$ meets $Z_0 \subset S$ and $F_1$ meets $Z_1 \subset T$. Note that $F_0$ and $F_1$ are homeomorphic. Since this holds for each component of $\overline{X-Y}$, and since $Z_0$ and $Z_1$ are homeomorphic, it follows that $S$ and $T$ have the same Euler characteristic, and furthermore that they are the only components of $\partial X$. It remains to show that $T$ is $\pi_1$-injective in $X$. According to Proposition \ref{easy come}, it suffices to show that the inclusion homomorphism $H_1(S;\ZZ_2)\to H_1(X;\ZZ_2)$ is surjective. For this purpose we set $\calq=\overline{X-Y}$ and $\partial_0 \calq=\calq \cap S$, we let $F$ denote the frontier of $Y$ in $X$, and we consider the commutative diagram $$\xymatrix{ H_1(F\cap S) \ar@{->}[r] \ar@{->}^{\alpha_1}[d] & H_1(Z_0)\oplus H_1(\partial_0 \calq)\ar@{->}[r]\ar@{->}^{\beta_1\oplus\gamma_1}[d]& H_1(S) \ar@{->}[r] \ar@{->}^{\delta}[d] & H_0(F\cap S)\ar@{->}[r]\ar@{->}^{\alpha_0}[d]& H_0(Z_0)\oplus H_0(\partial_0 \calq) \ar@{->}^{\beta_0\oplus\gamma_0}[d] \\ H_1(F) \ar@{->}[r] & H_1(Y)\oplus H_1(\calq)\ar@{->}[r]& H_1(X) \ar@{->}[r]& H_0(F)\ar@{->}[r]& H_0(Y)\oplus H_0(\calq), } $$ where all homology groups are defined with $\ZZ_2$-coefficients, the rows are segments of Mayer-Vietoris exact sequences, and the homomorphisms $\alpha_i$, $\beta_i$, $\gamma_i$ and $\delta$ are induced by inclusion. Since $Z_0\subset S$ and $Z_1\subset T$, each component of $F$ is an annulus with exactly one boundary curve in $S$. Hence the maps $\alpha_0$ and $\alpha_1$ are isomorphisms. If $Q$ is any component of $\calq$, then Property (\ref{oh baby}) of $Y$, as stated in Lemma \ref{as steward}, implies that $Q$ may be given the structure of a trivial $I$-bundle over a $2$-manifold in such a way that $Q\cap \partial X$ is the horizontal boundary of $Q$. 
Since the vertical boundary of $Q$ consists of components of the frontier of $Y$, each of which is an annulus with one boundary curve in $Z_0$ and one in $Z_1$, exactly one component of the horizontal boundary of $Q$ lies in $S$. It follows that $\gamma_0$ and $\gamma_1$ are isomorphisms. The map $\beta_0$ is an isomorphism because $Z_0$ and $Y$ are both connected, while $\beta_1$ is an isomorphism by Remark \ref{so's your aunt tilly}. Since the $\alpha_i$, $\beta_i$ and $\gamma_i$ are isomorphisms, it follows from the Five Lemma that $\delta$ is an isomorphism. In particular it is surjective, as required. \EndProof \Lemma\label{it can't happen here} Suppose that $X$ is a non-degenerate trimonic manifold relative to a component $S$ of $\partial X$. Then there is no normal book of $I$-bundles $\calw$ with $|\calw|=X$. \EndLemma \Proof Let us fix a compact, connected $3$-dimensional submanifold $Y$ of $X$ having Properties (\ref{for greater precision})--(\ref{oh baby}) of the conclusion of Lemma \ref{as steward}. Define $Z_0$ as in the statement of Property (\ref{without the elision}) there. Suppose that $X=|\calw|$ for some normal book of $I$-bundles $\calw$. Then Properties (\ref{for greater precision}) and (\ref{without the elision}) of $Y$, as stated in \ref{as steward}, give Hypotheses (\ref{i'm now employed}) and (\ref{the devil may take him}) of Lemma \ref{as a new-born lamb}. According to Remark \ref{so's your aunt tilly}, the inclusion homomorphism $H_1(Z_0;\ZZ_2)\to H_1(Y;\ZZ_2)$ is an isomorphism. This is Hypothesis (\ref{i'll never forsake him}) of Lemma \ref{as a new-born lamb}. Let us fix a base point $\star\in Z_0$. 
According to Properties (\ref{without the elision}), (\ref{and i who was}) and (\ref{his valet de sham}) of $Y$ stated in the conclusion of Lemma \ref{as steward}, $Z_0$ is a planar surface with three boundary curves, the group $\pi_1(Y)$ is free of rank $2$, and there exists a pair of generators $(x,y)$ of $\pi_1(Y,\star )$, such that either the pair $(x,y x^{-1}y^2)$ or the pair $(x,y^{-1} x^{-1}y^2)$ is the image of some geometric basis of $\pi_1(Z_0,\star )$ under the inclusion homomorphism $\pi_1(Z_0,\star )\to\pi_1(Y,\star )$. Since neither of the pairs $(x,y x^{-1}y^2)$ nor $(x,y^{-1} x^{-1}y^2)$ generates the free group on $x$ and $y$, the inclusion homomorphism $\pi_1(Z_0,\star )\to\pi_1(Y,\star )$ is not surjective. Hence Lemma \ref{as a new-born lamb} implies that there is a solid torus $L\subset Y$ such that $A\doteq L\cap Z_0$ is an annulus which is homotopically non-trivial in $|\calw|$, and the inclusion homomorphism $H_1(A;\ZZ)\to H_1(L;\ZZ)$ is not surjective. Let $c$ denote a core curve of the annulus $A$. Since $c$ is in particular homotopically non-trivial in $Z_0$, which is a planar surface with three boundary curves, $c$ is parallel to one of the boundary curves of $Z_0$. In view of the definition of a geometric basis, it follows that the conjugacy class in $\pi_1(Y,\star)$ defined by a suitably chosen orientation of $c$ is represented by one of the elements $x$, $y x^{-1}y^2$, $x^{-1}y x^{-1}y^2$, $y^{-1} x^{-1}y^2$ or $x^{-1}y^{-1} x^{-1}y^2$. On the other hand, since the inclusion homomorphism $H_1(A;\ZZ)\to H_1(L;\ZZ)$ is not surjective, a representative of a conjugacy class in $\pi_1(Y,\star)$ defined by $c$ must be an $n$-th power in $\pi_1(Y,\star)$ for some $n\ne\pm1$. But none of the elements $x$, $y x^{-1}y^2$, $x^{-1}y x^{-1}y^2$, $y^{-1} x^{-1}y^2$ or $x^{-1}y^{-1} x^{-1}y^2$ is a proper power in the free group on $x$ and $y$. This contradiction completes the proof. 
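The assertion that none of these five elements is a proper power can also be verified mechanically: an element of a free group is a proper power if and only if its cyclic reduction is a strict string power of a shorter word. The following script (an illustrative aside in an encoding of our own choosing, not part of the formal argument) confirms this for the five representatives above.

```python
# Sanity check: none of the five conjugacy-class representatives listed
# in the proof is a proper power in the free group on x and y.  A word
# is encoded as a string in the letters x, y (generators) and X, Y
# (their inverses).

def reduce_word(w):
    """Freely reduce a word by cancelling adjacent inverse pairs."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def cyclically_reduce(w):
    """Cyclically reduce a freely reduced word."""
    w = reduce_word(w)
    while len(w) >= 2 and w[0] == w[-1].swapcase():
        w = w[1:-1]
    return w

def is_proper_power(w):
    """An element of a free group is a proper power exactly when its
    cyclic reduction is a strict string power of a shorter word."""
    w = cyclically_reduce(w)
    n = len(w)
    return any(n % d == 0 and w == w[:d] * (n // d) for d in range(1, n))

# The elements x, yx^{-1}y^2, x^{-1}yx^{-1}y^2, y^{-1}x^{-1}y^2 and
# x^{-1}y^{-1}x^{-1}y^2 from the proof:
words = ["x", "yXyy", "XyXyy", "YXyy", "XYXyy"]
assert not any(is_proper_power(w) for w in words)
```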
\EndProof \section{With a $(1,1,1)$ hexagon.} \label{sec:111} In this section and the next we will be working with hyperbolic $3$-manifolds with totally geodesic boundary. As is customary in low-dimensional topology, we shall implicitly carry over the PL results proved in Sections \ref{sec:trimonic} and \ref{sec:boibs} to the smooth category. For example, to say that a smooth manifold $X$ is a trimonic manifold relative to a component $S$ of $\partial X$ means that the pair $(X,S)$ is topologically homeomorphic to a PL pair $(X',S')$ such that $X'$ is a trimonic manifold relative to $S'$. Likewise, to say that a smooth manifold $X$ has the form $|\calw|$ for some normal book of $I$-bundles $\calw$ means that $X$ is topologically homeomorphic to a PL manifold which has the form $|\calw|$ for some normal book of $I$-bundles $\calw$. Here we will describe some topological consequences of the presence of a $(1,1,1)$ hexagon in $\widetilde{N}$, when the shortest return path of $N$ is not too long. The main result of the section, Proposition \ref{111}, asserts the existence of a trimonic manifold in $N$ under these circumstances. Below we prove a series of lemmas concerning the geometry of $(1,1,1)$ hexagons, from which the proposition follows quickly. The first follows from \cite[Lemma 3.2]{KM}, but we include a proof to keep the exposition self-contained. \begin{lemma} \label{embeddedl1} Let $N$ be a compact hyperbolic $3$-manifold with totally geodesic boundary, and $\lambda, \lambda' \subset \widetilde{N}$ short cuts with length $\ell_1$. Then $\lambda =\lambda'$ or $\lambda \cap \lambda' = \emptyset$. \end{lemma} \begin{proof} Suppose that $\lambda \neq \lambda'$ and that $\lambda$ intersects $\lambda'$. Since distinct geodesic arcs meet in at most one point, the intersection is a single point $y \in \lambda$. Let $\Pi$ be the component of $\partial \widetilde{N}$ containing the endpoint $x$ of $\lambda$ closest to $y$, and let $\Pi'$ be the component containing the endpoint $x'$ of $\lambda'$ closest to $y$. 
The subarcs $[x,y]$ and $[x',y]$ of $\lambda$ and $\lambda'$, respectively, each have length at most $\ell_1/2$. They meet at $y$ at an angle strictly less than $\pi$, since $\lambda \neq \lambda'$ and each is geodesic. But then $\Pi$ and $\Pi'$ are components of $\partial \widetilde{N}$ at distance less than $\ell_1$, so they are equal. But since $\lambda$ and $\lambda'$ are geodesic arcs, each perpendicular to $\Pi$, in $\widetilde{N}$ they are disjoint or equal, a contradiction. \end{proof} \Remark \label{reallyembeddedl1} Suppose $N$ is a compact hyperbolic $3$-manifold with totally geodesic boundary and $g : \widetilde{N} \rightarrow \widetilde{N}$ is a nontrivial covering transformation. If $\lambda$ is a short cut with length $\ell_1$, then according to Lemma \ref{embeddedl1}, either $g(\lambda) = \lambda$ or $g(\lambda) \cap \lambda = \emptyset$. But in the first case, $g$ would fix a point in $\lambda$, a contradiction. It follows that every short cut of length $\ell_1$ is embedded in $N$ by the universal covering map. \EndRemark \begin{lemma} \label{uniquel1} Let $N$ be a compact hyperbolic $3$-manifold with $\partial N$ connected, totally geodesic, and of genus $2$, and suppose $\ell_1$ satisfies the bound below. $$\cosh \ell_1 < \frac{\cos(2\pi/9)}{2\cos(2\pi/9)-1} = 1.4396\ldots$$ Then $\ell_2 > \ell_1$; i.e., the shortest return path in $N$ is unique. \end{lemma} \begin{proof} Suppose $\ell_2 = \ell_1$. Applying the right-angled hexagon rule as in the proof of Lemma \ref{d11vsl1}, we find that each of $d_{11}$, $d_{12}$, and $d_{22}$ is at least $2R$, for $R$ the function of $\ell_1$ defined there. Then a disk of radius $R$ is embedded around each of the feet of $\lambda_1$ and $\lambda_2$, so that none of these disks overlap. Bor\"oczky's bound on the radius of four disks of equal area embedded without overlapping on a surface of genus 2 is the quantity $R''$ defined in Lemma \ref{KLM}. Setting $R = R''$ and solving for $\ell_1$ yields the quantity of the bound above. 
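As a numerical aside (not part of the proof), one can check the decimal value quoted in the statement of the lemma, and note that it comfortably exceeds the threshold $\cosh \ell_1 \leq 1.215$ assumed in the later lemmas of this section, so that this lemma applies in that setting.

```python
import math

# The constant from the statement of the lemma.
c = math.cos(2 * math.pi / 9)
bound = c / (2 * c - 1)

# Agrees with the truncated expansion 1.4396... quoted above.
assert abs(bound - 1.4396) < 1e-3

# The hypothesis cosh(l_1) <= 1.215 used later in the section lies
# well below this bound.
assert 1.215 < bound
```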
\end{proof} \begin{lemma} \label{noboundarycross} Let $N$ be a compact hyperbolic $3$-manifold with totally geodesic boundary. Suppose $C$ and $C'$ are distinct $(1,1,1)$ hexagons in $\widetilde{N}$ with external edges on the same component of $\partial \widetilde{N}$. Then $C \cap C'$ is either empty or a single internal edge. \end{lemma} \begin{proof} Let $e$ and $e'$ be external edges of $C$ and $C'$ on the same component of $\partial \widetilde N$. The endpoints of $e$ and $e'$ are feet of lifts of the shortest return path; any such pair has distance at least $d_{11}$. Since $C$ and $C'$ are distinct, so are $e$ and $e'$; if they share an endpoint then $C \cap C'$ consists of a single internal edge. Otherwise, $e$ is a geodesic arc of length $d_{11}$ connecting its endpoints $a$ and $b$ and $e'$ an arc of the same length connecting its endpoints $a'$ and $b'$, with $d(a,a')$, $d(a,b')$, $d(b,a')$, and $d(b,b')$ all at least $d_{11}$. Some hyperbolic trigonometry shows that any point at distance at least $d_{11}$ from $a$ and $b$ satisfies $\cosh \ell \geq \cosh d_{11}/\cosh (d_{11}/2),$ where $\ell$ is its distance from $e$. Twice this distance is larger than $d_{11}$, since $\cosh d_{11} > (\cosh d_{11}+1)/2 = \cosh^2(d_{11}/2)$; thus $e'$ does not cross $e$, and so $C \cap C' = \emptyset$. \end{proof} \Remark \label{embeddedboundary} Suppose $N$ is a compact hyperbolic $3$-manifold with totally geodesic boundary and $g: \widetilde{N} \rightarrow \widetilde{N}$ is a nontrivial covering transformation. If $C$ is a $(1,1,1)$ hexagon, and an external edge of $g(C)$ intersects an external edge of $C$, then by Lemma \ref{noboundarycross}, either $g(C) = C$ or $g(C) \cap C$ is a single internal edge. In the former case, $g$ fixes a point in $C$, a contradiction. It follows that the union of the interiors of external edges of $C$ projects homeomorphically to $N$. \EndRemark \begin{lemma}\label{l1avoids111} Let $N$ be a compact hyperbolic $3$-manifold with totally geodesic boundary, and suppose $\cosh \ell_1 \leq 1.215$. 
If $C$ is a $(1,1,1)$ hexagon in $\widetilde{N}$ and $\lambda$ is a short cut of length $\ell_1$, then $\lambda$ is an internal edge of $C$ or $\lambda \cap C = \emptyset$. \end{lemma} \begin{proof} Suppose $\lambda$ and $C$ are as in the lemma, and $\lambda \cap C$ is nonempty. Let $\Pi_1$ and $\Pi_2$ be the components of $\partial \widetilde{N}$ containing the endpoints of $\lambda$, and let $\Pi$ be the geodesic plane containing $C$. Suppose first that $\lambda \subset \Pi$. If $\lambda$ is not contained in $C$, let $x$ be an endpoint of $\lambda \cap C$ contained in the interior of $\lambda$. Then $x$ is a point of intersection between $\lambda$ and an internal edge of $C$, since the external edges of $C$ are contained in $\partial \widetilde{N}$ and $\lambda$ is properly embedded. But each internal edge of $C$ is a short cut with length $\ell_1$, so this contradicts Lemma \ref{embeddedl1}. It follows that if $\lambda \subset \Pi$, then $\lambda \subset C$. But $C$ intersects only three components of $\partial \widetilde{N}$ --- the components joined by its internal edges. Hence, since $C$ intersects $\Pi_1$ and $\Pi_2$, $\lambda$ is an internal edge of $C$. Now suppose $\lambda$ is not contained in $\Pi$. Then $\lambda$ intersects $C$ transversely in a single point $x$. There is a component $\Pi'$ of $\partial \widetilde{N}$ such that $\Pi' \cap \Pi$ contains an external edge of $C$ and $x$ is at distance at most $A$ from $\Pi'$. Then the distance from $\Pi'$ to each of $\Pi_1$ and $\Pi_2$ is less than $A + \ell_1$. By Lemma \ref{l2twicel1}, $\ell_2$ is at least twice each of $A$ and $\ell_1$, so that $\ell_2 \geq A + \ell_1$; hence the distance from $\Pi'$ to each of $\Pi_1$ and $\Pi_2$ is less than $\ell_2$, and is therefore equal to $\ell_1$. By Lemma \ref{geodhex}, there is a $(1,1,1)$ hexagon $C'$ with $\lambda$ as an internal edge and external edges in $\Pi'$, $\Pi_1$, and $\Pi_2$. But then $C' \cap C$ contains $\lambda \cap C$, contradicting Lemma \ref{noboundarycross}. 
\end{proof} \begin{lemma} Let $N$ be a compact hyperbolic $3$-manifold with totally geodesic boundary, and suppose $\cosh \ell_1 \leq 1.215$. If $C$ and $C'$ are distinct $(1,1,1)$ hexagons in $\widetilde{N}$, then $C \cap C'$ is empty or a single internal edge of each. \label{Cembedded} \end{lemma} \begin{proof} Suppose $C$ and $C'$ are distinct $(1,1,1)$ hexagons in $\widetilde{N}$, and $C \cap C' \neq \emptyset$. By Lemma \ref{noboundarycross}, the lemma holds if there is a component of $\partial \widetilde{N}$ containing external edges of both $C$ and $C'$; thus we may assume that this is not the case. It follows that no external edge of $C$ contains a point of $C \cap C'$ and vice versa, since by Lemma \ref{rtanghex}, $C \cap \partial \widetilde{N}$ is precisely the union of its external edges. Let $\Pi$ be the geodesic plane containing $C$. If $C' \subset \Pi$, then $C \cap C'$ is a two-dimensional subpolyhedron of $C$, and each vertex of $C \cap C'$ is an intersection point between internal edges, by the above. But these are all short cuts of length $\ell_1$, contradicting Lemma \ref{embeddedl1}. If $C'$ is not contained in $\Pi$, it intersects $C$ transversely in a geodesic arc, whose endpoints are points of intersection of internal edges of one with the other. But such intersections violate Lemma \ref{l1avoids111}, since the internal edges of each are short cuts of length $\ell_1$. \end{proof} \begin{proposition}\label{111} Let $N$ be a compact, orientable hyperbolic $3$-manifold with $\partial N$ connected, totally geodesic, and of genus $2$. Suppose that $\cosh \ell_1 \leq 1.215$, and that there is a $(1,1,1)$ hexagon in $\widetilde{N}$. Then there is a submanifold $X \subset N$ with $\partial N\subset X$, such that $X$ is a trimonic manifold relative to $\partial N$. \end{proposition} \begin{proof} Let $N$ be as in the hypotheses, and fix a $(1,1,1)$ hexagon $C \subset \widetilde{N}$. 
Let $f : C \rightarrow N$ be the restriction of the universal covering, and let $X$ be a regular neighborhood of $\partial N \cup f(C)$. Since $\cosh \ell_1 \leq 1.215$, Lemma \ref{uniquel1} implies that $N$ has a unique shortest return path $\alpha$, hence each internal edge of $C$ projects to $\alpha$ by $f$. The preimage of $\alpha$ is a union of short cuts with length $\ell_1$, hence Lemma \ref{l1avoids111} implies Property (1) of Definition \ref{i'm now sir murgatroyd}. Remark \ref{reallyembeddedl1} now immediately implies Property (2) of the definition. Property (3) follows from Remark \ref{embeddedboundary} and Lemma \ref{Cembedded}. Properties (4) and (5) follow from Lemma \ref{rtanghex}, and (6) holds by construction. \end{proof} \begin{proposition} \label{Cboundary} Let $N$ be a compact, orientable hyperbolic $3$-manifold with $\partial N$ connected, totally geodesic, and of genus $2$, such that $\cosh \ell_1 \leq 1.215$ and there is a $(1,1,1)$ hexagon in $\widetilde{N}$. The trimonic submanifold $X \subset N$ supplied by Proposition \ref{111} is non-degenerate. \end{proposition} \begin{proof} Let $N$ satisfy the hypotheses, and as in the proof of Proposition \ref{111} let $C \subset \widetilde{N}$ be a $(1,1,1)$ hexagon, $f : C \rightarrow N$ the restriction of the universal cover, and $\alpha \subset N$ the shortest return path. Below we will borrow wholesale the constructions and notation from the proof of Lemma \ref{the magic}, with $C$ here in the role of $D$ there and $\partial N$ in the role of $S$. Recall that the internal edges $a_i$ and external edges $b_i$ of $C$, $i \in \{0,1,2\}$, are enumerated so that $a_i$ shares a vertex with $b_i$ and $b_{i-1}$ for each $i$, and $\partial C$ is oriented so that $\phi_0 \doteq f|a_0$ induces the same orientation on $\alpha$ as $\phi_1 \doteq f|a_1$. Then $\alpha$ is given the orientation induced by $\phi_0$ and $\phi_1$, with initial and terminal vertices $V$ and $W$, respectively. 
For each $i \in \{0,1,2\}$, the edge $\beta_i = f(b_i)$ of $\Gamma = \Gamma_f$ is given the induced orientation from $b_i$. Cases (I) and (II) in the proof of Lemma \ref{the magic} are distinguished according to whether $\phi_2 \doteq f|a_2$ is orientation preserving or reversing, respectively. In Case (I), $\Gamma = \overline{\beta_0 \cup \beta_1 \cup \beta_2}$ is a theta graph, and in Case (II) it is an eyeglass. (Recall that $\beta_i = f(\mathrm{int}\ b_i)$, $i \in \{0,1,2\}$.) For each $i \in \{0,1,2\}$, let $\Pi_i$ be the component of $\partial \widetilde{N}$ containing $b_i$. Suppose that $U$ is a component of $\partial N - \Gamma$ which is homeomorphic to an open disk, and let $U_0 \subset \Pi_0$ be a component of the preimage of $U$ under the universal covering map. Then $U_0$ is projected homeomorphically to $U$, and $\overline{U}_0$ is a compact polygon in $\Pi_0$ with edges projecting to the $\beta_i$, hence covering translates of the $b_i$. Since the $b_i$ are geodesic arcs, $\overline{U}_0$ has at least three edges. Since each vertex $v$ of $\overline{U}_0$ projects to $V$ or $W$, and $U_0$ projects homeomorphically, $\overline{U}_0$ has at most 6 vertices. Hence $\overline{U}_0$ is an $n$--gon for $n$ between $3$ and $6$. Suppose edges $b$ and $b'$ incident to a single vertex $v$ of $\overline{U}_0$ were identified in $\Gamma$. The covering transformation $g$ taking $b$ to $b'$ is orientation preserving on $\widetilde{N}$, so it preserves the boundary orientation on $\Pi_0$. Give $U_0$ this orientation, and orient $b$ and $b'$ as arcs in the boundary of $U_0$. Since $g(U_0)$ does not intersect $U_0$, $g(\overline{U}_0) \cap \overline{U}_0 = b' = g(b)$. Since $g(b)$ has the boundary orientation from $g(U_0)$, its orientation is opposite that of $b'$. But then since $v$ is the initial vertex of (say) $b$ and the terminal vertex of $b'$, $g(v) = v$, a contradiction. 
Hence: \Claim \label{Unidentified edges} If $v$ is a vertex of $\overline{U}_0$, the edges incident to $v$ project to distinct edges of $\Gamma$. \EndClaim Now suppose $\overline{U}_0$ is a triangle. Then two of its vertices are identified in $N$, since $\Gamma$ has only two vertices. The edge joining these vertices projects to an edge joining $V$ to $V$ or $W$ to $W$, so in this case $\Gamma$ is an eyeglass graph. On the other hand, the final vertex of $\overline{U}_0$ is not identified with the other two by \ref{Unidentified edges}, since an eyeglass graph has only one edge joining each vertex to itself. But then the two edges emanating from the final vertex yield distinct edges of $\Gamma$, joining $V$ to $W$, which does not occur in an eyeglass graph. This is a contradiction. When $\overline{U}_0$ is a pentagon, three of its vertices are identified to $V$ (say) in $\Gamma$. Thus two of these are adjacent in $\overline{U}_0$. The edge joining the adjacent vertices of $\overline{U}_0$ identified to $V$ joins it to itself in $\Gamma$, hence $\Gamma$ is an eyeglass graph. The third vertex identified to $V$ is not adjacent to either of the others, by \ref{Unidentified edges}, since $\Gamma$ has only one edge joining $V$ to itself. Then the edges adjacent to this vertex project to distinct $\beta_i$ joining $V$ to $W$, a contradiction. To rule out the possibility that $\overline{U}_0$ is a quadrilateral or hexagon requires counting angles. Recall that $S = \partial N$ is oriented so that $\calt_V$ is a positive oriented triod (see \ref{troglodyte}). Here $\calt_V = (\sigma_0,\sigma_1,\sigma_2)$ in Case (I) and $\calt_V = (\sigma_0,\sigma_1,\rho_2)$ in Case (II), where for each $i \in \{0,1,2\}$, $\rho_i$ is the closure of an initial segment of $\beta_i$ and $\sigma_i$ is the closure of a terminal segment of $\beta_{i-1}$. Define $\theta_1$ to be the angle measure from $\sigma_0$ to $\sigma_1$ at $V$, in the direction prescribed by the orientation on $\partial N$. 
In Case (I), we take $\theta_2$ to be the angle from $\sigma_0$ to $\sigma_2$ in the orientation direction, and in Case (II) we let $\theta_2$ be the angle from $\sigma_0$ to $\rho_2$. Then $0 < \theta_1 < \theta_2 < 2\pi$. Recall that $C$ is a totally geodesic hexagon in $\widetilde{N}$ by Lemma \ref{geodhex}, and the covering projection $f$ immerses $C$ in $N$ isometrically. Then appealing to Figure \ref{hexnT}, we note that $\theta_1$ is the angle from $\rho_1$ to $\rho_0$ at $W$, measured in the orientation direction. This is because the homeomorphism $\psi:\delta_V \rightarrow \delta_W$ defined in the proof of Lemma \ref{the magic} is orientation reversing. Similarly, in Case (I) $\theta_2$ is the angle from $\rho_2$ to $\rho_0$ at $W$ in the orientation direction, and in Case (II) it is the angle from $\sigma_2$ to $\rho_0$. If $\overline{U}_0$ is a quadrilateral, then two of its edges are sent by $f$ to the same edge of $\Gamma$. These are opposite, by \ref{Unidentified edges}; hence the image in $\Gamma$ of each of the remaining edges joins a vertex to itself. Then $\Gamma$ is an eyeglass graph, the pair of opposite edges identified by $f$ project to $\beta_0$, and the other two edges project to $\beta_1$ and $\beta_2$. Abusing notation slightly, we label the edges projecting to $\beta_0$ by $b_0$ and $b_0'$, and similarly label the edge projecting to $\beta_i$ by $b_i$, $i = 1,2$. (These are covering translates of the corresponding edges of $C$.) We give each edge of $\overline{U}_0$ an orientation matching that of its correspondent in $\Gamma$. Orient $U_0$ so that $f|U_0$ preserves orientation. We may assume, by switching the labels of $b_0$ and $b_0'$ and/or replacing $C$ with a covering translate if necessary, that the prescribed orientation on $b_0 \subset C$ matches the boundary orientation which it inherits from $\overline{U}_0$. 
The terminal endpoint of $b_0$ is sent to $V$, and a terminal segment is sent to the interior of $\sigma_1$ by definition. Since the orientation on $\partial N$ is chosen so that $(\sigma_0,\sigma_1,\rho_2)$ is a positively oriented triod, the orientation $\sigma_1$ inherits from $\beta_0$ matches the boundary orientation from the component of $\delta_V - (\sigma_0 \cup \sigma_1 \cup \sigma_2)$ bounded by it and $\rho_2$. It follows that the terminal endpoint of $b_0$ is the initial endpoint of $b_2$, and the dihedral angle of $\overline{U}_0$ at this vertex is $\theta_2 - \theta_1$. Since the orientation on $b_0'$ is opposite that induced by $\overline{U}_0$, the dihedral angle of $\overline{U}_0$ at the common terminal endpoint of $b_0'$ and $b_2$ is $\theta_1$, the dihedral angle in $\delta_V$ between $\sigma_0$ and $\sigma_1$. Arguing as above, we find that the dihedral angle of $\overline{U}_0$ at the initial endpoint of $b_0$, which is the terminal endpoint of $b_1$, is $2\pi - \theta_2$. (Recall that the homeomorphism $\psi: \delta_V \rightarrow \delta_W$ visible in Figure \ref{hexnT} as projection upward is orientation reversing.) The dihedral angle at the final vertex is then $\theta_1$. Thus the sum of the dihedral angles is $2\pi + \theta_1$, contradicting the well known fact that the dihedral angle sum of a hyperbolic quadrilateral is less than $2\pi$. Now suppose $\overline{U}_0$ is a hexagon. Then the projection of $\overline{U}_0$ contains each of $\delta_V$ and $\delta_W$, since $\overline{U}_0$ has $6$ vertices and each of $V$ and $W$ has valence $3$. Thus the sum of the dihedral angles around vertices of $\overline{U}_0$ is $4\pi$. But a hyperbolic hexagon has dihedral angle sum less than $4\pi$, a contradiction. It follows that no component of $\partial N - \Gamma$ is homeomorphic to an open disk. 
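We remark that both angle-sum facts used above follow from the Gauss--Bonnet theorem: a compact hyperbolic polygon $P$ with $n$ vertices and interior angles $\theta_1, \ldots, \theta_n$ satisfies $\theta_1 + \cdots + \theta_n = (n-2)\pi - \mathrm{Area}(P) < (n-2)\pi$, giving the bound $2\pi$ when $n = 4$ and $4\pi$ when $n = 6$. 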
Since $f : C \rightarrow N$ is a defining hexagon (in the sense of Definition \ref{non-degenerate}) for the submanifold $X$ defined in Proposition \ref{111}, and since $\Gamma=\Gamma_f$, the trimonic manifold $X$ is non-degenerate. \EndProof \section{Putting it all together} \label{sec:closed} In this section we prove the theorems stated in the introduction. Here we make much use of terminology and results from \cite{CDS}. Of particular importance is the term ``$(g,h)$-small''; see Definition 1.2 there. \begin{lemma} \label{I-bundle genus two boundary} Let $\calw$ be a normal book of $I$--bundles, set $W = |\calw|$, and suppose that $\partial W$ is connected and has genus $2$. Then $\mathrm{Hg}(W) = 3$. \end{lemma} \begin{proof} Since $\chibar(W) = \frac{1}{2}\chibar(\partial W) = 1$, there is a unique page $P$ of $\calw$ which has negative Euler characteristic; and furthermore, $\chibar(P) = 1$. Since $P \cap \partial W = \partial_h P$ is a $\pi_1$-injective subsurface of $\partial W$ with Euler characteristic $-2$, its complement in $\partial W$ is a disjoint union of annuli. Thus if $B$ is a component of $\overline{W-P}$, then $\partial B$ is a union of annuli in $\overline{\partial W - P}$ and vertical annuli in the frontier of $P$, and is therefore a torus. Since the frontier of $B$ in $W$ consists of essential annuli, $B$ is $\pi_1$-injective in $W$, and since $W$ is simple, $B$ is $(2,2)$-small. It now follows from \cite[Proposition 2.3]{CDS} that each component of $\overline{W-P}$ is a solid torus. Let $T$ be the base surface of the $I$-bundle $P$, with bundle projection $p : P \rightarrow T$, and let $C$ be a closed disk contained in the interior of $T$. Then $\calh = p^{-1}(C)$ is a 1-handle in $P$ joining $\partial_h P \subset \partial W$ to itself. Let $\delta_0$ be an arc embedded in $\overline{T-C}$, so that $\partial \delta_0 = \delta_0 \cap \partial (\overline{T-C}) \subset \partial C$ and no arc of $\partial C$ bounds a disk embedded in $\overline{T-C}$ together with $\delta_0$. 
The existence of such an arc can be established using the fact that $\chi(\overline{T-C}) = -2$ and standard Morse theory arguments. If $D_0$ is a regular neighborhood of $\delta_0$ in $\overline{T-C}$, then $\overline{T-C-D_0}$ is a possibly disconnected surface with Euler characteristic $-1$ and no component which is a disk. Let $T_0$ be the component with $\chi(T_0) = -1$, and let $\beta$ be a component of $\partial T_0$ containing an arc of the frontier in $\overline{T-C}$ of $D_0$. There is an arc $\delta_1$ embedded in $T_0$, so that $\partial \delta_1 = \delta_1 \cap \partial T_0 \subset \beta$ and no arc of $\beta$ bounds a disk in $T_0$ together with $\delta_1$. This follows as above, and we may further assume, after sliding $\partial \delta_1$ along $\beta$ if necessary, that $\partial \delta_1$ does not intersect the frontier of $D_0$. Let $D_1$ be a regular neighborhood in $T_0$ of $\delta_1$ which does not intersect $D_0$, and let $D = D_0 \sqcup D_1 \subset \overline{T-C}$. By construction, $\overline{T-C-D}$ is a disjoint union of surfaces with Euler characteristic $0$; that is, annuli and M\"obius bands. Since $T$ is connected, each component of $\overline{T-C-D}$ has at least one component of its boundary containing arcs of the frontier of $D$. Thus if $\alpha$ is a component of $\partial T$, the component $T'$ of $\overline{T-C-D}$ containing $\alpha$ is an annulus, and $\alpha$ is the unique component of $\partial T'$ contained in $\partial T$. For $i = 0$ or $1$, let $\cald_i = p^{-1}(D_i) \subset P$, and let $\cald = \cald_0 \sqcup \cald_1$. Each of $\cald_0$ and $\cald_1$ is an $I$-bundle over a disk, hence a ball, and each component of $\overline{P-\calh - \cald}$ is an $I$-bundle over an annulus or M\"obius band, hence is a solid torus. Let $B$ be a component of $\overline{W - P}$. 
Then the component of $\overline{W -\calh - \cald}$ containing $B$ is the union of $B$ with a collection of solid torus components of $\overline{P-\calh - \cald}$. If $B_1$ is such a component, by the above $B_1$ is an $I$-bundle over an annulus component of $\overline{T-C-D}$ with a unique boundary component $\alpha \subset \partial T$. Let $A = p^{-1}(\alpha) = B \cap B_1$, a vertical annulus in the frontier of $P$. Since $A$ is a degree one annulus in $\partial B_1$, it follows that $B \cup B_1$ is a solid torus. It follows that each component of $\overline{W - \calh - \cald}$ is a solid torus, so $\overline{W - \calh}$ is obtained from a collection of solid tori by adding $\cald_0$ and $\cald_1$. Let $\caln_0$ be a regular neighborhood of $\partial W$ such that $\caln_0 \cap P$ is a regular neighborhood of $\partial_h P$ in $P$ with horizontal frontier. Then $\caln_0 \cap \calh$ is a disjoint union of two solid cylinders. Let $V_0 = \caln_0 \cup \calh$. Then $V_0$ is a compression body in $W$ with frontier a surface $S$ of genus 3. Our description above shows that $V_1 = \overline{W-V_0}$ is the union of a collection of solid tori with $\cald_0 \cap V_1$ and $\cald_1 \cap V_1$. Each of these has the structure of a 1-handle, since it is a ball and its intersection with its complement in $V_1$ consists of two disks. Hence $V_1$ is a handlebody and $S$ is a Heegaard surface for $W$. \end{proof} \begin{theorem} \label{hg4orvol7.32} \hgfourorvolseventhreetwo \end{theorem} \begin{proof} Let $N$ satisfy the hypotheses of the theorem, and let $X \subset N$ be the codimension-0 submanifold supplied by Proposition \ref{111}, which is a trimonic manifold with respect to $\partial N$, nondegenerate by Proposition \ref{Cboundary}. Let $T$ be the frontier of $X$ in $N$. By Lemma \ref{one last kiss}, $T$ is a surface of genus $2$ which is $\pi_1$-injective in $X$. Let $V = \overline{N - X}$. 
Then $V$ is a compact, connected, irreducible 3-dimensional submanifold of $N$ which is $\pi_1$-injective, with $\partial V = T$. Therefore $\chibar(V) = \frac{1}{2}\chibar(\partial V) = 1$. Note that $N$ is $(2,2)$-small, since it admits a hyperbolic structure with geodesic boundary. Thus in the case where $V$ is boundary-reducible, it is a handlebody by \cite[Proposition 2.3]{CDS}. By Lemma \ref{as steward}, $X$ has Heegaard genus equal to $3$. Then in this case, a genus 3 Heegaard surface for $X$ is a genus 3 Heegaard surface for $N$ (cf. \cite[Lemma 2.1]{CDS}), hence $\Hg(N) = 3$. Now consider the case in which $V = |\calw|$ for some normal book of $I$--bundles $\calw$. By Lemma \ref{I-bundle genus two boundary}, we have $\Hg(V) = 3$. Amalgamating the Heegaard splittings of $V$ and $X$, each of genus 3, across $T$ yields a Heegaard splitting of $N$ of genus $3 + 3 - 2 = 4$ (cf. \cite{Schul1}, Remark 2.7 and the definition above it). There remains the case in which $V$ is boundary-irreducible but is not homeomorphic to $|\calw|$ for any book of $I$-bundles $\calw$. Since $V$ is boundary-irreducible and $T$ is $\pi_1$-injective in $X$, the surface $T$ is incompressible in $N$. Hence $V$ and $X$ are simple. By Lemma \ref{it can't happen here}, $X$ is also not homeomorphic to $|\calw|$ for any book of $I$-bundles $\calw$. Hence by Proposition \ref{all god's chillun got guts}, if $\Sigma_V$ and $\Sigma_X$ denote the characteristic submanifolds of $V$ and $X$ relative to their boundaries, we have $\chi(\overline{X-\Sigma_X})<0$ and $\chi(\overline{V-\Sigma_V})<0$. According to \cite[Definition 1.1]{CDS}, $\kish(V)$ (or $\kish(X)$) is the union of all components of $\overline{V-\Sigma_V}$ (or respectively $\overline{X-\Sigma_X}$) having negative Euler characteristic. We therefore have $\kish V\ne\emptyset$ and $\kish X\ne\emptyset$, so that $\chibar(\kish(X))\geq 1$ and $\chibar(\kish(V))\ge1$. 
Hence $\chibar(\kish N \,\backslash\backslash\, T) =\chibar(\kish(X))+\chibar(\kish(V)) \geq 2$, and by \cite[Theorem 9.1]{ASTD}, the volume of $N$ is greater than $7.32$. \end{proof} \newtheorem*{vol6.89Thm}{Theorem \ref{vol6.89}} \begin{vol6.89Thm} \volsixeightnine \end{vol6.89Thm} \begin{proof} Let $N$ satisfy the hypotheses of the theorem, and let $\ell_1$ be the length of the shortest return path of $N$. If $\cosh \ell_1 \geq 1.215$, then by Proposition \ref{l1cosh1.215}, $N$ has volume greater than $6.89$. If $\widetilde{N}$ contains no $(1,1,1)$ hexagon, then by Proposition \ref{no(1,1,1)}, $\cosh \ell_1 \geq 1.215$, and Proposition \ref{l1cosh1.215} again gives the desired volume bound. We thus suppose that $N$ has a $(1,1,1)$ hexagon and $\cosh \ell_1 < 1.215$. But in this case Theorem \ref{hg4orvol7.32} gives a better volume bound of $7.32$, since by hypothesis $\Hg(N) \geq 5$. \end{proof} \newtheorem*{genus2or3Thm}{Theorem \ref{genus2or3}} \begin{genus2or3Thm} \genustwoorthree \end{genus2or3Thm} \begin{proof} Let $M$ satisfy the hypotheses of Theorem \ref{genus2or3}, and note that since $M$ is simple, it is $(2,2)$-small by definition. Suppose $M$ contains a connected closed incompressible surface of genus 2. If $M$ is $(3,2)$-small, the hypothesis on $\Hg(M)$ and \cite[Theorem 3.1]{CDS} imply that for any such surface $S$, $\chibar(\kish(M\,\backslash\backslash\, S)) \geq 2$, satisfying the first conclusion of Theorem \ref{genus2or3}. Otherwise, \cite[Theorem 5.8]{CDS} provides a separating, connected, closed incompressible surface $S$ of genus $2$ satisfying one of the conditions below. \begin{enumerate} \item At least one component of $M \,\backslash\backslash\, S$ is acylindrical; or \item For each component $B$ of $M \,\backslash\backslash\, S$ we have $\kish(B) \neq \emptyset$. 
\end{enumerate} If $S$ satisfies condition $(2)$, then since each component $B$ of $M \,\backslash\backslash\, S$ has $\kish B \neq \emptyset$ we have $\chibar(\kish(M\,\backslash\backslash\, S)) \geq 2$, which implies the first conclusion of Theorem \ref{genus2or3}. We address the other case below. Now suppose that $M$ contains no connected closed incompressible surface of genus 2 but contains a connected closed incompressible surface of genus $3$. If $M$ is $(5,3)$-small, the hypothesis on $\Hg(M)$ and \cite[Theorem 3.1]{CDS} imply that for any such surface $S$, $\chibar(\kish(M\,\backslash\backslash\, S)) \geq 4$, satisfying the first conclusion of Theorem \ref{genus2or3}. The remaining possibilities are that $M$ contains a separating incompressible surface of genus $g$ and is $(g,3)$-small, for $g = 3$ or $4$. In either case, the hypothesis on Heegaard genus ensures that \cite[Theorem 5.8]{CDS} provides a separating connected closed incompressible surface $S$ of genus $g$ satisfying condition $(1)$ or $(2)$ above. As above, if $S$ satisfies conclusion $(2)$, then the first conclusion of Theorem \ref{genus2or3} follows. In the remaining cases, we have a separating, connected, closed incompressible surface $S \subset M$ of genus $2$, $3$, or $4$, satisfying condition $(1)$ above, and we may assume that $S$ does not satisfy condition $(2)$ there. Let $N$ be an acylindrical component of $M \,\backslash\backslash\, S$ and $B$ the remaining component. Since $N$ is acylindrical, $N=\kish(N)$. Therefore $\kish(B) = \emptyset$, since otherwise $S$ would satisfy condition $(2)$ above. Then $B = |\calb|$ for some book of $I$-bundles $\calb$ (cf. \cite[\S 5.1]{CDS}), and so by \cite[Lemma 5.3]{CDS}, $B$ is ``shallow relative to $S$'' (\cite[Definition 4.3]{CDS}). It now follows from \cite[Lemma 4.4]{CDS} that $\Hg(M) \leq 1 + \Hg(N)$, hence by hypothesis that $\Hg(N) \geq 7$. Thus in this case the second conclusion of Theorem \ref{genus2or3} holds. 
\end{proof} \newtheorem*{closedvol6.89Thm}{Theorem \ref{closedvol6.89}} \begin{closedvol6.89Thm} \closedvolsixeightnine \end{closedvol6.89Thm} \begin{proof} We apply Theorem \ref{genus2or3} to $M$, yielding a connected closed surface $S$ of genus at most $4$ satisfying its conclusion. If $\chibar(\kish(M\,\backslash\backslash\, S)) \geq 2$, then Theorem 9.1 of \cite{ASTD} implies that the volume of $M$ is greater than $7.32$. Thus we assume $S$ is separating and $M \,\backslash\backslash\, S$ has an acylindrical component $X$ with $\Hg(X) \geq 7$. It is a standard result (cf. \cite[Proposition 6.3]{CDS}) that $X$ is homeomorphic to a hyperbolic $3$-manifold $N$ with totally geodesic boundary, and $\mathrm{vol}(N) = \mathrm{geodvol}(X)$ (see Definition 6.2 of \cite{CDS}). If $S$ has genus at least 3, then by Miyamoto's Theorem \cite[Theorem 5.4]{Miy}, $\mathrm{vol}(N) > 10.4$. If $S$ has genus 2, then Theorem \ref{vol6.89} implies that $N$ has volume greater than $6.89$. Theorem \ref{closedvol6.89} now follows from \cite[Proposition 6.4]{CDS} (which is in turn derived from results in \cite{ASTD}). \end{proof} \newtheorem*{vol3.44Thm}{Theorem \ref{vol3.44}} \begin{vol3.44Thm} \volthreefourfour \end{vol3.44Thm} \begin{proof} If $\pi_1 M$ is $4$--free, then Theorem 1.2 of \cite{CS_vol3.44} implies that $M$ has volume greater than $3.44$. Otherwise there is a subgroup $G$ of $ \pi_1 M$ which has rank at most $4$ and is not free. The homological hypotheses and Proposition 3.5 of \cite{CS_onecusp} ensure that there is a twofold cover $\widetilde{M} \rightarrow M$, with $ \mathrm{dim}_{\mathbb{Z}_2} H_1(\widetilde{M}; \mathbb{Z}_2) \geq 8,$ such that $G < \pi_1 \widetilde{M}$. Theorem 1.1 of \cite{CS_vol3.44} implies that $\widetilde{M}$ contains an incompressible surface of genus 2 or 3. Since $\Hg(\widetilde{M})$ bounds above the dimension of its $\mathbb{Z}_2$-homology, we have $\Hg(\widetilde{M}) \geq 8$. 
Theorem \ref{closedvol6.89} now implies that $\widetilde{M}$ has volume greater than $6.89$; hence that $M$ has volume greater than $3.445$. \end{proof} \end{document}
\begin{document} \let\labeloriginal\label \let\reforiginal\ref \begin{abstract} We prove that the strong polarized relation $\binom{\mu^+}{\mu} \rightarrow \binom{\mu^+}{\mu}^{1,1}_2$ is consistent with ZFC, for a singular $\mu$ which is a limit of measurable cardinals. \end{abstract} \title{A strong polarized relation} \maketitle \section{Introduction} The polarized relation $\binom{\alpha}{\beta} \rightarrow \binom{\gamma_0 \quad \gamma_1}{\delta_0 \quad \delta_1}^{1,1}$ asserts that for every coloring $c : \alpha \times \beta \rightarrow 2$ there are $A \subseteq \alpha$ and $B \subseteq \beta$ such that either ${\rm otp} (A) = \gamma_0, {\rm otp} (B) = \delta_0$ and $c \upharpoonright (A \times B) = \{0\}$ or ${\rm otp} (A) = \gamma_1, {\rm otp} (B) = \delta_1$ and $c \upharpoonright (A \times B) = \{1\}$. This relation was first introduced in \cite{MR0081864}, and investigated further in \cite{MR0202613}. If $(\gamma_0, \delta_0) \neq (\gamma_1, \delta_1)$ then we get the so-called \emph{unbalanced form} of the relation. The \emph{balanced form} is the case $(\gamma_0, \delta_0) = (\gamma_1, \delta_1)$, and in this case we also write $\binom{\alpha}{\beta} \rightarrow \binom{\gamma}{\delta}^{1,1}_2$ (stipulating $\gamma = \gamma_0 = \gamma_1$ and $\delta = \delta_0 = \delta_1$). With this shorthand, the notation $\binom{\alpha}{\beta} \rightarrow \binom{\gamma}{\delta}^{1,1}_\theta$ means the same thing, but the number of colors is $\theta$ instead of $2$. Some trivialities and simple limitations show that the interesting case is $\alpha = \mu^+$ and $\beta = \mu$, for an infinite cardinal $\mu$. It is reasonable to distinguish between three cases: $\mu$ is a successor cardinal, $\mu$ is a limit regular cardinal (so it is a large cardinal), or $\mu$ is a singular cardinal; we concentrate on the last of these cases. 
By a result of \v Cudnovski\u i in \cite{MR0371655}, if $\mu$ is measurable then the relation $\binom{\mu^+}{\mu} \rightarrow \binom{\mu^+ \quad \alpha}{\mu \quad \mu}^{1,1}$ holds in ZFC for every $\alpha < \mu^+$ (see also \cite{MR1968607} for a discussion of weakly compact cardinals). In a sense, this is the best possible result, since we know that the assertion $\binom{\mu^+}{\mu} \nrightarrow \binom{\mu^+}{\mu}^{1,1}_2$ is valid under the GCH for every infinite cardinal $\mu$ (see \cite{williams}). This limitation gives rise to the following problem: Can one prove that the strong relation $\binom{\mu^+}{\mu} \rightarrow \binom{\mu^+}{\mu}^{1,1}_2$ is consistent with ZFC? For $\mu=\aleph_0$ the answer is yes. The same result holds for every supercompact cardinal $\mu$ (as we shall prove in a later work). But what happens if $\mu$ is singular? We give here a positive answer. For a singular $\mu$ which is a limit of measurables, we can show that under some cardinal arithmetic assumptions (including the violation of the GCH, of course) one can get $\binom{\mu^+}{\mu} \rightarrow \binom{\mu^+}{\mu}^{1,1}_\theta$ for every $\theta < {\rm cf} (\mu)$. On the one hand, this result is stronger than the balanced result $\binom{\mu^+}{\mu} \rightarrow \binom{\alpha}{\mu}^{1,1}_\theta$, which is proved in \cite{MR1606515} for every $\alpha < \mu^+$. On the other hand, the result there is proved in ZFC, whereas the strong relations in this paper cannot be proved in ZFC. One can view this result as the parallel to the ordinary partition relation with respect to weakly compact cardinals. Recall that $\lambda\rightarrow(\theta,\kappa)^2$ means that for every coloring $c:[\lambda]^2\rightarrow 2$ there exists either $A\in[\lambda]^\theta$ so that $c\upharpoonright[A]^2=\{0\}$ or $B\in[\lambda]^\kappa$ such that $c\upharpoonright[B]^2=\{1\}$. 
We know that if $\lambda$ is inaccessible then $\lambda \rightarrow (\lambda, \alpha)^2$ for every $\alpha < \lambda$, but the strong (and balanced) relation $\lambda \rightarrow (\lambda)^2_2$ kicks $\lambda$ up in the chart of large cardinals, making it weakly compact. The result here is similar, replacing the ordinary partition relation by the polarized one. Our notation is standard. We use the letters $\theta, \kappa, \lambda, \mu$ for infinite cardinals, and $\alpha, \beta, \gamma, \delta, \varepsilon, \zeta, i, j$ for ordinals. For a regular cardinal $\kappa$ we denote the ideal of bounded subsets of $\kappa$ by $J^{\rm bd}_\kappa$. For $A,B \subseteq \kappa$ we say $A \subseteq^* B$ when $A \setminus B$ is bounded in $\kappa$; the common usage of this symbol is for $\kappa = \aleph_0$, but here we apply it to uncountable cardinals. Suppose $J$ is an ideal on $\kappa$. The product $\prod \limits_{\varepsilon < \kappa} \lambda_\varepsilon / J$ is $\theta$-directed if every subset of cardinality \emph{less} than $\theta$ has an upper bound in the product (with respect to $<_J$). This applies also to products of partially ordered sets. For more information about cardinal arithmetic, the reader may consult \cite{MR1318912}. For a measurable cardinal $\kappa$ and a normal ultrafilter $U$ on $\kappa$, let $\mathbb{Q}_U$ be the usual Prikry forcing. If $p=(t_p,A_p)\in \mathbb{Q}_U$ then $A_p$ is the pure component of $p$ and $t_p$ is the impure component. For infinite cardinals $\kappa, \lambda$ such that $\kappa < \lambda$ we denote by ${\rm Levy}(\kappa, \lambda)$ the Levy collapse of $\lambda$ to $\kappa$. This forcing notion consists of the partial functions $f : \kappa \rightarrow \lambda$ such that $|{\rm Dom}(f)| < \kappa$, ordered by inclusion. It collapses $\lambda$ to $\kappa$ and, in general, does no essential harm: it does not change anything of importance outside the interval $[\kappa, \lambda]$. 
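For concreteness, we recall the ordering of $\mathbb{Q}_U$ in its standard formulation (following the convention below, under which stronger conditions are larger): for conditions $p = (t_p, A_p)$ and $q = (t_q, A_q)$ we have $p \leq q$ iff $$ t_q \text{ end-extends } t_p, \quad t_q \setminus t_p \subseteq A_p, \quad A_q \subseteq A_p $$ \par \noindent A pure extension of $p$ is a condition $q$ with $p \leq q$ and $t_q = t_p$; the Prikry property asserts that every statement of the forcing language is decided by some pure extension. 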
We adopt the convention that, in forcing notions, $p \leq q$ means that $q$ gives more information than $p$. We use the symbol $p \parallel_{\mathbb{P}} q$ to mean that the conditions $p$ and $q$ are compatible in $\mathbb{P}$. Throughout the paper, $\jmath_D : {\rm \bf{V}} \rightarrow M$ is the canonical elementary embedding of the universe into the transitive collapse $M$ of ${\rm \bf {V}}^\mu / D$ (where $D$ is a nonprincipal $\mu$-complete ultrafilter on $\mu$). The cardinal $\mu$ is the critical point of $\jmath_D$, which means that $\mu$ is the first ordinal moved by $\jmath_D$. We shall use $\jmath$ instead of $\jmath_D$, when no confusion arises. The picture is as follows: $$ \jmath : {\rm \bf{V}} \hookrightarrow {\rm \bf {V}}^\mu / D \cong M $$ \par \noindent and we can treat $\jmath$ as a function from ${\rm \bf{V}}$ into $M$. We shall use the following basic result of Solovay, which asserts that if $\lambda$ is supercompact, $\tau \geq \lambda$ and $U_\tau$ is a fine and normal ultrafilter on $[\tau]^{<\lambda}$, then $\jmath_\tau (\lambda) > \tau$ (where $\jmath_\tau$ is the elementary embedding derived from $U_\tau$). We use a supercompact cardinal, but probably a hyper-measurable cardinal suffices (as in \cite{MR1632081}, for example). We indicate, further, that being a limit of measurables (as we assume for our singular cardinal) can be weakened, and we hope to shed light on this subject in a subsequent work. The paper is arranged in three sections. In the first one we prove the main result, in the second we deal with forcing preliminaries, and in the last one we deal with cardinal arithmetic theorems. \newline We thank the referees for their excellent work, the careful reading, corrections, clarifications and improvements. \section{The combinatorial theorem} \par \noindent We state the main result of the paper: \begin{theorem} \label{mt} The main result. \newline Let $\mu$ be a singular cardinal, $\kappa = {\rm cf} (\mu)$ and $\theta < \kappa$. 
\newline Assume $2^\kappa < {\rm cf}(\lambda) \leq \lambda < {\rm cf}(\Upsilon)\leq \Upsilon \leq 2^\mu$. Suppose $\mu$ is a limit of measurable cardinals, $\mu<{\rm cf}(\lambda)$, $\bar{\lambda} = \langle \lambda_\varepsilon : \varepsilon < \kappa \rangle$ is a sequence of measurables with limit $\mu$ so that $\kappa < \lambda_0$, $\prod \limits_{\varepsilon < \kappa} \lambda_\varepsilon / J^{\rm bd}_\kappa$ and $\prod \limits_{\varepsilon < \kappa} \lambda^+_\varepsilon / J^{\rm bd}_\kappa$ are ${\rm cf}(\Upsilon)$-directed, and $2^{\lambda_\varepsilon} = \lambda_\varepsilon^+$ for every $\varepsilon < \kappa$. For every $\varepsilon < \kappa$ let $D_\varepsilon$ be a normal uniform ultrafilter on $\lambda_\varepsilon$, such that the product $\prod \limits_{\varepsilon < \kappa} (D_\varepsilon, \subseteq^*) / J^{\rm bd}_\kappa$ is ${\rm cf}(\Upsilon)$-directed. {\underline{Then}}\ the strong relation $\binom{\lambda}{\mu} \rightarrow \binom{\lambda}{\mu}^{1,1}_\theta$ holds. \end{theorem} \par \noindent \emph{Proof}. \newline We start with $\bar{\lambda}, \bar{D}$ as in the assumptions of the theorem, the existence of which is ensured by Claim \ref{directed0} and Theorem \ref{directed1} below. Given a coloring $c : \lambda \times \mu \rightarrow \theta$, we have to find a single color $i_* < \theta$ and two sets $A \in [\lambda]^\lambda, B \in [\mu]^\mu$ such that $c \upharpoonright (A \times B) = \{i_*\}$. For every $\alpha < \lambda$ we define a sequence of colors $\bar{i}_\alpha = \langle i_{\alpha, \varepsilon} : \varepsilon < \kappa \rangle$, with $i_{\alpha, \varepsilon} < \theta$ for every $\varepsilon < \kappa$, as follows. Suppose $\alpha < \lambda$ and $\varepsilon < \kappa$ are fixed. 
Since $D_\varepsilon$ is a $\lambda_\varepsilon$-complete ultrafilter and $\theta < \lambda_\varepsilon$, there is an ordinal $i_{\alpha, \varepsilon} < \theta$ so that: $$ A_{\alpha, \varepsilon} =_{\rm def} \{ \gamma < \lambda_\varepsilon : c(\alpha, \gamma) = i_{\alpha, \varepsilon} \} \in D_\varepsilon $$ \par \noindent Let $\bar{A}_\alpha$ be the sequence $\langle A_{\alpha, \varepsilon} : \varepsilon < \kappa \rangle$, for every $\alpha < \lambda$. Without loss of generality, $A_{\alpha, \varepsilon} \cap \bigcup \limits_{\zeta < \varepsilon} \lambda_\zeta = \emptyset$ for every $\alpha < \lambda$ and $\varepsilon < \kappa$ (we can cut any initial segment of $A_{\alpha, \varepsilon}$ and still remain in the ultrafilter). Recall that $\mu$ is a limit of measurable cardinals, so in particular it is a strong limit cardinal. Consequently, $\theta^\kappa < \mu < \lambda$, and even if $\lambda$ is singular we have $\theta^\kappa = 2^\kappa < {\rm cf}(\lambda) \leq \lambda$, so we have less than ${\rm cf}(\lambda)$ color-sequences of the form $\bar{i}_\alpha$ and we can choose $S_0 \subseteq \lambda$, $|S_0| = \lambda$ and a single sequence $\langle i_\varepsilon : \varepsilon < \kappa \rangle$ so that: $$ \alpha \in S_0 \Rightarrow \bar{i}_\alpha \equiv \langle i_\varepsilon : \varepsilon < \kappa \rangle $$ \par \noindent Moreover, since $\theta < \kappa = {\rm cf} (\kappa)$ we can pick an ordinal $i_* < \theta$ and a set $u \in [\kappa]^\kappa$ such that $\varepsilon \in u \Rightarrow i_\varepsilon = i_*$. Without loss of generality, $u = \kappa$ (one may replace $\bar{\lambda}, \bar{D}$ by the sequences $\langle \lambda_\varepsilon : \varepsilon \in u \rangle$ and $\langle D_\varepsilon : \varepsilon \in u \rangle$). 
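To summarize the normalizations so far: for every $\alpha \in S_0$ and every $\varepsilon < \kappa$ we have $$ A_{\alpha, \varepsilon} \in D_\varepsilon, \quad \gamma \in A_{\alpha, \varepsilon} \Rightarrow c(\alpha, \gamma) = i_*, \quad A_{\alpha, \varepsilon} \cap \bigcup \limits_{\zeta < \varepsilon} \lambda_\zeta = \emptyset $$ \par \noindent 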
The crucial step is the following: we choose a sequence of sets $\bar{A_*} = \langle A_\varepsilon^* : \varepsilon < \kappa \rangle$, $\bar{A_*} \in \prod \limits_{\varepsilon < \kappa} D_\varepsilon$, such that $A_\varepsilon^* \setminus A_{\alpha, \varepsilon}$ is bounded for every $\varepsilon < \kappa$ and each $\alpha \in S_0$ (for many $\alpha$, and without loss of generality for all of them). How can we ensure that such a sequence exists? Well, each $\bar{A}_\alpha$ is a member of the product $\prod \limits_{\varepsilon < \kappa} (D_\varepsilon, \subseteq^*) / J^{\rm bd}_\kappa$. Since $|S_0| = \lambda < {\rm cf}(\Upsilon)$ and by the ${\rm cf}(\Upsilon)$-directedness of the product, we can choose $\bar{A}_*$ such that: $$ \alpha \in S_0 \Rightarrow \bar{A}_* \leq_{J^{\rm bd}_\kappa} \bar{A}_\alpha $$ \par \noindent The meaning of the former is that $A_\varepsilon^* \setminus A_{\alpha, \varepsilon}$ is bounded for each $\alpha \in S_0$ (recall that the order of the product is reverse $\subseteq^*$). More precisely, $A_\varepsilon^* \setminus A_{\alpha, \varepsilon}$ is bounded for all large $\varepsilon$, but since $\kappa<{\rm cf}(\lambda)$ we can shrink $S_0$ and confine ourselves to a tail end of the $\lambda_\varepsilon$-s. We employ a similar argument to show that (after some shrinking of the set $S_0$) ${\rm sup} (A_\varepsilon^* \setminus A_{\alpha, \varepsilon})$ does not depend on $\alpha$. For this, define $g_\alpha \in \prod \limits_{\varepsilon < \kappa} \lambda_\varepsilon$ by $g_\alpha (\varepsilon) = {\rm sup} (A_\varepsilon^* \setminus A_{\alpha, \varepsilon}) < \lambda_\varepsilon$. We choose, in this way, just $\lambda$ functions. Since the product is ${\rm cf}(\Upsilon)$-directed and $\lambda < {\rm cf}(\Upsilon)$, there is $g_* \in \prod \limits_{\varepsilon < \kappa} \lambda_\varepsilon$ such that: $$ \alpha \in S_0 \Rightarrow g_\alpha <_{J^{\rm bd}_\kappa} g_* $$ \par \noindent Now define $j_\alpha = {\rm sup} \{ \varepsilon < \kappa : g_\alpha (\varepsilon) \geq g_* (\varepsilon) \} < \kappa$, for every $\alpha \in S_0$.
Since $\kappa < {\rm cf}(\lambda) \leq \lambda$, one can choose $S_1 \subseteq S_0$, $|S_1| = \lambda$ and an ordinal $j(*) < \kappa$ so that: $$ \alpha \in S_1 \Rightarrow j_\alpha = j(*) $$ \par \noindent Without loss of generality, $A^*_\varepsilon \cap [\bigcup \limits_{\zeta < \varepsilon} \lambda_\zeta, g_*(\varepsilon)) = \emptyset$, so one can verify that $(\forall \alpha \in S_1)(\forall \varepsilon \in [j(*), \kappa))(A^*_\varepsilon \subseteq A_{\alpha, \varepsilon})$, and we can now construct the desired sets $A$ and $B$. Define $A = S_1$, and $B = \bigcup \{ A^*_\varepsilon : j(*) \leq \varepsilon < \kappa \}$. Clearly, $A \in [\lambda]^\lambda$, and $B \in [\mu]^\mu$. Suppose $\alpha \in A$ and $\beta \in B$. By the nature of $B$, there exists an ordinal $\varepsilon \in [j(*),\kappa)$ such that $\beta \in A^*_\varepsilon$, and since $A^*_\varepsilon \subseteq A_{\alpha, \varepsilon}$ we have $\beta \in A_{\alpha, \varepsilon}$, so $c(\alpha, \beta) = i_{\alpha, \varepsilon} = i_*$, and the relation $\binom{\lambda}{\mu} \rightarrow \binom{\lambda}{\mu}^{1,1}_\theta$ is established. \qedref{mt} \begin{corollary} \label{mcoroll} The strong polarized relation $\binom{\mu^+}{\mu} \rightarrow \binom{\mu^+}{\mu}^{1,1}_2$ is consistent with ZFC for some singular cardinal $\mu$. \end{corollary} \par\noindent\emph{Proof}.\newline As in Claim \ref{directed0} we prove the consistency of the conditions of Theorem \ref{mt} with $\kappa=\omega={\rm cf}(\mu), \lambda=\mu^+$ and $\Upsilon=\mu^{++}=2^\mu$. \qedref{mcoroll} \begin{remark} \label{st} Denote by $\binom{\mu^+}{\mu} \rightarrow_{\rm st} \binom{\mu^+}{\mu}^{1,1}_\theta$ the assertion that for every coloring $c : \mu^+ \times \mu \rightarrow \theta$ there are $A$ and $B$ such that $A$ is a stationary subset of $\mu^+$, $B \in [\mu]^\mu$ and $c$ is constant on the cartesian product $A \times B$. In fact, our proof yields this stronger relation.
\end{remark} \section{Forcing preliminaries} \par \noindent We need some preliminaries before proving the main claim of the next section. First of all, we shall use a variant of Laver's indestructibility (see \cite{MR0472529}), making sure that a supercompact cardinal $\lambda$ will remain supercompact upon forcing with notions having some prescribed properties. Let us start with the following definition: \begin{definition} \label{strategic} Strategic completeness. \newline Let $\mathbb{P}$ be a forcing notion, $p \in \mathbb{P}$, and let $\mu$ be an infinite cardinal. \begin{enumerate} \item [$(a)$] The game $\Game_\mu(p, \mathbb{P})$ is played between two players, `com' and `inc'. It lasts $\mu$ moves. In the $\alpha$-th move, `com' tries to choose $p_\alpha \in \mathbb{P}$ such that $p \leq_{\mathbb{P}} p_\alpha$ and $\beta < \alpha \Rightarrow q_\beta \leq_{\mathbb{P}} p_\alpha$. After that, `inc' tries to choose $q_\alpha \in \mathbb{P}$ such that $p_\alpha \leq_{\mathbb{P}} q_\alpha$. \item [$(b)$] `com' wins a play if he has a legal move for every $\alpha < \mu$. \item [$(c)$] $\mathbb{P}$ is $\mu$-strategically complete if the player `com' has a winning strategy in the game $\Game_\mu(p, \mathbb{P})$ for every $p \in \mathbb{P}$. \end{enumerate} \end{definition} \begin{claim} \label{laver} Indestructible supercompactness and strategic completeness. \newline Let $\lambda$ be a supercompact cardinal in the ground model.
There is a forcing notion $\mathbb{Q}$ which makes $\lambda$ indestructible under every forcing $\mathbb{P}$ with the following properties: \begin{enumerate} \item [$(a)$] $\mathbb{P}$ is $\mu$-strategically complete for every $\mu < \lambda$, \item [$(b)$] $\chi \geq \lambda$, and $\mathbb{P} \in \mathcal{H}(\chi)$, \item [$(c)$] for some $\jmath : {\rm \bf{V}} \rightarrow M$ such that $\lambda = {\rm crit}(\jmath)$, $M^\chi \subseteq M$ and for every $G \subseteq \mathbb{P}$ which is generic over {\rm \bf{V}}, we have $M[G] \models "\{ \jmath(p) : p \in G \}$ has an upper bound in $\jmath(\mathbb{P})"$. \end{enumerate} \end{claim} \par \noindent \emph{Proof}. \newline Basically, the proof walks along the lines of \cite{MR0472529}, using Laver's diamond. At the crux of the matter, when Laver needs the $\lambda$-completeness, we employ requirement (c) above. \qedref{laver} \par \noindent We now define the `single step' forcing notion $\mathbb{Q}_{\bar{\theta}}$ to be used in the proof of Claim \ref{directed0}. This is called the $\bar{\theta}$-dominating forcing (it appears also in \cite{945}). We will use an iteration which consists, essentially, of these forcing notions: \begin{definition} \label{dominating} The $\bar{\theta}$-dominating forcing. \newline Let $\lambda$ be a supercompact cardinal. Suppose $\bar{\theta} = \langle \theta_\alpha : \alpha < \lambda \rangle$ is an increasing sequence of regular cardinals so that $2^{|\alpha|+\aleph_0} < \theta_\alpha < \lambda$ for every $\alpha < \lambda$. \begin{enumerate} \item [$(\aleph)$] $p \in \mathbb{Q}_{\bar{\theta}}$ iff: \begin{enumerate} \item $p = (\eta, f) = (\eta^p, f^p)$, \item $\ell g(\eta) < \lambda$, \item $\eta \in \prod \{ \theta_\zeta : \zeta < \ell g(\eta) \}$, \item $f \in \prod \{ \theta_\zeta : \zeta < \lambda \}$, \item $\eta \triangleleft f$ (i.e., $\eta(\zeta)=f(\zeta)$ for every $\zeta<\ell g(\eta)$).
\end{enumerate} \item [$(\beth)$] $p \leq_{\mathbb{Q}_{\bar{\theta}}} q$ iff ($p,q \in \mathbb{Q}_{\bar{\theta}}$ and) \begin{enumerate} \item $\eta^p \trianglelefteq \eta^q$, \item $f^p(\varepsilon) \leq f^q(\varepsilon)$, for every $\varepsilon < \lambda$. \end{enumerate} \end{enumerate} \end{definition} Notice that if $\ell g(\eta^p) \leq \varepsilon < \ell g(\eta^q)$ then $f^p(\varepsilon) \leq \eta^q(\varepsilon)$, since $f^p(\varepsilon) \leq f^q(\varepsilon) = \eta^q(\varepsilon)$. The purpose of $\mathbb{Q}_{\bar{\theta}}$ is to add (via the generic object) a dominating function in the product of the $\theta_\alpha$-s. \begin{obs} \label{cc} Basic properties of $\mathbb{Q}_{\bar{\theta}}$. \newline Let $\mathbb{Q}_{\bar{\theta}}$ be the $\bar{\theta}$-dominating forcing (for the supercompact cardinal $\lambda$). \begin{enumerate} \item [$(a)$] $\mathbb{Q}_{\bar{\theta}}$ satisfies the $\lambda^+$-cc. \item [$(b)$] $\mathbb{Q}_{\bar{\theta}}$ is $\mu$-strategically complete for every $\mu < \lambda$. \end{enumerate} \end{obs} \par \noindent \emph{Proof}. \begin{enumerate} \item [$(a)$] If $p = (\eta, f^p), q = (\eta, f^q)$, define $f(\varepsilon) = {\rm max} \{ f^p(\varepsilon), f^q(\varepsilon) \}$ for every $\varepsilon < \lambda$, and then $r = (\eta, f)$. Clearly, $r \in \mathbb{Q}_{\bar{\theta}}$ and $p,q \leq r$. So the cardinality of an antichain does not exceed the number of possible $\eta$-s, which is $\lambda$ since $\ell g(\eta) < \lambda$ and $\lambda^{< \lambda} = \lambda$. \item [$(b)$] Assume $\mu < \lambda$ and $p \in \mathbb{Q}_{\bar{\theta}}$. Let $\Game_\mu (p, \mathbb{Q}_{\bar{\theta}})$ be the game defined in \ref{strategic}. We shall find a winning strategy for `com'. In the first stage, `com' may choose $p_0 = p$, and from now on `com' needs to deal only with the $q_\gamma$-s (and being above $p$ follows, since $p \leq_{\mathbb{Q}_{\bar{\theta}}} q_\gamma$ for each $\gamma$). 
So assume $\beta < \mu$ and $q_\gamma$ was already chosen (by `inc') for $\gamma < \beta$. Notice that $\langle q_\gamma : \gamma < \beta \rangle$ is an increasing sequence of conditions in $\mathbb{Q}_{\bar{\theta}}$. Define: $$ \eta^{p_\beta} = \bigcup \{ \eta^{q_\gamma} : \gamma < \beta \} $$ \par \noindent Since $\beta < \lambda = {\rm cf}(\lambda)$ and $\ell g(\eta^{q_\gamma}) < \lambda$ for every $\gamma < \beta$, we know that $\ell g(\eta^{p_\beta}) < \lambda$. Note that $\eta^{q_\gamma} \trianglelefteq \eta^{p_\beta}$ for every $\gamma < \beta$. \newline Now, for $\varepsilon < \ell g(\eta^{p_\beta})$ set $f^{p_\beta}(\varepsilon) = \eta^{p_\beta}(\varepsilon)$, and for $\ell g(\eta^{p_\beta}) \leq \varepsilon < \lambda$ set $f^{p_\beta}(\varepsilon) = {\rm sup} \{ f^{q_\gamma}(\varepsilon) : \gamma < \beta \}$. We may assume, without loss of generality, that $\beta<\ell g(\eta^{p_\beta})$ (if not, set $f^{p_\beta}(\varepsilon)=0$ for every $\varepsilon\in[\ell g(\eta^{p_\beta}),\beta]$). Hence $f^{p_\beta}(\varepsilon)$ is well defined, since $\alpha < \theta_\alpha$ for every $\alpha < \lambda$. Notice also that $\eta^{p_\beta} \triangleleft f^{p_\beta}$. Finally, set $p_\beta = (\eta^{p_\beta}, f^{p_\beta})$. Clearly, $p_\beta \in \mathbb{Q}_{\bar{\theta}}$ and $q_\gamma \leq_{\mathbb{Q}_{\bar{\theta}}} p_\beta$ for every $\gamma < \beta$, so we are done. \end{enumerate} \qedref{cc} \begin{remark} \label{comp} Despite the fact that $\mathbb{Q}_{\bar{\theta}}$ is $\mu$-strategically complete for every $\mu<\lambda$, it is not $\lambda$-complete. We indicate that for every $\mu < \lambda$ there is a dense subset which is $\mu$-complete, but no dense subset which is $\mu$-complete simultaneously for every $\mu < \lambda$. So we will have to employ claim \ref{laver} instead of the original theorem of Laver. \end{remark} \qedref{comp} \par \noindent Having the basic component, we would like to iterate the $\bar{\theta}$-dominating forcing. 
We shall use a $(< \lambda)$-support, aiming to take care of all the increasing sequences of the form $\bar{\theta}$ with limit $\lambda$. We need the following: \begin{definition} \label{iteration} The iteration. \newline Let $\lambda$ be a supercompact cardinal, and $\lambda<{\rm cf}(\Upsilon)\leq\Upsilon$. Let $\mathbb{P}_\Upsilon$ be the $(< \lambda)$-support iteration $\langle \mathbb{P}_\alpha, \mathunderaccent\tilde-3 {\mathbb{Q}}_\beta : \alpha \leq \Upsilon, \beta < \Upsilon \rangle$, where each $\mathunderaccent\tilde-3 {\mathbb{Q}}_\beta$ is (a $\mathbb{P}_\beta$-name of) the forcing $\mathbb{Q}_{\bar{\theta}}$ with respect to some $\bar{\theta}$ as in definition \ref{dominating}, so that each $\bar{\theta}$ appears at some stage of the iteration. \end{definition} \par \noindent We would like to show that the nice properties of each component ensured by \ref{cc} are preserved in the iteration. Now, strategic completeness is preserved, but the chain condition may fail. Nevertheless, in the case of the dominating forcing $\mathbb{Q}_{\bar{\theta}}$ it holds: \begin{definition} \label{strongcc} Linked forcing notions. \newline Let $\mathbb{P}$ be a forcing notion. $\mathbb{P}$ is $\lambda$-2-linked when for every subset of conditions $\{ p_\alpha : \alpha < \lambda^+ \} \subseteq \mathbb{P}$ there are $C,h$ such that: \begin{enumerate} \item $C$ is a closed unbounded subset of $\lambda^+$, \item $h : \lambda^+ \rightarrow \lambda^+$ is a regressive function, \item for every $\alpha, \beta \in C$, if ${\rm cf}(\alpha) = {\rm cf}(\beta) = \lambda$ and $h(\alpha) = h(\beta)$ then $p_\alpha \parallel p_\beta$; moreover, $p_\alpha \cup p_\beta$ is a least upper bound. \end{enumerate} \end{definition} \begin{remark} \label{strongness} If $\mathbb{Q}$ is $\lambda$-2-linked, {\underline{then}}\ $\mathbb{Q}$ is $\lambda^+$-cc. \end{remark} \begin{lemma} \label{preservation} Preservation of the $\lambda$-2-linked property.
\newline Assume $\lambda = \lambda^{< \lambda}$, $\mathbb{P}$ is a $(< \lambda)$-support iteration so that every component is $\chi$-strategically complete for every $\chi < \lambda$ and $\lambda$-2-linked. \newline {\underline{Then}}\ $\mathbb{P}$ is also $\lambda$-2-linked (and consequently $\lambda^+$-cc). \end{lemma} \par \noindent \emph{Proof}. \newline As in \cite{MR0505492}, with the minor changes for $\lambda$ instead of $\aleph_1$. \qedref{preservation} \begin{obs} \label{dominatcc} The dominating forcing $\mathbb{Q}_{\bar{\theta}}$ is $\lambda$-2-linked. \end{obs} \par \noindent \emph{Proof}. \newline Suppose $\{ p_\alpha : \alpha < \lambda^+ \} \subseteq \mathbb{Q}_{\bar{\theta}}$. Without loss of generality there exists $\eta$ so that $\eta^{p_\alpha} \equiv \eta$ for every $\alpha < \lambda^+$ (since $\ell g(\eta) < \lambda$ and $\lambda^{< \lambda} = \lambda < \lambda^+$). Now choose any club $C$ and regressive $h$, upon noticing that two conditions with the same stem are compatible. \qedref{dominatcc} \begin{lemma} \label{prodcof} The high cofinality. \newline Suppose $\lambda$ is supercompact, ${\rm cf}(\Upsilon) > \lambda$, $\bar{\theta}$ and $\mathbb{P} = \mathbb{P}_\Upsilon$ are the sequence of regular cardinals and iteration defined above. {\underline{Then}}\ ${\rm cf}(\prod \limits_{\alpha < \lambda} \theta_\alpha, <_{J^{\rm {bd}}_\lambda}) = {\rm cf}(\Upsilon)^{{\rm \bf V}^\mathbb{P}}$. \end{lemma} \par \noindent \emph{Proof}. \newline Let $\Upsilon = \lambda^{++}$ (the proof of the general case is just the same). For proving that ${\rm cf}(\prod \limits_{\alpha < \lambda} \theta_\alpha, <_{J^{\rm {bd}}_\lambda}) \leq \lambda^{++}$ we introduce a cofinal subset (in the product) of cardinality $\lambda^{++}$. 
Moreover, the cofinal subset will be a dominating one (i.e., each $\mathunderaccent\tilde-3 {g}_\beta$ below dominates all the old functions), and consequently ${\rm cf}(\prod \limits_{\alpha < \lambda} \theta_\alpha, <_{J^{\rm {bd}}_\lambda}) = \lambda^{++}$. For each $\beta < \lambda^{++}$ let $\mathunderaccent\tilde-3 {G}_\beta \subseteq \mathunderaccent\tilde-3 {\mathbb{Q}}_\beta$ be generic, and set $\mathunderaccent\tilde-3 {g}_\beta = \bigcup \{ \eta^p : p \in \mathunderaccent\tilde-3 {G}_\beta \}$. Now, $\mathunderaccent\tilde-3 {\mathbb{P}}_{\beta+1} \models ``\mathunderaccent\tilde-3 {g}_\beta \in \prod \limits_{\alpha < \lambda} \theta_\alpha$ and $\mathunderaccent\tilde-3 {g}_\beta$ is a dominating function". To see this, define the following set for every $g \in \prod \limits_{\alpha < \lambda} \theta_\alpha$: $$ \mathcal{I}_g = \{ (\eta, f) \in \mathunderaccent\tilde-3 {\mathbb{Q}}_\beta : \forall \varepsilon \in [\ell g(\eta), \lambda) \quad g(\varepsilon) \leq f(\varepsilon) \} $$ \par \noindent One verifies that $\mathcal{I}_g$ is a dense open set for every $g \in \prod \limits_{\alpha < \lambda} \theta_\alpha$, so if $G$ is generic then $G \cap \mathcal{I}_g \neq \emptyset$ for every $g \in \prod \limits_{\alpha < \lambda} \theta_\alpha$. Consequently, $\Vdash_{\mathunderaccent\tilde-3 {\mathbb{P}}_{\beta+1}} "g \leq_{J^{\rm {bd}}_\lambda} \mathunderaccent\tilde-3 {g}_\beta"$. Take a look at $\{ \mathunderaccent\tilde-3 {g}_\beta : \beta < \lambda^{++} \}$. We claim (working in ${\rm \bf{V}}^{\mathbb{P}}$) that this set is cofinal in the product. For showing this, notice that $\langle \mathbb{P}_\alpha : \alpha \leq \lambda^{++} \rangle$ is $\lessdot$-increasing, so $\alpha < \beta < \lambda^{++} \Rightarrow {\rm \bf{V}}^{\mathbb{P}} \models "\mathunderaccent\tilde-3 {g}_\alpha \leq_{J^{\rm {bd}}_\lambda} \mathunderaccent\tilde-3 {g}_\beta"$, and by the nature of these objects we know that every function in the product is bounded by one of them. 
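\par \noindent For completeness, let us verify that $\mathcal{I}_g$ is dense: given a condition $p = (\eta, f)$, define $f'$ by $$ f'(\varepsilon) = \eta(\varepsilon) \ {\rm for} \ \varepsilon < \ell g(\eta), \qquad f'(\varepsilon) = {\rm max} \{ f(\varepsilon), g(\varepsilon) \} \ {\rm for} \ \varepsilon \in [\ell g(\eta), \lambda). $$ \par \noindent Then $(\eta, f') \in \mathcal{I}_g$, and $p \leq (\eta, f')$ since the stem is unchanged and $f(\varepsilon) \leq f'(\varepsilon)$ for every $\varepsilon < \lambda$.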
\qedref{prodcof} \begin{lemma} \label{indestructible} The property of being indestructible. \newline The iteration $\mathbb{P}$ satisfies demand $(c)$ in claim \ref{laver}. \end{lemma} \par \noindent \emph{Proof}. \newline Let $G \subseteq \mathbb{P}$ be generic over ${\rm \bf{V}}$. Let $\jmath$ be a $\chi$-supercompact elementary embedding of ${\rm \bf{V}}$ into $M$ with critical point $\lambda$, and let $\Upsilon = \lambda^{++}$. We may assume that $\chi \geq \Upsilon$. We define a condition $q \in \jmath(\mathbb{P})$, and we shall prove that $q$ is an upper bound (in the forcing notion $\jmath(\mathbb{P})$ which belongs to $M[G]$) for $\{ \jmath(p) : p \in G \}$. Set ${\rm Dom}(q) = \{ \jmath(\alpha) : \alpha < \Upsilon \}$. By saying, below, that $\eta^p$ is an object we mean that it is not just a name. For every $\alpha < \Upsilon$ let $q(\jmath(\alpha)) = (\eta^\alpha, \mathunderaccent\tilde-3 {f}^\alpha)$ where: $$ \eta^\alpha = \bigcup \{ \eta^{p(\alpha)} : p \in G, \alpha \in {\rm Dom}(p), \eta^p {\rm \ is \ an \ object} \} $$ \par \noindent and for $\gamma \geq \ell g(\eta^\alpha)=\lambda$, set: $$ f^\alpha(\gamma) = {\rm sup} \{ \jmath(f^{p(\alpha)})(\gamma) : p \in G, \alpha \in {\rm Dom}(p) \} $$ \par \noindent Clearly, $q$ is an upper bound for $\{ \jmath(p) : p \in G \}$ in $\jmath(\mathbb{P})$, provided that $q$ is well defined. For this, notice that $|{\rm Dom}(q)| < \jmath(\lambda)$ since $\Upsilon < \jmath(\lambda)$, $^\chi M \subseteq M$, and ${\rm Dom}(q) = \{ \jmath(\alpha) : \alpha < \Upsilon \}$. We must also show that $f^\alpha(\gamma)$ is well defined. Notice that for every $\bar{\theta}$ which grows fast enough (i.e., $\alpha < \lambda \Rightarrow 2^{|\alpha|+\aleph_0} < \theta_\alpha < \lambda$ as in \ref{dominating}) and for each $\gamma \geq \lambda$ we have $\jmath(\bar{\theta})_\gamma > \Upsilon$.
Also, if $\alpha < \Upsilon$ then $M[G] \models |\{ f^{p(\alpha)} : p \in G_{\mathbb{P}} \}| < \jmath(\bar{\theta})_\gamma$ for $\gamma \geq \lambda$. Consequently, $f^\alpha(\gamma)$ is bounded in $\jmath(\bar{\theta})_\gamma$, hence well defined, so we are done. \qedref{indestructible} \section{Cardinal arithmetic assumptions} \par \noindent We state two theorems, to be proved in this section. The first one asserts that one can force the existence of a singular cardinal, limit of measurable cardinals, with some properties imposed on the product of these measurables and their successors. Related works in this direction are \cite{MR1632081} and \cite{MR1245523}. The second theorem deals with properties of the product of normal (uniform) ultrafilters on these cardinals. \newline We start with the following known fact: \begin{lemma} \label{prikry} Cofinality preservation under Prikry forcing. \newline Let $U$ be a normal (uniform) ultrafilter on a measurable cardinal $\mu$. \newline Let $\mathbb{Q}_U$ be the Prikry forcing (with respect to $\mu$ and $U$) and $\langle \vartheta_n : n \in \omega \rangle$ the Prikry sequence. Suppose $\theta = {\rm cf}(\theta) \neq \mu$, $F : \mu \rightarrow {\rm Reg} \cap \mu$, $F(\alpha)>\alpha$ for every $\alpha<\mu$ and ${\rm cf}(\prod \limits_{\alpha < \mu} F(\alpha) / U) = \theta$ as exemplified by $\bar{g} = \langle g_\varepsilon / U : \varepsilon < \theta \rangle$ in ${\rm \bf {V}}$. Let $\bar{h}=\langle h_\varepsilon : \varepsilon < \theta \rangle$ be the restriction of $\bar{g}$ to the Prikry sequence, i.e., $h_\varepsilon(\vartheta_0)=0$ and $h_\varepsilon\upharpoonright\{\vartheta_{n+1}:n\in\omega\}= g_\varepsilon\upharpoonright\{\vartheta_{n+1}:n\in\omega\}$ for every $\varepsilon<\theta$. {\underline{Then}}\ ${\rm cf}(\prod \limits_{n < \omega} F(\vartheta_n) / J_\omega^{\rm bd}) = \theta$, as exemplified by $\bar{h}$ in ${\rm \bf{V}}^{\mathbb{Q}_U}$. \end{lemma} \par \noindent \emph{Proof}.
\newline Let $A$ be any member of $U$. We claim that $\vartheta_{i+1}\in A$ for almost every $i\in\omega$ (i.e., for all but finitely many). For this, let $D_A$ be $\{p\in\mathbb{Q}_U:A\supseteq A_p\}$ (recall that $A_p$ is the pure component of the condition $p$). Let $G$ be a generic subset of $\mathbb{Q}_U$. Since $D_A$ is open and dense, one can pick a condition $p\in D_A\cap G$. Let $i_p$ be the maximal natural number so that $\vartheta_{i_p}\in t_p$. Consequently, $p\Vdash(\forall i\in[i_p,\omega))(\vartheta_{i+1}\in A)$. Now suppose $p\Vdash (\mathunderaccent\tilde-3 {f}:\omega\rightarrow\mu)\wedge (\bigwedge\limits_{i<\omega}\mathunderaccent\tilde-3 {f}(i)<F(\mathunderaccent\tilde-3 {\vartheta}_{i+1}))$. We claim that there exists a condition $q\geq p$ and a function $g\in{\rm \bf V}$ so that $g:\mu\rightarrow\mu, \bigwedge\limits_{\lambda<\mu} g(\lambda)<F(\lambda)$ and $q\Vdash \mathunderaccent\tilde-3 {f}(i)<g(\lambda)$ whenever $\mathunderaccent\tilde-3 {\vartheta}_{i+1}=\lambda$ (more precisely, this holds for every large enough $i$ since for every measure one set $A$, $\vartheta_{i+1}\in A$ for all large $i$). For this claim, let $A_p$ be the pure component of $p$. For each $\lambda\in A_p$ define: $$ T_{p,\lambda}=\{t:\exists A' \in U, p\leq(t,A'), {\rm max}(t)=\lambda\} $$ For every $t\in T_{p,\lambda}$ we choose a condition $q_{t,\lambda}$ so that: \begin{enumerate} \item [$(a)$] $q_{t,\lambda}$ is of the form $(t,A')$, so it forces the value $\lambda$ on $\mathunderaccent\tilde-3 {\vartheta}_{|t\cap\lambda|}$ \item [$(b)$] $q_{t,\lambda}$ forces a value on $\mathunderaccent\tilde-3 {f}(|t\cap\lambda|)$ which is an ordinal below $F(\lambda)$ \end{enumerate} Denote this ordinal by $g_p(t,\lambda)$. Now we define a condition $q=(s,A)$ as follows. $s=t_p$, and $A=A_q$ is the following set: $$ A=\{\lambda\in A_p:\forall\lambda_1\in\lambda\cap A_p, \forall t\in T_{p,\lambda_1}, \lambda\in A_{q_{t,\lambda_1}}\} $$ We shall show that $q\in\mathbb{Q}_U$.
$A\subseteq A_p$, hence ${\rm max}(t_p)<{\rm min}(A)$. $A\in U$ since $A$ is the diagonal intersection of $\mu$ members from $U$. To verify this, set $B_\lambda=\bigcap\{A_{q_{t,\lambda}}:t\in T_{p,\lambda}\}$, for every $\lambda\in A_p$. Now $B_\lambda\in U$ for every $\lambda\in A_p$, as an intersection of at most $\lambda=|[\lambda]^{<\omega}|$ members from $U$ (recall that $U$ is $\mu$-complete). Since $A=\Delta\{B_\lambda:\lambda\in A_p\}$ we know that $A\in U$, hence $q\in\mathbb{Q}_U$. Clearly, $p\leq_{\mathbb{Q}_U}q$. Let us define $g:\mu\rightarrow\mu$ by $g(\lambda) = {\rm sup}\{g_p(t,\lambda)+1:t\in T_{p,\lambda}\}$ if $\lambda\in A_p$, and $g(\lambda)=0$ otherwise. Notice that $g(\lambda)<F(\lambda)$, since $\lambda<F(\lambda), F(\lambda)$ is regular, $g_p(t,\lambda)<F(\lambda)$ for every $t\in T_{p,\lambda}$ and $|T_{p,\lambda}|\leq\lambda$. It follows that $q\Vdash\mathunderaccent\tilde-3 {f}(i)<g(\mathunderaccent\tilde-3 {\vartheta}_{i+1})$ (for almost every $i$, hence without loss of generality for every $i$), as required. We conclude that if $p\Vdash \mathunderaccent\tilde-3 {f}\in\prod\limits_{i<\omega}F(\vartheta_i)$ then one can find a function $g\in\prod\limits_{\alpha<\mu}F(\alpha)$, an ordinal $j<\omega$ and a condition $q\geq p$ such that $q\Vdash \bigwedge\limits_{i\in[j,\omega)}\mathunderaccent\tilde-3 {f}(i)<g(\mathunderaccent\tilde-3 {\vartheta}_{i+1})$. Equipped with this property, we can complete the proof of the lemma. For every $\varepsilon<\theta$ set $h_\varepsilon=g_\varepsilon\upharpoonright\{\vartheta_{i+1}:i<\omega\}\cup \{\langle\vartheta_0,0\rangle\}$ and collect these functions to the sequence $\bar{h}= \langle h_\varepsilon:\varepsilon<\theta\rangle$. We claim that $\bar{h}$ is a cofinal sequence in the product $(\prod\limits_{n\in\omega}F(\vartheta_n),<_{J_\omega^{\rm bd}})$ (in ${\rm \bf{V}}^{\mathbb{Q}_U}$). Assume $p\Vdash \mathunderaccent\tilde-3 {f}\in\prod\limits_{i<\omega}F(\vartheta_i)$, and let $g,q$ be as above.
Pick an ordinal $\varepsilon<\theta$ so that $g<_U g_\varepsilon$. It means that $B_{\varepsilon,g}=\{\gamma<\mu:g(\gamma)<g_\varepsilon(\gamma)\}\in U$. By the beginning of the proof, $\vartheta_{i+1}\in B_{\varepsilon,g}$ for almost every $i$, hence $q\Vdash\mathunderaccent\tilde-3 {f}<_{J_\omega^{\rm bd}}h_\varepsilon$, and we are done. \qedref{prikry} \begin{remark} \label{mmmagidor} The same proof works for Magidor's forcing, upon replacing $\omega$ by $\kappa={\rm cf}(\mu)$. In the proof above we demanded $q\Vdash\mathunderaccent\tilde-3 {f}(i)<g(\mathunderaccent\tilde-3 {\vartheta}_{i+1})$ (and not $\mathunderaccent\tilde-3 {\vartheta}_i$), so it works also for Magidor's forcing (in contrast to Prikry forcing, in Magidor's forcing we encounter limit points in the cofinal sequence). \end{remark} \par \noindent We can now state the main claim of this section: \begin{claim} \label{directed0} The main claim. \newline Starting with a supercompact cardinal, one can force the existence of a singular cardinal $\mu > {\rm cf}(\mu) = \kappa$, limit of measurables $\bar{\lambda} = \langle \lambda_\varepsilon : \varepsilon < \kappa \rangle$, such that both $\prod \limits_{\varepsilon < \kappa} \lambda_\varepsilon / J^{\rm bd}_\kappa$ and $\prod \limits_{\varepsilon < \kappa} \lambda^+_\varepsilon / J^{\rm bd}_\kappa$ are ${\rm cf}(\Upsilon)$-directed (for some $\Upsilon \in [\mu^{++},2^\mu), {\rm cf}(\Upsilon)\geq\mu^{++}$), and $2^{\lambda_\varepsilon} = \lambda_\varepsilon^+$ for every $\varepsilon < \kappa$. \end{claim} \par \noindent \emph{Proof}. \newline We shall prove the claim for the specific case of $\kappa=\omega$. The arguments can be generalized upon using Magidor's forcing instead of $\mathbb{Q}_U$ below. Let $\mu$ be a supercompact cardinal. Begin with the variant of Laver's forcing, ensured by \ref{laver} above.
Let ${\rm Levy}(\mu^+, 2^\mu)$ follow Laver's forcing, so $2^\mu = \mu^+$ and $\mu$ remains supercompact (notice that ${\rm Levy}(\mu^+, 2^\mu)$ is $\mu$-directed-closed). Use $\mathbb{P}$ from definition \ref{iteration} to follow the composition of the preparatory Laver forcing and ${\rm Levy}(\mu^+, 2^\mu)$. By observation \ref{cc} we know that $\mathbb{P}$ is $\chi$-strategically complete for every $\chi < \mu$ (recall that an iteration keeps this property, provided that each stage satisfies it) and also $\mu^+$-cc (by lemma \ref{preservation} and observation \ref{dominatcc} above). By claim \ref{laver} we know that $\mu$ is still supercompact after forcing with $\mathbb{P}$. It should be stressed that $\mathbb{P}$ forces $2^\mu>\mu$, as it creates a $\mu^{++}$-directed product. Choose a sequence of measurable cardinals $\langle \lambda_\varepsilon : \varepsilon < \mu \rangle$, so that $\mu$ is the limit of the sequence. Notice that both $\langle \lambda_\varepsilon : \varepsilon < \mu \rangle$ and $\langle \lambda_\varepsilon^+ : \varepsilon < \mu \rangle$ fit the definition \ref{dominating} (hence appear at some stage of the iteration). Without loss of generality $\varepsilon < \mu \Rightarrow 2^{\lambda_\varepsilon} = \lambda_\varepsilon^+$ (recall ${\rm Levy}(\mu^+, 2^\mu)$ upon noticing that the local GCH on $\mu$ reflects down to enough measurables below, and the iteration $\mathbb{P}$ does not affect the measurability of the cardinals below $\mu$, since it does not add new bounded subsets). Let $U$ be a normal (uniform) ultrafilter on $\mu$ in ${\rm \bf{V}}^{\mathbb{P}}$. Let $\mathbb{Q}_U$ be the Prikry forcing applied to $\mu$, adding the cofinal Prikry sequence $\langle \vartheta_n : n < \omega \rangle$. In ${\rm \bf{V}}^{\mathbb{P} \ast \mathbb{Q}_U}$ we know that ${\rm cf}(\mu) = \aleph_0$. We indicate that using Magidor's forcing (from \cite{MR0465868}), we can get a similar result for ${\rm cf}(\mu) = \kappa > \aleph_0$.
Now, if $\bar{\theta}$ is an increasing sequence as in definition \ref{dominating}, we know that ${\rm cf}(\prod \limits_{n < \omega} \theta_{\vartheta_n} / J^{{\rm bd}}_\omega)^{{\rm \bf{V}}^{\mathbb{P}\ast\mathbb{Q}_U}} = {\rm cf}(\prod \limits_{\alpha < \mu} \theta_\alpha / U)^{{\rm \bf{V}}^{\mathbb{P}}} = {\rm cf}(\prod \limits_{\alpha < \mu} \theta_\alpha / J^{{\rm bd}}_\mu)^{{\rm \bf{V}}^{\mathbb{P}}} = {\rm cf}(\Upsilon)$ (by \ref{prikry} and \ref{prodcof} above, and the second equality follows from the fact that we have here true cofinality, which is preserved under extending the ideal). Apply it to the sequences $\langle \lambda_{\vartheta_\varepsilon} : \varepsilon < \kappa \rangle$ and $\langle \lambda_{\vartheta_\varepsilon}^+ : \varepsilon < \kappa \rangle$ so the proof is complete. \qedref{directed0} \begin{remark} \label{more} We have used the main theorem for proving a combinatorial result, but we indicate that it can serve for other problems as well (by describing an extreme situation of cardinal arithmetic). \end{remark} \begin{theorem} \label{directed1} Let $\mu > {\rm cf} (\mu) = \kappa$ be singular, limit of measurables. \newline Let $\bar{\lambda} = \langle \lambda_\varepsilon : \varepsilon < \kappa \rangle$ be a sequence of measurable cardinals, which tends to $\mu$. Assume $2^{\lambda_\varepsilon} = \lambda_\varepsilon^+$ for every $\varepsilon < \kappa$, and $\prod \limits_{\varepsilon < \kappa} \lambda^+_\varepsilon / J^{\rm bd}_\kappa$ is $\Upsilon$-directed (for some $\Upsilon \in [\mu^{++}, 2^\mu)$). {\underline{Then}}\ for every sequence $\bar{D} = \langle D_\varepsilon : \varepsilon < \kappa \rangle$ such that $D_\varepsilon$ is a normal (hence $\lambda_\varepsilon$-complete) uniform ultrafilter on $\lambda_\varepsilon$ for every $\varepsilon < \kappa$, the product $\prod \limits_{\varepsilon < \kappa} (D_\varepsilon, \subseteq^*) / J^{\rm bd}_\kappa$ is $\Upsilon$-directed. \end{theorem} \par \noindent \emph{Proof}. 
\newline For every $\varepsilon < \kappa$ choose a normal ultrafilter $D_\varepsilon$ on $\lambda_\varepsilon$ (recall that each $\lambda_\varepsilon$ is measurable). First we claim that $(D_\varepsilon, \subseteq^*_\varepsilon)$ is $\lambda_\varepsilon^+$-directed for every $\varepsilon < \kappa$. So we have to show that for every collection of less than $\lambda_\varepsilon^+$ sets from $D_\varepsilon$ we can find a set in the ultrafilter which is almost included in every member of the collection. Indeed, if $\{ S_\beta : \beta < \delta \}$ is such a collection (and without loss of generality $\delta \leq \lambda_\varepsilon$), define $S = \Delta \{ S_\beta : \beta < \delta \}$, and by the normality of $D_\varepsilon$ we know that $S \in D_\varepsilon$. Since $S \subseteq^*_\varepsilon S_\beta$ for every $\beta < \delta$ (by the very definition of the diagonal intersection), we are done. Second we claim that for every $\varepsilon < \kappa$ we can find a $\subseteq^*_\varepsilon$-decreasing sequence of sets from $D_\varepsilon$ of the form $\langle S_{\varepsilon,\alpha} : \alpha < \lambda_\varepsilon^+ \rangle$, such that for every $B \in D_\varepsilon$ there is $\alpha < \lambda_\varepsilon^+$ so that $S_{\varepsilon,\alpha} \subseteq^*_\varepsilon B$. This is justified by the assumption that $2^{\lambda_\varepsilon} = \lambda_\varepsilon^+$, so we can enumerate the members of $D_\varepsilon$ by $\{ A_\gamma : \gamma < \lambda^+_\varepsilon \}$. For every $\gamma < \lambda^+_\varepsilon$, choose an enumeration of the collection $\{ A_\beta : \beta < \gamma \}$ as $\{ A'_\alpha : \alpha < \gamma' \}$ such that $\gamma' \leq \lambda_\varepsilon$ (the new enumeration is needed whenever $\gamma > \lambda_\varepsilon$, and we want to arrange our sets below $\lambda_\varepsilon$). Define $S_{\varepsilon,\gamma} = \Delta \{ A'_\alpha : \alpha < \gamma' \}$.
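\par \noindent Recall that for a sequence $\langle S_\beta : \beta < \delta \rangle$ of subsets of $\lambda_\varepsilon$ (with $\delta \leq \lambda_\varepsilon$) the diagonal intersection is the set $$ \Delta \{ S_\beta : \beta < \delta \} = \{ \xi < \lambda_\varepsilon : \xi \in S_\beta \ {\rm for \ every} \ \beta < {\rm min} \{ \xi, \delta \} \}, $$ \par \noindent so $\Delta \{ S_\beta : \beta < \delta \} \setminus S_\beta \subseteq \beta + 1$ for every $\beta < \delta$; this is the almost-inclusion used in both claims above.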
Now, for every $B \in D_\varepsilon$ pick an ordinal $\delta$ above the index of $B$ in the enumeration of the members of $D_\varepsilon$; then $S_{\varepsilon,\delta} \subseteq^*_\varepsilon B$, as required. Now we can show that the product $\prod \limits_{\varepsilon < \kappa} (D_\varepsilon, \subseteq^*) / J^{\rm bd}_\kappa$ is $\Upsilon$-directed. Assume $A \subseteq \prod \limits_{\varepsilon < \kappa} (D_\varepsilon, \subseteq^*)$, and $|A| = \Upsilon' < \Upsilon$. Choose an enumeration $\{ \bar{C}_\alpha : \alpha < \Upsilon' \}$ of $A$; a typical member $\bar{C}_\alpha \in A$ is a sequence of the form $\langle C_\varepsilon^\alpha : \varepsilon < \kappa \rangle$ such that $C_\varepsilon^\alpha \in D_\varepsilon$ for every $\varepsilon < \kappa$. For each $\alpha < \Upsilon'$ we assign a vector $\bar{j}_\alpha = \langle j^\alpha_\varepsilon : \varepsilon < \kappa \rangle$ in the product $\prod \limits_{\varepsilon < \kappa} \lambda_\varepsilon^+$ as follows. For $\varepsilon < \kappa$ let $j_\varepsilon^\alpha$ be the index of the set $C_\varepsilon^\alpha$ in the enumeration of the members of $D_\varepsilon$ mentioned above. Define, now, the following set: $$ A' = \{ \bar{j}_\alpha : \alpha < \Upsilon' \} \subseteq \prod \limits_{\varepsilon < \kappa} \lambda^+_\varepsilon $$ \par \noindent Clearly $|A'| < \Upsilon$, and by assumption this product is $\Upsilon$-directed, so we can choose a member $\bar{j} \in \prod \limits_{\varepsilon < \kappa} \lambda_\varepsilon^+$ which is an upper bound of $A'$. This means that $\alpha < \Upsilon' \Rightarrow \bar{j}_\alpha \leq_{J^{\rm bd}_\kappa} \bar{j}$. \newline $\bar{j}$ produces a member $\bar{C}$ of the product $\prod \limits_{\varepsilon < \kappa} (D_\varepsilon, \subseteq^*)$, as follows: for each $\varepsilon < \kappa$ we define $C_\varepsilon = S_{\varepsilon,j_\varepsilon}$, and then $\bar{C} = \langle C_\varepsilon : \varepsilon < \kappa \rangle$.
Now $\bar{C}$ is an upper bound for the set $A$, and the proof is complete. \qedref{directed1} The theorems above (and consequently, the strong relation that was proved in the first section) require the existence of a supercompact cardinal in the ground model. The combinatorial result is proved at a singular cardinal which is a limit of measurables. It is plausible that similar results can be obtained below the first measurable cardinal, and under an assumption weaker than a supercompact cardinal. We indicate that some forcing which kills measurability but keeps enough properties of the product $\prod \limits_{\varepsilon < \kappa} (D_\varepsilon, \subseteq^*) / J_\kappa^{\rm bd}$ would be required. We hope to continue this subject in a subsequent paper. \end{document}
\begin{document} \title{Disparity Between Batches \ as a Signal for Early Stopping} \begin{abstract} We propose a metric for evaluating the generalization ability of deep neural networks trained with mini-batch gradient descent. Our metric, called \emph{gradient disparity}, is the $\ell_2$ norm distance between the gradient vectors of two mini-batches drawn from the training set. It is derived from a probabilistic upper bound on the difference between the classification errors over a given mini-batch, when the network is trained on this mini-batch and when the network is trained on another mini-batch of points sampled from the same dataset. We empirically show that gradient disparity is a very promising early-stopping criterion (i) when data is limited, as it uses all the samples for training and (ii) when available data has noisy labels, as it signals overfitting better than the validation data. Furthermore, we show in a wide range of experimental settings that gradient disparity is strongly related to the generalization error between the training and test sets, and that it is also very informative about the level of label noise. \keywords{Early Stopping, Generalization, Gradient Alignment, Overfitting, Neural Networks, Limited Datasets, Noisy Labels.} \end{abstract} \input{Introduction.tex} \section{Related Work}\label{sec:rel} The coherent gradient hypothesis \cite{chatterjee2020coherent} states that the gradient is stronger in directions where similar examples exist and towards which the parameter update is biased. He and Su \cite{He2020The} study the local elasticity phenomenon, which measures how the prediction over one sample changes, as the network is updated on another sample. Motivated by \cite{He2020The}, reference \cite{deng2020toward} proposes generalization upper bounds using locally elastic stability. 
The generalization penalty introduced in our work measures how the prediction over one sample (batch) changes when the network is updated on the same sample, instead of being updated on another sample. Finding a practical metric that completely captures the generalization properties of deep neural networks, and in particular indicates the level of label noise and decreases with the size of the training set, is still an active research direction \cite{dziugaite2017computing,neyshabur2017exploring,nagarajan2019uniform,Chatterji2020The}. Recently, there have been a few studies that propose similarity between gradients as a generalization metric. The benefit of tracking generalization by measuring the similarity between gradient vectors is its tractability during training, and the dispensable access to unseen data. Sankararaman et al.~\cite{sankararaman2019impact} propose gradient confusion, which is a bound on the inner product of two gradient vectors, and shows that the larger the gradient confusion is, the slower the convergence is. Gradient interference (when the gradient inner product is negative) has been studied in multi-task learning, reinforcement learning and temporal difference learning \cite{riemer2018learning,liutoward,bengio2020interference}. Yin et al. \cite{yin2017gradient} study the relation between gradient diversity, which measures the dissimilarity between gradient vectors, and the convergence performance of distributed SGD algorithms. Fort et al. \cite{fort2019stiffness} propose a metric called stiffness, which is the cosine similarity between two gradient vectors, and shows empirically that it is related to generalization. Fu et al. \cite{fu2020rethinking} study the cosine similarity between two gradient vectors for natural language processing tasks. 
Reference \cite{mehta2020extreme} measures the alignment between the gradient vectors within the same class (denoted by $\Omega_{c}$), and studies the relation between $\Omega_{c}$ and generalization as the scale of initialization (the variance of the probability distribution the network parameters are initially drawn from) is increased. These metrics are usually not meant to be used as early stopping criteria, and indeed in Table~\ref{tab:RWmain} and Table~\ref{tab:RW} in the appendix, we observe that none of them consistently outperforms $k$-fold cross-validation. Another interesting line of work is the study of the variance of gradients in deep learning settings. Negrea et al. \cite{negrea2019information} derive mutual information generalization error bounds for stochastic gradient Langevin dynamics (SGLD) as a function of the sum (over the iterations) of square gradient incoherences, which is closely related to the variance of gradients. Two-sample gradient incoherences also appear in \cite{haghifam2020sharpened}; they are taken between a training sample and a ``ghost'' sample that is not used during training and is therefore taken from a validation set (unlike gradient disparity). The upper bounds in \cite{negrea2019information,haghifam2020sharpened} are cumulative bounds that increase with the number of iterations and are not intended to be used as early stopping criteria. Gradient disparity, in contrast, can be used as an early stopping criterion not only for SGD with additive noise (such as SGLD), but also for other adaptive optimizers. Reference \cite{qian2020impact} shows that the variance of gradients is a decreasing function of the batch size. However, reference \cite{jastrzebski2020break} hypothesizes that gradient variance counter-intuitively increases with the batch size, by studying the effect of the learning rate on the variance of gradients; this is consistent with our results on convolutional neural networks in Section~\ref{sec:gen}.
References \cite{jastrzebski2020break,qian2020impact} mention the connection between variance of gradients and generalization as promising future directions. Our study shows that variance of gradients used as an early stopping criterion outperforms $k$-fold cross-validation (see Table~\ref{tab:RW}). Liu et al. \cite{Liu2020Understanding} propose a relation between gradient signal-to-noise ratio (SNR), called GSNR, and the one-step generalization error, with the assumption that both the training and test sets are large. Mahsereci et al. \cite{mahsereci2017early} also study gradient SNR and propose an early stopping criterion called evidence-based criterion (EB) that eliminates the need for a held-out validation set. Reference \cite{liu2008optimized} proposes an early stopping criterion based on the signal-to-noise ratio figure, which is further studied in \cite{piotrowski2013comparison}, a study that shows the average test error achieved by standard early stopping is lower than the one obtained by this criterion. Zhang et al. \cite{zhang2021optimization} empirically show that the variance term in the bias-variance decomposition of the loss function dominates the variations of the test loss, and hence propose optimization variance (OV) as an early stopping criterion. \paragraph{Summary of Comparison to Related Work} In Table~\ref{tab:RWmain} and Appendix~\ref{app:var}, we compare gradient disparity (GD) to EB, GSNR, gradient inner product, sign of the gradient inner product, variance of gradients, cosine similarity, $\Omega_c$, and OV. We observe that the only metrics that consistently outperform $k$-fold cross-validation as early stopping criteria across various settings (see Table~\ref{tab:RW} in the appendix), and that reflect well the label noise level (see in Figs.~\ref{fig:eb} and \ref{fig:inner_prod} that metrics such as EB and $\text{sign}(g_i\cdot g_j)$ do not correctly detect the label noise level), are gradient disparity and variance of gradients. 
The two are analytically very close as discussed in Appendix~\ref{app:var2}. However, we observe that the correlation between gradient disparity and the test loss is in general larger than the correlation between variance of gradients and the test loss (see Table~\ref{tab:var} in the appendix). \begin{table*}[t] \caption{Test error (TE) and test loss (TL) achieved by using various metrics as early stopping criteria for an AlexNet trained on the MNIST dataset with 50\% random labels. See Table~\ref{tab:RW} in the appendix for further details and experiments. }\label{tab:RWmain} \vspace*{1em} \begin{subtable}{\linewidth}{ \resizebox{\textwidth}{!}{ \begin{tabular}{c|c|cccccccc|c|c} \toprule & Min & GD/Var \hspace*{-0.7em} & EB \hspace*{-0.7em} & GSNR & $g_i\cdot g_j$ & \hspace*{-0.3em} $\text{sign}(g_i\cdot g_j)$ & $\cos(g_i\cdot g_j)$ & $\Omega_c$ \hspace*{-0.7em} & OV & $k$-fold& No ES\\ \midrule \multicolumn{1}{l|}{TE} & $13.76$ & \ul{$\mathbf{16.66}$} & $24.63$ & $35.68$ & $37.92$ & $24.63$ & $35.68$ & $29.40$ & $34.36$ & $17.86$ & $25.72$\\ TL & $0.75$ & \ul{$1.08$} & \ul{$\mathbf{0.86}$} & $1.68$ & $1.82$ & \ul{$\mathbf{0.86}$} & $1.68$ & $1.46$ & $1.65$& $1.09$ & $0.91$\\ \bottomrule \end{tabular}}} \end{subtable} \end{table*} \section{Generalization Penalty}\label{sec:pre} Consider a classification task with input $x \in \mathcal{X} \coloneqq \mathbb{R}^n$ and ground truth label $y\in \{1, 2, \cdots, k \}$, where $k$ is the number of classes. Let $h_w \in \mathcal{H}: \mathcal{X} \rightarrow \mathcal{Y} \coloneqq \mathbb{R}^k$ be a predictor (classifier) parameterized by the parameter vector $w \in \mathbb{R}^d$, and $l(\cdot, \cdot)$ be the 0-1 loss function $ {l\left(h_w(x), y\right) = \mathbbm{1}\left[h_w(x)[y] < \max_{j\neq y}h_w(x)[j]\right] } $ for all $h_w \in \mathcal{H}$ and $(x,y)\in \mathcal{X} \times \{1, 2, \cdots, k \}$. 
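To make the 0-1 loss above concrete, here is a minimal NumPy sketch; the function name `zero_one_loss` and the toy score vector are our own illustrations, not from the paper:

```python
import numpy as np

def zero_one_loss(scores, y):
    """0-1 loss for a single sample: 1 if the score of the true class y
    is strictly below the best competing class score, else 0."""
    competitors = np.delete(scores, y)  # scores of all classes j != y
    return int(scores[y] < competitors.max())

scores = np.array([2.0, 0.5, -1.0])   # toy 3-class score vector h_w(x)
assert zero_one_loss(scores, 0) == 0  # true class wins -> no loss
assert zero_one_loss(scores, 2) == 1  # true class is beaten -> loss 1
```

Note that ties incur no loss, since the indicator uses a strict inequality.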
The expected loss and the empirical loss over the training set $S$ of size $m$ are respectively defined as \begin{align} L(h_w) &= \mathbb{E}_{(x,y)\sim D} \left[l\left(h_w(x),y\right)\right], \label{eq:exp_loss} \end{align} and \begin{align} L_{S}(h_w) = \frac{1}{m} \sum_{i=1}^{m} l(h_w(x_i),y_i) , \label{eq:train_loss} \end{align} where $D$ is the probability distribution of the data points and $S=\{(x_i,y_i)\}_{i=1}^m$ is a collection of $m$ i.i.d. samples drawn from $D$. Similar to the notation used in \cite{dziugaite2017computing}, distributions on the hypothesis space $\mathcal{H}$ are simply distributions on the underlying parameterization. With some abuse of notation, $\nabla L_{S_i}$ refers to the gradient with respect to the surrogate differentiable loss function, which in our experiments is cross entropy\footnote{We have also studied networks trained with the mean square error in Appendix~\ref{app:mse}, and we observe that there is a strong positive correlation between the test error/loss and gradient disparity for this choice of the surrogate loss function as well (see Fig.~\ref{fig:train_size_mse}).}. In a mini-batch gradient descent (SGD) setting, let mini-batches $S_1$ and $S_2$ have sizes $m_1$ and $m_2$, respectively, with ${m_1 + m_2 \leq m}$. Let ${w = w^{(t)}}$ be the parameter vector at the beginning of an iteration $t$. If $S_1$ is selected for the next iteration, $w$ gets updated to ${w_1 = w^{(t+1)}}$ with \begin{equation}\label{eq:w1} w_1 = w - \gamma \nabla L_{S_1}\left(h_{w}\right) , \end{equation} where $\gamma$ is the learning rate. The generalization penalty $\mathcal{R}_2$ is defined as the gap between the loss over $S_2$, $L_{S_2}\left(h_{w_1}\right)$, and its target value, $L_{S_2}\left(h_{w_2}\right)$, at the end of iteration $t$. When selecting $S_1$ for the parameter update, Eq.~(\ref{eq:w1}) makes a step towards learning the input-output relations of mini-batch $S_1$.
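In symbols, the verbal definition just given reads as follows (the expression for $\mathcal{R}_1$ is the symmetric one, with the roles of $S_1$ and $S_2$ exchanged):

```latex
\[
  \mathcal{R}_2 \;=\; L_{S_2}\!\left(h_{w_1}\right) - L_{S_2}\!\left(h_{w_2}\right),
  \qquad
  \mathcal{R}_1 \;=\; L_{S_1}\!\left(h_{w_2}\right) - L_{S_1}\!\left(h_{w_1}\right),
\]
% where $w_i = w - \gamma \nabla L_{S_i}(h_w)$ for $i \in \{1,2\}$,
% as in Eq.~(\ref{eq:w1}).
```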
If this negatively affects the performance on mini-batch~$S_2$,~$\mathcal{R}_2$ will be large; the model is learning the data structures that are unique to $S_1$ and that do not appear in~$S_2$. Because~$S_1$ and $S_2$ are mini-batches of points sampled from the same distribution $D$, they have data structures in common. If, throughout the learning process, we consistently observe that, in each update step, the model learns structures unique to only one mini-batch, then it is very likely that the model is memorizing the labels instead of learning the common data structures. This is captured by the generalization penalty~$\mathcal{R}$. We adapt the PAC-Bayesian framework \cite{mcallester1999pac,mcallester1999some} to account for the trajectory of the learning algorithm; for each learning iteration $t$ we define a prior and two possible posteriors, depending on the batch selection. Let $w\sim P$ follow a prior distribution~$P$, which is an $\mathcal{F}_t$-measurable function, where $\mathcal{F}_t$ denotes the filtration of the available information at the beginning of iteration $t$. Let $h_{w_1}, h_{w_2}$ be the two learned single predictors, at the end of iteration $t$, from $S_1$ and $S_2$, respectively. In this framework, for $i \in \{1,2\}$, each predictor $h_{w_i}$ is randomized and becomes $h_{\nu_i}$ with $\nu_i = w_i + u_i$, where $u_i$ is a random variable whose distribution might depend on $S_i$. Let $Q_i$ be the distribution of~$\nu_i$, which is a distribution over the predictor space $\mathcal{H}$ that depends on $S_i$ via $w_i$ and possibly $u_i$. Let $\mathcal{G}_i$ be a $\sigma$-field such that $\sigma(S_i) \cup \mathcal{F}_t \subset \mathcal{G}_i$ and such that the posterior distribution $Q_i$ is $\mathcal{G}_i$-measurable for $i \in \{1,2\}$.
We further assume that the random variable $\nu_1\sim Q_1$ is statistically independent from the draw of the mini-batch $S_2$ and, vice versa, that $\nu_2\sim Q_2$ is independent from the batch $S_1$\footnote{Mini-batches $S_1$ and $S_2$ are drawn without replacement, and the random selection of indices of mini-batches $S_1$ and $S_2$ is independent from the dataset $S$. Hence, similarly to \cite{negrea2019information,dziugaite2020role}, we have $\sigma(S_1) \perp \!\!\! \perp \sigma(S_2)$.}, i.e., $\mathcal{G}_1 \perp \!\!\! \perp \sigma(S_2)$ and $\mathcal{G}_2 \perp \!\!\! \perp \sigma(S_1)$. \begin{theorem}\label{THM1} For any $\delta \in (0,1]$, with probability at least $1-\delta$ over the sampling of sets $S_1$ and $S_2$, the sum of the expected penalties conditional on $S_1$, and $S_2$, respectively, satisfies \begin{align}\label{eq:mybound} \mathbb{E} \left[\mathcal{R}_1\right] + \mathbb{E} \left[\mathcal{R}_2\right] \leq \sqrt{\frac{2\text{KL}(Q_2||Q_1) + 2\ln{\frac{2m_2}{\delta}}}{{m_2}-2}} + \sqrt{\frac{2\text{KL}(Q_1||Q_2) + 2\ln{\frac{2m_1}{\delta}}}{{m_1}-2}} . \end{align} \end{theorem} In this paper, the goal is to get a signal of overfitting that indicates at the beginning of each iteration $t$ whether to stop or to continue training. This signal should track the performance of the model at the end of iteration~$t$ by investigating its evolution over all the possible outcomes of the batch sampling process during this iteration. For simplicity, we consider two possible outcomes: either mini-batch $S_1$ or mini-batch $S_2$ is chosen for this iteration (we later in the next section extend to more pairs of batches). If we were to use bounds such as the ones in \cite{mcallester2003simplified,neyshabur2017pac} for one iteration at a time, the generalization error at the end of that iteration can be bounded by a function of either $\text{KL}(Q_1||P)$ or $\text{KL}(Q_2||P)$, depending on the selected batch. 
Therefore, as each of the two batches is equally likely to be sampled, we should track $\text{KL}(Q_1||P)$ and $\text{KL}(Q_2||P)$ for a signal of overfitting at the end of the iteration, which requires in turn access to the three distributions $P$, $Q_1$ and $Q_2$. In contrast, the upper bound on the generalization penalty given in Theorem~\ref{THM1} only requires the two distributions $Q_1$ and $Q_2$, which is a first step towards a simpler metric since, loosely speaking, the symmetry between the random choices for $S_1$ and $S_2$ should carry over these two distributions, leading us to assume the random perturbations $u_1$ and $u_2$ to be identically distributed. If furthermore we assume them to be Gaussian, then we show in the next section that $\text{KL}(Q_2||Q_1)$ and $\text{KL}(Q_1||Q_2)$ are equal and boil down to a very tractable generalization metric, which we call gradient disparity. \section{Gradient Disparity}\label{sec:BGP} In Section~\ref{sec:pre}, the randomness modeled by the additional perturbation $u_i$, conditioned on the current mini-batch $S_i$, comes from (i) the parameter vector at the beginning of the iteration $w$, which itself comes from the random parameter initialization and the stochasticity of the parameter updates until that iteration, and (ii) the gradient vector $\nabla L_{S_i}$ (simply denoted by $g_i$), which may also be random because of the possible additional randomness in the network structure due for instance to dropout \cite{srivastava2014dropout}. A common assumption made in the literature is that the random perturbation $u_i$ follows a normal distribution \cite{bellido1993backpropagation,neyshabur2017pac}. The upper bound in Theorem~\ref{THM1} takes a particularly simple form if we assume that for $i \in \{1,2\}$, $u_i$ are zero mean i.i.d. normal variables (${u_i \sim \mathcal{N} (0, \sigma^2 I)} $), and that $w_i$ is fixed, as in the setting of \cite{dziugaite2017computing}. 
As ${w_i=w - \gamma g_i}$ for $i \in \{1,2\}$, the KL-divergence between $Q_1 = \mathcal{N}(w_1, \sigma^2 I)$ and ${Q_2 = \mathcal{N}(w_2, \sigma^2 I)}$ (Lemma~\hyperref[lem:1]{1} in Appendix~\ref{app:add}) is simply \begin{align}\label{eq:kl} \text{KL}(Q_1 || Q_2) = \frac{1}{2} \frac{\gamma^2}{\sigma^2} \norm{g_1 - g_2}_2^2 = \text{KL}(Q_2 || Q_1) , \end{align} which shows that, keeping a constant step size $\gamma$ and assuming the same variance for the random perturbations $\sigma^2$ in all the steps of the training, the bound in Theorem \ref{THM1} is driven by $\norm{g_1 - g_2}_2 $. This indicates that the smaller the $\ell_2$ distance between gradient vectors is, the lower the upper bound on the generalization penalty is, and therefore the closer the performance of a model trained on one batch is to a model trained on another batch. For two mini-batches of points $S_i$ and $S_j$, with respective gradient vectors~$g_i$ and $g_j$, we define the \emph{gradient disparity} (GD) between $S_i$ and $S_j$ as \begin{equation}\label{eq:bgd} \mathcal{D}_{i,j} = \norm{g_i - g_j}_2 . \end{equation} To compute $\mathcal{D}_{i,j}$, a first option is to sample $S_i$ from the training set and $S_j$ from the held-out validation set, which we refer to as the ``train-val'' setting, following~\cite{fort2019stiffness}. The generalization penalty $\mathcal{R}_j$ in this setting measures how much, during the course of an iteration, a model updated on a training set is able to generalize to a validation set, making the resulting (``train-val'') gradient disparity $\mathcal{D}_{i,j}$ a natural candidate for tracking overfitting. But it requires access to a validation set to sample $S_j$, which we want to avoid. The second option is to sample both $S_i$ and $S_j$ from the training set, as proposed in this paper, to yield now a value of $\mathcal{D}_{i,j}$ that we could call ``train-train'' gradient disparity (GD) by analogy.
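To make the two displayed equations concrete, here is a minimal NumPy sketch of gradient disparity (Eq.~(\ref{eq:bgd})), of its link with $\text{KL}(Q_1 || Q_2)$ in Eq.~(\ref{eq:kl}), and of the general Gaussian KL formula of Lemma~\hyperref[lem:1]{1} that this equation specializes. Function names and the toy values of $\gamma$ and $\sigma$ are our own illustrations, not from the paper:

```python
import numpy as np

def gradient_disparity(g_i, g_j):
    """Gradient disparity D_{i,j} = ||g_i - g_j||_2 between two
    mini-batch gradient vectors."""
    return np.linalg.norm(g_i - g_j)

def kl_gauss(mu1, S1, mu2, S2):
    """KL(N(mu1,S1) || N(mu2,S2)), closed form for multivariate Gaussians."""
    d = len(mu1)
    S2inv = np.linalg.inv(S2)
    diff = mu2 - mu1
    return 0.5 * (np.trace(S2inv @ S1) - d
                  + diff @ S2inv @ diff
                  + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

rng = np.random.default_rng(0)
w = rng.normal(size=4)                 # current parameter vector
g1, g2 = rng.normal(size=4), rng.normal(size=4)
gamma, sigma = 0.1, 0.05               # toy learning rate and perturbation std

# Posteriors Q_i = N(w - gamma * g_i, sigma^2 I); the general Gaussian KL
# collapses to (gamma^2 / (2 sigma^2)) * ||g1 - g2||^2, i.e. a function
# of the gradient disparity alone.
S = sigma ** 2 * np.eye(4)
kl_full = kl_gauss(w - gamma * g1, S, w - gamma * g2, S)
kl_gd = 0.5 * (gamma / sigma) ** 2 * gradient_disparity(g1, g2) ** 2
assert np.isclose(kl_full, kl_gd)
```

The averaged disparity $\overline{\mathcal{D}}$ used later in the text is simply this pairwise quantity averaged over ordered pairs drawn from $s$ mini-batches.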
Importantly, we observe a strong positive correlation between the two types of gradient disparities ($\rho=0.957$) in Fig.~\ref{fig:valtrain}. Therefore, we can expect that both of them do (almost) equally well in detecting overfitting, with the advantage that the latter does not require to set data aside, contrary to the former. We will therefore consider GD when both batches are sampled from the training set and evaluate it in this paper. \begin{figure} \caption{``Train-val" gradient disparity versus ``train-train" gradient disparity for 220 experimental settings that vary in architecture, dataset, training set size, label noise level and initial random seed. Pearson's correlation coefficient is $\rho=0.957$. } \label{fig:valtrain} \end{figure} To track the upper bound of the generalization penalty for more pairs of mini-batches, we can compute an average gradient disparity over $B$ mini-batches, which requires all the $B$ gradient vectors at each iteration, which is computationally expensive if $B$ is large. We approximate it by computing GD over only a much smaller subset of the mini-batches, of size $s \ll B$, \begin{equation*} \overline{\mathcal{D}} = \sum_{i=1}^{s} \sum_{\substack{j=1 , j\neq i }}^{s} \frac{\mathcal{D}_{i,j}}{s(s-1)} . \end{equation*} In our experiments, $s=5$; we observe that such a small subset is already sufficient (see Appendix~\ref{app:s} for an experimental comparison of different values of $s$). Consider two training iterations $t_1$ and $t_2$ where $t_1 \ll t_2$. At earlier stages of the training (iteration $t_1$), the parameter vector ($w^{(t_1)}$) is likely to be located in a steep region of the training loss landscape, where the gradient vector of training batches, $g_i$, and the training loss $L_{S_i}(h_{w^{(t_1)}})$ take large values. 
At later stages of training (iteration $t_2$), the parameter vector ($w^{(t_2)}$) is more likely to be in a flatter region of the training loss landscape, where $g_i$ and $L_{S_i}(h_{w^{(t_2)}})$ take small values. To compensate for this scale mismatch when comparing the distance between gradient vectors at different stages of training, we re-scale the loss values within each batch before computing $\overline{\mathcal{D}}$ (see Appendix~\ref{app:norm} for more details). Note that this re-scaling is only done for the purpose of using GD as a metric, and therefore does not have any effect on the training process itself. We focus on the vanilla SGD optimizer. In Appendix~\ref{app:opt}, we extend the analysis to other stochastic optimization algorithms: SGD with momentum, Adagrad, Adadelta, and Adam. In all these optimizers, we observe that GD (Eq.~(\ref{eq:bgd})) appears in $\text{KL}(Q_1 || Q_2)$ together with other factors that depend on a decaying average of past gradient vectors. Experimental results support the use of GD as an early stopping metric also for these popular optimizers (see Fig.~\ref{fig:opt} in Appendix~\ref{app:opt}). For the vanilla SGD optimizer, we also provide an alternative and simpler derivation leading to gradient disparity from the linearization of the loss function in Appendix~\ref{app:simple}. \section{Early Stopping Criterion}\label{sec:early} \input{kfold.tex} \section{Discussion and Final Remarks}\label{sec:gen} We propose gradient disparity (GD) as a simple-to-compute early stopping criterion that is particularly well-suited when the dataset is limited and/or noisy. Beyond indicating the early stopping time, GD is well aligned with factors that improve or degrade the generalization performance of a model, which often have a strikingly similar effect on the value of GD as well.
We briefly discuss in this section some of these observations that further validate the use of GD as an effective early stopping criterion; more details are provided in the appendix. \textbf{Label Noise Level.} We observe that GD reflects well the label noise level throughout the training process, even at early stages of training, where the generalization gap fails to do so (see Fig.~\ref{fig:random}, and Figs.~\ref{fig:fc_mnist}, \ref{fig:fc_cifar10}, and~\ref{fig:resnet_cifar100} in Appendix~\ref{app:more}). \begin{figure*} \caption{Test error} \caption{Generalization error } \label{fig:random} \end{figure*} \textbf{Training Set Size.} We observe that GD, similarly to the test error, decreases with training set size, unlike many previous metrics as shown by \cite{neyshabur2017exploring,nagarajan2019uniform}. Moreover, we observe that applying data augmentation decreases the values of both GD and the test error (see Fig.~\ref{fig:DA} and Fig.~\ref{fig:train_size2} in Appendix~\ref{app:more}). \begin{figure*}\label{fig:DA} \end{figure*} \textbf{Batch Size.} We observe that both the test error and GD increase with batch size. This observation is counter-intuitive because one might expect that gradient vectors get more similar when they are averaged over a larger batch. GD matches the ranking of test errors for different networks, trained with different batch sizes, as long as the batch sizes are not too large (see Fig.~\ref{fig:batch_size2} in Appendix~\ref{app:more}). \begin{figure*}\label{fig:batch_size} \end{figure*} \textbf{Width.} We observe that both the test error and GD (normalized with respect to the number of parameters) decrease with the network width for ResNet, VGG and fully connected neural networks (see Fig.~\ref{fig:width} and Fig.~\ref{fig:width2} in Appendix~\ref{app:more}). 
\begin{figure*}\label{fig:width} \end{figure*} Gradient disparity belongs to the same class of metrics based on the similarity between two gradient vectors \cite{sankararaman2019impact,fort2019stiffness,fu2020rethinking,mehta2020extreme,jastrzebski2020break}. A common drawback of all these metrics is that they are not informative when the gradient vectors are very small. In practice however, we observe (see for instance Figure~\ref{fig:fc_mnist0} in the appendix) that the time at which the test and training losses start to diverge, which is the time when overfitting kicks in, does not only coincide with the time at which gradient disparity increases, but also occurs much before the training loss becomes infinitesimal. This drawback is therefore unlikely to cause a problem for gradient disparity when it is used as an early stopping criterion. Nevertheless, as a future direction, it would be interesting to explore this further especially for scenarios such as epoch-wise double-descent \cite{heckel2020early}. \section*{Acknowledgments} This work has been accepted at the ECML PKDD 2021 conference (\url{https://2021.ecmlpkdd.org/wp-content/uploads/2021/07/sub_1075.pdf}). We would like to thank anonymous reviewers and Tianzong Zhang for their helpful comments. \appendix \section{Organization of the Appendix} This appendix includes sections that are provided here for the sake of completeness and reproducibility (such as Sections~\ref{app:det} and \ref{app:more}) and/or for lack of space in the main paper (such as Sections~\ref{app:opt} and \ref{app:var}). It is structured as follows. \begin{itemize} \item Appendix~\ref{app:proof} gives the proof of Theorem~\ref{THM1}, which also uses Hoeffding’s bound recalled in Appendix~\ref{app:add}. \item Appendix~\ref{app:simple} provides a simple relation between gradient disparity and generalization penalty from linearization. 
\item A number of details common to all experiments are provided in Appendix~\ref{app:det}, which discusses in particular how the loss is re-scaled before computing gradient disparity (Appendix~\ref{app:norm}) and how gradient disparity can also be applied to networks trained with the mean square error (Appendix~\ref{app:mse}). \item A detailed comparison of gradient disparity to $k$-fold cross validation as early stopping criteria is given in Appendix~\ref{app:kfold}, which includes a study on the robustness to the early stopping threshold in Appendix~\ref{app:patthre} (Tables~\ref{tab:robustness},~\ref{tab:robustnessn} and~\ref{tab:patience}) and additional experiments on four image classification datasets (Figs.~\ref{fig:kfoldl},~\ref{fig:kfoldn}, and \ref{fig:mrnet} together with Tables~\ref{tab:kfoldl}, and \ref{tab:kfoldn}). \item Additional experiments on benchmark datasets are provided in Appendix~\ref{app:more} to study the effect of label noise level, training set size, batch size and network width on the value of gradient disparity. The results, which support the claims made in the main paper, are displayed in Figs.~\ref{fig:random} to \ref{fig:resnet_cifar100}. \item Besides the vanilla SGD algorithm adopted in the main paper, gradient disparity can be extended to other stochastic optimization algorithms (SGD with momentum, Adagrad, Adadelta, and Adam) as shown in Appendix~\ref{app:opt}. \item Finally, a detailed comparison with related work is presented in Appendix~\ref{app:var} (Tables~\ref{tab:RW} and \ref{tab:var} together with Figs.~\ref{fig:eb} and \ref{fig:inner_prod}). \end{itemize} \normalsize \section{Additional Theorem}\label{app:add} Hoeffding's bound is used in the proof of Theorem~\ref{THM1}, and Lemma~\hyperref[lem:1]{1} is used in Section~\ref{sec:BGP}. 
\begin{theorem}[Hoeffding's Bound]\label{thm:2} Let $Z_1,\cdots,Z_n$ be independent bounded random variables on $\left[a,b\right]$ (i.e., $Z_i \in [a,b]$ for all $1 \leq i \leq n$ with $ -\infty < a \leq b < \infty$). Then \begin{equation*} \mathbb{P} \left(\frac{1}{n} \sum_{i=1}^{n} \left(Z_i - \mathbb{E}[Z_i]\right) \geq t\right) \leq \exp \left(-\frac{2nt^2}{(b-a)^2}\right) \end{equation*} and \begin{equation*} \mathbb{P} \left(\frac{1}{n} \sum_{i=1}^{n} \left(Z_i - \mathbb{E}[Z_i]\right) \leq -t\right) \leq \exp \left(-\frac{2nt^2}{(b-a)^2}\right) \end{equation*} for all $t \geq 0$. \end{theorem} \textbf{Lemma 1\label{lem:1}} If $N_1 = \mathcal{N} (\mu_1, \Sigma_1)$ and $N_2 = \mathcal{N} (\mu_2, \Sigma_2)$ are two multivariate normal distributions in $\mathbb{R}^d$, where $\Sigma_1$ and $\Sigma_2$ are positive definite, \begin{align*} \text{KL}(N_1 || N_2) = \frac{1}{2} \Bigg( \text{tr} \left( \Sigma_2^{-1} \Sigma_1 \right) - d \Bigg. + (\mu_2 - \mu_1)^T \Sigma_2^{-1} (\mu_2 - \mu_1) + \Bigg. \ln \left(\frac{\det \Sigma_2}{\det \Sigma_1}\right) \Bigg) . \end{align*} \section{Proof of Theorem \ref{THM1}}\label{app:proof} \begin{proof} We compute the upper bound in Eq.~(\ref{eq:mybound}) using a similar approach as in \cite{mcallester2003simplified}. The main challenge in the proof is the definition of a function $X_{S_2}$ of the variables and parameters of the problem, which can then be bounded using similar techniques as in \cite{mcallester2003simplified}. $S_1$ is a batch of points (with size $m_1$) that is randomly drawn from the available set $S$ at the beginning of iteration $t$, and $S_2$ is a batch of points (with size $m_2$) that is randomly drawn from the remaining set $S \setminus S_1$. Hence, $S_1$ and $S_2$ are drawn from the set $S$ without replacement ($S_1 \cap S_2 = \emptyset$). 
Similar to the setting of \cite{negrea2019information,dziugaite2020role}, as the random selection of indices of $S_1$ and $S_2$ is independent of the dataset $S$, $\sigma(S_1) \perp \!\!\! \perp \sigma(S_2)$, and as a result, $\mathcal{G}_1 \perp \!\!\! \perp \sigma(S_2)$ and $\mathcal{G}_2 \perp \!\!\! \perp \sigma(S_1)$. Recall that $\nu_i$ is the random parameter vector at the end of iteration $t$ that depends on $S_i$, for $i \in \{1,2\}$. For a given sample set $S_i$, denote the conditional probability distribution of $\nu_i$ by $Q_{S_i}$. For ease of notation, we represent $Q_{S_i}$ by $Q_i$. Let us denote \begin{equation}\label{eq:delta} \Delta\left(h_{\nu_1}, h_{\nu_2}\right) \triangleq \left(L_{S_2}(h_{\nu_1}) - L(h_{\nu_1}) \right) - \left(L_{S_2}(h_{\nu_2}) - L(h_{\nu_2}) \right) , \end{equation} and \begin{equation}\label{eq:def_f_s} X_{S_2} \triangleq \sup_{Q_1,Q_2} \;\left(\frac{m_2}{2}-1\right) \mathbb{E}_{\nu_1\sim Q_1} \left[ \mathbb{E}_{\nu_2\sim Q_2} \left[ \left(\Delta\left(h_{\nu_1}, h_{\nu_2}\right)\right)^2\right] \right] - \text{KL}(Q_2||Q_1) . \end{equation} Note that $X_{S_2}$ is a random function of the batch $S_2$. Expanding the KL-divergence, we find that \begin{align*} \left(\frac{m_2}{2}-1\right) \; & \mathbb{E}_{\nu_1\sim Q_1} \left[ \mathbb{E}_{\nu_2\sim Q_2} \left[ \left(\Delta\left(h_{\nu_1}, h_{\nu_2}\right)\right)^2\right] \right] - \text{KL}(Q_2||Q_1) \\ &= \mathbb{E}_{\nu_1\sim Q_1} \left[ \left(\frac{m_2}{2}-1\right) \; \mathbb{E}_{\nu_2\sim Q_2} \left[ \left(\Delta\left(h_{\nu_1}, h_{\nu_2}\right)\right)^2\right] \right. + \left.
\mathbb{E}_{\nu_2\sim Q_2} \left[ \ln{\frac{Q_1(\nu_2)}{Q_2(\nu_2)}} \right] \right]\\ &\leq \mathbb{E}_{\nu_1\sim Q_1} \left[ \ln{\mathbb{E}_{\nu_2\sim Q_2} \left[e^{(\frac{m_2}{2}-1)\left(\Delta\left(h_{\nu_1}, h_{\nu_2}\right)\right)^2} \frac{Q_1(\nu_2)}{Q_2(\nu_2)}\right]} \right] \\ &= \mathbb{E}_{\nu_1\sim Q_1} \left[ \ln \mathbb{E}_{\nu_1^\prime\sim Q_1} \left[e^{(\frac{m_2}{2}-1)\left(\Delta\left(h_{\nu_1}, h_{\nu_1^\prime}\right)\right)^2}\right] \right], \end{align*} where the inequality above follows from Jensen's inequality as logarithm is a concave function. Therefore, again by applying Jensen's inequality \begin{equation*} X_{S_2} \leq \ln \mathbb{E}_{\nu_1\sim Q_1} \mathbb{E}_{\nu_1^\prime\sim Q_1} \left[e^{(\frac{m_2}{2}-1)\left(\Delta(h_{\nu_1}, h_{\nu_1^\prime})\right)^2}\right]. \end{equation*} Taking expectations over $S_2$, we have that \begin{align}\label{eq:22} \mathbb{E}_{S_2}\left[e^{X_{S_2}}\right] & \leq \mathbb{E}_{S_2} \mathbb{E}_{\nu_1\sim Q_1} \mathbb{E}_{\nu_1^\prime\sim Q_1} \left[e^{(\frac{m_2}{2}-1)\left(\Delta(h_{\nu_1}, h_{\nu_1^\prime})\right)^2}\right] \nonumber\\ &= \mathbb{E}_{\nu_1\sim Q_1} \mathbb{E}_{\nu_1^\prime\sim Q_1} \mathbb{E}_{S_2} \left[e^{(\frac{m_2}{2}-1)\left(\Delta(h_{\nu_1}, h_{\nu_1^\prime})\right)^2}\right] , \end{align} where the change of order in the expectation follows from the independence of the draw of the set $S_2$ from $\nu_1 \sim Q_1$ and $\nu_1^\prime \sim Q_1$, i.e., $Q_1$ is $\mathcal{G}_1$-measurable and $\mathcal{G}_1 \perp \!\!\! \perp \sigma(S_2)$. Now let \begin{equation*} Z_i \triangleq l(h_{\nu_1}(x_i), y_i) - l(h_{\nu_1^\prime}(x_i), y_i), \end{equation*} for all $1 \leq i \leq m_2$. Clearly, $Z_i \in [-1,1]$ and because of Eqs.~(\ref{eq:train_loss}) and of the definition of $\Delta$ in Eq. (\ref{eq:delta}), \begin{equation*} \Delta\left(h_{\nu_1}, h_{\nu_1^\prime}\right) = \frac{1}{m_2} \sum_{i=1}^{m_2} \left(Z_i - \mathbb{E}[Z_i]\right). 
\end{equation*} Hoeffding's bound (Theorem \ref{thm:2}) therefore implies that for any $t \geq 0$, \begin{equation}\label{eq:deltau} \mathbb{P}_{S_2} \left(\lvert \Delta\left(h_{\nu_1}, h_{\nu_1^\prime}\right)\rvert \geq t \right) \leq 2 e^{- \frac{m_2}{2} t^2} . \end{equation} Denoting by $p(\Delta)$ the probability density function of $\lvert \Delta\left(h_{\nu_1}, h_{\nu_1^\prime}\right) \rvert$, inequality~(\ref{eq:deltau}) implies that for any $t \geq 0$, \begin{equation}\label{eq:upperp} \int_{t}^{\infty} p(\Delta) d \Delta \leq 2 e^{- \frac{m_2}{2} t^2} . \end{equation} The density $\tilde{p}(\Delta)$ that maximizes $\int_{0}^{\infty} e^{(\frac{m_2}{2}-1)\Delta^2} p(\Delta) d \Delta$ (the term in the first expectation of the upper bound of Eq.~(\ref{eq:22})) is the density achieving equality in~(\ref{eq:upperp}), which is ${\tilde{p}(\Delta) = 2 m_2 \Delta e^{-\frac{m_2}{2}\Delta^2}}$. As a result, \begin{align*} \mathbb{E}_{S_2}\left[e^{(\frac{m_2}{2}-1)\Delta^2}\right] &\leq \int_{0}^{\infty} e^{(\frac{m_2}{2}-1)\Delta^2} 2 m_2 \Delta e^{-\frac{m_2}{2}\Delta^2} d \Delta \\&= \int_{0}^{\infty} 2 m_2 \Delta e^{-\Delta^2} d \Delta = m_2 \end{align*} and consequently, inequality (\ref{eq:22}) becomes \begin{equation*} \mathbb{E}_{S_2}\left[e^{X_{S_2}}\right] \leq m_2 . \end{equation*} Applying Markov's inequality to $X_{S_2}$, we therefore have that for any $0 < \delta \leq 1$, \begin{align*} \mathbb{P}_{S_2} \left[X_{S_2} \geq \ln\frac{2m_2}{\delta} \right] = \mathbb{P}_{S_2}\left[e^{X_{S_2}} \geq \frac{2m_2}{\delta}\right] \leq \frac{\delta}{2m_2}\mathbb{E}_{S_2}\left[e^{X_{S_2}}\right] \leq \frac{\delta}{2} .
\end{align*} Replacing $X_{S_2}$ by its expression defined in Eq.~(\ref{eq:def_f_s}), the previous inequality shows that with probability at least $1-\delta/2$ \begin{equation*} \left(\frac{m_2}{2}-1\right) \; \mathbb{E}_{\nu_1\sim Q_1} \mathbb{E}_{\nu_2\sim Q_2} \left[ \left(\Delta(h_{\nu_1}, h_{\nu_2})\right)^2\right] - \text{KL}(Q_2||Q_1) \leq \ln{\frac{2m_2}{\delta}} . \end{equation*} Using Jensen's inequality and the convexity of $\left(\Delta(h_{\nu_1}, h_{\nu_2})\right)^2$, and assuming that $m_2 > 2$, we therefore have that with probability at least $1 - \delta/2$, \begin{align*} \left(\mathbb{E}_{\nu_1\sim Q_1} \mathbb{E}_{\nu_2 \sim Q_2} [\Delta\left(h_{\nu_1}, h_{\nu_2}\right)]\right)^2 &\leq \mathbb{E}_{\nu_1\sim Q_1} \mathbb{E}_{\nu_2 \sim Q_2} \left[\left(\Delta\left(h_{\nu_1}, h_{\nu_2}\right)\right)^2\right] \\ & \leq \frac{\text{KL}(Q_2||Q_1) + \ln{\frac{2m_2}{\delta}}}{\frac{m_2}{2}-1} . \end{align*} Replacing $\Delta(h_{\nu_1}, h_{\nu_2})$ by its expression in Eq.~(\ref{eq:delta}), the above inequality yields that with probability at least $1-\delta/2$ over the choice of the sample set $S_2$, \begin{equation}\label{eq:1} \mathbb{E}_{\nu_1 \sim Q_1} \left[L_{S_2}(h_{\nu_1}) - L(h_{\nu_1}) \right] \\ \leq \mathbb{E}_{\nu_2 \sim Q_2} \left[L_{S_2}(h_{\nu_2}) - L(h_{\nu_2}) \right] + \sqrt{\frac{2 \text{KL}(Q_2||Q_1) + 2 \ln{\frac{2m_2}{\delta}}}{m_2-2}}. \end{equation} Similar computations with $S_1$ and $S_2$ switched, and with $m_1 > 2$, yield that with probability at least $1-\delta/2$ over the choice of the sample set $S_1$, \begin{equation}\label{eq:2} \mathbb{E}_{\nu_2 \sim Q_2} \left[L_{S_1}(h_{\nu_2}) - L(h_{\nu_2}) \right] \\ \leq \mathbb{E}_{\nu_1 \sim Q_1} \left[L_{S_1}(h_{\nu_1}) - L(h_{\nu_1}) \right] + \sqrt{\frac{2 \text{KL}(Q_1||Q_2) + 2 \ln{\frac{2m_1}{\delta}}}{m_1-2}}.
\end{equation} The events in Eqs.~(\ref{eq:1}) and (\ref{eq:2}) jointly hold with probability at least $1-\delta$ over the choice of the sample sets $S_1$ and $S_2$ (using the union bound and De Morgan's law), and by adding the two inequalities we therefore have \begin{multline*} \mathbb{E}_{\nu_1 \sim Q_1} \left[L_{S_2}(h_{\nu_1})\right] + \mathbb{E}_{\nu_2 \sim Q_2} \left[L_{S_1}(h_{\nu_2})\right] \leq \; \mathbb{E}_{\nu_2 \sim Q_2} \left[L_{S_2}(h_{\nu_2})\right] + \mathbb{E}_{\nu_1 \sim Q_1} \left[L_{S_1}(h_{\nu_1})\right] \\ + \sqrt{\frac{2\text{KL}(Q_2||Q_1) + 2\ln{\frac{2m_2}{\delta}}}{{m_2}-2}} + \sqrt{\frac{2\text{KL}(Q_1||Q_2) + 2\ln{\frac{2m_1}{\delta}}}{{m_1}-2}} , \end{multline*} which concludes the proof. \end{proof} \section{A Simple Connection Between Generalization Penalty and \\ Gradient Disparity}\label{app:simple} In this section, we present an alternative and much simpler connection between the notions of generalization penalty and gradient disparity than the one presented in Sections~\ref{sec:pre} and \ref{sec:BGP}. Recall that each update step of the mini-batch gradient descent is written as $w_i = w - \gamma g_i$ for $i \in \{1,2\}$. By applying a first order Taylor expansion over the loss, we have \begin{equation*} L_{S_1} (h_{w_1}) = L_{S_1} (h_{w-\gamma g_1}) \approx L_{S_1}(h_w) - \gamma g_1 \cdot g_1 . \end{equation*} The generalization penalties $\mathcal{R}_1$ and $\mathcal{R}_2$ would therefore be \begin{align*} \mathcal{R}_1 = L_{S_1} (h_{w_2}) - L_{S_1} (h_{w_1}) \approx \gamma g_1 \cdot (g_1 - g_2), \end{align*} and \begin{align*} \mathcal{R}_2 = L_{S_2} (h_{w_1}) - L_{S_2} (h_{w_2}) \approx \gamma g_2 \cdot (g_2 - g_1), \end{align*} respectively. Consequently, \begin{align*} \mathcal{R}_1 + \mathcal{R}_2 \approx \gamma \norm{g_1 - g_2}_2^2 . \end{align*} This derivation requires the loss function to be (approximately) linear near parameter vectors $w_1$ and $w_2$, which does not necessarily hold. 
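The first-order computation above can be checked numerically on a toy one-dimensional least-squares model; the batches, initial weight, and learning rate below are hypothetical illustrative choices, and since the quadratic loss is not linear, the match holds only up to $O(\gamma^2)$ terms.

```python
def loss(w, batch):
    # L_S(w): mean squared loss of the linear model x -> w * x on a batch of (x, y) pairs
    return sum((w * x - y) ** 2 for x, y in batch) / len(batch)

def grad(w, batch):
    # gradient of loss(w, batch) with respect to w
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

S1 = [(1.0, 2.0), (2.0, 3.5)]   # hypothetical batch 1
S2 = [(1.5, 1.0), (0.5, 2.0)]   # hypothetical batch 2
w, gamma = 0.5, 1e-3

g1, g2 = grad(w, S1), grad(w, S2)
w1, w2 = w - gamma * g1, w - gamma * g2   # the two candidate updates

R1 = loss(w2, S1) - loss(w1, S1)   # generalization penalty of batch 1
R2 = loss(w1, S2) - loss(w2, S2)   # generalization penalty of batch 2

approx = gamma * (g1 - g2) ** 2    # gamma * ||g1 - g2||_2^2 (one dimension)
print(R1 + R2, approx)             # the two values agree up to O(gamma^2) terms
```

With $\gamma = 10^{-3}$, the relative gap between $\mathcal{R}_1 + \mathcal{R}_2$ and $\gamma \norm{g_1 - g_2}_2^2$ is below one percent, and it shrinks further as $\gamma$ decreases.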
Therefore, in the main paper we only focus on the connection between generalization penalty and gradient disparity via Theorem~\ref{THM1}, which does not require such an assumption. Nevertheless, it is interesting to note that this simple derivation recovers the connection between generalization penalty and gradient disparity. \section{Common Experimental Details}\label{app:det} The training objective in our experiments is to minimize the cross-entropy loss, and both the cross entropy and the error percentage are displayed in the figures. The training error is computed using Eq.~(\ref{eq:train_loss}) over the training set. The empirical test error also follows Eq.~(\ref{eq:train_loss}) but is computed over the test set. The generalization loss (respectively, error) is the difference between the test and the training cross entropy losses (resp., classification errors). The batch size in our experiments is 128 unless otherwise stated. The SGD learning rate is $\gamma=0.01$ and no momentum is used (unless otherwise stated). All the experiments took at most a few hours on one Nvidia Titan X Maxwell GPU. All the reported values throughout the paper are averages over at least 5 runs. To present results throughout training, both epochs and iterations are used on the x-axis of the figures: an epoch is the time spent to pass through the entire dataset, and an iteration is the time spent to pass through one batch of the dataset. Thus, each epoch has $B$ iterations, where $B$ is the number of batches. The convolutional neural network configurations we use are AlexNet \cite{krizhevsky2012imagenet}, VGG \cite{simonyan2014very} and ResNet \cite{he2016deep}. In the experiments with varying width, we use a scaling factor to change the number of channels in the convolutional layers and the number of hidden units in the fully connected layers. The default configuration corresponds to scaling factor $=1$.
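For reference, the gradient disparity values reported in our figures reduce to plain $\ell_2$ distances between per-batch gradient vectors, averaged over ordered pairs of batches; below is a minimal sketch with hypothetical gradient vectors standing in for actual backpropagated gradients.

```python
import math
from itertools import permutations

def disparity(gi, gj):
    # D_{i,j} = || g_i - g_j ||_2 between the gradient vectors of batches i and j
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(gi, gj)))

def average_gradient_disparity(grads):
    # Mean of D_{i,j} over all s(s-1) ordered pairs of the s batch gradients.
    s = len(grads)
    return sum(disparity(gi, gj) for gi, gj in permutations(grads, 2)) / (s * (s - 1))

# hypothetical flattened gradient vectors for s = 3 batches
grads = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(average_gradient_disparity(grads))  # (sqrt(2) + 1 + 1) / 3
```

In an actual run, each vector would be the flattened gradient of the (re-scaled) loss of one batch at the current parameters, computed without applying the update.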
For the experiments with data augmentation (Fig.~\ref{fig:DA}) we use random crop with padding $= 4$ and random horizontal flip with probability~$= 0.5$. In experiments with a randomly labeled training set, we modify the dataset similarly to \cite{chatterjee2020coherent}: for a fraction of the training samples, equal to the amount of noise ($0\%$, $25\%$, $50\%$, $75\%$, or $100\%$), we choose the labels at random. For a classification dataset with $k$ classes, if the label noise is $25\%$, then on average ${75\% + 25\% \cdot 1/k}$ of the training points still have the correct label. \subsection{Re-scaling the Loss}\label{app:norm} \addtocounter{footnote}{-1} \begin{wrapfigure}[24]{R}{0.5\textwidth} \includegraphics[width=0.5\textwidth, height=0.28\textwidth]{normalize.png} \caption{Normalizing versus re-scaling the loss before computing the average gradient disparity $\overline{\mathcal{D}}$ for a VGG-11 trained on 12.8k points of the CIFAR-10 dataset\protect\footnotemark. For this experiment, Pearson's correlation coefficient between re-scaled gradient disparity (our chosen metric) and the test loss is $0.91$. Had we instead normalized the loss values, the correlation would have been $0.88$. Had we re-scaled with respect to the gradients (instead of the loss), it would have been $0.79$. }\label{fig:norm} \end{wrapfigure} Let us track the evolution of gradient disparity (Eq.~(\ref{eq:bgd})) during training. As training progresses, the training losses of all the batches start to decrease when they get selected for the parameter update. Therefore, the value of gradient disparity might decrease, not necessarily because the distance between the two gradient vectors is decreasing, but because the magnitude of each gradient vector is itself decreasing. To avoid this, a re-scaling or normalization is needed to compare gradient disparity at different stages of training.
If we perform a re-scaling with respect to the gradient vectors, then the gradient disparity between two batches $S_1$ and $S_2$ would be $\norm{g_1/\text{std}(g_1) - g_2/\text{std}(g_2) }_2$, where $\text{std}(g_i)$ is the standard deviation of the gradients within batch $S_i$ for $i \in \{1,2\}$. However, such a re-scaling would also absorb the variations of $g_1$ and $g_2$ with respect to each other. That is, if after an iteration $g_1$ is scaled by a factor $\alpha<1$, while $g_2$ remains unchanged, this re-scaling would leave the gradient disparity unchanged, although the performance of the network has improved only on $S_1$ and not on $S_2$, which might be a signal of overfitting. \footnotetext{Note that in this figure, both the re-scaled gradient disparity and the generalization loss increase from the very first epoch. If we used gradient disparity as an early stopping criterion, optimization would stop at epoch 5, yielding a 0.36 drop in the test loss compared to the loss reached when the model achieves 0 training loss.} We therefore propose to normalize the loss values instead, before computing gradient disparity, so that the initial losses of two different iterations have the same scale. We can normalize the loss values by \begin{equation*} {L_{S_j} = \frac{1}{m_j} \sum_{i=1}^{m_j} \frac{l_i-\text{Min}_i\left(l_i\right)}{\text{Max}_i\left(l_i\right)-\text{Min}_i\left(l_i\right)}} , \end{equation*} where, with some abuse of notation, $l_i$ is the cross entropy loss for data point $i$ in batch $S_j$. However, this normalization might be sensitive to outliers, making the bulk of the data end up in a very narrow range between 0 and 1, which in turn degrades the accuracy of the signal of overfitting. Re-scaling is usually less sensitive to outliers than normalization; it leads to loss values given by \begin{equation*} {L_{S_j} = \frac{1}{m_j} \sum_{i=1}^{m_j} \frac{l_i}{\text{std}_i\left(l_i\right)}} .
\end{equation*} We experimentally compare these two ways of computing gradient disparity in Fig.~\ref{fig:norm}. Both the re-scaled and the normalized losses might become unbounded if, within each batch, the loss values are very close to each other. However, in our experiments, we do not observe gradient disparity becoming unbounded either way. We observe that the correlation between gradient disparity and the test loss is the highest if we re-scale the loss values before computing gradient disparity. This is therefore how we compute gradient disparity in all experiments presented in the paper. Note that this re-scaling does not affect the training algorithm, since it is only used to compute the gradient disparity metric (we do not perform loss re-scaling before \verb|opt.step()|). \subsection{The Hyper-parameter $s$}\label{app:s} \begin{wrapfigure}[16]{R}{0.5\textwidth} \includegraphics[width=0.5\textwidth, height=0.28\textwidth]{s.png} \caption{Average gradient disparity for different values of the averaging parameter $s$, for a ResNet-18 trained on 12.8k points of the CIFAR-10 dataset.}\label{fig:s} \end{wrapfigure} In this section, we briefly study the choice of the size $s$ of the subset of batches used to compute the average gradient disparity $$\overline{\mathcal{D}} = \frac{1}{s(s-1)} \sum_{i=1}^{s}\sum\limits_{\substack{j=1 , j\neq i }}^s \mathcal{D}_{i,j} .$$ Fig.~\ref{fig:s} shows the gradient disparity averaged over $s$ batches\footnote{In the setting of Fig.~\ref{fig:s}, if we used gradient disparity as an early stopping criterion, optimization would stop at epoch 9, yielding a 0.28 drop in the test loss compared to the loss reached when the model achieves 0 training loss.}. When $s=2$, gradient disparity is the $\ell_2$ norm distance between the gradients of two randomly selected batches and has quite a high variance.
Although higher values of $s$ yield results with lower variance, computing the metric with a large value of $s$ is computationally more expensive (refer to Appendix~\ref{app:kfold} for more details). Therefore, we find the choice of $s=5$ sufficient to track overfitting; in all the experiments reported in this paper, we use $s=5$. \subsection{The Surrogate Loss Function}\label{app:mse} Cross entropy has been shown to be better suited for computer-vision classification tasks than the mean square error~\cite{kline2005revisiting,hui2020evaluation}. Hence, we choose the cross entropy criterion for all our experiments to avoid possible pitfalls of the mean square error, such as not tracking the confidence of the predictor. \cite{soudry2018implicit} argues that when using cross entropy, as training proceeds, the magnitude of the network parameters increases. This can potentially affect the value of gradient disparity. Therefore, we compute the magnitude of the network parameters over iterations in various settings. We observe that this increase is very low both at the end of the training and, more importantly, at the time when gradient disparity signals overfitting (denoted by GD epoch in Table~\ref{tab:norm}). Therefore, it is unlikely that the increase in the magnitude of the network parameters affects the value of gradient disparity. Furthermore, we examine gradient disparity for models trained with the mean square error instead of the cross entropy criterion. We observe a high correlation between gradient disparity and test error/loss (Fig.~\ref{fig:train_size_mse}), which is consistent with the results obtained using the cross entropy criterion. The applicability of gradient disparity as a generalization metric is therefore not limited to settings with the cross entropy criterion.
\begin{figure*}\label{fig:train_size_mse} \end{figure*} \begin{table}[h] \centering \caption{The ratio of the magnitude of the network parameter vector at epoch $t$ to the magnitude of the network parameter vector at epoch $0$, for $t \in \{0,\text{GD},200\}$, where GD stands for the epoch when gradient disparity signals to stop the training. Setting (1): AlexNet, MNIST, (2): AlexNet, MNIST, $50\%$ random, (3): VGG-16, CIFAR-10, and (4): VGG-16, CIFAR-10, $50\%$ random. }\label{tab:norm} \vspace*{1em} \begin{tabularx}{0.45\textwidth}{@{} l|C|C|C @{}} \toprule Setting & at epoch $0$ & at GD epoch & at epoch $200$ \\ \midrule ($1$) & $1$ & $1.00034$ & $1.00123$ \\ \midrule ($2$) & $1$ & $1.00019$ & $1.00980$ \\ \midrule ($3$) & $1$ & $1.00107$ & $1.00127$ \\ \midrule ($4$) & $1$ & $1.00222$ & $1.00233$ \\ \bottomrule \end{tabularx} \end{table} \section{$k$-fold Cross-Validation}\label{app:kfold} $k$-fold cross-validation (CV) splits the available dataset into $k$ sets, training on $k-1$ of them and validating on the remaining one. This is repeated $k$ times so that every set is used once as the validation set. Each experiment out of $k$ folds can itself be viewed as a setting where the available data is split into a training and a validation set. Early stopping can then be done by evaluating the network performance on the validation set and stopping the training as soon as there is an increase in the value of the validation loss. However, in practice, the validation loss curve is not necessarily smooth (as can be clearly observed in our experiments throughout the paper), and therefore as discussed in \cite{prechelt1998early,lodwich2009evaluation}, there is no obvious early stopping rule (threshold) to obtain the minimum value of the generalization error. 
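A patience-based stopping rule of the kind used in practice can be sketched as follows; the metric values and the patience parameter $p$ below are hypothetical, and the two variants correspond to counting all increases versus only consecutive increases of the monitored metric.

```python
def stopping_epoch(metric, p, consecutive=False):
    # Return the first epoch at which training stops, given the per-epoch values
    # of an early-stopping metric (e.g., the validation loss): stop after p
    # increases, either in total or consecutive. Return None if never triggered.
    increases = 0
    for epoch in range(1, len(metric)):
        if metric[epoch] > metric[epoch - 1]:
            increases += 1
            if increases >= p:
                return epoch
        elif consecutive:
            increases = 0  # a decrease resets the count in the consecutive variant
    return None

val_loss = [1.0, 0.9, 0.95, 0.8, 0.85, 0.9, 0.95, 1.0]  # hypothetical noisy curve
print(stopping_epoch(val_loss, p=3))                     # -> 5 (total increases)
print(stopping_epoch(val_loss, p=3, consecutive=True))   # -> 6 (consecutive increases)
```

On a noisy curve the two variants can stop at quite different epochs, which illustrates the threshold ambiguity discussed above.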
\subsection{Early Stopping Threshold}\label{app:patthre} In this paper, we adopt two different early stopping thresholds: (t1) stop training when there are $p=5$ (consecutive or nonconsecutive) increases in the value of the early stopping metric (the hyper-parameter $p$ is commonly referred to as the “patience” parameter among practitioners), which is indicated by the gray vertical bars in Figs.~\ref{fig:kfoldl} and \ref{fig:kfoldn}, and (t2) stop training when there are $p$ consecutive increases in the value of the early stopping metric, which is indicated by the magenta vertical bars in Figs.~\ref{fig:kfoldl} and \ref{fig:kfoldn}. When there are either low variations or a sharp increase in the value of the metric, the two coincide (for instance, in Fig.~\ref{fig:kfoldl}~(b)~(middle left)). For $k$-fold CV, the early stopping metric is the validation loss, and for our proposed method, the early stopping criterion is gradient disparity (GD). Which exact patience parameter to choose as an early stopping threshold, and whether it should count non-consecutive increases, are interesting questions \cite{prechelt1998early} that do not have a definite answer to date, even for $k$-fold CV. Tables~\ref{tab:robustness} and~\ref{tab:robustnessn} give the results obtained by 20 different early stopping thresholds for both $k$-fold CV (shown in the left tables) and GD (shown in the right tables). We also give the best, mean, and standard deviation of the test loss and test accuracy across all thresholds. In the following two paragraphs, we summarize the findings of these two tables. \paragraph{Performance} In Fig.~\ref{fig:sen}, we observe that the test accuracy (averaged over 20 thresholds) is higher when using GD as an early stopping criterion than when using $k$-fold CV. Note that beforehand we do not have access to the test set to choose the best possible threshold; hence, in Fig.~\ref{fig:sen}, the average test accuracy is reported over all thresholds.
However, even if we did have the test set to choose the best threshold, we could still observe from Tables~\ref{tab:robustness} and~\ref{tab:robustnessn} that GD either performs comparably to CV or significantly outperforms it. \begin{figure} \caption{Test accuracy achieved by using GD and $k$-fold CV as early stopping methods in 7 experimental settings (indicated on the x-axis by Set1-7). The result is averaged over 20 choices of the early stopping threshold. The complete set of results is reported in Tables~\ref{tab:robustness} and~\ref{tab:robustnessn}. For the CIFAR-100 experiments (Set1, 4, and 5), the top-$5$ accuracy is reported. } \label{fig:sen} \end{figure} \paragraph{Sensitivity to Threshold} Ideally, we would like to have a robust metric that does not strongly depend on the early stopping threshold. To compute the sensitivity of each method to the choice of the threshold, we compute \begin{equation}\label{eq:sen} \text{Sensitivity to the threshold} = {\sum_{i=1}^{7} \text{std}(\text{Set}_i) / \text{Mean}(\text{Set}_i)} , \end{equation} where $\text{Mean}(\text{Set}_i)$ and $\text{std}(\text{Set}_i)$ are, respectively, the mean and the standard deviation of the test accuracy/loss of setting $i$ across the early stopping thresholds, as reported in Tables~\ref{tab:robustness} and~\ref{tab:robustnessn}. The lower the sensitivity is, the more robust the method is to the choice of the early stopping threshold. In Table~\ref{tab:senmain}, we observe that GD is less sensitive and more robust to the choice of the early stopping threshold than $k$-fold CV, which is another advantage of GD over CV. This can also be observed from our figures: in most of the experiments (more precisely, in 5 out of 7 experiments of Figs.~\ref{fig:kfoldl} and \ref{fig:kfoldn}), GD is not sensitive to the choice of the threshold (see Figs.~\ref{fig:kfoldl}~(a), \ref{fig:kfoldl}~(b), \ref{fig:kfoldn}~(a), \ref{fig:kfoldn}~(b) and \ref{fig:kfoldn}~(c) on the 2nd column, where the gray and magenta bars almost coincide). In contrast, $k$-fold CV is more sensitive (see for example the leftmost column of Figs.~\ref{fig:kfoldl}~(a) and \ref{fig:kfoldl}~(b), where the gray and magenta bars are very far apart when using $k$-fold CV). In the other 2 experiments (Figs.~\ref{fig:kfoldl}~(c) and \ref{fig:kfoldn}~(d), both with the MNIST dataset), the thresholds (t1) and (t2) do not coincide for either GD or $k$-fold CV. In Table~\ref{tab:patience}, we further study these two settings: we provide the test accuracy for the experiments of Figs.~\ref{fig:kfoldl}~(c) and \ref{fig:kfoldn}~(d) (both with the MNIST dataset), for different values of $p$. We again observe that even if we optimize $p$ for $k$-fold CV (reported in bold in Table~\ref{tab:patience}), GD still outperforms $k$-fold CV. \begin{table*} \centering \caption{The test accuracy, test loss, top-5 accuracy, and stopping epoch obtained by using $k$-fold cross-validation (CV) (left columns (a), (c), and (e)) and by using gradient disparity (GD) (right columns (b), (d), and (f)) as early stopping criteria for different patience values and for different thresholds (t1) and (t2). $(\text{t1}_p)$: training is stopped after $p$ increases in the value of the validation loss in $k$-fold CV, and of GD, respectively. $(\text{t2}_p)$: training is stopped after $p$ consecutive increases in the value of the validation loss in $k$-fold CV, and of GD, respectively. For the rest of the experiments we only report the best values, mean and standard deviation (std) over all thresholds. $-^{a}$: The epoch achieving the best test loss does not coincide with the epoch achieving the best test accuracy.
$-^{b}$: The metric does not have $p$ consecutive increases during training.}\label{tab:robustness} \vspace*{1em} \resizebox{0.55\textheight}{!}{ \begin{subtable}{0.48\linewidth} \centering \begin{tabularx}{\textwidth}{@{} l|C|C|C|C @{}} \toprule Threshold & Loss & ACC & top-5 ACC & Epoch \\ \midrule $(\text{t1}_1)$ & $4.63$ & $0.98$ & $5.02$ & $1$ \\ $(\text{t1}_2)$ & $4.65$ & $0.98$ & $5.0$ & $2$ \\ $(\text{t1}_3)$ & $4.65$ & $1.19$ & $5.99$ & $3$ \\ $(\text{t1}_4)$ & $4.25$ & $6.73$ & $21.47$ & $12$ \\ $(\text{t1}_5)$ & $4.25$ & $6.79$ & $22.19$ & $14$ \\ $(\text{t1}_6)$ & $4.23$ & $7.46$ & $22.48$ & $17$ \\ $(\text{t1}_7)$ & $4.25$ & $7.22$ & $22.18$ & $18$ \\ $(\text{t1}_8)$ & $4.21$ & $7.69$ & $23.50$ & $21$ \\ $(\text{t1}_9)$ & $4.23$ & $7.94$ & $23.72$ & $22$ \\ $(\text{t1}_{10})$ & $4.27$ & $7.58$ & $22.84$ & $24$ \\ $(\text{t2}_1)$ & $4.63$ & $0.98$ & $5.02$ & $1$ \\ $(\text{t2}_2)$ & $4.65$ & $0.98$ & $5.0$ & $2$ \\ $(\text{t2}_3)$ & $4.65$ & $1.19$ & $5.99$ & $3$ \\ $(\text{t2}_4)$ & $4.12 $ & $9.50 $ & $ 26.71$ & $45$ \\ $(\text{t2}_5)$ & $4.12 $ & $9.45 $ & $ 26.32$ & $46$ \\ $(\text{t2}_6)$ & $4.19 $ & $9.51$ & $26.43$ & $314$ \\ $(\text{t2}_7)$ & $4.20$ & $9.54$ & $26.47$ & -\footnote[2]{}\\ $(\text{t2}_8)$ & - & - & - & - \\ $(\text{t2}_9)$ & - & - & - & -\\ $(\text{t2}_{10})$ & - & - & - & - \\ \midrule best & $4.11$ & $9.51$ & $26.71$ & -\\ mean & $4.42$ & $4.64$ & $14.74$ & $38.33$ \\ std & $0.23$ & $3.70$ & $9.56$ & $84.53$ \\ range & $4.11-4.65$ & $0.98-9.51$ & $5.0-26.71$ & $1-314$ \\ \bottomrule \end{tabularx} \caption{\centering CV, CIFAR-100, ResNet-34, limited dataset (Figure~\ref{fig:kfoldl} (a))} \end{subtable}\hspace{0.4em} \begin{subtable}{0.48\linewidth} \centering \begin{tabularx}{\textwidth}{@{} l|C|C|C|C @{}} \toprule Threshold & Loss & ACC & top-5 ACC & Epoch \\ \midrule $(\text{t1}_1)$ & $4.27$ & $7.36$ & $23.58$ & $17$ \\ $(\text{t1}_2)$ & $4.19$ & $7.88$ & $24.28 $& $19$ \\ $(\text{t1}_3)$ & $4.14$ & $8.85$ & $25.67 $& $22$ \\ 
$(\text{t1}_4)$ & $4.13$ & $9.08$ & $25.91$ & $23$ \\ $(\text{t1}_5)$ & $4.06$ & $9.99$ & $27.84$ & $24$ \\ $(\text{t1}_6)$ & $4.12$ & $9.32$ & $26.61$ & $25$ \\ $(\text{t1}_7)$ & $4.07$ & $9.86$ & $27.34$ & $26$ \\ $(\text{t1}_8)$ & $4.06$ & $10.04$ & $27.89$ & $27$ \\ $(\text{t1}_9)$ & $4.04$ & $10.43$ & $28.35$ & $28$ \\ $(\text{t1}_{10})$ & $4.04$ & $10.41$ & $28.40$ & $29$ \\ $(\text{t2}_1)$ & $4.27$ & $7.36$ & $23.58$ & $17$ \\ $(\text{t2}_2)$ & $4.13$ & $9.08$ & $25.91$ & $23$ \\ $(\text{t2}_3)$ & $4.06$ & $9.99$ & $27.84$ & $24$ \\ $(\text{t2}_4)$ & $4.12$ & $9.33$ & $26.61$ & $25$ \\ $(\text{t2}_5)$ & $4.07 $& $9.86$ & $27.34$ & $26$ \\ $(\text{t2}_6)$ & $4.06 $& $10.04$ & $28.89$ & $27$ \\ $(\text{t2}_7)$ & $4.04 $ & $10.43$ & $28.35$ & $28$ \\ $(\text{t2}_8)$ & $4.04$ & $10.41$ & $28.40$ & $29$ \\ $(\text{t2}_9)$ & $4.05$ & $10.29$ & $28.41$ & $30$ \\ $(\text{t2}_{10})$ & $4.05$ & $10.39$ & $28.31$ & $31$ \\ \midrule best & $4.04$ & $10.43$ & $28.41$ & -\footnote[1]{} \\ mean & $\mathbf{4.1}$ & $\mathbf{9.52}$ & $\mathbf{26.93}$ & $25$\\ std & $0.07$ & $0.98$ & $1.57$ & $3.89$ \\ range & $4.04-4.27$ & $7.36-10.43$ & $23.58-28.41$ & $17-31$ \\ \bottomrule \end{tabularx} \caption{\centering GD, CIFAR-100, ResNet-34, limited dataset (Figure~\ref{fig:kfoldl} (a))} \end{subtable}} \resizebox{0.55\textheight}{!}{ \begin{subtable}{0.48\linewidth} \centering \begin{tabularx}{\textwidth}{@{} l|C|C|C @{}} \toprule Threshold & Loss & ACC & Epoch \\ \midrule best & $1.84$ & $37.03$ & - \\ mean & $1.97$ & $31.05$ & $12.25$ \\ std & $0.22$ & $8.80$ & $10.54$ \\ range & $1.84-2.35$ & $15.84-37.03$ & $1-30$ \\ \bottomrule \end{tabularx} \caption{\centering CV, CIFAR-10, VGG-13, limited dataset (Figure~\ref{fig:kfoldl} (b))} \end{subtable}\hspace{0.4em} \begin{subtable}{0.48\linewidth} \centering \begin{tabularx}{\textwidth}{@{} l|C|C|C @{}} \toprule Threshold & Loss & ACC & Epoch \\ \midrule best & $1.79$ & $38.16$ & - \\ mean & $\mathbf{1.80}$ & $\mathbf{36.97}$ & 
$6.5$ \\ std & $0.02$ & $1.27$ & $2.87$ \\ range & $1.79-1.85$ & $33.71-38.16$ & $2-11$ \\ \bottomrule \end{tabularx} \caption{\centering GD, CIFAR-10, VGG-13, limited dataset (Figure~\ref{fig:kfoldl} (b))} \end{subtable}} \resizebox{0.55\textheight}{!}{ \begin{subtable}{0.48\linewidth} \centering \begin{tabularx}{\textwidth}{@{} l|C|C|C @{}} \toprule Threshold & Loss & ACC & Epoch \\ \midrule best & $0.56$ & $81.39$ & $34$ \\ mean & $\mathbf{1.09}$ & $62.63$ & $21.71$ \\ std & $0.36$ & $13.46$ & $9.59$ \\ range & $0.63-1.64$& $41.15-81.39$ & $9-41$ \\ \bottomrule \end{tabularx} \caption{\centering CV, MNIST, AlexNet, limited dataset (Figure~\ref{fig:kfoldl} (c))} \end{subtable}\hspace{0.4em} \begin{subtable}{0.48\linewidth} \centering \begin{tabularx}{\textwidth}{@{} l|C|C|C @{}} \toprule Threshold & Loss & ACC & Epoch \\ \midrule best & $0.45$ & $86.39$ & $39$ \\ mean & $1.13$ & $\mathbf{64.27}$ & $17.58$ \\ std & $0.72$ & $22.38$ & $13.91$ \\ range & $0.45-2.18$ & $30.19-86.39$ & $1-40$ \\ \bottomrule \end{tabularx} \caption{\centering GD, MNIST, AlexNet, limited dataset (Figure~\ref{fig:kfoldl} (c))} \end{subtable}} \end{table*} \begin{table*} \centering \caption{The test accuracy, test loss, top-5 accuracy, and stopping epoch obtained by using $k$-fold cross-validation (CV) (left columns (a), (c), (e), and (g)) and by using gradient disparity (GD) (right columns (b), (d), (f), and (h)) as early stopping criteria for different patience values and for different thresholds (t1) and (t2). $(\text{t1}_p)$: training is stopped after $p$ increases in the value of the validation loss in $k$-fold CV, and of GD, respectively. $(\text{t2}_p)$: training is stopped after $p$ consecutive increases in the value of the validation loss in $k$-fold CV, and of GD, respectively. We report the best values, mean and standard deviation (std) over 20 thresholds. 
}\label{tab:robustnessn} \vspace*{1em} \begin{subtable}{0.48\linewidth} \centering \begin{tabularx}{\textwidth}{@{} l|C|C|C|C @{}} \toprule Threshold & Loss & ACC & top-5 ACC & Epoch \\ \midrule best & $4.69$ & $2.00$ & $9.38$ & $19$ \\ mean & $4.94$ & $1.60$ & $6.72$ & $12.3$\\ std & $0.1$ & $0.14$ & $0.98$ & $4.47$ \\ range & $4.69-5.03$ & $1.42-2.00$ & $5.62-9.38$ & $6-20$ \\ \bottomrule \end{tabularx} \caption{\centering CV, CIFAR-100, ResNet-18, noisy dataset (Figure~\ref{fig:kfoldn} (a))} \end{subtable}\hspace{0.2em} \begin{subtable}{0.48\linewidth} \centering \begin{tabularx}{\textwidth}{@{} l|C|C|C|C @{}} \toprule Threshold & Loss & ACC & top-5 ACC & Epoch \\ \midrule best & $4.38$ & $4.38$ & $17.76$ & - \\ mean & $\mathbf{4.47}$ & $\mathbf{3.79}$ & $\mathbf{15.43}$ & $29.25$\\ std & $0.08$ & $0.75$ & $2.31$ & $6.41$ \\ range & $4.38-4.69$ & $2.13-4.38$ & $9.74-17.76$ & $18-41$ \\ \bottomrule \end{tabularx} \caption{\centering GD, CIFAR-100, ResNet-18, noisy dataset (Figure~\ref{fig:kfoldn} (a))} \end{subtable} \begin{subtable}{0.48\linewidth} \centering \begin{tabularx}{\textwidth}{@{} l|C|C|C|C @{}} \toprule Threshold & Loss & ACC & top-5 ACC & Epoch \\ \midrule best & $3.87$ & $12.14$ & $37.06$ & - \\ mean & $\mathbf{4.16}$ & $10.19$ & $32.46$ & $11.5$ \\ std & $0.25$ & $1.27$ & $2.72$ & $2.87$ \\ range & $3.87-4.5$ & $8.47-12.14$ & $27.80-37.06$ & $7-16$ \\ \bottomrule \end{tabularx} \caption{\centering CV, CIFAR-100, ResNet-34, noisy dataset (Figure~\ref{fig:kfoldn} (b))} \end{subtable}\hspace{0.2em} \begin{subtable}{0.48\linewidth} \centering \begin{tabularx}{\textwidth}{@{} l|C|C|C|C @{}} \toprule Threshold & Loss & ACC & top-5 ACC & Epoch \\ \midrule best & $3.82$ & $15.81$ & $40.97$ & - \\ mean & $4.37$ & $\mathbf{12.82}$ & $\mathbf{35.53} $& $14.75$ \\ std & $0.25$ & $1.76$ & $6.08$ & $6.59$ \\ range & $3.82-4.59$ & $10.41-15.81$ & $23.51-40.97$ & $3-23$ \\ \bottomrule \end{tabularx} \caption{\centering GD, CIFAR-100, ResNet-34, noisy dataset 
(Figure~\ref{fig:kfoldn} (b))} \end{subtable} \begin{subtable}{0.48\linewidth} \centering \begin{tabularx}{\textwidth}{@{} l|C|C|C @{}} \toprule Threshold & Loss & ACC & Epoch \\ \midrule best & $1.69$ & $43.02$ & $2$ \\ mean & $\mathbf{2.19}$ & $36.89$ & $6.5$ \\ std & $0.35$ & $3.21$ & $2.87$ \\ range & $1.69-2.66$ & $32.41-43.02$ & $2-11$ \\ \bottomrule \end{tabularx} \caption{\centering CV, CIFAR-10, VGG-13, noisy dataset (Figure~\ref{fig:kfoldn} (c))} \end{subtable}\hspace{0.2em} \begin{subtable}{0.48\linewidth} \centering \begin{tabularx}{\textwidth}{@{} l|C|C|C @{}} \toprule Threshold & Loss & ACC & Epoch \\ \midrule best & $1.77 $& $42.45$ & - \\ mean & $2.35$ & $\mathbf{38.9}$ & $10.1$ \\ std & $0.27$ & $2.98$ & $3.59$ \\ range & $1.77-2.59$ & $32.92-42.45$ & $4-16$ \\ \bottomrule \end{tabularx} \caption{\centering GD, CIFAR-10, VGG-13, noisy dataset (Figure~\ref{fig:kfoldn} (c))} \end{subtable} \begin{subtable}{0.48\linewidth} \centering \begin{tabularx}{\textwidth}{@{} l|C|C|C @{}} \toprule Threshold & Loss & ACC & Epoch \\ \midrule best & $0.59$ & $97.44$ & - \\ mean & $\mathbf{0.63}$ & $\mathbf{96.33}$ & $23.5$ \\ std & $0.18$ & $1.68 $ & $13.11$ \\ range & $0.59-0.69$ & $92.03-97.44$ & $7-49$ \\ \bottomrule \end{tabularx} \caption{\centering CV, MNIST, AlexNet, noisy dataset (Figure~\ref{fig:kfoldn} (d))} \end{subtable}\hspace{0.2em} \begin{subtable}{0.48\linewidth} \centering \begin{tabularx}{\textwidth}{@{} l|C|C|C @{}} \toprule Threshold & Loss & ACC & Epoch \\ \midrule best & $0.62$ & $97.49$ & - \\ mean & $0.65$ & $96.22$ & $20.4$ \\ std & $0.02$ & $1.81$ & $13.81$ \\ range & $0.62-0.66$& $92.58-97.49$ & $10-48$ \\ \bottomrule \end{tabularx} \caption{\centering GD, MNIST, AlexNet, noisy dataset (Figure~\ref{fig:kfoldn} (d))} \end{subtable} \end{table*} \subsection{Image-classification Benchmark Datasets} \paragraph{Limited Data} Fig.~\ref{fig:kfoldl} and Table~\ref{tab:kfoldl} show the results for MNIST, CIFAR-10 and CIFAR-100 datasets, where we 
simulate the limited data scenario by using a small subset of the training set. For the CIFAR-100 experiment (Figure~\ref{fig:kfoldl}~(a) and Table~\ref{tab:kfoldl}~(top row)), we observe (from the left figure) that the validation loss predicts the test loss pretty well. We observe (from the middle left figure) that gradient disparity also predicts the test loss quite well. However, the main difference between the two settings is that when using cross-validation, $1/k$ of the data is set aside for validation and $1-1/k$ of the data is used for training, whereas when using gradient disparity, all the data ($1-1/k+1/k = 1$) is used for training. Hence, the test losses in the leftmost and middle left figures differ. The difference between the test accuracy (respectively, test loss) obtained in each setting is visible in the rightmost figure (resp., middle right figure). We observe that there is over a $3\%$ improvement in the test accuracy when using gradient disparity as an early stopping criterion. This improvement is consistent for the MNIST and CIFAR-10 datasets (Figs.~\ref{fig:kfoldl}~(b) and (c) and Table~\ref{tab:kfoldl}). We conclude that in the absence of label noise, both $k$-fold cross-validation and gradient disparity predict the optimal early stopping moment, but the final test loss/error is much lower for the model trained with all the available data (thus, when gradient disparity is used) than for the model trained with a $(1-1/k)$ portion of the data (thus, when $k$-fold cross-validation is used). To further test on a dataset that is itself limited, we empirically study a medical application with limited labeled data in Appendix~\ref{app:mrnet}. The same conclusion holds for this dataset. \paragraph{Noisy Labeled Data} The results for datasets with noisy labels are shown in Fig.~\ref{fig:kfoldn} and Table~\ref{tab:kfoldn} for the MNIST, CIFAR-10 and CIFAR-100 datasets.
We observe (from Fig.~\ref{fig:kfoldn}~(a)~(left)) that for the CIFAR-100 experiment, the validation loss no longer predicts the test loss. Nevertheless, although gradient disparity is computed on a training set that contains corrupted samples, it predicts the test loss quite well (Fig.~\ref{fig:kfoldn}~(a)~(middle left)). There is a $2\%$ improvement in the final test accuracy (for top-5 accuracy there is a $9\%$ improvement) (Table~\ref{tab:kfoldn}~(top two rows)) when using gradient disparity instead of a validation set as an early stopping criterion. This is also consistent for other configurations and datasets (Fig.~\ref{fig:kfoldn} and Table~\ref{tab:kfoldn}). We conclude that, in the presence of label noise, $k$-fold cross-validation no longer predicts the test loss and fails as an early stopping criterion, unlike gradient disparity. \paragraph{Computational Cost} Denote the time, in seconds, to compute one gradient vector, to compute the $\ell_2$ norm between two gradient vectors, to take the update step for the network parameters, and to evaluate one batch (find its validation loss and error) by $t_1$, $t_2$, $t_3$ and $t_4$, respectively. Then, one epoch of $k$-fold cross-validation takes \begin{equation*} \text{CV}_\text{epoch} = {k \times \left(\frac{k-1}{k}B(t_1+t_3) + \frac{B}{k} t_4\right)} \end{equation*} seconds, where $B$ is the number of batches. Performing one epoch of training and computing the gradient disparity takes \begin{equation*} \text{GD}_\text{epoch} = {B(t_1+t_3) + s\left(t_1 + \frac{s-1}{2}t_2\right)} \end{equation*} seconds. In our experiments, we observe that $t_1\approx 5.1 t_2 \approx 100 t_3 \approx 3.4 t_4$, hence the approximate time to perform one epoch for each setting is \begin{align*} \text{CV}_\text{epoch} &\approx (k-1) B t_1 , & &\text{and} & \text{GD}_\text{epoch} &\approx (B+s) t_1. \end{align*} Therefore, as $s < B$, we have $\text{CV}_\text{epoch} \gg \text{GD}_\text{epoch}$.
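To make the comparison concrete, the following sketch plugs the measured timing ratios into the two expressions above; the values of $B$, $k$ and $s$ are hypothetical illustrations (not taken from our experiments), and $t_1$ is normalized to one time unit:

```python
# Per-epoch cost model for k-fold cross-validation (CV) vs. gradient
# disparity (GD), following the formulas in the text. B, k and s below
# are illustrative example values.

def cv_epoch(B, k, t1, t2, t3, t4):
    # each of the k folds trains on (k-1)/k of the B batches
    # and evaluates the remaining B/k batches
    return k * ((k - 1) / k * B * (t1 + t3) + B / k * t4)

def gd_epoch(B, s, t1, t2, t3, t4):
    # one full training pass, plus s extra gradients and
    # s(s-1)/2 pairwise l2 norms for gradient disparity
    return B * (t1 + t3) + s * (t1 + (s - 1) / 2 * t2)

t1 = 1.0                                    # one gradient computation = 1 unit
t2, t3, t4 = t1 / 5.1, t1 / 100, t1 / 3.4   # measured ratios from the text
B, k, s = 100, 10, 5

print(cv_epoch(B, k, t1, t2, t3, t4))  # ~938 units, close to (k-1)*B*t1 = 900
print(gd_epoch(B, s, t1, t2, t3, t4))  # ~108 units, close to (B+s)*t1 = 105
```

With these values, the exact costs (about 938 and 108 units) are indeed well approximated by $(k-1)Bt_1$ and $(B+s)t_1$, and CV is roughly $k-1$ times more expensive per epoch.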
\begin{table*} \centering \caption{The test accuracies achieved by using $k$-fold cross-validation (CV) and by using gradient disparity (GD) as early stopping criteria for different patience values. For a given patience value of $p$, the training is stopped after $p$ increases in the value of the validation loss in $k$-fold CV (top rows) and of GD (bottom rows). Throughout the paper, we have chosen $p = 5$ as the default patience value for all methods without optimizing it even for GD. However, in this table (also in Tables~\ref{tab:robustness} and \ref{tab:robustnessn}), we observe that even if we tune the patience value for $k$-fold CV and for GD separately (which is indicated in bold), GD still outperforms $k$-fold CV. Moreover, as we discussed in Appendix~\ref{app:patthre}, even if we take an average over patience values and early stopping thresholds (to avoid the need to tune this parameter), GD again outperforms CV (Fig.~\ref{fig:sen}). }\label{tab:patience} \vspace*{1em} \begin{subtable}{\linewidth} \centering \begin{tabular}{c|c|c|c|c|c|c} \toprule \backslashbox{Method}{Patience} & 1 & 5 & 10 & 15 & 20 & 25 \\ \midrule 5-fold CV & $41.15_{\pm 5.68}$ & $62.62_{\pm 6.36}$ & $81.39 _{\pm 3.64}$ &$80.39_{\pm 2.88}$ & $\mathbf{84.84}_{\pm 2.53}$ & $83.55_{\pm 2.84}$ \\ GD & $30.19_{\pm 6.21}$ & $79.12_{\pm 3.04}$ & $84.82_{\pm 2.14}$ &$85.35_{\pm 2.09}$ & $\mathbf{87.28}_{\pm 1.24}$ & $86.69_{\pm 1.31}$ \\ \bottomrule \end{tabular} \caption{MNIST, AlexNet, limited dataset (Fig.~\ref{fig:kfoldl} (c))} \end{subtable} \begin{subtable}{\linewidth} \centering \begin{tabular}{c|c|c|c|c|c|c} \toprule \backslashbox{Method}{Patience} & 1 & 5 & 10 & 15 & 20 & 25 \\ \midrule 10-fold CV& $96.54_{\pm 0.15}$ & $97.28_{\pm 0.20}$ & $\mathbf{97.35}_{\pm 0.23}$ & $97.22_{\pm 0.19}$ & $96.60_{\pm 0.33}$ & $94.69_{\pm 0.87}$ \\ GD & $97.07_{\pm 0.16}$ & $97.32_{\pm 0.15}$ & $\mathbf{97.41}_{\pm 0.15}$ & $96.57_{\pm 0.64}$ & $95.44_{\pm 0.96}$ & $92.58_{\pm 0.65}$ \\ \bottomrule 
\end{tabular} \caption{MNIST, AlexNet, noisy dataset (Fig.~\ref{fig:kfoldn} (d))} \end{subtable} \end{table*} \begin{table*} \centering \caption{The loss and accuracy on the test set comparing 5-fold cross-validation and gradient disparity as early stopping criterion when the available dataset is limited. The corresponding curves during training are presented in Fig.~\ref{fig:kfoldl}. The results below are obtained by stopping the optimization when the metric (either validation loss or gradient disparity) has increased for five epochs from the beginning of training.}\label{tab:kfoldl} \vspace*{1em} \begin{tabular}{c|c|c|c} \toprule Setting & Method & Test loss & Test accuracy \\ \midrule \multirow{2}{*}{CIFAR-100, ResNet-34} & 5-fold CV & $4.249_{\pm 0.028}$ & $6.79_{\pm 0.49}$ (top-5: $22.19_{\pm 0.77}$) \\ & GD & $\mathbf{4.057}_{\pm 0.043}$ & $\mathbf{9.99}_{\pm 0.92}$ (top-5: $\mathbf{27.84}_{\pm 1.30}$) \\ \midrule \multirow{2}{*}{CIFAR-10, VGG-13} & 5-fold CV& $1.846_{\pm 0.016}$ & $35.982_{\pm 0.393}$ \\ & GD & $\mathbf{1.793}_{\pm 0.016}$ & $\mathbf{36.96}_{\pm 0.861}$ \\ \midrule \multirow{2}{*}{MNIST, AlexNet} & 5-fold CV & $1.123_{\pm 0.25}$ & $62.62_{\pm 6.36}$ \\ & GD & $\mathbf{0.656}_{\pm 0.080}$ & $\mathbf{79.12}_{\pm 3.04}$ \\ \bottomrule \end{tabular} \end{table*} \begin{figure*} \caption{ResNet-34 trained on 1.28 k points of CIFAR-100} \caption{VGG-13 trained on 1.28 k points of CIFAR-10} \caption{AlexNet trained on 256 points of MNIST} \caption{Comparing 5-fold cross-validation (CV) with gradient disparity (GD) as an early stopping criterion when the available dataset is limited. (left) Validation loss versus test loss in 5-fold cross-validation. (middle left) Gradient disparity versus test and generalization losses. (middle right and right) Performance on the unseen (test) data for GD versus $5$-fold CV. (a) The parameters are initialized by Xavier techniques with uniform distribution. 
(b, c) The parameters are initialized using the He technique with normal distribution. (c) The batch size is 32. The gray and magenta vertical bars indicate the epoch in which the metric (the validation loss or gradient disparity) has increased for 5 epochs from the beginning of training and for 5 consecutive epochs, respectively. In the middle left figure of (b), these two bars coincide. } \label{fig:kfoldl} \end{figure*} \begin{table*} \centering \caption{The loss and accuracy on the test set comparing 10-fold cross-validation and gradient disparity as early stopping criteria when the available dataset is noisy. In all the experiments, 50\% of the available data has random labels. The corresponding curves during training are shown in Fig.~\ref{fig:kfoldn}. The results below are obtained by stopping the optimization when the metric (either validation loss or gradient disparity) has increased for five epochs from the beginning of training. The last row in each setting, which we call 10$^{+}$-fold CV, refers to the test loss and accuracy reached at the epoch suggested by 10-fold CV, for a network trained on the entire set. In all these settings, using GD still results in a higher test accuracy.
}\label{tab:kfoldn} \vspace*{1em} \begin{tabular}{c|c|c|c} \toprule Setting & Method & Test loss & Test accuracy \\ \midrule \multirow{3}{*}{CIFAR-100, ResNet-18} & 10-fold CV & $5.023_{\pm 0.083}$ & $1.59_{\pm 0.15}$ (top-5: $6.47_{\pm 0.52}$) \\ & GD & $\mathbf{4.463}_{\pm 0.038}$ & $\mathbf{3.68}_{\pm 0.52}$ (top-5: $\mathbf{15.22}_{\pm 1.24}$) \\ & 10$^{+}$-fold CV & $4.964_{\pm 0.057}$ & $1.68_{\pm 0.24}$ (top-5: $7.05_{\pm 0.71}$) \\ \midrule \multirow{3}{*}{CIFAR-100, ResNet-34} & 10-fold CV & $4.062_{\pm 0.091}$ & $9.62_{\pm 1.08}$ (top-5: $32.06_{\pm 1.47}$) \\ & GD & $4.592_{\pm 0.179}$ & $\mathbf{10.41}_{\pm 1.40}$ (top-5: $\mathbf{36.92}_{\pm 1.20}$) \\ & 10$^{+}$-fold CV & $4.134_{\pm 0.185}$ & $10.11_{\pm 1.60}$ (top-5: $34.19_{\pm 2.10}$) \\ \midrule \multirow{3}{*}{CIFAR-10, VGG-13} & 10-fold CV & $2.126_{\pm 0.063}$ & $34.88_{\pm 1.66}$ \\ & GD & $2.519_{\pm 0.062}$ & $\mathbf{36.98}_{\pm 0.77}$ \\ & 10$^{+}$-fold CV & $2.195_{\pm 0.142}$ & $35.40_{\pm 3.00}$ \\ \midrule \multirow{3}{*}{MNIST, AlexNet} & 10-fold CV & $0.656_{\pm 0.034}$ & $97.28_{\pm 0.20}$ \\ & GD & $0.654_{\pm 0.031}$ & $\mathbf{97.32}_{\pm 0.27}$ \\ & 10$^{+}$-fold CV & $0.639_{\pm 0.029}$ & $97.31_{\pm 0.15}$ \\ \bottomrule \end{tabular} \end{table*} \begin{figure*} \caption{ResNet-18 trained on 1.28 k points of CIFAR-100 dataset with 50\% label noise} \caption{ResNet-34 trained on the entire CIFAR-100 dataset with 50\% label noise} \caption{VGG-13 trained on the entire CIFAR-10 dataset with 50\% label noise} \caption{AlexNet trained on the entire MNIST dataset with 50\% label noise} \caption{Comparing 10-fold cross-validation with gradient disparity as early stopping criteria when the available dataset is noisy. (left) Validation loss versus test loss in 10-fold cross-validation. (middle left) Gradient disparity versus test and generalization losses. (middle right and right) Performance on the unseen (test) data for GD versus 10-fold CV. 
(a) The parameters are initialized with the Xavier technique with uniform distribution. (b, c, and d) The parameters are initialized using the He technique with normal distribution. } \label{fig:kfoldn} \end{figure*} \subsection{MRNet Dataset}\label{app:mrnet} So far, we have shown the improvement of gradient disparity over cross-validation for limited subsets of the MNIST, CIFAR-10 and CIFAR-100 datasets. In this sub-section, we give the results for the MRNet dataset \cite{bien2018deep} used for the diagnosis of knee injuries, which is by itself limited. The dataset contains 1370 magnetic resonance imaging (MRI) exams to study the presence of abnormality, anterior cruciate ligament (ACL) tears and meniscal tears. The labeled data in the MRNet dataset is therefore very limited. Each MRI scan is a set of $S$ slices of images stacked together. Note that, in this dataset, because the number of slices $S$ varies from one patient to another, it is not possible to stack the data into batches, hence the batch size is 1, which may explain the fluctuations of both the validation loss and gradient disparity in this setting. Each patient (case) has three MRI scans: sagittal, coronal and axial. The MRNet dataset is split into training (1130 cases), validation (120 cases) and test sets (120 cases). The test set is not publicly available. However, we need to set aside some data to evaluate both gradient disparity and $k$-fold cross-validation; hence, in our experiments, the validation set becomes the unseen (test) set. To perform cross-validation, we split the set used for training in \cite{bien2018deep} into a first subset used for training in our experiments, and a second subset used as a validation set. We use the SGD optimizer with learning rate $10^{-4}$ for training the model. Each task in this dataset is a binary classification with an unbalanced set of samples, hence we report the area under the receiver operating characteristic curve (AUC score).
The results for three tasks (detecting ACL tears, meniscal tears and abnormality) are shown in Fig.~\ref{fig:mrnet} and Table~\ref{tab:mrnet}. We can observe that both the validation loss (despite a small bias) and the gradient disparity predict the generalization loss quite well. Yet, when using gradient disparity, the final test AUC score is higher (Fig.~\ref{fig:mrnet}~(right)). As mentioned above, for this dataset, both the validation loss and gradient disparity fluctuate considerably. Hence, in Table~\ref{tab:mrnet}, we show the results of early stopping both when the metric has increased for 5 epochs from the beginning of training and, in parentheses, when the metric has increased for 5 consecutive epochs. We conclude that with both approaches, the use of gradient disparity as an early stopping criterion results in more than a $1\%$ improvement in the test AUC score. Because the test set used in \cite{bien2018deep} is not publicly available, it is not possible to compare our predictive results with~\cite{bien2018deep}. Nevertheless, we can take as a baseline the results presented in the work given at \url{https://github.com/ahmedbesbes/mrnet}, which reports a test AUC score of $88.5\%$ for the task of detecting ACL tears. We observe in Table~\ref{tab:mrnet} that stopping training after $5$ consecutive increases in gradient disparity leads to a $91.52\%$ test AUC score for this task. With further tuning, and by combining the predictions found on the two other MRI planes of each patient (axial and coronal), our final prediction results could be improved even further. \begin{figure*} \caption{Task: detecting ACL tears} \caption{Task: detecting meniscal tears} \caption{Task: detecting abnormality} \caption{Detecting three tasks from the MRNet dataset from the sagittal plane MRI scans. (left) Validation loss versus test loss in 5-fold cross-validation. (middle) Gradient disparity versus generalization loss.
(right) Performance comparison on the final unseen data when applying 5-fold CV versus gradient disparity. For the results of applying early stopping, refer to Table~\ref{tab:mrnet}.} \label{fig:mrnet} \end{figure*} \section{Additional Experiments}\label{app:more} In this section, we provide additional experiments on benchmark image-classification datasets. \subsection{MNIST Experiments}\label{sec:mnist} Fig. \ref{fig:random} shows the test error for networks trained with different amounts of label noise. Interestingly, we observe that for this setting the test error for the network trained with $75\%$ label noise remains relatively small, indicating a good resistance of the model against memorizing corrupted samples. As suggested by both the test error (Fig.~\ref{fig:random}~(a)) and gradient disparity (Fig.~\ref{fig:random}~(c)), there is no proper early stopping time for these experiments\footnote{According to Table~\ref{tab:patience}, for the noisy MNIST dataset, the patience value of $p=10$ is preferred for both GD and CV. Hence, in Fig.~\ref{fig:random}~(c), even for the setting with $75\%$ label noise, gradient disparity does not increase for $p=10$ consecutive iterations, and would therefore not signal overfitting throughout training. }. The generalization error (Fig.~\ref{fig:random}~(b)) remains close to zero, regardless of the level of label noise, and hence fails to account for label noise. In contrast, gradient disparity is very sensitive to the label noise level in \emph{all} stages of training, even the early ones, as desired for a metric measuring generalization. Fig.~\ref{fig:BGP_vs_gen_m} shows the results for an AlexNet \cite{krizhevsky2012imagenet} trained on the MNIST dataset\footnote{\url{http://yann.lecun.com/exdb/mnist/}}. This model generalizes quite well for this dataset.
We observe that, throughout the training, the test curves are even below the training curves, which is due to the dropout regularization technique \cite{srivastava2014dropout} being applied during training and not during testing. The generalization loss/error is almost zero until around iteration 1100 (indicated in the figure by the gray vertical bar), which is when overfitting starts and the generalization error becomes non-zero, and when gradient disparity signals to stop training. Fig. \ref{fig:fc_mnist} shows the results for a 4-layer fully connected neural network trained on the entire MNIST training set. Figs.~\ref{fig:fc_mnist} (e) and (f) show the generalization losses. We observe that at the early stages of training, generalization losses do not distinguish between different label noise levels, whereas gradient disparity does so from the beginning (Figs.~\ref{fig:fc_mnist}~(g) and (h)). At the middle stages of training we can observe that, surprisingly in this setting, the network with 0\% label noise has a higher generalization loss than the networks trained with 25\%, 50\% and 75\% noise, and this is also captured by gradient disparity. The final gradient disparity values for the networks trained with higher label noise levels are also larger. For the network trained with 0\% label noise, we show the results in more detail in Fig.~\ref{fig:fc_mnist0}, and observe again how well gradient disparity is aligned with the generalization loss/error. In this experiment, the early stopping time suggested by gradient disparity is epoch 9, which is exactly when the training and test losses/errors start to diverge, and therefore signals the start of overfitting. \subsection{CIFAR-10 Experiments}\label{sec:cifar10} Fig.~\ref{fig:BGP_vs_gen} shows the results for a ResNet-18 \cite{he2016deep} trained on the CIFAR-10 dataset\footnote{\url{https://www.cs.toronto.edu/~kriz/cifar.html}}.
Around iteration~500 (which is indicated by a thick gray vertical bar in the figures), the training and test losses (and errors) start to diverge, and the test loss reaches its minimum. This is indeed when gradient disparity increases and signals overfitting. To compare models with a different number of parameters using gradient disparity, we need to normalize it. The dimension of a gradient vector is the number $d$ of parameters of the model. Gradient disparity, being the $\ell_2$-norm of the difference of gradient vectors, thus grows proportionally to $\sqrt{d}$; hence, to compare different architectures, we propose to use the normalized gradient disparity $\tilde{\mathcal{D}} = \overline{\mathcal{D}}/{\sqrt{d}}$. We observe in Fig.~\ref{fig:width2} that both the normalized\footnote{Note that the normalization with respect to the number of parameters differs from the normalization mentioned in Section~\ref{app:norm}, which was with respect to the loss values. The gradient disparity reported throughout is the re-scaled gradient disparity; in addition, when two different architectures are compared, the normalization with respect to dimensionality is applied as well.} gradient disparity and the test error decrease with the network width (the \emph{scale} is a hyper-parameter used to change both the number of channels and hidden units in each configuration). Fig. \ref{fig:fc_cifar10} shows the results for a 4-layer fully connected neural network, which is trained on the entire CIFAR-10 training set. We observe that gradient disparity reflects the test error at the early stages of training quite well. In the later stages of training, we observe that the ranking of gradient disparity values for different label noise levels matches the ranking of generalization losses and errors. In all experiments, gradient disparity is indeed very informative about the test error.
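A minimal sketch of the average and normalized gradient disparity (the function names and toy gradient vectors are ours; $\overline{\mathcal{D}}$ is the mean pairwise $\ell_2$ distance between the per-batch gradients, and $\tilde{\mathcal{D}}$ divides it by $\sqrt{d}$):

```python
# Average gradient disparity over s per-batch gradient vectors, and the
# dimension-normalized variant used to compare architectures.
# The toy gradients below are illustrative.
import itertools
import math

def avg_gradient_disparity(grads):
    """Mean l2 distance over all pairs of per-batch gradient vectors."""
    pairs = list(itertools.combinations(grads, 2))
    dist = lambda u, v: math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return sum(dist(u, v) for u, v in pairs) / len(pairs)

def normalized_disparity(grads):
    d = len(grads[0])                  # number of model parameters
    return avg_gradient_disparity(grads) / math.sqrt(d)

grads = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]   # s = 3 gradients, d = 2
print(avg_gradient_disparity(grads))  # (sqrt(2) + 1 + 1) / 3, about 1.138
print(normalized_disparity(grads))    # divided by sqrt(2), about 0.805
```

In practice the gradients would be the flattened per-batch gradients of the network, so only the normalized value $\tilde{\mathcal{D}}$ is comparable across models with different $d$.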
The test error decreases with the size of the training set (Fig.~\ref{fig:DA} (bottom)), and a reliable signal of overfitting should therefore reflect this property. Many of the previous metrics fail to do so, as shown by \cite{neyshabur2017exploring,nagarajan2019uniform}. In contrast, gradient disparity indeed decreases with the training set size, as shown in Fig.~\ref{fig:DA}~(top) and Fig.~\ref{fig:train_size2} in Appendix~\ref{app:more}. In Figure~\ref{fig:DA}, we study the effect of data augmentation \cite{shorten2019survey}, which is one of the popular techniques used to reduce overfitting given limited labeled data. Consistent with the rest of the paper, we observe a strong positive correlation ($\rho=0.979$) between the test error and gradient disparity for networks that are trained with data augmentation. Moreover, we observe that applying data augmentation decreases the values of both gradient disparity and the test error. Fig.~\ref{fig:train_size2} shows the test error and gradient disparity for networks that are trained with different training set sizes. In Fig.~\ref{fig:batch_size2}, we observe that, as discussed in Section~\ref{sec:gen}, gradient disparity, similarly to the test error, increases with the batch size, as long as the batch size is not too large. As expected, when the batch size is very large (512 for the CIFAR-10 experiment and 256 for the CIFAR-100 experiments), gradient disparity starts to decrease, because gradient vectors are averaged over a large batch. Note that even with such large batch sizes, gradient disparity correctly detects the early stopping time, although it can no longer be compared to the value of gradient disparity found with other batch sizes. \subsection{CIFAR-100 Experiments}\label{sec:cifar100} Fig. \ref{fig:resnet_cifar100} shows the results for a ResNet-18 that is trained on the CIFAR-100 training set\footnote{\url{https://www.cs.toronto.edu/~kriz/cifar.html}}.
Clearly, the model is not able to capture the complexity of the CIFAR-100 dataset: it has $99\%$ error for the network with $0\%$ label noise, as if it had not learned anything about the dataset and were just making a random guess for classification (because there are 100 classes, random guessing would give $99\%$ error on average). We observe from Fig. \ref{fig:resnet_cifar100} (f) that as training progresses, the network overfits more, and the generalization error increases. Although the test error is high (above $90\%$), very surprisingly for this example, the networks with higher label noise levels have a lower test loss and error (Figs.~\ref{fig:resnet_cifar100} (b) and (d)). Quite interestingly, gradient disparity (Fig.~\ref{fig:resnet_cifar100} (g)) also captures this surprising trend. \input{mnist.tex} \begin{figure*} \caption{Cross entropy loss} \label{fig:BGP_vs_gen} \end{figure*} \input{cifar10.tex} \input{cifar100.tex} \input{opt_analytical.tex} \subsection{Experiments} Fig. \ref{fig:opt} shows the gradient disparity and test loss curves during the course of training for adaptive optimizers. The epoch at which the fifth increase in the value of the test loss and of gradient disparity occurs is given in the caption of each experiment. We observe that the two suggested epochs for stopping the optimization (the one suggested by gradient disparity (GD) and the one suggested by the test loss) are extremely close to each other, except in Fig.~\ref{fig:opt}~(c), where the fifth epoch with an increase in the value of gradient disparity is much later than the epoch with the fifth increase in the value of the test loss. However, in this experiment, there is a 23\% improvement in the test accuracy if the optimization is stopped according to GD rather than the test loss, because the test loss fluctuates much more than gradient disparity.
As an early stopping criterion, the increase in the value of gradient disparity coincides with the increase in the test loss in all our experiments presented in Fig.~\ref{fig:opt}. In Fig.~\ref{fig:opt}~(h), for the Adam optimizer, we observe that after around 20 epochs, the value of gradient disparity starts to decrease, whereas the test loss continues to increase. This mismatch between test loss and gradient disparity might result from other factors that appear in Eq.~(\ref{eq:adam}). Nevertheless, even in this experiment, the increases in the test loss and in gradient disparity coincide, and hence gradient disparity can correctly detect the early stopping time. These experiments are a first indication that gradient disparity can be used as an early stopping criterion for optimizers other than SGD. \begin{figure*} \caption{\centering SGD with Momentum, test loss \newline epoch: 11, GD epoch: 10} \caption{\centering Adagrad, test loss epoch: 19, \newline GD epoch: 18} \caption{\centering RmsProp, test loss epoch: 15 \newline (err: 54\%), GD epoch: 36 (err: \textbf{31\%}) } \caption{\centering Adam, test loss epoch: 19, \newline GD epoch: 20} \caption{\centering Adagrad, test loss epoch: 21, \newline GD epoch: 21} \caption{\centering Adadelta, test loss epoch: 12, \newline GD epoch: 15} \caption{\centering RmsProp, test loss epoch: 20, \newline GD epoch: 18} \caption{\centering Adam, test loss epoch: 12, \newline GD epoch: 10} \caption{(a-d) VGG-19 configuration trained on 12.8 k training points of the CIFAR-10 dataset. (e-h) VGG-11 configuration trained on 12.8 k points of the CIFAR-10 dataset. The training is stopped when the training loss gets below $0.01$. The presented results are an average over 5 runs. The captions below each figure give the epoch at which the test loss and gradient disparity, respectively, have increased for 5 epochs from the beginning of training.
} \label{fig:opt} \end{figure*} \section{Comparison to Related Work}\label{app:var} \begin{table*} \caption{Test error (TE) and test loss (TL) achieved by using various metrics as early stopping criteria. On the leftmost column, the minimum values of TE and TL over all the iterations are reported (which is not accessible during training). The results of 5-fold cross validation are reported on the right, which serve as a baseline. For each experiment, we have underlined those metrics that result in a better performance than 5-fold cross-validation. We observe that gradient disparity (GD) and variance of gradients (Var) consistently outperform $k$-fold cross-validation, unlike other metrics. On the rightmost column (No ES) we report the results without performing early stopping (ES) (training is continued until the training loss is below $0.01$).}\label{tab:RW} \vspace*{1em} \begin{subtable}{\linewidth}{ \resizebox{\textwidth}{!}{ \begin{tabular}{c|c|cccccccc|c|c} \toprule & Min & GD/Var \hspace*{-0.7em} & EB \hspace*{-0.7em} & GSNR & $g_i\cdot g_j$ & $\text{sign}(g_i\cdot g_j)$ & $\cos(g_i\cdot g_j)$ & $\Omega_c$ \hspace*{-0.7em} & OV & $k$-fold& No ES\\ \midrule \multicolumn{1}{l|}{TE} & $4.84$ $\;$ & \ul{$\mathbf{4.84}$} $\;$& $\;$\ul{$\mathbf{4.84}$} $\;$ & $12.82$ & $22.30$ & $12.82$ & $18.31$ & $8.30$ $\;$ & $11.79$& $4.84$ & $4.96$\\ TL & $0.18$ & \ul{$\mathbf{0.18}$} & \ul{$\mathbf{0.18}$} & $0.46$ & $0.82$ & $0.46$ & $0.69$ & $0.32$ & $0.38$ & $0.18$ & $0.22$\\ \bottomrule \end{tabular}}} \caption{MNIST, AlexNet} \end{subtable} \begin{subtable}{\linewidth}{ \resizebox{\textwidth}{!}{ \begin{tabular}{c|c|cccccccc|c|c} \toprule & Min & GD/Var \hspace*{-0.7em} & EB \hspace*{-0.7em} & GSNR & $g_i\cdot g_j$ & $\text{sign}(g_i\cdot g_j)$ & $\cos(g_i\cdot g_j)$ & $\Omega_c$ \hspace*{-0.7em} & OV & $k$-fold& No ES\\ \midrule \multicolumn{1}{l|}{TE} & $13.76$ & \ul{$\mathbf{16.66}$} & $24.63$ & $35.68$ & $37.92$ & $24.63$ & $35.68$ & $29.40$ & $34.36$ & $17.86$ & 
$25.72$\\ TL & $0.75$ & \ul{$1.08$} & \ul{$\mathbf{0.86}$} & $1.68$ & $1.82$ & \ul{$\mathbf{0.86}$} & $1.68$ & $1.46$ & $1.65$& $1.09$ & $0.91$\\ \bottomrule \end{tabular}}} \caption{MNIST, AlexNet, $50\%$ random} \end{subtable} \begin{subtable}{\linewidth}{ \resizebox{\textwidth}{!}{ \begin{tabular}{c|c|cccccccc|c|c} \toprule & Min & GD/Var \hspace*{-0.7em} & EB \hspace*{-0.7em} & GSNR & $g_i\cdot g_j$ & $\text{sign}(g_i\cdot g_j)$ & $\cos(g_i\cdot g_j)$ & $\Omega_c$ \hspace*{-0.7em} & OV & $k$-fold& No ES\\ \midrule \multicolumn{1}{l|}{TE} & $45.54$ & \ul{$\mathbf{45.95}$} & $61.76$ & $70.46$ & $70.46$ & $55.84$ & $67.09$ & $67.37$ & $70.61$ & $51.64$ & $64.19$\\ TL & $1.32$ & \ul{$\mathbf{1.45}$} & $1.68$ & $1.92$ & $1.92$ & $1.52$ & $1.83$ & $1.85$ & $1.92$ & $1.49$ & $1.98$ \\ \bottomrule \end{tabular}} } \caption{CIFAR-10, ResNet-18} \end{subtable} \begin{subtable}{\linewidth}{ \resizebox{\textwidth}{!}{ \begin{tabular}{c|c|cccccccc|c|c} \toprule & Min & GD/Var \hspace*{-0.7em} & EB \hspace*{-0.7em} & GSNR & $g_i\cdot g_j$ & $\text{sign}(g_i\cdot g_j)$ & $\cos(g_i\cdot g_j)$ & $\Omega_c$ \hspace*{-0.7em} & OV & $k$-fold& No ES\\ \midrule \multicolumn{1}{l|}{TE} & $59.77$ & \ul{$71.97$} & $73.17$ & $77.08$ & $75.91$ & \ul{$\mathbf{65.80}$} & $75.43$ & $77.71$ & $76.65$ & $72.56$ & $75.96$\\ TL & $1.75$ & \ul{$2.00$} & $2.03$ & $2.12$ & $2.13$ & \ul{$\mathbf{1.93}$} & $2.07$ & $2.13$ & $2.10$ & $2.02$ & $2.30$\\ \bottomrule \end{tabular} }} \caption{CIFAR-10, ResNet-18, $50\%$ random } \end{subtable} \end{table*} In Table~\ref{tab:RW}, we compare gradient disparity (GD) to a number of metrics that were proposed either directly as an early stopping criterion, or as a generalization metric. For those metrics that were not originally proposed as early stopping criteria, we choose a similar method for early stopping as the one we use for gradient disparity. We consider two datasets (MNIST and CIFAR-10), and two levels of label noise ($0\%$ and $50\%$). 
Here is a list of the metrics that we compute in each setting (see Section~\ref{sec:rel} of the main paper where we introduce each metric): \begin{enumerate} \item Gradient disparity (GD) (ours): we report the error and loss values at the time when the value of GD increases for the 5th time (from the beginning of the training). \item The EB-criterion \cite{mahsereci2017early}: we report the error and loss values when EB becomes positive. \item Gradient signal to noise ratio (GSNR) \cite{Liu2020Understanding}: we report the error and loss values when the value of GSNR decreases for the 5th time (from the beginning of the training). \item Gradient inner product, $g_i\cdot g_j$ \cite{fort2019stiffness}: we report the error and loss values when the value of $g_i\cdot g_j$ decreases for the 5th time (from the beginning of the training). \item Sign of the gradient inner product, $\text{sign}(g_i\cdot g_j)$ \cite{fort2019stiffness}: we report the error and loss values when the value of $\text{sign}(g_i\cdot g_j)$ decreases for the 5th time (from the beginning of the training). \item Cosine similarity between gradient vectors, $\cos(g_i\cdot g_j)$ \cite{fort2019stiffness}: we report the error and loss values when the value of $\cos(g_i\cdot g_j)$ decreases for the 5th time (from the beginning of the training). \item Variance of gradients (Var) \cite{negrea2019information}: we report the error and loss values when the value of Var increases for the 5th time (from the beginning of the training). Variance is computed over the same number of batches used to compute gradient disparity, in order to compare metrics given the same computational budget. \item Average gradient alignment within the class $\Omega_c$ \cite{mehta2020extreme}: we report the error and loss values when the value of $\Omega_c$ decreases for the 5th time (from the beginning of the training). 
\item Optimization variance (OV) \cite{zhang2021optimization}: we report the error and loss values when the value of OV increases for the 5th time (from the beginning of the training). \end{enumerate} On the leftmost column of Table~\ref{tab:RW}, we report the minimum values of the test error and the test loss over all the iterations, which may not necessarily coincide. For instance, in setting~(c), the test error is minimized at iteration 196, whereas the test loss is minimized at iteration 126. On the rightmost column of Table~\ref{tab:RW} we report the values of the test error and loss when no early stopping is applied and the training is continued until the training loss falls below $0.01$. Next to this, we report the values of the test error and the test loss when using $5$-fold cross-validation, which serves as a baseline. We have underlined the metrics that outperform $k$-fold cross-validation. We observe that the only metrics that consistently outperform $k$-fold CV are GD and the variance of gradients (Var). The EB-criterion, $\text{sign}(g_i\cdot g_j)$, and $\cos(g_i\cdot g_j)$ perform quite well as early stopping criteria, although not as well as GD and Var. In Section~\ref{sec:cos}, we observe that these metrics are not informative of the label noise level, contrary to gradient disparity. It is interesting to observe that gradient disparity and variance of gradients produce exactly the same results when used as early stopping criteria (Table~\ref{tab:RW}). However, in Section~\ref{app:var2}, we observe that the correlation between gradient disparity and the test loss is in general larger than the correlation between variance of gradients and the test loss.
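The stopping rule shared by GD, Var, and OV above (stop at the 5th increase of the metric, counted from the beginning of training) can be sketched as follows; this is a minimal illustration with a synthetic metric trace in place of values measured during training.

```python
def increases_so_far(history):
    """Count how many times the metric went up between consecutive evaluations."""
    return sum(1 for prev, curr in zip(history, history[1:]) if curr > prev)

def should_stop(history, patience=5):
    """Stop once the metric has increased `patience` times since the start of
    training (the rule used above for GD, Var, and OV)."""
    return increases_so_far(history) >= patience

# Synthetic metric trace: decreases at first, then starts to oscillate upward.
trace = [5.0, 4.0, 4.2, 3.0, 3.1, 2.5, 2.6, 2.8, 2.4, 2.9, 3.3]
stopped_at = None
for step in range(1, len(trace) + 1):
    if should_stop(trace[:step]):
        stopped_at = step - 1  # index of the evaluation that triggers the stop
        break
# The 5th increase happens at the 10th evaluation (index 9).
```

The trace and the `patience=5` threshold are illustrative only; in the experiments the metric is evaluated on mini-batches during training.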
\subsection{Capturing Label Noise Level}\label{sec:cos} In this section, we show that three metrics, even though they perform relatively well as early stopping criteria, fail to account for the level of label noise, contrary to gradient disparity. \begin{itemize} \item The sign of the gradient inner product, $\text{sign}(g_i\cdot g_j)$, should be inversely related to the test loss; it should decrease when overfitting increases. However, we observe that the value of $\text{sign}(g_i\cdot g_j)$ is larger for the setting with the higher label noise level; it incorrectly detects the setting with the higher label noise level as the setting with the better generalization performance (see Fig.~\ref{fig:eb}). \item The EB-criterion should be larger for settings with more overfitting. In most stages of training, the EB-criterion does not distinguish between settings with different label noise levels, contrary to gradient disparity (see Fig.~\ref{fig:eb}). At the end of the training, the EB-criterion even mistakenly signals the setting with the higher label noise level as the setting with the better generalization performance. \item The cosine similarity between gradient vectors, $\cos (g_i \cdot g_j)$, should decrease when overfitting increases and therefore with the level of label noise in the training data. But $\cos (g_i \cdot g_j)$ does not appear to be sensitive to the label noise level, and in some cases (Fig.~\ref{fig:inner_prod} (a)) it even increases with the noise level. Gradient disparity is much more informative of the label noise level compared to cosine similarity, and the correlation between gradient disparity and the test error is larger than the correlation between cosine similarity and the test accuracy (see Fig.~\ref{fig:inner_prod}).
\end{itemize} \subsection{Gradient Disparity versus Variance of Gradients}\label{app:var2} It has been shown that generalization is related to gradient alignment experimentally in \cite{fort2019stiffness}, and to the variance of gradients theoretically in \cite{negrea2019information}. Gradient disparity can be viewed as bringing the two together. Indeed, one can check that ${\mathbb{E}\left[\mathcal{D}_{i,j}^2\right] = 2 \sigma_g^2 + 2 \mu_g^T \mu_g - 2 \mathbb{E} \left[g_i^Tg_j\right] }$, given that ${\mu_g = \mathbb{E}[g_i] = \mathbb{E}[g_j]}$ and ${\sigma_g^2 = \text{tr}\left(\text{Cov}\left[g_i\right]\right) = \text{tr}\left(\text{Cov}\left[g_j\right]\right)}$. This shows that the gradient variance $\sigma_g^2$ and the gradient alignment $g_i^Tg_j$ both appear as components of gradient disparity. We conjecture that the dominant term in gradient disparity is the variance of gradients, hence, as early stopping criteria, these two metrics almost always signal overfitting simultaneously. This is indeed what our experiments show: variance of gradients is itself a very promising early stopping criterion (Table~\ref{tab:RW}). However, because of the additional term in gradient disparity (the gradient inner product), gradient disparity emphasizes the alignment or misalignment of the gradient vectors. This could be the reason why gradient disparity in general outperforms variance of gradients in tracking the value of the generalization loss; the positive correlation between gradient disparity and the test loss is often larger than the positive correlation between variance of gradients and the test loss (Table~\ref{tab:var}).
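The identity above can be checked exactly on a finite pool of vectors, taking $i$ and $j$ independent and uniform over the pool, so that $\mathbb{E}[g_i^Tg_j]=\mu_g^T\mu_g$ and both sides reduce to $2\sigma_g^2$. The toy "gradients" below are illustrative stand-ins, not actual network gradients.

```python
# Exact check of E[D_ij^2] = 2*sigma_g^2 + 2*mu^T mu - 2*E[g_i^T g_j]
# on an empirical distribution (all expectations are averages over pairs).
grads = [[1.0, 2.0], [0.5, -1.0], [2.0, 0.0], [-0.5, 1.5]]
n, dim = len(grads), len(grads[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

mu = [sum(g[k] for g in grads) / n for k in range(dim)]    # mean gradient mu_g
sigma2 = sum(dot(g, g) for g in grads) / n - dot(mu, mu)   # tr(Cov[g])

# Left-hand side: average squared disparity ||g_i - g_j||^2 over all pairs.
lhs = sum(dot([a - b for a, b in zip(gi, gj)], [a - b for a, b in zip(gi, gj)])
          for gi in grads for gj in grads) / n**2

# Right-hand side: 2*sigma_g^2 + 2*mu^T mu - 2*E[g_i^T g_j].
e_inner = sum(dot(gi, gj) for gi in grads for gj in grads) / n**2  # = mu^T mu
rhs = 2 * sigma2 + 2 * dot(mu, mu) - 2 * e_inner

assert abs(lhs - rhs) < 1e-12
```

With independent $i,j$ the inner-product term cancels the $2\mu_g^T\mu_g$ term, so both sides equal $2\sigma_g^2$; on correlated mini-batch gradients the alignment term is what distinguishes gradient disparity from the plain variance.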
\begin{table}[h] \centering \caption{Pearson's correlation coefficient between gradient disparity ($\overline{\mathcal{D}}$) and test loss (TL) over the training iterations is compared to the correlation between variance of gradients (Var) and test loss.}\label{tab:var} \vspace*{1em} \begin{tabular}{l|l|l} \toprule Setting & $\rho_{\overline{\mathcal{D}}, \text{TL}}$ & $\rho_{\text{Var}, \text{TL}}$ \\ \midrule AlexNet, MNIST & $\mathbf{0.433}$ & $0.169$ \\ \midrule AlexNet, MNIST, $50\%$ random labels & $\mathbf{0.535}$ & $0.161$ \\ \midrule VGG-16, CIFAR-10 & $0.190$ & $\mathbf{0.324}$ \\ \midrule VGG-16, CIFAR-10, $50\%$ random labels & $\mathbf{0.634}$ & $0.623$ \\ \midrule VGG-19, CIFAR-10 & $\mathbf{0.685}$ & $0.508$ \\ \midrule VGG-19, CIFAR-10, $50\%$ random labels & $\mathbf{0.748}$ & $0.735$ \\ \midrule ResNet-18, CIFAR-10 & $\mathbf{0.975}$ & $0.958$ \\% batch size here is 32 \midrule ResNet-18, CIFAR-10, $50\%$ random labels & $\mathbf{0.471}$ & $0.457$ \\% here is also with batch isze 32 \bottomrule \end{tabular} \end{table} \begin{figure*} \caption{Test loss, gradient disparity, EB-criterion \cite{mahsereci2017early}, and $\text{sign}(g_i\cdot g_j)$ for a ResNet-18 trained on the CIFAR-10 dataset, with $0\%$ and $50\%$ random labels. Gradient disparity, contrary to EB-criterion and $\text{sign}(g_i\cdot g_j)$, clearly distinguishes the setting with correct labels from the setting with random labels.} \label{fig:eb} \end{figure*} \begin{figure*} \caption{CIFAR-10, ResNet-18} \caption{MNIST, AlexNet} \label{fig:inner_prod} \end{figure*} \end{document}
\begin{document} \pagestyle{plain} \title{Dynamic Set Intersection\thanks{Supported by NSF grants CCF-1217338 and CNS-1318294 and a grant from the US-Israel Binational Science Foundation. This research was performed in part at the Center for Massive Data Algorithmics (MADALGO) at Aarhus University, which is supported by the Danish National Research Foundation grant DNRF84.}} \author{ Tsvi Kopelowitz\inst{1} and Seth Pettie\inst{1} and Ely Porat\inst{2}} \institute{University of Michigan \and Bar-Ilan University } \maketitle \begin{abstract} Consider the problem of maintaining a family $F$ of dynamic sets subject to insertions, deletions, and set-intersection reporting queries: given $S,S'\in F$, report every member of $S\cap S'$ in any order. We show that in the word RAM model, where $w$ is the word size, given a cap $d$ on the maximum size of any set, we can support set intersection queries in $O(\frac{d}{w/\log^2 w})$ expected time, and updates in $O(\log w)$ expected time. Using this algorithm we can list all $t$ triangles of a graph $G=(V,E)$ in $O(m+\frac{m\alpha}{w/\log^2 w} +t)$ expected time, where $m=|E|$ and $\alpha$ is the arboricity of $G$. This improves a 30-year-old triangle enumeration algorithm of Chiba and Nishizeki running in $O(m \alpha)$ time. We provide an incremental data structure on $F$ that supports intersection {\em witness} queries, where we only need to find {\em one} $e\in S\cap S'$. Both queries and insertions take $O\paren{\sqrt \frac{N}{w/\log^2 w}}$ expected time, where $N=\sum_{S\in F} |S|$. Finally, we provide time/space tradeoffs for the fully dynamic set intersection reporting problem. Using $M$ words of space, each update costs $O(\sqrt {M \log N})$ expected time, each reporting query costs $O(\frac{N\sqrt{\log N}}{\sqrt M}\sqrt{op+1})$ expected time, where $op$ is the size of the output, and each witness query costs $O(\frac{N\sqrt{\log N}}{\sqrt M} + \log N)$ expected time.
\end{abstract} \section{Introduction} In this paper we explore the power of {\em word level parallelism} to speed up algorithms for dynamic set intersection and triangle enumeration. We assume a $w$-bit word-RAM model, $w>\log n$, with the standard repertoire of unit-time operations on $w$-bit words: bitwise Boolean operations, left/right shifts, addition, multiplication, comparison, and dereferencing. Using the modest parallelism intrinsic in this model (sometimes in conjunction with tabulation) it is often possible to obtain a nearly factor-$w$ (or factor-$\log n$) speedup over traditional algorithms. The {\em Four Russians} algorithm for boolean matrix multiplication is perhaps the oldest algorithm to use this technique. Since then it has been applied to computing edit distance~\cite{MasekP80}, regular expression pattern matching~\cite{Myers92}, APSP in dense weighted graphs~\cite{Chan10}, APSP and transitive closure in sparse graphs~\cite{Chan12,Chan08}, and more recently, to computing the Fr\'echet distance~\cite{BuchinBMM14} and solving 3SUM in subquadratic time~\cite{BaranDP08,GronlundP14}. Refer to~\cite{Chan13} for more examples. \paragraph{\textbf{Set Intersection.}} The problem is to represent a (possibly dynamic) family of sets $F$ with total size $N=\sum_{S\in F} |S|$ so that given $S,S'\in F$, one can quickly determine if $S\cap S'=\emptyset$ (emptiness query) or report some $x\in S\cap S'$ (witness query) or report all members of $S\cap S'$. Let $d$ be an {\em a priori} bound on the size of any set. We give a randomized algorithm to preprocess $F$ in $O(N)$ time such that reporting queries can be answered in $O(d / \frac{w}{\log^2 w} + |S\cap S'|)$ {\em expected} time. Subsequent insertion and deletion of elements can be handled in $O(1)$ expected time. We give $O(N)$-space structures for the three types of queries when there is no restriction on the size of sets. 
For emptiness queries the expected update and query times are $O(\sqrt{N})$; for witness queries the expected update and query times are $O(\sqrt{N\log N})$; for reporting queries the expected update time is $O(\sqrt{N\log N})$ and the expected query time is $O(\sqrt{N\log N (1+ |S\cap S'|)})$. These fully dynamic structures do not benefit from word-level parallelism. When only insertions are allowed we give another structure that handles both insertions and emptiness/witness queries in $O(\sqrt{N / \frac{w}{\log^2 w}})$ expected time.\footnote{These data structures offer a tradeoff between space $M$, query time, and update time. We restricted our attention to $M=O(N)$ here for simplicity.} \paragraph{\textbf{\textsf{3SUM}{} Hardness.}} Data structure lower bounds can be proved unconditionally, or conditionally, based on the {\em conjectured} hardness of some problem. One of the most popular conjectures for conditional lower bounds is that the \textsf{3SUM}{} problem (given $n$ real numbers, determine if any three sum to zero) cannot be solved in truly subquadratic (expected) time, i.e. $O(n^{2-\Omega(1)})$ time. Even if the inputs are integers in the range $[-n^3,n^3]$ (the \textsf{Integer\ThreeSUM}{} problem), the problem is still conjectured to be insoluble in truly subquadratic (expected) time. See~\cite{Patrascu10,KPP14a,GronlundP14} and the references therein. P\v{a}tra\c{s}cu{} in~\cite{Patrascu10} showed that the \textsf{Integer\ThreeSUM}{} problem can be reduced to offline set-intersection, thereby obtaining conditional lower bounds for offline data structures for set-intersection. The parameters of this reduction were tightened by us in~\cite{KPP14a}. Converting a conditional lower bound for the offline version of a problem to a conditional lower bound for the incremental (and hence dynamic) version of the same problem is straightforward, and thus we can prove conditional lower bounds for the incremental (and hence dynamic) set intersection problems. 
In particular, we are able to show that conditioned on the \textsf{Integer\ThreeSUM}{} conjecture, for the incremental emptiness version either the update or the query time must be $\Omega(N^{1/2-o(1)})$. This is discussed in more detail, including lower bounds for the reporting version, in Appendix~\ref{app:3sum_lb}. \paragraph{\textbf{Related work.}} Most existing set intersection data structures, e.g., \cite{DLM00,BarbayK02,BY04}, work in the comparison model, where sets are represented as sorted lists or arrays. In these data structures the main benchmark is the minimum number of comparisons needed to certify the answer. Bille, Pagh, and Pagh~\cite{BPP07} also used similar word-packing techniques to evaluate expressions of set intersections and unions. Their query algorithm finds the intersection of $m$ sets with a total of $n$ elements in $O(n/\frac{w}{\log^2 w} + m\cdot op)$ time, where $op$ is the size of the output. Cohen and Porat~\cite{CP10} designed a {\em static} $O(N)$-space data structure for answering reporting queries in $O(\sqrt{N(1+|S\cap S'|)})$ time, which is only $O(\sqrt{\log N})$ faster than the data structure presented here. \paragraph{\textbf{Triangle Enumeration.}} Itai and Rodeh~\cite{ItaiR78} showed that all $t$ triangles in a graph could be enumerated in $O(m^{3/2})$ time. Thirty years ago, Chiba and Nishizeki~\cite{CN85} generalized~\cite{ItaiR78} to show that $O(m\alpha)$ time suffices, where $\alpha$ is the {\em arboricity} of the graph. This algorithm has only been improved for dense graphs using fast matrix multiplication. The recent algorithm of Bj\"orklund, Pagh, Williams, and Zwick~\cite{BjorklundPWZ14} shows that when the matrix multiplication exponent $\omega =2$, triangle enumeration takes $\tilde{O}(\min\{n^2 + nt^{2/3}, m^{4/3} + mt^{1/3}\})$ time. (The actual running time is expressed in terms of $\omega$.)
We give the first asymptotic improvement to Chiba and Nishizeki's algorithm for graphs that are too sparse to benefit from fast matrix multiplication. Using our set intersection data structure, we can enumerate $t$ triangles in $O(m + m\alpha/\frac{w}{\log^2 w} + t)$ expected time.\\ For simplicity we have stated all bounds in terms of an arbitrary word size $w$. When $w=O(\log n)$ the $w/\log^2 w$ factor becomes $\log n/\log\log n$. \paragraph{\textbf{Overview of the paper.}} The paper is structured as follows. In Section~\ref{sect:set_packing} we discuss a packing algorithm for (dynamic) set intersection, and in Section~\ref{sect:triangle_listing} we show how the packing algorithm for set intersection can be used to speed up triangle listing. In Section~\ref{sec:emptiness} we present our data structure for emptiness queries on a fully dynamic family of sets, with time/space tradeoffs. In Section~\ref{sect:packed_witnesses} we combine the packing algorithm for set intersection with the emptiness query data structure to obtain a packed data structure for set intersection witness queries on an incremental family of sets. In Section~\ref{sect:fully_dyn_set_intersection} we present non-packed data structures for emptiness, witness, and reporting set intersection queries on a fully dynamic family of sets, with time/space tradeoffs. Finally, we discuss conditional lower bounds based on the \textsf{3SUM}{} conjecture for dynamic versions of the set intersection problem in the Appendix. \section{Packing Sets}\label{sect:set_packing} \begin{theorem}\label{theorem:packed_set_intersection} A family of sets $F=\{S_1,\cdots,S_t\}$ with $d > \max_{S\in F} |S|$ can be preprocessed in linear time to facilitate the following set intersection queries. Given two $S,S'\in F$, one can find a witness in $S\cap S'$ in $O(\frac{d\log^2 w}{w})$ expected time and list all of the elements of $S\cap S'$ in $O(|S\cap S'|)$ additional expected time. 
If $w = O(\log n)$ then the query time is reduced to $O(\frac{d\log\log n}{\log n})$. Furthermore, updates (insertions/deletions of elements) to sets in $F$ can be performed in $O(1)$ expected time, subject to the constraint that $d > \max_{S\in F} |S|$. \end{theorem} \begin{proof} Every set $S\in F$ is split into $\ell$ buckets $B^S_1,\ldots, B^S_\ell$ where $\ell = \frac{d\log w}{w}$. We pick a function $h$ from a pairwise independent family of hash functions and assign each element $e\in S$ into a bucket $B^S_{h(e)}$. The expected number of elements from a set $S$ in each bucket is $\frac{w}{\log w}$. We use a second hash function $h'$ from another family of pairwise independent hash functions which reduces the universe size to $w^2$. An $h'(e)$ value is represented with $2\log w + 1$ bits, the extra {\em control bit} being necessary for certain manipulations described below. For each $S$ and $i$ we represent $h'(B^S_i)$ as a packed, sorted sequence of $h'$-values. In expectation each $h'(B^S_i)$ occupies $O(1)$ words, though some buckets may be significantly larger. Finally, for each bucket $B^S_i$ we maintain a lookup table that translates from $h'(e)$ to $e$. If there is more than one element that is hashed to $h'(e)$ then all such elements are maintained in the lookup table via a linked list. Notice that $S\cap S' = \bigcup_{i=1}^{\ell} B^S_i\cap B^{S'}_i$. Thus, we can enumerate $S\cap S'$ by enumerating the intersections of all $B^S_i\cap B^{S'}_i$. Fix one such $i$. We first merge the packed sorted lists $h'(B^S_i)$ and $h'(B^{S'}_i)$. Albers and Hagerup~\cite{AH97} showed that two words of sorted numbers (separated by control bits) can be merged using Batcher's algorithm in $O(\log w)$ time. Using this as a primitive we can merge the sorted lists $h'(B^S_i)$ and $h'(B^{S'}_i)$ in time $O((|B^S_i| + |B^{S'}_i|) / (w/\log^2 w))$. Let $C$ be the resulting list, with control bits set to 0.
Our task is now to enumerate all numbers that appear twice (necessarily consecutively) in $C$. Let $C'$ be $C$ with control bits set to 1. We shift $C$ one field to the right ($2\log w + 1$ bit positions) and subtract it from $C'$.\footnote{The control bits stop carries from crossing field boundaries.} Let $C''$ be the resulting list, with all control bits reset to 0. A field is zero in $C''$ iff it and its predecessor were identical, so the problem now is to enumerate zero fields. By repeated halving, we can distill each field to a single bit (0 for zero, 1 for non-zero) in $O(\log\log w)$ time and then take the complement of these bits (1 for zero, 0 for non-zero). We have now reduced the problem to reading off all the 1s in a $w$-bit word, which can be done in $O(1)$ time per 1 using the most-significant-bit algorithm of~\cite{FW93}.\footnote{This algorithm uses multiplication. Without unit-time multiplication~\cite{BrodnikMM97} one can read off the 1s in $O(\log\log w)$ time per 1. If $w = O(\log n)$ then the instruction set is not as relevant since we can build $o(n)$-size tables to calculate most significant bits and other useful functions.} For each repeated $h'$-value we look up all elements in $B^S_i$ and $B^{S'}_i$ with that value and report any occurring in both sets. Every unit of time spent in this step corresponds to an element in the intersection or a false positive. The cost of intersecting buckets $B^S_i$ and $B^{S'}_i$ is \[ O\paren{1+\paren{\ceil{\frac{|B^S_i|}{w/\log w}}+\ceil{\frac{|B^{S'}_i|}{w/\log w}}}\log w + |B^S_i\cap B^{S'}_i| + f_i}, \] where $f_i$ is the number of false positives. The expected value of $f_i$ is $o(1)$ since the expected sizes of $B^S_i$ and $B^{S'}_i$ are $w/\log w$ and for $e\in B^S_i, e'\in B^{S'}_i$, $\Pr(h'(e) = h'(e')) = 1/w^2$.
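The shift-and-subtract step just described can be illustrated in Python; here a big integer stands in for the packed word, the word size and field width are illustrative, and the final read-off of zero fields is done with a plain loop rather than the most-significant-bit tricks of~\cite{FW93}.

```python
# Emulate duplicate detection in a packed word of (2*log2(W)+1)-bit fields.
W = 64
F = 2 * 6 + 1                 # field width: 2*log2(64) + 1; top bit = control bit
VALUE_MASK = (1 << (F - 1)) - 1

vals = [3, 7, 7, 12]          # sorted h'-values of a merged bucket pair (list C)
C = sum(v << (k * F) for k, v in enumerate(vals))                # control bits 0
Cp = C | sum(1 << (k * F + F - 1) for k in range(len(vals)))     # control bits 1

diff = Cp - (C >> F)          # subtract C, shifted one field right, from C'
# Field k of diff (ignoring its control bit) is zero iff vals[k] == vals[k+1];
# the control bits absorb the borrows, so fields do not interfere.
dups = [k for k in range(len(vals) - 1)
        if (diff >> (k * F)) & VALUE_MASK == 0]
# dups == [1]: the value 7 appears twice.
```

Each reported position corresponds to a repeated $h'$-value, i.e., a candidate member of the intersection (or a false positive, resolved via the lookup tables).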
Thus, the expected runtime for a query is \begin{align*} &\sum_{i=1}^{\ell} O\paren{1+\paren{\ceil{\frac{|B^S_i|}{w/\log w}}+\ceil{\frac{|B^{S'}_i|}{w/\log w}}}\log w + |B^S_i\cap B^{S'}_i| + f_i}\\ &= O(\ell \log w + |S\cap S'|) \; = O\paren{\frac{d\log^2 w}{w} + |S\cap S'|}. \end{align*} It is straightforward to implement insertions and deletions in $O(1)$ time in expectation. Suppose we must insert $e$ into $S$. Once we calculate $i = h(e)$ and $h'(e)$ we need to insert $h'(e)$ into the packed sorted list representing $h'(B^S_i)$. Suppose that $h'(B^S_i)$ fits in one word; let it be $D$, with all control bits set to 1.\footnote{If $h'(B^S_i)$ is larger we apply this procedure to each word of the list $h'(B^S_i)$. It occupies $O(1)$ words in expectation.} With a single multiplication we form a word $D'$ whose fields each contain $h'(e)$ and whose control bits are zero. If we subtract $D'$ from $D$ and mask everything but the control bits, the most significant bit identifies the location of the successor of $h'(e)$ in $h'(B^S_i)$. We can then insert $h'(e)$ into the sorted list in $D$ with $O(1)$ masks and shifts. The procedure for deleting an element in $O(1)$ time follows the same lines. \qed \end{proof} \section{A Faster Triangle Enumeration Algorithm}\label{sect:triangle_listing} \begin{theorem}\label{theorem:three_corners} Given an undirected graph $G=(V,E)$ with $m=|E|$ edges and arboricity $\alpha$, all $t$ triangles can be enumerated in $O(m+\frac{m\alpha}{w/\log^2 w}+t)$ expected time or in $O\paren{m + \frac{m\alpha}{\log n/\log\log n} + t}$ expected time if $w = O(\log n)$. \end{theorem} \begin{proof} We will make use of the data structure in Theorem~\ref{theorem:packed_set_intersection}. To do this we first find an acyclic orientation of $E$ in which the out-degree of any vertex is $O(\alpha)$. Such an orientation can be found in linear time using the peeling algorithm of Chiba and Nishizeki~\cite{CN85}. 
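To make the reduction concrete, here is a Python sketch of the peeling orientation and the per-edge out-neighbor intersections used in this proof; ordinary hash sets stand in for the packed structure of Theorem~\ref{theorem:packed_set_intersection} (so this sketch does not attain the word-packed speedup), and the heap makes the orientation $O(m\log n)$ rather than linear time. A simple undirected graph given as an edge list is assumed.

```python
import heapq
from collections import defaultdict

def peel_orientation(edges):
    """Repeatedly remove a minimum-degree vertex, orienting its remaining
    incident edges away from it. Out-degrees are bounded by the degeneracy,
    which is O(arboricity), and the orientation is acyclic (peeling order)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    heap = [(len(nbrs), v) for v, nbrs in adj.items()]
    heapq.heapify(heap)
    removed, oriented = set(), []
    while heap:
        d, u = heapq.heappop(heap)
        if u in removed or d != len(adj[u]):
            continue                      # stale heap entry
        removed.add(u)
        for v in adj[u]:
            oriented.append((u, v))       # u -> v since u is peeled first
            adj[v].discard(u)
            heapq.heappush(heap, (len(adj[v]), v))
        adj[u].clear()
    return oriented

def list_triangles(edges):
    """For every edge {u, v}, report the common out-neighbors of u and v;
    acyclicity guarantees each triangle is reported exactly once."""
    out = defaultdict(set)
    for u, v in peel_orientation(edges):
        out[u].add(v)
    return sorted(tuple(sorted((u, v, w)))
                  for u, v in edges for w in out[u] & out[v])

# K4 has exactly four triangles, each reported once.
k4 = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Replacing the `out[u] & out[v]` hash-set intersections with the packed structure is what yields the stated $O(m + m\alpha/\frac{w}{\log^2 w} + t)$ bound.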
Define $\Gamma^+(u) = \{v \;|\; (u,v)\in E\}$ to be the set of out-neighbors of $u$ according to this orientation. Begin by preprocessing the family $F = \{\Gamma^+(u) \:|\: u\in V\}$, where all sets have size $O(\alpha)$. For each edge $(u,v)$, enumerate all elements in the intersection $\Gamma^+(u) \cap \Gamma^+(v)$. For each vertex $w$ in the intersection, output the triangle $\{u,v,w\}$. Since the orientation is acyclic, every triangle is output exactly once. There are $m$ set intersection queries, each taking $O(1 + \alpha / \max\{\frac{w}{\log^2 w}, \frac{\log n}{\log\log n}\})$ time, aside from the cost of reporting the output, which is $O(1)$ per triangle. \qed \end{proof} \section{Dynamic Emptiness Queries with Time/Space Tradeoff}\label{sec:emptiness} \begin{theorem}\label{theorem:emptiness_structure} There exists an algorithm that maintains a family $F$ of dynamic sets using $O(M)$ space, where each update costs $O(\sqrt {M})$ expected time, and each emptiness query costs $O(\frac{N}{\sqrt{M}})$ expected time.\end{theorem} \begin{proof} Each set $S\in F$ maintains its elements in a lookup table using a perfect dynamic hash function. So the cost of inserting a new element into $S$, deleting an element from $S$, or determining whether some element $x$ is in $S$ is $O(1)$ expected time. Let $N=\sum_{S\in F} |S|$. We make the standard assumption that $N$ is always at least $N'/2$ and at most $2N'$ for some natural number $N'$. Standard rebuilding de-amortization techniques are used if this is not the case. \paragraph{The Structure.} We say a set $S$ is \textit{large} if at some point $|S|>2N'/\sqrt{M}$, and since the last time $S$ was at least that large, its size was never less than $N'/\sqrt{M}$. If $S$ is not large, and its size is at least $N'/\sqrt{M}$ then we say it is \textit{medium}. If $S$ is neither large nor medium then it is \textit{small}. Notice that the size of a small set is less than $N'/\sqrt{M}=O(N/\sqrt{M})$.
Let $L\subseteq F$ be the sub-family of large and medium sets, and let $\ell = |L|$. Notice that $\ell\leq {\sqrt{M}}$. For each set $S\in L$ we maintain a unique integer $1\leq i_S \leq \ell$, and an \textit{intersection-size} dynamic look-up table $T_S$ of size $\ell$ such that for a large set $S'$ we have $T_S[i_{S'}]= |S\cap S'|$. Adding and deleting entries from the table takes expected constant time using hashing. Due to the nature of our algorithm we cannot guarantee that all of the intersection-size tables will always be fully updated. However, we will guarantee the following invariant. \begin{inv}\label{inv:indicators} For every two large sets $S$ and $S'$, $T_S[i_{S'}]$ and $T_{S'}[i_{S}]$ are correctly maintained. \end{inv} \paragraph{Query.} For two sets $S,S'\in F$ where either $S$ or $S'$ is not large, say $S$, we determine if they intersect by scanning the elements in $S$ and using the lookup table for $S'$. The time cost is $O(|S|)=O(N'/\sqrt{M})$. If both sets are large, then we examine $T_S[i_{S'}]$ which determines the size of the intersection (by Invariant~\ref{inv:indicators}) and decide accordingly if it is empty or not. This takes $O(1)$ time. \paragraph{Insertions.} When inserting a new element $x$ into $S$, we first update the lookup table of $S$ to include $x$. Next, if $S$ was small and remained small then no additional work is done. Otherwise, for each $S'\in L$ we must update the size of $S\cap S'$ in the appropriate intersection-size tables. This is done directly in $O(\sqrt {M})$ time by determining whether $x$ is in $S'$, for each $S'$, via the lookup tables. We briefly recall, as mentioned above, that it is possible that some of the intersection-size tables will not be fully updated, and so incrementing the size of an intersection is only helpful if the intersection size was correctly maintained before. 
Nevertheless, as explained soon, Invariant~\ref{inv:indicators} will be guaranteed to hold, which suffices for the correctness of the algorithm since the intersection-size tables are only used when intersecting two large sets. The more challenging case is when $S$ becomes medium. If this happens, we would like to increase $\ell$ by 1, assign $i_S$ to be the new $\ell$, allocate and initialize $T_S$ in $O(\sqrt{M})$ time, and for each $S'\in L$ we compute $|S\cap S'|$ and insert the answer into $T_S[i_{S'}]$ and $T_{S'}[i_S]$. This entire process is dominated by the task of computing $|S\cap S'|$ for each $S' \in L$, taking a total of $O(\sum_{S'\in L}|S|)$ time, which could be as large as $O(N)$ and is too costly. However, this work can be spread over the next $N'/\sqrt {M}$ insertions made into $S$ until $S$ becomes large. This is done as follows. When $S$ becomes medium we create a list $L_S$ of all of the large and medium sets at this time (without their elements). This takes $O(\sqrt {M})$ time. Next, for every insertion into $S$ we compute the values of $O(M/N')$ locations in $T_S$ by computing the intersection size of $S$ and each of $O(M/N')$ sets from $L_S$ in $O(\frac{M}{N'} \cdot \frac{N}{\sqrt{M}}) = O(\sqrt M)$ time. For each such set $S'$ we also update $T_{S'}[i_S]$. By the time $S$ becomes large we will have correctly computed the values in $T_S$ for all $O(\sqrt M)$ of the sets in $L_S$, and for every set $S'\in L_S$ we will have correctly computed $T_{S'}[i_S]$. It is possible that between the time $S$ became medium and the time $S$ became large, there were other sets such as $S'$ which became medium and perhaps even large, but $S'\not\in L_S$. Notice that in such a case $S\in L_{S'}$ and so it is guaranteed that by the time both $S$ and $S'$ are large, the indicators $T_S[i_{S'}]$ and $T_{S'}[i_{S}]$ are correctly updated, thereby guaranteeing that Invariant~\ref{inv:indicators} holds.
Thus the total cost of performing an insertion is $O(\sqrt M)$ expected time. \paragraph{Deletions.} When deleting an element $x$ from $S$, we first update the lookup table of $S$ to remove $x$ in $O(1)$ expected time. If $S$ was small and remained small then no additional work is done. If $S$ was in $L$ then we scan all of the $S'\in L$ and check if $x$ is in $S'$ in order to update the appropriate locations in the intersection-size tables. This takes $O(\sqrt {M})$ time. If $S$ was medium and now became small, we need to decrease $\ell$ by 1, free the assignment of $i_S$, delete $T_S$, and for each $S'\in L$ we need to remove $T_{S'}[i_S]$. In addition, in order to accommodate the update process of medium-sized sets, for each medium set $S'$ we must remove $S$ from $L_{S'}$ if it was in there. \qed \end{proof} \begin{cor}\label{cor:emptiness_structure_linear} There exists an algorithm that maintains a family $F$ of dynamic sets using $O(N)$ space, where each update costs $O(\sqrt {N})$ expected time, and each emptiness query costs $O(\sqrt{N})$ expected time. \end{cor} \section{Incremental Witness Queries}\label{sect:packed_witnesses} \begin{theorem}\label{theorem:combine_set_intersection} Suppose there exists an algorithm $A$ that maintains a family $F$ of incremental sets, each of size at most $d$, such that set intersection witness queries can be answered in $O(\frac{d}{\tau_q})$ expected time and insertions can be performed in $O(\tau_u)$ expected time. Then there exists an algorithm to maintain a family $F$ of incremental sets---with no upper bound on set sizes---that uses $O(N)$ space and performs insertions and witness queries in $O(\sqrt{N/\tau_q})$ expected time, where $N = \sum_{S\in F} |S|$. \end{theorem} \begin{proof} We make the standard assumption that $N$ is always at least $N'/2$ and at most $2N'$ for some natural number $N'$. Standard rebuilding de-amortization techniques are used if this is not the case.
In our context, we say that a set is large if its size is at least $\sqrt{N'\tau_q}$, and is medium if its size is between $\sqrt{N'/\tau_q}$ and $\sqrt{N'\tau_q}$. Each medium and large set $S$ maintains a \emph{stash} of the at most $\sqrt{N'\tau_q}$ last elements that were inserted into $S$ (these elements are part of $S$). This stash is the entire set $S$ if $S$ is medium. If $S$ is large then the rest of $S$ (the elements not in the stash) is called the \emph{primary} set of $S$. Stashes are maintained using algorithm $A$ with $d=\sqrt{N'\tau_q}$. Thus, answering intersection queries between two medium sets takes $O(\sqrt{N'/\tau_q})$ expected time. We maintain for each medium and large set $S$ a witness table $P_S$ such that for any large set $S'$ we have that $P_S[i_{S'}]$ is either an element (witness) in the intersection of $S$ and the primary set of $S'$, or null if no such element exists. This works in the incremental setting because once a witness is established it never changes. Since there are at most $\sqrt{N'/\tau_q}$ large sets and at most $\sqrt{N'\tau_q}$ medium sets, the space usage is $O(N')$. If a query is between $S_1$ and $S_2$ and $S_1$ is large, then: (1) if $S_2$ is small we look up each element in $S_2$ to see if it is in $S_1$, (2) if $S_2$ is medium or large then we use the witness tables to see if there is a witness of an intersection between $S_2$ and the primary set of $S_1$ or between $S_1$ and the primary set of $S_2$, and if there is no such witness then we use algorithm $A$ to intersect the stashes of $S_2$ and $S_1$. In any case, the cost of a query is $O(\sqrt{N'/\tau_q})$ expected time. The details for maintaining these tables are similar to the details of maintaining the intersection-size array tables from Section~\ref{sec:emptiness}. \paragraph{Insertion.} When inserting an element $x$ into $S$, if $S$ is small then we do nothing. If $S$ is medium then we add $x$ to the stash of $S$ in algorithm $A$.
If $S$ is large then we add $x$ to the stash of $S$ and check for every other large set whether $x$ is in that set, updating the witness table accordingly. If $S$ became medium then we add it to the structure of algorithm $A$. Since the size of $S$ is $O(\sqrt{N'/\tau_q})$ this takes $O(\sqrt{N'/\tau_q})$ expected time. Furthermore, when $S$ becomes medium the table $P_S$ needs to be prepared. To do this, between the time $S$ is of size $\sqrt{N'/(2\tau_q)}$ and the time $S$ is of size $\sqrt{N'/\tau_q}$, the table $P_S$ is incrementally constructed. If $S$ became large then we now allow its primary set to be nonempty, and must also update the witness tables. The changes to the witness tables in this case are treated using the same techniques as in Theorem~\ref{theorem:emptiness_structure}, and so we omit their description. This will cost $O(\sqrt{N'/\tau_q}+\tau_u)$ expected time. Finally, for a large set $S$, once its stash reaches size $\sqrt {N' \tau_q}$ we dump the stash into the primary set of $S$, thereby emptying the stash. We describe an amortized algorithm for this process, which is de-amortized using a standard lazy approach. To combine the primary set and the stash we only need to update the witness tables for set intersection witnesses between medium sets and the new primary set of $S$, as it is possible that a witness was only in the stash. To do this, we directly scan all of the medium sets and check if a new witness can be obtained from the stash. The number of medium sets is $O(\sqrt {N'\tau_q})$ and the cost of each intersection will be $O(\sqrt{N'/\tau_q})$, for a total of $O(N')$ time. Since this operation only happens after $\Omega(\sqrt {N'\tau_q})$ insertions into $S$, the amortized cost is $O(\sqrt{N'/\tau_q})$ time. \qed \end{proof} Combining Theorem~\ref{theorem:packed_set_intersection} with Theorem~\ref{theorem:combine_set_intersection}, we obtain the following.
\begin{corollary} There exists an algorithm in the word-RAM model that maintains a family $F$ of incremental sets using $O(N)$ space where each insertion costs $O(\sqrt {\frac{N}{w/\log^2 w}} +\log w)$ expected time and a witness query costs $O(\sqrt {\frac{N}{w/\log^2 w}} )$ expected time. \end{corollary} \section{Fully Dynamic Set Intersection with Witness and Reporting Queries}\label{sect:fully_dyn_set_intersection} Each element in $\bigcup_{S\in F} S$ is assigned an integer from the range $[2N']$. When a new element not appearing in $\bigcup_{S\in F} S$ arrives, it is assigned the smallest available integer, and that integer is used as its key. When keys are deleted (no longer in use), we do not remove their assignment; instead, we use a standard rebuilding technique in order to reassign the elements. Finally, we use a second assignment via a random permutation of the integers in order to uniformly spread the assignments within the range. \paragraph{The structure.} Consider the following binary tree $T$ of height $\log N' + 1$ where each vertex $v$ covers some range of the universe $U=[2N']$, denoted by $[\alpha_v,\beta_v]$, such that the range of the root covers all of $U$, and the left (right) child of $v$ covers the first (second) half of $[\alpha_v,\beta_v]$. A vertex at depth $i$ covers $\frac{2N'}{2^i}$ elements of $U$. For a vertex $v$ let $S^v=S\cap [\alpha_v,\beta_v]$. Let $N_v = \sum_{S\in F} |S^v|$. Let $M_v = \frac{N_v\cdot M}{N'}$. We say a set $S$ is \textit{$v$-large} if at some point $|S^v|>\frac{2N_v}{\sqrt{M_v}}$, and since the last time $S^v$ was at least that large, its size was never less than $\frac{N_v}{\sqrt{M_v}}$. Each vertex $v\in T$ with children $v_0$ and $v_1$ maintains a structure for emptiness queries as in Theorem~\ref{theorem:emptiness_structure}, using $M_v$ space, on the family $F^v=\{S^v: S\in F\}$. In addition, we add auxiliary data to the intersection-size tables as follows.
For sets $S_1,S_2\in F$, the set of all vertices $v$ such that $S_1^v\cap S_2^v\neq\emptyset$ defines a connected tree $T'$. This tree has some branching vertices which have two children, some non-branching internal vertices with only one child, and some leaves. Consider the vertices $v$ in $T$ for which $S_1$ and $S_2$ are $v$-large and define $\hat{T}$ to be the connected component of these vertices that includes the root $r$. (It may be that $\hat{T}$ does not exist.) To facilitate a fast traversal of $\hat{T}$ during a query, we maintain \textit{shortcut} pointers for every two sets $S_1,S_2\in F$ and for every vertex $v\in T$ such that both $S_1$ and $S_2$ are $v$-large. To this end, we say $v$ is a \textit{branching-$(S_1,S_2)$-vertex} if both $S_1^{v_0}\cap S_2^{v_0} \neq \emptyset$ and $S_1^{v_1}\cap S_2^{v_1} \neq \emptyset$. Consider the path starting from the left (right) child of $v$ and ending at the first descendant $v'$ of $v$ such that: (1) $S_1$ and $S_2$ are relatively large for all of the vertices on the path, (2) $S_1^{v'}\cap S_2^{v'}\neq \emptyset$, and (3) either $v'$ is a \textit{branching-$(S_1,S_2)$-vertex} or one of the sets $S_1$ and $S_2$ is not $v'$-large. The left (right) shortcut pointer of $v$ will point to $v'$. Notice that the shortcut pointers are maintained for every vertex $v$ even if on the path from $r$ to $v$ there are some vertices for which either $S_1$ or $S_2$ is not relatively large, which helps to reduce the update time during insertions/deletions. Also notice that using these pointers it is straightforward to check in $O(1)$ time if $S_1^{v_0}\cap S_2^{v_0}$ and $S_1^{v_1}\cap S_2^{v_1}$ are empty or not. The space complexity of the structure is as follows. Each vertex $v$ uses $O(M_v)$ words of space, which is $O(M N_v/N')$. So the space usage is $\sum_v M_v= O(M\log N)$ words, since in each level of $T$ the sum of all $M_v$ for the vertices in that level is $O(M)$, and there are $O(\log N)$ levels.
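The dyadic range decomposition that defines the vertex sets $S^v$ can be enumerated directly; the following toy sketch (the function name is ours) lists $S^v$ for every vertex of $T$ over a power-of-two universe, whereas the real structure stores these sets implicitly.

```python
def dyadic_tree_sets(S, universe_size):
    """Enumerate S^v = S ∩ [lo, hi) for every dyadic range [lo, hi) of a
    power-of-two universe, i.e. the per-vertex sets of the tree T."""
    vertex_sets = {}
    span = universe_size
    while span >= 1:
        for lo in range(0, universe_size, span):
            vertex_sets[(lo, lo + span)] = {x for x in S if lo <= x < lo + span}
        span //= 2
    return vertex_sets
```

For example, with universe $[0,8)$ and $S=\{1,5,6\}$, the root range holds all of $S$ while its right child $[4,8)$ holds $\{5,6\}$.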
\paragraph{Reporting queries.} For a reporting query on $S_1$ and $S_2$, if $op=0$ then either the emptiness test at the root will conclude in $O(1)$ time, or we spend $O(\frac{N_r}{\sqrt{M_r}}) = O(\frac{N}{\sqrt M})$ time. Otherwise, we recursively examine vertices $v$ in $T$ starting with the root $r$. If both $S_1$ and $S_2$ are $v$-large and $S_1^v\cap S_2^v \neq \emptyset$, then we continue recursively to the vertices pointed to by the appropriate shortcut pointers. If either $S_1$ or $S_2$ is not $v$-large then we wish to output all of the elements in the intersection of $S_1^v$ and $S_2^v$. To do this, we check for each element in the smaller set if it is contained within the larger set using the lookup table, which takes $O(\frac{N_v}{\sqrt{M_v}})$ time. For the runtime, as we traverse down $T$ from $r$ using appropriate shortcut pointers, we encounter only two types of vertices. The first type are vertices $v$ for which both $S_1$ and $S_2$ are $v$-large, and the second type are vertices $v$ for which either $S_1$ or $S_2$ is not $v$-large. Each vertex of the first type performs $O(1)$ work, and the number of such vertices is at most the number of vertices of the second type, due to the branching nature of the shortcut pointers. For vertices of the second type, the intersection $S_1^v\cap S_2^v$ must be non-empty at such vertices, and so the $O(\frac{N_v}{\sqrt{M_v}})$ time cost can be charged to at least one element in the output. Denote the vertices of the second type by $v_1,v_2,\ldots,v_t$. Notice that $t\leq op$ as each $v_i$ contains at least one element from the intersection, and that $\sum_i N_{v_i} < 2N'$ since the vertices are not ancestors of each other. We will make use of the following lemma. \begin{lemma}\label{lemma:sum_sqrt_bound} If $\sum_{i=1}^t x_i \leq k$ then $\sum_{i=1}^t \sqrt{x_i} \leq \sqrt{k\cdot t}$.
\end{lemma} \begin{proof} Since $\sum_{i=1}^t \sqrt{x_i}$ is maximized whenever all the $x_i$ are equal, we have that $\sum_{i=1}^t \sqrt{x_i} \leq t\sqrt{\frac{k}{t}} = \sqrt{kt}$. \qed \end{proof} Therefore, the total time cost is \begin{align*} \sum_i \frac{N_{v_i}}{\sqrt{M_{v_i}}} &=\sum_i \frac{N_{v_i}\sqrt{N'}}{\sqrt{M N_{v_i}}} =\sqrt{\frac{N'}{M}} \sum_i \sqrt{N_{v_i}} \leq \sqrt{\frac{N'}{M}} \sqrt{2N'}\sqrt{t} \leq O\paren{\frac{N\sqrt{op}}{\sqrt{M}}}. \end{align*} \paragraph{Witness queries.} A witness query is answered by traversing down $T$ using shortcut pointers, but instead of recursively looking at both shortcut pointers for each vertex, we only consider one. Thus the total time it takes until we reach a vertex $v$ for which either $S_1$ or $S_2$ is not $v$-large is $O(\log N)$. Next, we use the hash function to find an element in the intersection in $O(\frac{N}{\sqrt M})$ time, for a total of $O(\log N + \frac{N}{\sqrt M})$ time to answer a witness query. \paragraph{Insertions and Deletions.} When inserting a new element $x$ into $S_1$, we first locate the leaf $\ell$ of $T$ which covers $x$. Next, we update our structure on the path from $\ell$ to $r$ as follows. Starting from $\ell$, for each vertex $v$ on the path we insert $x$ into $S_1^v$. This incurs a cost of $O(\sqrt {M_v})$ for updating the emptiness query structure at $v$. If there exists some set $S_2$ such that $|S_1^v\cap S_2^v|$ becomes non-zero, then we may need to update some shortcut pointers on the path from $\ell$ to $r$ relative to $S_1$ and $S_2$. Since such a set $S_2$ must be large, the number of such sets is at most $\frac{N_v}{\sqrt{M_v}}$. To analyze the expected running time of an insertion, notice that since the elements in the universe are randomly distributed, the expected values of $N_v$ and $M_v$ for a vertex $v$ at depth $i$ are $\frac{N}{2^i}$ and $\frac{M}{2^i}$, respectively.
So the number of $v$-large sets is at most $\frac{N_v}{\sqrt{M_v}} = \frac{N}{\sqrt{2^iM}}$. The expected time cost of updating the emptiness structure is at most $\sum_{i=0}^{\log N'}\frac{N}{\sqrt{2^iM}} = O(\frac{N}{\sqrt M})$. The same analysis holds for the shortcut pointers. The deletion process is exactly the reverse of the insertion process, and also costs $O(\frac{N}{\sqrt M})$ expected time. The total space usage is $O(M\log N)$. With a change of variable (substituting $M/\log N$ for $M$ in the construction above), we can make the space $O(M)$ and obtain the following result. \begin{theorem}\label{theorem:mem_sqrt_set_intersection} There exists an algorithm that maintains a family $F$ of dynamic sets using $O(M)$ space where each update costs $O(\sqrt {M \log N})$ expected time, each reporting query costs $O(\frac{N\sqrt{\log N}}{\sqrt M}\sqrt{op+1})$ time, and each witness query costs $O(\frac{N\sqrt{\log N}}{\sqrt M}+ \log N)$ expected time. \end{theorem} \appendix \section{Conditional Lower Bounds from \textsf{3SUM}{}}\label{app:3sum_lb} We first make use of the following theorem, which was proven by Kopelowitz, Pettie, and Porat~\cite{KPP14a}. \begin{theorem}[\cite{KPP14a}]\label{thm:improved_reduction_reporting} For any constants $0\leq \gamma < 1$ and $0<\delta\leq 2$, let $\mathbb{A}$ be an algorithm for the offline set intersection reporting problem on a family $F$ of sets such that $N = \sum_{S\in F} |S| = \Theta(n^{\frac{3+\delta-\gamma}{2}})$ and there are $\Theta(n^{1+\gamma})$ pairs of sets whose intersection needs to be reported such that the total size of the intersections of these pairs is expected to be $O(n^{2-\delta})$. If $\mathbb{A}$ runs in expected $O(n^{2-\Omega(1)})$ time, then \textsf{Integer\ThreeSUM}{} can be solved in expected $O(n^{2-\Omega(1)})$ time.
\end{theorem} \begin{theorem}\label{thm:dynamic_set_int_report_lb} {\bf (Set Intersection Reporting Lower Bound)} For any constants $0\leq \gamma < 1$ and $0<\delta< 1$, any algorithm for solving the incremental set intersection reporting problem with insertion time of $t_i$ and query time $t_q + t_r\cdot op$ (where $op$ is the size of the output) must have $N\cdot t_i+N^{\frac{2(1+\gamma)}{3+\delta-\gamma}}t_q + N^{\frac{4-2\delta}{3+\delta-\gamma}}t_r= \Omega(N^{\frac{4}{3+\delta-\gamma}-o(1)})$ unless the \textsf{Integer\ThreeSUM}{} conjecture is false. \end{theorem} \begin{proof} An algorithm for solving the incremental set intersection reporting problem can be used to solve \textsf{Integer\ThreeSUM}{} via Theorem~\ref{thm:improved_reduction_reporting} by first inserting all of the $\Theta(n^{\frac{3+\delta-\gamma}{2}})$ elements into their appropriate sets and then performing the $\Theta(n^{1+\gamma})$ queries. Therefore, unless the \textsf{Integer\ThreeSUM}{} conjecture is false, we have $\Theta(n^{\frac{3+\delta-\gamma}{2}} t_i + n^{1+\gamma}t_q + n^{2-\delta}t_r) = \Omega(n^{2-o(1)})$. Substituting $n=N^{\frac{2}{3+\delta-\gamma}}$ completes the proof. \qed \end{proof} Let us consider a few points on the lower bound curve of Theorem~\ref{thm:dynamic_set_int_report_lb}. The coefficients of the terms $t_i$, $t_q$, and $t_r$ are equal when $\gamma = \delta = 1/2$, which translates to $t_i+t_q +t_r= \Omega(N^{1/3-o(1)})$. Thus, at least one of the operations must cost roughly $\Omega(N^{1/3})$ time. Furthermore, if $t_q=t_r=O(1)$ then $t_i = \Omega(N^{\frac{4}{3+\delta-\gamma}-1-o(1)})$ so by making $\delta$ as small as possible and $\gamma$ as large as possible we obtain $t_i = \Omega(N^{1-o(1)})$. This matches a trivial algorithm where we explicitly maintain each set intersection.
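The exponent arithmetic behind these trade-off points is easy to machine-check; the following sketch (the function name is ours) evaluates the exponents of $N$ appearing in the bound of Theorem~\ref{thm:dynamic_set_int_report_lb}.

```python
from fractions import Fraction

def reporting_exponents(gamma, delta):
    """Exponents of N in the terms N*t_i, N^{2(1+g)/(3+d-g)}*t_q,
    N^{(4-2d)/(3+d-g)}*t_r, and on the lower-bound side N^{4/(3+d-g)}."""
    den = Fraction(3) + delta - gamma
    return (Fraction(1),                    # coefficient of t_i
            Fraction(2) * (1 + gamma) / den,  # coefficient of t_q
            (Fraction(4) - 2 * delta) / den,  # coefficient of t_r
            Fraction(4) / den)                # right-hand side

ti, tq, tr, rhs = reporting_exponents(Fraction(1, 2), Fraction(1, 2))
```

At $\gamma=\delta=1/2$ all three coefficients equal $N^1$ against a right-hand side of $N^{4/3}$, which is exactly the $t_i+t_q+t_r=\Omega(N^{1/3-o(1)})$ point above.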
However, if $t_i=t_r=O(1)$ then $t_q = \Omega(N^{\frac{2-2\gamma}{3+\delta-\gamma}-o(1)})$, and so by making $\delta$ as small as possible and setting $\gamma = 0$ we obtain $t_q = \Omega(N^{2/3-o(1)})$. Finally, if $t_i=t_q=O(1)$ then $t_r = \Omega(N^{\frac{2\delta}{3+\delta-\gamma}-o(1)})$ and so making $\gamma$ and $\delta$ as large as possible we obtain $t_r = \Omega(N^{2/3 -o(1)})$. \begin{theorem}[\cite{KPP14a}]\label{thm:improved_reduction} For any constant $0< \gamma < 1$, let $\mathbb{A}$ be an algorithm for the offline set intersection decision problem on a family $F$ of sets such that $N = \sum_{S\in F} |S| = \Theta(n^{2-\gamma})$, and there are $\Theta(n^{1+\gamma})$ pairs of sets whose disjointness needs to be determined. If $\mathbb{A}$ runs in expected $O(n^{2-\Omega(1)})$ time, then \textsf{Integer\ThreeSUM}{} can be solved in expected $O(n^{2-\Omega(1)})$ time. \end{theorem} \begin{theorem}\label{thm:dynamic_set_int_decision_lb} {\bf (Set Intersection Emptiness Lower Bound)} Fix $0< \gamma < 1$. Any algorithm for solving the incremental set intersection emptiness problem with insertion time $t_i$ and query time of $t_q$ must have $N\cdot t_i+N^{\frac{1+\gamma}{2-\gamma}}t_q = \Omega(N^{\frac{2}{2-\gamma}-o(1)})$ unless the \textsf{Integer\ThreeSUM}{} conjecture is false. \end{theorem} \begin{proof} An algorithm for solving the incremental set intersection decision problem can be used to solve \textsf{Integer\ThreeSUM}{} via Theorem~\ref{thm:improved_reduction} by first inserting all of the $\Theta(n^{2-\gamma})$ elements into their appropriate sets and then performing the $\Theta(n^{1+\gamma})$ queries. Therefore, unless the \textsf{Integer\ThreeSUM}{} conjecture is false, we have $\Theta(n^{2-\gamma} t_i + n^{1+\gamma}t_q) = \Omega(n^{2-o(1)})$. Substituting $n=N^{\frac{1}{2-\gamma}}$ completes the proof. \qed \end{proof} Let us consider a few points on the lower bound curve of Theorem~\ref{thm:dynamic_set_int_decision_lb}.
The coefficients of the terms $t_i$ and $t_q$ are equal when $\gamma = 1/2$, which translates to $t_i+t_q = \Omega(N^{1/3-o(1)})$. Thus, at least one of the operations must cost roughly $\Omega(N^{1/3})$ time. Furthermore, if $t_i=O(1)$ then $t_q = \Omega(N^{\frac{1-\gamma}{2-\gamma}-o(1)})$ so by making $\gamma$ as small as possible we obtain $t_q = \Omega(N^{1/2-o(1)})$. Finally, if $t_q=O(1)$ then $t_i = \Omega(N^{\frac{\gamma}{2-\gamma}-o(1)})$ so by making $\gamma$ as large as possible we obtain $t_i = \Omega(N^{1/2-o(1)})$. \end{document}
\begin{document} \baselineskip=15pt \setlength{\unitlength}{3pt} \begin{center}{\Large \bf Conformal Oscillator Representations \\ of Orthogonal Lie Algebras} \footnote{2010 Mathematics Subject Classification.
Primary 17B10; Secondary 22E46.} \end{center} \begin{center}{\large Xiaoping Xu \footnote{Research supported by NSFC Grants 11171324 and 11321101.}}\end{center} \begin{center}{Hua Loo-Keng Key Mathematical Laboratory\\ Institute of Mathematics, Academy of Mathematics \& System Sciences\\ Chinese Academy of Sciences, Beijing 100190, P.R. China}\end{center} \begin{abstract} \quad The conformal transformations with respect to the metric defining the orthogonal Lie algebra $o(n,\mathbb C)$ give rise to a one-parameter ($c$) family of inhomogeneous first-order differential operator representations of the orthogonal Lie algebra $o(n+2,\mathbb C)$. Letting these operators act on the space of exponential-polynomial functions that depend on a parametric vector $\vec a\in \mathbb C^n$, we prove that the space forms an irreducible $o(n+2,\mathbb C)$-module for any $c\in\mathbb C$ if $\vec a$ is not on a certain hypersurface. By partially swapping differential operators and multiplication operators, we obtain more general differential operator representations of $o(n+2,\mathbb C)$ on the polynomial algebra $\mathscr C$ in $n$ variables. Moreover, we prove that $\mathscr C$ forms an infinite-dimensional irreducible weight $o(n+2,\mathbb C)$-module with finite-dimensional weight subspaces if $c\not\in\mathbb Z/2$. \noindent{\it Keywords}:\hspace{0.3cm} orthogonal Lie algebra; differential operator; oscillator representation; irreducible module; polynomial algebra; exponential-polynomial function. \end{abstract} \section{Introduction} \quad$\;$ A module of a finite-dimensional simple Lie algebra is called a {\it weight module} if it is a direct sum of its weight subspaces. A module of a finite-dimensional simple Lie algebra is called {\it cuspidal} if it is not induced from its proper parabolic subalgebras.
Infinite-dimensional irreducible weight modules of finite-dimensional simple Lie algebras with finite-dimensional weight subspaces have been intensively studied in [BBL], [BFL], [BHL], [BL1], [BL2], [Fs], [Fv], [M]. In particular, Fernando [Fs] proved that such modules must be cuspidal or parabolically induced. Moreover, such cuspidal modules exist only for special linear Lie algebras and symplectic Lie algebras. A similar result was independently obtained by Futorny [Fv]. Mathieu [M] proved that these cuspidal modules are irreducible components in the tensor modules of multiplicity-free modules with finite-dimensional modules. Although the structures of irreducible weight modules of finite-dimensional simple Lie algebras with finite-dimensional weight subspaces were essentially determined by Fernando's result in [Fs] and Mathieu's result in [M], explicit structures of such modules are not well known. It is important to find explicit natural realizations of them. The $n$-dimensional conformal group with respect to the Euclidean metric $(\cdot,\cdot)$ is generated by the translations, rotations, dilations and special conformal transformations $$\vec x\mapsto\frac{\vec x-(\vec x,\vec x)\vec b}{(\vec b,\vec b) (\vec x,\vec x)-2(\vec b,\vec x)+1}.\eqno(1.1)$$ Conformal groups play important roles in geometry, partial differential equations and quantum physics. The conformal transformations with respect to the metric defining $o(n,\mathbb{C})$ give rise to an inhomogeneous representation of the Lie algebra $o(n+2,\mathbb{C})$ on the polynomial algebra in $n$ variables. Using Shen's mixed product for Witt algebras in [S] and the above representation, Zhao and the author [XZ] constructed a new functor from $o(n,\mathbb{C})$-{\bf Mod} to $o(n+2,\mathbb{C})$-{\bf Mod} and derived a condition for the functor to map a finite-dimensional irreducible $o(n,\mathbb{C})$-module to an infinite-dimensional irreducible $o(n+2,\mathbb{C})$-module.
Our general framework also gave a direct polynomial extension from irreducible $o(n,\mathbb{C})$-modules to irreducible $o(n+2,\mathbb{C})$-modules. The work [XZ] led to a one-parameter ($c$) family of inhomogeneous first-order differential operator (oscillator) representations of $o(n+2,\mathbb{C})$. Letting these operators act on the space of exponential-polynomial functions that depend on a parametric vector $\vec a\in \mathbb C^n$, we prove in this paper that the space forms an irreducible $o(n+2,\mathbb C)$-module for any $c\in\mathbb C$ if $\vec a$ is not on a certain hypersurface. By partially swapping differential operators and multiplication operators, we obtain more general differential operator (oscillator) representations of $o(n+2,\mathbb C)$ on the polynomial algebra $\mathscr C$ in $n$ variables. Moreover, we prove that $\mathscr C$ forms an infinite-dimensional irreducible weight $o(n+2,\mathbb C)$-module with finite-dimensional weight subspaces if $c\not\in\mathbb Z/2$. Our results are extensions of Howe's oscillator construction of infinite-dimensional multiplicity-free irreducible representations for $sl(n,\mathbb{C})$ (cf. [H]). For any two integers $p\leq q$, we denote $\overline{p,q}=\{p,p+1,\cdots,q\}$. Let $E_{r,s}$ be the square matrix with 1 as its $(r,s)$-entry and 0 as the others. Fix a positive integer $n$. Denote $$A_{i,j}=E_{i,j}-E_{n+1+j,n+1+i},\;\;B_{i,j}=E_{i,n+1+j}-E_{j,n+1+i},\;\;C_{i,j}=E_{n+1+i,j}-E_{n+1+j,i}\eqno(1.2)$$ for $i,j\in\overline{1,n+1}$.
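As a quick sanity check on (1.2), the matrices $A_{i,j}$, $B_{i,j}$, $C_{i,j}$ are antisymmetric with respect to the split symmetric bilinear form, which we take (our choice; the text leaves the form implicit) to be $J=\sum_{k=1}^{n+1}(E_{k,n+1+k}+E_{n+1+k,k})$. This can be verified numerically on a small instance:

```python
import numpy as np

n = 2                                  # small instance for the check
m = 2 * n + 2                          # matrices act on C^{2n+2}

def E(r, s):                           # 1-indexed elementary matrix E_{r,s}
    M = np.zeros((m, m)); M[r - 1, s - 1] = 1.0; return M

# symmetric form pairing e_k with e_{n+1+k} (our choice of J)
J = sum(E(k, n + 1 + k) + E(n + 1 + k, k) for k in range(1, n + 2))

def A(i, j): return E(i, j) - E(n + 1 + j, n + 1 + i)
def B(i, j): return E(i, n + 1 + j) - E(j, n + 1 + i)
def C(i, j): return E(n + 1 + i, j) - E(n + 1 + j, i)

# every basis element X from (1.2) satisfies X^T J + J X = 0
for i in range(1, n + 2):
    for j in range(1, n + 2):
        for X in (A(i, j), B(i, j), C(i, j)):
            assert np.allclose(X.T @ J + J @ X, 0)
```

The condition $X^{t}J+JX=0$ is exactly membership in the orthogonal Lie algebra defined by the form $J$.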
Then the split even orthogonal Lie algebra $$ o(2n+2,\mathbb{C})=\sum_{i,j=1}^{n+1} (\mathbb{C}A_{i,j}+\mathbb{C}B_{i,j}+\mathbb{C}C_{i,j}).\eqno(1.3)$$ Set $$D=\sum_{r=1}^nx_r\partial_{x_r}+\sum_{s=1}^ny_s\partial_{y_s},\;\;\eta=\sum_{i=1}^nx_iy_i.\eqno(1.4)$$ According to Zhao and the author's work [XZ], we have the following one-parameter generalization $\pi_c$ of the conformal representation of $o(2n+2,\mathbb{C})$: $$\pi_c(A_{i,j})=x_i\partial_{x_j}-y_j\partial_{y_i},\;\pi_c(B_{i,j})=x_i\partial_{y_j}-x_j\partial_{y_i},\; \pi_c(C_{i,j})=y_i\partial_{x_j}-y_j\partial_{x_i},\eqno(1.5)$$ $$\pi_c(A_{n+1,i})=\partial_{x_i},\;\;\pi_c(A_{n+1,n+1})=-D-c,\;\;\pi_c(B_{i,n+1})=-\partial_{y_i},\eqno(1.6)$$ $$\pi_c(A_{i,n+1})=\eta\partial_{y_i}-x_i(D+c),\;\;\pi_c(C_{n+1,i})=y_i(D+c)-\eta\partial_{x_i}\eqno(1.7)$$ for $i,j\in\overline{1,n}$. For $\vec a=(a_1,a_2,...,a_n)^t,\;\vec b=(b_1,b_2,...,b_n)^t\in\mathbb{C}^n$, we put $$\vec a\cdot\vec x=\sum_{i=1}^na_ix_i,\qquad\vec b\cdot\vec y=\sum_{i=1}^nb_iy_i.\eqno(1.8)$$ Let ${\mathscr A}=\mathbb{C}[x_1,...,x_n,y_1,...,y_n]$ be the algebra of polynomials in $x_1,...,x_n,y_1,...,y_n$. Moreover, we set $${\mathscr A}_{\vec a,\vec b}=\{fe^{\vec a\cdot\vec x+\vec b\cdot\vec y}\mid f\in{\mathscr A}\}.\eqno(1.9)$$ Denote by $\pi_{c,\vec a,\vec b}$ the representation $\pi_c$ of $o(2n+2,\mathbb C)$ on $\mathscr A_{\vec a,\vec b}$. Fix $n_1,n_2\in\overline{1,n}$ with $n_1\leq n_2$. Changing operators $\partial_{x_r}\mapsto -x_r,\; x_r\mapsto \partial_{x_r}$ for $r\in\overline{1,n_1}$ and $\partial_{y_s}\mapsto -y_s,\; y_s\mapsto\partial_{y_s}$ for $s\in\overline{n_2+1,n}$ in the representation $\pi_c$ of $o(2n+2,\mathbb{C})$, we get another differential-operator representation $\pi_c^{n_1,n_2}$ of $o(2n+2,\mathbb{C})$ on $\mathscr A$. We call $\pi_c$ and $\pi_c^{n_1,n_2}$ the {\it conformal oscillator representations of $o(2n+2,\mathbb{C})$} in terms of physics terminology. In this paper, we prove: {\bf Theorem 1}.
{\it The representation $\pi_{c,\vec a,\vec b}$ of $o(2n+2,\mathbb{C})$ is irreducible for any $c\in\mathbb{C}$ if $\sum_{i=1}^na_ib_i\neq 0$. Moreover, the representation $\pi_c^{n_1,n_2}$ of $o(2n+2,\mathbb{C})$ is irreducible for any $c\in\mathbb{C}\setminus(\mathbb Z/2)$, and its underlying module ${\mathscr A}$ is an infinite-dimensional irreducible weight $o(2n+2,\mathbb{C})$-module with finite-dimensional weight subspaces. } Set $$K_i=E_{0,i}-E_{n+i+1,0},\qquad K_{n+1+i}=E_{0,n+1+i}-E_{i,0}\qquad\mbox{for}\;\;i\in\overline{1,n+1}.\eqno(1.10)$$ Then the split odd orthogonal Lie algebra $$o(2n+3,\mathbb{C})= o(2n+2,\mathbb{C})+\sum_{i=1}^{2n+2}\mathbb{C}K_i.\eqno(1.11)$$ Moreover, we redefine $$D=\sum_{r=0}^nx_r\partial_{x_r}+\sum_{r=1}^ny_r\partial_{y_r},\qquad\eta=\frac{1}{2}x_0^2+\sum_{i=1}^nx_iy_i. \eqno(1.12)$$ According to Zhao and the author's work [XZ], we have the following one-parameter generalization of the conformal representation $\pi_c$ of $o(2n+3,\mathbb{C})$: $\pi_c|_{o(2n+2,\mathbb{C})}$ is given in (1.5)-(1.7) with $D$ and $\eta$ in (1.12), $$\pi_c(K_i)=x_0\partial_{x_i}-y_i\partial_{x_0},\;\;\pi_c(K_{n+1+i})=x_0\partial_{y_i}-x_i\partial_{x_0}\qquad\mbox{for}\;\;i\in\overline{1,n}, \eqno(1.13)$$ $$\pi_c(K_{n+1})=x_0(D+c)-\eta\partial_{x_0},\qquad \pi_c(K_{2n+2})=-\partial_{x_0}.\eqno(1.14)$$ Fix $n_1,n_2\in\overline{1,n}$ with $n_1\leq n_2$. Changing operators $\partial_{x_r}\mapsto -x_r,\; x_r\mapsto \partial_{x_r}$ for $r\in\overline{1,n_1}$ and $\partial_{y_s}\mapsto -y_s,\; y_s\mapsto\partial_{y_s}$ for $s\in\overline{n_2+1,n}$ in the above representation of $o(2n+3,\mathbb{C})$, we get another differential-operator representation $\pi_c^{n_1,n_2}$ of $o(2n+3,\mathbb{C})$. We again call the representations $\pi_c$ and $\pi_c^{n_1,n_2}$ of $o(2n+3,\mathbb{C})$ {\it conformal oscillator representations} in terms of physics terminology.
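For a concrete check of the oscillator formulas, take $n=1$, so $o(4,\mathbb C)$ acts on $\mathbb C[x_1,y_1]$ with $D=x_1\partial_{x_1}+y_1\partial_{y_1}$ and $\eta=x_1y_1$. By (1.2) one has $[A_{2,1},A_{1,2}]=A_{2,2}-A_{1,1}$, and this bracket can be verified for the differential operators on a test polynomial (a sketch using sympy; here $\pi_c(A_{1,1})=x_1\partial_{x_1}-y_1\partial_{y_1}$, and variable names are ours):

```python
import sympy as sp

x, y, c = sp.symbols('x1 y1 c')

# n = 1: D and eta from (1.4)
D = lambda f: x*sp.diff(f, x) + y*sp.diff(f, y)
eta = x*y

piA21 = lambda f: sp.diff(f, x)                       # pi_c(A_{n+1,i}), (1.6)
piA12 = lambda f: eta*sp.diff(f, y) - x*(D(f) + c*f)  # pi_c(A_{i,n+1}), (1.7)
piA22 = lambda f: -(D(f) + c*f)                       # pi_c(A_{n+1,n+1}), (1.6)
piA11 = lambda f: x*sp.diff(f, x) - y*sp.diff(f, y)   # pi_c(A_{1,1})

# check [pi(A_{2,1}), pi(A_{1,2})] = pi(A_{2,2}) - pi(A_{1,1})
f = x**2*y + 3*x + y**3 + 1                           # generic test polynomial
lhs = sp.expand(piA21(piA12(f)) - piA12(piA21(f)))
rhs = sp.expand(piA22(f) - piA11(f))
assert sp.simplify(lhs - rhs) == 0
```

Both sides reduce to $-2x_1\partial_{x_1}-c$ applied to the test polynomial, illustrating how the inhomogeneous term $c$ enters the representation.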
Let ${\mathscr B}=\mathbb{C}[x_0,x_1,...,x_n,y_1,...,y_n]$ be the algebra of polynomials in $x_0,x_1,...,x_n,y_1,...,y_n$. Redenote $$\vec a\cdot\vec x=\sum_{i=0}^na_ix_i\qquad\mbox{for}\;\;\vec a=(a_0,a_1,...,a_n)^t\in\mathbb{C}^{1+n}.\eqno(1.15)$$ Fix $\vec a \in\mathbb{C}^{1+n},\; \vec b\in\mathbb C^n$ and $n_1,n_2\in\overline{1,n}$ with $n_1\leq n_2$. We set $${\mathscr B}_{\vec a,\vec b}=\{fe^{\vec a\cdot\vec x+\vec b\cdot\vec y}\mid f\in{\mathscr B}\}\eqno(1.16)$$ (cf. (1.8)). Denote by $\pi_{c,\vec a,\vec b}$ the representation $\pi_c$ of $o(2n+3,\mathbb{C})$ on ${\mathscr B}_{\vec a,\vec b}$. In [XZ], Zhao and the author proved that the representation $\pi_{c,\vec 0,\vec 0}$ of $o(2n+3,\mathbb{C})$ is irreducible if and only if $c\not\in -\mathbb{N}$. The following is our second main theorem in this paper. {\bf Theorem 2}. {\it The representation $\pi_{c,\vec a,\vec b}$ of $o(2n+3,\mathbb{C})$ is irreducible for any $c\in\mathbb C$ if $a_0^2+2\sum_{i=1}^na_ib_i\neq 0$. Moreover, the representation $\pi_c^{n_1,n_2}$ of $o(2n+3,\mathbb{C})$ is irreducible for any $c\in\mathbb{C}\setminus(\mathbb Z/2)$, and its underlying module ${\mathscr B}$ is an infinite-dimensional irreducible weight $o(2n+3,\mathbb{C})$-module with finite-dimensional weight subspaces. } In Section 2, we prove Theorem 1. The proof of Theorem 2 is given in Section 3. \section{Proof of Theorem 1} First we want to prove: {\bf Theorem 2.1}. {\it The representation $\pi_{c,\vec a,\vec b}$ of $o(2n+2,\mathbb{C})$ is irreducible for any $c\in\mathbb{C}$ if $\sum_{i=1}^na_ib_i\neq 0$.} {\it Proof}. By symmetry, we may assume $a_1\neq 0$. Let ${\mathscr M}$ be a nonzero $o(2n+2,\mathbb{C})$-submodule of ${\mathscr A}_{\vec a,\vec b}$. Take any $0\neq fe^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in \mathscr{M}$ with $f\in \mathscr{A}$. Let $\mathscr{A}_k$ be the subspace of homogeneous polynomials with degree $k$.
Set $$\mathscr{A}_{\vec a,\vec b,k}=\mathscr{A}_ke^{\vec a\cdot\vec x+\vec b\cdot\vec y}\qquad\mbox{for}\;k\in\mathbb{N}.\eqno(2.1)$$ According to (1.6), $$(A_{n+1,i}-a_i)(fe^{\vec a\cdot\vec x+\vec b\cdot\vec y})=\partial_{x_i}(f)e^{\vec a\cdot\vec x+\vec b\cdot\vec y},\;\;-(B_{i,n+1}+b_i)(fe^{\vec a\cdot\vec x+\vec b\cdot\vec y})=\partial_{y_i}(f)e^{\vec a\cdot\vec x+\vec b\cdot\vec y}\eqno(2.2)$$ for $i\in\overline{1,n}$. Repeatedly applying (2.2), we obtain $e^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in \mathscr{M}$. Equivalently, $\mathscr{A}_{\vec a,\vec b,0}\subset\mathscr{M}$. Suppose $\mathscr{A}_{\vec a,\vec b,\ell}\subset\mathscr{M}$ for some $\ell\in\mathbb{N}$. Take any $ge^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in \mathscr{A}_{\vec a,\vec b,\ell}$. Since $$(x_i\partial_{x_1}-y_1\partial_{y_i})(g)e^{\vec a\cdot\vec x+\vec b\cdot\vec y},(y_i\partial_{x_1}-y_1\partial_{x_i})(g)e^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in \mathscr{A}_{\vec a,\vec b,\ell}\subset\mathscr{M},\eqno(2.3)$$ we have $$A_{i,1}(ge^{\vec a\cdot\vec x+\vec b\cdot\vec y})\equiv (a_1x_i-b_iy_1)ge^{\vec a\cdot\vec x+\vec b\cdot\vec y} \equiv 0\;\;(\mbox{mod}\;\mathscr M)\eqno(2.4)$$ and $$C_{i,1}(ge^{\vec a\cdot\vec x+\vec b\cdot\vec y})\equiv (a_1y_i-a_iy_1)ge^{\vec a\cdot\vec x+\vec b\cdot\vec y} \equiv 0\;\;(\mbox{mod}\;\mathscr M)\eqno(2.5)$$ for $i\in\overline{1,n}$ by (1.5). On the other hand, (1.4) implies $$(D+c)(g)e^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in \mathscr{A}_{\vec a,\vec b,\ell}\subset\mathscr{M},\eqno(2.6)$$ and so (1.6) gives $$-A_{n+1,n+1}(ge^{\vec a\cdot\vec x+\vec b\cdot\vec y})\equiv [\sum_{i=1}^n(a_ix_i+b_iy_i)]ge^{\vec a\cdot\vec x+\vec b\cdot\vec y} \equiv 0\;\;(\mbox{mod}\;\mathscr M)\eqno(2.7)$$ Substituting (2.4) and (2.5) into (2.7), we get$$(\sum_{i=1}^na_ib_i)y_1ge^{\vec a\cdot\vec x+\vec b\cdot\vec y} \equiv 0\;\;(\mbox{mod}\;\mathscr M).\eqno(2.8)$$ Equivalently, $y_1ge^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in\mathscr M$. 
Substituting it into (2.4) and (2.5), we obtain $$x_ige^{\vec a\cdot\vec x+\vec b\cdot\vec y},y_ige^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in\mathscr M\eqno(2.9)$$ for $i\in\overline{1,n}$. Therefore, $\mathscr{A}_{\vec a,\vec b,\ell+1}\subset\mathscr{M}$. By induction, $\mathscr{A}_{\vec a,\vec b, \ell}\subset\mathscr{M}$ for any $\ell\in\mathbb{N}$. So $\mathscr{A}_{\vec a,\vec b}=\mathscr{M}$. Hence $\mathscr{A}_{\vec a,\vec b}$ is an irreducible $o(2n+2,\mathbb{C})$-module. $\qquad\Box$ Fix $n_1,n_2\in\overline{1,n}$ with $n_1\leq n_2$. To make notations more distinguishable, we write $$D_{n_1,n_2}=-\sum_{i=1}^{n_1}x_i\partial_{x_i} +\sum_{r=n_1+1}^nx_r\partial_{x_r}+\sum_{j=1}^{n_2}y_j\partial_{y_j}-\sum_{s=n_2+1}^ny_s\partial_{y_s},\eqno(2.10)$$ $$\eta_{n_1,n_2}=\sum_{i=1}^{n_1}y_i\partial_{x_i}+\sum_{r=n_1+1}^{n_2}x_ry_r+\sum_{s=n_2+1}^n x_s\partial_{y_s}\eqno(2.11)$$ and $$\tilde c=c+n_2-n_1-n.\eqno(2.12)$$ Then we have the following representation $\pi_c^{n_1,n_2}$ of the Lie algebra $o(2n+2,\mathbb{C})$ determined by $$\pi_c^{n_1,n_2}(A_{i,j})=E_{i,j}^x-E_{j,i}^y\eqno(2.13)$$ with $$E_{i,j}^x=\left\{\begin{array}{ll}-x_j\partial_{x_i}-\delta_{i,j}&\mbox{if}\; i,j\in\overline{1,n_1},\\ \partial_{x_i}\partial_{x_j}&\mbox{if}\;i\in\overline{1,n_1},\;j\in\overline{n_1+1,n},\\ -x_ix_j &\mbox{if}\;i\in\overline{n_1+1,n},\;j\in\overline{1,n_1},\\ x_i\partial_{x_j}&\mbox{if}\;i,j\in\overline{n_1+1,n} \end{array}\right.\eqno(2.14)$$ and $$E_{i,j}^y=\left\{\begin{array}{ll}y_i\partial_{y_j}&\mbox{if}\; i,j\in\overline{1,n_2},\\ -y_iy_j&\mbox{if}\;i\in\overline{1,n_2},\;j\in\overline{n_2+1,n},\\ \partial_{y_i}\partial_{y_j} &\mbox{if}\;i\in\overline{n_2+1,n},\;j\in\overline{1,n_2},\\ -y_j\partial_{y_i}-\delta_{i,j}&\mbox{if}\;i,j\in\overline{n_2+1,n}, \end{array}\right.\eqno(2.15)$$ and $$\pi_c^{n_1,n_2}(E_{i,n+1+j})=\left\{\begin{array}{ll} \partial_{x_i}\partial_{y_j}&\mbox{if}\;i\in\overline{1,n_1},\;j\in\overline{1,n_2},\\
-y_j\partial_{x_i}&\mbox{if}\;i\in\overline{1,n_1},\;j\in\overline{n_2+1,n},\\ x_i\partial_{y_j}&\mbox{if}\;i\in\overline{n_1+1,n},\;j\in\overline{1,n_2},\\ -x_iy_j&\mbox{if}\;i\in\overline{n_1+1,n},\;j\in\overline{n_2+1,n},\end{array}\right.\eqno(2.16)$$ $$\pi_c^{n_1,n_2}(E_{n+1+i,j})=\left\{\begin{array}{ll} -x_jy_i&\mbox{if}\;j\in\overline{1,n_1},\;i\in\overline{1,n_2},\\ -x_j\partial_{y_i}&\mbox{if}\;j\in\overline{1,n_1},\;i\in\overline{n_2+1,n},\\ y_i\partial_{x_j}&\mbox{if}\;j\in\overline{n_1+1,n},\;i\in\overline{1,n_2},\\ \partial_{x_j}\partial_{y_i}&\mbox{if}\;j\in\overline{n_1+1,n},\;i\in\overline{n_2+1,n},\end{array}\right.\eqno(2.17)$$ $$\pi_c^{n_1,n_2}(A_{n+1,n+1})=-D_{n_1,n_2}-\tilde c,\eqno(2.18)$$ $$\pi_c^{n_1,n_2}(A_{n+1,i})=\left\{\begin{array}{ll}-x_i&\mbox{if}\;\;i\in\overline{1, n_1},\\ \partial_{x_i}&\mbox{if}\;\;i\in\overline{n_1+1,n},\end{array}\right.\eqno(2.19)$$ $$\pi_c^{n_1,n_2}(B_{i,n+1})=\left\{\begin{array}{ll}-\partial_{y_i}&\mbox{if}\;\;i\in\overline{1, n_2},\\ y_i&\mbox{if}\;\;i\in\overline{n_2+1,n},\end{array}\right. \eqno(2.20)$$ $$\pi_c^{n_1,n_2}(A_{i,n+1})=\left\{\begin{array}{ll} \eta_{n_1,n_2}\partial_{y_i}-(D_{n_1,n_2}+\tilde c-1)\partial_{x_i}&\mbox{if}\;\;i\in\overline{1, n_1},\\ \eta_{n_1,n_2}\partial_{y_i}-x_i(D_{n_1,n_2}+\tilde c)&\mbox{if}\;\;i\in\overline{n_1+1, n_2},\\ - \eta_{n_1,n_2}y_i-x_i(D_{n_1,n_2}+\tilde c)&\mbox{if}\;\;i\in\overline{n_2+1,n},\end{array}\right.\eqno(2.21)$$ $$\pi_c^{n_1,n_2}(C_{n+1,i}) =\left\{\begin{array}{ll} \eta_{n_1,n_2} x_i+y_i(D_{n_1,n_2}+\tilde c)&\mbox{if}\;\;i\in\overline{1, n_1},\\ - \eta_{n_1,n_2}\partial_{x_i}+y_i(D_{n_1,n_2}+\tilde c)&\mbox{if}\;\;i\in\overline{n_1+1, n_2},\\ -\eta_{n_1,n_2}\partial_{x_i}+(D_{n_1,n_2}+\tilde c-1)\partial_{y_i}&\mbox{if}\;\;i\in\overline{n_2+1,n}\end{array}\right.\eqno(2.22)$$ for $i,j\in\overline{1,n}$. 
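For instance, in the smallest case $n=n_1=n_2=1$ (a direct specialization of (2.10)-(2.22); all sums over empty index ranges vanish), we have $D_{1,1}=-x_1\partial_{x_1}+y_1\partial_{y_1}$, $\eta_{1,1}=y_1\partial_{x_1}$ and $\tilde c=c-1$, so that $$\pi_c^{1,1}(A_{2,2})=x_1\partial_{x_1}-y_1\partial_{y_1}-c+1,\;\;\pi_c^{1,1}(A_{2,1})=-x_1,\;\;\pi_c^{1,1}(B_{1,2})=-\partial_{y_1},$$ $$\pi_c^{1,1}(A_{1,2})=y_1\partial_{x_1}\partial_{y_1}-(D_{1,1}+c-2)\partial_{x_1},\qquad \pi_c^{1,1}(C_{2,1})=y_1\partial_{x_1}x_1+y_1(D_{1,1}+c-1).$$ This already exhibits the mixture of multiplication operators and differential operators that prevents the module from being of highest-weight type (cf. Remark 2.4).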
Set $$\mathscr A_{\langle k\rangle}=\mbox{Span}\{x^\alpha y^\beta\mid\alpha,\beta\in\mathbb{N}^n;\sum_{r=n_1+1}^n\alpha_r-\sum_{i=1}^{n_1}\alpha_i+ \sum_{i=1}^{n_2}\beta_i-\sum_{r=n_2+1}^n\beta_r=k\}\eqno(2.23)$$ for $k\in\mathbb{Z}$. Then $$\mathscr A_{\langle k\rangle}=\{u\in\mathscr A\mid D_{n_1,n_2}(u)=k u\}.\eqno(2.24)$$ Observe that the Lie subalgebra $$\mathscr K=\sum_{i,j=1}^n(\mathbb{C}A_{i,j}+\mathbb{C}B_{i,j}+\mathbb{C}C_{i,j})\cong o(2n,\mathbb{C}).\eqno(2.25)$$ With respect to the representation $\pi_c^{n_1,n_2}$, $\mathscr A_{\langle k\rangle}$ forms a $\mathscr K$-module. Write $$\mathscr D_{n_1,n_2}=-\sum_{i=1}^{n_1}x_i\partial_{y_i}+\sum_{r=n_1+1}^{n_2}\partial_{x_r}\partial_{y_r}-\sum_{s=n_2+1}^n y_s\partial_{x_s}.\eqno(2.26)$$ Note that as operators on $\mathscr A$, $$\xi\eta_{n_1,n_2}=\eta_{n_1,n_2}\xi,\;\;\xi\mathscr D_{n_1,n_2} = \mathscr D_{n_1,n_2}\xi\qquad\mbox{for}\;\;\xi\in\mathscr K.\eqno(2.27)$$ In particular, $$\mathscr H_{\langle k\rangle}=\{u\in\mathscr A_{\langle k\rangle}\mid \mathscr D_{n_1,n_2}(u)=0 \}\eqno(2.28)$$ forms a $\mathscr K$-module. The following result is taken from Luo and the author's work [LX2]. {\bf Lemma 2.2}. {\it For any $n_1-n_2+1-\delta_{n_1,n_2}\geq k\in\mathbb{Z}$, $\mathscr H_{\langle k\rangle}$ is an irreducible $\mathscr K$-submodule and $\mathscr A_{\langle k\rangle}=\bigoplus_{i=0}^\infty\eta_{n_1,n_2}^i(\mathscr H_{\langle k-2i\rangle})$ is a decomposition of irreducible $\mathscr K$-submodules.} Now we have the second result in this section. {\bf Theorem 2.3}. {\it The representation $\pi_c^{n_1,n_2}$ of $o(2n+2,\mathbb{C})$ on $\mathscr A$ is irreducible if $c\not\in \mathbb{Z}/2$.} {\it Proof}. Let $\mathscr M$ be a nonzero $o(2n+2,\mathbb{C})$-submodule of $\mathscr A$. By (2.18) and (2.24), $$\mathscr M=\bigoplus_{k\in\mathbb{Z}}\mathscr A_{\langle k\rangle}\bigcap \mathscr M.\eqno(2.29)$$ Thus $\mathscr A_{\langle k\rangle}\bigcap \mathscr M\neq\{0\}$ for some $k\in \mathbb{Z}$. 
If $k>n_1-n_2+1-\delta_{n_1,n_2}$, then $$ \{0\}\neq(-x_1)^{k-(n_1-n_2+1-\delta_{n_1,n_2})}(\mathscr A_{\langle k\rangle}\bigcap \mathscr M) =A_{n+1,1}^{k-(n_1-n_2+1-\delta_{n_1,n_2})}(\mathscr A_{\langle k\rangle}\bigcap \mathscr M)\eqno(2.30)$$ by (2.19), which implies $\mathscr A _{\langle n_1-n_2+1-\delta_{n_1,n_2} \rangle}\bigcap \mathscr M\neq \{0\}$. Thus we can assume $k\leq n_1-n_2+1-\delta_{n_1,n_2}$. Observe that the Lie subalgebra $$\mathscr L=\sum_{i,j=1}^n\mathbb CA_{i,j}\cong sl(n,\mathbb{C}).\eqno(2.31)$$ By Lemma 2.2, $\mathscr A_{\langle k\rangle}=\bigoplus_{i=0}^\infty\eta_{n_1,n_2}^i(\mathscr H_{\langle k-2i\rangle})$ is a decomposition of irreducible $\mathscr K$-submodules. Moreover, $\eta_{n_1,n_2}^i(\mathscr H_{\langle k-2i\rangle})$ are highest-weight $\mathscr L$-modules with distinct highest weights by [LX1]. Hence $$\eta_{n_1,n_2}^i(\mathscr H_{\langle k-2i\rangle})\subset \mathscr M\;\;\mbox{for some}\;\;i\in\mathbb{N}.\eqno(2.32)$$ Observe that $$x_1^{-k+2i}\in \mathscr H_{\langle k-2i\rangle}.\eqno(2.33)$$ By (2.11) and (2.20), $$i!(-1)^i(\prod_{r=1}^i(-k+i+r))x_1^{-k+i}=B_{1,2n+2}^i(\eta^i_{n_1,n_2}(x_1^{-k+2i}))\in \mathscr M.\eqno(2.34)$$ Thus $$\mathscr H_{\langle k-i\rangle}\subset \mathscr M.\eqno(2.35)$$ So we can just assume $$\mathscr H_{\langle k\rangle}\subset \mathscr M.\eqno(2.36)$$ According to (2.19), $$x_1^{-k+s}=(-1)^sA_{n+1,1}^s(x_1^{-k})\in \mathscr M\qquad\mbox{for}\;\;s\in\mathbb{N}.\eqno(2.37)$$ So Lemma 2.2 gives $$\mathscr H_{\langle k-s\rangle}\subset \mathscr M\qquad\mbox{for}\;\;s\in\mathbb{N}.\eqno(2.38)$$ For any $r\in k-\mathbb{N}$, we suppose $\eta_{n_1,n_2}^s(x_1^{-r+s}),\eta_{n_1,n_2}^s(x_1^{-r+s+1})\in \mathscr M$ for some $s\in\mathbb N$. 
Applying (2.22) to it, we get $$C_{n+1,1}[\eta_{n_1,n_2}^s(x_1^{-r+s})]=\eta_{n_1,n_2}^{s+1}(x_1^{-r+s+1}) +(r+\tilde c)\eta_{n_1,n_2}^s(y_1x_1^{-r+s}) \in\mathscr M.\eqno(2.39)$$ By (2.11) and (2.22), $$C_{n+1,i}[\eta_{n_1,n_2}^s(x_1^{-r+s+1})]=(r-1+\tilde c)\eta_{n_1,n_2}^s(y_ix_1^{-r+s+1})\in\mathscr M\eqno(2.40)$$ for $i\in\overline{n_1+1,n_2}$. According to (2.11) and (2.21), $$A_{i,n+1}[\eta_{n_1,n_2}^s(y_ix_1^{-r+s+1})]=\eta_{n_1,n_2}^{s+1}(x_1^{-r+s+1}) -(r+\tilde c)\eta_{n_1,n_2}^s(x_iy_ix_1^{-r+s+1})\in \mathscr M\eqno(2.41)$$ for $i\in\overline{n_1+1,n_2}$. Again (2.11), (2.39) and (2.41) lead to $$ (1+r+\tilde c-n_2+n_1)\eta_{n_1,n_2}^{s+1}(x_1^{-r+s+1})\in\mathscr{M}\Rightarrow \eta_{n_1,n_2}^{s+1}(x_1^{-r+s+1})\in\mathscr{M}.\eqno(2.42)$$ By induction, $$\eta_{n_1,n_2}^\ell(x_1^{-r+\ell})\in\mathscr{M}\qquad\mbox{for}\;\;\ell\in\mathbb N.\eqno(2.43)$$ Since $\eta_{n_1,n_2}^\ell(\mathscr H_{\langle r-\ell\rangle})\ni \eta_{n_1,n_2}^\ell(x_1^{-r+\ell})$ is an irreducible $\mathscr L$-module by Lemma 2.2, we have $$\eta_{n_1,n_2}^\ell(\mathscr H_{\langle r-\ell\rangle})\subset\mathscr M \qquad\mbox{for}\;\;\ell\in\mathbb N.\eqno(2.44)$$ Taking $r=m-\ell$ with $m\in k-\mathbb N$, we get $$\eta_{n_1,n_2}^\ell(\mathscr H_{\langle m-2\ell\rangle})\subset\mathscr M \qquad\mbox{for}\;\;\ell\in\mathbb N.\eqno(2.45)$$ According to Lemma 2.2, $$\mathscr A_{\langle m\rangle}=\bigoplus_{\ell=0}^\infty\eta_{n_1,n_2}^\ell(\mathscr H_{\langle m-2\ell\rangle})\subset\mathscr M\qquad\mbox{for}\;\;m\in k-\mathbb N.\eqno(2.46)$$ Expression (2.21) gives $$\pi_c^{n_1,n_2}(A_{i,n+1})y_i=\left\{\begin{array}{ll} \eta_{n_1,n_2}(y_i\partial_{y_i}+1)-y_i\partial_{x_i}(D_{n_1,n_2}+\tilde c+1)&\mbox{if}\;\;i\in\overline{1, n_1},\\ \eta_{n_1,n_2}(y_i\partial_{y_i}+1)-x_iy_i(D_{n_1,n_2}+\tilde c+1)&\mbox{if}\;\;i\in\overline{n_1+1,n_2},\end{array}\right. 
\eqno(2.47)$$ $$\pi_c^{n_1,n_2}(A_{j,n+1})\partial_{y_j}=- \eta_{n_1,n_2}y_j\partial_{y_j}-x_j\partial_{y_j}(D_{n_1,n_2}+\tilde c+1)\qquad\mbox{for}\;\;j\in\overline{n_2+1,n}.\eqno(2.48)$$ Moreover, (2.22) yields $$\pi_c^{n_1,n_2}(C_{n+1,r})\partial_{x_r}=\eta_{n_1,n_2}x_r\partial_{x_r} +y_r\partial_{x_r}(D_{n_1,n_2}+\tilde c+1)\qquad\mbox{for}\;\;r\in\overline{1,n_1},\eqno(2.49)$$ \begin{eqnarray*}& &\pi_c^{n_1,n_2}(C_{n+1,s})x_s \\&=&\left\{\begin{array}{ll} - \eta_{n_1,n_2}(x_s\partial_{x_s}+1)+x_sy_s(D_{n_1,n_2}+\tilde c+1)&\mbox{if}\;\;s\in\overline{n_1+1,n_2},\\ -\eta_{n_1,n_2}(x_s\partial_{x_s}+1)+x_s\partial_{y_s}(D_{n_1,n_2}+\tilde c+1)&\mbox{if}\;\;s\in\overline{n_2+1,n}.\end{array}\right.\hspace{2.3cm}(2.50)\end{eqnarray*} Thus \begin{eqnarray*}\hspace{2cm}& &\sum_{i=1}^{n_2}\pi_c^{n_1,n_2}(A_{i,n+1})y_i+\sum_{j=n_2+1}^n\pi_c^{n_1,n_2}(A_{j,n+1})\partial_{y_j} \\ &&-\sum_{r=1}^{n_1}\pi_c^{n_1,n_2}(C_{n+1,r})\partial_{x_r}-\sum_{s=n_1+1}^n\pi_c^{n_1,n_2}(C_{n+1,s})x_s\\ &=&\eta_{n_1,n_2}(-D_{n_1,n_2}+n_2+n-n_1-2(\tilde c+1))\hspace{4.7cm}(2.51)\end{eqnarray*} as operators on $\mathscr A$. Suppose that $\mathscr A_{\langle \ell-s\rangle}\subset \mathscr M$ for some $k\leq\ell\in\mathbb{Z}$ and any $s\in\mathbb{N}$. For any $f\in \mathscr A_{\langle \ell-1\rangle}$, we apply (2.51) to $f$ and get $$(1-\ell+n_2+n-n_1-2(\tilde c+1))\eta_{n_1,n_2}(f)\in \mathscr M.\eqno(2.52)$$ Since $c\not\in \mathbb Z/2$, we have $$\eta_{n_1,n_2}(f)\in \mathscr M.\eqno(2.53)$$ Now for any $g\in\mathscr A_{\langle \ell\rangle}$, we have $\partial_{y_1}(g)\in \mathscr A_{\langle \ell-1\rangle}$. 
By (2.21), $$A_{1,n+1}(g)=\eta_{n_1,n_2}(\partial_{y_1}(g))-(\ell+\tilde c)\partial_{x_1}(g)\in\mathscr M.\eqno(2.54)$$ Moreover, (2.53) and (2.54) yield $$\partial_{x_1}(g)\in \mathscr M\qquad\mbox{for}\;\;g\in \mathscr A_{\langle \ell\rangle}.\eqno(2.55)$$ Since $$\partial_{x_1}(\mathscr A_{\langle \ell\rangle})=\mathscr A_{\langle \ell+1\rangle},\eqno(2.56)$$ we obtain $$\mathscr A_{\langle \ell+1\rangle}\subset \mathscr M.\eqno(2.57)$$ By induction on $\ell$, we find $$\mathscr A_{\langle \ell\rangle}\subset \mathscr M\qquad\mbox{for}\;\;\ell\in\mathbb{Z},\eqno(2.58)$$ or equivalently, $\mathscr A=\bigoplus_{\ell\in\mathbb{Z}}\mathscr A_{\langle \ell\rangle}=\mathscr M$. Thus $\mathscr A$ is an irreducible $o(2n+2,\mathbb C)$-module.$\qquad\Box$ {\bf Remark 2.4}. The above irreducible representation depends on the three parameters $c\in \mathbb{C}$ and $n_1,n_2\in\overline{1,n}$. It is not of highest-weight type because of the mixture of multiplication operators and differential operators in (2.16), (2.17) and (2.19)-(2.22). Since $\mathscr A$ is not completely reducible as an $\mathscr L$-module by [LX1] when $n\geq 2$ and $n_1<n$, $\mathscr A$ is not a unitary $o(2n+2,\mathbb{C})$-module. Expression (2.18) shows that $\mathscr A$ is a weight $o(2n+2,\mathbb{C})$-module with finite-dimensional weight subspaces. Theorem 1 follows from Theorem 2.1, Theorem 2.3 and the above remark. \section{Proof of Theorem 2} $\quad\;$ In this section, we prove Theorem 2. Our first result in this section is as follows. {\bf Theorem 3.1}. {\it The representation $\pi_{c,\vec a,\vec b}$ of $o(2n+3,\mathbb{C})$ is irreducible for any $c\in\mathbb{C}$ if $a_0^2+2\sum_{i=1}^na_ib_i\neq 0$.} {\it Proof}. Let $\mathscr B_k$ be the subspace of homogeneous polynomials with degree $k$. Set $$\mathscr B_{\vec a,\vec b,k}=\mathscr B_ke^{\vec a\cdot\vec x+\vec b\cdot\vec y}\qquad\mbox{for}\;k\in\mathbb{N}\eqno(3.1)$$ (cf. (1.15) and the second equation in (1.8)). 
Let ${\mathscr M}$ be a nonzero $o(2n+3,\mathbb{C})$-submodule of $\mathscr B_{\vec a,\vec b}$. Take any $0\neq fe^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in \mathscr{M}$ with $f\in \mathscr B$. According to (1.6), $$(A_{n+1,i}-a_i)(fe^{\vec a\cdot\vec x+\vec b\cdot\vec y})=\partial_{x_i}(f)e^{\vec a\cdot\vec x+\vec b\cdot\vec y},\;\;-(B_{i,n+1}+b_i)(fe^{\vec a\cdot\vec x+\vec b\cdot\vec y})=\partial_{y_i}(f)e^{\vec a\cdot\vec x+\vec b\cdot\vec y}\eqno(3.2)$$ for $i\in\overline{1,n}$. Moreover, the second equation in (1.14) gives $$-(K_{2n+2}+a_0)(fe^{\vec a\cdot\vec x+\vec b\cdot\vec y})=\partial_{x_0}(f)e^{\vec a\cdot\vec x+\vec b\cdot\vec y}.\eqno(3.3)$$ Repeatedly applying (3.2) and (3.3), we obtain $e^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in \mathscr{M}$. Equivalently, $\mathscr B_{\vec a,\vec b,0}\subset\mathscr{M}$. Suppose $\mathscr B_{\vec a,\vec b,\ell}\subset\mathscr{M}$ for some $\ell\in\mathbb{N}$. Let $ ge^{\vec a\cdot\vec x+\vec b\cdot\vec y}$ be any element in $\mathscr{B}_{\vec a,\vec b,\ell}$. {\it Case 1. $a_i\neq 0$ or $b_i\neq 0$ for some $i\in\overline{1,n}$.} By symmetry, we may assume $a_1\neq 0$. Expression (2.3) with $\mathscr A_{\vec a,\vec b,\ell}$ replaced by $\mathscr B_{\vec a,\vec b,\ell}$ implies $$A_{i,1}(ge^{\vec a\cdot\vec x+\vec b\cdot\vec y})\equiv (a_1x_i-b_iy_1)ge^{\vec a\cdot\vec x+\vec b\cdot\vec y} \equiv 0\;\;(\mbox{mod}\;\mathscr M)\eqno(3.4)$$ and $$C_{i,1}(ge^{\vec a\cdot\vec x+\vec b\cdot\vec y})\equiv (a_1y_i-a_iy_1)ge^{\vec a\cdot\vec x+\vec b\cdot\vec y} \equiv 0\;\;(\mbox{mod}\;\mathscr M)\eqno(3.5)$$ for $i\in\overline{1,n}$ by (1.5). 
Moreover, the first equation in (1.13) gives $$K_1(ge^{\vec a\cdot\vec x+\vec b\cdot\vec y})\equiv (a_1x_0-a_0y_1)ge^{\vec a\cdot\vec x+\vec b\cdot\vec y} \equiv 0\;\;(\mbox{mod}\;\mathscr M)\eqno(3.6)$$ because $$(x_0\partial_{x_1}-y_1\partial_{x_0})(g)e^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in\mathscr B_{\vec a,\vec b,\ell}\subset\mathscr{M}.\eqno(3.7)$$ On the other hand, the second equation in (1.6) with $D$ in (1.12) gives $$-A_{n+1,n+1}(ge^{\vec a\cdot\vec x+\vec b\cdot\vec y})\equiv [a_0x_0+\sum_{i=1}^n(a_ix_i+b_iy_i)]ge^{\vec a\cdot\vec x+\vec b\cdot\vec y} \equiv 0\;\;(\mbox{mod}\;\mathscr M)\eqno(3.8)$$ by (2.6) with $\mathscr A_{\vec a,\vec b,\ell}$ replaced by $\mathscr B_{\vec a,\vec b,\ell}$. Substituting (3.4)-(3.6) into (3.8), we get $$(a_0^2+2\sum_{i=1}^na_ib_i)y_1ge^{\vec a\cdot\vec x+\vec b\cdot\vec y} \equiv 0\;\;(\mbox{mod}\;\mathscr M).\eqno(3.9)$$ Equivalently, $y_1ge^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in\mathscr M$. Substituting it into (3.4)-(3.6), we obtain $$x_0ge^{\vec a\cdot\vec x+\vec b\cdot\vec y},x_ige^{\vec a\cdot\vec x+\vec b\cdot\vec y},y_ige^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in\mathscr M\eqno(3.10)$$ for $i\in\overline{1,n}$. Therefore, $\mathscr B_{\vec a,\vec b,\ell+1}\subset\mathscr{M}$. By induction, $\mathscr B_{\vec a,\vec b, \ell}\subset\mathscr{M}$ for any $\ell\in\mathbb{N}$. So $\mathscr B_{\vec a,\vec b}=\mathscr{M}$. Hence $\mathscr B_{\vec a,\vec b}$ is an irreducible $o(2n+3,\mathbb{C})$-module. {\it Case 2. $a_0\neq 0$ and $a_i=b_i=0$ for $i\in\overline{1,n}$.} Under the above assumption, $$K_i(ge^{\vec a\cdot\vec x+\vec b\cdot\vec y})=(x_0\partial_{x_i}-y_i\partial_{x_0}-a_0y_i)(g)e^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in\mathscr M\eqno(3.11)$$ and $$K_{n+1+i}(ge^{\vec a\cdot\vec x+\vec b\cdot\vec y})=(x_0\partial_{y_i}-x_i\partial_{x_0}-a_0x_i)(g)e^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in\mathscr M\eqno(3.12)$$ for $i\in\overline{1,n}$. 
Note $$(x_0\partial_{x_i}-y_i\partial_{x_0})(g)e^{\vec a\cdot\vec x+\vec b\cdot\vec y}, (x_0\partial_{y_i}-x_i\partial_{x_0})(g)e^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in\mathscr B_{\vec a,\vec b,\ell}\subset\mathscr{M} \eqno(3.13)$$ by the induction assumption. Thus (3.11) and (3.12) imply $$y_ige^{\vec a\cdot\vec x+\vec b\cdot\vec y},x_ige^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in\mathscr M\qquad\mbox{for}\;\;i\in\overline{1,n}.\eqno(3.14)$$ Now (3.8) yields $x_0ge^{\vec a\cdot\vec x+\vec b\cdot\vec y}\in\mathscr M$. So $\mathscr B_{\vec a,\vec b,\ell+1}\subset\mathscr{M}$. By induction, $\mathscr B_{\vec a,\vec b}=\mathscr M$; that is, $\mathscr B_{\vec a,\vec b}$ is irreducible. $\qquad\Box$ Fix $n_1,n_2\in\overline{1,n}$ with $n_1\leq n_2$. Reset $$D_{n_1,n_2}=x_0\partial_{x_0}-\sum_{i=1}^{n_1}x_i\partial_{x_i} +\sum_{r=n_1+1}^nx_r\partial_{x_r}+\sum_{j=1}^{n_2}y_j\partial_{y_j}-\sum_{s=n_2+1}^ny_s\partial_{y_s},\eqno(3.15)$$ $$\mathscr D_{n_1,n_2}=\partial_{x_0}^2-2\sum_{i=1}^{n_1}x_i\partial_{y_i}+2\sum_{r=n_1+1}^{n_2}\partial_{x_r}\partial_{y_r}-2\sum_{s=n_2+1}^n y_s\partial_{x_s}\eqno(3.16)$$ and $$\eta_{n_1,n_2}=\frac{x_0^2}{2}+\sum_{i=1}^{n_1}y_i\partial_{x_i}+\sum_{r=n_1+1}^{n_2}x_ry_r+\sum_{s=n_2+1}^n x_s\partial_{y_s}.\eqno(3.17)$$ Then the representation $\pi_c^{n_1,n_2}$ of $o(2n+3,\mathbb C)$ is determined as follows: $\pi_c|_{o(2n+2,\mathbb C)}$ is given by (2.12)-(2.22) with $D_{n_1,n_2}$ in (3.15) and $\eta_{n_1,n_2}$ in (3.17), and $$\pi_c^{n_1,n_2}(K_i)=\left\{\begin{array}{ll}-x_0x_i-y_i\partial_{x_0}&\mbox{if}\;i\in\overline{1,n_1},\\ x_0\partial_{x_i}-y_i\partial_{x_0}&\mbox{if}\;i\in\overline{n_1+1,n_2},\\ x_0\partial_{x_i}-\partial_{x_0}\partial_{y_i}&\mbox{if}\;i\in\overline{n_2+1,n},\end{array}\right.\eqno(3.18)$$ $$\pi_c^{n_1,n_2}(K_{n+1+i})=\left\{\begin{array}{ll}x_0\partial_{y_i}-\partial_{x_0}\partial_{x_i}&\mbox{if}\;i\in\overline{1,n_1},\\ x_0\partial_{y_i}-x_i\partial_{x_0}&\mbox{if}\;i\in\overline{n_1+1,n_2},\\ 
-x_0y_i-x_i\partial_{x_0}&\mbox{if}\;i\in\overline{n_2+1,n},\end{array}\right.\eqno(3.19)$$ $$\pi_c^{n_1,n_2}(K_{n+1})=x_0(D_{n_1,n_2}+\tilde c)-\eta_{n_1,n_2}\partial_{x_0},\qquad \pi_c^{n_1,n_2}(K_{2n+2})=-\partial_{x_0}. \eqno(3.20)$$ Note that $$\mathscr G=\mathscr K+\sum_{i=1}^{2n+2}\mathbb{C}K_i\eqno(3.21)$$ is a Lie subalgebra isomorphic to $o(2n+1,\mathbb C)$. Define $$\mathscr B_{\langle k\rangle}=\sum_{i=0}^\infty\mathscr A_{\langle k-i\rangle}x_0^i.\eqno(3.22)$$ Then $$\mathscr B_{\langle k\rangle}=\{u\in\mathscr B\mid D_{n_1,n_2}(u)=k u\}\qquad\mbox{for}\;\;k\in\mathbb Z\eqno(3.23)$$ and $$\mathscr B=\bigoplus_{k\in\mathbb Z}\mathscr B_{\langle k\rangle}.\eqno(3.24)$$ Moreover, $$\xi D_{n_1,n_2} = D_{n_1,n_2}\xi,\;\;\xi\eta_{n_1,n_2}=\eta_{n_1,n_2}\xi,\;\;\xi\mathscr D_{n_1,n_2} = \mathscr D_{n_1,n_2}\xi\qquad\mbox{for}\;\;\xi\in\mathscr G\eqno(3.25)$$ as operators on $\mathscr B$. In particular, $\mathscr B_{\langle k\rangle}$ forms a $\mathscr G$-module for any $k\in\mathbb Z$. Furthermore, $$\mathscr H_{\langle k\rangle}=\{u\in\mathscr B_{\langle k\rangle}\mid \mathscr D_{n_1,n_2}(u)=0 \}\eqno(3.26)$$ forms a $\mathscr G$-module. The following result is taken from Luo and the author's work [LX2]. {\bf Lemma 3.2}. {\it For any $ k\in\mathbb{Z}$, $\mathscr H_{\langle k\rangle}$ is an irreducible $\mathscr G$-submodule and $\mathscr B_{\langle k\rangle}=\bigoplus_{i=0}^\infty\eta_{n_1,n_2}^i(\mathscr H_{\langle k-2i\rangle})$ is a decomposition of irreducible $\mathscr G$-submodules.} Now we have the second result in this section. {\bf Theorem 3.3}. {\it The representation $\pi_c^{n_1,n_2}$ of $o(2n+3,\mathbb{C})$ on $\mathscr B$ is irreducible if $c\not\in \mathbb{Z}/2$.} {\it Proof}. Let $\mathscr M$ be a nonzero $o(2n+3,\mathbb{C})$-submodule of $\mathscr B$. 
By (3.23) and (2.18) with $D_{n_1,n_2}$ in (3.15), $$\mathscr M=\bigoplus_{k\in\mathbb{Z}}\mathscr B_{\langle k\rangle}\bigcap \mathscr M.\eqno(3.27)$$ Thus $\mathscr B_{\langle k\rangle}\bigcap \mathscr M\neq\{0\}$ for some $k\in \mathbb{Z}$. Take the Lie subalgebra $\mathscr L$ in (2.31). By Lemma 3.2, $\mathscr B_{\langle k\rangle}=\bigoplus_{i=0}^\infty\eta_{n_1,n_2}^i(\mathscr H_{\langle k-2i\rangle})$ is a decomposition of irreducible $\mathscr G$-submodules. Moreover, $\eta_{n_1,n_2}^i(\mathscr H_{\langle k-2i\rangle})$ are highest-weight $\mathscr L$-modules with distinct highest weights by [LX1]. Hence $$\eta_{n_1,n_2}^i(\mathscr H_{\langle k-2i\rangle})\subset \mathscr M\;\;\mbox{for some}\;\;i\in\mathbb{N}.\eqno(3.28)$$ Lemma 3.2 and the arguments in (2.33)-(2.36) show $$\mathscr H_{\langle k-s\rangle}\subset \mathscr M\qquad\mbox{for}\;\;s\in\mathbb{N}.\eqno(3.29)$$ Suppose $\eta_{n_1,n_2}^s(x_1^{-r+s})\in \mathscr M$ for any $r\in k-\mathbb N$ and some $s\in\mathbb N$. Then, $$K_{n+1}(\eta_{n_1,n_2}^s(x_1^{-r+s+1}))=(r-1+\tilde c)\eta_{n_1,n_2}^s(x_0x_1^{-r+s+1})\in\mathscr M\eqno(3.30)$$ by (3.17) and the first equation in (3.20), which implies $\eta_{n_1,n_2}^s(x_0x_1^{-r+s+1})\in\mathscr M.$ Moreover, $$K_{n+1}(\eta_{n_1,n_2}^s(x_0x_1^{-r+s+1}))=-\eta_{n_1,n_2}^{s+1}(x_1^{-r+s+1})+(r+\tilde c) \eta_{n_1,n_2}^s(x_0^2x_1^{-r+s+1}).\eqno(3.31)$$ Now (2.39) and (2.41) with $\eta_{n_1,n_2}$ in (3.17), and (3.31) lead to $$ (1/2+r+\tilde c-n_2+n_1)\eta_{n_1,n_2}^{s+1}(x_1^{-r+s+1})\in\mathscr{M}\Rightarrow \eta_{n_1,n_2}^{s+1}(x_1^{-r+s+1})\in\mathscr{M}.\eqno(3.32)$$ By induction, $$\eta_{n_1,n_2}^\ell(x_1^{-r+\ell})\in\mathscr{M}\qquad\mbox{for}\;\;\ell\in\mathbb N,\;r\in k-\mathbb N.\eqno(3.33)$$ According to Lemma 3.2, $$\mathscr B_{\langle m\rangle}=\bigoplus_{\ell=0}^\infty\eta_{n_1,n_2}^\ell(\mathscr H_{\langle m-2\ell\rangle})\subset\mathscr M\qquad\mbox{for}\;\;m\in k-\mathbb N.\eqno(3.34)$$ Observe that 
$$\pi_c^{n_1,n_2}(K_{n+1})x_0=x_0^2(D_{n_1,n_2}+\tilde c+1)-\eta_{n_1,n_2}(x_0\partial_{x_0}+1)\eqno(3.35)$$ by (3.20). Then (3.35) and (2.47)-(2.50) with $\eta_{n_1,n_2}$ in (3.17) and $D_{n_1,n_2}$ in (3.15) yield \begin{eqnarray*}\hspace{2cm}& &-\pi_c^{n_1,n_2}(K_{n+1})x_0+\sum_{i=1}^{n_2}\pi_c^{n_1,n_2}(A_{i,n+1})y_i+\sum_{j=n_2+1}^n\pi_c^{n_1,n_2}(A_{j,n+1})\partial_{y_j} \\ &&-\sum_{r=1}^{n_1}\pi_c^{n_1,n_2}(C_{n+1,r})\partial_{x_r}-\sum_{s=n_1+1}^n\pi_c^{n_1,n_2}(C_{n+1,s})x_s\\ &=&\eta_{n_1,n_2}(1-D_{n_1,n_2}+n_2+n-n_1-2(\tilde c+1))\hspace{4.1cm}(3.36)\end{eqnarray*} as operators on $\mathscr B$. The arguments in (2.52)-(2.58) show $\mathscr M=\mathscr B$; that is, $\mathscr B$ is an irreducible $o(2n+3,\mathbb C)$-module. $\qquad\Box$ {\bf Remark 3.4}. The above irreducible representation depends on the three parameters $c\in \mathbb{C}$ and $n_1,n_2\in\overline{1,n}$. It is not of highest-weight type because of the mixture of multiplication operators and differential operators in (2.16), (2.17), (2.19)-(2.22), (3.18) and (3.19). Since $\mathscr B$ is not completely reducible as an $\mathscr L$-module by [LX1] when $n\geq 2$ and $n_1<n$, $\mathscr B$ is not a unitary $o(2n+3,\mathbb{C})$-module. Expression (2.18) with $D_{n_1,n_2}$ in (3.15) shows that $\mathscr B$ is a weight $o(2n+3,\mathbb{C})$-module with finite-dimensional weight subspaces. Theorem 2 follows from Theorem 3.1, Theorem 3.3 and the above remark. \end{document}
\begin{document} \date{November 2022} \title{How to Design a Stable \\ Serial Knockout Competition} \author[1]{Roel Lambers\thanks{r.lambers@tue.nl}} \author[1]{Rudi Pendavingh} \author[1]{Frits Spieksma} \affil[1]{Department of Mathematics \& Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands} \maketitle \begin{abstract} We investigate a new tournament format that consists of a series of individual knockout tournaments; we call this new format a Serial Knockout Competition (SKC). This format has recently been adopted by the Professional Darts Corporation. Depending on the seedings of the players used for each of the knockout tournaments, players can meet in the various rounds (e.g.\ first round, second round, ..., semi-final, final) of the knockout tournaments. Following a fairness principle of treating all players equally, we identify an attractive property of an SKC: each pair of players should potentially meet equally often in each of the rounds of the SKC. If the seedings are such that this property is indeed present, we call the resulting SKC {\em stable}. In this note we formalize this notion, and we address the question: do there exist seedings for each of the knockout tournaments such that the resulting SKC is stable? We show, using a connection to the Fano plane, that the answer is yes for 8 players. We show how to generalize this to any number of players that is a power of 2, and we provide stable schedules for competitions on 16 and 32 players. \end{abstract} \section{Introduction} \label{sec:intro} Two popular tournament formats are the round robin format and the knockout format. In a round robin format, each pair of players (or teams) meets a given number of times. 
In a knockout tournament, starting from a so-called {\em seeding}, each round of the knockout tournament sees matches between all remaining players, and a player is removed from the tournament after losing a match; in this way, after $\log_2 n$ rounds a winner is determined (where $n$ is the number of players). Each of these formats has been studied intensely from very different viewpoints. In particular, deciding upon a seeding of the players in a single knockout tournament has attracted a lot of attention; we do not aim to review this field, and simply refer to \cite{Ho+Ri1985}, \cite{Vu2010}, \cite{Vu+Shoham2011}, \cite{Groh2012}, \cite{Aziz2014}, \cite{Karpov2016}, \cite{Pa+Su2022}, and the references contained therein for more information on this subject. Most of this literature assumes that probabilities are given that denote the chance of one player beating the other. In practice, it is not uncommon to design a tournament combining both formats: for instance, first have a number of round robin tournaments in parallel, and then let the winners of the round robins participate in a knockout tournament. In this note we study a new format that can be seen as an alternative combination of a knockout tournament and a round robin tournament. Let the number of players $n$ be equal to $2^{k}$ for some $k \geq 2$, allowing us to focus exclusively on so-called {\em balanced} knockout tournaments, i.e., knockout tournaments where each player has to play the same number of matches to win the tournament. Observe that a balanced knockout tournament consists of $k$ successive {\em rounds}, where in round $i$ the remaining $2^{k+1-i}$ players compete, $i=1, \ldots,k$. The competition format we study consists of a set of $2^{k}-1$ knockout tournaments. We will call this format a {\em Serial Knockout Competition}, or SKC for short. 
Related (but different) formats are the so-called quasi-double knockout tournament (\cite{Considine2018}) and the multiple-elimination knockout tournament (\cite{Fayers2005}). The problem that we analyze in this note is to specify, for each of the individual knockout tournaments that make up the SKC, the {\em seeding}; these seedings specify, for each player, the leaf nodes of the underlying knockout trees to which the player is assigned, see Figure~\ref{fig:tree+seedings} for an example of a single knockout tournament. \begin{figure} \caption{A single knockout tournament $T$ where players $0,1, \ldots,7$ are assigned to the leaf nodes, leading to the seeding $s=0145-2367$.} \label{fig:tree+seedings} \end{figure} Once the seedings are specified, the individual knockout tournaments of the SKC can unfold; no other decisions in the design of the competition need to be taken. We refer to specifying the seedings as the {\em design} of the SKC. In this note, we do not deal with determining the winner of an SKC; instead, we focus on the question: how to design an SKC in a fair way? Here, we interpret fairness by asking for a design that (i) treats all players equally, without any prior assumptions on the strengths of the players, and (ii) lets each pair of players meet equally often in each of the rounds of an SKC. One could argue that simply picking random seedings leads to a fair SKC as each player, in expectation, meets each other player equally often. However, due to the inherent variability of picking random seedings, the resulting design will in general violate these conditions. Thus, we aim to find seedings such that, over the SKC, each pair of players meets equally often in all rounds. Consider for instance the first round: as the SKC consists of $2^k-1$ knockout tournaments, each player plays $2^k-1$ first round matches. Hence, we want to find seedings such that each player meets each other player exactly once in a first round. 
More generally, the question is: do there exist seedings such that each pair of players meets equally often in each of the rounds of the SKC? We capture this notion formally by defining the notion of {\em stability} of an SKC. \begin{definition} \label{def:level} Given a knockout tournament $T$ for $n=2^k$ players, we say that $v_T(x,x') = i$ if players $x,x'$ can meet in round $i$ of that tournament, $i=1, \ldots, k$. \end{definition} The phrase `can meet' in the above definition refers to the assumption that players $x$ and $x'$ win their matches in the rounds prior to their encounter. For instance, in Figure~\ref{fig:tree+seedings}, players 1 and 4 can meet in Round 2, while players 0 and 3 can meet in Round 3, the final. Let us now formally define the concept of stability, where we use $\#S$ to denote the number of elements of a finite set $S$. \begin{definition} \label{def:stable} Given a set of knockout tournaments $\mathcal{T}$ on $n=2^k$ players, we say that it is {\em stable in round $i$} if there is a number $c_i$ so that $$\#\{T\in\mathcal{T}: v_T(x,x')=i\} = c_i$$ for all pairs of distinct players $x,x'$. We say that the set $\mathcal{T}$ is {\em stable} if it is stable in all rounds $i=1,\dots,k$. \end{definition} Observe that the expression $\#\{T\in\mathcal{T}: v_T(x,x')=i\}$ counts the tournaments $T$ from the set $\mathcal{T}$ such that players $x$ and $x'$ can meet at round $i$ in $T$, $1 \leq i \leq k$. \begin{definition} We define a Serial Knockout Competition (SKC) as a competition for $n=2^k$ players consisting of $n-1$ knockout tournaments. \end{definition} Notice that in an individual knockout tournament $T$, a player can meet any of $2^{i-1}$ other players when reaching round $i$, i.e., for each player $x$, we have $\#\{x' : v_T(x,x') = i\} = 2^{i-1}$, $i=1, \ldots, k$. As an SKC consists of $2^k-1$ knockout tournaments, the number of meetings that are possible in round $i$ for any player is given by $(2^k-1)2^{i-1}$, $1 \leq i \leq k$. 
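Both $v_T$ and the stability counts of Definition~\ref{def:stable} are easy to check mechanically. The following sketch (plain Python; the function names are ours, and the hard-coded seedings are the ones listed later in Table~\ref{tbl:stable8}) computes the round in which two players can meet and verifies stability for $n=8$:

```python
from itertools import combinations

def meet_round(seeding, x, y):
    # Two players can meet in the round where their leaf positions
    # first fall into the same subtree of the balanced knockout tree.
    p, q = seeding.index(x), seeding.index(y)
    r = 0
    while p != q:
        p, q, r = p // 2, q // 2, r + 1
    return r

# The seven seedings of Table 1 (an assumption hard-coded for checking).
seedings = [
    [0, 1, 4, 5, 2, 3, 6, 7],  # 0145-2367
    [0, 4, 2, 6, 1, 5, 3, 7],  # 0426-1537
    [0, 2, 1, 3, 4, 6, 5, 7],  # 0213-4657
    [0, 3, 5, 6, 1, 2, 4, 7],  # 0356-1247
    [0, 5, 2, 7, 1, 4, 3, 6],  # 0527-1436
    [0, 7, 3, 4, 1, 6, 2, 5],  # 0734-1625
    [0, 6, 1, 7, 2, 4, 3, 5],  # 0617-2435
]

def stability_counts(skc, n=8, k=3):
    # For every pair of players, count in how many tournaments of the
    # SKC they can meet in each round i = 1, ..., k.
    return {(x, y): tuple(sum(1 for s in skc if meet_round(s, x, y) == i)
                          for i in range(1, k + 1))
            for x, y in combinations(range(n), 2)}

# Stability: every pair can meet in round i in exactly c_i = 2**(i-1)
# of the 2**k - 1 tournaments.
assert all(c == (1, 2, 4) for c in stability_counts(seedings).values())
```

Two players seeded at leaf positions $p$ and $q$ can meet in round $i$ exactly when $\lfloor p/2^i\rfloor=\lfloor q/2^i\rfloor$ while $\lfloor p/2^{i-1}\rfloor\neq\lfloor q/2^{i-1}\rfloor$, which is what the loop in \texttt{meet\_round} computes.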
With the number of opponents of any player $x$ equal to $n-1 = 2^k-1$, an SKC is stable in round $i$ if $c_i = 2^{i-1}$, for $i=1, \ldots, k$. In this note, we prove that stable SKC's exist for arbitrary $n=2^k$. We describe in Section~\ref{sec:motivation} the case that motivates this work. In Section~\ref{sec:n=8} we investigate the case of 8 players, and in Section~\ref{sec:general} we deal with the general case. We illustrate in Section~\ref{sec:1632} the cases of 16 and 32 players, and we close in Section~\ref{sec:conclusion}. \subsection{Motivation: The Premier League of Darts} \label{sec:motivation} The motivation for investigating this particular tournament design comes from the Professional Darts Corporation (PDC). We now describe this competition in more detail. The Premier League of Darts, organized by the PDC, is an annual competition where the best darts players of the world compete over several months for the title. This year's edition featured the best 8 players, started on February 3, 2022, and ended on June 13, 2022. Total prize money is £1,000,000, and the winner pockets £275,000. The concept of the league changed drastically compared to the previous years: this edition consists of 16 knockout tournaments. Thus, there is a winner for each of these knockout tournaments, and, importantly, in every single match there is something to play for, which adds to the excitement of the format. The 16 knockout tournaments are structured in the following way: the first 7 knockout tournaments have a predetermined seeding, then there is a special knockout tournament, again 7 knockout tournaments with a given seeding, and a last special knockout tournament. The seedings in the special knockout tournaments depend on the standings at that time. The other (regular) knockout tournaments have a fixed seeding that is determined in advance by the PDC. Our analysis focuses on the seedings in these regular knockout tournaments. 
The first 7 knockout tournaments, as well as the second 7 regular knockout tournaments, each correspond to an SKC. As far as we are aware, this is the first occurrence of an SKC in practice. One reason explaining why an SKC format is not being used more often in practice is the fact that knockout tournaments are used when a match is physically (or otherwise) demanding, and one wants to have relatively few matches to determine a winner. As an SKC requires multiple knockouts, it does not constitute a format with few matches. However, this argument does not apply when the tournament can be organized over a relatively long time period (as in the case of the PDC), and it also does not apply in the domain of e-sports as these require little (physical) effort. E-sports are a fast growing domain with an enormous amount of competitions being organized. We expect that the format of an SKC, or variations thereof, will turn out to be useful and popular in e-sports, as it combines the excitement of a knockout format with the fairness of a round robin format. \section{Constructing a stable SKC when $n=8$} \label{sec:n=8} In this section, we are going to construct a stable SKC tournament $\mathcal{T} = (T_r)_{r \leq 7}$ for $8$ players; this analysis applies directly to the situation encountered by the PDC (see Section~\ref{sec:motivation}). Each knockout tournament is specified by providing a \textit{seeding} $s$, i.e., an ordered permutation of the players $0, \ldots, 7$. Figure~\ref{fig:seedings} shows how to make a knockout tree out of the seeding $s = 01452367$. Although the permutation itself holds all the information needed, we may place hyphens as a visual aid indicating the halves of the seeding: $0145-2367$ instead of $01452367$. \begin{example} \label{exa:seeding} The permutation $0145-2367$ corresponds to the tree in Figure~\ref{fig:seedings}. 
\begin{figure} \caption{Knockout tree $T$ with seeding $s=0145-2367$.} \label{fig:seedings} \end{figure} \end{example} As for the construction, we first simply state a stable SKC in Table \ref{tbl:stable8}, after which we give a method to generate such a set of seedings. \begin{table}[!h] \centering \begin{tabular}{|cccc|} \hline Knockout & Seeding & Node & Line\\ Tournament & & & \\ \hline 1 & 0145-2367 & 1 & Red \\ 2 & 0426-1537 & 4 & Purple \\ 3 & 0213-4657 & 2 & Light green \\ 4 & 0356-1247 & 3 & Blue \\ 5 & 0527-1436 & 5 & Orange \\ 6 & 0734-1625 & 7 & Green \\ 7 & 0617-2435 & 6 & Light blue \\ \hline \end{tabular} \caption{Seedings for a stable SKC.} \label{tbl:stable8} \end{table} In Table \ref{tbl:stable8}, the last two columns refer to nodes and lines. These nodes and lines are elements of the Fano plane used to obtain these seedings. This plane is depicted in Figure~\ref{fig:fanoplane}, where the players $1$ to $7$ are placed on the seven nodes. We construct a seeding in the following way: \begin{itemize} \item Select a node $x \in \{1,\ldots,7\}$. This indicates that Player $0$ meets Player $x$ in the first knockout tournament. In case $x=1$, we have a partial seeding $s=01\dots$. \item Select a line that goes through node $x$. The players corresponding to the two other nodes on the line meet each other. In case $x=1$, if we select the red line, then players $4,5$ meet and we extend the partial seeding to $s=0145\dots$. \item The remaining two matches are given by the two non-selected lines through node $x$: on each of these lines, the two remaining players meet each other. This means that, in case $x=1$, players $2,3$ (light green) and $6,7$ (light blue) meet in the first knockout tournament. The resulting seeding for the first knockout tournament is thus given by $0145-2367$.
\end{itemize} \begin{figure} \caption{The Fano plane used to construct \Cref{tbl:stable8}} \label{fig:fanoplane} \end{figure} A routine verification shows that the knockout tournament arising from a node and a line has the following key property. \begin{lemma} \label{lem:level}Let $T$ be the knockout tournament that arises from the node-line pair $x,\ell$ of the Fano plane, and let $y$ be a node of the Fano plane. Then \begin{itemize} \item $v_T(0,y)=1$ if and only if $y=x$, \item $v_T(0,y)=2$ if and only if $y\in \ell$ and $y\neq x$, and \item $v_T(0,y)=3$ if and only if $y\not\in \ell$. \end{itemize} Moreover, if $\ell'=\{y,x, x'\}$ is any line of the Fano plane containing the node $y$, then $v_T(x,x')=v_T(0,y)$. \end{lemma} Notice that in Table \ref{tbl:stable8}, each node and each line of the Fano plane occur exactly once, and each node is on the corresponding line. The following theorem states that this construction is sufficient to obtain a stable SKC. \begin{theorem} Let $x_1,\ldots, x_7$ be an enumeration of the nodes and $\ell_1,\ldots, \ell_7$ be an enumeration of the lines of the Fano plane, such that $x_r \in \ell_r$ for $r=1,\ldots, 7$. Let $T_r$ be the knockout tournament that arises from the pair $x_r, \ell_r$. Then, the SKC defined by ${\mathcal T}:=\{T_1,\ldots, T_7\}$ is stable. \end{theorem} \begin{proof} To show that $\mathcal{T}$ is stable, we need to show that \begin{equation} \label{eq:theeq} \#\{T\in \mathcal{T}: v_T(x,x')=i\}=2^{i-1}, \end{equation} for each pair of distinct players $x,x'$ and each round $i\in \{1,2,3\}$. Notice that $\mathcal{T}$ is stable in round $i=3$ as soon as it is stable in rounds $1$ and $2$: in each of the $7$ tournaments a pair meets in exactly one round, so the counts over $i=1,2,3$ sum to $7$, forcing the count for $i=3$ to be $7-1-2=4=2^{2}$. We first consider the case that one of $x,x'$ is 0, say $\{x, x'\}=\{0,y\}$ for some $y\in \{1,\ldots, 7\}$. \begin{itemize} \item When $i=1$, our construction ensures that there is a unique knockout tournament, say $r=r^y$, with $x_r=y$; this is the only tournament in which player $0$ meets player $y$ in round $1$.
Hence, $\#\{T\in \mathcal{T}: v_T(0,y)=1\}=\#\{r: y=x_r\}=1$, and equation (\ref{eq:theeq}) is satisfied for $i=1$. \item When $i=2$, we observe that there are exactly three lines through $y$; thus there exist two distinct knockout tournaments $r,r' \neq r^y$ such that $y \in \ell_r$ and $y \in \ell_{r'}$, meaning that players $0$ and $y$ meet in round $2$ in exactly those knockout tournaments. Thus: $\#\{T\in \mathcal{T}: v_T(0,y)=2\}=\#\{r: y\in \ell_r, y\neq x_r\}=2$, and equation (\ref{eq:theeq}) is satisfied for $i=2$. \end{itemize} This settles the case where one player is Player 0. Next, suppose $x,x'$ are distinct players, neither of them $0$. Then, the Fano plane contains a unique node $y$ and line $\ell' = \{y,x,x'\}$ through $x,x'$. By Lemma \ref{lem:level}, we have $v_T(x,x')=v_T(0,y)$ for each $T\in \mathcal{T}$. As $\#\{T \in \mathcal{T} : v_T(0,y) = i\} = 2^{i-1}$ for all $y$, this holds for any distinct pair $x,x'$, for $i=1,2,3$. The theorem follows. \end{proof} We point out that, from the viewpoint of stability, the order in which the individual knockout tournaments are played is irrelevant. \section{Constructing a stable SKC} \label{sec:general} Here we generalize the node-line construction used in Section~\ref{sec:n=8} to find a stable SKC for $n=2^k$ players. In Section~\ref{sec:keyidea}, we describe the basic idea, and in Section~\ref{sec:galois} we make a connection to Galois fields. We use this connection in Section~\ref{sec:result} to prove our main result: Theorem~\ref{th:main}. \subsection{The basic idea} \label{sec:keyidea} The key idea that we will carry over to the general setting is that we will construct our knockout tournaments in a restricted way, so that for each pair of players $x,x'$, there is a well-defined player $y$ such that $$v_T(x,x')=v_T(0,y)$$ for all knockout tournaments $T$ of this restricted form.
Showing that an SKC $\mathcal{T}$ is stable, where each tournament $T\in \mathcal{T}$ is of this special form, then reduces to verifying that $$\#\{T\in \mathcal{T}: v_T(0,y)=i\}=2^{i-1}$$ for each player $y$ and each round $i$, $i=1, \ldots,k$. To define the representative $y$ of a pair of players $x,x'$ and to create the special tournaments $T$, we need additional structure on the set of players. For the case $n=8$, we identified the non-zero players with nodes of the Fano plane and used its geometry to define the tournaments. In what follows, we will identify the $n=2^k$ players with the $2^k$ elements of the {\em Galois field} $GF(2^k)$. As $GF(2^k)$ is a field, both addition and multiplication are possible operations on its elements. We construct a tournament $T$ such that for $x,x' \in GF(2^k)$, we have \begin{align*}v_T(x,x')=v_T(0,y)\end{align*} when $y:=x-x'$ (note that in $GF(2^k)$ subtraction coincides with addition, so $x-x'=x+x'$). After we have constructed a base model for our knockout tournament, we use the multiplication in $GF(2^k)$ on $T$, to create tournaments $T(z)$ for each nonzero element $z$ of $GF(2^k)$, and argue that $$\mathcal{T}:=\{T(z): z\neq 0\}$$ is a stable SKC. \subsection{The connection to Galois fields} \label{sec:galois} To exploit the structure of the Galois field $GF(2^k)$, we first have to describe $GF(2^k)$. Although we do not go into too much detail, we point out the main properties that we use. For an accessible introduction to finite fields, see \cite{FF2022}.\\ \noindent A {\em binary polynomial} $q\in\mathbb{Z}_2[X]$ is an expression of the form $$q = q_kX^k + \dots + q_1X + q_0$$ where the coefficients $q_i$ are either $0$ or $1$. Such polynomials may be added and multiplied as usual, but taking into account that the coefficients are added according to the rule $1+1=0$. So e.g. $$(X+1)\cdot(X^2+X+1)=X^3+X^2+X^2+X+ X+1=X^3+1.$$ The degree of a polynomial $q=\sum_i q_i X^i$ is the highest value of $i$ so that $q_i\neq 0$.
The polynomial $q=X^3+1$ that is the outcome of the above calculation is {\em reducible}, because it has degree 3 and is the product of two polynomials of strictly lower degree, resp. $X+1$ of degree 1 and $X^2+X+1$ of degree 2. For any value of $k$, irreducible polynomials $q\in\mathbb{Z}_2[X]$ of degree $k$ are guaranteed to exist. For example, when $k=3$, the polynomial $q = X^3 + X^2 + 1$ is irreducible over $\mathbb{Z}_2[X]$. Other irreducible polynomials of small degree are $X^2+X+1, X^4+X+1, X^5+X^2+1$ for degree $k=2,4,5$ respectively. Given any polynomial $q\in \mathbb{Z}_2[X]$, we write $\mathbb{Z}_2[X]/(q)$ for the set of polynomials one gets from a polynomial in $\mathbb{Z}_2[X]$ by filling in a symbolic value $\alpha$ that is assumed to satisfy $q(\alpha)=0$. If $q=X^2+X+1$, then the element $x=\alpha^3\in\mathbb{Z}_2[X]/(q)$ can be rewritten as $$x=\alpha^3=\alpha^3+\alpha\cdot q(\alpha)=\alpha^3+\alpha\cdot(\alpha^2+\alpha+1)=\alpha^2 +\alpha=\alpha^2 +\alpha+q(\alpha)=1$$ because $q(\alpha)=0$. Indeed, any element $x\in \mathbb{Z}_2[X]/(q)$ can be rewritten to $x= x_{k-1} \alpha^{k-1}+\cdots+x_1\alpha+x_0$, that is, without using powers $\alpha^i$ with $i\geq k$ in the expression. If $q \in \mathbb{Z}_2[X]$ is an {\em irreducible} polynomial of degree $k$, it is known that $GF(2^k) \cong \mathbb{Z}_2[X]/(q)$ is a {\em field}: one can add and multiply with its elements, but also divide by any nonzero element. Indeed, consider that in the above example with $q=X^2+X+1$, we had $\alpha\cdot \alpha^2=\alpha^3=1$. Then $\alpha^{-1}=\alpha^2$, and dividing by $\alpha$ amounts to multiplying with $\alpha^2$. The irreducibility of $q$ ensures that for any nonzero $x\in GF(2^k)$ there is a $y\in GF(2^k)$ so that $x\cdot y=1$. Then a division by $x$ can be executed as a multiplication by $y$. There is more than one irreducible polynomial $q$ of each degree $k$, but whichever one one chooses, the outcome is mathematically ``the same'' field $GF(2^k)$.
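To make the arithmetic concrete, here is a minimal Python sketch (an illustration, not part of the construction): binary polynomials are encoded as integers, with bit $i$ holding the coefficient of $X^i$, so that coefficient addition modulo $2$ is bitwise XOR. It reproduces the two computations above: the product $(X+1)\cdot(X^2+X+1)=X^3+1$ and the reduction $\alpha^3=1$ in $\mathbb{Z}_2[X]/(X^2+X+1)$.

```python
# Binary polynomials as integers: bit i is the coefficient of X^i.
# Addition of coefficients mod 2 is XOR.

def gf_mul(x, y, q, k):
    """Multiply x and y in Z_2[X]/(q) = GF(2^k), reducing by q of degree k."""
    r = 0
    while y:
        if y & 1:
            r ^= x              # add the current shifted copy of x
        y >>= 1
        x <<= 1
        if (x >> k) & 1:        # degree of x reached k: add q (= subtract q)
            x ^= q
    return r

# (X+1)(X^2+X+1) = X^3+1: choose k large enough that no reduction happens
assert gf_mul(0b11, 0b111, 0, 10) == 0b1001

# GF(4) with q = X^2+X+1: alpha^2 = alpha+1 and alpha^3 = 1
q, k = 0b111, 2
alpha = 0b10
assert gf_mul(alpha, alpha, q, k) == 0b11
assert gf_mul(alpha, gf_mul(alpha, alpha, q, k), q, k) == 1  # alpha^{-1} = alpha^2
```

The same routine works for any degree $k$ once an irreducible $q$ is fixed, which is all the field structure the construction below needs.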
Having fixed a polynomial $q$ for the construction of the Galois field $GF(2^k)$, there is just one way to write an element $x\in GF(2^k)$ as $x = \sum_{i=0}^{k-1} x_i\alpha^i \in GF(2^k)$, and we may define the {\em degree} of $x$ as $d(x) = \max\{i : x_i \neq 0\}$. This degree leads us to the following lemma on the existence of a tournament $T$ with the nice property that $v_T(x,y) = v_T(0,x-y) = 1 + d(x-y)$. \begin{lemma} There is a knockout tournament $T$ whose players are the elements of $GF(2^k)$, so that $v_T(x,y)=1+d(x-y)$ for all $x,y\in GF(2^k)$. \label{lem:knockout} \end{lemma} \begin{proof} We construct the tournament $T$ by inductively constructing $T_m$ for incremental values $m=1,\dots,k$, where each $T_m$ is a knockout tournament on the set $P_m = \{x \in GF(2^k) : d(x) < m\}$, and all the $T_m$ have the property that $v_{T_m}(x,y) = 1 + d(x-y)$ for $x,y \in P_m$. Then $T=T_k$ proves the lemma. When $m=1$, the set $P_1=\{0,1\}$ contains only two players, and the unique tournament $T_1$ one can construct on these two players has $v_{T_1}(0,1)=1=1+d(1-0)$. As the induction step, assume that $T_m$ exists such that $v_{T_m}(x,y)=1+d(x-y)$ for all $x,y\in P_m$. Let $T_m'$ arise from a copy of $T_m$ by adding $\alpha^m$ to each player. Then $T_m'$ has players $P_m'=\{x+\alpha^m: x\in P_m\}$ and for any two players $x',y' \in P'_m$ we have \begin{align*}v_{T_m'}(x', y')=v_{T_m}(x,y)=1+d(x-y)=1+d(x'-y')\end{align*} where $x'=x+\alpha^m$ and $y'=y+\alpha^m$ with $x,y\in P_m$. We construct $T_{m+1}$ for players $P_{m+1}=P_m\cup P_m'$ as the combination of tournaments $T_m$, $T_m'$, where at round $m+1$, the winner of $T_m$ plays the winner of $T_m'$.
For this $T_{m+1}$, we see that for $x,y \in P_{m+1}$: \begin{align*} v_{T_{m+1}}(x,y) &= v_{T_{m}}(x,y) = 1 + d(x-y) && \text{if }x,y \in P_m \\ v_{T_{m+1}}(x,y) &= v_{T_{m}'}(x,y) = 1 + d(x-y) &&\text{if } x,y \in P_m' \\ v_{T_{m+1}}(x,y) &= 1 + m = 1 + d(x-y) &&\text{if } x \in P_m, y \in P_m' \text{ or } x \in P_m', y \in P_m \end{align*} This finishes the induction step. Taking $T = T_k$ gives the desired tournament. \end{proof} The construction of $T$ with elements in $GF(2^3)$ is illustrated in Figure \ref{fig:ko-construction}. \begin{figure} \caption{A knock-out tournament $T$ so that $v_T(x,y)=1+d(x-y)$} \label{fig:ko-construction} \end{figure} \subsection{The result} \label{sec:result} By Lemma~\ref{lem:knockout}, we know there exists a knockout tournament $T$ on the elements of $GF(2^k)$ such that $v_T(x,y) = v_{T}(0,x-y) = 1+d(x-y)$ for all $x,y \in GF(2^k)$. In what follows, we argue that for each non-zero $z \in GF(2^k)$, the tournament $T(z)$ obtained from $T$ by replacing each player $x$ by $zx$ maintains the property that $v_{T(z)}(x,y) = v_{T(z)}(0,x-y)$. Then we show that $$\mathcal{T} = \{T(z) : z \in GF(2^k) \setminus \{0\}\}$$ is a stable SKC. Let $T$ be a tournament satisfying Lemma~\ref{lem:knockout}, thus $v_T(x,y) = 1 + d(x-y)$ for all $x,y\in GF(2^k)$. Let $z\in GF(2^k)$ be non-zero and thus invertible. We construct $T(z)$ from $T$ by replacing each player $x$ with $zx$. As the map $x \mapsto zx$ is one-to-one, $T(z)$ is again a tournament whose players are the elements of $GF(2^k)$. Evidently we have $v_{T(z)}(x, y)=v_T(z^{-1}x, z^{-1}y)$ for all $x,y\in GF(2^k)$. It follows that $$v_{T(z)}(x, y)=v_T(z^{-1}x, z^{-1}y)=v_T(0, z^{-1}(x-y))=v_{T(z)}(0,x-y)$$ for all $x,y\in GF(2^k)$ and $$v_{T(z)}(0,y)=v_{T}(0,z^{-1}y)=1+d(z^{-1}y)$$ for all $y\in GF(2^k)$. \begin{theorem} \label{th:main}$\mathcal{T}:=\{T(z): z\text{ a nonzero element of } GF(2^k)\}$ is a stable SKC.
\end{theorem} \begin{proof} We need to show that $\#\{T\in \mathcal{T}: v_T(x,x')=i\}=2^{i-1}$ for each pair of distinct players $x,x'\in GF(2^k)$ and each round $i=1,\ldots, k$. If one of $x,x'$ is 0, say $\{x,x'\}=\{0,y\}$ with $y\neq 0$, then, for each $i=1, \ldots,k$, \begin{align*}\#\{T\in \mathcal{T}: v_T(0,y)=i\}= \#\{z\in GF(2^k): z\neq 0, 1+d(z^{-1}y)=i\}.\end{align*} Substituting $z$ by $r^{-1}y$ this equals \begin{align*}&\#\{r^{-1}y\in GF(2^k): r\neq 0, 1+d(r)=i\}=\\ &\#\{r\in GF(2^k): r\neq 0, 1+d(r)=i\}=2^{i-1}\end{align*} since the map $r\mapsto r^{-1}y$ is one-to-one, and there are exactly $2^{i-1}$ nonzero elements $r$ of degree $i-1$: those with $r_{i-1}=1$ and $r_0,\ldots,r_{i-2}$ arbitrary. The general case reduces to the above special case, since each of the tournaments $T\in \mathcal{T}$ has $v_T(x,x')= v_T(0, x-x')$. Then \begin{align*}\#\{T\in \mathcal{T}: v_T(x,x')=i\}=\#\{T\in \mathcal{T}: v_T(0, x-x')=i\}=2^{i-1},\end{align*} as required. \end{proof} \ignore{ For example, if $k=3$, we have a tournament with the following matches. \begin{itemize} \item at stage 0: 0 vs 1; $\alpha$ vs. $\alpha+1$; $\alpha^2$ vs. $\alpha^2+1$; $\alpha^2+\alpha$ vs. $\alpha^2+\alpha+1$ \item at stage 1: winner of 0,1 vs. winner of $\alpha$, $\alpha+1$; winner of $\alpha^2$,$\alpha^2+1$ vs. winner of $\alpha^2+\alpha$, $\alpha^2+\alpha+1$ \item at stage 2: winner of 0,1, $\alpha$, $\alpha+1$ vs. winner of $\alpha^2$,$\alpha^2+1$, $\alpha^2+\alpha$, $\alpha^2+\alpha+1$ \end{itemize} } We close this section with an example that constructs a stable SKC on $8$ players using the Galois field. \begin{example} For the Galois field, we choose $q(X)=X^3+X+1$ as the irreducible polynomial over $\mathbb{Z}_2$ and set $q(\alpha) = 0$. The corresponding multiplication table is shown in Table \ref{tab:multiplic}.
\begin{table}[!h] \centering \tiny \begin{tabular}{|C*{7}{C}|} \hline & 1 & \alpha & \alpha + 1 & \alpha^2 & \alpha^2+1 & \alpha^2 + \alpha & \alpha^2 + \alpha + 1\\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 1 & \alpha & \alpha + 1 & \alpha^2 & \alpha^2+1 & \alpha^2 + \alpha & \alpha^2 + \alpha + 1 \\ \hline \alpha & \alpha & \alpha^2 & \alpha^2 + \alpha & \alpha+1 & 1 & \alpha^2 + \alpha+1 & \alpha^2 + 1 \\ \hline \alpha + 1 & \alpha + 1 & \alpha^2 + \alpha & \alpha^2 + 1 & \alpha^2 + \alpha + 1 & \alpha^2 & 1 & \alpha \\ \hline \alpha^2 & \alpha^2 & \alpha + 1 & \alpha^2 + \alpha + 1 & \alpha^2+\alpha & \alpha & \alpha^2+1 & 1 \\ \hline \alpha^2 + 1 & \alpha^2 + 1 & 1 & \alpha^2 & \alpha & \alpha^2+\alpha+1 & \alpha + 1 & \alpha^2 + \alpha \\ \hline \alpha^2 + \alpha & \alpha^2 + \alpha & \alpha^2 + \alpha+1 & 1 & \alpha^2+1 & \alpha + 1 & \alpha & \alpha^2 \\ \hline \alpha^2 + \alpha + 1 & \alpha^2 + \alpha + 1 & \alpha^2+1 & \alpha & 1 & \alpha^2+\alpha & \alpha^2 & \alpha+1 \\ \hline \end{tabular} \caption{Multiplication on $GF(2^3)$} \label{tab:multiplic} \end{table} Table \ref{tab:multiplic} essentially gives the seedings for the SKC, since the row for multiplication by $z$ presents the seeding for $T(z)$. Upon replacing each polynomial with the number specified in \Cref{tab:galteams}, we get the SKC of Table \ref{tab:GaloisSKO}. \begin{table} \centering \begin{tabular}{|*{8}{C}|} \hline 0 & 1 & \alpha & \alpha + 1 & \alpha^2 & \alpha^2+1 & \alpha^2 + \alpha & \alpha^2 + \alpha + 1\\ \hline 0&1&4&5&2&3&6&7\\ \hline \end{tabular} \caption{From Galois to teams} \label{tab:galteams} \end{table} \ignore{Take any permutation of $1,\dots,7$, and let the $i$-th element in the permutation correspond to the $i$-th element in the first row. Apply this correspondence to all of the values in the Table, to get $7$ rows with a permutation of $1,\dots,7$. 
Starting with the $0$ player and reading each of these $7$ rows, gives the seeding of $7$ knock-out tournaments, that together form a stable SKC. If we let the first row correspond to the first row of Table \ref{tbl:stable8}, that is $145-2367$, omitting the $0$ player, than the knockout schemes can be read as follows: } Comparing the SKC from Table \ref{tbl:stable8} with the one shown in Table \ref{tab:GaloisSKO}, we see that several of the knockout tournaments coincide, up to the order in which they are listed, while others differ; both SKC's are stable. \begin{table}[!h] \centering \begin{tabular}{|CCCC|} \hline \begin{array}{c}\text{Knockout} \\ \text{Tournament}\end{array} & \text{Seeding} & \begin{array}{c}\text{Knockout} \\ \text{Tournament}\end{array} & \text{Seeding} \\ \hline 1 & 0145-2367 & 5 & 0312-4756\\ 2 & 0426-5173 & 6 & 0671-3542 \\ 3 & 0563-7214 & 7 & 0734-1625 \\ 4 & 0257-6431 & & \\ \hline \end{tabular} \caption{SKC constructed from \Cref{tab:multiplic}} \label{tab:GaloisSKO} \end{table} \end{example} \section{Stable SKC on 16 and 32 players} \label{sec:1632} In this section we use the construction of the previous section to generate an SKC on $16$ and one on $32$ players. For notational purposes, we enumerate the first $10$ players by $0,\dots,9$ and continue with $a,b$ up until $f$ in the case of $16$ and $v$ in the case of $32$ players. By doing this, we can visualize the seedings as a string of length $16$ ($32$) where each character is one player. The seedings are shown in \Cref{tab:SKC16,tab:SKC32}.
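The seedings in these tables follow mechanically from the construction of Section~\ref{sec:general}. As a sanity check, the following Python sketch (field elements encoded as integers, with bit $i$ the coefficient of $X^i$; the irreducible polynomials are the ones listed in Section~\ref{sec:galois}) generates the tournaments $T(z)$ and verifies stability:

```python
def gf_mul(x, y, q, k):
    """Multiply x and y in GF(2^k) = Z_2[X]/(q); coefficient bits, XOR addition."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if (x >> k) & 1:     # degree reached k: reduce by q
            x ^= q
    return r

def meet_round(pa, pb):
    """Round in which bracket positions pa and pb meet in a knockout tree."""
    r = 0
    while pa != pb:
        pa, pb, r = pa // 2, pb // 2, r + 1
    return r

def stable_skc(k, q):
    """Seedings of T(z), z != 0: position p holds player z*p.  The base
    tournament (z = 1) is the identity seeding, which realizes the lemma:
    positions x, y meet in round 1 + d(x - y), as subtraction is XOR."""
    n = 1 << k
    return [[gf_mul(z, p, q, k) for p in range(n)] for z in range(1, n)]

def is_stable(seedings, k):
    """Check that each pair meets in round i in exactly 2^(i-1) tournaments."""
    n = 1 << k
    for x in range(n):
        for y in range(x + 1, n):
            counts = [0] * (k + 1)
            for s in seedings:
                counts[meet_round(s.index(x), s.index(y))] += 1
            if counts[1:] != [2 ** (i - 1) for i in range(1, k + 1)]:
                return False
    return True

assert is_stable(stable_skc(3, 0b1011), 3)    # 8 players,  q = X^3 + X + 1
assert is_stable(stable_skc(4, 0b10011), 4)   # 16 players, q = X^4 + X + 1
assert is_stable(stable_skc(5, 0b100101), 5)  # 32 players, q = X^5 + X^2 + 1
```

For $z=1$ the seeding is the identity permutation, realizing the base tournament of Lemma~\ref{lem:knockout}; multiplying the bracket by $z$ yields the other tournaments, exactly as in the proof of Theorem~\ref{th:main}.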
\begin{table}[!h] \centering \begin{tabular}{|CC|} \hline \begin{array}{c}\text{Knockout} \\ \text{Tournament}\end{array} & \text{Seeding} \\ \hline 1 & 0123-4567-89ab-cdef \\ 2 & 0246-8ace-3175-b9fd \\ 3 & 0365-cfa9-b8de-7412 \\ \hline 4 & 048c-37bf-62ea-51d9 \\ 5 & 05af-72d8-eb41-9c36 \\ 6 & 06ca-bd71-539f-e824 \\ 7 & 07e9-f816-da34-25cb \\ \hline 8 & 083b-6e5d-c4f7-a291 \\ 9 & 0918-2b3a-4d5c-6f7e \\ 10 & 0a7d-e493-f582-1b6c \\ 11 & 0b5e-a1f4-7c29-d683 \\ \hline 12 & 0cb7-59e2-a61d-f348 \\ 13 & 0d94-1c85-2fb6-3da7 \\ 14 & 0ef1-d32b-9768-4ab5 \\ 15 & 0fd2-964b-1ec3-875a \\ \hline \end{tabular} \caption{Stable SKC on 16 players} \label{tab:SKC16} \end{table} \renewcommand{\arraystretch}{0.8} \begin{table}[!h] \centering \sffamily \begin{tabular}{|CC|} \hline \begin{array}{c}\text{Knockout} \\ \text{Tournament}\end{array} & \text{Seeding} \\ \hline 1 & 0123-4567-89ab-cdef-ghij-klmn-opqr-stuv\\ 2 & 0246-8ace-gikm-oqsu-5713-df9b-lnhj-tvpr\\ 3 & 0365-cfa9-orut-knih-lmjg-pqvs-deb8-1274\\ \hline 4 & 048c-gkos-51d9-lhtp-ae26-quim-fb73-vrnj\\ 5 & 05af-khur-d872-psjm-qvgl-eb41-nito-369c\\ 6 & 06ca-ouki-ljpv-db17-f935-nhrt-qsmg-24e8\\ 7 & 07e9-sril-tqjk-16f8-vohm-34da-25cb-upgn\\ \hline 8 & 08go-5dlt-a2qi-f7vn-ks4c-hp19-ume6-rjb3\\ 9 & 09ir-18jq-2bgp-3aho-4dmv-5cnu-6fkt-7els\\ 10 & 0aku-d7pj-qge4-nt39-hr5f-sm82-b1vl-6cio\\ 11 & 0bmt-92vk-ip4f-rgd6-1ans-83ul-jo5e-qhc7\\ \hline 12 & 0cok-lpd1-f3nr-qm2e-ui6a-b7jv-ht95-48sg\\ 13 & 0dqn-hsb6-7atg-mrc1-e3kp-vi58-94ju-ol2f\\ 14 & 0esi-tj1f-vh3d-2cug-rl79-68qk-4aom-pn5b\\ 15 & 0fuh-pm78-no96-e1gv-b4lq-itc3-sj2d-5ark\\ \hline 16 & 0g5l-aqfv-k4h1-uerb-dt8o-7n2i-p9sc-j3m6\\ 17 & 0h7m-ev9o-sdra-i3l4-tcqb-j2k5-1g6n-fu8p\\ 18 & 0i1j-2g3h-4m5n-6k7l-8q9r-aobp-cudv-esft\\ 19 & 0j3g-6l5m-cvfs-ap9q-obr8-udte-k7n4-i1h2\\ \hline 20 & 0kdp-qen3-h5s8-bv6i-7jau-t9g4-m2rf-co1l\\ 21 & 0lfq-ubh4-pcm3-7i8t-n2od-9s6j-er1k-g5va\\ 22 & 0m9v-i4rd-1n8u-j5qc-2kbt-g6pf-3las-h7oe\\ 23 & 0nbs-m1ta-9u2l-v8k3-i5pe-4jfo-rcg7-dq6h\\ \hline 24 &
0old-fnq2-u6bj-h94s-p1ck-me3r-7via-8gt5\\ 25 & 0pne-bis5-mf1o-t4aj-9gu7-2rlc-v68h-kd3q\\ 26 & 0qhb-7tmc-ekv5-9jo2-s6dn-r1ag-i83p-lf4u\\ 27 & 0rj8-3ogb-6tle-5umd-cnv4-fks7-ahp2-9iq1\\ \hline 28 & 0st1-v32u-r76q-4op5-jfei-cghd-8kl9-nbam\\ 29 & 0tv2-r64p-jech-8lna-3us1-o57q-gdfi-bmk9\\ 30 & 0up7-n9eg-blic-s25r-m8fh-1vo6-t34q-akjd\\ 31 & 0vr4-jc8n-3so7-gfbk-6pt2-laeh-5qu1-m9di\\ \hline \end{tabular} \caption{Stable SKC on 32 players} \label{tab:SKC32} \end{table} \section{Discussion} \label{sec:conclusion} We have analyzed a novel tournament design that is used in practice and that can be seen as a combination of a knockout tournament and a round robin tournament; we call it a Serial Knockout Competition (SKC). From the viewpoint of fairness, an attractive property of an SKC is stability: each pair of players can meet in each round of the SKC equally often. We have shown that this is always possible. Interestingly, one easily observes that the implementation of the SKC used in the PDC Premier League is not stable. We remark here that the construction of stable SKC's does not generate a unique tournament: for example, the order of the individual knockout tournaments can be changed without impacting the stability of the SKC. Also, within each knockout tournament, a tournament $T(s)$ with seeding $s$ can be replaced by $T(s')$ as long as $v_{T(s)}=v_{T(s')}$. Thus, not all stable SKC's are equal, and from an organizer's point of view there might be additional constraints allowing one to prefer one stable SKC over another. \textbf{Acknowledgement.} The research of Frits C.R. Spieksma was partly funded by the NWO Gravitation Project NETWORKS, Grant Number 024.002.003. \printbibliography \end{document}
\begin{document} \title{Accurate Driving Model by Online Adaptation of Parameters using GRU based Neural Network with BO} \begin{abstract} Testing self-driving cars in different areas requires surrounding cars with correspondingly different driving styles. A method to measure and differentiate driving styles numerically, so that a virtual driver with a given driving style can be created accordingly, is in demand. However, most methods for measuring driving style need thresholds or labels for classification, and some of them require additional experiments. These limitations do not fit the creation of a large virtual testing environment. Meanwhile, Driving Models (DMs) simulate human driving behaviors. Calibrating a DM brings the simulated driving behavior closer to the real behavior and therefore creates a natural-behavior car in simulation. The main DM calibration methods do not consider that the parameters in a DM are variable while driving. These ``fixed'' calibration methods cannot reflect an interactive and real driving scenario. In response to these two problems, we propose 1) an objective entropy weight method to measure and cluster driving styles, and 2) an adaptive DM calibration method based on deep learning that combines Bayesian Optimization (BO) and a Gated Recurrent Unit (GRU). The experiments showed that our method can easily be used to measure a driver's style. The experiments also showed that we could calibrate a corresponding driver in a virtual testing environment up to 26\% more accurately than other calibration methods. \end{abstract} \section{Introduction} Human driving styles include aggressive, normal, and conservative, among others. These styles are distributed differently among countries and areas. It is necessary to create a virtual environment with drivers from different areas to fully test a self-driving car. E.g., drivers in Japan have a relatively low accident rate compared with the USA \cite{doi:10.1080/13669877.2018.1517384}.
It is predictable that testing a self-driving car in the USA requires more aggressive surrounding drivers than in Japan. To create such an environment, there are two problems to solve: \begin{enumerate} \item A driver's driving style is an abstract concept; it is necessary to find a way to measure it numerically, so that all the drivers' styles in one area can be represented to create a corresponding environment. \item A driving model (DM) should simulate real human driving behaviors. \end{enumerate} A car-following model describes the movements of a following vehicle (FV) in response to the actions of the leading vehicle (LV) \cite{ZHU2018425}, and is used in simulation environments such as Simulation of Urban Mobility (SUMO) \cite{SUMO2018}. In a DM, a driver's behaviors are controlled by a series of mathematical equations. Setting the parameters in a DM so that the simulated behaviors get closer to the real behaviors is called calibrating the DM. A well-calibrated DM creates a natural-behavior car in simulation, but the parameters of the DM are fixed values in simulations \cite{8317836}. \begin{figure} \caption{Overview of proposed method} \label{figure:overview} \end{figure} In this paper, we propose a method of measuring real driving style and of online adaptation of DM parameters using a Gated Recurrent Unit (GRU) based neural network to reproduce real car-following behavior more accurately. The proposed method first uses the objective entropy weight method to measure driving data \cite{8342953} and cluster it. It then splits the driving data into short time windows (e.g., 0.5 sec), and employs Bayesian optimization (BO) to search for optimized DM parameters for each time window. It finally trains a GRU based neural network on the parameters found by BO.
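The window-splitting and per-window search just described can be sketched as follows. This is a toy illustration, not our implementation: \texttt{simulate} is a hypothetical one-parameter stand-in for a car-following DM, and plain random search stands in for BO; the (window, parameter) pairs collected this way are what the GRU based network is trained on.

```python
import random

def simulate(gain, leader_speeds, v0):
    """Hypothetical stand-in for a driving model: the follower's speed
    relaxes toward the leader's speed with a single reaction gain."""
    v, trace = v0, []
    for vl in leader_speeds:
        v = v + gain * (vl - v)
        trace.append(v)
    return trace

def window_error(gain, leader_w, follower_w, v0):
    """Squared error between simulated and recorded follower speeds."""
    sim = simulate(gain, leader_w, v0)
    return sum((a - b) ** 2 for a, b in zip(sim, follower_w))

def calibrate_windows(leader, follower, v_init=0.0,
                      length=5, stride=5, trials=200):
    """Split trajectories into windows and search the best DM parameter
    per window (random search stands in for Bayesian optimization)."""
    random.seed(0)
    pairs = []          # (window start, best gain): GRU training data
    for k in range(0, len(leader) - length + 1, stride):
        lw, fw = leader[k:k + length], follower[k:k + length]
        v0 = follower[k - 1] if k > 0 else v_init
        best = min((random.uniform(0.0, 1.0) for _ in range(trials)),
                   key=lambda g: window_error(g, lw, fw, v0))
        pairs.append((k, best))
    return pairs

# synthetic check: follower data generated with gain 0.5 is recovered
leader = ([10.0] * 5 + [0.0] * 5) * 2
follower = simulate(0.5, leader, 0.0)
pairs = calibrate_windows(leader, follower)
```

In the paper's setting, the scalar gain is replaced by the inner parameters of the Krauss or Wiedemann model, and the search per window is done by BO rather than random sampling.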
We evaluated the proposed method by applying it to the Krauss car-following model \cite{7019528} and the Wiedemann car-following model \cite{wiedemann74}, with a public dataset of driving data, Next Generation Simulation (NGSIM), Interstate 80 freeway, collected in California, USA \cite{doi:10.3141/2390-11}. We demonstrated that the proposed method works in a real simulation environment, SUMO. The results suggested that our method can easily and numerically differentiate the distribution of drivers' styles in a dataset, and that our adaptive method achieves a smaller difference from real data than previous methods. In response to the problems and limitations of previous work, our contributions are as follows: \begin{enumerate} \item We applied the objective entropy weight method to measure a human's driving style. \item We propose a new method for adaptively calibrating a driving model. \item The experiments suggested that our method achieves a smaller difference in simulating a driver's velocity trajectory than the previous method. \end{enumerate} \section{Related work} \subsection{Measurement of driving styles} Researchers have investigated the relationship between driving styles and external conditions (traffic, weather, \ldots) and internal conditions (age, gender, \ldots). Some research focused on defining a correct way of labeling driving styles. Researchers classified drivers into discrete classes (aggressive, normal, \ldots) or by a continuous index (e.g., $-1$ to $1$, from most dangerous to most conservative). Meanwhile, driving style classification methods have also been an active topic. The main research includes rule-based, model-based, and learning-based methods, covering supervised and unsupervised learning. Most of the methods set absolute criteria and need a clear label for classification \cite{8002632}. \subsection{Calibrating driving model} Researchers have developed many models of car-following behavior.
Car-following models simulate the longitudinal movement of a vehicle and are used to replicate the behavior of a driver when following another vehicle. The Krauss model \cite{7019528} and the Wiedemann model \cite{wiedemann74} are two examples. The Krauss model for car-following behavior can be described as \cite{7019528}: {\fontsize{8pt}{10pt}\selectfont \begin{eqnarray} v_{des} &=& \min (v_{max}, v + at_{step}, v_{safe}) \\ v_{safe} &=& -b\left(t_r+\frac{t_i}{2}\right)+\sqrt{b^2\left(t_r+\frac{t_i}{2}\right)^2+b v_l t_i+v_l^2+2bg} \end{eqnarray} } $v_{des}$ is the desired speed in the simulation environment, $t_r$ is the driver's overall reaction time, and $t_i$ is the driver's decreasing reaction time. $t_r$, $t_i$ are the inner parameters that define driving behavior (speed control). To reproduce a driver's car-following behavior, calibrating $t_r$, $t_i$ brings $v_{des}$ closer to the real collected data. Therefore, a well-calibrated DM can naturally simulate a real driver's car-following behavior. The remaining parameters are: {\it b} = max deceleration, {\it a} = max acceleration, $v_l$ = preceding vehicle speed, $g$ = subject vehicle's gap distance to the preceding vehicle, $t_{step}$ = simulation frame interval, and $v_{max}$ = road speed limit. The forward Euler method is then used to solve for the vehicle position and speed: \begin{eqnarray} V_n(t+\Delta T)&=&V_n(t)+ a_n(t)\cdot \Delta T \\ X_n(t+\Delta T)&=&X_n(t)+ V_n(t)\cdot \Delta T \end{eqnarray} The Wiedemann car-following model can be described as \cite{8317836}: \begin{itemize} \item $AX$ is the distance to the front vehicle when standing still, and is calculated as $AX(t) = l_{\alpha-1} + AX_{add}$. \item $ABX$ is the desired minimum distance to the front vehicle, and is calculated as $ABX(t) = AX(t) + BX_{add}$. \item $SDX$ is the maximum distance when following a vehicle, and is calculated as $SDX(t) = SDX_{mult} \cdot ABX(t)$.
\item $SDV$ is the point at which the driver notices that he/she is approaching a vehicle with a lower velocity, and is calculated as $SDV(t) = \left(\frac{s_\alpha(t)-AX(t)}{CX}\right)^2$. \item $OPDV$ is the point at which the driver notices that the front vehicle is driving away with a higher velocity, and is calculated as $OPDV(t) = SDV(t) \cdot (-OPDV_{add})$. \end{itemize} \begin{description} \item where $AX_{add}$, $BX_{add}$, $SDX_{mult}$, $CX$, and $OPDV_{add}$ are tuning parameters, and $v(t) = \min(v_\alpha(t), v_{\alpha-1}(t))$. In actual simulations, SUMO uses a 10-parameter version of the Wiedemann car-following model \cite{w99demo}. \end{description} Researchers have investigated calibration methods based on naturalistic driving data, but the datasets were not open, and the inner features are hard to acquire in large-scale testing \cite{ZHU2018425}. Previous research also did not consider an adaptive calibration method: the DM parameters stayed the same throughout the simulation. A variable calibration method is expected to better represent a real and complex driving scenario. \subsection{BO and GRU} BO is a strategy for global optimization of black-box functions, and has appeared in the machine learning literature as a means of tackling difficult black-box optimization problems \cite{10.5555/2999325.2999464}; e.g., BO was used to learn a set of robot parameters that maximize the velocity of a Sony AIBO ERS-7 robot \cite{10.5555/1625275.1625428}. We use BO to search for optimal parameters of the car-following model. GRU is a gating mechanism in recurrent neural networks \cite{cho-etal-2014-learning}, and is used in real-world applications such as traffic flow prediction \cite{7804912}. \section{Proposed Method} \subsection{Driving style score by objective entropy weight} In this research, we use the objective entropy weight method \cite{8342953}. The method is based on the information provided by the various attributes to find the weights.
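Before spelling out the individual steps, the whole computation can be condensed into a short sketch (an illustration under simplifying assumptions: all attributes are treated as positive indicators with min--max normalization, and \texttt{entropy\_weights} is our own naming):

```python
import math

def entropy_weights(E):
    """Entropy weights for an m x n evaluation matrix E (rows = vehicles,
    columns = attributes) and the resulting per-vehicle style scores."""
    m, n = len(E), len(E[0])
    cols = list(zip(*E))
    # min-max normalize each column into [0, 1]
    norm = [[(E[i][j] - min(cols[j])) / (max(cols[j]) - min(cols[j]))
             for j in range(n)] for i in range(m)]
    weights = []
    for j in range(n):
        col = [norm[i][j] for i in range(m)]
        p = [c / sum(col) for c in col]                       # proportions p_ij
        ent = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        weights.append(1.0 - ent)
    w = [wj / sum(weights) for wj in weights]  # w_j = (1-ent_j)/(n - sum_j ent_j)
    scores = [sum(norm[i][j] * w[j] for j in range(n)) for i in range(m)]
    return w, scores

# toy data: three vehicles, two attributes
w, scores = entropy_weights([[1.0, 2.0], [2.0, 4.0], [3.0, 8.0]])
```

Attributes whose normalized values are more spread out carry lower entropy and hence higher weight, which is the core of the objective weighting idea.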
The steps for calculating the weights are as follows: \begin{enumerate} \item For one driving trajectory dataset, construct the evaluation matrix \boldmath$E$. Assuming the dataset includes trajectory data of {\it m} vehicles with ID = 1,2,\ldots,{\it m}, and writing row $i$ for the vehicle with ID = $i$, \boldmath$E$ = $(e^{\prime}_{ij})_{m \times n}$, where {\it n} = 6, is: {\fontsize{7.5pt}{9pt}\selectfont \begin{eqnarray} E &=& (e^{\prime}_{ij})_{m \times n} \nonumber \\ &=& \left( \begin{array}{cccccc} {\it Vel_{mean,1}} & {\it Vel_{var,1}} & {\it Acc_{mean,1}} & {\it Acc_{var,1}} & {\it H_{s,1}} & {\it H_{t,1}} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ {\it Vel_{mean,i}} & {\it Vel_{var,i}} & {\it Acc_{mean,i}} & {\it Acc_{var,i}} & {\it H_{s,i}} & {\it H_{t,i}} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ {\it Vel_{mean,m}} & {\it Vel_{var,m}} & {\it Acc_{mean,m}} & {\it Acc_{var,m}} & {\it H_{s,m}} & {\it H_{t,m}} \\ \end{array} \right) \end{eqnarray} } where the six attributes are the mean velocity ${\it Vel_{mean}}$, the variance of velocity ${\it Vel_{var}}$, the mean acceleration ${\it Acc_{mean}}$, the variance of acceleration ${\it Acc_{var}}$, the average space headway ${\it H_s}$, and the average time headway ${\it H_t}$. \item Normalize \boldmath$E$ into $(0,1)$ to get \boldmath$E_{nor}$ = $(e_{ij})_{m \times n}$. For column {\it j}, normalize \boldmath$E$ according to the following equations: \begin{eqnarray} e_{ij} = \begin{cases} \frac{Max(e^{\prime}_{ij}) - e^{\prime}_{ij}}{Max(e^{\prime}_{ij}) - Min(e^{\prime}_{ij})} & (e^{\prime}_{ij}<0),\\ \frac{e^{\prime}_{ij}-Min(e^{\prime}_{ij})}{Max(e^{\prime}_{ij}) - Min(e^{\prime}_{ij})} & (e^{\prime}_{ij}>0).
\end{cases} \\ Max(e^{\prime}_{ij}) = \max (e^{\prime}_{1j},e^{\prime}_{2j},\ldots, e^{\prime}_{mj}) \nonumber \\ Min(e^{\prime}_{ij}) = \min (e^{\prime}_{1j},e^{\prime}_{2j},\ldots, e^{\prime}_{mj}) \nonumber \end{eqnarray} \item For column {\it j}, calculate the proportion \boldmath$p_{ij}$: \begin{eqnarray} p_{ij} = \frac{e_{ij}}{\sum_{i=1}^m e_{ij}} \end{eqnarray} \item Calculate the entropy score \boldmath$ent_{j}$: \begin{eqnarray} ent_{j} = -\frac{1}{\ln (m)}\sum_{i=1}^m p_{ij}\ln(p_{ij}) \end{eqnarray} \item Calculate the entropy weight vector \boldmath$W$: \begin{eqnarray} W = (w_{j})_{1 \times n}, \quad w_{j} = \frac{1-ent_{j}}{n-\sum_{j=1}^n ent_{j}} \end{eqnarray} \end{enumerate} In most cases, we assume that one driver drives the same vehicle throughout the dataset. Then the driving style of vehicle ID={\it i} (equivalently, driver ID={\it i}) in \boldmath$E_{nor}$ is measured by the score $s_i$: \begin{eqnarray} s_i = [e_{i1} \ldots e_{ij} \ldots e_{i6} ]\cdot W \end{eqnarray} The objective entropy weight method thus provides a way to divide and select the driving trajectory dataset according to the required simulation environment. \subsection{Online adaptation of DM parameters} With the measurement method, we obtain the distribution of driving styles in a dataset. However, to create a faithful virtual testing environment, we also need a method to reproduce a driver's behaviors. Generally, in simulation, a driver's behavior is controlled by a DM. We use the Krauss and Wiedemann car-following models in this paper, as described in the Related Work section. Previous research proposed many calibration methods \cite{ZHU2018425}, but their main limitation is that they did not consider that the inner parameters can vary while driving. The proposed method employs BO and a GRU-based neural network. Figure \ref{figure:nn_architecture} shows the architecture of the GRU-based neural network. Assume that our reproduction target is a subject vehicle following a preceding vehicle under a car-following DM.
The output is a policy $\pi(v_p, v_s)$ produced by the network, which takes as input both the preceding vehicle state $v_p$ and the subject vehicle state $v_s$, e.g., the subject vehicle's speed trajectory over the past few steps. It predicts the next parameters that minimize the difference between the actual and simulated trajectories. \begin{figure} \caption{Gated Recurrent Unit (GRU) based NN structure} \label{figure:nn_architecture} \end{figure} Algorithm \ref{algorithm:calibration} shows the steps of the online adaptation of DM parameters. Assume the target is to reproduce one driver as the subject vehicle using a DM with a set of $M$ inner parameters $P_M = \{p_1, p_2,\cdots, p_M\}$ to be calibrated. The data used for reproduction can come from one driver (say with ID=$i$ and score $s_i$). For driver $i$ in the reproduction dataset, we denote the data from the subject vehicle by $d_{s,i}$ and the data from the preceding vehicle by $d_{p,i}$. \renewcommand{\algorithmiccomment}[1]{\bgroup //~#1\egroup} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \begin{algorithm}[h!] \caption{Adaptive DM calibration model for driver $i$} \label{algorithm:calibration} \begin{algorithmic}[1] \Require $d_{s,i}$, $d_{p,i}$ \Ensure $\pi(v_p, v_s)$ \Repeat \State $t=0, a, 2a,\ldots$ \State Split $d_{s,i}$, $d_{p,i}$ into short time windows every $a$ steps, each of length $L$; get the sliced datasets $d^{k=0, L}_p$, $d^{k=a, L+a}_p$, \ldots; $d^{k=0, L}_s$, $d^{k=a, L+a}_s$, \ldots, where $k$ is the time frame.
\Repeat \State Loss = RMSE($d^{k=j, L+j}_s$, $d^{k=j, L+j}_{s,sim}$) \State For time step $j$, search the best inner parameter set $P_{M, j}$ by BO using $d^{k=j, L+j}_p$, $d^{k=j, L+j}_s$ \Until{all $d^{k=j, L+j}_p$, $d^{k=j, L+j}_s$ have been searched} \Until{$d_{s,n}$, $d_{p,n}$ have been searched} \State Train $\pi(v_p, v_s)$ with inputs $d^{k=j, L+j}_p$, $d^{k=j, L+j}_s$ and labels $P_{M, j}$ \end{algorithmic} \end{algorithm} \section{Experiments} \subsection{Dataset} We performed experiments on the NGSIM Interstate 80 freeway (I-80) dataset, collected between 4:00 p.m. and 4:15 p.m. on April 13, 2005. The study area was approximately 500 meters (1,640 feet) in length. The dataset consists of comma-separated values, and the columns include vehicle ID, position (x, y), velocity, acceleration, following vehicle ID, and so on. The data record moving vehicles every 0.1 second (one frame). To obtain a sufficient amount of trajectory data for car-following experiments, we selected, from the 1,475 vehicles in the dataset, 94 pairs of leading and following vehicles ({\it I-80 Selected Data}) that have over 70 seconds (700 frames) of data. \subsection{Driving style score} We calculated driving scores for the {\it I-80 Selected Data}. Figure \ref{figure:score} shows the distribution of driving styles. We divided the styles into three clusters by percentile: cluster-0 (conservative), the bottom 25\% (24 vehicles, driving score$<$0.380); cluster-1 (normal), the middle 50\% (46 vehicles, 0.380$\leq$driving score$\leq$0.564); and cluster-2 (aggressive), the top 25\% (24 vehicles, driving score$>$0.564). \begin{figure} \caption{Driving style measurement with the distribution of driving styles} \label{figure:score} \end{figure} \subsection{Online adaptation} \subsubsection{A vehicle data for Krauss model} We first apply the proposed method to each pair of leading and following vehicles in the {\it I-80 Selected Data}.
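As an illustration of the entropy-weight scoring and the percentile clustering described above, the following sketch implements the steps with numpy. The feature matrix here is synthetic random data standing in for the six NGSIM-derived attributes, and all columns are assumed to be benefit-type in the normalisation step; it is a minimal sketch, not the exact experimental pipeline.

```python
import numpy as np

def entropy_weight_scores(E):
    """Objective entropy-weight scores; E is (m, n): rows = vehicles,
    columns = the six features (Vel_mean, Vel_var, Acc_mean, Acc_var, H_s, H_t)."""
    m, n = E.shape
    # column-wise min-max normalisation (benefit-type direction assumed)
    E_nor = (E - E.min(axis=0)) / (E.max(axis=0) - E.min(axis=0))
    # column-wise proportions p_ij
    p = E_nor / E_nor.sum(axis=0)
    # entropy per feature, treating 0*log(0) as 0
    plogp = np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0)
    ent = -plogp.sum(axis=0) / np.log(m)
    # entropy weights (non-negative, sum to 1)
    w = (1 - ent) / (n - ent.sum())
    # score s_i = weighted sum of the normalised features of vehicle i
    return E_nor @ w

# synthetic stand-in for the 94 selected vehicles
scores = entropy_weight_scores(np.random.default_rng(0).random((94, 6)))
lo, hi = np.percentile(scores, [25, 75])
clusters = np.digitize(scores, [lo, hi])  # 0 conservative, 1 normal, 2 aggressive
```

The 25th/75th percentile cut reproduces the bottom-25\%/middle-50\%/top-25\% split used for cluster-0/1/2.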
Car-following models simulate the following vehicle according to the leading vehicle, so we apply the proposed method to the following vehicle. The trajectory data is sliced into 0.5-second (5-frame) short time windows, and the parameters for the sliced data are searched by BO. We used the first 80\% of the trajectory data for training the GRU NN with a validation split of 0.1. The remaining 20\% of the data served as the test set for the GRU NN. For example, the pair of leading vehicle ID=1 and following vehicle ID=11 has 83.9 seconds (839 frames) of trajectory data in the dataset; the data is sliced into 167 ($\approx$839$\div$5) time windows, and the NN training uses 134 ($\approx$167$\times$80\%) time windows and testing uses 33 ($\approx$167$\times$20\%). We set the training hyper-parameters to epochs = 500 and batch size = 1, as a result of a grid search. We evaluate the proposed method by comparing it to the car-following model with {\it default} parameters and with {\it fixed} parameters searched by BO. {\it Default} parameters means that the initial parameters (e.g., $t_r$ = 1.5 and $t_i$ = 0.15 for the Krauss model) are used for the car-following models. {\it Fixed} parameters were searched by BO using 80\% of the trajectory data. Note that this BO is only used in the experiments and differs from the BO in the proposed method at Figure \ref{figure:overview}. Figure \ref{figure:no_calib} shows the velocity of the car-following simulation (Vel\_Sim) and the real data (Vel\_Real) over the vehicle ID=11 trajectory frames for the {\it default} parameters, Figure \ref{figure:bo_calib} shows the velocity on the same vehicle for the {\it fixed} parameters, and Figure \ref{figure:dm_calib} shows the results of the proposed method, the online adaptation of parameters. Table \ref{table:krauss-1} shows the root mean square error (RMSE) between Vel\_Sim and Vel\_Real, and the proposed method's difference from the {\it fixed} parameters.
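The window slicing and the evaluation metric used in this experiment can be sketched as follows; `slice_windows` and `diff_from_fixed` are hypothetical helper names, and the comments reproduce the vehicle ID=11 numbers.

```python
import numpy as np

def slice_windows(traj, L=5, a=5):
    """Slice a per-frame trajectory into windows of length L frames,
    starting a new window every a frames (0.5 s at 10 Hz)."""
    return [traj[k:k + L] for k in range(0, len(traj) - L + 1, a)]

def rmse(real, sim):
    """Root mean square error between real and simulated velocity traces."""
    real, sim = np.asarray(real, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((real - sim) ** 2)))

def diff_from_fixed(rmse_fixed, rmse_proposed):
    """Relative improvement c = (a - b) / a over the fixed-parameter baseline."""
    return (rmse_fixed - rmse_proposed) / rmse_fixed

windows = slice_windows(np.zeros(839))       # 839 frames -> 167 windows
n_train = round(len(windows) * 0.8)          # 134 windows for training
improvement = diff_from_fixed(1.033, 0.666)  # ~0.355, i.e. the reported 35.5%
```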
We focus on the difference between the proposed method and the {\it fixed} parameters to evaluate the accuracy of the method. In the vehicle ID=11 case, the difference was 35.5\%. \begin{figure} \caption{Default parameters} \caption{Fixed calibration} \caption{Proposed method} \label{figure:no_calib} \label{figure:bo_calib} \label{figure:dm_calib} \end{figure} \begin{table}[t!] \centering \caption{Experiment on the pair of leading vehicle ID=1 and following vehicle ID=11 for the Krauss model} \label{table:krauss-1} \begin{tabular}{p{10mm}|p{11mm}|p{15mm}|p{14mm}||p{13mm}} \hline & Default param. & Fixed param. (a) & Proposed (b) & Diff. (c) \text{*} \\ \hline RMSE & 3.393 & 1.033 & {\bf 0.666} & 35.5\% \\ \hline \multicolumn{5}{l}{\text{*} c = (a-b)/a} \end{tabular} \end{table} \subsubsection{Each vehicle for Krauss model and Wiedemann model} We next evaluate the proposed method by applying it to each vehicle's data (94 pairs of leading and following vehicles) for both the Krauss model and the Wiedemann model. Each vehicle's data is used for training and testing in the same way as in the single-vehicle Krauss experiment above. Figure \ref{figure:diff_one} shows each vehicle's difference from the {\it fixed} parameters for the Krauss and Wiedemann models. Note that we eliminated the top and bottom 10\% of the difference values as outliers. \begin{figure} \caption{Differences from {\it fixed} parameters for each vehicle} \label{figure:diff_one} \end{figure} \subsubsection{Online Adaptation for the clusters} We then evaluate the differences for each of the three clusters obtained from the entropy-weight scores. We employ 5-fold cross-validation within each cluster to train and test the vehicles' data. For example, in cluster-0, which has 24 vehicles, 19 vehicles' data are used to train the GRU NN model, 5 vehicles' data are used to test it, and this training and testing is repeated for each fold. Figure \ref{figure:diff_cluster_kr} shows the differences from the {\it fixed} parameters for the clusters with the Krauss car-following model.
Figure \ref{figure:diff_cluster_w99} shows the differences from the {\it fixed} parameters for the clusters with the Wiedemann car-following model. \begin{figure} \caption{Differences from {\it fixed} parameters for clusters (Krauss model)} \label{figure:diff_cluster_kr} \end{figure} \begin{figure} \caption{Differences from {\it fixed} parameters for clusters (Wiedemann model)} \label{figure:diff_cluster_w99} \end{figure} \subsubsection{Application for the proposed method} We demonstrated the proposed method on the microscopic traffic simulator SUMO. Figure \ref{figure:sumo_replay} shows a demonstration of the real data, the proposed method and a {\it default}-parameter vehicle in a SUMO simulation. SUMO can import maps from OpenStreetMap \cite{OpenStreetMap}, and we imported the map of the I-80 target area. All the leading vehicles are replayed according to the real data in the dataset, and the following vehicles are, respectively, a replay of the real data, the proposed method, and a {\it default}-parameter vehicle. We confirmed that the proposed method works on a real simulator. \begin{figure} \caption{Demonstration of proposed method on SUMO simulation} \label{figure:sumo_replay} \end{figure} \section{Discussion} Regarding the challenge that a driving model (DM) should simulate real human driving behaviors, the proposed method with the Wiedemann model achieved up to 26\% higher accuracy than the {\it fixed} parameters for all clusters, as shown in Figure \ref{figure:diff_cluster_w99}. The average difference for cluster-2 (aggressive driving) is greater than for cluster-0 (conservative driving). This indicates that online adaptation of conservative driving is more difficult than of aggressive driving, which might be because conservative driving involves low velocities. The differences between the proposed method and the {\it fixed} parameters depended on each vehicle, as shown in Figure \ref{figure:diff_one}.
The average differences for both car-following models were less than zero, meaning they were worse than the {\it fixed} parameters on a per-vehicle basis. However, the average differences for the clusters show far higher accuracy than the {\it fixed} parameters. We presume this is because the cluster experiments have much more training data than the single-vehicle ones. For example, the training data for vehicle ID=11 comprise only 134 time windows, whereas cluster-0 has about 2,500 time windows ($\approx$19 vehicles $\times$ 134 time windows). The proposed method with the Wiedemann car-following model achieved better accuracy than with the Krauss model, as shown in Figures \ref{figure:diff_cluster_kr} and \ref{figure:diff_cluster_w99}. We attribute this to the different numbers of parameters: the Krauss model has two parameters, while the Wiedemann model has ten. \section{Conclusion and Future work} The proposed method effectively divided the drivers according to their driving styles, and achieved higher accuracy than the other calibration methods. In this paper, we applied the proposed method to car-following models; however, we believe the method can also be applied to humanoid robots that imitate human behavior. Such robots employ models to simulate human actions, and the proposed method could be applied to them. \end{document}
\begin{document} \title{A new family of Markov branching trees: the alpha-gamma model} \author{ Bo Chen\thanks{University of Oxford; email chen@stats.ox.ac.uk} \and Daniel Ford\thanks{Google Inc.; email dford@math.stanford.edu} \and Matthias Winkel\thanks{ University of Oxford; email winkel@stats.ox.ac.uk}} \maketitle \begin{abstract} We introduce a simple tree growth process that gives rise to a new two-parameter family of discrete fragmentation trees that extends Ford's alpha model to multifurcating trees and includes the trees obtained by uniform sampling from Duquesne and Le Gall's stable continuum random tree. We call these new trees the alpha-gamma trees. In this paper, we obtain their splitting rules, dislocation measures both in ranked order and in sized-biased order, and we study their limiting behaviour. \emph{AMS 2000 subject classifications: 60J80.\newline Keywords: Alpha-gamma tree, splitting rule, sampling consistency, self-similar fragmentation, dislocation measure, continuum random tree, $\mathbb{R}$-tree, Markov branching model} \end{abstract} \section{Introduction} \em Markov branching trees \em were introduced by Aldous \cite{Ald-93} as a class of random binary phylogenetic models and extended to the multifurcating case in \cite{HMPW}. Consider the space $\mathbb{T}_n$ of combinatorial trees without degree-2 vertices, one degree-1 vertex called the {\sc root} and exactly $n$ further degree-1 vertices labelled by $[n]=\{1,\ldots,n\}$ and called the \em leaves\em; we call the other vertices \em branch points\em. Distributions on $\mathbb{T}_n$ of random trees $T_n^*$ are determined by distributions of the delabelled tree $T_n^\circ$ on the space $\mathbb{T}_n^\circ$ of \em unlabelled trees \em and conditional label distributions, e.g. \em exchangeable \em labels. 
A sequence $(T_n^\circ,n\ge 1)$ of unlabelled trees has the \em Markov branching property \em if for all $n\ge 2$ conditionally given that the branching adjacent to the {\sc root} is into tree components whose numbers of leaves are $n_1,\ldots,n_k$, these tree components are independent copies of $T_{n_i}^\circ$, $1\le i\le k$. The distributions of the sizes in the first branching of $T_n^\circ$, $n\ge 2$, are denoted by \begin{eqnarray*}&\ds q(n_1,\ldots,n_k),\qquad n_1\ge\ldots\ge n_k\ge 1,\quad k\ge 2:\quad n_1+\ldots+n_k=n, &\end{eqnarray*} and referred to as the \em splitting rule \em of $(T_n^\circ,n\ge 1)$. Aldous \cite{Ald-93} studied in particular a one-parameter family ($\beta\ge-2$) that interpolates between several models known in various biology and computer science contexts (e.g. $\beta=-2$ comb, $\beta=-3/2$ uniform, $\beta=0$ Yule) and that he called the \em beta-splitting model\em; for $\beta>-2$ he sets: \begin{eqnarray*} q^{{\rm Aldous}}_\beta(n-m,m)=\frac{1}{Z_n}{n\choose m} B(m+1+\beta,n-m+1+\beta),&&\quad \mbox{for $1\le m<n/2$,}\\ q^{\rm Aldous}_\beta(n/2,n/2)=\frac{1}{2Z_n}{n\choose n/2}B(n/2+1+\beta,n/2+1+\beta),&&\quad\mbox{if $n$ even,} \end{eqnarray*} where $B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)$ is the Beta function and $Z_n$, $n\ge 2$, are normalisation constants; this extends to $\beta=-2$ by continuity, i.e. $q^{\rm Aldous}_{-2}(n-1,1)=1$, $n\ge 2$. For exchangeably labelled Markov branching models $(T_n,n\ge 1)$ it is convenient to set \begin{equation}\label{spliteppf} p(n_1,\ldots,n_k):=\frac{m_1!\ldots m_n!}{{n\choose n_1,\ldots,n_k}}q((n_1,\ldots,n_k)^\downarrow),\quad n_j\ge1,j\in[k];k\ge 2:\ n=n_1+\ldots+n_k, \end{equation} where $(n_1,\ldots,n_k)^\downarrow$ is the decreasing rearrangement and $m_r$ the number of $r$s of the sequence $(n_1,\ldots,n_k)$.
The function $p$ is called \em exchangeable partition probability function (EPPF) \em and gives the probability that the branching adjacent to the {\sc root} splits into tree components with label sets $\{A_1,\ldots,A_k\}$ partitioning $[n]$, with \em block sizes \em $n_j=\#A_j$. Note that $p$ is invariant under permutations of its arguments. It was shown in \cite{MPW} that Aldous's beta-splitting models for $\beta>-2$ are the only \em binary \em Markov branching models for which the EPPF is of Gibbs type \begin{eqnarray*}&\ds p^{\rm Aldous}_{-1-\alpha}(n_1,n_2)=\frac{w_{n_1}w_{n_2}}{Z_{n_1+n_2}},\quad n_1\ge 1,n_2\ge 1,\qquad\mbox{in particular }w_n=\frac{\Gamma(n-\alpha)}{\Gamma(1-\alpha)}, &\end{eqnarray*} and that the \em multifurcating \em Gibbs models are an \em extended \em Ewens-Pitman two-parameter family of random partitions, $0\le\alpha\le 1$, $\theta\ge -2\alpha$, or $-\infty\le\alpha<0$, $\theta=-m\alpha$ for some integer $m\ge 2$, \begin{equation} p_{{\alpha,\theta}}^{\rm PD^*}(n_1,\ldots,n_k)=\frac{a_k}{Z_n}\prod_{j=1}^kw_{n_j},\quad \mbox{where }w_n=\frac{\Gamma(n-\alpha)}{\Gamma(1-\alpha)}\mbox{ and }a_k=\alpha^{k-2}\frac{\Gamma(k+\theta/\alpha)}{\Gamma(2+\theta/\alpha)}, \label{EPmod} \end{equation} boundary cases by continuity. 
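For concreteness, the Gibbs weights in (\ref{EPmod}) can be evaluated numerically up to the normalisation constant $Z_n$. In the sketch below (an illustration, not from the paper) the Gamma ratios are expanded as finite products, $w_n=\prod_{j=1}^{n-1}(j-\alpha)$ and $a_k=\prod_{j=2}^{k-1}(j\alpha+\theta)$, which also behaves well at the boundary cases:

```python
def w_factor(n, alpha):
    """w_n = Gamma(n - alpha) / Gamma(1 - alpha) = prod_{j=1}^{n-1} (j - alpha)."""
    out = 1.0
    for j in range(1, n):
        out *= j - alpha
    return out

def gibbs_eppf_unnormalised(parts, alpha, theta):
    """Z_n * p^{PD*}_{alpha,theta}(n_1,...,n_k) = a_k * prod_j w_{n_j},
    with a_k = alpha^{k-2} Gamma(k + theta/alpha) / Gamma(2 + theta/alpha)
             = prod_{j=2}^{k-1} (j*alpha + theta)."""
    k = len(parts)
    out = 1.0
    for j in range(2, k):       # empty product for k = 2, so a_2 = 1
        out *= j * alpha + theta
    for nj in parts:
        out *= w_factor(nj, alpha)
    return out

# e.g. alpha = 1/2 and theta = -alpha - gamma with gamma = 1/4:
p111 = gibbs_eppf_unnormalised([1, 1, 1], 0.5, -0.75)  # proportional to p(1,1,1)
```

Note that at $\theta=-2\alpha$ the weight of any split into three or more blocks vanishes, recovering the binary case.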
Ford \cite{For-05} introduced a different \em binary \em model, the \em alpha model\em, using simple sequential growth rules starting from the unique elements $T_1\in\mathbb{T}_1$ and $T_2\in\mathbb{T}_2$: \begin{enumerate}\item[(i)$^{\rm F}$] given $T_n$ for $n\ge 2$, assign a weight $1-\alpha$ to each of the $n$ edges adjacent to a leaf, and a weight $\alpha$ to each of the $n-1$ other edges; \item[(ii)$^{\rm F}$] select at random with probabilities proportional to the weights assigned by step (i)$^{\rm F}$, an edge of $T_n$, say $a_n\rightarrow c_n$ directed away from the {\sc root}; \item[(iii)$^{\rm F}$] to create $T_{n+1}$ from $T_n$, replace $a_n\rightarrow c_n$ by three edges $a_n\rightarrow b_n$, $b_n\rightarrow c_n$ and $b_n\rightarrow n+1$ so that two new edges connect the two vertices $a_n$ and $c_n$ to a new branch point $b_n$ and a further edge connects $b_n$ to a new leaf labelled $n+1$. \end{enumerate} It was shown in \cite{For-05} that these trees are Markov branching trees but that the labelling is not exchangeable. The splitting rule was calculated and shown to coincide with Aldous's beta-splitting rules if and only if $\alpha=0$, $\alpha=1/2$ or $\alpha=1$, interpolating differently between Aldous's corresponding models for $\beta=0$, $\beta=-3/2$ and $\beta=-2$. This study was taken further in \cite{HMPW,PW2}. In this paper, we introduce a new model by extending the simple sequential growth rules to allow \em multifurcation\em. Specifically, we also assign weights to \em vertices \em as follows, cf. 
Figure \ref{fig1}: \begin{enumerate}\item[(i)] given $T_n$ for $n\ge 2$, assign a weight $1-\alpha$ to each of the $n$ edges adjacent to a leaf, a weight $\gamma$ to each of the $n-1$ other edges, and a weight $(k-1)\alpha-\gamma$ to each vertex of degree $k+1\ge 3$; \item[(ii)] select at random with probabilities proportional to the weights assigned by step (i), \begin{itemize}\item an edge of $T_n$, say $a_n\rightarrow c_n$ directed away from the {\sc root}, \item or, as the case may be, a vertex of $T_n$, say $v_n$; \end{itemize} \item[(iii)] to create $T_{n+1}$ from $T_n$, do the following: \begin{itemize}\item if an edge $a_n\rightarrow c_n$ was selected, replace it by three edges $a_n\rightarrow b_n$, $b_n\rightarrow c_n$ and $b_n\rightarrow n+1$ so that two new edges connect the two vertices $a_n$ and $c_n$ to a new branch point $b_n$ and a further edge connects $b_n$ to a new leaf labelled $n+1$; \item if a vertex $v_n$ was selected, add an edge $v_n\rightarrow n+1$ to a new leaf labelled $n+1$.\pagebreak[2] \end{itemize} \end{enumerate} \begin{figure} \caption{Sequential growth rule: displayed is one branch point of $T_n$ with degree $k+1$, hence vertex weight $(k-1)\alpha-\gamma$, with $k-r$ leaves $L_{r+1},\ldots,L_k\in[n]$ and $r$ bigger subtrees $S_1,\ldots,S_r$ attached to it; all edges also carry weights, weight $1-\alpha$ and $\gamma$ are displayed here for one leaf edge and one inner edge only; the three associated possibilities for $T_{n+1}$ are displayed.} \label{fig1} \end{figure} We call the resulting model the \em alpha-gamma model\em. These growth rules satisfy the rules of probability for all $0\le\alpha\le 1$ and $0\le\gamma\le\alpha$. They contain the growth rules of the alpha model for $\gamma=\alpha$. 
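The sequential growth rules (i)-(iii) are easy to simulate directly. The sketch below (an ad-hoc representation, not from the paper) grows a tree by sampling an edge or a vertex with the stated weights; it requires $0\le\gamma\le\alpha\le 1$ so that all weights are non-negative:

```python
import random

def grow_alpha_gamma(n, alpha, gamma, seed=None):
    """Grow T_n by the alpha-gamma rules: leaf edges weight 1-alpha, inner
    edges weight gamma, a vertex with k >= 2 children weight (k-1)*alpha - gamma.
    Leaves are ints 1..n; branch points are strings; 'root' has one child."""
    assert n >= 2 and 0 <= gamma <= alpha <= 1
    rng = random.Random(seed)
    children = {'root': ['b0'], 'b0': [1, 2]}   # start from T_2
    parent = {'b0': 'root', 1: 'b0', 2: 'b0'}
    next_bp = 1
    for m in range(3, n + 1):
        items, weights = [], []
        for v, p in parent.items():             # one edge p -> v per non-root vertex
            items.append(('edge', p, v))
            weights.append(1 - alpha if isinstance(v, int) else gamma)
        for v, ch in children.items():          # branch points (degree k+1 >= 3)
            if v != 'root' and len(ch) >= 2:
                items.append(('vertex', v))
                weights.append((len(ch) - 1) * alpha - gamma)
        kind = rng.choices(items, weights=weights)[0]
        if kind[0] == 'edge':                   # split edge a -> c at new b, add leaf m
            _, a, c = kind
            b = 'b%d' % next_bp; next_bp += 1
            children[a][children[a].index(c)] = b
            children[b] = [c, m]
            parent[b], parent[c], parent[m] = a, b, b
        else:                                   # attach leaf m to the selected vertex
            _, v = kind
            children[v].append(m)
            parent[m] = v
    return children, parent

children, parent = grow_alpha_gamma(20, alpha=0.5, gamma=0.3, seed=1)
```

Setting `gamma = alpha` makes every vertex weight $\alpha-\gamma=0$ for binary branch points and reproduces Ford's binary growth rules.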
They also contain growth rules for a model \cite{Mar-08,Mie-03} based on the stable tree of Duquesne and Le Gall \cite{DuL-02}, for the cases $\gamma=1-\alpha$, $1/2\le\alpha<1$, where all edges are given the same weight; we show here that these cases $\gamma=1-\alpha$, $1/2\le\alpha\le 1$, as well as $\alpha=\gamma=0$ form the intersection with the extended Ewens-Pitman-type two-parameter family of models (\ref{EPmod}). \begin{prop}\label{prop1} Let $(T_n,n\ge 1)$ be alpha-gamma trees with distributions as implied by the sequential growth rules {\rm (i)-(iii)} for some $0\le\alpha\le 1$ and $0\le\gamma\le\alpha$. Then \begin{enumerate}\item[\rm(a)] the delabelled trees $T_n^\circ$, $n\ge 1$, have the Markov branching property. The splitting rules are \begin{equation}\label{split} q_{\alpha,\gamma}^{\rm seq}(n_1,\ldots,n_k)\quad\propto\quad\left(\gamma+(1-\alpha-\gamma)\frac{1}{n(n-1)}\sum_{i\neq j}n_in_j\right)q_{\alpha,-\alpha-\gamma}^{\rm PD^*}(n_1,\ldots,n_k), \end{equation} in the case $0\le\alpha<1$, where $q_{\alpha,-\alpha-\gamma}^{\rm PD^*}$ is the splitting rule associated via (\ref{spliteppf}) with $p_{\alpha,-\alpha-\gamma}^{\rm PD^*}$, the Ewens-Pitman-type EPPF given in (\ref{EPmod}), and LHS $\propto$ RHS means equality up to a multiplicative constant depending on $n$ and $(\alpha,\gamma)$ that makes the LHS a probability function; \item[\rm(b)] the labelling of $T_n$ is exchangeable for all $n\ge 1$ if and only if $\gamma=1-\alpha$, $1/2\le\alpha\le 1$. \end{enumerate} \end{prop} For any function $(n_1,\ldots,n_k)\mapsto q(n_1,\ldots,n_k)$ that is a probability function for all fixed $n=n_1+\ldots+n_k$, $n\ge 2$, we can construct a Markov branching model $(T_n^\circ,n\ge 1)$. 
A condition called \em sampling consistency \em \cite{Ald-93} is to require that the tree $T_{n,-1}^\circ$ constructed from $T_n^\circ$ by removal of a uniformly chosen leaf (and the adjacent branch point if its degree is reduced to 2) has the same distribution as $T_{n-1}^\circ$, for all $n\ge 2$. This is appealing for applications with incomplete observations. It was shown in \cite{HMPW} that all sampling consistent splitting rules admit an integral representation $(c,\nu)$ for an erosion coefficient $c\ge 0$ and a dislocation measure $\nu$ on $\mathcal{S}^\downarrow=\{s=(s_i)_{i\ge 1}:s_1\ge s_2\ge\ldots\ge 0,s_1+s_2+\ldots\le 1\}$ with $\nu(\{(1,0,0,\ldots)\})=0$ and $\int_{\mathcal{S}^\downarrow}(1-s_1)\nu(ds)<\infty$ as in Bertoin's continuous-time fragmentation theory \cite{Ber-hom,Ber-ss,Ber-book}. In the most relevant case when $c=0$ and $\nu(\{s\in\mathcal{S}^\downarrow:s_1+s_2+\ldots<1\})=0$, this representation is \begin{equation}\label{Kingman} p(n_1,\ldots,n_k)=\frac{1}{\widetilde{Z}_n}\int_{\mathcal{S}^\downarrow}\sum_{\substack{i_1,\ldots,i_k\ge 1\\ \mbox{\scriptsize distinct}}}\prod_{j=1}^ks_{i_j}^{n_j}\nu(ds),\quad n_j\ge1,j\in[k];k\ge 2:\ n=n_1+\ldots+n_k, \end{equation} where $\widetilde{Z}_n=\int_{\mathcal{S}^\downarrow}(1-\sum_{i\ge 1}s_i^n)\nu(ds)$, $n\ge 2$, are the normalization constants. The measure $\nu$ is unique up to a multiplicative constant.
In particular, it can be shown \cite{Mie-03,HPW} that for the Ewens-Pitman EPPFs $p_{\alpha,\theta}^{\rm PD^*}$ we obtain $\nu={\rm PD}^*_{\alpha,\theta}(ds)$ of Poisson-Dirichlet type (hence our superscript ${\rm PD}^*$ for the Ewens-Pitman type EPPF), where for $0<\alpha<1$ and $\theta>-2\alpha$ we can express \begin{eqnarray*}&\ds \int_{\mathcal{S}^\downarrow}f(s){\rm PD}^*_{\alpha,\theta}(ds)=\mathbb{E}\left(\sigma_1^{-\theta}f\left(\Delta\sigma_{[0,1]}/\sigma_1\right)\right), &\end{eqnarray*} for an $\alpha$-stable subordinator $\sigma$ with Laplace exponent $-\log(\mathbb{E}(e^{-\lambda\sigma_1}))=\lambda^\alpha$ and with ranked sequence of jumps $\Delta\sigma_{[0,1]}=(\Delta\sigma_t,t\in[0,1])^\downarrow$. For $\alpha<1$ and $\theta=-2\alpha$, we have \begin{eqnarray*}&\ds \int_{\mathcal{S}^\downarrow}f(s){\rm PD}^*_{\alpha,-2\alpha}(ds)=\int_{1/2}^1 f(x,1-x,0,0,\ldots)x^{-\alpha-1}(1-x)^{-\alpha-1}dx. &\end{eqnarray*} Note that $\nu={\rm PD}^*_{\alpha,\theta}$ is infinite but $\sigma$-finite with $\int_{\mathcal{S}^\downarrow}(1-s_1)\nu(ds)<\infty$ for $-2\alpha\le\theta\le-\alpha$. This is the relevant range for this paper. For $\theta>-\alpha$, the measure ${\rm PD}^*_{\alpha,\theta}$ just defined is a multiple of the usual Poisson-Dirichlet probability measure ${\rm PD}_{\alpha,\theta}$ on $\mathcal{S}^\downarrow$, so for the integral representation of $p_{{\alpha,\theta}}^{\rm PD^*}$ we could also take $\nu={\rm PD}_{\alpha,\theta}$ in this case, and this is also an appropriate choice for the two cases $\alpha=0$ and $m\ge 3$; the case $\alpha=1$ is degenerate $q_{\alpha,\theta}^{\rm PD^*}(1,1,\ldots,1)=1$ (for all $\theta$) and can be associated with $\nu={\rm PD}^*_{1,\theta}=\delta_{(0,0,\ldots)}$, see \cite{MPW}. \begin{theo}\label{thm2} The alpha-gamma-splitting rules $q_{\alpha,\gamma}^{\rm seq}$ are sampling consistent. 
For $0\le\alpha<1$ and $0\le\gamma\le\alpha$ the measure $\nu$ in the integral representation can be chosen as \begin{equation}\label{thm2nu} \nu_{\alpha,\gamma}(ds)=\left(\gamma+(1-\alpha-\gamma)\sum_{i\neq j}s_is_j\right){\rm PD}^*_{\alpha,-\alpha-\gamma}(ds). \end{equation} \end{theo} The case $\alpha=1$ is discussed in Section \ref{sectalpha1}. We refer to Griffiths \cite{Gri-83} who used discounting of Poisson-Dirichlet measures by quantities involving $\sum_{i\neq j}s_is_j$ to model genic selection. In \cite{HMPW}, Haas and Miermont's self-similar continuum random trees (CRTs) \cite{HM} are shown to be scaling limits for a wide class of Markov branching models. See Sections \ref{seccrts} and \ref{sechmpw} for details. This theory applies here to yield: \begin{coro}\label{dconv} Let $(T_n^\circ,n\ge 1)$ be delabelled alpha-gamma trees, represented as discrete $\mathbb{R}$-trees with unit edge lengths, for some $0<\alpha<1$ and $0<\gamma\le\alpha$. Then \begin{eqnarray*}&\ds \frac{T_n^\circ}{n^{\gamma}}\rightarrow\mathcal{T}^{\alpha,\gamma}\qquad\mbox{in distribution for the Gromov-Hausdorff topology,} &\end{eqnarray*} where the scaling $n^\gamma$ is applied to all edge lengths, and $\mathcal{T}^{\alpha,\gamma}$ is a $\gamma$-self-similar CRT whose dislocation measure is a multiple of $\nu_{\alpha,\gamma}$. \end{coro} We observe that every dislocation measure $\nu$ on $\mathcal{S}^\downarrow$ gives rise to a measure $\nu^{\rm sb}$ on the space of summable sequences under which fragment sizes are in a size-biased random order, just as the ${\rm GEM}_{\alpha,\theta}$ distribution can be defined as the distribution of a ${\rm PD}_{\alpha,\theta}$ sequence re-arranged in size-biased random order \cite{csp}. We similarly define ${\rm GEM}^*_{\alpha,\theta}$ from ${\rm PD}^*_{\alpha,\theta}$. One of the advantages of size-biased versions is that, as for ${\rm GEM}_{\alpha,\theta}$, we can calculate marginal distributions explicitly. 
\begin{prop}\label{prop4} For $0<\alpha<1$ and $0\le\gamma<\alpha$, distributions $\nu_k^{\rm sb}$ of the first $k\ge 1$ marginals of the size-biased form $\nu_{\alpha,\gamma}^{\rm sb}$ of $\nu_{\alpha,\gamma}$ are given, for $x=(x_1,\ldots,x_k)$, by \begin{eqnarray*}&\ds\hspace{-0.3cm} \nu^{\rm sb}_k(dx)=\left(\gamma+(1-\alpha-\gamma)\left(1-\sum_{i=1}^kx_i^2-\frac{1\!-\!\alpha}{1\!+\!(k\!-\!1)\alpha\!-\!\gamma}\left(1-\sum_{i=1}^kx_i\right)^2\right)\right){\rm GEM}^*_{\alpha,-\alpha-\gamma}(dx). &\end{eqnarray*} \end{prop} \noindent The other boundary values of parameters are trivial here -- there are at most two non-zero parts.\pagebreak[2] We can investigate the convergence of Corollary \ref{dconv} when labels are retained. Since labels are non-exchangeable, in general, it is not clear how to nicely represent a continuum tree with infinitely many labels other than by a consistent sequence $\mathcal{R}_k$ of trees with $k$ leaves labelled $[k]$, $k\ge 1$. See however \cite{PW2} for developments in the binary case $\gamma=\alpha$ on how to embed $\mathcal{R}_k$, $k\ge 1$, in a CRT $\mathcal{T}^{\alpha,\alpha}$. The following theorem extends Proposition 18 of \cite{HMPW} to the multifurcating case. \begin{theo}\label{LE} Let $(T_n,n\ge 1)$ be a sequence of trees resulting from the alpha-gamma-tree growth rules for some $0<\alpha<1$ and $0<\gamma\le\alpha$. Denote by $R(T_n,[k])$ the subtree of $T_n$ spanned by the {\sc root} and leaves $[k]$, reduced by removing degree-2 vertices, represented as discrete $\mathbb{R}$-tree with graph distances in $T_n$ as edge lengths. Then \begin{eqnarray*}&\ds \frac{R(T_n,[k])}{n^{\gamma}}\rightarrow\mathcal{R}_k\qquad\mbox{a.s. 
in the sense that all edge lengths converge,} &\end{eqnarray*} for some discrete tree $\mathcal{R}_k$ with shape $T_k$ and edge lengths specified in terms of three random variables, conditionally independent given that $T_k$ has $k+\ell$ edges, as $L_kW_k^\gamma D_k$ with \begin{itemize}\item $W_k\sim{\rm beta}(k(1-\alpha)+\ell\gamma,(k-1)\alpha-\ell\gamma)$, where ${\rm beta}(a,b)$ is the beta distribution with density $B(a,b)^{-1}x^{a-1}(1-x)^{b-1}1_{(0,1)}(x)$; \item $L_k$ with density $\displaystyle\frac{\Gamma(1+k(1-\alpha)+\ell\gamma)}{\Gamma(1+\ell+k(1-\alpha)/\gamma)}s^{\ell+k(1-\alpha)/\gamma}g_\gamma(s)$, where $g_\gamma$ is the Mittag-Leffler density, the density of $\sigma_1^{-\gamma}$ for a subordinator $\sigma$ with Laplace exponent $\lambda^\gamma$; \item $D_k\sim{\rm Dirichlet}((1-\alpha)/\gamma,\ldots,(1-\alpha)/\gamma,1,\ldots,1)$, where ${\rm Dirichlet}(a_1,\ldots,a_m)$ is the Dirichlet distribution on $\Delta_m=\{(x_1,\ldots,x_m)\in[0,1]^m:x_1+\ldots+x_m=1\}$ with density of the first $m-1$ marginals proportional to $x_1^{a_1-1}\ldots x_{m-1}^{a_{m-1}-1}(1-x_1-\ldots-x_{m-1})^{a_m-1}$; here $D_k$ contains edge length proportions, first with parameter $(1-\alpha)/\gamma$ for edges adjacent to leaves and then with parameter $1$ for the other edges, each enumerated e.g. by depth first search. \end{itemize} \end{theo} In fact, $1-W_k$ captures the total limiting leaf proportions of subtrees that are attached on the vertices of $T_k$, and we can study further how this is distributed between the branch points, see Section \ref{secbw}. We conclude this introduction by giving an alternative description of the alpha-gamma model obtained by adding colouring rules to the alpha model growth rules (i)$^{\rm F}$-(iii)$^{\rm F}$, so that in $T_n^{\rm col}$ each edge except those adjacent to leaves has either a blue or a red colour mark. 
\begin{enumerate}\item[(iv)$^{\rm col}$] To turn $T_{n+1}$ into a colour-marked tree $T_{n+1}^{\rm col}$, keep the colours of $T_n^{\rm col}$ and do the following: \begin{itemize} \item if an edge $a_n\rightarrow c_n$ adjacent to a leaf was selected, mark $a_n\rightarrow b_n$ blue; \item if a red edge $a_n\rightarrow c_n$ was selected, mark both $a_n\rightarrow b_n$ and $b_n\rightarrow c_n$ red; \item if a blue edge $a_n\rightarrow c_n$ was selected, mark $a_n\rightarrow b_n$ blue; mark $b_n\rightarrow c_n$ red with probability $c$ and blue with probability $1-c$; \end{itemize} \end{enumerate} When $(T_n^{\rm col},n\ge 1)$ has been grown according to (i)$^{\rm F}$-(iii)$^{\rm F}$ and (iv)$^{\rm col}$, crush all red edges, i.e. \begin{enumerate}\item[(cr)] identify all vertices connected via red edges, remove all red edges and remove the remaining colour marks; denote the resulting sequence of trees by $(\widetilde{T}_{n},n\ge 1)$; \end{enumerate} \begin{prop}\label{prop6} Let $(\widetilde{T}_n,n\ge 1)$ be a sequence of trees according to growth rules {\rm (i)}$^{\rm F}$-{\rm(iii)}$^{\rm F}$,{\rm(iv)}$^{\rm col}$ and crushing rule {\rm (cr)}. Then $(\widetilde{T}_n,n\ge 1)$ is a sequence of alpha-gamma trees with $\gamma=\alpha(1-c)$. \end{prop} The structure of this paper is as follows. In Section 2 we study the discrete trees grown according to the growth rules (i)-(iii) and establish Proposition \ref{prop6} and Proposition \ref{prop1} as well as the sampling consistency claimed in Theorem \ref{thm2}. Section 3 is devoted to the limiting CRTs, we obtain the dislocation measure stated in Theorem \ref{thm2} and deduce Corollary \ref{dconv} and Proposition \ref{prop4}. In Section 4 we study the convergence of labelled trees and prove Theorem \ref{LE}. 
\section{Sampling consistent splitting rules for the alpha-gamma trees} \subsection{Notation and terminology of partitions and discrete fragmentation trees}\label{trees} For $B\subseteq\mathbb{N}$, let $\mathcal{P}_B$ be the \em set of partitions of $B$ \em into disjoint non-empty subsets called \em blocks\em. Consider a probability space $(\Omega,\mathcal{F},\mathbb{P})$, which supports a $\mathcal{P}_B$-valued random partition $\Pi_B$. If the probability function of $\Pi_B$ only depends on its block sizes, we call it \em exchangeable\em. Then $$ \mathbb{P}(\Pi_B=\{A_1,\ldots ,A_k\})=p(\#A_1,\ldots,\#A_k)\qquad\mbox{for each partition $\pi=\{A_1,\ldots ,A_k\}\in\mathcal{P}_B$,}$$ where $\#A_j$ denotes the block size, i.e. the number of elements of $A_j$. This function $p$ is called the \em exchangeable partition probability function \em (EPPF) of $\Pi_B$. Alternatively, a random partition $\Pi_B$ is exchangeable if its distribution is invariant under the natural action on partitions of $B$ by the symmetric group of permutations of $B$. For $B\subseteq\mathbb{N}$, we say that a partition $\pi\in \mathcal{P}_B$ is \em finer than \em $\pi'\in \mathcal{P}_B$, and write $\pi\preceq\pi'$, if every block of $\pi$ is included in some block of $\pi'$. This defines a partial order $\preceq$ on $\mathcal{P}_B$. A process or a sequence with values in $\mathcal{P}_B$ is called refining if it is decreasing for this partial order. Refining partition-valued processes are naturally related to trees.
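An EPPF is by definition a symmetric function of the block sizes whose values sum to 1 over all set partitions of $[n]$. The following Python sketch checks this normalisation numerically for the Ewens-Pitman two-parameter EPPF recalled in Lemma \ref{crp1} below; the parameter values $\alpha=0.5$, $\theta=0.3$ are arbitrary choices for illustration.

```python
import math

def set_partitions(elements):
    """Enumerate all set partitions of a non-empty list."""
    if len(elements) == 1:
        yield [[elements[0]]]
        return
    first, rest = elements[0], elements[1:]
    for p in set_partitions(rest):
        # insert `first` into each existing block, or open a new block
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p

def eppf(sizes, a, th):
    """Ewens-Pitman EPPF p(n_1, ..., n_k) for 0 < a < 1, th > -a."""
    k, n = len(sizes), sum(sizes)
    const = a ** (k - 1) * math.gamma(k + th / a) * math.gamma(1 + th) \
        / (math.gamma(1 + th / a) * math.gamma(n + th))
    return const * math.prod(math.gamma(m - a) / math.gamma(1 - a) for m in sizes)

# the EPPF values over all 52 partitions of [5] must sum to one
total = sum(eppf([len(b) for b in p], 0.5, 0.3)
            for p in set_partitions(list(range(1, 6))))
```

By construction `eppf` depends on the partition only through its block sizes, which is exactly the exchangeability property described above.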
Suppose that $B$ is a finite subset of $\mathbb{N}$ and $\mathbf{t}$ is a collection of subsets of $B$ with an additional member called the {\sc root} such that \begin{itemize} \item we have $B\in\mathbf{t}$; we call $B$ the \em common ancestor \em of $\mathbf{t}$; \item we have $\{i\}\in \mathbf{t}$ for all $i\in B$; we call $\{i\}$ a \em leaf \em of $\mathbf{t}$; \item for all $A\in\mathbf{t}$ and $C\in \mathbf{t}$, we have either $A\cap C=\varnothing$, or $A\subseteq C$ or $C\subseteq A$. \end{itemize} If $A\subset C$, then $A$ is called a \em descendant \em of $C$, or $C$ an \em ancestor \em of $A$. If for all $D\in \mathbf{t}$ with $A\subseteq D\subseteq C$ either $A=D$ or $D=C$, we call $A$ a \em child \em of $C$, or $C$ the \em parent \em of $A$ and denote $C\rightarrow A$. If we equip $\mathbf{t}$ with the parent-child relation and also $\mbox{\sc root}\rightarrow B$, then $\mathbf{t}$ is a rooted connected acyclic graph, i.e. a combinatorial tree. We denote the space of such trees $\mathbf{t}$ by $\mathbb{T}_B$ and also $\mathbb{T}_n=\mathbb{T}_{[n]}$. For $\mathbf{t}\in\mathbb{T}_B$ and $A\in\mathbf{t}$, the rooted subtree $\mathbf{s}_A$ of $\mathbf{t}$ with common ancestor $A$ is given by $\mathbf{s}_A=\{\mbox{\sc root}\}\cup\{C\in\mathbf{t}:C\subseteq A\}\in\mathbb{T}_A.$ In particular, we consider the \em subtrees $\mathbf{s}_j=\mathbf{s}_{A_j}$ of the common ancestor $B$ of $\mathbf{t}$\em, i.e. the subtrees whose common ancestors $A_j$, $j\in[k]$, are the children of $B$. In other words, $\mathbf{s}_1,\ldots,\mathbf{s}_k$ are the rooted connected components of $\mathbf{t}\setminus\{B\}$. Let $(\pi(t), t\geq0)$ be a $\mathcal{P}_B$-valued refining process for some finite $B\subset\mathbb{N}$ with $\pi(0)=\mathbf{1}_B$ and $\pi(t)=\mathbf{0}_B$ for some $t>0$, where $\mathbf{1}_B$ is the trivial partition into a single block $B$ and $\mathbf{0}_B$ is the partition of $B$ into singletons. 
We define $\mathbf{t}_\pi=\{\text{\sc root}\}\cup\{A\subset B: A\in \pi(t)\mbox{ for some $t\geq 0$}\}$ as the associated \textit{labelled fragmentation tree}. \begin{defi}\label{lab}\rm Let $B\subset\mathbb{N}$ with $\#B=n$ and $\mathbf{t}\in\mathbb{T}_B$. We associate the relabelled tree $$\mathbf{t}^\sigma=\{\mbox{\sc root}\}\cup\{\sigma(A):A\in\mathbf{t}\}\in\mathbb{T}_n,$$ for any bijection $\sigma:B\rightarrow[n]$, and the combinatorial tree shape of $\mathbf{t}$ as the equivalence class $$\mathbf{t}^\circ=\{\mathbf{t}^\sigma|\sigma:B\rightarrow[n]\mbox{ bijection}\}\subset\mathbb{T}_n.$$ We denote by $\mathbb{T}_n^\circ=\{\mathbf{t}^\circ:\mathbf{t}\in\mathbb{T}_n\}=\{\mathbf{t}^\circ:\mathbf{t}\in\mathbb{T}_B\}$ the collection of all tree shapes with $n$ leaves, which we will also refer to in their own right as \em unlabelled fragmentation trees\em. \end{defi} Note that the number of subtrees of the common ancestor of $\mathbf{t}\in\mathbb{T}_n$ and the numbers of leaves in these subtrees are invariants of the equivalence class $\mathbf{t}^\circ\subset\mathbb{T}_n$. If $\mathbf{t}^\circ\in\mathbb{T}_n^\circ$ has subtrees $\mathbf{s}_1^\circ,\ldots,\mathbf{s}_k^\circ$ with $n_1\geq\ldots\geq n_k\geq 1$ leaves, we say that $\mathbf{t}^\circ$ is formed by \em joining together \em $\mathbf{s}_1^\circ,\ldots,\mathbf{s}_k^\circ$, denoted by $\mathbf{t}^\circ=\mathbf{s}_1^\circ*\ldots*\mathbf{s}_k^\circ$. We call the \em composition \em $(n_1,\ldots,n_k)$ of $n$ the \em first split \em of $\mathbf{t}_n^\circ$. 
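The set-based definition above can be checked mechanically: a collection of subsets of $B$ is a tree precisely when it contains $B$ and all singletons and is laminar (the third condition). The Python sketch below also reads off the first split as the leaf counts of the maximal proper subsets; the five-leaf example tree and the function names are ours, not the paper's.

```python
def is_fragmentation_tree(tree, B):
    """Check the three defining conditions (common ancestor, leaves, laminarity),
    ignoring the formal ROOT member."""
    if B not in tree or any(frozenset({i}) not in tree for i in B):
        return False
    return all(not (A & C) or A <= C or C <= A for A in tree for C in tree)

def first_split(tree, B):
    """Leaf counts of the subtrees of the common ancestor, in decreasing order.

    The children of B are the maximal elements of the tree strictly below B."""
    children = [A for A in tree if A < B and not any(A < C < B for C in tree)]
    return tuple(sorted((len(A) for A in children), reverse=True))

B = frozenset({1, 2, 3, 4, 5})
tree = {B, frozenset({1, 2, 3}), frozenset({4, 5})} | {frozenset({i}) for i in B}
```

Here the common ancestor $B$ has children $\{1,2,3\}$ and $\{4,5\}$, so the first split of this tree is the composition $(3,2)$ of $5$; adding the set $\{2,3,4\}$ would violate laminarity.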
With this notation and terminology, a sequence of random trees $T_n^\circ\in\mathbb{T}_n^\circ$, $n\ge 1$, has the \em Markov branching property \em if, for all $n\ge 2$, the tree $T_n^\circ$ has the same distribution as $S_1^\circ*\ldots*S_{K_n}^\circ$, where $N_1\ge\ldots\ge N_{K_n}\ge 1$ form a random composition of $n$ with $K_n\ge 2$ parts, and conditionally given $K_n=k$ and $N_j=n_j$, the trees $S_j^\circ$, $j\in[k]$, are independent and distributed as $T_{n_j}^\circ$, $j\in[k]$. \subsection{Colour-marked trees and the proof of Proposition \ref{prop6}} The growth rules (i)$^{\rm F}$-(iii)$^{\rm F}$ construct binary combinatorial trees $T_n^{\rm bin}$ with vertex set \begin{eqnarray*}&\ds V=\{\mbox{\sc root}\}\cup[n]\cup\{b_1,\ldots,b_{n-1}\} &\end{eqnarray*} and an edge set $E\subset V\times V$. We write $v\rightarrow w$ if $(v,w)\in E$. In Section \ref{trees}, we identify leaf $i$ with the set $\{i\}$ and vertex $b_i$ with $\{j\in[n]:b_i\rightarrow\ldots\rightarrow j\}$, the edge set $E$ then being identified by the parent-child relation. In this framework, a \em colour mark \em for an edge $v\rightarrow b_i$ can be assigned to the vertex $b_i$, so that a \em coloured binary tree \em as constructed in (iv)$^{\rm col}$ can be represented by \begin{eqnarray*}&\ds V^{\rm col}=\{\mbox{\sc root}\}\cup[n]\cup\{(b_1,\chi_n(b_1)),\ldots,(b_{n-1},\chi_n(b_{n-1}))\} &\end{eqnarray*} for some $\chi_n(b_i)\in\{0,1\}$, $i\in[n-1]$, where $0$ represents red and $1$ represents blue. \begin{proof}[Proof of Proposition \ref{prop6}] We only need to check that the growth rules (i)$^{\rm F}$-(iii)$^{\rm F}$ and (iv)$^{\rm col}$ for $(T_n^{\rm col},n\ge 1)$ imply that the uncoloured multifurcating trees $(\widetilde{T}_n,n\ge 1)$ obtained from $(T_n^{\rm col},n\ge 1)$ via crushing (cr) satisfy the growth rules (i)-(iii). Let therefore $\mathbf{t}^{\rm col}_{n+1}$ be a tree with $\mathbb{P}(T_{n+1}^{\rm col}=\mathbf{t}^{\rm col}_{n+1})>0$. 
It is easily seen that there is a unique tree $\mathbf{t}^{\rm col}_n$, a unique insertion edge $a_n^{\rm col}\rightarrow c_n^{\rm col}$ in $\mathbf{t}^{\rm col}_n$ and, if any, a unique colour $\chi_{n+1}(c_n^{\rm col})$ to create $\mathbf{t}^{\rm col}_{n+1}$ from $\mathbf{t}^{\rm col}_n$. Denote the trees obtained from $\mathbf{t}^{\rm col}_n$ and $\mathbf{t}^{\rm col}_{n+1}$ via crushing (cr) by $\mathbf{t}_n$ and $\mathbf{t}_{n+1}$. If $\chi_{n+1}(c_n^{\rm col})=0$, denote by $k+1\ge 3$ the degree of the branch point of $\mathbf{t}_n$ with which $c_n^{\rm col}$ is identified in the first step of the crushing (cr). \begin{itemize}\item If the insertion edge is a leaf edge ($c_n^{\rm col}=i$ for some $i\in[n]$), we obtain $$\mathbb{P}(\widetilde{T}_{n+1}=\mathbf{t}_{n+1}|\widetilde{T}_n=\mathbf{t}_n,T_n^{\rm col}=\mathbf{t}_n^{\rm col})=(1-\alpha)/(n-\alpha).$$ \item If the insertion edge has colour blue ($\chi_n(c_n^{\rm col})=1$) and also $\chi_{n+1}(c_n^{\rm col})=1$, we obtain $$\mathbb{P}(\widetilde{T}_{n+1}=\mathbf{t}_{n+1}|\widetilde{T}_n=\mathbf{t}_n,T_n^{\rm col}=\mathbf{t}_n^{\rm col})=\alpha(1-c)/(n-\alpha).$$ \item If the insertion edge has colour blue ($\chi_n(c_n^{\rm col})=1$), but $\chi_{n+1}(c_n^{\rm col})=0$, or if the insertion edge has colour red ($\chi_n(c_n^{\rm col})=0$, and then necessarily $\chi_{n+1}(c_n^{\rm col})=0$ also), we obtain $$\mathbb{P}(\widetilde{T}_{n+1}=\mathbf{t}_{n+1}|\widetilde{T}_n=\mathbf{t}_n,T_n^{\rm col}=\mathbf{t}_n^{\rm col})=(c\alpha+(k-2)\alpha)/(n-\alpha),$$ because apart from $a_n^{\rm col}\rightarrow c_n^{\rm col}$, there are $k-2$ other edges in $\mathbf{t}^{\rm col}_n$, where insertion and crushing also create $\mathbf{t}_{n+1}$. \end{itemize} Because these conditional probabilities do not depend on $\mathbf{t}_n^{\rm col}$ and have the form required, we conclude that $(\widetilde{T}_n,n\ge 1)$ obeys the growth rules (i)-(iii) with $\gamma=\alpha(1-c)$. 
\end{proof} \subsection{The Chinese Restaurant Process} An important tool in this paper is the Chinese Restaurant Process (CRP), a partition-valued process $(\Pi_n,n\ge 1)$ due to Dubins and Pitman, see \cite{csp}, which generates the Ewens-Pitman two-parameter family of exchangeable random partitions $\Pi_\infty$ of $\mathbb{N}$. In the restaurant framework, each block of a partition is represented by a \em table \em and each element of a block by a \em customer \em at a table. The construction rules are the following. The first customer sits at the first table and the following customers will be seated at an occupied table or a new one. Given $n$ customers at $k$ tables with $n_j\ge 1$ customers at the $j$th table, customer $n+1$ will be placed at the $j$th table with probability $(n_j-\alpha)/(n+\theta)$, and at a new table with probability $(\theta+k\alpha)/(n+\theta)$. The parameters $\alpha$ and $\theta$ can be chosen as either $\alpha<0$ and $\theta=-m\alpha$ for some $m\in\mathbb{N}$ or $0\leq \alpha\leq 1$ and $\theta>-\alpha$. We refer to this process as the CRP with $(\alpha,\theta)$-\em seating plan\em. In the CRP $(\Pi_n,n\ge 1)$ with $\Pi_n\in\mathcal{P}_{[n]}$, we can study the block sizes, which leads us to consider the proportion of each table relative to the total number of customers. These proportions converge to \textit{limiting frequencies} as follows. \begin{lemm}[Theorem 3.2 in \cite{csp}]\label{crp1} For each pair of parameters $(\alpha,\theta)$ subject to the constraints above, the Chinese restaurant with the $(\alpha,\theta)$-seating plan generates an exchangeable random partition $\Pi_\infty$ of $\mathbb{N}$. The corresponding EPPF is $$p_{\alpha,\theta}^{\rm PD}(n_1,\ldots ,n_k)=\frac{\alpha^{k-1}\Gamma(k+\theta/\alpha)\Gamma(1+\theta)} {\Gamma(1+\theta/\alpha)\Gamma(n+\theta)}\prod_{i=1}^k\frac{\Gamma(n_i-\alpha)}{\Gamma(1-\alpha)},\quad n_i\ge 1,i\in[k];k\ge 1:\ \mbox{$\sum n_i=n$,}$$ boundary cases by continuity. 
The corresponding limiting frequencies of block sizes, in size-biased order of least elements, are ${\rm GEM}_{\alpha,\theta}$ and can be represented as $$(\tilde{P_1},\tilde{P_2},\ldots )=(W_1,\overline{W}_1W_2,\overline{W}_1\overline{W}_2W_3,\ldots )$$ where the $W_i$ are independent, $W_i$ has ${\rm beta}(1-\alpha, \theta+i\alpha)$ distribution, and $\overline{W}_i:=1-W_i.$ The distribution of the associated ranked sequence of limiting frequencies is Poisson-Dirichlet ${\rm PD}_{\alpha,\theta}$. \end{lemm} We also associate with the EPPF $p_{\alpha,\theta}^{\rm PD}$ the distribution $q_{\alpha,\theta}^{\rm PD}$ of block sizes in decreasing order via (\ref{spliteppf}) and, because the Chinese restaurant EPPF is \em not \em the EPPF of a splitting rule leading to $k\ge 2$ blocks (we use notation $q_{\alpha,\theta}^{\rm PD^*}$ for the splitting rules induced by conditioning on $k\ge 2$ blocks), but can lead to a single block, we also set $q_{\alpha,\theta}^{\rm PD}(n)=p_{\alpha,\theta}^{\rm PD}(n)$. The asymptotic properties of the number $K_n$ of blocks of $\Pi_n$ under the $(\alpha,\theta)$-seating plan depend on $\alpha$: if $\alpha<0$ and $\theta=-m\alpha$ for some $m\in\mathbb{N}$, then $K_n=m$ for all sufficiently large $n$ a.s.; if $\alpha=0$ and $\theta>0$, then $\lim_{n\rightarrow\infty}K_n/\log n=\theta$ a.s. The most relevant case for us is $\alpha>0$. \begin{lemm}[Theorem 3.8 in \cite{csp}]\label{crp2} For $0<\alpha<1$ and $\theta>-\alpha$, $$\frac{K_n}{n^\alpha}\rightarrow S\qquad\mbox{a.s. as $n\rightarrow\infty$,}$$ where $S$ has a continuous density on $(0,\infty)$ given by $$ \frac{\mathbb{P}(S\in ds)}{ds}=\frac{\Gamma(\theta+1)}{\Gamma(\theta/\alpha+1)}s^{-\theta/\alpha}g_\alpha(s),$$ and $g_\alpha$ is the density of the Mittag-Leffler distribution with $p$th moment $\Gamma(p+1)/\Gamma(p\alpha+1)$. \end{lemm} As an extension of the CRP, Pitman and Winkel in \cite{PW2} introduced the \em ordered \em CRP. Its seating plan is as follows.
The tables are ordered from left to right. Put the second table to the right of the first with probability $\theta/(\alpha+\theta)$ and to the left with probability $\alpha/(\alpha+\theta)$. Given $k$ tables, put the $(k+1)$st table to the right of the right-most table with probability $\theta/(k\alpha+\theta)$ and to the left of the left-most or between two adjacent tables with probability $\alpha/(k\alpha+\theta)$ each. A composition of $n$ is a sequence $(n_1,\ldots,n_k)$ of positive integers with sum $n$. A sequence of random compositions $\mathcal{C}_n$ of $n$ is called \textit{regenerative} if conditionally given that the first part of $\mathcal{C}_n$ is $n_1$, the remaining parts of $\mathcal{C}_n$ form a composition of $n-n_1$ with the same distribution as $\mathcal{C}_{n-n_1}$. Given any decrement matrix $(q^{\rm dec}(n,m), 1\leq m\leq n)$, there is an associated sequence $\mathcal{C}_n$ of regenerative random compositions of $n$ defined by specifying that $q^{\rm dec}(n,\cdot)$ is the distribution of the first part of $\mathcal{C}_n$. Thus for each composition $(n_1,\ldots,n_k)$ of $n$, $$\mathbb{P}(\mathcal{C}_n=(n_1,\ldots,n_k))=q^{\rm dec}(n,n_1)q^{\rm dec}(n-n_1,n_2)\ldots q^{\rm dec}(n_{k-1}+n_k,n_{k-1})q^{\rm dec}(n_k,n_k).$$ \begin{lemm}[Proposition 6 (i) in \cite{PW2}]\label{OCRL} For each $(\alpha,\theta)$ with $0<\alpha<1$ and $\theta\geq 0$, denote by $\mathcal{C}_n$ the composition of block sizes in the ordered Chinese restaurant partition with parameters $(\alpha,\theta)$. Then $(\mathcal{C}_n, n\geq 1)$ is regenerative, with decrement matrix \begin{equation} q_{\alpha,\theta}^{\rm dec}(n,m)={n \choose m}\frac{(n-m)\alpha+m\theta}{n}\frac{\Gamma(m-\alpha)\Gamma(n-m+\theta)}{\Gamma(1-\alpha)\Gamma(n+\theta)}\ \ \ (1\leq m\leq n).
\end{equation} \end{lemm} \subsection{The splitting rule of alpha-gamma trees and the proof of Proposition \ref{prop1}} Proposition \ref{prop1} claims that the unlabelled alpha-gamma trees $(T_n^\circ,n\ge 1)$ have the Markov branching property, identifies the splitting rule and studies the exchangeability of labels. In preparation for the proof of the Markov branching property, we use CRPs to compute the probability function of the first split of $T_n^\circ$ in Proposition \ref{prop10}. We will then establish the Markov branching property from a spinal decomposition result (Lemma \ref{spinaldec}) for $T_n^\circ$. \begin{prop}\label{prop10} Let $T_n^\circ$ be an unlabelled alpha-gamma tree for some $0\le\alpha<1$ and $0\le\gamma\le\alpha$. Then the probability function of the first split of $T_n^\circ$ is \begin{eqnarray*}&\ds q^{\rm seq}_{\alpha,\gamma}(n_1,\ldots,n_k)=\frac{Z_n\Gamma(1-\alpha)}{\Gamma(n-\alpha)}\left(\gamma+(1-\alpha-\gamma)\frac{1}{n(n-1)}\sum_{i\neq j} n_i n_j\right )q_{\alpha,-\alpha-\gamma}^{\rm PD^*}(n_1,\ldots ,n_k), &\end{eqnarray*} $n_1\ge\ldots\ge n_k\ge 1$, $k\ge 2$: $n_1+\ldots+n_k=n$, where $Z_n$ is the normalisation constant in (\ref{EPmod}). \end{prop} \begin{proof} We start from the growth rules of the labelled alpha-gamma trees $T_n$. Consider the \em spine \em $\mbox{\sc root}\rightarrow v_1\rightarrow\ldots\rightarrow v_{L_{n-1}}\rightarrow 1$ of $T_n$, and the \em spinal subtrees \em $S_{ij}^{\rm sp}$, $1\le i\le L_{n-1}$, $1\le j\le K_{n,i}$, of the spinal vertices $v_i$, $i\in[L_{n-1}]$, not containing 1. By joining together the subtrees of the spinal vertex $v_i$ we form the $i$th \em spinal bush \em $S_i^{\rm sp}=S_{i1}^{\rm sp}*\ldots*S_{iK_{n,i}}^{\rm sp}$. If a bush $S_i^{\rm sp}$ consists of $k$ subtrees with $m$ leaves in total, then its weight is $m-k\alpha-\gamma+k\alpha=m-\gamma$ according to growth rule (i) -- recall that the total weight of the tree $T_n$ is $n-\alpha$.
Now we consider each bush as a table, each leaf $n=2,3,\ldots$ as a customer, 2 being the first customer. Adding a new leaf to a bush or to an edge on the spine corresponds to adding a new customer to an existing or to a new table. The weights are such that we construct an ordered Chinese restaurant partition of $\mathbb{N}\setminus\{1\}$ with parameters $(\gamma, 1-\alpha)$. Suppose that the first split of $T_n$ is into tree components with numbers of leaves $n_1\ge\ldots\ge n_k\ge 1$. Suppose further that leaf 1 is in the subtree with $n_i$ leaves in the first split; then the first spinal bush $S_1^{\rm sp}$ will have $n-n_i$ leaves. Notice that this event is equivalent to that of $n-n_i$ customers sitting at the first table with a total of $n-1$ customers present, in the terminology of the ordered CRP. According to Lemma \ref{OCRL}, the probability of this is \begin{eqnarray} q^{\rm dec}_{\gamma, 1-\alpha}(n-1,n-n_i)&=&{n-1 \choose n-n_i}\frac{(n_i-1)\gamma+(n-n_i)(1-\alpha)}{n-1}\frac{\Gamma(n_i-\alpha)\Gamma(n-n_i-\gamma)}{\Gamma(n-\alpha)\Gamma(1-\gamma)}\nonumber\\ &=&{n\choose n-n_i}\left (\frac{n_i}{n}\gamma+\frac{n_i(n-n_i)}{n(n-1)}(1-\alpha-\gamma)\right)\frac{\Gamma(n_i-\alpha)\Gamma(n-n_i-\gamma)}{\Gamma(n-\alpha)\Gamma(1-\gamma)}.\nonumber \end{eqnarray} Next consider the probability that the first bush $S_1^{\rm sp}$ joins together subtrees with $n_1\ge \ldots\ge n_{i-1}\ge n_{i+1}\ge \ldots\ge n_k\ge 1$ leaves conditional on the event that leaf 1 is in a subtree with $n_i$ leaves. The first bush has a weight of $n-n_i-\gamma$ and each subtree in it has a weight of $n_j-\alpha, j\neq i$. Consider these $k-1$ subtrees as tables and the leaves in the first bush as customers.
According to the growth procedure, they form a second (unordered, this time) Chinese restaurant partition with parameters $(\alpha, -\gamma)$, whose EPPF is \begin{equation} p_{\alpha,-\gamma}^{\rm PD}(n_1,\ldots,n_{i-1},n_{i+1},\ldots,n_k)=\frac{\alpha^{k-2}\Gamma(k-1-\gamma/\alpha)\Gamma(1-\gamma)}{\Gamma(1-\gamma/\alpha)\Gamma(n-n_i-\gamma)}\prod_{j\in[k]\setminus\{i\}}\frac{\Gamma(n_j-\alpha)}{\Gamma(1-\alpha)}.\nonumber \end{equation} Let $m_j$ be the number of $j$s in the sequence of $(n_1,\ldots,n_k)$. Based on the exchangeability of the second Chinese restaurant partition, the probability that the first bush consists of subtrees with $n_1\ge\ldots\ge n_{i-1}\ge n_{i+1}\ge\ldots\ge n_k\ge 1$ leaves conditional on the event that leaf 1 is in one of the $m_{n_i}$ subtrees with $n_i$ leaves will be \begin{equation} \frac{m_{n_i}}{m_1!\ldots m_n!}{n-n_i\choose n_1,\ldots,n_{i-1},n_{i+1},\ldots,n_k}p_{\alpha,-\gamma}^{\rm PD}(n_1,\ldots,n_{i-1},n_{i+1},\ldots,n_k).\nonumber \end{equation} Thus the joint probability that the first split is $(n_1,\ldots,n_k)$ and that leaf 1 is in a subtree with $n_i$ leaves is, \begin{eqnarray}\label{pni} &&\hspace{-1cm}\frac{m_{n_i}}{m_1!\ldots m_n!}{n-n_i\choose n_1,\ldots,n_{i-1},n_{i+1},\ldots,n_k}q^{\rm dec}_{\gamma, 1-\alpha}(n-1,n-n_i)p^{\rm PD}_{\alpha,-\gamma}(n_1,\ldots,n_{i-1},n_{i+1},\ldots,n_k)\nonumber\\ &=& m_{n_i}\left (\frac{n_i}{n}\gamma+\frac{n_i(n-n_i)}{n(n-1)}(1-\alpha-\gamma)\right) \frac{Z_n\Gamma(1-\alpha)}{\Gamma(n-\alpha)}q_{\alpha,-\alpha-\gamma}^{\rm PD^*}(n_1,\ldots,n_k). 
\end{eqnarray} Hence the splitting rule is the sum of (\ref{pni}) over all \em distinct \em values $n_i$ (not over $i$) in $(n_1,\ldots ,n_k)$; since each summand contains the factor $m_{n_i}$, we can rewrite it as a sum over $i\in[k]$: \begin{eqnarray*} q_{\alpha,\gamma}^{\rm seq}(n_1,\ldots ,n_k) &=&\left(\sum_{i=1}^k \left (\frac{n_i}{n}\gamma+\frac{n_i(n-n_i)}{n(n-1)}(1-\alpha-\gamma)\right)\right )\frac{Z_n\Gamma(1-\alpha)}{\Gamma(n-\alpha)}q_{\alpha,-\alpha-\gamma}^{\rm PD^*}(n_1,\ldots ,n_k)\\ &=&\left(\gamma+(1-\alpha-\gamma)\frac{1}{n(n-1)}\sum_{i\neq j} n_i n_j\right )\frac{Z_n\Gamma(1-\alpha)}{\Gamma(n-\alpha)}q_{\alpha,-\alpha-\gamma}^{\rm PD^*}(n_1,\ldots ,n_k). \end{eqnarray*} \end{proof} We can use the nested Chinese restaurants described in the proof to study the subtrees of the spine of $T_n$. We have decomposed $T_n$ into the subtrees $S_{ij}^{\rm sp}$ of the spine from the {\sc root} to 1 and can, conversely, build $T_n$ from $S_{ij}^{\rm sp}$, for which we now introduce notation \begin{eqnarray*}&\ds T_n=\coprod_{i,j}S_{ij}^{\rm sp}. &\end{eqnarray*} We will also write $\coprod_{i,j}S_{ij}^\circ$ when we join together unlabelled trees $S_{ij}^\circ$ along a spine. The following unlabelled version of a spinal decomposition theorem will entail the Markov branching property. \begin{lemm}[Spinal decomposition]\label{spinaldec} Let $(T_n^{\circ1},n\ge 1)$ be alpha-gamma trees, delabelled apart from label 1.
For all $n\ge 2$, the tree $T_n^{\circ1}$ has the same distribution as $\coprod_{i,j}S_{ij}^\circ$, where \begin{itemize}\item $\mathcal{C}_{n-1}=(N_1,\ldots,N_{L_{n-1}})$ is a regenerative composition with decrement matrix $q_{\gamma,1-\alpha}^{\rm dec}$, \item conditionally given $L_{n-1}=\ell$ and $N_i=n_i$, $i\in[\ell]$, the sizes $N_{i1}\ge\ldots\ge N_{iK_{n,i}}\ge 1$ form random compositions of $n_i$ with distribution $q_{\alpha,-\gamma}^{PD}$, independently for $i\in[\ell]$, \item conditionally given also $K_{n,i}=k_i$ and $N_{ij}=n_{ij}$, the trees $S_{ij}^\circ$, $j\in[k_i]$, $i\in[\ell]$, are independent and distributed as $T_{n_{ij}}^\circ$. \end{itemize} \end{lemm} \begin{proof} For an induction on $n$, note that the claim is true for $n=2$, since $T_n^{\circ1}$ and $\coprod_{i,j}S_{ij}^\circ$ are deterministic for $n=2$. Suppose then that the claim is true for some $n\ge 2$ and consider $T_{n+1}^\circ$. The growth rules (i)-(iii) of the labelled alpha-gamma tree $T_n$ are such that \begin{itemize}\item leaf $n+1$ is inserted into a new bush or any of the bushes $S_i^{\rm sp}$ selected according to the rules of the ordered CRP with $(\gamma,1-\alpha)$-seating plan, \item further into a new subtree or any of the subtrees $S_{ij}^{\rm sp}$ of the selected bush $S_i^{\rm sp}$ according to the rules of a CRP with $(\alpha,-\gamma)$-seating plan, \item and further within the subtree $S_{ij}^{\rm sp}$ according to the weights assigned by (i) and growth rules (ii)-(iii). \end{itemize} These selections do not depend on $T_n$ except via $T_n^{\circ1}$. In fact, since labels do not feature in the growth rules (i)-(iii), they are easily seen to induce growth rules for partially labelled alpha-gamma trees $T_n^{\circ1}$, and also for unlabelled alpha-gamma trees such as $S_{ij}^\circ$. From these observations and the induction hypothesis, we deduce the claim for $T_{n+1}^\circ$. 
\end{proof} \begin{proof}[Proof of Proposition \ref{prop1}] ${\rm(a)}$ Firstly, the distributions of the first splits of the unlabelled alpha-gamma trees $T_n^\circ$ were calculated in Proposition \ref{prop10}, for $0\le\alpha<1$ and $0\le\gamma\le\alpha$. Secondly, let $0\le\alpha\le 1$ and $0\le\gamma\le\alpha$. By the regenerative property of the spinal composition $\mathcal{C}_{n-1}$ and the conditional distribution of $T_n^{\circ1}$ given $\mathcal{C}_{n-1}$ identified in Lemma \ref{spinaldec}, we obtain that given $N_1=m$, $K_{n,1}=k_1$ and $N_{1j}=n_{1j}$, $j\in[k_1]$, the subtrees $S_{1j}^\circ$, $j\in[k_1]$, are independent alpha-gamma trees distributed as $T_{n_{1j}}^\circ$, also independent of the remaining tree $S_{1,0}:=\coprod_{i\ge 2,j}S_{ij}^\circ$, which, by Lemma \ref{spinaldec}, has the same distribution as $T_{n-m}^\circ$. This is equivalent to saying that conditionally given that the first split is into subtrees with $n_1\ge \ldots\ge n_{i}\ge \ldots\ge n_k\ge 1$ leaves and that leaf 1 is in a subtree with $n_i$ leaves, the delabelled subtrees $S_1^\circ,\ldots,S_k^\circ$ of the common ancestor are independent and distributed as $T_{n_j}^{\circ}$ respectively, $j\in[k]$. Since this conditional distribution does not depend on $i$, we have established the Markov branching property of $T_n^\circ$. (b) Notice that if $\gamma=1-\alpha$, the alpha-gamma model is the model related to stable trees, the labelling of which is known to be exchangeable, see Section \ref{sectstable}. On the other hand, if $\gamma\neq 1-\alpha$, let us turn to look at the distribution of $T_3$. 
\setlength{\unitlength}{0.5cm} \begin{picture}(30,6) \multiput(10,1)(10,0){2}{\line(0,1){2.8}} \multiput(10, 2.4)(10,0){2}{\line(1,1){1}} \multiput(10, 3.8)(10,0){2}{\line(1,1){1}} \multiput(10, 3.8)(10,0){2}{\line(-1,1){1}} \put(8.8,5){1} \put(10.8,5){2} \put(10.8,3.6){3} \put(18.8,5){1} \put(20.8,5){3} \put(20.8,3.6){2} \put(7.5,0){Probability:\ $\frac{\gamma}{2-\alpha}$} \put(17.5,0){Probability:\ $\frac{1-\alpha}{2-\alpha}$} \end{picture} The probabilities of the two labelled trees in the above picture are different although they have the same unlabelled tree. Hence, if $\gamma\neq 1-\alpha$, then $T_n$ is not exchangeable. \end{proof} \subsection{Sampling consistency and strong sampling consistency}\label{sectsc} Recall that an unlabelled Markov branching tree $T_n^\circ$, $n\geq2$, has the property of \textit{sampling consistency} if, when we select a leaf uniformly and delete it (together with the adjacent branch point if its degree is reduced to 2), the new tree, denoted by $T_{n,-1}^\circ$, is distributed as $T_{n-1}^\circ$. Denote by $d:\mathbb{D}_n\rightarrow\mathbb{D}_{n-1}$ the induced deletion operator on the space $\mathbb{D}_n$ of probability measures on $\mathbb{T}_n^\circ$, so that for the distribution $P_n$ of $T_n^\circ$, we define $d(P_{n})$ as the distribution of $T_{n,-1}^\circ$. Sampling consistency is equivalent to $d(P_n)=P_{n-1}$. This property is also called \textit{deletion stability} in \cite{For-05}. \begin{prop} \label{samplecons} The unlabelled alpha-gamma trees for $0\le\alpha\le 1$ and $0\le\gamma\le\alpha$ are sampling consistent.
\end{prop} \begin{proof} The sampling consistency formula $(14)$ in \cite{HMPW} states that $d(P_n)=P_{n-1}$ is equivalent to \begin{eqnarray}\label{del3} q(n_1,\ldots,n_k)&=&\sum_{i=1}^{k}\frac{(n_i+1)(m_{n_i+1}+1)}{nm_{n_i}}q((n_1,\ldots,n_i+1,\ldots,n_k)^\downarrow)\nonumber\\ &&+\frac{m_1+1}{n}q(n_1,\ldots,n_k,1)+\frac{1}{n}q(n-1,1)q(n_1,\ldots,n_k) \end{eqnarray} for all $n_1\ge\ldots\ge n_k\ge 1$ with $n_1+\ldots+n_k=n-1$, where $m_j$ is the number of $n_i$, $i\in[k]$, that equal $j$, and where $q$ is the splitting rule of $T_n^\circ\sim P_n$. In terms of EPPFs (\ref{spliteppf}), formula (\ref{del3}) is equivalent to \begin{equation}\label{del4} \left(1-p(n-1,1)\right)p(n_1,\ldots,n_k)=\sum_{i=1}^kp((n_1,\ldots,n_i+1,\ldots,n_k)^\downarrow)+p(n_1,\ldots,n_k,1). \end{equation} Now according to Proposition \ref{prop1}, the EPPF of the alpha-gamma model with $\alpha<1$ is \begin{equation} p_{\alpha,\gamma}^{\rm seq}(n_1,\ldots,n_k)=\frac{Z_n}{\Gamma_\alpha(n)}\left(\gamma+(1-\alpha-\gamma)\frac{1}{(n-1)(n-2)}\sum_{u\neq v}n_un_v\right)p_{\alpha,-\alpha-\gamma}^{\rm PD^*}(n_1,\ldots,n_k), \end{equation} where $\Gamma_\alpha(n)=\Gamma(n-\alpha)/\Gamma(1-\alpha)$. 
Therefore, $p_{\alpha,\gamma}^{\rm seq}(n_1,\ldots,n_i+1,\ldots,n_k)$ can be written as \begin{eqnarray} &&\hspace{-0.5cm}\left (p_{\alpha,\gamma}^{\rm seq}(n_1,\ldots,n_k)+2(1-\alpha-\gamma)\frac{(n-2)(n-1-n_i)-\sum_{u\neq v}n_u n_v}{n(n-1)(n-2)}\frac{Z_n}{\Gamma_\alpha(n)}p_{\alpha,-\alpha-\gamma}^{\rm PD^*}(n_1,\ldots,n_k)\right )\nonumber\\ &&\hspace{0.5cm}\times\frac{n_i-\alpha}{n-1-\alpha}\nonumber \end{eqnarray} and $p_{\alpha,\gamma}^{\rm seq}(n_1,\ldots,n_k,1)$ as \begin{eqnarray} &&\hspace{-0.5cm}\left (p_{\alpha,\gamma}^{\rm seq}(n_1,\ldots,n_k)+2(1-\alpha-\gamma)\frac{(n-1)(n-2)-\sum_{u\neq v}n_u n_v}{n(n-1)(n-2)}\frac{Z_n}{\Gamma_\alpha(n)}p_{\alpha,-\alpha-\gamma}^{\rm PD^*}(n_1,\ldots,n_k)\right )\nonumber\\ &&\hspace{0.5cm}\times\frac{(k-1)\alpha-\gamma}{n-1-\alpha}.\nonumber \end{eqnarray} Summing the above formulas, the right-hand side of (\ref{del4}) becomes \begin{equation} \left (1-\frac{1}{n-1-\alpha}\left(\gamma+\frac{2}{n}(1-\alpha-\gamma)\right )\right)p_{\alpha,\gamma}^{\rm seq}(n_1,\ldots,n_k).\nonumber \end{equation} Notice that $p_{\alpha,\gamma}^{\rm seq}(n-1,1)=\left(\gamma+2(1-\alpha-\gamma)/ n \right )/(n-1-\alpha)$. Hence, the splitting rules of the alpha-gamma model satisfy (\ref{del4}), which implies sampling consistency for $\alpha<1$. The case $\alpha=1$ is postponed to Section \ref{sectalpha1}. \end{proof} Moreover, sampling consistency can be enhanced to \em strong sampling consistency \em \cite{HMPW} by requiring that $(T_{n-1}^\circ,T_{n}^\circ)$ has the same distribution as $(T_{n,-1}^\circ,T_{n}^\circ)$. \begin{prop}\label{prop13} \label{ssc} The alpha-gamma model is strongly sampling consistent if and only if $\gamma=1-\alpha$. \end{prop} \begin{proof} For $\gamma=1-\alpha$, the model is known to be strongly sampling consistent, cf. Section \ref{sectstable}.
\setlength{\unitlength}{0.5cm} \begin{picture}(30,5.5) \multiput(10,1)(10,0){2}{\line(0,1){2.8}} \multiput(10, 2.4)(10,0){2}{\line(1,1){1}} \multiput(10, 3.8)(10,0){2}{\line(1,1){1}} \multiput(10, 3.8)(10,0){2}{\line(-1,1){1}} \put(20,2.4){\line(-1,1){1}} \put(9.6,0){ $\mathbf{t}_3^\circ$} \put(19.7,0){$\mathbf{t}_4^\circ$} \end{picture} If $\gamma\neq 1-\alpha,$ consider the above two deterministic unlabelled trees. $$\mathbb{P}(T_4^\circ=\mathbf{t}_4^\circ)=q_{\alpha,\gamma}^{\rm seq}(2,1,1)q_{\alpha,\gamma}^{\rm seq}(1,1)=(\alpha-\gamma)(5-5\alpha+\gamma)/((2-\alpha)(3-\alpha)).$$ Then we delete one of the two leaves at the first branch point of $\mathbf{t}_4^\circ$ to get $\mathbf{t}_3^\circ$. Therefore $$\mathbb{P}((T_{4,-1}^\circ,T_4^\circ)=(\mathbf{t}_3^\circ,\mathbf{t}_4^\circ))=\frac{1}{2}\mathbb{P}(T_4^\circ=\mathbf{t}_4^\circ) =\frac{(\alpha-\gamma)(5-5\alpha+\gamma)}{2(2-\alpha)(3-\alpha)}.$$ On the other hand, if $T_3^\circ=\mathbf{t}_3^\circ$, we have to add the new leaf to the first branch point to get $\mathbf{t}_4^\circ$. Thus $$\mathbb{P}((T_3^\circ,T_4^\circ)=(\mathbf{t}_3^\circ,\mathbf{t}_4^\circ))=\frac{\alpha-\gamma}{3-\alpha}\mathbb{P}(T_3^\circ=\mathbf{t}_3^\circ)= \frac{(\alpha-\gamma)(2-2\alpha+\gamma)}{(2-\alpha)(3-\alpha)}.$$ It is easy to check that $\mathbb{P}((T_{4,-1}^\circ,T_4^\circ)=(\mathbf{t}_3^\circ,\mathbf{t}_4^\circ))\neq \mathbb{P}((T_3^\circ,T_4^\circ)=(\mathbf{t}_3^\circ,\mathbf{t}_4^\circ))$ if $\gamma\neq 1-\alpha$, which means that the alpha-gamma model is then not strongly sampling consistent. \end{proof} \section{Dislocation measures and asymptotics of alpha-gamma trees} \subsection{Dislocation measures associated with the alpha-gamma-splitting rules} Theorem \ref{thm2} claims that the alpha-gamma trees are sampling consistent, which we proved in Section \ref{sectsc}, and identifies the integral representation of the splitting rule in terms of a dislocation measure, which we will now establish. 
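As a numerical sanity check (not part of the proof), both sides of the coefficient rearrangement at the start of the proof below can be evaluated for a sample composition; here $A_{ij}$, $B_i$ and $C$ denote the quantities defined in that proof, and the composition $(3,2,2,1)$ with $\alpha=0.6$, $\gamma=0.3$ is an arbitrary choice.

```python
alpha, gamma = 0.6, 0.3          # illustrative parameters, gamma <= alpha < 1
ns = [3, 2, 2, 1]                # illustrative composition (n_1, ..., n_k)
n, k = sum(ns), len(ns)

# left-hand side: gamma + (1 - alpha - gamma) * sum_{i != j} n_i n_j / (n(n-1))
lhs = gamma + (1 - alpha - gamma) * sum(
    ni * nj for i, ni in enumerate(ns) for j, nj in enumerate(ns) if i != j
) / (n * (n - 1))

# right-hand side: prefactor times gamma + (1-alpha-gamma)(sum A_ij + 2 sum B_i + C)
D = (n + 1 - alpha - gamma) * (n - alpha - gamma)
A = sum((ni - alpha) * (nj - alpha)
        for i, ni in enumerate(ns) for j, nj in enumerate(ns) if i != j) / D
B = sum((ni - alpha) * ((k - 1) * alpha - gamma) for ni in ns) / D
C = ((k - 1) * alpha - gamma) * (k * alpha - gamma) / D
rhs = D / (n * (n - 1)) * (gamma + (1 - alpha - gamma) * (A + 2 * B + C))
```

Both routes give the same value, as the algebraic identity in the proof requires.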
\begin{proof}[Proof of Theorem \ref{thm2}] Firstly, we make some rearrangement for the coefficient of the sampling consistent splitting rules of alpha-gamma trees identified in Proposition \ref{prop10}: \begin{eqnarray} &&\hspace{-0.5cm}\gamma+(1-\alpha-\gamma)\frac{1}{n(n-1)}\sum_{i\neq j}n_in_j\nonumber\\ &&=\frac{(n+1-\alpha-\gamma)(n-\alpha-\gamma)}{n(n-1)}\left(\gamma+(1-\alpha-\gamma)\left(\sum_{i\neq j}A_{ij} +2\sum_{i=1}^kB_i+C\right)\right),\nonumber \end{eqnarray} where \begin{eqnarray} A_{ij}&=&\frac{(n_i-\alpha)(n_j-\alpha)}{(n+1-\alpha-\gamma)(n-\alpha-\gamma)},\nonumber\\ B_i&=&\frac{(n_i-\alpha)((k-1)\alpha-\gamma)}{(n+1-\alpha-\gamma)(n-\alpha-\gamma)},\nonumber\\ C&=&\frac{((k-1)\alpha-\gamma)(k\alpha-\gamma)}{(n+1-\alpha-\gamma)(n-\alpha-\gamma)}.\nonumber \end{eqnarray} Notice that $B_ip_{\alpha,-\alpha-\gamma}^{\rm PD^*}(n_1,\ldots,n_k)$ simplifies to \begin{eqnarray} &&\hspace{-0.5cm}\frac{(n_i-\alpha)((k-1)\alpha-\gamma)}{(n+1-\alpha-\gamma)(n-\alpha-\gamma)}\frac{\alpha^{k-2}\Gamma(k-1-\gamma/\alpha)}{Z_n\Gamma(1-\gamma/\alpha)}\Gamma_\alpha(n_1)\ldots\Gamma_\alpha(n_k)\nonumber\\ &&=\frac{Z_{n+2}}{Z_n(n+1-\alpha-\gamma)(n-\alpha-\gamma)}\frac{\alpha^{k-1}\Gamma(k-\gamma/\alpha)}{Z_{n+2}\Gamma(1-\gamma/\alpha)}\Gamma_\alpha(n_1)\ldots\Gamma_\alpha(n_i+1)\ldots\Gamma_\alpha(n_k)\nonumber\\ &&=\frac{\widetilde{Z}_{n+2}}{\widetilde{Z}_n}p_{\alpha,-\alpha-\gamma}^{\rm PD^*}(n_1,\ldots,n_i+1,\ldots,n_k,1),\nonumber \end{eqnarray} where $\Gamma_\alpha(n)=\Gamma(n-\alpha)/\Gamma(1-\alpha)$ and $\widetilde{Z}_n=Z_n\alpha\Gamma(1-\gamma/\alpha)/\Gamma(n-\alpha-\gamma)$ is the normalisation constant in (\ref{Kingman}) for $\nu={\rm PD}^*_{\alpha,-\gamma-\alpha}$, as can be read from \cite[Formula (17)]{HPW}. 
According to (\ref{Kingman}), \begin{equation} p_{\alpha,-\alpha-\gamma}^{\rm PD^*}(n_1,\ldots,n_k)=\frac{1}{\widetilde{Z}_n}\int_{\mathcal{S}^\downarrow}\sum_{\substack{i_1,\ldots,i_k\geq 1\\ {\rm distinct}}}\prod_{l=1}^k s_{i_l}^{n_l} {\rm PD}^*_{\alpha,-\alpha-\gamma}(ds).\nonumber \end{equation} Thus, \begin{eqnarray} B_ip_{\alpha,-\alpha-\gamma}^{\rm PD^*}(n_1,\ldots,n_k)&=& \frac{1}{\widetilde{Z}_n}\int_{\mathcal{S}^\downarrow}\sum_{\substack{i_1,\ldots,i_k\geq 1\\ {\rm distinct}}}\prod_{l=1}^k s_{i_l}^{n_l} \left(\sum_{u\in\{i_1,\ldots,i_k\},v\not\in \{i_1,\ldots,i_k\}}s_us_v\right) {\rm PD}^*_{\alpha,-\alpha-\gamma}(ds).\nonumber \end{eqnarray} Similarly, \begin{eqnarray} A_{ij}p_{\alpha,-\alpha-\gamma}^{\rm PD^*}(n_1,\ldots,n_k)&=&\frac{1}{\widetilde{Z}_n}\int_{\mathcal{S}^\downarrow}\sum_{\substack{i_1,\ldots,i_k\geq 1\\ {\rm distinct}}}\prod_{l=1}^k s_{i_l}^{n_l} \left(\sum_{u,v\in\{i_1,\ldots,i_k\}:u\neq v}s_us_v\right) {\rm PD}^*_{\alpha,-\alpha-\gamma}(ds)\nonumber\\ Cp_{\alpha,-\alpha-\gamma}^{\rm PD^*}(n_1,\ldots,n_k)&=&\frac{1}{\widetilde{Z}_n}\int_{\mathcal{S}^\downarrow}\sum_{\substack{i_1,\ldots,i_k\geq 1\\ {\rm distinct}}}\prod_{l=1}^k s_{i_l}^{n_l} \left(\sum_{u,v\not\in\{i_1,\ldots,i_k\}:u\neq v}s_us_v\right) {\rm PD}^*_{\alpha,-\alpha-\gamma}(ds).\nonumber \end{eqnarray} Hence, the EPPF $p_{\alpha,\gamma}^{\rm seq}(n_1,\ldots,n_k)$ of the sampling consistent splitting rule takes the following form: \begin{eqnarray}\label{nuag3} &&\hspace{-0.5cm}\frac{(n+1-\alpha-\gamma)(n-\alpha-\gamma)Z_n}{n(n-1)\Gamma_\alpha(n)}\left(\gamma+(1-\alpha-\gamma)\left(\sum_{i\neq j}A_{ij} +2\sum_{i=1}^kB_i+C\right)\right)p_{\alpha,-\alpha-\gamma}^{\rm PD^*}(n_1,\ldots,n_k)\nonumber\\ &&=\frac{1}{Y_n}\int_{\mathcal{S}^\downarrow}\sum_{\substack{i_1,\ldots ,i_k\geq 1\\ {\rm distinct}}}\prod_{j=1}^k s_{i_j}^{n_j}\left(\gamma+(1-\alpha-\gamma)\sum_{i\neq j} s_is_j\right){\rm PD}^*_{\alpha,-\alpha-\gamma}(ds), \end{eqnarray} where
$Y_n=n(n-1)\Gamma_\alpha(n)\alpha\Gamma(1-\gamma/\alpha)/\Gamma(n+2-\alpha-\gamma)$ is the normalization constant. Hence, we have $\nu_{\alpha,\gamma}(ds)=\Big(\gamma+(1-\alpha-\gamma)\sum_{i\neq j} s_is_j\Big){\rm PD^*}_{\alpha,-\alpha-\gamma}(ds)$. \end{proof} \subsection{The alpha-gamma model when $\alpha=1$, spine with bushes of singleton-trees}\label{sectalpha1} So far, the discussion of the alpha-gamma model has been restricted to $0\leq\alpha<1$. In fact, we can still obtain some interesting results when $\alpha=1$. In the growth procedure of the alpha-gamma model, the weight of each leaf edge is $1-\alpha$. If $\alpha=1$, the weight of each leaf edge becomes zero, which means that a new leaf can only be inserted at internal edges or branch points. Starting from the two-leaf tree, leaf 3 must be inserted into the root edge or the branch point. Similarly, any new leaf must be inserted into the spine leading from the root to the common ancestor of leaf 1 and leaf 2. Hence, the shape of the tree is just a spine with some bushes of one-leaf subtrees rooted on it. Moreover, the first split of an $n$-leaf tree will be $(n-k+1,1,\ldots,1)$ for some $2\leq k\leq n$. The cases $\gamma=0$ and $\gamma=1$ lead to degenerate trees with, respectively, all leaves connected to a single branch point and all leaves connected to a spine of binary branch points (comb). \begin{prop}\label{alpha1} Consider the alpha-gamma model with $\alpha=1$ and $0<\gamma<1$.
\begin{enumerate}\item[\rm(a)] The model is sampling consistent with splitting rules \begin{eqnarray}\label{a11} \hspace{-0.5cm}&&\hspace{-0.5cm}q_{1,\gamma}^{\rm seq}(n_1,\ldots,n_k)\nonumber\\ \hspace{-0.5cm}&&=\begin{cases} \gamma\Gamma_\gamma(k-1)/(k-1)!, &\text{if}\ 2\leq k\leq n-1\ \text{and}\ (n_1,\ldots,n_k)=(n-k+1,1,\ldots,1);\\ \Gamma_\gamma(n-1)/(n-2)!, &\text{if}\ k=n\ \text{and}\ (n_1,\ldots,n_k)=(1,\ldots,1);\\ 0,&\text{otherwise}, \end{cases} \end{eqnarray} where $n_1\geq\ldots \geq n_k\geq 1$ and $n_1+\ldots +n_k=n$. \item[\rm(b)] The dislocation measure associated with the splitting rules can be expressed as follows \begin{equation} \int_{\mathcal{S}^\downarrow}f(s_1,0,\ldots)\nu_{1,\gamma}(ds)=\int_0^1 f(s_1,0,\ldots)\left(\gamma(1-s_1)^{-1-\gamma}ds_1+\delta_0(ds_1)\right). \end{equation} In particular, it does \em not \em satisfy $\nu(\{s\in\mathcal{S}^\downarrow:s_1+s_2+\ldots<1\})=0$. \end{enumerate} \end{prop} \begin{proof}(a) We start from the growth procedure of the alpha-gamma model when $\alpha=1$. Consider a first split into $(n-k+1,1,\ldots,1)$ for some labelled $n$-leaf tree. Suppose its first branch point is created when leaf $l$ is inserted into the root edge, for some $l\geq 3$. At this time the first split is $(l-1,1)$ with probability $\gamma/(l-2)$, since $\alpha=1$. In the subsequent insertions, leaves $l+1,\ldots,n$ are added either to the first branch point or to the subtree that has $l-1$ leaves at this time. Hence the probability that the first split of this tree is $(n-k+1,1,\ldots,1)$ is $$\frac{(n-k-1)!}{(n-2)!}\gamma\Gamma_\gamma(k-1),$$ which does not depend on $l$. Notice that the growth rules imply that if the first split is $(n-k+1, 1,\ldots,1)$ with $k\le n-1$, then leaves $1$ and $2$ will be located in the subtree with $n-k+1$ leaves. There are ${n-2\choose n-k-1}$ labelled trees with the above first split.
Therefore, $$q_{1,\gamma}^{\rm seq}(n-k+1, 1,\ldots,1)={n-2\choose n-k-1}\frac{(n-k-1)!}{(n-2)!}\gamma\Gamma_\gamma(k-1)=\gamma\Gamma_\gamma(k-1)/(k-1)!.$$ On the other hand, there is only one $n$-leaf labelled tree with a first split $(1,\ldots,1)$ and in this case, all leaves have been added to the only branch point. Hence $$q_{1,\gamma}^{\rm seq}(1,\ldots,1)=\Gamma_\gamma(n-1)/(n-2)!.$$ For sampling consistency, we check criterion (\ref{del3}), which reduces to the two formulas \begin{eqnarray} (1-q_{1,\gamma}^{\rm seq}(n-1,1))q_{1,\gamma}^{\rm seq}(n-k, 1,\ldots,1)&=&\frac{k}{n}q_{1,\gamma}^{\rm seq}(n-k, 1,\ldots,1)\nonumber\\&&+\frac{n-k+1}{n}q_{1,\gamma}^{\rm seq}(n-k+1, 1,\ldots,1)\nonumber\\ (1-q_{1,\gamma}^{\rm seq}(n-1,1))q_{1,\gamma}^{\rm seq}( 1,\ldots,1)&=&\frac{2}{n}q_{1,\gamma}^{\rm seq}(2, 1,\ldots,1)+q_{1,\gamma}^{\rm seq}(1,\ldots,1).\nonumber \end{eqnarray} (b) According to (\ref{a11}), \begin{eqnarray}\label{a12} &&\hspace{-0.5cm}q_{1,\gamma}^{\rm seq}(n-k+1,1,\ldots,1)\nonumber\\ &&={n\choose n-k+1}\frac{\Gamma_\gamma(n+1)}{n!}\gamma B(n-k+2,k-1-\gamma)\nonumber\\ &&=\frac{1}{Y_n}{n\choose n-k+1}\int_0^1 s_1^{n-k+1}(1-s_1)^{k-1}\left(\gamma(1-s_1)^{-1-\gamma}ds_1\right)\nonumber\\ &&=\frac{1}{Y_n}{n\choose n-k+1}\int_0^1 s_1^{n-k+1}(1-s_1)^{k-1}\left(\gamma(1-s_1)^{-1-\gamma}ds_1+\delta_0(ds_1)\right), \end{eqnarray} where $Y_n=n!/\Gamma_\gamma(n+1)$. Similarly, \begin{equation}\label{a13} q_{1,\gamma}^{\rm seq}(1,\ldots,1)=\frac{1}{Y_n}\int_0^1 \left(n(1-s_1)^{n-1}s_1+(1-s_1)^n\right)\left(\gamma(1-s_1)^{-1-\gamma}ds_1+\delta_0(ds_1)\right). \end{equation} Formulas (\ref{a12}) and (\ref{a13}) are of the form of \cite[Formula (2)]{HMPW}, which generalises (\ref{Kingman}) to the case where $\nu$ does not necessarily satisfy $\nu(\{s\in\mathcal{S}^\downarrow:s_1+s_2+\ldots<1\})=0$, hence $\nu_{1,\gamma}$ is identified. \end{proof} \subsection{Continuum random trees and self-similar trees}\label{seccrts} Let $B\subset \mathbb{N}$ be finite.
A \em labelled tree with edge lengths \em is a pair $\vartheta=(\mathbf{t},\eta)$, where $\mathbf{t}\in\mathbb{T}_B$ is a labelled tree, $\eta=(\eta_A,A\in \mathbf{t}\setminus\{\mbox{\sc root}\})$ is a collection of marks, and every edge $C\rightarrow A$ of $\mathbf{t}$ is associated with mark $\eta_A\in(0,\infty)$ at vertex $A$, which we interpret as the \em edge length \em of $C\rightarrow A$. Let $\Theta_B$ be the set of such trees $(\mathbf{t},\eta)$ with $\mathbf{t}\in\mathbb{T}_B$. We now introduce continuum trees, following the construction by Evans et al. in \cite{EPW}. A complete separable metric space $(\tau,d)$ is called an $\mathbb{R}$-tree, if it satisfies the following two conditions: \begin{enumerate} \item for all $x,y\in \tau$, there is an isometry $\varphi_{x,y}:[0,d(x,y)]\rightarrow \tau$ such that $\varphi_{x,y}(0)=x$ and $\varphi_{x,y}(d(x,y))=y$, \item for every injective path $c:[0,1]\rightarrow \tau$ with $c(0)=x$ and $c(1)=y$, one has $c([0,1])=\varphi_{x,y}([0,d(x,y)])$. \end{enumerate} We will consider rooted $\mathbb{R}$-trees $(\tau,d,\rho)$, where $\rho\in\tau$ is a distinguished element, the \em root\em. We think of the root as the lowest element of the tree.\pagebreak[2] We denote the range of $\varphi_{x,y}$ by $[[x,y]]$ and call the quantity $d(\rho,x)$ the \em height \em of $x$. We say that $x$ is an ancestor of $y$ whenever $x\in [[\rho,y]]$. We let $x\wedge y$ be the unique element in $\tau$ such that $[[\rho,x]]\cap[[\rho,y]]=[[\rho,x\wedge y]]$, and call it the \em highest common ancestor \em of $x$ and $y$ in $\tau$. Denote by $(\tau_x,d|_{\tau_x},x)$ the set of $y\in\tau$ such that $x$ is an ancestor of $y$; it is an $\mathbb{R}$-tree rooted at $x$ that we call the \em fringe subtree \em of $\tau$ above $x$.
Two rooted $\mathbb{R}$-trees $(\tau,d,\rho),(\tau^\prime,d^\prime,\rho^\prime)$ are called \em equivalent \em if there is a bijective isometry between the two metric spaces that maps the root of one to the root of the other. We denote by $\Theta$ the set of equivalence classes of compact rooted $\mathbb{R}$-trees. We define the \em Gromov-Hausdorff distance \em between two rooted $\mathbb{R}$-trees (or their equivalence classes) as \begin{eqnarray*}&\ds d_{\rm GH}(\tau,\tau^\prime)=\inf\{d_{\rm H}(\widetilde{\tau},\widetilde{\tau}^\prime)\} &\end{eqnarray*} where the infimum is over all metric spaces $E$ and isometric embeddings $\widetilde{\tau}\subset E$ of $\tau$ and $\widetilde{\tau}^\prime\subset E$ of $\tau^\prime$ with common root $\widetilde{\rho}\in E$; the Hausdorff distance on compact subsets of $E$ is denoted by $d_{\rm H}$. Evans et al. \cite{EPW} showed that $(\Theta,d_{\rm GH})$ is a complete separable metric space. We call an element $x\in \tau$, $x\neq \rho$, in a rooted $\mathbb{R}$-tree $\tau$, a \em leaf \em if its removal does not disconnect $\tau$, and let $\mathcal{L}(\tau)$ be the set of leaves of $\tau$. On the other hand, we call an element of $\tau$ a \em branch point\em, if it has the form $x\wedge y$ where $x$ is neither an ancestor of $y$ nor vice versa. Equivalently, we can define branch points as points disconnecting $\tau$ into three or more connected components when removed. We let $\mathcal{B}(\tau)$ be the set of branch points of $\tau$. A \em weighted $\mathbb{R}$-tree \em $(\tau,\mu)$ is called a \em continuum tree \em \cite{Ald-91}, if $\mu$ is a probability measure on $\tau$ and \begin{enumerate} \item $\mu$ is supported by the set $\mathcal{L}(\tau)$, \item $\mu$ has no atom, \item for every $x\in\tau\backslash\mathcal{L}(\tau)$, $\mu(\tau_x)>0$.
\end{enumerate} A \em continuum random tree (CRT) \em is a random variable whose values are continuum trees, defined on some probability space $(\Omega,\mathcal{A},\mathbb{P})$. Several methods to formalize this have been developed \cite{Ald-CRT3,EW,GPW}. For technical simplicity, we use the method of Aldous \cite{Ald-CRT3}. Let the space $\ell_1=\ell_1(\mathbb{N})$ be the base space for defining CRTs. We endow the set of compact subsets of $\ell_1$ with the Hausdorff metric, and the set of probability measures on $\ell_1$ with any metric inducing the topology of weak convergence, so that the set of pairs $(T,\mu)$ where $T$ is a rooted $\mathbb{R}$-tree embedded as a subset of $\ell_1$ and $\mu$ is a measure on $T$, is endowed with the product $\sigma$-algebra. An exchangeable \em $\mathcal{P}_\mathbb{N}$-valued fragmentation process \em $(\Pi(t),t\geq0)$ is called \em self-similar \em with index $a\in \mathbb{R}$ if given $\Pi(t)=\pi=\{\pi_i,i\ge 1\}$ with asymptotic frequencies $|\pi_i|=\lim_{n\rightarrow\infty}n^{-1}\#([n]\cap\pi_i)$, the random variable $\Pi(t+s)$ has the same law as the random partition whose blocks are those of $\pi_i\cap\Pi^{(i)}(|\pi_i|^a s),i\geq 1$, where $(\Pi^{(i)}, i\geq1)$ is a sequence of i.i.d. copies of $(\Pi(t),t\geq 0)$. The process $(|\Pi(t)|^\downarrow,t\geq 0)$ is an \em $S^\downarrow$-valued self-similar fragmentation process\em. Bertoin \cite{Ber-ss} proved that the distribution of a $\mathcal{P}_\mathbb{N}$-valued self-similar fragmentation process is determined by a triple $(a,c,\nu)$, where $a\in\mathbb{R}$, $c\ge 0$ and $\nu$ is a dislocation measure on $S^\downarrow$. In this article, we are only interested in the case where $c=0$ and $\nu(s_1+s_2+\ldots<1)=0$. We call $(a,\nu)$ the characteristic pair. When $a=0$, the process $(\Pi(t),t\ge 0)$ is also called a \em homogeneous fragmentation process\em.
A CRT $(\mathcal{T},\mu)$ is a \em self-similar CRT \em with index $a=-\gamma<0$ if for every $t\geq 0$, given $(\mu(\mathcal{T}_t^i),i\geq 1)$, where $\mathcal{T}_t^i, i\geq 1$ is the ranked order of connected components of the open set $\{x\in\mathcal{T}: d(x,\rho(\mathcal{T}))>t\}$, the continuum random trees $$ \left (\mu(\mathcal{T}^1_t)^{-\gamma}\mathcal{T}_t^1,\frac{\mu(\cdot \cap\mathcal{T}_t^1)}{\mu(\mathcal{T}^1_t)}\right ), \left (\mu(\mathcal{T}^2_t)^{-\gamma}\mathcal{T}_t^2,\frac{\mu(\cdot \cap\mathcal{T}_t^2)}{\mu(\mathcal{T}^2_t)}\right ),\ldots $$ are i.i.d. copies of $(\mathcal{T},\mu)$, where $\mu(\mathcal{T}^i_t)^{-\gamma}\mathcal{T}_t^i$ is the tree that has the same set of points as $\mathcal{T}^i_t$, but whose distance function is divided by $\mu(\mathcal{T}^i_t)^{\gamma}$. Haas and Miermont in \cite{HM} have shown that there exists a self-similar continuum random tree $\mathcal{T}_{(\gamma,\nu)}$ characterized by such a pair $(\gamma,\nu)$, which can be constructed from a self-similar fragmentation process with characteristic pair $(\gamma,\nu)$. \subsection{The alpha-gamma model when $\gamma=1-\alpha$, sampling from the stable CRT}\label{sectstable} Let $(\mathcal{T},\rho,\mu)$ be the stable tree of Duquesne and Le Gall \cite{DuL-02}. The distribution on $\Theta$ of any CRT is determined by its so-called finite-dimensional marginals: the distributions of $\mathcal{R}_k$, $k\ge 1$, the subtrees $\mathcal{R}_k\subset\mathcal{T}$ defined as the discrete trees with edge lengths spanned by $\rho,U_1,\ldots,U_k$, where given $(\mathcal{T},\mu)$, the sequence $U_i\in\mathcal{T}$, $i\ge 1$, of leaves is sampled independently from $\mu$. See also \cite{Mie-05,DuL-05,HMPW,HPW,Mar-08} for various approaches to stable trees. Let us denote the discrete tree without edge lengths associated with $\mathcal{R}_k$ by $T_k$ and note the Markov branching structure. \begin{lemm}[Corollary 22 in \cite{HMPW}] Let $1/\alpha\in(1,2]$.
The trees $T_n$, $n\ge 1$, sampled from the $(1/\alpha)$-stable CRT are Markov branching trees, whose splitting rule has EPPF \begin{eqnarray*}&\ds p^{\rm stable}_{1/\alpha}(n_1,\ldots,n_k)=\frac{\alpha^{k-2}\Gamma(k-1/\alpha)\Gamma(2-\alpha)}{\Gamma(2-1/\alpha)\Gamma(n-\alpha)} \prod_{j=1}^k\frac{\Gamma(n_j-\alpha)}{\Gamma(1-\alpha)} &\end{eqnarray*} for any $k\ge 2$, $n_1\ge1,\ldots,n_k\ge 1$, $n=n_1+\ldots+n_k$. \end{lemm} We recognise $p^{\rm stable}_{1/\alpha}=p^{\rm PD^*}_{\alpha,-1}$ in (\ref{EPmod}), and by Proposition \ref{prop1}, we have $p^{\rm PD^*}_{\alpha,-1}=p^{\rm seq}_{\alpha,1-\alpha}$. This observation yields the following corollary: \begin{coro} The alpha-gamma trees with $\gamma=1-\alpha$ are strongly sampling consistent and exchangeable. \end{coro} \begin{proof} These properties follow from the representation by sampling from the stable CRT, particularly the exchangeability of the sequence $U_i$, $i\ge 1$. Specifically, since $U_i$, $i\ge 1$, are conditionally independent and identically distributed given $(\mathcal{T},\mu)$, they are exchangeable. If we denote by $\mathcal{L}_{n,-1}$ the random set of leaves $\mathcal{L}_n=\{U_1,\ldots,U_n\}$ with a uniformly chosen member removed, then $(\mathcal{L}_{n,-1},\mathcal{L}_n)$ has the same conditional distribution as $(\mathcal{L}_{n-1},\mathcal{L}_n)$. Hence the pairs of (unlabelled) tree shapes spanned by $\rho$ and these sets of leaves have the same distribution -- this is strong sampling consistency as defined before Proposition \ref{prop13}. \end{proof} \subsection{Dislocation measures in size-biased order} In actual calculations, the splitting rules in Proposition \ref{prop1} are quite unwieldy and the corresponding dislocation measure $\nu$ is not explicit, which leads us to transform $\nu$ into a more explicit form.
The method proposed here is to pass from the space $\mathcal{S}^\downarrow$ to the space $[0,1]^\mathbb{N}$ and to rearrange the elements $s\in\mathcal{S}^\downarrow$ under $\nu$ into the \em size-biased random order\em, which places $s_{i_1}$ first with probability $s_{i_1}$ (its \em size\em) and then, successively for $j\ge 2$, places $s_{i_j}$ into the $j$th position with probability $s_{i_j}/(1-s_{i_1}-\ldots-s_{i_{j-1}})$, proportional to its size. \begin{defi}\rm We call a measure $\nu^{\rm sb}$ on the space $[0,1]^\mathbb{N}$ the size-biased dislocation measure associated with dislocation measure $\nu$, if for any subset $A_1\times A_2\times\ldots\times A_k\times [0,1]^\mathbb{N}$ of $[0,1]^\mathbb{N}$, \begin{equation} \nu^{\rm sb}(A_1\times A_2\times\ldots\times A_k\times [0,1]^\mathbb{N})=\sum_{\substack{i_1,\ldots,i_k\ge 1\\ {\rm distinct}}}\int_{\{s\in \mathcal{S}^\downarrow:s_{i_1}\in A_1,\ldots,s_{i_k}\in A_k\}}\frac{s_{i_1}\ldots s_{i_k}}{\prod_{j=1}^{k-1}(1-\sum_{l=1}^j s_{i_l})} \nu(ds)\label{sizebias1} \end{equation} for any $k\in\mathbb{N}$, where $\nu$ is a dislocation measure on $\mathcal{S}^\downarrow$ satisfying $\nu(s\in\mathcal{S}^\downarrow:s_1+s_2+\ldots<1)=0$. We also denote by $\nu_k^{\rm sb}(A_1\times A_2\times\ldots\times A_k)=\nu^{\rm sb}(A_1\times A_2\times\ldots\times A_k\times [0,1]^\mathbb{N})$ the distribution of the first $k$ marginals. \end{defi} The sum in (\ref{sizebias1}) is over all possible rank sequences $(i_1,\ldots,i_k)$ to determine the first $k$ entries of the size-biased vector. The integral in (\ref{sizebias1}) is over the decreasing sequences that have the $j$th entry of the re-ordered vector fall into $A_j$, $j\in[k]$. Notice that the support of such a size-biased dislocation measure $\nu^{\rm sb}$ is a subset of $\mathcal{S}^{\rm sb}:=\{ s\in [0,1]^\mathbb{N}: \sum_{i=1}^\infty s_i=1 \}$.
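To make the size-biased reordering concrete, here is a minimal sketch in Python (the function name is ours, not from the paper) that draws a size-biased random order of a finite probability vector, by repeatedly picking an entry with probability proportional to its size among the entries not yet placed:

```python
import random

def size_biased_order(s, rng=random):
    """Return the entries of s (a finite probability vector) in
    size-biased random order: each next entry is drawn with probability
    proportional to its size among the entries not yet placed."""
    remaining = list(s)
    out = []
    while remaining:
        u = rng.random() * sum(remaining)
        acc = 0.0
        for i, w in enumerate(remaining):
            acc += w
            if u <= acc:
                out.append(remaining.pop(i))
                break
    return out

# e.g. with s = (0.7, 0.2, 0.1), the first entry equals 0.7 with probability 0.7
```

For a dislocation measure $\nu$ the same reordering is applied under the ($\sigma$-finite) measure rather than under a single probability vector, which is what the density weight in (\ref{sizebias1}) records.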
Denoting by $s^\downarrow$ the sequence $s\in\mathcal{S}^{\rm sb}$ rearranged into ranked order, and substituting (\ref{sizebias1}) into formula (\ref{Kingman}), we obtain \begin{prop}\label{sizebias} The EPPF associated with a dislocation measure $\nu$ can be represented as: $$ p(n_1,\ldots,n_k)=\frac{1}{\widetilde{Z}_n}\int_{[0,1]^k}x_1^{n_1-1}\ldots x_k^{n_k-1}\prod_{j=1}^{k-1}(1-\sum_{l=1}^jx_l) \nu_k^{\rm sb}(dx),$$ where $ \nu^{\rm sb}$ is the size-biased dislocation measure associated with $\nu$, where $n_1\geq\ldots\geq n_k\geq 1, k\geq 2, n=n_1+\ldots+n_k$ and $x=(x_1,\ldots,x_k)$. \end{prop} We now turn to the case of Poisson-Dirichlet measures ${\rm PD}^*_{\alpha,\theta}$, in order to then study $\nu_{\alpha,\gamma}^{\rm sb}$. \begin{lemm}\label{PDSB} If we define ${\rm GEM}^*_{\alpha,\theta}$ as the size-biased dislocation measure associated with ${\rm PD}_{\alpha,\theta}^*$ for $0<\alpha<1$ and $\theta>-2\alpha$, then the first $k$ marginals have joint density \begin{equation}\label{GEM} {\rm gem}_{\alpha,\theta}^*(x_1,\ldots,x_k)=\frac{\alpha\Gamma(2+\theta/\alpha)}{\Gamma(1-\alpha)\Gamma(\theta+\alpha+1)\prod_{j=2}^kB(1-\alpha,\theta+j\alpha)}\frac{(1-\sum_{i=1}^kx_i)^{\theta+k\alpha}\prod_{j=1}^k x_j^{-\alpha}}{\prod_{j=1}^k(1-\sum_{i=1}^j x_i)}, \end{equation} where $B(a,b)=\int_0^1x^{a-1}(1-x)^{b-1}dx$ is the beta function. \end{lemm} This is a simple $\sigma$-finite extension of the {\rm GEM} distribution and (\ref{GEM}) can be derived analogously to Lemma \ref{crp1}. Applying Proposition \ref{sizebias}, we can get an explicit form of the size-biased dislocation measure associated with the alpha-gamma model. \begin{proof}[Proof of Proposition \ref{prop4}] We start our proof from the dislocation measure associated with the alpha-gamma model.
According to (\ref{thm2nu}) and (\ref{sizebias1}), the first $k$ marginals of $\nu_{\alpha,\gamma}^{\rm sb}$ are given by \begin{eqnarray*} &&\hspace{-0.5cm}\nu_k^{\rm sb}(A_1\times\ldots\times A_k)\\ &&=\sum_{\substack{i_1,\ldots,i_k\ge 1\\ {\rm distinct}}}\int_{\{s\in \mathcal{S}^\downarrow:s_{i_j}\in A_j,j\in[k]\}}\frac{s_{i_1}\ldots s_{i_k}}{\prod_{j=1}^{k-1}(1-\sum_{l=1}^j s_{i_l})} \left (\gamma+(1-\alpha-\gamma)\sum_{i\neq j}s_i s_j \right){\rm PD}^*_{\alpha,-\alpha-\gamma}(ds)\nonumber\\ &&=\gamma D+(1-\alpha-\gamma)(E-F),\nonumber \end{eqnarray*} where \begin{eqnarray} D&=&\sum_{\substack{i_1,\ldots,i_k\ge 1\\ {\rm distinct}}}\int_{\{s\in \mathcal{S}^\downarrow:\ s_{i_1}\in A_1,\ldots,s_{i_k}\in A_k\}}\frac{s_{i_1}\ldots s_{i_k}}{\prod_{j=1}^{k-1}(1-\sum_{l=1}^j s_{i_l})}{\rm PD}^*_{\alpha,-\alpha-\gamma}(ds)\nonumber\\ &=&{\rm GEM}^*_{\alpha,-\alpha-\gamma}(A_1\times\ldots\times A_k),\nonumber\\ E&=&\sum_{\substack{i_1,\ldots,i_k\ge 1\\ {\rm distinct}}}\int_{\{s\in \mathcal{S}^\downarrow:\ s_{i_1}\in A_1,\ldots,s_{i_k}\in A_k\}}\left(1-\sum_{u=1}^k s_{i_u}^2\right)\frac{s_{i_1}\ldots s_{i_k}}{\prod_{j=1}^{k-1}(1-\sum_{l=1}^j s_{i_l})}{\rm PD}^*_{\alpha,-\alpha-\gamma}(ds)\nonumber\\ &=&\int_{A_1\times\ldots\times A_k}\left(1-\sum_{i=1}^k x_i^2\right ){\rm GEM}^*_{\alpha,-\alpha-\gamma}(dx)\nonumber\\ F&=&\sum_{\substack{i_1,\ldots,i_k\ge 1\\ {\rm distinct}}}\int_{\{s\in \mathcal{S}^\downarrow:\ s_{i_1}\in A_1,\ldots,s_{i_k}\in A_k\}}\left(\sum_{v\not\in\{i_1,\ldots,i_k\}} s_v^2\right)\frac{s_{i_1}\ldots s_{i_k}}{\prod_{j=1}^{k-1}(1-\sum_{l=1}^j s_{i_l})}{\rm PD}^*_{\alpha,-\alpha-\gamma}(ds)\nonumber\\ &=&\sum_{\substack{i_1,\ldots,i_{k+1}\ge 1\\ {\rm distinct}}}\int_{\{s\in \mathcal{S}^\downarrow:\ s_{i_1}\in A_1,\ldots,s_{i_k}\in A_k\}}\frac{s_{i_{k+1}}^2}{1-\sum_{l=1}^k s_{i_l}} \frac{s_{i_1}\ldots s_{i_{k+1}}}{\prod_{j=1}^k(1-\sum_{l=1}^j s_{i_l})}{\rm PD}^*_{\alpha,-\alpha-\gamma}(ds)\nonumber\\ &=&\int_{A_1\times\ldots\times 
A_k\times[0,1]}\frac{x_{k+1}}{1-\sum_{i=1}^k x_i}{\rm GEM}^*_{\alpha,-\alpha-\gamma}(d(x_1,\ldots ,x_{k+1})).\nonumber \end{eqnarray} Applying (\ref{GEM}) to $F$ (and setting $\theta=-\alpha-\gamma$), then integrating out $x_{k+1}$, we get: $$F=\int_{A_1\times\ldots\times A_k}\frac{1-\alpha}{1+(k-1)\alpha-\gamma}\left(1-\sum_{i=1}^k x_i\right)^2{\rm GEM}^*_{\alpha,-\alpha-\gamma}(dx).$$ Combining $D$, $E$ and $F$, we obtain the formula stated in Proposition \ref{prop4}. \end{proof} As the model related to stable trees is a special case of the alpha-gamma model when $\gamma=1-\alpha$, the size-biased dislocation measure for it is $$\nu^{\rm sb}_{\alpha,1-\alpha}(ds)=\gamma{\rm GEM}^*_{\alpha,-1}(ds).$$ For general $(\alpha,\gamma)$, the explicit form of the dislocation measure in size-biased order, specifically the density $g_{\alpha,\gamma}$ of the first marginal of $\nu^{\rm sb}_{\alpha,\gamma}$, immediately yields the L\'{e}vy measure of the tagged particle \cite{Ber-hom} associated with a fragmentation process with alpha-gamma dislocation measure. \begin{coro}\label{cortps} Let $(\Pi^{\alpha,\gamma}(t),t\geq 0)$ be an exchangeable homogeneous $\mathcal{P}_{\mathbb{N}}$-valued fragmentation process with dislocation measure $\nu_{\alpha,\gamma}$. Then, for the size $|\Pi_{(i)}^{\alpha,\gamma}(t)|$ of the block containing $i\ge 1$, the process $\xi_{(i)}(t)=-\log|\Pi_{(i)}^{\alpha,\gamma}(t)|$, $t\geq 0$, is a pure-jump subordinator with L\'{e}vy measure \begin{eqnarray*} \Lambda_{\alpha,\gamma}(dx)\;=\;e^{-x}g_{\alpha,\gamma}(e^{-x})dx\!\!&=&\!\!\frac{\alpha\Gamma(1-\gamma/\alpha)}{\Gamma(1-\alpha)\Gamma(1-\gamma)}\left(1-e^{-x}\right)^{-1-\gamma}\left(e^{-x}\right)^{1-\alpha}\\ &&\times \left(\gamma+(1-\alpha-\gamma)\left(2e^{-x}(1-e^{-x})+\frac{\alpha-\gamma}{1-\gamma}(1-e^{-x})^2\right)\right)dx.
\end{eqnarray*} \end{coro} \subsection{Convergence of alpha-gamma trees to self-similar CRTs}\label{sechmpw} In this subsection, we will prove that the delabelled alpha-gamma trees $T_n^\circ$, represented as $\mathbb{R}$-trees with unit edge lengths and suitably rescaled, converge to CRTs as $n$ tends to infinity. \begin{lemm} If $(\widetilde{T}_n^\circ)_{n\geq 1}$ are strongly sampling consistent discrete fragmentation trees associated with dislocation measure $\nu_{\alpha,\gamma}$, then $$ \frac{\widetilde{T}_n^\circ}{n^\gamma}\rightarrow\mathcal{T}^{\alpha,\gamma}$$ in the Gromov-Hausdorff sense, in probability as $n\rightarrow \infty$. \end{lemm} \begin{proof} Theorem 2 in \cite{HMPW} says that a strongly sampling consistent family of discrete fragmentation trees $(\widetilde{T}_n^\circ)_{n\geq1}$ converges in probability to a CRT $$\frac{\widetilde{T}_n^\circ}{n^{\gamma_{\nu}}\ell(n)\Gamma(1-\gamma_{\nu})}\rightarrow \mathcal{T}_{(\gamma_{\nu},\nu)}$$ for the Gromov-Hausdorff metric if the dislocation measure $\nu$ satisfies the following two conditions: \begin{equation} \nu(s_1\leq1-\varepsilon)=\varepsilon^{-\gamma_\nu}\ell(1/\varepsilon);\label{v1} \end{equation} \begin{equation} \int_{\mathcal{S}^\downarrow}\sum_{i\geq2}s_i|\ln s_i|^\rho\nu(ds)<\infty,\label{v2} \end{equation} where $\rho$ is some positive real number, $\gamma_{\nu}\in(0,1)$, and $x\mapsto\ell(x)$ is slowly varying as $x\rightarrow\infty$. By virtue of (19) in \cite{HMPW}, we know that (\ref{v1}) is equivalent to $$ \Lambda([x,\infty))= x^{-\gamma_\nu}\ell(1/x),\qquad\mbox{as $x\downarrow0$,}$$ where $\Lambda$ is the L\'{e}vy measure of the tagged particle subordinator as in Corollary \ref{cortps}. So, the dislocation measure $\nu_{\alpha,\gamma}$ satisfies (\ref{v1}) with $\ell(x)\rightarrow \gamma\alpha \Gamma(1-\gamma/\alpha)/(\Gamma(1-\alpha)\Gamma(2-\gamma))$ and $\gamma_{\nu_{\alpha,\gamma}}=\gamma$.
Notice that $$\int_{\mathcal{S}^\downarrow}\sum_{i\geq2}s_i|\ln s_i|^\rho\nu_{\alpha,\gamma}(ds)\leq\int_0^\infty x^\rho\Lambda_{\alpha,\gamma}(dx).$$ As $x\rightarrow\infty$, $\Lambda_{\alpha,\gamma}$ decays exponentially, so $\nu_{\alpha,\gamma}$ satisfies condition (\ref{v2}). This completes the proof. \end{proof} \begin{proof}[Proof of Corollary \ref{dconv}] The splitting rules of $T_n^\circ$ are the same as those of $\widetilde{T}_n^\circ$, which leads to the identity in distribution for the whole trees. The preceding lemma yields convergence in distribution for $T_n^\circ$. \end{proof} \section{Limiting results for labelled alpha-gamma trees} In this section we suppose $0<\alpha<1$ and $0<\gamma\le\alpha$. In the boundary case $\gamma=0$ trees grow logarithmically and do not possess non-degenerate scaling limits; for $\alpha=1$ the study in Section \ref{sectalpha1} can be refined to give results analogous to the ones below, but with degenerate tree shapes. \subsection{The scaling limits of reduced alpha-gamma trees} For $\tau$ a rooted $\mathbb{R}$-tree and $x_1,\ldots ,x_n\in\tau$, we call $R(\tau, x_1,\ldots ,x_n)=\bigcup_{i=1}^n[[\rho,x_i]]$ the reduced subtree associated with $\tau,x_1,\ldots ,x_n$, where $\rho$ is the root of $\tau$. As a fragmentation CRT, the limiting CRT $(\mathcal{T}^{\alpha,\gamma},\mu)$ is naturally equipped with a mass measure $\mu$ and contains subtrees $\widetilde{\mathcal{R}}_k,k\geq 1$ spanned by $k$ leaves chosen independently according to $\mu$. Denote by $\widetilde{T}_n$ the discrete tree without edge lengths associated with $\widetilde{\mathcal{R}}_n$ -- it has \textit{exchangeable} leaf labels. Then $\widetilde{\mathcal{R}}_k$ is the almost sure scaling limit of the reduced trees $R(\widetilde{T}_n,[k])$ as $n\rightarrow\infty$, by Proposition 7 in \cite{HMPW}.
On the other hand, if we denote by $T_n$ the (non-exchangeably) labelled trees obtained via the alpha-gamma growth rules, the above result will not apply, but, similarly to the result for the alpha model shown in Proposition 18 in \cite{HMPW}, we can still establish a.s. convergence of the reduced subtrees in the alpha-gamma model as stated in Theorem \ref{LE}, and the convergence result can be strengthened as follows. \begin{prop}\label{slagt}In the setting of Theorem \ref{LE} $$ (n^{-\gamma}R(T_n,[k]), n^{-1}W_{n,k})\rightarrow (\mathcal{R}_k,W_k)\qquad\mbox{a.s. as $n\rightarrow\infty$,}$$ in the sense of Gromov-Hausdorff convergence, where $W_{n,k}$ is the total number of leaves in subtrees of $T_n\backslash R(T_n,[k])$ that are linked to the edges of $R(T_n,[k])$. \end{prop} \begin{proof}[Proof of Theorem \ref{LE} and Proposition \ref{slagt}] Actually, the labelled discrete tree $R(T_n,[k])$ with edge lengths removed is $T_k$ for all $n$. Thus, it suffices to prove the convergence of its total length and of its edge length proportions. Let us consider a first urn model, cf. \cite{Fel1}, where at level $n$ the urn contains a black ball for each leaf in a subtree that is directly connected to a branch point of $R(T_n,[k])$, and a white ball for each leaf in one of the remaining subtrees connected to the edges of $R(T_n,[k])$. Suppose that the balls are labelled like the leaves they represent. If the urn then contains $W_{n,k}=m$ white balls and $n-k-m$ black balls, the induced partition of $\{k+1,\ldots ,n\}$ has probability function $$p(m, n-k-m)=\frac{\Gamma(n-m-\alpha-w)\Gamma(w+m)\Gamma(k-\alpha)}{\Gamma(k-\alpha-w)\Gamma(w)\Gamma(n-\alpha)} =\frac{B(n-m-\alpha-w,w+m)}{B(k-\alpha-w,w)} $$ where $w=k(1-\alpha)+\ell\gamma$ is the total weight on the $k$ leaf edges and $\ell$ other edges of $T_k$. As $n\rightarrow\infty$, the urn is such that $W_{n,k}/n\rightarrow W_k$ a.s., where $W_k\sim {\rm beta}(k(1-\alpha)+\ell\gamma,(k-1)\alpha-\ell\gamma)$.
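The convergence $W_{n,k}/n\rightarrow W_k$ is the classical limit of a P\'olya-type urn. As a purely illustrative Monte Carlo sketch (our own code, with hypothetical initial weights standing in for $w$ and the branch-point weight), the long-run white fraction in such an urn has mean $w/(w+\bar w)$, matching the mean of the stated beta limit:

```python
import random

def polya_white_fraction(w_white, w_black, steps, rng):
    """Run a Polya-type urn: each new unit of mass joins the white side
    with probability proportional to the current white weight, and every
    addition increases the chosen side's weight by 1. Return the fraction
    of the added mass that went to white."""
    white = 0
    cw, cb = w_white, w_black
    for _ in range(steps):
        if rng.random() * (cw + cb) < cw:
            cw += 1
            white += 1
        else:
            cb += 1
    return white / steps

# The white fraction is a bounded martingale with mean w_white/(w_white+w_black),
# the mean of the beta(w_white, w_black) limit law.
```

This only illustrates the mechanism; in the proof the urn is driven by the alpha-gamma edge and branch point weights rather than by unit increments of both colours equally.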
We will partition the white balls further. Extending the notions of spine, spinal subtrees and spinal bushes from Proposition \ref{prop10} ($k=1$), we call, for $k\ge 2$, \em skeleton \em the tree $S(T_n,[k])$ of $T_n$ spanned by the {\sc root} and leaves $[k]$ including the degree-2 vertices. For each such degree-2 vertex $v\in S(T_n,[k])$, we consider the skeletal subtrees $S^{\rm sk}_{vj}$, which we join together into a \em skeletal bush \em $S^{\rm sk}_v$. Note that the total length $L_k^{(n)}$ of the skeleton $S(T_n,[k])$ will increase by 1 if leaf $n+1$ in $T_{n+1}$ is added to any of the edges of $S(T_n,[k])$; also, $L_k^{(n)}$ is equal to the number of skeletal bushes (denoted by $\overline{K}_n$) plus the original total length $k+\ell$ of $T_k$. Hence, as $n\rightarrow \infty$ \begin{equation} \frac{L_k^{(n)}}{n^{\gamma}}\sim\frac{\overline{K}_n}{W_{n,k}^{\gamma}}\left (\frac{W_{n,k}}{n}\right )^{\gamma} \sim\frac{\overline{K}_n}{W_{n,k}^{\gamma}}W_k^{\gamma}.\label{sim} \end{equation} The partition of leaves (associated with white balls), where each skeletal bush gives rise to a block, follows the dynamics of a Chinese Restaurant Process with $(\gamma,w)$-seating plan: given that the number of white balls in the first urn is $m$ and that there are $K_m:=\overline{K}_n$ skeletal bushes on the edges of $S(T_n,[k])$ with $n_i$ leaves on the $i$th bush, the next leaf associated with a white ball will be inserted into any particular bush with $n_i$ leaves with probability proportional to $n_i-\gamma$ and will create a new bush with probability proportional to $w+K_m\gamma$. Hence, the EPPF of this partition of the white balls is $$p_{\gamma,w}(n_1,\ldots,n_{K_m})=\frac{\gamma^{K_m-1}\Gamma(K_m+w/\gamma)\Gamma(1+w)} {\Gamma(1+w/\gamma)\Gamma(m+w)}\prod_{i=1}^{K_m}\Gamma_{\gamma}(n_i).$$ Applying Lemma \ref{crp2} in connection with (\ref{sim}), we get the probability density of $L_k/W_k^{\gamma}$ as specified.
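The $(\gamma,w)$-seating plan just described translates directly into a sampler; the following is our own illustrative sketch (function name ours), seating customers one at a time with the stated weights:

```python
import random

def crp_tables(n, gamma, theta, rng):
    """Seat n customers by the (gamma, theta)-seating plan: an occupied
    table with m customers attracts the next customer with weight
    m - gamma, and a new table opens with weight theta + k*gamma,
    where k is the current number of tables. Returns the table sizes."""
    tables = [1]  # the first customer opens the first table
    for _ in range(n - 1):
        k = len(tables)
        weights = [m - gamma for m in tables] + [theta + k * gamma]
        u = rng.random() * sum(weights)
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if u <= acc:
                if i < k:
                    tables[i] += 1
                else:
                    tables.append(1)
                break
    return tables
```

In the proof the "customers" are the white-ball leaves and the "tables" are the skeletal bushes, with parameters $(\gamma,w)$; for $\gamma=0$, $\theta=1$ the expected number of tables after $n$ customers is the harmonic number $H_n$, a standard consistency check.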
Finally, we set up another urn model that is updated whenever a new skeletal bush is created. This model records the edge lengths of $R(T_n,[k])$. The alpha-gamma growth rules assign weights $1-\alpha+(n_i-1)\gamma$ to leaf edges of $R(T_n,[k])$ and weights $n_i\gamma$ to other edges of length $n_i$, and each new skeletal bush makes one of the weights increase by $\gamma$. Hence, the conditional probability that the edge lengths are $(n_1,\ldots ,n_{k+\ell})$ at stage $n$ is $$ \frac{\prod_{i=1}^{k}\Gamma_{1-\alpha}(n_i)\prod_{i=k+1}^{k+\ell}\Gamma_{\gamma}(n_i)}{\Gamma_{k\alpha+\ell\gamma}(n-k)}.$$ Then $D_k^{(n)}$ converges a.s. to the Dirichlet limit as specified. Moreover, $L_k^{(n)}D_k^{(n)}\rightarrow L_kD_k$ a.s., and it is easily seen that this implies convergence in the Gromov-Hausdorff sense. The above argument actually gives us the conditional distribution of $L_k/W_k^{\gamma}$ given $T_k$ and $W_k$, which does not depend on $W_k$. Similarly, the conditional distribution of $D_k$ given $T_k$, $W_k$ and $L_k$ does not depend on $W_k$ and $L_k$. Hence, the conditional independence of $W_k$, $L_k/W_k^{\gamma}$ and $D_k$ given $T_k$ follows. \end{proof} \subsection{Further limiting results}\label{secbw} Alpha-gamma trees not only have edge weights but also vertex weights, and the latter are in correspondence with the vertex degrees. We can get a result on the limiting ratio between the degree of each vertex and the total number of leaves. \begin{prop} Let $(c_1+1,\ldots,c_{\ell}+1)$ be the degrees of the branch points in $T_k$, listed by depth-first search.
The ratio between the degrees in $T_n$ of these vertices and $n^\alpha$ will converge to $$C_k=(C_{k,1},\ldots,C_{k,\ell})= \overline{W}_k^\alpha M_k D_k^\prime,\qquad \mbox{where $D_k^\prime\sim{\rm Dirichlet}(c_1-1-\gamma/\alpha,\ldots,c_\ell-1-\gamma/\alpha)$}$$ and $M_k$ are conditionally independent of $W_k$ given $T_k$, where $\overline{W}_k=1-W_k$, and $M_k$ has density $$\frac{\Gamma(\overline{w}+1)}{\Gamma(\overline{w}/\alpha+1)}s^{\overline{w}/\alpha}g_\alpha(s),\qquad s\in(0,\infty),$$ $\overline{w}=(k-1)\alpha-\ell\gamma$ is the total branch point weight in $T_k$ and $g_\alpha(s)$ is the Mittag-Leffler density. \end{prop} \begin{proof} Recall the first urn model in the preceding proof which assigns colour black to leaves attached in subtrees of branch points of $T_k$. We will partition the black balls further. The partition of leaves (associated with black balls), where each \em subtree \em $S^{\rm sk}_{vj}$ of a branch point $v\in R(T_n,[k])$ gives rise to a block, follows the dynamics of a Chinese Restaurant Process with $(\alpha,\overline{w})$-seating plan. Hence, the total degree satisfies $C^{\rm tot}_k(n)/\overline{W}_{n,k}^\alpha\rightarrow M_k$ a.s., where $C_k^{\rm tot}(n)$ is the sum of degrees in $T_n$ of the branch points of $T_k$, and $\overline{W}_{n,k}=n-k-W_{n,k}$ is the total number of leaves of $T_n$ that are in subtrees directly connected to the branch points of $T_k$. Similarly to the discussion of edge length proportions, we now see that the sequence of degree proportions will converge a.s. to the Dirichlet limit as specified; here we use that $1-W_k$ is the a.s. limiting proportion of leaves in subtrees connected to the vertices of $T_k$. \end{proof} Given an alpha-gamma tree $T_n$, if we decompose along the spine that connects the {\sc root} to leaf 1, we find that the leaf numbers of subtrees connected to the spine form a Chinese restaurant partition of $\{2,\ldots,n\}$ with parameters $(\alpha, 1-\alpha)$.
Applying Lemma \ref{crp1}, we get the following result. \begin{prop} Let $(T_n,n\ge 1)$ be alpha-gamma trees. Denote by $(P_1,P_2,\ldots)$ the limiting frequencies of the leaf numbers of each subtree of the spine connecting the {\sc root} to leaf 1 in the order of appearance. These can be represented as $$(P_1,P_2,\ldots)=(W_1,\overline{W}_1W_2,\overline{W}_1\overline{W}_2W_3,\ldots )$$ where the $W_i$ are independent, $W_i$ has ${\rm beta}(1-\alpha, 1+(i-1)\alpha)$ distribution, and $\overline{W}_i=1-W_i$. \end{prop} Observe that this result does not depend on $\gamma$. This observation also follows from Proposition \ref{prop6}, because colouring (iv)$^{\rm col}$ and crushing (cr) do not affect the partition of leaf labels according to subtrees of the spine. \end{document}
\begin{document} \title{A REMARK ON PARTIAL SUMS INVOLVING THE M\"OBIUS FUNCTION} \cauthor \author[1]{Terence Tao} \address[1]{Department of Mathematics, UCLA, Los Angeles CA 90095-1555\email{tao@math.ucla.edu}} \authorheadline{T. Tao} \support{The author is supported by NSF Research Award DMS-0649473, the NSF Waterman award and a grant from the MacArthur Foundation. } \begin{abstract} Let $\langle \P \rangle \subset \N$ be a multiplicative subsemigroup of the natural numbers $\N = \{1,2,3,\ldots\}$ generated by an arbitrary set $\P$ of primes (finite or infinite). We give an elementary proof that the partial sums $\sum_{n \in \langle \P \rangle: n \leq x} \frac{\mu(n)}{n}$ are bounded in magnitude by $1$. With the aid of the prime number theorem, we also show that these sums converge to $\prod_{p \in \P} (1 - \frac{1}{p})$ (the case when $\P$ is all the primes is a well-known observation of Landau). Interestingly, this convergence holds even in the presence of non-trivial zeroes and poles of the associated zeta function $\zeta_\P(s) := \prod_{p \in \P} (1-\frac{1}{p^s})^{-1}$ on the line $\{ \Re(s)=1\}$. As equivalent forms of the first inequality, we have $|\sum_{n \leq x: (n,P)=1} \frac{\mu(n)}{n}| \leq 1$, $|\sum_{n|N: n \leq x} \frac{\mu(n)}{n}| \leq 1$, and $|\sum_{n \leq x} \frac{\mu(mn)}{n}| \leq 1$ for all $m,x,N,P \geq 1$. \end{abstract} \classification{primary 11A25} \keywords{M\"obius function, prime number theorem, effective inequalities} \maketitle \section{Introduction} Let $\N := \{1,2,\ldots\}$ be the natural numbers, and let $\mu: \N \to \{-1,0,+1\}$ be the M\"obius function, thus $\mu(n) = (-1)^k$ when $n$ is the product of $k$ distinct primes, and $\mu(n)=0$ otherwise. Landau \cite{landau} made the elementary observation that the prime number theorem\footnote{We adopt the convention that $n$ always ranges over the natural numbers, and $p$ over prime numbers, unless otherwise stated.
The notation $o(1)$ denotes any quantity which converges to zero as $x \to \infty$, holding all other parameters fixed.} $$ \sum_{p \leq x} 1 = (1 + o(1)) \frac{x}{\log x} $$ is equivalent to the conditional convergence of the infinite sum $\sum_n \frac{\mu(n)}{n}$ to zero, thus \begin{equation}\label{mun} \sum_{n \leq x} \frac{\mu(n)}{n} = o(1). \end{equation} As is well known, \eqref{mun} and the prime number theorem are also both equivalent to the fact that the Riemann zeta function \begin{equation}\label{zeta} \zeta(s) := \sum_n \frac{1}{n^s} = \prod_p (1-\frac{1}{p^s})^{-1} \end{equation} has a simple pole at $s=1$ and no zeroes or poles elsewhere on the line $\{ \Re(s)=1\}$. See \cite{diamond} for further discussion of this and other equivalences. On the other hand, one has the elementary bound \begin{equation}\label{el} |\sum_{n \leq x} \frac{\mu(n)}{n}| \leq 1 \end{equation} for all $x$. Indeed, to see this we may assume without loss of generality that $x$ is a natural number, and then sum the M\"obius inversion formula\footnote{We write $1_E$ to denote the indicator of a statement $E$, thus $1_E=1$ when $E$ is true and $1_E=0$ otherwise.} $1_{n=1} = \sum_{d|n} \mu(d)$ from $1$ to $x$ to obtain the identity $$ 1 = \sum_{d \leq x} \mu(d) \lfloor \frac{x}{d}\rfloor = \sum_{d \leq x} \mu(d) \frac{x}{d} - \sum_{d < x} \mu(d) \{ \frac{x}{d} \}$$ where $\{y\} := y - \lfloor y \rfloor$ is the fractional part of $y$. Using the trivial bound $|\mu(d) \{ \frac{x}{d} \}| \leq 1$ and the triangle inequality one obtains \eqref{el}. The bound \eqref{el} is of course attained with equality when $x=1$. In this paper we investigate the analogue of these facts when we ``turn off'' some of the primes in $\N$. More precisely, we consider an arbitrary set $\P$ of primes (either finite or infinite), and let $\langle \P \rangle \subset \N$ be the multiplicative semigroup generated by $\P$ (i.e. the set of natural numbers whose prime factors all lie in $\P$).
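Both \eqref{el} and the decay \eqref{mun} are easy to probe numerically; a minimal sketch (the Möbius sieve is a generic implementation and the cut-off $N = 20000$ an arbitrary choice, not anything from the text):

```python
def mobius_sieve(N):
    """mu(n) for 1 <= n <= N: flip the sign once for each prime factor,
    and zero out multiples of squares of primes."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(p, N + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    return mu

N = 20000
mu = mobius_sieve(N)
s, worst = 0.0, 0.0
for n in range(1, N + 1):
    s += mu[n] / n
    worst = max(worst, abs(s))
assert worst <= 1.0       # the elementary bound, with equality only at x = 1
assert abs(s) < 0.05      # consistent with the o(1) decay (empirically small)
```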
We will study the behaviour of the sum \begin{equation}\label{motor} \sum_{n \in \langle \P \rangle: n \leq x} \frac{\mu(n)}{n} \end{equation} in $x$ and $\P$. The analogue of the Riemann zeta function \eqref{zeta} is then the \emph{Beurling zeta function} $\zeta_\P$, defined for $\Re(s) > 1$ by the formula \begin{equation}\label{zetap} \zeta_\P(s) := \sum_{n \in \langle \P \rangle} \frac{1}{n^s} = \prod_{p \in \P} (1-\frac{1}{p^s})^{-1}. \end{equation} Note that as $\P$ is arbitrary, there need not be any asymptotic formula for the prime counting function $\sum_{p < x} 1$. For similar reasons, there need not be any meromorphic continuation of $\zeta_\P$ beyond the region $\{ \Re(s) > 1 \}$; for instance, one can easily construct a set $\P$ for which $\zeta_\P(s)$ blows up as $s \to 1^+$ at an intermediate rate between $1$ and $\frac{1}{s-1}$, which is not consistent with any meromorphic continuation at $s=1$. Related to this, the zeta function $\zeta_\P(s)$ can develop zeroes or singularities on the line $\Re(s)=1$. Indeed, observe for $\Re(s) > 1$ that $$ \log|\zeta_\P(s)| = - \sum_{p \in \P} \log |1 - \frac{1}{p^s}| = \sum_{p \in \P} \frac{1}{p^s} + O(1).$$ For any non-zero real number $t$, if one sets $\P$ to be those primes $p$ for which $\{ \frac{t \log p}{2\pi} \} \leq 0.1$ (say), then one can easily check that $|\zeta_\P(1+it+\eps)| \to \infty$ as $\eps \to 0$; similarly, if we set $\P$ instead to be those primes for which $\{ \frac{t \log p}{2\pi} - \frac{1}{2} \} \leq 0.1$, then $|\zeta_\P(1+it+\eps)| \to 0$. A modification of these examples shows that $\zeta_\P$ need not have a meromorphic continuation at $1+it$ for a fixed $t$, and with a bit more effort one can concoct a $\P$ for which $\zeta_\P$ has no continuation at $1+it$ for \emph{any} $t$; we omit the details.
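For a finite set $\P$ the identity \eqref{zetap} can be checked directly, since the semigroup sum factors into geometric series; a small sketch with the arbitrary choice $\P = \{2,3\}$ and $s = 2$:

```python
# zeta_P(2) for P = {2, 3}: the semigroup <P> consists of the numbers 2^a 3^b.
# Truncating the exponents at 60/40 leaves a negligible tail (~4^-60).
s = 2
semigroup_sum = sum((2 ** a * 3 ** b) ** (-s) for a in range(60) for b in range(40))
euler_product = 1 / (1 - 2 ** (-s)) / (1 - 3 ** (-s))   # = (4/3)(9/8) = 3/2
assert abs(semigroup_sum - euler_product) < 1e-12
```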
Despite this, one can\footnote{Note added in proof: as pointed out to us after the submission of this article, these generalisations were essentially contained in \cite{gran} and \cite{skalba} respectively, see Remarks \ref{gran-rem} and \ref{skal-rem}. We hope however that this article continues to serve an expository role in highlighting these elementary results.} generalise the statements \eqref{mun}, \eqref{el} to arbitrary $\P$. We first prove the generalisation of \eqref{el}, which is surprisingly elementary: \begin{theorem}[Elementary bound]\label{mon} For any $\P$ and $x$, one has \begin{equation}\label{monster} |\sum_{n \in \langle \P \rangle: n \leq x} \frac{\mu(n)}{n}| \leq 1. \end{equation} \end{theorem} \begin{proof} We may assume of course that $x$ is a natural number. We may also assume that \begin{equation}\label{sop} \sum_{n \in \langle \P \rangle: n \leq x} \frac{1}{n} > 1, \end{equation} since the claim is immediate from the triangle inequality otherwise. Let $\P'$ be the set of primes not in $\P$. From M\"obius inversion one has $$ 1_{n \in \langle \P'\rangle} = \sum_{d \in \langle \P\rangle: d|n} \mu(d)$$ for all natural numbers $n$; summing this over all $n \leq x$ as in the proof of \eqref{el} yields $$ \sum_{n \in \langle \P'\rangle: n \leq x} 1 = \sum_{d \in \langle \P\rangle: d \leq x} \mu(d) \frac{x}{d} - \sum_{d \in \langle \P\rangle: d \leq x} \mu(d) \left \{ \frac{x}{d} \right \}.$$ Using the bound $$ |\mu(d) \left \{ \frac{x}{d} \right \}| \leq 1 - \frac{1}{d}$$ we conclude that \begin{equation}\label{zorn} |x \sum_{n \in \langle \P \rangle: n \leq x} \frac{\mu(n)}{n}| \leq \sum_{n \in \langle \P'\rangle: n \leq x} 1 + \sum_{n \in \langle \P\rangle: n \leq x} 1 - \sum_{n \in \langle \P\rangle: n \leq x} \frac{1}{n}. \end{equation} Since $\langle \P \rangle$ and $\langle \P' \rangle$ overlap only at $1$, the claim now follows from \eqref{sop}.
\end{proof} \begin{remark}\label{gran-rem} A bound very similar to \eqref{monster} was also observed in \cite{gran}. In particular, the following refinement $$ x \sum_{n \in \langle \P \rangle: n \leq x} \frac{\mu(n)}{n} = \sum_{n \in \langle \P'\rangle: n \leq x} 1 + (1-\gamma) \sum_{n \in \langle \P\rangle: n \leq x} \mu(n) + O( \frac{x}{\log^{1/5} x} )$$ to \eqref{zorn} was obtained as a special case of \cite[Theorem 3.1]{gran}. The lower bound of $-1$ for $\sum_{n \in \langle \P \rangle: n \leq x} \frac{\mu(n)}{n}$ was also improved in \cite[Theorem 2]{gran} to $$ (1 - 2 \log(1+\sqrt{e}) + 4 \int_1^{\sqrt{e}} \frac{\log t}{t+1} dt) \log 2 + o(1) = -0.4553\ldots + o(1),$$ which is optimal except for the $o(1)$ term, with a characterisation of those primes $\P$ for which the lower bound is attained. \end{remark} As corollaries of Theorem \ref{mon} we have \begin{equation}\label{mock-1} |\sum_{n \leq x: (n,P)=1} \frac{\mu(n)}{n}| \leq 1 \end{equation} and \begin{equation}\label{mock-2} |\sum_{n|N: n \leq x} \frac{\mu(n)}{n}| \leq 1 \end{equation} for any $P,x,N \geq 1$; also, from the identity $\mu(mn) = \mu(m) \mu(n) 1_{(m,n)=1}$ one has \begin{equation}\label{mock-3} |\sum_{n \leq x} \frac{\mu(mn)}{n}| \leq 1 \end{equation} for any $m,x \geq 1$. These inequalities, which save a factor of $O(\log x)$ over the trivial bound, may be of some value in obtaining effective estimates in sieve theory or in exponential sums over the primes. Now we turn to the generalisation of \eqref{mun}. \begin{theorem}[Landau's theorem for arbitrary sets of primes]\label{main} Let $\P$ be a set of primes. Then the sum $\sum_{n \in \langle \P \rangle} \frac{\mu(n)}{n}$ converges conditionally to $\prod_{p \in \P} (1-\frac{1}{p})$, thus \begin{equation}\label{monster2} \sum_{n \in \langle \P \rangle: n \leq x} \frac{\mu(n)}{n} = \prod_{p \in \P} (1-\frac{1}{p}) + o(1) \end{equation} for all $x > 0$, where the decay rate of the error $o(1)$ depends on $\P$. 
In particular, $\sum_{n \in \langle \P\rangle} \frac{\mu(n)}{n}$ is conditionally convergent to zero if and only if $\sum_{p \in \P} \frac{1}{p}$ is infinite. \end{theorem} The proof of Theorem \ref{main} is also elementary (except for its use of \eqref{mun}, which is of course the special case of \eqref{monster2} when $\P$ consists of all the primes). It is surprisingly difficult to replicate this elementary proof by zeta function methods, in large part due to the lack of meromorphic continuation alluded to earlier. \begin{remark} A classical result of Wirsing (see e.g. \cite{hildebrand}) on mean values of multiplicative functions implies in particular that $$ \frac{1}{x} \sum_{n \in \langle \P \rangle: n \leq x} \mu(n) = o(1).$$ This fact is also deducible from \eqref{monster2}. \end{remark} \begin{remark}\label{skal-rem} Theorem \ref{main} was also proven in \cite{skalba} by a similar method; in fact, the result in \cite{skalba} extends to arbitrary collections of prime ideals in algebraic number fields. \end{remark} We remark that the decay rate $o(1)$ in \eqref{monster2} is not uniform in $\P$. For instance, if one takes $\P$ to be all the primes $p$ between $\sqrt{x}$ and $x$, one sees from Mertens' theorems that $$ \sum_{n \in \langle \P \rangle: n \leq x} \frac{\mu(n)}{n} = 1 - \sum_{\sqrt{x} \leq p \leq x} \frac{1}{p} = 1 - \log 2 + o(1)$$ and $$ \prod_{p \in \P} (1-\frac{1}{p}) = \frac{1}{2} + o(1)$$ and so one can keep the error term $o(1)$ in \eqref{monster2} bounded away from zero even for arbitrarily large $x$, by choosing $\P$ depending on $x$. More precise statements of this nature can be found in \cite{gran}. Note that the above results do not hold when $\langle \P \rangle$ is replaced by a more general subsemigroup $G$ of the natural numbers (in which the generators are not necessarily prime).
For instance\footnote{We thank Andrew Sutherland for this example.}, if $G$ is the semigroup generated by the semiprimes (the products of two primes), then $\mu$ is either $0$ or $1$ and it is not difficult to see that $\sum_{n \in G} \frac{\mu(n)}{n}$ diverges. One reason for the bad behaviour of these sums is that the zeta function $\sum_{n \in G} \frac{1}{n^s}$ no longer has an Euler product. It is also essential that $\P$ consist of natural numbers, rather than merely real numbers larger than $1$ (as is the case in Beurling prime models), as the equal spacing of the integers is used in an essential way. For instance, the inequality \eqref{monster} fails when $\P = \{1.1,1.2,1.3\}$ and $x=1.3$ (we thank Harold Diamond for this example and observation). We thank Melvyn Nathanson and Keith Conrad for corrections, Harold Diamond for comments, and Wladyslaw Narkiewicz, Mariusz Ska{\l}ba, Kannan Soundararajan for references. We are also indebted to the anonymous referee for corrections and suggestions. \section{Proof of main theorem} We now establish Theorem \ref{main}. Fix $\P$; we allow all implied constants in the asymptotic notation to depend on $\P$. If $\sum_{p \in \P} \frac{1}{p}$ is finite, then from the monotone convergence theorem we see that $$ \sum_{n \in \langle \P \rangle} \frac{1}{n} = \prod_{p \in \P} (1 + \frac{1}{p-1})$$ is absolutely convergent, and thus (by dominated convergence) $$ \sum_{n \in \langle \P \rangle} \frac{\mu(n)}{n} = \prod_{p \in \P} (1 - \frac{1}{p})$$ is conditionally convergent, giving the claim. Thus we shall assume that $\sum_{p \in \P} \frac{1}{p}$ is infinite, in which case our task is to show that \begin{equation}\label{sum} \sum_{n \in \langle \P \rangle: n \leq x} \frac{\mu(n)}{n} = o(1). \end{equation} Let $\P' := \{ p: p \not \in \P \}$ be the complement of $\P$ in the primes. 
Suppose that $\sum_{p \in \P'} \frac{1}{p}$ is also infinite, thus $$ \prod_{p \in \P} (1-\frac{1}{p}) = \prod_{p \in \P'} (1-\frac{1}{p}) = 0.$$ From elementary sieve theory (the Legendre sieve), this implies that both $\langle \P \rangle$ and $\langle \P' \rangle$ have asymptotic density zero, thus the right-hand side of \eqref{zorn} is $o(x)$, and \eqref{sum} follows. The last remaining case is when $\sum_{p \in \P'} \frac{1}{p}$ is finite. In this case, we use M\"obius inversion to write $$ 1_{n \in \langle \P\rangle} = \sum_{d \in \langle \P'\rangle: d|n} \mu(d)$$ and thus $$ \sum_{n \in \langle \P \rangle: n \leq x} \frac{\mu(n)}{n} = \sum_{d \in \langle \P'\rangle: d \leq x} \frac{\mu(d)}{d} \sum_{m \leq x/d} \frac{\mu(dm)}{m}.$$ By \eqref{mock-3}, $\frac{\mu(d)}{d} \sum_{m \leq x/d} \frac{\mu(dm)}{m}$ is bounded in magnitude by $\frac{1}{d}$. As $\sum_{p \in \P'} \frac{1}{p}$ is finite, $\sum_{d \in \langle \P' \rangle} \frac{1}{d}$ is absolutely convergent, so by dominated convergence it suffices to show that $$ \sum_{m \leq x} \frac{\mu(dm)}{m} = o(1)$$ for each fixed $d$. Fix $d$. If we take $\P_d$ to be the primes dividing $d$, we observe the M\"obius inversion identity \begin{align*} \mu(dm) &= \mu(d) \mu(m) 1_{(d,m)=1} \\ &= \sum_{n \in \langle \P_d \rangle: n|m} \mu(d) \mu(m/n) \end{align*} and so $$ \sum_{m \leq x} \frac{\mu(dm)}{m} = \mu(d) \sum_{n \in \langle \P_d \rangle: n \leq x}\frac{1}{n} \sum_{l \leq x/n} \frac{\mu(l)}{l}.$$ Since $\sum_{n \in \langle \P_d \rangle} \frac{1}{n}$ is absolutely convergent, the claim now follows from \eqref{el}, \eqref{mun} and the dominated convergence theorem. 
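Both Theorem \ref{mon} and the absolutely convergent (finite-$\P$) case of Theorem \ref{main} are easy to test numerically, since only squarefree $n$ contribute to the sum; a sketch with the arbitrarily chosen set $\P = \{2,3,7\}$:

```python
def restricted_mu_sum(P, x):
    """sum_{n in <P>, n <= x} mu(n)/n; only squarefree products of distinct
    primes of P contribute, so we enumerate subsets recursively."""
    P = sorted(P)
    total = 0.0
    def extend(n, sign, start):
        nonlocal total
        total += sign / n
        for i in range(start, len(P)):
            if n * P[i] > x:
                break          # primes are ascending, later ones are larger
            extend(n * P[i], -sign, i + 1)
    extend(1, 1.0, 0)
    return total

P = [2, 3, 7]
# the partial sums never exceed 1 in magnitude (Theorem on the elementary bound)
for x in [1, 5, 30, 10 ** 4]:
    assert abs(restricted_mu_sum(P, x)) <= 1.0 + 1e-12
# once x exceeds 2*3*7 = 42, the sum equals prod_{p in P} (1 - 1/p) exactly
expected = (1 - 1/2) * (1 - 1/3) * (1 - 1/7)   # = 2/7
assert abs(restricted_mu_sum(P, 100) - expected) < 1e-12
```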
\begin{remark} By a convexity argument, one sees from \eqref{monster} that $|\sum_{n \leq x} \frac{\mu(n) a(n)}{n}| \leq 1$ for every multiplicative function $a: \N \to [0,1]$ taking values between zero and one (note that the expression in absolute values is affine-linear in $a(p)$ for each prime $p$, and \eqref{monster} is the special case when the $a(p)$ take the extreme values of $0$ and $1$); see also \cite{gran}. It is also not difficult to adapt the arguments in this section to show that $\sum_{n=1}^\infty \frac{\mu(n) a(n)}{n}$ converges conditionally to $\prod_p (1 - \frac{a(p)}{p})$; we leave the details to the interested reader. \end{remark} \end{document}
\begin{document} \begin{frontmatter} \title{A lower bound for the uniform Schoenberg operator} \author[up]{Johannes~Nagler} \ead{johannes.nagler@uni-passau.de} \author[ua]{Uwe~K\"ahler} \ead{ukaehler@ua.pt} \address[up]{Fakult\"at f\"ur Informatik und Mathematik, Universit\"at Passau, Germany} \address[ua]{CIDMA -- Center for R\&D in Mathematics and Applications, Universidade de Aveiro, Portugal} \begin{abstract} We present an estimate for the lower bound for the Schoenberg operator with equidistant knots in terms of the second order modulus of smoothness. We investigate the behaviour of iterates of the Schoenberg operator and in addition, we show an upper bound of the second order derivative of these iterates. Finally, we prove the equivalence between the approximation error and the second order modulus of smoothness. \end{abstract} \begin{keyword} spline approximation \sep Schoenberg operator \sep iterates \sep inverse theorem \end{keyword} \end{frontmatter} \fancypagestyle{pprintTitle}{ \lhead{} \chead{}\rhead{} \lfoot{}\cfoot{}\rfoot{{\footnotesize\itshape Preprint, \today}} \renewcommand{\headrulewidth}{0.0pt} } \section{Introduction} L. Beutel et al. stated in \cite{Beutel:2002} an interesting conjecture about the equivalence of the approximation error of the Schoenberg operator on $\ivcc{0}{1}$ and the second order modulus of smoothness. We prove that this conjecture holds true for the uniform Schoenberg operator if the degree of the splines is fixed and the mesh gauge tends to zero. To this end, we characterize the behaviour of the iterates of the Schoenberg operator. Related to our result is the work of Zapryanova et al. \cite{Zapryanova:2012}, who proved an inverse theorem for the uniform Schoenberg operator using the Ditzian-Totik modulus of smoothness. In contrast to their result, we give a direct lower bound.
More specifically, we show that for $f \in \spacecf$ we have the uniform estimate \begin{equation*} \omega_2(f, \delta) \leq 5 \cdot \normi{ f-S_{n, k}\, f}, \end{equation*} where $\omega_2(f,\delta)$ is the classical modulus of smoothness. \subsection{The Schoenberg operator} For integers $n, k > 0$, we consider the equidistant knots $\{x_j = \frac{j}{n}\}_{j=0}^{n}$ as a partition of $\ivcc{0}{1}$. We extend this knot sequence by setting \begin{equation*} x_{-k} = \cdots = x_0 = 0 < x_1 < \ldots < x_n = \cdots = x_{n+k} = 1. \end{equation*} For $f \in \spacecf$, the variation-diminishing spline operator of degree $k$ with respect to the knots $\{x_j\}_{j=-k}^{n+k}$ is then defined by \begin{align*} S_{n, k}\, f(x) &= \sum_{j=-k}^{n-1}f(\xi_{j,k})\bspk{j}(x),\quad 0 \leq x < 1,\\ S_{n, k}\, f(1) &= \lim_{y \nearrow 1} S_{n, k}\, f(y) \end{align*} with the nodes \begin{equation*} \xi_{j,k} := \frac{x_{j+1} + \cdots + x_{j+k}}{k},\quad -k \leq j \leq n-1, \end{equation*} and the normalized B-splines \begin{equation*} \bspk{j}(x) := (x_{j+k+1} - x_j)[x_j,\ldots,x_{j+k+1}](\cdot - x)_+^k. \end{equation*} This operator was introduced by Schoenberg in 1959 as a generalization of the Bernstein operator; see, e.g., \cite{Curry:1966, Marsden:1970}. The normalized B-splines form a partition of unity \begin{equation} \label{eq:partition_unity} \sum_{j=-k}^{n-1}\bspk{j}(x) = 1, \end{equation} and the Schoenberg operator can reproduce linear functions, i.e., \begin{equation} \label{eq:reproduce_linear} \sum_{j=-k}^{n-1}\xi_{j,k}\bspk{j}(x) = x, \end{equation} due to the chosen Greville nodes. A comprehensive overview of direct inequalities for this operator can be found in \cite{Beutel:2002}.
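The identities \eqref{eq:partition_unity} and \eqref{eq:reproduce_linear} can be checked numerically by evaluating the normalized B-splines with the standard Cox--de Boor recursion (a sketch; the recursion and the parameter values $n=10$, $k=3$ are generic choices, not taken from this paper):

```python
def bspline(t, j, k, x):
    """Cox-de Boor recursion for the normalized B-spline of degree k on knots
    t[j], ..., t[j+k+1]; convention 0/0 = 0, half-open support on the right."""
    if k == 0:
        return 1.0 if t[j] <= x < t[j + 1] else 0.0
    val = 0.0
    if t[j + k] > t[j]:
        val += (x - t[j]) / (t[j + k] - t[j]) * bspline(t, j, k - 1, x)
    if t[j + k + 1] > t[j + 1]:
        val += (t[j + k + 1] - x) / (t[j + k + 1] - t[j + 1]) * bspline(t, j + 1, k - 1, x)
    return val

n, k = 10, 3
# clamped uniform knots x_{-k} = ... = x_0 = 0 < 1/n < ... < x_n = ... = x_{n+k} = 1
t = [0.0] * k + [i / n for i in range(n + 1)] + [1.0] * k
nbasis = n + k                                              # indices j = -k, ..., n-1
greville = [sum(t[j + 1: j + k + 1]) / k for j in range(nbasis)]   # Greville nodes

for x in [0.05, 0.37, 0.5, 0.93]:
    vals = [bspline(t, j, k, x) for j in range(nbasis)]
    assert abs(sum(vals) - 1.0) < 1e-12                                  # partition of unity
    assert abs(sum(g * v for g, v in zip(greville, vals)) - x) < 1e-12   # reproduces x
```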
\subsection{Notation} Throughout this paper, we will consider the Banach space $\spacecf$, i.e., the space of real-valued continuous functions on the interval $\ivcc{0}{1}$ endowed with the supremum norm $\normi{\cdot}$, \begin{equation*} \norm{f}_\infty = \sup\cset{\abs{f(x)}}{x \in \ivcc{0}{1}}, \qquad f\in \spacecf. \end{equation*} The space of bounded linear operators on $\spacecf$ will be denoted by $\mathcal{B}(\spacecf)$ equipped with the usual operator norm $\normop{\cdot}$. As an $(n+k)$-dimensional subspace of $\spacecf$, we denote by $\mathcal{S}(n, k)$ the spline space of degree $k$ with respect to the knot sequence $\setof{x_j}_{j=-k}^{n+k}$, \begin{equation*} \mathcal{S}(n, k) = \cset{\sum_{j=-k}^{n-1}c_j \bspk{j}}{c_j \in {\mathbb R},\ j \in \setof{-k,\ldots,n-1}} \subset \spacedf{k-1}. \end{equation*} Since $\mathcal{S}(n, k)$ is finite-dimensional, $\mathcal{S}(n, k)$ is a Banach space with the inherited norm $\normi{\cdot}$. For more information on spline spaces see, e.g., \cite{deBoor:1987}. For $f \in \spacecf$ and points $x_0,\ldots,x_{k} \in \ivcc{0}{1}$, the divided difference $[x_0,\ldots,x_{k}]f$ is defined to be the coefficient of $x^k$ in the unique polynomial of degree $k$ or less that interpolates $f(x)$ at the points $x_0,\ldots,x_{k}$. \section{The iterates of the Schoenberg operator} In the following, we discuss some basic properties of the iterates of the Schoenberg operator. For $m \in {\mathbb N}$, we define \begin{equation*} (\Snkp{m} f)(x) = (\Snkp{m-1}(S_{n, k}\, f))(x)\qquad\text{for all } x \in \ivcc{0}{1}. \end{equation*} \begin{lemma} \label{lemma:iterates} We can write the $m$-th iterate of the Schoenberg operator as \begin{align*} \Snkp{m} f(x) &= \Snkp{m-1}\left(\sum_{j=-k}^{n-1}f(\xi_{j,k})N_{j,k}(x)\right)\\ & = \sum_{j_1,\ldots, j_m=-k}^{n-1}f(\xi_{j_1,k})N_{j_1,k}(\xi_{j_2,k})\cdots N_{j_{m-1},k}(\xi_{j_m,k}) N_{j_m,k}(x). \end{align*} \end{lemma} \begin{proof} Induction over $m$.
\end{proof} \subsection{The first and second derivative of the iterates} In this section, we consider the derivatives and give explicit representations. For that, we define a discrete backward difference operator $\Delta_l$ by \begin{equation*} \Delta_l f(\xi_{j,k}) := \frac{f(\xi_{j,k}) - f(\xi_{j-1,k})}{\xi_{j,l} - \xi_{j-1, l}}. \end{equation*} With this, we can state: \begin{lemma} \label{lemma:derivative_spline} The following properties hold for the derivatives of the Schoenberg operator: \begin{align*} D S_{n, k}\, f &= S^+_{n,k-1}\, \Delta_{k} f \shortintertext{and} D^2 S_{n, k}\, f &= S^{++}_{n,k-2}\, \Delta_{k-1} \Delta_{k} f, \end{align*} where $S^+_{n,k}f$ and $S^{++}_{n,k}$ are Schoenberg operators with shifted knots defined by \begin{equation*} S^+_{n,k} f = \sum_{j=-k}^{n-1}f(\xi_{j, k+1})\bspk{j}\qquad\text{and}\qquad S^{++}_{n,k} f = \sum_{j=-k}^{n-1}f(\xi_{j, k+2})\bspk{j}. \end{equation*} \end{lemma} \begin{proof} This lemma follows directly by the representation of the derivative, \cite{Marsden:1970}: \begin{align*} DS_{n, k}\, f(x) &= \sum_{j=1-k}^{n-1}\frac{f(\xi_{j,k}) - f(\xi_{j-1,k})}{\xi_{j,k} - \xi_{j-1,k}} \bspkm{j}{1}(x), \intertext{and} D^2S_{n, k}\, f(x) &= \sum_{j=2-k}^{n-1}\frac{\frac{f(\xi_{j,k}) - f(\xi_{j-1,k})}{\xi_{j,k} - \xi_{j-1,k}} - \frac{f(\xi_{j-1,k}) - f(\xi_{j-2,k})}{\xi_{j-1,k} - \xi_{j-2,k}}}{\xi_{j,k-1} - \xi_{j-1,k-1}} \bspkm{j}{2}(x). \end{align*} Applying the definition of the discrete backward difference operator $\Delta_l$ gives the required representation. \end{proof} Now we give an analogous representation for the iterates of the Schoenberg operator.
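The first representation in Lemma \ref{lemma:derivative_spline} can be sanity-checked against a central finite difference of $S_{n,k}f$; a numerical sketch under the uniform clamped-knot setup (the Cox--de Boor evaluator, test function and parameter values are generic choices, not from this paper):

```python
def bspline(t, j, k, x):
    """Cox-de Boor recursion for the normalized degree-k B-spline on knots
    t[j], ..., t[j+k+1]; convention 0/0 = 0."""
    if k == 0:
        return 1.0 if t[j] <= x < t[j + 1] else 0.0
    val = 0.0
    if t[j + k] > t[j]:
        val += (x - t[j]) / (t[j + k] - t[j]) * bspline(t, j, k - 1, x)
    if t[j + k + 1] > t[j + 1]:
        val += (t[j + k + 1] - x) / (t[j + k + 1] - t[j + 1]) * bspline(t, j + 1, k - 1, x)
    return val

n, k = 10, 3
t = [0.0] * k + [i / n for i in range(n + 1)] + [1.0] * k   # clamped uniform knots
nbasis = n + k
xi = [sum(t[j + 1: j + k + 1]) / k for j in range(nbasis)]  # Greville nodes xi_{j,k}

def S(f, x):
    """Schoenberg operator S_{n,k} f."""
    return sum(f(xi[j]) * bspline(t, j, k, x) for j in range(nbasis))

def DS(f, x):
    """Claimed representation: D S_{n,k} f = sum_j Delta_k f(xi_{j,k}) N_{j,k-1},
    where the degree-(k-1) spline lives on knots t[j], ..., t[j+k]."""
    out = 0.0
    for j in range(1, nbasis):
        out += (f(xi[j]) - f(xi[j - 1])) / (xi[j] - xi[j - 1]) * bspline(t, j, k - 1, x)
    return out

f = lambda u: u ** 3 - 0.4 * u
x, h = 0.41, 1e-6
numeric = (S(f, x + h) - S(f, x - h)) / (2 * h)   # central finite difference
assert abs(DS(f, x) - numeric) < 1e-6
```

The agreement reflects that $\xi_{j,k}-\xi_{j-1,k} = (x_{j+k}-x_j)/k$, which turns the standard B-spline derivative formula into the stated backward-difference form.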
\begin{theorem} The first and the second derivative of the iterates of the Schoenberg operator have the following representation: \begin{align*} \MoveEqLeft D \Snkp{m} f(x) = \sum_{j_m = 1-k}^{n-1} \sum_{j_1,\ldots, j_{m-1}=-k}^{n-1} f(\xi_{j_1,k}) N_{j_1, k}(\xi_{j_2, k}) \cdots N_{j_{m-2}, k}(\xi_{j_{m-1}, k}) \cdots\\ &\cdots \left[ \frac{N_{j_{m-1}, k}(\xi_{j_m, k}) - N_{j_{m-1}, k}( \xi_{{j_m} - 1, k})}{\xi_{{j_m}, k} - \xi_{{j_m} - 1, k}} \right] N_{j_m,k-1}(x)\\ &= \sum_{j_m = 1-k}^{n-1} \sum_{j_1,\ldots, j_{m-1}=-k}^{n-1} f(\xi_{j_1,k}) N_{j_1, k}(\xi_{j_2, k}) \cdots N_{j_{m-2}, k}(\xi_{j_{m-1}, k}) \Delta_k N_{j_{m-1}}(\xi_{j_m}) N_{j_m,k-1}(x) . \end{align*} and \begin{align*} \MoveEqLeft D^2 \Snkp{m} f(x) = \sum_{j_m = 2-k}^{n-1} \sum_{j_1,\ldots, j_{m-1}=-k}^{n-1} f(\xi_{j_1,k}) N_{j_1, k}(\xi_{j_2, k}) \cdots N_{j_{m-2}, k}(\xi_{j_{m-1}, k}) \cdots\\ &\cdots \frac{ \left[ \frac{N_{j_{m-1}, k}(\xi_{j_m, k}) - N_{j_{m-1}, k}( \xi_{{j_m} - 1, k})}{\xi_{{j_m}, k} - \xi_{{j_m} - 1, k}} \right] - \left[ \frac{N_{j_{m-1}, k}(\xi_{{j_m}-1, k}) - N_{j_{m-1}, k}( \xi_{{j_m} - 2, k})}{\xi_{{j_m}-1, k} - \xi_{{j_m} - 2, k}} \right] }{\xi_{{j_m},k-1} - \xi_{{j_m}-1, k-1}} N_{j_m,k-2}(x)\\ &= \sum_{j_m = 2-k}^{n-1} \sum_{j_1,\ldots, j_{m-1}=-k}^{n-1} f(\xi_{j_1,k}) N_{j_1, k}(\xi_{j_2, k}) \cdots N_{j_{m-2}, k}(\xi_{j_{m-1}, k}) \Delta_{k-1} \Delta_k N_{j_{m-1}}(\xi_{j_m}) N_{j_m,k-2}(x) . \end{align*} \end{theorem} \begin{proof} Applying Lemmas \ref{lemma:iterates} and \ref{lemma:derivative_spline} to $S_{n,k}^{m-1}f$ yields the result. \end{proof} \subsubsection{An upper bound for the second derivative of the iterates} Our idea is now to work with the shift invariant basis functions $\bspk{j}$, $j \in \setof{0,\ldots,n-k-1}$, to stay away from the boundary of the interval $\ivcc{0}{1}$. Then we can represent the Schoenberg operator as a convolution operator and apply known techniques for this kind of operator.
Therefore, let $x \in \ivcc{x_{2k+2}}{x_{n-2k-2}}$. Then we have \begin{equation*} x \not\in \bigcup_{j=-k}^{k+1} \mathrm{supp\,} \bspk{j} \text{ and } x \not\in \bigcup_{j=n-2k-2}^{n-1} \mathrm{supp\,} \bspk{j}, \end{equation*} because $\mathrm{supp\,}\bspk{j} \subset \ivcc{x_j}{x_{j+k+1}}$. Moreover, we can simplify the notation of the iterates of the Schoenberg operator for $x \in \ivcc{x_{2k+2}}{x_{n-2k-2}}$ to \begin{equation*} \Snkp{m} f(x) = \sum_{j_1,\ldots, j_m=0}^{n-k-1}f(\xi_{j_1,k})N_{j_1,k}(\xi_{j_2,k})\cdots N_{j_{m-1},k}(\xi_{j_m,k}) N_{j_m,k}(x). \end{equation*} Now, we show that the basis functions $\setof{\bspk{j}}_{j=0}^{n-k-1}$ are shift invariant. \begin{theorem} \label{thm:shift_invariance} The $\bspk{j}$ with $j \in \setof{0,\ldots, n-k-1}$ are translates of each other, i.e., \begin{equation*} \bspk{j+1}(\xi_i) = \bspk{j}(\xi_{i-1}), \end{equation*} and $\mathrm{supp\,} \mathrm{span\,}\bspk{j} \subset \ivcc{0}{1}$. \end{theorem} \begin{proof} As $\mathrm{supp\,} \bspk{j} \subset \ivcc{x_{j}}{x_{j+k+1}}$, all corresponding knots $x_i$, $i \in \setof{j, \ldots, j+k+1}$, are distinct from each other. Explicitly, we have $x_i = \frac{i}{n}$. Now let $h = 1/n$. Then, we get \begin{align*} \bspk{j+1}(\xi_i) &= (x_{j+k+2} - x_{j+1})[x_{j+1},\ldots,x_{j+k+2}](\cdot - \xi_i)_+^k\\ &= (x_{j+k+1} - x_{j}) \frac1{h^k\cdot k!} \sum_{l={j+1}}^{j+k+2}\binom{k+1}{l-j-1}(-1)^{j+k+2-l}(x_l - \xi_i)_+^k\\ &= (x_{j+k+1} - x_{j}) \frac1{h^k\cdot k!} \sum_{l={j}}^{j+k+1}\binom{k+1}{l-j}(-1)^{j+k+1-l}(x_{l+1} - \xi_i)_+^k\\ &= (x_{j+k+1} - x_{j}) \frac1{h^k\cdot k!} \sum_{l={j}}^{j+k+1}\binom{k+1}{l-j}(-1)^{j+k+1-l}(x_{l} - \xi_{i-1})_+^k\\ &= \bspk{j}(\xi_{i-1}). \end{align*} The fourth equality holds because \begin{equation*} x_{l+1} - \xi_{i} = \frac{k\cdot (l+1) - \sum_{j=1}^k(i+j)}{nk} = \frac{k\cdot l - \sum_{j=1}^k(i+j-1)}{nk}\\ = x_{l} - \xi_{i-1}.
\end{equation*} \end{proof} With Theorem \ref{thm:shift_invariance}, we get the following corollary: \begin{corollary} For $m \in {\mathbb N}$ and $x \in \ivcc{x_{2k+2}}{x_{n-2k-2}}$ we get: \begin{align*} \MoveEqLeft D \Snkp{m} f(x) = \sum_{j_1,\ldots, j_{m}=0}^{n-k-1} f(\xi_{j_1,k}) N_{j_1, k}(\xi_{j_2, k}) \cdots N_{j_{m-2}, k}(\xi_{j_{m-1}, k}) \Delta_k N_{j_{m-1}}(\xi_{j_m}) N_{j_m,k-1}(x)\\ &= \sum_{j_1,\ldots, j_{m}=0}^{n-k-1} f(\xi_{j_1,k}) N_{j_1, k}(\xi_{j_2, k}) \cdots \Delta_k N_{j_{m-2}, k}(\xi_{j_{m-1}, k}) N_{j_{m-1}}(\xi_{j_m}) N_{j_m,k-1}(x)\\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad \vdots \\ &= \sum_{j_1,\ldots, j_{m}=0}^{n-k-1} f(\xi_{j_1,k}) \Delta_{k} N_{j_1, k}(\xi_{j_2, k}) N_{j_2, k}(\xi_{j_3, k}) \cdots N_{j_{m-1}}(\xi_{j_m}) N_{j_m,k-1}(x), \end{align*} i.e., the backward difference operator can be applied to $\bspk{j}$ for every index $j$. Thus, we have $m-1$ possibilities to represent the first derivative of the iterated Schoenberg operator. Analogously for $D^2 \Snkp{m} f$, where we have \begin{align*} \MoveEqLeft D^2 \Snkp{m} f(x) = \sum_{j_1,\ldots, j_{m}=0}^{n-k-1} f(\xi_{j_1,k}) N_{j_1, k}(\xi_{j_2, k}) \cdots N_{j_{m-2}, k}(\xi_{j_{m-1}, k}) \Delta_{k-1}\Delta_k N_{j_{m-1}}(\xi_{j_m}) N_{j_m,k-2}(x)\\ &= \sum_{j_1,\ldots, j_{m}=0}^{n-k-1} f(\xi_{j_1,k}) N_{j_1, k}(\xi_{j_2, k}) \cdots \Delta_{k-1} N_{j_{m-2}, k}(\xi_{j_{m-1}, k}) \Delta_{k}N_{j_{m-1}}(\xi_{j_m}) N_{j_m,k-2}(x)\\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad \vdots \\ &= \sum_{j_1,\ldots, j_{m}=0}^{n-k-1} f(\xi_{j_1,k}) \Delta_{k-1}\Delta_{k} N_{j_1, k}(\xi_{j_2, k}) N_{j_2, k}(\xi_{j_3, k}) \cdots N_{j_{m-1}}(\xi_{j_m}) N_{j_m,k-2}(x). \end{align*} Similarly to $D \Snkp{m} f$, we have $\frac{m(m-1)}{2}$ possibilities to represent the second derivative of the $m$-th iterate of the Schoenberg operator.
\end{corollary} We will abbreviate the last term by \begin{equation} \label{eq:iterates_sec_derivative} D^2\, \Snkp{m} f(x) = \sum_{j_1,\ldots, j_{m}=0}^{n-k-1} f(\xi_{j_1,k}) \cdot P(j_1,\ldots,j_m; x) \cdot I_{{l_1},{l_2}}(j_1,\ldots,j_{m-1}; x), \end{equation} where \begin{equation*} P(j_1,\ldots,j_m;x) := \left[\prod_{l=1}^{m-1}N_{j_l,k}(\xi_{j_{l+1},k})\right] N_{j_m,k-2}(x), \end{equation*} and for $l_1,l_2 \in \setof{1,\ldots,m-1}$, $l_1 \leq l_2$, \begin{equation*} I_{{l_1},{l_2}}(j_1,\ldots,j_{m-1}; x) = \begin{dcases} \frac{\Delta_{k-1}\bspk{l_1}(x)\cdot \Delta_k \bspk{l_2}(x)}{\bspk{l_1}(x)\cdot \bspk{l_2}(x)}, & \text{for }l_1 \neq l_2,\\ \frac{\Delta_{k-1}\Delta_k \bspk{l_1}(x)}{\bspk{l_1}(x)}, & \text{for }l_1 = l_2. \end{dcases} \end{equation*} Now we are able to give an upper bound for the second order derivative of the iterated Schoenberg operator: \begin{theorem} For an integer $m \geq 2$, $h=1/n$ and $x \in \ivcc{x_{2k+2}}{x_{n-2k-2}}$ we have the upper bound \begin{equation*} \abs{D^2 \Snkp{m} f(x)} \leq \frac{2\varepsilon_{n,k}}{(m-1)^{3/2} h^2} \cdot \normi{f}. \end{equation*} \end{theorem} \begin{proof} As we have $\frac{m(m-1)}{2}$ possibilities to express $D^2 \Snkp{m} f(x)$, we write \eqref{eq:iterates_sec_derivative} as the following mean: \begin{align*} D^2\, \Snkp{m} f(x) &= \frac2{m(m-1)} \sum_{{l_1}\leq{l_2}=1}^{m-1} D^2\, \Snkp{m} f(x)\\ &= \frac{2}{m(m-1)} \sum_{j_1,\ldots, j_{m}=0}^{n-k-1}\left( f(\xi_{j_1,k}) \cdot P(j_1,\ldots,j_m; x) \cdot \sum_{{l_1}\leq{l_2}=1}^{m-1} I_{{l_1},{l_2}}(j_1,\ldots,j_{m-1}; x)\right). \end{align*} Since $P$ is nonnegative, we can split $P$ into $P=P^{1/2}P^{1/2}$, where $P^{1/2}$ is the nonnegative root.
Then we apply the Cauchy-Schwarz inequality and get in abbreviated notation the following pointwise inequality for $x \in \ivcc{x_{2k+2}}{x_{n-2k-2}}$: \begin{align} \abs{D^2 \Snkp{m} f} &\leq \frac{2}{m(m-1)}\left\{\sum_{j_1,\ldots, j_{m}=0}^{n-k-1}|f|^{2}P\right\}^{\frac1{2}}\left\{\sum_{j_1,\ldots, j_{m}=0}^{n-k-1} P\left(\sum_{{l_1}\leq{l_2}=1}^{m-1} I \right)^2\right\}^{\frac1{2}}\notag\\\label{eq:upper_bound_step1} &\leq \frac{2}{m(m-1)}\left(\normi{f}\cdot 1\right)\left(\sum_{j_1,\ldots, j_{m}=0}^{n-k-1}P\cdot \left(\sum_{{l_1}\leq{l_2}=1}^{m-1} I_{{l_1},{l_2}} \right)^2\right)^{\frac1{2}}. \end{align} Here, we used the partition of unity property of the B-splines, namely that $\sum_{j=-k}^{n-1}\bspk{j}(x) = 1$ holds for all $x \in \ivcc{0}{1}$. Carrying out the summations successively, beginning with $j_1$, $j_2$, $\ldots$, leads to \begin{equation*} \sum_{j_1,\ldots, j_{m}=-k}^{n-1}P(j_1,\ldots,j_m; x) = \sum_{j_m=0}^{n-k-1} N_{{j_m}, k-2}(x) \sum_{j_{m-1}=0}^{n-k-1} N_{{j_{m-1}}, k}(\xi_{j_m, k}) \cdots \sum_{{j_1}=0}^{n-k-1} N_{{j_1}, k}(\xi_{j_2 , k}) = 1. \end{equation*} Together with $\abs{f} \leq \normi{f}$, this yields the bound used for the first factor. Next, we discuss the second product in \eqref{eq:upper_bound_step1}. For the term $\left(\sum_{{l_1}\leq{l_2}=1}^{m-1} I_{{l_1},{l_2}} \right)^2$ we get formally \begin{equation*} \left(\sum_{{l_1}\leq{l_2}=1}^{m-1} I_{{l_1},{l_2}} \right)^2 = \sum_{l=1}^{m-1} I_{l,l}^2 + \sum_{(l_1,l_2) \neq (s_1,s_2)} I_{{l_1},{l_2}} I_{{s_1},{s_2}}. \end{equation*} Note that the last sum vanishes, since for any indices $i,j \in \setof{0,\ldots,n-k-1}$ we have \begin{equation*} \sum_{j=-k}^{n-1} \Delta_{k} \bspk{j}(\xi_i) = 0, \end{equation*} and \begin{equation*} \sum_{j=-k}^{n-1} \Delta_{k-1} \Delta_{k} \bspk{j}(\xi_i) = 0, \end{equation*} because of the partition of unity \eqref{eq:partition_unity}. 
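In passing, the partition-of-unity identity invoked here is easy to confirm numerically; the following is a minimal SciPy sketch (the degree, knot vector and evaluation points are arbitrary illustrative choices, not tied to the notation of this paper):

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                    # spline degree (arbitrary choice)
t = np.arange(-k, 11 + k, dtype=float)   # uniform knot vector
nbasis = len(t) - k - 1                  # number of B-spline basis functions
# each basis function N_{j,k} lives on the local knots t[j], ..., t[j+k+1]
basis = [BSpline.basis_element(t[j:j + k + 2], extrapolate=False)
         for j in range(nbasis)]

x = np.linspace(0.0, 9.5, 40)            # points inside [t_k, t_{n}] = [0, 10]
# outside its support a basis element evaluates to nan; treat that as 0
total = sum(np.nan_to_num(b(x)) for b in basis)
assert np.allclose(total, 1.0)           # partition of unity: sum_j N_{j,k}(x) = 1
```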
That means, if the difference operator $\Delta_{k}$ or $\Delta_{k-1} \Delta_{k}$ is applied to $\bspk{j}$ without being squared, the whole sum vanishes. Therefore, we get \begin{equation*} \sum_{j_1,\ldots, j_{m}=0}^{n-k-1}P(j_1,\ldots,j_m; x) \left(\sum_{{l_1}\leq{l_2}=1}^{m-1} I_{{l_1},{l_2}}\right)^2 = \sum_{j_1,\ldots, j_{m}=0}^{n-k-1}P(j_1,\ldots,j_m; x)\sum_{l=1}^{m-1} I_{l,l}^2. \end{equation*} With this we obtain from \eqref{eq:upper_bound_step1} the final inequality \begin{align*} \abs{D^2 \Snkp{m} f(x)} &\leq \frac{2}{(m-1)^2}\normi{f} \left((m-1)\frac{\varepsilon_{n,k}^2}{h^4}\right)^{\frac1{2}}\\ &\leq \frac{2\varepsilon_{n,k}}{(m-1)^{3/2}h^2}\cdot \normi{f}, \end{align*} where \begin{align*} \varepsilon_{n,k}^2 &:= \sup_i \sum_{j=-k}^{n-1} \frac{\left( \bspk{j}(\xi_{i,k}) - 2\bspk{j} (\xi_{i-1,k}) + \bspk{j}(\xi_{i-2,k})\right)^2}{\bspk{j}(\xi_{i,k}) } = \sup_i \sum_{j=-k}^{n-1} \frac{(\Delta_{k-1}\Delta_{k} \bspk{j}(\xi_{i,k}))^2}{\overline{N}_{j,k}(\xi_{i,k})} \shortintertext{with} \overline{N}_{j,k}(\xi_{i,k}) &:= \begin{cases} \bspk{j}(\xi_{i,k}), & \text{if } \bspk{j}(\xi_{i,k}) \neq 0,\\ 1, &\text{if } \bspk{j}(\xi_{i,k}) = 0.\\ \end{cases} \end{align*} The terms $\overline{N}_{j,k}(\xi_{i,k})$ are formally needed to avoid division by zero in the definition of $\varepsilon_{n,k}^2$. \end{proof} \begin{corollary} \label{cor:upper_bound_iterates} Due to the uniform convergence, we get for $k > 0$ fixed, $m > 1$ and $n \to \infty$ the uniform upper bound \begin{equation*} \normi{D^2 \Snkp{m} f} \leq \frac{2\varepsilon_{n,k}}{h^2\cdot(m-1)^{3/2}}\cdot \normi{f}. 
\end{equation*} \end{corollary} \section{The lower bound of the Schoenberg operator} In this section, we show that for $0 < t \leq \frac1{2}$ and $k \geq 3$, there exists a constant $M > 0$ such that \begin{equation*} M \cdot \omega_2(f, t) \leq \normi{f - S_{n, k}\, f}, \end{equation*} where the second order modulus of smoothness $\omega_2: \spacecf \times \ivoc{0}{\frac{1}{2}} \to \ivco{0}{\infty}$ is defined by \begin{equation*} \omega_2(f,t) := \sup_{0<h<t}\sup_{x \in \ivcc{0}{1-2h}}\abs{f(x) - 2f(x+h) + f(x+2h)}. \end{equation*} As the modulus of smoothness is equivalent to the $K$-functional \cite{Butzer:1967,Johnen:1976}, we can derive the inequality \begin{equation} \label{eq:inequality_modulus_kfunctional} \omega_2(f, t) \leq 4\normi{f-S_{n, k}\, f}+t^2\normi{D^2 S_{n, k}\, f}. \end{equation} To prove our main result, we need to estimate the second term by the approximation error $\normi{f- S_{n, k}\, f}$. In a first step, we show that the second order differential operator $D^2$ is bounded on the spline space. \begin{lemma} \label{lem:bounded_derivative} For $k \geq 3$, the differential operator $D^2: \mathcal{S}(n, k) \to \spaceppk{k-2}$ is bounded with \begin{equation*} \normop{D^2} \leq \frac{4d_k}{h^2}, \end{equation*} where $d_k > 0$ is a constant depending only on $k$. \end{lemma} \begin{proof} Let $s \in \mathcal{S}(n, k)$, $s(x) = \sum_{j=-k}^{n-1}c_j \bspk{j}(x)$, with $\norm{s}_\infty = 1$. According to M. Marsden \cite{Marsden:1970}, Lemma 2 on page 35, we can calculate the second order derivative by \begin{equation*} D^2s(x) = \sum_{j=2-k}^{n-1}\frac{ \frac{c_j - c_{j-1}}{\xi_{j,k} - \xi_{j-1,k}} - \frac{c_{j-1} - c_{j-2}}{\xi_{j-1,k} - \xi_{j-2,k}}}{\xi_{j,k-1} - \xi_{j-1,k-1}} \bspkm{j}{2}(x). 
\end{equation*} Then we obtain with the triangle inequality \begin{align*} \normi{D^2 s} &= \normi{\sum_{j=2-k}^{n-1}\frac{ \frac{c_j - c_{j-1}}{\xi_{j,k} - \xi_{j-1,k}} - \frac{c_{j-1} - c_{j-2}}{\xi_{j-1,k} - \xi_{j-2,k}}}{\xi_{j,k-1} - \xi_{j-1,k-1}} \bspkm{j}{2}(x)}\\ &\leq \frac{\normi{c} + 2\normi{c} + \normi{c}}{h^2} \cdot \normi{ \sum_{j=2-k}^{n-1}\bspkm{j}{2}}, \end{align*} where \begin{equation} \label{eq:norm_coeff} \normi{c} = \max\cset{\abs{c_j}}{j \in \setof{-k, \ldots, n-1}}. \end{equation} According to \cite{deBoor:1973}, there exists $d_k > 0$, such that \begin{equation} \label{eq:stability_spline} d_k^{-1} \norm{c}_\infty \leq \norm{\sum_{j=-k}^{n-1}c_j\bspk{j}}_\infty \leq \norm{c}_\infty. \end{equation} Rewriting the first inequality yields $\norm{c}_\infty \leq d_k$, because $\normi{s} = 1$. Now we use the partition of unity \eqref{eq:partition_unity} to derive the estimate \begin{align*} \normi{D^2 s} &\leq \frac{4}{h^2}d_k. \end{align*} Taking the supremum of all $s \in \mathcal{S}(n, k)$ with $\normi{s} = 1$ yields the result. \end{proof} Now we are able to prove our main result: \begin{theorem} \label{thm:lower_bound} For $0 < t \leq \frac1{2}$ and $k \geq 3$, there exists a constant $M > 0$ only depending on $n$ and $k$, independent of $f$, such that \begin{equation*} M \cdot \omega_2(f, t) \leq \normi{f - S_{n, k}\, f}. \end{equation*} \end{theorem} \begin{proof} We expand $\normi{D^2 S_{n, k}\, f}$ into a telescoping series: \begin{align*} \normi{D^2 S_{n, k}\, f} &= \normi{D^2 S_{n, k}\, f-D^2 \Snkp{2} f+ D^2 \Snkp{2} f -D^2 \Snkp{3} f + \ldots}\\ &\leq \sum_{m=1}^\infty \normi{D^2 \Snkp{m} (f-S_{n,k}f)}\\ &= \normi{D^2 S_{n, k}\, (f-S_{n,k}f)} + \sum_{m=2}^\infty \normi{D^2 \Snkp{m} (f-S_{n,k}f)}. 
\intertext{Then we apply Corollary \ref{cor:upper_bound_iterates} and Lemma \ref{lem:bounded_derivative} and obtain} \normi{D^2 S_{n, k}\, f} &\leq \frac{4 d_k\normi{f-S_{n,k}f}}{h^2} + \sum_{m=1}^\infty \frac{2\varepsilon_{n,k}}{h^2\cdot m^{3/2}}\normi{f-S_{n, k}\, f} \\ &\leq \frac{4d_k + 2 \varepsilon_{n,k} \cdot \zeta(\frac{3}{2}) }{h^2}\, \normi{ f-S_{n, k}\, f}. \end{align*} Finally, applying the above result to \eqref{eq:inequality_modulus_kfunctional} yields the estimate \begin{equation*} \omega_2(f,t) \leq \left(4 + \frac{t^2(4d_k + 2\varepsilon_{n,k}\cdot \zeta(\frac{3}{2}))}{h^2}\right)\normi{ f-S_{n, k}\, f} \, , \end{equation*} so the claim holds with $M = \left(4 + \frac{t^2(4d_k + 2\varepsilon_{n,k}\cdot \zeta(\frac{3}{2}))}{h^2}\right)^{-1}$. \end{proof} \begin{corollary} For $k \geq 3$, $n \to \infty$ and $f \in \spacecf$ the following uniform estimate holds: \begin{equation*} \omega_2(f, \delta) \leq 5 \cdot \normi{ f-S_{n, k}\, f}. \end{equation*} \end{corollary} \begin{proof} With \begin{equation*} \delta = \frac{h}{\sqrt{4d_k + 2\varepsilon_{n,k}\cdot \zeta(\frac{3}{2})}}, \end{equation*} the corollary follows, because for $n \to \infty$ we have that $h \to 0$ and hence $\delta \to 0$. \end{proof} \begin{corollary} For $0 < t \leq \frac1{2}$ and $k \geq 3$, we have the equivalence \begin{equation*} \omega_2(f, t) \sim \normi{f - S_{n, k}\, f} \end{equation*} in the sense that there exist constants $M_1, M_2 > 0$ independent of $f$ and only depending on $n$ and $k$ such that \begin{equation*} M_1 \cdot \omega_2(f, t) \leq \normi{f - S_{n, k}\, f} \leq M_2 \cdot \omega_2(f, t). \end{equation*} \end{corollary} \begin{proof} We apply Theorem \ref{thm:lower_bound} to get the lower inequality and we use the inequality \begin{equation*} \normi{f - S_{n, k}\, f} \leq \left(1 + \frac1{2t^2}\cdot\min\setof{\frac{1}{2k},\, \frac{(k+1)H^2}{12}}\right)\cdot \omega_2(f, t), \end{equation*} from \cite{Beutel:2002} to obtain the upper bound, where \begin{equation*} H := \max\cset{(x_{j+1} - x_j)}{j \in \setof{-k,\ldots,n-1}}. 
\end{equation*} \end{proof} Consequently, we have proved that the conjecture stated in \cite{Beutel:2002} holds true under the conditions of Theorem~\ref{thm:lower_bound}. Additionally, we note that in Corollary~3 we have the relation $d_k \sim 2^k$. Therefore, $\delta$ tends to zero also for $k \to \infty$. With this note, we finally conclude with the following related conjecture: \begin{conjecture} For $n > 0$ fixed and $k \to \infty$, there exists $M > 0$ independent of $n$ and $k$ such that \begin{equation*} M \cdot \omega_2(f, \delta) \leq \normi{ f-S_{n, k}\, f}. \end{equation*} \end{conjecture} \end{document}
\begin{document} \title{Ricci flat Finsler metrics by warped product} \begin{abstract} In this work, we consider a class of Finsler metrics using the warped product notion introduced by Chen, Shen and Zhao in \cite{chen:shen:zhao}, with another ``warping'', one that is consistent with static spacetimes. We give the PDE characterization for the proposed metrics to be Ricci-flat and explicitly construct two non-Riemannian examples. \noindent \textsc{Keywords.} Warped product; Finsler metrics; Ricci curvature; Ricci flat. \end{abstract} \section{Introduction} If $(M,d s_1^2)$, $(N,d s_2^2)$ are Riemannian manifolds, then a warped product is the manifold $M \times N$ endowed with a Riemannian metric of the form \begin{equation} \label{warped prod riem} d s^2 = d s_1^2 + f^2 d s_2^2 \, , \end{equation} where $f$ is a smooth function depending on the coordinates of $M$ only, called the warping function. The notion of a \emph{warped product} is credited to Bishop and O'Neill \cite{bishop:oneill}. However, years earlier, metrics of the form (\ref{warped prod riem}) were already being studied under different names; in \cite{kruch}, for instance, they were called semi-reducible Riemannian spaces. The class of warped product manifolds has proved to be rich and diverse, playing important roles in differential geometry as well as in physics. To illustrate, Bishop and O'Neill introduced warped products in \cite{bishop:oneill} as a means of constructing a large class of complete Riemannian manifolds with negative curvature. For this reason, it seems valuable to study notions of warped product metrics without the quadratic restriction, in the setting of Finsler geometry. Notably, progress in this direction has been stimulated by efforts to expand general relativity, such as the work of Asanov (e.g. 
\cite{asanov85}, \cite{asanov91}, \cite{asanov98}), which later motivated Kozma, Peter and Varga to study product manifolds $M \times N$ endowed with a Finsler metric \begin{equation} \label{warped prod 1} F = \sqrt{F_1^{2} + f^{2}F_{2}^{2}} \, , \end{equation} called warped product, where $(M,F_1)$, $(N,F_2)$ are Finsler manifolds and $f$ is a smooth function on $M$ (see \cite{kozma:peter:varga}). Following the definition of Beem \cite{beem}, one may take $L = F^2$ to consider pseudo-Finsler metrics. For example, if $(M, F_1)$ is a $3$-dimensional Finsler manifold and $(\mathbb{R},F_2)$ is a Minkowski space, then \begin{equation} \label{static spacetime 1} L = f^2 F_2^2 - F_1^2 \end{equation} is a Finsler metric with Lorentz signature, and $(\mathbb{R}\times M, L)$ may be regarded as a Finsler static spacetime. This is the case for \cite{li:chang}, where Li and Chang studied metrics of the form (\ref{static spacetime 1}), given on coordinates $((t,r,\theta , \varphi),(y^t,y^r,y^\theta , y^\varphi))$ of the tangent bundle by \begin{equation*} L = f^2 (y^t)^2 - \left[g^2(y^r)^2 + r^2\overline{F}^2\right] \, , \end{equation*} with $\overline{F}$ a Finsler metric on coordinates $(\theta, \varphi,y^\theta , y^\varphi)$ and $f,g$ functions of $r$. They suggested that the vacuum field equation for Finsler spacetimes is equivalent to the vanishing of the Ricci scalar, and obtained a non-Riemannian exact solution similar to the Schwarzschild metric. Their results demonstrate the viability of the explicit construction of Ricci-flat Finsler metrics by warped product. Recently, Chen, Shen and Zhao have considered product manifolds $\mathbb{R}\times M$ with Finsler metrics arising from warped products in the following way: if $(M,\alpha^2)$, $(\mathbb{R},d t^2)$ are Riemannian manifolds, then $F^2 = d t^2 + f^2(t)\alpha^2$ is a warped product, which may be rewritten as $F = \alpha\sqrt{\left(\frac{d t}{\alpha}\right)^2 + f^2(t)}$. 
Letting $z = \frac{d t}{\alpha}$, they defined a class of Finsler metrics by \begin{equation} \label{warped prod 2} F = \alpha \sqrt{\phi(z,t)} \, , \end{equation} also called warped product, where $\phi$ is a suitable function on $\mathbb{R}^2$, (see \cite{chen:shen:zhao}). In the present work, we wish to consider Finsler metrics of similar type as (\ref{warped prod 2}), with another ``warping'', one that is consistent with the form of metrics modeling (Riemannian) static spacetimes and simplified by spherical symmetry over spatial coordinates, which emerged from the Schwarzschild metric in isotropic rectangular coordinates $(t,x^1,x^2,x^3)$: \begin{equation} \label{sch iso coord} d s^2 = \frac{\left(1 - \frac{m}{4\rho}\right)^2}{\left(1 + \frac{m}{4\rho}\right)^2} c^2 d t^2 - \left(1 + \frac{m}{4\rho}\right)^4 \left[(d x^1)^2 + (d x^2)^2 + (d x^3)^2\right] \, , \end{equation} where $\rho=\sqrt{(x^1)^2+(x^2)^2+(x^3)^2}$ (see for example \cite{edd}, p. 93). Letting $z = \frac{d t}{\alpha}$ and $\alpha = \sqrt{(d x^1)^2 + (d x^2)^2 + (d x^3)^2}$, the Schwarzschild metric (\ref{sch iso coord}) is written as $$d s = \alpha \sqrt{ \frac{\left(1 - \frac{m}{4\rho}\right)^2}{\left(1 + \frac{m}{4\rho}\right)^2} c^2 z^2 - \left(1 + \frac{m}{4\rho}\right)^4 } \, .$$ For $\mathbb{R}$, $\mathbb{R}^n$ with their Euclidean metrics $d t^2$, $\alpha^2$ (respectively), define a (positive-definite) Finsler metric on $\mathbb{R}\times\mathbb{R}^n$ by \begin{equation*} F = \alpha \sqrt{\phi(z,\rho)} \, , \end{equation*} where $z = \frac{d t}{\alpha}$, $\rho = \vert \overline{x}\vert$ for $\overline{x}\in \mathbb{R}^n$, and $\phi$ a suitable function on $\mathbb{R}^2$. We give the PDE characterization for the proposed metrics to be Ricci-flat: \begin{thm} \label{thm} For $n\geq 2$, $F = \alpha\sqrt{\phi(z,\rho)}$ is Ricci-flat if and only if $P(z,\rho) = Q(z,\rho) = 0$. Furthermore, the Ricci-flat condition is weaker when $n=1$; namely, $P(z,\rho) + \rho^2Q(z,\rho) = 0$. 
\end{thm} Here $P$, $Q$ are functions of $\phi$ and its derivatives, described by equations (\ref{ricci comp}). Next, we construct three examples. The first presents the Riemannian case to corroborate computations, the second presents $m$-th root metrics, and the third Randers metrics. For $n\geq 3$, the non-Riemannian solutions are: \begin{align*} \phi(z, \rho) &= (Az^m + B\rho^{-2m})^{\frac{2}{m}} \, , \; A,B > 0 &(\ref{m3sol})\\ \phi(z, \rho) &= (\sqrt{A z^2 + B\rho^{-4}} + \varepsilon\sqrt{A}z)^2 \, , \; A,B > 0 \, , \; 0 < \vert \varepsilon \vert < 1 &(\ref{r3sol}) \end{align*} Whenever possible, we describe solutions with Lorentz signature, for which the $4$-dimensional metrics may also be studied as Finsler static spacetimes satisfying the vacuum field equation proposed in \cite{li:chang}. \section{Geometric Quantities} Set $M = \mathbb{R}\times\mathbb{R}^n$ with coordinates on $TM$ \begin{align*} x=(x^0,\overline{x}) &, \; \overline{x} = (x^1,\ldots , x^n) \, , \\ y=(y^0,\overline{y}) &, \; \overline{y} = (y^1,\ldots , y^n) \, ; \end{align*} and consider a Finsler metric \begin{equation} \label{def} F = \alpha\sqrt{\phi(z,\rho)}\, , \end{equation} where $\alpha= \vert\overline{y}\vert$, $z=\frac{y^0}{\vert\overline{y}\vert}$ and $\rho=\vert\overline{x}\vert$. Throughout our work, the following convention for indices is adopted: $A, B, \ldots$ range from $0$ to $n$; $i, j, \ldots$ range from $1$ to $n$. This construction is the same as \cite{chen:shen:zhao} but for the ``warping''. Consequently, any calculations involving $F$ and its derivatives of any degree with respect to $y^A$ only will be similar in form to the calculations in \cite{chen:shen:zhao}, e.g. the fundamental form. The effects of the warping only appear when derivatives of $F$ with respect to $x^A$ are involved, e.g. spray coefficients. 
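In passing, the determinant identity for the fundamental tensor computed below, $\det(g_{AB}) = 2^{-(n+1)}\Omega^{n-1}\Lambda$ with $\Omega = 2\phi - z\phi_z$ and $\Lambda = 2\phi\phi_{zz} - \phi_z^2$, can be spot-checked symbolically. The following SymPy sketch (illustrative, not part of the formal argument; it fixes $n = 2$ and one concrete non-quadratic test function $\phi$) verifies the identity at an exact rational point:

```python
import sympy as sp

# Spot check (n = 2, one concrete non-quadratic test function phi) of
# the determinant identity det(g_AB) = 2^{-(n+1)} Omega^{n-1} Lambda.
y0, y1, y2, r = sp.symbols('y0 y1 y2 rho', positive=True)
alpha = sp.sqrt(y1**2 + y2**2)

def phi(z, rho):                        # arbitrary test function phi(z, rho)
    return z**4 + z**2 + rho**2 + 1

F2 = alpha**2 * phi(y0 / alpha, r)      # F^2 = alpha^2 phi(z, rho), z = y0/alpha
Y = (y0, y1, y2)
g = sp.Matrix(3, 3, lambda A, B: sp.diff(F2, Y[A], Y[B]) / 2)

z = sp.Symbol('z', positive=True)
Omega = 2*phi(z, r) - z*sp.diff(phi(z, r), z)
Lam = 2*phi(z, r)*sp.diff(phi(z, r), z, 2) - sp.diff(phi(z, r), z)**2
target = (Omega * Lam / 8).subs(z, y0 / alpha)   # n = 2: 2^{-3} Omega^1 Lambda

# evaluate at an exact rational point (alpha = 5/4 there, so everything is rational)
pt = {y0: sp.Rational(1, 2), y1: sp.Rational(3, 4), y2: 1, r: 2}
assert sp.simplify(g.subs(pt).det() - target.subs(pt)) == 0
```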
So the Hessian matrix, $g_{AB} = \frac{1}{2}[F^2]_{y^A y^B}$, is \begin{equation} \label{hessian matrix} (g_{AB}) = \left( \begin{array}{c|c} \frac{1}{2}\phi_{zz} & \frac{1}{2}\Omega_z\frac{y^j}{\alpha} \\ \hline \frac{1}{2}\Omega_z\frac{y^i}{\alpha} & \frac{1}{2}\Omega\delta_{ij} - \frac{1}{2}z\Omega_z\frac{y^i y^j}{\alpha^2} \end{array} \right) \, , \end{equation} where \begin{equation} \Omega := 2\phi -z\phi_z\, , \end{equation} and the same argument as \cite{chen:shen:zhao} to verify non-degeneracy of $F$ applies. It actually simplifies, because $\alpha$ is the Euclidean metric here. Thus, $$\det (g_{AB}) = \frac{1}{2^{n+1}}\Omega^{n-1}\Lambda \, ,$$ where \begin{equation} \Lambda:=\phi_{zz}(\Omega - z\Omega_z) - \Omega_z^2 = 2\phi\phi_{zz} - \phi_z^2\, , \end{equation} and: \begin{prop}[Prop.4.1, \cite{chen:shen:zhao}] $F=\alpha\sqrt{\phi(z,\rho)}$ is strongly convex if and only if $\Omega, \Lambda > 0$. \end{prop} Moreover, letting $L = \alpha^2\phi(z,\rho)$, one has a metric with Lorentz signature $(+,-,\ldots,-)$ if $\Omega,\Lambda < 0$, or $(-,+,\ldots,+)$ if $\Omega > 0$ and $\Lambda < 0$. Henceforth, assume $F$ is non-degenerate. In this case, the inverse of $(g_{AB})$ is \begin{equation} \label{inverse matrix} (g^{AB}) = \left( \begin{array}{c|c} \frac{2}{\Lambda}(\Omega - z\Omega_z) & -\frac{2}{\Lambda}\Omega_z\frac{y^j}{\alpha} \\ \hline -\frac{2}{\Lambda}\Omega_z\frac{y^i}{\alpha} & \frac{2}{\Omega}\delta^{ij} + \frac{2\phi_z\Omega_z}{\Omega\Lambda}\frac{y^i y^j}{\alpha^2} \end{array} \right) \, . 
\end{equation} The spray coefficients $G^C = \frac{1}{4}g^{CA}\left([F^2]_{y^A x^B}y^B - [F^2]_{x^A} \right)$ are: \begin{subequations} \begin{align} G^0 &= (U + z V)(x^m y^m)\alpha \, , \label{g0} \\ G^i &= (V + W) y^i(x^m y^m) - W x^i\alpha^2 \, , \label{gi} \end{align} \end{subequations} where \begin{subequations} \begin{align} U &:= \frac{1}{2\rho\Lambda}(2\phi\phi_{z\rho} - \phi_z\phi_{\rho})\, , \label{u}\\ V &:= \frac{1}{2\rho\Lambda}(\phi_{\rho}\phi_{zz} - \phi_z\phi_{z\rho})\, , \label{v}\\ W &:= \frac{1}{2\rho\Omega}\phi_{\rho}\, . \label{w} \end{align} \end{subequations} The flag curvature tensor by Berwald's formula $$K_B^C = 2[G^C]_{x^B} - [G^C]_{x^A y^B} y^A + 2G^A[G^C]_{y^A y^B} - [G^C]_{y^A}[G^A]_{y^B}$$ gives \begin{dgroup}[frame={0pt},framesep={4pt}] \begin{dmath} K_0^0 = \left[ \rho^2(U + z V)W_z - (2\rho^2W +1)(U_z + V + z V_z) \right] \alpha^2 + \left[ 2(V + W)(U_z + V + z V_z) - (V_z + W_z)(U + z V) + 2U(U_{zz} + 2V_z + z V_{zz}) - \frac{1}{\rho}(U_{z\rho} + V_{\rho} + z V_{z\rho}) - (U_z + V + z V_z)^2 - (U - z U_z - z^2V_z)V_z \right](x^m y^m)^2 \end{dmath} \begin{dmath} K_j^i = - \left[ 2W + (2\rho^2W + 1)(V + W) \right]\alpha^2\delta^i_j +\left[ (V + W)^2 + 2U(V_z + W_z) - \frac{1}{\rho}(V_{\rho} + W_{\rho}) \right](x^m y^m)^2\delta^i_j + \left[ 2W(2W -z W_z) + W_z(U - zW) -\frac{2}{\rho}W_{\rho} \right]\alpha^2x^i x^j + \left[ (V + W) + z(V_z + W_z)(2\rho^2W + 1) + (\rho^2(V + W) + 1)(2W - z W_z) \right]y^i y^j - \left[ 2z U (V_{zz} + W_{zz}) + (3U - z U_z - z V + 5z W)(V_z + W_z) - \frac{z}{\rho}(V_{z\rho} + W_{z\rho}) \right] (x^m y^m)^2\frac{y^i y^j}{\alpha^2} + \left[ - (2W - z W_z)^2 -2U(W_z - z W_{zz}) + \frac{1}{\rho}(2W_{\rho} - z W_{z\rho}) + W_z(U - z U_z + z^2 W_z) \right](x^m y^m) x^i y^j + \left[ -(V + W)^2 + (V_z + W_z)(U + 3z W) + \frac{1}{\rho}(V_{\rho} + W_{\rho}) \right](x^m y^m) x^j y^i \end{dmath} \begin{dmath} K_j^0 = z\left[ (2\rho^2 W + 1)(V + U_z + z V_z) - \rho^2 W_z(U + z V) \right]\alpha y^j + \left[ z(U + z 
V)(V_z + W_z) - 2z U(U_{zz} + 2V_z + z V_{zz}) + (U - z U_z - z^2 V_z)(5W - U_z) - \frac{1}{\rho}(U_{\rho} - z U_{z\rho} - z^2 V_{z\rho}) \right](x^m y^m)^2\frac{y^j}{\alpha} + \left[ (U + z V)(U_z - V + z V_z - 2W) + (V - 3W)(U - z U_z - z^2 V_z) + \frac{1}{\rho}(U_{\rho} + z V_{\rho}) \right](x^m y^m)\alpha x^j \end{dmath} \begin{dmath} K_0^i = \left[ \rho^2 W_z(V - W) - (2\rho^2W + 1)V_z \right]\alpha y^i + \left[ (2W - V - U_z)(V_z + W_z) + 2U(V_{zz} + W_{zz}) -\frac{1}{\rho}(V_{z\rho} + W_{z\rho}) \right](x^m y^m)^2 \frac{y^i}{\alpha} + \left[ (U_z - W)W_z - 2U W_{zz} + \frac{1}{\rho}W_{z\rho} \right](x^m y^m)\alpha x^i \end{dmath} \end{dgroup} After simplification, the Ricci curvature is: \begin{dmath*} \ric = \sum K^A_A \\ = \left[ - (2\rho^2 W + 1)(U_z + n V + (n-3) W) - 2(n W + \rho W_\rho - \rho^2 W_z(U - z W)) \right]\alpha^2 + \left[ 2U(U_{zz} + n V_z + (n-2) W_z) -\frac{1}{\rho}(U_{z\rho} + n V_{\rho} + (n-3) W_{\rho}) + n V(V + 2W) + W((n-5) W + 2z W_z) + U_z(2W - U_z) \right](x^m y^m)^2 \end{dmath*} Let the Ricci curvature components be \begin{dgroup}[frame={0pt},framesep={4pt}] \label{ricci comp} \begin{dmath} P(z,\rho) \defeq - (2\rho^2 W + 1)(U_z + n V + (n-3) W) - 2(n W + \rho W_\rho - \rho^2 W_z(U - z W)) \end{dmath} \begin{dmath} Q(z, \rho) \defeq 2U(U_{zz} + n V_z + (n-2) W_z) -\frac{1}{\rho}(U_{z\rho} + n V_{\rho} + (n-3) W_{\rho}) + n V(V + 2W) + W((n-5) W + 2z W_z) + U_z(2W - U_z) \end{dmath} \end{dgroup} So \begin{dmath} \label{ricci} \ric = P\left(\frac{y^0}{\vert\overline{y}\vert},\vert\overline{x}\vert\right)\langle \overline{y} , \overline{y} \rangle + Q\left(\frac{y^0}{\vert\overline{y}\vert},\vert\overline{x}\vert\right)\langle \overline{x} , \overline{y} \rangle^2 \\ = \left\langle P\left(\frac{y^0}{\vert\overline{y}\vert},\vert\overline{x}\vert\right)\overline{y} + Q\left(\frac{y^0}{\vert\overline{y}\vert},\vert\overline{x}\vert\right)\langle\overline{x},\overline{y}\rangle\overline{x} , \overline{y} \right\rangle \end{dmath} 
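A quick consistency check of these expressions: whenever $\phi$ does not depend on $\rho$, the metric is locally Minkowski, so $U = V = W = 0$ and both Ricci components must vanish identically. A SymPy sketch (the helper below is ours and purely illustrative; $n$ is kept symbolic):

```python
import sympy as sp

z, rho, n = sp.symbols('z rho n', positive=True)

def ricci_components(phi):
    """Build U, V, W and the Ricci components P, Q from a given phi(z, rho)."""
    Lam = 2*phi*sp.diff(phi, z, 2) - sp.diff(phi, z)**2
    Omega = 2*phi - z*sp.diff(phi, z)
    U = (2*phi*sp.diff(phi, z, rho) - sp.diff(phi, z)*sp.diff(phi, rho))/(2*rho*Lam)
    V = (sp.diff(phi, rho)*sp.diff(phi, z, 2) - sp.diff(phi, z)*sp.diff(phi, z, rho))/(2*rho*Lam)
    W = sp.diff(phi, rho)/(2*rho*Omega)
    P = (-(2*rho**2*W + 1)*(sp.diff(U, z) + n*V + (n - 3)*W)
         - 2*(n*W + rho*sp.diff(W, rho) - rho**2*sp.diff(W, z)*(U - z*W)))
    Q = (2*U*(sp.diff(U, z, 2) + n*sp.diff(V, z) + (n - 2)*sp.diff(W, z))
         - (sp.diff(U, z, rho) + n*sp.diff(V, rho) + (n - 3)*sp.diff(W, rho))/rho
         + n*V*(V + 2*W) + W*((n - 5)*W + 2*z*sp.diff(W, z))
         + sp.diff(U, z)*(2*W - sp.diff(U, z)))
    return sp.simplify(P), sp.simplify(Q)

# rho-independent phi: locally Minkowski, hence Ricci-flat
for test_phi in (z**2 + 1, z**4 + 1):
    P, Q = ricci_components(test_phi)
    assert P == 0 and Q == 0
```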
\begin{proof}[{\bf Proof of Theorem \ref{thm}}] Suppose $\ric = 0$. Let $e_i$ denote the $n$-dimensional vector with $1$ in the $i^{th}$ entry and zeros elsewhere. Take $\overline{y} = e_i$ and $\overline{x} = \rho e_j$ for $\rho \geq 0$. By equation (\ref{ricci}), $$P\left(y^0,\rho\right) + Q\left(y^0,\rho\right)\rho^2\delta^{ij} = 0 \, , \; \forall i,j \, .$$ For $n \geq 2$, pick $i\neq j$ to get $P(y^0,\rho) = 0$. Now set $i=j$ to conclude $Q(y^0,\rho) = 0$ for $\rho \neq 0$. Finally, $Q(y^0,0) = 0$ by continuity. The remaining assertions are clear. \end{proof} The above proof suggests that metrics $F$ singular at $(x^0,0)$, or metrics $F$ defined on $\mathbb{R}\times (\mathbb{R}^n\setminus\{0\})$, should also be considered. This becomes evident in the examples below. \section{Examples} \begin{ex}[Riemannian metrics] Suppose $\phi(z, \rho) = \e^{f(\rho)}z^2 + \e^{g(\rho)}$. So $\Omega = 2\e^{g}$, $\Lambda = 4\e^{f + g}$ and $F = \alpha\sqrt{\phi}$ gives a positive-definite Riemannian metric. The Ricci curvature components are: \begin{align*} P &= -\frac{1}{4\rho}\left[ p_2\e^{f-g}z^2 + p_0 \right] \\ Q &= -\frac{1}{4\rho^3}q_0 \end{align*} where \begin{dgroup*} \begin{dmath*} p_2 = 2\rho f^{\prime\prime} + \rho(f^{\prime})^2 + (n-2)\rho f^{\prime}g^{\prime} + 2(n-1)f^{\prime} \end{dmath*} \begin{dmath*} p_0 = 2\rho g^{\prime\prime} + (n-2)\rho(g^{\prime})^2 + \rho f^{\prime}g^{\prime} + 2f^{\prime} + 2(2n-3)g^{\prime} \end{dmath*} \begin{dmath*} q_0 = 2\rho f^{\prime\prime} + 2(n-2)\rho g^{\prime\prime} + \rho(f^{\prime})^2 - 2\rho f^{\prime}g^{\prime} - (n-2)\rho (g^{\prime})^2 - 2f^{\prime} - 2(n-2)g^{\prime} \end{dmath*} \end{dgroup*} By independence of $z$ and $\rho$, the Ricci-flat equations for $n\geq 2$ become $p_2 = p_0 = q_0 = 0$. 
Taking $q_0 - p_2 + n p_0 = 0$ yields: $$4(n-1)\rho g^{\prime\prime} + (n-2)(n-1)\rho(g^{\prime})^2 + 4(n-1)^2g^{\prime} = 0 $$ For $n\geq 3 :$ \begin{equation} \label{Riemann edo} 4\rho g^{\prime\prime} + (n-2)\rho(g^{\prime})^2 + 4(n-1)g^{\prime} = 0 \end{equation} If $g^{\prime} = 0$, then (\ref{Riemann edo}) is trivially satisfied. So $g(\rho)=\overline{B}$ constant is a solution. Otherwise, (\ref{Riemann edo}) is a Bernoulli differential equation in $g^{\prime}$, which can be transformed to a linear ODE by letting $u:= (g^{\prime})^{-1}$. The equation reduces to: $$4\rho u^{\prime} - (n-2)\rho - 4(n-1)u = 0$$ Its solution gives $g(\rho) = \ln(B\vert \rho^{2-n} + C\vert^{\frac{4}{n-2}})$, for $B,C\in \mathbb{R}$ constants with $B>0$. To find $f$, substitute $g$ in $p_0 = 0$. When $g(\rho)=\overline{B}$, $f^{\prime} = 0$ and $f$ is constant also, say $f(\rho) = \overline{A}$. For $g(\rho) = \ln(B\vert \rho^{2-n} + C\vert^{\frac{4}{n-2}})$, equation $p_0= 0$ gives: $$f^{\prime} = \frac{4(n-2)C\rho^{1-n}}{(C-\rho^{2-n})(C+\rho^{2-n})}$$ So $f(\rho) = \ln \left[A\left(\frac{C-\rho^{2-n}}{C+\rho^{2-n}}\right)^2\right]$, for some constant $A>0$. Therefore, when $n\geq 3$, solutions are: \begin{subequations} \begin{align} \phi(z, \rho) &= Az^2 + B \, , \; A,B > 0 \\ \phi(z, \rho) &= A\left(\frac{C-\rho^{2-n}}{C+\rho^{2-n}}\right)^2 z^2 + B\vert \rho^{2-n} + C\vert^{\frac{4}{n-2}} \, , \; A,B > 0 , \, C\in\mathbb{R} \end{align} \end{subequations} For $n = 2$, equation (\ref{Riemann edo}) still holds, but it is already linear: $$\rho g^{\prime\prime} + g^{\prime} = 0$$ So $g(\rho) = \ln (B\vert \rho\vert^C)$, for $B,C\in \mathbb{R}$ constants with $B>0$. Substitute $g$ in $p_0 = 0$ to get: $$(C+2)f^{\prime} = 0$$ If $C\neq -2$, then $f^{\prime} = 0$. So $f(\rho) = \overline{A}$. 
When $C = -2$, equation $p_2 = 0$ yields: $$2\rho f^{\prime\prime} + \rho(f^{\prime})^2 + 2f^{\prime} = 0$$ If $f^{\prime} = 0$, the above equation is trivially satisfied and then $f(\rho) = \overline{A}$. Otherwise, it is a Bernoulli equation in $f^{\prime}$. As before, let $u:= (f^{\prime})^{-1}$ to get a linear ODE: $$2\rho u^{\prime} - \rho - 2u = 0$$ It gives $f(\rho) = \ln\left[(A_1 + A_2\ln\vert\rho\vert)^2\right]$, for real constants $A_1 , A_2$. Thus, for $n=2$, solutions are: \begin{subequations} \begin{align} \phi(z, \rho) &= Az^2 + B\vert\rho\vert^C \, , \; A,B > 0 , \, C\in\mathbb{R}\setminus\{-2\} \\ \phi(z, \rho) &= (A_1 + A_2\ln\vert\rho\vert)^2z^2 + B\rho^{-2} \, , \; A_1,A_2\in\mathbb{R}, \, B > 0 \end{align} \end{subequations} For $n = 1$, the Ricci-flat condition gives $p_2 = p_0 + q_0 = 0$, by independence of $z$ and $\rho$. This gives: $$2 f^{\prime\prime} + (f^{\prime})^2 - f^{\prime}g^{\prime} = 0$$ So either $f(\rho) = \overline{A}$ and $g$ is an arbitrary smooth function of $\rho$, or $g = \ln\left[(f^{\prime})^2\right] + f + \overline{B}$ for any smooth function $f$ of $\rho$. Hence, solutions for $n=1$ are: \begin{subequations} \begin{align} \phi(z, \rho) &= Az^2 + \e^{g(\rho)} \, , \; A > 0 \, , \; g\in C^{\infty} \\ \phi(z, \rho) &= \e^{f(\rho)}\left(z^2 + B[f^{\prime}(\rho)]^2 \right) , \; f\in C^{\infty} , \, B > 0 \end{align} \end{subequations} Finally, if $\phi(z, \rho) = \e^{f(\rho)}z^2 - \e^{g(\rho)}$, then $\Omega = - 2\e^{g}$ and $\Lambda = - 4\e^{f + g}$. So the associated metric $L = \alpha^2\phi$ has Lorentz signature $(+,-,\ldots,-)$. In this case, the Ricci curvature components are: \begin{align*} P &= \frac{1}{4\rho}\left[ p_2\e^{f-g}z^2 - p_0 \right] \\ Q &= -\frac{1}{4\rho^3}q_0 \end{align*} where $p_2$, $p_0$ and $q_0$ are as before. Thus, by independence of $z$ and $\rho$, the Ricci-flat equations reduce to the same system as the positive-definite case. 
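As a sanity check, the non-flat solution found above for $n \geq 3$ can be verified symbolically; a SymPy sketch (illustrative only; we fix $n = 3$ and use the expressions for $p_2$, $p_0$, $q_0$ stated at the beginning of this example):

```python
import sympy as sp

rho, A, B, C = sp.symbols('rho A B C', positive=True)
n = 3                                   # concrete dimension for the check
# the n >= 3 solution pair found above
g = sp.log(B * (rho**(2 - n) + C)**sp.Rational(4, n - 2))
f = sp.log(A * ((C - rho**(2 - n)) / (C + rho**(2 - n)))**2)
fp, gp = sp.diff(f, rho), sp.diff(g, rho)
fpp, gpp = sp.diff(f, rho, 2), sp.diff(g, rho, 2)

# Ricci-flat equations for the Riemannian example
p2 = 2*rho*fpp + rho*fp**2 + (n - 2)*rho*fp*gp + 2*(n - 1)*fp
p0 = 2*rho*gpp + (n - 2)*rho*gp**2 + rho*fp*gp + 2*fp + 2*(2*n - 3)*gp
q0 = (2*rho*fpp + 2*(n - 2)*rho*gpp + rho*fp**2 - 2*rho*fp*gp
      - (n - 2)*rho*gp**2 - 2*fp - 2*(n - 2)*gp)

assert all(sp.simplify(e) == 0 for e in (p2, p0, q0))
```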
\end{ex} \begin{ex}[$m^{th}$-root metrics] If $\phi(z,\rho) = \left(\e^{f(\rho)}z^{m} + \e^{g(\rho)}\right)^{\frac{2}{m}}$ for an even integer $m > 2$, then $\Omega = \frac{2\e^g}{(\e^f z^m + \e^g)^{1-\frac{2}{m}}}$ and $\Lambda = \frac{4(m-1)\e^{f+g}z^{m-2}}{(\e^f z^m + \e^g)^{2(1-\frac{2}{m})}}$. So $F = \alpha\sqrt{\phi}$ is a positive-definite $m^{th}$-root metric. The Ricci curvature components are: \begin{align*} P &= -\frac{1}{2m^2(m-1)\rho}\left[ p_{2m}\e^{2(f-g)}z^{2m} + p_m\e^{f-g}z^m + p_0 \right] \\ Q &= \frac{1}{4m^2(m-1)^2\rho^3}\left[ q_{2m}\e^{2(f-g)}z^{2m} - q_m\e^{f-g}z^m - q_0 \right] \end{align*} where \begin{dgroup*} \begin{dmath*} p_{2m} = (m-2)(m+n-2)\rho(f^{\prime})^2 \end{dmath*} \begin{dmath*} p_m = 2m(m-1)\rho f^{\prime\prime} + m(m-1)\rho(f^{\prime})^2 + (n-2)(3m-4)\rho f^{\prime}g^{\prime} + m[(n-2)(3m-4)+2(m-1)]f^{\prime} \end{dmath*} \begin{dmath*} p_0 = 2m(m-1)\rho g^{\prime\prime} + 2(m-1)(n-2)\rho(g^{\prime})^2 + m\rho f^{\prime}g^{\prime} + m^2f^{\prime} + 2m(m-1)(2n-3)g^{\prime} \end{dmath*} \begin{dmath*} q_{2m} = (m-2)[2m^2 + (n-2)(3m-2)]\rho(f^{\prime})^2 \end{dmath*} \begin{dmath*} q_m = 2(m-2)[ m(m-1)(n-2)\rho f^{\prime\prime} - m(m+n-1)\rho(f^{\prime})^2 + 2(n-2)(m-1)\rho f^{\prime}g^{\prime} + m(m-1)(n-2)f^{\prime}] \end{dmath*} \begin{dmath*} q_0 = 2m^2(m-1)\rho f^{\prime\prime} + 4m(m-1)^2(n-2)\rho g^{\prime\prime} + m^2\rho(f^{\prime})^2 - 4m(m-1)\rho f^{\prime}g^{\prime} - 4(m-1)^2(n-2)\rho (g^{\prime})^2 - 2m^2(m-1)f^{\prime} - 4m(m-1)^2(n-2)g^{\prime} \end{dmath*} \end{dgroup*} By independence of $z$ and $\rho$, the Ricci-flat equations for $n\geq 2$ are $p_{2m} = p_m = p_0 = q_{2m} = q_m = q_0 = 0$. Since $m > 2$, $p_{2m} = q_{2m} = 0$ imply $f^{\prime} = 0$, and equations $p_m = q_m = 0$ are automatically satisfied. 
The remaining equations reduce to: \begin{equation} \label{m p eq} m\rho g^{\prime\prime} + (n-2)\rho (g^{\prime})^2 + (2n-3)mg^{\prime} = 0 \end{equation} \begin{equation} \label{m q eq} (n-2)[m\rho g^{\prime\prime} - \rho(g^{\prime})^2 - mg^{\prime}] = 0 \end{equation} So $f(\rho) = \overline{A}$ and $g(\rho)$ must be determined from the above equations. For $n \geq 3$, combine equations (\ref{m p eq}) and (\ref{m q eq}) to eliminate $g^{\prime\prime}$. This gives: $$g^{\prime}(\rho g^{\prime} + 2m) = 0$$ If $g^{\prime} = 0$, then $g(\rho) = \overline{B}$. Otherwise, $\rho g^{\prime} + 2m = 0$ and so $g(\rho) = \ln(B\rho^{-2m})$ for some constant $B > 0$. Therefore, when $n\geq 3$, solutions are: \begin{subequations} \begin{align} \phi(z, \rho) &= (Az^m + B)^{\frac{2}{m}} \, , \; A,B > 0 \\ \phi(z, \rho) &= (Az^m + B\rho^{-2m})^{\frac{2}{m}} \, , \; A,B > 0 \label{m3sol} \end{align} \end{subequations} For $n = 2$, (\ref{m q eq}) is vacuous and (\ref{m p eq}) gives a linear ODE: $$\rho g^{\prime\prime} + g^{\prime} = 0$$ So $g(\rho) = \ln(B\vert\rho\vert^C)$, for constants $B > 0$ and $C\in\mathbb{R}$. Hence, solutions for $n=2$ are: \begin{equation} \phi(z,\rho) = (Az^m + B\vert\rho\vert^C)^{\frac{2}{m}} \, , \; A,B > 0 \, , \; C\in\mathbb{R} \end{equation} For $n = 1$, the Ricci-flat equations are $2(m-1)p_{2m} - q_{2m} = 2(m-1)p_m + q_m = p_0 + q_0 = 0$, by independence of $z$ and $\rho$. As before, since $m > 2$, $2(m-1)p_{2m} - q_{2m} = 0$ implies $f^{\prime} = 0$, and equation $2(m-1)p_m + q_m = 0$ is automatically satisfied. The remaining equation gives: $$m\rho g^{\prime\prime} - \rho (g^{\prime})^2 - m g^{\prime} = 0$$ If $g^{\prime} = 0$, the above equation is trivially satisfied; then $g(\rho) = \overline{B}$. Otherwise, this is yet again a Bernoulli equation in $g^{\prime}$. 
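For completeness, the reduction behind this substitution (a routine step, valid wherever $g^{\prime} \neq 0$) reads:

```latex
% substituting u := (g')^{-1}, so that u' = -g''/(g')^2, divide by (g')^2:
\begin{align*}
m\rho g^{\prime\prime} - \rho (g^{\prime})^2 - m g^{\prime} = 0
\;&\Longleftrightarrow\;
m\rho \frac{g^{\prime\prime}}{(g^{\prime})^2} - \rho - \frac{m}{g^{\prime}} = 0
\;\Longleftrightarrow\;
m\rho u^{\prime} + \rho + m u = 0 \, .
\end{align*}
```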
Let $u := (g^{\prime})^{-1}$ to obtain a linear ODE: $$m\rho u^{\prime} + \rho + m u = 0$$ Its solution gives $g(\rho) = \ln(B\vert \rho^2 + C \vert^{-m})$ for constants $B > 0$ and $C\in\mathbb{R}$. So, for $n=1$, solutions are: \begin{subequations} \begin{align} \phi(z, \rho) &= (Az^m + B)^{\frac{2}{m}} \, , \; A,B > 0 \\ \phi(z, \rho) &= (Az^m + B\vert \rho^{2} + C \vert^{-m})^{\frac{2}{m}} \, , \; A,B > 0 \, , \; C\in\mathbb{R} \end{align} \end{subequations} For an odd integer $m>2$, all formulas still hold, but the metric generated changes signature according to the sign of $z$, because it determines the sign of $\Lambda$. For $m=2$, $\phi$ simplifies to give a Riemannian metric; in this case, the non-trivial Ricci-flat equations are multiples of the previously found equations for Riemannian metrics. Finally, taking $\phi(z,\rho) = \left(\e^{f(\rho)}z^{m} - \e^{g(\rho)}\right)^{\frac{2}{m}}$ for some integer $m > 2$ with $m \equiv 2 \pmod{4}$ gives a well-defined metric $L = \alpha^2\phi$ with Lorentz signature $(+,-,\ldots,-)$, since $\Omega = - \frac{2\e^g}{(\e^f z^m - \e^g)^{1-\frac{2}{m}}}$ and $\Lambda = - \frac{4(m-1)\e^{f+g}z^{m-2}}{(\e^f z^m - \e^g)^{2(1-\frac{2}{m})}}$. In this setting, the Ricci components are: \begin{align*} P &= -\frac{1}{2m^2(m-1)\rho}\left[ p_{2m}\e^{2(f-g)}z^{2m} - p_m\e^{f-g}z^m + p_0 \right] \\ Q &= \frac{1}{4m^2(m-1)^2\rho^3}\left[q_{2m}\e^{2(f-g)}z^{2m} + q_m\e^{f-g}z^m - q_0 \right] \end{align*} where $p_i$, $q_j$ are as before. Thus, the Ricci-flat equations coincide with the positive-definite case. With some thought, one might consider these equations for other values of $m$. When $m > 2$ is divisible by $4$, one may take $L = \alpha^m\left(\e^{f(\rho)}z^{m} - \e^{g(\rho)}\right)$ to consider Finsler spacetimes in the sense of Pfeifer and Wohlfarth \cite{pfeifer:wohlfarth}. When $m > 2$ is odd, $F = \alpha\left(\e^{f(\rho)}z^{m} - \e^{g(\rho)}\right)^{\frac{1}{m}}$ already makes sense. 
However, in both cases, one must pay attention to the domain of $z$ and $\rho$ to ensure that $\Omega$ and $\Lambda$ are defined and that their signs give the appropriate signature. \end{ex} \begin{ex}[Randers metrics] Assume $\phi(z,\rho) = (\sqrt{\e^{f(\rho)}z^2 + \e^{g(\rho)}} + \varepsilon\e^{\frac{f(\rho)}{2}}z)^2$ with $0 < \vert \varepsilon \vert < 1$, so $F = \alpha\sqrt{\phi}$ gives a positive-definite Randers metric. Indeed, $\Omega = 2\left(\frac{\sqrt{\e^f z^2 + \e^g} + \varepsilon\e^{\frac{f}{2}}z}{\sqrt{\e^f z^2 + \e^g}}\right)\e^g$ and $\Lambda = 4\left(\frac{\sqrt{\e^f z^2 + \e^g} + \varepsilon\e^{\frac{f}{2}}z}{\sqrt{\e^f z^2 + \e^g}}\right)^3\e^{f+g}$. The Ricci curvature components are: \begin{dgroup*} \begin{dmath*} P = -\frac{1}{4\rho\sqrt{\e^f z^2+\e^g}(\sqrt{\e^f z^2+\e^g} + \varepsilon\e^{\frac{f}{2}}z)}\left[ p_4\e^{2f-g}z^4 + 2\varepsilon p_3\e^{\frac{3f}{2}-g}\sqrt{\e^f z^2+\e^g}z^3 + p_2\e^f z^2 + \varepsilon p_1\e^{\frac{f}{2}}\sqrt{\e^f z^2+\e^g}z + p_0\e^g \right] \end{dmath*} \begin{dmath*} Q = \frac{1}{4\rho^3(\e^f z^2+\e^g)^2(\sqrt{\e^f z^2+\e^g} + \varepsilon\e^{\frac{f}{2}}z)^2}\left[ q_6\e^{3f}z^6 + 2\varepsilon q_5\e^{\frac{5f}{2}}\sqrt{\e^f z^2+\e^g}z^5 + 2q_4\e^{2f+g}z^4 + 4\varepsilon q_3\e^{\frac{3f}{2}+g}\sqrt{\e^f z^2+\e^g}z^3 + q_2\e^{f+2g}z^2 + 2\varepsilon q_1\e^{\frac{f}{2}+2g}\sqrt{\e^f z^2+\e^g}z + q_0\e^{3g} \right] \end{dmath*} \end{dgroup*} where $p_i$, $q_j$ are functions of $\rho$, $f$, $g$ and their derivatives of order up to two.
In particular, \begin{dgroup*} \begin{dmath*} p_4 = 2(\varepsilon^2 + 1)\rho f^{\prime\prime} - ((n+1)\varepsilon^2 - 1)\rho(f^{\prime})^2 + (n-2)(\varepsilon^2 + 1)\rho f^{\prime}g^{\prime} + 2(n-1)(\varepsilon^2 + 1)f^{\prime} \end{dmath*} \begin{dmath*} p_3 = 2\rho f^{\prime\prime} - \frac{1}{4}((n+2)\varepsilon^2 + (n-2))\rho (f^{\prime})^2 + (n-2)\rho f^{\prime}g^{\prime} + 2(n-1)f^{\prime} \end{dmath*} \end{dgroup*} For $n\geq 2$, the Ricci-flat equations reduce to $p_i = q_j = 0$, by independence of $z$ and $\rho$. Taking $p_4 - (\varepsilon^2 + 1)p_3 = 0$ gives $$ \frac{(n+2)}{4}(\varepsilon^2 - 1)^2\rho (f^{\prime})^2 = 0 \, .$$ So $f^{\prime} = 0$, and the remaining equations simplify to: \begin{equation} \label{r p eq} 2\rho g^{\prime\prime} + (n-2)\rho (g^{\prime})^2 + 2(2n-3)g^{\prime} = 0 \end{equation} \begin{equation} \label{r q eq} (n-2)[2\rho g^{\prime\prime} - \rho(g^{\prime})^2 - 2g^{\prime}] = 0 \end{equation} Hence, $f(\rho) = \overline{A}$ and $g(\rho)$ must be determined from the above equations. When $n\geq 3$, one may combine equations (\ref{r p eq}) and (\ref{r q eq}) to eliminate $g^{\prime\prime}$, obtaining: $$(n-1)(\rho g^{\prime} + 4)g^{\prime} = 0$$ If $g^{\prime} = 0$, then $g(\rho) = \overline{B}$. Otherwise, $\rho g^{\prime} + 4 = 0$ and so $g(\rho) = \ln (B\rho^{-4})$ for some constant $B>0$. Thus, solutions for $n \geq 3$ are: \begin{subequations} \begin{align} \phi(z, \rho) &= (\sqrt{A z^2 + B} + \varepsilon\sqrt{A}z)^2 \, , \; A,B > 0 \, , \; 0 < \vert \varepsilon \vert < 1 \\ \phi(z, \rho) &= (\sqrt{A z^2 + B\rho^{-4}} + \varepsilon\sqrt{A}z)^2 \, , \; A,B > 0 \, , \; 0 < \vert \varepsilon \vert < 1 \label{r3sol} \end{align} \end{subequations} For $n = 2$, (\ref{r q eq}) is vacuous and (\ref{r p eq}) becomes a linear ODE: $$\rho g^{\prime\prime} + g^{\prime} = 0$$ So $g(\rho) = \ln(B\vert\rho\vert^C)$, for constants $B > 0$ and $C\in\mathbb{R}$.
Hence, when $n=2$, solutions are: \begin{equation} \phi(z,\rho) = \left( \sqrt{A z^2 + B\vert \rho \vert ^{C}} + \varepsilon\sqrt{A}z \right)^2 \, , \; A,B > 0 \, , \; C\in\mathbb{R} , \; 0 < \vert \varepsilon \vert < 1 \end{equation} Finally, for $n = 1$, the Ricci-flat condition once again implies $f^{\prime} = 0$, although the computation is lengthier and will be omitted. All remaining equations are automatically satisfied. Therefore, for $n=1$, solutions are: \begin{equation} \phi(z,\rho) = \left( \sqrt{A z^2 + \e^{g(\rho)}} + \varepsilon\sqrt{A}z \right)^2 \, , \; A > 0 \, , \; g\in C^{\infty} \, , \; 0 < \vert \varepsilon \vert < 1 \end{equation} Clearly, one may rewrite these solutions as $\phi(z,\rho) = \left( \sqrt{A z^2 + \e^{g(\rho)}} + D z \right)^2$ for any constant $D$ satisfying $D^2A^{-1} < 1$. More generally, it is possible to look for solutions of the form $\phi(z,\rho) = \left( \sqrt{\e^{f(\rho)} z^2 + \e^{g(\rho)}} \pm h(\rho) z \right)^2$ with $h^2(\rho) < \e^{f(\rho)}$, but the calculations quickly become cumbersome. In either case, it is unclear how to accommodate Lorentz signature (if that is possible at all). \end{ex} \section{Discussion} The Hessian of the Ricci curvature $$\ric_{AB} = \frac{1}{2}\left[\ric \right]_{y^A y^B}$$ was the first notion of a Ricci curvature tensor for Finsler metrics, introduced by Akbar-Zadeh in 1988. Evidently, $\ric_{AB} = 0$ if and only if $\ric = 0$, and both imply the vanishing of the scalar curvature $R = g^{AB}\ric_{AB}$. By defining the modified Einstein tensor $$ G_{AB} = \ric_{AB} - \frac{1}{2}g_{AB}R $$ in \cite{li:chang}, Li and Chang established the equivalence between the vacuum field equation for Finsler spacetimes and the vanishing of the Ricci curvature. However, the notion of Ricci curvature tensor for Finsler metrics is not unique.
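The equivalence between the vanishing of $\ric_{AB}$ and that of $\ric$ noted above is an instance of Euler's theorem for positively homogeneous functions (a standard argument, spelled out here for convenience): since $\ric$ is positively homogeneous of degree $2$ in $y$, applying Euler's identity twice gives $$\ric_{AB}\,y^A y^B = \frac{1}{2}\, y^A y^B \left[\ric\right]_{y^A y^B} = \frac{1}{2}\, y^A \left[\ric\right]_{y^A} = \ric \, ,$$ so $\ric_{AB} = 0$ forces $\ric = 0$, while the converse is immediate from the definition of $\ric_{AB}$.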
If $R^{\;A}_{B\;CD}$ is the Riemann curvature tensor for Finsler metrics introduced by Berwald in 1926, then $$\widetilde{\ric}_{AB} = \frac{1}{2}\left( R^{\;C}_{A\;CB} + R^{\;C}_{B\;CA} \right)$$ is another notion of Ricci curvature tensor, introduced by Li and Shen in \cite{li:shen}. Moreover, these Ricci tensors differ by a non-Riemannian quantity; namely, $$ \widetilde{\ric}_{AB} - \ric_{AB} = H_{AB} = \frac{1}{2}\left( [\chi_B]_{y^A} + [\chi_A]_{y^B} \right) \, , $$ where the $\chi$-curvature tensor is given by $$\chi_A = \frac{1}{2}\left[ \Pi_{x^B y^A}y^B - \Pi_{x^A} - 2\Pi_{y^A y^B}G^B \right]$$ with $\Pi = \frac{\partial G^C }{\partial y^C}$. So $\widetilde{\ric}_{AB} = 0$ if and only if $\ric_{AB} = 0$ and $H_{AB} = 0$; in other words, the vanishing of $\widetilde{\ric}_{AB}$ is a stronger condition than the vanishing of $\ric_{AB}$. In particular, if $\ric = 0$ and $\chi_A = 0$, then $\widetilde{\ric}_{AB} = 0$. For the proposed metrics $F = \alpha\sqrt{\phi(z,\rho)}$, we have $\Pi = \Psi(x^m y^m)$, where \begin{equation} \label{Psi} \Psi := U_z + (n+2)V + (n-1)W \, , \end{equation} and the $\chi$-curvature is \begin{dgroup} \begin{dmath} \chi_0 = \left[ \frac{1}{2\rho}\Psi_{z\rho} - U\Psi_{zz} - W\Psi_{z} \right]\frac{(x^my^m)^2}{\alpha} + \frac{1}{2}(2\rho^2 W + 1)\Psi_z\alpha \end{dmath} \begin{dmath} \chi_i = \left[ zU\Psi_{zz} - \frac{z}{2\rho}\Psi_{z\rho} + (U + 2zW)\Psi_z \right] \frac{(x^m y^m)^2}{\alpha^2}y^i -\frac{z}{2}(2\rho^2 W + 1)\Psi_z y^i - (U + zW)\Psi_z(x^m y^m)x^i \end{dmath} \end{dgroup} Clearly, $\Psi_z = 0$ is a sufficient condition for the vanishing of the $\chi$-curvature. By direct verification, all solutions in the previous section satisfy $\Psi_z = 0$. Thus, they are strongly Ricci-flat metrics: $\widetilde{\ric}_{AB} = \ric_{AB} = 0$.
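As a quick numerical sanity check (our own illustration, not part of the paper's computations), one can verify that the non-constant profiles $g(\rho)=\ln(B\rho^{-2m})$ and $g(\rho)=\ln(B\rho^{-4})$ found above annihilate the reduced ODE systems (\ref{m p eq})--(\ref{m q eq}) and (\ref{r p eq})--(\ref{r q eq}) for $n\geq 3$; note that $B>0$ drops out, since only derivatives of $g$ enter the equations:

```python
# Sanity check (ours, not from the paper): plug g' and g'' of the
# closed-form solutions into the reduced Ricci-flat ODEs (f' = 0).

def m_case(m, n, rho):
    # g(rho) = ln(B * rho**(-2m))  =>  g' = -2m/rho,  g'' = 2m/rho**2
    g1, g2 = -2.0 * m / rho, 2.0 * m / rho**2
    eq_p = m * rho * g2 + (n - 2) * rho * g1**2 + (2 * n - 3) * m * g1
    eq_q = m * rho * g2 - rho * g1**2 - m * g1   # bracket of the second ODE
    return eq_p, eq_q

def randers_case(n, rho):
    # g(rho) = ln(B * rho**(-4))  =>  g' = -4/rho,  g'' = 4/rho**2
    g1, g2 = -4.0 / rho, 4.0 / rho**2
    eq_p = 2 * rho * g2 + (n - 2) * rho * g1**2 + 2 * (2 * n - 3) * g1
    eq_q = 2 * rho * g2 - rho * g1**2 - 2 * g1   # bracket of the second ODE
    return eq_p, eq_q

for n in (3, 4, 7):
    for rho in (0.5, 1.0, 3.25):
        for m in (3, 4, 6):
            assert all(abs(e) < 1e-9 for e in m_case(m, n, rho))
        assert all(abs(e) < 1e-9 for e in randers_case(n, rho))
print("reduced Ricci-flat ODEs vanish on both solution families")
```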
In addition to the examples presented here, it seems feasible (although lengthy) to construct other types of (strongly) Ricci-flat metrics of the proposed form; in particular, one may look for series expansions. The same type of construction also seems to work well for Ricci-isotropic metrics, $\ric = [(n+1)-1] k(x)F^2$. At the very least, the PDE characterization is similar; namely, for $n \geq 2$, $F=\alpha\sqrt{\phi(z,\rho)}$ is Ricci-isotropic if and only if $P = n k \phi$ and $Q=0$. It might be wise, however, to devote such efforts to a wider class of warped product Finsler metrics, which may allow for global solutions on $\mathbb{R}\times M$; for instance, a class of Finsler metrics defined by \begin{equation} F = \alpha \sqrt{\phi(z,\overline{x})} \, , \end{equation} for $\alpha$ any Riemannian metric on $M$, $z$ as before and $\phi$ some appropriate function on $\mathbb{R}\times M$. \appendix \section{Derivatives} Derivatives of $F^2$: \begin{dgroup*} \begin{dmath*} \left[ F^2 \right]_{y^0} = \alpha\phi_z \end{dmath*} \begin{dmath*} \left[ F^2 \right]_{y^i} = \Omega y^i \end{dmath*} \begin{dmath*} \left[ F^2 \right]_{x^0} = 0 \end{dmath*} \begin{dmath*} \left[ F^2 \right]_{x^i} = \frac{1}{\rho}\phi_{\rho}\alpha^2 x^i \end{dmath*} \begin{dmath*} \left[ F^2 \right]_{y^A x^0} = 0 \end{dmath*} \begin{dmath*} \left[ F^2 \right]_{y^0 x^i} = \frac{1}{\rho}\phi_{z\rho}\alpha x^i \end{dmath*} \begin{dmath*} \left[ F^2 \right]_{y^i x^j} = \frac{1}{\rho}\Omega_{\rho}x^j y^i \end{dmath*} \end{dgroup*} Derivatives of $G^A$: \begin{dgroup*} \begin{dmath*} \left[ G^A \right]_{x^0} = 0 \end{dmath*} \begin{dmath*} \left[ G^0 \right]_{x^i} = \frac{1}{\rho}(U_{\rho} + z V_{\rho})x^i(x^m y^m)\alpha + (U + z V)y^i\alpha \end{dmath*} \begin{dmath*} \left[ G^j \right]_{x^i} = \frac{1}{\rho}(V_{\rho} + W_{\rho})x^i y^j(x^m y^m) + (V + W)y^i y^j - \frac{1}{\rho}W_{\rho}x^i x^j\alpha^2 - W\delta_i^j\alpha^2 \end{dmath*} \begin{dmath*} \left[ G^0 \right]_{y^0} = (U_z
+ V + z V_z)(x^m y^m) \end{dmath*} \begin{dmath*} \left[ G^j \right]_{y^0} = (V_z + W_z)(x^m y^m)\frac{y^j}{\alpha} - W_z x^j\alpha \end{dmath*} \begin{dmath*} \left[ G^0 \right]_{y^i} = (U -z U_z - z^2 V_z)(x^m y^m)\frac{y^i}{\alpha} + (U + z V)x^i\alpha \end{dmath*} \begin{dmath*} \left[ G^j \right]_{y^i} = (V + W)(x^m y^m)\delta_i^j - z(V_z + W_z)(x^m y^m)\frac{y^i y^j}{\alpha^2} + (V + W)x^i y^j + (z W_z - 2W)x^j y^i \end{dmath*} \begin{dmath*} \left[ G^B \right]_{x^0 y^A} = 0 \end{dmath*} \begin{dmath*} \left[ G^0 \right]_{x^i y^0} = \frac{1}{\rho}(U_{z\rho} + V_{\rho} + z V_{z\rho})(x^m y^m)x^i + (U_z + V + z V_z)y^i \end{dmath*} \begin{dmath*} \left[ G^j \right]_{x^i y^0} = \frac{1}{\rho}(V_{z\rho} + W_{z\rho})(x^m y^m)\frac{x^i y^j}{\alpha} + (V_z + W_z)\frac{y^i y^j}{\alpha} - \frac{1}{\rho}W_{z\rho}x^i x^j\alpha - W_z\delta_i^j\alpha \end{dmath*} \begin{dmath*} \left[ G^0 \right]_{x^i y^j} = (U + z V)\delta_j^i\alpha + (U - z U_z - z^2 V_z)\frac{y^i y^j}{\alpha} + \frac{1}{\rho}(U_{\rho} - z U_{z\rho} - z^2 V_{z\rho})(x^m y^m)\frac{x^i y^j}{\alpha} + \frac{1}{\rho}(U_{\rho} + z V_{\rho})x^i x^j\alpha \end{dmath*} \begin{dmath*} \left[ G^k \right]_{x^i y^j} = (V + W)(\delta_j^i y^k + \delta_j^k y^i) + (z W_z - 2W)\delta_i^k y^j + \frac{1}{\rho}(V_{\rho} + W_{\rho})(x^m y^m)x^i\delta_j^k + \frac{1}{\rho}(V_{\rho} + W_{\rho})x^i x^j y^k + \frac{1}{\rho}(z W_{z\rho} - 2W_{\rho})x^i x^k y^j -\frac{z}{\rho}(V_{z\rho} + W_{z\rho})(x^m y^m)\frac{x^i y^j y^k}{\alpha^2} - z(V_z + W_z)\frac{y^i y^j y^k}{\alpha^2} \end{dmath*} \begin{dmath*} \left[ G^0 \right]_{y^0 y^0} = (U_{zz} + 2 V_z + z V_{zz})\frac{(x^m y^m)}{\alpha} \end{dmath*} \begin{dmath*} \left[ G^k \right]_{y^0 y^0} = (V_{zz} + W_{zz})(x^m y^m)\frac{y^k}{\alpha^2} - W_{zz}x^k \end{dmath*} \begin{dmath*} \left[ G^0 \right]_{y^0 y^i} = (U_z + V + z V_z)x^i - z(U_{zz}+ 2 V_z + z V_{zz})(x^m y^m)\frac{y^i}{\alpha^2} \end{dmath*} \begin{dmath*} \left[ G^k \right]_{y^0 y^i} = (V_z + W_z)\frac{(x^m
y^m)}{\alpha}\delta_i^k + (V_z + W_z)\frac{x^i y^k}{\alpha} + (z W_{zz} - W_z)\frac{x^k y^i}{\alpha} - (V_z + z V_{zz} + W_z + z W_{zz})\frac{(x^m y^m)}{\alpha}\frac{y^i y^k}{\alpha^2} \end{dmath*} \begin{dmath*} \left[ G^0 \right]_{y^i y^j} = (U - z U_z - z^2 V_z)\frac{(x^m y^m)}{\alpha}\delta_j^i + (U - z U_z - z^2 V_z)\left[ \frac{x^i y^j}{\alpha} + \frac{x^j y^i}{\alpha}\right] + (-U + z U_z + z^2 U_{zz} + 3z^2 V_z + z^3 V_{zz})\frac{(x^m y^m)}{\alpha}\frac{y^i y^j}{\alpha^2} \end{dmath*} \begin{dmath*} \left[ G^k \right]_{y^i y^j} = z(3(V_z + W_z) + z(V_{zz} + W_{zz}))(x^m y^m)\frac{y^i y^j y^k}{\alpha^4} - z(V_z + W_z)\left[ \frac{x^i y^j y^k}{\alpha^2} + \frac{x^j y^i y^k}{\alpha^2} \right] - z(z W_{zz} - W_z)\frac{x^k y^i y^j}{\alpha^2} + (V + W)(x^i \delta^j_k + x^j \delta^k_i) + (z W_z - 2W)x^k\delta^i_j - z(V_z + W_z)(x^m y^m)\left[ \frac{\delta^j_i y^k}{\alpha^2} + \frac{y^i\delta^j_k}{\alpha^2} + \frac{\delta^k_i y^j}{\alpha^2}\right] \end{dmath*} \end{dgroup*} Derivatives of $\Pi$: \begin{dgroup*} \begin{dmath*} \Pi_{x^0} = 0 \end{dmath*} \begin{dmath*} \Pi_{x^i} = \Psi y^i + \frac{1}{\rho}\Psi_{\rho}(x^m y^m)x^i \end{dmath*} \begin{dmath*} \Pi_{y^0} = \Psi_{z}\frac{(x^m y^m)}{\alpha} \end{dmath*} \begin{dmath*} \Pi_{y^j} = -z\Psi_z\frac{(x^m y^m)}{\alpha}\frac{y^j}{\alpha} + \Psi x^j \end{dmath*} \begin{dmath*} \Pi_{x^0 y^A} = 0 \end{dmath*} \begin{dmath*} \Pi_{x^i y^0} = \Psi_z\frac{y^i}{\alpha} + \frac{1}{\rho}\Psi_{z\rho}\frac{(x^m y^m)}{\alpha}x^i \end{dmath*} \begin{dmath*} \Pi_{x^i y^j} = \Psi\delta_i^j - z\Psi_z\frac{y^i y^j}{\alpha^2} - \frac{z}{\rho}\Psi_{z\rho}\frac{(x^m y^m)}{\alpha}x^i\frac{y^j}{\alpha} + \frac{1}{\rho}\Psi_{\rho}x^ix^j \end{dmath*} \begin{dmath*} \Pi_{y^0 y^0} = \Psi_{zz}\frac{(x^m y^m)}{\alpha^2} \end{dmath*} \begin{dmath*} \Pi_{y^0 y^j} = \Psi_z\frac{x^j}{\alpha} -\left(\Psi_z + z\Psi_{zz} \right)\frac{(x^m y^m)}{\alpha^2}\frac{y^j}{\alpha} \end{dmath*} \begin{dmath*} \Pi_{y^i y^j} = -z\Psi_z\left[ \frac{(x^m 
y^m)}{\alpha^2}\delta_{i}^{j} + \frac{x^i y^j}{\alpha^2} + \frac{x^j y^i}{\alpha^2} \right] + z\left( 3\Psi_z + z\Psi_{zz} \right)\frac{(x^m y^m)}{\alpha^2}\frac{y^i y^j}{\alpha^2} \end{dmath*} \end{dgroup*} \end{document}
\begin{document} \catchline{}{}{}{}{} \title{Detecting and visualizing $3$-dimensional surgery} \author{Stathis Antoniou} \address{ \tiny {School of Applied Mathematical and Physical Sciences, National Technical University of Athens, Greece \newline \textit{santoniou@math.ntua.gr}}} \author{Louis H.Kauffman} \address{\tiny {Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, Chicago, USA \newline Department of Mechanics and Mathematics, Novosibirsk State University, Novosibirsk, Russia\newline \textit{kauffman@uic.edu}}} \author{Sofia Lambropoulou} \address{\tiny {School of Applied Mathematical and Physical Sciences, National Technical University of Athens, Greece \newline \textit{sofia@math.ntua.gr}} } \maketitle \begin{abstract} Topological surgery in dimension $3$ is intrinsically connected with the classification of $3$-manifolds and with patterns of natural phenomena. In this expository paper, we present two different approaches for understanding and visualizing the process of $3$-dimensional surgery. In the first approach, we view the process in terms of its effect on the fundamental group. Namely, we present how $3$-dimensional surgery alters the fundamental group of the initial manifold and present ways to calculate the fundamental group of the resulting manifold. We also point out how the fundamental group can detect the topological complexity of non-trivial embeddings that produce knotting. The second approach can only be applied for standard embeddings. For such cases, we give new visualizations for both types of $3$-dimensional surgery as different rotations of the decompactified $2$-sphere. Each rotation produces a different decomposition of the $3$-sphere which corresponds to a different visualization of the $4$-dimensional process of $3$-dimensional surgery. 
\end{abstract} \keywords{topological surgery, framed surgery, topological process, 3-space, 3-sphere, 3-manifold, handle, decompactification, rotation, 2-sphere, stereographic projection, fundamental group, Poincar\'{e} sphere, topology change, visualization, knots, knot group, blackboard framing} \ccode{2010 Mathematics Subject Classification: 57M05, 57M25, 57M27, 57R60, 57R65} \section{Introduction}\label{Intro} Topological surgery is a mathematical technique introduced by A.H. Wallace~\cite{Wal} and J.W. Milnor~\cite{Milsur} which produces new manifolds out of known ones. It has been used in the study and classification of manifolds of dimensions greater than three while also being an important topological tool in lower dimensions. Indeed, starting with $M=S^2$, $2$-dimensional surgery can produce every compact, connected, orientable $2$-manifold without boundary, see~\cite{Kosniowski,Ad}. Similarly, starting with $M=S^3$, every closed, connected, orientable $3$-manifold can be obtained by performing a finite sequence of $3$-dimensional surgeries, see~\cite{Wal,LickTh,Rolfsen}. Further, the surgery descriptions of two homeomorphic $3$-manifolds are related via the Kirby calculus, see~\cite{Kirby,Rolfsen}. But apart from being a useful mathematical tool, topological surgery in dimensions $1$ and $2$ is also very well suited for describing the topological changes occurring in many natural processes, see~\cite{SS1,SS2}. The described dynamics have also been exhibited by the trajectories of Lotka--Volterra type dynamical systems, which are used in the physical and biological sciences, see~\cite{SS5,SS6,N7}. And, more recently, $3$-dimensional surgery has been proposed for describing cosmic phenomena such as wormholes and cosmic string black holes, see~\cite{BHsurg, SS3,SS4}. Topological surgery uses the fact that the following two $m$-manifolds have the same boundary: $\partial (S^{n} \times D^{m-n})= \partial (D^{n+1} \times S^{m-n-1})=S^{n} \times S^{m-n-1}$.
Its process removes an embedding of $S^{n} \times D^{m-n}$ (an $(m-n)$-thickening of the $n$-sphere) and glues back $D^{n+1} \times S^{m-n-1}$ (an $(n+1)$-thickening of the $(m-n-1)$-sphere) along the common boundary, see Definition~\ref{surgery}. In dimensions $1$ and $2$ this process can be easily understood and visualized in $3$-space as it describes removing and gluing back segments or surfaces. However, in dimension $3$ the process becomes more complex, as the additional dimension leaves room for the appearance of knotting when non-trivial embeddings are used. Moreover, the process requires four dimensions in order to be visualized. In this paper we present how to detect the complexity of surgery via the fundamental group and, for the case of standard embeddings, we propose a new visualization of $3$-dimensional surgery. The first approach is understanding $3$-dimensional surgery by determining the fundamental group of the resulting manifold. This approach is presented for both types of $3$-dimensional surgery, namely $3$-dimensional $0$-surgery and $3$-dimensional $1$-surgery. For the case of $3$-dimensional $1$-surgery, the presence of possible knotting during the process makes its visualization harder, since the resulting $3$-manifolds involve the complexity of the knot complement. However, this complexity can be detected by the fundamental group, as we can describe the framing longitude in terms of the generators of the fundamental group of the knot complement $S^3 \setminus N(K)$ and understand the process of $3$-dimensional $1$-surgery as the process which collapses this longitude in the fundamental group of the resulting manifold. On the other hand, when the standard embedding is used, we can produce a simple visualization of this $4$-dimensional process. Namely, the second approach presents a way to visualize the elementary steps of both types of $3$-dimensional surgery as rotations of the plane.
As we will see, each rotation produces a different decomposition of the decompactified $3$-sphere which corresponds to a visualization of the local process of each type of elementary $3$-dimensional surgery. It is also worth adding that, apart from the two approaches presented here, there are other ways of understanding the non-trivial embeddings of $3$-dimensional $1$-surgery. For example, the whole process can be seen as happening within a $4$-dimensional handle. This approach is discussed in~\cite{SS4}. Further details on this perspective of surgery on framed knots and the Kirby calculus can be found in~\cite{StiGompf}. The paper is divided into four main parts: in Section~\ref{MDProcessBig} we define the processes of topological surgery. Then, in Section~\ref{EmbTopoChange3D} we discuss the topology change induced by a sequence of $3$-dimensional surgeries on a $3$-manifold $M$ and point out the role of the embedding in the case of $3$-dimensional $1$-surgery. In Section~\ref{Fundappendix} we present our first approach, which uses the fundamental group of the resulting manifold for understanding $3$-dimensional surgery. In order to analyze the case of knotted surgery curves in $3$-dimensional $1$-surgery, we also present the blackboard framing of a knot as well as the knot group. Our second approach is discussed in Section~\ref{TrivialLongi}, where we point out how the stereographic projection of the $m$-sphere can be used in order to visualize $m$-dimensional surgery in one dimension lower and we present the visualization of the elementary steps of each type of $3$-dimensional surgery via rotations of the stereographic projection of the $2$-sphere. \section{The process of topological surgery} \label{MDProcessBig} In Section~\ref{MDProcess} we define the process of $m$-dimensional $n$-surgery while in Sections~\ref{Types2D} and~\ref{Types3D} we present the types of $2$- and $3$-dimensional surgery.
\subsection{The process of $m$-dimensional $n$-surgery} \label{MDProcess} \begin{definition} \label{surgery} \rm An \textbf{$m$-dimensional $n$-surgery} is the topological process of creating a new $m$-manifold $M'$ out of a given $m$-manifold $M$ by removing a framed $n$-embedding $h:S^n\times D^{m-n}\hookrightarrow M$, and replacing it with $D^{n+1}\times S^{m-n-1}$, using the `gluing' homeomorphism $h$ along the common boundary $S^n\times S^{m-n-1}$. Namely, and denoting surgery by $\chi$: $$M' = \chi(M) = \overline{M\setminus h(S^n\times D^{m-n})} \cup_{h|_{S^n\times S^{m-n-1}}} (D^{n+1}\times S^{m-n-1}). $$ The resulting manifold $M'$ may or may not be homeomorphic to $M$. Note that, from the definition, we must have $n+1 \leq m$. Also, the horizontal bar in the above formula indicates the topological closure of the set underneath. For details the reader is referred to~\cite{Ra}. \end{definition} \subsection{Types of $2$-dimensional surgery}\label{Types2D} In dimension $2$, the above definition gives two types of surgery. For $m=2$ and $n=0$, we have the \textit{$2$-dimensional $0$-surgery}, whereby two discs $S^0\times D^2$ are removed from a $2$-manifold $M$ and are replaced in the closure of the remaining manifold by a cylinder $D^1\times S^1$: \begin{samepage} \begin{center} $\chi(M) = \overline{M\setminus h(S^0\times D^{2})} \cup_{h} (D^{1}\times S^{1})$ \end{center} \end{samepage} For $m=2$ and $n=1$, \textit{$2$-dimensional $1$-surgery} removes a cylinder $S^1\times D^1$ and glues back two discs $D^2\times S^0$. We will only consider the first type of surgery, as $2$-dimensional $1$-surgery is just the reverse (dual) process of $2$-dimensional $0$-surgery. For example, $2$-dimensional $0$-surgery on $M=S^2$ produces the torus $\chi(M)=S^1 \times S^1$, see Fig.~\ref{2Dex}.
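Concretely, the common-boundary identity from the Introduction specializes here (for $m=2$, $n=0$) to $$\partial (S^0 \times D^2) = S^0 \times S^1 = \partial (D^1 \times S^1) \, ,$$ that is, the two boundary circles of the two removed discs are exactly the two boundary circles of the glued-in cylinder.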
\begin{figure} \caption{ $2$-dimensional $0$-surgery on the sphere } \label{2Dex} \end{figure} \subsection{Types of $3$-dimensional surgery}\label{Types3D} Moving up to dimension $3$, Definition~\ref{surgery} gives us three types of surgery. Namely, starting with a 3-manifold $M$, for $m=3$ and $n=0$, we have the \textit{ $3$-dimensional $0$-surgery}, whereby two 3-balls $S^0\times D^3$ are removed from $M$ and are replaced in the closure of the remaining manifold by a thickened sphere $D^1\times S^2$: \begin{samepage} \begin{center} $\chi(M) = \overline{M\setminus h(S^0\times D^{3})} \cup_{h} (D^{1}\times S^{2})$ \end{center} \end{samepage} Next, for $m=3$ and $n=2$, we have the \textit{ $3$-dimensional $2$-surgery}, which is the reverse (dual) process of $3$-dimensional $0$-surgery. Finally, for $m=3$ and $n=1$, we have the self-dual \textit{ $3$-dimensional $1$-surgery}, whereby a solid torus $S^1\times D^2$ is removed from $M$ and is replaced by another solid torus $D^2\times S^1$ (with the factors now reversed) via a homeomorphism $h$ of the common boundary: \begin{samepage} \begin{center} $\chi(M) = \overline{M\setminus h(S^1\times D^{2})} \cup_{h} (D^{2}\times S^{1})$ \end{center} \end{samepage} For example, let us consider a $3$-dimensional $1$-surgery on $M=S^3$ using the standard embedding $h_{s}$. This embedding restricted to the common boundary $S^1\times S^1$ induces the standard pasting map $h_{s}$ which maps each longitude (respectively meridian) of $S^1\times D^2$ to a meridian (respectively longitude) of $D^2\times S^1$. The operation produces the trivial lens space $L(0,1)$: $\chi(S^3) = \overline{S^3\setminus h_s(S^1\times D^2)} \cup_{h_s} (D^2 \times S^1) = (D^2 \times S^1) \cup_{h_s} (D^2 \times S^1) = S^2 \times S^1 =L(0,1)$. \section{Topology change of $3$-dimensional surgeries}\label{EmbTopoChange3D} Each type of $3$-dimensional surgery induces a different topology change on a $3$-manifold $M$. 
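Although not discussed in the paper, a quick way to see what these surgeries can and cannot change is to track the Euler characteristic: writing both $M$ and $\chi(M)$ as unions along the common boundary, inclusion--exclusion gives $\chi_{{}_{Euler}}(M') = \chi_{{}_{Euler}}(M) - \chi_{{}_{Euler}}(S^n) + \chi_{{}_{Euler}}(S^{m-n-1})$. A minimal Python sketch of this bookkeeping (our own illustration):

```python
# Bookkeeping sketch (ours, not from the paper): effect of an m-dimensional
# n-surgery on the Euler characteristic, using
#   chi(M') = chi(M) - chi(S^n x D^{m-n}) + chi(D^{n+1} x S^{m-n-1})
# together with chi(S^k) = 1 + (-1)^k, chi(D^k) = 1, and multiplicativity
# of chi on products.

def chi_sphere(k):
    return 1 + (-1) ** k

def chi_after_surgery(chi_M, m, n):
    return chi_M - chi_sphere(n) + chi_sphere(m - n - 1)

# 2-dimensional 0-surgery on S^2 (chi = 2) produces the torus (chi = 0)
assert chi_after_surgery(2, m=2, n=0) == 0
# both types of 3-dimensional surgery preserve chi = 0, as they must:
# every closed odd-dimensional manifold has vanishing Euler characteristic
assert chi_after_surgery(0, m=3, n=0) == 0
assert chi_after_surgery(0, m=3, n=1) == 0
print("Euler characteristic bookkeeping consistent")
```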
In Section~\ref{TopoChange3D0}, we discuss the topology change induced by a sequence of $3$-dimensional $0$-surgeries on $M$ and point out that the choice of the embedding $h$ in Definition~\ref{surgery} does not affect the resulting manifold. In Section~\ref{TopoChange3D1}, we discuss the topology change induced by a sequence of $3$-dimensional $1$-surgeries on $M$, where the embedding $h$ plays a crucial role, and introduce the notion of `knot surgery'. \subsection{$3$-dimensional $0$-surgery}\label{TopoChange3D0} The result $\chi(M)$ of a $3$-dimensional $0$-surgery on a $3$-manifold $M$ is homeomorphic to the connected sum $M \# ({S^{1}\times S^{2}})$, independently of the embedding $h$, see~\cite{SS4}. Hence, we will consider the elementary step of $3$-dimensional $0$-surgery to be the one using the standard (or trivial) embedding $h_{s}$. We will use this fact in Section~\ref{TrivialLongi}, where we present how to obtain the elementary step of the local process of $3$-dimensional $0$-surgery via rotation. \subsection{$3$-dimensional $1$-surgery}\label{TopoChange3D1} In contrast with $3$-dimensional $0$-surgery, $3$-dimensional $1$-surgery produces a much greater variety of $3$-manifolds. Indeed, as mentioned in Section~\ref{Intro}, every closed, connected, orientable $3$-manifold can be obtained by performing a finite sequence of $3$-dimensional $1$-surgeries on $M=S^3$, see~{\cite[Theorem 6]{Wal}} and~{\cite[Theorem 2]{LickTh}}. As previously, we will consider the elementary step of $3$-dimensional $1$-surgery to be the one using the standard embedding $h_{s}$. However, here, such elementary steps produce only a restricted family of $3$-manifolds. Indeed, starting from $S^3$, standard embeddings $h_{s}$ can only produce $S^2 \times S^1$ or connected sums of copies of $S^2 \times S^1$, while more complicated $3$-manifolds, such as the Poincar\'{e} homology sphere, require using a non-trivial embedding $h$.
Hence, unlike $3$-dimensional $0$-surgery, the embedding $h$ plays an important role in the resulting manifold of $3$-dimensional $1$-surgery. As mentioned in Section~\ref{Types3D}, the standard embedding $h_{s}$ maps the longitudes of the removed solid torus $V_1=S_1^1\times D^2$ to the meridians of the solid torus $V_2=D^2\times S_2^1$ which is glued back, and vice versa; hence $h_{s}(\ell_1)=m_2$ and $h_{s}(m_1)=\ell_2$. When such an embedding is used, the core and the longitude $\ell_1$ of the removed solid torus $V_1$ are both trivial loops, or unknotted circles. This fact allows us to obtain the elementary step of the local process of $3$-dimensional $1$-surgery via rotation. This visualization is presented alongside the visualization of $3$-dimensional $0$-surgery in Section~\ref{TrivialLongi}, where it is also shown that the visualizations of both types of $3$-dimensional surgery are closely related, as each one corresponds to a different rotation. When using a non-trivial embedding $h$, both the core curve and the longitude of the removed solid torus $h(S^1\times D^2)$ can be knotted. Hence the process of $3$-dimensional $1$-surgery can also be described in terms of knots. We will call this process `knot surgery' in order to differentiate it from the process of $3$-dimensional $1$-surgery where $h_{s}$ is used. Here, we can view the embedding $h(V_1)=h(S_1^1\times D^2)$ as a tubular neighbourhood $N(K)$ of a knot $K$: $N(K)=K\times D^2=h(S_1^1\times D^2)$. The knot $K=h(S_1^1\times \{ 0 \})$ is the surgery curve at the core of the solid torus $N(K)=h(S_1^1 \times D^2)$. On the boundary of $N(K)$, we further define the \textit{framing longitude} $\lambda \subset \partial N(K)$ with $ \lambda=h(S_1^1 \times \{ 1 \} )$, which is a parallel curve of $K$ on $\partial N(K)$, and the meridian $m_1 \subset \partial N(K)$, which bounds a disc of the solid torus $N(K)$ and intersects the core $K$ transversely in a single point.
A \textit{`knot surgery'} (or `framed surgery') along $K$ with framing $\lambda$ on a manifold $M$ is the process whereby $N(K)=h(V_1)$ is removed from $M$ and $V_2=D^2 \times S_2^1$ is glued along the common boundary. The interchange of factors of the `gluing' homeomorphism $h$ along $S_1^1 \times S_2^1$ can now be written as $h(\lambda)=m_2$ and $h(m_1)=\ell_2$. The knottedness of $h$ makes the process harder to visualize. However, the manifold resulting from knot surgery can be understood by determining its fundamental group. In Section~\ref{Fundappendix}, we describe how to calculate this fundamental group by writing down a longitudinal element $\lambda$ in the fundamental group of the knot complement $S^3 \setminus N(K)$. \section{Detecting 3-dimensional surgery via the fundamental group}\label{Fundappendix} The fundamental group is one of the most significant algebraic constructions for obtaining topological information about a topological space. It is a topological invariant: homeomorphic topological spaces have the same fundamental group. In Section~\ref{Fund3d0} we present how to determine the fundamental group of the manifold resulting from $3$-dimensional $0$-surgery. The more complicated topological changes occurring during $3$-dimensional $1$-surgery are analyzed in Section~\ref{Fund3d1}. \subsection{$3$-dimensional $0$-surgery}\label{Fund3d0} As mentioned in Section~\ref{TopoChange3D0}, the resulting manifold of $3$-dimensional $0$-surgery is $M \# ({S^{1}\times S^{2}})$. The fundamental group of $\chi(M)$ can be characterized using the following lemma, which is a consequence of the Seifert--van Kampen theorem (see for example~\cite{Munkres}): \begin{lemma} \label{fundalem3d0} \rm Let $m \geq 3$.
Then the fundamental group of a connected sum of $m$-dimensional manifolds is the free product of the fundamental groups of the components: $$ \pi_1(M \# M')\cong \pi_1(M) * \pi_1(M')$$ \end{lemma} Based on the above, a $3$-dimensional $0$-surgery on $M$ alters its fundamental group as follows: $\pi_1(\chi(M)) \cong \pi_1(M \# ({S^{1}\times S^2})) \cong \pi_1(M) * \pi_1({S^{1}\times S^2})\cong \pi_1(M) * ( \pi_1{(S^{1})} \times \pi_1{(S^{2})}) \cong \pi_1(M) * \mathbb{Z}$. \subsection{$3$-dimensional $1$-surgery}\label{Fund3d1} In Section~\ref{Blackboard}, we present the blackboard framing of a knot, which will allow us to state the theorem determining the fundamental group of the manifold resulting from $3$-dimensional $1$-surgery. This is done in Section~\ref{FundaTr}, where we also discuss the case of framed surgery along the unknot. Next, in Section~\ref{KG}, we describe the fundamental group of a knot and its presentation, which allows us to present the case where the surgery curve is knotted in Section~\ref{surgeryonK}. \subsubsection{The blackboard framing}\label{Blackboard} A framing of a knot can also be viewed as a choice of non-tangent vector at each point of the knot. The \textit{blackboard framing} of a knot is the framing where each of the vectors points in the vertical direction, perpendicular to the plane, see Fig.~\ref{3D_31_Framing}(2). The blackboard framing of a knot gives us a well-defined general rule for determining the framing of a knot diagram. Here the knot diagram is taken up to regular isotopy, namely up to Reidemeister II and III moves (see~\cite{Ad} for details on the Reidemeister moves). We use the curling in the diagram to determine the framing for an embedding corresponding to the knot, as will be explained below. Note that once we have chosen a longitude for the blackboard framing we can allow Reidemeister I moves (that might eliminate a curl) and just keep track of how the longitude now winds on the torus surface.
\smallbreak \begin{figure} \caption{\textbf{(1)} Longitude $l_1$ \textbf{(2)} Longitude $\lambda=l_1+3 \cdot m_1$ } \label{3D_31_13} \end{figure} An example is shown in Fig.~\ref{3D_31_13}(2). This case corresponds to a non-trivial embedding $N(K)=h(S_1^1 \times D^2)$ where both the knot $K$ and the longitude $\lambda$ perform three curls. As also shown in Fig.~\ref{3D_31_13}(2), there is an isotopic embedding of $N(K)$ where the surgery curve $K$ at the core of $N(K)$ is unknotted while the curls of $\lambda$ have become windings around $K$. This allows us to express $\lambda$ in terms of the unknotted longitude $l_1$ of the trivial embedding shown in Fig.~\ref{3D_31_13}(1). Namely, as $\lambda$ performs $3$ revolutions around a meridian, it can be expressed as $\lambda=l_1+3 \cdot m_1$, see Fig.~\ref{3D_31_13}(2). More generally, if a longitude $\lambda$ performs $p$ revolutions around a meridian, it can be expressed as $\lambda=l_1+p \cdot m_1$. The induced `gluing' homeomorphism along the common boundary $S_1^1\times S_2^1$ maps each $\lambda$ of $V_1$ to a meridian of $V_2$, hence $h(l_1+p \cdot m_1)=m_2$, while the meridians of $V_1$ are mapped to longitudes of $V_2$, hence $h(m_1)=h_{s}(m_1)=l_2$. Note that the resulting manifolds obtained by doing a $3$-dimensional $1$-surgery on $M=S^3$ using such framings on the unknot are the lens spaces $L(p,1)$. For $p=0$ we have $h(l_1)=h_s(l_1)=m_2$ and $L(0,1)=S^2 \times S^1$, which was the case presented in Section~\ref{Types3D}. For more details on lens spaces see, for example, \cite{PS}. Note that, since the coefficient $p$ of the meridian is the framing number, this type of surgery is also called `framed surgery'. \smallbreak \begin{figure} \caption{\textbf{(1)} Isotopy of $\lambda$ \textbf{(2)} Blackboard framing of $\lambda$} \label{3D_31_Framing} \end{figure} Recall that in Fig.~\ref{3D_31_13} the framing was $p=3$, as $\lambda$ performs $3$ revolutions.
However, determining the framing of a knot diagram requires a well-defined general rule. For instance, that rule should give the same framing $p=3$ for the isotopic curve shown in Fig.~\ref{3D_31_Framing}(1). This general rule is to take the natural framing of a knot to be its \textit{writhe}, which is the total number of positive crossings minus the total number of negative crossings. The rule for the sign of a crossing is the following: as we travel along the knot, at each crossing we rotate the overcrossing arc counterclockwise; if it then points in the same direction as the undercrossing arc, the crossing is positive, see Fig.~\ref{3D_31_Framing}(2). Otherwise, the crossing is negative, see also Fig.~\ref{3D_31_Framing}(2). Using this convention we can calculate $\lambda$ and be sure that isotopic framed knots will have the same framing. For instance, in Fig.~\ref{3D_31_Framing}(1), the framing number is the writhe of the knot diagram, which is $p=Wr(\lambda)=4-1=3$. \subsubsection{The fundamental group of $\chi_{\mbox{\tiny K}}(S^3)$}\label{FundaTr} In this section, we present the theorem which characterizes the effect of knot surgery on $M=S^3$ by determining the fundamental group of the resulting manifold. We then apply it to the simple case of framed surgery along an unknotted surgery curve. The fundamental group of the $3$-sphere $S^3$ is trivial, as any loop on it can be continuously shrunk to a point without leaving $S^3$. To examine how knot surgery alters the trivial fundamental group of $S^3$, let us consider the tubular neighborhood $N(K)$ of knot $K$. The generators of the fundamental group of $\partial N(K)$ are the longitudinal curve $\lambda$ and the meridional curve $m_1$. Note now that in $V_1=N(K)$ meridional curves bound discs while it is the specified framing longitudinal curve $\lambda$ that bounds a disc in $V_2=D^2 \times S_2^1$, since, after surgery, the disc bounded by $m_2$ now fills in the longitude $\lambda$.
Thus, $\lambda$ is made trivial in the fundamental group of $\chi_K(S^3)$. In this sense, surgery collapses $\lambda$. This statement is made precise by the following theorem which is a consequence of the Seifert--van Kampen theorem (see for example~\cite{Munkres}): \begin{theorem} \label{3d1long} \rm Let $K$ be a blackboard framed knot with longitude $\lambda \in \pi_1(S^3 \setminus N(K))$. Let $\chi_{\mbox{\tiny K}}(S^3)$ denote the $3$-manifold obtained by surgery on $K$ with framing longitude $\lambda$. Then we have the isomorphism: $$ \pi_1(\chi_{\mbox{\tiny K}}(S^3)) \cong \frac{\pi_1(S^3 \setminus N(K))}{<\lambda>} $$ where $<\lambda>$ denotes the normal subgroup generated by $\lambda \in \pi_1(S^3 \setminus N(K))$. \end{theorem} For a proof, the reader is referred to~\cite{Kirby,DNA}. The theorem tells us that in order to obtain the fundamental group of the resulting manifold, we have to factor out $<\lambda>$ from $\pi_1(S^3 \setminus N(K))$. \begin{example} \label{ExFramedUnknot} When the trivial embedding $h_{s}$ is used, the `gluing' homeomorphism is $h_{s}(l_1)=m_2$, the knot $K$ is the unknot, $\lambda=l_1$ and $l_1$ is a trivial element in $\pi_1(S^3 \setminus N(K))$, so $<\lambda>=\{0\}$. In this case, we obtain the lens space $L(0,1)$ and the above formula gives us: \begin{equation} \begin{aligned} \pi_1(\chi(S^3))\cong \frac{\pi_1(S^3\setminus h_s(S_1^1\times D^2))}{<\lambda>} = \frac{\pi_1(\mathring{D^2} \times S^1)}{\{0\}} = \frac{ \mathbb{Z}}{\{0\}} \cong\mathbb{Z} \end{aligned} \nonumber \end{equation} \end{example} \smallbreak \begin{example} \label{ExFramedUnknot2} When we use a non-trivial embedding $N(K)=h(S_1^1 \times D^2)$ where the specified framing longitude $\lambda$ performs $p$ curls, the `gluing' homeomorphism is $h(\lambda)=m_2$ and, as mentioned in Section~\ref{Blackboard}, we can consider that $K$ is the unknot.
In order to use Theorem~\ref{3d1long}, we have to find the subgroup generated by $\lambda=l_1+p \cdot m_1$ in $\pi_1(S^3 \setminus N(K))$. Since $l_1$ is a trivial element of this group, the subgroup is $<\lambda> = <l_1+p \cdot m_1> = <p \cdot m_1> = p \cdot <m_1>\cong p\mathbb{Z}$. In this case we obtain the lens space $L(p,1)$ and its fundamental group is the cyclic group of order $p$: \begin{equation} \begin{aligned} \pi_1(\chi(S^3))\cong \frac{\pi_1(S^3\setminus h(S_1^1\times D^2))}{<\lambda>} =\frac{\pi_1(\mathring{D^2} \times S^1)}{p\mathbb{Z}} = { \mathbb{Z}}/{p\mathbb{Z}} \end{aligned} \nonumber \end{equation} \end{example} As we saw in Example~\ref{ExFramedUnknot2}, if $\lambda$ is not a bounding curve in the knot complement, then we need to work out just what element $\lambda$ is in the fundamental group of the knot complement. This can be done by using one of the known presentations of the fundamental group, such as the Wirtinger presentation. A detailed presentation of the fundamental group of a knot $K$, and of how we can use it to determine the resulting manifold of knot surgery on $M=S^3$ along $K$, is given in the next sections. \subsubsection{The knot group}\label{KG} The \textit{fundamental group of a knot $K$} (or the \textit{knot group}) is defined as the fundamental group of the complement of the knot in $3$-dimensional space (considered to be either $\mathbb{R}^3$ or $S^3$) with a basepoint $p$ chosen arbitrarily in the complement. The group is denoted $\pi_1(K)$ or $\pi_1(S^3 \setminus N(K))$, where $N(K)$ is a tubular neighborhood of the knot $K$. To describe this group, it is useful to have the concept of the longitude and meridian elements of the fundamental group of a knot. The longitude and the meridian are loops in the knot complement that are on the surface of a torus, the boundary of $N(K)$.
\begin{figure} \caption{\textbf{(1)} Trefoil knot $T$ \textbf{(2)} Tubular neighborhood $N(T)$} \label{App_Funda_solidtrefoil} \end{figure} For the case of the trefoil knot $T$ shown in Fig.~\ref{App_Funda_solidtrefoil}(1), the meridian $m$ and the longitude $\lambda$ on the tubular neighborhood $N(T)$ are shown in Fig.~\ref{App_Funda_solidtrefoil}(2). $N(T)$ is homeomorphic to a solid torus with the knot at the core of the torus. The meridian bounds a disc in the solid torus that intersects $T$ transversely in a single point. The longitude runs along the surface of the torus parallel to $T$, and so traces a second copy of the knot on the surface of the torus. \begin{figure} \caption{\textbf{(1)} Generators represented as meridian loops \textbf{(2)},\textbf{(3)} Homotopic loops \textbf{(4)},\textbf{(5)},\textbf{(6)},\textbf{(7)} Trivial curve } \label{App_Funda_crossinglabels} \end{figure} The presentation of a knot group is generated by one meridian loop for each arc in a diagram of the knot. For the case of the trefoil, in Fig.~\ref{App_Funda_crossinglabels}(1), we illustrate the three generators $a,b,c$ (in red) which are meridian elements associated with the corresponding arcs $a,b,c$ (in black). Each crossing gives rise to a relation among those elements. For example, let us examine the crossing of the trefoil circled in Fig.~\ref{App_Funda_crossinglabels}(1). By considering a loop $u$ in the close-up view of this crossing shown in Fig.~\ref{App_Funda_crossinglabels}(2), we see that $u$ wraps around arcs $a$ and $c$ but can also slide upwards to wrap around arcs $c$ and $b$. In both cases, a homotopy of loop $u$ shows that we can write $u$ as a product of the generators of the fundamental group, see Fig.~\ref{App_Funda_crossinglabels}(3). Since both homotopies describe the same loop $u$, we have $ac = cb$ which gives the relation $b = c^{-1}ac$.
Another way to obtain the same relation is by observing that curve $acb^{-1}c^{-1}$ contracts to a point and is therefore a trivial element of the fundamental group: $acb^{-1}c^{-1}=1$, see Fig.~\ref{App_Funda_crossinglabels}(4),(5),(6),(7). \begin{figure} \caption{\textbf{(1)} Positive crossing \textbf{(2)} Negative crossing} \label{App_Funda_combi} \end{figure} Similarly, we can show that the relations obtained by the other two crossings are $a = b^{-1}cb$ and $c = a^{-1}ba$. More generally, given a diagram $D$ of an oriented knot $K$, if we label each arc of $D$, then the {\em fundamental group} of $K$ is the group whose generators are the labels of the arcs of $D$, and whose relations are the relations coming from the products of loops up to homotopy as we have just described them above. This presentation of the knot group is called the \textit{Wirtinger presentation} and its proof makes use of the Seifert--van Kampen theorem, see for example~\cite{Rolfsen}. Hence for the trefoil knot, we have the presentation: $$ \pi_1(T)=\pi_1(S^3 \setminus N(T)) = (a,b,c \ | \ a = b^{-1}cb, b = c^{-1}ac, c = a^{-1}ba).$$ The fundamental group of a knot can also be defined in a combinatorial way as follows: consider a diagram of the knot and a crossing in the diagram, as in Fig.~\ref{App_Funda_combi}(1) or (2), where the incoming undercrossing arc is labeled $a$, the overcrossing arc is labeled $c$ and the outgoing arc is labeled $b$. Then write a relation in the form $b = c^{-1}ac$ for each positive crossing, as in Fig.~\ref{App_Funda_combi}(1), and a relation $b = cac^{-1}$ for each negative crossing, as in Fig.~\ref{App_Funda_combi}(2). The combinatorial approach defines the fundamental group as the group having one generator for each arc and one relation at each crossing in the diagram as we just specified them. One can show that this group is invariant under the Reidemeister moves. This means that all diagrams of the same knot have the same fundamental group.
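As a quick computational sanity check (ours, not part of the paper), the three Wirtinger relations of the trefoil are satisfied by the three transpositions of the symmetric group $S_3$; since transpositions do not commute, the trefoil group surjects onto a non-abelian group and in particular is not $\mathbb{Z}$, the group of the unknot. A minimal sketch using SymPy's permutations:

```python
from sympy.combinatorics import Permutation

# Images of the Wirtinger generators a, b, c as transpositions in S_3
# (array form on {0, 1, 2}): a = (0 1), b = (1 2), c = (0 2).
a = Permutation([1, 0, 2])
b = Permutation([0, 2, 1])
c = Permutation([2, 1, 0])

# The three Wirtinger relations of the trefoil hold for these images:
assert a == b**-1 * c * b
assert b == c**-1 * a * c
assert c == a**-1 * b * a

# The image is non-abelian, hence so is the trefoil group:
assert a * b != b * a
print("Wirtinger relations verified in S_3")
```

This does not compute the group itself, of course; it only exhibits a non-abelian quotient, which is already enough to distinguish the trefoil from the unknot.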
This combinatorial description is equivalent to the Wirtinger presentation. Indeed, compare for example the relation coming from the positive crossing of Fig.~\ref{App_Funda_combi}(1) and the relation coming from homotopic loops in Fig.~\ref{App_Funda_crossinglabels}(3) or (4). However, as we will see in Section~\ref{surgeryonK}, for the purpose of doing surgery we need the topological approach, so that we can express the longitude in terms of the generators of the fundamental group of $S^3 \setminus N(K)$. For more details on combinatorial group theory, the reader is referred to~\cite{Sti} or~\cite{MagnusKarassSolitar}. \subsubsection{Computing $\pi_1(\chi_{\mbox{\tiny K}}(S^3))$}\label{surgeryonK} When the core curve $K$ of a non-trivial embedding $h(S_1^1 \times D^2)=N(K)$ is knotted, one cannot express $\lambda$ in terms of trivial longitudes and meridians, as was the case in Examples~\ref{ExFramedUnknot} and~\ref{ExFramedUnknot2}. In general, in order to compute the fundamental group of a $3$-manifold that is obtained by doing surgery on a blackboard framed knot $K$, we first have to describe how to write down a longitudinal element $\lambda$ in the fundamental group of the knot complement $S^3 \setminus N(K)$. \begin{figure} \caption{\textbf{(1)} Longitude $\lambda$ in N(T) \textbf{(2)}\textbf{(3)}\textbf{(4)} Homotopy of $\lambda$ } \label{App_Funda_solidtrefoil_Lambda} \end{figure} To do so, we homotope $\lambda$ to a product of the generators of $\pi_1(S^3 \setminus N(K))$ corresponding to the arcs that it underpasses. In this expression for the longitude, the elements $x$ that are passed underneath will appear either as $x$ or as $x^{-1}$ according to whether the knot is going to the right or to the left from the point of view of a traveler on the original longitude curve.
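The rule just described can be sketched in code. In this hypothetical encoding (ours, for illustration only), the longitude is read off as a word in the arc labels it passes under, with an exponent $+1$ or $-1$ per underpass as determined by the left/right rule above:

```python
# Hypothetical encoding (for illustration): each underpass of the longitude
# is a pair (arc_label, sign), the sign being +1 or -1 according to the
# left/right rule described in the text.
def longitude_word(underpasses):
    """Return the longitude as a word in the Wirtinger generators."""
    return "".join(x if s > 0 else x + "^{-1}" for x, s in underpasses)

# Trefoil, simplest projection: three underpasses give lambda = cab.
print(longitude_word([("c", +1), ("a", +1), ("b", +1)]))  # prints "cab"
```

This is only bookkeeping; the geometric content lies in reading the underpasses and their signs off the diagram correctly.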
Once the longitude $\lambda$ is expressed in terms of the generators of the fundamental group of $S^3 \setminus N(K)$, we can calculate the fundamental group of $\chi_{\mbox{\tiny K}}(S^3)$ using Theorem~\ref{3d1long}. For example, in Fig.~\ref{App_Funda_solidtrefoil_Lambda}(1) we show a trefoil knot and the longitudinal element $\lambda$ in the fundamental group running parallel alongside it. Note that, for convenience, the basepoint $p$ is on the boundary of the torus but it could be anywhere in the complement $S^3 \setminus N(K)$. Each time that $\lambda$ goes under the knot we can run a line all the way back to the base point $p$ and then back to the point where $\lambda$ comes out from underneath the knot, see Fig.~\ref{App_Funda_solidtrefoil_Lambda}(2),(3) and (4). By doing this, we have written, up to homotopy, the longitude as a product of the generators of the fundamental group that are passed under by the original longitude curve. Thus in the trefoil knot case, as shown in Fig.~\ref{App_Funda_solidtrefoil_Lambda}(4), we see that the longitude is given by $\lambda= cab$. \begin{example} \label{TrefoilSugeries} We will now calculate the fundamental group of a $3$-manifold obtained by doing $3$-dimensional $1$-surgery on the trefoil knot for two different projections. The first one is the simplest projection of the trefoil shown in Fig.~\ref{App_Funda_solidtrefoil_Lambda}(1). It has three positive crossings yielding a blackboard framing number of $3$. The second one has two additional negative crossings thus having a blackboard framing number of $1$, see Fig.~\ref{TrefoilP}. 
\begin{figure} \caption{Projection of the trefoil with total blackboard framing $1$} \label{TrefoilP} \end{figure} As mentioned in Section~\ref{FundaTr}, surgery collapses the longitude $\lambda$, so the resulting fundamental group depends on the expression of the longitude $\lambda$ in the following presentation: $$ \pi_1(\chi(S^3))\cong\frac{\pi_1(S^3\setminus N(T))}{<\lambda>} =\frac{\pi_1(T)}{<\lambda>}=(a,b,c \ | \ aba = bab, c = a^{-1}ba, \lambda=1) $$ In the first case, by substituting $\lambda= cab$ and $c=a^{-1}ba$ into $\lambda=1$, we have $a^{-1}baab=1 \Leftrightarrow a=ba^{2}b$. Given that $aba = bab$, this implies that $a^2=baaba\Leftrightarrow a^2=babab \Leftrightarrow a^3=(ba)^3$. Notice now that $(aba)^2=bab \cdot aba=(ba)^3$. Thus by setting $A=a, B=ba$ and $C=aba$ we have that $A^3 = B^3=C^2$ and we only need to show that this is equal to $ABC$. Indeed, $ABC=a \cdot ba \cdot aba=(ba)^3$. Hence, the fundamental group of the resulting manifold is isomorphic to the binary tetrahedral group $(A,B,C \ | \ A^3 = B^3=C^2=ABC)$ denoted $<3,3,2>$. It is also worth mentioning that the resulting manifold is homeomorphic to $S^3/<3,3,2>$, the quotient of the $3$-sphere by an action of the binary tetrahedral group. For details on group actions the reader is referred to~\cite{MilAct}. \begin{figure} \caption{Poincar\'{e} sphere} \label{Poincare} \end{figure} In the second case, the longitude $\lambda$ in the projection shown in Fig.~\ref{TrefoilP} is the same as the one in Fig.~\ref{App_Funda_solidtrefoil_Lambda}(1) with two additional negative crossings along arc $a$. Hence, in this case $\lambda=caba^{-2}$. By substitution, we have $a^{-1}baaba^{-2}=1 \Leftrightarrow a^{3}=ba^{2}b$. Given that $aba = bab$, this implies that $a^4=baaba\Leftrightarrow a^4=babab \Leftrightarrow a^5=(ba)^3$. Thus by setting $A=a, B=ba$ and $C=aba$ we have that $A^5 = B^3=C^2$ and $ABC=a \cdot ba \cdot aba=(ba)^3$.
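As an independent computational check (ours, not from the paper), both quotient groups above can be confirmed by coset enumeration, for instance with SymPy's finitely presented groups; the two presentations should enumerate to $24$ and $120$ elements, the orders of the binary tetrahedral and binary icosahedral groups respectively:

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, a, b = free_group("a, b")
braid = a*b*a*(b*a*b)**-1          # relator encoding aba = bab

# Framing 3: collapse lambda = cab with c = a^-1 b a, i.e. a^-1 b a a b = 1
G3 = FpGroup(F, [braid, a**-1*b*a*a*b])
# Framing 1: collapse lambda = cab a^-2, i.e. a^-1 b a a b a^-2 = 1
G1 = FpGroup(F, [braid, a**-1*b*a*a*b*a**-2])

print(G3.order(), G1.order())  # expected: 24 120
```

Here the generator $c$ has been eliminated beforehand using $c=a^{-1}ba$, so the two relators besides the braid relation are exactly the collapsed longitudes.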
The fundamental group of the resulting manifold is isomorphic to the binary icosahedral group $(A,B,C \ | \ A^5 = B^3=C^2=ABC)$ denoted by $<5,3,2>$. The resulting manifold is homeomorphic to $S^3/<5,3,2>$, the quotient of the $3$-sphere by an action of $<5,3,2>$. This manifold is also known as the \textit{Poincar\'{e} homology sphere}, which can be described by identifying opposite faces of a dodecahedron according to the scheme shown in Fig.~\ref{Poincare} (for more details on this identification, see~\cite{SeTh}). It can be shown from this that the Poincar\'{e} homology sphere is diffeomorphic to the link of the variety $V((2,3,5))=\{ (z_1,z_2,z_3) \ | \ z_1^{2} + z_2^{3} + z_3^{5} = 0\}$, that is, the intersection of a small $5$-sphere around $0$ with $V((2,3,5))$. From this it is not hard to see that the Poincar\'{e} homology sphere can also be obtained as a $5$-fold cyclic branched covering of $S^3$ over the trefoil knot. For more details on the different descriptions of the Poincar\'{e} homology sphere, the reader is referred to~\cite{KirScha}. This manifold has been of great interest, and is even thought by some physicists to be the shape of the geometric universe, see~\cite{Weeks,Luminet,Levin}. See also~\cite{SS4}. \end{example} \section{Visualizing elementary $3$-dimensional surgery via rotation} \label{TrivialLongi} As mentioned in Section~\ref{EmbTopoChange3D}, we consider $3$-dimensional surgeries using the standard embedding $h_{s}$ as elementary steps. In this section, we show that, in this case, both types of $3$-dimensional surgery can be visualized via rotation. To do so, we first describe how stereographic projection can be used to visualize the local process of topological surgery in one dimension lower, see Section~\ref{MDStereoBig}, and then use it to visualize elementary $3$-dimensional surgeries in $\mathbb{R}^3$, see Section~\ref{3DProcess}.
\subsection{Visualizations of topological surgery using the stereographic projection} \label{MDStereoBig} In Section~\ref{MDStereo} we present a way to visualize the initial and the final instances of $m$-dimensional surgery in $\mathbb{R}^{m}$ using stereographic projection. We then discuss the case of $m=2$ in Section~\ref{2DStereo} which will be our basic tool for the visualization of elementary $3$-dimensional surgery via rotation in Section~\ref{3DProcess}. \subsubsection{Visualizing $m$-dimensional $n$-surgery in $\mathbb{R}^m$} \label{MDStereo} Let us first mention that the two spherical thickenings involved in the process of $m$-dimensional $n$-surgery are both $m$-manifolds with boundary. Notice now that if we glue these two $m$-manifolds along their common boundary using the standard mapping $h_{s}$, we obtain the $m$-sphere which, in turn, is the boundary of the $(m+1)$-dimensional disc: $(S^n\times D^{m-n}) \cup_{h_s} (D^{n+1}\times S^{m-n-1}) = (\partial D^{n+1}\times D^{m-n}) \cup_{h_s} (D^{n+1}\times \partial D^{m-n}) = \partial (D^{n+1}\times D^{m-n})\cong \partial (D^{m+1})=S^{m}$. \begin{figure} \caption{ Continuous local process of $2$-dimensional $0$-surgery in $\mathbb{R}^{3}$ } \label{2D_Cont} \end{figure} The process of surgery can be seen independently from the initial manifold $M$ as a local process which transforms $M$ into $\chi(M)$. The $(m+1)$-dimensional disc $D^{m+1} \cong D^{n+1}\times D^{m-n}$ is one dimension higher than the initial manifold $M^{m}$. This extra dimension leaves room for the process of surgery to take place continuously. The disc $D^{m+1}$ considered in its homeomorphic form $D^{n+1}\times D^{m-n}$ is an $(m+1)$-dimensional $(n+1)$-handle. The unique intersection point $D^{n+1} \cap D^{m-n}$ within $D^{n+1}\times D^{m-n}$ is called the critical point.
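For instance, for $m=3$ and $n=1$, the handle is $D^{2}\times D^{2}$ and the boundary identity above recovers the splitting of the $3$-sphere into two solid tori glued along their common boundary: $$ (S_1^1\times D^{2}) \cup_{h_s} (D^{2}\times S_2^{1}) = \partial (D^{2}\times D^{2}) \cong \partial (D^{4}) = S^{3}. $$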
The process of surgery is the continuous passage, within the handle $D^{n+1}\times D^{m-n}$, from boundary component $S^n\times D^{m-n} \subset \partial (D^{n+1}\times D^{m-n})$ to its complement $D^{n+1}\times S^{m-n-1} \subset \partial (D^{n+1}\times D^{m-n})$ by passing through the critical point $D^{n+1} \cap D^{m-n}$. More precisely, the boundary component $S^n\times D^{m-n}$ collapses to the critical point $D^{n+1} \cap D^{m-n}$ from which the complementary boundary component $D^{n+1}\times S^{m-n-1}$ emerges. For example, in dimension $2$, the two discs $S_1^0\times D^{2}$ collapse to the critical point from which the cylinder $D^{1}\times S_2^{1}$ uncollapses, see Fig.~\ref{2D_Cont}. Keeping in mind that gluing the two $m$-manifolds with boundary involved in the process of $m$-dimensional $n$-surgery along their common boundary gives us the $m$-sphere $S^m$, the idea of our proposed visualization of surgery is that while $S^m$ is embedded in $\mathbb{R}^{m+1}$, it can be stereographically projected to $\mathbb{R}^{m}$. Hence, for every $m$, one can visualize the initial and the final instances of the process of $m$-surgery one dimension lower. In the following examples we deliberately did not project the intermediate instances, as this cannot be done without self-intersections. \subsubsection{Visualizing $2$-dimensional $0$-surgery in $\mathbb{R}^2$} \label{2DStereo} For $m=2$ and $n=0$, the initial and final instances of $2$-dimensional $0$-surgery that make up $S^2$ are shown in Fig.~\ref{2D_Proj}(1). If we remove the point at infinity, we can project the points of $S^2 \setminus \{\infty\}$ on $\mathbb{R}^2$ bijectively. We will use two different projections for two different choices for the point at infinity. The first one is shown in Fig.~\ref{2D_Proj}({2\textsubscript{a}}) where the point at infinity is a point of the core $S_2^1$ of $D^{1}\times S_2^{1}$.
In this case, the two great circles $S_2^1 = \ell \cup \{\infty\}$ and $\ell' \cup \{\infty\}$ of $S^2$ are projected on the two perpendicular infinite lines $\ell$ and $\ell'$ in ${\mathbb R}^2$. In the second one, shown in Fig.~\ref{2D_Proj}({2\textsubscript{b}}), the point at infinity is the center of one of the two discs $S_1^0\times D^{2}$. In this case the great circle $\ell' \cup \{\infty\}$ in $S^2$ is, again, projected to the infinite line $\ell'$ in ${\mathbb R}^2$ but the great circle $S_2^1 = \ell$ is now projected to the circle $\ell$ in ${\mathbb R}^2$. \begin{figure}\label{2D_Proj} \end{figure} As mentioned in Section~\ref{MDStereo}, the extra dimension of the disc $D^{m+1}$ leaves room for the process of $m$-dimensional surgery to take place continuously. For $2$-dimensional surgery, the third dimension allows the two points of the core $S_1^0$ to touch at the critical point, recall Fig.~\ref{2D_Cont}. Using the two stereographic projections discussed above and shown again in Fig.~\ref{2D_Decomp}(1\textsubscript{a}) and (1\textsubscript{b}), we present in Fig.~\ref{2D_Decomp}(2\textsubscript{a}) and (2\textsubscript{b}) two different local visualizations of $2$-dimensional surgery in $\mathbb{R}^2$. Note that in Fig.~\ref{2D_Decomp}({1\textsubscript{b}}) and ({2\textsubscript{b}}), the red dashes show that all lines converge to the point at infinity which is the center of the decompactified disc and one of the points of $S_1^0$. The process of $2$-dimensional $0$-surgery starts with either one of the first instances of Fig.~\ref{2D_Decomp}(2\textsubscript{a}) and (2\textsubscript{b}). Then the centers of the two discs $S_1^0\times D^{2}$ collapse to the critical point which is shown with increased transparency to remind us that this happens in one dimension higher, see the second instances of either Fig.~\ref{2D_Decomp}(2\textsubscript{a}) or (2\textsubscript{b}).
Finally the cylinder $D^{1}\times S_2^{1}$ uncollapses, as illustrated in the last instances of Fig.~\ref{2D_Decomp}(2\textsubscript{a}) and (2\textsubscript{b}). Clearly, the reverse processes provide visualizations of $2$-dimensional $1$-surgery. \begin{figure}\label{2D_Decomp} \end{figure} \subsection{Visualizing elementary $3$-dimensional surgeries in $\mathbb{R}^3$} \label{3DProcess} In this section we present two ways of visualizing the elementary steps of both types of $3$-dimensional surgery in $\mathbb{R}^3$ using rotations of the decompactified $2$-sphere $S^2\setminus \{\infty\}$. More precisely, in Section~\ref{Rotdecomp} we discuss the rotations of spheres and present how the rotations of $S^2\setminus \{\infty\}$ formed by the initial and final instances of $2$-dimensional $0$-surgery in ${\mathbb R}^2$ produce the $3$-space ${\mathbb R}^3=S^3 \setminus \{\infty\}$. Then, in Section~\ref{3DProj} we detail how the rotations of the two different projections of $S^2\setminus \{\infty\}$ shown in Fig.~\ref{2D_Proj} give rise to two different decompositions of $S^3$ which correspond to two visualizations of the elementary steps of both types of $3$-dimensional surgery. \subsubsection{Decompactification and rotations of spheres}\label{Rotdecomp} Applying the remark of Section~\ref{MDStereo} for $m=3$ we have that the initial and final instances of all types of $3$-dimensional surgery form $S^{3}=\partial D^{4}$. Since, now, $S^3 \setminus \{\infty\}$ can be projected on ${\mathbb R}^3$ bijectively, we will present a new way of visualizing $3$-dimensional surgery in ${\mathbb R}^3$ by rotating appropriately the projections of the initial and final instances of $2$-dimensional $0$-surgery in ${\mathbb R}^2$. The underlying idea is that, in general, $S^n$ which is embedded in ${\mathbb R}^{n+1}$ can be obtained by a 180$^\circ$ rotation of $S^{n-1}$, which is embedded in ${\mathbb R}^n$. 
So, a 180$^\circ$ rotation of $S^{0}$ around an axis bisecting the interval of the two points (e.g. line $\ell$ in Fig.~\ref{2D_Proj}({2\textsubscript{a}})) gives rise to $S^1$ (which is $\ell' \cup \{\infty\}$ in Fig.~\ref{2D_Proj}({2\textsubscript{a}})), while a 180$^\circ$ rotation of $S^{1}$ around any diameter gives rise to $S^2$. For example, in Fig.~\ref{2D_Proj}({2\textsubscript{b}}), a 180$^\circ$ rotation of $\ell' \cup \{\infty\}$ around the north-south pole axis results in the $2$-sphere shown in the figure. Now, the creation of $S^3$ (which is embedded in ${\mathbb R}^4$) as a rotation of $S^2$ requires a fourth dimension in order to be visualized. Instead we can obtain its stereographic projection in ${\mathbb R}^3=S^3 \setminus \{\infty\}$ by rotating the stereographic projection of $S^2 \setminus \{\infty\}={\mathbb R}^2$. Indeed, a 180$^\circ$ rotation of the plane around any line in the plane gives rise to the $3$-space. As we will see, each type of $3$-dimensional surgery corresponds to a different rotation, which, in turn, corresponds to a different decomposition of $S^{3}$. As we consider here two kinds of projections of $S^2 \setminus \{\infty\}$ in ${\mathbb R}^2$, see Fig.~\ref{2D_Decomp}({1\textsubscript{a}}) and ({1\textsubscript{b}}), these give rise to two kinds of decompositions of $S^{3}$ via rotation, see Fig.~\ref{3D_Decomp_F}({1\textsubscript{a}}) and ({1\textsubscript{b}}). Each decomposition, now, leads to the visualizations of both types of $3$-dimensional surgery. Hence, the elementary steps of both types of $3$-dimensional surgery are now visualized as rotations of the decompactified $S^2$. \begin{figure}\label{3D_Decomp_F} \end{figure} \subsubsection{Projections and visualizations of elementary $3$-dimensional surgeries}\label{3DProj} Let us start with the first projection. In Fig.~\ref{3D_Decomp_F}({1\textsubscript{a}}), we show this decompactified view in ${\mathbb R}^2$ and the two axes of rotation $\ell'$ and $\ell$. 
As we will see, a rotation around axis $\ell'$ induces $3$-dimensional $0$-surgery in ${\mathbb R}^3$ while a rotation around axis $\ell$ induces $3$-dimensional $1$-surgery in ${\mathbb R}^3$. Namely, in the case of $3$-dimensional $0$-surgery, a \textit{horizontal} rotation of 180$^\circ$ around axis $\ell'$ transforms the two discs $S_1^0\times D^2$ of Fig.~\ref{2D_Decomp}({2\textsubscript{a}}) (the first instance of $2$-dimensional $0$-surgery) to the two $3$-balls $S_1^0\times D^3$ of Fig.~\ref{3D_Decomp_F}(2\textsubscript{a}) (the first instance of $3$-dimensional $0$-surgery). After the collapsing of the centers of the two $3$-balls $S_1^0\times D^3$, the rotation transforms the decompactified cylinder $D^{1}\times (S_1^{1}\setminus \{\infty\})$ of Fig.~\ref{2D_Decomp}({2\textsubscript{a}}) (the last instance of $2$-dimensional $0$-surgery) to the decompactified thickened sphere $D^{1}\times (S_2^{2}\setminus \{\infty\})$ of Fig.~\ref{3D_Decomp_F}(2\textsubscript{a}) (the last instance of $3$-dimensional $0$-surgery). Indeed, the rotation of line $\ell$ around $\ell'$ creates the green plane that cuts through ${\mathbb R}^3$ and separates the two resulting 3-balls $S_1^0\times D^3$. This plane is shown in green in the last instance of Fig.~\ref{3D_Decomp_F}(2\textsubscript{a}) and it is the decompactified view of the sphere $S_2^2$ in ${\mathbb R}^3$. Note that it is thickened by the arcs connecting the two discs $S_1^0\times D^2$ which have also been rotated. Similarly, in the case of $3$-dimensional $1$-surgery, a \textit{vertical} rotation of 180$^\circ$ around axis $\ell$ transforms the two discs $S_1^0\times D^2$ (the first instance of $2$-dimensional $0$-surgery shown in Fig.~\ref{2D_Decomp}({2\textsubscript{a}})) to the solid torus $S_1^1\times D^2$ (the first instance of $3$-dimensional $1$-surgery), see Fig.~\ref{3D_Decomp_F}(3\textsubscript{a}).
After the collapsing of the (red) core $S_1^1$ of $S_1^1\times D^2$, the rotation transforms the decompactified cylinder $D^{1}\times (S_1^{1}\setminus \{\infty\})$ of Fig.~\ref{2D_Decomp}({2\textsubscript{a}}) (the last instance of $2$-dimensional $0$-surgery) to the decompactified solid torus $D^{2}\times (S_2^{1}\setminus \{\infty\})$ of Fig.~\ref{3D_Decomp_F}(3\textsubscript{a}) (the last instance of $3$-dimensional $1$-surgery). Indeed, each of the arcs $D^1$ connecting the two discs $S_1^0\times D^2$ generates through the rotation a $2$-dimensional disc $D^2$, and the set of all such discs is parametrized by the points of the line $\ell$ in ${\mathbb R}^3$. In both cases, in Fig.~\ref{3D_Decomp_F}({1\textsubscript{a}}), $S^3$ is presented as the result of rotating the $2$-sphere $S^2 = {\mathbb R}^2 \cup \{\infty\}$. For $3$-dimensional $0$-surgery, $S^2$ is rotated about the circle $\ell' \cup \{\infty\}$ where $\ell'$ is a straight horizontal line in ${\mathbb R}^2$. The resulting decomposition of $S^3$ is $S^3 =(S_1^0\times D^3) \cup (D^1 \times S_2^2)$, a thickened sphere with two $3$-balls glued along the boundaries, which is visualized as $S^3\setminus \{\infty\} =(S_1^0\times D^3) \cup (D^1 \times (S_2^2\setminus \{\infty\}))$. For $3$-dimensional $1$-surgery, $S^2$ is rotated about the circle $\ell \cup \{\infty\}$ where $\ell$ is a straight vertical line in ${\mathbb R}^2$. The resulting decomposition of $S^3$ is $S^3 =(S_1^1\times D^2) \cup (D^2 \times S_2^1)$, two solid tori glued along their common boundary, which is visualized as $S^3\setminus \{\infty\} =(S_1^1\times D^2) \cup (D^2 \times (S_2^1\setminus \{\infty\}))$. Analogously, starting with the second projection of Fig.~\ref{2D_Decomp}({1\textsubscript{b}}), the same rotations induce each type of $3$-dimensional surgery and their corresponding decompositions of $S^3$, see Fig.~\ref{3D_Decomp_F}({1\textsubscript{b}}).
More precisely, a \textit{horizontal} rotation of the instances of Fig.~\ref{2D_Decomp}({2\textsubscript{b}}) by 180$^\circ$ around axis $\ell'$ induces the initial and final instances of $3$-dimensional $0$-surgery visualized in ${\mathbb R}^3$, see Fig.~\ref{3D_Decomp_F}({2\textsubscript{b}}). The $3$-sphere $S^3$ is now visualized as $S^3\setminus \{\infty\} =((S_1^0\setminus \{\infty\})\times D^3) \cup (D^1 \times S_2^2)$, a thickened sphere union two $3$-balls with the center of one of them removed (being the point at infinity). Similarly, a rotation of the instances of Fig.~\ref{2D_Decomp}({2\textsubscript{b}}) by 180$^\circ$ around the (green) \textit{circle} $\ell$ induces the initial and final instances of $3$-dimensional $1$-surgery visualized in ${\mathbb R}^3$, see Fig.~\ref{3D_Decomp_F}({3\textsubscript{b}}). Note that $\ell$ is now a circle and not a (vertical) line. The easiest part of this rotation to visualize is that of the middle annulus of Fig.~\ref{3D_Decomp_F}({1\textsubscript{b}}), which gives rise to the solid torus $D^2 \times S_2^1$ in Fig.~\ref{3D_Decomp_F}({3\textsubscript{b}}). The same rotation of the two remaining discs around $\ell$ can be visualized as follows: each radius of the inner disc lands from above the plane on the corresponding radius of the outer disc; at the same time, that radius of the outer disc lands on the corresponding radius of the inner disc from underneath the plane. So, under the rotation, the two corresponding radii together create an annular ring around $\ell$. Note that the red center of the inner disc lands on all points at infinity, creating a half-circle from above, and, at the same time, all points at infinity land on the center of the inner disc, creating a half-circle from below. Glued together, the two half-circles create a (red) circle. 
Now, the set of all annular rings around $\ell$, parametrized by the points of $\ell$, makes up the complementary solid torus $(S_1^1\setminus \{\infty\})\times D^2$ whose core is the aforementioned red circle. The $3$-sphere $S^3$ is visualized through this rotation as $S^3\setminus \{\infty\} =((S_1^1\setminus \{\infty\})\times D^2) \cup (D^2 \times S_2^1)$, the decompactified union of two solid tori. Finally, it is worth pointing out that the two types of visualizations presented above are related. Indeed, the $D^1 \times (S_2^2\setminus \{\infty\})$ shown in the rightmost instance of Fig.~\ref{3D_Decomp_F}({2\textsubscript{a}}) is the decompactified view of the $D^1 \times S_2^2$ shown in the rightmost instance of Fig.~\ref{3D_Decomp_F}({2\textsubscript{b}}). Likewise, the $D^2 \times (S_2^1\setminus \{\infty\})$ shown in the rightmost instance of Fig.~\ref{3D_Decomp_F}({3\textsubscript{a}}) is the decompactified view of the solid torus $D^2 \times S_2^1$ shown in the rightmost instance of Fig.~\ref{3D_Decomp_F}({3\textsubscript{b}}). Further, the $(S_1^0\setminus \{\infty\})\times D^3$ and $(S_1^1\setminus \{\infty\})\times D^2$ shown in the leftmost instances of Fig.~\ref{3D_Decomp_F}({2\textsubscript{b}}) and ({3\textsubscript{b}}) are the decompactified views of $S_1^0\times D^3$ and $S_1^1\times D^2$ shown in the leftmost instances of Fig.~\ref{3D_Decomp_F}({2\textsubscript{a}}) and ({3\textsubscript{a}}) respectively. \section{Conclusion} \label{Conclusion} In this paper we present how to detect the topological changes of $3$-dimensional surgery via the fundamental group and we provide a new way of visualizing its elementary steps. As this topological tool is used in both the classification of $3$-manifolds and in the description of natural phenomena, we hope that this study will help our understanding of the topological changes occurring in $3$-manifolds from a mathematical as well as a physical perspective. 
\section*{Acknowledgments} Antoniou's work was partially supported by the Papakyriakopoulos scholarship which was awarded by the Department of Mathematics of the National Technical University of Athens. Kauffman's work was supported by the Laboratory of Topology and Dynamics, Novosibirsk State University (contract no. 14.Y26.31.0025 with the Ministry of Education and Science of the Russian Federation). \end{document}
\begin{document} \title{Quantum Dynamical Resource Theory under Resource Non-increasing Framework} \author{Si-ren Yang$^1$, Chang-shui Yu*$^{1,2}$} \begin{abstract} We define the resource non-increasing (RNI) framework to study the dynamical resource theory. With such a definition, we propose several potential quantification candidates under various free operation sets. For explicit demonstrations, we quantify the quantum dynamical coherence in the scenarios with and without post-selective measurements. Correspondingly, we show that maximally incoherent operations (MIO) and incoherent operations (IO) in the static coherence resource theory are free in the sense of dynamical coherence. We also provide operational meanings for the measures by the quantum discrimination tasks. Moreover, for the dynamical total coherence, we also present convenient measures and give the analytic calculation for the amplitude damping channel. \end{abstract} \address{$^1$School of Physics, Dalian University of Technology, Dalian 116024, China} \address{$^2$DUT-BSU joint institute, Dalian University of Technology, Dalian 116024, China } \ead{ycs@dlut.edu.cn} \begin{indented} \item[\today] \end{indented} \maketitle \section{Introduction} In recent years, quantum features such as quantum entanglement \cite{entanglement, entangle,e5} and quantum discord \cite{discord,qd1,qd2,qd3,qd4} have been regarded as quantum resources and investigated quantitatively under rigorous mathematical methods, i.e., quantum resource theories (QRTs), which have been systematically developed since quantum coherence was formally and quantitatively studied in Ref. \cite{Quantifying}. QRTs are powerful approaches to investigating quantum characteristics and have profoundly shaped our understanding of quantum science. Up to now, QRTs have been widely applied to other quantum features: nonlocality \cite{nonlocality}, contextuality \cite{contextuality}, non-Gaussianity \cite{Strobel424}, asymmetry \cite{as} and so on. 
QRTs are usually defined by two fundamental ingredients: free states and free quantum operations. Free states are those without any resource, and free quantum operations are those that cannot generate quantum resources when acting on free states. In such a framework, QRTs bring new insight into studying quantum features at the static level: measures for quantum features not only evaluate the resourcefulness of a system but also have operational meanings corresponding to specific quantum processes. Such a framework for static resources is quite rigorous and has even developed connections to different areas. For example, in the QRTs of quantum coherence, researchers proposed multiple measures \cite{alter,trace,multi,mea1,mea2,mre,mea3} and operational interpretations \cite{op1,op2,op3,op4,op6}. It has been shown that quantum coherence is closely correlated with other quantum features such as quantum entanglement \cite{entangle,e1,e5}, quantum discord \cite{discord,qd1,qd2,qd3,qd4}, quantum asymmetry \cite{ass1,piani,as3}, etc. Quantum states evolve under dynamical processes, i.e., quantum operations; conversely, quantum states can be viewed as special quantum channels. Most quantum processes can be characterized by quantum channels, which are mathematically the set of completely positive trace-preserving (CPTP) maps. Compared with static states, quantum channels are dynamical quantum resources and carry much more information than static systems. Therefore, a natural question is whether dynamical channels can be investigated by QRTs. One possible way is to upgrade the two ingredients of static QRTs to the dynamical level and find out what can be obtained for dynamical resources through QRTs. With such motivations, researchers have been working in this area and have made progress \cite{ch3,ch5,POVM1,qrt5,ch9,ch8,ch1,qrt1,nch1,nch2,nch3,nch4,qd,nch5,nch6}. 
Some works studied the dynamical coherence via different free channel sets; for example, the Choi isomorphism of classical channels is considered in Ref. \cite{ch3}. In Ref. \cite{POVM1}, a resource theory of free positive-operator-valued measures (POVMs) was proposed, and free detection/creation incoherence was studied in the sense of a quantum computational setting \cite{qrt5}. In addition, dynamical entanglement has been considered in Ref. \cite{ch8}, and a quantitative relationship between dynamical coherence and dynamical entanglement is introduced in Ref. \cite{ch1}. In dynamical QRTs, the two ingredients are free operations and free superoperations (completely CP- and TP-preserving maps). The free operations are those without the expected quantum feature, and free superoperations are defined as those that cannot map a free operation to a resourceful operation. However, the definitions of free superoperation sets depend on physical considerations: even for the same quantum feature, the free sets are not unique. Another tough problem is how to measure dynamical quantum features. Due to the diversity of channel representations, analytical solutions rarely exist, so measures are usually given numerically. Moreover, realizations of superoperations are not unique either. To sum up, dynamical QRTs are developing in different directions, which leads to various QRTs. In this paper, we open a new path to defining the free operation sets for quantum dynamical resources. We call it the resource non-increasing (RNI) framework, in which a free channel does not increase the resourcefulness of the input states. It will be shown that the RNI viewpoint provides a straightforward understanding of dynamical freeness and, meanwhile, guarantees that the RNI framework has no conflict with the well-established resource non-generating (RNG) framework. We also refer to static resource theories and design appropriate superoperations for dynamical QRTs. The free dynamical sets in the RNI framework are fairly transparent. 
We present several potential quantifications of dynamical resources under different free operation sets. To investigate the dynamical coherence, we demonstrate that maximally incoherent operations (MIO) and incoherent operations (IO) in the static coherence resource theory are free in the sense of dynamical coherence. In this sense, we give the corresponding measures of the quantum dynamical coherence in the cases with and without post-selective measurements. Semidefinite programming (SDP) is also applied to quantify dynamical coherence without post-selective measurements. In addition, we also study the quantification of the dynamical total coherence, for which an analytic calculation is given for the amplitude damping channel. We organize the remaining parts of this paper as follows. In Sec. II, we review the dynamical QRTs, propose our RNI framework and present several alternative quantifications. In Sec. III, we establish dynamical QRTs for quantum coherence in different scenarios and illustrate the operational meanings of the measures in quantum discrimination tasks. In Sec. IV, we investigate the dynamical total coherence and give an analytic calculation as an example. We summarize the paper in Sec. V. \section{Resource theory of quantum channels} The RNI framework has a direct definition of the free dynamical set, i.e., the free operations cannot increase any static quantum resource for an arbitrary static input. Therefore, to demonstrate the RNI framework, we first need unambiguous definitions of static resource measures. As mentioned previously, in a static resource theory the free states are those without any resource, and the set of free states is denoted by $\mathbb{F}$. A free operation with Kraus operators $\{K_i\}$ is defined by $K_i\delta K_i^\dagger\in \mathbb{F}$ (up to normalization) for any free state $\delta$. Thus a valid static resource measure can be given as follows. 
\textit{Proposition 1.-} A static resource measure $\mathcal{M}$ for a certain quantum resource $\mathcal{R}$ should fulfill (i) Faithfulness: $\mathcal{M}\geq 0$ and vanishes for free states; (iia) Strong monotonicity: the average resource under selective free operations cannot increase; (iib) Monotonicity: the resource of the state after free operations cannot increase; (iii) Convexity: mixing states does not increase their resourcefulness. These constraints are widely applied in measuring entanglement, coherence, etc.\ \cite{Quantifying}. It is well known that the strong monotonicity combined with the convexity leads to the general monotonicity. Measures that fulfill the strong monotonicity apply to quantum channels for which post-selective measurements are allowed, i.e., scenarios in which the measurement outcomes are accessible through post-selective operations. With the static resource measure, we can present our free operations in the RNI framework (RNI-free operations). \textit{Theorem 2.-} RNI-free operations in the dynamical resource theory are consistent with the free operations in the static resource theory. \textit{Proof.-} Based on the idea of the RNI framework, an RNI-free operation $\mathcal{E}(\cdot)=\sum_{i}K_{i}\cdot K^{\dagger}_{i}$ with respect to a valid static resource measure $\mathcal{M}$ is defined by $\sum_i \mathrm{Tr}(K_{i}\rho K^{\dagger}_{i})\,\mathcal{M}\big(K_{i}\rho K^{\dagger}_{i}/\mathrm{Tr}(K_{i}\rho K^{\dagger}_{i})\big)\leq \mathcal{M}(\rho)$ for any density matrix $\rho$. If $\rho\in \mathbb{F}$, then, due to the faithfulness of static measures, $\mathcal{M}(\rho)=0$, so $\mathcal{M}\big(K_{i}\rho K^{\dagger}_{i}/\mathrm{Tr}(K_{i}\rho K^{\dagger}_{i})\big)=0$, which is consistent with the definition of free operations in the static resource theory. 
On the contrary, if $\mathcal{E}(\cdot)=\sum_{i}K_{i}\cdot K^{\dagger}_{i}$ is a static free operation, then, based on the strong monotonicity, the above definition of the RNI-free operations is also satisfied. {}$\Box$ Note that if a quantum channel is considered in a black-box scenario, which means the measurement is non-selective, one can only require monotonicity in a non-increasing framework, i.e., $\mathcal{M}(\mathcal{E}(\rho))\leq \mathcal{M}(\rho)$. This corresponds to the RNI framework in the sense of monotonicity. Similarly, the free operations can also be naturally defined subject to monotonicity, which will be directly used later and will not be explicitly elucidated here. Another ingredient in dynamical QRTs is the free superoperations. Similar to the static resource theory, a free superoperation has the primal constraint that it maps a free operation to a free operation. Besides, we stress that the construction of superoperations should not contain certain resourceful ingredients. Now we propose our free superoperations with the following structure. \textit{Definition 3.-} A superoperation represented by Kraus operators $\{ \mathfrak{F}_n\}$ is free if and only if $\mathfrak{F}_n[\mathcal{N}]$ can be written as the sequence \begin{equation} \mathfrak{F}_n[\mathcal{N}]=\mathcal{E}_{i_n,\mathrm{\Phi}^{j_n}_n}\cdots\mathcal{E}_{i_2,\mathrm{\Phi}^{j_2}_2}\mathcal{E}_{i_1,\mathrm{\Phi}^{j_1}_1} \label{superchannel}, \end{equation} where $\mathcal{E}_{i_n,\mathrm{\Phi}^{j_n}_n}$ denotes the $j_n$th Kraus element of the superoperation $\{\mathcal{E}_{i_n,\mathrm{\Phi}^{j_n}_n}\}$ with the corresponding free operation $\{\mathrm{\Phi}^{j_n}_n\}$ and \begin{align} & \mathcal{E}_{i_n=0,\mathrm{\Phi}^{j_n}_n}[\mathcal{N}]=\mathrm{Tr}[\mathcal{N}],\notag\\ &\mathcal{E}_{i_n=1,\mathrm{\Phi}^{j_n}_n}[\mathcal{N}]=\mathrm{\Phi}^{j_n}_n\circ \mathcal{N},\quad\mathcal{E}_{i_n=2,\mathrm{\Phi}^{j_n}_n}[\mathcal{N}]=\mathrm{\Phi}^{j_n}_n\otimes \mathcal{N}, \notag\\ &\mathcal{E}_{i_n=3,\mathrm{\Phi}^{j_n}_n}[\mathcal{N}]=\mathcal{N}\circ\mathrm{\Phi}^{j_n}_n,\quad\mathcal{E}_{i_n=4,\mathrm{\Phi}^{j_n}_n}[\mathcal{N}]=\mathcal{N}\otimes\mathrm{\Phi}^{j_n}_n.\label{superele} \end{align} The above definition implies that the tensor product structure is automatically satisfied if one replaces $\mathrm{\Phi}^{j_n}_n$ by the identity operator $\mathds{1}$. Furthermore, it is not difficult to find that our superoperations are free for every single Kraus operator. Combining the two characteristics, one can conclude that our superoperations are separately (every Kraus operator free) and completely (tensor product structure) free. Hence, our free superoperations meet the requirement of our motivation. With the two ingredients defined, we can define the measure of the RNI dynamical resource as follows. \textit{Definition 4.-} A measure $T(\cdot)$ quantifying the dynamical resourcefulness of an arbitrary quantum channel $\mathcal{N}$ is a qualified measure if the following are satisfied. 
\begin{align}\nonumber (1)&\text{ Faithfulness}: T(\mathcal{N}) \geq 0, \text{where equality holds iff}\ \mathcal{N} \text{ is free};\\ \nonumber (2)&\text{ Monotonicity or strong monotonicity}: T(\mathcal{N}) \geq T(\mathfrak{F}[\mathcal{N}]) \\ \nonumber &\text{or} \ T(\mathcal{N})\geq\sum_{n}p_{n}T(\mathcal{N}_{n}) \text{ for free superoperations} \\ \nonumber & \mathfrak{F}=\sum_{n}p_{n}\mathfrak{F}_{n} \text{ with } \mathcal{N}_{n}=\mathfrak{F}_{n}[\mathcal{N}]\: \text{and} \sum_n p_n=1,\\\nonumber (3)&\text{ Convexity}: T(\mathcal{N}) \text{ is convex}.\nonumber \end{align} The strong monotonicity is an operational constraint requiring that the measure obeys monotonicity under selective measurements. Our free superoperations allow us to read out information about the post-measurement states. However, if one does not have enough information about the resourceful superoperation (such as a black box in some practical scenarios mentioned previously), it is enough to consider the monotonicity, even though strong monotonicity is generally required in a QRT. Our free superoperations can also be explicitly given in the sense of quantum computational settings, similar to Ref. \cite{qo}. Up to now, we have established the fundamental requirements for a dynamical resource measure in the RNI framework. Thus, in the following, we present two natural quantification approaches: one is based on the distance from the free operation set, the other directly on the violation of the definition of free operations. \textit{Definition 5.-} The dynamical resource of a channel $\mathcal{N}(\cdot)=\sum_{i}K_{i}\cdot K^{\dagger}_{i}$ can be measured by the minimal distance from the free set $\mathcal{S}$ with an appropriate distance function $\vert\vert \cdot\vert\vert_{\star}$ as \begin{align} T(\mathcal{N})&=\displaystyle \min_{\mathcal{F}\in \mathcal{S}} \vert\vert \mathcal{N} -\mathcal{F}\vert\vert_{\star}, \label{normmeasure} \end{align} or by the magnitude of the violation of the free operation as \begin{align} \tilde{T}(\mathcal{N})&=\displaystyle \max \{\mathrm{\Delta} \mathcal{M}_{\star}(\mathcal{N}), 0\},\label{minus} \end{align} with $\mathrm{\Delta} \mathcal{M}_{\star}(\mathcal{N})=\max_\rho \left[\sum_i\mathrm{Tr}[K_i\rho K_i^\dagger]\,\mathcal{M}\left(K_i\rho K_i^\dagger/\mathrm{Tr}[K_i\rho K_i^\dagger]\right)-\mathcal{M}(\rho)\right]$, where $\star$ denotes the proper distance function in Eq. (\ref{normmeasure}) or the static resource measure in Eq. (\ref{minus}), such that both $T(\mathcal{N})$ and $\tilde{T}(\mathcal{N})$ satisfy Definition 4. Finally, we'd like to emphasize that the dynamical resource in the RNI framework makes sense for the quantum channel with given Kraus operators since our definitions are based on the strong monotonicity. Of course, one can consider the channel without post-selection in the sense of monotonicity. \section{Dynamical quantum coherence in resource non-increasing framework} Now we will consider the dynamical resource theory of coherence in the RNI framework. Since we have shown that the RNI-free operations are the same as the free operations in the static resource theory, the RNI-incoherent operations in the dynamical resource theory naturally correspond to the incoherent operations in the static coherence quantification. As mentioned in the previous section, the RNI dynamical resource theory can be considered in the sense of both strong monotonicity and monotonicity. 
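To make the violation-based quantifier concrete before specializing to coherence, the following small numerical sketch (our illustration, not part of the paper) estimates $\mathrm{\Delta}\mathcal{M}_{p}$ for qubit channels, taking the static measure $\mathcal{M}$ to be the $l_1$-norm coherence and sampling random pure input states; sampling yields a lower bound on the true maximum:

```python
import numpy as np

rng = np.random.default_rng(0)

def l1_coherence(rho):
    # l1-norm coherence: sum of the absolute values of the off-diagonal entries
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

def random_qubit_state():
    # random pure qubit state as a density matrix
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def violation(kraus, n_samples=2000):
    # Monte-Carlo lower bound on
    #   max_rho [ sum_i Tr[K_i rho K_i^dag] * C(rho_i) - C(rho) ]
    best = 0.0
    for _ in range(n_samples):
        rho = random_qubit_state()
        avg = 0.0
        for K in kraus:
            sig = K @ rho @ K.conj().T
            p = sig.trace().real
            if p > 1e-12:
                avg += p * l1_coherence(sig / p)
        best = max(best, avg - l1_coherence(rho))
    return best

H = [np.array([[1, 1], [1, -1]]) / np.sqrt(2)]   # Hadamard unitary: creates coherence
Z = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # full dephasing: an incoherent operation
print(violation(H))   # strictly positive: the Hadamard is not free
print(violation(Z))   # zero: dephasing never increases the l1 coherence
```

The fully dephasing channel shows no violation and is hence free in the RNI sense, while the Hadamard unitary exhibits a strictly positive violation.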
One can find that the free operations subject to the dynamical coherence are the incoherent operations (IO) and the maximally incoherent operations (MIO), respectively. Thus, we will directly employ the proposed approaches to quantify the dynamical coherence based on Definition 5. \textit{Theorem 6.-} Let the free operation sets be IO and MIO, corresponding to the RNI coherence in the scenarios with and without post-selection, respectively; then the dynamical coherence of a quantum channel $\mathcal{N}$ can be measured by \begin{align} T_{1/\diamond}(\mathcal{N})=\displaystyle \min_{\mathcal{F}\in\text{IO/MIO}} \vert\vert\mathcal{N}-\mathcal{F}\vert\vert_{1/\diamond}, \end{align} where the induced trace norm and the diamond norm are defined as \begin{align} \vert\vert\mathrm{\Phi}\vert\vert_{1}=\text{max}\{{\vert\vert\mathrm{\Phi(X)}\vert\vert_{1}:\;X\in\mathcal{L}(\mathcal{X}),\: \vert\vert X\vert\vert_{1}\leq 1}\}, \end{align} \begin{align} \vert\vert\mathrm{\Phi}^{\mathrm{A}\rightarrow\mathrm{B}}\vert\vert_{\diamond}=\vert\vert\mathrm{\Phi}^{\mathrm{A}\rightarrow\mathrm{B}}\otimes\mathds{1}^{\mathrm{C}}\vert\vert_{1}. \end{align} \textit{Proof.-} Firstly, $T_{1/\diamond}(\mathcal{N})$ vanishes for free operations since it is defined by the minimal distance. For the convexity, consider a channel $\mathcal{N}$ mixed from a set $\{\mathcal{N}_m\}$ with probabilities $q_m$, and denote the corresponding optimal free operations in the set IO/MIO by $\mathcal{F}_m$; the average coherence is \begin{align} &\sum_{m} q_{m} T_{1/\diamond}(\mathcal{N}_m)=\sum_m q_m\vert\vert\mathcal{N}_m-\mathcal{F}_m\vert\vert_{1/\diamond}\nonumber\\ &\geq \vert\vert\sum_m q_m\mathcal{N}_m-\sum_m q_m \mathcal{F}_m\vert\vert_{1/\diamond} \geq \min_{\mathcal{F}\in\text{IO/MIO}}\vert\vert\mathcal{N}-\mathcal{F}\vert\vert_{1/\diamond}\nonumber\\ &=T_{1/\diamond}(\mathcal{N}), \end{align} where the last inequality holds because $\sum_m q_m \mathcal{F}_m$ is itself free (IO and MIO are convex sets). For the strong monotonicity, consider the free superchannel $\mathfrak{F}=\sum_m p_m \mathfrak{F}_m$ given in Eq.(\ref{superchannel}). 
One can denote $T_{1/\diamond}(\mathfrak{F}_{m}[\mathcal{N}])=T_{1/\diamond}({\mathcal{N}_m})$. Since the induced trace norm is sub-multiplicative with respect to both composition and tensor product, it follows that $T_{1/\diamond}({\mathcal{N}_m})\leq T_{1/\diamond}(\mathcal{N})$ for every single Kraus operator of a free superchannel. Thus, the strong monotonicity holds by the following inequality \begin{align} \sum_m p_m T_{1/\diamond}({\mathcal{N}_m}) \leq \sum_m p_m T_{1/\diamond}(\mathcal{N})=T_{1/\diamond}(\mathcal{N}). \end{align} Finally, one can directly obtain the monotonicity from the strong monotonicity and convexity. $\Box$ Let $T_{1/\diamond,non}$ denote $T_{1/\diamond}$ without post-selective measurements. It can be shown that $T_{1/\diamond,non}$ has a direct operational meaning in the quantum channel discrimination task and can be calculated by semidefinite programming (SDP). We illustrate the details in the appendix. Besides the distance measures, one can also employ the maximal violation in Definition 5 to define the dynamical coherence as follows. \textit{Theorem 7.-} Given a quantum channel $\mathcal{N}(\cdot)=\sum_{n}K_{n}\cdot K^{\dagger}_{n}$, the dynamical coherence can be well quantified by \begin{align} \tilde{T}(\mathcal{N})&=\displaystyle \max \{\mathrm{\Delta} \mathcal{M}_{np/p}(\mathcal{N}), 0\},\label{minusC} \end{align} where $\mathcal{D}$ denotes the set of all density matrices in the space, and \begin{equation} \mathrm{\Delta}\mathcal{M}_{np}(\mathcal{N})=\displaystyle \max_{\rho\in\mathcal{D}}\left[\mathcal{C}(\mathcal{N}(\rho))-\mathcal{C}(\rho)\right]\label{spower} \end{equation} without post-selective measurements, and \begin{align} \mathrm{\Delta} \mathcal{M}_{p}(\mathcal{N})= \max_{\rho\in\mathcal{D}}\left[\sum_n p_n \mathcal{C}(\rho_n)-\mathcal{C}(\rho)\right] \end{align} with post-selective measurements. Here $p_n=\mathrm{Tr}(K_{n}\rho K^{\dagger}_{n})$ and $\rho_n=K_{n}\rho K^{\dagger}_{n}/p_n$. 
\textit{Proof.-} First, the definition of free operations in either scenario directly implies $\tilde{T}(\mathcal{N})\geq 0$, which is saturated if and only if $\mathcal{N}\in\text{IO/MIO}$. For the strong monotonicity, one has to consider Definition 3 in detail, which shows that free superoperations can be written as $\mathfrak{F}(\mathcal{N})=\sum_m q_m \mathfrak{F}_m(\mathcal{N})=\sum_m q_m\mathcal{N}_m$. From the following, one can find that every $\mathfrak{F}_m$ implies $\tilde{T}(\mathcal{N}_m)\leq \tilde{T}(\mathcal{N})$, which immediately leads to $ \sum_m q_m \tilde{T}({\mathcal{N}_m}) \leq \sum_m q_m\tilde{T}(\mathcal{N})=\tilde{T}(\mathcal{N})$. (i) Discarding the system with $\mathcal{E}_{i_n=0}$ yields a resourceless object; (ii) Attaching an ancilla by $\mathcal{E}_{i_n=2, 4}$ means $\mathcal{N}(\rho)=\mathrm{Tr_{A}}[(\mathrm{\Theta}_A\otimes\mathcal{N})(\sigma_A \otimes \rho)]=\mathrm{Tr_{A}}[\mathrm{\Theta}_A(\sigma_A)\otimes \mathcal{N}(\rho)]=\mathrm{1}\cdot\mathcal{N}(\rho)$; (iii) Linking a free operation by $\mathcal{E}_{i_n=1}$ corresponds to $\mathcal{C}(\mathrm{\Theta}\circ\mathcal{N}(\rho))\leq\mathcal{C}(\mathcal{N}(\rho))$ for any state $\rho\in\mathcal{D}$ and any free channel $\mathrm{\Theta}$, due to the monotonicity of the static measure $\mathcal{C}$; (iv) Linking a free operation for $\mathcal{E}_{i_n=3}$ can be verified as \begin{align} \tilde{T}(\mathcal{N}\circ\mathrm{\Theta})&= \max_{\rho}\sum_n p_n \mathcal{C}(\frac{K_n\rho_{\mathrm{\Theta}} K^{\dagger}_n}{p_n})-\mathcal{C}(\rho)\nonumber\\ &\leq \max_{\rho}\sum_n p_n \mathcal{C}(\frac{K_n\rho_{\mathrm{\Theta}} K^{\dagger}_n}{p_n})-\mathcal{C}(\rho_{\mathrm{\Theta}})\nonumber\\ &\leq \max_{\rho}\sum_n p_n \mathcal{C}(\frac{K_n \rho K^{\dagger}_n}{p_n})-\mathcal{C}(\rho)\nonumber\\ &=\tilde{T}(\mathcal{N}),\label{13} \end{align} where $\mathrm{\Theta}$ represents any free channel and $\rho_{\mathrm{\Theta}}=\mathrm{\Theta}(\rho)$ is the output state of the free channel. 
The first inequality comes from the monotonicity of static measures, i.e., $\mathcal{C}(\rho_{\mathrm{\Theta}})\leq\mathcal{C}(\rho)$, and the second holds since the maximum over the whole density-matrix space is certainly not less than the maximum over a subspace. Eq. (\ref{13}) mainly focuses on $\mathrm{\Delta}\mathcal{M}_{p}(\mathcal{N})$; a similar proof can be easily obtained for $\mathrm{\Delta}\mathcal{M}_{np}(\mathcal{N})$ (not given here). So (i)$\sim$(iv) prove the strong monotonicity. For the convexity, let's first prove the scenario with post-selective measurements. Consider the dynamical coherence of a mixture of a set of channels $\{\mathcal{N}_i\}$ with probabilities $\{q_i\}$. Let a state $\rho$ undergo the channel $\mathcal{N}_i$ with Kraus operators $\{K_{in}\}$; the (unnormalized) post-measurement state is denoted by $\sigma_{i,n}=K_{in}\rho K_{in}^\dagger$. Then we have \begin{align} \tilde{T}(\sum_i q_i\mathcal{N}_i) &=\max_{\rho \in \mathcal{D}}\Big[\sum_{i,n} q_i \mathrm{Tr}(\sigma_{i,n})\mathcal{C}(\frac{\sigma_{i,n}}{\mathrm{Tr}(\sigma_{i,n})}) -\mathcal{C}(\rho)\Big]\label{op1g}\\ & =\sum_{i,n} q_i \mathrm{Tr}(\sigma^{0}_{i,n})\mathcal{C}(\frac{\sigma^0_{i,n}}{\mathrm{Tr}(\sigma^0_{i,n})}) -\mathcal{C}(\rho^0)\\ &\leq \sum_{i} q_i \Big[\sum_n \mathrm{Tr}(\sigma^{i}_{i,n})\mathcal{C}(\frac{\sigma^i_{i,n}}{\mathrm{Tr}(\sigma^i_{i,n})}) -\mathcal{C}(\rho^i)\Big]\\ & = \sum_{i} q_i \max_{\rho^{i} \in \mathcal{D}} \Big[\sum_n p^i_n \mathcal{C}(\rho^i_n) -\mathcal{C}(\rho^i)\Big]=\sum_i q_i \tilde{T}(\mathcal{N}_i), \end{align} where the superscript $0$ indicates evaluation at the optimal state $\rho^0$ achieving the maximum in Eq. (\ref{op1g}), and $\sigma^{i}_{i,n}$ is evaluated at the optimal state $\rho^i$ for $\mathcal{N_i}$. 
In the case without post-selection, for the mixed channel $\sum_i q_i\mathcal{N}_i$, we have \begin{align} \tilde{T}(\sum_i q_i\mathcal{N}_i)&=\displaystyle \max_{\rho \in \mathcal{D}}\Big[\mathcal{C}(\sum_i q_i \mathcal{N}_i(\rho))-\mathcal{C}(\rho)\Big]\\ &=\mathcal{C}(\sum_i q_i \mathcal{N}_i(\sigma))-\mathcal{C}(\sigma)\\ &\leq\sum_i q_i \mathcal{C}(\mathcal{N}_i(\sigma))-\mathcal{C}(\sigma)\label{staconvex}\\ &\leq \sum_i q_i \displaystyle \max_{\rho_i \in \mathcal{D}}\Big[\mathcal{C}(\mathcal{N}_i(\rho_i))-\mathcal{C}(\rho_i)\Big]\label{maxoverstate}\\ &\leq\sum_i q_i \tilde{T}(\mathcal{N}_i), \end{align} where $\sigma$ denotes the optimal state for the mixed channel $\sum_i q_i\mathcal{N}_i$, and the inequality (\ref{staconvex}) comes from the convexity of static measures. Up to now, we have proved the convexity, which, together with the strong monotonicity, directly leads to the monotonicity. $\Box$ The dynamical coherence in Eq. (\ref{spower}) has a form similar to the cohering power in Ref. \cite{PhysRevA.92.032331}, but our maximum is taken over all density matrices rather than only over incoherent states. The MIO set was proposed as the maximal set of free operations in the static QRT, and here we show that the free set of the RNI framework with non-selective measurements is exactly MIO. The RNI free set subject to MIO can thus provide a new operational interpretation of the RNG framework. In this sense, one can find an alternative dynamical coherence measure assisted by the dephasing channel $\mathrm{\Delta}(\cdot)=\sum_i \langle i\vert \cdot\vert i \rangle\, \vert i\rangle\langle i \vert$ as follows. 
\textit{Theorem 8.-} Given a quantum channel $\mathcal{N}(\cdot)=\sum_{n}K_{n}\cdot K^{\dagger}_{n}$, the dynamical coherence without post-selective measurements can be quantified by \begin{align} T_{a,non}(\mathcal{N})&=\displaystyle \min_{\mathcal{F}\in\text{MIO}}\max_{\delta \in\mathcal{I}} \vert\vert(\mathcal{N}-\mathcal{F})(\delta)\vert\vert_{1}\\ &=\displaystyle \min_{\mathcal{F}\in\text{MIO}} \vert\vert(\mathcal{N}-\mathcal{F})\mathrm{\Delta}\vert\vert_{1}. \end{align} \textit{Proof.-} (1) The distance function guarantees that $T_{a,non}$ can faithfully detect dynamical coherence. (2) The convexity holds because of the absolute homogeneity and the triangle inequality. Considering two quantum channels $\mathcal{N}$ and $\mathcal{M}$ such that $T_{a,non}(\mathcal{N})=\vert\vert (\mathcal{N}-\mathcal{F}_1)\mathrm{\Delta}\vert\vert_1$ and $T_{a,non}(\mathcal{M})=\vert\vert (\mathcal{M}-\mathcal{F}_2)\mathrm{\Delta}\vert\vert_1$, for any $0\leq t \leq 1$ one can find \begin{align} T_{a,non}(t\mathcal{N}+(1-t)\mathcal{M})&=\displaystyle \min_{\mathcal{F}\in \text{MIO}}\vert\vert(t\mathcal{N}+(1-t)\mathcal{M}-\mathcal{F})\mathrm{\Delta}\vert\vert_1\\\nonumber &\leq\vert\vert(t\mathcal{N}+(1-t)\mathcal{M})\mathrm{\Delta}-(t\mathcal{F}_1+(1-t)\mathcal{F}_2)\mathrm{\Delta}\vert\vert_1\\\nonumber &=\vert\vert t(\mathcal{N}\mathrm{\Delta}-\mathcal{F}_1\mathrm{\Delta})+(1-t)(\mathcal{M}\mathrm{\Delta}-\mathcal{F}_2\mathrm{\Delta})\vert\vert_1\\\nonumber &\leq t\vert\vert(\mathcal{N}-\mathcal{F}_1)\mathrm{\Delta}\vert\vert_1+(1-t)\vert\vert(\mathcal{M}-\mathcal{F}_2)\mathrm{\Delta}\vert\vert_1\\\nonumber &=tT_{a,non}(\mathcal{N})+(1-t)T_{a,non}(\mathcal{M}), \end{align} which is the convexity. 
(3) The strong monotonicity $T_{a,non}(\mathcal{N})\geq \sum_i q_i T_{a,non}({\mathcal{N}_i})$ can be proved by \begin{align} \sum_i q_i T_{a,non}({\mathcal{N}_i})&=\sum_i q_i \min_{\mathcal{F}_i\in\text{MIO}}\vert\vert({\mathcal{N}_i}-\mathcal{F}_i)\mathrm{\Delta}\vert\vert_1 \\ &=\sum_i q_i \vert\vert({\mathcal{N}_i}-\mathcal{F}_{i}^{*})\mathrm{\Delta}\vert\vert_1\\ &\leq \sum_i q_i \min_{\mathcal{X}\in\text{MIO}}\vert\vert(\mathcal{N}_i-\mathfrak{F}_i(\mathcal{X}))\mathrm{\Delta}\vert\vert_1\label{submul}\\ &\leq \min_{\mathcal{X}\in\text{MIO}} \vert\vert(\mathcal{N}-\mathcal{X})\mathrm{\Delta}\vert\vert_1=T_{a,non}(\mathcal{N})\label{minsum}, \end{align} in which $\mathcal{N}_i=\mathfrak{F}_i(\mathcal{N})$ and the $\mathfrak{F}_i$ are the Kraus elements of the superoperation $\mathfrak{F}$, $\mathcal{F}_{i}^{*}$ represents the channel achieving the minimum, inequality (\ref{submul}) holds since $\mathfrak{F}_i(\mathcal{X})$ need not be optimal, and inequality (\ref{minsum}) is valid due to the sub-multiplicativity. (4) The strong monotonicity combined with the convexity leads to the monotonicity. $\Box$ $T_{a,non}$ is a success probability in channel discrimination tasks in which the participant is restricted to the specific free dephasing operation or to incoherent states. This result is very close to the one studied in Ref. \cite{qo}, where the authors analyzed the detection-incoherent setting. Since MIO is the maximal free set in static QRTs of coherence, additional constraints (such as applying a dephasing channel) would definitely shrink the free set (to the dephasing-incoherent channels). \section{Dynamical total coherence in the RNI framework} Quantum total coherence is a type of basis-independent coherence whose static QRT was studied in Ref. \cite{YANG2018305}. Similarly, it can also be investigated in the sense of dynamical resource theory, i.e., the dynamical total coherence of a channel. In Ref. 
\cite{YANG2018305}, it is explicitly given that the free operations with post-selective measurements are the mixed unitary channels defined as $\mathcal{U}(\cdot)=\sum_{x} q_x U_x(\cdot)U^{\dagger}_{x}$ with unitaries $U_x$ and $\sum_x q_x=1$, and the free operations without post-selective measurements are the unital channels given by $\mathcal{A}(\cdot)=\sum_{x} A_x(\cdot)A^{\dagger}_{x}$ with $\sum_{x} A_x A^{\dagger}_{x} =I$. Following the above section, for a given quantum channel $\mathcal{N}$, one can straightforwardly obtain the corresponding measures of the dynamical total coherence by replacing the set IO/MIO in Theorem 6 with the mixed unitary/unital channels and replacing the static coherence measure $\mathcal{M}$ by a proper total coherence measure. One can also easily show that these measures satisfy the necessary conditions for a valid dynamical resource theory. Considering computability, it is worth mentioning that the static total coherence measure based on the $l_2$ norm is a good measure. In this sense, an explicit measure of the dynamical total coherence can be formulated, analogous to Theorem 7, as follows. \textit{Theorem 9.-} For a quantum channel $\mathcal{N}$ with Kraus operators $\{K_n\}$, the dynamical total coherence in the RNI framework can be quantified as \begin{align} \tilde{T}_{l_2}(\{K_n\})=\max\{\max_{\rho} \sum_{n} p_n \mathrm{Tr}[\rho_n^2]-\mathrm{Tr}[\rho^{2}],0\}\label{l2sep} \end{align} with post-selective measurements, and \begin{align} \tilde{T}_{l_2}(\mathcal{N})=\max\{\max_{\rho}\mathrm{Tr}[( \sum_{n} p_n \rho_n)^{2}]-\mathrm{Tr}[\rho^{2}],0\}\label{l2nonsep} \end{align} without post-selective measurements, where $\rho_n=\frac{K_n\rho K^{\dagger}_n}{p_n}$ and $p_n=\mathrm{Tr}(K_n \rho K_n^{\dagger})$. \textit{Proof.-} Since the static total coherence based on the $l_2$ norm is a qualified measure (i.e.
satisfies all constraints for measures, including the monotonicity and the strong monotonicity) \cite{2016Total}, this theorem can entirely be understood as an explicit example of Theorem 7. $\Box$ As a demonstration, we consider the dynamical coherence of amplitude damping channels, which characterize energy dissipation in a quantum process. With a given dissipation rate $\eta$, the Kraus operators of this channel can be given as \cite{nielsen} \begin{align} &K_{0}= \left[ \begin{array} {lr} 1 & 0\\ 0&\sqrt{1-\eta}\\ \end{array} \right], &K_{1}= \left[ \begin{array} {lr} 0& \sqrt{\eta}\\ 0&0\\ \end{array} \right]. \end{align} Given a density matrix in the Bloch representation \begin{align} &\rho=\frac{1}{2} \left[ \begin{array} {lr} 1+z & x-iy\\ x+iy&1-z\\ \end{array} \right], \end{align} where the parameters $x$, $y$ and $z$ are subject to $x^2+y^2+z^2\leq1$, the dynamical coherence measured by $\tilde{T}_{l_2}(\{K_n\})$ with Eq. (\ref{l2sep}) reads \begin{align} \nonumber \tilde{T}_{l_2}(\{K_n\})&=\max\{\max_{x,y,z}\frac{(1+z)^2+(1-\eta)^2(1-z)^2+2(1-\eta)(x^2+y^2)}{2(2-\eta+\eta z)}\\\label{functionf} &+\frac{\eta(1-z)}{2}-\frac{1+x^2+y^2+z^2}{2}, 0\}. \end{align} To maximize $\tilde{T}_{l_2}(\{K_n\})$, we apply the method of \textit{Lagrange multipliers}: \begin{align} \mathcal{L}(x, y, z, \lambda)=f(x, y, z)+\lambda(x^2+y^2+z^2-1), \end{align} where $f(x, y, z)$ is the main part (ignoring the outer maximization) of Eq.~(\ref{functionf}) and the $\lambda$ term encodes the qubit constraint. The maximum of $f(x, y, z)$ is attained where the partial derivatives with respect to all parameters vanish and the complementary slackness condition holds.
Thus, we obtain the following equations: \begin{equation}\label{Lasolve} \left\{ \begin{array} {lr} \displaystyle \frac{\partial{\mathcal{L}}}{\partial {x}}=\frac{2(1-\eta)x}{2-\eta+\eta z}-x+2\lambda x=0,\\ \displaystyle \frac{\partial{\mathcal{L}}}{\partial {y}}=\frac{2(1-\eta)y}{2-\eta+\eta z}-y+2\lambda y=0,\\ \displaystyle \frac{\partial{\mathcal{L}}}{\partial {z}}=-2\eta z^3+(2\eta-6)z^2+(2\eta-4)z-2\eta+2=0,\\ \lambda(x^2+y^2+z^2-1)=0. \end{array} \right. \end{equation} It should be noted that $f(x,y,z)=0$ for any $(x,y,z)$ with $x^2+y^2+z^2=1$, which implies that the maximum lies in the interior of the feasible region, so that $\lambda=0$ (the $z$-equation above is written with $\lambda=0$ and the denominator cleared). Thus, we have the optimal solution of Eq. (\ref{Lasolve}) as: \begin{equation} \left\{ \begin{array} {lr} \lambda=x=y=0,\\ z=\frac{2 \eta+\sqrt{9-8 \eta}-3}{2 \eta}. \end{array} \right. \end{equation} Thus the analytic total coherence measure for amplitude damping channels is given by \begin{equation} \tilde{T}_{l_2}(\{K_n\})=\frac{9 \left(\sqrt{9-8 \eta}-3\right)-4 \eta \left(2 \eta+2 \sqrt{9-8 \eta}-9\right)}{4 \eta^2}. \end{equation} \begin{figure} \caption{The dynamical total coherence by $\tilde{T}_2$ versus $\eta$.} \label{T12} \end{figure} For the situation without post-selective measurements, the dynamical total coherence Eq. (\ref{l2nonsep}) is \begin{align} \tilde{T}_{2}(\mathcal{N})=\mathrm{max}\{\sum\limits_{i=1}^3\left(\frac{\xi_{i}^2\tilde{a}^2_{i}}{2(1-\xi^{2}_{i})}+\frac{a^2_{i}}{2}\right),\ 0\}, \end{align} where $a_{i}=\frac{1}{2}\mathrm{Tr}\mathcal{N}(\sigma_{i})$, $M_{ij}=\frac{1}{2}\mathrm{Tr}\sigma_{i}\mathcal{N}(\sigma_{j})$, and $\xi_{i}$ is the singular value of the matrix $M$, $\vert\tilde{a}\rangle={U^{T}}\vert a \rangle$ with $U$ determined by the singular value decomposition ${M=U\Lambda V^{\mathrm{T}}}$ \cite{yang2021quantifying}. In Fig. \ref{T12} we compare the dynamical total coherence of amplitude damping channels with and without post-selective measurements.
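These closed-form expressions can be verified numerically. The following sketch (our own consistency check, not part of the original derivation, with the arbitrarily chosen dissipation rate $\eta=0.5$) confirms that the stated $z$ is a root of the stationarity equation in Eq. (\ref{Lasolve}) and that the objective of Eq. (\ref{functionf}) evaluated at $(0,0,z)$ reproduces the analytic value of $\tilde{T}_{l_2}(\{K_n\})$.

```python
import math

eta = 0.5  # arbitrarily chosen dissipation rate for this check

# Stationary point from the solution of Eq. (Lasolve).
z = (2 * eta + math.sqrt(9 - 8 * eta) - 3) / (2 * eta)

# The cubic stationarity condition in z should vanish at this point.
cubic = -2 * eta * z**3 + (2 * eta - 6) * z**2 + (2 * eta - 4) * z - 2 * eta + 2

def f(x, y, z):
    """Objective of Eq. (functionf), without the outer max against 0."""
    num = (1 + z) ** 2 + (1 - eta) ** 2 * (1 - z) ** 2 + 2 * (1 - eta) * (x**2 + y**2)
    return (num / (2 * (2 - eta + eta * z))
            + eta * (1 - z) / 2
            - (1 + x**2 + y**2 + z**2) / 2)

# Closed-form value of the dynamical total coherence.
r = math.sqrt(9 - 8 * eta)
T = (9 * (r - 3) - 4 * eta * (2 * eta + 2 * r - 9)) / (4 * eta**2)

print(abs(cubic), abs(f(0.0, 0.0, z) - T))  # both vanish up to rounding
```

For $\eta=0.5$ both the cubic residual and the gap between $f(0,0,z)$ and the closed form are zero to machine precision (the exact common value is $5\sqrt{5}-11$).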
It is clear that the dynamical coherence with post-selective measurements is larger than that without post-selective measurements. \section{Conclusion and discussion} Quantum channels are dynamical resources that contain more information than static states. Investigating dynamical resources in QRTs has a prerequisite: identifying the free sets in different physical backgrounds. In this paper, we introduce a framework in the sense of resource non-increasing. We introduce the free operation sets with and without post-selective measurements. It should be understood that the range of the RNI framework does not exceed the well-defined resource non-generating (RNG) framework, but it provides a new perspective for determining the free set under different scenarios. We design free super-operations from fundamental ingredients and give some measures to quantify dynamical resources. As a demonstration, we quantify dynamical (total) coherence in our frameworks. MIO and IO are the free (incoherent) operations corresponding to the cases without and with post-selective measurements, respectively. The operational meanings in quantum tasks are also given for the dynamical coherence. In particular, an analytical calculation is given for the dynamical coherence of the amplitude damping channel, and a semidefinite programming formulation is provided for the dynamical coherence without post-selective measurements. \section{Acknowledgments} This work was supported by the National Natural Science Foundation of China under Grants No.~12175029, No.~11775040, and No.~12011530014. \appendix \section{Operational meaning of $T_{\diamond,non}$} We first demonstrate the operational meaning of $T_{\diamond,non}$ in channel discrimination tasks \cite{discrimination2, qsl1,qsl2, qsl3}. In such a task, Bob wants to distinguish two channels with allowed operations. Another participant, Alice, prepares a probabilistic state in a classical register Z.
The classical register will be in state 0 with probability $\lambda$ and in state 1 with probability $1-\lambda$. Alice reads out the state 0 or 1 in register Z, then sends an initial state (prepared in an arbitrary register X) through quantum channel $\mathcal{N}_1$ or $\mathcal{N}_2$, respectively, and the final state ($\rho_1$ or $\rho_2$) is sent to Bob in register Y. Bob has to determine which quantum channel the final state has experienced, and he does his best to improve his success probability by maximizing $\frac{1}{2}\vert\vert\lambda\rho_{1}-(1-\lambda)\rho_{2}\vert\vert$. We set $\lambda=\frac{1}{2}$ for a better illustration. Meanwhile, for practical consideration, Bob is allowed to hold an auxiliary register R and apply some operations $\mathrm{\Psi}^{\text{AR}}$. In a non-selective scenario, Bob can only rely on the information located in register Y. Now Bob has a success probability \cite{disprop,qo} of \begin{align} \text{Prob}(\mathcal{N}_1,\mathcal{N}_{2})=\frac{1}{2}+\frac{1}{4}\displaystyle \max_{\mathrm{\Psi}^{\text{AR}},\sigma^{AR}}\vert\vert\mathrm{\Psi}^{\text{AR}}[((\mathcal{N}_{1}-\mathcal{N}_{2})^{A}\otimes\mathds{1}^{\text{R}})(\sigma^{\text{AR}})]\vert\vert_1,\label{proba} \end{align} where $\sigma^{AR}$ denotes the bipartite state held in registers A and R. Bob may decide whether the allowed manipulations $\mathrm{\Psi}^{\text{AR}}$ are applied or not.
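To make Eq. (\ref{proba}) concrete, consider a toy instance of our own (not taken from the discussion above): discriminating the identity channel from the completely dephasing channel $\mathrm{\Delta}$ on a qubit, with $\lambda=1/2$ and no ancilla R. Feeding in $\vert+\rangle$ gives $\vert\vert\rho_1-\rho_2\vert\vert_1=1$, so this input already achieves a success probability of $1/2+1/4=3/4$.

```python
import math

# rho_1: identity channel acting on |+><+|; rho_2: its fully dephased version.
plus = [[0.5, 0.5], [0.5, 0.5]]      # |+><+|
dephased = [[0.5, 0.0], [0.0, 0.5]]  # Delta(|+><+|) = I/2

# Trace norm of the Hermitian 2x2 difference via its eigenvalues.
d = [[plus[i][j] - dephased[i][j] for j in range(2)] for i in range(2)]
tr = d[0][0] + d[1][1]
det = d[0][0] * d[1][1] - d[0][1] * d[1][0]
disc = math.sqrt(tr * tr - 4 * det)
trace_norm = abs((tr + disc) / 2) + abs((tr - disc) / 2)

prob = 0.5 + 0.25 * trace_norm
print(prob)  # 0.75
```

The difference matrix has eigenvalues $\pm 1/2$, hence trace norm $1$ and success probability $3/4$ for this particular input state.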
If Bob holds the free set MIO and we minimize the diamond norm between the resource channel $\mathcal{N}_1$ and channels $\mathcal{N}_2\in\text{MIO}$, then we find that \begin{align}\nonumber T_{\diamond,non}\left(\mathcal{N}_{1}\right)&=\displaystyle \min_{\mathcal{N}^{A}_2 \in \text{MIO}} \vert\vert\mathcal{N}^{A}_1-\mathcal{N}^{A}_2\vert\vert_\diamond\\\label{maxf} & \geq \displaystyle \min_{\mathcal{N}^{A}_2 \in \text{MIO}}\displaystyle \max_{\mathcal{F}^{AR}\in \text{MIO}} \vert\vert\mathcal{F}^{AR}(\mathcal{N}_1-\mathcal{N}_2)^{A}\vert\vert_\diamond\\\nonumber &=\displaystyle \min_{\mathcal{N}^{A}_2 \in \text{MIO}}\displaystyle \max_{\mathcal{F}^{AR}\in \text{MIO}} \vert\vert\mathcal{F}^{AR}(\mathcal{N}_1-\mathcal{N}_2)^{A}\otimes \mathds{1}^{R}\vert\vert_1\\\label{tra} &=\displaystyle \min_{\mathcal{N}^{A}_2 \in \text{MIO}}\displaystyle \max_{\mathcal{F}^{AR}\in \text{MIO},P,\sigma^{AR}} \text{Tr}\bigl[P\,\mathcal{F}^{AR}[((\mathcal{N}_1-\mathcal{N}_2)^{A}\otimes \mathds{1}^{R})(\sigma^{AR})]\bigr]\\\label{absorb} &=\displaystyle \min_{\mathcal{N}^{A}_2 \in \text{MIO}}\displaystyle \max_{\mathcal{F}^{AR}\in \text{MIO},\sigma^{AR}}\vert\vert\mathcal{F}^{AR}[((\mathcal{N}_1-\mathcal{N}_2)^{A}\otimes \mathds{1}^{R})(\sigma^{AR})]\vert\vert_1, \end{align} where (\ref{maxf}) originates from the monotonicity of dynamical measures, (\ref{tra}) uses an alternative expression for the trace norm, $\left\Vert\cdot\right\Vert_1= \max_P \text{Tr} [P (\cdot)]$, in which $P$ denotes projective operators, and (\ref{absorb}) holds because any projective measurement belongs to $\text{MIO}$ and can thus be absorbed into $\mathcal{F}^{AR}$. Comparing (\ref{absorb}) with (\ref{proba}), one can easily see that in such a scenario the probability is exactly given by \begin{equation} \text{Prob}(\mathcal{N}_1,\mathcal{N}_{2})=\frac{1}{2}+\frac{1}{4}\displaystyle T_{\diamond, non}\left(\mathcal{N}_{1}\right),\label{hh} \end{equation} since the maximum can be reached at least when Bob performs an identity operation (which belongs to MIO, i.e.
a free operation). Hence, this bound gives the operational meaning of $T_{\diamond, non}$. Furthermore, if Bob does not hold the ancillary register R, one directly derives an operational meaning for $T_{1, non}$ based on Eq. (\ref{hh}). \section{Semidefinite programming for $T_{\diamond,non}$} A quantum channel $\mathcal{N}$ has its Choi representation (sometimes called the Choi-Jamiolkowski isomorphism) \cite{CHOI,Jami} given by \begin{align} J(\mathcal{N})=\mathds{1}\otimes\mathcal{N}(\phi_{+}), \end{align} where $\phi_{+}$ is the unnormalized maximally entangled state $\phi_{+}=\sum_{ij}\vert i\rangle \langle j\vert\otimes\vert i\rangle \langle j\vert$. Its dynamical coherence can be measured by \begin{align} T_{\diamond}(\mathcal{N})=\displaystyle \min_{\mathcal{F}\in \mathrm{FREE}}\vert\vert \mathcal{N}-\mathcal{F}\vert\vert_{\diamond}. \end{align} Under the RNI and non-selective background, the free set is MIO. Thus, the dynamical coherence measure \begin{equation}T_{\diamond,non}(\mathcal{N})=\displaystyle \min_{\mathcal{F}\in\text{MIO}} \vert\vert\mathcal{N}-\mathcal{F}\vert\vert_{\diamond} \end{equation} can be evaluated by semidefinite programming (with polynomial-time algorithms \cite{sdp1}). According to its definition, the diamond norm $\vert\vert \mathcal{N}-\mathcal{F}\vert\vert_{\diamond}$ has the primal problem \begin{align} &\textit{Primal}\\\nonumber \mathrm{minimize} \quad &2\vert\vert\mathrm{Tr_{B}(Z)}\vert\vert_{\infty}\\\nonumber \textit{s.t.}\quad &\mathrm{Z}\geq J(\mathcal{N}-\mathcal{F})\\\nonumber &\mathrm{Z}\geq 0. \end{align} Then $T_{\diamond,non}(\mathcal{N})$ is the optimal value of \begin{align} \mathrm{minimize}:\quad &2\vert\vert\mathrm{Tr_{B}(Z)}\vert\vert_{\infty}\\\nonumber \textit{s.t.}\quad \mathrm{Z}&\geq J(\mathcal{N}-\mathcal{F})\\\nonumber \mathrm{Z}&\geq 0\\\nonumber \mathcal{F}&\in\:\mathrm{MIO}.
\end{align} Applying the constraints on $\mathcal{F}$ and the Choi-representation properties of MIO, the primal problem becomes: \begin{align} \mathrm{minimize}:\quad &2\vert\vert\mathrm{Tr_{B}(Z)}\vert\vert_{\infty}\\\nonumber \textit{s.t.}\quad \mathrm{Z}&\geq J(\mathcal{N})-M\\\nonumber &\mathrm{Z}\geq 0\\\nonumber &\mathrm{M}\geq0\\\nonumber &\mathrm{Tr_{B}(M)}=\mathds{1}_{A}\\\nonumber &\mathrm{Tr_{A}(M)}-\mathrm{\Delta}\mathrm{Tr_{A}(M)}=0,\nonumber \end{align} which is equivalent to \begin{align} \mathrm{minimize:}\: a\\\nonumber \textit{s.t.} \quad &a\geq0\\\nonumber &\mathds{1}_{A}\cdot a-2\mathrm{Tr_{B}(Z)}\geq 0\\\nonumber & \mathrm{Z}\geq J(\mathcal{N})-M\\\nonumber &\mathrm{Z}\geq 0\\\nonumber &\mathrm{M}\geq0\\\nonumber &\mathrm{Tr_{B}(M)}=\mathds{1}_{A}\\\nonumber &\mathrm{Tr_{A}(M)}-\mathrm{\Delta}\mathrm{Tr_{A}(M)}=0.\nonumber \end{align} The Lagrangian of the primal problem is given by: \begin{align} \mathcal{L}(a,Z,M,\widetilde{X},X,Y_1,Y_2)=a+\mathrm{Tr}[(2\mathrm{Tr_{B}(Z)}-a\mathds{1}_{A})\widetilde{X}]\\\nonumber +\mathrm{Tr}[(J(\mathcal{N})-M-Z)X]+\mathrm{Tr}[(\mathrm{Tr_{A}M}-\mathrm{\Delta Tr_{A}M})Y_{1}]\\\nonumber +\mathrm{Tr}[(\mathrm{Tr_{B}M}-\mathds{1}_{A})Y_{2}] \end{align} and its dual function (see more details on primal and dual problems in Ref.
\cite{norms1}) is \begin{align*} &q(\widetilde{X},X,Y_{1},Y_{2})=\displaystyle \inf_{a,Z,M } \mathcal{L}(a,Z,M,\widetilde{X},X,Y_1,Y_2)\\\nonumber &\qquad=\displaystyle \inf_{a,Z,M } \mathrm{Tr}[J(\mathcal{N})X]-\mathrm{Tr}[Y_2]+a[1-\mathrm{Tr}(\widetilde{X})]\\\nonumber &+\mathrm{Tr}[(2\cdot\widetilde{X}\otimes\mathds{1}_{B}-X)Z]+\mathrm{Tr}[(\mathds{1}_{A}\otimes Y_{1}-\mathds{1}_{A}\otimes \mathrm{\Delta}Y_{1}+Y_{2}\otimes \mathds{1}_{B}-X)M].\nonumber \end{align*} The dual function takes the value $\mathrm{Tr}[J(\mathcal{N})X]-\mathrm{Tr}[Y_2]$ if $ \mathrm{Tr}(\widetilde{X})\leq 1\land 2\cdot\widetilde{X}\otimes\mathds{1}_{B}-X\geq0\:\land\mathds{1}_{A}\otimes Y_{1} -\mathds{1}\otimes\mathrm{\Delta}Y_{1}+Y_{2}\otimes\mathds{1}_{B}-X\geq0$, and $-\infty$ in other cases. Thus, the dual problem is to maximize $\mathrm{Tr}[J(\mathcal{N})X]-\mathrm{Tr}[Y_2]$ with constraints on $\widetilde{X}, X, Y_1,Y_2$. To make the constraints clear and easy to read, we simplify them: (1)-(2) $\widetilde{X}$ is positive semidefinite with trace at most one; (3) $X \geq 0$; (4) the constraint $2\cdot\widetilde{X}\otimes\mathds{1}_{B}-X\geq 0$ can be merged into the single constraint $X\leq 2\cdot\rho\otimes\mathds{1}_{B}$, where $\rho$ is a density matrix. Indeed, $\widetilde{X}^{'}:=\frac{1}{\mathrm{Tr}\widetilde{X}}\widetilde{X}$ has unit trace, is positive semidefinite, and, since $\mathrm{Tr}\widetilde{X}\leq 1$ implies $2\cdot\widetilde{X}^{'}\otimes\mathds{1}_{B}\geq 2\cdot\widetilde{X}\otimes\mathds{1}_{B}$, it keeps $2\cdot\widetilde{X}^{'}\otimes\mathds{1}_{B}-X$ positive semidefinite for all $X$ satisfying $2\cdot\widetilde{X}\otimes\mathds{1}_{B}-X\geq 0$.
Hence, the dual problem is \begin{align} &\textit{Dual} \\\nonumber \mathrm{maximize}\quad &\mathrm{Tr}[J(\mathcal{N})X]-\mathrm{Tr}[Y_{2}]\\\nonumber \textit{s.t.}\quad& X\leq 2\cdot\rho\otimes\mathds{1}_{B}: \rho\mathrm{\:is\:a\:density\: matrix}\\\nonumber &\mathds{1}_{A}\otimes Y_{1}-\mathds{1}\otimes \mathrm{\Delta}Y_{1}+Y_{2}\otimes\mathds{1}_{B}-X\geq0\\\nonumber &X\geq0\\\nonumber &Y_{1}=Y_{1}^{\dagger}\\\nonumber &Y_{2}=Y_{2}^{\dagger}.\nonumber \end{align} In the end, we have to show that the primal and the dual problems reach the same optimal value, that is, that strong duality holds for this programming. Strong duality obtains if the Slater condition \cite{slater} holds: there exist some $Z^{*}$ and $M^{*}$ satisfying all the equality constraints of the primal problem such that all the inequality constraints hold \textit{strictly}. It is easy to find that the Slater condition holds when \begin{align} Z^{*}&=\mathds{1}_{A}\otimes\mathds{1}_{B}+J(\mathcal{N}),\\\nonumber M^{*}&=\frac{1}{ \vert B\vert}\mathds{1}_{A}\otimes\mathds{1}_{B}, \end{align} and strong duality guarantees that the optimal value can be reached. \textit{Example.} The quantum channel $\mathcal{K}$ has two Kraus operators: \begin{align} &K_{0}= \left[ \begin{array} {lr} 0.2096& -0.3956\\ -0.2564& -0.3719\\ \end{array} \right],\\ &K_{1}= \left[ \begin{array} {lr} -0.6197 & 0.6418\\ -0.7116& -0.5415\\ \end{array} \right]. \end{align} By solving the SDP in the software CVX \cite{cvx}, the total coherence of this quantum channel is found to be $0.186758$. \end{document}
\begin{document} \title{{The Dirichlet problem in Lipschitz domains with boundary data in Besov spaces for higher order elliptic systems with rough coefficients} \thanks{2000 {\it Math Subject Classification.} Primary: 35G15, 35J55, 35J40. Secondary: 35J67, 35E05, 46E39. \newline {\it Key words}: higher order elliptic systems, Besov spaces, weighted Sobolev spaces, mean oscillations, BMO, VMO, Lipschitz domains, Dirichlet problem \newline The work of the authors was supported in part by NSF DMS and FRG grants, as well as by the Swedish National Science Research Council}} \author{V.\, Maz'ya, M.\, Mitrea and T.\, Shaposhnikova} \date{~} \maketitle \begin{abstract} We settle the issue of well-posedness for the Dirichlet problem for a higher order elliptic system ${\mathcal L}(x,D_x)$ with complex-valued, bounded, measurable coefficients in a Lipschitz domain $\Omega$, with boundary data in Besov spaces. The main hypothesis under which our principal result is established is in the nature of best possible and requires that, at small scales, the mean oscillations of the unit normal to $\partial\Omega$ and of the coefficients of the differential operator ${\mathcal L}(x,D_x)$ are not too large. \end{abstract} \section{Introduction} \setcounter{equation}{0} A fundamental theme in the theory of partial differential equations, which has profound and intriguing connections with many other subareas of analysis, is the well-posedness of various classes of boundary value problems under sharp smoothness assumptions on the boundary of the domain and on the coefficients of the corresponding differential operator. In this paper we initiate a program broadly aimed at extending the scope of the agenda set forth by Agmon, Douglis, Nirenberg and Solonnikov (cf.
{\bf\cite{ADN}}, {\bf\cite{Sol1}}, {\bf\cite{Sol2}}) in connection with general elliptic boundary value problems on Sobolev-Besov scales, as to allow minimal smoothness assumptions (on the underlying domain and on the coefficients of the differential operator). Our main result is the solvability of the Dirichlet problem for general higher order elliptic systems in divergence form, with complex-valued, bounded, measurable coefficients in Lipschitz domains, and for boundary data in Besov spaces. In order to be more specific we need to introduce some notation. Let $m,l\in{\mathbb{N}}$ be two fixed integers and, for a bounded Lipschitz domain $\Omega$ in $\mathbb{R}^n$ (a formal definition is given in \S{6.1}) with outward unit normal $\nu=(\nu_1,...,\nu_n)$ consider the Dirichlet problem for the operator \begin{equation}\label{LOL} {\mathcal L}(X,D_X)\,{\mathcal U} :=\sum_{|\alpha|=|\beta|=m}D^\alpha(A_{\alpha\beta}(X)D^\beta{\mathcal U}) \end{equation} \noindent i.e., \begin{equation}\label{e0} \left\{ \begin{array}{l} \displaystyle{\sum_{|\alpha|=|\beta|=m} D^\alpha(A_{\alpha\beta}(X)\,D^\beta\,{\mathcal U})}=0 \qquad\mbox{for}\,\,X\in\Omega, \\[28pt] {\displaystyle\frac{\partial^k{\mathcal U}}{\partial\nu^k}}=g_k \,\,\quad\mbox{on}\,\,\partial\Omega,\qquad 0\leq k\leq m-1. \end{array} \right. \end{equation} \noindent Here and elsewhere, $D^\alpha=(-i\partial/\partial x_1)^{\alpha_1}\cdots (-i\partial/\partial x_n)^{\alpha_n}$ if $\alpha=(\alpha_1,...,\alpha_n)$. 
The coefficients $A_{\alpha\beta}$ are $l\times l$ matrix-valued functions with complex entries satisfying \begin{equation}\label{A-bdd} \sum_{|\alpha|=|\beta|=m}\|A_{\alpha\beta}\|_{L_\infty(\Omega)}\leq\kappa_1 \end{equation} \noindent for some finite constant $\kappa_1$, and such that the coercivity condition \begin{equation}\label{coercive} \Re\,\int_\Omega\sum_{|\alpha|=|\beta|=m}\langle A_{\alpha\beta}(X) D^\beta\,{\mathcal U}(X),\,D^\alpha\,{\mathcal U}(X)\rangle\,dX \geq\kappa_0\sum_{|\alpha|=m}\|D^\alpha\,{\mathcal U}\|^2_{L_2(\Omega)} \end{equation} \noindent with $\kappa_0=const>0$ holds for all ${\mathbb{C}}^l$-valued functions ${\mathcal U}\in C^\infty_0(\Omega)$. Throughout the paper, $\Re\,z$ denotes the real part of $z\in{\mathbb{C}}$ and $\langle\cdot,\cdot\rangle$ stands for the canonical inner product in $\mathbb{C}^l$. Since, generally speaking, $\nu$ is merely bounded and measurable, care should be exercised when defining iterated normal derivatives. For the setting we have in mind it is natural to take $\partial^k/\partial\nu^k:=(\sum_{j=1}^n\xi_j\partial/\partial x_j)^k \mid_{\xi=\nu}$ or, more precisely, \begin{equation}\label{nuk} \frac{\partial^k{\mathcal U}}{\partial\nu^k} :=i^k\sum_{|\alpha|=k}\frac{k!}{\alpha!}\, \nu^\alpha\,{\rm Tr}\,[D^\alpha{\mathcal U}],\qquad 0\leq k\leq m-1, \end{equation} \noindent where ${\rm Tr}$ is the boundary trace operator and $\nu^\alpha:=\nu_1^{\alpha_1}\cdots\nu_n^{\alpha_n}$ if $\alpha=(\alpha_1,...,\alpha_n)$. With $\rho(X):={\rm dist}\,(X,\partial\Omega)$ and $p\in(1,\infty)$, $a\in(-1/p,1-1/p)$ fixed, a solution for (\ref{e0}) is sought in $W^{m,a}_p(\Omega)$, defined as the space of vector-valued functions for which \begin{equation}\label{W-Nr} \Bigl(\sum_{0\leq|\alpha|\leq m}\int_\Omega|D^\alpha{\mathcal U}(X)|^p \rho(X)^{pa}\,dX\Bigr)^{1/p}<\infty. 
\end{equation} \noindent In particular, as explained later on, the traces in (\ref{nuk}) exist in the Besov space $B_p^{s}(\partial\Omega)$, where $s:=1-a-1/p\in(0,1)$, for any ${\mathcal U}\in W^{m,a}_p(\Omega)$. Recall that, with $d\sigma$ denoting the area element on $\partial\Omega$, \begin{equation}\label{Bes-xxx} g\in B_p^s(\partial\Omega)\Leftrightarrow \|g\|_{B_p^s(\partial\Omega)}:=\|g\|_{L_p(\partial\Omega)} +\Bigl(\int_{\partial\Omega}\int_{\partial\Omega} \frac{|g(X)-g(Y)|^p}{|X-Y|^{n-1+sp}}\,d\sigma_Xd\sigma_Y\Bigr)^{1/p}<\infty. \end{equation} \noindent The above definition takes advantage of the Lipschitz manifold structure of $\partial\Omega$. On such manifolds, smoothness spaces of index $s\in(0,1)$ can be defined in an intrinsic, invariant fashion by lifting their Euclidean counterparts onto the manifold itself via local charts. We shall, nonetheless, find it useful to consider higher order smoothness spaces on $\partial\Omega$ in which case the above approach is no longer effective. An alternative point of view has been developed by H.\,Whitney in {\bf\cite{Wh}} where he considered what amounts to higher order Lipschitz spaces on arbitrary closed sets. A far-reaching extension of this circle of ideas pertaining to the full scale of Besov and Sobolev spaces on irregular subsets of ${\mathbb{R}}^n$ can be found in the book {\bf\cite{JW}} by A.\,Jonsson and H.\,Wallin.
For the purpose of this introduction we note that one possible description of these higher order Besov spaces on the boundary of a Lipschitz domain $\Omega\subset{\mathbb{R}}^n$ and for $m\in{\mathbb{N}}$, $p\in(1,\infty)$, $s\in(0,1)$, reads \begin{equation}\label{Bes-X} \dot{B}^{m-1+s}_p(\partial\Omega)=\,\mbox{the closure of}\,\, \Bigl\{(D^\alpha\,{\mathcal V}|_{\partial\Omega})_{|\alpha|\leq m-1}:\, {\mathcal V}\in C^\infty_0({\mathbb{R}}^n)\Bigr\}\mbox{ in } B_p^s(\partial\Omega) \end{equation} \noindent (we shall often make no notational distinction between a Banach space ${\mathfrak X}$ and ${\mathfrak X}^N={\mathfrak X}\oplus\cdots\oplus{\mathfrak X}$ for a finite, positive integer $N$). A formal definition along with other equivalent characterizations of $\dot{B}^{m-1+s}_p(\partial\Omega)$ can be found in \S{6.4}. Given (\ref{nuk})-(\ref{W-Nr}), a necessary condition for the boundary data $\{g_k\}_{0\leq k\leq m-1}$ in (\ref{e0}) is that \begin{equation}\label{data-B} \begin{array}{l} \displaystyle{ \mbox{there exists $\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega)$ such that}} \\[10pt] \displaystyle{ g_k=i^k\sum_{|\alpha|=k}\frac{k!}{\alpha!}\,\nu^\alpha\,f_\alpha, \qquad\mbox{for each}\,\,\,\,0\leq k\leq m-1.} \end{array} \end{equation} To state the (analytical and geometrical) conditions under which the problem (\ref{e0}), formulated as above, is well-posed, we need one final piece of terminology. 
By the {\it infinitesimal mean oscillation} of a function $F\in L_1(\Omega)$ we shall understand the quantity \begin{equation}\label{e60} \{F\}_{{\rm Osc}(\Omega)}:=\mathop{\hbox{lim\,sup}}_{\varepsilon\to 0} \left(\mathop{\hbox{sup}}_{{\{B_\varepsilon\}}_\Omega} {\int{\mkern-19mu}-}_{\!\!\!B_\varepsilon\cap\Omega}\,\, {\int{\mkern-19mu}-}_{\!\!\!B_\varepsilon\cap\Omega}\, \Bigl|\,F(X)-F(Y)\,\Bigr|\,dXdY\right), \end{equation} \noindent where $\{B_\varepsilon\}_\Omega$ stands for the set of arbitrary balls centered at points of $\Omega$ and of radius $\varepsilon$, and the barred integral is the mean value. In a similar fashion, the infinitesimal mean oscillation of a function $f\in L_1(\partial\Omega)$ is defined by \begin{equation}\label{e61} \{f\}_{{\rm Osc}(\partial\Omega)} :=\mathop{\hbox{lim\,sup}}_{\varepsilon\to 0} \left(\mathop{\hbox{sup}}_{\{B_\varepsilon\}_{\partial\Omega}} {\int{\mkern-19mu}-}_{\!\!\!B_\varepsilon\cap\partial\Omega}\,\, {\int{\mkern-19mu}-}_{\!\!\!B_\varepsilon\cap\partial\Omega}\, \Bigl|\,f(X)-f(Y)\,\Bigr|\,d\sigma_Xd\sigma_Y\right), \end{equation} \noindent where $\{B_\varepsilon\}_{\partial\Omega}$ is the collection of $n$-dimensional balls with centers on $\partial\Omega$ and of radius $\varepsilon$. Our main result reads as follows; see also Theorem~\ref{Theorem1} for a more general version. \vskip 0.08in \begin{theorem}\label{Theorem} In the above setting, for each $p\in (1,\infty)$ and $s\in (0,1)$, the problem {\rm (\ref{e0})} with boundary data as in (\ref{data-B}) has a unique solution ${\mathcal U}$ for which (\ref{W-Nr}) holds with $a=1-s-1/p$ provided the coefficient matrices $A_{\alpha\beta}$ and the exterior normal vector $\nu$ to $\partial\Omega$ satisfy \begin{equation}\label{a0} \{\nu\}_{{\rm Osc}(\partial\Omega)} +\sum_{|\alpha|=|\beta|=m}\{ A_{\alpha\beta}\}_{{\rm Osc}(\Omega)} \leq\,C\,s(1-s)\Bigl(pp'+s^{-1}(1-s)^{-1}\Bigr)^{-1} \end{equation} \noindent where $p'=p/(p-1)$ is the conjugate exponent of $p$. 
Above, $C$ is a sufficiently small constant which depends on $\kappa_0$, $\kappa_1$ and the Lipschitz constant of $\Omega$, and is independent of $p$ and $s$. Furthermore, the bound {\rm (\ref{a0})} can be improved for second order operators, i.e. when $m=1$, in which case the factor $s(1-s)$ in {\rm (\ref{a0})} can be removed. \end{theorem} \vskip 0.08in Let ${\rm BMO}$ and ${\rm VMO}$ stand, respectively, for the John-Nirenberg space of functions of bounded mean oscillations and the Sarason space of functions of vanishing mean oscillations (considered either on $\Omega$ or on $\partial\Omega$). Since for an arbitrary function $F$ we have (with the dependence on the domain dropped) $\{F\}_{{\rm Osc}}\leq 2\,{\rm dist}\,(F,{\rm VMO})$ where the distance is taken in ${\rm BMO}$, the smallness condition (\ref{a0}) in Theorem~\ref{Theorem} is satisfied if \begin{equation}\label{axxx} {\rm dist}\,(\nu,{\rm VMO}\,(\partial\Omega)) +\sum_{|\alpha|=|\beta|=m}{\rm dist}\,(A_{\alpha\beta},{\rm VMO}\,(\Omega)) \leq\,C\,s(1-s)\Bigl(pp'+s^{-1}(1-s)^{-1}\Bigr)^{-1}. \end{equation} \noindent In particular, this is trivially the case when $\nu\in {\rm VMO}(\partial\Omega)$ and the $A_{\alpha\beta}$'s belong to ${\rm VMO}(\Omega)$, irrespective of $p$, $s$, $\kappa_0$, $\kappa_1$ and the Lipschitz constant of $\Omega$. While the Lipschitz character of a domain $\Omega$ controls the infinitesimal mean oscillation of its unit normal, the inequality in the opposite direction is false in general, as seen by considering $\Omega:=\{(x,y)\in{\mathbb{R}}^2:\,y>\varphi_\varepsilon(x)\}$ with $\varphi_\varepsilon(x):=x\,\sin\,(\varepsilon\log |x|^{-1})$. Indeed, a simple calculation gives $\|\varphi'_\varepsilon\|_{{\rm BMO}({\mathbb{R}})}\leq C\varepsilon$, yet $\|\varphi'_\varepsilon\|_{L_\infty({\mathbb{R}})}\sim 1$ uniformly for $\varepsilon\in(0,1/2)$.
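To spell out the calculation behind this example (a detail we add for the reader): for $x\neq 0$, differentiating $\varphi_\varepsilon(x)=x\,\sin\,(\varepsilon\log |x|^{-1})$ gives
\begin{equation*}
\varphi'_\varepsilon(x)=\sin\,(\varepsilon\log |x|^{-1})
-\varepsilon\,\cos\,(\varepsilon\log |x|^{-1}),
\end{equation*}
\noindent so that $\|\varphi'_\varepsilon\|_{L_\infty({\mathbb{R}})}\sim 1$ uniformly in $\varepsilon$, while $\varphi'_\varepsilon$ oscillates only on a logarithmic scale in $x$; it is this slow oscillation that keeps $\|\varphi'_\varepsilon\|_{{\rm BMO}({\mathbb{R}})}$ of order $\varepsilon$.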
An essentially equivalent reformulation of (\ref{e0}) is \begin{equation}\label{e0-bis} \left\{ \begin{array}{l} \displaystyle{\sum_{|\alpha|=|\beta|=m} D^\alpha(A_{\alpha\beta}(X)\,D^\beta\,{\mathcal U})}=0 \qquad\mbox{in}\,\,\,\Omega, \\[24pt] {\rm Tr}\,[D^\gamma\,{\mathcal U}]=g_\gamma \,\,\quad\mbox{on}\,\,\partial\Omega,\qquad |\gamma|\leq m-1, \end{array} \right. \end{equation} \noindent where ${\mathcal U}$ satisfies (\ref{W-Nr}) and \begin{equation}\label{data-G} \dot{g}:=\{g_\gamma\}_{|\gamma|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega), \end{equation} \noindent though an advantage of the classical formulation (\ref{e0}) is that the number of the data is minimal. For a domain $\Omega\subset{\mathbb{R}}^2$ of class $C^r$, $r>{\textstyle\frac 1{2}}$, and for constant coefficient operators, the Dirichlet problem (\ref{e0-bis}) has been considered by S.\,Agmon in {\bf\cite{Ag1}} where he proved that there exists a unique solution ${\mathcal U}\in C^{m-1+s}(\bar{\Omega})$, $0<s<r$, whenever $g_\gamma=D^\gamma{\mathcal V}|_{\partial\Omega}$ , $|\gamma|\leq m-1$, for some function ${\mathcal V}\in C^{m-1+s}(\bar{\Omega})$. See also {\bf\cite{Ag2}} for a related version. The innovation that allows us to consider, for the first time, boundary data in Besov spaces as in (\ref{data-B}) and (\ref{data-G}), is the systematic use of {\it weighted Sobolev spaces} such as those associated with the norm in (\ref{W-Nr}). In relation to the standard Besov scale in ${\mathbb{R}}^n$, we would like to point out that, thanks to Theorem~4.1 in {\bf\cite{JK}} on the one hand, and Theorem~1.4.2.4 and Theorem~1.4.4.4 in {\bf\cite{Gr}} on the other, we have \begin{equation}\label{incls} \begin{array}{l} a=1-s-\frac{1}{p}\in (0,1-1/p)\Longrightarrow W^{m,a}_p(\Omega)\hookrightarrow B^{m-1+s+1/p}_p(\Omega), \\[10pt] a=1-s-\frac{1}{p}\in (-1/p,0)\Longrightarrow B^{m-1+s+1/p}_p(\Omega)\hookrightarrow W^{m,a}_p(\Omega). 
\end{array} \end{equation} \noindent Of course, $W^{m,a}_p(\Omega)$ is just a classical Sobolev space when $a=0$. Remarkably, the classical trace theory for unweighted Sobolev spaces turns out to have a most satisfactory analogue in this weighted context; for the upper half-space this has been worked out by S.V.\,Uspenski\u{\i} in {\bf\cite{Usp}}, a paper preceded by the significant work of E.\,Gagliardo in {\bf\cite{Ga}} in the unweighted case. As a consequence, we note that under the assumptions made in Theorem~\ref{Theorem}, \begin{equation}\label{Trace} \sum_{|\alpha|\leq m-1} \|{\rm Tr}\,[D^\alpha\,{\mathcal U}]\|_{B_p^{s}(\partial\Omega)} \sim \left(\sum_{0\leq|\alpha|\leq m}\int_{\Omega}\rho(X)^{p(1-s)-1}\, |D^\alpha{\mathcal U}(X)|^p\,dX\right)^{1/p}, \end{equation} \noindent uniformly in ${\mathcal U}$ satisfying ${\mathcal L}(X,D_X)\,{\mathcal U}=0$ in $\Omega$. The estimate (\ref{Trace}) can be viewed as a far-reaching generalization of a well-known characterization of the membership of a function to a Besov space in ${\mathbb{R}}^{n-1}$ in terms of weighted Sobolev norm estimates for its harmonic extension to ${\mathbb{R}}^n_+$ (see, e.g., Proposition~$7'$ on p.\,151 of {\bf\cite{St}}). Theorem~1.1 is new even in the case when $m=1$ and $A_{\alpha\beta}\in{\mathbb{C}}^{l\times l}$ (i.e., for second order, constant coefficient systems) and provides a complete answer to the issue of well-posedness of the problem (\ref{e0}), (\ref{data-B}), (\ref{W-Nr}) in the sense that the small mean oscillation condition, depending on $p$ and $s$, is in the nature of best possible if one insists on allowing arbitrary indices $p$ and $s$ in (\ref{data-B}).
This can be seen by considering the following Dirichlet problem for the Laplacian in a domain $\Omega\subset{\mathbb{R}}^n$: \begin{equation}\label{LapJK} \left\{ \begin{array}{l} \Delta\,{\mathcal U}=0\mbox{ in }\Omega, \\[6pt] {\rm Tr}\,{\mathcal U}=g\in B^s_p(\partial\Omega), \\[6pt] D^\alpha{\mathcal U}\in L_p(\Omega,\,\rho(X)^{p(1-s)-1}\,dX),\qquad \forall\,\alpha\,:\,|\alpha|\leq 1. \end{array} \right. \end{equation} \noindent It has long been known that, already in the case when $\partial\Omega$ exhibits one cone-like singularity, the well-posedness of (\ref{LapJK}) prevents the indices $(s,1/p)$ from taking arbitrary values in $(0,1)\times(0,1)$. At a more sophisticated level, the work of D.\,Jerison and C.\,Kenig in {\bf\cite{JK}} shows that (\ref{LapJK}) is well-posed in an arbitrary, given Lipschitz domain $\Omega$ if and only if the point $(s,1/p)$ belongs to a certain open region ${\mathcal R}_{\Omega}\subseteq(0,1)\times(0,1)$, determined exclusively by the geometry of the domain $\Omega$ (cf. {\bf\cite{JK}} for more details). Let us also mention here that, even when $\partial\Omega$ is smooth and $m=l=1$, a well-known example due to N.\,Meyers (cf. {\bf\cite{Mey}}) shows that the well-posedness of (\ref{e0}) in the class of operators with bounded, measurable coefficients confines $p$ to a small neighborhood of $2$. Broadly speaking, there are two types of questions pertaining to the well-posedness of the Dirichlet problem in a Lipschitz domain $\Omega$ for a divergence form, elliptic system (\ref{LOL}) of order $2m$ with boundary data in Besov spaces. \vskip 0.08in \noindent {\it Question I.} Granted that the coefficients of ${\mathcal L}$ exhibit a certain amount of smoothness, identify the Besov spaces for which this boundary value problem is well-posed.
\vskip 0.08in \noindent {\it Question II.} Alternatively, for a given Besov space, characterize the class of Lipschitz domains $\Omega$ and elliptic operators ${\mathcal L}$ for which the aforementioned boundary value problem is well-posed. \vskip 0.08in \noindent These, as well as other related issues, have been a driving force behind many exciting, recent developments in partial differential equations and allied fields. Ample evidence of their impact can be found in C.\,Kenig's excellent account {\bf\cite{Ke}} which describes the state of the art in this field of research up to the mid-1990's, with a particular emphasis on the role played by harmonic analysis techniques. One generic problem which falls under the scope of {\it Question I} is to determine the optimal scale of spaces on which the Dirichlet problem for an elliptic system of order $2m$ is solvable in an {\it arbitrary Lipschitz domain} $\Omega$ in ${\mathbb{R}}^n$. The most basic case, that of the constant coefficient Laplacian in arbitrary Lipschitz domains in ${\mathbb{R}}^n$, is now well-understood thanks to the work of B.\,Dahlberg and C.\,Kenig {\bf\cite{DK}}, in the case of $L_p$-data, and D.\,Jerison and C.\,Kenig {\bf\cite{JK}}, in the case of Besov data. The case of (\ref{LapJK}) for boundary data exhibiting higher regularity (i.e., $s>1$) has recently been dealt with by V.\,Maz'ya and T.\,Shaposhnikova in {\bf\cite{MS2}} where nearly optimal smoothness conditions for $\partial\Omega$ are found in terms of the properties of $\nu$ as a Sobolev space multiplier. Generalizations of (\ref{LapJK}) to the case of variable-coefficient, scalar, second order elliptic equations have been obtained by M.\,Mitrea and M.\,Taylor in {\bf\cite{MT1}}, {\bf\cite{MT2}}, {\bf\cite{MT3}}.
In spite of substantial progress in recent years, there remain many basic open questions, particularly for $l>1$ and/or $m>1$ (corresponding to genuine systems and/or higher order equations), even in the case of {\it constant coefficient} operators in Lipschitz domains. In this context, one significant problem (as mentioned in, e.g., {\bf\cite{Fa}}) is to determine the sharp range of $p$'s for which the Dirichlet problem for elliptic systems with $L_p$-boundary data is well-posed. In {\bf\cite{PV}}, J.\,Pipher and G.\,Verchota have developed an $L_p$-theory for real, constant coefficient, higher order systems $L=\sum_{|\alpha|=2m}A_\alpha D^\alpha$ when $p$ is near $2$, i.e. $2-\varepsilon<p<2+\varepsilon$ with $\varepsilon>0$ depending on the Lipschitz character of $\Omega$, but this range is not optimal. Recently, more progress for the biharmonic equation and for general constant coefficient, second order systems with real coefficients, which are elliptic in the sense of Legendre-Hadamard, was made by Z.\,Shen in {\bf\cite{Sh}}, where he further extended the range of $p$'s from $(2-\varepsilon,2+\varepsilon)$ to $(2-\varepsilon,\frac{2(n-1)}{n-3}+\varepsilon)$ for a general Lipschitz domain $\Omega$ in ${\mathbb{R}}^n$, $n\geq 4$, where as before $\varepsilon=\varepsilon(\partial\Omega)>0$.
Let us also mention here the work {\bf\cite{AP}} of V.\,Adolfsson and J.\,Pipher who have dealt with the Dirichlet problem for the biharmonic operator in arbitrary Lipschitz domains and with data in Besov spaces, {\bf\cite{Ve}} where G.\,Verchota formulates and solves a Neumann-type problem for the bi-Laplacian in Lipschitz domains and with boundary data in $L_2$, {\bf\cite{MMT}} where the authors treat the Dirichlet problem for variable coefficient symmetric, real, elliptic systems of second order in an arbitrary Lipschitz domain $\Omega$ and with boundary data in $B^s_p(\partial\Omega)$, when $2-\varepsilon<p<2+\varepsilon$ and $0<s<1$, as well as the paper {\bf\cite{KM}} by V.\,Kozlov and V.\,Maz'ya, which contains an explicit description of the asymptotic behavior of null-solutions of constant coefficient, higher order, elliptic operators near points on the boundary of a domain with a sufficiently small Lipschitz constant. A successful strategy for dealing with {\it Question II} consists of formulating and solving the analogue of the original problem in a standard case, typically when $\Omega={\mathbb{R}}^n_+$ and ${\mathcal L}$ has constant coefficients, and then deviating from this most standard setting by allowing perturbations of a certain magnitude. A paradigm result in this regard, going back to the work of Agmon, Douglis, Nirenberg and Solonnikov in the 1950's and 1960's, is that the Dirichlet problem is solvable in the context of Sobolev-Besov spaces if $\partial\Omega$ is sufficiently smooth and if ${\mathcal L}$ has continuous coefficients. The latter requirement is an artifact of the method of proof (based on Korn's trick of freezing the coefficients) which requires measuring the size of the oscillations of the coefficients in a {\it pointwise sense} (as opposed to integral sense, as in (\ref{e60})).
For a version of {\it Question II}, corresponding to boundary data of higher regularity, optimal results have been obtained by V.\,Maz'ya and T.\,Shaposhnikova in {\bf\cite{MS}}. In this context, the natural language for describing the smoothness of the domain $\Omega$ is that of Sobolev space multipliers. While the study of boundary value problems in a domain $\Omega\subset{\mathbb{R}}^n$ for elliptic differential operators with discontinuous coefficients goes a long way back (for instance, C.\,Miranda has considered in {\bf\cite{Mir}} operators with coefficients in the Sobolev space $W^1_n$), a lot of attention has been devoted lately to the class of operators with coefficients in ${\rm VMO}$ (it is worth pointing out here that $W^1_n\hookrightarrow{\rm VMO}$ on Lipschitz subdomains of ${\mathbb{R}}^n$). Much of the impetus for the recent surge of interest in this particular line of work stems from an observation made by F.\,Chiarenza, M.\,Frasca and P.\,Longo in the early 1990's. More specifically, while investigating interior estimates for the solution of a scalar, second-order elliptic differential equation of the form ${\mathcal L}\,{\mathcal U}=F$, these authors have noticed in {\bf\cite{CFL1}} that ${\mathcal U}$ can be related to $F$ via a potential theoretic representation formula in which the residual terms are commutators between operators of Calder\'on-Zygmund type, on the one hand, and operators of multiplication by the coefficients of ${\mathcal L}$, on the other hand. This made it possible to control these terms by invoking the commutator estimate of Coifman-Rochberg-Weiss ({\bf\cite{CRW}}). Various partial extensions of this result can be found in {\bf\cite{AQ}}, {\bf\cite{By1}}, {\bf\cite{CaPe}}, {\bf\cite{CFL2}}, {\bf\cite{Faz}}, {\bf\cite{Gu}}, {\bf\cite{Ra}}, and the references therein. 
Here we would just like to mention that, in the whole Euclidean space, a different approach (based on estimates for the Riesz transforms) has been devised by T.\,Iwaniec and C.\,Sbordone in {\bf\cite{IS}}. Compared to the aforementioned works, our approach is more akin to that of F.\,Chiarenza and collaborators ({\bf\cite{CFL1}}, {\bf\cite{CFL2}}), though there are fundamental differences between solving boundary value problems for higher order and for second order operators. One difficulty inherently linked with the case $m>1$ arises from the way the norm in (\ref{W-Nr}) behaves under a change of variables $\varkappa:\Omega=\{(X',X_n):\,X_n>\varphi(X')\}\to{\mathbb{R}}^n_+$ designed to flatten the Lipschitz surface $\partial\Omega$. When $m=1$, a simple bi-Lipschitz change of variables such as the inverse of the map ${\mathbb{R}}^n_+\ni (X',X_n)\mapsto(X',\varphi(X')+X_n)\in\Omega$ will do, but matters are considerably more subtle in the case $m>1$. In this latter situation, we employ a special global flattening map first introduced by J.\,Ne\v{c}as (in a different context; cf. p.\,188 in {\bf\cite{Nec}}) and then independently rediscovered and/or further adapted to new settings by several authors, including V.\,Maz'ya and T.\,Shaposhnikova in {\bf\cite{MS}}, B.\,Dahlberg, C.\,Kenig, J.\,Pipher, E.\,Stein and G.\,Verchota (cf. {\bf\cite{Dah}} and the discussion in {\bf\cite{DKPV}}), and S.\,Hofmann and J.\,Lewis in {\bf\cite{HL}}. Our main novel contribution in this regard is adapting this circle of ideas to the context when one seeks pointwise estimates for higher order derivatives of $\varkappa$ and $\lambda:=\varkappa^{-1}$ in terms of $[\nabla\varphi]_{{\rm BMO}({\mathbb{R}}^{n-1})}$.
Another ingredient of independent interest is deriving estimates for $D_x^\alpha D_y^\beta G(x,y)$ where $G$ is the Green function associated with a constant (complex) coefficient system $L(D)$ of order $2m$ in the upper half-space, which are sufficiently well-suited for deriving commutator estimates in the spirit of {\bf\cite{CRW}}. The methods employed in earlier work are largely based on explicit representation formulas for $G(x,y)$ and, hence, cannot be adapted easily to the case of non-symmetric, complex coefficient, higher order systems. By way of contrast, our approach consists of proving directly that the residual part $R(x,y):=G(x,y)-\Phi(x-y)$, where $\Phi$ is a fundamental solution for $L(D)$, has the property that $D_x^\alpha D_y^\beta R(x,y)$ is a Hardy-type kernel whenever $|\alpha|=|\beta|=m$. The layout of the paper is as follows. Section~2 contains estimates for the Green function in the upper half-space. Section~3 deals with integral operators (of Calder\'on-Zygmund and Hardy type) as well as commutator estimates on weighted Lebesgue spaces. In the last part of this section we also revisit Gagliardo's extension operator and establish estimates in the context of ${\rm BMO}$. Section~4 contains a discussion of the Dirichlet problem for higher order, variable coefficient elliptic systems in the upper half-space. Then the adjustments necessary to treat the case of an unbounded domain lying above the graph of a Lipschitz function are presented in Section~5. Finally, in Section~6, we explain how to handle the case of a bounded Lipschitz domain, and state and prove Theorem~\ref{Theorem1} (from which Theorem~\ref{Theorem} follows). This section also contains further complements and extensions of our main result.
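To make the Hardy-type behavior of $D_x^\alpha D_y^\beta R(x,y)$ concrete in the simplest possible case, consider $m=l=1$ and $L=-\Delta$ in ${\mathbb{R}}^3_+$ (an illustration added here, not part of the argument): the method of images gives $G(x,y)=\Phi(x-y)-\Phi(x-\bar{y})$ with $\Phi(x)=(4\pi|x|)^{-1}$ and $\bar{y}=(y_1,y_2,-y_3)$, so the residual part is $R(x,y)=-\Phi(x-\bar{y})$, and its mixed second derivatives decay like $|x-\bar{y}|^{-3}=|x-\bar{y}|^{-n}$. The following Python sketch (all names are ours) checks this numerically.

```python
import math

# Illustration (not from the paper): for L = -Delta in the half-space R^3_+,
# the method of images gives G(x,y) = Phi(x-y) - Phi(x-ybar), so the residual
# part R(x,y) := G(x,y) - Phi(x-y) equals -Phi(x-ybar), and its mixed second
# derivatives decay like |x-ybar|^{-3} = |x-ybar|^{-n}.

def phi(v):
    # Fundamental solution of -Delta in R^3
    return 1.0 / (4.0 * math.pi * math.sqrt(v[0]**2 + v[1]**2 + v[2]**2))

def residual(x, y):
    # R(x,y) = -Phi(x - ybar), with ybar = (y1, y2, -y3)
    return -phi((x[0] - y[0], x[1] - y[1], x[2] + y[2]))

def mixed_x1_y1(x, y, h=1e-4):
    # Central finite difference for d^2 R / dx1 dy1
    xp = (x[0] + h, x[1], x[2]); xm = (x[0] - h, x[1], x[2])
    yp = (y[0] + h, y[1], y[2]); ym = (y[0] - h, y[1], y[2])
    return (residual(xp, yp) - residual(xp, ym)
            - residual(xm, yp) + residual(xm, ym)) / (4.0 * h * h)

# Hardy-type decay: |d^2 R| * |x - ybar|^3 stays bounded (here by 1/(2*pi),
# from the closed form -(4*pi)^{-1} (|d|^{-3} - 3 d1^2 |d|^{-5}), d = x - ybar)
for x, y in [((0.3, 0.1, 0.5), (-0.2, 0.4, 0.7)),
             ((1.0, -2.0, 0.2), (0.5, 0.5, 1.5))]:
    d = (x[0] - y[0], x[1] - y[1], x[2] + y[2])
    dist3 = (d[0]**2 + d[1]**2 + d[2]**2) ** 1.5
    assert abs(mixed_x1_y1(x, y)) * dist3 <= 0.5 / math.pi + 1e-6
```

The uniform bound $1/(2\pi)$ used in the check comes from the closed form $\partial_{x_1}\partial_{y_1}R=-(4\pi)^{-1}\bigl(|d|^{-3}-3d_1^2|d|^{-5}\bigr)$ with $d=x-\bar y$, which is precisely the kernel behavior the general argument of Section~2 establishes.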
\section{Green's matrix estimates in the half-space} \setcounter{equation}{0} In this section we prove a key estimate for derivatives of Green's matrix associated with the Dirichlet problem for homogeneous, higher-order constant coefficient elliptic systems in the half-space $\mathbb{R}^n_+$. \subsection{Statement of the main result} Let ${L}(D_x)$ be a matrix-valued differential operator \begin{equation}\label{eq1.1} {L}(D_x)=\sum_{|\alpha|=2m}A_{\alpha} D^{\alpha}_x, \end{equation} \noindent where the $A_\alpha$'s are constant $l\times l$ matrices with complex entries. Throughout the paper, $D^\alpha_x:= i^{-|\alpha|} \partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}\cdots \partial_{x_n}^{\alpha_n}$ if $\alpha=(\alpha_1,\alpha_2,...,\alpha_n)\in{\mathbb{N}}_0^n$. Here and elsewhere, ${\mathbb{N}}$ stands for the collection of all positive integers and ${\mathbb{N}}_0:={\mathbb{N}}\cup\{0\}$. We assume that ${L}$ is strongly elliptic, i.e. there exists $\kappa>0$ such that $\sum_{|\alpha|=2m}\|A_\alpha\|_{{\mathbb{C}}^{l\times l}}\leq\kappa^{-1}$ and \begin{equation}\label{eq1.2} \Re\,\langle{L}(\xi)\eta,\eta\rangle_{\mathbb{C}^l}\geq\kappa\, |\xi|^{2m}\,\|\eta\|^2_{\mathbb{C}^l},\qquad \forall\,\xi\in{\mathbb{R}}^n,\,\,\,\forall\,\eta\in\mathbb{C}^l. \end{equation} \noindent In what follows, in order to simplify notations, we shall denote the norms in different finite-dimensional real Euclidean spaces by $|\cdot|$ irrespective of their dimensions. Also, quite frequently, we shall make no notational distinction between a space of scalar functions, call it ${\mathfrak X}$, and the space of vector-valued functions (of a fixed, finite dimension) whose components are in ${\mathfrak X}$. We denote by $F(x)$ a fundamental matrix of the operator ${L}(D_x)$, i.e.
an $l\times l$ matrix solution of the system \begin{equation}\label{eq1.3} {L}(D_x)F(x)=\delta(x)I_l\quad\mbox{in}\,\,\mathbb{R}^n, \end{equation} \noindent where $I_l$ is the $l\times l$ identity matrix and $\delta$ is the Dirac delta function. Consider the Dirichlet problem \begin{equation}\label{eq1.5} \left\{ \begin{array}{l} {L}(D_x)u=f\qquad\qquad\qquad \mbox{in}\,\,\mathbb{R}^n_+, \\[6pt] {\rm Tr}\,[\partial^j u/\partial x_n^j]=\varphi_j\quad \mbox{on}\,\,\mathbb{R}^{n-1},\qquad j=0,1,\ldots,m-1, \end{array} \right. \end{equation} \noindent where $\mathbb{R}^n_+:=\{x=(x',x_n):\,x'\in\mathbb{R}^{n-1},\,x_n>0\}$ and ${\rm Tr}$ is the boundary trace operator. Hereafter, we shall identify $\partial{\mathbb{R}}^n_+$ with ${\mathbb{R}}^{n-1}$ in a canonical fashion. For each $y'\in \mathbb{R}^{n-1}$ we introduce the Poisson matrices $P_0,\ldots, P_{m-1}$ for problem (\ref{eq1.5}), i.e. the solutions of the boundary-value problems \begin{equation}\label{eq1.6} \left\{ \begin{array}{l} { L}(D_x)P_j(x,y')= 0\,I_l \qquad\qquad\qquad\mbox{in}\,\,\mathbb{R}^n_+, \\[10pt] \displaystyle{\left(\frac{\partial^k}{\partial x_n^k}P_j\right) (\,(x',0),y'\,)} =\delta_{jk}\,\delta(x'-y')I_l\,\,\,{\rm for}\,\,\,x'\in\mathbb{R}^{n-1},\,\, 0\leq k\leq m-1, \end{array} \right. \end{equation} \noindent where $\delta_{jk}$ is the usual Kronecker symbol and $0\leq j\leq m-1$. The matrix-valued function $P_j(x,0')$ is positive homogeneous of degree $j+1-n$, i.e. \begin{equation}\label{eq1.7} P_j(x,0')=|x|^{j+1-n}\,P_j(x/|x|,0'),\qquad x\in\mathbb{R}^n_+, \end{equation} \noindent where $0'$ denotes the origin of $\mathbb{R}^{n-1}$. The restriction of $P_j(\cdot,0')$ to the upper half-sphere $S^{n-1}_+$ is smooth and vanishes on the equator along with all of its derivatives up to order $m-1$ (see for example, \S{10.3} in {\bf\cite{KMR2}}).
Hence, \begin{equation}\label{eq1.8} \|P_j(x,0')\|_{\mathbb{C}^{l\times l}} \leq C\,\frac{x_n^m}{|x|^{n+m-1-j}},\qquad x\in{\mathbb{R}}^n_+, \end{equation} \noindent and, consequently, \begin{equation}\label{eq1.9} \|P_j(x,y')\|_{\mathbb{C}^{l\times l}} \leq C\,\frac{x_n^m}{|x-(y',0)|^{n+m-1-j}},\qquad x\in{\mathbb{R}}^n_+,\,\,\,\,y'\in{\mathbb{R}}^{n-1}. \end{equation} By $G(x,y)$ we shall denote the Green's matrix of the problem (\ref{eq1.5}), i.e. the unique solution of the boundary-value problem \begin{equation}\label{eq1.10} \left\{ \begin{array}{l} {L}(D_x)G(x,y)=\delta(x-y)I_l\quad\mbox{for}\,\,x\in\mathbb{R}^n_+, \\[6pt] \displaystyle{\left(\frac{\partial ^j}{\partial x_n^j}G\right)((x',0),y)} =0\,I_l\qquad \mbox{for}\,\,x'\in\mathbb{R}^{n-1}, \,\,\, 0\leq j\leq m-1, \end{array} \right. \end{equation} \noindent where $y\in\mathbb{R}^n_+$ is regarded as a parameter. We now introduce the matrix \begin{equation}\label{defRRR} R(x,y):=F(x-y)-G(x,y),\qquad x,y\in{\mathbb{R}}^n_+, \end{equation} \noindent so that, for each fixed $y\in\mathbb{R}^n_+$, \begin{equation}\label{eq1.12} \left\{ \begin{array}{l} {L}(D_x)\,R(x,y)=0\qquad\qquad\qquad\qquad\qquad\quad\,\,\,\, \mbox{for}\,\,x\in\mathbb{R}^n_+, \\[6pt] \displaystyle{\left(\frac{\partial^j}{\partial x_n^j}R\right)((x',0),y)} =\left(\frac{\partial^j}{\partial x_n^j}F\right)((x',0)-y) \quad\mbox{for}\,\,x'\in\mathbb{R}^{n-1},\,\, 0\leq j\leq m-1. \end{array} \right. \end{equation} \noindent Our goal is to prove the following result. \begin{theorem}\label{th1} For all multi-indices $\alpha,\beta$ of length $m$ \begin{equation}\label{mainest} \|D^\alpha_xD^\beta_y R(x,y)\|_{\mathbb{C}^{l\times l}} \leq C\,|x-\bar{y}|^{-n}, \end{equation} \noindent for $x,y\in{\mathbb{R}}^n_+$, where $\bar{y}:=(y',-y_n)$ is the reflection of the point $y\in{\mathbb{R}}^n_+$ with respect to $\partial{\mathbb{R}}^n_+$.
\end{theorem} In the proof of Theorem~\ref{th1} we distinguish two cases, $n>2m$ and $n\leq 2m$, which we shall treat separately. Our argument pertaining to the situation when $n>2m$ is based on a lemma to be proved in the subsection below. \subsection{Estimate for a parameter dependent integral} As a preamble to the proof of Theorem~\ref{th1}, here we dispose of the following technical result. \begin{lemma}\label{lem1} Let $a$ and $b$ be two non-negative numbers and assume that $\zeta\in \mathbb{R}^N$. Then for every $\varepsilon>0$ and $0<\delta<N$ there exists a constant $c(N,\varepsilon,\delta)>0$ such that \begin{equation}\label{E1} \int_{\mathbb{R}^N} \frac{d\eta}{(|\eta|+a)^{N+\varepsilon}(|\eta-\zeta|+ b)^{N-\delta}} \leq\frac{c(N,\varepsilon,\delta)}{a^\varepsilon(|\zeta|+a+b)^{N-\delta}}. \end{equation} \end{lemma} \noindent{\bf Proof.} Write ${\mathcal J}={\mathcal J}_1+{\mathcal J}_2$ where ${\mathcal J}$ stands for the integral in the left side of (\ref{E1}), whereas ${\mathcal J}_1$ and ${\mathcal J}_2$ denote the integrals obtained by splitting the domain of integration in ${\mathcal J}$ into $B_a=\{\eta\in \mathbb{R}^N:\,|\eta|<a\}$ and ${\mathbb{R}}^N\setminus B_a$, respectively. If $|\zeta|<2a$, then \begin{equation}\label{I1} {\mathcal J}_1\leq a^{-N-\varepsilon} \int_{B_a}\frac{d\eta}{(|\eta-\zeta|+b)^{N-\delta}} \leq c\,a^{-N-\varepsilon}\int_{B_{4a}}\frac{d\xi}{(|\xi|+b)^{N-\delta}}. \end{equation} \noindent Hence \begin{equation}\label{II1} {\mathcal J}_1\leq \left\{ \begin{array}{l} c\,a^{-N-\varepsilon}\,a^{N}/b^{N-\delta}\qquad{\rm if}\,\,a<b, \\[6pt] c\,a^{-N-\varepsilon+\delta}\qquad\quad{\rm if}\,\,a>b, \end{array} \right. \end{equation} \noindent so that, in particular, \begin{equation}\label{I1est} |\zeta|<2a\Longrightarrow {\mathcal J}_1\leq c\,a^{-\varepsilon}(|\zeta|+a+b)^{\delta-N}. \end{equation} Let us now assume that $|\zeta|>2a$.
Then \begin{equation}\label{I1est2} {\mathcal J}_1\leq\int_{B_a}\frac{d\eta}{(|\eta|+a)^{N+\varepsilon}}\, \frac{c}{(|\zeta|+b)^{N-\delta}} \leq c\,a^{-\varepsilon}(|\zeta|+a+b)^{\delta-N}, \end{equation} \noindent which is of the right order. As for ${\mathcal J}_2$, we write \begin{equation}\label{I2} {\mathcal J}_2\leq\int_{\mathbb{R}^N\backslash B_a} \frac{d\eta}{|\eta|^{N+\varepsilon}(|\eta-\zeta|+b)^{N-\delta}} ={\mathcal J}_{2,1}+{\mathcal J}_{2,2}, \end{equation} \noindent where ${\mathcal J}_{2,1}$, ${\mathcal J}_{2,2}$ are obtained by splitting the domain of integration in the above integral into the set $\{\eta:|\eta|>\max\{a,2|\zeta|\}\}$ and its complement in $\mathbb{R}^N\backslash B_a$. We have \begin{eqnarray}\label{Eq4} {\mathcal J}_{2,1} & \leq & \int_{|\eta|>\max\{a,b,2|\zeta|\}} \frac{d\eta}{|\eta|^{N+\varepsilon}(|\eta|+b)^{N-\delta}} +\int_{b>|\eta|>\max\{a,2|\zeta|\}} \frac{d\eta}{|\eta|^{N+\varepsilon}(|\eta|+b)^{N-\delta}} \nonumber\\[6pt] & \leq & c\Biggl(\int_{|\eta|>\max\{a,b,2|\zeta|\}} \frac{d\eta}{|\eta|^{2N+\varepsilon-\delta}} +\frac{1}{b^{N-\delta}}\int_{b>|\eta|>\max\{a,2|\zeta|\}} \frac{d\eta}{|\eta|^{N+\varepsilon}}\Biggr) \nonumber\\[6pt] & \leq & \frac{c}{(a+b+|\zeta|)^{N+\varepsilon-\delta}} +\frac{c}{a^\varepsilon(a+b+|\zeta|)^{N-\delta}} \nonumber\\[6pt] & \leq & \frac{c}{a^\varepsilon(a+b+|\zeta|)^{N-\delta}}. \end{eqnarray} There remains to estimate the integral \begin{equation}\label{Eq5} {\mathcal J}_{2,2}=\int_{B_{2|\zeta|}\backslash B_a} \frac{d\eta}{|\eta|^{N+\varepsilon}(|\eta-\zeta|+b)^{N-\delta}} ={\mathcal J}_{2,2}^{(1)}+{\mathcal J}_{2,2}^{(2)}, \end{equation} \noindent where ${\mathcal J}_{2,2}^{(1)}$ and ${\mathcal J}_{2,2}^{(2)}$ are obtained by splitting the domain of integration in ${\mathcal J}_{2,2}$ into $B_{|\zeta|/2}\backslash B_a$ and its complement (relative to $B_{2|\zeta|}\backslash B_a$).
On the one hand, \begin{equation}\label{Eq6} {\mathcal J}_{2,2}^{(1)}\leq\frac{c}{(|\zeta|+b)^{N-\delta}} \int_{B_{|\zeta|/2}\backslash B_a}\frac{d\eta}{|\eta|^{N+\varepsilon}} \leq\frac{c}{a^\varepsilon(|\zeta|+a+b)^{N-\delta}}. \end{equation} \noindent On the other hand, whenever $|\zeta|>a/2$, the integral ${\mathcal J}_{2,2}^{(2)}$, which extends over all $\eta$'s such that $|\eta|>a$, $2|\zeta|>|\eta|>|\zeta|/2$, can be estimated as \begin{eqnarray*} {\mathcal J}_{2,2}^{(2)} & \leq & \frac{c}{|\zeta|^{N+\varepsilon}} \int_{B_{2|\zeta|}\backslash B_a}\frac{d\eta}{(|\eta-\zeta|+b)^{N-\delta}} \leq\frac{c}{|\zeta|^{N+\varepsilon}} \int_{B_{4|\zeta|}}\frac{d\xi}{(|\xi|+b)^{N-\delta}} \\[6pt] & \leq & \frac{c}{|\zeta|^{N+\varepsilon}} \Biggl(\int_{{|\xi|<4|\zeta|}\atop{|\xi|<b}} \frac{d\xi}{(|\xi|+b)^{N-\delta}} +\int_{{|\xi|<4|\zeta|}\atop{|\xi|>b}}\frac{d\xi}{(|\xi|+b)^{N-\delta}}\Biggr). \end{eqnarray*} \noindent Consequently, \begin{equation}\label{J22} {\mathcal J}_{2,2}^{(2)}\leq \frac{c\,\min\{|\zeta|,b\}^N}{|\zeta|^{N+\varepsilon} b^{N-\delta}}. \end{equation} \noindent Using $|\zeta|>a/2$ and the obvious inequality \begin{equation}\label{trivial} \min\{|\zeta|,b\}^N\,\max\{|\zeta|,b\}^{N-\delta}\leq|\zeta|^N\,b^{N-\delta} \end{equation} \noindent we arrive at \begin{equation}\label{Eq7} {\mathcal J}_{2,2}^{(2)}\leq c\,a^{-\varepsilon}(|\zeta|+a+b)^{\delta-N}. \end{equation} \noindent The estimate (\ref{Eq7}), along with (\ref{Eq6}) and (\ref{Eq5}), gives the upper bound $c\,a^{-\varepsilon}(|\zeta|+a+b)^{\delta-N}$ for ${\mathcal J}_{2,2}$. Combining this with (\ref{Eq4}) we obtain the same majorant for ${\mathcal J}_{2}$ which, together with the similar result for ${\mathcal J}_{1}$ already obtained, leads to (\ref{E1}). The proof of the lemma is therefore complete.
$\Box$ \vskip 0.08in \subsection{Proof of Theorem~\ref{th1} for $n>2m$} In the case when $n>2m$ there exists a unique fundamental matrix $F(x)$ for the operator (\ref{eq1.1}) which is positive homogeneous of degree $2m-n$. We shall use the integral representation formula \begin{equation}\label{IntRRR} R(x,y)=R_0(x,y)+\ldots+ R_{m-1}(x,y),\qquad x,y\in{\mathbb{R}}^n_+, \end{equation} \noindent where $R(x,y)$ has been introduced in (\ref{defRRR}) and, with $P_j$ as in (\ref{eq1.6}), we set \begin{equation}\label{eq1.13} R_j(x,y):=\int_{\mathbb{R}^{n-1}}P_j(x,\xi')\, \left(\frac{\partial^j}{\partial x_n^j}F\right)((\xi',0)-y)\,d\xi', \qquad 0\leq j\leq m-1. \end{equation} \noindent Then, thanks to (\ref{eq1.8}) we have \begin{equation}\label{eq1.14} \|R_j(x,y)\|_{\mathbb{C}^{l\times l}}\leq C\,\int_{\mathbb{R}^{n-1}} \frac{x_n^m}{|x-(\xi',0)|^{n+m-1-j}}\cdot\frac{d\xi'}{|(\xi',0)-y|^{n-2m+j}}. \end{equation} Next, putting \begin{eqnarray*} N=n-1 &, & \quad a=x_n,\\ \varepsilon=m-j &, & \quad b=y_n,\\ \delta=2m-j-1 &, & \quad \zeta=y'-x', \end{eqnarray*} \noindent in the formulation of Lemma~\ref{lem1}, we obtain from (\ref{eq1.14}) \begin{equation}\label{Rj} \|R_j(x,y)\|_{\mathbb{C}^{l\times l}} \leq\frac{C\,x_n^j}{(|y'-x'|+x_n+y_n)^{n-2m+j}},\qquad 0\leq j\leq m-1. \end{equation} \noindent Summing up over $j=0,\ldots,m-1$ gives, by virtue of (\ref{IntRRR}), the estimate \begin{equation}\label{eq1.16} \|R(x,y)\|_{\mathbb{C}^{l\times l}}\leq C\,|x-{\bar y}|^{2m-n}, \qquad x,y\in{\mathbb{R}}^n_+. \end{equation} In order to obtain pointwise estimates for derivatives of $R(x,y)$, we make use of the following local estimate for a solution of problem (\ref{eq1.5}) with $f=0$. Recall that $W^s_p$ stands for the classical $L_p$-based Sobolev space of order $s$. The label {\it loc} is used to indicate the local versions of these (and other similar) spaces.
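The use of Lemma~\ref{lem1} above lends itself to a quick numerical sanity check. The Python sketch below (an illustration only; the parameter values and the crude constant are arbitrary choices of ours) evaluates the integral in (\ref{E1}) for $N=1$ by a truncated midpoint rule and verifies that its ratio to the right-hand side, with the constant removed, stays bounded; truncating at $|\eta|\leq T$ only underestimates the integral, so the check errs on the safe side.

```python
import math

def lem1_integral(a, b, zeta, eps, delta, T=150.0, h=0.01):
    # Truncated midpoint rule, N = 1, for
    #   J = int_{-T}^{T} d eta / ((|eta|+a)^(1+eps) * (|eta-zeta|+b)^(1-delta))
    n = int(2 * T / h)
    s = 0.0
    for k in range(n):
        eta = -T + (k + 0.5) * h
        s += 1.0 / ((abs(eta) + a) ** (1 + eps)
                    * (abs(eta - zeta) + b) ** (1 - delta))
    return s * h

def lem1_bound_core(a, b, zeta, eps, delta):
    # Right-hand side of (E1) with the constant c(N, eps, delta) removed, N = 1
    return 1.0 / (a ** eps * (abs(zeta) + a + b) ** (1 - delta))

# For fixed eps, delta the ratio J / bound_core should remain bounded
# over a grid of (a, b, zeta); 20 is a generous empirical cap
eps, delta = 0.5, 0.5
ratios = [lem1_integral(a, b, z, eps, delta) / lem1_bound_core(a, b, z, eps, delta)
          for a in (0.2, 1.0, 3.0) for b in (0.5, 2.0) for z in (0.0, 10.0)]
assert all(0.1 < r < 20.0 for r in ratios)
```

Such a check is of course no substitute for the proof; it merely confirms that the exponents on the two sides of (\ref{E1}) match in the regimes $|\zeta|\ll a+b$ and $|\zeta|\gg a+b$.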
\begin{lemma}\label{l1.1}{\rm [see {\bf\cite{ADN}}]} Let $\zeta$ and $\zeta_0$ be functions in $C_0^\infty(\mathbb{R}^n)$ such that $\zeta_0 =1$ in a neighborhood of $\mbox{supp}\,\zeta$. Then the solution $u\in W_2^m(\mathbb{R}_+^n,loc)$ of problem (\ref{eq1.5}) with $f=0$ and $\varphi_j\in W_p^{k+1-j-1/p}(\mathbb{R}^{n-1},loc)$, where $k\geq m$ and $p\in(1,\infty)$, belongs to $W_p^{k+1}(\mathbb{R}^n_+,loc)$ and satisfies the estimate \begin{equation}\label{eq1.17} \|\zeta u\|_{W_p^{k+1}(\mathbb{R}^n_+)}\leq C\, \Bigl(\sum_{j=0}^{m-1}\|\zeta_0 \varphi_j\|_{W_p^{k+1-j-1/p}(\mathbb{R}^{n-1})} +\|\zeta_0 u\|_{L_p(\mathbb{R}^n_+)}\Bigr), \end{equation} \noindent where $C$ is independent of $u$ and $\varphi_j$. \end{lemma} \noindent Let $B(x,r)$ denote the ball of radius $r>0$ centered at $x$. \begin{corollary}\label{c1.2} Assume that $u\in W_2^m(\mathbb{R}_+^n,loc)$ is a solution of problem (\ref{eq1.5}) with $f=0$ and $\varphi_j\in C^{k+1-j}(\mathbb{R}^{n-1},loc)$. Then for any $z\in\overline{\mathbb{R}^n_+}$ and $\rho>0$ \begin{equation}\label{eq1.18} \sup_{\mathbb{R}^n_+\cap B(z,\rho)}|\nabla _k u|\leq C\, \Bigl(\,\rho^{-k}\sup_{\mathbb{R}^n_+\cap B(z,2\rho)}|u| +\sum_{j=0}^{m-1}\sum_{s=0}^{k+1-j} \rho^{s+j-k}\sup_{\mathbb{R}^{n-1}\cap B(z,2\rho)}|\nabla'_{s}\varphi_j|\Bigr), \end{equation} \noindent where $\nabla'_s$ is the gradient of order $s$ in $\mathbb{R}^{n-1}$. Here $C$ is a constant independent of $\rho$, $z$, $u$ and $\varphi_j$. \end{corollary} \noindent{\bf Proof.} Given the dilation invariant nature of the estimate we seek, it suffices to assume that $\rho=1$.
Given $\phi\in C^{k+1-j}({\mathbb{R}}^{n-1})$ supported in ${\mathbb{R}}^{n-1}\cap B(z,2)$, we observe that, for a suitable $\theta\in (0,1)$, \begin{eqnarray}\label{simple} \|\phi\|_{W_p^{k+1-j-1/p}(\mathbb{R}^{n-1})} & \leq & C\, \|\phi\|_{L_p(\mathbb{R}^{n-1})}^\theta \|\phi\|_{W_p^{k+1-j}(\mathbb{R}^{n-1})}^{1-\theta} \nonumber\\[6pt] & \leq & C\, \sum_{s=0}^{k+1-j}\sup_{\mathbb{R}^{n-1}\cap B(z,2)}|\nabla'_{s}\phi|. \end{eqnarray} \noindent Also, if $p>n$, \begin{equation}\label{eq1.19} \sup\limits_{\mathbb{R}^n_+}|\nabla_k v|\leq C\, \|v\|_{W_p^{k+1}(\mathbb{R}^n_+)}, \end{equation} \noindent by virtue of the classical Sobolev inequality. Combining (\ref{simple}) and (\ref{eq1.19}) with Lemma~\ref{l1.1} now readily gives (\ref{eq1.18}). $\Box$ \vskip 0.08in Given $x,y\in{\mathbb{R}}^n_+$, set $\rho:=|x-\bar{y}|/5$ and pick $z\in\partial{\mathbb{R}}^n_+$ such that $|x-z|=\rho/2$. It follows that for any $w\in{\mathbb{R}}^n_+\cap B(z,2\rho)$ we have $|x-\bar{y}|\leq |x-z|+|z-w|+|w-\bar{y}|\leq \rho/2+2\rho+|w-\bar{y}| \leq |x-\bar{y}|/2+|w-\bar{y}|$. Consequently, $|x-\bar{y}|/2\leq |w-\bar{y}|$ for every $w\in{\mathbb{R}}^n_+\cap B(z,2\rho)$, so that, ultimately, \begin{equation}\label{nablaPhi} \rho^{\nu-k}\sup_{w\in\mathbb{R}^{n-1}\cap B(z,2\rho)} \|\nabla'_{\nu}F(w-y)\|_{{\mathbb{C}}^{l\times l}} \leq \frac{C}{|x-\bar{y}|^{n-2m+k}}, \end{equation} \noindent for each $\nu\in{\mathbb{N}}_0$. Granted (\ref{eq1.16}) and our choice of $\rho$, we obtain that \begin{equation}\label{eq1.20} \|D^\alpha_x R(x,y)\|_{\mathbb{C}^{l\times l}} \leq C_{k}\,|x-\bar{y}|^{2m-n-k},\qquad x,y\in{\mathbb{R}}^n_+, \end{equation} \noindent for each multi-index $\alpha\in{\mathbb{N}}_0^n$ of length $k$. In the following two formulas, it will be convenient to use the notation $R_{\mathcal L}$ for the matrix $R$ associated with the operator ${\mathcal L}(D_x)$ as in (\ref{defRRR}).
By Green's formula \begin{equation}\label{eq1.11} R_{\mathcal L}(y,x)=\Bigl[R_{{\mathcal L}^*}(x,y)\Bigr]^*,\qquad x,y\in{\mathbb{R}}^n_+, \end{equation} \noindent where the superscript star indicates adjunction. In order to estimate {\it mixed} partial derivatives, we observe that (\ref{eq1.11}) entails \begin{equation}\label{eq1.21} (D^\beta_y R_{\mathcal L})(x,y) =\Bigl[(D^\beta_x R_{{\mathcal L}^*})(y,x)\Bigr]^* \end{equation} \noindent and remark that ${\mathcal L}^*$ has properties similar to ${\mathcal L}$. This, in concert with (\ref{eq1.20}) and \begin{equation}\label{reflect} |x-\bar{y}|=|\bar{x}-y|,\qquad x, y\in\mathbb{R}^n_+, \end{equation} \noindent yields \begin{equation}\label{eq1.22} \|D^\beta_y R(x,y)\|_{\mathbb{C}^{l\times l}} \leq C_{\beta}\,|x-\bar{y}|^{2m-n-|\beta|}. \end{equation} \noindent Let us also point out that by formally differentiating (\ref{eq1.12}) with respect to $y$ we obtain \begin{equation}\label{eq1.23} \left\{ \begin{array}{l} {\mathcal L}(D_x)\,[D^\beta_yR_{\mathcal L}(x,y)]=0 \qquad\qquad\qquad\qquad\qquad \mbox{for}\,\,x\in\mathbb{R}^n_+, \\[10pt] \displaystyle{\left(\frac{\partial^j}{\partial x_n^j} D^\beta_y R\right)((x',0),y) =\left(\frac{\partial ^j}{\partial x_n^j}(-D)^\beta F\right)((x',0)-y)}, \,\,x'\in\mathbb{R}^{n-1},\,\,0\leq j\leq m-1. \end{array} \right. \end{equation} \noindent With (\ref{eq1.22}) and (\ref{eq1.23}) in place of (\ref{eq1.16}) and (\ref{eq1.12}), respectively, we can now run the same program as above and obtain the estimate \begin{equation}\label{Eq3} \|D^\alpha_xD^\beta_y R(x,y)\|_{\mathbb{C}^{l\times l}} \leq C_{\alpha\beta}\,|x-\bar{y}|^{2m-n-|\alpha|-|\beta|}, \end{equation} \noindent for all multi-indices $\alpha$ and $\beta$. \subsection{Proof of Theorem~\ref{th1} for $n\leq 2m$} When $n\leq 2m$ we shall use the method of descent.
To get started, fix an integer $N$ such that $N>2m$ and let $(x,z)\mapsto {\mathcal G}(x,y,z-\zeta)$ denote the Green matrix with singularity at $(y,\zeta)\in{\mathbb{R}}^n\times{\mathbb{R}}^{N-n}$ of the Dirichlet problem for the operator ${\mathcal L}(D_x)+(-\Delta_z)^m$ in the $N$-dimensional half-space \begin{equation}\label{RN} \mathbb{R}^N_+:= \{(x,z):\,z\in\mathbb{R}^{N-n},\,x=(x',x_n),\,x'\in\mathbb{R}^{n-1},\,x_n>0\}. \end{equation} \noindent Also, recall that $G(x,y)$ stands for the Green matrix of the problem (\ref{eq1.5}). Our immediate goal is to establish the following. \begin{lemma}\label{lem3} For all multi-indices $\alpha$ and $\beta$ of length $m$ and for all $x$ and $y$ in $\mathbb{R}^n_+$ \begin{equation}\label{Eq12} D^\alpha_x D^\beta_y G(x,y) =\int_{\mathbb{R}^{N-n}}D^\alpha_x D^\beta_y{\mathcal G}(x,y,-\zeta)\,d\zeta. \end{equation} \end{lemma} \noindent{\bf Proof.} The strategy is to show that \begin{equation}\label{pairGf} \int_{\mathbb{R}^n_+}D^\alpha_xD_y^\beta G(x,y)\,f_\beta(y)\,dy =\int_{\mathbb{R}^n_+}\int_{\mathbb{R}^{N-n}}D^\alpha_xD_y^\beta {\mathcal G}(x, y,-\zeta)\,d\zeta\,f_\beta(y)\,dy \end{equation} \noindent for each $f_\beta\in C^\infty_0(\mathbb{R}^n_+)$, from which (\ref{Eq12}) clearly follows. To justify (\ref{pairGf}) for a fixed, arbitrary $f_\beta\in C^\infty_0(\mathbb{R}^n_+)$, we let $u$ be the unique vector-valued function satisfying $D^\alpha u\in L_2(\mathbb{R}^n_+)$ for all $\alpha$ with $|\alpha|=m$, and such that \begin{equation}\label{Eq9} \left\{ \begin{array}{l} {\mathcal L}(D_x)u=D^\beta_x f_\beta \qquad{\rm in}\,\,\mathbb{R}^n_+, \\[6pt] \displaystyle{\left(\frac{\partial^j u}{\partial x_n^j}\right)(x',0)=0 \qquad {\rm on}\,\,\mathbb{R}^{n-1}},\,\,0\leq j\leq m-1. \end{array} \right. \end{equation} \noindent It is well-known that for each $\gamma\in{\mathbb{N}}_0^n$ \begin{equation}\label{Eq10} |D^\gamma u(x)|\leq C_\gamma\,|x|^{m-n-|\gamma|}\qquad{\rm for}\,\,|x|>1.
\end{equation} \noindent This follows, for instance, from Theorem~6.1.4 {\bf\cite{KMR1}} combined with Theorem~10.3.2 {\bf\cite{KMR2}}. Also, as a consequence of Green's formula, the solution of the problem (\ref{Eq9}) satisfies \begin{equation}\label{Eq14} D^\alpha_x u(x) =\int_{\mathbb{R}^n_+}D^\alpha_x(-D_y)^\beta G(x,y)\,f_\beta(y)\,dy. \end{equation} We shall now derive yet another integral representation formula for $D^\alpha_x u$ in terms of (derivatives of) ${\mathcal G}$ which is similar in spirit to (\ref{Eq14}). To get started, we note that since $N>2m$ the estimate (\ref{Eq3}) implies \begin{equation}\label{Eq13} \|D^\alpha_x D^\beta_y {\mathcal G}(x,y,-\zeta)\|_{\mathbb{C}^{l\times l}} \leq c\,(|x-y|+|\zeta|)^{-N}. \end{equation} \noindent Let us now fix $x\in\mathbb{R}^n_+$, $\rho>0$ and introduce a cut-off function $H\in C^\infty(\mathbb{R}^{N-n})$ which satisfies $H(z)=1$ for $|z|\leq 1$ and $H(z)=0$ for $|z|\geq 2$. We may then write \begin{equation}\label{u=G} u(x)=\int_{\mathbb{R}^N} {\mathcal G}(x,y,-\zeta) \Bigl[H\bigl(\zeta/\rho\bigr)D^\beta f_\beta(y)+(-\Delta_\zeta)^m \bigl(H\bigl(\zeta/\rho\bigr)\,u(y)\bigr)\Bigr]\,dy\,d\zeta, \end{equation} \noindent which further implies \begin{eqnarray}\label{est1} && \Bigl|D^\alpha_x u(x)-\int_{\mathbb{R}^N} D^\alpha_x(-D_y)^\beta\, {\mathcal G}(x,y,-\zeta)\,H\bigl(\zeta/\rho\bigr)\,f_\beta(y)\,dy\,d\zeta\Bigr| \nonumber\\[6pt] && \qquad\leq c\,\sum_{|\gamma|=m}\int_{\mathbb{R}^N_+} \|D^\alpha_x D_\zeta^\gamma\,{\mathcal G}(x,y,-\zeta)\| _{\mathbb{C}^{l\times l}}\, \bigl|u(y)\,D^\gamma_\zeta\bigl(H\bigl(\zeta/\rho\bigr)\bigr)\bigr|\,dy\,d\zeta. \end{eqnarray} \noindent By (\ref{Eq10}) and (\ref{Eq13}), the expression on the right-hand side of (\ref{est1}) does not exceed \begin{eqnarray*} && c\,\rho^{-m}\int_{\rho<|\zeta|<2\rho}d\zeta \int_{\mathbb{R}^{n}_+}(|x-y|+|\zeta|)^{-N}\,|y|^{m-n}\,dy \\[6pt] && \qquad\qquad \leq c\,\rho^{N-n-m}\int_{\mathbb{R}^{n}_+}(|y|+\rho)^{-N}|y|^{m-n}\,dy =c\,\rho^{-n}. 
\end{eqnarray*} \noindent This estimate, in concert with (\ref{Eq13}), allows us to obtain, after letting $\rho\to\infty$, that \begin{equation}\label{DalphaU} D^\alpha_x u(x) =\int_{\mathbb{R}^n_+}\int_{\mathbb{R}^{N-n}}D^\alpha_x(-D_y)^\beta {\mathcal G}(x, y,-\zeta)\,d\zeta\,f_\beta(y)\,dy. \end{equation} \noindent Now (\ref{pairGf}) follows readily from this and (\ref{Eq14}). $\Box$ \vskip 0.08in Having disposed of Lemma~\ref{lem3}, we are ready to present the \vskip 0.08in \noindent{\bf End of Proof of Theorem~\ref{th1}.} Assume that $2m\geq n$ and let $N$ be again an integer such that $N>2m$. Denote by ${\mathcal F}(x,z)$ the fundamental solution of the operator ${\mathcal L}(D_x)+(-\Delta_z)^m$, which is positive homogeneous of degree $2m-N$ and is singular at $(0,0)\in{\mathbb{R}}^n\times{\mathbb{R}}^{N-n}$. Then the identity \begin{equation}\label{Eq17} D^{\alpha+\beta}_x F(x)=\int_{\mathbb{R}^{N-n}}D^{\alpha+\beta}_x {\mathcal F}(x,-\zeta)\,d\zeta \end{equation} \noindent can be established by proceeding as in the proof of Lemma~\ref{lem3}. Combining (\ref{Eq17}) with Lemma~\ref{lem3}, we arrive at \begin{equation}\label{Eq18} D^\alpha_x D^\beta_y R(x,y)=\int_{\mathbb{R}^{N-n}}D^\alpha_x D^\beta_y {\mathcal R}(x,y,-\zeta)\,d\zeta, \end{equation} \noindent where ${\mathcal R}(x,y,z):={\mathcal G}(x,y,z)-{\mathcal F}(x-y,z)$. Moreover, \begin{equation}\label{DalDbetU} \|D^\alpha_x D^\beta_y {\mathcal R}(x,y,-\zeta)\|_{\mathbb{C}^{l\times l}} \leq C(|x-\bar{y}|+|\zeta|)^{-N}, \end{equation} \noindent by (\ref{eq1.20}) with $k=0$ and $N$ in place of $n$. This estimate, together with (\ref{Eq18}), yields (\ref{mainest}) and the proof of Theorem~\ref{th1} is therefore complete. 
$\Box$ \vskip 0.08in \section{Properties of integral operators in a half-space} \setcounter{equation}{0} \noindent In \S{3.1} and \S{3.2} we prove estimates for commutators (and certain commutator-like operators) between integral operators in $\mathbb{R}^n_+$ and operators of multiplication by functions of bounded mean oscillation, in weighted Lebesgue spaces on $\mathbb{R}^n_+$. Subsection~3.3 contains ${\rm BMO}$ and pointwise estimates for extension operators from $\mathbb{R}^{n-1}$ onto $\mathbb{R}^n_+$. Throughout this section, given two Banach spaces $E,F$, we let ${\mathfrak L}(E,F)$ stand for the space of bounded linear operators from $E$ into $F$, and abbreviate ${\mathfrak L}(E):={\mathfrak L}(E,E)$. Also, given $p\in[1,\infty]$, an open set ${\mathcal O}\subset{\mathbb{R}}^n$ and a measurable nonnegative function $w$ on ${\mathcal O}$, we let $L_p({\mathcal O},w(x)\,dx)$ denote the usual Lebesgue space of (classes of) functions which are $p$-th power integrable with respect to the weighted measure $w(x)\,dx$ on ${\mathcal O}$. Finally, following a well-established tradition, $A(r)\sim B(r)$ will mean that each quantity is dominated by a fixed multiple of the other, uniformly in the parameter $r$. \subsection{Kernels with singularity at $\partial\mathbb{R}^{n}_+$} Recall that $L_p({\mathbb{R}}^n_+,\,x_n^{ap}\,dx)$ stands for the weighted Lebesgue space of $p$-th power integrable functions in ${\mathbb{R}}^n_+$ corresponding to the weight $w(x):=x_n^{ap}$, $x=(x',x_n)\in{\mathbb{R}}^n_+$. \begin{proposition}\label{tp3} Let $a\in{\mathbb{R}}$, $1<p<\infty$, and assume that ${\mathcal Q}$ is a non-negative measurable function on $\{\zeta=(\zeta',\zeta_n)\in{\mathbb{R}}^{n-1}\times{\mathbb{R}}:\,\zeta_n>-1\}$, which also satisfies \begin{equation}\label{CC-1} \int_{\mathbb{R}^n_+} {\mathcal Q}(\zeta',\zeta_n-1)\,\zeta_n^{-a-1/p}\,d\zeta<\infty. 
\end{equation} \noindent Then the operator \begin{equation}\label{CC-2} Qf(x):=x_n^{-n}\int_{\mathbb{R}^n_+}{\mathcal Q} \Bigl(\frac{y-x}{x_n}\Bigr)f(y)\,dy,\qquad x=(x',x_n)\in{\mathbb{R}}^n_+, \end{equation} \noindent initially defined on functions $f\in L_p(\mathbb{R}^n_+)$ with compact support in $\mathbb{R}^n_+$, can be extended by continuity to an operator acting from $L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx)$ into itself, with the norm satisfying \begin{equation}\label{CC-3} \|Q\|_{{\mathfrak L}(L_p({\mathbb{R}}^n_+,\,x_n^{ap}dx))}\leq\int_{{\mathbb{R}}^n_+} {\mathcal Q}(\zeta',\zeta_n-1)\,\zeta_n^{-a-1/p}\,d\zeta. \end{equation} \end{proposition} \noindent{\bf Proof.} Introducing the new variable $\zeta:=(x_n^{-1}(y'-x'), x_n^{-1}y_n)\in{\mathbb{R}}^n_+$, we may write \begin{equation}\label{CC-4} |Qf(x)|\leq\int_{{\mathbb{R}}^n_+}{\mathcal Q}(\zeta',\zeta_n-1) |f(x' +x_n\zeta', x_n\zeta_n)|d\zeta,\qquad\forall\,x\in{\mathbb{R}}^n_+. \end{equation} \noindent Then, by Minkowski's inequality, \begin{eqnarray}\label{CC-5} \|Qf\|_{L_p({\mathbb{R}}^n_+,x_n^{a p}\,dx)} & \leq & \int_{{\mathbb{R}}^n_+} {\mathcal Q}(\zeta',\zeta_n-1)\Bigl(\int_{{\mathbb{R}}^n_+} x_n^{a p}|f(x'+x_n\zeta',x_n\zeta_n)|^p\,dx\Bigr)^{1/p}d\zeta \nonumber\\[6pt] & = & \Bigl(\int_{{\mathbb{R}}^n_+}{\mathcal Q}(\zeta',\zeta_n-1)\, \zeta_n^{-a-1/p}\,d\zeta\Bigr)\|f\|_{L_p({\mathbb{R}}^n_+,x_n^{a p}\,dx)}, \end{eqnarray} \noindent as desired. $\Box$ \vskip 0.08in Recall that $\bar{y}:=(y',-y_n)$ if $y=(y',y_n)\in{\mathbb{R}}^{n-1}\times{\mathbb{R}}$. \begin{corollary}\label{Cor1} Consider \begin{equation}\label{1a} Rf(x):=\int_{\mathbb{R}^n_+}\frac{\log\,\bigl(\frac{|x-y|}{x_n}+2\bigr)} {|x-\bar{y}|^n} f(y)\,dy,\qquad x=(x',x_n)\in{\mathbb{R}}^n_+. \end{equation} \noindent Then for each $1<p<\infty$ and each $a\in (-1/p,1-1/p)$ the operator $R$ is bounded from $L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx)$ into itself. 
Moreover, \begin{equation}\label{70a} \|R\|_{{\mathfrak L}(L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx))} \leq\frac{c(n)\,p^2}{(pa+1)(p(1-a)-1)}=\frac{c(n)}{s(1-s)}, \end{equation} \noindent where $s=1-a-1/p$ and $c(n)$ is independent of $p$ and $a$. \end{corollary} \noindent{\bf Proof.} The result follows from Proposition~\ref{tp3} with \begin{equation}\label{CC-6} {\mathcal Q}(\zeta):=\frac{\log\,(|\zeta|+2)}{(|\zeta|^2+1)^{n/2}}, \end{equation} \noindent and from the obvious inequality $2|x-\bar{y}|^2 \geq |x-y|^2 +x_n^2$. $\Box$ \vskip 0.08in Let us note here that Corollary~\ref{Cor1} immediately yields the following. \begin{corollary}\label{Cor2} Consider \begin{equation}\label{2a} Kf(x):=\int_{{\mathbb{R}}^n_+}\frac{f(y)}{|x-\bar{y}|^n}\,dy,\qquad x\in{\mathbb{R}}^n_+. \end{equation} \noindent Then for each $1<p<\infty$ and $a\in (-1/p,1-1/p)$ the operator $K$ is bounded from $L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx)$ into itself. Moreover, \begin{equation}\label{71a} \|K\|_{{\mathfrak L}(L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx))} \leq \frac{c(n)\,p^2}{(pa+1)(p(1-a)-1)}=\frac{c(n)}{s(1-s)}, \end{equation} \noindent where $s=1-a-1/p$ and $c(n)$ is independent of $p$ and $a$. \end{corollary} Recall that the barred integral stands for the mean value (i.e., the integral average). \begin{lemma}\label{Lem1} Assume that $1<p<\infty$, $a\in (-1/p,1-1/p)$, and recall the operator $K$ introduced in {\rm (\ref{2a})}. Further, consider a non-negative, measurable function $w$ defined on ${\mathbb{R}}^n_+$ and fix a family of balls ${\mathcal F}$ which form a Whitney covering of ${\mathbb{R}}^n_+$. Then the norm of $wK$ as an operator from $L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx)$ into itself is equivalent to \begin{equation}\label{CC-7} \sup\limits_{B\in{\mathcal F}}\Bigl({\int{\mkern-19mu}-}_{\!\!\!\!B}w(y)^p\,dy\Bigr)^{1/p}. 
\end{equation} \noindent Furthermore, \begin{equation}\label{CC-70} \|w\,K\|_{{\mathfrak L}(L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx))}\leq\frac{c(n)}{s(1-s)} \sup\limits_{B\in{\mathcal F}}\Bigl({\int{\mkern-19mu}-}_{\!\!\!\!B}w(y)^p\,dy\Bigr)^{1/p}, \end{equation} \noindent where $c(n)$ is independent of $w$, $p$, and $a$. \end{lemma} \noindent{\bf Proof.} Fix $f\geq 0$ and denote by $|B|$ the Euclidean volume of $B$. Sobolev's embedding theorem allows us to write \begin{equation}\label{CC-8} \|Kf\|^p_{L_\infty(B)}\leq c(n)\,|B|^{-1}\sum_{j=0}^n |B|^{jp/n} \|\nabla_j Kf\|^p_{L_p(B)},\qquad\forall\,B\in{\mathcal F}. \end{equation} \noindent Hence, \begin{equation}\label{N1-bis} \int_{{\mathbb{R}}^n_+}|x_n^{a}w(x)(Kf)(x)|^p\,dx\leq c(n)\, \sup\limits_{B\in{\mathcal F}}{\int{\mkern-19mu}-}_{\!\!\!\!B}w(y)^p\,dy \int_{{\mathbb{R}}^n_+}x_n^{pa}\sum_{0\leq j\leq n}x_n^{jp}|\nabla_j Kf|^p\,dx. \end{equation} \noindent Observing that $x_n^j|\nabla_j\, Kf|\leq c(n)\,Kf$ and referring to Corollary~\ref{Cor2}, we arrive at the required upper estimate for the norm of $wK$. The lower estimate is obvious. $\Box$ \vskip 0.08in We momentarily pause in order to collect some definitions and set up basic notation pertaining to functions of bounded mean oscillation. Let $f$ be a locally integrable function defined on $\mathbb{R}^n$ and define the seminorm \begin{equation}\label{semi1} [f]_{{\rm BMO}(\mathbb{R}^n)}:=\sup_{B} {\int{\mkern-19mu}-}_{\!\!\!B}\,\Bigl|f(x)-{\int{\mkern-19mu}-}_{\!\!\!B}f(y)\,dy\Bigr|\,dx, \end{equation} \noindent where the supremum is taken over all balls $B$ in ${\mathbb{R}^n}$. 
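It is worth recalling at this stage that, as a classical consequence of the John--Nirenberg inequality, the $L_1$-oscillation in (\ref{semi1}) controls all its $L_p$-analogues: there exists a purely dimensional constant $c(n)$ such that for every ball $B$ in $\mathbb{R}^n$ and every $p\in[1,\infty)$, \[ \Bigl({\int{\mkern-19mu}-}_{\!\!\!B}\,\Bigl|f(x)-{\int{\mkern-19mu}-}_{\!\!\!B}f(y)\,dy\Bigr|^p\,dx\Bigr)^{1/p} \leq c(n)\,p\,[f]_{{\rm BMO}(\mathbb{R}^n)}. \] This is the form in which the ${\rm BMO}$ seminorm will enter estimates such as (\ref{CC-15}) below.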
Next, if $f$ is a locally integrable function defined on $\mathbb{R}^n_+$, we set \begin{equation}\label{semi2} [f]_{{\rm BMO}(\mathbb{R}^n_+)}:=\sup_{\{B\}} {\int{\mkern-19mu}-}_{\!\!\!B\cap\mathbb{R}^n_+}\, \Bigl|f(x)-{\int{\mkern-19mu}-}_{\!\!\!B\cap\mathbb{R}^n_+}f(y)\,dy\Bigr|\,dx, \end{equation} \noindent where, this time, the supremum is taken over the collection $\{B\}$ of all balls $B$ with centers in $\overline{\mathbb{R}^n_+}$. Then the following inequalities are straightforward: \begin{equation}\label{N4-bis} [f]_{{\rm BMO}(\mathbb{R}^n_+)}\leq\sup_{\{B\}} {\int{\mkern-19mu}-}_{\!\!\!B\cap\mathbb{R}^n_+}\,{\int{\mkern-19mu}-}_{\!\!\!B\cap\mathbb{R}^n_+} \Bigl|f(x)-f(y)\Bigr|\,dxdy\leq 2\,[f]_{{\rm BMO}(\mathbb{R}^n_+)}. \end{equation} \noindent We also record here the equivalence relation \begin{equation}\label{semi} [f]_{{\rm BMO}(\mathbb{R}^n_+)}\sim [{\rm Ext}\,f]_{{\rm BMO}(\mathbb{R}^n)}, \end{equation} \noindent where ${\rm Ext}\,f$ is the extension of $f$ onto $\mathbb{R}^n$ as an even function in $x_n$. Finally, by ${\rm BMO}({\mathbb{R}^n_+})$ we denote the collection of equivalence classes, modulo constants, of functions $f$ on $\mathbb{R}^n_+$ for which $[f]_{{\rm BMO}({\mathbb{R}^n_+})}<\infty$. \begin{proposition}\label{tp3-bis} Let $b\in{\rm BMO}({\mathbb{R}}^n_+)$ and consider the operator \begin{equation}\label{eqp8-bis} Tf(x):=\int_{{\mathbb{R}}^n_+}\frac{|b(x)-b(y)|}{|x-\bar{y}|^n}f(y)\,dy, \qquad x\in{\mathbb{R}}^n_+. 
\end{equation} \noindent Then for each $p\in(1,\infty)$ and $a\in (-1/p,1-1/p)$ \begin{equation}\label{eqp9bis} T:L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx)\longrightarrow L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx) \end{equation} \noindent is a well-defined, bounded operator with \begin{equation}\label{CC-71} \|T\|_{{\mathfrak L}(L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx))} \leq\frac{c(n)}{s(1-s)}\,[b]_{{\rm BMO}({\mathbb{R}}^n_+)}, \end{equation} \noindent where $c(n)$ is a constant which depends only on $n$. \end{proposition} \noindent{\bf Proof.} Given $x\in{\mathbb{R}}^n_+$ and $r>0$, we shall use the abbreviations \begin{equation}\label{CC-9} \bar{b}_r(x):={\int{\mkern-19mu}-}_{\!\!\!B(x,r)\cap{\mathbb{R}}^n_+}b(y)\,dy, \qquad\quad D_r(x):=|b(x)-\bar{b}_{r}(x)|, \end{equation} \noindent and make use of the integral operator \begin{equation}\label{CC-10} Sf(x):=\int_{\mathbb{R}^n_+}\frac{D_{|x-\bar{y}|}(x)}{|x-\bar{y}|^n} \,f(y)\,dy,\qquad x\in{\mathbb{R}}^n_+, \end{equation} \noindent as well as its adjoint $S^*$. Clearly, for each nonnegative, measurable function $f$ on ${\mathbb{R}}^n_+$ and each $x\in{\mathbb{R}}^n_+$, \begin{eqnarray}\label{CC-11} Tf(x) & \leq & Sf(x)+S^*f(x)+\int_{{\mathbb{R}}^n_+} \frac{|\bar{b}_{|x-\bar{y}|}(x)-\bar{b}_{|x-\bar{y}|}(y)|}{|x-\bar{y}|^n}\, f(y)dy \nonumber\\[6pt] & \leq & Sf(x)+S^*f(x)+c(n)\,[b]_{{\rm BMO}({\mathbb{R}}^n_+)}Kf(x), \end{eqnarray} \noindent where $K$ has been introduced in (\ref{2a}). Making use of Corollary~\ref{Cor2}, we need to estimate only the norm of $S$. Obviously, \begin{equation}\label{CC-12} Sf(x)\leq D_{x_n}(x)Kf(x)+\int_{{\mathbb{R}}^n_+} \frac{|\bar{b}_{x_n}(x)-\bar{b}_{|x-\bar{y}|}(x)|}{|x-\bar{y}|^n}\,f(y)\,dy. 
\end{equation} \noindent Setting $r=|x-\bar{y}|$ and $\rho=x_n$ in the standard inequality \begin{equation}\label{CC-13} |\bar{b}_\rho(x)-\bar{b}_r(x)|\leq c(n)\,\log\,\Bigl(\frac{r}{\rho}+1\Bigr) [b]_{{\rm BMO}({\mathbb{R}}^n_+)}, \end{equation} \noindent where $r>\rho$ (cf., e.g., p.\,176 in {\bf\cite{MS}}, or p.\,206 in {\bf\cite{Tor}}), we arrive at \begin{equation}\label{CC-14} Sf(x)\leq D_{x_n}(x)Kf(x)+ c(n)\,[b]_{{\rm BMO}(\mathbb{R}^n_+)}\, Rf(x), \end{equation} \noindent where $R$ is defined in (\ref{1a}). Let ${\mathcal F}$ be a Whitney covering of ${\mathbb{R}}^n_+$ with open balls. For an arbitrary $B\in{\mathcal F}$, denote by $\delta$ the radius of $B$. By Lemma~\ref{Lem1} with $w(x):=D_{x_n}(x)$, the norm of the operator $D_{x_n}(x)K$ does not exceed \begin{eqnarray}\label{CC-15} \sup\limits_{B\in{\mathcal F}}\Bigl({\int{\mkern-19mu}-}_{\!\!\!B}|D_{x_n}(x)|^p\,dx \Bigr)^{1/p} & \leq & c(n)\,\sup\limits_{B\in{\mathcal F}}\Bigl({\int{\mkern-19mu}-}_{\!\!\!B} |b(x)-\bar{b}_\delta(x)|^p\,dx\Bigr)^{1/p} +c(n)\,[b]_{{\rm BMO}(\mathbb{R}^n_+)} \nonumber\\[6pt] &\leq & c(n)\,[b]_{{\rm BMO}(\mathbb{R}^n_+)}, \end{eqnarray} \noindent by the John-Nirenberg inequality. Here we have also used the triangle inequality and the estimate (\ref{CC-13}) in order to replace $\bar{b}_{x_n}(x)$ in the definition of $D_{x_n}(x)$ by $\bar{b}_{\delta}(x)$. The intervening logarithmic factor is bounded independently of $x$ since $x_n$ is comparable with $\delta$, uniformly for $x\in B$. With this estimate in hand, a reference to Corollary~\ref{Cor1} gives that \begin{eqnarray}\label{CC-16} && S:L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx)\to L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx) \mbox{ boundedly} \\[6pt] &&\mbox{for each }p\in(1,\infty)\mbox{ and each }a\in (-1/p,1-1/p). \nonumber \end{eqnarray} \noindent The corresponding estimate for the norm of $S$ results as well. 
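Let us also record how duality interacts with the weighted spaces under consideration. With respect to the unweighted pairing $\langle f,g\rangle:=\int_{{\mathbb{R}}^n_+}f\,g\,dx$ one has \[ \Bigl(L_p({\mathbb{R}}^n_+,\,x_n^{ap}\,dx)\Bigr)^* =L_{p'}({\mathbb{R}}^n_+,\,x_n^{-ap'}\,dx),\qquad 1/p+1/p'=1, \] and the dual parameters are again admissible, since $-a\in(-1/p',1-1/p')$ precisely when $a\in(-1/p,1-1/p)$. Moreover, $s=1-a-1/p$ goes over into $1-(-a)-1/p'=1-s$ under this correspondence, so the factor $\frac{1}{s(1-s)}$ in our norm estimates is invariant under passing to adjoints.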
By duality, it follows that $S^*$ enjoys the same property and, hence, the operator $T$ is bounded on $L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx)$ for each $p\in(1,\infty)$ and $a\in(-1/p,1-1/p)$, thanks to (\ref{CC-11}) and Corollary~\ref{Cor2}. The fact that the operator norm of $T$ admits the desired estimate is implicit in the above reasoning and this finishes the proof of the proposition. $\Box$ \vskip 0.08in \subsection{Singular integral operators} We need the analogue of Proposition~\ref{tp3-bis} for the class of Mikhlin-Calder\'on-Zygmund singular integral operators. Recall that \begin{equation}\label{CZ-op} {\mathcal S}f(x)=p.v.\int_{{\mathbb{R}}^n}k(x,x-y)f(y)\,dy,\qquad x\in{\mathbb{R}}^n, \end{equation} \noindent (where $p.v.$ indicates that the integral is taken in the principal value sense, which means excluding balls centered at the singularity and then passing to the limit as the radii shrink to zero), is called a Mikhlin-Calder\'on-Zygmund operator (with a variable coefficient kernel) provided the function $k:{\mathbb{R}}^n\times({\mathbb{R}}^n\setminus\{0\})\to{\mathbb{R}}$ satisfies: \begin{itemize} \item[(i)] $k(x,\cdot)\in C^\infty({\mathbb{R}}^n\setminus\{0\})$ for almost every $x\in{\mathbb{R}}^n$, and \begin{equation}\label{kk-est} \max_{|\alpha|\leq 2n}\|D_z^\alpha k(x,z)\|_{L_\infty({\mathbb{R}}^n\times S^{n-1})} <\infty, \end{equation} \noindent where $S^{n-1}$ is the unit sphere in ${\mathbb{R}}^n$; \item[(ii)] $k(x,\lambda z)=\lambda ^{-n}k(x,z)$ for each $z\in{\mathbb{R}}^n\setminus\{0\}$ and each $\lambda\in{\mathbb{R}}$, $\lambda>0$; \item[(iii)] $\int_{S^{n-1}} k(x,\omega)\,d\omega=0$, where $d\omega$ indicates integration with respect to $\omega\in S^{n-1}$. \end{itemize} \vskip 0.08in It is well-known that the Mikhlin-Calder\'on-Zygmund operator ${\mathcal S}$ and its commutator $[{\mathcal S},b]$ with the operator of multiplication by a function $b\in{\rm BMO}(\mathbb{R}^n_+)$ are bounded operators in $L_p(\mathbb{R}^n_+)$ for each $1<p<\infty$. 
The norms of these operators admit the estimates \begin{equation}\label{b1} \|{\mathcal S}\|_{{\mathfrak L}(L_p(\mathbb{R}^n_+))}\leq c(n)\,p\,p',\qquad \|\,[{\mathcal S},b]\,\|_{{\mathfrak L}(L_p(\mathbb{R}^n_+))} \leq c(n)\,p\,p'\,[b]_{{\rm BMO}({\mathbb{R}}^n_+)}, \end{equation} \noindent where $c(n)$ depends only on $n$ and the quantity in (\ref{kk-est}). The first estimate in (\ref{b1}) goes back to the work of A.\,Calder\'on and A.\,Zygmund (cf., e.g., {\bf\cite{CaZy}}, {\bf\cite{CaZy2}}; see also the comment on p.\,22 of {\bf\cite{St}} regarding the dependence on the parameter $p$ of the constants involved). The second estimate in (\ref{b1}) was originally proved for convolution type operators by R.\,Coifman, R.\,Rochberg and G.\,Weiss in {\bf\cite{CRW}}, and a standard expansion in spherical harmonics allows one to extend this result to the case of operators with variable kernels of the type considered above. We are interested in extending (\ref{b1}) to the weighted case, i.e., when the measure $dx$ on ${\mathbb{R}}^n_+$ is replaced by $x_n^{ap}\,dx$, where $1<p<\infty$ and $a\in(-1/p,1-1/p)$. Parenthetically, we wish to point out that $a\in(-1/p,1-1/p)$ corresponds precisely to the range of $a$'s for which $w(x):=x_n^{ap}$ is a weight in Muckenhoupt's $A_p$ class, and that while in principle this observation can help with the goal just stated, we prefer to give a direct, self-contained proof. \begin{proposition}\label{Prop4} Retain the above conventions and hypotheses. Then the operator ${\mathcal S}$ and its commutator $[{\mathcal S},b]$ with a function $b\in{\rm BMO}(\mathbb{R}^n_+)$ are bounded when acting from $L_p({\mathbb{R}}^n_+,\,x_n^{ap}\,dx)$ into itself for each $p\in(1,\infty)$ and $a\in (-1/p, 1-1/p)$. 
The norms of these operators satisfy \begin{eqnarray}\label{b2} &&\|{\mathcal S}\|_{{\mathfrak L}(L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx))} \leq c(n)\Bigl(p\,p'+\frac{1}{s(1-s)}\Bigr), \\[6pt] &&\|\,[{\mathcal S},b]\,\|_{{\mathfrak L}(L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx))} \leq c(n)\Bigl(p\,p'+\frac{1}{s(1-s)}\Bigr)\,[b]_{{\rm BMO}({\mathbb{R}}^n_+)}. \label{b2'} \end{eqnarray} \end{proposition} \noindent{\bf Proof.} Let $\chi_j$ be the characteristic function of the layer $2^{j/2}<x_n\leq 2^{1+j/2}$, $j=0,\pm 1,\ldots$, so that $\sum_{j\in{\mathbb{Z}}}\chi_j=2$. We then write ${\mathcal S}$ as the sum ${\mathcal S}_1+{\mathcal S}_2$, where \begin{equation}\label{CC-17} {\mathcal S}_1:=\frac{1}{4}\sum_{|j-k|\leq 3}\chi _j{\mathcal S}\chi_k. \end{equation} \noindent The following chain of inequalities is evident \begin{eqnarray}\label{CC-18} \|{\mathcal S}_1\, f\|_{L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)} & \leq & \Bigl(\sum_j\int_{{\mathbb{R}}^n_+}\chi_j(x)\, \Bigl|{\mathcal S}\Bigl(\sum_{|k-j|\leq 3}\chi_k f\Bigr)(x)\Bigr|^p\, x_n^{ap}\,dx\Bigr)^{1/p} \nonumber\\[6pt] & \leq & c(n)\Bigl(\sum_j\int_{\mathbb{R}^n_+} \Bigl|{\mathcal S}\Bigl(\sum_{|k-j|\leq 3} \chi_k 2^{ja/2} f\Bigr)(x)\Bigr|^p\,dx\Bigr)^{1/p}. \end{eqnarray} \noindent In concert with the first estimate in (\ref{b1}), this entails \begin{eqnarray}\label{CC-19} \|{\mathcal S}_1\,f\|_{L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)} & \leq & c(n)\,p\, p'\Bigl(\sum_j \int_{{\mathbb{R}}^n_+}\Bigl(\sum_{|k-j|\leq 3} \chi_k 2^{ja/2}|f|\Bigr)^p\,dx\Bigr)^{1/p} \nonumber\\[6pt] & \leq & c(n)\,p\,p'\Bigl(\int_{{\mathbb{R}}^n_+}|f(x)|^p\,x_n^{ap}\,dx\Bigr)^{1/p}, \end{eqnarray} \noindent which is further equivalent to \begin{equation}\label{b3} \|{\mathcal S}_1\|_{{\mathfrak L}(L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx))} \leq c(n)\, p\, p'. 
\end{equation} \noindent Applying the same argument to $[{\mathcal S},b]$ and referring to (\ref{b1}), we arrive at \begin{equation}\label{b4} \|\,[{\mathcal S}_1,b]\,\|_{{\mathfrak L}(L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx))} \leq c(n)\,p\,p'\,[b]_{{\rm BMO}({\mathbb{R}}^n_+)}. \end{equation} It remains to obtain the analogues of (\ref{b3}) and (\ref{b4}) with ${\mathcal S}_2$ in place of ${\mathcal S}_1$. One can check directly that the modulus of the kernel of ${\mathcal S}_2$ does not exceed $c(n)\,|x-\bar{y}|^{-n}$ and that the modulus of the kernel of $[{\mathcal S}_2,b]$ is majorized by $c(n)\,|b(x)-b(y)|\,|x-\bar{y}|^{-n}$. Then the desired conclusions follow from Corollary~\ref{Cor2} and Proposition~\ref{tp3-bis}. $\Box$ \vskip 0.08in \subsection{The Gagliardo extension operator} Here we shall revisit a certain operator $T$, first introduced by Gagliardo in {\bf\cite{Ga}}, which extends functions defined on $\mathbb{R}^{n-1}$ to functions defined on $\mathbb{R}^n_+$. Fix a smooth, radial, decreasing, even, non-negative function $\zeta$ in $\mathbb{R}^{n-1}$ such that $\zeta(t)=0$ for $|t|\geq 1$ and \begin{equation}\label{zeta-int} \int\limits_{\mathbb{R}^{n-1}}\zeta(t)\,dt=1. \end{equation} \noindent (A standard choice is $\zeta(t):=c\,{\rm exp}\,(-1/(1-|t|^2)_+)$ for a suitable $c$.) Following {\bf\cite{Ga}} we then define \begin{equation}\label{10.1.20} (T\varphi)(x',x_n)=\int\limits_{\mathbb{R}^{n-1}}\zeta(t)\varphi(x'+x_nt)\,dt, \qquad(x',x_n)\in{\mathbb{R}}^n_+, \end{equation} \noindent acting on functions $\varphi$ from $L_1(\mathbb{R}^{n-1},{\rm loc})$. 
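To make (\ref{10.1.20}) concrete, here is a minimal numerical sketch in one lateral dimension (i.e., $n=2$); the midpoint discretization, the number of quadrature nodes, and the discrete normalization of $\zeta$ are choices made here for illustration only, not part of Gagliardo's construction. Since $\zeta$ is even and has unit integral, its first moment vanishes, so the discrete $T$ reproduces affine data exactly at every height $x_n$:

```python
import math

def zeta(t):
    # Bump profile exp(-1/(1-t^2)) on (-1,1); the normalization of
    # (zeta-int) is imposed discretely inside extend() below.
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

def extend(phi, xp, xn, nodes=400):
    # Midpoint-rule discretization of
    #   (T phi)(x', x_n) = \int zeta(t) phi(x' + x_n t) dt
    # with one lateral variable (n = 2).  The symmetric grid keeps the
    # discrete zeta even, hence its discrete first moment is zero.
    h = 2.0 / nodes
    ts = [-1.0 + (i + 0.5) * h for i in range(nodes)]
    ws = [zeta(t) for t in ts]
    norm = sum(ws) * h  # discrete version of \int zeta = 1
    return sum(w * phi(xp + xn * t) for w, t in zip(ws, ts)) * h / norm

# For affine phi the first-moment cancellation gives T phi(x', x_n) = phi(x')
# at every height, consistent with the trace identity (T phi)|_{x_n=0} = phi:
phi = lambda x: 2.0 * x + 3.0
for xn in (0.0, 0.3, 1.7):
    assert abs(extend(phi, 1.0, xn) - phi(1.0)) < 1e-9
```

For non-affine $\varphi$ the operator merely averages at scale $x_n$, and the deviation $|(T\varphi)(x',x_n)-\varphi(x')|$ grows at most linearly in $x_n$, in accordance with Lemma~\ref{lem7}(ii).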
To get started, we note that \begin{eqnarray}\label{10.1.29} \nabla_{x'}(T\varphi)(x',x_n) & = & \int\limits_{\mathbb{R}^{n-1}}\zeta(t)\nabla\varphi(x'+tx_n)\,dt, \\[6pt] {\partial\over\partial x_n}(T\varphi)(x',x_n) & = & \int\limits_{\mathbb{R}^{n-1}}\zeta(t)\,t\,\nabla\varphi(x'+t x_n)\,dt, \label{10.1.30} \end{eqnarray} \noindent and, hence, we have the estimate \begin{equation}\label{10.1.21} \|\nabla_x\,(T\varphi)\|_{L_\infty(\mathbb{R}^n_+)} \leq c\,\|\nabla_{x'}\,\varphi\|_{L_\infty(\mathbb{R}^{n-1})}. \end{equation} \noindent Refinements of (\ref{10.1.21}) are contained in Lemmas~\ref{lem7} and \ref{lem8} below. \begin{lemma}\label{lem7} (i) For all $x\in\mathbb{R}^n_+$ and for all multi-indices $\alpha$ with $|\alpha|>1$, \begin{equation}\label{1.2} \Bigl|D^\alpha_{x}(T\varphi)(x)\Bigr| \leq c\,x_n ^{1-|\alpha|}[\nabla\varphi]_{\rm BMO(\mathbb{R}^{n-1})}. \end{equation} (ii) For all $x=(x',x_n)\in\mathbb{R}^n_+$, \begin{equation}\label{Tfi} \Bigl|(T\varphi)(x)-\varphi(x')\Bigr| \leq c\,x_n[\nabla\varphi]_{\rm BMO(\mathbb{R}^{n-1})}. \end{equation} \end{lemma} \noindent{\bf Proof.} Rewriting (\ref{10.1.30}) as \begin{equation}\label{1.3} {\partial\over\partial x_n}(T\varphi)(x',x_n) =x_n^{1-n}\int\limits_{\mathbb{R}^{n-1}}\zeta\Bigl(\frac{\xi-x'}{x_n}\Bigr) \frac{\xi-x'}{x_n}\Bigl(\nabla\varphi(\xi)-{\int{\mkern-19mu}-}_{\!\!\!|z-x'|<x_n} \nabla\varphi(z)dz\Bigr)d\xi \end{equation} \noindent we obtain \begin{equation}\label{1.12} \Bigl|D^\gamma_{x}{\partial\over\partial x_n}(T\varphi)(x)\Bigr| \leq c\,x_n^{-|\gamma|}[\nabla\varphi]_{\rm BMO(\mathbb{R}^{n-1})} \end{equation} \noindent for every multi-index $\gamma$. 
Furthermore, for $i=1,\ldots,n-1$, by (\ref{10.1.29}) \begin{equation}\label{TT1} \frac{\partial}{\partial x_i}\nabla_{x'}(T\varphi)(x) =-x_n^{-n}\int\limits_{\mathbb{R}^{n-1}}\partial_i\zeta \Bigl(\frac{\xi-x'}{x_n}\Bigr)\Bigl(\nabla\varphi(\xi) -{\int{\mkern-19mu}-}_{\!\!\!|z-x'|<x_n}\nabla\varphi(z)dz\Bigr)d\xi, \end{equation} \noindent where $\partial_i$ denotes differentiation with respect to the $i$-th component of the argument. Hence, once again, \begin{equation}\label{TT2} \Bigl|D^\gamma_x\frac{\partial}{\partial x_i}\nabla_{x'}(T\varphi)(x)\Bigr| \leq c\,x_n^{-|\gamma|-1}[\nabla\varphi]_{\rm BMO(\mathbb{R}^{n-1})}, \end{equation} \noindent and the estimate claimed in (i) follows. Finally, (ii) follows from the case $\gamma=0$ of (\ref{1.12}), upon integrating in the $x_n$-variable and using the fact that $(T\varphi)|_{{\mathbb{R}}^{n-1}}=\varphi$. $\Box$ \vskip 0.08in \noindent{\bf Remark.} In concert with Theorem~2 on p.\,62-63 in {\bf\cite{St}}, formula (\ref{10.1.29}) yields the pointwise estimate \begin{equation}\label{HL-Max} |\nabla\,(T\varphi)(x)| \leq c\,{\mathcal M}(\nabla\varphi)(x'),\qquad x=(x',x_n)\in{\mathbb{R}}^{n}_+, \end{equation} \noindent where ${\mathcal M}$ is the classical Hardy-Littlewood maximal function (cf., e.g., Chapter~I in {\bf\cite{St}}). As for higher order derivatives, an inspection of the above proof reveals that \begin{equation}\label{sharp} \Bigl|D_x^\alpha(T\varphi)(x)\Bigr| \leq c\,x_n^{1-|\alpha|}(\nabla\varphi)^\#(x'),\qquad (x',x_n)\in{\mathbb{R}}^{n}_+, \end{equation} \noindent holds for each multi-index $\alpha$ with $|\alpha|>1$, where $(\cdot)^\#$ is the Fefferman-Stein sharp maximal function (cf. {\bf\cite{FS}}). \begin{lemma}\label{lem8} If $\nabla_{x'}\varphi\in{\rm BMO}(\mathbb{R}^{n-1})$ then $\nabla(T\varphi)\in{\rm BMO}(\mathbb{R}^n_+)$ and \begin{equation}\label{1.8} [\nabla(T\varphi)]_{{\rm BMO}(\mathbb{R}^n_+)} \leq c\,[\nabla_{x'}\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}. 
\end{equation} \end{lemma} \noindent{\bf Proof.} Since $(T\varphi)(x',x_n)$ is even with respect to $x_n$, it suffices to estimate $[\nabla_x(T\varphi)]_{{\rm BMO}(\mathbb{R}^n)}$. Let $Q_r$ denote a cube with side-length $r$ centered at the point $\eta =(\eta',\eta_n)\in{\mathbb{R}}^{n-1}\times{\mathbb{R}}$. Also let $Q'_r$ be the projection of $Q_r$ on $\mathbb{R}^{n-1}$. Clearly, \begin{equation}\label{xxx} \nabla_{x'}(T\varphi)(x',x_n)-\nabla _{x'}\varphi(x') =x_n^{1-n}\int\limits_{\mathbb{R}^{n-1}}\zeta\Bigl(\frac{\xi -x'}{x_n}\Bigr) (\nabla\varphi(\xi)-\nabla\varphi(x'))\,d\xi. \end{equation} Suppose that $|\eta_n|<2r$ and write \begin{eqnarray}\label{1.10} \int_{Q_r}\Bigl|\nabla_{x'}(T\varphi)(x',x_n)-\nabla_{x'}\varphi(x')\Bigr|\,dx & \leq & c\,r^{2-n}\int_{Q'_{4r}} \int_{Q'_{4r}}|\nabla\varphi(\xi)-\nabla\varphi(z)|\,dz\,d\xi \nonumber\\[6pt] & \leq & c\,r^n[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}. \end{eqnarray} Therefore, for $|\eta_n|<2r$, \begin{eqnarray}\label{1.18} {\int{\mkern-19mu}-}_{\!\!\!Q_r}{\int{\mkern-19mu}-}_{\!\!\!Q_r}|\nabla_{x'}T\varphi(x)-\nabla_{y'} T\varphi(y)|\,dxdy & \leq & 2{\int{\mkern-19mu}-}_{\!\!\!Q_r}|\nabla_{x'}T\varphi(x)-\nabla\varphi(x')|\,dx \nonumber\\[6pt] &&+{\int{\mkern-19mu}-}_{\!\!\!Q'_r}{\int{\mkern-19mu}-}_{\!\!\!Q'_r} |\nabla\varphi(x')-\nabla\varphi(y')|\,dx'dy' \nonumber\\[6pt] & \leq & c\,[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}. \end{eqnarray} Next, consider the case when $|\eta_n|\geq 2r$ and let $x$ and $y$ be arbitrary points in $Q_r(\eta)$. 
Then, using the generic abbreviation $\bar{f}_E:={\displaystyle{{\int{\mkern-19mu}-}_{\!\!\!E}f}}$, we may write \begin{eqnarray}\label{arTT} |\nabla_{x'}T\varphi(x)-\nabla_{y'}T\varphi(y)| & \leq & \int\limits_{\mathbb{R}^{n-1}}\Bigl|x_n^{1-n}\zeta\Bigl(\frac{\xi-x'}{x_n} \Bigr)-y_n^{1-n}\zeta\Bigl(\frac{\xi-y'}{y_n}\Bigr) \Bigr|\Bigl|\nabla\varphi(\xi)- {\overline{\nabla\varphi}}_{Q'_{2|\eta_n|}} \Bigr|\,d\xi \nonumber\\[6pt] & \leq & \frac{c\,r}{|\eta_n|^n}\int_{Q'_{2|\eta_n|}} \Bigl|\nabla\varphi(\xi) -{\overline{\nabla\varphi}}_{Q'_{2|\eta_n|}}\Bigr|\,d\xi \nonumber\\[6pt] & \leq & c\,[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}. \end{eqnarray} \noindent Consequently, for $|\eta_n|\geq 2r$, \begin{equation}\label{QrT3} {\int{\mkern-19mu}-}_{\!\!\!Q_r}{\int{\mkern-19mu}-}_{\!\!\!Q_r} |\nabla_{x'}T\varphi(x)-\nabla_{y'}T\varphi(y)|\,dxdy \leq c\,[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})} \end{equation} \noindent which, together with (\ref{1.18}), gives \begin{equation}\label{QrT4} [\nabla_{x'}T\varphi]_{{\rm BMO}(\mathbb{R}^{n})} \leq c[\nabla\varphi]_{\rm BMO(\mathbb{R}^{n-1})}. \end{equation} \noindent This inequality and (\ref{1.12}), where $|\gamma|=0$, imply (\ref{1.8}). $\Box$ \vskip 0.08in \section{The Dirichlet problem in $\mathbb{R}^n_+$ for variable coefficient systems} \setcounter{equation}{0} \subsection{Preliminaries} For \begin{equation}\label{indices} 1<p<\infty,\quad -\frac{1}{p}<a<1-\frac{1}{p}\quad \mbox{and}\quad m\in{\mathbb{N}}, \end{equation} \noindent we let $V^{m,a}_p(\mathbb{R}^n_+)$ denote the weighted Sobolev space associated with the norm \begin{equation}\label{defVVV} \|u\|_{V_p^{m,a}(\mathbb{R}^n_+)}:=\Bigl(\sum_{0\leq|\beta|\leq m} \int_{\mathbb{R}^n_+}|x_n^{|\beta|-m} D^\beta u(x)|^p\,x_n^{pa}\,dx \Bigr)^{1/p}. \end{equation} \noindent It is easily proved that $C_0^\infty(\mathbb{R}^n_+)$ is dense in $V^{m,a}_p(\mathbb{R}^n_+)$. 
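The factor $s^{-1}$ in the estimate that follows can be traced back to the sharp constant in the classical one-dimensional Hardy inequality, which we record here for the reader's convenience: if $1<p<\infty$, $a<1-1/p$ and $v\in C_0^\infty(0,\infty)$, then \[ \Bigl(\int_0^\infty t^{p(a-1)}|v(t)|^p\,dt\Bigr)^{1/p} \leq\frac{1}{1-a-1/p}\, \Bigl(\int_0^\infty t^{pa}|v'(t)|^p\,dt\Bigr)^{1/p}. \] Iterating this in the $x_n$-variable bounds the lower-order terms in (\ref{defVVV}) by the top-order ones.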
Moreover, by the one-dimensional Hardy inequality (see, for instance, {\bf\cite{Maz1}}, formula (1.3/1)), we have \begin{equation}\label{4.60} \|u\|_{V_p^{m,a}(\mathbb{R}^n_+)}\leq c\,s^{-1} \Bigl(\sum_{|\beta|=m}\int_{\mathbb{R}^n_+} |D^\beta u(x)|^p\,x_n^{pa}\,dx\Bigr)^{1/p} \,\,\,\,\mbox{ for }\,\,u\in C_0^\infty(\mathbb{R}^n_+), \end{equation} \noindent with $s=1-a-1/p$. The dual of $V^{m,-a}_{p'}(\mathbb{R}^n_+)$ will be denoted by $V^{-m,a}_p(\mathbb{R}^n_+)$, where $1/p+1/p'=1$. Consider now the operator \begin{equation}\label{LxDu} {L}(x,D_x)u:=\sum_{0\leq|\alpha|,|\beta|\leq m} D^\alpha_x({A}_{\alpha\beta}(x)\,x_n^{|\alpha|+|\beta|-2m} D_x^\beta\,u) \end{equation} \noindent where ${A}_{\alpha\beta}$ are ${\mathbb{C}}^{l\times l}$-valued functions in $L_\infty(\mathbb{R}^n_+)$. We shall use the notation $\mathaccent"0017 {L}(x,D_x)$ for the principal part of ${L}(x,D_x)$, i.e. \begin{equation}\label{Lcirc} \mathaccent"0017 {L}(x,D_x)u:=\sum_{|\alpha|=|\beta|=m} D^\alpha_x({A}_{\alpha\beta}(x)\,D_x^\beta\,u). \end{equation} \subsection{Solvability and regularity result} \begin{lemma}\label{lem5} Assume that there exists $\kappa=const>0$ such that the coercivity condition \begin{equation}\label{B5} \Re\int_{\mathbb{R}^n_+}\sum_{|\alpha|=|\beta|=m} \langle A_{\alpha\beta}(x)\,D^\beta u(x),\,D^\alpha u(x) \rangle_{\mathbb{C}^l}\,dx \geq \kappa\sum_{|\gamma|=m}\|D^\gamma\,u\|^2_{L_2(\mathbb{R}^n_+)}, \end{equation} \noindent holds for all $u\in C^\infty_0(\mathbb{R}^n_+)$, and that \begin{equation}\label{B6} \sum_{|\alpha|=|\beta|= m}\|A_{\alpha\beta}\|_{L_\infty(\mathbb{R}^n_+)} \leq \kappa^{-1}. 
\end{equation} {\rm (i)} Let $p\in (1,\infty)$ and $-1/p<a< 1-1/p$ and suppose that \begin{equation}\label{E7} \frac{1}{s(1-s)}\sum_{{|\alpha|+|\beta|<2m}\atop{0\leq |\alpha|,|\beta|\leq m}} \|A_{\alpha\beta}\|_{{L_\infty}(\mathbb{R}^n_+)} +\sum_{|\alpha|=|\beta|=m}[A_{\alpha\beta}]_{{\rm BMO}(\mathbb{R}^n_+)} \leq \delta, \end{equation} \noindent where $\delta$ satisfies \begin{equation}\label{E8} \Bigl(pp'+\frac{1}{s(1-s)}\Bigr)\,{\delta}<c(n,m,\kappa) \end{equation} \noindent with a sufficiently small constant $c(n,m,\kappa)$ and $s=1-a-1/p$. Then the operator \begin{equation}\label{Liso} L=L(x,D_x):V_p^{m,a}(\mathbb{R}^n_+)\longrightarrow V_p^{-m,a}(\mathbb{R}^n_+) \end{equation} \noindent is an isomorphism. {\rm (ii)} Let $p_i\in (1,\infty)$ and $-1/p_i<a_i<1-1/p_i$, where $i=1,2$. Suppose that (\ref{E8}) holds with $p_i$ and $s_i=1-a_i-1/p_i$ in place of $p$ and $s$. If $u\in V_{p_1}^{m,a_1}(\mathbb{R}^n_+)$ is such that ${L}u\in V_{p_1}^{-m,a_1}(\mathbb{R}^n_+) \cap V_{p_2}^{-m,a_2}(\mathbb{R}^n_+)$, then $u\in V_{p_2}^{m,a_2}(\mathbb{R}^n_+)$. \end{lemma} \noindent{\bf Proof.} The fact that the operator (\ref{Liso}) is continuous is obvious. Also, the existence of a bounded inverse ${L}^{-1}$ for $p=2$ and $a=0$ follows from (\ref{B5}) and (\ref{E7})-(\ref{E8}) with $p=2$, $a=0$, which allow us to implement the Lax-Milgram lemma. We shall use the notation $\mathaccent"0017 {L}_y$ for the operator $\mathaccent"0017 {L}(y,D_x)$, corresponding to (\ref{Lcirc}) in which the coefficients have been frozen at $y\in\mathbb{R}^n_+$, and the notation $G_y$ for the solution operator for the Dirichlet problem for $\mathaccent"0017 {L}_y$ in ${\mathbb{R}}^n_+$ with homogeneous boundary conditions. 
Next, given $u\in V_p^{m,a}(\mathbb{R}^n_+)$, set $f:=Lu\in V_p^{-m,a}(\mathbb{R}^n_+)$ so that \begin{equation}\label{E10} \left\{ \begin{array}{l} {L}(x,D)u=f\qquad\qquad\mbox{in}\,\,\,\mathbb{R}^n_+,\\[6pt] \displaystyle{\frac{\partial^j\,u}{\partial x_n^j}(x',0)=0} \qquad\qquad {\rm on}\,\,\mathbb{R}^{n-1},\,\,0\leq j\leq m-1. \end{array} \right. \end{equation} \noindent Applying the trick used for the first time in {\bf\cite{CFL1}}, we may write \begin{equation}\label{E12} u(x)=(G_y f)(x)-(G_{y}(\mathaccent"0017 {L}-\mathaccent"0017 {L}_y)u)(x)-(G_{y}({L}-\mathaccent"0017 {L})u)(x), \qquad x\in{\mathbb{R}}^n_+. \end{equation} \noindent We shall use (\ref{E12}) to express $u$ in terms of $f$ (cf. (\ref{IntRepFor})-(\ref{SSS}) below) via integral operators whose norms we can control. First, we claim that whenever $|\gamma|=m$, the norm of the operator \begin{equation}\label{ItalianTrick} V_p^{m,a}(\mathbb{R}^n_+)\ni u \mapsto D_x^\gamma(G_y(\mathaccent"0017 {L}-\mathaccent"0017 {L}_y)u)(x)\Bigl|_{x=y}\, \in L_p(\mathbb{R}^n_+,y_n^{ap}\,dy) \end{equation} \noindent does not exceed \begin{equation}\label{smallCT} C\,\Bigl(pp'+\frac{1}{s(1-s)}\Bigr)\sum_{|\alpha|=|\beta|=m} [A_{\alpha\beta}]_{{\rm BMO}(\mathbb{R}^n_+)}. \end{equation} \noindent Under the hypotheses of the lemma, the expression (\ref{smallCT}) is small if $\delta$ is small. In what follows, we denote by $G_y(x,z)$ the integral kernel of $G_y$ and integrate by parts in order to move derivatives of the form $D_z^\alpha$ with $|\alpha|=m$ from $(\mathaccent"0017 {L}-\mathaccent"0017 {L}_y)u$ onto $G_y(x,z)$ (the absence of boundary terms is due to the fact that $G_y(x,\cdot)$ satisfies homogeneous Dirichlet boundary conditions). That (\ref{smallCT}) bounds the norm of (\ref{ItalianTrick}) can now be seen by combining Theorem~\ref{th1} with (\ref{CC-71}) and Proposition~\ref{Prop4}.
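Let us record, for the reader's convenience, the routine verification of (\ref{E12}): splitting $L$ into its frozen-coefficient principal part and the remaining pieces, and using that $G_y$ inverts $\mathaccent"0017 {L}_y$ on functions satisfying the homogeneous Dirichlet conditions in (\ref{E10}), we have

```latex
% Verification of (E12): G_y inverts the frozen-coefficient operator on
% functions with zero Dirichlet data, and L splits as
% L = \mathaccent"0017{L}_y + (\mathaccent"0017{L}-\mathaccent"0017{L}_y)
%     + (L-\mathaccent"0017{L}).
\begin{aligned}
G_y f = G_y L u
 &= G_y \mathaccent"0017{L}_y u
   + G_y(\mathaccent"0017{L}-\mathaccent"0017{L}_y)u
   + G_y(L-\mathaccent"0017{L})u \\
 &= u + G_y(\mathaccent"0017{L}-\mathaccent"0017{L}_y)u
      + G_y(L-\mathaccent"0017{L})u,
\end{aligned}
```

and (\ref{E12}) follows upon solving for $u$.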
Let $\gamma$ and $\alpha$ be multi-indices with $|\gamma|=m$, $|\alpha|\leq m$ and consider the assignment \begin{equation}\label{oppsi} \begin{array}{l} \displaystyle C_0^\infty({\mathbb{R}}^n_+)\ni\Psi\mapsto\Bigl(D^\gamma_x\int_{{\mathbb{R}}^n_+} G_y(x,z) D^\alpha_z\frac{\Psi(z)}{z_n^{m-|\alpha|}}\,dz\Bigr)\Bigl|_{x=y}. \end{array} \end{equation} \noindent After integrating by parts, the action of this operator can be rewritten in the form \begin{equation}\label{DgammaInt} \Bigl(D^\gamma_x\int_{{\mathbb{R}}^n_+}\Bigl[ \Bigl(\frac{-1}{i}\frac{\partial}{\partial z_n}\Bigr)^{m-|\alpha|} (-D_z)^\alpha G_y(x,z)\Bigr]\Gamma_\alpha(z)\,dz\Bigr)\Bigl|_{x=y}, \end{equation} \noindent where \begin{equation}\label{Gamma-def} \Gamma_\alpha(z):= \left\{ \begin{array}{l} \Psi(z),\qquad\mbox{if }\,\,\,|\alpha|=m, \\[10pt] {\displaystyle{\frac{(-1)^{m-|\alpha|}}{(m-|\alpha|-1)!} \int_{z_n}^\infty (t-z_n)^{m-|\alpha|-1}\frac{\Psi(z',t)}{t^{m-|\alpha|}}\,dt, \qquad\mbox{if }\,\,\,|\alpha|<m}}. \end{array} \right. \end{equation} \noindent Using Theorem~\ref{th1} along with (\ref{CC-71}) and Proposition~\ref{Prop4}, we may therefore conclude that \begin{eqnarray}\label{DZGamma} && \Bigl\|\Bigl(D^\gamma_x\int_{{\mathbb{R}}^n_+} \Bigl[\Bigl(\frac{-1}{i}\frac{\partial}{\partial z_n}\Bigr)^{m-|\alpha|} (-D_z)^\alpha G_y(x,z)\Bigr]\Gamma_\alpha(z)\,dz\Bigr)\Bigl|_{x=y} \Bigr\|_{L_p(\mathbb{R}^n_+,\,y_n^{ap}\,dy)} \nonumber\\[6pt] &&\qquad\qquad\qquad\qquad\qquad\qquad\qquad \leq C\,\Bigl(pp'+\frac{1}{s(1-s)}\Bigr) \|\Gamma_\alpha\|_{L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)}. 
\end{eqnarray} \noindent On the other hand, Hardy's inequality gives \begin{equation}\label{Cs-1} \|\Gamma_\alpha\|_{L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)} \leq\frac{C}{1-s}\,\|\Psi\|_{L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)} \end{equation} \noindent and, hence, the operator (\ref{oppsi}) can be extended from $C_0^\infty(\mathbb{R}^n_+)$ to a bounded operator on $L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)$, with norm majorized by \begin{equation}\label{Bigpp} \frac{C}{1-s}\,\Bigl(pp'+\frac{1}{s(1-s)}\Bigr). \end{equation} Next, given an arbitrary $u\in V_p^{m,a}(\mathbb{R}^n_+)$, we let $\Psi=\Psi_{\alpha\beta}$ in (\ref{oppsi}) with \begin{equation}\label{Pizz} \Psi_{\alpha\beta}(z):=z_n^{|\beta|-m}\,A_{\alpha\beta}\,D^\beta u(z), \qquad |\alpha|+|\beta|<2m, \end{equation} \noindent and conclude that the norm of the operator \begin{equation}\label{altOp} V_p^{m,a}(\mathbb{R}^n_+)\ni u\mapsto D_x^\gamma\,(G_y\,({L}-\mathaccent"0017 {L})u)(x)\Bigl|_{x=y} \in L_p(\mathbb{R}^n_+,y_n^{ap}\,dy) \end{equation} \noindent does not exceed \begin{equation}\label{notExceed} \frac{C}{1-s}\,\Bigl(pp'+\frac{1}{s(1-s)}\Bigr)\sum_{{|\alpha|+|\beta|<2m} \atop{0\leq|\alpha|,|\beta|\leq m}} \|A_{\alpha\beta}\|_{{L_\infty}(\mathbb{R}^n_+)}. \end{equation} It is well-known (cf. (1.1.10/6) on p.\,22 of {\bf\cite{Maz1}}) that any $u\in V^{m,a}_p({\mathbb{R}}^n_+)$ can be represented in the form \begin{equation}\label{IntUU} u=K\{D^\sigma u\}_{|\sigma|=m} \end{equation} \noindent where $K$ is a linear operator with the property that \begin{equation}\label{K-map} D^\alpha K:L_p(\mathbb{R}^n_+,x_n^{ap}\,dx)\longrightarrow L_p(\mathbb{R}^n_+,x_n^{ap}\,dx) \end{equation} \noindent is bounded for every multi-index $\alpha$ with $|\alpha|=m$. In particular, by (\ref{4.60}), \begin{equation}\label{KDu} \|K\{D^\sigma u\}_{|\sigma|=m}\|_{V_p^{m,a}(\mathbb{R}^n_+)}\leq C\,s^{-1} \|\{D^\sigma u\}_{|\sigma|=m}\|_{L_p(\mathbb{R}^n_+,x_n^{ap}\,dx)}.
\end{equation} At this stage, we transform the identity (\ref{E12}) in the following fashion. First, we express the two $u$'s occurring inside the Green operator $G_y$ in the right-hand side of (\ref{E12}) as in (\ref{IntUU}). Second, for each multi-index $\gamma$ with $|\gamma|=m$, we apply $D^\gamma$ to both sides of (\ref{E12}) and, finally, set $x=y$. The resulting identity reads \begin{equation}\label{IntRepFor} \{D^\gamma u\}_{|\gamma|=m}+S\{D^\sigma u\}_{|\sigma|=m}=Q\,f \end{equation} \noindent where $Q$ is a bounded operator from $V_p^{-m,a}(\mathbb{R}^n_+)$ into $L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)$ and $S$ is a linear operator mapping $L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)$ into itself. Furthermore, on account of (\ref{ItalianTrick})-(\ref{smallCT}), (\ref{altOp})-(\ref{notExceed}) and (\ref{KDu}), we can bound $\|S\|_{{\mathfrak L}(L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx))}$ by \begin{equation}\label{SSS} C\,\Bigl(pp'+\frac{1}{s(1-s)}\Bigr) \Bigl(\sum_{|\alpha|=|\beta|=m} [A_{\alpha\beta}]_{{\rm BMO}(\mathbb{R}^n_+)} +\frac{1}{s(1-s)} \sum_{{|\alpha|+|\beta|<2m}\atop{0\leq |\alpha|,|\beta|\leq m}} \|A_{\alpha\beta}\|_{{L_\infty}(\mathbb{R}^n_+)}\Bigr). \end{equation} Owing to (\ref{E7})-(\ref{E8}) and with the integral representation formula (\ref{IntRepFor}) and the bound (\ref{SSS}) in hand, a Neumann series argument and standard functional analysis allow us to simultaneously settle the claims (i) and (ii) in the statement of the lemma. $\Box$ \vskip 0.08in \section{The Dirichlet problem in a special Lipschitz domain} \setcounter{equation}{0} In this section as well as in subsequent ones, we shall work with an unbounded domain of the form \begin{equation}\label{10.1.26} G=\{X=(X',X_n)\in{\mathbb{R}}^n:\,X'\in\mathbb{R}^{n-1},\,\,X_n>\varphi(X')\}, \end{equation} \noindent where $\varphi:{\mathbb{R}}^{n-1}\to{\mathbb{R}}$ is a Lipschitz function.
\subsection{The space ${\rm BMO}(G)$} The space of functions of bounded mean oscillations in $G$ can be introduced in a similar fashion to the case $G={\mathbb{R}}^n_+$. Specifically, a locally integrable function on $G$ belongs to the space ${\rm BMO}(G)$ if \begin{equation}\label{f-BMO} [f]_{{\rm BMO}(G)}:=\sup\limits_{\{B\}}{\int{\mkern-19mu}-}_{\!\!\!B\cap G} \Bigl|f(X)-{\int{\mkern-19mu}-}_{\!\!\!B\cap G}f(Y)\,dY\Bigr|\,dX<\infty, \end{equation} \noindent where the supremum is taken over all balls $B$ centered at points in ${\bar G}$. Much as before, \begin{equation}\label{10.26} [f]_{{\rm BMO}(G)}\sim\sup\limits_{\{B\}}{\int{\mkern-19mu}-}_{\!\!\!B\cap G} {\int{\mkern-19mu}-}_{\!\!\!B\cap G}\Bigl|f(X)-f(Y)\Bigr|\,dXdY. \end{equation} \noindent This implies the equivalence relation \begin{equation}\label{1.32} [f]_{{\rm BMO}(G)}\sim [f\circ\lambda]_{{\rm BMO}(\mathbb{R}^n_+)} \end{equation} \noindent for each bi-Lipschitz diffeomorphism $\lambda$ of $\mathbb{R}^n_+$ onto $G$. As direct consequences of definitions, we also have \begin{eqnarray}\label{1.33} [\prod_{1\leq j\leq N} f_j]_{{\rm BMO}(G)} & \leq & c\,\|f\|^{N-1}_{L_\infty(G)} [f]_{{\rm BMO}(G)},\quad\mbox{where}\,\,f=(f_1,\ldots, f_N), \\[6pt] [f^{-1}]_{{\rm BMO}(G)} & \leq & c\,\|f^{-1}\|^2_{L_\infty(G)}[f]_{{\rm BMO}(G)}. \label{1.34} \end{eqnarray} \subsection{A bi-Lipschitz map $\lambda:\mathbb{R}^n_+\to G$ and its inverse} Let $\varphi:{\mathbb{R}}^{n-1}\to{\mathbb{R}}$ be the Lipschitz function whose graph is $\partial G$ and set $M:=\|\nabla\varphi\|_{L_\infty({\mathbb{R}}^{n-1})}$. 
Next, let $T$ be the extension operator defined as in (\ref{10.1.20}) and, for a fixed, sufficiently large constant $C>0$, consider the Lipschitz mapping \begin{equation}\label{lambda} \lambda:\,\mathbb{R}^n_+\ni(x',x_n)\mapsto (X',X_n)\in\,G \end{equation} \noindent defined by the equalities \begin{equation}\label{10.1.27} X':=x',\qquad X_n:=C\,M\,x_n+(T\varphi)(x',x_n) \end{equation} \noindent (see {\bf\cite{MS}}, \S{6.5.1} and an earlier, less accessible, reference {\bf\cite{MS1}}). The Jacobi matrix of $\lambda$ is given by \begin{equation}\label{matrix} \lambda'= \left( \begin{array}{cc} I & 0 \\ \nabla_{x'}(T\varphi) & CM+\partial (T\varphi)/\partial x_n \end{array} \right) \end{equation} \noindent where $I$ is the identity $(n-1)\times(n-1)$-matrix. Since $\vert\partial(T\varphi)/\partial x_n\vert\leq cM$ by (\ref{10.1.30}), it follows that ${\rm det}\,\lambda'>(C-c)\,M>0$. Next, thanks to (\ref{Tfi}) and (\ref{lambda})-(\ref{10.1.27}) we have \begin{equation}\label{eq-phi} X_n-\varphi(X')\sim x_n. \end{equation} \noindent Also, based on (\ref{1.8}) we may write \begin{equation}\label{1.35} [\lambda']_{{\rm BMO}(\mathbb{R}^n_+)} \leq c[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})} \end{equation} \noindent and further, by (\ref{10.1.21}) and (\ref{1.2}), \begin{equation}\label{1.4} \|D^\alpha \lambda'(x)\|_{\mathbb{R}^{n\times n}} \leq c(M)\,x_n^{-|\alpha|}[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}, \qquad\forall\,\alpha\,:\,|\alpha|\geq 1. \end{equation} Next, by closely mimicking the proof of Proposition~2.6 from {\bf\cite{MS2}} it is possible to show the existence of the inverse Lipschitz mapping $\varkappa:=\lambda^{-1}:G\to{\mathbb{R}}^n_+$. Owing to (\ref{1.32}), the inequality (\ref{1.35}) implies \begin{equation}\label{1.40} [\lambda'\circ\varkappa]_{{\rm BMO}(G)} \leq c[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}. 
\end{equation} \noindent Furthermore, (\ref{1.4}) is equivalent to \begin{equation}\label{1.41} \|(D^\alpha\lambda')(\varkappa(X))\|_{\mathbb{R}^{n\times n}} \leq c(M,\alpha)(X_n-\varphi(X'))^{-|\alpha|} [\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}, \end{equation} \noindent whenever $|\alpha|>0$. Since $\varkappa'=(\lambda'\circ\varkappa)^{-1}$, we obtain from (\ref{1.34}) and (\ref{1.40}) \begin{equation}\label{1.36} [\varkappa']_{{\rm BMO}(G)}\leq c[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}. \end{equation} \noindent On the other hand, using $\varkappa'=(\lambda'\circ\varkappa)^{-1}$ and (\ref{1.41}) one can prove by induction on the order of differentiation that \begin{equation}\label{1.5} \|D^\alpha\varkappa'(X)\|_{\mathbb{R}^{n\times n}} \leq c(M,\alpha)\,(X_n-\varphi(X'))^{-|\alpha|} [\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})} \end{equation} \noindent for all $X\in G$ if $|\alpha|>0$. \subsection{The space $V_p^{m,a}(G)$} Analogously to $V_p^{m,a}(\mathbb{R}^n_+)$, we define the weighted Sobolev space $V_p^{m,a}(G)$ naturally associated with the norm \begin{equation}\label{VVV-space} \|{\mathcal U}\|_{V_p^{m,a}(G)}:=\Bigl(\sum_{0\leq|\gamma|\leq m}\int_{G} |(X_n-\varphi(X'))^{|\gamma|-m}D^\gamma{\mathcal U}(X)|^p \,(X_n-\varphi(X'))^{pa}\,dX\Bigr)^{1/p}. \end{equation} \noindent Replacing the function $X_n-\varphi(X')$ either by $\rho(X):={\rm dist}\,(X,\partial G)$, or by the so-called regularized distance function $\rho_{\rm reg}(X)$ (defined as on pp.\,170-171 of {\bf\cite{St}}), yields equivalent norms on $V_p^{m,a}(G)$. Based on a standard localization argument involving a cut-off function vanishing near $\partial G$ (for example, take $\eta(\rho_{\rm reg}/\varepsilon)$ where $\eta\in C^\infty({\mathbb{R}})$ satisfies $\eta(t)=0$ for $|t|<1$ and $\eta(t)=1$ for $|t|>2$) one can show that $C^\infty_0(G)$ is dense in $V_p^{m,a}(G)$.
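Although we shall not dwell on the details, the density argument alluded to above runs along the following lines. With $\Lambda_\varepsilon:=\eta(\rho_{\rm reg}/\varepsilon)$, Leibniz's rule and the bounds $|D^\beta\Lambda_\varepsilon|\leq c\,\rho^{-|\beta|}$ (valid on ${\rm supp}\,\nabla\Lambda_\varepsilon$, where $\rho\sim\varepsilon$, the constants depending on the comparability of $\rho_{\rm reg}$ and $\rho$) yield

```latex
% On supp(1-\Lambda_\varepsilon) one has \rho \le c\,\varepsilon, whence
\rho^{\,|\gamma|-m+a}\,
\bigl|D^\gamma\bigl((1-\Lambda_\varepsilon)\,{\mathcal U}\bigr)\bigr|
\leq c\sum_{|\tau|\leq|\gamma|}
\rho^{\,|\tau|-m+a}\,|D^\tau{\mathcal U}|\,
\chi_{\{\rho\leq c\varepsilon\}},
```

so that $\|{\mathcal U}-\Lambda_\varepsilon\,{\mathcal U}\|_{V_p^{m,a}(G)}\to 0$ as $\varepsilon\to 0$ by dominated convergence; a subsequent truncation at infinity and a mollification then furnish the desired approximating sequence in $C_0^\infty(G)$.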
Next, we observe that for each ${\mathcal U}\in C^\infty_0(G)$, \begin{equation}\label{equivNr} C\,s\,\|{\mathcal U}\|_{V_p^{m,a}(G)}\leq \Bigl(\sum_{|\gamma|=m}\int_G |D^\gamma {\mathcal U}(X)|^p\,(X_n-\varphi(X'))^{pa}\,dX\Bigr)^{1/p} \leq\,\|{\mathcal U}\|_{V_p^{m,a}(G)} \end{equation} \noindent where, as before, $s=1-a-1/p$. Indeed, for each multi-index $\gamma$ with $0\leq|\gamma|\leq m$, the one-dimensional Hardy's inequality gives \begin{eqnarray}\label{Hardy} && \int_{G}|(X_n-\varphi(X'))^{|\gamma|-m}D^\gamma{\mathcal U}(X)|^p\, (X_n-\varphi(X'))^{pa}\,dX \nonumber\\[6pt] &&\qquad\qquad\qquad\qquad \leq\bigl({C}/{s}\bigr)^p \sum_{|\alpha|=m}\int_G|D^\alpha{\mathcal U}(X)|^p\,(X_n-\varphi(X'))^{pa}\,dX, \end{eqnarray} \noindent and the first inequality in (\ref{equivNr}) follows readily from it. Also, the second inequality in (\ref{equivNr}) is a trivial consequence of (\ref{VVV-space}). Going further, we aim to establish that \begin{equation}\label{equivNr-bis} c_1\,\|{u}\|_{V_p^{m,a}(\mathbb{R}^n_+)} \leq\|{u}\circ\varkappa\|_{V_p^{m,a}(G)} \leq c_2\,\|{u}\|_{V_p^{m,a}(\mathbb{R}^n_+)}, \end{equation} \noindent where $c_1$ and $c_2$ do not depend on $p$ and $s$, whereas $\varkappa:G\longrightarrow\mathbb{R}^n_+$ is the map introduced in \S{5.2}. Clearly, it suffices to prove the upper estimate for $\|{u}\circ\varkappa\|_{V_p^{m,a}(G)}$ in (\ref{equivNr-bis}). As a preliminary matter, we remark that \begin{eqnarray}\label{1.49} D^\gamma\bigl({u}(\varkappa(X))\bigr) & = & \bigl((\varkappa'^*(X)\xi)_{\xi=D}^\gamma\,{u}\bigr)(\varkappa(X)) \nonumber\\[6pt] && +\sum_{1\leq|\tau|<|\gamma|} (D^\tau{u})(\varkappa(X))\sum_\sigma\,c_\sigma\prod_{i=1}^n\prod_j D^{\sigma_{ij}}\varkappa_i(X), \end{eqnarray} \noindent where \begin{equation}\label{1.50} \sum_{i,j}\sigma_{ij}=\gamma,\quad |\sigma_{ij}|\geq 1,\quad \sum_{i,j}(|\sigma_{ij}|-1)=|\gamma|-|\tau|. 
\end{equation} \noindent In turn, (\ref{1.49})-(\ref{1.50}) and (\ref{1.5}) allow us to conclude that \begin{equation}\label{DUU} |D^\gamma\bigl({u}(\varkappa(X))\bigr)|\leq c\sum_{1\leq |\tau|\leq |\gamma|} x_n^{|\tau|-|\gamma|}\,|D^\tau{u}(x)|, \end{equation} \noindent where $x:=\varkappa(X)$, which, in view of (\ref{eq-phi}), yields the desired conclusion. Finally, we set \begin{equation}\label{dual-VG} V^{-m,a}_p(G):=\Bigl(V^{m,-a}_{p'}(G)\Bigr)^*, \end{equation} \noindent where, as usual, $p'=p/(p-1)$. \subsection{Solvability and regularity result for the Dirichlet problem in the domain $G$} Let us consider the differential operator \begin{equation}\label{E4} {\mathcal L}\,{\mathcal U} ={\mathcal L}(X,D_X)\,{\mathcal U}=\sum_{|\alpha|=|\beta|=m} D^\alpha(\mathfrak{A}_{\alpha\beta}(X)\,D^\beta{\mathcal U}),\qquad X\in G, \end{equation} \noindent whose matrix-valued coefficients satisfy \begin{equation}\label{E4a} \sum_{|\alpha|=|\beta|=m}\|\mathfrak{A}_{\alpha\beta}\|_{L_\infty(G)} \leq \kappa^{-1}. \end{equation} \noindent This operator generates the sesquilinear form ${\mathcal L}(\cdot,\cdot):V_p^{m,a}(G)\times V_{p'}^{m,-a}(G)\to{\mathbb{C}}$, with $p'$ the conjugate exponent of $p$, defined by \begin{equation}\label{LUV} {\mathcal L}({\mathcal U},{\mathcal V}):=\sum_{|\alpha|=|\beta|=m} \int_G\langle\mathfrak{A}_{\alpha\beta}(X)\, D^\beta{\mathcal U}(X),\,D^\alpha{\mathcal V}(X)\rangle\,dX. \end{equation} \noindent We assume that the inequality \begin{equation}\label{B25} \Re\,{\mathcal L}({\mathcal U},{\mathcal U}) \geq \kappa\sum_{|\gamma|=m}\|D^\gamma\,{\mathcal U}\|^2_{L_2(G)} \end{equation} \noindent holds for all ${\mathcal U}\in V_2^{m,0}(G)$. \begin{lemma}\label{lem5a} {\rm (i)} Let $p\in (1,\infty)$, $-1/p<a<1-1/p$ and $s:=1-a-1/p$.
Suppose that \begin{equation}\label{E7a} [\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})} +\sum_{|\alpha|=|\beta|=m}[{\mathfrak A}_{\alpha\beta}]_{{\rm BMO}(G)} \leq \delta, \end{equation} \noindent where $\delta$ satisfies \begin{equation}\label{E8b} \Bigl(pp'+\frac{1}{s(1-s)}\Bigr)\,\frac{\delta}{s(1-s)} <C(n,m,\kappa,\|\nabla\varphi\|_{L_\infty(\mathbb{R}^{n-1})}) \end{equation} \noindent with a sufficiently small constant $C$, independent of $p$ and $s$. In the case $m=1$ the factor $\delta/s(1-s)$ in {\rm (\ref{E8b})} can be replaced by $\delta$. Then the operator \begin{equation}\label{Liso-2} {\mathcal L}(X,D_X):V_p^{m,a}(G)\longrightarrow V_p^{-m,a}(G) \end{equation} \noindent is an isomorphism. {\rm (ii)} Let $p_i\in (1,\infty)$ and $-1/p_i<a_i<1-1/p_i$, where $i=1,2$. Suppose that {\rm (\ref{E8b})} holds with $p_i$ and $s_i=1-a_i-1/p_i$ in place of $p$ and $s$. If ${\mathcal U}\in V_{p_1}^{m,a_1}(G)$ and ${\mathcal L}\,{\mathcal U}\in V_{p_1}^{-m,a_1}(G)\cap V_{p_2}^{-m,a_2}(G)$, then ${\mathcal U}\in V_{p_2}^{m,a_2}(G)$. \end{lemma} \noindent{\bf Proof.} We shall extensively use the flattening mapping $\lambda$ and its inverse studied in \S{5.2}. The assertions (i) and (ii) will follow directly from Lemma~\ref{lem5} as soon as we show that the operator $L$ defined in $\mathbb{R}^n_+$ by \begin{equation}\label{E20} L({\mathcal U}\circ\lambda):=({\mathcal L}\,{\mathcal U})\circ\lambda \end{equation} \noindent satisfies all the hypotheses in that lemma. The sesquilinear form corresponding to the operator $L$ will be denoted by $L(u,v)$.
Set $u(x):={\mathcal U}(\lambda(x))$, $v(x):={\mathcal V}(\lambda(x))$ and note that the identity (\ref{1.49}) implies \begin{equation}\label{E40} D^\beta {\mathcal U}(X)=\bigl((\varkappa'^*(\lambda(x))\xi)_{\xi=D}^\beta\,{u} \bigr)(x)+\sum_{1\leq|\tau|<|\beta|}K_{\beta\tau}(x)\,x_n^{|\tau|-|\beta|} D^\tau u(x), \end{equation} \begin{equation}\label{E41} D^\alpha {\mathcal V}(X) =\bigl((\varkappa'^*(\lambda(x))\xi)_{\xi=D}^\alpha\,{v} \bigr)(x)+\sum_{1\leq|\tau|<|\alpha|}K_{\alpha\tau}(x)\,x_n^{|\tau|-|\alpha|} D^\tau v(x), \end{equation} \noindent where, thanks to (\ref{1.5}), the coefficients $K_{\gamma\tau}$ satisfy \begin{equation}\label{E42} \|K_{\gamma\tau}\|_{L_\infty(\mathbb{R}^n_+)} \leq c[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}. \end{equation} \noindent Plugging (\ref{E40}) and (\ref{E41}) into the definition of ${\mathcal L}({\mathcal U},{\mathcal V})$, we arrive at \begin{equation}\label{LUV2} {\mathcal L}({\mathcal U},{\mathcal V})=L_0(u,v) +\sum_{{1\leq|\alpha|,|\beta|\leq m}\atop{|\alpha|+|\beta|<2m}} \int_{\mathbb{R}^n_+}\langle {A}_{\alpha\beta}(x)\,x_n^{|\alpha|+|\beta|-2m} D^\beta\,u(x),\,D^\alpha v(x)\rangle\,dx, \end{equation} \noindent where \begin{equation}\label{Lzero} L_0(u,v)=\sum_{|\alpha|=|\beta|=m}\int_{\mathbb{R}^n_+} \langle (\mathfrak{A}_{\alpha\beta}\circ\lambda) ((\varkappa'^*\circ\lambda)\xi)_{\xi=D}^\beta\,{u},\, ((\varkappa'^*\circ\lambda)\xi)_{\xi=D}^\alpha\,{v}\rangle \,{\rm det}\,\lambda'\,dx. \end{equation} \noindent It follows from (\ref{E40})-(\ref{E42}) that the coefficient matrices $A_{\alpha\beta}$ obey \begin{equation}\label{E43} \sum_{{1\leq|\alpha|,|\beta|\leq m}\atop{|\alpha|+|\beta|<2m}} \|A_{\alpha\beta}\|_{L_\infty(\mathbb{R}^n_+)} \leq {c}{\kappa}^{-1}\,[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}, \end{equation} \noindent where $c$ depends on $m$, $n$, and $\|\nabla\varphi\|_{L_\infty(\mathbb{R}^{n-1})}$. 
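We take a moment to indicate how (\ref{1.5}) yields (\ref{E42}); the same mechanism lies behind (\ref{DUU}). In each product $\prod_{i,j} D^{\sigma_{ij}}\varkappa_i$ arising from (\ref{1.49}), the constraint $\sum_{i,j}(|\sigma_{ij}|-1)=|\beta|-|\tau|\geq 1$ forces at least one factor of order $\geq 2$; by (\ref{1.5}) and (\ref{eq-phi}), every such factor contributes a power $x_n^{\,1-|\sigma_{ij}|}$ together with a factor of $[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}$, whereas the first-order factors are bounded in terms of the Lipschitz constant $M$. Retaining one seminorm factor and majorizing the remaining ones by means of $[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}\leq 2\|\nabla\varphi\|_{L_\infty(\mathbb{R}^{n-1})}$, we obtain

```latex
% The exponents recombine according to (1.50):
|K_{\beta\tau}(x)|
\leq c(M)\,x_n^{\,|\beta|-|\tau|}\,
x_n^{-\sum_{i,j}(|\sigma_{ij}|-1)}\,
[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}
= c(M)\,[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})},
```

which is precisely (\ref{E42}).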
We can write the form $L_0(u,v)$ as \begin{equation}\label{Lzero-uv} \sum_{|\alpha| =|\beta| =m}\int_{\mathbb{R}^n_+} \langle {A}_{\alpha\beta}(x)\,D^\beta u(x),\,D^\alpha v(x)\rangle\,dx \end{equation} \noindent where the coefficient matrices $A_{\alpha\beta}$ are given by \begin{equation}\label{Aalbet} A_{\alpha\beta}={\rm det}\,\lambda'\,\sum_{|\gamma|=|\tau|=m} P_{\alpha\beta}^{\gamma\tau}(\varkappa'\circ\lambda) ({\mathfrak A}_{\gamma\tau}\circ\lambda), \end{equation} \noindent for some scalar homogeneous polynomials $P_{\alpha\beta}^{\gamma\tau}$ of the elements of the matrix $\varkappa'(\lambda(x))$ with ${\rm deg}\,P_{\alpha\beta}^{\gamma\tau}=2m$. In view of (\ref{1.33})-(\ref{1.36}), \begin{equation}\label{E44} \sum_{|\alpha|=|\beta|=m}[A_{\alpha\beta}]_{{\rm BMO}(\mathbb{R}^n_+)} \leq c\Bigl(\kappa^{-1}[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})} +\sum_{|\alpha|=|\beta|=m}[{\mathfrak A}_{\alpha\beta}]_{{\rm BMO}(G)}\Bigr), \end{equation} \noindent where $c$ depends on $n$, $m$, and $\|\nabla\varphi\|_{L_\infty(\mathbb{R}^{n-1})}$. By (\ref{E43}) \begin{equation}\label{L-L} |L(u,u)-L_0(u,u)|\leq c\,\delta\|u\|_{V_2^{m,0}(\mathbb{R}^n_+)}^2 \end{equation} \noindent and, therefore, \begin{equation}\label{ReLzero} \Re\,L_0(u,u)\geq\Re\,{\mathcal L}({\mathcal U},{\mathcal U}) -c\,\delta\|u\|^2_{V_2^{m,0}(\mathbb{R}^n_+)}. \end{equation} \noindent Using (\ref{B25}) and the equivalence \begin{equation}\label{norm-U} \|{\mathcal U}\|_{V_2^{m,0}(G)}\sim\|u\|_{V_2^{m,0}(\mathbb{R}^n_+)} \end{equation} \noindent (cf. the discussion in \S{5.3}), we arrive at (\ref{B5}). Thus, all conditions of Lemma~\ref{lem5} hold and the result follows. The improvement of (\ref{E8b}) for $m=1$ mentioned in the statement (i) holds because in this case $L=L_0$. $\Box$ \vskip 0.08in \section{Dirichlet problem in a bounded Lipschitz domain} \setcounter{equation}{0} \subsection{Preliminaries} Let $\Omega$ be a {\it bounded Lipschitz domain} in ${\mathbb{R}}^n$ which means (cf. 
{\bf\cite{St}}, p.\,189) that there exists a finite open covering $\{{\mathcal O}_j\}_{1\leq j\leq N}$ of $\partial\Omega$ with the property that, for every $j\in\{1,...,N\}$, ${\mathcal O}_j\cap\Omega$ coincides with the portion of ${\mathcal O}_j$ lying in the over-graph of a Lipschitz function $\varphi_j:{\mathbb{R}}^{n-1}\to{\mathbb{R}}$ (where ${\mathbb{R}}^{n-1}\times{\mathbb{R}}$ is a new system of coordinates obtained from the original one via a rigid motion). We then define the {\it Lipschitz constant} of a bounded Lipschitz domain $\Omega\subset{\mathbb{R}}^n$ as \begin{equation}\label{Lip-ct} \inf\,\Bigl(\max\{\|\nabla\varphi_j\|_{L_\infty({\mathbb{R}}^{n-1})}:\,1\leq j\leq N\} \Bigr) \end{equation} \noindent where the infimum is taken over all possible families $\{\varphi_j\}_{1\leq j\leq N}$ as above. It is a classical result that the surface measure $d\sigma$ is well-defined and that there exists an outward pointing normal vector $\nu$ at almost every point on $\partial\Omega$. We denote by $\rho(X)$ the distance from $X\in{\mathbb{R}}^n$ to $\partial\Omega$ and, for $p$, $a$ and $m$ as in (\ref{indices}), introduce the weighted Sobolev space $V^{m,a}_p(\Omega)$ naturally associated with the norm \begin{equation}\label{normU2} \|{\mathcal U}\|_{V_p^{m,a}(\Omega)} :=\Bigl(\sum_{0\leq |\beta|\leq m}\int_{\Omega} |\rho(X)^{|\beta|-m} D^\beta{\mathcal U}(X)|^p\,\rho(X)^{pa}\,dX\Bigr)^{1/p}. \end{equation} \noindent One can check the equivalence of the norms \begin{equation}\label{equiv-Nr2} \|{\mathcal U}\|_{V_p^{m,a}(\Omega)}\sim \|\rho_{\rm reg}^a\,{\mathcal U}\|_{V_p^{m,0}(\Omega)}, \end{equation} \noindent where $\rho_{\rm reg}(X)$ stands for the regularized distance from $X$ to $\partial\Omega$ (in the sense of Theorem~2, p.\,171 in {\bf\cite{St}}). 
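The verification of (\ref{equiv-Nr2}) rests on the estimates $|D^\gamma\rho_{\rm reg}|\leq c_\gamma\,\rho^{\,1-|\gamma|}$ satisfied by the regularized distance (cf. {\bf\cite{St}}): by Leibniz's rule,

```latex
% Differentiating the product \rho_{reg}^a\,\mathcal{U} and using
% |D^\gamma(\rho_{reg}^a)| \le c\,\rho^{\,a-|\gamma|}:
\rho^{\,|\beta|-m}\,|D^\beta(\rho_{\rm reg}^a\,{\mathcal U})|
\leq c\sum_{\tau\leq\beta}\rho^{\,|\beta|-m}\,
\rho^{\,a-(|\beta|-|\tau|)}\,|D^\tau{\mathcal U}|
= c\sum_{\tau\leq\beta}\rho^{\,a}\,\rho^{\,|\tau|-m}\,|D^\tau{\mathcal U}|,
```

which gives $\|\rho_{\rm reg}^a\,{\mathcal U}\|_{V_p^{m,0}(\Omega)}\leq c\,\|{\mathcal U}\|_{V_p^{m,a}(\Omega)}$; the opposite estimate follows by writing ${\mathcal U}=\rho_{\rm reg}^{-a}(\rho_{\rm reg}^a\,{\mathcal U})$ and running the same computation with $-a$ in place of $a$.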
It is also easily proved that $C_0^\infty(\Omega)$ is dense in $V^{m,a}_p(\Omega)$ and that \begin{equation}\label{sumDU} \|{\mathcal U}\|_{V_p^{m,a}(\Omega)}\sim \Bigl(\sum_{|\beta|=m}\int_\Omega|D^\beta{\mathcal U}(X)|^p\,\rho(X)^{pa}\,dX \Bigr)^{1/p} \end{equation} \noindent uniformly for ${\mathcal U}\in C_0^\infty(\Omega)$. As in (\ref{dual-VG}), we set \begin{equation}\label{dual-V} V^{-m,a}_p(\Omega):=\Bigl(V^{m,-a}_{p'}(\Omega)\Bigr)^*. \end{equation} Let us fix a Cartesian coordinate system and consider the differential operator \begin{equation}\label{E444} {\mathcal A}\,{\mathcal U}={\mathcal A}(X,D_X)\,{\mathcal U} :=\sum_{|\alpha|=|\beta|=m}D^\alpha({\mathcal A}_{\alpha\beta}(X) \,D^\beta{\mathcal U}),\qquad X\in\Omega, \end{equation} \noindent with measurable $l\times l$ matrix-valued coefficients. The corresponding sesquilinear form will be denoted by ${\mathcal A}({\mathcal U},{\mathcal V})$. Similarly to (\ref{E4a}) and (\ref{B25}), we impose the conditions \begin{equation}\label{E4b} \sum_{|\alpha|=|\beta|=m}\|{\mathcal A}_{\alpha\beta}\|_{L_\infty(\Omega)} \leq \kappa^{-1} \end{equation} \noindent and \begin{equation}\label{B25b} \Re\,{\mathcal A}({\mathcal U},{\mathcal U})\geq\kappa\sum_{|\gamma|=m} \|D^\gamma\,{\mathcal U}\|^2_{L_2(\Omega)}\quad\mbox{for all}\,\,\, {\mathcal U}\in V_2^{m,0}(\Omega). \end{equation} \subsection{Interior regularity of solutions} \begin{lemma}\label{lem2} Let $\Omega\subset{\mathbb{R}}^n$ be a bounded Lipschitz domain. Pick two functions ${\mathcal H},{\mathcal Z}\in C^\infty_0(\Omega)$ such that ${\mathcal H}\,{\mathcal Z}={\mathcal H}$, and assume that \begin{equation}\label{E5} \sum_{|\alpha|=|\beta|=m}[{\mathcal A}_{\alpha\beta}]_{{\rm BMO}(\Omega)} \leq\delta \end{equation} \noindent where \begin{equation}\label{E6} \delta\leq \frac{c(m,n,\kappa)}{p\,p'} \end{equation} \noindent with a sufficiently small constant $c(m,n,\kappa)>0$.
If ${\mathcal U}\in W_q^m(\Omega,loc)$ for a certain $q<p$ and ${\mathcal A}\,{\mathcal U}\in W_p^{-m}(\Omega,loc)$, then ${\mathcal U}\in W_p^m(\Omega,loc)$ and \begin{equation}\label{E3} \|{\mathcal H}\,{\mathcal U}\|_{W_p^m(\Omega)} \leq C\,(\|{\mathcal H}\,{\mathcal A}(\cdot,D)\, {\mathcal U}\|_{W_p^{-m}(\Omega)} +\|{\mathcal Z}\,{\mathcal U}\|_{W_q^m(\Omega)}). \end{equation} \end{lemma} \noindent{\bf Proof.} We start with a trick applied in {\bf\cite{CFL1}} under slightly different circumstances. We shall use the notation ${\mathcal A}_Y$ for the operator ${\mathcal A}(Y, D_X)$, where $Y\in\Omega$ and the notation $\Phi_Y$ for a fundamental solution of ${\mathcal A}_Y$ in $\mathbb{R}^n$. Then, with star denoting the convolution product, \begin{equation}\label{HUPhi} {\mathcal H}\,{\mathcal U} +\Phi_Y\ast({\mathcal A}-{\mathcal A}_Y)({\mathcal H}\,{\mathcal U}) =\Phi_Y\ast({\mathcal H}\,{\mathcal A}{\mathcal U}) +\Phi_Y\ast ([{\mathcal A},{\mathcal H}]({\mathcal Z}{\mathcal U})) \end{equation} \noindent and, consequently, for each multi-index $\gamma$, $|\gamma|=m$, \begin{eqnarray}\label{DHU} && D^\gamma({\mathcal H}\,{\mathcal U})+\sum_{|\alpha|=|\beta|=m} D^{\alpha+\gamma}\Phi_Y\ast\bigl(({\mathcal A}_{\alpha\beta} -{\mathcal A}_{\alpha\beta}(Y)) D^\beta({\mathcal H}\,{\mathcal U})\bigr) \nonumber\\[6pt] && \qquad\quad =D^\gamma\Phi_Y\ast({\mathcal H}\,{\mathcal A}{\mathcal U}) +D^\gamma\Phi_Y\ast([{\mathcal A},{\mathcal H}]({\mathcal Z}{\mathcal U})). \end{eqnarray} \noindent Writing this equation at the point $Y$ and using (\ref{b1}), we obtain \begin{eqnarray}\label{E3a} && (1-C\,pp'\delta) \sum_{|\gamma|=m}\|D^\gamma({\mathcal H}\,{\mathcal U})\|_{L_p(\Omega)} \nonumber\\[6pt] && \qquad\quad \leq C(p,\kappa) (\|{\mathcal H}\,{\mathcal A}{\mathcal U}\|_{W_p^{-m}(\Omega)} +\|[{\mathcal A},{\mathcal H}]({\mathcal Z}{\mathcal U})\|_{W_p^{-m}(\Omega)}). \end{eqnarray} \noindent Let $p'<n$. 
We have for every ${\mathcal V}\in \mathaccent"0017 W^m_{p'}(\Omega)$ \begin{eqnarray}\label{AHZUV} && \Bigl|\int_\Omega\langle[{\mathcal A},{\mathcal H}] ({\mathcal Z}{\mathcal U}),{\mathcal V}\rangle\,dX\Bigr| =|{\mathcal A}({\mathcal H}{\mathcal Z}{\mathcal U},{\mathcal V}) -{\mathcal A}({\mathcal Z}{\mathcal U},{\mathcal H}{\mathcal V})| \nonumber\\[6pt] && \qquad\qquad\quad \leq c(\|{\mathcal Z}{\mathcal U}\|_{W_p^{m-1}(\Omega)} \|{\mathcal V}\|_{W_{p'}^m(\Omega)} +\|{\mathcal Z}{\mathcal U}\|_{W_{\frac{pn}{n+p}}^{m} (\Omega)}\|{\mathcal V}\|_{W_{\frac{p'n}{n-p'}}^{m-1}(\Omega)}). \end{eqnarray} \noindent By Sobolev's theorem \begin{equation}\label{ZZU} \|{\mathcal Z}{\mathcal U}\|_{W_p^{m-1}(\Omega)} \leq c\,\|{\mathcal Z}{\mathcal U}\|_{W_{\frac{pn}{n+p}}^{m}(\Omega)} \end{equation} \noindent and \begin{equation}\label{VWpm} \|{\mathcal V}\|_{W_{\frac{p'n}{n-p'}}^{m-1}(\Omega)} \leq c\,\|{\mathcal V}\|_{W_{p'}^m(\Omega)}. \end{equation} \noindent Therefore, \begin{equation}\label{AHZ} \Bigl|\int_\Omega\langle [{\mathcal A},{\mathcal H}]({\mathcal Z}{\mathcal U}),{\mathcal V}\rangle \,dX\Bigr| \leq c\,\|{\mathcal Z}{\mathcal U}\|_{W_{\frac{pn}{n+p}}^{m}(\Omega)} \|{\mathcal V}\|_{W_{p'}^m(\Omega)} \end{equation} \noindent which is equivalent to the inequality \begin{equation}\label{AHZ3} \|[{\mathcal A},{\mathcal H}]({\mathcal Z}{\mathcal U})\|_{W_p^{-m}(\Omega)} \leq c\,\|{\mathcal Z}{\mathcal U}\|_{W_{\frac{pn}{n+p}}^{m} (\Omega)}. \end{equation} \noindent In the case $p'\geq n$, the same argument leads to a similar inequality, where $pn/(n+p)$ is replaced by $1+\varepsilon$ with an arbitrary $\varepsilon>0$ for $p'>n$ and $\varepsilon=0$ for $p'=n$. Now, (\ref{E3}) follows from (\ref{E3a}) if either $p'\geq n$, or if $p'<n$ and $q\geq pn/(n+p)$. In the remaining case, the goal is achieved by iterating this argument finitely many times. $\Box$ \vskip 0.08in \begin{corollary}\label{cor3} Let $p\geq 2$ and suppose that {\rm (\ref{E5})} and {\rm (\ref{E6})} hold.
If ${\mathcal U}\in W_2^m(\Omega,loc)$ and ${\mathcal A}\,{\mathcal U}\in W_p^{-m}(\Omega,loc)$, then ${\mathcal U}\in W_p^m(\Omega,loc)$ and \begin{equation}\label{E3-aaa} \|{\mathcal H}\,{\mathcal U}\|_{W_p^m(\Omega)} \leq C\,(\|{\mathcal Z}\,{\mathcal A}(\cdot,D)\, {\mathcal U}\|_{W_p^{-m}(\Omega)} +\|{\mathcal Z}\,{\mathcal U}\|_{W_2^{m-1}(\Omega)}). \end{equation} \end{corollary} \noindent{\bf Proof.} Let ${\mathcal Z}_0$ denote a real-valued function in $C^\infty_0(\Omega)$ such that ${\mathcal H}{\mathcal Z}_0={\mathcal H}$ and ${\mathcal Z}_0{\mathcal Z}={\mathcal Z}_0$. By (\ref{E3}) \begin{equation}\label{E3b} \|{\mathcal H}\,{\mathcal U}\|_{W_p^m(\Omega)} \leq C\,(\|{\mathcal H}\,{\mathcal A}(\cdot,D)\, {\mathcal U}\|_{W_p^{-m}(\Omega)} +\|{\mathcal Z}_0\,{\mathcal U}\|_{W_2^{m}(\Omega)}) \end{equation} \noindent and it follows from (\ref{B25b}) that \begin{equation}\label{ZUW} \|{\mathcal Z}_0\,{\mathcal U}\|^2_{W_2^m(\Omega)}\leq c\kappa^{-1} \Re\,{\mathcal A}({\mathcal Z}_0{\mathcal U},{\mathcal Z}_0{\mathcal U}). \end{equation} \noindent Furthermore, \begin{equation}\label{AZ5} |{\mathcal A}({\mathcal Z}_0{\mathcal U},{\mathcal Z}_0{\mathcal U}) -{\mathcal A}({\mathcal U},{\mathcal Z}_0^2{\mathcal U})|\leq c\kappa^{-1} \|{\mathcal Z}{\mathcal U}\|_{W_2^{m-1}(\Omega)}\, \|{\mathcal Z}_0{\mathcal U}\|_{W_2^{m}(\Omega)}. \end{equation} \noindent Hence \begin{equation}\label{ZU6} \|{\mathcal Z}_0\,{\mathcal U}\|^2_{W_2^m(\Omega)}\leq c\kappa^{-1} (\|{\mathcal Z}\,{\mathcal A}{\mathcal U}\|_{W_2^{-m}(\Omega)}\, \|{\mathcal Z}_0^2\,{\mathcal U}\|_{W_2^m(\Omega)} +\kappa^{-1}\|{\mathcal Z}{\mathcal U}\|_{W_2^{m-1}(\Omega)}\, \|{\mathcal Z}_0{\mathcal U}\|_{W_2^{m}(\Omega)}) \end{equation} \noindent and, therefore, \begin{equation}\label{Zu7} \|{\mathcal Z}_0\,{\mathcal U}\|_{W_2^m(\Omega)} \leq c\kappa^{-1}(\|{\mathcal Z}\,{\mathcal A} {\mathcal U}\|_{W_2^{-m}(\Omega)}\, +\kappa^{-1}\|{\mathcal Z}{\mathcal U}\|_{W_2^{m-1}(\Omega)}). 
\end{equation} \noindent Combining this inequality with (\ref{E3b}), we arrive at (\ref{E3-aaa}). $\Box$ \vskip 0.08in \subsection{Invertibility of ${\mathcal A}:V_p^{m,a}(\Omega)\longrightarrow V_p^{-m,a}(\Omega)$} Recall the infinitesimal mean oscillations as defined in (\ref{e60}). \begin{theorem}\label{th1a} Let $1<p<\infty$, $0<s<1$, and $a=1-s-1/p$. Furthermore, let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^n$. Suppose that the differential operator ${\mathcal A}$ is as in {\rm \S{6.1}} and that, in addition, \begin{equation}\label{E16} \sum_{|\alpha|=|\beta|=m}\{{\mathcal A}_{\alpha\beta}\}_{{\rm Osc}(\Omega)} +\{\nu\}_{{\rm Osc}(\partial\Omega)}\leq\delta, \end{equation} \noindent where \begin{equation}\label{E17} \Bigl(pp'+\frac{1}{s(1-s)}\Bigr)\frac{\delta}{s(1-s)}\leq c \end{equation} \noindent for a sufficiently small constant $c>0$ independent of $p$ and $s$. In the case $m=1$ the factor $\delta/s(1-s)$ in {\rm (\ref{E17})} can be replaced by $\delta$. Then the operator \begin{equation}\label{cal-L} {\mathcal A}:V_p^{m,a}(\Omega)\longrightarrow V_p^{-m,a}(\Omega) \end{equation} \noindent is an isomorphism. \end{theorem} \noindent{\bf Proof.} We shall proceed in a series of steps, starting with \vskip 0.08in (i) {\it The construction of the auxiliary domain $G$ and operator ${\mathcal L}$}. \noindent Let $\varepsilon$ be small enough so that \begin{equation}\label{m1} \sum_{|\alpha|=|\beta|=m}{\int{\mkern-19mu}-}_{\!\!\!B_r\cap \Omega} {\int{\mkern-19mu}-}_{\!\!\!B_r\cap\Omega} |{\mathcal A}_{\alpha\beta}(X)-{\mathcal A}_{\alpha\beta}(Y)|\,dXdY \leq 2\delta \end{equation} \noindent for all balls in $\{B_r\}_\Omega$ with radii $r<\varepsilon$ and \begin{equation}\label{m2} {\int{\mkern-19mu}-}_{\!\!\!B_r\cap\partial\Omega}{\int{\mkern-19mu}-}_{\!\!\!B_r\cap\partial\Omega}\, \Bigl|\nu(X)-\nu(Y)\,\Bigr|\,d\sigma_Xd\sigma_Y\leq 2\delta \end{equation} \noindent for all balls in $\{B_r\}_{\partial\Omega}$ with radii $r<\varepsilon$.
We fix a ball $B_\varepsilon$ in $\{B_\varepsilon\}_{\partial\Omega}$ and assume without loss of generality that, in a suitable system of Cartesian coordinates, \begin{equation}\label{newGGG} \Omega\cap B_\varepsilon=\{X=(X',X_n)\in B_\varepsilon:\,X_n>\varphi(X')\} \end{equation} \noindent for some Lipschitz function $\varphi:{\mathbb{R}}^{n-1}\to{\mathbb{R}}$. Consider now the unique cube $Q(\varepsilon)$ (relative to this system of coordinates) which is inscribed in $B_\varepsilon$ and denote its projection onto $\mathbb{R}^{n-1}$ by $Q'(\varepsilon)$. Since $\nabla\varphi=-\nu'/\nu_n$, it follows from (\ref{m2}) that \begin{equation}\label{m3} {\int{\mkern-19mu}-}_{\!\!\!B'_r}{\int{\mkern-19mu}-}_{\!\!\!B'_r}\, \Bigl|\nabla\varphi(X')-\nabla\varphi(Y')\,\Bigr|\,dX'dY'\leq c(n)\,\delta, \end{equation} \noindent where $B'_r=B_r\cap \mathbb{R}^{n-1}$, $r<\varepsilon$. Let us retain the notation $\varphi$ for the mirror extension of the function $\varphi$ from $Q'(\varepsilon)$ onto $\mathbb{R}^{n-1}$. We extend ${\mathcal A}_{\alpha\beta}$ from $Q(\varepsilon)\cap\Omega$ onto $Q(\varepsilon)\backslash\Omega$ by setting \begin{equation}\label{A=A} {\mathcal A}_{\alpha\beta}(X) :={\mathcal A}_{\alpha\beta}(X',-X_n+2\varphi(X')), \qquad X\in Q(\varepsilon)\backslash\Omega, \end{equation} \noindent and we shall use the notation ${\mathfrak A}_{\alpha\beta}$ for the periodic extension of ${\mathcal A}_{\alpha\beta}$ from $Q(\varepsilon)$ onto $\mathbb{R}^n$. Consistent with the earlier discussion in \S{5}, we shall denote the special Lipschitz domain $\{X=(X',X_n):\,X'\in\mathbb{R}^{n-1}, \,X_n>\varphi(X')\}$ by $G$. One can easily see that, owing to $2\varepsilon n^{-1/2}$-periodicity of $\varphi$ and ${\mathcal A}_{\alpha\beta}$, \begin{equation}\label{SumA} \sum_{|\alpha|=|\beta|=m}[{\mathcal A}_{\alpha\beta}]_{{\rm BMO}(G)} +[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}\leq c(n)\,\delta. 
\end{equation} \noindent Now, with the operator ${\mathcal A}(X,D_X)$ in $\Omega$, we associate an auxiliary operator ${\mathcal L}(X,D_X)$ in $G$ given by (\ref{E4}). \vskip 0.08in (ii) {\it Uniqueness.} \noindent Assuming that ${\mathcal U}\in V_p^{m,a}(\Omega)$ satisfies ${\mathcal A}\,{\mathcal U}=0$ in $\Omega$, we shall show that ${\mathcal U}\in V_2^{m,0}(\Omega)$. This will imply that ${\mathcal U}=0$, which proves the injectivity of the operator (\ref{cal-L}). To this end, pick a function ${\mathcal H}\in C_0^\infty(Q(\varepsilon))$ and write ${\mathcal L}({\mathcal H}\,{\mathcal U}) =[{\mathcal L},\,{\mathcal H}]\,{\mathcal U}$, noting that ${\mathcal L}\,{\mathcal U}={\mathcal A}\,{\mathcal U}=0$ on $Q(\varepsilon)\cap\Omega$. Also, fix a small $\theta>0$ and select a smooth function $\Lambda$ on $\mathbb{R}^1_+$, which is identically $1$ on $[0,1]$ and which vanishes identically on $(2,\infty)$. Then by (ii) in Lemma~\ref{lem5a}, \begin{equation}\label{E19} {\mathcal L}({\mathcal H}\,{\mathcal U})-[{\mathcal L},\,{\mathcal H}]\, (\Lambda(\rho_{\rm reg}/\theta)\,{\mathcal U}) \in V_2^{-m,0}(G)\cap V_p^{-m,a}(G). \end{equation} \noindent Note that the operator \begin{equation}\label{LH-1} [{\mathcal L},\,{\mathcal H}]\rho_{\rm reg}^{-1}: V_p^{m,a}(G) \longrightarrow V_p^{-m,a}(G) \end{equation} \noindent is bounded and that the norm of the multiplier $\rho_{\rm reg}\,\Lambda(\rho_{\rm reg}/\theta)$ in $V_p^{m,a}(G)$ is $O(\theta)$. Moreover, the same is true for $p=2$ and $a=0$. The inclusion (\ref{E19}) can be written in the form \begin{equation}\label{E21} {\mathcal L}({\mathcal H}\,{\mathcal U}) +{\mathcal M}({\mathcal Z}\,{\mathcal U})\in V_p^{-m,a}(G)\cap V_2^{-m,0}(G), \end{equation} \noindent where ${\mathcal Z}\in C^\infty_0(\mathbb{R}^n)$, ${\mathcal Z}\,{\mathcal H}={\mathcal H}$ and ${\mathcal M}$ is a linear operator mapping \begin{equation}\label{V2Vp} V_p^{m,a}(G)\to V_p^{-m,a}(G)\quad {\rm and}\quad V_2^{m,0}(G)\to V_2^{-m,0}(G) \end{equation} \noindent with both norms of order $O(\theta)$. 
Select a finite covering of $\overline{\Omega}$ by cubes $Q_j(\varepsilon)$ and let $\{{\mathcal H}_j\}$ be a smooth partition of unity subordinate to $\{Q_j(\varepsilon)\}$. Also, let ${\mathcal Z}_j\in C_0^\infty(Q_j(\varepsilon))$ be such that ${\mathcal H}_j{\mathcal Z}_j={\mathcal H}_j$. By $G_j$ we denote the special Lipschitz domain generated by the cube $Q_j(\varepsilon)$ as in part (i) of the present proof. The corresponding operators ${\mathcal L}$ and ${\mathcal M}$ will be denoted by ${\mathcal L}_j$ and ${\mathcal M}_j$, respectively. It follows from (\ref{E21}) that \begin{equation}\label{Hu8} {\mathcal H}_j\,{\mathcal U} +\sum_k({\mathcal L}_j^{-1}\,{\mathcal M}_j\,{\mathcal Z}_j \,{\mathcal Z}_k)({\mathcal H}_k\,{\mathcal U}) \in V_p^{m,a}(\Omega)\cap V_2^{m,0}(\Omega). \end{equation} \noindent Taking into account that the norms of the matrix operator ${\mathcal L}_j^{-1}\,{\mathcal M}_j\,{\mathcal Z}_j\,{\mathcal Z}_k$ in the spaces $V_p^{m,a}(\Omega)$ and $V_2^{m,0}(\Omega)$ are $O(\theta)$, we may take $\theta>0$ small enough and obtain ${\mathcal H}_j\,{\mathcal U}\in V_2^{m,0}(\Omega)$, i.e. ${\mathcal U}\in V_2^{m,0}(\Omega)$. Therefore, ${\mathcal A}:V_p^{m,a}(\Omega)\to V_p^{-m,a}(\Omega)$ is injective. \vskip 0.08in (iii) {\it A priori estimate}. \noindent Let $p\geq 2$ and assume that ${\mathcal U}\in V_p^{m,a}(\Omega)$. Referring to Corollary~\ref{cor3} and arguing as in part (ii) of the present proof, we arrive at the equation \begin{equation}\label{m32} {\mathcal H}_j\,{\mathcal U} +\sum_k({\mathcal L}_j^{-1}\,{\mathcal M}_j\,{\mathcal Z}_j\, {\mathcal Z}_k)({\mathcal H}_k\,{\mathcal U})={\mathcal F}, \end{equation} \noindent whose right-hand side satisfies \begin{equation}\label{FVO} \|{\mathcal F}\|_{V_p^{m,a}(\Omega)}\leq c (\|{\mathcal A}\,{\mathcal U}\|_{V_p^{-m,a}(\Omega)} +\|{\mathcal U}\|_{W_2^{m-1}(\omega)}), \end{equation} \noindent for some domain $\omega$ with ${\overline\omega}\subset\Omega$. 
Since the $V_p^{m,a}(\Omega)$-norm of the sum in (\ref{m32}) does not exceed $ C\theta\|{\mathcal U}\|_{V_p^{m,a}(\Omega)}$, we obtain the estimate \begin{equation}\label{m32-bis} \|{\mathcal U}\|_{V_p^{m,a}(\Omega)} \leq c\,(\|{\mathcal A}\,{\mathcal U}\|_{V_p^{-m,a}(\Omega)} +\|{\mathcal U} \|_{W_2^{m-1}(\omega)}). \end{equation} \vskip 0.08in (iv) {\it End of proof.} \noindent Let $p\geq 2$. The range of the operator ${\mathcal A}:V_p^{m,a}(\Omega)\to V_p^{-m,a}(\Omega)$ is closed by (\ref{m32-bis}) and the compactness of the restriction operator $V_p^{m,a}(\Omega) \to W_2^{m-1}(\omega)$. Since the coefficients of the adjoint operator ${\mathcal A}^*$ satisfy the same conditions as those of ${\mathcal A}$, the operator ${\mathcal A}^*: V_{p'}^{m,-a}(\Omega)\to V_{p'}^{-m,-a}(\Omega)$ is injective. Therefore, we conclude that ${\mathcal A}:V_{p}^{m,a}(\Omega)\to V_{p}^{-m,a}(\Omega)$ is surjective. Being also injective, ${\mathcal A}$ is an isomorphism for $p\geq 2$. Hence ${\mathcal A}^*$ is an isomorphism for $p'\leq 2$ and, by the symmetry of the hypotheses, ${\mathcal A}$ is an isomorphism for $p\leq 2$. The result follows. $\Box$ \vskip 0.08in \subsection{Traces and extensions} Let $\Omega\subset{\mathbb{R}}^n$ be a bounded Lipschitz domain and, for $m\in{\mathbb{N}}$ as well as $1<p<\infty$ and $-1/p<a<1-1/p$, consider a new space, $W_p^{m,a}(\Omega)$, consisting of functions ${\mathcal U}\in L_p(\Omega,loc)$ with the property that $\rho^{a}D^\alpha{\mathcal U}\in L_p(\Omega)$ for all multi-indices $\alpha$ with $|\alpha|=m$. We equip $W_p^{m,a}(\Omega)$ with the norm \begin{equation}\label{newW} \|{\mathcal U}\|_{W_p^{m,a}(\Omega)} :=\sum_{|\alpha|=m}\|D^\alpha{\mathcal U}\|_{L_p(\Omega,\,\rho(X)^{ap}\,dX)} +\|{\mathcal U}\|_{L_p(\omega)}, \end{equation} \noindent where $\omega$ is a non-empty open set with $\overline{\omega}\subset\Omega$. An equivalent norm is given by the expression in (\ref{W-Nr}). 
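For orientation we note that in the unweighted case $a=0$ the norm (\ref{newW}) becomes \begin{equation*} \|{\mathcal U}\|_{W_p^{m,0}(\Omega)} =\sum_{|\alpha|=m}\|D^\alpha{\mathcal U}\|_{L_p(\Omega)} +\|{\mathcal U}\|_{L_p(\omega)}, \end{equation*} \noindent which, since $\Omega$ is a bounded Lipschitz domain, is equivalent to the usual norm in the Sobolev space $W_p^m(\Omega)$ (the intermediate derivatives being controlled by a standard compactness argument). 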
We omit the standard proof of the fact that \begin{equation}\label{dense} C^\infty({\overline\Omega})\hookrightarrow W_p^{m,a}(\Omega) \quad\mbox{densely}. \end{equation} Recall that for $p\in(1,\infty)$ and $s\in(0,1)$ the Besov space $B_p^s(\partial\Omega)$ is then defined via the requirement (\ref{Bes-xxx}). If we introduce the $L_p$-modulus of continuity \begin{equation}\label{omega-p} \omega_p(f,t):=\Bigl(\int\!\!\!\!\!\!\int \limits_{{|X-Y|<t}\atop{X,Y\in\partial\Omega}} |f(X)-f(Y)|^p\,d\sigma_Xd\sigma_Y\Bigr)^{1/p}, \end{equation} \noindent then \begin{equation}\label{Eqv} \|f\|_{B_p^s(\partial\Omega)}\sim\|f\|_{L_p(\partial\Omega)} +\left(\int_0^\infty\frac{\omega_p(f,t)^p}{t^{n-1+ps}}dt\right)^{1/p}, \end{equation} \noindent uniformly for $f\in B^s_p(\partial\Omega)$. The nature of our problem requires that we work with Besov spaces (defined on Lipschitz boundaries) which exhibit a higher order of smoothness. In accordance with {\bf\cite{JW}}, we now make the following definition. \begin{definition}\label{def1} For $p\in(1,\infty)$, $m\in{\mathbb{N}}$ and $s\in(0,1)$, define the (higher order) Besov space $\dot{B}^{m-1+s}_p(\partial\Omega)$ as the collection of all finite families $\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}$ of functions defined on $\partial\Omega$ with the following property. For each multi-index $\alpha$ of length $\leq m-1$ let \begin{equation}\label{reminder} R_\alpha(X,Y):=f_\alpha(X)-\sum_{|\beta|\leq m-1-|\alpha|}\frac{1}{\beta!}\, f_{\alpha+\beta}(Y)\,(X-Y)^\beta,\qquad X,Y\in\partial\Omega, \end{equation} \noindent and consider the $L_p$-modulus of continuity \begin{equation}\label{rem-Rr} r_\alpha(t):=\Bigl(\int\!\!\!\!\!\!\int \limits_{{|X-Y|<t}\atop{X,Y\in\partial\Omega}} |R_\alpha(X,Y)|^p\,d\sigma_Xd\sigma_Y\Bigr)^{1/p}. 
\end{equation} \noindent Then \begin{equation}\label{Bes-Nr} \|\dot{f}\|_{\dot{B}^{m-1+s}_p(\partial\Omega)} :=\sum_{|\alpha|\leq m-1}\|f_\alpha\|_{L_p(\partial\Omega)} +\sum_{|\alpha|\leq m-1}\Bigl(\int_0^\infty \frac{r_\alpha(t)^p}{t^{p(m-1+s-|\alpha|)+n-1}}\,dt\Bigr)^{1/p}<\infty. \end{equation} \end{definition} For further reference we note here that for each fixed $\kappa>0$, an equivalent norm is obtained by replacing $r_\alpha(t)$ by $r_\alpha(\kappa\,t)$ in (\ref{Bes-Nr}). Also, when $m=1$, the above definition agrees with (\ref{Bes-xxx}), thanks to (\ref{Eqv}). A few notational conventions which make the exposition more transparent are as follows. Given a family of functions $\{f_\alpha\}_{|\alpha|\leq m-1}$ on $\partial\Omega$ and $X\in\Omega$, $Y,Z\in\partial\Omega$, set \begin{equation}\label{PPP} \begin{array}{l} {\displaystyle{ P_\alpha(X,Y):=\sum_{|\beta|\leq m-1-|\alpha|}\frac{1}{\beta!}\, f_{\alpha+\beta}(Y)\,(X-Y)^\beta,\qquad \forall\,\alpha\,:\,|\alpha|\leq m-1,}} \\[25pt] P(X,Y):=P_{(0,...,0)}(X,Y), \end{array} \end{equation} \noindent so that \begin{equation}\label{PR-0} R_\alpha(Y,Z)=f_\alpha(Y)-P_\alpha(Y,Z), \qquad\forall\,\alpha\,:\,|\alpha|\leq m-1, \end{equation} \noindent and the following elementary identities hold for each multi-index $\alpha$ of length $\leq m-1$: \begin{eqnarray}\label{PR} D^{\beta}_XP_{\alpha}(X,Y) & = & P_{\alpha+\beta}(X,Y), \qquad|\beta|\leq m-1-|\alpha|, \nonumber\\[6pt] P_{\alpha}(X,Y) -P_{\alpha}(X,Z) & =& \sum_{|\beta|\leq m-1-|\alpha|}R_{\alpha+\beta}(Y,Z)\frac{(X-Y)^\beta}{\beta!}. \end{eqnarray} \noindent See, e.g., p.\,177 in {\bf\cite{St}} for the last formula. \begin{lemma}\label{trace-1} For each $1<p<\infty$, $-1/p<a<1-1/p$ and $s=1-a-1/p$, the trace operator \begin{equation}\label{TR-1} {\rm Tr}:W^{1,a}_p(\Omega)\longrightarrow B^s_p(\partial\Omega) \end{equation} \noindent is well-defined, linear, bounded, onto and has $V^{1,a}_p(\Omega)$ as its null-space. 
Furthermore, there exists a linear, continuous mapping \begin{equation}\label{Extension} {\mathcal E}:B^s_p(\partial\Omega)\longrightarrow W^{1,a}_p(\Omega), \end{equation} \noindent called extension operator, such that ${\rm Tr}\circ{\mathcal E}=I$ (i.e., the operator (\ref{TR-1}) has a bounded, linear right-inverse). \end{lemma} \noindent{\bf Proof.} By a standard argument involving a smooth partition of unity it suffices to deal with the case when $\Omega$ is the domain lying above the graph of a Lipschitz function $\varphi:{\mathbb{R}}^{n-1}\to{\mathbb{R}}$. Composing with the bi-Lipschitz homeomorphism ${\mathbb{R}}^n_+\ni(X',X_n)\mapsto(X',\varphi(X')+X_n)\in\Omega$ further reduces matters to the case when $\Omega={\mathbb{R}}^n_+$, in which situation the claims in the lemma have been proved in {\bf\cite{Usp}}. $\Box$ \vskip 0.08in We need to establish an analogue of Lemma~\ref{trace-1} for higher smoothness spaces. While for $\Omega={\mathbb{R}}^n_+$ this has been done by Uspenski\u{\i} in {\bf\cite{Usp}}, the flattening argument used in Lemma~\ref{trace-1} is no longer effective in this context. Let us also mention here that a result similar in spirit, valid for any Lipschitz domain $\Omega$ but with $B_p^{m-1+s+1/p}(\Omega)$ in place of $W^{m,a}_p(\Omega)$ (cf. (\ref{incls}) for the relationship between these spaces), has been proved by A.\,Jonsson and H.\,Wallin in {\bf\cite{JW}} (in fact, in this latter context, these authors have dealt with much more general sets than Lipschitz domains). The result which serves our purposes is as follows. \begin{proposition}\label{trace-2} Let $1<p<\infty$, $-1/p<a<1-1/p$, $s=1-a-1/p\in(0,1)$ and $m\in{\mathbb{N}}$. 
Define the {\rm higher} {\rm order} trace operator \begin{equation}\label{TR-11} {\rm tr}_{m-1}:W^{m,a}_p(\Omega)\longrightarrow \dot{B}^{m-1+s}_p(\partial\Omega) \end{equation} \noindent by setting \begin{equation}\label{Tr-DDD} {\rm tr}_{m-1}\,\,{\mathcal U} :=\Bigl\{i^{|\alpha|}\,{\rm Tr}\,[D^\alpha\,{\mathcal U}]\Bigr\} _{|\alpha|\leq m-1}, \end{equation} \noindent where the traces in the right-hand side are taken in the sense of Lemma~\ref{trace-1}. Then (\ref{TR-11})-(\ref{Tr-DDD}) is a well-defined, linear, bounded operator, which is onto and has $V^{m,a}_p(\Omega)$ as its null-space. Moreover, it has a bounded, linear right-inverse, i.e. there exists a linear, continuous operator \begin{equation}\label{Ext-222} {\mathcal E}:\dot{B}^{m-1+s}_p(\partial\Omega) \longrightarrow W^{m,a}_p(\Omega) \end{equation} \noindent such that \begin{equation}\label{Ext-333} \dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega) \Rightarrow i^{|\alpha|}\,{\rm Tr}\,[D^\alpha({\mathcal E}\,\dot{f})]=f_\alpha, \quad\forall\,\alpha\,:\,|\alpha|\leq m-1. \end{equation} \end{proposition} In order to facilitate the exposition, we isolate a couple of preliminary results prior to the proof of Proposition~\ref{trace-2}. \begin{lemma}\label{Lemma-R} Assume that $\varphi:{\mathbb{R}}^{n-1}\to{\mathbb{R}}$ is a Lipschitz function and define $\Phi:{\mathbb{R}}^{n-1}\to\partial\Omega\hookrightarrow{\mathbb{R}}^n$ by setting $\Phi(X'):=(X',\varphi(X'))$ at each $X'\in{\mathbb{R}}^{n-1}$. 
Define the Lipschitz domain $\Omega$ as $\{X=(X',X_n)\in{\mathbb{R}}^n:\,X_n>\varphi(X')\}$ and, for some fixed $m\in{\mathbb{N}}$, $p\in(1,\infty)$ and $s\in(0,1)$ consider a system of functions $f_\alpha\in B^s_p(\partial\Omega)$, $\alpha\in{\mathbb{N}}_0^n$, $|\alpha|\leq m-1$, with the property that \begin{equation}\label{B-CC} \frac{\partial}{\partial X_k}[f_\alpha(\Phi(X'))] =\sum_{j=1}^{n}f_{\alpha+e_j}(\Phi(X'))\partial_k\Phi_j(X'), \qquad 1\leq k\leq n-1, \end{equation} \noindent for each multi-index $\alpha$ of length $\leq m-2$, where $\{e_j\}_j$ is the canonical orthonormal basis in ${\mathbb{R}}^n$. Finally, for each $l\in\{1,...,m-1\}$ introduce $\Delta_l:=\{(t_1,...,t_{l}):\,0\leq t_{l}\leq\cdots\leq t_1\leq 1\}$, and define $R_\alpha(X,Y)$ as in (\ref{reminder}). Then if $\alpha$ is an arbitrary multi-index of length $\leq m-2$ and $r:=m-1-|\alpha|$, the following identity holds: \begin{eqnarray}\label{RRR=id} && R_\alpha(\Phi(X'),\Phi(Y')) \nonumber\\[6pt] &&\quad =\sum_{(j_1,...,j_{r})\in\{1,...,n\}^{r}}\Bigl\{\int_{\Delta_{r}}\Bigl[ f_{\alpha+e_{j_1}+\cdots+e_{j_r}}(\Phi(Y'+t_{r}(X'-Y'))) -f_{\alpha+e_{j_1}+\cdots+e_{j_r}}(\Phi(Y'))\Bigr] \nonumber\\[6pt] && \qquad\times\prod_{k=1}^{r}\,\nabla\Phi_{j_k}(Y'+t_{k}(X'-Y'))\cdot(X'-Y') \,dt_{r}\cdots dt_1\Bigr\},\qquad X',\,Y'\in{\mathbb{R}}^{n-1}. 
\end{eqnarray} \end{lemma} \noindent{\bf Proof.} We shall show that for any system of functions $\{f_\alpha\}_{|\alpha|\leq m-1}$ which satisfies (\ref{B-CC}), any multi-index $\alpha\in{\mathbb{N}}_0^n$ with $|\alpha|\leq m-2$ and any $l\in{\mathbb{N}}$ with $l\leq r:=m-1-|\alpha|$, there holds \begin{eqnarray}\label{FF=id} && f_\alpha(\Phi(X'))-\sum_{|\beta|\leq l}\frac{1}{\beta!} f_{\alpha+\beta}(\Phi(Y'))(\Phi(X')-\Phi(Y'))^\beta \nonumber\\[6pt] &&\quad=\sum_{(j_1,...,j_{l})\in\{1,...,n\}^{l}}\Bigl\{\int_{\Delta_l} \Bigl[f_{\alpha+e_{j_1}+\cdots+e_{j_l}}(\Phi(Y'+t_{l}(X'-Y'))) -f_{\alpha+e_{j_1}+\cdots+e_{j_l}}(\Phi(Y'))\Bigr] \nonumber\\[6pt] &&\qquad\qquad\qquad\qquad\quad \times\prod_{k=1}^{l}\,\nabla\Phi_{j_k}(Y'+t_{k}(X'-Y'))\cdot(X'-Y') \,dt_{l}\cdots dt_1\Bigr\}. \end{eqnarray} \noindent Clearly, (\ref{RRR=id}) follows from (\ref{reminder}) and (\ref{FF=id}) by taking $l:=r$. In order to justify (\ref{FF=id}) we proceed by induction on $l$. Concretely, when $l=1$ we may write, based on (\ref{B-CC}) and the Fundamental Theorem of Calculus, \begin{eqnarray}\label{RRR=0} && f_{\alpha}(\Phi(X'))-f_{\alpha}(\Phi(Y')) -\sum_{j=1}^n f_{\alpha+e_j}(\Phi(Y'))(\Phi_j(X')-\Phi_j(Y')) \nonumber\\[6pt] &&\quad =\int_0^1\frac{d}{dt}\Bigl[f_\alpha(\Phi(Y'+t(X'-Y')))\Bigr]\,dt \nonumber\\[6pt] && \qquad\qquad\quad-\sum_{j=1}^n f_{\alpha+e_j}(\Phi(Y')) \int_0^1\frac{d}{dt}\Bigl[\Phi_j(Y'+t(X'-Y'))\Bigr]\,dt \nonumber\\[6pt] &&\quad = \sum_{j=1}^n\Bigl\{\int_0^1\Bigl[f_{\alpha+e_j}(\Phi(Y'+t(X'-Y'))) -f_{\alpha+e_j}(\Phi(Y'))\Bigr] \nonumber\\[6pt] &&\qquad\qquad\times\nabla\Phi_j(Y'+t(X'-Y'))\cdot(X'-Y')\,dt\Bigr\}, \end{eqnarray} \noindent as wanted. To prove the version of (\ref{FF=id}) when $l$ is replaced by $l+1$ we split the sum in the left-hand side of (\ref{FF=id}), written for $l+1$ in place of $l$, according to whether $|\beta|\leq l$ or $|\beta|=l+1$ and denote the expressions created in this fashion by $S_1$ and $S_2$, respectively. 
Next, based on (\ref{B-CC}) and the Fundamental Theorem of Calculus, we write \begin{eqnarray}\label{FTC} && f_{\alpha+e_{j_1}+\cdots+e_{j_l}}(\Phi(Y'+t_{l}(X'-Y'))) -f_{\alpha+e_{j_1}+\cdots+e_{j_l}}(\Phi(Y')) \\[6pt] && =\sum_{i=1}^n\int_0^{t_{l}}f_{\alpha+e_{j_1}+\cdots+e_{j_l}+e_i} (\Phi(Y'+t_{l+1}(X'-Y'))) \nabla\Phi_{i}(Y'+t_{l+1}(X'-Y'))\cdot(X'-Y')\,dt_{l+1} \nonumber \end{eqnarray} \noindent and use the induction hypothesis to conclude that \begin{eqnarray}\label{FF=id-2} && S_1=\sum_{(j_1,...,j_{l+1})\in\{1,...,n\}^{l+1}} \Bigl\{\int_{\Delta_{l+1}} f_{\alpha+e_{j_1}+\cdots+e_{j_{l+1}}}(\Phi(Y'+t_{l+1}(X'-Y'))) \nonumber\\[6pt] &&\qquad\qquad \times\prod_{k=1}^{l+1}\,\nabla\Phi_{j_k}(Y'+t_{k}(X'-Y'))\cdot(X'-Y') \,dt_{l+1}\cdots dt_1\Bigr\}. \end{eqnarray} \noindent Thus, if \begin{equation}\label{F=Psi} F_j(t):=\Phi_j(Y'+t(X'-Y'))-\Phi_j(Y'),\qquad 1\leq j\leq n, \end{equation} \noindent we may express $S_1$ in the form \begin{eqnarray}\label{another} && S_1=\sum_{(j_1,...,j_{l+1})\in\{1,...,n\}^{l+1}}\Bigl\{\int_{\Delta_{l+1}} \Bigl[f_{\alpha+e_{j_1}+\cdots+e_{j_{l+1}}}(\Phi(Y'+t_{l+1}(X'-Y'))) -f_{\alpha+e_{j_1}+\cdots+e_{j_{l+1}}}(\Phi(Y'))\Bigr] \nonumber\\[6pt] &&\qquad\qquad\qquad\qquad \times\prod_{k=1}^{l+1}\,\nabla\Phi_{j_k}(Y'+t_{k}(X'-Y'))\cdot(X'-Y') \,dt_{l+1}\cdots dt_1\Bigr\} \\[6pt] &&\qquad\quad +\sum_{(j_1,...,j_{l+1})\in\{1,...,n\}^{l+1}} f_{\alpha+e_{j_1}+\cdots+e_{j_{l+1}}}(\Phi(Y'))\int_{\Delta_{l+1}}\, \prod_{k=1}^{l+1}\,F'_{j_k}(t_{k})\,dt_{l+1}\cdots dt_1. \nonumber \end{eqnarray} Note that the first double sum above corresponds precisely to the expression in the right-hand side of (\ref{FF=id}) written with $l$ replaced by $l+1$. 
Our proof of (\ref{FF=id}) by induction is therefore complete as soon as we show that for each multi-index $\beta$ of length $l+1$, \begin{equation}\label{S2} \sum_{{(j_1,...,j_{l+1})\in\{1,...,n\}^{l+1}}\atop {e_{j_1}+\cdots+e_{j_{l+1}}=\beta}} \int_{\Delta_{l+1}}\, \prod_{k=1}^{l+1}\,F'_{j_k}(t_{k})\,dt_{l+1}\cdots dt_1 =\frac{1}{\beta!}(\Phi(X')-\Phi(Y'))^\beta. \end{equation} \noindent In turn, this is going to be a consequence of a general identity, to the effect that \begin{equation}\label{Fprime} \sum_{{(j_1,...,j_{l})\in\{1,...,n\}^{l}} \atop{e_{j_1}+\cdots+e_{j_{l}}=\beta}} \int_0^{t_0}\int_0^{t_1}\cdots\int_0^{t_{l-1}} \prod_{k=1}^{l}\,F'_{j_k}(t_{k})\,dt_{l}\cdots dt_1 =\frac{1}{\beta!}F(t_0)^\beta, \end{equation} \noindent for any Lipschitz function $F=(F_1,...,F_n):[0,1]\to{\mathbb{C}}^n$ with $F(0)=0$, any point $t_0\in[0,1]$, any $l\in{\mathbb{N}}$ and any multi-index $\beta$ of length $l$. Of course, the case most relevant for our purposes is when the $F_j$'s are as in (\ref{F=Psi}), $t_0=1$ and when $l$ is replaced by $l+1$, but the above formulation is best suited for proving (\ref{Fprime}) via induction on $l$. Indeed, the case $l=1$ is immediate from the Fundamental Theorem of Calculus and to pass from $l$ to $l+1$ it suffices to show that the two sides of (\ref{Fprime}) have the same derivative with respect to $t_0$. The important observation in carrying out the latter step is that the derivative of the left-hand side of (\ref{Fprime}) with respect to $t_0$ is an expression to which the current induction hypothesis is readily applicable. This justifies (\ref{Fprime}) and completes the proof of (\ref{RRR=id}). 
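To illustrate (\ref{Fprime}), consider the scalar case $n=1$, in which $\beta=(l)$ and the sum consists of a single term; the identity then reads \begin{equation*} \int_0^{t_0}\!\int_0^{t_1}\!\cdots\int_0^{t_{l-1}} F'(t_1)\cdots F'(t_{l})\,dt_{l}\cdots dt_1=\frac{F(t_0)^{l}}{l!}. \end{equation*} \noindent For $l=2$, say, the inner integral equals $F(t_1)$ (as $F(0)=0$), and $\int_0^{t_0}F'(t_1)F(t_1)\,dt_1=\tfrac{1}{2}F(t_0)^2$, in agreement with the right-hand side. 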
$\Box$ \vskip 0.08in \begin{corollary}\label{Cor-R} Under the assumptions of Lemma~\ref{Lemma-R}, for each multi-index $\alpha$ of length $\leq m-2$ the following estimate holds \begin{equation}\label{RRR=est} \Bigl(\int_0^\infty \frac{r_\alpha(t)^p}{t^{p(m-1+s-|\alpha|)+n-1}}\,dt\Bigr)^{1/p} \leq C\sum_{|\gamma|=m-1}\|f_\gamma\|_{B^s_p(\partial\Omega)}, \end{equation} \noindent where the constant $C$ depends only on $n$, $p$, $s$ and $\|\nabla\varphi\|_{L_\infty({\mathbb{R}}^{n-1})}$. \end{corollary} \noindent{\bf Proof.} The identity (\ref{RRR=id}) gives \begin{eqnarray}\label{pw-est-R} && |R_\alpha(\Phi(X'),\Phi(Y'))| \\[6pt] &&\qquad\qquad \leq C|X'-Y'|^{m-1-|\alpha|} \sum_{|\gamma|=m-1}\int_0^1 |f_{\gamma}(\Phi(Y'+\tau(X'-Y')))-f_{\gamma}(\Phi(Y'))|\,d\tau \nonumber \end{eqnarray} \noindent for each $X',Y'\in{\mathbb{R}}^{n-1}$, where the constant $C$ depends only on $n$ and $\|\nabla\Phi\|_{L_\infty}$ which, in turn, is controlled in terms of $\|\nabla \varphi\|_{L_\infty}$. Given an arbitrary $t>0$ we now integrate the $p$-th power of both sides in (\ref{pw-est-R}) for $X',Y'\in{\mathbb{R}}^{n-1}$ subject to $|\Phi(X')-\Phi(Y')|<t$. Using Fubini's Theorem and making the change of variables $Z':=Y'+\tau(X'-Y')$ we obtain, after noticing that $|Z'-Y'|\leq \tau t$, \begin{eqnarray}\label{r-vs-om} r_\alpha(t)^p & \leq & C\,t^{p(m-1-|\alpha|)}\sum_{|\gamma|=m-1} \int\!\!\!\!\!\!\int\limits_{{X',Y'\in{\mathbb{R}}^{n-1}}\atop{|X'-Y'|< c\,t}} \int_0^1|f_{\gamma}(\Phi(Y'+\tau(X'-Y')))-f_{\gamma}(\Phi(Y'))|^p \,d\tau\,dX'dY' \nonumber\\[6pt] &\leq & C\,t^{p(m-1-|\alpha|)}\sum_{|\gamma|=m-1}\int_0^1 \int\!\!\!\!\!\!\int\limits_{{Z',Y'\in{\mathbb{R}}^{n-1}}\atop{|Z'-Y'|<c\,\tau t}} |f_{\gamma}(\Phi(Z'))-f_{\gamma}(\Phi(Y'))|^p\,dZ'dY'd\tau \nonumber\\[6pt] &\leq & C\,t^{p(m-1+s-|\alpha|)+n-1} \sum_{|\gamma|=m-1}\int_0^1 \frac{\omega_p(f_{\gamma},\,c\,\tau t)^p}{\tau^{n-1}t^{ps+n-1}}\,d\tau. 
\end{eqnarray} \noindent Consequently, \begin{eqnarray}\label{r-om-bis} \int_0^\infty \frac{r_\alpha(t)^p}{t^{p(m-1+s-|\alpha|)+n-1}}\,dt & \leq & C\sum_{|\gamma|=m-1}\int_0^\infty \int_0^1 \frac{\omega_p(f_{\gamma},\,c\,\tau t)^p}{\tau^{n-1}t^{ps+n-1}}\,d\tau dt \nonumber\\[6pt] & \leq & C\sum_{|\gamma|=m-1}\Bigl(\int_0^\infty \frac{\omega_p(f_{\gamma},r)^p}{r^{ps+n-1}}\,dr\Bigr) \Bigl(\int_0^1\frac{1}{\tau^{1-sp}}\,d\tau\Bigr) \nonumber\\[6pt] & \leq & C\sum_{|\gamma|=m-1}\int_0^\infty \frac{\omega_p(f_{\gamma},t)^p}{t^{ps+n-1}}\,dt, \end{eqnarray} \noindent after making the change of variables $r:=c\,\tau t$ in the second step. With this in hand, the estimate (\ref{RRR=est}) follows by virtue of (\ref{Eqv}). $\Box$ \vskip 0.08in After this preamble, we are in a position to present the \vskip 0.08in \noindent{\bf Proof of Proposition~\ref{trace-2}.} We divide the proof into a series of steps, starting with \vskip 0.08in \noindent{\it Step I: The well-definedness of the trace.} Let ${\mathcal U}$ be an arbitrary function in $W^{m,a}_p(\Omega)$ and set \begin{equation}\label{ff-aa} f_\alpha:=i^{|\alpha|}\,{\rm Tr}\,[D^\alpha\,{\mathcal U}],\qquad \forall\,\alpha\,:\,|\alpha|\leq m-1. \end{equation} \noindent It follows from Lemma~\ref{trace-1} that these trace functions are well-defined and, in fact, \begin{equation}\label{falpha} \sum_{|\alpha|\leq m-1}\|f_\alpha\|_{B^s_p(\partial\Omega)} \leq C\|\,{\mathcal U}\|_{W^{m,a}_p(\Omega)}. \end{equation} In order to prove that $\dot{f}:=\{f_\alpha\}_{|\alpha|\leq m-1}$ belongs to $\dot{B}^{m-1+s}_p(\partial\Omega)$, let $R_\alpha(X,Y)$ and $r_\alpha(t)$ be as in (\ref{reminder})-(\ref{rem-Rr}). Our goal is to show that for every multi-index $\alpha$ with $|\alpha|\leq m-1$, \begin{equation}\label{B-alpha} \Bigl(\int_0^\infty \frac{r_\alpha(t)^p}{t^{p(m-1+s-|\alpha|)+n-1}}\,dt\Bigr)^{1/p} \leq C\|\,{\mathcal U}\|_{W^{m,a}_p(\Omega)}. 
\end{equation} \noindent To this end, we first observe that if $|\alpha|=m-1$ then the expression in the left-hand side of (\ref{B-alpha}) is majorized by $C\Bigl(\int_0^\infty\omega_p(f_\alpha,t)^p/t^{ps+n-1}\,dt\Bigr)^{1/p}$ which, by (\ref{Eqv}) and (\ref{falpha}), is indeed $\leq C\|\,{\mathcal U}\|_{W^{m,a}_p(\Omega)}$. To treat the case when $|\alpha|<m-1$ we assume that $\Omega$ is locally represented as $\{X:\,X_n>\varphi(X')\}$ for some Lipschitz function $\varphi:{\mathbb{R}}^{n-1}\to{\mathbb{R}}$ and, as before, set $\Phi(X'):=(X',\varphi(X'))$, $X'\in{\mathbb{R}}^{n-1}$. Then (\ref{B-CC}) holds, thanks to (\ref{ff-aa}), for every multi-index $\alpha$ of length $\leq m-2$. Consequently, Corollary~\ref{Cor-R} applies and, in concert with (\ref{falpha}), yields (\ref{B-alpha}). This proves that the operator (\ref{TR-11})-(\ref{Tr-DDD}) is well-defined and bounded. \vskip 0.08in \noindent{\it Step II: The extension operator.} We introduce a co-boundary operator ${\mathcal E}$ which acts on $\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}\in \dot{B}^{m-1+s}_p(\partial\Omega)$ according to \begin{equation}\label{def-Ee} ({\mathcal E}\dot{f})(X) =\int_{\partial\Omega}{\mathcal K}(X,Y)\,P(X,Y)\,d\sigma_Y, \qquad X\in\Omega, \end{equation} \noindent where $P(X,Y)$ is the polynomial associated with $\dot{f}$ as in (\ref{PPP}). The integral kernel ${\mathcal K}$ is assumed to satisfy \begin{eqnarray}\label{ker-prp} && \int_{\partial\Omega}{\mathcal K}(X,Y)\,d\sigma_Y=1 \qquad\mbox{for all }\,X\in\Omega, \\[6pt] && |D^\alpha_X{\mathcal K}(X,Y)|\leq c_\alpha\,\rho(X)^{1-n-|\alpha|}, \quad\forall\,X\in\Omega,\,\,\forall\,Y\in\partial\Omega, \label{more-Kp} \end{eqnarray} \noindent where $\alpha$ is an arbitrary multi-index, and \begin{equation}\label{last-Kp} {\mathcal K}(X,Y)=0\quad\mbox{if }\,\,|X-Y|\geq 2\rho(X). 
\end{equation} \noindent One can take, for instance, the kernel \begin{equation}\label{K-def} {\mathcal K}(X,Y):=\eta\left(\frac{X-Y}{\varkappa\rho_{\rm reg}(X)}\right) \left(\int_{\partial\Omega}\eta\left(\frac{X-Z}{\varkappa\rho_{\rm reg}(X)} \right)d\sigma_Z\right)^{-1}, \end{equation} \noindent where $\eta\in C^\infty_0(B_2)$, $\eta=1$ on $B_1$, $\eta\geq 0$ and $\varkappa$ is a positive constant depending on the Lipschitz constant of $\partial\Omega$. Here, as before, $\rho_{\rm reg}(X)$ stands for the regularized distance from $X$ to $\partial\Omega$. For each $X\in\Omega$ and $Z\in\partial\Omega$ and for every multi-index $\gamma$ with $|\gamma|=m$ we then obtain \begin{equation}\label{N0} D^\gamma{\mathcal E}\dot{f}(X) =\sum_{{\alpha+\beta=\gamma}\atop{|\alpha|\geq 1}} \frac{\gamma!}{\alpha!\beta!}\int_{\partial\Omega} D^\alpha_X{\mathcal K}(X,Y)\,(P_\beta(X,Y)-P_\beta (X,Z))\,d\sigma_Y. \end{equation} \noindent If for a fixed $\mu>1$ and for each $X\in\Omega$ and $t>0$ we set \begin{equation}\label{def-Gamma} \Gamma_t:=\{Y\in\partial\Omega:\,|X-Y|<\mu t\} \end{equation} \noindent we may then estimate \begin{eqnarray}\label{N1} |D^\gamma{\mathcal E}\dot{f}(X)|^p &\leq & C \sum_{{\alpha+\beta=\gamma}\atop{|\alpha|\geq 1}} \rho(X)^{-p|\alpha|}{\int{\mkern-19mu}-}_{\Gamma_{\rho(X)}} |P_\beta(X,Y)-P_\beta(X,Z)|^p\,d\sigma_Y \nonumber\\[6pt] & \leq & C\sum_{{\alpha+\beta=\gamma}\atop{|\alpha|\geq 1}} \sum_{|\beta|+|\delta|\leq m-1}\rho(X)^{-p|\alpha|} {\int{\mkern-19mu}-}_{\Gamma_{\rho(X)}} |R_{\delta+\beta}(Y,Z)|^p\,|X-Y|^{p|\delta|}\,d\sigma_Y, \nonumber\\[6pt] & \leq & C\sum_{|\tau|\leq m-1}\rho(X)^{p(|\tau|-m)} {\int{\mkern-19mu}-}_{\Gamma_{\rho(X)}}|R_{\tau}(Y,Z)|^p\,d\sigma_Y, \end{eqnarray} \noindent where we have used H\"older's inequality and (\ref{PR}). 
Averaging the extreme terms in (\ref{N1}) for $Z$ in $\Gamma_{\rho(X)}$, we arrive at \begin{equation}\label{N2} |D^\gamma{\mathcal E}\dot{f}(X)|^p \leq C \sum_{|\tau|\leq m-1}\rho(X)^{p(|\tau|-m)-2(n-1)} \int_{\Gamma_{\rho(X)}}\!\int_{\Gamma_{\rho(X)}} |R_\tau(Y,Z)|^p\,d\sigma_Yd\sigma_Z. \end{equation} Consider now a Whitney decomposition of $\Omega$ into a family of dyadic cubes, $\{Q_i\}_{i\in {\mathcal I}}$. In particular, $l_i:={\rm diam}\,Q_i\sim{\rm dist}\,(Q_i,\partial\Omega)$ uniformly for $i\in{\mathcal I}$. Thus, if $X\in Q_{i}$ for some ${i}\in I_{j}:=\{i\in{\mathcal I}:\,l_i=2^{-j}\}$, $j\in{\mathbb{Z}}$, the estimate (\ref{N2}) yields \begin{equation}\label{EST-E} |D^\gamma{\mathcal E}\dot{f}(X)| \leq C\sum_{|\tau|\leq m-1}2^{-j(|\tau|-m)} \Bigl(2^{2j(n-1)}\int\!\!\!\!\!\!\!\!\!\!\!\!\int \limits_{{Y,Z\in\partial\Omega\cap\,\varkappa\,Q_{i}}\atop {|Y-Z|<\varkappa\,2^{-j}}}|R_\tau(Y,Z)|^p\,d\sigma_Yd\sigma_Z\Bigr)^{1/p} \end{equation} \noindent for some $\varkappa=\varkappa(\partial\Omega)>1$. In fact, by choosing the constant $\mu$ in (\ref{def-Gamma}) sufficiently close to $1$, matters can be arranged so that the family $\{\varkappa Q_i\}_{i\in{\mathcal I}}$ has finite overlap. 
Keeping this in mind and availing ourselves of the fact that $\rho(X)\sim l_i$ uniformly for $X\in Q_i$, $i\in{\mathcal I}$, for each multi-index $\gamma$ of length $m$ we may then estimate: \begin{eqnarray}\label{bigstep} && \int_{\Omega}|D^\gamma{\mathcal E}\dot{f}(X)|^p\rho(X)^{p(1-s)-1}\,dX \nonumber\\[6pt] && \quad \leq C\sum_{j\in{\mathbb{Z}}}\sum_{i\in I_j} 2^{-jp(1-s)+j} \int_{Q_i}|D^\gamma{\mathcal E}\dot{f}(X)|^p\,dX \nonumber\\[6pt] && \quad \leq C\sum_{j\in{\mathbb{Z}}}\sum_{i\in I_j}\sum_{|\tau|\leq m-1} 2^{jp(m-1+s-|\tau|)+j(n-1)} \int\!\!\!\!\!\!\!\!\!\!\!\!\int \limits_{{Y,Z\in\partial\Omega\cap\,\varkappa\,Q_i}\atop {|Y-Z|<\varkappa\,2^{-j}}}|R_\tau(Y,Z)|^p\,d\sigma_Yd\sigma_Z \nonumber\\[6pt] && \quad \leq C\sum_{j\in{\mathbb{Z}}}\sum_{|\tau|\leq m-1} 2^{jp(m-1+s-|\tau|)+j(n-1)} \int\!\!\!\!\!\!\!\!\!\!\!\!\int \limits_{{Y,Z\in\partial\Omega}\atop {|Y-Z|<\varkappa\,2^{-j}}}|R_\tau(Y,Z)|^p\,d\sigma_Yd\sigma_Z \nonumber\\[6pt] && \quad \leq C\sum_{|\tau|\leq m-1} \int_0^\infty\frac{r_\tau(t)^p}{t^{p(m-1+s-|\tau|)+n-1}}\,dt \nonumber\\[6pt] && \quad \leq C\|\dot{f}\|^p_{\dot{B}^{m-1+s}_p(\partial\Omega)}, \end{eqnarray} \noindent where in the last step we have used (\ref{Bes-Nr}). This proves that the operator (\ref{Ext-222}) is well-defined and bounded. \vskip 0.08in \noindent{\it Step III: The right-invertibility property.} We shall now show that the operator (\ref{def-Ee}) is a right-inverse for the trace operator (\ref{TR-11}), i.e., whenever $\dot{f}=\{f_\gamma\}_{|\gamma|\leq m-1}\in \dot{B}^{m-1+s}_p(\partial\Omega)$, there holds \begin{equation}\label{N3a} f_\gamma=i^{|\gamma|}\,{\rm Tr}[D^\gamma{\mathcal E}\dot{f}] \end{equation} \noindent for every multi-index $\gamma$ of length $\leq m-1$. 
To this end, for $|\gamma|\leq m-1$ we write \begin{equation}\label{N4} D^\gamma{\mathcal E}\dot{f}(X)-{\mathcal E}_\gamma\dot{f}(X) =\sum_{{\alpha+\beta=\gamma}\atop{|\alpha|\geq 1}} \frac{\gamma!}{\alpha!\beta!}\int_{\partial\Omega} D^\alpha_X {\mathcal K}(X,Y)(P_\beta(X,Y)-P_\beta(X,Z))\,d\sigma_Y, \end{equation} \noindent where \begin{equation}\label{N3b} {\mathcal E}_\gamma\dot{f}(X) :=\int_{\partial\Omega}{\mathcal K}(X,Y)\, P_\gamma(X,Y)\,d\sigma_Y, \qquad X\in\Omega. \end{equation} \noindent Estimating the right-hand side in (\ref{N4}) in the same way as we did with the right-hand side of (\ref{N0}), we obtain \begin{eqnarray}\label{Dgamma-E} \int_{\Omega} |D^\gamma{\mathcal E}\dot{f}(X)-{\mathcal E}_\gamma\dot{f}(X)|^p \rho(X)^{-ps-1}\,dX &\leq & C\sum_{|\tau|\leq m-1}\int_0^\infty\frac{r_\tau(t)^p} {t^{p(|\gamma|+s-|\tau|)+n-1}}\,dt \nonumber\\[6pt] & \leq & C\,\|\dot{f}\|^p_{\dot{B}^{m-1+s}_p(\partial\Omega)}. \end{eqnarray} \noindent In a similar fashion, we check that \begin{eqnarray}\label{sim-fash} && \int_{\Omega}|\nabla (D^\gamma{\mathcal E}\dot{f}(X) -{\mathcal E}_\gamma\dot{f}(X)) |^p \rho(X)^{p-ps-1}\,dX \nonumber\\[6pt] &&\qquad\qquad \leq C\sum_{|\tau|\leq m-1}\int_0^\infty\frac{r_\tau(t)^p} {t^{p(|\gamma|+s-|\tau|)+n-1}}\,dt \leq C\,\|\dot{f}\|^p_{\dot{B}^{m-1+s}_p(\partial\Omega)}. \end{eqnarray} \noindent The last two inequalities imply $D^\gamma{\mathcal E}\dot{f}-{\mathcal E}_\gamma\dot{f}\in V^{1,a}_p(\Omega)$ and, therefore, \begin{equation}\label{N4a} {\rm Tr}\,(D^\gamma{\mathcal E}\dot{f}-{\mathcal E}_\gamma\dot{f})=0. \end{equation} Going further, let us set \begin{equation}\label{EgX} Eg(X):=\int_{\partial\Omega}{\mathcal K}(X,Y)\,g(Y)\,d\sigma_Y, \qquad X\in\Omega. \end{equation} \noindent A simpler version of the reasoning in Step~II yields that $E$ maps $B^s_p(\partial\Omega)$ boundedly into $W^{1,a}_p(\Omega)$. 
Also, a standard argument based on the Poisson kernel-like behavior of ${\mathcal K}(X,Y)$ shows that ${\rm Tr}\,Eg=g$ for each $g\in B^s_p(\partial\Omega)$. Based on the definitions (\ref{PPP}) and (\ref{N3b}) we have \begin{eqnarray}\label{E-E} && |{\mathcal E}_\gamma\dot{f}(X)-Ef_\gamma(X)|^p +\rho(X)^p|\nabla({\mathcal E}_\gamma\dot{f}(X)-Ef_\gamma(X))|^p \nonumber\\[6pt] && \qquad\qquad \leq C\sum_{{|\beta|\leq m-1-|\gamma|}\atop{|\beta|\geq 1}} \rho(X)^{p|\beta|}{\int{\mkern-19mu}-}_{\Gamma_{\rho(X)}} |f_{\gamma+\beta}(Y)|^p\,d\sigma_Y. \end{eqnarray} \noindent Consequently, for an arbitrary Whitney cube $Q_i$ we have \begin{eqnarray}\label{n1} && \int_{Q_i}|{\mathcal E}_\gamma\dot{f}(X)-Ef_\gamma(X)|^p\rho(X)^{-ps-1}\,dX +\int_{Q_i} |\nabla({\mathcal E}_\gamma\dot{f}(X)-Ef_\gamma(X))|^p\rho(X)^{p-ps-1}\,dX \nonumber\\[6pt] && \qquad\qquad\qquad\qquad\qquad\qquad \leq C\sum_{{|\beta|\leq m-1-|\gamma|}\atop{|\beta|\geq 1}} l_i^{p(|\beta|-s)}\int_{\partial\Omega\cap \varkappa Q_i} |f_{\gamma+\beta}(Y)|^p\,d\sigma_Y. \end{eqnarray} \noindent Summing over all Whitney cubes we find \begin{equation}\label{Sm-wb} \|{\mathcal E}_\gamma\dot{f}-Ef_\gamma\|_{V_p^{1,a}(\Omega)} \leq C\sum_{|\alpha|\leq m-1}\|f_\alpha\|_{L_p(\partial\Omega)} \end{equation} \noindent which implies \begin{equation}\label{N5} {\rm Tr}\,({\mathcal E}_\gamma\dot{f}-Ef_\gamma) =0. \end{equation} \noindent Finally, combining (\ref{N5}), (\ref{N4a}), and ${\rm Tr}\,Ef_\gamma=f_\gamma$, we arrive at (\ref{N3a}). \vskip 0.08in \noindent{\it Step IV: The kernel of the trace.} We now turn to the task of identifying the null-space of the trace operator (\ref{TR-11})-(\ref{Tr-DDD}). For each $k\in{\mathbb{N}}_0$ we denote by ${\mathcal P}_k$ the collection of all vector-valued, complex coefficient polynomials of degree $\leq k$ (and agree that ${\mathcal P}_k= 0$ whenever $k$ is a negative integer).
The claim we make at this stage is that the null-space of the operator \begin{equation}\label{TRW} W^{m,a}_p(\Omega)\ni {\mathcal W}\mapsto \Bigl\{{\rm Tr}\,[D^\gamma{\mathcal W}]\Bigr\}_{|\gamma|=m-1} \in B^s_p(\partial\Omega) \end{equation} \noindent is given by \begin{equation}\label{Null-tr} {\mathcal P}_{m-2}+V^{m,a}_p(\Omega). \end{equation} \noindent The fact that the null-space of the trace operator (\ref{TR-11})-(\ref{Tr-DDD}) is $V^{m,a}_p(\Omega)$ follows readily from this. That (\ref{Null-tr}) is included in the null-space of the operator (\ref{TRW}) is obvious. The opposite inclusion amounts to showing that if ${\mathcal W}\in W_p^{m,a}(\Omega)$ is such that ${\rm Tr}\,[D^\gamma{\mathcal W}]=0$ for all multi-indices $\gamma$ with $|\gamma|=m-1$, then there exists $P_{m-2}\in {\mathcal P}_{m-2}$ with the property that ${\mathcal W}-P_{m-2}\in V_p^{m,a}(\Omega)$. To this end, we note that the case $m=1$ is a consequence of (\ref{Hardy}) and consider next the case $m=2$, i.e., when \begin{equation}\label{WWT} {\mathcal W}\in W_p^{2,a}(\Omega),\qquad{\rm Tr}\,[\nabla{\mathcal W}]=0 \quad\mbox{on}\,\,\partial\Omega. \end{equation} \noindent Assume that $\{{\mathcal W}_j\}_{j\geq 1}$ is a sequence of vector-valued functions which are smooth in $\overline{\Omega}$ (even polynomials) and approximate ${\mathcal W}$ in $W_p^{2,a}(\Omega)$. In particular, \begin{equation}\label{w1} {\rm Tr}\,[\nabla{\mathcal W}_j]\to 0\,\,\,\mbox{ in }\,\,\,L_p(\partial\Omega) \,\,\,\mbox{ as }\,\,j\to\infty.
\end{equation} \noindent If in a neighborhood of a point on $\partial\Omega$ the domain $\Omega$ is given by $\{X:\,X_n>\varphi(X')\}$ for some Lipschitz function $\varphi$, the following chain rule holds for the gradient of the function $w_j:B'\ni X'\mapsto{\mathcal W}_j(X',\varphi(X'))$, where $B'$ is an $(n-1)$-dimensional ball: \begin{equation}\label{w2} \nabla w_j(X')= \Bigl(\nabla_{Y'}{\mathcal W}_j(Y',\varphi(X'))\Bigr)\Bigl|_{Y'= X'} +\Bigl(\frac{\partial}{\partial Y_n}{\mathcal W}_j(X',Y_n)\Bigr) \Bigl|_{Y_n=\varphi(X')}\,\nabla\varphi(X'). \end{equation} \noindent Since the sequence $\{w_j\}_{j\geq 1}$ is bounded in $L_p(B')$ and $\nabla w_j\to 0$ in $L_p(B')$, it follows that there exists a subsequence $\{j_i\}_i$ such that $w_{j_i}\to const$ in $L_p(B')$ (see Theorem~1.1.12/2 in {\bf\cite{Maz1}}). Hence, ${\rm Tr}\,{\mathcal W}=P_0=const$ on $\partial\Omega$. In view of ${\rm Tr}\,[{\mathcal W}-P_0]=0$ and ${\rm Tr}\,[\nabla{\mathcal W}]=0$, we may conclude that ${\mathcal W}-P_0\in V_p^{2,a}(\Omega)$ by Hardy's inequality. The general case follows in an inductive fashion, by reasoning as before with $D^\alpha{\mathcal W}$, $|\alpha|=m-2$, in place of ${\mathcal W}$. $\Box$ \vskip 0.08in We now present a short proof of (\ref{Bes-X}), based on Proposition~\ref{trace-2}. \begin{proposition}\label{B-EQ} Assume that $1<p<\infty$, $s\in(0,1)$ and $m\in{\mathbb{N}}$. Then \begin{equation}\label{eQ-11} \|\dot{f}\|_{\dot{B}^{m-1+s}_p(\partial\Omega)}\sim \sum_{|\alpha|\leq m-1} \|f_\alpha\|_{B^{s}_p(\partial\Omega)}, \end{equation} \noindent uniformly for $\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega)$. As a consequence, (\ref{Bes-X}) holds. \end{proposition} \noindent{\bf Proof.} The left-pointing inequality in (\ref{eQ-11}) is implicit in (\ref{RRR=est}).
As for the opposite one, let $\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega)$ and, with $a:=1-s-1/p$, consider ${\mathcal U}:={\mathcal E}(\dot{f})\in W^{m,a}_p(\Omega)$. Then Lemma~\ref{trace-1} implies that, for each multi-index $\alpha$ of length $\leq m-1$, the function $f_\alpha=i^{|\alpha|}\,{\rm Tr}\,[D^\alpha{\mathcal U}]$ belongs to $B^s_p(\partial\Omega)$, together with a natural accompanying norm estimate. This concludes the proof of (\ref{eQ-11}). Finally, the last claim in the proposition is a consequence of (\ref{eQ-11}), (\ref{dense}) and the fact that the operator (\ref{TR-11})-(\ref{Tr-DDD}) is onto. $\Box$ \vskip 0.08in We include one more equivalent characterization of the space $\dot{B}^{m-1+s}_p(\partial\Omega)$, in the spirit of work in {\bf\cite{AP}}, {\bf\cite{PV}}, {\bf\cite{Ve2}}. To state it, recall that $\{e_j\}_j$ is the canonical orthonormal basis in ${\mathbb{R}}^n$. \begin{proposition}\label{CC-Aray} Assume that $1<p<\infty$, $s\in(0,1)$ and $m\in{\mathbb{N}}$. Then \begin{equation}\label{eQ0} \{f_\alpha\}_{|\alpha|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega) \Longleftrightarrow \left\{ \begin{array}{l} f_\alpha\in B^{s}_p(\partial\Omega),\quad \forall\,\alpha\,:\,|\alpha|\leq m-1 \\[10pt] \qquad\qquad\qquad \mbox{ and } \\[10pt] (\nu_j\partial_k-\nu_k\partial_j)f_{\alpha}= \nu_jf_{\alpha+e_k}-\nu_kf_{\alpha+e_j} \\[6pt] \forall\,\alpha\,:\,|\alpha|\leq m-2,\quad\forall\,j,k\in\{1,...,n\}. \end{array} \right. \end{equation} \end{proposition} \noindent{\bf Proof.} The left-to-right implication is a consequence of (\ref{eQ-11}) and of the fact that (\ref{ff-aa}) holds for some ${\mathcal U}\in W^{m,a}_p(\Omega)$ (cf. Proposition~\ref{trace-2}). As for the opposite implication, we proceed as in the proof of Proposition~\ref{trace-2} and estimate (\ref{rem-Rr}) based on the identities (\ref{B-CC}) and the fact that $f_\alpha$ belongs to $B^{s}_p(\partial\Omega)$ for each $\alpha$ of length $\leq m-1$.
$\Box$ \vskip 0.08in We close this section with two remarks on the nature of the space $\dot{B}^{m-1+s}_p(\partial\Omega)$. First, we claim that the assignment \begin{equation}\label{ass} \dot{B}^{m-1+s}_p(\partial\Omega)\ni\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1} \mapsto \Bigl\{i^k\sum_{|\alpha|=k}\frac{k!}{\alpha!}\,\nu^\alpha\,f_\alpha\Bigr\} _{0\leq k\leq m-1}\in L_p(\partial\Omega) \end{equation} \noindent is one-to-one. This is readily justified with the help of the identity \begin{equation}\label{m-xxx} D^\alpha=i^{-|\alpha|}\,\nu^\alpha\, \frac{\partial^{|\alpha|}}{\partial\nu^{|\alpha|}} +\sum_{|\beta|=|\alpha|-1}\sum_{j,k=1}^n p_{\alpha,\beta,j,k}(\nu) \frac{\partial}{\partial\tau_{jk}}D^\beta \end{equation} \noindent where $\partial/\partial\tau_{jk}:=\nu_j\partial/\partial x_k -\nu_k\partial/\partial x_j$ and the $p_{\alpha,\beta,j,k}$'s are polynomial functions. Indeed, let $\dot{f}\in \dot{B}^{m-1+s}_p(\partial\Omega)$ be mapped to zero by the assignment (\ref{ass}) and consider ${\mathcal U}:={\mathcal E}(\dot{f})\in W^{m,a}_p(\Omega)$. Then $f_\alpha=i^{|\alpha|}\,{\rm Tr}\,\,[D^\alpha\,{\mathcal U}]$ on $\partial\Omega$ for each $\alpha$ with $|\alpha|\leq m-1$ and, granted the current hypotheses, $\partial^k{\mathcal U}/\partial\nu^k=0$ for $k=0,1,...,m-1$. Consequently, (\ref{m-xxx}) and induction on $|\alpha|$ yield that ${\rm Tr}\,\,[D^\alpha\,{\mathcal U}]=0$ on $\partial\Omega$ for each $\alpha$ with $|\alpha|\leq m-1$. Thus, $f_\alpha=0$ for each $\alpha$ with $|\alpha|\leq m-1$, as desired. 
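As a quick sanity check (this verification is not part of the original argument), note that for $|\alpha|=1$, say $\alpha=e_j$, the identity (\ref{m-xxx}) reduces to the elementary decomposition of $\partial/\partial x_j$ into its normal and tangential components on $\partial\Omega$, namely \begin{equation*} \frac{\partial}{\partial x_j} =\nu_j\,\frac{\partial}{\partial\nu} -\sum_{k=1}^n\nu_k\,\frac{\partial}{\partial\tau_{jk}}, \end{equation*} \noindent which follows directly from $\partial/\partial\nu=\sum_{k}\nu_k\,\partial/\partial x_k$, the definition $\partial/\partial\tau_{jk}=\nu_j\,\partial/\partial x_k-\nu_k\,\partial/\partial x_j$, and $\sum_k\nu_k^2=1$.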
The elementary identity (\ref{m-xxx}) can be proved by writing \begin{eqnarray}\label{m-xxx2} i^{|\alpha|}\,D^\alpha & = & \prod_{j=1}^n\Bigl(\frac{\partial}{\partial x_j} \Bigr)^{\alpha_j} \\[6pt] & = & \prod_{j=1}^n\Bigl[\sum_{k=1}^n\xi_k\Bigl(\xi_k \frac{\partial}{\partial x_j}-\xi_j\frac{\partial}{\partial x_k}\Bigr) +\sum_{k=1}^n\xi_j\xi_k\frac{\partial}{\partial x_k}\Bigr]^{\alpha_j} \Bigl|_{\xi=\nu} \nonumber\\[6pt] & = & \prod_{j=1}^n\Bigl[\sum_{l=0}^{\alpha_j}\frac{\alpha_j!}{l!(\alpha_j-l)!} \Bigl(\sum_{k=1}^n\xi_k\Bigl(\xi_k \frac{\partial}{\partial x_j}-\xi_j\frac{\partial}{\partial x_k}\Bigr)\Bigr) ^{\alpha_j-l}\nu_j^{l}\frac{\partial^l}{\partial\nu^l}\Bigr]\Bigl|_{\xi=\nu} \nonumber\\[6pt] & = & \prod_{j=1}^n\Bigl[ \nu_j^{\alpha_j}\frac{\partial^{\alpha_j}}{\partial\nu^{\alpha_j}}+ \sum_{l=0}^{\alpha_j-1}\frac{\alpha_j!}{l!(\alpha_j-l)!} \Bigl(\sum_{k=1}^n\xi_k\Bigl(\xi_k \frac{\partial}{\partial x_j}-\xi_j\frac{\partial}{\partial x_k}\Bigr)\Bigr) ^{\alpha_j-l}\nu_j^{l}\frac{\partial^l}{\partial\nu^l}\Bigr]\Bigl|_{\xi=\nu} \nonumber \end{eqnarray} \noindent and noticing that $\prod_{j=1}^n\nu_j^{\alpha_j}\partial^{\alpha_j}/\partial\nu^{\alpha_j} =\nu^\alpha\partial^{|\alpha|}/\partial\nu^{|\alpha|}$, whereas $(\xi_k\partial/\partial x_j-\xi_j\partial/\partial x_k)|_{\xi=\nu} =-\partial/\partial\tau_{jk}$. Our second remark concerns the image of the mapping (\ref{ass}) in the case when $\partial\Omega$ is sufficiently smooth. More precisely, assume that $\partial\Omega\in C^{m-1,1}$ and, for $0\leq k\leq m-1$, the space $B^{m-1-k+s}_p(\partial\Omega)$ is defined starting from $B^{m-1-k+s}_p({\mathbb{R}}^{n-1})$ and then transporting this space to $\partial\Omega$ via a smooth partition of unity argument and locally flattening the boundary (alternatively, $B^{m-1-k+s}_p(\partial\Omega)$ is the image of the trace operator acting from $B^{m-1-k+s+1/p}_p({\mathbb{R}}^{n})$). 
We claim that \begin{equation}\label{image} \partial\Omega\in C^{m-1,1}\Longrightarrow \mbox{the image of the mapping (\ref{ass}) is $\oplus_{k=0}^{m-1}B^{m-1-k+s}_p(\partial\Omega)$}. \end{equation} \noindent Indeed, granted that $\partial\Omega\in C^{m-1,1}$, it follows from (\ref{eQ0}) that $f_\alpha\in B^{m-1-|\alpha|+s}_p(\partial\Omega)$ for each $\alpha$ with $|\alpha|\leq m-1$ and, hence, $g_k:=i^k\sum_{|\alpha|=k}\frac{k!}{\alpha!}\,\nu^\alpha\,f_\alpha \in B^{m-1-k+s}_p(\partial\Omega)$ for each $k\in\{0,...,m-1\}$. Conversely, given a family $\{g_k\}_{0\leq k\leq m-1}\in\oplus_{k=0}^{m-1}B^{m-1-k+s}_p(\partial\Omega)$, we claim that there exists $\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega)$ such that $g_k=i^k\sum_{|\alpha|=k}\frac{k!}{\alpha!}\,\nu^\alpha\,f_\alpha$ for each $k\in\{0,...,m-1\}$. One way to see this is to start with ${\mathcal U}\in B^{m-1+s+1/p}_p(\Omega)$, a solution of $\Delta^m{\mathcal U}=0$ in $\Omega$, $\partial^k{\mathcal U}/{\partial\nu^k}=g_k \in B^{m-1-k+s}_p(\partial\Omega)$, $0\leq k\leq m-1$ (a system which satisfies the Shapiro-Lopatinskij condition), and then define the $f_\alpha$'s as in (\ref{ff-aa}). \subsection{Proof of the main result and further comments} Theorem~\ref{Theorem} is a particular case of the next theorem concerning the unique solvability of the Dirichlet problem in $W_p^{m,a}(\Omega)$. \begin{theorem}\label{Theorem1} Let all assumptions of Theorem~\ref{th1a} be satisfied. Also let ${\mathcal F}\in V_p^{-m,a}(\Omega)$. Then the Dirichlet problem \begin{equation}\label{m9} \left\{ \begin{array}{l} {\mathcal A}(X,D_X)\,{\mathcal U}={\mathcal F} \qquad\mbox{in}\,\,\Omega, \\[15pt] {\displaystyle{\frac{\partial^k{\mathcal U}}{\partial\nu^k}}} =g_k\quad\,\,\mbox{on}\,\,\partial\Omega,\,\,\,\,\,0\leq k\leq m-1, \end{array} \right. \end{equation} \noindent has a solution ${\mathcal U}\in W_p^{m,a}(\Omega)$ if and only if (\ref{data-B}) is satisfied.
In this latter case, the solution is unique and satisfies \begin{equation}\label{estUU} \|{\mathcal U}\|_{W_p^{m,a}(\Omega)} \leq C\sum_{|\alpha|\leq m-1}\|f_\alpha\|_{B^{s}_p(\partial\Omega)} +C\|{\mathcal F}\|_{V_p^{-m,a}(\Omega)}. \end{equation} \end{theorem} \noindent{\bf Proof.} It is clear from the definitions that the operator \begin{equation}\label{ADXW} {\mathcal A}(X,D_X):W_p^{m,a}(\Omega)\longrightarrow V_p^{-m,a}(\Omega) \end{equation} \noindent is well-defined and bounded. Thus, granted that we seek solutions for (\ref{m9}) in the space $W_p^{m,a}(\Omega)$, the membership of ${\mathcal F}$ to $V_p^{-m,a}(\Omega)$, as well as the fact that the $g_k$'s satisfy (\ref{data-B}), are necessary conditions for the solvability of (\ref{m9}). Conversely, let $\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega)$ be such that (\ref{data-B}) holds and, with ${\mathcal E}$ denoting the extension operator from Proposition~\ref{trace-2}, seek a solution for (\ref{m9}) in the form ${\mathcal U}={\mathcal E}(\dot{f})+{\mathcal W}$, where ${\mathcal W}\in V^{m,a}_p(\Omega)$ solves \begin{equation}\label{m10} \left\{ \begin{array}{l} {\mathcal A}(X,D_X){\mathcal W} ={\mathcal F}-{\mathcal A}(X,D_X)({\mathcal E}(\dot{f})) \qquad\mbox{in}\,\,\Omega, \\[15pt] {\rm Tr}\,\,[D^\gamma\,{\mathcal W}]=0\quad\mbox{on}\,\,\partial\Omega, \,\,\,\forall\,\gamma\,:\,|\gamma|\leq m-1. \end{array} \right. \end{equation} \noindent Since the boundary conditions in (\ref{m10}) are automatically satisfied if ${\mathcal W}\in V_p^{m,a}(\Omega)$, the solvability of (\ref{m10}) is a direct consequence of Theorem~\ref{th1a}. As for uniqueness, assume that ${\mathcal U}\in W_p^{m,a}(\Omega)$ solves (\ref{m9}) with ${\mathcal F}=0$ and $g_k=0$, $0\leq k\leq m-1$. From the fact that (\ref{ass}) is one-to-one, we infer that ${\rm Tr}\,\,[D^\gamma\,{\mathcal U}]=0$ on $\partial\Omega$ for all $\gamma$ with $|\gamma|\leq m-1$.
Then, by Proposition~\ref{trace-2}, ${\mathcal U}\in V_p^{m,a}(\Omega)$ is a null-solution of ${\mathcal A}(X,D_X)$. In turn, Theorem~\ref{th1a} gives that ${\mathcal U}=0$, proving uniqueness for (\ref{m9}). Finally, (\ref{estUU}) is a consequence of the results in \S{6.4}. $\Box$ \vskip 0.08in We conclude this section with a couple of comments, the first of which regards the effect of the presence of lower order terms. More specifically, assume that \begin{equation}\label{E444-bis} {\mathcal A}(X,D_X)\,{\mathcal U} :=\sum_{0\leq |\alpha|,|\beta|\leq m}D^\alpha({\mathcal A}_{\alpha\beta}(X) \,D^\beta{\mathcal U}),\qquad X\in\Omega, \end{equation} \noindent where the top part of ${\mathcal A}(X,D_X)$ satisfies the hypotheses made in Theorem~\ref{Theorem} and the lower order terms are bounded. Then the Dirichlet problem (\ref{m9}) is Fredholm solvable, of index zero, in the sense that a solution ${\mathcal U}\in W_p^{m,a}(\Omega)$ exists if and only if the data ${\mathcal F}$, $\{g_k\}_{0\leq k\leq m-1}$ satisfy finitely many linear conditions, whose number matches the dimension of the space of null-solutions for (\ref{m9}). Furthermore, the estimate \begin{equation}\label{estUU-bis} \|{\mathcal U}\|_{W_p^{m,a}(\Omega)} \leq C\, \Bigl(\, \|{\mathcal F}\|_{V_p^{-m,a}(\Omega)} +\sum_{|\alpha|\leq m-1}\|f_\alpha\|_{B^{s}_p(\partial\Omega)}+\|{\mathcal U}\|_{L_p(\Omega)}\Bigr) \end{equation} \noindent holds for any solution ${\mathcal U}\in W_p^{m,a}(\Omega)$ of (\ref{m9}). Indeed, the operator \begin{equation}\label{cal-AAA} {\mathcal A}:V_p^{m,a}(\Omega)\longrightarrow V_p^{-m,a}(\Omega) \end{equation} \noindent is Fredholm with index zero, as can be seen by decomposing ${\mathcal A}=\mathaccent"0017 {\mathcal A}+({\mathcal A}-\mathaccent"0017 {\mathcal A})$ where $\mathaccent"0017 {\mathcal A}:=\sum_{|\alpha|=|\beta|=m} D^\alpha {\mathcal A}_{\alpha\beta}\,D^\beta$, and then invoking Theorem~\ref{th1a}.
Now, it can be shown that the problem (\ref{m9}) is solvable if and only if ${\mathcal F}-{\mathcal A}(X,D_X){\mathcal E}\dot{f} \in {\rm Im}\,{\mathcal A}$, the image of the operator (\ref{cal-AAA}). Thus, if $T({\mathcal F},\{g_k\}_{0\leq k\leq m-1}):= {\mathcal F}-{\mathcal A}(X,D_X){\mathcal E}\dot{f}$, this membership entails $({\mathcal F},\{g_k\}_{0\leq k\leq m-1})\in T^{-1} \Bigl({\rm Im}\,{\mathcal A}\Bigr)$. Note that $T$ maps the space of data onto $V_p^{-m,a}(\Omega)$, hence the number of linearly independent compatibility conditions the data should satisfy is \begin{equation}\label{comp-cond-X} {\rm codim}\,T^{-1}\Bigl({\rm Im}\,{\mathcal A}\Bigr) ={\rm codim}\,({\rm Im}\,{\mathcal A}). \end{equation} \noindent On the other hand, from Proposition~\ref{trace-2} and the fact that (\ref{ass}) is one-to-one we infer that the space of null-solutions for (\ref{m9}) is precisely ${\rm ker}\,{\mathcal A}$, the kernel of the operator (\ref{cal-AAA}). Since, as already pointed out, this operator has index zero, it follows that the problem (\ref{m9}) has index zero. Finally, (\ref{estUU-bis}) follows from what we have proved so far via a standard reasoning as in {\bf\cite{Ho}}. Our last comment regards the statement of the Dirichlet problem (\ref{e0}) with data \begin{equation}\label{newdata} \partial^k{\cal U}/\partial\nu^k=g_k\in B_p^{m-1-k+s}(\partial\Omega), \qquad k=0,1,\ldots,m-1, \end{equation} \noindent where $B_p^{m-1-k+s}(\partial\Omega)$ is defined here as the {\it range of ${\rm Tr}$ acting from $B_p^{m-1-k+s+1/p}({\mathbb{R}}^n)$}. If $\partial\Omega$ is smooth ($C^{1,1}$ will do) this problem is, certainly, well-posed. Let us illustrate some features of this particular formulation as the smoothness of $\partial\Omega$ deteriorates.
Suppose we are looking for the solution ${\cal U}\in W_2^2(\Omega)$ of the Dirichlet problem for the biharmonic operator \begin{equation}\label{m9b} \left\{ \begin{array}{l} \Delta ^2\,{\cal U}=0\qquad\mbox{in}\,\,\Omega, \\[10pt] {\rm Tr}\,{\cal U}=g_1\qquad\mbox{on}\,\,\partial\Omega, \\[10pt] \langle\nu,{\rm Tr}\,[\nabla{\cal U}]\rangle =g_2\quad\mbox{on}\,\,\partial\Omega. \end{array} \right. \end{equation} \noindent The simplest class of data $(g_1, g_2)$ would be, of course, $B_2^{3/2}(\partial\Omega)\times B_2^{1/2}(\partial\Omega)$, where $B_2^{3/2}(\partial\Omega)$ and $B_2^{1/2}(\partial\Omega)$ are the spaces of traces on $\partial\Omega$ for functions in $ W_2^2(\Omega)$ and $ W_2^1(\Omega)$, respectively. However, this formulation has several serious drawbacks. The first one is that the mapping \begin{equation}\label{w22} W_2^2(\Omega)\ni {\cal U}\to\langle\nu,{\rm Tr}\,[\nabla{\mathcal U}]\rangle \in B_2^{1/2}(\partial\Omega) \end{equation} \noindent is generally unbounded. In fact, by choosing ${\mathcal U}$ to be a linear function we see that the continuity of (\ref{w22}) implies $\nu\in B_2^{1/2}(\partial\Omega)$, which is not necessarily the case for a Lipschitz domain, even for such a simple one as the square $S=[0,1]^2$. The same problem fails to have a solution in the class $W_2^2(\Omega)$ when $(g_1, g_2)$ is an arbitrary pair in $B_2^{3/2}(\partial\Omega)\times B_2^{1/2}(\partial\Omega)$. Indeed, consider the problem (\ref{m9b}) for $\Omega=S$ and the data $g_1 =0$ and $g_2 =1$. It is standard (see Theorem 7.2.4 in {\bf\cite{KMR1}} and Sect.\,7.1 in {\bf\cite{KMR2}}) that the main term of the asymptotics near the origin of any solution ${\mathcal U}$ in $W_2^1(S)$ is given in polar coordinates $(r,\omega)$ by \begin{equation}\label{asymp} \frac{2r}{\pi+2}\left( (\omega -\frac{\pi}{2}) \sin\omega - \omega\cos\omega\right).
\end{equation} \noindent Since this function is positively homogeneous of degree one in $r$ but is not linear, its second-order derivatives are homogeneous of degree $-1$ and hence fail to be square integrable near the origin; thus it does not belong to $W_2^2(S)$, and there is no solution of problem (\ref{m9b}) in this space. \vskip 0.10in \noindent -------------------------------------- \vskip 0.20in \noindent {\tt Vladimir Maz'ya} \noindent Department of Mathematics \noindent Ohio State University \noindent Columbus, OH 43210, USA \noindent {\tt e-mail}: {\it vlmaz\@@math.ohio-state.edu} and \noindent Department of Mathematical Sciences \noindent University of Liverpool \noindent Liverpool L69 3BX, UK \vskip 0.15in \noindent {\tt Marius Mitrea} \noindent Department of Mathematics \noindent University of Missouri at Columbia \noindent Columbia, MO 65211, USA \noindent {\tt e-mail}: {\it marius\@@math.missouri.edu} \vskip 0.15in \noindent {\tt Tatyana Shaposhnikova} \noindent Department of Mathematics \noindent Ohio State University \noindent Columbus, OH 43210, USA \noindent {\tt e-mail}: {\it tasha@math.ohio-state.edu} and \noindent Department of Mathematics \noindent Link\"oping University \noindent Link\"oping SE-581 83, Sweden \noindent {\tt e-mail}: {\it tasha@mai.liu.se} \end{document}
\begin{document} \title{The movement of a solid in an incompressible perfect fluid \ as a geodesic flow} \begin{abstract} The motion of a rigid body immersed in an incompressible perfect fluid which occupies a three-dimensional bounded domain has recently been studied in its PDE formulation. In particular, classical solutions have been shown to exist locally in time. In this note, following the celebrated result of Arnold \cite{Arnold} concerning the case of a perfect incompressible fluid alone, we prove that these classical solutions are the geodesics of a Riemannian manifold of infinite dimension, in the sense that they are the critical points of an action, which is the integral over time of the total kinetic energy of the fluid-rigid body system. \end{abstract} \par \ \par \noindent {\small {\bf Keywords.} Perfect incompressible fluid, fluid-rigid body interaction, least action principle.\\ {\bf AMS Subject Classification. } 76B99, 74F10.} \\ \section{Introduction} \label{Intro} We consider the motion of a rigid body immersed in an incompressible homogeneous perfect fluid, so that the fluid-rigid body system occupies a smooth open and bounded domain $\Omega \subset \mathbb{R}^{3}$. The solid is supposed to occupy at each instant $t \geqslant 0$ a smooth closed connected subset $\mathcal{S}(t) \subset \Omega$ which is surrounded by a perfect incompressible fluid filling the domain $\mathcal{F}(t) := \Omega \setminus \mathcal{S}(t)$. \par \ \par From the point of view of PDEs, this system has recently been studied in \cite{ort1}, \cite{ort2}, \cite{rosier}, \cite{ht}, \cite{ogfstt}, which have established a Cauchy theory for classical solutions.
\par \ \par The aim of this note is to provide a rigorous proof that the classical solutions can equivalently be thought of as geodesics of a Riemannian manifold of infinite dimension, in the sense that they are the critical points of an action, which is the integral over time of the total kinetic energy of the fluid-rigid body system. It was pointed out in a famous paper by Arnold \cite{Arnold} that both the Euler equations for a rigid body and the Euler equations for a perfect fluid can be derived with this approach. The motion of a rigid body in a frame attached to its center of mass can be considered as a geodesic on the special orthogonal group $SO(3)$. On the other hand the motion of a perfect fluid filling a container $ \Omega$ (without any immersed rigid body in it) can be considered as a geodesic equation on the space $\text{Sdiff}^{+} (\Omega)$ of the volume and orientation preserving diffeomorphisms of $\Omega$. \par \ \par It is hence natural to try to extend this analysis to a system of interaction of a perfect fluid and a rigid body. In particular cases, when the fluid is irrotational or when the vorticity of the fluid is given by a finite number of point vortices, so that the dynamics is finite-dimensional, this was studied in detail in \cite{VKM1,VKM2}; see also references therein. The goal of this paper is to show that the motion of a rigid body in a fluid governed by the incompressible Euler equations can also be seen as a geodesic flow in the presence of a regular distributed vorticity. \par \ \par The structure of this paper is as follows. In Subsection \ref{Pde}, we first recall the PDE formulation of the system. Then in Subsection \ref{GEO}, we describe the infinite-dimensional manifold and the action used in the geometric formulation of the problem. In Subsection \ref{Equi}, we state the main result of the paper, that is, the equivalence of the two points of view.
Section \ref{Sec:Proofs} contains the proofs of the various statements. \par \ \par \subsection{PDE formulation} \label{Pde} The dynamics of this system can be described by the following PDEs: \begin{eqnarray} \label{Euler1a2} \displaystyle \frac{\partial u}{\partial t}+(u\cdot\nabla)u + \nabla p &=& 0 \ \text{for} \ x\in \mathcal{F}(t) , \\ \label{Euler2a2} \operatorname{div\,} u &=& 0 \ \text{for} \ x\in \mathcal{F}(t), \\ \label{Solide1} m x_{B}''(t) &=& \int_{\partial \mathcal{S}(t)} pn \ d\Gamma, \\ \label{Solide2} (\mathcal{J} r )'(t) &=& \int_{\partial \mathcal{S}(t)} (x- x_{B})\times pn \ d\Gamma , \\ \label{Euler3a2} u\cdot n &=& 0 , \ \text{for} \ x\in \partial \Omega , \\ \label{Euler3b} u\cdot n &=& v \cdot n , \ \text{for} \ \ x\in \partial \mathcal{S}(t), \\ \label{Eulerci2} u |_{t= 0} &=& u_0 , \\ \label{Solideci1} \ell(0) &=& \ell_0, \\ \label{Solideci2} r (0)&=& r _0, \\ \label{CMci} x_{B}(0) &=& x_0 . \end{eqnarray} Equations (\ref{Euler1a2}) and (\ref{Euler2a2}) are the incompressible Euler equations. The vector field $u$ is the fluid velocity and the scalar field $p$ denotes the pressure. Equations (\ref{Solide1}) and (\ref{Solide2}) are Newton's laws for the linear and angular momenta of the body under the influence of the pressure force. Here we denote by $m$ the mass of the rigid body (normalized so that the density of the fluid is $\rho_{F} = 1$) and by $x_B (t)$ the position of its center of mass; $n(t,x)$ denotes the unit normal vector pointing outside the fluid and $ d\Gamma(t) $ denotes the surface measure on $\partial \mathcal{S}(t)$. The time-dependent vector \begin{equation} \label{DefEll} \ell(t) := x_{B}'(t), \end{equation} denotes the velocity of the center of mass of the solid and $r$ denotes its angular velocity, so that the solid velocity field is given by \begin{equation} \label{vietendue} v(t,x) :=\ell(t) + r (t) \times (x - x_{B}(t)).
\end{equation} In (\ref{Solide2}) the $3 \times 3$ matrix $\mathcal{J}={\mathcal J}(t)$ denotes the moment of inertia, which depends on time according to Sylvester's law \begin{equation}\label{Sylvester} \mathcal{J} =Q \mathcal{J}_0 Q^{*} , \end{equation} where $\mathcal{J}_0$ is the initial value of $\mathcal{J}$ and where the rotation matrix $Q \in SO(3)$ is deduced from $r$ by the following differential equation (where we use the convention of considering the operator $r(t) \times \cdot$ as a matrix): \begin{equation} \label{LoiDeQ} Q'(t) = r(t) \times Q(t) \text{ and } Q(0) = \operatorname{Id}_3. \end{equation} The matrix ${\mathcal J}_{0}$ can be obtained as follows. Given a positive function $\rho_{{\mathcal S}_0} \in L^{\infty}({\mathcal S}_{0};\mathbb{R})$ describing the density in the solid (again normalized so that the density of the fluid is $\rho_{F} = 1$), the data $m$, $x_{0}$ and ${\mathcal J}_{0}$ can be computed from its first moments \begin{equation} \label{EqMasse} m := \int_{{\mathcal S}_0} \rho_{{\mathcal S}_0} dx > 0, \end{equation} \begin{equation} \label{Eq:CG} m x_0 := \int_{{\mathcal S}_0} x \rho_{{\mathcal S}_0} (x) dx, \end{equation} \begin{equation}\label{eqJ} \mathcal{J}_{0} := \int_{ \mathcal{S}_{0}} \rho_{{\mathcal S}_0}(x) \big( | x- x_{0} |^2 \operatorname{Id}_3 -(x- x_{0}) \otimes (x- x_{0}) \big) dx . \end{equation} Finally, the domains occupied by the solid and the fluid are given by \begin{equation} \label{PlaceDuSolideEtDuFluide} {\mathcal S}(t) = \Big\{ x_{B} (t) + Q(t)(x-x_{0}), \ x \in {\mathcal S}_{0} \Big\} \text{ and } {\mathcal F}(t)=\Omega \setminus \overline{{\mathcal S}(t)}, \end{equation} starting from a given initial position $\mathcal{S}_0 \subset \Omega$, with $\mathcal{F}_0 := \Omega \setminus \mathcal{S}_0$.
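For the reader's convenience, here is a sketch of why Sylvester's law (\ref{Sylvester}) holds (a standard verification, not spelled out in the text, under the identification of $\mathcal{J}(t)$ with the inertia matrix of $\mathcal{S}(t)$ relative to $x_{B}(t)$). A point of $\mathcal{S}(t)$ has the form $x=x_{B}(t)+Q(t)(y-x_{0})$ with $y\in\mathcal{S}_{0}$, so that $x-x_{B}(t)=Q(t)(y-x_{0})$; since this change of variables has unit Jacobian and since $|Qz|=|z|$ and $(Qz)\otimes(Qz)=Q\,(z\otimes z)\,Q^{*}$ for every $z\in\mathbb{R}^{3}$ and $Q\in SO(3)$, one obtains \begin{equation*} \mathcal{J}(t) =\int_{\mathcal{S}_{0}}\rho_{{\mathcal S}_0}(y)\, Q(t)\Big(|y-x_{0}|^{2}\operatorname{Id}_3 -(y-x_{0})\otimes(y-x_{0})\Big)Q(t)^{*}\,dy =Q(t)\,\mathcal{J}_{0}\,Q(t)^{*}. \end{equation*}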
Let us underline that, since ${\mathcal S}(t)$ and $\partial \Omega$ are compact and ${\mathcal S}(t) \subset \Omega$, the solutions that we consider satisfy \begin{equation} \label{DistanceAuBord} \mbox{dist}({\mathcal S}(t),\partial \Omega) >0 \text{ on } [0,T]. \end{equation} Let us give a precise definition of the classical solutions examined in this paper. \begin{Definition}[Classical solutions] \label{germi} We call a classical solution of the PDE formulation on $[0, T]$ a triple \begin{equation*} (u,x_B , r) \in C^{1,\lambda} (\cup_{t \in [0,T]} \Big( \{t\} \times {\mathcal F}(t) \Big) ;\mathbb{R}^{3}) \times C^{2} ([0,T];\mathbb{R}^{3}) \times C^{1} ([0,T];\mathbb{R}^{3}), \end{equation*} (for some $\lambda \in (0,1)$) satisfying \eqref{Euler1a2}--\eqref{DistanceAuBord}. \end{Definition} The local-in-time existence and uniqueness of classical solutions to the problem \eqref{Euler1a2}--\eqref{DistanceAuBord} holds when the initial velocity of the fluid is in the H\"older space $C^{1,r}$; cf. \cite{ogfstt} for a precise statement. Let us also mention the earlier results of Ortega, Rosier and Takahashi \cite{ort1}-\cite{ort2}, where the body-fluid system occupies the plane $\mathbb{R}^{2}$, Rosier and Rosier \cite{rosier} in the case of a body in $\mathbb{R}^{3}$, and Houot, San Martin and Tucsnak \cite{ht} in the case (considered here) of a bounded domain, with the initial velocity in a Sobolev space $H^{m}$, $m \geqslant 3$. \subsection{Geodesic formulation} \label{GEO} Let us now turn to the geometric viewpoint. We first describe below the infinite-dimensional configuration space of the system. Next we introduce a natural action, which allows us to define our notion of geodesic. \subsubsection{Rigid movements} Let us first describe the rigid part of the motion.
To a velocity vector field $v$ of a rigid body one associates the flow \begin{equation} \label{flowS} \partial_t \tau(t,x) = v (t, \tau(t,x) ) \text{ and } \tau(0,x) = x \text{ for } (t,x) \in [0,T] \times {\mathcal S}_{0}. \end{equation} It is easily integrated to find \begin{equation*} \tau(t,x) = x_{B} (t) + Q(t)(x-x_{0}), \end{equation*} where \begin{equation} \nonumber x_{B} (t) := \tau(t,x_0 ), \end{equation} and where $ Q(t)$ is obtained from $v$ by \eqref{LoiDeQ} and $r$ is given by \eqref{vietendue}. \par The flow $\tau$ can be seen as a $C^1$ function of time with values in the Lie group $SE(3)$ of rigid motions (the special Euclidean group), that is, the group generated by translations and rotations in $3$D. Its tangent space at $\operatorname{Id} \in SE(3)$ is \begin{eqnarray*} \mathfrak{se}(3) := T_{\operatorname{Id}} SE(3) = \Big\{ v \in C^{1}(\mathbb{R}^{3};\mathbb{R}^{3}) \ \Big/ \ D(v) = 0 \Big\}, \end{eqnarray*} where $D(v)$ denotes the tensor of deformations $2 D(v) := (\partial_i v_j + \partial_j v_i )_{i,j} $. Given $x_{B}$ in $\mathbb{R}^3$, we have the following: \begin{equation*} \mathfrak{se}(3) = \Big\{ v :\mathbb{R}^{3} \rightarrow \mathbb{R}^{3} \ \Big/ \ \exists (\ell,r) \in \mathbb{R}^{3} \times \mathbb{R}^{3} , \ \forall x \in \mathbb{R}^3 , \ v(x) = \ell + r \times ( x- x_B ) \Big\} . \end{equation*} Moreover, given $x_{B}$ in $\mathbb{R}^3$ and $v \in \mathfrak{se}(3) $, the ordered pair $(\ell,r)$ above is unique. Hence $\mathfrak{se}(3) \sim \mathbb{R}^{3} \times \mbox{Skew}_{3}(\mathbb{R})$. \par Accordingly, the tangent space of $SE(3)$ at $\tau \in SE(3)$ is \begin{equation*} T_\tau SE(3) := \Big\{ v \circ \tau \text{ with } v \in \mathfrak{se}(3) \Big\}.
\end{equation*} Now, $x_0$ being given, we introduce the following projections on $T_{\tau} SE(3)$: to $\sigma = v \circ \tau \in T_{\tau} SE(3)$ we associate $(L_{\tau}[\sigma] , R_{\tau}[\sigma])$ the unique ordered pair $(\ell,r)$ associated to $v$ with $x_{B} := {\tau} (x_0)$, in other words: \begin{equation*} \sigma(\tau^{-1}(x)) = v(x)= L_{\tau}[\sigma] + R_{\tau}[\sigma] \times (x - \tau(x_{0})). \end{equation*} \ \par Let us conclude this subsection by describing the energy of the solid. Using the choice of $x_{B} (t)$ as the center of mass of the body at time $t$, we have that \begin{equation} \label{premeti} \int_{{\mathcal S}(t) } \rho_{{\mathcal S}_0} (\tau_t^{-1}(x)) \, (x- x_{B}(t)) \, dx = 0, \end{equation} and therefore, for any $(\ell_1 , r_1 ,\ell_2 , r_2 ) \in (\mathbb{R}^3)^4$, for any $t$, \begin{equation}\label{meti} \int_{{\mathcal S}(t) } \rho_{{\mathcal S}_0}(\tau_t^{-1}(x)) \, (\ell_1 + r_1 \times (x - x_{B}(t)) ) \cdot (\ell_2 + r_2 \times (x - x_{B} (t)) ) \, dx = m \ell_1 \cdot \ell_2 + \mathcal{J} (t) r_1 \cdot r_2 , \end{equation} where $ \mathcal{J} (t) $ is given by \eqref{Sylvester} and the notation $\tau_t^{-1}$ stands for the inverse of the function $\tau_{t}:=\tau (t,\cdot)$. \subsubsection{Fluid displacements and Arnold's geodesic interpretation} \label{Arnold} Let us briefly recall Arnold's interpretation of the Euler equation. To a velocity vector field $u$ satisfying the incompressible Euler equations in $\Omega$ (without body) one associates the flow $ \eta$ defined on $[0,T] \times \Omega$ by \begin{equation} \label{flowF} \partial_t \eta(t,x) = u (t, \eta(t,x) ) \text{ and } \eta(0,x) = x . \end{equation} The flow $\eta$ can be seen as a continuous function of the time with values in the space $\text{Sdiff}^{+} (\Omega)$ of the volume and orientation preserving diffeomorphisms of $\Omega$. 
The latter is viewed as an infinite-dimensional manifold with the metric inherited from the embedding in $L^2 (\Omega ; \mathbb{R}^{3})$, and the tangent space at $\eta \in \text{Sdiff}^{+} (\Omega)$ is \begin{equation*} T_\eta \ \text{Sdiff}^{+} (\Omega) := \Big\{ u \circ \eta \text{ with } u \in C^{1} ( \Omega ;\mathbb{R}^{3}) \text{ such that } \operatorname{div\,}(u)=0 \ \text{ in }\ \Omega \ \text{ and }\ u \cdot n=0 \ \text{ on }\ \partial \Omega \Big\}. \end{equation*} Euler's equations are then interpreted as a geodesic equation on $\text{Sdiff}^{+} (\Omega)$. The pressure field appears as a Lagrange multiplier for the divergence-free constraint on the velocity. Ebin and Marsden proved in \cite{EbinMarsden} the existence of these geodesics when the initial velocity of the fluid is in the Sobolev space $H^{s} (\Omega)$, $s>\frac{5}{2}$. \par \subsubsection{Possible configurations of the fluid/body system as a Riemannian manifold} \label{PossibleConfigurations} To introduce the geodesic formulation of the motion of a rigid body immersed in an incompressible perfect fluid, we first describe the infinite-dimensional manifold on which these geodesics will be considered. \par \ \\ {\bf The set of the possible configurations as a Riemannian manifold.} We first describe the set of the possible configurations of the system at a fixed time by setting \begin{multline} \nonumber {\mathcal C} := \Big\{ \ (\tau,\eta) \in SE(3) \times C^{1,\lambda}( {\mathcal F}_{0}; \mathbb{R}^{3}) \text{ such that } \tau({\mathcal S}_{0}) \subset \Omega, \\ \eta \text{ is a volume and orientation preserving diffeomorphism } {\mathcal F}_{0} \rightarrow \Omega \setminus \left[\tau({\mathcal S}_{0})\right] \Big\}. \label{defC} \end{multline} We will represent $(\tau,\eta)$ by $\phi : \Omega \rightarrow \Omega$ such that $\phi_{|{\mathcal S}_{0}}= \tau$ and $\phi_{|{\mathcal F}_{0}}=\eta$. Note that $\phi$ is not necessarily continuous. 
\par Let us observe that according to the Helmholtz decomposition, in $C^{1,\lambda}( {\mathcal F}_{0}; \mathbb{R}^{3})$, the space of divergence-free vector fields is closed and admits a topological complement. Therefore, one can see that ${\mathcal C}$ is a submanifold of a manifold modelled on the Banach space \begin{equation*} E := \mathfrak{se}(3) \times C^{1,\lambda}( {\mathcal F}_{0}; \mathbb{R}^{3}). \end{equation*} We will be interested in the tangent space of this infinite-dimensional manifold. Let us first recall that by definition the tangent space $T_{(\tau,\eta)} {\mathcal C}$ at $(\tau,\eta) \in {\mathcal C}$ is the set of equivalence classes of germs of continuously differentiable functions $\Theta: (-\varepsilon, \varepsilon) \rightarrow {\mathcal C}$ (for some $\varepsilon>0$) such that $\Theta(0)=(\tau,\eta)$, for the equivalence relation: \begin{equation*} \Theta_{1} \sim \Theta_{2} \text{ iff } \Theta_{1}'(0) =\Theta_{2}'(0) \text{ in local charts}. \end{equation*} Equivalently, an element $(\sigma,\mu) \in T_{(\tau,\eta)} {\mathcal C}$ is an equivalence class of pairs $(\psi, e)$, where $\psi$ is a chart defined on a neighborhood of $(\tau,\eta) \in {\mathcal C}$ and $e \in E$, for the relation \begin{equation*} (\psi_1 ,e_1 ) \equiv (\psi_2 ,e_2 ) \ \text{ if } \ D ( \psi_2 \circ \psi_1^{-1} )( \psi_1 (\tau,\eta ) ) \cdot e_1 = e_2 . \end{equation*} Clearly this makes $T_{(\tau,\eta)} {\mathcal C}$ a linear space and $T {\mathcal C} := \displaystyle \bigcup_{(\tau,\eta)\in {\mathcal C}} \Big( \{(\tau,\eta)\} \times T_{(\tau,\eta)} {\mathcal C} \Big)$ a vector bundle whose base space is ${\mathcal C}$. \par \ \par \noindent Let us introduce the following notation: given $(\tau,\eta) \in {\mathcal C}$ and $\mu \in C^{1,\lambda}( {\mathcal F}_{0}; \mathbb{R}^{3})$, we introduce $U_{\eta}[\mu]: \eta({\mathcal F}_{0}) \rightarrow \mathbb{R}^{3}$ by \begin{equation} \nonumber U_{\eta}[\mu] := \mu \circ \eta^{-1}. 
\end{equation} Recall that due to the openness of $\Omega$ and the compactness of ${\mathcal S}_{0}$ one has $d(\phi({\mathcal S}_{0}), \partial \Omega)>0$ for $\phi \in {\mathcal C}$. \par \ \par Now the tangent space of ${\mathcal C}$ is described by the following proposition. \begin{Proposition} \label{Prop:UnIIsatisfaitcaC} Let $(\tau,\eta) \in {\mathcal C}$. Then using the notations \begin{equation} \label{NotationsC} x_B := \tau (x_0), \ {\mathcal S} := \tau ({\mathcal S}_{0}) \text{ and } \ {\mathcal F} := \Omega \setminus {\mathcal S}, \end{equation} we have \begin{align} \nonumber T_{(\tau,\eta)} {\mathcal C} = \bigg \{ (\sigma,\mu) \in T_{\tau}SE(3) &\times C^{1,\lambda}({\mathcal F}_{0};\mathbb{R}^{3}) \ \Big/ \\ \label{UDiv} & \operatorname{div\,}(U_{\eta}[\mu])=0 \ \text{ in }\ {\mathcal F}, \\ \label{UAuBord} & U_{\eta}[\mu](x) \cdot n(x) = (L_{\tau}[\sigma] + R_{\tau}[\sigma] \times (x -x_B)) \cdot n(x) \ \text{ on }\ \partial {\mathcal S} , \\ \label{UAuBordFixe} & U_{\eta}[\mu](x) \cdot n(x) =0 \ \text{ on }\ \partial \Omega \bigg\}. \end{align} As before, $n$ denotes the unit normal vector on $\partial \Omega$ and $\partial {\mathcal S}$, pointing outside the fluid. \end{Proposition} The proof of Proposition \ref{Prop:UnIIsatisfaitcaC} will be given in the next section. We will represent $(\sigma,\mu)$ by ${\mathcal U} \circ \phi: \Omega \rightarrow \mathbb{R}^{3}$ where ${\mathcal U}:\Omega \rightarrow \mathbb{R}^{3}$ is given by ${\mathcal U}_{|{\mathcal S}} \circ \tau= \sigma$ and ${\mathcal U}_{|{\mathcal F}}= U_{\eta}[\mu]$. In the same way as $\phi$, ${\mathcal U}$ can be discontinuous. 
\par \ \par The manifold ${\mathcal C}$ can be endowed with the following Riemannian metric: for any $\phi \in {\mathcal C}$, for any ${\mathcal U}_1 \circ \phi$, ${\mathcal U}_2 \circ \phi \in T_{\phi } {\mathcal C}$, \begin{align} \nonumber \langle {\mathcal U}_1 \circ \phi , {\mathcal U}_2 \circ \phi \rangle_{\phi }:= \int_{ \Omega } \rho_0 ( \phi^{-1}) \, {\mathcal U}_1 \cdot {\mathcal U}_2 \, dx . \end{align} Above we used the density $\rho_0 $ defined on $\Omega$ by: \begin{equation*} \rho_0 (x) = \left\{ \begin{array}{l} 1 \text{ in } {\mathcal F}_{0}, \\ \rho_{{\mathcal S}_0}(x) \text{ in } {\mathcal S}_{0}. \end{array} \right. \end{equation*} Splitting the fluid and the solid contributions, this reads: for any $(\sigma_1 ,\mu_1) , (\sigma_2 ,\mu_2 )$ in $T_{(\tau,\eta)} {\mathcal C}$, we have \begin{align} \label{metrique} \langle (\sigma_1 ,\mu_1) , (\sigma_2 ,\mu_2 ) \rangle_{(\tau,\eta)} = m L_{\tau}[\sigma_{1}] \cdot L_{\tau}[\sigma_{2}] + {\mathcal J}[\tau] R_{\tau}[\sigma_{1}] \cdot R_{\tau}[\sigma_{2}] + \int_{ \eta ({\mathcal F}_{0}) } U_{\eta}[\mu_{1}] \cdot U_{\eta}[\mu_{2}] \, dx , \end{align} where ${\mathcal J}[\tau]$ is the inertia matrix deduced from the initial one ${\mathcal J}_0$ by the rigid transformation $\tau$, that is \begin{equation} \label{Jtau} \mathcal{J} [\tau] =Q[\tau] \mathcal{J}_0 Q[\tau]^{*}, \end{equation} where $Q[\tau]$ is the rotation matrix canonically associated to $\tau$, that is, its linear part. \par \ \par Let us stress that this metric defines a weaker topology than the original one on ${\mathcal C}$. \par \ \\ {\bf Curves of configurations.} We now turn to time-dependent displacements. Let $(\tau_{0},\eta_{0})$ and $(\tau_{1},\eta_{1})$ be given in ${\mathcal C}$. We introduce \begin{align} \nonumber {\mathcal L} := \Big\{ \ (\tau,\eta) &\in C^{1}([0,T];{\mathcal C}), \text{ such that:} \\ \nonumber &{i.} \ \ \tau(0)=\tau_{0}, \ \ \eta(0)=\eta_{0}, \\ &{ii.} \ \ \tau(T)=\tau_{1}, \ \ \eta(T)=\eta_{1} \ \Big\}. 
\label{defL} \end{align} It is easy to verify that ${\mathcal L}$ is a submanifold of a manifold modelled on the Banach space $E_{T}:= C^{1}([0,T]; \mathfrak{se}(3)) \times C^{1,\lambda}([0,T] \times {\mathcal F}_{0}; \mathbb{R}^{3})$. \par The tangent space of this manifold is described by the following proposition. \begin{Proposition} \label{Prop:UnIIsatisfaitcaG} Let $(\tau,\eta) \in {\mathcal L}$. Then using the notations \begin{equation} \label{NotationsL} x_B (t) := \tau_{t} (x_0), \ {\mathcal S} (t) := \tau_{t}({\mathcal S}_{0}) \ \text{ and } \ {\mathcal F}(t):=\Omega \setminus {\mathcal S} (t), \end{equation} we have \begin{align} \nonumber T_{(\tau,\eta)} {\mathcal L} = \Big\{ &(\sigma,\mu) \in C^{1}([0,T]; T_{\tau_{t}}SE(3)) \times C^{1,\lambda}([0,T] \times {\mathcal F}_{0}; \mathbb{R}^{3}) \ \Big/ \\ \label{Nulsen0etT} & \sigma(0)=\sigma(T)=0 \text{ and } \mu(0)=\mu(T)=0, \\ \label{UDiv2} & \operatorname{div\,}(U_{\eta}[\mu])=0 \text{ in } {\mathcal F}(t) \text{ for each } t \in [0,T], \\ \label{UAuBord2} & U_{\eta}[\mu] \cdot n =(L_{\tau}[\sigma] + R_{\tau}[\sigma] \times [x-x_B (t)]) \cdot n \text{ on } \partial {\mathcal S}(t) \ \text{ for each } t \in [0,T], \\ \label{UAuBordFixe2} & U_{\eta}[\mu] \cdot n =0 \text{ on } \partial \Omega \text{ for each } t \in [0,T] \Big\}. \end{align} \end{Proposition} \begin{Remark} Here we make the abuse of notation $C^{1}([0,T];T_{\tau_{t}}SE(3))$, since $T_{\tau_{t}}SE(3)$ actually depends on $t$. We consider $\sigma$ as a section of a vector bundle rather than as a function. One can for instance interpret $C^{1}([0,T];T_{\tau_{t}}SE(3))$ as the set of mappings $\sigma$ such that $(t,x) \mapsto \sigma(t,\tau_{t}^{-1}(x))$ is in $C^{1}([0,T];\mathfrak{se}(3))$. Also, we sometimes drop the dependence of the objects on $t$ in order to simplify the notation. \end{Remark} \ \par \noindent Proposition \ref{Prop:UnIIsatisfaitcaG} is the time-dependent counterpart of Proposition \ref{Prop:UnIIsatisfaitcaC}. 
We do not provide a proof since it is only a matter of adapting the proof of Proposition \ref{Prop:UnIIsatisfaitcaC} with a harmless additional parameter. The only new point is to observe that, the extremities of the curves being prescribed (conditions i. and ii. in \eqref{defL}), the fields $\sigma$ and $\mu$ vanish when $t=0$ or $T$. \par \ \par As previously, we will represent $(\tau,\eta) \in {\mathcal L}$ by $\phi : [0,T] \times \Omega \rightarrow \Omega$ such that for any $t \in [0,T]$, \begin{equation*} \phi (t,\cdot) |_{{\mathcal S}_{0}}= \tau (t,\cdot) \ \text{ and } \ \phi (t,\cdot) |_{{\mathcal F}_{0}}=\eta (t,\cdot). \end{equation*} We represent $(\sigma,\mu) \in T_{(\tau,\eta)} {\mathcal L}$ by ${\mathcal U} \circ \phi: [0,T] \times \Omega \rightarrow \mathbb{R}^{3}$ where ${\mathcal U}: [0,T] \times \Omega \rightarrow \mathbb{R}^{3}$ is given, for any $t \in [0,T]$, by \begin{equation*} {\mathcal U}(t,\cdot) |_{{\mathcal S}(t)} \circ \tau (t,\cdot) = \sigma (t,\cdot) \ \text{ and } \ {\mathcal U}(t,\cdot) |_{{\mathcal F}(t)}= U_{\eta}[\mu] (t,\cdot) . \end{equation*} \subsubsection{The geodesic interpretation of the motion of a rigid body immersed in an incompressible perfect fluid} \label{GeodesicInterpretation} Here we consider geodesics as critical points of the following action on the manifold ${\mathcal L}$: \begin{equation} \label{DefA} {\mathcal A}(\phi) := \frac{1}{2} \int_{[0,T] \times \Omega } \rho_{{0}} |\partial_t \phi |^2 \, dx\, dt. \end{equation} We see that the action is obtained by integrating the squared norm (associated to the metric of ${\mathcal C}$) of the tangent vector to the curve $\phi$, that is \begin{equation*} {\mathcal A}(\phi) = \frac{1}{2} \int_{[0,T] } \langle \partial_t \phi (t,\cdot),\partial_t \phi (t,\cdot) \rangle_{\phi (t,\cdot)} \, dt . 
\end{equation*} Separating the fluid and the body parts in the integral, and using \eqref{meti}, we see that \begin{equation*} {\mathcal A}(\phi)= \frac{1}{2} \int_{[0,T]} \Big( m |\ell(t)|^2 + {\mathcal J}[\tau(t)] r(t) \cdot r(t) + \int_{\eta_{t}({\mathcal F}_{0}) } |u(t,x)|^2 \, dx \Big) \, dt, \end{equation*} where \begin{equation} \label{LRU} \ell:=L_{\tau}[\partial_{t} \tau], \ r:=R_{\tau}[\partial_{t} \tau] \ \text{ and } \ u:=U_{\eta}[\partial_{t} \eta]. \end{equation} In this expression, we recognize the time integral of the kinetic energy of the fluid-body system. \par Moreover, going back to \eqref{DefA}, since the action is a continuous quadratic form on ${\mathcal L}$, we deduce that ${\mathcal A}$ is differentiable on ${\mathcal L}$ with \begin{align} \label{diff} D {\mathcal A} (\phi) \cdot ( {\mathcal U} \circ \phi ) = \int_{[0,T] \times \Omega } \rho_{{0}} \, \partial_t \phi \cdot \partial_t ( {\mathcal U} \circ \phi ) \, dx\, dt = \int_{[0,T]} \langle \partial_t \phi (t,\cdot),\partial_t ({\mathcal U} \circ \phi) (t,\cdot) \rangle_{\phi (t,\cdot)} \, dt . \end{align} This leads us to the following natural definition. \begin{Definition} \label{geocca} We say that $\phi \in {\mathcal L}$ is a geodesic on ${\mathcal L}$ if for any ${\mathcal U} \circ \phi \in T_{\phi} {\mathcal L}$, $ D {\mathcal A} (\phi) \cdot ({\mathcal U} \circ \phi ) = 0$. \end{Definition} \subsection{Equivalence of the two points of view} \label{Equi} The main result of this paper is the following. \begin{Theorem} \label{Theo:Equi} If $(u,x_B , r) $ is a classical solution of the PDEs formulation on $[{0}, T]$ then $(\tau,\eta) $ defined by formulas \eqref{flowS} and \eqref{flowF} is a geodesic on ${\mathcal L}$. \par Conversely, let $(\tau,\eta) \in {\mathcal L}$ be a geodesic. 
Then $(u,x_B , r) $ where $(\ell,r,u)$ is defined by \eqref{LRU} and $x_{B}$ is obtained by \eqref{CMci} and \eqref{DefEll}, is a classical solution of the PDEs formulation on $[{0}, T]$ (in the sense of Definition \ref{germi}). \end{Theorem} Theorem \ref{Theo:Equi} will be proved in Subsection \ref{EquiProof}. \par \ \par \begin{Remark} Let us define the length of a curve $\phi \in {\mathcal L}$: \begin{equation*} \Lambda(\phi) := \int_{[0,T]} ( \langle \partial_t \phi (t,\cdot),\partial_t \phi (t,\cdot) \rangle_{\phi (t,\cdot)})^\frac{1}{2} \, dt , \end{equation*} and consider \begin{align} \nonumber d &:= \inf \Lambda(\phi) \\ &= \inf \int_{[0,T]} \left( m |\ell(t)|^2 + {\mathcal J}(t) r(t) \cdot r(t) + \int_{\eta_{t}({\mathcal F}_0) } |u(t,x)|^2 \, dx \right)^{1/2} \, dt , \end{align} where the infimum is taken over $\phi = (\tau,\eta) \in {\mathcal L}$. We call $d$ the geodesic distance between the configurations $(\tau_{0}, \eta_{0})$ and $(\tau_{1},\eta_{1})$ of ${\mathcal C}$. If $(\tau,\eta) \in {\mathcal L}$ realizes this infimum and is parametrized by $t$ in such a way that the energy does not depend on time, then $(\tau,\eta)$ also minimizes the action ${\mathcal A}$ over ${\mathcal L}$. Conversely, by the conservation of energy, any geodesic is parametrized proportionally to arc length. \end{Remark} Let us mention here two open problems. \par \ \\ {\bf Open Problem 1.} Is it possible to prove that for $T$ small enough and $(\tau,\eta) \in {\mathcal L}$ such that the associated $(u,x_B , r) $ is a classical solution of the PDEs formulation on $[{0}, T]$, one has for any $({\tilde \tau},{\tilde \eta}) \in {\mathcal L}$, ${\mathcal A} (\tau,\eta) \leqslant {\mathcal A} ({\tilde \tau},{\tilde \eta})$, with equality if and only if $({\tilde \tau},{\tilde \eta}) = (\tau,\eta)$? This would extend the result obtained by Brenier, cf. \cite{Brenier}, in the case of a fluid without a body. 
\par \ \\ {\bf Open Problem 2.} Is it possible to adapt the strategy that Ebin and Marsden used in \cite{EbinMarsden} in the case of a fluid alone to the case with a body, that is, to prove the existence of a torsion-free connection and of some geodesics by using parallel transport, and then to prove that these solutions also solve the PDE formulation? \par \ \\ Let us also mention studies connected with the stability properties of the system. In the case of a fluid alone, Arnold \cite{Arnold} uses a notion of Riemannian curvature to investigate the stability of two-dimensional stationary flows. In the case considered here of a fluid-body system, this stability was studied by Ilin and Vladimirov \cite{VladimirovIlin2D,VladimirovIlin3D}. \section{Proofs} \label{Sec:Proofs} In this section, we prove the claims of Section \ref{Intro}. The existence of a solution of the PDE system will not be needed. \subsection{Proof of Proposition \ref{Prop:UnIIsatisfaitcaC}} \label{TS} {\bf 1.} Let us first show that any $(\sigma,\mu)$ satisfying \eqref{UDiv}-\eqref{UAuBordFixe} belongs to $T_{(\tau,\eta)} {\mathcal C}$. First, we define $\theta_{\mathcal S} \in C^{1}((-\varepsilon,\varepsilon);SE(3))$ by \begin{equation*} \partial_s \theta_{\mathcal S} (s) [x]= v_{\mathcal S} ( s, \theta_{\mathcal S} (s) [x] ) , \quad \theta_{\mathcal S} (0) [x]=\tau [x] , \end{equation*} where \begin{equation*} v_{\mathcal S} (s,x) := L_{\tau}[\sigma] + R_{\tau}[\sigma] \times (x-x_B (s)) , \quad x_B (s) := x_B + s L_{\tau}[\sigma] , \end{equation*} with $\varepsilon>0$ small; in particular we can require that for all $s \in (-\varepsilon,\varepsilon)$, one has \begin{equation*} {\mathcal S}_{s}:=\theta_{\mathcal S}(s)[{\mathcal S}] \subset \Omega. 
\end{equation*} \begin{Lemma} There exists a family $\psi_{s} : {\mathcal F} \rightarrow {\mathcal F}_{s}:= \Omega \setminus {\mathcal S}_{s}$, for $s \in (-\varepsilon,\varepsilon)$, smooth in its arguments, such that \begin{gather} \label{F1} \psi_{0}=\operatorname{Id}_{{\mathcal F}}, \\ \label{F2} \forall s \in (-\varepsilon,\varepsilon), \ \ \psi_{s}= \theta_{\mathcal S}(s) \text{ in a neighborhood of } {\mathcal S}, \\ \label{F3} \forall s \in (-\varepsilon,\varepsilon), \ \ \psi_{s}= \operatorname{Id} \text{ in a neighborhood of } \partial \Omega, \\ \label{F4} \forall s \in (-\varepsilon,\varepsilon) , \ \forall x \in {\mathcal F}, \ \ \det(\nabla_{x} \, \psi_{s})=1. \end{gather} \end{Lemma} \begin{proof} We define \begin{equation} \label{DefV} V(s,x) := -\frac{1}{2} \operatorname{curl} \big( \chi(x) (x\times L_{\tau}[\sigma] + \| x - x_B (s) \|^2 R_{\tau}[\sigma] ) \big), \end{equation} where $\chi$ is a smooth cut-off function equal to $1$ in a neighborhood of ${\mathcal S}$ and to $0$ in a neighborhood of $\partial \Omega$. We then define $\psi_{s}$ as the solution of \begin{equation*} \partial_s \psi_{s} (x)= V( s, \psi_{s} (x)) , \quad \psi_{0} (x)=x . \end{equation*} Equations \eqref{F1} and \eqref{F3} are straightforward. Equation \eqref{F2} follows from the fact that $V$ coincides with $v_{\mathcal S}$ in some neighborhood of $\partial {\mathcal S}$, reducing $\varepsilon$ if necessary. Finally, since $V$ is clearly divergence-free, \eqref{F4} follows from Liouville's theorem. \end{proof} Now we construct the vector field: \begin{equation} \label{defvt} v_{\mathcal F} (s, \cdot) := V(s,\cdot ) + (\psi_{s})_{*}( U_{\eta}[\mu] - V( 0 ,\cdot ) ) \ \text{ in } {\mathcal F}_{s}. \end{equation} (Recall that the push-forward is defined as $(\psi_{*} v )(y) = d\psi_{\psi^{-1}(y)} [v (\psi^{-1}(y))] $.) 
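As a sanity check (our computation, not spelled out in the original), let us verify that $V$ agrees with $v_{\mathcal S}$ wherever the cut-off function equals $1$. For constant vectors $\ell , r$ and a fixed point $a$ one has $\operatorname{curl} (x \times \ell ) = -2 \ell$ and $\operatorname{curl} ( \| x - a \|^2 r ) = \nabla \| x - a \|^2 \times r = 2 (x-a) \times r$, so that \begin{equation*} -\frac{1}{2} \operatorname{curl} \big( x\times L_{\tau}[\sigma] + \| x - x_B (s) \|^2 R_{\tau}[\sigma] \big) = L_{\tau}[\sigma] + R_{\tau}[\sigma] \times (x-x_B (s)) = v_{\mathcal S} (s,x) . \end{equation*} In particular, $V$ being a curl, it is divergence-free on the whole of $\Omega$, which is the property used for \eqref{F4}. 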
\par Now we introduce a corresponding flow $\theta_{\mathcal F}$ according to the $s$-variable, starting from $\eta$: \begin{equation*} \left\{ \begin{array}{l} \partial_{s} \theta_{\mathcal F} (s,x) = v_{\mathcal F} (s,\theta_{\mathcal F}(s,x)), \ \text{ in } (-\varepsilon,\varepsilon) \times {\mathcal F}_{0}, \\ \theta_{\mathcal F}(0,x) = \eta(x) \ \text{ in } {\mathcal F}_{0}. \end{array} \right. \end{equation*} (One can for instance smoothly extend $v_{{\mathcal F}}(s,\cdot)$ to $\mathbb{R}^{3}$ to define this flow in a standard way.) \par \ \par It remains to check that $\Theta:= (\theta_{\mathcal S},\theta_{\mathcal F}) \in C^{1}((-\varepsilon,\varepsilon);{\mathcal C})$, that $\Theta(0)=(\tau,\eta)$ and that $\Theta '(0)= (\sigma , \mu)$. The latter two claims follow directly from the construction.\par Let us prove that $\theta_{\mathcal F}$ is volume-preserving. To that purpose, we first notice that $\operatorname{div\,}(U_{\eta}[\mu])=\operatorname{div\,}(V(s,\cdot))=0$. Using the fact that the push-forward of a divergence-free vector field by a diffeomorphism with unit Jacobian determinant is still divergence-free (see for instance \cite[Proposition 2.4]{InoueWakimoto}), \eqref{F4} and \eqref{defvt}, we deduce that $ v_{\mathcal F} (s)$ is divergence-free. It follows that $\theta_{\mathcal F}$ is volume-preserving by Liouville's theorem. \par The main point, namely that for any $s \in (-\varepsilon,\varepsilon)$ the map $\theta_{\mathcal F}(s)$ sends ${\mathcal F}_{0}$ to ${\mathcal F}_{s}$, can be seen as follows. It suffices to prove that \begin{gather} \label{Tangence1} v_{\mathcal F} (s,\cdot) \cdot n = v_{\mathcal S} (s,\cdot) \cdot n \ \text{ on } \partial {\mathcal S}_{s}, \\ \label{Tangence2} v_{\mathcal F} (s,\cdot) \cdot n = 0 \ \text{ on } \partial \Omega. \end{gather} Using \eqref{F3} and \eqref{DefV}, we see that $v_{\mathcal F} = U_{\eta}[\mu]$ in some neighborhood of $\partial \Omega$, so \eqref{Tangence2} is a consequence of \eqref{UAuBordFixe}. 
Concerning \eqref{Tangence1}, from \eqref{UAuBord} we see that $U_{\eta}[\mu] - v_{\mathcal S} (0,\cdot)$ is tangent to $\partial {\mathcal S}$. It follows that $(\psi_{s})_{*}(U_{\eta}[\mu] - v_{\mathcal S} (0,\cdot))$ is tangent to $\partial {\mathcal S}_{s}$, which gives \eqref{Tangence1}. \par The other requirements, namely that $\theta_{\mathcal F}$ is orientation preserving and has the claimed regularity, are clearly satisfied. \par \ \par \noindent {\bf 2.} Conversely, given $(\sigma,\mu)$ in $T_{(\tau,\eta)} {\mathcal C}$, by definition of the tangent space, there is $\Theta= (\theta_{{\mathcal S}},\theta_{{\mathcal F}}) \in C^{1}((-\varepsilon,\varepsilon); {\mathcal C})$ such that $\Theta(0)=(\tau,\eta)$ and $\Theta '(0)= (\sigma , \mu)$. Let us check that \eqref{UDiv}-\eqref{UAuBordFixe} are satisfied. \par First, it is obvious that $\sigma=\theta'_{{\mathcal S}}(0) \in T_{\tau} SE(3)$. Next one can define \begin{equation*} v_{{\mathcal F}}(s,x):= \partial_{s} \theta_{{\mathcal F}}(s, \theta^{-1}_{{\mathcal F}}(s,x)), \end{equation*} as a vector field on ${\mathcal F}_{s}$. Then as \begin{equation} \label{TF} \partial_{s} \theta_{\mathcal F}(s, x) = v_{{\mathcal F}}(s,\theta_{\mathcal F}(s,x)), \end{equation} and since $\theta_{{\mathcal F}}$ is volume-preserving, by Liouville's theorem we infer that $\operatorname{div\,} v_{{\mathcal F}}=0$, which at $s=0$ gives \eqref{UDiv}. Using again \eqref{TF} and the fact that $\theta_{{\mathcal F}}$ sends ${\mathcal F}_{0}$ to ${\mathcal F}_{s}$, we also easily infer \eqref{UAuBord} and \eqref{UAuBordFixe}. $\Box$ \\ \ \par \subsection{Proof of Theorem \ref{Theo:Equi}} \label{EquiProof} We start with the following lemma. 
\begin{Lemma} \label{plusimple} For any $\phi =(\tau,\eta) \in {\mathcal L}$, for any ${\mathcal U} \circ \phi \in T_{\phi} {\mathcal L}$, \begin{align} \nonumber &D {\mathcal A} (\phi) \cdot ( {\mathcal U} \circ \phi ) = \\ \nonumber &\quad \int_{[0,T]} \Big( m \ell (t) \cdot \partial_{t} L_{\tau}[\sigma] (t) + {\mathcal J} [\tau] r (t) \cdot \partial_{t} R_{\tau}[\sigma] (t) + \int_{{\mathcal F}_0 } \partial_t \eta (t,x) \cdot \partial_t ( {\mathcal U}(t,\eta(t,x)) ) \, dx \Big) \, dt, \end{align} where $\ell$ and $r$ are given by \eqref{LRU}, $Q[\tau]$ is the linear part of $\tau$ and ${\mathcal J }[\tau]$ is given by \eqref{Jtau}. \end{Lemma} \begin{proof} Let us split the fluid flow and the solid one in \eqref{diff} and prove that \begin{align} \label{idplusimple} \int_{[0,T] \times {\mathcal S}_0 } \rho_{{\mathcal S}_0} \, \partial_t \tau \cdot \partial_t \sigma \, dx\, dt = \int_{[0,T]} \Big( m \ell \cdot \partial_{t} L_{\tau}[\sigma] + {\mathcal J} [\tau] r \cdot \partial_{t} R_{\tau}[\sigma] \Big) \, dt . \end{align} Now in order to prove \eqref{idplusimple} we first perform a change of variables to get \begin{align} \label{maissi} \int_{[0,T] \times {\mathcal S}_0 } \rho_{{\mathcal S}_0} \, \partial_t \tau \cdot \partial_t \sigma \, dx\, dt = \int_{[0,T] } \int_{{\mathcal S}(t) } \rho_{{\mathcal S}_0} (\tau_t^{-1}(x)) \, \partial_t \tau (t,\tau_t^{-1}(x)) \cdot \partial_t \sigma(t,\tau_t^{-1}(x)) \, dx\, dt . \end{align} Now, by definition of $\ell$ and $r$ we have \begin{align} \label{maissiphi} \partial_t \tau(t, \tau_t^{-1}(x)) = \ell + r \times (x-x_B ) . \end{align} In particular, taking the linear part of these affine transformations, and using that the linear part of $\partial_{t} \tau$ is $\partial_{t} Q[\tau]$, this entails that \begin{align} \label{maissiQ} \partial_t Q [\tau]= r \times Q[\tau] . 
\end{align} On the other hand, by definition of $L_{\tau}[\sigma]$ and $R_{\tau}[\sigma]$, we have \begin{align} \nonumber \sigma = L_{\tau}[\sigma] + R_{\tau}[\sigma] \times Q[\tau] (x-x_0 ) , \end{align} so that, differentiating in time and using \eqref{maissiQ}, we obtain \begin{align} \nonumber \partial_t \sigma = \partial_t L_{\tau}[\sigma] + (\partial_t R_{\tau}[\sigma]) \times Q[\tau] (x-x_0 ) + R_{\tau}[\sigma] \times (r \times Q[\tau] (x-x_0 )) . \end{align} Composing both sides of the previous equality with $\tau_t^{-1}$ on the right, we get \begin{align} \label{maissitau} \partial_t \sigma(t,\tau_t^{-1}(x)) = \partial_{t} L_{\tau}[\sigma] + \partial_{t} R_{\tau}[\sigma] \times (x-x_B ) + R_{\tau}[\sigma] \times \Big( r \times (x-x_B ) \Big) . \end{align} We now plug \eqref{maissiphi} and \eqref{maissitau} into the right-hand side of \eqref{maissi} to obtain \begin{align} \label{moinslong} \int_{[0,T] \times {\mathcal S}_0 } \rho_{{\mathcal S}_0} \partial_t \tau \cdot \partial_t \sigma \, dx\, dt = \int_{[0,T]} ( I_1 (t) + I_2 (t) ) dt , \end{align} with \begin{align*} I_1 (t) &:= \int_{{\mathcal S}(t) } \rho_{{\mathcal S}_0} (\tau_t^{-1}(x)) \Big( \ell (t) + r(t) \times (x-x_B (t) ) \Big) \cdot \Big( \partial_{t} L_{\tau}[\sigma] (t) + \partial_{t} R_{\tau}[\sigma] (t) \times (x-x_B (t)) \Big) \, dx , \\ I_2 (t) &:= \int_{{\mathcal S}(t) } \rho_{{\mathcal S}_0} (\tau_t^{-1}(x)) \Big( \ell(t) + r(t) \times (x-x_B(t) ) \Big) \cdot \Big[ R_{\tau}[\sigma] (t) \times \Big( r(t) \times (x-x_B (t)) \Big) \Big] \, dx . \end{align*} We use the identity \eqref{meti} with \begin{equation*} ( \ell_1, \, r_1, \, \ell_2, \, r_2 ) = (\ell (t),\, r (t), \, \partial_{t} L_{\tau}[\sigma] (t), \, \partial_{t} R_{\tau}[\sigma] (t) ), \end{equation*} to get \begin{align} \label{hihun} I_1 (t) = m \ell (t) \cdot \partial_{t} L_{\tau}[\sigma] (t) + {\mathcal J} [\tau(t)] r (t) \cdot \partial_{t} R_{\tau}[\sigma] (t) . 
\end{align} Finally we observe that \begin{align} \nonumber I_2 (t) &= \int_{{\mathcal S}(t) } \rho_{{\mathcal S}_0} (\tau_t^{-1}(x)) \, \ell (t) \cdot \Big( R_{\tau}[\sigma] (t) \times \big[ r (t) \times (x-x_B (t)) \big] \Big) \, dx \\ \nonumber &= \ell (t) \cdot \bigg( R_{\tau}[\sigma] (t) \times \Big( r(t) \times \Big[ \int_{{\mathcal S}(t) } \rho_{{\mathcal S}_0} (\tau_t^{-1}(x)) (x-x_B (t)) \,dx \Big] \Big) \bigg) \\ & = 0 , \label{hideux} \end{align} according to \eqref{premeti}. Combining \eqref{moinslong}, \eqref{hihun} and \eqref{hideux} yields \eqref{idplusimple}. \end{proof} \ \par \noindent \begin{proof}[Proof of Theorem \ref{Theo:Equi}] \ \par \ \par \noindent {\bf 1.} Let $(u,x_B , r)$ be a classical solution of the PDEs formulation on $[{0}, T]$ and let $(\tau,\eta)$ be the corresponding flows given by formulas \eqref{flowS} and \eqref{flowF}. Equation \eqref{Euler1a2} reads \begin{equation} \label{Leonard} \displaystyle \frac{\partial^2 \eta}{\partial t^2} + (\nabla p ) \circ \eta = 0 , \ \text{for} \ x\in \mathcal{F}_0 . \end{equation} Let us now consider $(\sigma,\mu)$ in $T_{(\tau,\eta)} {\mathcal L}$. Using \eqref{Leonard} and performing a change of variables, we get \begin{equation*} - \int_{{\mathcal F}_{0} } \frac{\partial^2 \eta}{\partial t^2} \cdot U_{\eta}[\mu] \circ \eta \, dx = \int_{{\mathcal F} (t) } \nabla p \cdot U_{\eta}[\mu] \, dx. \end{equation*} Now using \eqref{UDiv2}, \eqref{UAuBordFixe2} and Green's formula we deduce \begin{equation*} - \int_{{\mathcal F}_{0} } \frac{\partial^2 \eta}{\partial t^2} \cdot U_{\eta}[\mu] \circ \eta \, dx = \int_{\partial \mathcal{S}(t)} p \, U_{\eta}[\mu] \cdot n \, d \Gamma. 
\end{equation*} Now using \eqref{UAuBord2} and then \eqref{Solide1}-\eqref{Solide2}, we obtain \begin{eqnarray*} - \int_{{\mathcal F}_{0} } \frac{\partial^2 \eta}{\partial t^2} \cdot U_{\eta}[\mu] \circ \eta \, dx &= & \int_{\partial \mathcal{S}(t)} p \, \big ( L_{\tau}[\sigma] + R_{\tau}[\sigma] \times (x-x_{B}(t)) \big)\cdot n \, d \Gamma(x) \\ &= & m x''_B (t) \cdot L_{\tau}[\sigma] + ({\mathcal J} [\tau ] (t) r (t))' \cdot R_{\tau}[\sigma]. \end{eqnarray*} Then we integrate by parts over $[0,T]$ and conclude with Lemma \ref{plusimple} that $(\tau,\eta) $ is a geodesic on ${\mathcal L}$. \par \ \par \noindent {\bf 2.} Conversely, let $\phi \ \in {\mathcal L}$ be a geodesic. By Lemma \ref{plusimple}, this means that for any $(\sigma,\mu) \in T_{(\tau,\eta)} {\mathcal L}$, one has \begin{equation} \label{Var} \int_{[0,T]} \Big( m \ell(t) \cdot \partial_{t} L_{\tau}[\sigma] (t) + {\mathcal J} [\tau(t) ] r (t) \cdot \partial_{t} R_{\tau}[\sigma] (t) + \int_{{\mathcal F}_0 } \partial_t \eta (t,x) \cdot \partial_t ( U_{\eta}[\mu](t,\eta(t,x)) ) \, dx \Big) \, dt =0. \end{equation} We first use \eqref{Var} with $\sigma=0$. We consider $w : [0,T] \times {\mathcal F}_{t} \rightarrow \mathbb{R}^{3}$ satisfying $\operatorname{div\,}(w)=0$ in ${\mathcal F}_{t}$, $w \cdot n=0$ on $\partial {\mathcal F}_{t}$ and $w(0,\cdot)=0$ and $w(T,\cdot)=0$. Then $(\sigma,\mu) \in T_{(\tau,\eta)} {\mathcal L}$ for $\sigma=0$ and $\mu(t,x):= w(t,\eta(t,x))$. Consequently, one has \begin{align*} 0 &= \int_{[0,T]} \int_{{\mathcal F}_0 } \partial_t \eta (t,x) \cdot \partial_t (w(t,\eta(t,x))) \, dx \, dt \\ &= - \int_{[0,T]} \int_{{\mathcal F}_0 } \partial^{2}_t \eta (t,x) \cdot w(t,\eta(t,x)) \, dx \, dt \\ &= - \int_{[0,T]} \int_{{\mathcal F}_t } \partial^{2}_t \eta (t,\eta^{-1}(t,x)) \cdot w(t,x) \, dx \, dt. \end{align*} It follows that $\partial^{2}_t \eta (t,\eta^{-1}(t,\cdot))$ is a gradient field in ${\mathcal F}_{t}$, which we denote by $-\nabla p$. 
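The last step relies on a classical fact, which we record for the reader's convenience (a standard consequence of the Helmholtz decomposition, not detailed above). Localizing in time, the previous identity gives, for each $t$, \begin{equation*} \int_{{\mathcal F}_t } v(x) \cdot w(x) \, dx = 0 \ \text{ for all } w \text{ with } \operatorname{div\,}(w)=0 \ \text{ in } {\mathcal F}_{t} \ \text{ and } \ w \cdot n =0 \ \text{ on } \partial {\mathcal F}_{t}, \end{equation*} where $v:= \partial^{2}_t \eta (t,\eta^{-1}(t,\cdot))$. Writing $v = \nabla q + w_0$ with $\operatorname{div\,}(w_0)=0$ in ${\mathcal F}_{t}$ and $w_0 \cdot n =0$ on $\partial {\mathcal F}_{t}$, and noting that such fields $w$ are $L^2$-orthogonal to gradients, the choice $w=w_0$ forces $w_0 =0$, so that $v$ is indeed a gradient. 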
Going back to \eqref{Var}, we get that for general $(\sigma,\mu) \in T_{(\tau,\eta)} {\mathcal L}$, one has \begin{equation*} \int_{[0,T]} \Big( m \ell(t) \cdot \partial_{t} L_{\tau}[\sigma] (t) + {\mathcal J} [\tau(t) ] r (t) \cdot \partial_{t} (R_{\tau}[\sigma]) (t) + \int_{{\mathcal F}_t } \nabla p(t,x) \cdot U_{\eta}[\mu](t,x) \, dx \Big) \, dt =0. \end{equation*} Using Green's formula for the last integral and \eqref{UAuBord2}-\eqref{UAuBordFixe2}, we deduce that for any $\sigma$, \begin{multline*} \int_{[0,T]} \bigg( m \ell(t) \cdot \partial_{t} L_{\tau}[\sigma] (t) + {\mathcal J} [\tau(t) ] r (t) \cdot \partial_{t} (R_{\tau}[\sigma]) (t) \\ + L_{\tau}[\sigma] \cdot \int_{\partial {\mathcal S}(t) } p(t,x) n(x) \, d \Gamma + R_{\tau}[\sigma] \cdot \int_{\partial {\mathcal S}(t) } p(t,x) (x-x_{B}(t)) \times n(x) \, d \Gamma \bigg) \, dt =0. \end{multline*} Now we integrate by parts in time the first two terms. Since this holds for any $\sigma$, and hence for arbitrary $L_{\tau}[\sigma]$ and $R_{\tau}[\sigma]$, we infer \eqref{Solide1}-\eqref{Solide2}. Equations \eqref{Euler3a2}-\eqref{Euler3b} then follow from the very definition of ${\mathcal L}$. \end{proof} \ \par \noindent {\bf Acknowledgements.} The authors were partially supported by the Agence Nationale de la Recherche, Project CISIFS, grant ANR-09-BLAN-0213-02. \end{document}
\begin{document} \baselineskip 13.75pt \title[ ]{An extension of the Glauberman ZJ-Theorem } \author{{M.Yas\.{I}r} K{\i}zmaz } \address{Department of Mathematics, Bilkent University, 06800 Bilkent, Ankara, Turkey} \email{yasirkizmaz@bilkent.edu.tr} \subjclass[2010]{20D10, 20D20} \keywords{controlling fusion, ZJ-theorem, $p$-stable groups} \maketitle \begin{abstract} Let $p$ be an odd prime and let $J_o(X)$, $J_r(X)$ and $J_e(X)$ denote the three different versions of Thompson subgroups for a $p$-group $X$. In this article, we first prove an extension of Glauberman's replacement theorem (\cite[Theorem 4.1]{Gla}). Secondly, we prove the following: Let $G$ be a $p$-stable group and $P\in Syl_p(G)$. Suppose that $C_G(O_{p}(G))\leq O_{p}(G)$. If $D$ is a strongly closed subgroup in $P$, then $Z(J_o(D))$, $\Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ are normal subgroups of $G$. Thirdly, we show the following: Let $G$ be a $\text{Qd}(p)$-free group and $P\in Syl_p(G)$. If $D$ is a strongly closed subgroup in $P$, then the normalizers of the subgroups $Z(J_o(D))$, $\Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ control strong $G$-fusion in $P$. We also prove a similar result for a $p$-stable and $p$-constrained group. Lastly, we give a $p$-nilpotency criterion, which is an extension of the Glauberman-Thompson $p$-nilpotency theorem. \end{abstract} \section{Introduction} Throughout the article, all groups considered are finite. Let $P$ be a $p$-group. For each abelian subgroup $A$ of $P$, let $m(A)$ be the rank of $A$, and let $d_r(P)$ be the maximum of the numbers $m(A)$. Similarly, $d_o(P)$ is defined to be the maximum of orders of abelian subgroups of $P$ and $d_e(P)$ is defined to be the maximum of orders of elementary abelian subgroups of $P$.
Define $$\mathcal A_r(P)=\{A\leq P \mid A \textit{ is abelian} \textit{ and } m(A)=d_r(P) \}, $$ $$\mathcal A_o(P)=\{A\leq P \mid A \textit{ is abelian} \textit{ and } |A|=d_o(P) \}$$ and $$\mathcal A_e(P)=\{A\leq P \mid A \textit{ is elementary abelian} \textit{ and } |A|=d_e(P) \}.$$ Now we are ready to define three different versions of the Thompson subgroup: $J_r(P)$, $J_o(P)$ and $J_e(P)$ are the subgroups of $P$ generated by all members of $\mathcal A_r(P), \mathcal A_o(P)$ and $\mathcal A_e(P)$, respectively. Thompson proved his normal complement theorem according to $J_r(P)$ in \cite{Thmp}, which states that ``if $N_G(J_r(P))$ and $C_G(Z(P))$ are both $p$-nilpotent and $p$ is odd then $G$ is $p$-nilpotent". Later Thompson introduced ``a replacement theorem" and a subgroup similar to $J_o(P)$ in \cite{Thmp2}. Due to the compatibility of the replacement theorem with $J_o(P)$, Glauberman worked with $J_o(P)$; indeed, he extended the replacement theorem of Thompson for odd primes (see \cite[Theorem 4.1]{Gla}). We should note that Glauberman's replacement theorem is one of the important ingredients of the proof of his ZJ-theorem.\\\\ \textbf{Theorem (Glauberman).} Let $p$ be an odd prime, $G$ be a $p$-stable group, and $P\in Syl_p(G)$. Suppose that $C_G(O_{p}(G))\leq O_{p}(G)$. Then $Z(J_o(P))$ is a characteristic subgroup of $G$.\\\\ There are many important consequences of his theorem. One of the striking ones is that $N_G(Z(J_o(P)))$ controls strong $G$-fusion in $P$ when $G$ does not involve a subquotient isomorphic to $\text{Qd}(p)$ (see \cite[Theorem B]{Gla}). Another consequence of his theorem is an improvement of the Thompson normal complement theorem. This result says that if $N_G(Z(J_o(P)))$ is $p$-nilpotent and $p$ is odd then $G$ is $p$-nilpotent. There is still active research on properties of Thompson's subgroups. A recent article \cite{Pt} describes algorithms for determining $J_e(P)$ and $J_o(P)$.
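As a concrete illustration of the subgroups just defined, consider the following standard example.

```latex
% Example: the extraspecial group of order p^3 and exponent p (p odd).
Let $P$ be the extraspecial group of order $p^{3}$ and exponent $p$, so that
$|Z(P)|=p$ and $P/Z(P)$ is elementary abelian of order $p^{2}$. If $A\leq P$
is abelian of order $p^{2}$, then $Z(P)\leq A$, since otherwise $AZ(P)$ would
be an abelian subgroup of order $p^{3}$, contradicting the fact that $P$ is
nonabelian. Hence the abelian subgroups of maximal order are exactly the
$p+1$ maximal subgroups of $P$, each elementary abelian of order $p^{2}$, so
$$d_{o}(P)=d_{e}(P)=p^{2}, \qquad d_{r}(P)=2,$$
and $\mathcal A_{o}(P)=\mathcal A_{e}(P)=\mathcal A_{r}(P)$ consists of these
$p+1$ maximal subgroups. Since they generate $P$, we obtain
$$J_{o}(P)=J_{r}(P)=J_{e}(P)=P \quad \text{and} \quad Z(J_{x}(P))=Z(P)
\ \text{ for } x\in\{o,r,e\}.$$
```

In particular, the three versions of the Thompson subgroup, which differ in general, may well coincide.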
We also refer to \cite{Pt} and \cite{Kho} for more extensive discussions of the literature and of replacement theorems that we do not state here. It deserves to be mentioned separately that Glauberman obtained remarkably more general versions of the Thompson replacement theorem in his later works (see \cite{Gla4} and \cite{Gla3}). We should also note that even though \cite[Theorem 1]{Pt} is attributed to the Thompson replacement theorem \cite{Thmp} in \cite{Pt}, it seems that the correct reference is the Isaacs replacement theorem (see \cite{Isc2}). In \cite{Crv}, the ZJ-theorem is given according to $J_e(P)$ (see \cite[Theorem 1.21, Definition 1.16]{Crv}). Although it might be natural to think that the Glauberman ZJ-theorem is also correct for ``$J_e(P)$ and $J_r(P)$", there is no reference verifying this. We should also mention that Isaacs proved the Thompson normal complement theorem according to $J_e(P)$ in his book (see \cite[Chapter 7]{Isc}). However, the ZJ-theorem is not contained in his book. One of the purposes of this article is to generalize the Glauberman replacement theorem (see \cite[Theorem 4.1]{Gla}), which was used by Glauberman in the proof of his ZJ-theorem. We also note that our replacement theorem is an extension of the Isaacs replacement theorem (see \cite{Isc2}) when we consider odd primes. The following is the first main theorem of our article: \begin{theorem}\label{A} Let $G$ be a $p$-group for an odd prime $p$ and $A\leq G$ be abelian. Suppose that $B\leq G$ is of class at most $2$ such that $B'\leq A$, $A\leq N_G(B)$ and $B\nleq N_G(A)$. Then there exists an abelian subgroup $A^*$ of $G$ such that \begin{enumerate}[label=(\alph*)] \item\textit{ $|A|=|A^*|,$} \item \textit{$A\cap B <A^*\cap B,$} \item \textit{$A^*\leq N_G(A)\cap A^G,$} \item \textit{ the exponent of $A^*$ divides the exponent of $A$.
Moreover, $rank(A)\leq rank(A^*)$.} \end{enumerate} \end{theorem} One of the main differences from \cite[Theorem 4.1]{Gla} is that we are not taking $A$ to be of maximal order. By removing the order condition, we obtain more flexibility to apply the replacement theorem. Since our replacement theorem is easily applicable to all versions of Thompson subgroups and there is a gap in the literature as to whether the ZJ-theorem holds for the other versions of Thompson subgroups, we shall prove our extensions of the ZJ-theorem for all different versions of Thompson subgroups. \begin{definition*}\cite[pg 22]{Gla2}\label{def:p-stable} A group $G$ is called \textbf{$p$-stable} if it satisfies the following condition: Whenever $P$ is a $p$-subgroup of $G$, $g\in N_G(P)$ and $[P,g,g]=1$, then the coset $gC_G(P)$ lies in $O_p(N_G(P)/C_G(P))$. \end{definition*} Let $K$ be a $p$-group. We write $\Omega(K)$ to denote the subgroup $\langle \{x\in K\mid x^{p}=1 \}\rangle$ of $K$. Note that $\text{Qd}(p)$ is defined to be a semidirect product of $\mathbb Z_p \times \mathbb Z_p$ with $SL(2,p)$ by the natural action of $SL(2,p)$ on $\mathbb Z_p \times \mathbb Z_p$. Here is the second main theorem of the article: \begin{theorem}\label{B} Let $p$ be an odd prime, $G$ be a $p$-stable group, and $P\in Syl_p(G)$. Suppose that $C_G(O_{p}(G))\leq O_{p}(G)$. If $D$ is a strongly closed subgroup in $P$ then $Z(J_o(D)), \ \Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ are normal subgroups of $G$. \end{theorem} We prove Theorem B mainly by following the original proof given by Glauberman and with the help of Theorem A. When we take $D=P$, we obtain that $Z(J_o(P)),\Omega(Z(J_r(P)))$ and $\Omega(Z(J_e(P)))$ are characteristic subgroups of $G$ under the hypothesis of Theorem B. Both $Z(J_r(P))$ and $Z(J_e(P))$ need the extra operation ``$\Omega$", and it does not seem quite possible to remove ``$\Omega$" by the method used here.
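To see how the $\text{Qd}(p)$-free hypothesis interacts with $p$-stability, it is instructive to check directly that $\text{Qd}(p)$ itself is not $p$-stable; the following is the standard computation.

```latex
% Qd(p) is not p-stable: a transvection acts quadratically on the natural module.
Write $G=\text{Qd}(p)=V\rtimes SL(2,p)$, where $V=\mathbb Z_p\times \mathbb Z_p$
is the natural module, and let
$$g=\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\in SL(2,p).$$
Since $(g-1)^{2}=0$ on $V$, we have $[V,g,g]=1$, and clearly $g\in N_G(V)=G$.
On the other hand, $SL(2,p)$ acts faithfully on $V$, so $C_G(V)=V$ and
$N_G(V)/C_G(V)\cong SL(2,p)$. As $O_p(SL(2,p))=1$ while $gC_G(V)$ is a
nontrivial $p$-element, the coset $gC_G(V)$ does not lie in
$O_p(N_G(V)/C_G(V))$, so the $p$-stability condition fails for the
$p$-subgroup $V$.
```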
\begin{corollary}\label{C} Let $p$ be an odd prime, $G$ be a $p$-stable group, and $P\in Syl_p(G)$. Suppose that $C_G(O_{p}(G))\leq O_{p}(G)$ and $D$ is a strongly closed subgroup in $P$. If the exponent of $\Omega(D)$ is $p$, then $Z(J_o(\Omega(D))), \ Z(J_r(\Omega(D)))$ and $Z(J_e(\Omega(D)))$ are normal subgroups of $G$. \end{corollary} \begin{proof}[\textbf{Proof}] Suppose that the exponent of $\Omega(D)$ is $p$. Let $U\leq \Omega(D)$ and $U^g\leq P$ for some $g\in G$. Then we see that $U^g \leq D$ as $D$ is strongly closed in $P$. Since the exponent of $U$ is $p$, we get that $U^g\leq \Omega(D)$. Thus $\Omega(D)$ is strongly closed in $P$, and so $Z(J_o(\Omega(D)))\lhd G$ by Theorem B. On the other hand, $J_o(\Omega(D))=J_e(\Omega(D))=J_r(\Omega(D))$ since the exponent of $\Omega(D)$ is $p$. Then the result follows. \end{proof} Note that the condition on the exponent of $\Omega(D)$ is naturally satisfied if $\Omega(D)$ is a regular $p$-group, and it is well known that $p$-groups of class at most $p-1$ are regular. Thus, we may apply Corollary C when $|\Omega(D)|\leq p^p$, in particular. One of the advantages of working with $\Omega(D)$ is that $J_x(\Omega(D))$ can be determined more easily than $J_x(D)$ for most $p$-groups, for $x\in \{o,r,e\}$. \begin{definition*}\cite[pg 268]{Gor} A group $G$ is called \textbf{ $p$-constrained} if $C_G(U)\leq O_{p',p}(G)$ for a Sylow $p$-subgroup $U$ of $O_{p',p}(G)$. \end{definition*} \begin{theorem}\label{E} Let $p$ be an odd prime, $G$ be a $p$-stable group, and $P\in Syl_p(G)$. Assume that $N_G(U)$ is $p$-constrained for each nontrivial subgroup $U$ of $P$. If $D$ is a strongly closed subgroup in $P$ then the normalizers of the subgroups $Z(J_o(D))$, $\Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ control strong $G$-fusion in $P$.
\end{theorem} \begin{remark} In \cite{H}, it is proven that if $G$ is $p$-stable and $p>3$ then $G$ is $p$-constrained by using the classification of finite simple groups (see Proposition 2.3 in \cite{H}). Thus, the assumption ``$N_G(U)$ is $p$-constrained for each nontrivial subgroup $U$ of $P$'' is automatically satisfied when $p>3$ and $G$ is a $p$-stable group. \end{remark} \begin{theorem}\label{F} Let $p$ be an odd prime, $G$ be a $\text{Qd}(p)$-free group, and $P\in Syl_p(G)$. If $D$ is a strongly closed subgroup in $P$ then the normalizers of the subgroups $Z(J_o(D))$, $\Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ control strong $G$-fusion in $P$. \end{theorem} \begin{remark} In Theorem \ref{F}, if we take $D=P$, then the proof of this special case follows by Theorem \ref{B} and \cite[Theorem 6.6]{Gla2}. However, the general case requires some extra work. Indeed, we shall prove Theorem \ref{F} by constructing an appropriate section conjugacy functor depending on $D$, and applying \cite[Theorem 6.6]{Gla2}. \end{remark} The following is an easy corollary of Theorem \ref{F}. \begin{corollary} Let $p$ be an odd prime, $G$ be a $\text{Qd}(p)$-free group, $P\in Syl_p(G)$, and $D$ be a strongly closed subgroup in $P$. If the exponent of $\Omega(D)$ is $p$, then the normalizers of the subgroups $Z(J_o(\Omega(D))),Z(J_r(\Omega(D)))$ and $Z(J_e(\Omega(D)))$ control strong $G$-fusion in $P$. \end{corollary} \begin{proof}[\textbf{Proof}] As in the proof of Corollary \ref{C}, we see that $\Omega(D)$ is strongly closed in $P$ since the exponent of $\Omega(D)$ is $p$. Thus, $J_o(\Omega(D))=J_r(\Omega(D))=J_e(\Omega(D))$ and the result follows by Theorem \ref{F}. \end{proof} Lastly, we state an extension of the Glauberman-Thompson $p$-nilpotency theorem. \begin{theorem}\label{H} Let $p$ be an odd prime, $G$ be a group and $P\in Syl_p(G)$. If $D$ is a strongly closed subgroup in $P$, then $G$ is $p$-nilpotent provided that the normalizer of one of the subgroups $Z(J_o(D))$, $\Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ is $p$-nilpotent.
\end{theorem} \section{The proof of Theorem A} We first state the following lemma, which is extracted from the proof of the Glauberman replacement theorem. \begin{lemman}[Glauberman]\label{Glb} Let $p$ be an odd prime and $G$ be a $p$-group. Suppose that $G=BA$ where $B$ is a normal subgroup of $G$ such that $B'\leq Z(G)$ and $A$ is an abelian subgroup of $G$ such that $[B,A,A,A]=1$. Then $[b,A]$ is an abelian subgroup of $G$ for each $b\in B$. \end{lemman} \begin{proof}[\textbf{Proof}] Let $x,y\in A$. Our aim is to show that $[b,x]$ and $[b,y]$ commute. Set $u=[b,y]$. If we apply the Hall-Witt identity to the triple $(b,x^{-1},u)$, we obtain that $$[b,x,u]^{x^{-1}}[x^{-1},u^{-1},b]^{u}[u,b^{-1},x^{-1}]^b=1.$$ Note that the above commutators of weight $3$ lie in the center of $G$ since $B'\leq Z(G)$. Thus we may remove the conjugations in the above equation. Moreover, $[u,b^{-1},x^{-1}]=1$ as $[u,b^{-1}]\in B'$. Thus we obtain that $[b,x,u][x^{-1},u^{-1},b]=1$, and so $$[b,x,u]=[x^{-1},u^{-1},b]^{-1}.$$ Since $[x^{-1},u^{-1},b]=[[x^{-1},u^{-1}],b]\in Z(G)$, we see that $$[x^{-1},u^{-1},b]^{-1}=[[x^{-1},u^{-1}],b]^{-1}=[[x^{-1},u^{-1}]^{-1},b]=[[u^{-1},x^{-1}],b]$$ by \cite[Lemma 2.2.5(ii)]{Gor}. As a consequence, we get that $[b,x,u]=[[u^{-1},x^{-1}],b]$. By inserting $u=[b,y]$, we obtain $$[[b,x],[b,y]]=[[[b,y]^{-1},x^{-1}],b].$$ Now set $\overline G=G/B'$. Then clearly $\overline B$ is abelian. It follows that $[\overline B,\overline A, \overline A]\leq Z(\overline G)$ since $[B,A,A,A]=1$ and $\overline B$ is abelian. Then we have $$[[b,y]^{-1},x^{-1}]\equiv [[b,y]^{-1},x]^{-1}\equiv [[b,y],x] \pmod{B'} $$ by applying \cite[Lemma 2.2.5(ii)]{Gor} to $\overline G$. Since $x$ and $y$ commute and $\overline {[b,A]}\subseteq \overline B$ is abelian, we see that $$[b,y,x]\equiv [b,x,y] \pmod{B'}$$ by \cite[Lemma 2.2.5(i)]{Gor}. Finally we obtain $$[[b,x],[b,y]]=[[[b,y]^{-1},x^{-1}],b]= [[[b,y],x],b]=[[b,x,y],b].$$ By symmetry, we also have that $[[b,y],[b,x]]=[[b,x,y],b]$.
Then it follows that $[[b,y],[b,x]]=[[b,y],[b,x]]^{-1}$, and so $[[b,x],[b,y]]=1$ since $G$ is of odd order. \end{proof} \begin{lemman}\label{rank lemma} Let $A$ be an abelian $p$-group and $E$ be the largest elementary abelian subgroup of $A$. Then $rank(E)=rank(A)$. \end{lemman} \begin{proof}[\textbf{Proof}] Consider the homomorphism $\phi:A\to A$ given by $\phi(a)=a^p$ for each $a\in A$. Notice that $\phi(A)=\Phi(A)$ and $E=Ker(\phi)$, and so $|A/\Phi(A)|=|E|$. Since both $E$ and $A/\Phi(A)$ are elementary abelian groups of the same order, we get $rank(E)=rank(A/\Phi(A))$. On the other hand, $rank(A/\Phi(A))=rank(A)$ and the result follows. \end{proof} \begin{proof}[\textbf{Proof of Theorem A}] We proceed by induction on the order of $G$. We can certainly assume that $G=AB$. Since $A$ is not normal in $G$, there exists a maximal subgroup $M$ of $G$ such that $A\leq M$. Clearly $A$ normalizes $M\cap B$ as both $M$ and $B$ are normal in $G$. Suppose that $M\cap B$ does not normalize $A$. By induction applied to $M$, there exists a subgroup $A^*$ of $M$ such that $A^*$ satisfies the conclusion of the theorem. Then $A^*$ also satisfies $(a),\ (c)$ and $(d)$ in $G$. Moreover, $A\cap (M\cap B)=A\cap B< A^*\cap B$, and so $G$ also satisfies the theorem. Hence, we can assume that $M\cap B\leq N_G(A)$. Notice that $M=M\cap AB=A(M\cap B)$, and so $M=N_G(A)$. Clearly $M\cap B$ is a maximal subgroup of $B$. Then $A$ acts trivially on $B/(M\cap B)$, and so $[B,A]\leq M=N_G(A)$. Thus, we see that $[B,A,A]\leq A$, which yields $[B,A,A,A]=1.$ Moreover, we have that $B'\leq Z(G)$ since $B'\leq A$ and $B'\leq Z(B)$. It follows that $[b,A]$ is abelian for any $b\in B$ by Lemma \ref{Glb}. Let $b\in B\setminus M$. Then $A\neq A^b\lhd M$. Set $H=AA^b$ and $Z=A\cap A^b$. Then clearly $H$ is a group and $Z\leq Z(H)$. On the other hand, $H$ is of class at most $2$ since $H/Z$ is abelian. Note that the identity $(xy)^n=x^ny^n[x,y]^{\frac{n(n-1)}{2}}$ holds for all $x,y\in H$ as $H$ is of class at most $2$ and of odd order.
It follows that the exponent of $H$ is the same as the exponent of $A$. Now we shall show that $H\cap B$ is abelian. First we claim that $H\cap B=(A\cap B)[A,b]$. Clearly, we have $[A,b]\subseteq H\cap B$ since $H=AA^b$. It follows that $(A\cap B)[A,b]\subseteq H\cap B$ as $A\cap B \leq H\cap B$. Next we obtain the reverse inclusion. Let $x\in H\cap B$. Then $x=ac^b$ for $ a,c\in A$ such that $ac^b\in B$. Since $B\lhd G$, we see that $[c,b]\in B$, and so $ac\in B$ as $ac[c,b]=ac^b\in B$. It follows that $ac\in A\cap B$ and $x=ac[c,b]\in (A\cap B)[A,b]$, which proves the equality. Since $B'\leq A$, we see that $A\cap B\lhd B$. Then $A\cap B=A^b\cap B$ and hence $A\cap B=Z\cap B$. In particular, we see that $A\cap B\leq Z\leq Z(H)$. It follows that $H\cap B=(A\cap B)[A,b]$ is abelian since $[A,b]$ is an abelian subgroup of $H$ and $(A\cap B)\leq Z(H)$. Now set $A^*=(H\cap B)Z$. Note that $A^*$ is abelian as $H\cap B$ is abelian and $Z\leq Z(H)$. Now we shall show that $A^*$ is the desired subgroup. Clearly, the exponent of $A^*$ divides the exponent of $H$, which shows the first part of $(d)$. Note that $A<H$ and $H=H\cap AB=A(H\cap B)$, and so $H\cap B>A\cap B$. It follows that $A^*\cap B\geq H\cap B>A\cap B$, which shows $(b)$. On the other hand, $$A^*\leq H=AA^b\leq M\cap A^G=N_G(A)\cap A^G,$$ which shows $(c)$. It remains to prove $(a)$ and the second part of $(d)$. Since $A^*=(H\cap B)Z$, we have $$|A^*|=\dfrac{|H\cap B||Z|}{|Z\cap B|}=\dfrac{|H\cap B||Z|}{|A\cap B|}.$$ On the other hand, $H=AA^b=A(H\cap B)$. Hence we have $$\dfrac{|A A^b|}{|A^b|}=\dfrac{|A|}{|A\cap A^b|}=\dfrac{|A|}{|Z|}=\dfrac{|H\cap B|}{|A\cap B|}.$$ Thus, we see that $|A|=|A^*|$ as desired. Now let $E$ be the largest elementary abelian subgroup of $A$. We shall observe that $E$ and $A$ enjoy some similar properties. Note that $E\lhd M=N_G(A)$ since $E$ is a characteristic subgroup of $A$. Hence, $EE^b$ is a group. Now set $H_1=EE^b$, $Z_1=E\cap E^b$ and $E^*=(H_1\cap B)Z_1$.
First observe that $Z_1\leq Z(H_1)$, and so $H_1$ is of class at most $2$. It follows that the exponent of $E^*$ is $p$ since $H_1$ is of odd order. Thus, $E^*$ is elementary abelian as $E^*\leq A^*$ and $A^*$ is abelian. Note also that $E\cap B=E\cap (A\cap B)$, and so $E\cap B$ is characteristic in $A\cap B$. Then we see that $E\cap B\lhd B$ as $A\cap B\lhd B$. This also yields that $E\cap B=(E\cap B)^b=E^b\cap B$, and hence $E\cap B=Z_1\cap B$. Lastly, observe that $H_1=EE^b=EE^b\cap EB=E(H_1\cap B)$. Now we can show that $|E|=|E^*|$ by the same method used to show that $|A|=|A^*|$. Then we see that $ rank(A)=rank(E)=rank(E^*)\leq rank(A^*)$ by Lemma \ref{rank lemma}. \end{proof} \section{The proof of Theorem \ref{B}} \begin{lemman}Let $P$ be a $p$-group and $R$ be a subgroup of $P$. If there exists $A\in \mathcal A_{x}(P)$ such that $A\leq R$, then $J_x(R)\leq J_x(P)$ for $x\in \{ o,r,e\}$. Moreover, in this case $J_x(P)=J_x(R)$ if and only if $J_x(P)\subseteq R$ for $x\in \{ o,r,e\}$. \end{lemman} The above lemma is an easy observation and we shall use it without any further reference. \begin{lemman}\cite[Theorem 8.1.3]{Gor} \label{opg} Let $G$ be a $p$-stable group such that $C_G(O_p(G))\leq O_p(G)$. If $P\in Syl_p(G)$ and $A$ is an abelian normal subgroup of $P$ then $A\leq O_p(G)$. \end{lemman} \begin{proof}[\textbf{Proof}] Since $O_p(G)$ normalizes $A$, we see that $[O_p(G),A,A]=1$. Write $C=C_G(O_p(G))$. Then we have $AC/C\leq O_p(G/C)$. Note that $O_p(G/C)=O_p(G)/C $ since $C\leq O_p(G)$. It follows that $A\leq O_p(G)$. \end{proof} \begin{definition}\label{strongly closed set} Let $G$ be a group, $P\in Syl_p(G)$ and $D$ be a nonempty subset of $P$. We say that $D$ is a strongly closed subset in $P$ (with respect to $G$) if for all $U\subseteq D$ and $g\in G$ such that $U^g\subseteq P$, we have $U^g\subseteq D$. \end{definition} \begin{lemman}\label{strogly closed} Let $G$ be a group and $P\in Syl_p(G)$.
Suppose that $D$ is a strongly closed subset in $P$. If $N\lhd G$ and $D\cap N$ is nonempty then $D\cap N$ is also a strongly closed subset in $P$. Moreover, $G=N_G(D\cap N)N$. \end{lemman} \begin{proof}[\textbf{Proof}] Let $Q=P\cap N$ and write $D^*=D\cap N$. Then we see that $Q\in Syl_p(N)$. Let $U\subseteq D^*$ and $g\in G$ such that $U^g\subseteq P$. It follows that $U^g\subseteq D$ as $U\subseteq D$ and $D$ is strongly closed in $P$. Since $N\lhd G$, we see that $U^g\subseteq N$, which yields that $U^g\subseteq N\cap D=D^*$ and shows the first part. We already know that $G=N_G(Q)N$ by the Frattini argument. Thus, it is enough to show that $N_G(Q)\leq N_G(D^*)$. Let $x\in N_G(Q)$. Then $(D^*)^x\subseteq Q\leq P$. Since $D^*$ is strongly closed in $P$, we see that $(D^*)^x= D^*$. It follows that $x\in N_G(D^*)$, as desired. \end{proof} \begin{lemman}\label{crucial lemma} Let $P$ be a $p$-group, $p$ be odd, and let $B,N\unlhd P$. Suppose that $B$ is of class at most $2$ and $B'\leq A$ for all $A\in \mathcal A_x(N)$. Then there exists $A\in \mathcal A_x(N)$ such that $B$ normalizes $A$, for $x\in \{o,r,e\}.$ \end{lemman} \begin{proof}[\textbf{Proof}] First suppose that $x=e$. Now choose $A\in \mathcal A_e(N)$ such that $A\cap B$ is maximum possible. If $B$ does not normalize $A$, then there exists an abelian subgroup $A^*\leq P$ such that $|A^*|=|A|$, $A^*\leq A^P\cap N_P(A)$, $A^*\cap B>A\cap B$ and the exponent of $A^*$ divides that of $A$ by Theorem A. We first observe that $A^*$ is an elementary abelian subgroup as the exponent of $A$ is $p$. Since $A\leq N\lhd P$, we see that $A^*\leq A^P\leq N$. Hence, $A^*\in \mathcal A_e(N)$, which contradicts the maximality of $A\cap B$. Thus $B$ normalizes $A$ as desired. Now suppose that $x=r$ and let $ A\in \mathcal A_r(N)$. Then we apply Theorem A in a similar way and find $A^*\leq N $ with $rank(A^*)\geq rank(A)$. Since the rank of $A$ is maximal possible in $N$, we see that $A^*\in \mathcal A_r(N)$.
The rest of the argument follows similarly. The case $x=o$ also follows in a similar fashion. \end{proof} \begin{theoremn}\label{maim thm} Let $p$ be an odd prime, $G$ be a $p$-stable group, and $P\in Syl_p(G)$. Let $D$ be a strongly closed subset in $P$ and $B$ be a normal $p$-subgroup of $G$. Write $K=\langle D \rangle$, $Z_o=Z(J_o(K)), \ Z_r=\Omega(Z(J_r(K))) \textit{ and } Z_e=\Omega(Z(J_e(K)))$. If all members of $\mathcal A_x(K)$ are included in the set $D$, then $Z_x\cap B\lhd G$ for $x\in \{o,r,e\}$. \end{theoremn} \begin{proof}[\textbf{Proof}] Write $J(X)=J_e(X)$ for any $p$-subgroup $X$ and set $Z=Z_e$. We can clearly assume that $B\neq 1$. Let $G$ be a counterexample, and choose $B$ to be the smallest possible normal $p$-subgroup contradicting the theorem. Notice that $K\unlhd P$ as $D$ is a normal subset of $P$, and so $Z\unlhd P$. In particular, $B$ normalizes $Z$. Set $B_1=(Z\cap B)^G$. Clearly $B_1\leq B$. Suppose that $B_1<B$. By our choice of $B$, we get $Z\cap B_1\lhd G$. Since $Z\cap B\leq B_1$, we have $Z\cap B\leq Z\cap B_1\leq Z\cap B$, and hence $Z\cap B= Z\cap B_1\lhd G$, which contradicts the choice of $B$. This shows that $B=B_1=(Z\cap B)^G$. Clearly $B'<B$, and hence $Z\cap B' \lhd G$ by our choice of $B$. Since $Z$ and $B$ normalize each other, $[Z\cap B,B]\leq Z\cap B'$. Since $B$ and $Z\cap B'$ are both normal subgroups of $G$, we obtain $[(Z\cap B)^g,B]\leq Z\cap B'$ for all $g\in G$. This yields $[(Z\cap B)^G,B]=[B,B]=B'\leq Z\cap B'.$ In particular, we have $B'\leq Z$, and so $[ Z\cap B, B']=1$. It follows that $[B,B']=1$ as $B=(Z\cap B )^G$. As a consequence, we see that $B$ is of class at most $2$. Notice that $Z\leq A$ for all $A\in \mathcal A_e(K)$ due to the fact that $AZ$ is an elementary abelian subgroup of $K$. Thus we see that, in particular, $B'\leq A$ for all $A\in \mathcal A_e(K).$ Let $N$ be the largest normal subgroup of $G$ that normalizes $Z\cap B$. Set $D^*=D\cap N$, which is nonempty by our hypothesis, and write $K^*=\langle D^* \rangle$.
We see that $G=N_G(D^*)N$ by Lemma \ref{strogly closed}, and so $G=N_G(K^*)N$. It follows that $G=N_G(J(K^*))N$ since $J(K^*)$ is a characteristic subgroup of $K^*$. Suppose that $J(K)\leq K^*$. Then we see that $J(K)=J(K^*)$, and hence $Z\cap B$ is normalized by $N_G(J(K^*))$. It follows that $Z\cap B\lhd G$. Thus we may assume that $J(K)\nsubseteq K^*.$ There exists $A\in \mathcal A_e(K)$ such that $B$ normalizes $A$ by Lemma \ref{crucial lemma}. Hence, $[B,A,A]=1$ since $[B,A]\leq A$. Since $G$ is $p$-stable and $B\lhd G$, we have that $AC/C\leq O_p(G/C)$ where $C=C_G(B)$. Note that $C$ normalizes $Z\cap B$, and so $C\leq N$ by the choice of $N$. It follows that $AN/N\leq O_p(G/N).$ Now we claim that $O_p(G/N)=1$. Let $L\lhd G$ such that $L/N=O_p(G/N)$. Then $L=(L\cap P)N$, and hence $L$ normalizes $Z\cap B$ as both $N$ and $L\cap P$ normalize $Z\cap B$. The maximality of $N$ forces that $N=L$, which yields that $A\leq N$. Note that $A\subseteq D$ by hypothesis, and so $A\subseteq N\cap D=D^*\subseteq K^*$. We see that $Z\leq A\leq J(K^*)$, and so we have $J(K^*)\leq J(K)$. It follows that $Z \cap B\leq Z\leq \Omega(Z(J(K^*)))$. Set $X=\Omega(Z(J(K^*)))$. Then we see that $G=NN_G(X)$ since $G=NN_G(K^*)$ and $X$ is characteristic in $K^*$. Since $N$ normalizes $Z\cap B$, each distinct conjugate of $Z\cap B$ comes via an element of $N_G(X).$ Thus, $B=(Z\cap B)^G=(Z\cap B)^{N_G(X)}\leq X$. Since $J(K)\nsubseteq K^*$, some members of $\mathcal A_e(K)$ do not lie in $ K^*$. Among such members choose $ A_1 \in \mathcal A_e(K)$ such that $A_1\cap B$ is maximum possible. Note that $B$ does not normalize $A_1$, since otherwise this forces $A_1\leq K^*$ as in the previous paragraphs. Then there exists $A^*\leq P$ such that $|A^*|=|A_1|$, $A^*\cap B>A_1\cap B$, $A^*\leq A_1^P\cap N_P(A_1)$ and the exponent of $A^*$ divides the exponent of $A_1$ by Theorem A. Since $A_1$ is elementary abelian, we see that $A^*$ is also elementary abelian.
Moreover, $A^*\leq K$ as $A_1^P\leq K\lhd P$. It follows that $A^*\in \mathcal A_e(K)$, and so $A^*\leq K^*$ due to the choice of $A_1$. We see that $XA^*$ is a group and $A^*\in \mathcal A_e(K^*)$, and hence $B\leq X\leq A^*$. It follows that $B\leq A^*\leq N_P(A_1)$, which is the final contradiction. Thus, our proof is complete for $Z_e$. Almost the same proof works for $Z_r$ and $Z_o$ without any difficulty. \end{proof} When we work with $J_o(K)$, we do not need to use the $\Omega$ operation due to the fact that $Z(J_o(K))\leq A$ for all $A\in \mathcal A_o(K)$. However, this need not be satisfied for $Z(J_e(K))$ and $Z(J_r(K))$. In these cases, the rank conditions force that $\Omega(Z(J_x(K)))\leq A$ for all $ A\in \mathcal A_{x}(K)$ for $x\in \{e,r\}$. This difference makes the use of the $\Omega$ operation necessary for $Z(J_e(K))$ and $Z(J_r(K))$. \begin{proof}[\textbf{Proof of Theorem \ref{B}}] As in our hypothesis, let $G$ be a $p$-stable group such that $C_G(O_p(G))\leq O_p(G)$ and $D$ be a strongly closed subgroup in $P$. Since all these subgroups $Z(J_o(D)), \ \Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ are abelian normal subgroups of $P$, we see that they must lie in $O_p(G)$ by Lemma \ref{opg}. Note that $D$ is also a strongly closed subset in $P$ and satisfies the hypothesis of Theorem \ref{maim thm}. Then the results follow from Theorem \ref{maim thm}. \end{proof} In this section, we see another application of Theorem \ref{maim thm} by proving the following theorem, which we shall need in the next section. \begin{theoremn}\label{main thm2} Let $p$ be an odd prime, $G$ be a $p$-stable and $p$-constrained group, and $P\in Syl_p(G)$. Let $D$ be a strongly closed subset in $P$. Write $K=\langle D \rangle$, $Z_o=Z(J_o(K)), \ Z_r=\Omega(Z(J_r(K))) \textit{ and } Z_e=\Omega(Z(J_e(K)))$. If all members of $\mathcal A_x(K)$ are included in the set $D$, then the normalizer of $Z_x$ controls strong $G$-fusion in $P$ for $x\in \{o,r,e\}$.
\end{theoremn} We need the following lemma in the proof of Theorem \ref{main thm2}. \begin{lemman}\cite[Lemma 7.2]{Gla}\label{p-stable} If $G$ is a $p$-stable group, then $G/O_{p'}(G)$ is also $p$-stable. \end{lemman} Since the $p$-stability definition we use here is not the same as that of \cite{Gla}, and \cite[Lemma 7.2]{Gla} also has the extra assumption that $O_p(G)\neq 1$, it is appropriate to give a proof of this lemma here. \begin{proof}[\textbf{Proof}] Write $N=O_{p'}(G)$ and $\overline G=G/N$. Let $V$ be a $p$-subgroup of $\overline G$. Then there exists a $p$-subgroup $U$ of $G$ such that $\overline U=V$. Let $\overline x \in N_{\overline G}(\overline U)$ such that $[\overline U, \overline x, \overline x ]=\overline 1$. Clearly, we can write $\overline x=\overline x_1 \overline x_2$ such that $\overline x_1$ is a $p$-element, $\overline x_2$ is a $p'$-element and $[\overline x_1,\overline x_2]=\overline 1$ for some $x_1,x_2\in G$. It follows that $[\overline U, \overline x_i, \overline x_i ]=\overline 1$ for $i=1,2$. Then we see that $\overline x_2\in C_{\overline G}(\overline U)$ by \cite[Lemma 4.29]{Isc}. Thus, it is enough to show that $\overline x_1 \in O_p(N_{\overline G}(\overline U)/C_{\overline G}(\overline U))$ to finish the proof. Since $\overline x_1$ is a $p$-element of $\overline G$, $x_1=sn$ where $n\in N$ and $s$ is a $p$-element of $G$, which yields that $\overline x_1=\overline s$. Then we see that $[UN,s,s]\leq N$ and $s\in N_G(UN)$ by the previous paragraph. Note that $U\in Syl_p(UN)$ and $| Syl_p(UN)|$ is a $p'$-number. Consider the action of $\langle s \rangle $ on $Syl_p(UN)$. Then we observe that $s$ normalizes $U^m$ for some $m\in N$. Thus, we get that $[U^m,s,s]\leq U^m\cap N=1$. Note that $\overline U=\overline {U^m}$, and so we take $U^m=U$ without loss of generality. Let $K\leq N_G(U)$ such that $K/C_G(U)=O_p(N_G(U)/C_G(U))$. Thus we observe that $s\in K$ as $G$ is $p$-stable.
Note that $N_{\overline G}(\overline U)=\overline {N_G(U)}$ and $C_{\overline G}(\overline U)=\overline {C_G(U)}$ by \cite[Lemma 7.7]{Isc}. Hence, we see that $\overline x_1=\overline s \in \overline K$ and $\overline K/\overline {C_G(U)}\leq O_p(\overline {N_G(U)}/\overline {C_G(U)})=O_p(N_{\overline G}(\overline U)/C_{\overline G}(\overline U))$, which completes the proof. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{main thm2}}] Write $\overline G=G/O_{p'}(G)$. Then $\overline G$ is $p$-stable by Lemma \ref{p-stable}. Since $G$ is $p$-constrained, we have $C_{\overline G}(O_{p}(\overline G))\leq O_p(\overline G)$ by \cite[Theorem 1.1(ii)]{Gor}. Note that $Z_x\leq O_p(\overline G)$ by Lemma \ref{opg} for $x\in \{o,e,r\}$. We see that $\overline G$ satisfies the hypotheses of Theorem \ref{maim thm} as $\overline P$ is isomorphic to $P$ and $\overline D$ is the desired strongly closed set in $\overline P$. It follows that $Z_x(\overline K)\lhd \overline G$ by Theorem \ref{maim thm}, and so we get $G=O_{p'}(G)N_G(Z_x(K))$ for $x\in \{o,e,r\}$. Hence, $N_G(Z_x(K))$ controls strong $G$-fusion in $P$ by \cite[Lemma 7.1]{Gla} for $x\in \{o,e,r\}$. \end{proof} \section{The Proofs of Theorems \ref{E}, \ref{F} and \ref{H} } \begin{lemman}\label{strongly closed2} Let $P\in Syl_p(G)$ and $D$ be a strongly closed subset in $P$. Let $H\leq G$, $N\lhd G$ and $g\in G$ such that $P^g\cap H\in Syl_p(H)$. Then \begin{enumerate}[label=(\alph*)] \item\textit{ $D^g\cap H$ is strongly closed in $P^g\cap H$ with respect to $H$ if $D^g\cap H$ is nonempty.} \item \textit{$DN/N$ is strongly closed in $PN/N$ with respect to $G/N$.} \end{enumerate} \end{lemman} \begin{proof}[\textbf{Proof}] $(a)$ Let $U\subseteq D^g\cap H$ and $h\in H$ such that $U^h\subseteq P^g\cap H$. Since $U\subseteq D^g$ and $U^h\subseteq P^g$, we see that $U^h\subseteq D^g$ as $D^g$ is strongly closed in $P^g$ with respect to $G$. Thus, $U^h\subseteq D^g\cap H$ as $U^h\subseteq H$. 
$(b)$ Let $U/N\subseteq DN/N$ and suppose that $(U/N)^y\subseteq PN/N$ for some $y\in G$. By an easy argument, we can find $V\subseteq D$ such that $U/N=VN/N$. Then we see that $VN\subseteq DN$ and $(VN)^y=V^yN\subseteq PN$. We need to show that $V^yN\subseteq DN$. Notice that $\langle V^y \rangle=\langle V \rangle^y $ is a $p$-subgroup of $PN$. Since $P\in Syl_p(PN)$, there exists $x\in PN$ such that $V^y\subseteq P^x$. Since $V^x\subseteq D^x$, $(V^x)^{x^{-1}y}=V^y\subseteq P^x$ and $D^x$ is strongly closed in $P^x$, we see that $V^y\subseteq D^x$. Thus, $V^yN\subseteq D^xN$. Write $x=mn$ for $m\in P$ and $n\in N$. Note that $D^x=D^{mn}=D^n$ as $D$ is a normal set in $P$. It follows that $D^xN=D^nN=DN$. Consequently, $V^yN\subseteq DN$ as desired. \end{proof} Let $\mathcal L_p(G)$ be the set of all $p$-subgroups of $G$. A map $W:\mathcal L_p(G)\to \mathcal L_p(G)$ is called \textbf{a conjugacy functor} if the following hold for each $U\in \mathcal L_p(G)$: \begin{enumerate} \item[(i)]\textit{ $W(U)\leq U$}, \item[(ii)] \textit{$W(U)\neq 1 $ unless $U=1$, and} \item[(iii)] \textit{$W(U)^g=W(U^g)$ for all $g\in G$.} \end{enumerate} \textbf{A section of $G$} is a quotient group $H/K$ where $K\unlhd H\leq G$. Let $\mathcal L_p^*(G)$ be the set of all sections of $G$ that are $p$-groups. A map $W:\mathcal L_p^*(G)\to \mathcal L_p^*(G)$ is called \textbf{a section conjugacy functor} if the following hold for each $H/K \in \mathcal L_p^*(G)$: \begin{enumerate} \item[(i)]\textit{ $W(H/K)\leq H/K$}, \item[(ii)] \textit{$W(H/K)\neq 1 $ unless $H/K=1$,} \item[(iii)] \textit{$W(H/K)^g=W(H^g/K^g)$ for all $g\in G$, and} \item[(iv)] Suppose that $N\lhd H$, $N\leq K$ and $K/N$ is a $p'$-group. Let $P/N$ be a Sylow $p$-subgroup of $H/N$ and set $W(P/N)=L/N$. Then $W(H/K)=LK/K$. \end{enumerate} For more information about section conjugacy functors and their properties, we refer to \cite{Gla2}.
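As simple instances of the first definition, the following standard maps are conjugacy functors; the verification uses only the fact that a nontrivial $p$-group has a nontrivial center and that conjugation by any $g\in G$ is an isomorphism.

```latex
% Basic examples of conjugacy functors on the p-subgroups of G.
For $U\in\mathcal L_p(G)$, each of the maps
$$W_1(U)=Z(U),\qquad W_2(U)=\Omega(Z(U)),\qquad W_3(U)=J_x(U)\ \ (x\in\{o,r,e\})$$
is a conjugacy functor: (i) each value is a subgroup of $U$; (ii) each value
is nontrivial when $U\neq 1$, since a nontrivial $p$-group has a nontrivial
center and a nontrivial Thompson subgroup; and (iii) each map commutes with
conjugation, e.g.\ $Z(U)^g=Z(U^g)$ and $J_x(U)^g=J_x(U^g)$ for all $g\in G$,
because conjugation by $g$ carries $U$ isomorphically onto $U^g$.
```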
Note that a sufficient condition for $(iii)$ and $(iv)$ is the following: whenever $Q,R \in \mathcal L_p^*(G)$ and $\phi:Q\to R$ is an isomorphism, $\phi (W(Q))=W(R).$ Thus, the operations like $ZJ_x,\Omega ZJ_x \textit{ and } J_x$ are section conjugacy functors for $x \in \{o,r,e\}$. \begin{lemman}\label{conjugacy functor} Let $P\in Syl_p(G)$ and $D$ be a strongly closed subset in $P$. Let $W:\mathcal{L}_p(G)\to \mathcal{L}_p(G)$ be a conjugacy functor. For each $p$-subgroup $U$ of $P$ define $$W_D(U)=\begin{cases} W(\langle U\cap D \rangle) & \text{ if } \langle U\cap D \rangle\neq 1 \\ W(U) & \text{ if } \langle U\cap D \rangle=1 \\ \end{cases}$$ and for all $V\in \mathcal{L}_p(G)$ and $x\in G$ such that $V^x\leq P$ define $W_D(V)=(W_D(V^x))^{x^{-1}}.$ Then the map $W_D:\mathcal{L}_p(G)\to \mathcal{L}_p(G)$ is a conjugacy functor. Moreover for each $y\in G$, $W_D=W_{D^y}$. \end{lemman} \begin{proof}[\textbf{Proof}] Since $W$ is a conjugacy functor, it is easy to see that $W_D(U)\leq U$ and $W_D(U)\neq 1$ unless $U=1$ for each $U\in \mathcal{L}_p(G)$ by the definition of $W_D$. Now we need to show that $W_D(U)^g=W_D(U^g)$ for all $g\in G$ and $U\in \mathcal{L}_p(G)$, and that $W_D$ is indeed well defined. First suppose that $U,U^g\leq P$ for some $g\in G$. We first show that $W_D(U)^g=W_D(U^g)$ for this special case. Note that $(U\cap D)^g\subseteq U^g\leq P$, and so $(U\cap D)^g\subseteq U^g\cap D$ as $D$ is strongly closed in $P$. On the other hand, $(U^g\cap D)^{g^{-1}}\subseteq U \leq P$, and so $(U^g\cap D)^{g^{-1}}\subseteq U\cap D$ as $D$ is strongly closed in $P$. Combining these two inclusions, we obtain that $(U\cap D)^g= U^g\cap D.$ Now if $\langle U\cap D \rangle=1$ then $\langle U^g\cap D \rangle=1$ and $W_D(U)^g=W(U)^g=W(U^g)=W_D(U^g)$. The second equality holds as $W$ is a conjugacy functor. 
On the other hand, we get $W_D(U)^g=W(\langle U\cap D \rangle)^g=W(\langle U\cap D \rangle^g)=W(\langle U^g\cap D \rangle )=W_D(U^g)$ when $\langle U\cap D \rangle \neq 1.$ Now let $V\in \mathcal{L}_p(G)$ and $x,y\in G$ such that $V^x,V^y\leq P$. Then by setting $U=V^x$ and $g=x^{-1}y$, we have $U^g=V^y$ and $W_D(U)^g=W_D(U^g)$ by the previous paragraph. It follows that $W_D(V^y)=W_D(V^x)^{x^{-1}y}$. Then $W_D(V^y)^{y^{-1}}=W_D(V^x)^{x^{-1}}$, and so $W_D$ is well defined. Now let $z\in G$. Then $W_D(V^z)=W_D(V^x)^{x^{-1}z}=(W_D(V^x)^{x^{-1}})^z=W_D(V)^z$, which completes the proof of the first part. Lastly, since $D^y$ is strongly closed in $P^y$, $W_{D^y}$ is a conjugacy functor for $y\in G$ by the first part. It is routine to check that they are indeed the same function. \end{proof} \begin{remark}\label{emtpty remark} Although a strongly closed set is nonempty according to Definition \ref{strongly closed set}, if we take $D=\emptyset$ in the previous lemma, we get $W_{\emptyset}(U)=W(U)$. Thus, we set $W_{\emptyset}=W$ for any conjugacy functor $W$. \end{remark} \begin{lemman}\label{section conjugacy functor} Let $P\in Syl_p(G)$ and $D$ be a strongly closed subset in $P$. Let $K\unlhd H\leq G$, $N\lhd G$ and $g\in G$ such that $P^g\cap H\in Syl_p(H)$. Let $W:\mathcal{L}^*_p(G)\to \mathcal{L}^*_p(G)$ be a section conjugacy functor. Then the following hold: \begin{enumerate}[label=(\alph*)] \item\textit{ $W_{D^g\cap H}:\mathcal{L}_p(H)\to \mathcal{L}_p(H)$ is a conjugacy functor.} \item \textit{$W_{DN/N}:\mathcal{L}_p(G/N)\to \mathcal{L}_p(G/N)$ is a conjugacy functor.} \item \textit{$W_{(D^g\cap H)K/K}:\mathcal{L}_p(H/K)\to \mathcal{L}_p(H/K)$ is a conjugacy functor.} \end{enumerate} \end{lemman} \begin{proof}[\textbf{Proof}] $(a)$ By taking the restriction of $W$ to the section $H/1$, we obtain a conjugacy functor $W:\mathcal{L}_p(H)\to \mathcal{L}_p(H)$. 
By Lemma \ref{strongly closed2} (a), $ D^g\cap H$ is strongly closed in $H\cap P^g$ with respect to $H$ if $D^g\cap H$ is nonempty. Then the result follows from Lemma \ref{conjugacy functor} and Remark \ref{emtpty remark}. Similarly, $(b)$ follows by Lemma \ref{strongly closed2} (b) and Lemma \ref{conjugacy functor}. Part $(c)$ also follows in a similar fashion. \end{proof} \begin{remark}\label{resrection} It should be noted that we only need $W$ to be a conjugacy functor to establish Lemma \ref{section conjugacy functor} (a). Now assume the hypotheses and notation of Lemma \ref{section conjugacy functor}. Let $U\in \mathcal L_p(H)$. Then it is easy to see that $W_{D^g}(U)=W_{D^g\cap H}(U)$ by their definitions, and so $W_D(U)=W_{D^g\cap H}(U)$ by Lemma \ref{conjugacy functor}. Thus, the map $W_{D^g\cap H}$ is equal to the restriction of $W_D$ to $\mathcal L_p(H)$. \end{remark} \begin{lemman}\label{final} Assume the hypothesis and notation of Lemma \ref{section conjugacy functor}. We define $W^*_D:\mathcal{L}^*_p(G)\to \mathcal{L}^*_p(G)$ by setting $W^*_D(H/K)=W_{(D^g\cap H)K/K}(H/K)$ for each $H/K\in \mathcal{L}^*_p(G)$. Then $$W^*_D(H/K)=\begin{cases} W(\langle D^g\cap H \rangle K/ K) & \text{ if } D^g\cap H\nsubseteq K, \\ W(H/K) & \text{ if } D^g\cap H\subseteq K. \\ \end{cases} $$ Moreover, $W^*_D$ is a section conjugacy functor. \end{lemman} \begin{proof}[\textbf{Proof}] First suppose that $D^g\cap H\subseteq K$. Then $H/K\cap (D^g\cap H)K/K=K/K$, and so $W_{(D^g\cap H)K/K}(H/K)=W(H/K)$. If $D^g\cap H\nsubseteq K$ then $H/K\cap (D^g\cap H)K/K\neq K/K$, and so $W_{(D^g\cap H)K/K}(H/K)=W(\langle D^g\cap H \rangle K/ K)$ by its definition, which shows the first part. Note that $W^*_D(H/K)\leq H/K$ and $W^*_D(H/K)\neq 1$ unless $H/K=1$ by Lemma \ref{section conjugacy functor}(c). Now, we need to show that $(iii)$ and $(iv)$ in the definition of a section conjugacy functor hold. Pick $x\in G$. 
Since $(D^g\cap H)K/K$ is a strongly closed subset in $(P^g\cap H)K/K$, $(D^g\cap H)^xK^x/K^x$ is a strongly closed subset in $(P^g\cap H)^xK^x/K^x$. Moreover, $D^g\cap H\subseteq K$ if and only if $D^{gx}\cap H^x\subseteq K^x$. Thus, if $W^*_D(H/K)=W(H/K)$, then $W^*_D(H^x/K^x)=W(H^x/K^x)$. It follows that $$W^*_D(H^x/K^x)=W(H^x/K^x)=W(H/K)^x=W^*_D(H/K)^x.$$ The second equality holds as $W$ is a section conjugacy functor. Now if $W^*_D(H/K)=W(\langle D^g\cap H \rangle K/ K)$ then $$W^*_D(H^x/K^x)=W(\langle D^{gx}\cap H^x \rangle K^x/ K^x)=W((\langle D^g\cap H \rangle K/ K)^x)=W^*_D(H/K)^x.$$ The last equality holds as $W$ is a section conjugacy functor. Thus we see that $(iii)$ is satisfied. Now let $N\lhd H$ such that $N\leq K$ and $K/N$ is a $p'$-group. Let $X/N$ be a Sylow $p$-subgroup of $H/N$. We need to show that if $W^*_D(X/N)=L/N$ then $W^*_D(H/K)=LK/K$. Now pick $h\in H$ such that $(X/N)^h\supseteq (D^g\cap H)N/N$. By part $(iii)$, we have $W^*_D(X/N)^h=L^h/N^h=L^h/N$. If we can show that $W^*_D(H/K)=L^hK/K$, then we can conclude that $$W^*_D(H/K)=W^*_D(H/K)^{h^{-1}}=(L^hK/K)^{h^{-1}}=LK/K$$ by part $(iii)$. Thus, we see that it is enough to show the claim for $(X/N)^h$, and so we may simply assume that $(D^g\cap H)N/N \subseteq X/N$. Clearly $\langle D^g\cap H\rangle$ is a $p$-group. Since $K/N$ is a $p'$-group, we see that $D^g\cap H\subseteq K$ if and only if $D^g\cap H\subseteq N$. Thus, if $W^*_D(H/K)=W(H/K)$ then $W^*_D(X/N)=W(X/N)$. It follows that $W^*_D(H/K)=LK/K$ as $W$ is a section conjugacy functor. Assume that $D^g\cap H\nsubseteq K$. Then $W^*_D(H/K)=W(\langle D^g\cap H \rangle K/ K)$ and $W^*_D(X/N)=W(\langle D^g\cap H \rangle N/ N)=L/N$. Now write $H^*=\langle D^g\cap H\rangle K$ and $P^*=\langle D^g\cap H\rangle N$. Observe that $P^*/N\in Syl_p(H^*/N)$ and recall $K/N$ is a $p'$-group. Since $W$ is a section conjugacy functor and $W(P^*/N)=L/N$, we get $W(H^*/K)=LK/K$. Then the result follows. 
\end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{E}}] Let $p$ be an odd prime, $G$ be a $p$-stable group and $P\in Syl_p(G)$. Suppose that $D$ is a strongly closed subgroup in $P$. Let $H$ be a $p$-constrained subgroup of $G$ and $g\in G$ such that $P^g\cap H\in Syl_p(H)$. Since each $p$-subgroup of $H$ is also a $p$-subgroup of $G$, we see that $H$ is also a $p$-stable group. Now let $W\in \{ZJ_o, \Omega ZJ_e, \Omega ZJ_r \}$. It follows that $W_{D^g\cap H}$ is a conjugacy functor by Lemma \ref{section conjugacy functor}(a). Note that $W_{D^g\cap H}(P^g\cap H)\in \{W(D^g\cap H), W(P^g\cap H)\}$, and so $N_H(W_{D^g\cap H}(P^g\cap H))$ controls strong $H$-fusion in $P^g\cap H$ by Theorem \ref{main thm2} in both cases. Note also that $W_{D^g\cap H}(P^g\cap H)=W_D(P^g\cap H)$ by Remark \ref{resrection}. Now assume that $N_G(U)$ is $p$-constrained for each nontrivial subgroup $U$ of $P$. Fix $U\leq P$ and let $S\in Syl_p(N_G(U))$. Then by the arguments in the first paragraph, we see that the normalizer of $W_D(S)$ in $N_G(U)$ controls strong $N_G(U)$-fusion in $S$, and so we obtain that $N_G(W_D(P))$ controls strong $G$-fusion in $P$ by \cite[Theorem 5.5(i)]{Gla2}. It follows that the normalizers of the subgroups $Z(J_o(D))$, $\Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ control strong $G$-fusion in $P$. \end{proof} \begin{lemman}\label{ff} Let $p$ be an odd prime, $G$ be a group, and $P\in Syl_p(G)$. Suppose that $D$ is a strongly closed subgroup in $P$. Let $G^*$ be a section of $G$ such that $G^*$ is $p$-stable and $C_{G^*}(O_p(G^*))\leq O_p(G^*)$. If $S\in Syl_p(G^*)$, then $W^*_D(S)\lhd G^*$ for each $W\in \{ZJ_o, \Omega ZJ_e, \Omega ZJ_r \}$. \end{lemman} \begin{proof}[\textbf{Proof}] Note that $D$ is also a strongly closed set in $P$. We assume the notation of Lemma \ref{final}. Let $W\in \{ZJ_o, \Omega ZJ_e, \Omega ZJ_r \}$. Then clearly $W$ is a section conjugacy functor. 
It follows that $W^*_D:\mathcal L^*_p(G)\to \mathcal L^*_p(G)$ is a section conjugacy functor by Lemma \ref{final}. Let $G^*=X/K$ be a section of $G$ such that $$C_{G^*}(O_p(G^*))\leq O_p(G^*).$$ Let $H/K\in Syl_p(G^*)$. Then we see that $W^*_D(H/K)=W(H/K)$ if $D^g\cap H\subseteq K$. In this case, $W(H/K)=Z(J_o(H/K)), \ \Omega (Z(J_e(H/K)))$ or $\Omega (Z(J_r(H/K)))$, which are normal subgroups of $G^*$ by Theorem \ref{B}. If $D^g\cap H\nsubseteq K$ then $(D^g\cap H)K/K$ is a strongly closed subgroup in $H/K$ with respect to $G^*$. Write $D^*=(D^g\cap H)K/K$; then $$W^*_D(H/K)=W(D^*)=Z(J_o(D^*)), \ \Omega (Z(J_e(D^*))), \text{ or } \Omega (Z(J_r(D^*))), $$ which are normal subgroups of $G^*$ by Theorem \ref{B}. Thus we see that $W_D^*(H/K)\unlhd G^*$ in all cases. \end{proof} Now we are ready to prove Theorems \ref{F} and \ref{H}. \begin{proof}[\textbf{Proof of Theorem \ref{F}} ] Let $p$ be an odd prime, $G$ be a $\text{Qd}(p)$-free group, and $P\in Syl_p(G)$ as in our hypothesis. Since $G$ does not involve a section isomorphic to $\text{Qd}(p)$, every section of $G$ is $p$-stable by \cite[Proposition 14.7]{Gla2}. Now let $W\in \{ZJ_o, \Omega ZJ_e, \Omega ZJ_r \}$. Then we have that $W^*_D:\mathcal L^*_p(G)\to \mathcal L^*_p(G)$ is a section conjugacy functor by Lemma \ref{final}. Let $G^*$ be a section of $G$ such that $C_{G^*}(O_p(G^*))\leq O_p(G^*)$ and let $S\in Syl_p(G^*)$. Then we see that $W^*_D(S)\lhd G^*$ by Lemma \ref{ff}. It follows that $N_G(W^*_D(P))$ controls strong $G$-fusion in $P$ by \cite[Theorem 6.6]{Gla2}. We see that $W^*_D(P)=Z(J_o(D)), \ \Omega (Z(J_e(D))), \text{ or } \Omega (Z(J_r(D)))$ according to the choice of $W$, which completes the proof. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{H}}] Let $W\in \{ZJ_o, \Omega ZJ_e, \Omega ZJ_r \}$. Then $W^*_D:\mathcal L^*_p(G)\to \mathcal L^*_p(G)$ is a section conjugacy functor by Lemma \ref{final}. 
Let $G^*$ be a section of $G$ such that $C_{G^*}(O_p(G^*))\leq O_p(G^*)$ and $G^*/O_p(G^*)$ is $p$-nilpotent. Suppose also that $S^*\in Syl_p(G^*) $ is a maximal subgroup of $G^*$. Let $H$ be the normal Hall $p'$-subgroup of $G^*/O_p(G^*)$. Write $S=S^*/O_p(G^*)$. Then $S$ is also maximal in $G^*/O_p(G^*)$ and $S$ acts on $H$ via coprime automorphisms. If $1<U\leq H$ is $S$-invariant then $SU=G^*/O_p(G^*)$ by the maximality of $S$. Since $SH=G^*/O_p(G^*)$ and $S\cap H=1$, we see that $U=H$. Thus, there is no proper nontrivial $S$-invariant subgroup of $H$. On the other hand, we may choose an $S$-invariant Sylow subgroup of $H$ by \cite[Theorem 3.23(a)]{Isc}. This forces $H$ to be a $q$-group for some prime $q$, and so $H'<H$. It follows that $H$ is abelian due to the fact that $H'$ is $S$-invariant. Let $H^*$ be a Hall $p'$-subgroup of $G^*$. Then we see that $H^*O_p(G^*)/O_p(G^*)\cong H^*$. Thus, we observe that Hall $p'$-subgroups of $G^*$ are also abelian. Since $p$ is odd, we see that a Sylow $2$-subgroup of $G^*$ is abelian. This yields that $G^*$ does not involve a section isomorphic to $SL(2,p)$, and so every section of $G^*$ is $p$-stable by \cite[Proposition 14.7]{Gla2}. Then we obtain that $W^*_D(S^*)\lhd G^*$ by Lemma \ref{ff}. It follows that $G$ is $p$-nilpotent by \cite[Theorem 8.7]{Gla2}. \end{proof} \end{document}
\begin{document} \title[strongly solid II$_1$ factors with an exotic MASA]{strongly solid II$_1$ factors \\ with an exotic MASA} \begin{abstract} Using an extension of techniques of Ozawa and Popa, we give an example of a non-amenable strongly solid $\rm{II}_1$ factor $M$ containing an ``exotic'' maximal abelian subalgebra $A$: as an $A$,$A$-bimodule, $L^2(M)$ is neither coarse nor discrete. Thus we show that there exist $\rm{II}_1$ factors with this property but without Cartan subalgebras. It also follows from Voiculescu's free entropy results that $M$ is not an interpolated free group factor, yet it is strongly solid and has both the Haagerup property and the complete metric approximation property. \end{abstract} \author[C. Houdayer]{Cyril Houdayer} \address{CNRS ENS Lyon \\ UMPA UMR 5669 \\ 69364 Lyon cedex 7 \\ France} \email{cyril.houdayer@umpa.ens-lyon.fr} \author[D. Shlyakhtenko]{Dimitri Shlyakhtenko*} \address{UCLA\\ Department of Mathematics\\ 520 Portola Plaza\\ LA\\ CA 90095} \email{shlyakht@math.ucla.edu} \thanks{* Research partially supported by NSF grant DMS-0555680} \subjclass[2000]{46L10; 46L54} \keywords{Free group factors; Deformation/rigidity; Intertwining techniques; Free probability} \maketitle \section{Introduction} In their breakthrough paper \cite{ozawapopa}, Ozawa and Popa showed that the free group factors $L(\mathbf{F}_n)$ are {\em strongly solid}, i.e. the normalizer $\mathscr{N}_{L(\mathbf{F}_n)}(P)=\{u\in \mathscr{U}(L(\mathbf{F}_n)): uPu^*=P\}$ of any diffuse amenable subalgebra $P\subset L(\mathbf{F}_n)$ generates an amenable von Neumann algebra, which is thus AFD by Connes' result \cite{connes76}. 
This strengthened two well-known indecomposability results for free group factors: Voiculescu's celebrated result in \cite{voiculescu96}, showing that $L(\mathbf{F}_n)$ has no Cartan subalgebra, which in fact exhibited the first examples of factors with no Cartan decomposition; and Ozawa's result in \cite{ozawa2003}, showing that the commutant in $L(\mathbf{F}_n)$ of any diffuse subalgebra must be amenable ($L(\mathbf{F}_n)$ are {\it solid}). Furthermore in \cite{ozawapopaII}, Ozawa and Popa showed that for any lattice $\Gamma$ in $\operatorname{SL}(2, \mathbf{R})$ or $\operatorname{SL}(2, \mathbf{C})$, the group von Neumann algebra $L(\Gamma)$ is strongly solid as well. In this paper, we use a combination of Popa's deformation and intertwining techniques \cite{{popasup}, {popamal1}, {popa2001}} and the techniques of Ozawa and Popa \cite{ozawapopa, ozawapopaII} to give another example of a strongly solid $\rm{II_1}$ factor not isomorphic to an amplification of a free group factor, i.e. to an {\em interpolated} free group factor \cite{{dykema94}, {radulescu1994}} (the first example of this kind was constructed by the first-named author in \cite{houdayer7}, answering an open question of Popa \cite{popa07}). Our example is rather canonical: it is the crossed product of a free group factor $L(\mathbf{F}_\infty)$ by $\mathbf{Z}$, acting by a free Bogoljubov transformation obtained via Voiculescu's free Gaussian functor (cf. \cite{voiculescu92}). Roughly speaking, recall \cite{voiculescu92} that to any separable real Hilbert space $H_\mathbf{R}$, one can associate a finite von Neumann algebra $\Gamma(H_\mathbf{R})''$ which is precisely isomorphic to the free group factor $L(\mathbf{F}_{\dim H_\mathbf{R}})$. 
To any orthogonal representation $\pi : \mathbf{Z} \to \mathscr{O}(H_\mathbf{R})$ of $\mathbf{Z}$ on $H_\mathbf{R}$ corresponds a trace-preserving action $\sigma^\pi : \mathbf{Z} \curvearrowright \Gamma(H_\mathbf{R})''$, called the {\em Bogoljubov action} associated with the orthogonal representation $\pi$. Alternatively, our algebra can be viewed as a free Krieger algebra in the terminology of \cite{shlya99}, constructed from an abelian subalgebra and a certain completely positive map (related to the spectral measure of the $\mathbf{Z}$-action). It is in this way rather similar to a core of a free Araki-Woods factor \cite{shlya98,shlya97}. Along these lines, our main results are the following. \begin{TheoremA} Let $\pi : \mathbf{Z} \to \mathscr{O}(H_\mathbf{R})$ be an orthogonal representation on the real Hilbert space $H_\mathbf{R}$ such that the spectral measure of $\pi$ has no atoms. Denote by $M = L(\mathbf{F}_\infty) \rtimes_{\sigma^\pi} \mathbf{Z}$ the crossed product under the Bogoljubov action. Then for any maximal abelian subalgebra $A \subset M$, the normalizer $\mathscr{N}_M(A)$ generates an amenable von Neumann algebra. \end{TheoremA} In particular, the ${\rm II_1}$ factor $M = L(\mathbf{F}_\infty) \rtimes_{\sigma^\pi} \mathbf{Z}$ has no Cartan subalgebras. Under additional assumptions on the orthogonal representation $\pi$, we can obtain a stronger result. \begin{TheoremB} Let $\pi : \mathbf{Z} \to \mathscr{O}(H_\mathbf{R})$ be a {\em mixing} orthogonal representation on the real Hilbert space $H_\mathbf{R}$. Then $M = L(\mathbf{F}_\infty) \rtimes_{\sigma^\pi} \mathbf{Z}$ is a non-amenable strongly solid ${\rm II_1}$ factor, i.e. for any $P \subset M$ diffuse amenable subalgebra, $\mathscr{N}_M(P)''$ is an amenable von Neumann algebra. \end{TheoremB} Note that in both cases, $M$ has the Haagerup property and the complete metric approximation property, i.e. $\Lambda_{\operatorname{cb}}(M) = 1$. 
The proof of Theorems A and B, following a {\textquotedblleft deformation/rigidity\textquotedblright} strategy, is a combination of the ideas and techniques in \cite{{houdayer7}, {ozawapopa}, {ozawapopaII}, {popamal1}}. We will use the {\textquotedblleft free malleable deformation\textquotedblright} by automorphisms $(\alpha_t, \beta)$ defined on $\Gamma(H_\mathbf{R})'' \ast \Gamma(H_\mathbf{R})'' = \Gamma(H_\mathbf{R} \oplus H_\mathbf{R})''$. This deformation naturally arises as the {\textquotedblleft second quantization\textquotedblright} of the rotations/reflection defined on $H_\mathbf{R} \oplus H_\mathbf{R}$ that commute with the $\mathbf{Z}$-representation $\pi \oplus \pi$. The proof of Theorem B then consists of two parts. Let $\pi : \mathbf{Z} \to \mathscr{O}(H_\mathbf{R})$ be a mixing orthogonal representation and denote by $M = L(\mathbf{F}_\infty) \rtimes_{\sigma^\pi} \mathbf{Z}$ the corresponding crossed product ${\rm II_1}$ factor. First, we show that given any amenable subalgebra $P \subset M$ such that $P$ does not embed into $L(\mathbf{Z})$ inside $M$, the normalizer $\mathscr{N}_M(P)$ generates an amenable von Neumann algebra (see Theorem \ref{step}). For this, we will exploit the facts that the deformation $(\alpha_t)$ does not converge uniformly on the unit ball $(P)_1$ and that $P \subset M$ is {\it weakly compact}, and use the technology from \cite{{ozawapopa}, {ozawapopaII}}. So if $P \subset M$ is diffuse, amenable such that $\mathscr{N}_M(P)''$ is not amenable, $P$ must embed into $L(\mathbf{Z})$ inside $M$. Exploiting Popa's intertwining techniques and the fact that the $\mathbf{Z}$-action $\sigma^\pi$ is mixing, we prove that $\mathscr{N}_M(P)''$ is {\textquotedblleft captured\textquotedblright} in $L(\mathbf{Z})$ and finally get a contradiction. 
In proving that free group factors $L(\mathbf{F}_n)$ have no Cartan subalgebras \cite{voiculescu96}, Voiculescu proved that they actually have a formally stronger property: for any MASA (maximal abelian subalgebra) $A\subset N=L(\mathbf{F}_n)$, $L^2(N)$ (when viewed as an $A$,$A$-bimodule) contains a sub-bimodule of $L^2(A) \otimes L^2(A)$. In more classical language, for every MASA $L^\infty[0,1]\cong A\subset N$, every vector $\xi \in L^2(N)$ gives rise to a measure $\psi=\psi_\xi$ on $[0,1]^2$ determined by $$\int f(x) g(y) d\psi (x,y) = \langle f Jg^*J \xi , \xi\rangle,\qquad f,g\in A.$$ Voiculescu proved that, for any such $A\subset N\cong L(\mathbf{F}_n)$, there exists a nonzero vector $\xi$ for which $\psi$ is Lebesgue absolutely continuous. Any $N$ with this property cannot of course have Cartan subalgebras, since if $A$ is a Cartan subalgebra, the measure $\psi$ will have to be ``$r$-discrete'' (i.e., $\psi (B) = \int \nu_t(B) dt$ for some family of discrete measures $\nu_t$). This raised the obvious question: if $N$ has no Cartan subalgebras, must it be that for any diffuse MASA $A\subset N$, the $A$,$A$-bimodule $L^2(N)$ contains a sub-bimodule of $L^2(A) \otimes L^2(A)$? We answer this question in the negative. Our examples $M = L(\mathbf{F}_\infty) \rtimes \mathbf{Z}$, while strongly solid (or having no Cartan subalgebras), have an ``exotic'' MASA $A = L(\mathbf{Z})$, so that $L^2(M)$, when viewed as an $A$,$A$-bimodule, contains neither coarse nor $r$-discrete sub-bimodules. In other words, for all $\xi\neq 0$, $\psi_\xi$ is neither $r$-discrete nor Lebesgue absolutely continuous. In particular, combined with Voiculescu's results, this property shows that our examples $M$ are not interpolated free group factors. 
Thus we prove: \begin{CorollaryA} Let $\pi : \mathbf{Z} \to \mathscr{O}(H_\mathbf{R})$ be an orthogonal representation on the real Hilbert space $H_\mathbf{R}$ such that the spectral measure of $\bigoplus_{n \geq 1} \pi^{\otimes n}$ is singular w.r.t. the Lebesgue measure and has no atoms. Then the non-amenable ${\rm II_1}$ factor $M = L(\mathbf{F}_\infty) \rtimes_{\sigma^\pi} \mathbf{Z}$ has no Cartan subalgebra and is not isomorphic to any interpolated free group factor $L(\mathbf{F}_t)$, $1 < t \leq +\infty$. \end{CorollaryA} Assuming that the representation $\pi$ is mixing, we can obtain (see Theorem \ref{singular-ssolid}) new examples of strongly solid ${\rm II_1}$ factors not isomorphic to interpolated free group factors (see \cite{{houdayer7}, {popa07}}). \begin{CorollaryB} Let $\pi : \mathbf{Z} \to \mathscr{O}(H_\mathbf{R})$ be a mixing orthogonal representation on the real Hilbert space $H_\mathbf{R}$ such that the spectral measure of $\bigoplus_{n \geq 1} \pi^{\otimes n}$ is singular w.r.t. the Lebesgue measure. Then the non-amenable ${\rm II_1}$ factor $M = L(\mathbf{F}_\infty) \rtimes_{\sigma^\pi} \mathbf{Z}$ is strongly solid and is not isomorphic to any interpolated free group factor $L(\mathbf{F}_t)$, $1 < t \leq +\infty$. \end{CorollaryB} In Section \ref{examples}, we will present examples of orthogonal representations $\pi : \mathbf{Z} \to \mathscr{O}(H_\mathbf{R})$ which satisfy the assumptions of Corollaries A and B. After recalling the necessary background in Section \ref{preliminaries}, Theorems A and B are proven in Section \ref{keyresult}. {\bf Acknowledgements.} Part of this work was done while the first-named author was at the University of California, Los Angeles. The first-named author is very grateful for the warm hospitality and the stimulating atmosphere at UCLA. He finally thanks Stefaan Vaes for fruitful discussions regarding this work during his visit at the University of Leuven. 
\section{Preliminaries}\label{preliminaries} \subsection{Popa's intertwining techniques} We first recall some notation. Let $P \subset M$ be an inclusion of finite von Neumann algebras. The {\it normalizer of} $P$ {\it inside} $M$ is defined as \begin{equation*} \mathscr{N}_M(P) := \left\{ u \in \mathscr{U}(M) : \operatorname{Ad}(u) P = P \right\}, \end{equation*} where $\operatorname{Ad}(u) = u \cdot u^*$. The inclusion $P \subset M$ is said to be {\it regular} if $\mathscr{N}_M(P)'' = M$. The {\it quasi-normalizer of} $P$ {\it inside} $M$ is defined as \begin{equation*} \mathscr{QN}_M(P) := \left\{ a \in M : \exists b_1, \dots, b_n \in M, aP \subset \sum_i Pb_i, Pa \subset \sum_i b_iP \right\}. \end{equation*} The inclusion $P \subset M$ is said to be {\it quasi-regular} if $\mathscr{QN}_M(P)'' = M$. Moreover, \begin{equation*} P' \cap M \subset \mathscr{N}_M(P)'' \subset \mathscr{QN}_M(P)''. \end{equation*} Let $A, B$ be finite von Neumann algebras. An $A, B$-{\it bimodule} $H$ is a complex (separable) Hilbert space $H$ together with two {\it commuting} normal $\ast$-representations $\pi_A : A \to \mathbf{B}(H)$, $\pi_B : B^{\operatorname{op}} \to \mathbf{B}(H)$. We shall intuitively write $x \xi y = \pi_A(x)\pi_B(y^{\operatorname{op}}) \xi$, $\forall x \in A, \forall y \in B, \forall \xi \in H$. We say that $H_B$ is {\it finitely generated} as a right $B$-module if $H_B$ is of the form $p L^2(B)^{\oplus n}$ for some projection $p \in \mathbf{M}_n(\mathbf{C}) \otimes B$. In \cite{{popamal1}, {popa2001}}, Popa introduced a powerful tool to prove the unitary conjugacy of two von Neumann subalgebras of a tracial von Neumann algebra $(M, \tau)$. We will make intensive use of this technique. If $A, B \subset (M, \tau)$ are (possibly non-unital) von Neumann subalgebras, denote by $1_A$ (resp. $1_B$) the unit of $A$ (resp. $B$). \begin{theo}[Popa, \cite{{popamal1}, {popa2001}}]\label{intertwining1} Let $(M, \tau)$ be a finite von Neumann algebra. 
Let $A, B \subset M$ be possibly non-unital von Neumann subalgebras. The following are equivalent: \begin{enumerate} \item There exist $n \geq 1$, a possibly non-unital $\ast$-homomorphism $\psi : A \to \mathbf{M}_n(\mathbf{C}) \otimes B$ and a non-zero partial isometry $v \in \mathbf{M}_{1, n}(\mathbf{C}) \otimes 1_AM1_B$ such that $x v = v \psi(x)$, for any $x \in A$. \item The bimodule $\vphantom{}_AL^2(1_AM1_B)_B$ contains a non-zero sub-bimodule $\vphantom{}_AH_B$ which is finitely generated as a right $B$-module. \item There is no sequence of unitaries $(u_k)$ in $A$ such that \begin{equation*} \lim_{k \to \infty} \|E_B(a^* u_k b)\|_2 = 0, \forall a, b \in 1_A M 1_B. \end{equation*} \end{enumerate} \end{theo} If one of the previous equivalent conditions is satisfied, we shall say that $A$ {\it embeds into} $B$ {\it inside} $M$ and denote $A \preceq_M B$. For simplicity, we shall write $M^n := \mathbf{M}_n(\mathbf{C}) \otimes M$. \subsection{The complete metric approximation property} \begin{df}[Haagerup, \cite{haagerup79}] A finite von Neumann algebra $(M, \tau)$ is said to have the {\it complete metric approximation property} (c.m.a.p.) if there exists a net $\Phi_n : M \to M$ of ($\tau$-preserving) normal finite rank completely bounded maps such that \begin{enumerate} \item $\lim_n \|\Phi_n(x) - x\|_2 = 0$, $\forall x \in M$; \item $\lim_n \|\Phi_n\|_{\operatorname{cb}} = 1$. \end{enumerate} \end{df} It follows from Theorem $4.9$ in \cite{anan95} that if $G$ is a countable amenable group and $Q$ is a finite von Neumann algebra with the c.m.a.p., then for any action $G \curvearrowright (Q, \tau)$, the crossed product $Q \rtimes G$ has the c.m.a.p. as well. The notation $\bar{\otimes}$ will be used for the {\it spatial} tensor product. \begin{df}[Ozawa \& Popa, \cite{ozawapopa}] Let $\Gamma$ be a discrete group, let $(P, \tau)$ be a finite von Neumann algebra and let $\sigma : \Gamma \curvearrowright P$ be a $\tau$-preserving action. 
The action is said to be {\it weakly compact} if there exists a net $(\eta_n)$ of unit vectors in $L^2(P \bar{\otimes} \bar{P})_+$ such that \begin{enumerate} \item $\lim_n \|\eta_n - (v \otimes \bar{v})\eta_n\|_2 = 0$, $\forall v \in \mathscr{U}(P)$; \item $\lim_n \|\eta_n - (\sigma_g \otimes \bar{\sigma}_g)\eta_n\|_2 = 0$, $\forall g \in \Gamma$; \item $\langle (a \otimes 1)\eta_n, \eta_n \rangle = \tau(a) = \langle \eta_n, (1 \otimes \bar{a}) \eta_n \rangle$, $\forall a \in P, \forall n$. \end{enumerate} These conditions force $P$ to be amenable. A von Neumann subalgebra $P \subset M$ is said to be {\it weakly compact inside} $M$ if the action by conjugation $\mathscr{N}_M(P) \curvearrowright P$ is weakly compact. \end{df} \begin{theo}[Ozawa \& Popa, \cite{ozawapopa}]\label{weakcompact} Let $M$ be a finite von Neumann algebra with the complete metric approximation property. Let $P \subset M$ be an amenable von Neumann subalgebra. Then $P$ is weakly compact inside $M$. \end{theo} \subsection{Voiculescu's free Gaussian functor \cite{DVV:free,voiculescu92}} Let $H_\mathbf{R}$ be a real separable Hilbert space. Let $H = H_\mathbf{R} \otimes_\mathbf{R} \mathbf{C}$ be the corresponding complexified Hilbert space. The \emph{full Fock space} of $H$ is defined by \begin{equation*} \mathscr{F}(H) =\mathbf{C}\Omega \oplus \bigoplus_{n = 1}^{\infty} H^{\otimes n}. \end{equation*} The unit vector $\Omega$ is called the \emph{vacuum vector}. For any $\xi \in H$, we have the \emph{left creation operator} \begin{equation*} \ell(\xi) : \mathscr{F}(H) \to \mathscr{F}(H) : \left\{ {\begin{array}{l} \ell(\xi)\Omega = \xi, \\ \ell(\xi)(\xi_1 \otimes \cdots \otimes \xi_n) = \xi \otimes \xi_1 \otimes \cdots \otimes \xi_n. \end{array}} \right. \end{equation*} For any $\xi \in H$, we denote by $s(\xi)$ the real part of $\ell(\xi)$ given by \begin{equation*} s(\xi) = \frac{\ell(\xi) + \ell(\xi)^*}{2}. 
\end{equation*} The crucial result of Voiculescu \cite{voiculescu92} is that the distribution of the operator $s(\xi)$ w.r.t. the vacuum vector state $\langle \cdot \Omega, \Omega\rangle$ is the semicircular law supported on the interval $[-\|\xi\|, \|\xi\|]$, and for any subset $\Xi\subset H_\mathbf{R}$ of pairwise orthogonal vectors, the family $\{s(\xi):\xi\in \Xi\}$ is freely independent. Set \begin{equation*} \Gamma(H_\mathbf{R})'' = \{s(\xi) : \xi \in H_\mathbf{R}\}''. \end{equation*} The vector state $\tau = \langle \cdot \Omega, \Omega\rangle$ is a faithful normal trace on $\Gamma(H_\mathbf{R})''$, and \begin{equation*} \Gamma(H_\mathbf{R})'' \cong L(\mathbf{F}_{\dim H_\mathbf{R}}). \end{equation*} Since $\Gamma(H_\mathbf{R})''$ is a free group factor, $\Gamma(H_\mathbf{R})''$ has the Haagerup property and the c.m.a.p. \cite{haagerup79}. \begin{rem}[\cite{speicher:noncrossing, voiculescu92}]\label{value} Explicitly the value of $\tau$ on a word in $s(\xi_\iota)$ is given by \begin{equation}\label{formula} \tau(s(\xi_1) \cdots s(\xi_n)) = 2^{-n}\sum_{(\{\beta_i, \gamma_i\}) \in \operatorname{NC}(n), \beta_i < \gamma_i} \prod_{k = 1}^{n/2}\langle \xi_{\beta_k}, \xi_{\gamma_k}\rangle \end{equation} for $n$ even, and is zero otherwise. Here $\operatorname{NC}(2p)$ stands for all the non-crossing pairings of the set $\{1, \dots, 2p\}$, i.e. pairings for which whenever $a < b < c < d$, and $a, c$ are in the same class, then $b, d$ are not in the same class. The total number of such pairings is given by the $p$-th Catalan number \begin{equation*} C_p = \frac{1}{p + 1}\begin{pmatrix} 2p \\ p \end{pmatrix}. \end{equation*} \end{rem} Let $G$ be a countable group together with an orthogonal representation $\pi : G \to \mathscr{O}(H_\mathbf{R})$. We shall still denote by $\pi : G \to \mathscr{U}(H)$ the corresponding unitary representation on the complexified Hilbert space $H = H_\mathbf{R} \otimes_\mathbf{R} \mathbf{C}$. 
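Returning to Remark \ref{value}, a quick sanity check of formula (\ref{formula}) (a standard computation added for illustration): taking $\xi_1 = \cdots = \xi_{2p} = \xi$, each of the $C_p$ non-crossing pairings contributes $\|\xi\|^{2p}$, so

```latex
\tau\bigl(s(\xi)^{2p}\bigr)
  = 2^{-2p}\, C_p\, \|\xi\|^{2p}
  = C_p \left(\frac{\|\xi\|}{2}\right)^{2p},
\qquad
\tau\bigl(s(\xi)^{2p+1}\bigr) = 0,
```

which are precisely the moments of the semicircular law supported on $[-\|\xi\|, \|\xi\|]$, as stated above.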
The {\it free Bogoljubov shift} $\sigma^\pi : G \curvearrowright (\Gamma(H_\mathbf{R})'', \tau)$ associated with the representation $\pi$ is defined by \begin{equation*} \sigma_g^\pi = \operatorname{Ad}(\mathscr{F}(\pi_g)), \forall g \in G, \end{equation*} where $\mathscr{F}(\pi_g) = \bigoplus_{n \geq 0} \pi_g^{\otimes n} \in \mathscr{U}(\mathscr{F}(H))$. \begin{nota} For a countable group $G$ together with an orthogonal representation $\pi : G \to \mathscr{O}(H_\mathbf{R})$, we shall write \begin{equation*} \Gamma(H_\mathbf{R}, G, \pi)'' = \Gamma(H_\mathbf{R})'' \rtimes_{\sigma^\pi} G. \end{equation*} \end{nota} \begin{exam} If $(\pi, H) = (\lambda_G, \ell^2(G))$ is the left regular representation of $G$, it is easy to see that the action $\sigma^{\lambda_G} : G \curvearrowright \Gamma(\ell^2(G))''$ is the {\it free} Bernoulli shift and in that case $\Gamma(\ell^2(G), G, \lambda_G)'' \cong L(\mathbf{Z}) \ast L(G)$. \end{exam} For any $n \geq 0$, set $K_\pi^{(n)} = H^{\otimes n} \otimes \ell^2(G)$ with the $L(G),L(G)$-bimodule structure given by: \begin{eqnarray*} u_g \cdot (\xi_1 \otimes \cdots \otimes \xi_n \otimes \delta_h) & = & \pi_g \xi_1 \otimes \cdots \otimes \pi_g \xi_n \otimes \delta_{gh} \\ (\xi_1 \otimes \cdots \otimes \xi_n \otimes \delta_h) \cdot u_g & = & \xi_1 \otimes \cdots \otimes \xi_n \otimes \delta_{hg}. \end{eqnarray*} It is then straightforward to check that as $L(G), L(G)$-bimodules, we have the following isomorphism \begin{equation*} L^2(\Gamma(H_\mathbf{R}, G, \pi)'') \cong \bigoplus_{n \geq 0} K_\pi^{(n)}. \end{equation*} Recall that $\pi$ is said to be {\it mixing} if \begin{equation*} \lim_{g \to \infty} \langle \pi_g \xi, \eta\rangle = 0, \forall \xi, \eta \in H. \end{equation*} The following proposition is an easy consequence of Remark \ref{value} and the Kaplansky density theorem. \begin{prop} Let $G$ be a countable group together with an orthogonal representation $\pi : G \to \mathscr{O}(H_\mathbf{R})$. 
The following are equivalent: \begin{enumerate} \item The representation $\pi : G \to \mathscr{O}(H_\mathbf{R})$ is mixing. \item The $\tau$-preserving action $\sigma^\pi : G \curvearrowright \Gamma(H_\mathbf{R})''$ is mixing, i.e. \begin{equation*} \lim_{g \to \infty} \tau(\sigma_g^\pi(x)y) = 0, \forall x, y \in \Gamma(H_\mathbf{R})'' \ominus \mathbf{C}. \end{equation*} \end{enumerate} \end{prop} \section{Proof of Theorems A and B}\label{keyresult} \subsection{The free malleable deformation on $\Gamma(H_\mathbf{R}, G, \pi)''$} Let $G$ be a countable group together with an orthogonal representation $\pi : G \to \mathscr{O}(H_\mathbf{R})$. Set \begin{itemize} \item $M = \Gamma(H_\mathbf{R}, G, \pi)''$. \item $\widetilde{M} = \Gamma(H_\mathbf{R} \oplus H_\mathbf{R}, G, \pi \oplus \pi)''$. \end{itemize} Thus, we can regard $\widetilde{M}$ as the amalgamated free product \begin{equation*} \widetilde{M} = M \ast_{L(G)} M, \end{equation*} where we view $M \subset \widetilde{M}$ under the identification with the left copy. Consider the following orthogonal transformations on $H_\mathbf{R} \oplus H_\mathbf{R}$: \begin{eqnarray*} U_t & = & \begin{pmatrix} \cos(\frac{\pi}{2} t) & -\sin(\frac{\pi}{2} t) \\ \sin(\frac{\pi}{2} t) & \cos(\frac{\pi}{2} t) \end{pmatrix}, \forall t \in \mathbf{R}, \\ V & = & \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \end{eqnarray*} Define the associated deformation $(\alpha_t, \beta)$ on $\Gamma(H_\mathbf{R} \oplus H_\mathbf{R})''$ by \begin{equation*} \alpha_t = \operatorname{Ad}(\mathscr{F}(U_t)), \; \beta = \operatorname{Ad}(\mathscr{F}(V)). \end{equation*} Since $U_t, V$ commute with $\pi \oplus \pi$, it follows that $\alpha_t, \beta$ commute with the diagonal action $\sigma^\pi \ast \sigma^\pi$. We can then extend the deformation $(\alpha_t, \beta)$ to $\widetilde{M}$ by ${\alpha_t}_{|L(G)} = \beta_{|L(G)} = \operatorname{Id}$. 
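Concretely, the deformation acts on the semicircular generators as follows: since conjugation by $\mathscr{F}(U_t)$ sends $s(\eta)$ to $s(U_t \eta)$ for every $\eta \in H_\mathbf{R} \oplus H_\mathbf{R}$, we have \begin{equation*} \alpha_t(s(\xi \oplus 0)) = s\left(\cos(\tfrac{\pi}{2}t)\xi \oplus \sin(\tfrac{\pi}{2}t)\xi\right), \qquad \beta(s(\xi \oplus \eta)) = s(\xi \oplus (-\eta)), \quad \forall \xi, \eta \in H_\mathbf{R}. \end{equation*} In particular $\alpha_1(s(\xi \oplus 0)) = s(0 \oplus \xi)$, so $\alpha_1$ carries the left copy of $\Gamma(H_\mathbf{R})''$ onto the right one.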
Moreover it is easy to check that the deformation $(\alpha_t, \beta)$ is {\it malleable} in the sense of Popa: \begin{prop} The deformation $(\alpha_t, \beta)$ satisfies: \begin{enumerate} \item $\lim_{t \to 0} \|x - \alpha_t(x)\|_2 = 0$, $\forall x \in \widetilde{M}$. \item $\beta^2 = \operatorname{Id}$, $\alpha_t \beta = \beta \alpha_{-t}$, $\forall t \in \mathbf{R}$. \item $\alpha_1(x \ast_{L(G)} 1) = 1 \ast_{L(G)} x$, $\forall x \in M$. \end{enumerate} \end{prop} Finally, we recall that the s-malleable deformation $(\alpha_t, \beta)$ automatically features a certain {\it transversality} property. \begin{prop}[Popa, \cite{popasup}]\label{transversality} We keep the same notation as before. We have the following: \begin{equation}\label{trans} \|x - \alpha_{2t}(x)\|_2 \leq 2 \|\alpha_t(x) - (E_{M} \circ \alpha_t)(x)\|_2, \; \forall x \in M, \forall t > 0. \end{equation} \end{prop} The following result of the first-named author about intertwining subalgebras inside the von Neumann algebras $\Gamma(H_\mathbf{R}, G, \pi)''$ (see Theorem $5.2$ in \cite{houdayer3} and Theorem $3.4$ in \cite{houdayer6}) will be a crucial tool in the next subsection. \begin{theo}[\cite{houdayer3, houdayer6}]\label{intertwining} Let $G$ be a countable group. Let $\pi : G \to \mathscr{O}(H_\mathbf{R})$ be any orthogonal representation. Set $M = \Gamma(H_\mathbf{R}, G, \pi)''$. Let $p \in M$ be a non-zero projection. Let $P \subset pMp$ be a von Neumann subalgebra such that the deformation $(\alpha_t)$ converges uniformly on the unit ball $(P)_1$. Then $P \preceq_M L(G)$. \end{theo} \subsection{The key result} Let $M, N, P$ be finite von Neumann algebras. For any $M, N$-bimodules $H, K$, denote by $\pi_H$ (resp. $\pi_K$) the associated $\ast$-representation of the binormal tensor product $M \otimes_{\operatorname{bin}} N^{\operatorname{op}}$ on $H$ (resp. on $K$). We refer to \cite{EL} for the definition of $\otimes_{\operatorname{bin}}$.
We say that $H$ is {\em weakly contained} in $K$ and denote it by $H \prec K$ if the representation $\pi_H$ is weakly contained in the representation $\pi_K$, that is if $\ker(\pi_H) \supset \ker(\pi_K)$. Let $H, K$ be $M, N$-bimodules. The following are true: \begin{enumerate} \item Assume that $H \prec K$. Then, for any $N$-$P$ bimodule $L$, we have $H \otimes_N L \prec K \otimes_N L$, as $M, P$-bimodules. Exactly in the same way, for any $P, M$-bimodule $L$, we have $L \otimes_M H \prec L \otimes_M K$, as $P, N$-bimodules (see Lemma $1.7$ in \cite{anan95}). \item A von Neumann algebra $B$ is amenable iff $L^2(B) \prec L^2(B) \otimes L^2(B)$, as $B$-$B$ bimodules. \end{enumerate} Let $B, M, N$ be von Neumann algebras such that $B$ is amenable. Let $H$ be any $M, B$-bimodule and let $K$ be any $B, N$-bimodule. Then, as $M, N$-bimodules, we have $H \otimes_B K \prec H \otimes K$ (straightforward consequence of $(1)$ and $(2)$). \begin{lem}\label{weakcontainment} Let $G$ be an amenable group together with an orthogonal representation $\pi : G \to \mathscr{O}(H_\mathbf{R})$. Let $M = \Gamma(H_\mathbf{R}, G, \pi)''$. The $M, M$-bimodule $\mathscr{H} = L^2(\widetilde{M}) \ominus L^2(M)$ is weakly contained in the coarse bimodule $L^2(M) \otimes L^2(M)$. In particular, the left $M$-action on $\mathscr{H}$ extends to a u.c.p. map $\Psi : \mathbf{B}(L^2(M)) \to \mathbf{B}(\mathscr{H})$ whose range commutes with the right $M$-action. \end{lem} \begin{proof} Set $B = L(G)$ which is amenable by assumption. By definition of the amalgamated free product $\widetilde{M} = M \ast_{L(G)} M$ (see \cite{voiculescu92}), we have as $M, M$-bimodules \begin{equation*} L^2(\widetilde{M}) \ominus L^2(M) \cong \bigoplus_{n \geq 1} \mathscr{H}_n, \end{equation*} where \begin{equation*} \mathscr{H}_n = L^2(M) \otimes_B \mathop{\overbrace{(L^2(M) \ominus L^2(B)) \otimes_B \cdots \otimes_B (L^2(M) \ominus L^2(B))}}^{2n - 1} \otimes_B L^2(M). 
\end{equation*} Since $B = L(G)$ is amenable, the identity bimodule $L^2(B)$ is weakly contained in the coarse bimodule $L^2(B) \otimes L^2(B)$. From the standard properties of composition and weak containment of bimodules (see Lemma $1.7$ in \cite{anan95}), it follows that as $M, M$-bimodules \begin{equation*} \mathscr{H}_n \prec L^2(M) \otimes \mathop{\overbrace{(L^2(M) \ominus L^2(B)) \otimes \cdots \otimes (L^2(M) \ominus L^2(B))}}^{2n - 1} \otimes L^2(M). \end{equation*} Consequently, we obtain as $M, M$-bimodules \begin{equation*} \mathscr{H} = L^2(\widetilde{M}) \ominus L^2(M) \prec L^2(M) \otimes L^2(M). \end{equation*} Now the rest of the proof is the same as the one of Lemma $5.1$ in \cite{ozawapopaII}. The binormal representation $\mu$ of $M \odot M^{\operatorname{op}}$ on $\mathscr{H}$ is continuous w.r.t. the minimal tensor product. Hence $\mu$ extends to a u.c.p. map $\tilde{\mu}$ from $\mathbf{B}(L^2(M)) \bar{\otimes} M^{\operatorname{op}}$ to $\mathbf{B}(\mathscr{H})$. Define $\Psi(x) = \tilde{\mu}(x \otimes 1)$, $\forall x \in \mathbf{B}(L^2(M))$. Since $M^{\operatorname{op}}$ is in the multiplicative domain of $\tilde{\mu}$, it follows that the range of $\Psi$ commutes with the right $M$-action. \end{proof} The next theorem, which is the key result of this section in order to prove Theorems A and B, can be viewed as an analog of Theorems $4.9$ in \cite{ozawapopa}, B in \cite{ozawapopaII} and $3.3$ in \cite{houdayer7}. \begin{theo}\label{step} Let $G$ be an amenable group together with an orthogonal representation $\pi : G \to \mathscr{O}(H_\mathbf{R})$. Let $M = \Gamma(H_\mathbf{R}, G, \pi)''$. Let $P \subset M$ be an amenable subalgebra such that $P \npreceq_M L(G)$. Then $\mathscr{N}_M(P)''$ is amenable. 
\end{theo} \begin{proof} The proof is conceptually similar to the one of Theorem 4.9 in \cite{ozawapopa} under weaker assumptions: the malleable deformation $(\alpha_t)$ defined on $M = \Gamma(H_\mathbf{R}, G, \pi)''$ is not assumed to be {\textquotedblleft compact over $L(G)$\textquotedblright} and the bimodule $L^2(\widetilde{M}) \ominus L^2(M)$ is merely weakly contained in the coarse bimodule $L^2(M) \otimes L^2(M)$. To overcome these technical difficulties, we will use ideas from the proof of Theorem B in \cite{ozawapopaII}. Note that the symbol {\textquotedblleft Lim\textquotedblright} will be used for a state on $\ell^\infty(\mathbf{N})$, or more generally on $\ell^\infty(I)$ with $I$ directed, which extends the ordinary limit. Let $G$ be an amenable group and let $\pi : G \to \mathscr{O}(H_\mathbf{R})$ be an orthogonal representation. Let $M = \Gamma(H_\mathbf{R}, G, \pi)''$. Let $P \subset M$ be an amenable von Neumann subalgebra such that $P \npreceq_M L(G)$. Since $M$ has the c.m.a.p., $P$ is weakly compact inside $M$. Then there exists a net $(\eta_n)$ of vectors in $L^2(P \bar{\otimes} \bar{P})_+$ such that \begin{enumerate} \item $\lim_n \|\eta_n - (v \otimes \bar{v})\eta_n\|_2 = 0$, $\forall v \in \mathscr{U}(P)$; \item $\lim_n \|\eta_n - \operatorname{Ad}(u \otimes \bar{u})\eta_n\|_2 = 0$, $\forall u \in \mathscr{N}_M(P)$; \item $\langle (a \otimes 1)\eta_n, \eta_n\rangle = \tau(a) = \langle \eta_n, (1 \otimes \bar{a}) \eta_n \rangle$, $\forall a \in M, \forall n$. \end{enumerate} We consider $\eta_n \in L^2(M \bar{\otimes} \bar{M})_+$, and note that $(J \otimes \bar{J}) \eta_n = \eta_n$, where $J$ denotes the canonical anti-unitary on $L^2(M)$. We shall simply denote $\mathscr{N}_M(P)$ by $\mathscr{G}$. Let $z \in \mathscr{Z}(\mathscr{G}' \cap M)$ be a non-zero projection. Since $P \npreceq_M L(G)$ and $z \in P' \cap M$, it follows that $Pz \npreceq_M L(G)$. 
Theorem \ref{intertwining} then yields that the deformation $(\alpha_t)$ does not converge uniformly on $(Pz)_1$. Since any selfadjoint element $x \in (Pz)_1$ can be written \begin{equation*} x = \frac12 \|x\|_\infty (u + u^*) \end{equation*} where $u \in \mathscr{U}(Pz)$, it follows that $(\alpha_t)$ does not converge uniformly on $\mathscr{U}(Pz)$ either. Combining this with the inequality $(\ref{trans})$ in Proposition \ref{transversality}, we get that there exist $0 < c < 1$, a sequence of positive reals $(t_k)$ and a sequence of unitaries $(u_k)$ in $\mathscr{U}(P)$ such that $\lim_{k} t_k = 0$ and $\| \alpha_{t_k}(u_k z) - (E_M \circ \alpha_{t_k})(u_k z) \|_2 \geq c \|z\|_2$, $\forall k \in \mathbf{N}$. Since $\|\alpha_{t_k}(u_k z)\|_2 = \|z\|_2$, by the Pythagorean theorem we obtain \begin{equation}\label{key} \|(E_M \circ \alpha_{t_k})(u_k z)\|_2 \leq \sqrt{1 - c^2} \|z\|_2, \forall k \in \mathbf{N}. \end{equation} Set $\delta = \frac{1 - \sqrt{1 - c^2}}{6} \|z\|_2$. Choose and fix $k_0 \in \mathbf{N}$ such that \begin{equation}\label{delta} \|\alpha_{t_k}(z) - z\|_2 \leq \delta, \forall k \geq k_0. \end{equation} Define for any $n$ and any $k \geq k_0$, \begin{eqnarray*} \eta_n^k & = & (\alpha_{t_k} \otimes 1)(\eta_n) \in L^2(\widetilde{M}) \otimes L^2(\bar{M}) \\ \xi_n^k & = & (e_M\alpha_{t_k} \otimes 1)(\eta_n) \in L^2(M) \otimes L^2(\bar{M}) \\ \zeta_n^k & = & (e_M^\perp\alpha_{t_k} \otimes 1)(\eta_n) \in (L^2(\widetilde{M}) \ominus L^2(M)) \otimes L^2(\bar{M}). \end{eqnarray*} We observe that \begin{equation}\label{norm2} \|(x \otimes 1) \eta_n^k\|_2^2 = \tau(E_M(\alpha_{t_k}^{-1}(x^*x))) = \|x\|_2^2, \forall x \in \widetilde{M}.
\end{equation} As in the proof of Theorem $4.9$ in \cite{ozawapopa}, noticing that $L^2(\widetilde{M}) \otimes L^2(\bar{M})$ is an $M \bar{\otimes} \bar{M}$-module and since $\eta_n^k = \xi_n^k + \zeta_n^k$, Equation \eqref{norm2} gives that for any $u \in \mathscr{G}$, and for any $k \geq k_0$, \begin{eqnarray}\label{crucial} \mathop{\operatorname{Lim}}_n \|[u \otimes \bar{u}, \zeta^k_n]\|_2 & \leq & \mathop{\operatorname{Lim}}_n \|[u \otimes \bar{u}, \eta_n^k]\|_2 \\ \nonumber & \leq & \mathop{\operatorname{Lim}}_n \|(\alpha_{t_k} \otimes 1)([u \otimes \bar{u}, \eta_n])\|_2 + 2 \|u - \alpha_{t_k}(u) \|_2 \\ & = & 2 \|u - \alpha_{t_k}(u)\|_2. \nonumber \end{eqnarray} Moreover, for any $x \in M$, \begin{eqnarray*} \| (x \otimes 1) \zeta^k_n \|_2 & = & \|(x \otimes 1) (e_M^\perp \otimes 1) \eta_n^k\|_2 \\ & = & \|(e_M^\perp \otimes 1) (x \otimes 1) \eta_n^k\|_2 \\ & \leq & \| (x \otimes 1) \eta_n^k\|_2 = \|x\|_2. \end{eqnarray*} \begin{claim}\label{claim1} For any $k \geq k_0$, \begin{equation}\label{crucial2} \mathop{\operatorname{Lim}}_n \|(z \otimes 1) \zeta_n^k\|_2 \geq \delta. \end{equation} \end{claim} \begin{proof}[Proof of Claim $\ref{claim1}$] We prove the claim by contradiction. 
Exactly as in the proof of Theorem 4.9 in \cite{ozawapopa}, noticing that $e_M z = z e_M$ (since $z \in M$) and $z u_k = u_k z$ (since $z \in \mathscr{Z}(\mathscr{G}' \cap M)$), and using $(\ref{delta})$ we have \begin{eqnarray*} && \mathop{\operatorname{Lim}}_n \|(z \otimes 1)\eta_n^k - (e_M \alpha_{t_k}(u_k) z \otimes \bar{u}_k)\xi_n^k\|_2 \\ & \leq & \mathop{\operatorname{Lim}}_n \|(z \otimes 1)\eta_n^k - (e_M \alpha_{t_k}(u_k) z \otimes \bar{u}_k)\eta_n^k\|_2 + \mathop{\operatorname{Lim}}_n \|(z \otimes 1) \zeta_n^k\|_2 \\ & \leq & \mathop{\operatorname{Lim}}_n \|(z \otimes 1)\eta_n^k - (e_M z \alpha_{t_k}(u_k) \otimes \bar{u}_k)\eta_n^k\|_2 + \|[\alpha_{t_k}(u_k), z]\|_2 + \delta \\ & \leq & \mathop{\operatorname{Lim}}_n\|(z \otimes 1)\zeta_n^k\|_2 + \mathop{\operatorname{Lim}}_n \|\eta_n^k - (\alpha_{t_k}(u_k) \otimes \bar{u_k})\eta_n^k\|_2 \\ & & + 2\|z - \alpha_{t_k}(z)\|_2 + \delta \\ & \leq & \mathop{\operatorname{Lim}}_n \|(\alpha_{t_k} \otimes 1)(\eta_n - (u_k \otimes \bar{u}_k)\eta_n)\|_2 + 4\delta = 4 \delta. \end{eqnarray*} Thus, we would get \begin{eqnarray*} \|(E_M \circ \alpha_{t_k})(u_k z)\|_2 & \geq & \|(E_M \circ \alpha_{t_k})(u_k)z\|_2 - \|z - \alpha_{t_k}(z)\|_2 \\ & \geq & \mathop{\operatorname{Lim}}_n \|((E_M \circ \alpha_{t_k})(u_k)z \otimes \bar{u}_k)\eta_n^k\|_2 - \delta \\ & \geq & \mathop{\operatorname{Lim}}_n \|(e_M \otimes 1) ((E_M \circ \alpha_{t_k})(u_k) z \otimes \bar{u}_k)\eta_n^k\|_2 - \delta \\ & = & \mathop{\operatorname{Lim}}_n \|(e_M \alpha_{t_k}(u_k) z \otimes \bar{u}_k) \xi_n^k\|_2 - \delta \\ & \geq & \mathop{\operatorname{Lim}}_n \|(z \otimes 1)\eta_n^k\|_2 - 5\delta \\ & = & \|z\|_2 - 5\delta > \sqrt{1 - c^2} \|z\|_2, \end{eqnarray*} which is a contradiction according to $(\ref{key})$. \end{proof} We now use the techniques of the proof of Theorem B in \cite{ozawapopaII}. 
Define a state $\varphi^{z, k}$ on $\mathbf{B}(\mathscr{H}) \cap \rho(M^{\operatorname{op}})'$, where $\rho(M^{\operatorname{op}})$ is the right $M$-action on $\mathscr{H}$, by \begin{equation*} \varphi^{z, k}(x) = \mathop{\operatorname{Lim}}_n \frac{1}{\|\zeta_n^{z, k}\|_2^2} \langle (x \otimes 1) \zeta_n^{z, k}, \zeta_n^{z, k}\rangle, \end{equation*} where $\zeta_n^{z, k} = (z \otimes 1) \zeta_n^k$. Note that \begin{equation*} \varphi^{z, k}(x) = \varphi^{z, k}(z x) = \varphi^{z, k}(x z), \forall x \in \mathbf{B}(\mathscr{H}) \cap \rho(M^{\operatorname{op}})'. \end{equation*} \begin{claim}\label{claim2} Let $a \in \mathscr{G}''$. Then one has \begin{equation*} \mathop{\operatorname{Lim}}_k | \varphi^{z, k} (a x - x a) | = 0, \end{equation*} uniformly for $x \in \mathbf{B}(\mathscr{H}) \cap \rho(M^{\operatorname{op}})'$ with $\|x\|_\infty \leq 1$. \end{claim} \begin{proof}[Proof of Claim $\ref{claim2}$] For $u \in \mathscr{G}$, since $z \in \mathscr{Z}(\mathscr{G}' \cap M)$, one has \begin{eqnarray*} \mathop{\operatorname{Lim}}_n \|\zeta_n^{z, k} - (u \otimes \bar{u}) \zeta_n^{z, k} (u \otimes \bar{u})^*\|_2 & \leq & \mathop{\operatorname{Lim}}_n \|\zeta_n^{k} - (u \otimes \bar{u}) \zeta_n^{k} (u \otimes \bar{u})^*\|_2 \\ & \leq & 2 \|u - \alpha_{t_k}(u)\|_2. \end{eqnarray*} For every $x \in \mathbf{B}(\mathscr{H}) \cap \rho(M^{\operatorname{op}})'$, one has \begin{equation*} \varphi^{z, k}(u^* x u) = \mathop{\operatorname{Lim}}_n \frac{1}{\|\zeta_n^{z, k}\|_2^2} \langle(x \otimes 1)(u \otimes \bar{u}) \zeta_n^{z, k} (u \otimes \bar{u})^*, (u \otimes \bar{u}) \zeta_n^{z, k} (u \otimes \bar{u})^*\rangle, \end{equation*} so that with $(\ref{crucial}) - (\ref{crucial2})$, \begin{equation*} |\varphi^{z, k}(u^* x u) - \varphi^{z, k}(x)| \leq \frac{4}{\delta^2} \|x\|_\infty \|u - \alpha_{t_k}(u)\|_2. 
\end{equation*} This implies that \begin{equation*} \mathop{\operatorname{Lim}}_k |\varphi^{z, k}(a x - x a)| = 0, \end{equation*} for each $a \in \mbox{span }\mathscr{G}$ and uniformly for $x \in \mathbf{B}(\mathscr{H}) \cap \rho(M^{\operatorname{op}})'$ with $\|x\|_\infty \leq 1$. On the other hand, for any $a \in M$, \begin{eqnarray*} |\varphi^{z, k}(x a)| & = & \mathop{\operatorname{Lim}}_n \frac{1}{\|\zeta_n^{z, k}\|_2^2} |\langle (x \otimes 1)(a \otimes 1) \zeta_n^{z, k}, \zeta_n^{z, k}\rangle| \\ & \leq & \frac{1}{\delta^2} \|x\|_\infty \|z a\|_2 \\ & \leq & \frac{1}{\delta^2} \|x\|_\infty \| a\|_2, \end{eqnarray*} and likewise for $|\varphi^{z, k}(a x)|$. An application of the Kaplansky density theorem then completes the proof. \end{proof} Finally, to prove that $\mathscr{G}''$ is amenable, we will use (as in Theorem B in \cite{ozawapopaII}) Connes' criterion for finite amenable von Neumann algebras (see Theorem $5.1$ in \cite{connes76} for the type ${\rm II_1}$ case and Lemma 2.2 in \cite{haagerup83} for the general case). For any non-zero projection $z \in \mathscr{Z}(\mathscr{G}' \cap M)$ and any finite subset $F \subset \mathscr{U}(\mathscr{G}'')$, we need to show \begin{equation*} \| \sum_{u \in F} uz \otimes \overline{uz} \|_{M \bar{\otimes} \bar{M}} = |F|. \end{equation*} Let $z \in \mathscr{Z}(\mathscr{G}' \cap M)$ be a non-zero projection and let $F \subset \mathscr{U}(\mathscr{G}'')$ be a finite subset. Since the $M, M$-bimodule $\mathscr{H}$ is weakly contained in the coarse bimodule $L^2(M) \otimes L^2(M)$, let $\Psi : \mathbf{B}(L^2(M)) \to \mathbf{B}(\mathscr{H}) \cap \rho(M^{\operatorname{op}})'$ be the u.c.p. map which extends the left $M$-action on $\mathscr{H}$ (see Lemma \ref{weakcontainment}). Note that $M$ is contained in the multiplicative domain of $\Psi$. Define $\psi^{z, k} = \varphi^{z, k} \circ \Psi$, a state on $\mathbf{B}(L^2(M))$. Let $u \in \mathscr{G}''$.
By Claim \ref{claim2}, one has \begin{eqnarray*} \mathop{\operatorname{Lim}}_k |\psi^{z, k}((uz)^* x (uz) - x)| & = & \mathop{\operatorname{Lim}}_k |\varphi^{z, k}( \Psi((uz)^* x (uz)) - \Psi(x))| \\ & = & \mathop{\operatorname{Lim}}_k |\varphi^{z, k}( (uz)^* \Psi(x) (uz) - \Psi(x))| \\ & = & \mathop{\operatorname{Lim}}_k |\varphi^{z, k}( u^* \Psi(x) u - \Psi(x))| = 0, \end{eqnarray*} uniformly for $x \in \mathbf{B}(L^2(M))$ with $\|x\|_\infty \leq 1$. By a standard recipe of the theory together with the Hahn-Banach separation theorem, we can find a net $(\mu^{z, k})$ of positive norm-one elements in $S_1(L^2(M))$ (trace-class operators on $L^2(M)$) such that \begin{equation*} \lim_k \|\mu^{z, k} - \operatorname{Ad}(uz)\mu^{z,k}\|_1= 0, \forall u \in \mathscr{U}(\mathscr{G}''). \end{equation*} Since the above is satisfied in particular for $u = 1$ and since $F \subset \mathscr{U}(\mathscr{G}'')$ is finite, replacing $\mu^{z, k}$ by $z \mu^{z, k} z/\|z \mu^{z, k} z\|_1$ we may assume that $\mu^{z, k} \in S_1(L^2(M))$ satisfies $\mu^{z, k} \geq 0$, $z\mu^{z, k}z = \mu^{z, k}$, $\|\mu^{z, k}\|_1 = 1$ and \begin{equation*} \lim_k \|\mu^{z, k} - \operatorname{Ad}(uz)\mu^{z,k}\|_1= 0, \forall u \in F. \end{equation*} Define now $\nu^{z, k} = (\mu^{z, k})^{1/2} \in S_2(L^2(M))$ (Hilbert-Schmidt operators on $L^2(M)$). By the Powers-St\o rmer inequality, the net $(\nu^{z, k})$ satisfies $z\nu^{z, k}z = \nu^{z, k}$, $\|\nu^{z, k}\|_2 = 1$ and \begin{equation*} \lim_k \|\nu^{z, k} - \operatorname{Ad}(uz)\nu^{z,k}\|_2= 0, \forall u \in F. \end{equation*} With the identification \begin{equation*} S_2(L^2(M)) = L^2(M) \otimes L^2(\bar{M}) \end{equation*} as $M, M$-bimodules, it follows that the $\ast$-representations of $M$ and $\bar{M}$ given by the left and right $M$-actions induce the spatial tensor norm.
Thus, \begin{eqnarray*} |F| & = & \| \sum_{u \in F} \nu^{z, k}\|_2 \\ & \leq & \lim_k \| \sum_{u \in F} (uz)\nu^{z, k}(uz)^*\|_2 + \lim_k \| \sum_{u \in F} \nu^{z, k} - (uz) \nu^{z, k} (uz)^*\|_2 \\ & \leq & \|\sum_{u \in F} uz \otimes \overline{uz}\|_{M \bar{\otimes} \bar{M}}. \end{eqnarray*} Since the other inequality is trivial, the proof is complete. \end{proof} \subsection{Proof of Theorem A} We refer to Section \ref{examples} for the necessary background on spectral measures of unitary representations. Let us begin with a few easy observations. Assume that $(N, \tau)$ is a finite von Neumann algebra with no amenable direct summand, i.e. $Nz$ is not amenable, $\forall z \in \mathscr{Z}(N)$, $z \neq 0$. Then for any non-zero projection $q \in N$, $qNq$ is non-amenable. Moreover, if $N$ has no amenable direct summand and $N \subset N_1$ is a unital inclusion of finite von Neumann algebras, then $N_1$ has no amenable direct summand either. \begin{lem}\label{diffuse} Let $G$ be a countable group together with an action $G \curvearrowright (N, \tau)$ on a finite von Neumann algebra. Write $M = N \rtimes G$ for the crossed product. Let $B \subset N$ be a diffuse subalgebra. Then $B \npreceq_M L(G)$. \end{lem} \begin{proof} We denote by $(v_g)$ the canonical unitaries which generate $L(G) \subset N \rtimes G = M$. Let $B \subset N$ be a diffuse subalgebra. Let $(u_n)$ be a sequence of unitaries in $B$ such that $u_n \to 0$ weakly, as $n \to \infty$. Let $I, J \subset G$ be finite subsets and \begin{eqnarray*} x & = & \sum_{g \in I} x_g v_g \\ y & = & \sum_{h \in J} y_h v_h, \end{eqnarray*} where $x_g, y_h \in N$. Then we have \begin{equation*} E_{L(G)}(x^* u_n y) = \sum_{(g, h) \in I \times J} \tau(x_g^* u_n y_h) v_g^* v_h. \end{equation*} In particular, \begin{equation*} \|E_{L(G)}(x^* u_n y)\|_2 \leq \sum_{(g, h) \in I \times J} |\tau(x^*_g u_n y_h)|. \end{equation*} Since $u_n \to 0$ weakly, as $n \to \infty$, we get $\lim_n \|E_{L(G)}(x^* u_n y)\|_2 = 0$.
Finally, using the Kaplansky density theorem, we obtain \begin{equation*} \lim_n \|E_{L(G)}(x^* u_n y)\|_2 = 0, \forall x, y \in M. \end{equation*} By $(3)$ of Theorem \ref{intertwining1}, it follows that $B \npreceq_M L(G)$. \end{proof} \begin{theo}[Theorem A]\label{nocartan} Let $\pi : \mathbf{Z} \to \mathscr{O}(H_\mathbf{R})$ be an orthogonal representation such that the spectral measure of $\pi$ has no atoms. Then $M = \Gamma(H_\mathbf{R}, \mathbf{Z}, \pi)''$ is a non-amenable ${\rm II_1}$ factor and for any maximal abelian subalgebra $A \subset M$, $\mathscr{N}_M(A)''$ is an amenable von Neumann algebra. \end{theo} \begin{proof} Since the spectral measure of $\pi : \mathbf{Z} \to \mathscr{U}(H)$ has no atoms, $\pi$ has no eigenvectors; since the spectral measure of $\pi^{\otimes n}$ is the $n$-fold convolution of that of $\pi$, the representations $\pi^{\otimes n}$, $n \geq 1$, have no eigenvectors either, so that the only eigenvectors of $\mathscr{F}(\pi) : \mathbf{Z} \to \mathscr{U}(\mathscr{F}(H))$ are the scalar multiples of the vacuum vector $\Omega$. Thus, the corresponding free Bogoljubov action $\sigma^\pi : \mathbf{Z} \curvearrowright \Gamma(H_\mathbf{R})''$ is necessarily outer (see Theorem~\ref{thm:bogoOuter}) and then $M = \Gamma(H_\mathbf{R}, \mathbf{Z}, \pi)''$ is a ${\rm II_1}$ factor. Moreover, $L(\mathbf{Z})$ is clearly a MASA in $M$. We prove the result by contradiction. Assume that $A \subset M= \Gamma(H_\mathbf{R}, \mathbf{Z}, \pi)''$ is a MASA such that $\mathscr{N}_M(A)''$ is not amenable. Write $1 - z \in \mathscr{Z}(\mathscr{N}_M(A)'')$ for the maximal projection such that $\mathscr{N}_M(A)''(1 - z)$ is amenable. Then $z \neq 0$ and $\mathscr{N}_M(A)''z$ has no amenable direct summand. Notice that $z \in A' \cap M = A$ and \begin{equation*} \mathscr{N}_M(A)''z = \mathscr{N}_{zMz}(Az)'', \end{equation*} by Lemma 3.5 in \cite{popamal1}. Moreover $Az \subset zMz$ is a MASA. Since the action $\sigma^\pi : \mathbf{Z} \curvearrowright \Gamma(H_\mathbf{R})''$ is outer, it follows that $\Gamma(H_\mathbf{R})' \cap M = \mathbf{C}$.
Thanks to Theorem 3.3 in \cite{masapopa}, we can find a diffuse abelian subalgebra $B \subset \Gamma(H_\mathbf{R})''$ which is a MASA in $M$. Since $M$ is a ${\rm II_1}$ factor and $B$ is diffuse, there exist a projection $p \in B$ and a unitary $u \in \mathscr{U}(M)$ such that $p = u z u^*$. Define $\tilde{A} = u Az u^*$. Then $\tilde{A} \subset pMp$ is a MASA and $\mathscr{N}_{pMp}(\tilde{A})''$ has no amenable direct summand. Let $C = \tilde{A} \oplus B(1 - p) \subset M$. Note that $C \subset M$ is still a MASA. Since $\mathscr{N}_M(C)''$ is not amenable and $C \subset M$ is weakly compact, Theorem \ref{step} yields $C \preceq_M L(\mathbf{Z})$. Since $L(\mathbf{Z})$ is a MASA, applying Theorem A.1 of \cite{popa2001} we obtain a non-zero partial isometry $v \in M$ such that $v^*v \in C' \cap M = C$, $q = vv^* \in L(\mathbf{Z})$ and $v C v^* \subset L(\mathbf{Z})q$. Since $C \subset M$ is also a MASA, we get $v C v^* = L(\mathbf{Z})q$. Note that $v p v^* \neq 0$, because otherwise we would have $v B(1 - p) v^* = L(\mathbf{Z})q$ and this would imply that $B \preceq_M L(\mathbf{Z})$, a contradiction by Lemma \ref{diffuse}. Thus, with $q' = v p v^*$ we obtain $v \tilde{A} v^* = L(\mathbf{Z})q'$. Consequently $\mathscr{N}_{q' M q'}(L(\mathbf{Z}) q')''$ is not amenable. However, as $L(\mathbf{Z}), L(\mathbf{Z})$-bimodules we have the following isomorphism \begin{equation*} L^2(M) \cong \bigoplus_{n \geq 0} K^{(n)}_{\pi}, \end{equation*} where $K^{(n)}_{\pi} = H^{\otimes n} \otimes \ell^2(\mathbf{Z})$ (see Section \ref{preliminaries}). Since the spectral measure of $\pi$ has no atoms, it follows that $L(\mathbf{Z}) \subset M$ is a singular MASA, i.e. $\mathscr{N}_M(L(\mathbf{Z}))'' = L(\mathbf{Z})$, and {\em a fortiori} $\mathscr{N}_{q' M q'}(L(\mathbf{Z}) q')'' = L(\mathbf{Z}) q'$ (by Lemma 3.5 in \cite{popamal1}). We have reached a contradiction.
\end{proof} \subsection{Proof of Theorem B} \begin{theo}[Theorem B]\label{stronglysolid} Let $\pi : \mathbf{Z} \to \mathscr{O}(H_\mathbf{R})$ be a mixing orthogonal representation. Then the non-amenable ${\rm II_1}$ factor $M = \Gamma(H_\mathbf{R}, \mathbf{Z}, \pi)''$ is strongly solid. \end{theo} \begin{proof} Since the representation $\pi : \mathbf{Z} \to \mathscr{O}(H_\mathbf{R})$ is mixing, it has no eigenvectors; the representations $\pi^{\otimes n}$, $n \geq 1$, are still mixing, hence have no eigenvectors either, so that the only eigenvectors of $\mathscr{F}(\pi) : \mathbf{Z} \to \mathscr{U}(\mathscr{F}(H))$ are the scalar multiples of the vacuum vector $\Omega$. Thus, the free Bogoljubov action $\sigma^\pi : \mathbf{Z} \curvearrowright \Gamma(H_\mathbf{R})''$ is necessarily outer (see Theorem~\ref{thm:bogoOuter}) and then $M = \Gamma(H_\mathbf{R}, \mathbf{Z}, \pi)''$ is a ${\rm II_1}$ factor. Let $P \subset M$ be a diffuse amenable von Neumann subalgebra. Assume by contradiction that $\mathscr{N}_M(P)''$ is not amenable. Write $1 - z \in \mathscr{Z}(\mathscr{N}_M(P)'')$ for the maximal projection such that $\mathscr{N}_M(P)''(1 - z)$ is amenable. Then $z \neq 0$ and $\mathscr{N}_M(P)''z$ has no amenable direct summand. Notice that \begin{equation*} \mathscr{N}_M(P)''z \subset \mathscr{N}_{zMz}(Pz)''. \end{equation*} Since this is a unital inclusion (with unit $z$), $\mathscr{N}_{zMz}(Pz)''$ has no amenable direct summand either. Let $A \subset \Gamma(H_\mathbf{R})''$ be a diffuse abelian subalgebra. Since $M$ is a ${\rm II_1}$ factor and $A$ is diffuse, there exist a projection $q \in A$ and a unitary $u \in \mathscr{U}(M)$ such that $q = u z u^*$. Define $Q = u Pz u^*$. Then $Q \subset qMq$ is diffuse, amenable and $\mathscr{N}_{qMq}(Q)''$ has no amenable direct summand. Let $B = Q \oplus A(1 - q) \subset M$. Note that $B \subset M$ is a unital diffuse amenable subalgebra. Since $\mathscr{N}_M(B)''$ is not amenable and $B \subset M$ is weakly compact, Theorem \ref{step} yields $B \preceq_M L(\mathbf{Z})$.
Thus, there exist $n \geq 1$, a non-zero partial isometry $v \in \mathbf{M}_{1, n}(\mathbf{C}) \otimes M$ and a (possibly non-unital) $\ast$-homomorphism $\psi : B \to L(\mathbf{Z})^n$ such that $x v = v \psi(x)$, $\forall x \in B$. Observe that $q v \neq 0$, because otherwise we would have $vv^* \leq 1 - q$ and $x v = v \psi(x)$, $\forall x \in A(1 - q)$. This would mean that $A(1 - q) \preceq_M L(\mathbf{Z})$ and so $A \preceq_M L(\mathbf{Z})$, which is a contradiction according to Lemma \ref{diffuse}. Write $q v = w |q v|$ for the polar decomposition of $qv$. It follows that $w \in \mathbf{M}_{1, n}(\mathbf{C}) \otimes M$ is a non-zero partial isometry such that $x w = w \psi(x)$, $\forall x \in Q$. This means exactly that $Q \preceq_M L(\mathbf{Z})$. Note that $ww^* \in Q' \cap qMq \subset \mathscr{N}_{qMq}(Q)''$ and $w^*w \in \psi(Q)' \cap \psi(q) M^n \psi(q)$. Since the $\tau$-preserving action $\mathbf{Z} \curvearrowright \Gamma(H_\mathbf{R})''$ is mixing by assumption and $\psi(Q) \subset \psi(q)L(\mathbf{Z})^n\psi(q)$ is diffuse, it follows from Theorem $3.1$ in \cite{popamal1} (see also Theorem D.4 in \cite{vaesbern}) that $w^*w \in \psi(q)L(\mathbf{Z})^n\psi(q)$, so that we may assume $w^*w = \psi(q)$. Note that $w^* Q w = \psi(Q)$. Moreover since $\psi(Q)$ is diffuse, Theorem 3.1 in \cite{popamal1} yields that the quasi-normalizer of $\psi(Q)$ inside $\psi(q)M^n\psi(q)$ is contained in $\psi(q) L(\mathbf{Z})^n \psi(q)$. In particular, we get \begin{equation*} \operatorname{Ad}(w^*)(ww^*\mathscr{N}_{qMq}(Q)'' ww^*) \subset \psi(q) L(\mathbf{Z})^n \psi(q). \end{equation*} Note that $\operatorname{Ad}(w^*) : ww^* M ww^* \to w^*w M^n w^*w$ is a $\ast$-isomorphism. Since $\psi(q)L(\mathbf{Z})^n \psi(q)$ is amenable and $ww^*\mathscr{N}_{qMq}(Q)'' ww^*$ is non-amenable, we finally get a contradiction, which finishes the proof.
\end{proof} The above theorem is still true for any amenable group $G$ (instead of $\mathbf{Z}$), and any mixing orthogonal representation $\pi : G \to \mathscr{O}(H_\mathbf{R})$ such that the corresponding Bogoljubov action $\sigma^\pi : G \curvearrowright \Gamma(H_\mathbf{R})''$ is properly outer, i.e. $\sigma^\pi_g$ is outer for any $g \neq e$. \section{New examples of strongly solid ${\rm II_1}$ factors}\label{examples} \subsection{Spectral measures and unitary representations} Let $H$ be a separable complex Hilbert space. Let $G$ be a locally compact second countable (l.c.s.c.) abelian group together with a $\ast$-strongly continuous unitary representation $\pi : G \to \mathscr{U}(H)$. Denote by $\widehat{G}$ the dual of $G$. It follows that $C^*(G) \cong C_0(\widehat{G})$ and $\pi$ gives rise to a $\ast$-representation $\sigma : C_0(\widehat{G}) \to \mathbf{B}(H)$ such that $\sigma(f_g) = \pi(g)$, for every $g \in G$, where $f_g(\chi) = \chi(g)$, $\forall \chi \in \widehat{G}$. Recall that for any unit vector $\xi \in H$, there exists a unique probability measure $\mu_\xi$ on $\widehat{G}$ such that \begin{equation*} \int_{\widehat{G}} f \, d\mu_\xi = \langle \sigma(f) \xi, \xi\rangle. \end{equation*} Note that the formula makes sense for every bounded Borel function $f$ on $\widehat{G}$. \begin{df} Let $G$ be a l.c.s.c. abelian group together with a $\ast$-strongly continuous unitary representation $\pi : G \to \mathscr{U}(H)$. The {\it spectral measure} $\mathscr{C}_\pi$ of the unitary representation $\pi$ is defined as the measure class on $\widehat{G}$ generated by all the probability measures $\mu_\xi$, for $\xi \in H$, $\|\xi\| = 1$. \end{df} Recall that the {\em support} of a measure is the (closed) subset of all points for which every neighborhood has positive measure. The spectral measure $\mathscr{C}_\pi$ is said to be {\it singular} if for all the probability measures $\mu$ in $\mathscr{C}_\pi$, the support of $\mu$ has $0$ Haar measure.
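For instance, if $(\pi, H) = (\lambda_\mathbf{Z}, \ell^2(\mathbf{Z}))$ is the left regular representation of $\mathbf{Z}$, the Fourier transform identifies $\ell^2(\mathbf{Z})$ with $L^2(\mathbf{T}, \operatorname{Haar})$ in such a way that \begin{equation*} (\lambda_\mathbf{Z}(n) f)(z) = z^n f(z), \quad \forall f \in L^2(\mathbf{T}, \operatorname{Haar}), \forall n \in \mathbf{Z}, \end{equation*} so that $\mathscr{C}_{\lambda_\mathbf{Z}}$ is the class of the Haar measure on $\mathbf{T}$, which is as far as possible from being singular. Note also that $\pi$ admits an eigenvector if and only if the measure class $\mathscr{C}_\pi$ has an atom.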
From now on, we will only consider the cases when $G = \mathbf{Z}$ or $\mathbf{R}$. We identify the Pontryagin dual of $\mathbf{R}$ with $\mathbf{R}$ by the pairing $\mathbf{R} \times \mathbf{R} \ni (x, y) \mapsto e^{2\pi i x y}$. Define \begin{eqnarray*} p: \mathbf{R} & \to & \mathbf{T} = \mathbf{R}/\mathbf{Z} \\ x & \mapsto & x + \mathbf{Z} \end{eqnarray*} the canonical projection. For $\mu$ a probability measure on $\mathbf{R}$, the push-forward measure of $\mu$ on $\mathbf{T}$ is defined by $(p_\ast\mu)(A) = \mu(p^{-1}(A))=\mu(A+\mathbf{Z})$, $\forall A \subset \mathbf{T}$ Borel subset. The convolution product is denoted by $\ast$. We shall write \begin{equation*} \mu^{\ast k} = \mu \ast \cdots \ast \mu \end{equation*} for the $k$-fold convolution product. \begin{lem}\label{pushforward} Let $\mu$ be a probability measure on $\mathbf{R}$. Write $\nu = p_\ast \mu$. \begin{enumerate} \item If $\mu$ is singular, then $\nu$ is singular. \item For any $k \geq 1$, $(p_\ast \mu)^{\ast k}$ and $p_\ast(\mu^{\ast k})$ are mutually absolutely continuous. \end{enumerate} \end{lem} \begin{proof} Denote by $\lambda$ the Lebesgue measure on $\mathbf{R}$. We may identify $(\mathbf{T}, \operatorname{Haar})$ with $([0, 1], \lambda)$ as probability spaces. We use the notation $\mu_1 \sim \mu_2$ for two mutually absolutely continuous measures. $(1)$ Assume that $\mu$ is singular. Write $K$ for the support of $\mu$ and $K_n = K \cap [n, n+ 1[$. Clearly, $\operatorname{supp}(\nu) \subset p(K)$. We have \begin{eqnarray*} \operatorname{Haar}(p(K)) & \leq & \sum_{n \in \mathbf{Z}} \operatorname{Haar}(p(K_n)) \\ & = & \sum_{n \in \mathbf{Z}} \lambda(K_n) = 0. \end{eqnarray*} Thus $\operatorname{Haar}(\operatorname{supp}(\nu)) = 0$ and $\nu$ is singular. $(2)$ Under the previous identification, we have for any Borel subset $B \subset \mathbf{T}$ \begin{eqnarray*} \nu(B) & = & \mu(B + \mathbf{Z}) \\ & = &\sum_{n \in \mathbf{Z}} (\mu \ast \delta_n)(B).
\end{eqnarray*} Thus for any $k \geq 1$, we have \begin{eqnarray*} \nu^{\ast k} & = & \left( \sum_{n \in \mathbf{Z}} \mu \ast \delta_n \right)^{\ast k} \\ & = & \sum_{n_1, \dots, n_k \in \mathbf{Z}} \mu^{\ast k} \ast \delta_{n_1 + \cdots + n_k} \\ & \sim & \sum_{n \in \mathbf{Z}} \mu^{\ast k} \ast \delta_n. \end{eqnarray*} Consequently $(p_\ast \mu)^{\ast k} \sim p_\ast(\mu^{\ast k})$. \end{proof} \subsection{Examples of strongly solid ${\rm II_1}$ factors} Erd\"os showed in \cite{erdos} that the symmetric probability measure $\mu_\theta$ on $\mathbf{R}$, with $\theta = 5/2$, obtained as the weak limit of \begin{equation*} \left( \frac12 \delta_{-\theta^{-1}} + \frac12 \delta_{\theta^{-1}} \right) \ast \cdots \ast \left( \frac12 \delta_{-\theta^{-n}} + \frac12 \delta_{\theta^{-n}} \right) \end{equation*} is singular w.r.t. the Lebesgue measure $\lambda$ and has a Fourier transform \begin{equation*} \widetilde{\mu}_\theta(t) = \prod_{n \geq 1} \cos\left(\frac{t}{\theta^n}\right) \end{equation*} which vanishes at infinity, i.e. $\widetilde{\mu}_\theta(t) \to 0$, as $|t| \to \infty$. \begin{exam}\label{singularmeasure} Modifying the measure $\mu_\theta$, Antoniou \& Shkarin (see Theorem $2.5, {\rm v}$ in \cite{antoniou}) constructed an example of a symmetric probability measure $\mu$ on $\mathbf{R}$ such that: \begin{enumerate} \item The Fourier transform of $\mu$ vanishes at infinity, i.e. $\widetilde{\mu}(t) \to 0$, as $|t| \to \infty$. \item For any $n \geq 1$, the $n$-fold convolution product $\mu^{\ast n}$ is singular w.r.t. the Lebesgue measure $\lambda$. \end{enumerate} \end{exam} Let $\mu$ be a symmetric probability measure on $\mathbf{R}$ as in Example \ref{singularmeasure} and consider $\nu = p_\ast \mu$ the push-forward measure on the torus $\mathbf{T}$. Since $\mu(X) = \mu(-X)$, for any Borel set $X \subset \mathbf{R}$, it follows that $\nu(A) = \nu(\overline{A})$, for any Borel set $A \subset \mathbf{T}$, where $\overline{A} = \{\bar{z} : z \in A\}$.
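For measures with finitely many rational atoms, the two measures in part $(2)$ of Lemma \ref{pushforward} can be compared directly: two atomic measures are absolutely continuous to each other precisely when they have the same atoms. The following numerical sketch (an illustration only, not part of the argument; the toy measure $\mu$ is an arbitrary choice) performs this comparison, computing convolutions on $\mathbf{T}$ by convolving lifts in $\mathbf{R}$ and reducing mod $1$.

```python
from fractions import Fraction
from itertools import product

def convolve(mu, nu):
    """Convolution of two finitely supported measures (dict: atom -> mass)."""
    out = {}
    for (x, a), (y, b) in product(mu.items(), nu.items()):
        out[x + y] = out.get(x + y, Fraction(0)) + a * b
    return out

def push(mu):
    """Push-forward along p : R -> T = R/Z (atoms reduced mod 1)."""
    out = {}
    for x, a in mu.items():
        out[x % 1] = out.get(x % 1, Fraction(0)) + a
    return out

def conv_power(mu, k):
    """k-fold convolution power mu^{*k}."""
    out = mu
    for _ in range(k - 1):
        out = convolve(out, mu)
    return out

# toy symmetric measure with atoms at +-2/3
mu = {Fraction(2, 3): Fraction(1, 2), Fraction(-2, 3): Fraction(1, 2)}
for k in (1, 2, 3):
    lhs = push(conv_power(push(mu), k))   # (p_* mu)^{*k}, computed via lifts
    rhs = push(conv_power(mu, k))         # p_*(mu^{*k})
    assert set(lhs) == set(rhs)           # same atoms: equivalent measures
```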
Let $\pi^\nu : \mathbf{Z} \to \mathscr{U}(L^2(\mathbf{T}, \nu))$ be the unitary representation defined by $(\pi^\nu_n f)(z) = z^n f(z)$, $\forall f \in L^2(\mathbf{T}, \nu)$, $\forall n \in \mathbf{Z}$. Note that moreover \begin{equation*} H_\mathbf{R}^\nu = \left\{ f \in L^2(\mathbf{T}, \nu) : \overline{f(z)} = f(\bar{z}), \forall z \in \mathbf{T} \right\} \end{equation*} is a real subspace of $L^2(\mathbf{T}, \nu)$ invariant under $\pi^\nu$. Indeed, for all $f, g \in H^\nu_\mathbf{R}$, \begin{eqnarray*} \langle f, g \rangle & = & \int_{\mathbf{T}} f(z) \overline{g(z)} \, d\nu(z) \\ & = & \int_{\mathbf{T}} \overline{f(\bar{z})} g(\bar{z}) \, d\nu(z) \\ & = & \int_{\mathbf{T}} \overline{f(\bar{z})} g(\bar{z}) \, d\nu(\bar{z}) \\ & = & \int_{\mathbf{T}} \overline{f(z)} g(z) \, d\nu(z) \\ & = & \overline{\langle f, g \rangle}. \end{eqnarray*} By assumption and using Lemma \ref{pushforward}, it follows that: \begin{enumerate} \item The unitary representation $\pi^\nu : \mathbf{Z} \to \mathscr{U}(L^2(\mathbf{T}, \nu))$ is mixing. \item The spectral measure of $\bigoplus_{n \geq 1} (\pi^\nu)^{\otimes n}$ is singular. \end{enumerate} Consider now the non-amenable ${\rm II_1}$ factor $M = \Gamma(H_\mathbf{R}^\nu, \mathbf{Z}, \pi^\nu)''$. Let $A = L(\mathbf{Z})$. Since $\pi^\nu$ is mixing, $A$ is maximal abelian in $M$ and {\em singular}, i.e. $\mathscr{N}_M(A)'' = A$. Since the spectral measure of the unitary representation $\bigoplus_{n \geq 1} (\pi^\nu)^{\otimes n}$ is singular and because of the $A, A$-bimodule isomorphism \begin{equation*} L^2(M) \cong \bigoplus_{n \geq 0} K^{(n)}_{\pi^\nu}, \end{equation*} where $K^{(n)}_{\pi^\nu} = L^2(\mathbf{T}, \nu)^{\otimes n} \otimes \ell^2(\mathbf{Z})$ (see Section \ref{preliminaries}), it follows that the $A, A$-bimodule $L^2(M)$ is disjoint from the coarse bimodule $L^2(A) \otimes L^2(A)$.
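The mixing in $(1)$ reflects the decay of the Fourier coefficients of $\nu$; for a push-forward of the Erd\H{o}s measure $\mu_\theta$, these coefficients are values of $\widetilde{\mu}_\theta$. The partial products of the product formula for $\widetilde{\mu}_\theta$ can be evaluated directly; the sketch below is an illustration only (the truncation level $N$ and the sample points are arbitrary choices).

```python
import math

def ft_partial(t, theta=2.5, N=60):
    """Partial product of widetilde{mu}_theta(t) = prod_{n>=1} cos(t / theta^n)."""
    out = 1.0
    for n in range(1, N + 1):
        out *= math.cos(t / theta ** n)
    return out

assert abs(ft_partial(0.0) - 1.0) < 1e-12                # mu_theta is a probability measure
assert abs(ft_partial(7.3) - ft_partial(-7.3)) < 1e-12   # mu_theta is symmetric
assert all(abs(ft_partial(t)) <= 1.0 for t in (1.0, 10.0, 100.0, 1000.0))
```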
Combining Voiculescu's result (see Corollary 7.6 in \cite{voiculescu96}) and the second-named author's result (see Proposition $9.2$ in \cite{shlya99}), it follows that the non-amenable ${\rm II_1}$ factor $M$ is not isomorphic to any interpolated free group factor $L(\mathbf{F}_t)$, $1 < t \leq \infty$. Moreover, our Theorem \ref{stronglysolid} yields that $M$ is strongly solid, hence has no Cartan subalgebra. \begin{theo}[Corollary B]\label{singular-ssolid} The ${\rm II_1}$ factor $M = \Gamma(H_\mathbf{R}^\nu, \mathbf{Z}, \pi^\nu)''$ is strongly solid, hence has no Cartan subalgebra. Nevertheless, for the maximal abelian subalgebra $A = L(\mathbf{Z})$, the $A, A$-bimodule $L^2(M)$ is disjoint from the coarse bimodule $L^2(A) \otimes L^2(A)$. Thus, $M$ is not isomorphic to any interpolated free group factor. \end{theo} \begin{rem} For $\theta = 3$, $\mu_\theta$ is the Cantor-Lebesgue measure on the ternary Cantor set. If we set $\nu = p_\ast \mu_\theta$, we get that for any $n \geq 1$, the $n$-fold convolution product $\nu^{\ast n}$ is singular w.r.t. the Lebesgue measure $\lambda$. In that case, the ${\rm II_1}$ factor $M = \Gamma(H_\mathbf{R}^\nu, \mathbf{Z}, \pi^\nu)''$ has no Cartan subalgebras and is not isomorphic to any interpolated free group factor (Corollary A).
\end{rem} \subsection{Bimodule decompositions over MASAs.} Recall that if $\mu$ is a probability measure on $[0,1]\times [0,1]$ so that its push-forwards by the projection maps onto the two copies of $[0,1]$ are Lebesgue absolutely continuous, then $L^2([0,1]\times [0,1],\mu)$ can be regarded as an $L^\infty[0,1]$, $L^\infty[0,1]$-bimodule via the action \begin{multline*} (f_1 \cdot \xi \cdot f_2)(x,y) = f_1 (x) \xi(x,y) f_2 (y), \\ x,y\in [0,1],\quad f_j\in L^\infty[0,1], \quad \xi\in L^\infty ([0,1]\times[0,1],\mu).\end{multline*} For a von Neumann algebra $M$, consider the collection $\mathscr{C}(M)$ of measure classes $[\mu]$ on $[0,1]\times [0,1]$ with the property that there exists a MASA $L^\infty [0,1]\cong A\subset M$ so that $L^2(M)$, when regarded as an $A,A$-bimodule, contains a copy of $L^2([0,1]^2,\mu)$. Also let $\mathscr{D}(M)$ be the collection of all measure classes $[\mu]$ so that for {\em every} MASA $L^\infty[0,1]\cong A\subset M$, $L^2(M)$ contains a sub-bimodule of $L^2([0,1]^2,\mu)$. Clearly, $\mathscr{C}\supset \mathscr{D}$. Then (as is well known) $M$ has a Cartan subalgebra if and only if $\mathscr{C}(M)$ contains an $r$-discrete measure class (i.e., a measure class $[\mu]$ for which $\mu(B) = \int \mu_t (B) dt$ and $\mu_t$ are a.e. discrete). Voiculescu in \cite{voiculescu96} proved that $\mathscr{D}(L(\mathbf{F}_n)) \ni \{\textrm{Lebesgue Measure}\}$. It thus remained open whether every II$_1$ factor $N$ must either contain a Cartan subalgebra, or satisfy that $\mathscr{D}(N) \ni \{\textrm{Lebesgue Measure}\}$. Our main example $M= \Gamma(H_\mathbf{R}^\nu, \mathbf{Z}, \pi^\nu)''$ answers this question in the negative, as $\mathscr{D}(M)$ does not contain Lebesgue measure and yet $M$ has no Cartan subalgebra. 
\section{Outerness of free Bogoljubov actions} Although we do not need it in the rest of the paper, we record the following observation, which is well-known to the experts and is most likely folklore (although we could not find a precise reference). \begin{theo}\label{thm:bogoOuter} Let $G$ be a countable group, and let $\pi: G \to \mathscr{O}(H_\mathbf{R})$ be a $\ast$-strongly continuous orthogonal representation of $G$ on a real Hilbert space $H_\mathbf{R}$. Then $\sigma_g^\pi$ is inner iff $\pi_g=1$. In particular, if $\pi_g \neq 1$ for every $g\neq e$, the Bogoljubov action $\sigma^\pi$ of $G$ on $\Gamma(H_\mathbf{R})''$ is outer. \end{theo} \begin{proof} Let $g$ be an element of $G$ so that $\pi_g \neq 1$, and let $\alpha = \sigma_g^\pi$ acting on $M=\Gamma(H_\mathbf{R})''$. Let $T=\pi_g$. We may assume without loss of generality that $H_\mathbf{R}$ has dimension at least $2$, so that $M$ is a factor (otherwise, $M$ is abelian, and any non-trivial $T$ gives rise to an outer transformation). Suppose for a contradiction that $\alpha = \operatorname{Ad}(u)$ for some unitary $u\in M$. Then for any $x\in M$, $$ \alpha(x) = u x u^* $$ and so $\alpha (u) = u$. Let $H = H_\mathbf{R} \otimes_\mathbf{R} \mathbf{C}$ be the complexification of $H_\mathbf{R}$. We continue to denote the complexification of $T$ by the same letter. Let $H^a \subset H$ be the closed linear span of eigenvectors of $T$, $H^a_\mathbf{R} = H^a \cap H_\mathbf{R}$ be its real part. Then $N=\Gamma(H^a_\mathbf{R})''\subset \Gamma(H_\mathbf{R})''=M$. Moreover, it is clear from the Fock space decomposition of $L^2(M)$ that any eigenvector of $\alpha$ must lie in $L^2(N)$, so $u\in N$. Thus we may, without loss of generality, assume that $N=M$ and that eigenvectors of $T$ densely span $H$.
Thus we may assume that \begin{equation*} H_\mathbf{R} = \mathbf{R}^n \oplus \bigoplus_{k\in J} H^k_\mathbf{R}, \end{equation*} where $n\in \{0,1,\dots,+\infty\}$, each $H^k_\mathbf{R}\cong \mathbf{R}^2$ and $T$ acts trivially on $\mathbf{R}^n$ and acts on $H^k_\mathbf{R}$ by a rotation of period $2\pi /\log \lambda_k$. If we denote by $h_k, g_k$ an orthonormal basis for $H^k_\mathbf{R}$ and we set $c_k = s(h_k) + i s(g_k)\in M$, then $M\cong L(\mathbf{F}_n) * W^*(c_k : k\in J)$, and $\alpha = \operatorname{id} * \beta$ where $\beta (c_j ) = \exp(2\pi i \lambda_j) c_j$. Let $c_j = u_j b_j$ be the polar decomposition of $c_j$; thus $\beta(u_j)=\exp(2\pi i \lambda_j) u_j$ and $\beta(b_j)=b_j$. By \cite{DVV:circular}, $b_j$ and $u_j$ are freely independent and $W^*(b_k, u_k : k\in J) \cong L(\mathbf{F}_{2|J|})$. It follows that $M\cong L(\mathbf{F}_n) * W^*(b_k : k\in J) * W^*(u_k : k\in J) \cong L(\mathbf{F}_{n+|J|}) * L(\mathbf{F}_{|J|}) = N * P$ in such a way that $\alpha$ corresponds to the action $\operatorname{id} * \gamma$ where $\gamma : P\to P = W^*(u_k : k\in J)$ is given by $\gamma(u_k)=\exp(2\pi i \lambda_k) u_k$. Since by assumption $T$ is non-trivial, $|J|\geq 1$ and also $|J| + n \geq 1$. Thus if $\alpha (x) = uxu^*$ for all $x\in M$, then $u$ must commute with $N\subset N * P\cong M$. But $N'\cap M = N'\cap N = \mathscr{Z}(N)$ (e.g. because as an $N$,$N$-bimodule, $L^2(M) = L^2(N) \oplus (\textrm{a multiple of coarse $N$,$N$-bimodule})$), so $u\in \mathscr{Z}(N)$. But then $u P u^* =\alpha(P)\subset P$, which, using the free product decomposition of $L^2(M)$ in terms of $L^2(N)$ and $L^2(P)$, is easily seen to be impossible unless $u = \tau(u)1$ is a scalar. But this is impossible as well, since $\alpha$ is non-trivial: $\alpha(s(h)) = s(Th) \neq s(h)$ for some $h \in H_\mathbf{R}$. \end{proof} \section{Free Krieger algebras} Let $\nu$ be a probability measure on the torus $\mathbf{T}$.
Note that $\nu$ gives rise to a unital completely positive map $\eta : A \to A$, ($A = L^\infty(\mathbf{T})$), determined by $$\eta(f)(x) = \int f(x-y) d\nu(y) = (f * \nu)(x), \forall f \in C(\mathbf{T}).$$ It is not hard to see that the von Neumann algebra $M=\Gamma(H_\mathbf{R}^\nu, \mathbf{Z},\pi^\nu)'' \cong \Phi (A,\eta)$ in the notation of \cite{shlya99}, i.e., it is an example of a von Neumann algebra generated by an $A$-valued semicircular system with covariance $\eta$ (these were called ``free Krieger algebras'' in \cite{shlya99}, following the analogy between the operation $A\mapsto \Phi(A,\eta)$ and the crossed product operation $A\mapsto A\rtimes_\sigma \mathbf{Z}$). As we have seen, $M$ has both the c.m.a.p. and the Haagerup property, and thus for this specific choice of $\eta$, $\Phi(A,\eta)$ has these properties. We point out that in general (even for abelian $A$), $\Phi(A,\eta)$ may fail to have the Haagerup property for other choices of the completely positive maps $\eta$. It is an interesting question to determine exactly when $\Phi(A,\eta)$ has this property (and/or c.m.a.p.) as a condition on the completely-positive map $\eta : A\to A$, $A\cong L^\infty[0,1]$. It is likely that the techniques of the present paper would then apply to give solidity of $\Phi(A,\eta)$. \begin{prop} There exists a choice of $\eta : A\to A$, $A\cong L^\infty [0,1]$, so that $\Phi(A,\eta)$ does not have the Haagerup property and is not weakly amenable, i.e. $\Lambda_{\operatorname{cb}}(\Phi(A,\eta)) = \infty$. \end{prop} \begin{proof} Let $\alpha$ be an action of a free group $\mathbf{F}_2$ on $A\cong L^\infty[0,1]$ so that $M=A\rtimes_\alpha \mathbf{F}_2$ does not have the Haagerup property and is not weakly amenable (one could take, for example, an action measure equivalent to the action of $\operatorname{SL}(2, \mathbf{Z})$ on $A=L(\mathbf{Z}^2)$; the crossed product in this case has relative property (T) and does not have the Haagerup property \cite{popa2001}).
Moreover, it is not weakly amenable, i.e. $\Lambda_{\operatorname{cb}}(M) = \infty$ (see \cite{dorofaeff}). Denote the two automorphisms of $A$ corresponding to the actions of the two generators of $\mathbf{F}_2$ by $\alpha_1$, $\alpha_2$, and let $\eta_j = \alpha_j + \alpha_j^{-1}$, $\eta = \eta_1+\eta_2$. Let $\sigma$ be the free shift action of $\mathbf{Z}$ on $\mathbf{F}_\infty$. Then by \cite{shlya99}, \begin{multline} \Phi(A,\eta) \cong \Phi(A,\eta_1) *_A \Phi(A,\eta_2) \cong \\ \left((A \bar{\otimes} L(\mathbf{F}_\infty)) \rtimes_{\alpha_1 \otimes \sigma} \mathbf{Z}\right) *_A \left((A \bar{\otimes} L(\mathbf{F}_\infty)) \rtimes_{\alpha_2\otimes \sigma} \mathbf{Z}\right) \cong \\ (A \bar{\otimes} [L(\mathbf{F}_\infty)*L(\mathbf{F}_\infty)])\rtimes_{\alpha \otimes \sigma*\sigma}\mathbf{F}_2. \end{multline} Thus $\Phi(A,\eta)$ contains $M$ as a subalgebra. Since the Haagerup property and weak amenability are inherited by subalgebras, it follows that $\Phi(A,\eta)$ cannot have the Haagerup property and is not weakly amenable. \end{proof} \end{document}
\begin{document} \title{Redrawing the Boundaries on Purchasing Data from Privacy-Sensitive Individuals} \thispagestyle{empty} \setcounter{page}{0} \begin{abstract} We prove new positive and negative results concerning the existence of truthful and individually rational mechanisms for purchasing private data from individuals with unbounded and sensitive privacy preferences. We strengthen the impossibility result of Ghosh and Roth (EC 2011) by extending it to a much wider class of privacy valuations. In particular, these include privacy valuations that are based on $(\eps, \delta)$-differentially private mechanisms for non-zero $\delta$, ones where the privacy costs are measured in a per-database manner (rather than taking the worst case), and ones that do not depend on the payments made to players (which might not be observable to an adversary).
To bypass this impossibility result, we study a natural special setting where individuals have \emph{monotonic privacy valuations}, which captures common contexts where certain values for private data are expected to lead to higher valuations for privacy (\eg having a particular disease). We give new mechanisms that are individually rational for all players with monotonic privacy valuations, truthful for all players whose privacy valuations are not too large, and accurate if there are not too many players with too-large privacy valuations. We also prove matching lower bounds showing that in some respects our mechanism cannot be improved significantly. \end{abstract} \textbf{Keywords:} differential privacy, mechanism design \section{Introduction} Computing over individuals' private data is extremely useful for various purposes, such as medical or demographic studies. Recent work on {\em differential privacy}~\cite{DMNS06, Dwo06} has focused on ensuring that analyses using private data can be carried out accurately while providing individuals a strong quantitative guarantee of privacy. While differential privacy provides formal guarantees on how much information is leaked about an individual's data, it is silent about what incentivizes the individuals to share their data in the first place. A recent line of work \cite{MT07, GR11, NST12, X13, NOS12, CCKMV13, FL12, LR12, RS12} has begun exploring this question, by relating differential privacy to questions of mechanism design. One way to incentivize individuals to consent to the usage of their private data is simply to pay them for using it. For example, a medical study may compensate its participants for the use of their medical data.
However, determining the correct price is challenging: low payments may not draw enough participants, causing insufficient data for an accurate study, while high payments may be impossible for budgetary reasons. Ghosh and Roth \cite{GR11} approached this problem by allowing the mechanism to elicit \emph{privacy valuations} from individuals. A privacy valuation is a description of how much disutility an individual experiences from having information about their private data revealed. By eliciting valuations, the mechanism is hopefully able to tailor payments to incentivize enough participants to produce an accurate result, while not paying too much. \subsection{The setting and previous work} We continue the study of purchasing private data from individuals as first proposed by Ghosh and Roth \cite{GR11} (see \cite{R12,PR13} for a survey of this area). Since we work in a game-theoretic framework, we will also call individuals ``players''. As in \cite{GR11}, we study the simple case where the private information consists of a single data bit, which players can refuse to provide but cannot modify (e.g. because the data is already certified in a trusted database, such as a medical record database). To determine the price to pay players for their data bits, the mechanism elicits \emph{privacy valuations} from them. We study the simple case where each player $i$'s privacy valuation is parameterized by a single real parameter $v_i$. For example, Ghosh and Roth \cite{GR11} assume that player $i$ loses $v_i \eps$ utility when their data bit is used in an $\eps$-differentially private mechanism. We will study a wider variety of privacy valuation functions in this paper. The valuations are known only to the players themselves, and therefore players may report false valuations if it increases their utility. Furthermore, because these valuations may be correlated with the data bits, the players may wish to keep their valuations private as well.
It is instructive to keep in mind the application of paying for access to medical data (\eg HIV status), where players cannot control the actual data bit, but their valuation might be strongly correlated to their data bit. The goal of the mechanism is to approximate the sum of data bits while not paying too much. Based on the declared valuations, the mechanism computes payments to each of the players and obtains access to the purchased data bits from the players that accept the payments. The mechanism then computes and publishes an approximation to the sum of the data bits, which can cause the players some loss of privacy, which should be compensated for by the mechanism's payment. The mechanism designer aims to achieve three goals, standard in the game theory literature: the mechanism should be individually rational, truthful, and accurate. A mechanism is \emph{individually rational} if all players receive non-negative utility from participating in the game. In our context, this means that the mechanism sufficiently compensates players for their loss in privacy, something that may be important for ethical reasons, beyond just incentivizing participation. Informally, a mechanism is \emph{truthful} for player $i$ on a tuple $x=(x_1,\ldots,x_n)$ of reports from the players if player $i$ does not gain in utility by declaring some false type $x'_i$ (while the other players' types remain unchanged). We aim to build mechanisms that are individually rational for all players, and truthful for as many players and inputs as possible (ideally for all players and inputs). A mechanism is {\em accurate} if the output of the mechanism is close to the true function it wishes to compute, in our case the sum of the data bits.
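Under the exact-loss assumption of \cite{GR11} (a disutility of exactly $v_i \eps$ when the data bit is used), these three properties can be checked mechanically for simple payment rules. The sketch below is our own toy illustration, not a mechanism from the literature: the posted-threshold rule and all constants are hypothetical choices, and truthfulness is only checked over a small grid of misreports.

```python
EPS = 0.1   # privacy parameter (hypothetical)
TAU = 5.0   # posted threshold (hypothetical)

def mechanism(declared):
    """Threshold rule: buy from everyone declaring at most TAU, pay EPS*TAU each."""
    return [EPS * TAU if v <= TAU else 0.0 for v in declared]

def utility(true_v, declared, i):
    """Utility = payment - privacy loss, with exact loss EPS*v_i if the bit is used."""
    pay = mechanism(declared)[i]
    loss = EPS * true_v[i] if declared[i] <= TAU else 0.0
    return pay - loss

true_v = [0.0, 1.0, 4.9, 7.0]
# individual rationality: truthful play gives non-negative utility
assert all(utility(true_v, true_v, i) >= 0 for i in range(len(true_v)))
# truthfulness on a grid of misreports: no player gains by lying
for i in range(len(true_v)):
    for lie in [0.0, 2.5, 5.0, 10.0]:
        declared = list(true_v)
        declared[i] = lie
        assert utility(true_v, declared, i) <= utility(true_v, true_v, i) + 1e-12
```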
Ghosh and Roth \cite{GR11} study the restricted setting (in their terminology the ``insensitive value model'') where players do not care about leaking their privacy valuations, as well as the general model (the ``sensitive value model'') where they may care and their valuations can be unbounded. They present two mechanisms in the insensitive value model, one that optimizes accuracy given a fixed budget and another that optimizes budget given a fixed accuracy constraint. They also prove that their mechanisms are individually rational and truthful under the assumption that each player $i$ experiences a disutility of \emph{exactly} $v_i \eps$ when his data bit is used in an $\eps$-differentially private mechanism. In the general sensitive value model, they prove the following impossibility result: there is no individually rational mechanism with finite payments that can distinguish between the case where all players have data bit $0$ and the case where all players have data bit $1$. This impossibility result spurred a line of work attempting to bypass it. Fleischer and Lyu \cite{FL12} propose a Bayesian setting, where (for simplicity considering just Boolean inputs) there are publicly known distributions $D_0$ and $D_1$ over privacy valuations, and each player who has data bit $b_i$ receives a valuation $v_i$ drawn from $D_{b_i}$. They show that in this model, it is possible to build a Bayes-Nash truthful, individually rational, and accurate mechanism. In a related work, Roth and Schoenebeck \cite{RS12} study a Bayesian setting where the agents' actual (dis)utilities are drawn from a known prior, and construct individually rational and ex-post truthful mechanisms that are optimal for minimizing variance given a fixed budget and minimizing expected cost given a fixed variance goal.
In comparison to \cite{FL12}, \cite{RS12} studies a disutility value that does not quantitatively relate to the privacy properties of the mechanism (but rather just a fixed, per-player disutility for participation), while it results in mechanisms satisfying a stronger notion of truthfulness. Ligett and Roth \cite{LR12} measure the privacy loss incurred from a player's decision to participate separately from the information leaked about the actual data (effectively ruling out arbitrary correlations between privacy valuations and data bits). They work in a worst-case (non-Bayesian) model and construct a mechanism that satisfies a relaxed ``one-sided'' notion of truthfulness and accuracy. However, their mechanism only satisfies individual rationality for players whose privacy valuation is not too high. \subsubsection{Improving the negative results} \label{sec:issues} This line of work leaves several interesting questions open. The first is whether the impossibility result of \cite{GR11} really closes the door on all meaningful mechanisms when players can have unbounded privacy valuations that can be arbitrarily correlated with their sensitive data. There are two important loopholes that the result leaves open. First, their notion of privacy loss is pure $\eps$-differential privacy, and they crucially use the fact that for pure $\eps$-differentially private mechanisms the support of the output distribution must be identical for all inputs. This prevents their result from ruling out notions of privacy loss based on more relaxed notions of privacy, such as $(\eps, \delta)$-differential privacy for $\delta > 0$. As a number of examples in the differential privacy literature show, relaxing to $(\eps, \delta)$-differential privacy can be extremely powerful, even when $\delta$ is negligibly small but non-zero \cite{DworkLe09,HardtTa10,DworkRoVa10,De12,BeimelNiSt13}.
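As an aside on the support argument above: pure $\eps$-differential privacy bounds the probability of every output on neighboring databases within a factor $e^{\eps}$, which forces identical supports. A small numerical sketch (our illustration, using the standard two-sided geometric mechanism for a counting query; the parameter values are arbitrary) checks both the ratio bound and the full support.

```python
import math

def geometric_pmf(z, true_count, eps):
    """Two-sided geometric (discrete Laplace) noise centered at true_count:
    P[output = z] proportional to exp(-eps * |z - true_count|)."""
    norm = (1 - math.exp(-eps)) / (1 + math.exp(-eps))
    return norm * math.exp(-eps * abs(z - true_count))

eps = 0.5
# neighboring databases: counts differ by one
for z in range(-10, 30):
    r = geometric_pmf(z, 12, eps) / geometric_pmf(z, 13, eps)
    assert math.exp(-eps) - 1e-12 <= r <= math.exp(eps) + 1e-12  # pure eps-DP ratio
    assert geometric_pmf(z, 12, eps) > 0   # identical (full) support on Z
```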
Furthermore, even $(\eps,\delta)$ differential privacy measures the worst-case privacy loss over all databases, and it may be the case that on most databases, the players' expected privacy loss is much less than the worst case bound.\footnote{For example, consider a mechanism that computes an $\eps$-differentially private noisy sum of the first $n-1$ rows (which we assume are bits), and if the result is 0, also outputs an $\eps$-differentially private noisy version of the $n$'th row (e.g. via ``randomized response''). The worst case privacy loss for player $n$ is $\eps$. On databases of the form $(0,0,\ldots,0,b)$ the first computation results in 0 with probability $\approx \eps$ and player $n$ suffers $\eps$ privacy loss with this probability. However, if it is known that the database is very unlikely to be almost entirely zero, then player $n$ may experience any privacy loss with only exponentially small probability. \label{footnote:perdatabase}} Thus it is more realistic to use a per-database measure of privacy loss (as done in \cite{CCKMV13}). Second, the \cite{GR11} notion of privacy includes as observable and hence potentially disclosive output the (sum of the) payments made to \emph{all} the players, not just the sum of the data bits. This leaves open the possibility of constructing mechanisms for the setting where an outside observer is not able to see some of the players' payments. For example, it may be natural to assume that, when trying to learn about player $i$, an observer learns the payments to all players \emph{except} player $i$. In the extreme case, we could even restrict the outside observer to not see any of the payments, but only the approximation to the sum of the data bits. The Ghosh-Roth impossibility proof fails in these cases.
Indeed in this case where player $i$'s own payment is not visible to the observer, there \emph{does exist} an individually rational and accurate mechanism with finite payments: simply ask each player for their valuation $v_i$ and pay them $v_i \eps$, then output the sum of all the bits with noise of magnitude $O(1/\eps)$. (The reason that this mechanism is unsatisfactory is that it is completely untruthful --- players always gain by reporting a higher valuation.) We will close both these gaps: our results will hold even under very mild conditions on how the players experience privacy loss (in particular capturing a per-database analogue of $(\eps, \delta)$-differential privacy), and even when \emph{only} the approximate count of data bits is observable and \emph{none} of the payments are observable. \subsubsection{Improving the positive results} Another question left open by the previous work is whether we can achieve individual rationality and some form of truthfulness under a worst-case setting. Recall that \cite{FL12} and \cite{RS12} work in a Bayesian model, while \cite{LR12} does not guarantee individual rationality for all players. Furthermore, in both \cite{FL12} and \cite{RS12} the priors are heavily used in \emph{designing the mechanism}, and therefore their results break if the mechanism designer does not accurately know the priors. We will replace the Bayesian assumption with a simple qualitative assumption on the monotonicity of the correlation between players' data bits and their privacy valuation. For accuracy (but not individual rationality), we will assume a rough bound on how many players exceed a given threshold in their privacy valuations (similarly to \cite{NOS12}). Another question is the interpretation of the privacy loss functions. We observe that the truthfulness of the mechanisms in \cite{GR11} crucially relies on the assumption that $v_i \eps$ is the \emph{exact} privacy loss incurred.
As was argued by \cite{NOS12} and \cite{CCKMV13}, it seems hard to quantify the exact privacy loss a player experiences, as it may depend on the mechanism, all of the players' inputs, as well as an adversary's auxiliary information about the database. (See \autoref{footnote:perdatabase} for an example.) It is much more reasonable to assume that the privacy valuations $v_i$ declared by the players and the differential privacy parameter $\eps$ yield an \emph{upper bound} on their privacy loss. When using this interpretation, the truthfulness of \cite{GR11} no longer holds. The mechanisms we construct will remain truthful using the privacy loss function only as an upper bound on privacy loss (for players whose privacy valuations are not too large, similarly to the truthfulness guarantees of \cite{NOS12,CCKMV13,LR12}). \subsection{Our results} \label{sec:results} In our model there are $n$ players labelled $1, \ldots, n$ each with a data bit $b_i \in \zo$ and a privacy valuation $v_i \in \R$, which we describe as a $2n$-tuple $(b,v)\in \zo^n\times \R^n$. The mechanism designer is interested in learning (an approximation of) $\sum b_i$. The players may lie about their valuation but they cannot lie about their data bit. A mechanism $M$ is a pair of randomized functions $(M_{\mathsf{out}}, M_{\mathsf{pay}})$, where $M_{\mathsf{out}} : \zo^n \times \R^n \dans \Z$ and $M_{\mathsf{pay}} : \zo^n \times \R^n \dans \R^n$. Namely $M_{\mathsf{out}}$ produces an integer that should approximate $\sum b_i$ while $M_{\mathsf{pay}}$ produces payments to each of the $n$ players. Because the players are privacy-aware, the utility they derive from the game can be separated into two parts as follows: $$\textrm{utility}_i = \textrm{payment}_i - \textrm{privacy loss}_i.$$ (Note that in this paper, we assume the players have no (dis)interest in the integer that $M_{\mathsf{out}}$ produces.) 
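As a concrete (and deliberately naive) instance of this interface, the following sketch implements the pay-as-declared mechanism recalled in \autoref{sec:issues}: it is individually rational whenever losses are bounded by $v_i \eps$, and accurate up to noise of magnitude $O(1/\eps)$, but completely untruthful. (Our illustration; the rounded-Laplace noise and the value of $\eps$ are arbitrary choices.)

```python
import random

EPS = 0.5  # privacy parameter (hypothetical choice)

def m_pay(bits, declared):
    """M_pay: pay each player her declared valuation times EPS.
    Individually rational when losses are bounded by v_i * EPS, but not
    truthful: declaring a higher valuation always raises the payment."""
    return [EPS * v for v in declared]

def m_out(bits, declared):
    """M_out: the true sum plus rounded Laplace noise of magnitude O(1/EPS)."""
    lap = random.expovariate(EPS) - random.expovariate(EPS)  # Laplace, scale 1/EPS
    return sum(bits) + round(lap)

bits, declared = [1, 0, 1, 1], [0.0, 2.0, 1.0, 3.0]
assert m_pay(bits, declared) == [0.0, 1.0, 0.5, 1.5]
assert isinstance(m_out(bits, declared), int)   # M_out maps into Z
```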
The privacy loss term will be quantified by a \emph{privacy loss function} that depends on the player's identity $i$, the data bits $b$, the privacy valuations $v$, and the player's declared valuation $v'_i$ (which is not necessarily his true type $v_i$), as well as the mechanism $M$ and the outcome $(s, p)$ produced by $(M_{\mathsf{out}},M_{\mathsf{pay}})$. \paragraph{Strengthened impossibility result of non-trivial accuracy with privacy.} Our first result significantly strengthens the impossibility result of Ghosh-Roth \cite{GR11}. \begin{theorem}[Main impossibility result, informal. See \autoref{thm:imp}] \label{thm:impinf} Fix any mechanism $M$ and any reasonable privacy loss functions. Then if $M$ is truthful (even if only for players with privacy valuation 0) and individually rational and makes finite payments to the players (even if only when all players have privacy valuation 0), then $M$ cannot distinguish between inputs $(b,v)=(0^n, 0^n)$ and $(b',v)=(1^n, 0^n)$. \end{theorem} By ``reasonable privacy loss functions,'' we mean that if from observing the output of the mechanism on an input $(b,v)$, an adversary can distinguish the case that player $i$ has data bit $b_i=0$ from data bit $b_i=1$ (while keeping all other inputs the same), then player $i$ experiences a significant privacy loss (proportional to $v_i$) on database $(b,v)$. In particular, we allow for a per-database notion of privacy loss. Moreover, we only need the adversary to be able to observe the mechanism's estimate of the count $\sum_j b_j$, and not any of the payments made to players. And our notion of indistinguishability captures not only pure $\eps$-differential privacy but also $(\eps, \delta)$-differential privacy for $\delta > 0$. The conclusion of the result is as strong as conceivably possible, stating that $M$ cannot distinguish between the two most different inputs (data bits all $0$ vs.
data bits all $1$) even in the case where none of the players care about privacy. We also remark that in our main impossibility result, in order to handle privacy loss functions that depend only on the distribution of the observable count and not the payment information, we crucially use the requirement that $M$ be truthful for players with $0$ privacy valuation. As we remarked earlier in \autoref{sec:issues} there exist $M$ that are individually rational and accurate (but not truthful). \paragraph{New notions of privacy and positive results.} One of the main conceptual contributions of this work is restricting our attention to a special class of privacy loss functions, which we use to bypass our main impossibility result. Essential to the definition of differential privacy (\autoref{def:dp}) is the notion of \emph{neighboring inputs}. Two inputs to the mechanism are considered neighboring if they differ only in the information of a single player, and in the usual notion of differential privacy, one player's information may differ arbitrarily. This view also characterized how previous work modeled privacy loss functions: in the sensitive value model of \cite{GR11}, the privacy loss function to a player $i$ on an input $(b_i, v_i)$ was computed by considering how much changing to any possible neighbor $(b'_i, v'_i)$ would affect the output of the mechanism. In contrast, we will restrict our attention to privacy loss functions that consider only how much changing to a specific subset of possible neighbors $(b'_i, v'_i)$ would affect the output of the mechanism. By restricting to such privacy loss functions, we can bypass our impossibility results. We now describe how we restrict $(b'_i, v'_i)$. Recall that in our setting a single player's type information is a pair $(b_i,v_i)$ where $b_i\in \zo$ is a data bit and $v_i\in \R$ is a value for privacy.
We observe that in many cases there is a natural sensitive value of the bit $b_i$: for example, if $b_i$ represents HIV status, then we would expect that $b_i = 1$ is more sensitive than $b_i = 0$. Therefore we consider only the following \emph{monotonic valuations}: $(0, v_i)$ is a neighbor of $(1, v_i')$ iff $v_i \leq v_i'$. Thus, if a player's true type is $(1, v'_i)$, then he is only concerned with how much the output of the mechanism differs from the case that his actual type were $(0, v_i)$ for $v_i \leq v'_i$. Consider the pairs that we have excluded: any pairs $(b_i, v_i), (b_i, v'_i)$ (\ie the data bit does not change) and any pairs $(0, v_i), (1, v'_i)$ where $v_i > v'_i$. By excluding these pairs we formally capture the idea that players are not concerned about revealing their privacy valuations \emph{except} inasmuch as they may be correlated with their data bits $b_i$ and therefore may reveal something about $b_i$. Since $b_i = 1$ is more sensitive than $b_i = 0$, the correlation suggests that the privacy valuation when $b_i = 1$ should be larger than when $b_i = 0$. This can be seen as an intermediate notion between a model where players do not care at all about leaking their privacy valuation (the insensitive value model of \cite{GR11}), and a model where players care about leaking any and all information about their privacy valuation (the sensitive value model of \cite{GR11}). Of course the assumption that players are not concerned about revealing their privacy valuation except inasmuch as it is correlated with their data is highly context-dependent. There may be settings where the privacy valuation is intrinsically sensitive, independently of the players' data bits, and in these cases using our notion of monotonic valuations would be inappropriate. However, we believe that there are many settings where our relaxation is reasonable.
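The relation just described can be written as a small predicate (an illustrative sketch in our own notation; the relation is formalized later in the paper):

```python
def monotonically_related(type_a, type_b):
    """Return True iff player types (b, v) and (b', v') are monotonically
    related: the data bit flips, and the valuation attached to bit 1 is at
    least the valuation attached to bit 0."""
    (b1, v1), (b2, v2) = type_a, type_b
    if b1 == 0 and b2 == 1:
        return v1 <= v2
    if b1 == 1 and b2 == 0:
        return v1 >= v2
    return False  # same data bit: excluded by the definition

print(monotonically_related((0, 1.0), (1, 3.0)))  # True
print(monotonically_related((0, 3.0), (1, 1.0)))  # False
print(monotonically_related((0, 1.0), (0, 5.0)))  # False
```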
By using this relaxed notion of privacy, we are able to bypass our main impossibility result and prove the following: \begin{theorem}[Main positive result, informal, see \autoref{thm:mon}] For any fixed budget $B$ and $\eps > 0$, for privacy loss functions that only depend on how the output distribution changes between monotonic valuations, there exists a mechanism $M$ that is individually rational for all players and truthful for players with low privacy valuation (specifically $v_i \leq B/(2\eps n)$). Furthermore, as long as the players with low privacy valuation do indeed behave truthfully, then regardless of the behavior of the players with high privacy valuation, the mechanism's output estimates the sum $\sum_i b_i$ to within $\pm (h + O(1/\eps))$, where $h$ is the number of players with high privacy valuation. \end{theorem} Note that even though we fix a budget $B$ beforehand and thus cannot make arbitrarily high payments, we still achieve individual rationality for all players, even those with extremely high privacy valuations $v_i$. We do so by ensuring that such players experience perfect privacy ($\eps_i=0$), assuming they have monotonic valuations. We also remark that while we do not achieve truthfulness for all players, this is not a significant problem as long as the number $h$ of players with high privacy valuation is not too large. This is because the accuracy guarantee holds even if the non-truthful players lie about their valuations. We also give a small improvement to our mechanism that ensures truthfulness for all players with data bit $0$, but at some additional practical inconvenience; we defer the details to the body of the paper.
We remark that besides our specific restriction to monotonic valuations in this paper, the underlying principle of studying restricted notions of privacy loss functions by considering only subsets of neighbors (where the subset should be chosen appropriately based on the specific context) could turn out to be a more generally meaningful and powerful technique that is useful for bypassing impossibility results elsewhere in the study of privacy. \paragraph{Lower bounds on accuracy.} The above positive result raises the question: can we adaptively select the budget $B$ in order to achieve accuracy for all inputs, even those where some players have arbitrarily high privacy valuations? Recall that \autoref{thm:impinf} does not preclude this because we are now only looking at monotonic valuations, whereas \autoref{thm:impinf} considers arbitrary valuations. We nevertheless show that it is impossible: \begin{theorem}[Impossibility of accuracy for all privacy valuations, informal, see \autoref{thm:monimp}] For reasonable privacy loss functions that are sensitive only to changes in the output distribution between monotonic neighbors, and for any $M$ with finite payments that is truthful (even if only for players with $0$ privacy valuation) and individually rational, there exist player privacy valuations $v, v'$ such that $M$ cannot distinguish between $(0^n, v)$ and $(1^n, v')$. \end{theorem} The exact formal condition on finite payments is somewhat stronger here than in \autoref{thm:impinf}, but it remains reasonable; we defer the formal statement to the body of the paper. Finally, we also prove a trade-off showing that when there is a limit on the maximum payment the mechanism makes, then accuracy cannot be improved beyond a certain point, even when considering only monotonic valuations. We defer the statement of this result to \autoref{sec:accurate}.
\subsection{Related work} The relationship between differential privacy and mechanism design was first explored by \cite{MT07}. Besides the already mentioned works, this relationship was explored and extended in a series of works \cite{NST12}, \cite{X13} (see also \cite{X11priv}), \cite{NOS12}, \cite{CCKMV13} (see also \cite{CCKMV11}), \cite{HK12}, \cite{KPRU12}. In \cite{MT07,NST12,HK12,KPRU12}, truthfulness refers only to the utility that players derive from the outcome of the game (as in standard mechanism design) and differential privacy is treated as a separate property. The papers \cite{X13, NOS12, CCKMV13} study whether and when such mechanisms, which are separately truthful and differentially private, remain truthful even if the players are privacy-aware and may incur some loss in utility from the leakage of the private information. Differential privacy has also been used as a technical tool to solve problems that are not necessarily immediately obvious as being privacy-related; the original work of \cite{MT07} does this, by using differential privacy to construct approximately truthful and optimal mechanisms, while more recently, \cite{KPRU12} use differential privacy as a tool to compute approximate equilibria. For more details, we refer the reader to the recent surveys of \cite{R12, PR13}. Two ideas we draw on from this literature (particularly \cite{NOS12,CCKMV13}) are (1) the idea that privacy loss cannot be used as a threat because we do not know if a player will actually experience the maximal privacy loss possible, and therefore we should treat privacy loss functions only as upper bounds on the actual privacy loss, and (2) the idea that it is meaningful to construct mechanisms that are truthful for players with reasonable privacy valuations and accurate if most players satisfy this condition.
Our mechanisms are truthful not for all players but only for players with low privacy valuation; they are accurate if the mechanism designer knows enough about the population to set a budget such that most players have low privacy valuation (with respect to the budget). \section{Definitions} \subsection{Notation} For two distributions $X, Y$ we let $\Delta(X, Y)$ denote their total variation distance (i.e. statistical distance). For an integer $i$ let $[i] = \{1, \ldots, i\}$. For any set $S$ and any vector $v \in S^n$, we let $v_{-i} \in S^{n-1}$ denote the vector $v_1, \ldots, v_{i-1}, v_{i+1}, \ldots, v_n$. We use the following convention: a vector of $n$ entries consisting of $n-1$ variables or constants followed by an \emph{indexed} variable denotes the vector of $n$ entries with the last variable inserted at its index. For example, $(0^{n-1}, v_i)$ denotes the vector with all zeros except at the $i$'th entry, which contains $v_i$. Some notation regarding mechanisms was already introduced in \autoref{sec:results}. \subsection{Differential privacy} \begin{definition} Two inputs $(b, v), (b', v') \in \zo^n \times \R^n $ are \emph{$i$-neighbors} if $b_j = b'_j$ and $v_j = v'_j$ for all $j \neq i$. They are {\em neighbors} if they are $i$-neighbors for some $i \in [n]$. \end{definition} \begin{definition} \label{def:dp} A randomized function $f$ is {\em $(\eps, \delta)$-differentially private} if for all neighbors $(b, v), (b', v')$, it holds that for all subsets $S$ of the range of $f$: \begin{equation} \label{eq:ahpobinpz} \mathop{\mathrm{Pr}}[f(b, v) \in S] \leq e^\eps \mathop{\mathrm{Pr}}[f(b', v') \in S] + \delta. \end{equation} We say $f$ is $\eps$-differentially private if it is $(\eps, 0)$-differentially private.
\end{definition} The symmetric geometric random variable $\mathrm{Geom}(\eps)$ takes integer values with probability mass function $\mathop{\mathrm{Pr}}_{x \getsr \mathrm{Geom}(\eps)}[x = k] \propto e^{-\eps |k|}$ for all $k \in \Z$. It is well-known and easy to verify that for $b \in \zo^n$, the output $\sum b_i + \mathrm{Geom}(\eps)$ is $\eps$-differentially private. \subsection{Privacy loss functions} A {\em privacy loss function} for player $i$ is a real-valued function $\lambda^{(M)}_i(b, v, v'_i, s, p_{-i})$ taking as inputs the vectors of all player types $b, v$, player $i$'s declaration $v'_i$ (not necessarily equal to $v_i$), and a possible outcome $(s, p_{-i}) \in \Z \times \R^{n-1}$ of $M$. The function also depends on the mechanism $M = (M_{\mathsf{out}}, M_{\mathsf{pay}})$. Finally we define \begin{equation} \label{eqn:loss} \mathsf{Loss}^{(M)}_i(b, v, v'_i) = \Exp_{(s,p) \getsr M(b, (v_{-i}, v'_i))} [\lambda^{(M)}_i(b, v, v'_i, s, p_{-i})]. \end{equation} Observe that we have excluded player $i$'s own payment from the output, as we will assume that an outside observer cannot see player $i$'s payment. We let $M_{-i}$ denote the randomized function $M_{-i}(b, v) = (M_{\mathsf{out}}(b, v), M_{\mathsf{pay}}(b, v)_{-i})$. We comment that, in contrast to \cite{CCKMV13}, we allow $\lambda^{(M)}_i$ to depend on the player's declaration $v'_i$ to model the possibility that a player's privacy loss depends on his declaration. Allowing this dependence only strengthens our positive results, while our negative results hold even if we exclude this dependence on $v'_i$.
We remark that even if $\lambda^{(M)}_i$ doesn't depend on $v_i'$, then $\mathsf{Loss}^{(M)}_i$ will still depend on $v'_i$, since it is an expectation over the output distribution of $M_{\mathsf{out}}(b, (v_{-i}, v'_i))$ (see \autoref{eqn:loss}). Since the choice of a specific privacy loss function depends heavily on the context of the mechanism being studied, we avoid fixing a single privacy loss function and rather study several reasonable properties that privacy loss functions should have. Also, while we typically think of privacy valuations and privacy losses as being positive, our definition does not exclude the possibility that players may \emph{want} to lose their privacy, and therefore we allow privacy valuations (and losses) to be negative. Our impossibility results will only assume non-negative privacy loss, while our constructions handle possibly negative privacy loss functions as long as the \emph{absolute value} of the privacy loss function is bounded appropriately. \subsection{Mechanism design criteria} \begin{definition} A mechanism $M = (M_{\mathsf{out}}, M_{\mathsf{pay}})$ is {\em $([\alpha, \alpha'], \beta)$-accurate} on an input $(b, v)\in\zo^n\times\R^n$ if, setting $\overline{b} = \frac{1}{n} \sum_{i=1}^n b_i$, it holds that $$\mathop{\mathrm{Pr}}[M_{\mathsf{out}}(b, v) \notin ((\overline{b} - \alpha) n, (\overline{b} + \alpha') n) ] \leq \beta.$$ We say that $M$ is {\em $(\alpha,\beta)$-accurate} on $(b,v)$ if it is $([\alpha,\alpha],\beta)$-accurate. \end{definition} We define $\mathsf{Pay}^{(M)}_i(b, v) = \Exp_{p \getsr M_{\mathsf{pay}}(b, v)}[p_i]$.
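For a concrete mechanism, this accuracy notion can be estimated empirically (a sketch; the Monte Carlo loop and all names here are ours, not part of the formal model):

```python
import random

def empirical_accuracy_failure(m_out, bits, vals, alpha, alpha_prime, trials=10000):
    """Estimate Pr[M_out(b, v) outside ((bbar - alpha) n, (bbar + alpha') n)]."""
    n = len(bits)
    bbar = sum(bits) / n
    lo, hi = (bbar - alpha) * n, (bbar + alpha_prime) * n
    failures = sum(1 for _ in range(trials) if not (lo < m_out(bits, vals) < hi))
    return failures / trials

# A mechanism adding one fair coin flip of noise to the true count.
noisy = lambda b, v: sum(b) + random.choice([0, 1])
rate = empirical_accuracy_failure(noisy, [1, 0, 1, 1], [0.0] * 4, alpha=0.5, alpha_prime=0.5)
print(rate)  # 0.0: the outputs 3 and 4 always lie in the interval (1, 5)
```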
\begin{definition} Fix $n$, a mechanism $M$ on $n$ players, and privacy loss functions $\lambda^{(M)}_1, \ldots, \lambda^{(M)}_n$. We say $M$ is \emph{individually rational} if for all inputs $(b, v) \in \zo^n \times \R^n$ and all $i \in [n]$: $$\mathsf{Pay}^{(M)}_i(b, v) \geq \mathsf{Loss}^{(M)}_i(b, v, v_i). $$ $M$ is \emph{truthful for input $(b, v)$ and player $i$} if for all $v'_i$ it holds that $$\mathsf{Pay}^{(M)}_i(b, v) - \mathsf{Loss}^{(M)}_i(b, v, v_i) \quad \geq \quad \mathsf{Pay}^{(M)}_i(b, (v_{-i}, v'_i)) - \mathsf{Loss}^{(M)}_i(b, v, v'_i).$$ $M$ is simply \emph{truthful} if it is truthful for all inputs and all players. \end{definition} \section{Impossibility of non-trivial accuracy with privacy} \label{sec:neg} We will use a notion of distinguishability that captures when a function leaks information about an input pertaining to a particular player. \begin{definition} An input $(b, v)\in \zo^n\times\R^n$ is \emph{$\delta$-distinguishable} for player $i$ with respect to a randomized function $f$ if there is an $i$-neighbor $(b', v')$ such that $\Delta(f(b, v), f(b', v')) \geq \delta$. \end{definition} We choose a notion based on statistical distance because it allows us to capture $(\eps, \delta)$-differ\-ential privacy even for $\delta > 0$. Namely, if there is an input $(b, v)\in \zo^n\times \R^n$ that is $\delta$-distinguishable for player $i$ with respect to $f$, then $f$ cannot be $(\eps, \delta')$-differentially private for any $\eps,\delta'$ satisfying $\delta > \delta' + e^\eps - 1 \approx \delta' + \eps$. However, note that, unlike differential privacy, $\delta$-distinguishability is a {\em per-input} notion, measuring how much privacy loss a player can experience on a particular input $(b,v)$, {\em not} taking the worst case over all inputs.
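To spell out the calculation behind this claim (a one-step derivation we include for completeness), suppose $f$ were $(\eps, \delta')$-differentially private. Then for the witnessing $i$-neighbors $(b, v), (b', v')$ and every event $S$, $$\mathop{\mathrm{Pr}}[f(b, v) \in S] - \mathop{\mathrm{Pr}}[f(b', v') \in S] \leq (e^\eps - 1)\mathop{\mathrm{Pr}}[f(b', v') \in S] + \delta' \leq (e^\eps - 1) + \delta',$$ so taking the maximum over all events $S$ gives $\Delta(f(b, v), f(b', v')) \leq e^\eps - 1 + \delta'$, which is incompatible with $\delta$-distinguishability whenever $\delta > \delta' + e^\eps - 1$.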
For our impossibility result we will require that any specified privacy loss should be attainable if the player's privacy valuation is large enough, as long as there is in fact a noticeable amount of information about the player's type being leaked (\ie the player's input is somewhat distinguishable). Note that having unbounded privacy losses is necessary for having any kind of negative result. If the privacy losses were always upper-bounded by some value $L$, then a trivially truthful and individually rational mechanism would simply pay every player $L$ and output the exact sum of data bits. \begin{definition} \label{def:increasing} A privacy loss function $\lambda^{(M)}_i$ for a mechanism $M$ and player $i$ is \emph{increasing for $\delta$-distin\-guishability} if there exists a real-valued function $T_i$ such that for all $\ell > 0$, $b \in \zo^n$ and $v_{-i} \in \R^{n-1}$, if $v_i \geq T_i(\ell, b, v_{-i})$ and if $(b, v)$ is $\delta$-distinguishable for player $i$ with respect to $M_{\mathsf{out}}$, then $\mathsf{Loss}^{(M)}_i(b, v, v_i) > \ell$. \end{definition} Notice that in our notion of increasing for $\delta$-distinguishability we only consider distinguishability for $M_{\mathsf{out}}$ and not for $(M_{\mathsf{out}}, M_{\mathsf{pay}})$. Being able to handle this definition is what makes our impossibility result rule out mechanisms even for privacy loss functions depending only on the distribution of $M_{\mathsf{out}}$. \autoref{def:increasing} implies that the privacy loss functions are unbounded. We next define a natural property of loss functions: for privacy-indifferent players, the privacy loss is not affected by the particular value reported for $v_i$.
\begin{definition} A privacy loss function $\lambda^{(M)}_i$ for a mechanism $M$ and player $i$ \emph{respects indifference} if whenever $v_i = 0$ it follows that $\mathsf{Loss}^{(M)}_i(b, v, v'_i) = \mathsf{Loss}^{(M)}_i(b, v, v''_i)$ for all $v'_i, v''_i$. \end{definition} \begin{theorem} \label{thm:imp} Fix a mechanism $M$ and a number of players $n$, and non-negative privacy loss functions $\lambda^{(M)}_1, \ldots, \lambda^{(M)}_n$. Suppose that the $\lambda^{(M)}_i$ respect indifference, and are increasing for $\delta$-distin\-guishability for some $\delta \leq \tfrac{1}{6n}$. Suppose that $M$ satisfies all of the following: \begin{itemize*} \item $M$ is individually rational. \item $M$ has finite payments when all players are privacy-indifferent, in the sense that for all $b \in \zo^n$ and all $i \in [n]$, it holds that $\mathsf{Pay}^{(M)}_i(b, 0^n)$ is finite. \item $M$ is truthful for privacy-indifferent players, namely $M$ is truthful for all inputs $(b, v)$ and players $i$ such that $v_i = 0$. \end{itemize*} Then it follows that $M$ cannot have non-trivial accuracy in the sense that it cannot be $(1/2, 1/3)$-accurate on $(0^n, 0^n)$ and $(1^n, 0^n)$. \end{theorem} \begin{proof} We write $\mathsf{Pay}_i, \mathsf{Loss}_i, \lambda_i$ to denote $\mathsf{Pay}^{(M)}_i, \mathsf{Loss}^{(M)}_i, \lambda^{(M)}_i$.
By the assumption that $M$ has finite payments when all players are privacy-indifferent, we can define $$P = \max_{i \in [n], b \in \zo^n} \mathsf{Pay}_i(b, 0^n) < \infty.$$ By the assumption that all the $\lambda_i$ are increasing for $\delta$-distin\-guishability, we may define a threshold $$L = \max_{i \in [n], b \in \zo^n} T_i(P, b, 0^{n-1})$$ such that for all $i \in [n], b \in \zo^n, v_i \geq L$, it holds that if $(b, (0^{n-1}, v_i))$ is $\delta$-distinguishable, then $\mathsf{Loss}_i(b, (0^{n-1}, v_i), v_i) > P$. We construct a sequence of $2n+1$ inputs $x^{(1,0)}, x^{(1,1)}, x^{(2,0)}, x^{(2,1)}, \ldots, x^{(n,0)}, x^{(n,1)}, x^{(n+1,0)}$. In $x^{(1,0)}$, all players have data bit 0 and privacy valuation 0. That is, $x^{(1,0)}=(0^n,0^n)$. From $x^{(i,0)}$, we construct $x^{(i,1)}$ by changing player $i$'s data bit $b_i$ from 0 to 1 and valuation $v_i$ from 0 to $L$. From $x^{(i,1)}$, we construct $x^{(i+1,0)}$ by changing player $i$'s valuation $v_i$ back from $L$ to $0$ (but $b_i$ remains 1). Thus, \begin{eqnarray*} x^{(i,0)} & = & ((1^{i-1}, 0, 0^{n-i}), (0^{i-1}, 0, 0^{n-i})), \quad\textrm{and} \\ x^{(i, 1)} & = & ((1^{i-1}, 1, 0^{n-i}), (0^{i-1}, L, 0^{n-i})) \end{eqnarray*} In particular, $x^{(n+1,0)} = (1^n, 0^n)$. Define the hybrid distributions $H^{(i,j)} = M_{\mathsf{out}}(x^{(i,j)})$. \begin{claim} For all $i\in [n]$, $\mathsf{Pay}_i(x^{(i,1)}) \leq \mathsf{Pay}_i(x^{(i+1,0)}) \leq P$.\end{claim} To prove this claim, we first note that all players have privacy valuation 0 in $x^{(i+1,0)}$, so $\mathsf{Pay}_i(x^{(i+1,0)}) \leq P$ by the definition of $P$. Since player $i$ has privacy valuation 0 in $x^{(i+1,0)}$, we also know that the privacy loss of player $i$ on input $x^{(i+1,0)}$ is independent of her declaration (since $\lambda_i$ respects indifference).
If player $i$ declares $L$ as her valuation instead of 0, she would get payment $\mathsf{Pay}_i(x^{(i,1)})$. By truthfulness for privacy-indifferent players, we must have $\mathsf{Pay}_i(x^{(i,1)}) \leq \mathsf{Pay}_i(x^{(i+1,0)})$. By the definition of $L$ it follows that $x^{(i,1)}$ cannot be $\delta$-distinguishable for player $i$ with respect to $M_{\mathsf{out}}$. Otherwise, this would contradict individual rationality because on input $x^{(i,1)}$ player $i$ would have privacy loss $> P$ while only getting payoff $\leq P$. Since $x^{(i,1)}$ is not $\delta$-distinguishable for player $i$ with respect to $M_{\mathsf{out}}$, and because $x^{(i,1)}$ is an $i$-neighbor of $x^{(i, 0)}$ as well as $x^{(i+1, 0)}$, it follows that \begin{equation} \label{eq:ahgpoiha} \Delta(H^{(i,0)}, H^{(i,1)}) < \delta \text{ and }\Delta(H^{(i,1)}, H^{(i+1,0)}) < \delta. \end{equation} Finally, since \autoref{eq:ahgpoiha} holds for all $i \in [n]$, and since $H^{(1,0)} = M_{\mathsf{out}}(0^n, 0^n)$ and $H^{(n+1,0)} = M_{\mathsf{out}}(1^n, 0^n)$, we have by the triangle inequality that $$\Delta(M_{\mathsf{out}}(0^n, 0^n), M_{\mathsf{out}}(1^n, 0^n)) < 2n\delta.$$ But since $\delta \leq 1/6n$, this contradicts the fact that $M$ has non-trivial accuracy, since non-trivial accuracy implies that we can distinguish between the output of $M_{\mathsf{out}}$ on inputs $(0^n, 0^n)$ and $(1^n, 0^n)$ with advantage greater than $1/3$, simply by checking whether the output is greater than $n/2$. \end{proof} \subsection{Subsampling for low-distinguishability privacy loss functions} We comment that the $\delta\leq 1/6n$ bound in \autoref{thm:imp} is tight up to a constant factor. Indeed, if players do not incur significant losses when their inputs are $O(1/n)$-distinguishable, then an extremely simple mechanism based on sub-sampling can be used to achieve truthfulness, individual rationality, and accuracy with finite budget.
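Concretely, such a subsampling mechanism might look as follows (a sketch with parameter names of our choosing; the formal description is given next):

```python
import random

def subsample_mechanism(bits, valuations, k, pay_per_player):
    """Pay every player a fixed amount P, then estimate the count from a
    random subset A of k players, scaled up by n/|A|."""
    n = len(bits)
    payments = [pay_per_player] * n       # every player receives P
    sample = random.sample(range(n), k)   # random subset A with |A| = k
    estimate = (n / k) * sum(bits[i] for i in sample)
    return estimate, payments

est, pays = subsample_mechanism([1] * 60 + [0] * 40, [0.0] * 100, k=10, pay_per_player=5.0)
print(pays[0])           # 5.0
print(0 <= est <= 100)   # True
```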
Namely, suppose that the privacy loss functions are such that, for all $i$, if player $i$'s input is not $C/n$-distinguishable for some constant $C$, then regardless of $v_i$, the loss to player $i$ is bounded by $P$. Then the following mechanism is truthful, individually rational, and accurate: pay all players $P$, select at random a subset $A$ of size $k$ for some $k<C$ from the population, and output $(n/|A|)\cdot \sum_{i \in A} b_i$. By a Chernoff bound, this mechanism is $(\eta, 2e^{-\eta^2k})$-accurate for all $\eta>0$. By construction no player's input is $C/n$-distinguishable and therefore their privacy loss is at most $P$ and the mechanism is individually rational. Finally, the mechanism is truthful since its behavior is independent of the players' declarations. \section{Positive results} For our positive results, we will require the following natural property from our privacy loss functions. Recall that we allow the privacy loss functions $\lambda^{(M)}_i$ to depend on a player's report $v'_i$, in addition to the player's true type. We require the dependence on $v'_i$ to be well-behaved in that if changing declarations does not change the output distribution, then it also does not change the privacy loss. \begin{definition} A privacy loss function $\lambda^{(M)}_i$ \emph{respects identical output distributions} if the following holds: for all $b, v$, if the distribution of $M_{-i}(b, v)$ is identical to $M_{-i}(b, (v_{-i}, v'_i))$, then for all $s,p$, it holds that $\lambda^{(M)}_i(b, v, v'_i, s, p) = \lambda^{(M)}_i(b, v, v_i, s, p)$. \end{definition} The above definition captures the idea that if what the privacy adversary can see (namely the output of $M_{-i}$) doesn't change, then player $i$'s privacy loss should not change.
\subsection{Monotonic valuations} We now define our main conceptual restriction of the privacy loss functions to consider only \emph{monotonic valuations}. \begin{definition} Two player types $(b_i, v_i), (b'_i, v'_i)\in \zo\times \R$ are said to be \emph{monotonically related} iff ($b_i=0$, $b'_i=1$, and $v_i \leq v'_i$) or ($b_i=1$, $b'_i=0$, and $v_i \geq v'_i$). Two inputs $(b, v), (b', v')\in \zo^n\times \R^n$ are \emph{monotonic $i$-neighbors} if they are $i$-neighbors and furthermore $(b_i, v_i), (b'_i, v'_i)$ are monotonically related. They are {\em monotonic neighbors} if they are monotonic $i$-neighbors for some $i \in [n]$. \end{definition} Following \cite{CCKMV13}, we also make the assumption that the privacy loss functions on a given output $(s,p_{-i})$ are bounded by the amount of influence that player $i$'s report has on the probability of the output: \begin{definition} A privacy loss function $\lambda^{(M)}_i$ is \emph{bounded by differential privacy} if the following holds: $$ \left|\lambda^{(M)}_i(b, v, v'_i, s, p_{-i})\right| \leq v_i \cdot \left( \max_{(b''_i, v''_i)} \log \frac{\mathop{\mathrm{Pr}}[M_{-i}(b, v) = (s, p_{-i})]} {\mathop{\mathrm{Pr}}[M_{-i}((b_{-i}, b''_i), (v_{-i}, v''_i)) = (s, p_{-i})]} \right)$$ A privacy loss function $\lambda^{(M)}_i$ is \emph{bounded by differential privacy for monotonic valuations} if: $$ \left|\lambda^{(M)}_i(b, v, v'_i, s, p_{-i}) \right| \leq v_i \cdot \left( \max_{(b''_i, v''_i) \text{ mon.
related to } (b_i, v_i)} \log \frac{\mathop{\mathrm{Pr}}[M_{-i}(b, v) = (s, p_{-i})]} {\mathop{\mathrm{Pr}}[M_{-i}((b_{-i}, b''_i), (v_{-i}, v''_i)) = (s, p_{-i})]} \right) $$ \end{definition} As noted and used in \cite{CCKMV13}, the RHS in the above definition can be upper-bounded by the level of (pure) differential privacy, and the same holds for monotonic valuations: \begin{fact} \label{fact:bound} If $M_{-i}$ is $\eps$-differentially private for $i$-neighbors (\ie \autoref{eq:ahpobinpz} holds for all $i$-neighbors) and $\lambda^{(M)}_i$ is bounded by differential privacy (even if only for monotonic valuations), then player $i$'s privacy loss is bounded by $v_i \eps$ regardless of other player types, player declarations, or outcomes. \end{fact} As hinted at in the definition of privacy loss functions bounded by differential privacy for monotonic valuations, one can define an analogue of differential privacy where we take the maximum over just monotonically related neighbors. However this notion is not that different from the original notion of differential privacy, since satisfying such a definition for some privacy parameter $\eps$ immediately implies satisfying (standard) differential privacy for privacy parameter $3\eps$, since every two pairs $(b_i,v_i)$ and $(b_i',v_i')$ are at distance at most 3 in the monotonic-neighbor graph. The monotonic neighbor notion becomes more interesting if we consider a further variant of differential privacy where the privacy guarantee $\eps_i$ afforded to an individual depends on her data $(b_i,v_i)$ (e.g. $\eps_i=1/v_i$).
We defer exploration of this notion to a future version of this paper. \subsection{Mechanism for monotonic valuations} The idea behind our mechanism for players with monotonic valuations (\autoref{alg:mon}) is simply to treat the data bit as 0 (the insensitive value) for all players who value privacy too much. \begin{algorithmf}{Mechanism for monotonic valuations \label{alg:mon}} Input: $(b, v) \in \zo^n \times \R^n$. Auxiliary inputs: budget $B > 0$, privacy parameter $\eps > 0$. \begin{enumerate} \item For all $i \in [n]$, set $b'_i = b_i$ if $2\eps v_i \leq B/n$, otherwise set $b'_i = 0$. \item Output $\sum_{i=1}^n b'_i + \mathrm{Geom}(\eps)$. \item Pay $B/n$ to player $i$ if $2\eps v_i \leq B/n$, else pay player $i$ nothing. \end{enumerate} \end{algorithmf} \begin{theorem} \label{thm:mon} For privacy loss functions that are bounded by differential privacy for monotonic valuations and respect identical output distributions, the mechanism $M$ in \autoref{alg:mon} satisfies the following: \begin{enumerate} \item \label{item:truth} $M$ is truthful for all players with $2\eps v_i \leq B/n$. \item $M$ is individually rational for all players. \item Assume only that the truthful players described in Point \ref{item:truth} do indeed declare their true types. Letting $\eta$ denote the fraction of players where $b_i = 1$ and $2\eps v_i > B / n$, it holds that $M$ is $([\eta + \gamma, \gamma], 2e^{-\eps \gamma n})$-accurate. \end{enumerate} \end{theorem} \begin{proof} \textbf{Truthfulness for players with $2\eps v_i \leq B/n$}: if $2\eps v_i \leq B/n$, then declaring any $v'_i \leq B/(2\eps n)$ has no effect on the output of the mechanism, and so there is no change in utility since the privacy loss functions respect identical output distributions. If player $i$ declares some $v'_i > B/(2 \eps n)$, then he loses $B/n$ in payment.
Because $M_{-i}$ is $\eps$-differentially private for $i$-neighbors (recall we assume an observer cannot see the change in $p_i$) and we assumed that the privacy loss functions are bounded by differential privacy for monotonic valuations, it follows that player $i$'s privacy loss has absolute value at most $\eps v_i$ under a report of $v_i$ and under a report of $v_i'$ (\autoref{fact:bound}). Thus, there is at most a change of $2 \eps v_i$ in privacy loss, which is not sufficient to overcome the payment loss of $B/n$. \textbf{Individual rationality:} consider any vector of types $b, v$ and any player $i$. If $v_i \leq B/(2\eps n)$ then player $i$ receives payment $B/n$. By the hypothesis that the privacy loss functions are bounded by differential privacy for monotonic valuations, and because the mechanism is $\eps$-differentially private, the privacy loss to player $i$ is bounded by $\eps v_i < B/n$ (\autoref{fact:bound}), satisfying individual rationality. Now suppose that player $i$ has valuation $v_i > \tfrac{B}{2 \eps n}$. In this case the payment is $0$. The mechanism sets $b'_i = 0$, and for every $(b''_i, v''_i)$ monotonically related to $(b_i, v_i)$ the mechanism also sets $b'_i = 0$. Since the report of player $i$ does not affect $b'_j$ or the payment to player $j$ for $j\neq i$, monotonic neighbors will produce the \emph{exact} same output distribution of $M_{-i}$. Therefore the privacy loss of player $i$ is 0. Indeed, since the privacy loss function is bounded by differential privacy for monotonic valuations, we have: $$ \left|\lambda^{(M)}_i(b, v, v'_i, s, p_{-i}) \right| \leq v_i \cdot \left( \max_{(b''_i, v''_i) \text{ mon.
related to } (b_i, v_i)} \log \frac{\mathop{\mathrm{Pr}}[M_{-i}(b, v) = (s, p_{-i})]} {\mathop{\mathrm{Pr}}[M_{-i}((b_{-i}, b''_i), (v_{-i}, v''_i)) = (s, p_{-i})]} \right) = 0$$ \textbf{Accuracy:} the bits of the $(1 - \eta)$ fraction of truthful players and players with $b_i = 0$ are always counted correctly, while the bits of the $\eta$ fraction of players with $b_i = 1$ and large privacy valuation $v_i \geq B/(2 \eps n)$ are either counted correctly (if they declare a value less than $B/(2 \eps n)$) or are counted as $0$ (if they declare otherwise). This means that $\overline{b'} = \sum_{i=1}^n b'_i$ and $\overline{b} = \sum_{i=1}^n b_i$ satisfy $\overline{b'}\in [\overline{b}-\eta n,\overline{b'}]$. \Snote{removed $1/n$ factors} By the definition of symmetric geometric noise, it follows that (letting $v'$ be the declared valuations of the players) it holds that $$\mathop{\mathrm{Pr}}[|M_{\mathsf{out}}(b, v') - \overline{b'}| \geq \gamma n] < 2 e^{-\eps \gamma n}.$$ \Snote{changed strict inequalities to non-strict in this tail bound to match new def of accuracy - please check} \Dnote{fine with me - I made the bound itself strict to stay consistent with the definition, this is true from the definition of geometric noise} The theorem follows. \end{proof} \subsubsection{Achieving better truthfulness} We can improve the truthfulness of \autoref{thm:mon} to include all players with data bit $0$. \begin{theorem} \label{thm:moretruth} Let $M'$ be the same as in \autoref{alg:mon}, except that \emph{all} players with $b_i = 0$ are paid $B/n$, even those with large privacy valuations. Suppose that the $\lambda^{(M')}_i$ are bounded by differential privacy for monotonic valuations and also respect identical output distributions. Then the conclusions of \autoref{thm:mon} hold and in addition the mechanism is truthful for all players with data bit $b_i = 0$. 
\end{theorem} Note that, unlike \autoref{alg:mon}, here the payment that the mechanism makes to players depends on their data bit, and not just on their reported valuation. This might make it impractical in some settings (e.g. if payment is needed before players give permission to view their data bits). \begin{proof} Increasing the payments to the players with $b_i = 0$ and privacy valuation $v_i > \frac{B}{2 \eps n}$ does not hurt individual rationality or accuracy. We must however verify that we have not harmed truthfulness. Since players are not allowed to lie about their data bit, the same argument for truthfulness of players with $b_i = 1$ and $v_i \leq B/(2 \eps n)$ remains valid. It is only necessary to verify that truthfulness holds for all players with $b_i = 0$. Observe that for players with $b_i = 0$, the output distribution of the mechanism is identical regardless of their declaration for $v_i$. Therefore by the assumption that the $\lambda^{(M)}_i$ respect identical output distributions, changing their declaration does not change their privacy loss. Furthermore, by the definition of $M'$ changing their declaration does not change their payment as all players with $b_i = 0$ are paid $B/n$. Therefore, there is no advantage to declaring a false valuation. \end{proof} We remark that \autoref{thm:moretruth} is only preferable to \autoref{thm:mon} in settings where knowing the true valuations has some value beyond simply helping to achieve an accurate output; in particular, notice that $M'$ as defined in \autoref{thm:moretruth} does not guarantee any better accuracy or any lower payments (indeed, it may make more payments than the original \autoref{alg:mon}). 
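As a concrete illustration, the thresholding mechanism of \autoref{alg:mon} can be sketched in Python as follows. This is a minimal sketch, not the paper's implementation: it assumes $\mathrm{Geom}(\eps)$ denotes the two-sided geometric distribution with $\Pr[Z=z] \propto e^{-\eps|z|}$, and the `noise` argument is a hypothetical hook added here for deterministic testing.

```python
import math
import random

def two_sided_geometric(eps, rng=random):
    # Sample Z with Pr[Z = z] = ((1 - a) / (1 + a)) * a^|z|, a = exp(-eps),
    # realized as the difference of two i.i.d. geometric counts.
    a = math.exp(-eps)
    g1 = g2 = 0
    while rng.random() < a:
        g1 += 1
    while rng.random() < a:
        g2 += 1
    return g1 - g2

def monotonic_mechanism(b, v, B, eps, noise=None):
    # Players whose declared valuation satisfies 2*eps*v_i <= B/n keep
    # their bit and are paid B/n; all others have their bit replaced by
    # 0 (the insensitive value) and receive no payment.
    n = len(b)
    pay = B / n
    kept = [2 * eps * vi <= pay for vi in v]
    b_prime = [bi if k else 0 for bi, k in zip(b, kept)]
    z = two_sided_geometric(eps) if noise is None else noise
    payments = [pay if k else 0.0 for k in kept]
    return sum(b_prime) + z, payments
```

Since $\Pr[|Z| \geq t] = 2a^t/(1+a) < 2e^{-\eps t}$ for this noise distribution, the output is within $\gamma n$ of $\sum_i b'_i$ except with probability less than $2e^{-\eps \gamma n}$, in line with the accuracy claim of \autoref{thm:mon}.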
\section{Lower bounds} \subsection{Impossibility of non-trivial accuracy for all privacy valuations with monotonic privacy} One natural question that \autoref{alg:mon} raises is whether we can hope to adaptively set the budget $B$ based on the valuations of the players and thereby achieve accuracy for all inputs, not just inputs where most players' privacy valuations are small relative to some predetermined budget. In this section we show that this is not possible, even when only considering players who care about privacy for monotonic neighbors. \begin{definition} An input $(b, v)\in \zo^n\times \R^n$ is \emph{$\delta$-monotonically distinguishable} for player $i$ with respect to a randomized function $f$ if there is a monotonic $i$-neighbor $(b', v')$ such that $\Delta(f(b, v), f(b', v')) \geq \delta$. \end{definition} \begin{definition} \label{def:increasingmon} A privacy loss function $\lambda^{(M)}_i$ for a mechanism $M$ and player $i$ is \emph{increasing for $\delta$-monotonic distinguishability} if there exists a real-valued function $T_i$ such that for all $\ell > 0$, $b \in \zo^n$ and $v_{-i} \in \R^{n-1}$, if $v_i \geq T_i(\ell, b, v_{-i})$ and if $(b, v)$ is $\delta$-monotonically distinguishable for player $i$ with respect to $M_{\mathsf{out}}$, then $\mathsf{Loss}^{(M)}_i(b, v, v_i) > \ell$. \end{definition} \begin{theorem} \label{thm:monimp} Fix a mechanism $M$ and a number of players $n$, and non-negative privacy loss functions $\lambda^{(M)}_1, \ldots, \lambda^{(M)}_n$. Suppose that the $\lambda^{(M)}_i$ respect indifference and are increasing for $\delta$-monotonic distinguishability for $\delta \leq \tfrac{1}{3n}$. Suppose $M$ satisfies all the following: \begin{itemize*} \item $M$ is individually rational.
\item $M$ always has finite payments, in the sense that for all $b \in \zo^n, v \in \R^n$ and all $i \in [n]$ it holds that $\mathsf{Pay}^{(M)}_i(b, v)$ is finite. \item $M$ is truthful for privacy-indifferent players, as in \autoref{thm:imp}. \end{itemize*} Then $M$ does not have non-trivial accuracy for all privacy valuations, namely $M$ cannot be $(1/2, 1/3)$-accurate on $(0^n, v)$ and $(1^n, v)$ for all $v \in \R^n$. \end{theorem} \begin{proof} The argument follows the same outline as the proof of \autoref{thm:imp}, \ie by constructing a sequence of hybrid inputs and using truthfulness for privacy-indifferent players and individual rationality to argue that the neighboring hybrids must produce statistically close outputs. However, we have to take more care here because for the hybrids in this proof there is no uniform way to set the maximum payment $P$ and threshold valuation $L$ for achieving privacy loss $>P$ at the beginning of the argument, since here we allow the finite payment bound to depend on the valuations (whereas \autoref{thm:imp} only refers to the payment bound when all valuations are zero). Instead, we set $P_i, L_i$ for the $i$'th hybrid in a way that depends on $L_{[i-1]} = (L_1,\ldots,L_{i-1})$. As before, we have $2n+1$ inputs $x^{(1,0)}, x^{(1,1)}, x^{(2,0)}, x^{(2,1)}, \ldots, x^{(n,0)}, x^{(n,1)}, x^{(n+1,0)}$, which we define inductively as follows. In $x^{(1,0)}$, all players have data bit 0 and privacy valuation 0. That is, $x^{(1,0)}=(0^n,0^n)$. From $x^{(i,0)}$, we define $x^{(i,1)}$ by changing player $i$'s data bit from 0 to 1.
From $x^{(i,1)}=(b^{(i)},v^{(i)})$, we define $P_i = \mathsf{Pay}_i(x^{(i,1)})$ to be the amount that player $i$ is paid in $x^{(i,1)}$, and $L_i = T_i(P_i,b^{(i)},v^{(i)}_{-i})$ to be a privacy valuation beyond which payment $P_i$ does not compensate for $\delta$-distinguishability (as promised by \autoref{def:increasingmon}). Then we define $x^{(i+1,0)}$ by increasing the valuation of player $i$ from 0 to $L_i$. By induction, for $i = 1, \ldots, n+1$, we have $$x^{(i,0)} = (1^{i-1} 0^{n-i+1}, L_{[i-1]} 0^{n-i+1}).$$ Define the distribution $H^{(i)} = M_{\mathsf{out}}(x^{(i,0)})$. \begin{claim} \label{clm:paogih} $\mathsf{Pay}_i(x^{(i+1,0)}) \leq \mathsf{Pay}_i(x^{(i,1)}) = P_i$ \end{claim} On input $x^{(i,1)}$, player $i$ has privacy valuation 0, so his privacy loss is independent of his declaration (since $\lambda_i$ respects indifference). Declaring $L_i$ would change the input to $x^{(i+1,0)}$, so by truthfulness for privacy-indifferent players, we have $\mathsf{Pay}_i(x^{(i+1,0)}) \leq \mathsf{Pay}_i(x^{(i,1)})$. By the definition of $L_i$, $x^{(i+1,0)}$ cannot be $\delta$-monotonically distinguishable for player $i$ with respect to $M_{\mathsf{out}}$.
Otherwise, this would contradict individual rationality because on input $x^{(i+1,0)}$ player $i$ would have privacy loss greater than $P_i$ while only getting a payoff of at most $P_i$ (by \autoref{clm:paogih}). Since $x^{(i+1,0)}$ is not $\delta$-monotonically distinguishable for player $i$ with respect to $M_{\mathsf{out}}$, and because $x^{(i,0)}$ is an $i$-monotonic neighbor of $x^{(i+1,0)}$, it follows that $\Delta(H^{(i)}, H^{(i+1)}) < \delta$. Finally, since this holds for all $i \in [n]$, the triangle inequality implies that $\Delta(H^{(1)}, H^{(n+1)}) < n \delta$. But since $\delta \leq 1/(3n)$, this implies that $$\Delta(M_{\mathsf{out}}(0^n, 0^n), M_{\mathsf{out}}(1^n, L_{[n]})) < 1/3$$ contradicting the assumption that $M$ has non-trivial accuracy for all privacy valuations. \end{proof} \subsection{Tradeoff between payments and accuracy} \label{sec:accurate} One could also ask whether the accuracy of \autoref{thm:mon} can be improved, \ie whether it is possible to beat $([\eta + \gamma, \gamma], 2e^{-\eps \gamma n})$-accuracy. We now present a result that, assuming the mechanism does not exceed a certain amount of payment, limits the best accuracy it can achieve. (We note however that this bound is loose and does not match our mechanism.) In order to prove optimality we will require that the privacy loss functions be growing with statistical distance, which is a strictly stronger condition than being increasing for $\delta$-distinguishability.
However, a stronger requirement is unavoidable since one can invent contrived privacy loss functions that are increasing but for which one can achieve $(\eta, 0)$-accuracy simply by outputting $\sum b'_i$ as constructed in \autoref{alg:mon} without noise (while preserving the same truthfulness and individual rationality guarantees). Nevertheless, being growing with statistical distance for monotonic neighbors is compatible with being bounded by differential privacy for monotonic neighbors (\ie there exist functions that satisfy both properties), and therefore the following result still implies limits to how much one can improve the accuracy of \autoref{thm:mon} for all privacy loss functions bounded by differential privacy for monotonic neighbors. \begin{definition} $\lambda^{(M)}_i(b, v, v'_i, s, p_{-i})$ is \emph{growing with statistical distance (for monotonic neighbors)} if: $$\mathsf{Loss}^{(M)}_i(b, v, v_i) \geq v_i \cdot \left( \max_{(b', v')} \Delta(M_{\mathsf{out}}(b, v), M_{\mathsf{out}}(b', v')) \right)$$ where the maximum is taken over $(b', v')$ that are (monotonic) $i$-neighbors of $(b, v)$. \end{definition} \begin{theorem} Fix a mechanism $M$, a number of players $n$, and privacy loss functions $\lambda^{(M)}_i$ for $i = 1, \ldots, n$. Suppose that the $\lambda^{(M)}_i$ respect indifference and are growing with statistical distance for monotonic neighbors. Suppose that $M$ satisfies the following: \begin{itemize*} \item $M$ is individually rational. \item There exists a maximum payment over all possible inputs that $M$ makes to any player who declares $0$ privacy valuation. Call this maximum value $P$. \item $M$ is truthful for privacy-indifferent players as defined in \autoref{thm:imp}.
\end{itemize*} Then it holds that for any $\tau, \gamma, \eta > 0$ such that $\eta + 2\gamma \leq 1$, and any $\beta < \frac{1}{2} - \tfrac{P}{\tau} \gamma n$, the mechanism $M$ cannot be $([\eta + \gamma, \gamma], \beta)$-accurate on all inputs where at most an $\eta$ fraction of the players' valuations exceed $\tau$. \end{theorem} \begin{proof} Fix any $\tau, \eta, \gamma > 0$ and any $\beta < \tfrac{1}{2} - \tfrac{P}{\tau} \gamma n$. We prove the theorem by showing that $M$ cannot be $([\eta + \gamma, \gamma], \beta)$-accurate. Let $h = \eta n$ denote the number of players with high privacy valuation allowed. Fix any $L \geq P h / (1 - 2 \tfrac{P}{\tau} \gamma n - 2\beta)$. Consider the following sequence of hybrid inputs. Let $x^{(1,0)} = (0^n, 0^n)$. From $x^{(i,0)}$, define $x^{(i,1)}$ by flipping player $i$'s data bit from $0$ to $1$. From $x^{(i,1)}$, define $x^{(i+1,0)}$ by increasing the valuation of player $i$ from $0$ to $L$ if $i \in [h]$, or from $0$ to $\tau$ if $i \in (h, h + 2 \gamma n]$. By induction, we have: \begin{eqnarray*} \forall i \in [h+1], & x^{(i,0)} = & (1^{i-1} 0^{n-i+1}, L^{i-1} 0^{n-i+1}) \\ \forall i \in (h+1, h+2\gamma n+1], & x^{(i, 0)} = & (1^{i-1} 0^{n-i+1}, L^h \tau^{i-h-1} 0^{n-i+1}) \end{eqnarray*} These are well-defined since $h+2\gamma n = (\eta + 2\gamma) n \leq n$. Define the hybrids $H^{(i,0)} = M_{\mathsf{out}}(x^{(i,0)})$. To analyze these hybrids, we use the following claims. \begin{claim} \label{claim:indist} For any input $(b, v)$ where player $i$ is paid at most $P$, it holds that $(b, v)$ is not $\delta$-distinguishable for monotonic neighbors for player $i$ with respect to $M_{\mathsf{out}}$ for any $\delta \geq P / v_i$. \end{claim} \autoref{claim:indist} holds because by individual rationality, it holds that the privacy loss does not exceed $P$.
By the assumption that the privacy loss functions are growing with statistical distance for monotonic neighbors, it follows that $\Delta(M_{\mathsf{out}}(b, v), M_{\mathsf{out}}(b', v')) \leq P / v_i$ for all $(b', v')$ monotonic neighbors of $(b, v)$. \begin{claim} \label{claim:apobnipbn} $\mathsf{Pay}_i(x^{(i+1,0)}) \leq \mathsf{Pay}_i(x^{(i,1)}) \leq P$. \end{claim} As in the proof of \autoref{thm:monimp}, this claim holds because on input $x^{(i,1)}$, player $i$ has $0$ privacy valuation, and so $\mathsf{Pay}_i(x^{(i,1)}) \leq P$ by our assumption that the mechanism pays at most $P$ to players with $0$ privacy valuation. $\mathsf{Pay}_i(x^{(i+1,0)}) \leq \mathsf{Pay}_i(x^{(i,1)})$ follows as in the proof of \autoref{thm:monimp} from the truthfulness of the mechanism for privacy-indifferent players and by the fact that the privacy loss functions respect indifference. We may apply \autoref{claim:indist} to conclude that for all $i \in [h]$, since player $i$ has valuation $L$ in $x^{(i+1,0)}$, it holds that $x^{(i+1,0)}$ cannot be $(P/L)$-distinguishable for monotonic neighbors for player $i$. Since $x^{(i,0)}, x^{(i+1,0)}$ are monotonic $i$-neighbors, it follows that $\Delta(H^{(i,0)}, H^{(i+1,0)}) < P/L$. Repeating the same argument for all $i \in [h+1, h +2\gamma n]$ and using the fact that player $i$ has valuation $\tau$ in $x^{(i+1,0)}$ for these $i$, it follows that $\Delta(H^{(i,0)}, H^{(i+1,0)}) < P / \tau$. Combining the above using the triangle inequality and applying the definition of $L$, we deduce that \begin{equation} \label{eq:ahpgoiahsg} \Delta(H^{(1,0)}, H^{(h+ 2\gamma n+1, 0)}) < \frac{\eta n P}{L} + \frac{2\gamma n P}{\tau} \leq 1 - 2\beta \end{equation} For $i \in [n]$, define the open interval on the real line $A(i) = (i-1 -(\eta + \gamma)n, i-1 + \gamma n)$.
Since the sum of the data bits in $x^{(i,0)}$ is exactly $i-1$, in order for $M$ to be $([\eta + \gamma, \gamma], \beta)$-accurate, it is necessary that \begin{equation} \label{eq:asopg} \mathop{\mathrm{Pr}}[H^{(i,0)} \in A(i)] > 1 - \beta \text{ for all } i \in [h+ 2\gamma n + 1] \end{equation} Observe that $A(1)$ and $A(h+ 2\gamma n+1)$ are disjoint. Therefore, \autoref{eq:ahpgoiahsg} implies that $$\mathop{\mathrm{Pr}}[H^{(1,0)} \in A(1)] < \mathop{\mathrm{Pr}}[H^{(h + 2 \gamma n + 1, 0)} \in A(1)] + 1 - 2 \beta$$ By \autoref{eq:asopg} it follows that $\mathop{\mathrm{Pr}}[H^{(h + 2 \gamma n + 1, 0)} \in A(1)] < \beta$ and therefore from the previous inequality we deduce that $\mathop{\mathrm{Pr}}[H^{(1,0)} \in A(1)] < 1 - \beta$. But this contradicts \autoref{eq:asopg}, and therefore it must be the case that $M$ is not $([\eta + \gamma, \gamma], \beta)$-accurate. \end{proof} \begin{remark} A different way to evaluate the accuracy guarantee of our mechanism (the one taken in the work of Ghosh and Roth \cite{GR11}) would be to compare it to the optimal accuracy achievable in the class of all envy-free mechanisms with budget $B$. However, in our context it is not clear how to define envy-freeness: while it is clear what it means for player $i$ to receive player $j$'s payment, it is not at all clear (without making further assumptions) how to define the privacy loss of player $i$ as if he were treated like player $j$, since this loss may depend on the functional relationship between player $i$'s type and the output of the mechanism. Because of this, our mechanism may not be envy-free (for reasonable privacy loss functions), and so we refrain from using envy-free mechanisms as a benchmark. \end{remark} \end{document}
\begin{document} \title{Distributional Shift Adaptation using Domain-Specific Features} \author{ \IEEEauthorblockN{Anique Tahir\IEEEauthorrefmark{1}, Lu Cheng\IEEEauthorrefmark{2}, Ruocheng Guo\IEEEauthorrefmark{3} and Huan Liu\IEEEauthorrefmark{1} } \IEEEauthorblockA{ \textit{\IEEEauthorrefmark{1}Arizona State University}, Tempe, AZ, USA \\ } \IEEEauthorblockA{ \textit{\IEEEauthorrefmark{2}University of Illinois Chicago}, Chicago, IL, USA \\ } \IEEEauthorblockA{ \textit{\IEEEauthorrefmark{3}Bytedance AI Lab}, London, UK \\ } } \maketitle \begin{abstract} Machine learning algorithms typically assume that the training and test samples come from the same distributions, i.e., \textit{in-distribution}. However, in open-world scenarios, streaming big data can be \textit{Out-Of-Distribution (OOD)}, rendering these algorithms ineffective. Prior solutions to the OOD challenge seek to identify \textit{invariant} features across different training domains. The underlying assumption is that these invariant features should also work reasonably well in the unlabeled target domain. By contrast, this work is interested in the \textit{domain-specific} features that include both invariant features and features unique to the target domain. We propose a simple yet effective approach that relies on correlations in general regardless of whether the features are invariant or not. Our approach uses the most confidently predicted samples identified by an OOD base model (teacher model) to train a new model (student model) that effectively adapts to the target domain. Empirical evaluations on benchmark datasets show that the performance is improved over the SOTA by ${\sim}10$-$20\%$\footnote{https://github.com/aniquetahir/SimprovMinimal}. 
\end{abstract} \section{Introduction} Standard machine learning models (i.e., models trained by Empirical Risk Minimization (ERM)~\cite{vapnik1991principles}) rely on a key assumption that the training and test data are independent and identically distributed (i.i.d.), or \textit{in-distribution}. However, in practice, streaming big data can be \textit{out-of-distribution (OOD)}, rendering significant performance degradation of ERM-based models. To overcome this critical OOD challenge, a variety of methods have been proposed, such as the Distributionally Robust Optimization (DRO)~\cite{sagawa2019distributionally}, and Invariant Risk Minimization (IRM)~\cite{arjovsky_invariant_2020}. Most of these methods assume that invariant features for prediction across different training domains can also generalize well to the test domain~\cite{li2018domain, edwards2016towards}. However, a comprehensive comparison of different OOD methods by the authors in~\cite{gulrajani_search_2020, schott2021visual} showed that ERM can outperform such methods across different datasets. One potential explanation is that learning invariant features alone may be insufficient. This work aims to exploit domain-related features to further improve the OOD prediction performance. Take the benchmark dataset CMNIST~\cite{arjovsky_invariant_2020} as an example. In Fig.~\ref{fig:working_example}, we observe that there is a slight difference between the color-label correlation and the digit-label correlation in the training domains. However, the domain-related correlation (color-label) is significantly different between the training and the test domains. This suggests that learning the domain-related features can help predict the label since they capture correlations unique to the test domain. \begin{figure} \caption{CMNIST dataset sample with color-label and digit-label correlations that vary marginally in the training domains. The test domain has different domain-specific correlations. 
GT represents the scale of the digit.} \label{fig:working_example} \end{figure} With the growing popularity of publicly accessible applications and websites, unlabeled data is ubiquitous and contains greater variety, arriving in increasing volumes and with more velocity. Platforms such as Apache Kafka aid in analytics, integrating big data streams. When machine learning models are deployed at scale, the deployment domain might differ from the domains in which the model was trained, especially since the model might be trained on a relatively small quantity of data compared to the stream of big data it encounters in practice. The incoming unlabeled data, however, might help the model adapt by learning from the distribution of features specific to the deployment domain. This work thereby seeks to leverage \textit{domain-specific} features (including both invariant features and domain-related features) to address the OOD challenge. To achieve this, we assume that the unlabeled data from the target domain is available during deployment for adaptive training. We identify three primary challenges. First, the latent representations learned during training are often entangled between the invariant features and domain-related features~\cite{zhang2021towards, higgins2016beta, sagawa2019distributionally}. Disentangling these features in the latent space is a challenging task. It is suggested that well-grounded disentanglement approaches must rely on assumptions about the model or data \cite{locatello2019challenging}. Second, how do we identify features related to the target domain with only the labeled training data and unlabeled target data? We need to design a feedback mechanism to encourage the model to learn domain-related features. Finally, with no access to labels for the target domain, it is difficult to determine the optimization direction when adapting the model to the target domain, i.e., determining whether there are positive correlations or negative correlations.
This highlights the importance of model selection based on the training data. To address these challenges, we propose a simple yet effective approach -- \textit{Simprov} -- that learns domain-specific features for OOD prediction using labeled training data and unlabeled target domain data. In particular, we first identify the high confidence predictions in the target domain by using an OOD base model such as IRM and then use these to train a runtime classifier for the target domain. Our major contributions include: (i) a novel framework that uses domain-specific features for OOD prediction, (ii) an effective model selection criterion for fast adapting the model to the target domain, and (iii) empirical analyses on three benchmark datasets from DomainBed~\cite{ganin2016domain} and WILDS~\cite{wilds1, wilds2}. \section{Related Work} Standard machine learning uses ERM to optimize the objective function. A key assumption is that random variables in the data are \textit{i.i.d}. Thus, in scenarios involving distribution shift, ERM performance degrades significantly~\cite{lazer2014parable, rosenfeld2022online}. OOD methods aim to address the issue by using data from related domains that differ in distributions. One seminal work in OOD is Invariant Risk Minimization (IRM)~\cite{arjovsky_invariant_2020} which aims to identify the invariant features. The hypothesis is that if the model can identify the causes (i.e., the invariant features) of an outcome, then it should perform reasonably well in a new unlabeled domain as it does not rely on spurious correlations. Distributionally Robust Optimization (DRO) family of approaches focuses on the worst-case scenario~\cite{rahimian2019distributionally, hu2018does}: optimizing for the source domain with the greatest loss. Another line of research leverages pseudo-labeling and data augmentation~\cite{sohn2020fixmatch, cubuk2020randaugment} approaches. 
Here, a trained model is used to generate noisy labels for samples in the unlabeled domain; these are combined with the annotated training data, and the resulting semi-pseudo-labeled batch is used to further improve the trained model~\cite{berthelot2019mixmatch}. Noisy student~\cite{xie2020self} incorporates model distillation, where it trains the teacher to generate pseudo labels which are used for training a student model. More recent research~\cite{long2017deep,ganin2016domain} considered Domain Adaptation (DA), where both the labeled training domains and unlabeled test domain(s) are available during training. Our problem setting is slightly different: we optimize prediction performance while DA optimizes the learned representation. Adaptive Risk Minimization (ARM)~\cite{zhang_adaptive_2020} studied the same problem setting as ours by adapting the training model to the target domain using meta-learning to update the model's parameters. We complement prior works by considering the importance of learning domain-specific features for the target domain and removing potential spurious features identified in the training domains, that is, features useful for prediction during training but not for the target domain. \section{Preliminaries} \textbf{Invariant Risk Minimization.} IRM~\cite{arjovsky_invariant_2020} aims to identify the invariant features (often referred to as \textit{causes}) by training over multiple different domains. Thus the loss function of IRM is designed to minimize the per-domain risk, $R^e = \mathbb{E}_{p^{tr}(x,y|e)}[l]$, where $x$ represents the features, $y$ the labels, $e$ the domain, $l$ the loss function (e.g., mean squared error), and $p^{tr}$ is the distribution over the training domains. Formally, let $\Phi$ be the invariant prediction function.
The objective function of IRM, $L$, can then be defined as: \begin{equation} L(\Phi) = \sum_{e \in \mathcal{E}^{tr}} \left[ R^e(\Phi) + \lambda \big\lVert \nabla_{\hat{w}|\hat{w}=1.0} R^e (\hat{w} \circ \Phi) \big\rVert^2 \right], \end{equation} where $\hat{w}$ is a classification model that predicts from the invariant features, $\lambda$ is the regularization parameter, and $\mathcal{E}^{tr}$ is the set of training domains. The second term penalizes large gradients of the per-domain risk, which discourages the model from specializing to any single domain and thereby improves generalizability. \\\textbf{Distillation.} Distillation involves a teacher model and a student model. The teacher model is trained on the original data and the student model then learns from the teacher~\cite{sanh2019distilbert, hinton2015distilling}. In this work, we use offline and response-based distillation~\cite{gou2021knowledge} where the predictions (hard-labels) or logits (soft-labels) of the teacher model are used to train the student model. Formally, let $T$ denote the teacher model and $S$ the student model; the loss function $L$ for offline distillation is: \begin{equation} L = \sum_{i=1}^{n}\alpha\, l(y_i, S(x_i)) + (1-\alpha)\, l(y_i, T(x_i)), \end{equation} where $l$ is the per-sample loss, $\alpha$ weights the two losses~(student and teacher), and $n$ is the number of samples. Here, the teacher and student models are trained independently from each other. \section{Method} In this section, we describe our proposed approach~(\textit{Simprov}) for tackling the OOD challenge. Simprov aims to effectively adapt an ERM-based model to the distribution of the target domain by learning domain-specific features.
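To make the IRM penalty term concrete, the following minimal Python sketch computes an IRMv1-style objective for the special case of a scalar dummy classifier $\hat{w}$ and squared loss, where the gradient with respect to $\hat{w}$ has a closed form. This is an illustrative simplification, not the paper's implementation:

```python
def per_domain_risk(logits, labels):
    # R^e(Phi): mean squared error of the representation's logits.
    return sum((p - y) ** 2 for p, y in zip(logits, labels)) / len(labels)

def irm_penalty(logits, labels):
    # Squared gradient of R^e(w * Phi) with respect to the scalar dummy
    # classifier w, evaluated at w = 1:
    #   d/dw mean((w*p - y)^2) |_{w=1} = mean(2 * (p - y) * p).
    grad = sum(2.0 * (p - y) * p for p, y in zip(logits, labels)) / len(labels)
    return grad ** 2

def irm_objective(domains, lam):
    # Sum over domains of risk plus the weighted invariance penalty.
    return sum(per_domain_risk(p, y) + lam * irm_penalty(p, y)
               for p, y in domains)
```

A domain whose logits already match its labels contributes neither risk nor penalty, while a domain that could still reduce its risk by rescaling the classifier incurs a penalty proportional to that gradient.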
We first formally define the problem setting as follows: \begin{definition} \label{def:arm_setting} Let $\mathcal{E}_{all}$ be the set of all possible domains, $\mathcal{E}_{tr}$ the set of training domains, and $\mathcal{E}_{te}$ the set containing the target domain. Given training samples $x_{tr} \in \mathcal{X}$ of an input random variable $X$, $y_{tr} \in \mathcal{Y}$ of a target random variable $Y$, $z \in \mathcal{Z}$ of an input domain random variable $Z$, and $x_{te} \in \mathcal{X}$ of $X$, the goal is to learn a function $f: \mathcal{X} \rightarrow \mathcal{Y}$ representing $P(Y|X, e_{te})$, given $P(X, Y|e_{te}) \neq P(X,Y|e_{tr})$, where $e_{te} \in \mathcal{E}_{te}$ and $e_{tr} \in \mathcal{E}_{tr}$. \end{definition} \begin{figure} \caption{An overview of Simprov. It leverages invariant features (Teacher), distillation, and a model selection criterion (Improvisation) to enhance performance on the target domain. The teacher model (such as IRM) learns invariant features and identifies target samples predicted with high confidence. Pseudo-labels (PL) with dropout are used to estimate the confidence. The student model (i.e., an ERM-based model) is trained over these selected samples to make predictions in the target domain. Since the pseudo-labels are generated without prior knowledge of the target domain, training over them requires a positive feedback loop between the teacher and student formed by the combination of Improvisation (1) and Self-Distillation (2).} \label{fig:architecture} \end{figure} \raggedbottom \sloppy An overview of our approach is highlighted in Fig.~\ref{fig:architecture}. Simprov primarily consists of three parts: Pseudo-Labeling, Self-Distillation, and Model Selection (Improvisation). Logically, the structure of the prediction model consists of a representation learning module $\Phi: \mathcal{X} \rightarrow \mathcal{H}$ and a classifier $\hat{w}: \mathcal{H} \rightarrow \mathcal{Y}$, where $\mathcal{H}$ is the representation space.
We aim to learn the function $f: \mathcal{X} \rightarrow \mathcal{Y} \sim P(Y|X, e_{te})$. Note that the difference between our problem setting and Domain Adaptation is that the objective of the latter is to learn the invariant representation function $\Phi$ s.t. $P(\Phi(X)|X, e_{te}) = P(\Phi(X)|X, e_{tr})$. \subsection{Pseudo-Labeling} Simprov's learning is initiated by pseudo-labeling the target data. It then relies on a positive feedback loop for learning about the target distribution. Simprov first identifies the subset of the high confidence target predictions using a base model for OOD generalization. The intuition is that the predictions with the highest confidence are the most accurate since the confidence represents the reliance on invariant features for the predictions. Simprov then uses these predictions with high confidence as pseudo-labels to start a feedback loop. Particularly, we use a trained OOD base model such as IRM to pseudo-label target data with prediction confidence values generated using Monte Carlo (MC) dropout for uncertainty estimation~\cite{gal2016dropout}. Specifically, we repeatedly perform label inference while changing the dropout mask for the same batch of target data. The variance between the inferences then determines the confidence in the predictions. Formally, let $\mathcal{C}=\{1, 2, ..., k\}$ be the set of $k$ classes, $d$ the dropout probability, $m$ the number of confirmations for the pseudo-labeling process, and $f$ the labeling function parameterized by $\theta$. The pseudo-label $\Tilde{l}_{j,i}$ of the $i$-th inference for target sample $j$ can be obtained by: \begin{equation} \Tilde{l}_{j,i} = f_\theta(x_j, d_{j,i})\quad \forall i \in \{1, 2, ..., m\}, x_j \in \mathcal{E}_{te}. \end{equation} \sloppy Let $\ell_j$ be the set of all inferred pseudo labels of $j$, i.e., $\ell_j = \{\tilde{l}_{j, 1}, \tilde{l}_{j, 2}, ..., \tilde{l}_{j, m}\}$.
A simple majority voting strategy is used to infer the final pseudo-label $\Tilde{y}_j$: \begin{equation} \label{eq:mode} \Tilde{y}_j = \argmax_{c \in \mathcal{C}} \sum_{a \in \ell_j} \mathbbm{1}(a = c), \end{equation} where $\mathbbm{1}$ is the indicator function. Finally, the confidence score $\kappa_j$ of $\Tilde{y}_j$ is defined as $\kappa_j = - \text{Var} (\ell_j)$, i.e., the negative of the variance of $\ell_j$ (the lower the variance, the higher the confidence). \subsection{Self-Distillation} Given the high-confidence target samples predicted by an OOD base model, Simprov trains an ERM-based student model with dropouts over the target distribution. To further improve the quality of the pseudo-labels, it creates a positive feedback loop in which it re-trains the student model using the previous student model as the teacher. At the end of the feedback loop, Simprov has learned domain-specific features in the target domain. At $t=0$, $f_{\theta_0}$ is the function learned by the base model (e.g., IRM on training domains). It encourages Simprov to predict using invariant features. $f_{\theta_0}$ is then used to infer pseudo-labels for the target data. Next, we train the student model $f_{\theta_1}$ on the target data using the pseudo-labels. We then update the pseudo-labels for the target data using the re-trained $f_{\theta_1}$, and iterate. This iterative process improves the student model toward positive feedback, as judged by the model selection criterion detailed below. \subsection{Random Chance-based Model Selection} Although the self-distillation process can help improve the quality of the pseudo-labels, it might turn into a negative feedback loop, as the correct direction of the feedback loop is unknown: incorrect pseudo-labels will only reinforce the teacher's inconsistencies. \sloppy To address this challenge, we propose to use the student model's predictions on the training domains to steer the direction of the feedback loop in the self-distillation process. The base model learns invariant features in the training domains.
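The pseudo-labeling step above (repeated MC-dropout inferences, the majority vote of Eq.~\eqref{eq:mode}, and the negative-variance confidence $\kappa_j$) can be sketched in a few lines. This is an illustrative sketch, not the actual implementation: `toy_predict` is a hypothetical stand-in for the dropout-equipped teacher $f_\theta$.

```python
import numpy as np

def pseudo_label(predict, x, m=25, seed=0):
    """Sketch of the pseudo-labeling step: m MC-dropout inferences,
    a majority vote per sample, and confidence kappa_j = -Var(ell_j)."""
    rng = np.random.default_rng(seed)
    # m stochastic forward passes, one fresh dropout mask each
    votes = np.stack([predict(x, rng) for _ in range(m)])  # shape (m, n_samples)
    labels = np.array([np.bincount(col).argmax() for col in votes.T])
    confidence = -votes.var(axis=0)  # closer to 0 means more confident
    return labels, confidence

# Hypothetical teacher: sample 0 is predicted consistently (class 0),
# sample 1 is predicted noisily, so sample 0 should come out more confident.
def toy_predict(x, rng):
    flip = rng.random(len(x)) < np.array([0.0, 0.4])
    return x ^ flip.astype(int)

x = np.array([0, 1])
labels, conf = pseudo_label(toy_predict, x)
```

In Simprov, only the samples whose $\kappa_j$ exceeds a confidence cutoff would then be passed to the student.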
However, due to issues such as sufficiency~\cite{lee2017sufficiency}, it may also learn some spurious features. Between training and target domains, these spurious features ($X_{spur}$) may be (i) positively correlated, i.e., $P(Y|X_{spur}, e_{te}) \propto P(Y|X_{spur}, e_{tr})$, (ii) negatively correlated, i.e., $P(Y|X_{spur}, e_{te}) \propto \frac{1}{P(Y|X_{spur}, e_{tr})}$, or (iii) independent. By definition, $P(Y|X_{inv},e_{tr})=P(Y|X_{inv},e_{te})$, where $X_{inv}$ represents the invariant latent features. If the spurious correlations do not flip between the training and target distributions~(cases (i) and (iii)), then a model trained on the target distribution works similarly on the training distribution. Otherwise~(case (ii)), the model would give an accuracy that is lower than random chance on $\mathcal{E}_{tr}$. We propose a new metric $d^t_{rand}$ to help identify the direction of the self-distillation feedback loop: the difference between the training prediction accuracy and the random chance of a model trained on target pseudo-labels. Formally, the model selection metric is defined as: \begin{equation} d^t_{rand} = \Big\lvert f_{\theta_t}(x) - \frac{1}{k}\Big\rvert \ ,\quad x \in \mathcal{E}_{tr}, \end{equation} where $f_{\theta_t}(x)$ denotes the prediction accuracy of the student model over training-domain samples $x$, and $1/k$ is the random-chance accuracy for $k$ classes. If $d^t_{rand}$ is greater than $d^{t-1}_{rand}$, it indicates that the model has learned informative features. Thus, during self-distillation, Simprov only replaces the teacher model at $t-1$ with the student model when this metric increases. This ensures that there is information gain from the target distribution to de-noise the pseudo-labels, i.e., the model is learning the domain-specific features, including both the domain-relevant and invariant features in the target domain. \section{Experiments} We aim to answer the following research questions in the experiments: \textbf{RQ. 1} Can Simprov outperform SOTA for OOD over different datasets? \textbf{RQ. 2} How effective is the proposed model selection criterion? \textbf{RQ.
3} How sensitive is Simprov to different values of hyperparameters? \begin{table} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{llll} \cline{2-4} & \textbf{CMNIST} & \textbf{Camelyon17} & \textbf{Waterbirds} \\ \cline{2-4} IRM & {\ul 67.1 (2.5)} & 64.2 (8.1) & 75.3 (0.6) \\ Group DRO & 38.7 (1.8) & 68.4 (7.3) & 91.4 (0.3) \\ \hline DANN & 51.5 (0.3) & 68.4 (9.2) & 77.8 (0.0) \\ ARM & 56.2 (0.2) & 87.2 (0.9) & {\ul 94.1 (0.0)} \\ Pseudolabel & 42.9 (1.1) & 67.7 (8.2) & 74.2 (8.0) \\ NoisyStudent & 27.1 (3.8) & 86.7 (1.7) & 22.2 (0.0) \\ \hline \textbf{Simprov-IRM (Ours)} & \textbf{89.8 (0.1)} & \textbf{92.8 (6.2)} & 81.6 (8.1) \\ \textbf{Simprov-DRO (Ours)} & 12.3 (0.0) & {\ul 87.7 (3.5)} & \textbf{95.0 (3.0)} \end{tabular} } \caption{Average accuracy and standard deviations (in parentheses) over five trials of different methods under three benchmark datasets.} \label{tbl:accuracies} \end{table} \subsection{Experimental Setup} \label{sec:setup} Our implementation extends the boilerplate provided by Stanford's WILDS benchmark repository~\cite{wilds2}. \textit{Datasets.} We use three benchmark datasets with different classification tasks. (i) CMNIST~\cite{arjovsky_invariant_2020} contains images of digits colored either green or red. The label is `1' if the digit is less than five; otherwise it is `0'. (ii) Camelyon17-Wilds~\cite{bandi2018detection} is related to tumor detection. (iii) Waterbirds~\cite{sagawa2019distributionally} aims to classify images of landbirds and waterbirds with land or water backgrounds. For the model architecture, we followed the default setting of WILDS~\cite{wilds1}. \textit{Baselines}. We compare Simprov with two popular OOD models (i.e., IRM~\cite{arjovsky_invariant_2020} and Group DRO~\cite{rahimian2019distributionally}) and four SOTA domain-adaptation models (i.e., DANN~\cite{ganin2016domain}, ARM~\cite{zhang_adaptive_2020}, Pseudolabel~\cite{lee2013pseudo}, and NoisyStudent~\cite{xie2020self}).
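To make the selection rule of the previous section concrete, here is a minimal sketch of the random-chance metric $d^t_{rand}$ and the teacher-replacement test. The function names are illustrative (they are not from the Simprov implementation), and `train_accuracy` stands for the student's prediction accuracy on training-domain samples.

```python
def d_rand(train_accuracy, k):
    """Distance of training-domain accuracy from random chance 1/k,
    for a student trained on target pseudo-labels."""
    return abs(train_accuracy - 1.0 / k)

def replace_teacher(student_acc, teacher_acc, k):
    """Keep the student only when the metric increases, i.e., when the
    student has gained information about the target domain."""
    return d_rand(student_acc, k) > d_rand(teacher_acc, k)

# With k = 2 classes: a student at random chance (accuracy 0.5) carries no
# signal, while one that is confidently wrong (accuracy 0.1) is still
# informative -- this matches the negatively correlated case (ii).
```

Note that the metric is symmetric around $1/k$, which is exactly why a below-chance student (case (ii)) is retained rather than discarded.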
IRM and Group DRO aim to learn invariant features across domains. DANN, ARM, Pseudolabel, and NoisyStudent employ techniques to ensure that the distributions of learned representations are aligned across domains. Using IRM and DRO base models leads to two versions of Simprov: \textit{Simprov-IRM} and \textit{Simprov-DRO}. \subsection{Results} \label{sec:results} We report the mean and standard deviation of the accuracy on the target domain over five trials of the selected models. We present the results in Table~\ref{tbl:accuracies}. We used the same train/test splits (i.e., the hardest case) for CMNIST as in~\cite{arjovsky_invariant_2020}, unlike most other implementations, which report results over a combination of splits. The best results are in bold font and the second-best ones are underlined. We make the following observations answering \textbf{RQ1}: \begin{itemize}[leftmargin=*] \item Simprov mostly outperforms the corresponding base models across different tasks (e.g., Simprov-IRM outperforms IRM for CMNIST), indicating that learning domain-specific features is critical for achieving high accuracy in OOD tasks. Simprov improves accuracy by ${\sim}20\%$ on the hardest dataset~(CMNIST) as it optimizes the feature representation using target domain data. \item Simprov reinforces the feature correlations learned in the base models. This is supported by the observation that when the base models perform relatively well (e.g., on Camelyon17 and Waterbirds), Simprov can improve the prediction performance by a large margin; however, its performance degrades significantly if the base models fail (e.g., Group DRO for CMNIST). This further implies that learning invariant features is necessary for the OOD challenge. \item Compared to the SOTA models for domain adaptation, our models consistently achieve the best performance. For example, Simprov has a ${\sim}10\%$ improvement on average over the three datasets compared to ARM. There are two reasons for this improvement.
First, by using only the pseudo-labels predicted by the OOD base models, rather than their latent features, for the training data during distillation, Simprov does not rely on the strong feedback regarding the training domains while retaining feedback for the target domain via backpropagation. Second, our model selection strategy leads the training process in a direction of information gain, i.e., when the random-chance difference is large, Simprov has high information about the target domain, leading to comparatively better performance. \end{itemize} We perform further analysis to answer \textbf{RQ2} and \textbf{RQ3}. Fig.~\ref{fig:model_selection}(a) shows that increasing the depth (i.e., the number of distillation iterations) of the self-distillation process generally helps Simprov learn better domain-specific features in the target domain. We believe this is in part due to the feedback loop created during training on the target data. Fig.~\ref{fig:model_selection}(b) shows that the proposed model selection strategy is effective. When the random-chance difference is low, there is high variation in the accuracy of the models on the target domain. This is because the closer the model's performance is to random-chance accuracy on the training data, the less Simprov knows about the domain-specific features. By contrast, at a higher random-chance difference, Simprov presents smaller variations and higher accuracy on the target domain. \begin{figure} \caption{(a) Effects of distillation depth on the accuracy, and (b) accuracy changes relative to our model selection metric.} \label{fig:model_selection} \end{figure} \section{Conclusion} Our approach~(Simprov) leveraged both labeled training data and target data to learn domain-specific features, guided by an effective model selection criterion. We showed that our method can outperform SOTA over three benchmark datasets. We draw two main conclusions.
First, our approach relies on the invariant features learned by OOD models from prior work. Second, our approach does not find \textit{purely} invariant features in the data; it learns domain-specific features instead. We leave addressing these limitations to future work. \end{document}
\begin{document} \title{Expressibility of the alternating layered ansatz for quantum computation} \author{Kouhei Nakaji and Naoki Yamamoto} \affil{Department of Applied Physics and Physico-Informatics \& Quantum Computing Center, Keio University, Hiyoshi 3-14-1, Kohoku, Yokohama, 223-8522, Japan} \maketitle \abstract{ The hybrid quantum-classical algorithm is actively examined as a technique applicable even to intermediate-scale quantum computers. To execute this algorithm, the hardware efficient ansatz is often used, thanks to its implementability and expressibility; however, this ansatz has a critical issue in its trainability, in the sense that it generically suffers from the so-called vanishing gradient problem. This issue can be resolved by limiting the circuit to the class of shallow alternating layered ansatz. However, even though the high trainability of this ansatz is proved, it is still unclear whether it has rich expressibility in state generation. In this paper, with a proper definition of the expressibility found in the literature, we show that the shallow alternating layered ansatz has almost the same level of expressibility as that of the hardware efficient ansatz. Hence the expressibility and the trainability can coexist, giving a new design method for quantum circuits in the intermediate-scale quantum computing era. } \section{Introduction} Recent rapid progress in the hardware for quantum computing stimulates researchers to develop new techniques that utilize even noisy intermediate-scale quantum (NISQ) \cite{NISQ} devices for real applications such as machine learning and quantum chemistry. In particular, several hybrid quantum-classical algorithms have been actively examined as a means to reduce the computational cost required on the quantum computing part.
The variational quantum eigensolver (VQE) \cite{vqe}, which trains the parameterized quantum circuit via classical computation to decrease a given cost function, is a typical example of such a hybrid algorithm. The critical point in this strategy lies in the difficulty of designing a suitable and implementable circuit ansatz that, after the training process, may produce an exact or well-approximating solution of a given problem. The hardware efficient ansatz (HEA) \cite{hardware-efficient-ansatz} is often used, mainly because of its implementability on hardware; this is a relatively shallow circuit ansatz, whose parameters are embedded in the angles of single-qubit rotation gates. However, it was proven in \cite{plateau} that, when those parameters are randomly chosen, the gradient of a standard cost function vanishes, meaning that the update of the cost function often gets stuck in the learning process before reaching the minimum. For solving this vanishing gradient problem, several approaches have been proposed. Reference \cite{plateau-initialization} showed numerically that, with a special type of initializing method of the HEA, the vanishing gradient problem does not occur in an ansatz with ${\cal O}(1)$ qubits. Also, a quantum analogue of the natural gradient has been proposed in \cite{Carleo2019,Yamamoto2019}, the original classical version of which is often used to avoid similar vanishing gradient problems in neural networks. In this paper, we focus on the third approach, given by Ref.~\cite{local-cost-function}, which provides a method for devising a specific structure of the HEA, called the Alternating Layered Ansatz (ALT), which provably does not suffer from the vanishing gradient problem.
By definition, the class of ALT is included in that of HEA; the difference is that, while a HEA consists of multiple layers of single-qubit rotation gates and entanglers that in principle combine {\it all} qubits in each layer, the entangling gates contained in an ALT are restricted to entangle only {\it local} qubits in each layer. With this setting, the authors in \cite{local-cost-function} derived a strict lower bound on the variance of the gradient for an ALT with its parameters randomly chosen (more precisely, the ensemble of unitary matrices corresponding to each circuit block is a 2-design), under the condition that the cost function is local (that is, the cost function is composed of local functions of only a small number of local qubits). By using this lower bound, it was also shown that the vanishing gradient problem can be resolved if the number of layers is of the order $O({\rm poly}(\log n))$, where $n$ is the total number of qubits, or roughly speaking, if the circuit is shallow. Then an important question arises; does ALT have sufficient expressive power (expressibility) for generating a rich class of states, which contains the optimal or a well-approximating state? Because the set of ALTs is a subclass of that of HEAs, one might argue that the expressibility of ALT could be much lower than that of the HEA; if this were the case, the ALT circuit might not generate a desired state even though the learning process is smoothly running. Thus, it is worth examining the expressibility of ALT in order to assess the practicality of this ansatz in executing the hybrid quantum-classical algorithm. In this paper, we study this problem using the expressibility measure introduced in \cite{expressibility} and show that, fortunately, the class of shallow ALTs has the same level of expressibility as that of HEAs.
Therefore, the expressibility and the trainability of a quantum circuit can coexist, which means that an existing HEA found in the literature can basically be replaced with a simpler ALT without degradation of expressibility, while acquiring better trainability. That is, the ALT might be taken as a new standard ansatz in the NISQ computing era. The structure of this paper is as follows. Section~\ref{preliminary} is the preliminary, giving the definition of expressibility and the ansatzes. In Sec.~\ref{evaluation}, we show both theoretically and numerically that the expressibility of ALT is as high as that of HEA. In Sec.~\ref{vqe-section}, we show how the value of expressibility is reflected in the result of VQE. Finally, we conclude with some remarks in Sec.~\ref{conclusion}. \section{Preliminaries} \label{preliminary} In this section, we define indicators of the expressibility and introduce some circuit ansatzes. \subsection{Indicators of the expressibility} Following Ref.~\cite{expressibility}, we define the expressibility of a given circuit by the randomness of states generated from the circuit, in terms of the frame potential and the Kullback-Leibler (KL) divergence. \subsubsection*{Frame Potential} To define the expressibility of a given circuit ansatz $C$, let us consider the deviation of the state distribution generated by $C$ from the Haar distribution, as follows: \[ {\mathcal A}^{(t)}(C) = \left\| \int_{\rm Haar} (|\psi\rangle\langle\psi|)^{\otimes t} d\psi - \int_{\Theta} (|\psi_{\theta}\rangle\langle\psi_{\theta}|)^{\otimes t} d\theta \right\|_{HS}, \] where $\int_{\rm Haar}$ denotes the integration over the state $|\psi\rangle$ distributed with respect to the Haar measure, and $\| \cdot \|_{HS}$ is the Hilbert-Schmidt distance.
Also, $|\psi_{\theta}\rangle$ is the state generated by the ansatz $C$ characterized by the parameter $\theta\in\Theta$, e.g., $|\psi_{\theta}\rangle=U_C(\theta)|0\rangle, \theta\in\Theta$, where $U_C(\theta)$ is the unitary operator corresponding to $C$ and $|0\rangle$ is an initial state. We then say that the ansatz $C$ with smaller ${\mathcal A}^{(t)}(C)$ has a higher expressibility. This definition is justified for the following reason: because the state $|\psi\rangle$ generated from the Haar distribution can in principle represent an arbitrary state, the condition ${\mathcal A}^{(t)}(C)\approx 0$ implies that the ansatz $C$ can generate almost all states, possibly including the optimal solution (e.g., the ground state in VQE); also, in this case the states generated from $C$ are almost equally distributed, which is particularly favorable if little is known about the problem. To compute ${\mathcal A}^{(t)}(C)$, we instead focus on the following $t$-th generalized frame potential \cite{frame-potential-2} of $C$: \begin{flalign} \label{frame-potential} \f{t}{C} = \int_{\Phi} \int_{\Theta} |\langle \psi_{\phi}|\psi_{\theta}\rangle|^{2t} d\phi d\theta, \end{flalign} where both $\Theta$ and $\Phi$ represent the same set of parameters of $C$. In the present paper, we simply call it the $t$-th frame potential. In particular, the $t$-th frame potential of $N$-dimensional states distributed with respect to the Haar measure is given by $\fhaar{t} = \int_{\rm Haar}\int_{\rm Haar}|\langle\psi|\psi^{\prime}\rangle|^{2t} d\psi d\psi^{\prime}$. The point of introducing the frame potential is that these quantities are linked to ${\mathcal A}^{(t)}(C)$ in the following form; that is, for an arbitrary positive integer $t$, it holds that \begin{flalign} \label{frame-potential ineq} \f{t}{C} - \fhaar{t} = \big({\mathcal A}^{(t)}(C)\big)^2 \geq 0.
\end{flalign} The equality in the last inequality holds if and only if the ensemble of $|\psi_{\theta}\rangle$ is a state $t$-design \cite{frame-potential-t-design-1,frame-potential-t-design-2,frame-potential-t-design-3}. Thus, the ansatz $C$ with smaller $\f{t}{C}$ has a higher expressibility. Also, $\f{t}{C}$ is lower bounded by $\fhaar{t}$, meaning that the frame potential can be used as an indicator for quantifying the non-uniformity of the state distribution. In Sec.~\ref{moment}, we calculate $\f{1}{C}$ and $\f{2}{C}$ for several ansatzes $C$. \subsubsection*{KL-Divergence} Note that the frame potential \eqref{frame-potential} is the $t$-th moment of the fidelity $F = |\langle \psi_{\theta}|\psi_{\phi}\rangle|^2$, where the circuit parameters $\theta\in\Theta$ and $\phi\in\Phi$ are randomly sampled from the circuit ansatz $C$. Hence the probability distribution of $F$, denoted by $\pf{C}$, contains more information for quantifying the randomness of $C$ than $\f{t}{C}$, and thereby the following measure can be used to quantify the expressibility of $C$: \begin{flalign} \expr(C)=D_{\mathrm{KL}}\left(\pf{C} \| P_{\mathrm{Haar}}(F)\right) = \int_0^1 \pf{C} \log \frac{\pf{C}}{P_{\mathrm{Haar}}(F)} dF, \end{flalign} where $D_{\mathrm{KL}} (q \| p)$ is the KL divergence between $q$ and $p$. Also, $P_{\rm Haar}(F)$ is the probability distribution of the fidelity $F=|\langle\psi|\psi^{\prime}\rangle|^2$, where $|\psi\rangle$ and $|\psi^{\prime}\rangle$ are sampled according to the Haar measure. In Ref.~\cite{average-fidelity}, $ P_{\mathrm {Haar}} (F) = (N-1) (1-F)^{N-2}$ is derived, where $ N $ is the dimension of the Hilbert space. Because in general $D_{\mathrm{KL}} (q \| p) =0$ iff $q=p$, the ansatz $C$ with a smaller value of $\expr(C)$ has a higher expressibility. Thus, $\expr(C)$ can also be used as an indicator for quantifying the non-uniformity of an ansatz. Lastly, recall that the $t$-th moments of $\pf{C}$ and $P_{\rm Haar}(F)$ are $\f{t}{C}$ and $\fhaar{t}$, respectively.
Thus, if the value of $\f{t}{C}$ is close to $\fhaar{t}$, the value of $\expr(C)$ is close to zero. \subsection{Ansatzes} \label{ansatzes} Here we describe the three types of ansatzes investigated in this paper. \subsubsection*{Hardware Efficient Ansatz} The HEA circuit consists of multiple layers of parametrized single-qubit gates and entanglers which entangle all qubits. In the following, let $\hea$ be the class of HEA with $\ell$ layers, where each layer contains $n$ qubits. \subsubsection*{Alternating Layered Ansatz} The ALT introduced in \cite{local-cost-function} also consists of multiple layers, but the components of each layer are different from those of the HEA, as follows. That is, each layer has some separated blocks, where each block has parametrized single-qubit rotation gates and fixed entanglers that entangle all qubits inside the block. The probability distributions of those angle parameters are independent in all blocks and all layers. In this paper, we further restrict the class of ALT as follows. First, as in HEA, the entire circuit is composed of $\ell$ layers, where each layer contains $n$ qubits. Then, we assume that, in the odd-number-labeled layers, each block contains $m$ qubits, where $m$ is an even number and $n/m$ is an integer. In other words, the odd-number-labeled layers contain $n/m$ blocks, which operate on the qubit subsets $\{1, \ldots, m\}$, $\{m+1, \ldots, 2m\}$, ..., and $\{n-m + 1, \ldots, n\}$. As for the even-number-labeled layers, they contain $n/m + 1$ blocks, which operate on $\{1,\ldots, m/2\}$, $\{m/2 + 1,\ldots, 3m/2\}$, ..., and $\{n-m/2 + 1, \ldots, n\}$; that is, the first and the last blocks operate on $m/2$ qubits, while the others operate on $m$ qubits. In the following, we use $\alt$ to denote the class of ALT with the above-defined indices.
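The layer structure just described can be made concrete with a small helper that lists the qubit indices of each block (0-indexed here). This is an illustrative sketch under the assumptions stated above ($m$ even, $n/m$ an integer), not code from Ref.~\cite{local-cost-function}.

```python
def alt_blocks(n, m, layer):
    """Qubit blocks of one ALT layer: odd-numbered layers have n/m
    blocks of m qubits; even-numbered layers have n/m + 1 blocks,
    shifted by m/2, with half-size blocks at both ends."""
    assert m % 2 == 0 and n % m == 0
    if layer % 2 == 1:  # odd-number-labeled layer
        return [list(range(s, s + m)) for s in range(0, n, m)]
    blocks = [list(range(0, m // 2))]                                 # first half-block
    blocks += [list(range(s, s + m)) for s in range(m // 2, n - m // 2, m)]
    blocks.append(list(range(n - m // 2, n)))                         # last half-block
    return blocks

# n = 8, m = 4 as in Fig. 1: odd layers give {0..3}, {4..7};
# even layers give {0,1}, {2..5}, {6,7}.
```

The half-shifted even layers are what let entanglement spread across block boundaries as the depth $\ell$ grows.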
In \cite{local-cost-function}, it is proved that the vanishing gradient problem can be avoided if the following conditions are satisfied: (i) each term $H_k$ of the cost Hamiltonian $H=\sum_k H_k$ is local, meaning that it acts on fewer than $m$ neighboring qubits, (ii) the ensemble of unitary matrices in each block is a 2-design, and (iii) the number of layers, $\ell$, is of the order $O({\rm poly}(\log n))$. Note that each block needs to be deep enough for the distribution of the corresponding unitary matrices to be close to a 2-design. \subsubsection*{Tensor Product Ansatz} In addition to HEA and ALT, we here introduce the class of tensor product ansatz (TEN) as a relatively weak ansatz. This ansatz also consists of $\ell$ layers, where each layer contains $n$ qubits; each layer contains $n/m$ blocks ($n/m$ is assumed to be an integer), which contain single-qubit rotation gates and entanglers combining all qubits in the block. Throughout all the layers, the blocks operate on the qubit subsets $\{1, \ldots, m\}$, $\{m+1, \ldots, 2m\}$, ..., and $\{n-m + 1, \ldots, n\}$. Thus, TEN always generates a product state of the form $|\psi_1\rangle \otimes \cdots \otimes |\psi_{n/m}\rangle$, where each state is composed of $m$ qubits. Let $\ten$ be the class of TEN described above. \subsubsection*{} In Fig.~\ref{fig-ansatzes}, we show examples of the structures of these ansatzes. \figmedium{vqe-expressibility-medium.001.png} {Examples of TEN, ALT, and HEA introduced in Section~\ref{ansatzes}. The upper figures are the overall structures of the ansatzes in the case $n=8$ and $\ell=3$ (and $m=4$ for TEN and ALT). The circuits inside the blocks are exemplified in the lower figure, where $\sigma_{a_i}$ ($a_i \in \{x, y, z\}$) is the Pauli matrix operating on a qubit and $\theta_i \in [0, 2\pi]$ is the parameter.
} {fig-ansatzes} \section{Expressibility of the circuit ansatzes} \label{evaluation} In this section, we give analytical expressions as well as upper bounds of the first and second frame potentials of the three ansatzes introduced in Section \ref{ansatzes}, showing that the shallow ALT has almost the same expressibility as that of HEA. This result will be further confirmed by a numerical simulation in terms of the KL-divergence. \subsection{Analytical expression of the frame potential of the ansatzes} \label{moment} First of all, to compute the frame potentials of each ansatz, we assume that the ensemble of the unitary matrices corresponding to $\hea$ is a 2-design. Similarly, the ensembles of the unitary matrices corresponding to each block of $\alt$ and $\ten$ are assumed to be 2-designs. These assumptions are also adopted in the discussion of \cite{plateau} and \cite{local-cost-function}. Such an ensemble of unitary matrices can be generated by randomly choosing the parameters of a circuit having a specific structure. Before going into the detail, we show some integration formulae for random unitary matrices \cite{haar-integral}. First, if the ensemble of $2^n\times 2^n$ unitary matrices $\{ U \}$ is a 1-design, the following formula holds: \begin{flalign} \label{one-design} \intone dU U_{i j} U_{m k}^{*}=\frac{\delta_{i m} \delta_{j k}}{2^n}&, \end{flalign} where $\intone dU$ is the integral over the 1-design ensemble of the unitary matrices.
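As a quick sanity check of the first formula, one can sample Haar-random unitaries numerically (via the QR decomposition of a complex Ginibre matrix, with the standard phase correction) and verify that the second moments match $\delta_{im}\delta_{jk}/d$, where $d$ denotes the matrix dimension. The snippet below is an illustrative check, not part of the derivation.

```python
import numpy as np

def haar_unitary(d, rng):
    """Haar-random d x d unitary: QR of a complex Ginibre matrix,
    with column phases fixed so the distribution is exactly Haar."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * phases  # rescale each column by a unit phase

rng = np.random.default_rng(1)
d, samples = 2, 20000
m_diag = m_off = 0.0
for _ in range(samples):
    u = haar_unitary(d, rng)
    m_diag += (u[0, 0] * np.conj(u[0, 0])).real  # (i,j) = (m,k): expect 1/d
    m_off  += (u[0, 0] * np.conj(u[1, 1])).real  # i != m: expect 0
m_diag /= samples
m_off  /= samples
```

With $2 \times 10^4$ samples, the diagonal moment concentrates near $1/d$ and the off-diagonal moment near $0$, up to Monte Carlo noise.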
Second, if the ensemble of $2^n\times 2^n$ unitary matrices $\{ U \}$ is a 2-design, the following formula holds: \begin{flalign} \label{2-design} \inttwo dU U_{i_1 j_1}U_{i_2 j_2} U_{i_1^{\prime}j_1^{\prime}}^{\ast}U_{i_2^{\prime}j_2^{\prime}}^{\ast} = \sum_{k=1}^4 \lambda_{k}^{(n)} \Delta^k_{i_1 j_1 i_2 j_2 i_1^{\prime}j_1^{\prime} i_2^{\prime}j_2^{\prime}}, \end{flalign} where \begin{flalign} \lambda_{1}^{(n)} = \lambda_{2}^{(n)} = \frac{1}{2^{2n}-1} &, ~~ \lambda_{3}^{(n)} = \lambda_{4}^{(n)} = -\frac{1}{(2^{2n}-1)2^n}, \nonumber \\ \Delta^1_{i_1 j_1 i_2 j_2 i_1^{\prime}j_1^{\prime} i_2^{\prime}j_2^{\prime}} &= \delta_{i_1i_1^{\prime}}\delta_{j_1j_1^{\prime}}\delta_{i_2i_2^{\prime}}\delta_{j_2j_2^{\prime}}, \nonumber\\ \Delta^2_{i_1 j_1 i_2 j_2 i_1^{\prime}j_1^{\prime} i_2^{\prime}j_2^{\prime}} &= \delta_{i_1i_2^{\prime}}\delta_{j_1j_2^{\prime}}\delta_{i_2i_1^{\prime}}\delta_{j_2j_1^{\prime}}, \nonumber\\ \Delta^3_{i_1 j_1 i_2 j_2 i_1^{\prime}j_1^{\prime} i_2^{\prime}j_2^{\prime}} &= \delta_{i_1i_1^{\prime}}\delta_{j_1j_2^{\prime}}\delta_{i_2i_2^{\prime}}\delta_{j_2j_1^{\prime}}, \nonumber\\ \Delta^4_{i_1 j_1 i_2 j_2 i_1^{\prime}j_1^{\prime} i_2^{\prime}j_2^{\prime}} &= \delta_{i_1i_2^{\prime}}\delta_{j_1j_1^{\prime}}\delta_{i_2i_1^{\prime}}\delta_{j_2j_2^{\prime}}, \end{flalign} and $\inttwo dU$ is the integral over the 2-design ensemble of the unitary matrices. These formulae are effectively used to derive the theorems shown below. \subsubsection*{The First Frame Potential} For the first frame potentials, the following theorem holds. \begin{th.} \label{first-moment-1} If the ensemble of the unitary matrices corresponding to $\hea$ and the ensembles of the unitary matrices corresponding to each block of $\alt$ and $\ten$ are 2-designs, then the following equalities hold: \begin{flalign} \label{first-moment-formula} \f{1}{\hea} = \f{1}{\alt} = \f{1}{\ten} = \fhaarn{1}.
\end{flalign} \end{th.} The equality $\f{1}{\hea} = \fhaarn{1}$ can be readily proved from the assumption of the theorem, because, if the ensemble of the unitary matrices corresponding to the circuit is a 2-design, the ensemble of the states generated by the circuit is a state 2-design (and therefore a state 1-design). For the other equalities, the proof is given in the Appendix. Note that, accordingly, the ensembles of the states generated by $\hea$, $\alt$, and $\ten$ are all 1-designs. \subsubsection*{The Second Frame Potential} The second frame potential of the Haar random circuits, $\fhaarn{2}$, can be computed as \begin{flalign} \qquad \fhaarn{2} = \int_0^{1}dF F^2(2^n - 1) (1 - F)^{2^n-2} &= \frac{1}{(2^n + 1)2^{n-1}}. \end{flalign} Then, for $\hea$ and $\ten$, the following theorem holds. \begin{th.} \label{second-moment-1} If the ensemble of the unitary matrices corresponding to $\hea$ and the ensemble of the unitary matrices corresponding to each block of $\ten$ are 2-designs, then the following equalities hold: \begin{flalign} \fhea{2} &= \fhaarn{2}, \label{hea-second}\\ \ften{2} &= 2^{\frac{n}{m}-1} \cdot \frac{2^n + 1}{(2^m + 1)^{\frac{n}{m}}}\fhaarn{2}. \label{ten-second} \end{flalign} \end{th.} The equality (\ref{hea-second}) is readily proved from the assumption of the theorem because, as we mentioned above, if the ensemble of the unitary matrices corresponding to the circuit is a 2-design, the ensemble of the states generated by the circuit is a state 2-design. For Eq.~\eqref{ten-second}, we give the proof in the Appendix. From this theorem we find that $\ften{2}$ is always larger than $\fhaarn{2}$; in particular, $\ften{2} \simeq 2^{n/m-1} \fhaarn{2}$ for large $n$, meaning that the expressibility of TEN is much lower than that of HEA in the sense of the frame potential. As for ALT, it is difficult to obtain an explicit formula as in the case of HEA and TEN.
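The closed form of $\fhaarn{2}$ above can be verified by Monte Carlo: draw pairs of Haar-random states (normalized complex Gaussian vectors) and average $F^2$. The snippet is an illustrative check with arbitrarily chosen sample counts, not part of the proof.

```python
import numpy as np

def haar_state(dim, rng):
    """Haar-random pure state: normalized complex Gaussian vector."""
    v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def frame_potential_haar(n_qubits, t=2, samples=20000, seed=0):
    """Monte Carlo estimate of F_Haar^(t): average of |<psi|psi'>|^(2t)."""
    rng = np.random.default_rng(seed)
    dim = 2 ** n_qubits
    acc = 0.0
    for _ in range(samples):
        fidelity = abs(np.vdot(haar_state(dim, rng), haar_state(dim, rng))) ** 2
        acc += fidelity ** t
    return acc / samples

n = 2
estimate = frame_potential_haar(n)
exact = 1.0 / (2 ** (n - 1) * (2 ** n + 1))  # closed form above: 0.1 for n = 2
```

The same estimator applied to parameter samples of a concrete ansatz would give $\f{2}{C}$, whose excess over `exact` is the expressibility gap discussed next.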
Hence in Theorem~3 below, we provide a formula for computing the values of $\faltcustom{2}{2}$ and $\faltcustom{2}{3}$; the computation methods for the other $\ell$s are left for future work. Before stating the theorem, below we define a 16-dimensional vector $\atwom$, a $16 \times 16$ matrix $B(2, m)$, a 64-dimensional vector $\athreem$, and a $64 \times 64$ matrix $B(3, m)$. Given integers $k_a, k_b \in \{1, 2, 3, 4\}$, the $(4(k_a-1) + k_b)$-th component of the vector $\atwom$ is defined as \begin{flalign} \label{a2m} \atwom_{4(k_a-1) + k_b} &= \inttwo dP dQ \sqrt{\lambda_{k_a}^{(m)}\lambda_{k_b}^{(m)}}\Delta^{(k_a, k_b)}(P, Q), \end{flalign} where $\Delta^{(k_a, k_b)}(P, Q)$ is the function of $m/2 \times m/2$ unitary matrices $P$ and $Q$: \begin{flalign} \Delta^{(k_a, k_b)}(P, Q) &= \sum_{\substack{u u^{\prime} i i^{\prime}\\ j j^{\prime} l l^{\prime}}} \sum_{\substack{p p^{\prime} q q^{\prime}}} \Delta^{k_a}_{u 0 u^{\prime} 0 i 0 i^{\prime} 0} \Delta^{k_b}_{p 0 p^{\prime} 0 q 0 q^{\prime} 0} P_{j u} P_{j^{\prime} u^{\prime}} P_{l i}^{\ast} P_{l^{\prime} i^{\prime}}^{\ast} Q_{l p} Q_{l^{\prime} p^{\prime}} Q_{j q}^{\ast} Q_{j^{\prime} q^{\prime}}^{\ast}. 
\end{flalign} Next, given integers $k_a, k_b, k_c \in \{1, 2, 3, 4\}$, the $(16(k_a-1) + 4(k_b-1) + k_c)$-th component of the vector $\athreem$ is defined as \begin{flalign} \label{a3m} \athreem_{16(k_a-1) + 4(k_b-1) + k_c} &= \inttwo dP dQ \sqrt{\lambda_{k_a}^{(m)}\lambda_{k_b}^{(m)}\lambda_{k_c}^{(m)}}\Delta^{(k_a, k_b, k_c)}(P, Q), \end{flalign} where $\Delta^{(k_a, k_b, k_c)}(P, Q)$ is a function of $m/2 \times m/2$ unitary matrices $P$ and $Q$: \begin{flalign} \label{delta-3} \Delta^{(k_a, k_b, k_c)}(P, Q) &= \sum_{\substack{u u^{\prime} i i^{\prime}\\ j j^{\prime} l l^{\prime}}} \sum_{\substack{r r^{\prime} t t^{\prime}\\ p p^{\prime} q q^{\prime}}} \Delta^{k_a}_{u 0 u^{\prime} 0 i 0 i^{\prime} 0} \Delta^{k_b}_{j l j^{\prime} l^{\prime} t r t^{\prime} r^{\prime}} \Delta^{k_c}_{p 0 p^{\prime} 0 q 0 q^{\prime} 0} P_{t u} P_{t^{\prime} u^{\prime}} P_{j i}^{\ast} P_{j^{\prime} i^{\prime}}^{\ast} Q_{l p} Q_{l^{\prime} p^{\prime}} Q_{r q}^{\ast} Q_{r^{\prime} q^{\prime}}^{\ast}. \end{flalign} Also, given integers $k_a, k_b, k_c, k_d\in \{1, 2, 3, 4\}$, the $(4(k_a-1) + k_b, 4(k_c-1) + k_d)$-th component of the matrix $B(2, m)$ is defined as \begin{flalign} \label{b2m} B(2, m)_{4(k_a-1) + k_b, 4(k_c-1) + k_d} &= \inttwo dP dQ \sqrt{\lambda_{k_a}^{(m)}\lambda_{k_b}^{(m)}}\sqrt{\lambda_{k_c}^{(m)}\lambda_{k_d}^{(m)}}\Delta^{(k_a, k_b, k_c, k_d)}(P, Q), \end{flalign} where $\Delta^{(k_a, k_b, k_c, k_d)}(P, Q)$ is a function of $m \times m$ matrices $P$ and $Q$: \begin{flalign} \Delta^{(k_a, k_b, k_c, k_d)}(P, Q) &= \sum_{\substack{u_2 u^{\prime}_2 i_2 i^{\prime}_2\\ j_2 j^{\prime}_2 l_2 l^{\prime}_2}} \sum_{\substack{ p_2 p^{\prime}_2 q_2 q^{\prime}_2}} \sum_{\substack{u_3 u^{\prime}_3 i_3 i^{\prime}_3\\ j_3 j^{\prime}_3 l_3 l^{\prime}_3}} \sum_{\substack{ p_3 p^{\prime}_3 q_3 q^{\prime}_3}} \Delta^{k_a}_{u_2 0 u^{\prime}_2 0 i_2 0 i^{\prime}_2 0} \Delta^{k_b}_{p_2 0 p^{\prime}_2 0 q_2 0 q^{\prime}_2 0} \Delta^{k_c}_{u_3 0 u^{\prime}_3 0 i_3 0 i^{\prime}_3 0}
\Delta^{k_d}_{p_3 0 p^{\prime}_3 0 q_3 0 q^{\prime}_3 0} \nonumber\\ &\qquad \times P_{j_2 u_2}^{j_3 u_3} P_{j^{\prime}_2 u^{\prime}_2}^{j^{\prime}_3 u^{\prime}_3} P_{l_2 i_2}^{\ast l_3 i_3} P_{l^{\prime}_2 i^{\prime}_2}^{\ast l^{\prime}_3 i^{\prime}_3} Q_{l_2 p_2}^{l_3 p_3} Q_{l^{\prime}_2 p^{\prime}_2}^{l^{\prime}_3 p^{\prime}_3} Q_{j_2 q_2}^{\ast j_3 q_3} Q_{j^{\prime}_2 q^{\prime}_2}^{\ast j^{\prime}_3 q^{\prime}_3}. \end{flalign} For a matrix component $M^{s, t}_{i, j}$, the upper indices correspond to the first $m/2$ qubits and the lower indices correspond to the last $m/2$ qubits. Given integers $k_a, k_b, k_c, k_d, k_e, k_f \in \{1, 2, 3, 4\}$, the $(16(k_a-1) + 4(k_b-1) + k_c, 16(k_d-1) + 4(k_e-1) + k_f)$-th component of the matrix $B(3, m)$ is defined as \begin{flalign} \label{b3m} B(3, m)&_{16(k_a-1) + 4(k_b-1) + k_c, 16(k_d-1) + 4(k_e-1) + k_f} \nonumber \\ &= \inttwo dP dQ \sqrt{\lambda_{k_a}^{(m)}\lambda_{k_b}^{(m)}\lambda_{k_c}^{(m)}}\sqrt{\lambda_{k_d}^{(m)}\lambda_{k_e}^{(m)}\lambda_{k_f}^{(m)}}\Delta^{(k_a, k_b, k_c, k_d, k_e, k_f)}(P, Q), \end{flalign} where $\Delta^{(k_a, k_b, k_c, k_d, k_e, k_f)}(P, Q)$ is a function of $m \times m$ matrices $P$ and $Q$: \begin{flalign} \label{delta-six} \Delta^{(k_a, k_b, k_c, k_d, k_e, k_f)}(P, Q) &=\sum_{\substack{u_2 u^{\prime}_2 i_2 i^{\prime}_2\\ j_2 j^{\prime}_2 l_2 l^{\prime}_2}} \sum_{\substack{r_2 r^{\prime}_2 t_2 t^{\prime}_2\\ p_2 p^{\prime}_2 q_2 q^{\prime}_2}} \sum_{\substack{u_3 u^{\prime}_3 i_3 i^{\prime}_3\\ j_3 j^{\prime}_3 l_3 l^{\prime}_3}} \sum_{\substack{r_3 r^{\prime}_3 t_3 t^{\prime}_3\\ p_3 p^{\prime}_3 q_3 q^{\prime}_3}} \Delta^{k_a}_{u_2 0 u^{\prime}_2 0 i_2 0 i^{\prime}_2 0} \Delta^{k_b}_{j_2 l_2 j^{\prime}_2 l^{\prime}_2 t_2 r_2 t^{\prime}_2 r^{\prime}_2} \Delta^{k_c}_{p_2 0 p^{\prime}_2 0 q_2 0 q^{\prime}_2 0} \nonumber\\ &\qquad \times \Delta^{k_d}_{u_3 0 u^{\prime}_3 0 i_3 0 i^{\prime}_3 0} \Delta^{k_e}_{j_3 l_3 j^{\prime}_3 l^{\prime}_3 t_3 r_3 t^{\prime}_3 r^{\prime}_3} \Delta^{k_f}_{p_3 0 p^{\prime}_3 0 q_3 0 q^{\prime}_3 0} P_{t_2 u_2}^{t_3 u_3} P_{t^{\prime}_2 u^{\prime}_2}^{t^{\prime}_3 u^{\prime}_3} P^{\ast j_3 i_3}_{j_2 i_2} P^{\ast j^{\prime}_3 i^{\prime}_3}_{j^{\prime}_2 i^{\prime}_2} \nonumber\\ &\qquad \times Q_{l_2 p_2}^{l_3 p_3} Q_{l^{\prime}_2 p^{\prime}_2}^{l^{\prime}_3 p^{\prime}_3} Q^{\ast r_3 q_3}_{r_2 q_2} Q^{\ast r^{\prime}_3 q^{\prime}_3}_{r^{\prime}_2 q^{\prime}_2}. \end{flalign} Now we can state the theorem as follows (the proof is given in the Appendix). \begin{th.} \label{second-moment-2} If the ensembles of the unitary matrices corresponding to the blocks of $\altcustom{2}$ and $\altcustom{3}$ are 2-designs, then we have \begin{flalign} \label{aba2} & \faltcustom{2}{2} = \atwom^{\rm T} B(2, m)^{\frac{n}{m} - 1} \atwom, \\ \label{aba3} & \faltcustom{2}{3} = \athreem^{\rm T} B(3, m)^{\frac{n}{m} - 1} \athreem. \end{flalign} \end{th.} \fig{vqe-expressibility.001.png} {The values of $\f{2}{C}/\fhaarn{2}$ for ALT and TEN. The left and right figures show the cases $\ell = 2$ and $\ell = 3$, respectively. The vertical axes are on a logarithmic scale. If $\f{2}{C}/\fhaarn{2}$ is close to 1, the expressibility of the ansatz $C$ is relatively high.} {figure-second-moment}{-1} The vectors $\aellm$ and the matrices $B(\ell, m)$ for $\ell = 2,3$ are obtained by directly computing Eqs. (\ref{a2m}), (\ref{a3m}), (\ref{b2m}), and (\ref{b3m}), which then lead to $\faltcustom{2}{2}$ and $\faltcustom{2}{3}$. Our interest is now in the gap between these quantities and $\fhaarn{2}$, which quantifies the expressibility of ALT. For this purpose, in Fig.~\ref{figure-second-moment} we show $\faltcustom{2}{2}/\fhaarn{2}$ and $\faltcustom{2}{3}/\fhaarn{2}$ as functions of $n/m$, for several values of $(m, n)$. For comparison, $\ftencustom{2}{2}/\fhaarn{2}$ and $\ftencustom{2}{3}/\fhaarn{2}$ are also shown in the figure. Recall that a smaller value of this measure means that the corresponding ansatz has a higher expressibility.
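Once the vector $\atwom$ and the matrix $B(2, m)$ have been evaluated numerically, Eq.~(\ref{aba2}) reduces to a small vector--matrix-power computation. A minimal numpy sketch of this structure; the entries of `a` and `B` below are random placeholders standing in for the integrals (\ref{a2m}) and (\ref{b2m}), not actual values:

```python
import numpy as np

# Evaluate F = a^T B^(n/m - 1) a, the structure of Eq. (aba2).
# `a` (16-dim) and `B` (16x16) are random placeholders here, standing in
# for the Haar integrals that define the actual vector and matrix.
rng = np.random.default_rng(0)
a = rng.random(16)
B = rng.random((16, 16))

def frame_potential(a, B, n, m):
    """a^T B^(n/m - 1) a for an n-qubit ALT built from m-qubit blocks."""
    return a @ np.linalg.matrix_power(B, n // m - 1) @ a

# For n/m = 2 the formula is simply a^T B a.
print(np.isclose(frame_potential(a, B, 8, 4), a @ B @ a))  # True
```

The same routine evaluates Eq.~(\ref{aba3}) with a 64-dimensional vector and a $64 \times 64$ matrix.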
The notable points are as follows: \begin{itemize} \item For any pair of $(n, m)$, it is clear that $\faltcustom{2}{\ell}$ is much smaller than $\ftencustom{2}{\ell}$ for both $\ell=2,3$. This means that, as expected, ALT has a much higher expressibility than TEN. \item For any pair of $(n, m)$, $\faltcustom{2}{2} > \faltcustom{2}{3}$ holds, i.e., as $\ell$ increases, the expressibility increases. \item For any fixed $n/m$, the ALT with bigger $m$ always has a higher expressibility. For instance, the ALT with $(n,m)=(50,10)$ has a higher expressibility than the ALT with $(n,m)=(20,4)$. This is simply because, if the structure of the circuit (the number of divisions of each layer into blocks) is the same, then an ALT with bigger block components has a higher expressibility. \item For a fixed $n$, the second-order frame potential of ALT becomes smaller by taking $m$ bigger. For instance, for $n=100$, we have ${\mathcal F}^{(2)}(C_{\rm ALT}^{2,2,100}) > {\mathcal F}^{(2)}(C_{\rm ALT}^{2,4,100}) > {\mathcal F}^{(2)}(C_{\rm ALT}^{2,10,100})$. That is, for a limited number of available qubits, the ALT with fewer blocks has a higher expressibility. \item $\faltcustom{2}{\ell} \simeq \fhaarn{2}$ when $m=10$, for all $n/m$ within the figure and for both $\ell=2,3$. Hence the ALT composed of blocks with $m=10$ qubits in each layer has almost the same expressibility as the HEA, regardless of the total number of qubits $n$. In other words, for a given HEA with fixed $n$, we can divide each layer into separate 10-qubit blocks to make an ALT, without decreasing the expressibility. \end{itemize} The last point is of particular importance in our scenario. That is, we are concerned with the condition on the number $m$ such that $\faltcustom{2}{\ell} \simeq \fhaarn{2}$ holds. The following Theorem~4 and the subsequent Corollary~1, which can be readily derived from the theorem, provide a means for evaluating such $m$.
\begin{th.} \label{second-moment-alt} If the ensembles of the unitary matrices corresponding to the blocks of $\altcustom{2}$ and $\altcustom{3}$ are 2-designs, then the following inequalities hold: \begin{flalign} \faltcustom{2}{2} &< \left(1+\frac{1}{2^{n}}\right)\left(1+\frac{1.2}{2^m}\right)^2\left(1+ 8\left(\left(1 + \frac{20.8}{2^{m/2}}\right)^{\frac{n}{m}-1} -1\right)\right)\fhaarn{2}, \\ \faltcustom{2}{3} &< \left(1+\frac{1}{2^{n}}\right)\left(1+\frac{1.2}{2^m}\right)^2\left(1+ 32\left(\left(1 + \frac{83.2}{2^{m/2}}\right)^{\frac{n}{m}-1} -1\right)\right)\fhaarn{2}. \end{flalign} \end{th.} \begin{corollary} \label{large-n} If $m=2a\log_2 n$ and $143/(an^{a-1}\log_2 n) < 1$, then \begin{flalign} \faltcustom{2}{2} &< \left(1+\frac{1}{2^{n}}\right)\left(1+\frac{1.2}{n^{2a}}\right)^2\left(1+ \frac{143}{an^{a-1} \log_2 n}\right)\fhaarn{2}. \end{flalign} If $m=2a\log_2 n$ and $2288/(an^{a-1}\log_2 n) < 1$, then \begin{flalign} \faltcustom{2}{3} &< \left(1+\frac{1}{2^{n}}\right)\left(1+\frac{1.2}{n^{2a}}\right)^2\left(1+ \frac{2288}{an^{a-1}\log_2 n}\right)\fhaarn{2}. \end{flalign} \end{corollary} Recall from Eq.~\eqref{frame-potential ineq} that $\f{t}{C} \geq \fhaar{t}$ holds for any ansatz $C$. Therefore, if $m \geq 4\log_2 n$ and $n$ is large enough, Corollary~1 implies that $\faltcustom{2}{2} \sim \fhaarn{2}$ and $\faltcustom{2}{3} \sim \fhaarn{2}$. This means that the ensembles of the states generated by $\altcustom{2}$ and $\altcustom{3}$ are almost 2-designs. Hence in this case, from Theorem~2, the expressibility of ALT is as high as that of HEA. It is worth mentioning that, when $m=O(\log_2 n)$, the vanishing gradient problem does not happen in ALT as long as the cost function is local and $\ell$ is small \cite{local-cost-function}.
More precisely, it was shown there that the variance of the gradient of such a cost function is larger than a value proportional to $1/2^{m\ell}$; thus, by taking $m=O(\log_2 n)$, the variance decreases only polynomially in $n$, i.e., as $O(1/{\rm poly}(n))$, whereas in the HEA case the same variance decreases exponentially fast as $n$ becomes large. Therefore, the expressibility and the trainability coexist in the shallow ALT with $m=O(\log_2 n)$. \subsection{Expressibility measured by KL divergence} \label{numerical} \begin{table}[h] \small \begin{center} \begin{tabular}{|p{6mm}|p{12mm}|p{6mm}|p{6mm}|p{18mm}|p{18mm}|p{18mm}|} \hline $n$ & {\bf Ansatz} & $\ell$ & $m$ &\begin{tabular}{l}Depth of \\each block \end{tabular} & \begin{tabular}{l}\# of gate\\parameters \end{tabular}&\begin{tabular}{l}Example of\\the circuit \end{tabular}\\ \hline \hline \CenterRow{6}{4} & \CenterRow{2}{\bf TEN}& 2 & 2&2&16&-\\ \cline{3-7} & & 3 & 2&2 & 24 &Fig.~\ref{tensor-product-ansatz-ex} \\ \cline{2-7} & \CenterRow{2}{\bf ALT} & 2 & 2 & 2 &16&-\\ \cline{3-7} & &3 & 2 & 2 & 24 &Fig.~\ref{alternating-layered-ansatz-ex}\\ \cline{2-7} & \CenterRow{1}{\bf HEA} & 4 & - &-&16& Fig.~\ref{hardware-efficient-ansatz-ex}\\ \hline \hline \CenterRow{5}{6} & \CenterRow{2}{\bf TEN} &2 & 2 & 2&24 &-\\ \cline{3-7} & & 3 & 2 & 2 &36&-\\ \cline{2-7} & \CenterRow{2}{\bf ALT}& 2 & 2&2&24&-\\ \cline{3-7} & & 3 & 2&2 & 36 &- \\ \cline{2-7} & {\bf HEA} & 6 & - & - &36&-\\ \hline \hline \CenterRow{10}{8} & \CenterRow{4}{\bf TEN} & \CenterRow{2}{2} & 2 & 2 & 32 &-\\ \cline{4-7} & & & 4 & 4 & 64 &-\\ \cline{3-7} & & \CenterRow{2}{3} & 2 & 2 & 48 &-\\ \cline{4-7} & & & 4 & 4 & 96 &-\\ \cline{2-7} & \CenterRow{4}{\bf ALT} & \CenterRow{2}{2} & 2 & 2 & 32 &-\\ \cline{4-7} & & & 4 & 4 & 64 &-\\ \cline{3-7} & & \CenterRow{2}{3} & 2 & 2 & 48 &-\\ \cline{4-7} & & & 4 & 4 & 96 &-\\ \cline{2-7} & \CenterRow{1}{\bf HEA} & 8 & - & -&64&-\\ \cline{1-7} \end{tabular} \caption{Parameters chosen for computing the
KL-divergence. The number of gate parameters is computed as $n \times \ell \times {\rm (Depth\ of\ each\ block)}$ for TEN and ALT, and $n \times \ell$ for HEA. } \label{kl-setting} \end{center} \end{table} In Subsection \ref{moment}, we have shown that the first two moments of $\pf{\alt}$ and $\pf{\hea}$ are close to those of $\phaar$, as long as $m = O(\log_2 n)$ and the block components of the ansatzes are sufficiently random. (Recall that, if every block is completely random, then the set of HEA constitutes the Haar ensemble.) The result implies that both $\pf{\alt}$ and $\pf{\hea}$ are close to $\phaar$ itself, meaning that $\pf{\alt} \simeq \pf{\hea} \simeq \phaar$. In this subsection, to support this conjecture, we evaluate the values of the KL-divergence $\expr(C)=D_{\mathrm{KL}}\left(\pf{C} \| P_{\mathrm{Haar}}(F)\right)$ for the cases $C=\alt$ and $C=\hea$, in addition to $C=\ten$ for comparison, with various sets of $(\ell, m, n)$. In particular, we focus on the relationship between the values of $\f{2}{C}$ and $\expr(C)$, and check whether $\f{2}{C} \simeq \f{2}{C^{\prime}}$ leads to $\expr(C) \simeq \expr(C^{\prime})$ for a fixed $n$. The parameters taken for calculating the KL-divergence are summarized in Table~\ref{kl-setting}. Note that the circuits are chosen to be similar to those used in Subsection~\ref{moment}; for TEN and ALT, the depth of the circuit inside each block is set to $m$ so that the ensemble of the unitary matrices corresponding to those circuits becomes close to a 2-design \cite{approximate-2-design,approximate-2-design-2}; for HEA, $\ell$ is set to $n$ so that the ensemble of the unitary matrices corresponding to the {\it whole circuits} becomes close to a 2-design.
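For intuition on the Haar baseline $P_{\mathrm{Haar}}(F)$ entering $\expr(C)$, note that it can also be sampled directly: a Haar-random state is a normalized vector of i.i.d. complex Gaussians. A minimal numpy sketch (the sample size here is an arbitrary illustrative choice, not a value used in this paper):

```python
import numpy as np

# A Haar-random state is a normalized vector of i.i.d. complex Gaussians;
# fidelities F of independent pairs of such states follow
# P_Haar(F) = (2^n - 1)(1 - F)^(2^n - 2), whose first moment is 1/2^n.
rng = np.random.default_rng(1)
n = 4
d = 2 ** n  # Hilbert-space dimension

def haar_state(rng, d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

F = np.array([abs(np.vdot(haar_state(rng, d), haar_state(rng, d))) ** 2
              for _ in range(20000)])
print(F.mean())  # close to 1/16 = 0.0625
```

The empirical mean of these fidelities reproduces $\fhaarn{1} = 1/2^n$, consistent with the appendix computation.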
It is expected that $\faltbase{2}{3}{2}{4} \approx \fheabase{2}{4}{4}$ and $\faltbase{2}{3}{4}{8} \approx \fheabase{2}{8}{8}$ are realized, because, in Fig.~\ref{figure-second-moment}, we see that $\faltbase{2}{3}{2}{4}$ and $\faltbase{2}{3}{4}{8}$ are almost equal to the Haar values when the ensembles of unitary matrices corresponding to each block are 2-designs. Thus, we here check whether $\f{2}{C} \simeq \f{2}{C^{\prime}}$ implies $\expr(C) \simeq \expr(C^{\prime})$ for these parameter sets. As examples of the circuits, the whole structures of $\tenbase{3}{2}{4}$, $\altbase{3}{2}{4}$, and $\heabase{4}{4}$ in our settings are shown in Figs.~\ref{tensor-product-ansatz-ex}, \ref{alternating-layered-ansatz-ex}, and \ref{hardware-efficient-ansatz-ex}, respectively. As illustrated in the figures, each layer is composed of parametrized single-qubit gates and fixed 2-qubit CNOT gates. In each trial of computing the KL-divergence, we generate 200 states. When generating a state in each trial, we randomly choose the parameters and the types of the single-qubit gates of the circuit. That is, for the $i$-th single-qubit gate $R_i(\theta_i) = \exp(-\mathrm{i}\sigma_{a_i} \theta_i)$ with $a_i \in \{x, y, z\}$ and $\theta_i \in [0, 2\pi]$, in each trial all $a_i$ and $\theta_i$ are randomly chosen. Then 200 fidelity values are computed, which are then used to construct a histogram with 1000 bins to approximate the probability distribution $\pf{C}$. Note that increasing the number of generated states and the number of bins does not affect the following conclusions. In this setting, Fig.~\ref{expr} shows the KL divergences $\expr(\alt)$, $\expr(\hea)$, and $\expr(\ten)$. As a reference, we also show the values of $\f{2}{C}/\fhaarn{2}$ computed from the second moment of the fidelity distributions. Each data point and its associated error bar are the average and the standard deviation over 10 trials of the computation, respectively.
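The histogram-based estimate of $\expr(C)$ just described can be sketched as follows. The function `kl_to_haar` and the stand-in samples are our own illustration (following the text's 200 samples and 1000 bins), not code from the paper; empty bins are handled with the convention $0 \cdot \log(0/q) = 0$:

```python
import numpy as np

# KL-divergence estimate D_KL(P_C(F) || P_Haar(F)) from fidelity samples:
# bin the samples into a histogram and compare with the analytic Haar
# density at the bin centers.
rng = np.random.default_rng(2)
n, bins = 4, 1000
d = 2 ** n

def kl_to_haar(samples, d, bins):
    edges = np.linspace(0.0, 1.0, bins + 1)
    counts, _ = np.histogram(samples, bins=edges)
    p = counts / counts.sum()               # empirical P_C(F) per bin
    centers = (edges[:-1] + edges[1:]) / 2
    q = (d - 1) * (1 - centers) ** (d - 2)  # Haar density at bin centers
    q = q / q.sum()                         # Haar probability per bin
    mask = p > 0                            # convention: 0 * log(0/q) = 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Haar-distributed fidelities (F ~ Beta(1, 2^n - 1)) should score far lower
# than a clearly non-Haar, concentrated distribution.
haar_like = rng.beta(1, d - 1, size=200)
peaked = np.full(200, 0.5)
print(kl_to_haar(haar_like, d, bins) < kl_to_haar(peaked, d, bins))  # True
```

With only 200 samples and 1000 bins the estimate carries finite-sample bias, which is why the averages over 10 trials in Fig.~\ref{expr} come with error bars.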
The notable points are as follows: \begin{itemize} \item For a fixed $n$, $\expr(\ten)$ is always bigger than $\expr(\alt)$ and $\expr(\hea)$. \item As the number of layers increases, the KL-divergence decreases for fixed $(m, n)$. \item For a fixed $n$, the tendency of the values of $\f{2}{C}/\fhaarn{2}$ is strongly correlated with that of the KL-divergence. \item As expected, $\faltcustom{2}{3} \approx \fheacustom{2}{n}$ is realized when $(m, n) = (2, 4)$ and $(4, 8)$. \item $\expr(\altcustom{3})$ is as small as $\expr(\heabase{n}{n})$ in the parameter sets where $\faltcustom{2}{3} \approx \fheacustom{2}{n}$ is realized, i.e., $(m, n) = (2, 4)$ and $(4, 8)$. This result implies that the state distribution in ALT is also close to that in HEA, in the setting where the second frame potential is close to $\fhaarn{2}$. \end{itemize} From the above observations, we find a strong correlation between $\f{2}{C}/\fhaarn{2}$ and $\expr(C)$; that is, as $\f{2}{C}/\fhaarn{2}$ becomes close to $1$, $\expr(C)$ becomes close to $0$. Therefore, combining this with the result obtained in Section~\ref{moment}, we get clear evidence that, as long as $m = O(\log_2 n)$, $\expr(\altcustom{2})\approx 0$ and $\expr(\altcustom{3})\approx 0$ hold. That is, the high expressibility and trainability of ALT proven in Section~\ref{moment} are assured also in terms of the KL-divergence. \begin{figure} \caption{The structures of ansatzes in our setting. } \label{tensor-product-ansatz-ex} \label{alternating-layered-ansatz-ex} \label{hardware-efficient-ansatz-ex} \end{figure} \fig{vqe-expressibility.003.png} {$\f{2}{C}/\fhaarn{2}$ (top) and KL-divergence (bottom) for each ansatz.
The sets of points for which $\faltcustom{2}{3} \approx \fheacustom{2}{n}$ holds are enclosed by the red rectangles.} {expr}{0} \section{Application to VQE} \label{vqe-section} Recall that ALT was originally introduced with the motivation to resolve the vanishing gradient problem in VQE, which has often been observed when using HEA; the concern was then that ALT might not offer a chance to reach the optimal solution, due to a possible loss of expressibility. But we now know that this concern has been resolved under some conditions, as concluded in the previous section; that is, the expressibility and the trainability coexist in the shallow ALT with $m=O(\log_2 n)$. This section provides a case study of VQE that illustrates this desirable fact. We choose the Hamiltonian of the 4-qubit Heisenberg model on a 1-dimensional lattice with periodic boundary conditions: \begin{flalign} \label{hamiltonian} {\cal H} &= \sum_{i=1}^{4} (\sigma_x^i \sigma_x^{i+1} + \sigma_y^{i} \sigma_y^{i+1} + \sigma_z^i \sigma_z^{i+1}), \end{flalign} where $\sigma^i_a$ ($a \in \{x, y, z\}$) is the Pauli matrix that operates on the $i$-th qubit and $\sigma^5_a = \sigma^1_a$. The goal of the VQE problem is to find the minimum eigenvalue of ${\cal H}$ by calculating the mean energy $\langle {\cal H} \rangle = \langle \psi_{\theta} | {\cal H}| \psi_{\theta}\rangle = \langle 0 | U_C(\theta)^\dagger {\cal H} U_C(\theta) | 0 \rangle$ via a quantum computer and then updating the parameter $\theta\in\Theta$ to decrease $\langle {\cal H} \rangle$ via a classical computer, in each iteration. As ansatzes, $\tenbase{3}{2}{4}$, $\altbase{3}{2}{4}$, and $\heabase{4}{4}$ are chosen. As indicated in Fig.~\ref{expr}, the values of the KL divergence corresponding to these ansatzes satisfy $\expr(\tenbase{3}{2}{4}) > \expr(\altbase{3}{2}{4}) \simeq \expr(\heabase{4}{4})$.
That is, this ALT has expressibility as high as that of HEA and, further, is expected to enjoy trainability, unlike the HEA. The simulation results are shown in Fig.~\ref{vqe}. In the top three subfigures of Fig.~\ref{vqe}, the blue lines and the associated error bars represent the average and the standard deviation of $\langle {\cal H} \rangle$ over 100 trials in total, respectively. In each trial, the initial parameters of the ansatz are randomly chosen, and the optimization to decrease $\langle {\cal H} \rangle$ in each iteration is performed by using the Adam optimizer with learning rate $0.001$ \cite{adam}. The green line shows the theoretical minimum energy (i.e., the ground energy) of ${\cal H}$. Also, the bottom subfigures of Fig.~\ref{vqe} show the three trajectories of $\langle {\cal H} \rangle$ (red lines) whose energies at the final iteration step are the smallest three. The ansatz $\tenbase{3}{2}{4}$, which has the least expressibility in the sense of the frame potential and the KL divergence analysis, clearly gives the worst result; its least mean energy is far above the ground energy. This is simply because the state generated via $\tenbase{3}{2}{4}$ cannot represent the ground state for any parameter choice. The result of $\heabase{4}{4}$ is the second worst; it also does not reach the ground energy, as in the case of TEN. Note that increasing the number of parameters does not change this result; that is, we also executed the simulation with $\heabase{4}{6}$, which has the same number of parameters as $\altbase{3}{2}{4}$, but did not find a better result than that of $\heabase{4}{4}$. On the other hand, $\altbase{3}{2}{4}$ succeeds in finding the ground state; in fact, 5 of the total 100 trajectories generated via this ALT reach the ground energy. Hence this result implies that, in this example, the expressibility and the trainability coexist in ALT, while the latter is lacking in HEA.
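The target value in these plots can be checked independently of VQE by exact diagonalization of the Hamiltonian \eqref{hamiltonian}; a minimal numpy sketch:

```python
import numpy as np
from functools import reduce

# Build H = sum_i (X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1}) on a periodic
# 4-qubit ring, Eq. (hamiltonian), and diagonalize it exactly.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def two_site(op, i, n=4):
    """Tensor product placing `op` on qubits i and i+1 (mod n)."""
    factors = [I2] * n
    factors[i % n] = op
    factors[(i + 1) % n] = op
    return reduce(np.kron, factors)

H = sum(two_site(P, i) for P in (X, Y, Z) for i in range(4))
ground = np.linalg.eigvalsh(H).min()
print(ground)  # about -8.0, the exact ground energy of this ring
```

This $16 \times 16$ diagonalization confirms the ground energy that only the ALT trajectories reach in Fig.~\ref{vqe}.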
This better trainability of ALT compared to HEA could be explained in terms of the ``magnitude'' of the gradient vector $\nabla_{\theta} \langle {\cal H} \rangle = [\partial \langle {\cal H} \rangle/\partial \theta_1, \ldots, \partial \langle {\cal H} \rangle/\partial \theta_P]$, where $P$ is the number of parameters. Care should be taken to define an appropriate magnitude, because the ansatzes in focus have different numbers of parameters. In this work, we regard $\partial \langle {\cal H} \rangle/\partial \theta_p$ $(p=1, \ldots, P)$ as random variables and, based on this view, define the magnitude of $\nabla_{\theta} \langle {\cal H} \rangle$ as the mean of the absolute values of those random variables: \begin{flalign} \label{EQUATION-g-definition} \| g(\theta)\| = \frac{1}{P}\sum_{p=1}^{P} \left| \frac{\partial \langle {\cal H} \rangle}{\partial \theta_p} \right|. \end{flalign} We evaluate the average of the magnitude \eqref{EQUATION-g-definition} over sample trajectories, at several values of the energy reached through the updates of $\theta$. For this purpose, let $\theta_C^{(i)}(t)$ denote the vector of parameters of a given ansatz $C$ at the $t$-th step (iteration number) of the $i$-th trajectory, and let $E_C^{(i)}(t) = \langle 0| U_C(\theta_C^{(i)}(t))^{\dagger}\mathcal{H}U_C(\theta_C^{(i)}(t))|0\rangle$ be the energy at $\theta_C^{(i)}(t)$. Next, to define the average of $\| g(\theta_C)\|$ over the sample trajectories at a given energy value $E$, let $t_E^{(i)}$ be the smallest integer such that the energy of the $i$-th trajectory satisfies $E_C^{(i)}(t_E^{(i)}) \leq E$; in other words, $t_E^{(i)}$ represents the iteration number at which the $i$-th trajectory first reaches the value $E$. Note that some trajectories may never reach a given value $E$ during the whole sequence of updates of $\theta_C$. (For example, as seen above, all trajectories of TEN never reached the value $E=-7$.)
Hence, let ${\cal I}_E$ be the set of indices $i$ such that the $i$-th trajectory reaches the value $E$ at some iteration step $t_E^{(i)}$. We can now define the average of $\| g(\theta_C)\|$ as \begin{equation} \label{EQUATION-definition-of-average} \langle\| g(\theta_C)\|\rangle_E = \frac{1}{|{\cal I}_E|}\sum_{i\in{\cal I}_E} \left\|g(\theta_C^{(i)}(t_E^{(i)}))\right\|, \end{equation} where $|{\cal I}_E|$ denotes the size of ${\cal I}_E$. Figure~\ref{fig-gradient} shows Eq.~\eqref{EQUATION-definition-of-average} for specific values of $E$ (integers from $-7$ to $0$) for the three ansatzes $\tenbase{3}{2}{4}$, $\altbase{3}{2}{4}$, and $\heabase{4}{4}$. The standard deviation of the average is indicated by the error bar. For instance, $\langle \| g(\theta_C)\| \rangle_E \simeq 5.6$ for the orange point (i.e., the case of ALT) at $E = -1$ was calculated with $|{\cal I}_E|=100$; actually, all 100 trajectories reached energies below $E = -1$. The figure shows that $\langle\| g(\theta_C)\|\rangle_E$ in ALT is always larger than that in HEA for all $E$, and this result is consistent with the theorems given in \cite{local-cost-function}. Such a larger gradient vector might enable ALT to circumvent possible flat energy landscapes and eventually realize better trainability than HEA, but further studies are necessary to confirm this observation. Note that TEN has the largest values of $\langle\| g(\theta_C)\|\rangle_E$ when $E\geq -5$, which nevertheless do not lead to convergence to the global minimum due to the lack of expressibility. \fig{vqe-expressibility.002.png} {Top: Energy versus the iteration step in the VQE problem for the Hamiltonian \eqref{hamiltonian}, with the ansatz TEN (left), ALT (center), and HEA (right). The blue lines and the associated error bars represent the average and the standard deviation of the mean energies over 100 trials in total, respectively; in each trial, the initial parameters of the ansatz are randomly chosen.
Optimization to decrease $\langle {\cal H} \rangle$ in each iteration is performed by using the Adam optimizer with learning rate $0.001$. Bottom: Three of the 100 trajectories for each ansatz TEN (left), ALT (center), and HEA (right), indicated by red lines. The trajectories are chosen such that the energies at the final iteration step are the three smallest values. } {vqe}{0} \fig{vqe-expressibility.017.png} {The average and the standard deviation of $\|g(\theta)\|$ versus $E$. The norm of the gradient is defined as $\| g(\theta)\| = \frac{1}{P}\sum_{p=1}^{P}\left| \frac{\partial\langle \mathcal{H} \rangle}{\partial\theta_p} \right|$. } {fig-gradient} {0} \section{Conclusion} \label{conclusion} This paper has examined the expressive power of the shallow ALT, which was proposed before as a solution to the vanishing gradient problem found in various types of hybrid quantum-classical algorithms. Our conclusion is that, in addition to such good trainability, shallow ALTs also have high expressibility; that is, in the measures of the frame potential and the KL-divergence, those shallow ALTs have almost the same expressibility as HEAs, which are often used in hybrid algorithms but suffer from the vanishing gradient problem. In particular, we have proven that such expressibility holds if the number of entangled qubits in each block is of the order of the logarithm of the number of all resource qubits, which is consistent with the previous result discussing the trainability of ALT. We also provided a case study of a VQE problem implying that the ALT enjoys both the expressibility and the trainability. Although our results are limited to the case $\ell = 2, 3$, we have numerically observed that the ALT acquires even higher expressibility when making $\ell$ bigger. Therefore, we conjecture that the above conclusion still holds for ALT with $\ell \geq 4$. The rigorous proof is left for future work.
\\ \mbox{} {\bf Acknowledgement: } This work is supported by the MEXT Quantum Leap Flagship Program Grant Number JPMXS0118067285. \appendix \section{Proof of Theorems} \label{proof-of-theorem} Notation: For the unitary matrix $U_a$ corresponding to the entire circuit, we denote the unitary matrix corresponding to the $i$-th layer of the circuit by $\ualayer{i}$ and the unitary matrix corresponding to the $j$-th block in the $i$-th layer by $\uablock{i}{j}$. \subsection{Proof of Theorem \ref{first-moment-1}} First, because the probability distribution of $F=|\langle\psi|\psi^{\prime}\rangle|^2$ with $n$-qubit states $|\psi\rangle$ and $|\psi^{\prime}\rangle$ taken from the Haar measure is given by $P_{\mathrm {Haar}} (F) = (2^n-1) (1-F)^{2^n-2}$, the value of $\fhaarn{1}$ is straightforwardly computed as \begin{flalign} \fhaarn{1} = \int_{\rm Haar}\int_{\rm Haar}|\langle\psi|\psi^{\prime}\rangle|^{2}d\psi d\psi^{\prime} = \int_0^{1}dF F(2^n - 1) (1 - F)^{2^n-2} = \frac{1}{2^n}. \end{flalign} Next, we provide the proof of $\f{1}{\alt} = \fhaarn{1}$.
Given two final states $ | \phi \rangle = U_{a} |0\rangle$ and $ | \psi \rangle = U_{b} | 0 \rangle $ generated by $\alt$, we have \begin{flalign} \label{first-frame-potential} \qquad \f{1}{\alt} & = \intone d{U_{a}} d{U_{b}} \langle 0 |U_{a}^{\dagger} U_{b} |0\rangle \langle 0|U_{b}^{\dagger} U_{a} |0\rangle \nonumber\\ &= \intone \left(\prod_{i=1}^\ell d\ualayer{i}\right)\left(\prod_{i=1}^\ell d\ublayer{i}\right) \nonumber \\ &\qquad \times \langle 0 |\ualayer{1}^\dagger \ualayer{2}^\dagger \cdots \ualayer{\ell}^\dagger \ublayer{\ell} \cdots \ublayer{2} \ublayer{1}|0\rangle \nonumber\\ &\qquad \times \langle 0 |\ublayer{1}^\dagger \ublayer{2}^\dagger \cdots \ublayer{\ell}^\dagger \ualayer{\ell} \cdots \ualayer{2} \ualayer{1}|0\rangle \nonumber\\ &= \intone \left(\prod_{i=1}^\ell \left(\prod_{j=1}^{k(i)} d\uablock{i}{j} \right)\right) \left(\prod_{i^{\prime}=1}^\ell \left(\prod_{j^{\prime}=1}^{k(i^{\prime})} d\ubblock{i^{\prime}}{j^{\prime}} \right)\right) \nonumber\\ &\qquad \times \langle 0 |\ualayer{1}^\dagger \ualayer{2}^\dagger \cdots \ualayer{\ell}^\dagger \ublayer{\ell} \cdots \ublayer{2} \ublayer{1}|0\rangle \nonumber\\ &\qquad \times \langle 0 |\ublayer{1}^\dagger \ublayer{2}^\dagger \cdots \ublayer{\ell}^{\dagger} \ualayer{\ell} \cdots \ualayer{2} \ualayer{1}|0\rangle, \end{flalign} where $k(i)$ is the number of blocks in the $i$-th layer and each $\int dU$ is the average over the ensemble of the unitary matrix $U$. Because the distribution of each $\uablock{i}{j}$ is 2-design (and is therefore 1-design), we can apply the formula (\ref{one-design}) to the integrals with respect to $\uablock{i}{j}$. 
Actually, by first integrating over the blocks $\prod_{j=1}^{k(\ell)} d\uablock{\ell}{j}$ of the last layer of $U_a$ in the last line of (\ref{first-frame-potential}), we have \begin{flalign} \label{first-frame-potential-2} \qquad \f{1}{\alt} &= \left(\frac{1}{2^m}\right)^{k(\ell)} \intone \left(\prod_{i=1}^{\ell-1} \left(\prod_{j=1}^{k(i)} d\uablock{i}{j} \right)\right)\left(\prod_{i^{\prime}=1}^\ell \left(\prod_{j^{\prime}=1}^{k(i^{\prime})} d\ubblock{i^{\prime}}{j^{\prime}} \right)\right) \nonumber\\ &\qquad \times \langle 0 |\ualayer{1}^\dagger \ualayer{2}^\dagger \cdots \ualayer{\ell - 1}^\dagger \ualayer{\ell-1} \cdots \ualayer{2} \ualayer{1}|0\rangle \nonumber\\ &\qquad \times \langle 0 |\ublayer{1}^\dagger \ublayer{2}^\dagger \cdots \ublayer{\ell}^\dagger \ublayer{\ell} \cdots \ublayer{2} \ublayer{1}|0\rangle \nonumber\\ & = \left(\frac{1}{2^m}\right)^{\frac{n}{m}}\intone \left(\prod_{i=1}^{\ell-1} \left(\prod_{j=1}^{k(i)} d\uablock{i}{j} \right)\right)\left(\prod_{i^{\prime}=1}^{\ell} \left(\prod_{j^{\prime}=1}^{k(i^{\prime})} d\ubblock{i^{\prime}}{j^{\prime}} \right)\right) \times 1 \nonumber\\ & = \frac{1}{2^n} = \fhaarn{1}. \end{flalign} The other equality in Eq.~(\ref{first-moment-formula}) can be proved in the same manner. \subsection{Proof of Theorem \ref{second-moment-1}} Similar to the first frame potential, the value of $\fhaarn{2}$ is straightforwardly computed as \begin{flalign} \fhaarn{2} = \int_0^{1}dF F^2(2^n - 1) (1 - F)^{2^n-2} = \frac{1}{2^{n-1}(2^n + 1)}. \end{flalign} Next, we compute $\ften{2}$.
Given two final states $ | \phi \rangle = U_a|0\rangle$ and $ | \psi \rangle = U_b |0\rangle$, it is computed as \begin{flalign} \ften{2} &= \inttwo \left(\prod_{i=1}^{\ell} d{\uablock{i}{1}} d{\ubblock{i}{1}}\right) |\langle 0 |\uablock{1}{1}^{\dagger} \uablock{2}{1}^{\dagger} \cdots\uablock{\ell}{1}^{\dagger} \ubblock{\ell}{1}\cdots\ubblock{2}{1}\ubblock{1}{1} |0\rangle|^4 \nonumber \\ &\times \inttwo \left(\prod_{i=1}^{\ell} d{\uablock{i}{2}} d{\ubblock{i}{2}}\right) |\langle 0 |\uablock{1}{2}^{\dagger} \uablock{2}{2}^{\dagger} \cdots\uablock{\ell}{2}^{\dagger} \ubblock{\ell}{2}\cdots\ubblock{2}{2}\ubblock{1}{2} |0\rangle|^4 \nonumber \\ &\times \cdots \times \inttwo \left(\prod_{i=1}^{\ell} d{\uablock{i}{\frac{n}{m}}} d{\ubblock{i}{\frac{n}{m}}}\right) \nonumber \\ & \qquad\qquad \times |\langle 0 |\uablock{1}{\frac{n}{m}}^{\dagger} \uablock{2}{\frac{n}{m}}^{\dagger} \cdots\uablock{\ell}{\frac{n}{m}}^{\dagger} \ubblock{\ell}{\frac{n}{m}}\cdots\ubblock{2}{\frac{n}{m}}\ubblock{1}{\frac{n}{m}} |0\rangle|^4 \nonumber \\ &= \left(\frac{1}{(2^m+1)2^{m-1}}\right)^{\frac{n}{m}} = 2^{\frac{n}{m}-1} \cdot \frac{2^n + 1}{(2^m + 1)^{\frac{n}{m}}}\fhaarn{2}. \end{flalign} \subsection{Proof of Theorem \ref{second-moment-2}} Here we only show the computation of $\faltcustom{2}{3}$; the computation of $\faltcustom{2}{2}$ can be done in a similar manner.
The second frame potential can be expressed as follows: \begin{flalign*} \faltcustom{2}{3} &= \int d\ualayer{1}d\ualayer{2} d\ualayer{3} d\ublayer{1} d\ublayer{2} d\ublayer{3} ~ |\langle 0 |\ualayer{1}^{\dagger} \ualayer{2}^{\dagger} \ualayer{3}^{\dagger} \ublayer{3} \ublayer{2}\ublayer{1}|0\rangle|^4 \\ &= \inttwo \left(\prod_{i=1}^{3}\prod_{j=1}^{k(i)} d\uablock{i}{j}\right) \left(\prod_{i^{\prime}=1}^{3}\prod_{j^{\prime}=1}^{k(i^{\prime})} d\ubblock{i^{\prime}}{j^{\prime}}\right) \\ & \qquad \times |\langle 0 |\uablock{1}{1}^{\dagger}\uablock{1}{2}^{\dagger} \cdots \uablock{1}{n/m}^{\dagger}\uablock{2}{1}^{\dagger}\uablock{2}{2}^{\dagger} \cdots\uablock{2}{n/m+1}^{\dagger} \\ &\qquad \qquad \times \uablock{3}{1}^{\dagger}\uablock{3}{2}^{\dagger} \cdots \uablock{3}{n/m}^{\dagger} \ubblock{3}{n/m}\cdots\ubblock{3}{2}\ubblock{3}{1} \\ &\qquad\qquad\qquad \times \ubblock{2}{n/m+1} \cdots\ubblock{2}{2}\ubblock{2}{1}\ubblock{1}{n/m} \cdots\ubblock{1}{2}\ubblock{1}{1} |0\rangle|^4 \nonumber. \end{flalign*} Recall that $k(i)$ denotes the number of blocks in the $i$-th layer; $k(i)=n/m$ for $i=1,3$ and $k(i)=n/m+1$ for $i=2$. We can get the final formula in the theorem by integrating only the unitary matrices in the first and the third layers.
Executing the integrals $\inttwo \prod_{j=1}^{n/m} d\uablock{i}{j}$ and $\inttwo \prod_{j^{\prime}=1}^{n/m} d\ubblock{i}{j^{\prime}}$ for $i=1,3$, we have \begin{flalign} \label{integrate-blocks} \faltcustom{2}{3} & = \inttwo \left(\prod_{j=1}^{n/m+1} d\uablock{2}{j}\right) \left(\prod_{j^{\prime}=1}^{n/m+1} d\ubblock{2}{j^{\prime}}\right) \nonumber \\ & \sumak{11}\sumak{12}\cdots\sumak{1\frac{n}{m}}\sumak{31}\sumak{32}\cdots\sumak{3\frac{n}{m}}\sumbk{11}\sumbk{12}\cdots\sumbk{1\frac{n}{m}} \nonumber\\ &\lambdaa{11}\lambdaa{12}\cdots \lambdaa{1\frac{n}{m}-1}\lambdaa{1\frac{n}{m}} \lambdaa{31}\lambdaa{32}\cdots \lambdaa{3\frac{n}{m}-1} \lambdaa{3\frac{n}{m}} \lambdab{11} \lambdab{12}\cdots \lambdab{1\frac{n}{m}-1} \lambdab{1\frac{n}{m}} \nonumber\\ &\times \delthree{1} \times \delsix{1}{2} \nonumber \\ &\times \delsix{2}{3}\times \cdots \nonumber \\ & \times\delsix{\frac{n}{m}-1}{\frac{n}{m}}\nonumber \\ & \times \delthreeplus{\frac{n}{m}} \allowdisplaybreaks\nonumber \\ &= \sumak{11}\sumak{12}\cdots\sumak{1\frac{n}{m}}\sumak{31}\sumak{32}\cdots\sumak{3\frac{n}{m}}\sumbk{11}\sumbk{12}\cdots\sumbk{1\frac{n}{m}}\nonumber \\ &\inttwo d\uablock{2}{1} d\ubblock{2}{1}\sqrt{\lambdaa{11}\lambdaa{31}\lambdab{11}}\delthree{1} \nonumber \\ &\times \inttwo d\uablock{2}{2}d\ubblock{2}{2} \sqrt{\lambdaa{11}\lambdaa{31}\lambdab{11}} \sqrt{\lambdaa{12}\lambdaa{32}\lambdab{12}} \delsix{1}{2} \nonumber \\ &\times \inttwo d\uablock{2}{3}d\ubblock{2}{3}\sqrt{\lambdaa{12}\lambdaa{32}\lambdab{12}} \sqrt{\lambdaa{13}\lambdaa{33}\lambdab{13}}\delsix{2}{3} \nonumber \\ &\times \cdots \times \inttwo d\uablock{2}{\frac{n}{m}}d\ubblock{2}{\frac{n}{m}}\sqrt{\lambdaa{1\frac{n}{m}-1}\lambdaa{3\frac{n}{m}-1}\lambdab{1\frac{n}{m}-1}} \sqrt{\lambdaa{1\frac{n}{m}}\lambdaa{3\frac{n}{m}}\lambdab{1\frac{n}{m}}} \nonumber\\ &\qquad\qquad \times \delsix{\frac{n}{m}-1}{\frac{n}{m}} \nonumber\\ &\times\inttwo d\uablock{2}{\frac{n}{m}+1}d\ubblock{2}{\frac{n}{m}+1}
\sqrt{\lambdaa{1\frac{n}{m}}\lambdaa{3\frac{n}{m}}\lambdab{1\frac{n}{m}}} \nonumber\\ &\qquad\qquad\times \delthreeplus{\frac{n}{m}} \nonumber \\ &= \athreem^{\rm T} B(3, m)^{\frac{n}{m}-1} \athreem, \end{flalign} where we use the definitions (\ref{a3m}), (\ref{delta-3}), (\ref{b3m}), and (\ref{delta-six}). To exemplify the computation in the first equality of (\ref{integrate-blocks}), we show the case $n/m=2$ in the following. When $n/m=2$, the second frame potential can be computed as follows: \begin{flalign} \label{example-second-frame} \faltbase{2}{3}{n}{m} &= \inttwo dU_a(1,1) dU_a(1,2) dU_a(2,1) dU_a(2,2)dU_a(2,3)dU_a(3,1)dU_a(3,2)\nonumber\\ &\qquad\qquad\qquad dU_b(1,1) dU_b(1,2) dU_b(2,1)dU_b(2,2)dU_b(2,3)dU_b(3,1)dU_b(3,2)\nonumber\\ &\qquad\qquad\qquad|\langle0|U^{\dagger}_a(1,1)U^{\dagger}_a(1,2)U^{\dagger}_a(2,1)U^{\dagger}_a(2,2)U^{\dagger}_a(2,3)U^{\dagger}_a(3,1)U^{\dagger}_a(3,2) \nonumber \\ &\qquad\qquad\qquad\qquad U_b(3,1)U_b(3,2)U_b(2,1)U_b(2,2)U_b(2,3)U_b(1,1)U_b(1,2)\allowdisplaybreaks |0\rangle|^4 \nonumber\\ &=\inttwo dU_a(1,1) dU_a(1,2) dU_a(2,1) dU_a(2,2)dU_a(2,3)dU_a(3,1)dU_a(3,2)\nonumber\\ &\qquad\qquad\qquad dU_b(1,1) dU_b(1,2) dU_b(2,1)dU_b(2,2)dU_b(2,3)dU_b(3,1)dU_b(3,2)\nonumber\\ &\qquad\qquad\sum_{\bf i,j,k,l,p} \left(U_a^{\ast}(1, 1)_{i_1 0}^{i_2 0}U_a^{\ast}(1, 2)_{i_3 0}^{i_4 0}U_a^{\ast}(2, 1)_{j_1 i_1}U_a^{\ast}(2, 2)_{j_2 i_2}^{j_3 i_3}U_a^{\ast}(2, 3)_{j_4 i_4} U_a^{\ast}(3, 1)_{k_1 j_1}^{k_2 j_2} U_a^{\ast}(3, 2)_{k_3 j_3}^{k_4 j_4}\right.\nonumber\\ &\qquad\qquad\qquad\left.
U_b(3, 1)_{k_1 l_1}^{k_2 l_2}U_b(3, 2)_{k_3 l_3}^{k_4 l_4}U_b(2, 1)_{l_1 p_1}U_b(2, 2)_{l_2 p_2}^{l_3 p_3}U_b(2, 3)_{l_4 p_4}U_b(1, 1)_{p_1 0}^{p_2 0}U_b(1, 2)_{p_3 0}^{p_4 0}\right)\times \nonumber\\ &\qquad\qquad\sum_{\bf q,r,s,t,u} \left(U_b^{\ast}(1, 1)_{q_1 0}^{q_2 0}U_b^{\ast}(1, 2)_{q_3 0}^{q_4 0}U_b^{\ast}(2, 1)_{r_1 q_1}U_b^{\ast}(2, 2)_{r_2 q_2}^{r_3 q_3}U_b^{\ast}(2, 3)_{r_4 q_4} U_b^{\ast}(3, 1)_{s_1 r_1}^{s_2 r_2} U_b^{\ast}(3, 2)_{s_3 r_3}^{s_4 r_4}\right.\nonumber\\ &\qquad\qquad\qquad\left. U_a(3, 1)_{s_1 t_1}^{s_2 t_2}U_a(3, 2)_{s_3 t_3}^{s_4 t_4}U_a(2, 1)_{t_1 u_1}U_a(2, 2)_{t_2 u_2}^{t_3 u_3}U_a(2, 3)_{t_4 u_4} U_a(1, 1)_{u_1 0}^{u_2 0}U_a(1, 2)_{u_3 0}^{u_4 0}\right)\times\nonumber\\ &\qquad\qquad\sum_{\bf i^{\prime},j^{\prime},k^{\prime},l^{\prime},p^{\prime}} \left(U_a^{\ast}(1, 1)_{i^{\prime}_1 0}^{i^{\prime}_2 0}U_a^{\ast}(1, 2)_{i^{\prime}_3 0}^{i^{\prime}_4 0}U_a^{\ast}(2, 1)_{j^{\prime}_1 i^{\prime}_1}U_a^{\ast}(2, 2)_{j^{\prime}_2 i^{\prime}_2}^{j^{\prime}_3 i^{\prime}_3}U_a^{\ast}(2, 3)_{j^{\prime}_4 i^{\prime}_4} U_a^{\ast}(3, 1)_{k^{\prime}_1 j^{\prime}_1}^{k^{\prime}_2 j^{\prime}_2} U_a^{\ast}(3, 2)_{k^{\prime}_3 j^{\prime}_3}^{k^{\prime}_4 j^{\prime}_4}\right. \nonumber\\ &\qquad\qquad\qquad\left. 
U_b(3, 1)_{k^{\prime}_1 l^{\prime}_1}^{k^{\prime}_2 l^{\prime}_2}U_b(3, 2)_{k^{\prime}_3 l^{\prime}_3}^{k^{\prime}_4 l^{\prime}_4}U_b(2, 1)_{l^{\prime}_1 p^{\prime}_1}U_b(2, 2)_{l^{\prime}_2 p^{\prime}_2}^{l^{\prime}_3 p^{\prime}_3} U_b(2, 3)_{l^{\prime}_4 p^{\prime}_4} U_b(1, 1)_{p^{\prime}_1 0}^{p^{\prime}_2 0}U_b(1, 2)_{p^{\prime}_3 0}^{p^{\prime}_4 0}\right)\times \nonumber\\ &\qquad\qquad\sum_{\bf q^{\prime},r^{\prime},s^{\prime},t^{\prime},u^{\prime}} \left(U_b^{\ast}(1, 1)_{q^{\prime}_1 0}^{q^{\prime}_2 0}U_b^{\ast}(1, 2)_{q^{\prime}_3 0}^{q^{\prime}_4 0}U_b^{\ast}(2, 1)_{r^{\prime}_1 q^{\prime}_1}U_b^{\ast}(2, 2)_{r^{\prime}_2 q^{\prime}_2}^{r^{\prime}_3 q^{\prime}_3}U_b^{\ast}(2, 3)_{r^{\prime}_4 q^{\prime}_4} U_b^{\ast}(3, 1)_{s^{\prime}_1 r^{\prime}_1}^{s^{\prime}_2 r^{\prime}_2} U_b^{\ast}(3, 2)_{s^{\prime}_3 r^{\prime}_3}^{s^{\prime}_4 r^{\prime}_4}\right.\nonumber\\ &\qquad\qquad\qquad\left. U_a(3, 1)_{s^{\prime}_1 t^{\prime}_1}^{s^{\prime}_2 t^{\prime}_2} U_a(3, 2)_{s^{\prime}_3 t^{\prime}_3}^{s^{\prime}_4 t^{\prime}_4} U_a(2, 1)_{t^{\prime}_1 u^{\prime}_1} U_a(2, 2)_{t^{\prime}_2 u^{\prime}_2}^{t^{\prime}_3 u^{\prime}_3} U_a(2, 3)_{t^{\prime}_4 u^{\prime}_4} U_a(1, 1)_{u^{\prime}_1 0}^{u^{\prime}_2 0}U_a(1, 2)_{u^{\prime}_3 0}^{u^{\prime}_4 0}\right) \end{flalign} where the bold symbols under the summation signs denote multi-indices, e.g., ${\bf i}=(i_1,i_2,i_3,i_4)$.
For the integrals $U_a(1,1), U_b(1,1), U_a(1, 2), U_b(1, 2)$, \begin{flalign} \label{int1} \inttwo dU_a(1, 1) U_a(1, 1)_{u_1 0}^{u_2 0} U_a(1, 1)_{u^{\prime}_1 0}^{u^{\prime}_2 0} U_a^{\ast}(1, 1)_{i_1 0}^{i_2 0} U_a^{\ast}(1, 1)_{i^{\prime}_1 0}^{i^{\prime}_2 0} &= \sum_{k_{11}^a=1}^4 \lambdaa{11}\Delta^{k_{11}^a}_{u_10u^{\prime}_10i_1 0i^{\prime}_10} \Delta^{k_{11}^a}_{u_20u^{\prime}_20i_2 0i^{\prime}_20}, \\ \label{int2} \inttwo dU_b(1, 1) U_b(1, 1)_{p_1 0}^{p_2 0} U_b(1, 1)_{p^{\prime}_1 0}^{p^{\prime}_2 0} U_b^{\ast}(1, 1)_{q_1 0}^{q_2 0} U_b^{\ast}(1, 1)_{q^{\prime}_1 0}^{q^{\prime}_2 0} &= \sum_{k_{11}^b=1}^4 \lambdab{11}\Delta^{k_{11}^b}_{p_10p^{\prime}_10q_1 0q^{\prime}_10} \Delta^{k_{11}^b}_{p_20p^{\prime}_20q_2 0q^{\prime}_20}, \allowdisplaybreaks\\ \label{int3} \inttwo dU_a(1, 2) U_a(1, 2)_{u_3 0}^{u_4 0} U_a(1, 2)_{u^{\prime}_3 0}^{u^{\prime}_4 0} U_a^{\ast}(1, 2)_{i_3 0}^{i_4 0} U_a^{\ast}(1, 2)_{i^{\prime}_3 0}^{i^{\prime}_4 0} &= \sum_{k_{12}^a=1}^4 \lambdaa{12}\Delta^{k_{12}^a}_{u_30u^{\prime}_30i_3 0i^{\prime}_30} \Delta^{k_{12}^a}_{u_40u^{\prime}_40i_4 0i^{\prime}_40}, \\ \label{int4} \inttwo dU_b(1, 2) U_b(1, 2)_{p_3 0}^{p_4 0} U_b(1, 2)_{p^{\prime}_3 0}^{p^{\prime}_4 0} U_b^{\ast}(1, 2)_{q_3 0}^{q_4 0} U_b^{\ast}(1, 2)_{q^{\prime}_3 0}^{q^{\prime}_4 0} &= \sum_{k_{12}^b=1}^4 \lambdab{12}\Delta^{k_{12}^b}_{p_30p^{\prime}_30q_3 0q^{\prime}_30} \Delta^{k_{12}^b}_{p_40p^{\prime}_40q_4 0q^{\prime}_40} \end{flalign} hold. 
For the integrals $U_a(3, 1), U_b(3, 1), U_a(3, 2), U_b(3, 2)$, \begin{flalign} \label{int5} &\inttwo dU_a(3,1)dU_b(3,1) \nonumber\\ &\qquad\sum_{\substack{k_1,k_2\\s_1,s_2\\}} \sum_{\substack{k^{\prime}_1,k^{\prime}_2\\s^{\prime}_1,s^{\prime}_2\\}} U^{\ast}_a(3, 1)^{k_2j_2}_{k_1 j_1} U_b(3, 1)_{k_1l_1}^{k_2l_2} U^{\ast}_b(3, 1)^{s_2r_2}_{s_1 r_1} U_a(3, 1)_{s_1t_1}^{s_2t_2} U^{\ast}_a(3, 1)^{k_2^{\prime}j_2^{\prime}}_{k_1^{\prime} j_1^{\prime}} U_b(3, 1)_{k_1^{\prime}l_1^{\prime}}^{k_2^{\prime}l_2^{\prime}} U^{\ast}_b(3, 1)^{s_2^{\prime}r_2^{\prime}}_{s_1^{\prime} r_1^{\prime}} U_a(3, 1)_{s_1^{\prime}t_1^{\prime}}^{s_2^{\prime}t_2^{\prime}} \nonumber\\ &= \sumak{31}\lambdaa{31} \Delta^{k_{31}^a}_{j_1l_1j_1^{\prime}l_1^{\prime}t_1r_1t_1^{\prime}r_1^{\prime}} \Delta^{k_{31}^a}_{j_2l_2j_2^{\prime}l_2^{\prime}t_2r_2t_2^{\prime}r_2^{\prime}} \\ &\inttwo dU_a(3,2)dU_b(3,2) \nonumber\\ &\qquad\sum_{\substack{k_3,k_4\\s_3,s_4\\}} \sum_{\substack{k^{\prime}_3,k^{\prime}_4\\s^{\prime}_3,s^{\prime}_4\\}} U^{\ast}_a(3, 2)^{k_4j_4}_{k_3 j_3} U_b(3, 2)_{k_3l_3}^{k_4l_4} U^{\ast}_b(3, 2)^{s_4r_4}_{s_3 r_3} U_a(3, 2)_{s_3t_3}^{s_4t_4} U^{\ast}_a(3, 2)^{k_4^{\prime}j_4^{\prime}}_{k_3^{\prime} j_3^{\prime}} U_b(3, 2)_{k_3^{\prime}l_3^{\prime}}^{k_4^{\prime}l_4^{\prime}} U^{\ast}_b(3, 2)^{s_4^{\prime}r_4^{\prime}}_{s_3^{\prime} r_3^{\prime}} U_a(3, 2)_{s_3^{\prime}t_3^{\prime}}^{s_4^{\prime}t_4^{\prime}} \nonumber\\ &= \sumak{32}\lambdaa{32} \Delta^{k_{32}^a}_{j_3l_3j_3^{\prime}l_3^{\prime}t_3r_3t_3^{\prime}r_3^{\prime}} \Delta^{k_{32}^a}_{j_4l_4j_4^{\prime}l_4^{\prime}t_4r_4t_4^{\prime}r_4^{\prime}} \label{int6} \end{flalign} hold.
Substituting (\ref{int1}), (\ref{int2}), (\ref{int3}), (\ref{int4}), (\ref{int5}), and (\ref{int6}) into (\ref{example-second-frame}), we get \begin{flalign} \faltbase{2}{3}{n}{m} &= \sumak{11}\sumak{31}\sumbk{11}\sumak{12}\sumak{32}\sumbk{12}\lambdaa{11}\lambdaa{31}\lambdab{11}\lambdaa{12}\lambdaa{32}\lambdab{12} \nonumber\\ & \inttwo dU_a(2,1)dU_b(2,1) \left[ \sum_{\substack{u_1 u^{\prime}_1 i_1 i^{\prime}_1\\ j_1 j^{\prime}_1 l_1 l^{\prime}_1}} \sum_{\substack{r_1 r^{\prime}_1 t_1 t^{\prime}_1\\ p_1 p^{\prime}_1 q_1 q^{\prime}_1}} \Delta^{k_{11}^a}_{u_10u^{\prime}_10i_10i^{\prime}_10} \Delta^{k_{31}^a}_{j_1l_1 j^{\prime}_1 l^{\prime}_1 t_1 r_1 t^{\prime}_1 r^{\prime}_1} \Delta^{k_{11}^b}_{p_10p^{\prime}_10q_1 0q^{\prime}_10} \right. \\ &\qquad\left. U_a(2, 1)_{t_1 u_1}U_a(2, 1)_{t^{\prime}_1 u^{\prime}_1} U_a^{\ast}(2, 1)_{j_1 i_1}U_a^{\ast}(2, 1)_{j^{\prime}_1 i^{\prime}_1} U_b(2, 1)_{l_1 p_1}U_b(2, 1)_{l^{\prime}_1 p^{\prime}_1} U_b^{\ast}(2, 1)_{r_1 q_1}U_b^{\ast}(2, 1)_{r^{\prime}_1 q^{\prime}_1}\right] \times \nonumber\\ &\inttwo dU_a(2,2)dU_b(2,2) \left[ \sum_{\substack{u_2 u^{\prime}_2 i_2 i^{\prime}_2\\ j_2 j^{\prime}_2 l_2 l^{\prime}_2}} \sum_{\substack{r_2 r^{\prime}_2 t_2 t^{\prime}_2\\ p_2 p^{\prime}_2 q_2 q^{\prime}_2}} \sum_{\substack{u_3 u^{\prime}_3 i_3 i^{\prime}_3\\ j_3 j^{\prime}_3 l_3 l^{\prime}_3}} \sum_{\substack{r_3 r^{\prime}_3 t_3 t^{\prime}_3\\ p_3 p^{\prime}_3 q_3 q^{\prime}_3}} \Delta^{k_{11}^a}_{u_20u^{\prime}_20i_20i^{\prime}_20} \Delta^{k_{31}^a}_{j_2l_2 j^{\prime}_2 l^{\prime}_2 t_2 r_2 t^{\prime}_2 r^{\prime}_2} \Delta^{k_{11}^b}_{p_20p^{\prime}_20q_2 0q^{\prime}_20} \right.
\nonumber\\ &\qquad\Delta^{k_{12}^a}_{u_30u^{\prime}_30i_30i^{\prime}_30} \Delta^{k_{32}^a}_{j_3l_3 j^{\prime}_3 l^{\prime}_3 t_3 r_3 t^{\prime}_3 r^{\prime}_3} \Delta^{k_{12}^b}_{p_30p^{\prime}_30q_3 0q^{\prime}_30} U_a(2, 2)_{t_2 u_2}^{t_3 u_3}U_a(2, 2)_{t^{\prime}_2 u^{\prime}_2}^{t^{\prime}_3 u^{\prime}_3} U_a^{\ast}(2, 2)_{j_2 i_2}^{j_3 i_3} U_a^{\ast}(2, 2)_{j^{\prime}_2 i^{\prime}_2}^{j^{\prime}_3 i^{\prime}_3} \nonumber\\ &\qquad U_b(2, 2)_{l_2 p_2}^{l_3 p_3} U_b(2, 2)_{l^{\prime}_2 p^{\prime}_2}^{l^{\prime}_3 p^{\prime}_3} U_b^{\ast}(2, 2)_{r_2 q_2}^{r_3 q_3} U_b^{\ast}(2, 2)_{r^{\prime}_2 q^{\prime}_2}^{r^{\prime}_3 q^{\prime}_3} \left. \right]\times \nonumber\\ & \inttwo dU_a(2,3)dU_b(2,3) \left[ \sum_{\substack{u_4 u^{\prime}_4 i_4 i^{\prime}_4\\ j_4 j^{\prime}_4 l_4 l^{\prime}_4}} \sum_{\substack{r_4 r^{\prime}_4 t_4 t^{\prime}_4\\ p_4 p^{\prime}_4 q_4 q^{\prime}_4}} \Delta^{k_{12}^a}_{u_40u^{\prime}_40i_40i^{\prime}_40} \Delta^{k_{32}^a}_{j_4l_4 j^{\prime}_4 l^{\prime}_4 t_4 r_4 t^{\prime}_4 r^{\prime}_4} \Delta^{k_{12}^b}_{p_40p^{\prime}_40q_4 0q^{\prime}_40} \right. \\ &\qquad\left. U_a(2, 3)_{t_4 u_4}U_a(2, 3)_{t^{\prime}_4 u^{\prime}_4} U_a^{\ast}(2, 3)_{j_4 i_4}U_a^{\ast}(2, 3)_{j^{\prime}_4 i^{\prime}_4} U_b(2, 3)_{l_4 p_4}U_b(2, 3)_{l^{\prime}_4 p^{\prime}_4} U_b^{\ast}(2, 3)_{r_4 q_4}U_b^{\ast}(2, 3)_{r^{\prime}_4 q^{\prime}_4}\right] \allowdisplaybreaks\nonumber \\ &= \sumak{11}\sumak{31}\sumbk{11}\sumak{12}\sumak{32}\sumbk{12}\lambdaa{11}\lambdaa{31}\lambdab{11}\lambdaa{12}\lambdaa{32}\lambdab{12} \nonumber\\ &\qquad\inttwo dU_a(2,1)dU_b(2,1)\delthree{1} \times \nonumber\\ &\qquad\inttwo dU_a(2,2)dU_b(2,2) \delsix{1}{2} \times\nonumber\\ &\qquad\inttwo dU_a(2,3)dU_b(2,3)\delthreebase{2}{3}, \end{flalign} which is the right hand side of the first equality in (\ref{integrate-blocks}) when $n/m = 2$.
\subsection{Proof of Theorem \ref{second-moment-alt}} As in Theorem \ref{second-moment-2}, we only show the inequality for $\faltcustom{2}{3}$ here. The inequality for $\faltcustom{2}{2}$ can be shown in the same manner. To show the final inequality of the theorem, we expand $\athreem$ and $B(3, m)$ as the sum of a vector/matrix whose components are $O(1)$ and a vector/matrix whose components are $O(1/2^{m/2})$. Evaluating Eq.~(\ref{a3m})\footnote{The evaluation procedure is straightforward but computationally tedious. Thus, instead of performing the calculation by hand, we implemented algorithms that evaluate (\ref{a3m}) and (\ref{b3m}) for arbitrary $m$ and derived the expansion formulas below by computer.}, we can expand $\athreem$ as \begin{flalign} \athreem = \frac{1}{2^{m}}\left(\vzero + \frac{1.2}{2^{m/2}}\vone \right), \end{flalign} where \begin{flalign} \vzeroi &= \begin{cases} 1 & \quad i=1 \quad(k_a, k_b, k_c=1), \\ 1 & \quad i=22 \quad(k_a,k_b,k_c=2), \\ 0 & \quad \mbox{otherwise}, \end{cases} \qquad \qquad |\vonei | < 1 . \end{flalign} Also, evaluating Eq.~(\ref{b3m}), we can expand $B(3, m)$ as \begin{flalign} B(3, m) = \frac{1}{2^{2m}}\left(D + \frac{1.3}{2^{m/2-6}}X\right), \end{flalign} where \begin{flalign} D_{ij} &= \begin{cases} 1 & \quad i=1, j=1 \quad(k_a, k_b, k_c, k_d, k_e, k_f=1), \\ 1 & \quad i=22, j=22 \quad(k_a, k_b, k_c, k_d, k_e, k_f=2), \\ 0 & \quad \mbox{otherwise}, \end{cases} \qquad \qquad |X_{ij}| < \frac{1}{64}. \end{flalign} With $\alpha = n/m$, let $\gxd{k}{\alpha}$ be the set of matrices of the form $\prod_{i=1}^{\alpha}R_i$, where each $R_i$ is $D$ or $X$ and the number of $X$s among the $R_i$ is $k$. For example, $XDXX \in \gxd{3}{4}$ and $XDDD \in \gxd{1}{4}$.
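The binomial coefficients appearing in the expansion below arise because the number of length-$\ell$ products of $D$ and $X$ containing exactly $k$ factors $X$ is $\binom{\ell}{k}$. A quick enumeration check of this count (with a small placeholder length):

```python
from itertools import product
from math import comb

alpha = 4  # placeholder product length
for k in range(alpha + 1):
    # sequences (R_1, ..., R_alpha) with each R_i in {D, X} and exactly k X's
    count = sum(1 for seq in product("DX", repeat=alpha) if seq.count("X") == k)
    assert count == comb(alpha, k)
# e.g. XDXX and XDDD are the cases k = 3 and k = 1, respectively
```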
Then, $\faltcustom{2}{3}$ is expanded as \begin{flalign} \faltcustom{2}{3} &= \frac{1}{2^{2m}}\left(\vzero^T + \frac{1.2}{2^{m/2}}\vone^T \right) \frac{1}{2^{2n-2m}}\left(D+\frac{1.3}{2^{m/2-6}}X\right)^{\alpha-1} \left(\vzero + \frac{1.2}{2^{m/2}}\vone \right) \nonumber\\ &= \frac{1}{2^{2n}} \left(\vzero^T D^{\alpha-1}\vzero + \left(\frac{2.4}{2^{m/2}}\right)\vone^T D^{\alpha-1}\vzero + \left(\frac{1.2^2}{2^{m}}\right)\vone^T D^{\alpha-1}\vone \right) \nonumber\\ &+ \frac{1}{2^{2n}}\sum_{k=1}^{\alpha - 1}\left(\frac{1.3}{2^{m/2-6}}\right)^k \sum_{i=1}^{{}_{\alpha - 1} \mathrm{C} _k} \left(\vzero^T g^k_{\alpha i} \vzero + \left(\frac{2.4}{2^{m/2}}\right)\vone^T g^k_{\alpha i} \vzero +\left(\frac{1.2^2}{2^{m}}\right)\vone^T g^k_{\alpha i} \vone \right), \end{flalign} where $g^k_{\alpha i}$ ($i=1,2,\dots,{}_{\alpha - 1} \mathrm{C} _k$) is an element of $\gxd{k}{\alpha}$. For an arbitrary $g\in\gxddef$ with $k\geq 1$ and $r,s\in\{0,1\}$, \begin{flalign} \label{g-inequality} \vvec{r}^{\rm T} g \vvec{s} &< (1,1,\cdots,1) \begin{pmatrix} 1 & 0 & \cdots & 0\\ 0 & 1 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}^{\alpha - k} \begin{pmatrix} \frac{1}{64} & \frac{1}{64} & \cdots & \frac{1}{64}\\ \frac{1}{64} & \frac{1}{64} & \cdots & \frac{1}{64}\\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{64} & \frac{1}{64} & \cdots & \frac{1}{64} \end{pmatrix}^{k} \begin{pmatrix} 1\\ 1\\ \vdots\\ 1 \end{pmatrix} = 64 \end{flalign} holds. For $D$ and $\vvec{0}, \vvec{1}$, \begin{flalign} \label{vector-inequality} \vvec{1}^{\rm T} D^{\frac{n}{m}-1} \vvec{s} &< \sum^{64}_{i=1}\left(1 \cdot (\delta_{i1} + \delta_{i22}) \cdot 1\right) = 2 \\ \label{vector-inequality-2} \vvec{0}^{\rm T} D^{\frac{n}{m}-1} \vvec{0} &= 2 \end{flalign} hold, where $s=0,1$.
By using the inequalities (\ref{g-inequality}), (\ref{vector-inequality}), and (\ref{vector-inequality-2}), the upper bound for $\faltcustom{2}{3}$ is derived as follows: \begin{flalign} \label{proof-f-inequality} \faltcustom{2}{3} &< \frac{1}{2^{2n}} \left(2 + 2\left(\frac{2.4}{2^{m/2}}\right) + 2\left(\frac{1.2^2}{2^{m}}\right) + 64\sum_{k=1}^{\alpha-1} {}_{\alpha-1} \mathrm{C} _k\left(\frac{1.3}{2^{m/2-6}}\right)^k 1^k \left(1+\frac{1.2}{2^{m/2}} \right)^2 \right) \nonumber\\ &= \frac{1}{2^{2n-1}}\left(1+\frac{1.2}{2^{m/2}}\right)^2\left(1+ 32\left(\left(1 + \frac{1.3}{2^{m/2-6}}\right)^{\alpha-1} -1\right)\right)\nonumber\\ &= \left(1+\frac{1}{2^{n}}\right)\left(1+\frac{1.2}{2^{m/2}}\right)^2\left(1+ 32\left(\left(1 + \frac{83.2}{2^{m/2}}\right)^{\alpha-1} -1\right)\right)\fhaarn{2}. \end{flalign} \subsection{Proof of Corollary \ref{large-n}} As in the above theorems, we only show the inequality for $\faltcustom{2}{3}$ here. When $m = 2a \log_2 n$, \begin{flalign} \label{proof-exponential} \left(1 + \frac{83.2}{2^{m/2}}\right)^{\alpha-1} < \left[\left(1 + \frac{83.2}{2^{m/2}}\right)^{\frac{2^{m/2}}{83.2}}\right]^{\frac{83.2n}{2^{m/2}m}} < e^{\frac{83.2n}{2^{m/2}m}} = e^{\frac{41.6}{a n^{a-1}\log_2 n}}. \end{flalign} If $41.6/(a n^{a-1}\log_2 n) < 1$, then \begin{flalign} \label{proof-exponential-2} e^{\frac{41.6}{a n^{a-1}\log_2 n}} < 1+(e-1)\frac{41.6}{a n^{a-1}\log_2 n}. \end{flalign} Substituting Eqs.~(\ref{proof-exponential}), (\ref{proof-exponential-2}), and $m = 2a \log_2 n$ into (\ref{proof-f-inequality}), we get \begin{flalign} \faltcustom{2}{3} &< \left(1+\frac{1}{2^{n}}\right)\left(1+\frac{1.2}{n^{a}}\right)^2\left(1+ \frac{2288}{an^{a-1}\log_2 n}\right)\fhaarn{2}. \end{flalign} \end{document}
\begin{document} \section{Appendix of paper 3181 "An Asynchronous Decentralized Algorithm for Wasserstein Barycenter Problem"} \begin{theorem} Given a dual variable $\eta$ and its gradient $x = x^*(\sqrt{W}\eta)$, the distance between $x$ and the primal optimum is bounded by $\|x - x^*\|^2 \le \frac{2}{\mu}(\varphi(\eta) - \varphi(\eta^*))$ and the consensus distance is bounded by $\|\sqrt{W} x\|^2 \le \frac{\lambda_{max}(W)}{\mu}(\varphi(\eta)-\varphi(\eta^*))$, where $x^*$ and $\eta^*$ denote the optimal solutions to the primal and the dual problem, respectively. \end{theorem} \begin{proof} Since $x=x^*(\sqrt{W}\eta)$, we have \begin{align} &\varphi(\eta) = \langle \sqrt{W}\eta,x\rangle - F(x),\label{3}\\ &\varphi(\eta^*) = \langle \sqrt{W}\eta^*,x^*\rangle - F(x^*) = -F(x^*)\label{4}, \end{align} where we use the fact that $\sqrt{W}x^* =0$. Subtracting (\ref{4}) from (\ref{3}), we obtain \begin{equation}\label{5} \varphi(\eta) - \varphi(\eta^*) = \langle \sqrt{W}\eta,x-x^*\rangle + F(x^*) -F(x), \end{equation} where we use again $\sqrt{W}x^* =0$. Also, by the strong convexity of $F(x)$, we can write \begin{align}\label{6} \|x - x^*\|^2 &\le \frac{2}{\mu}(F(x^*) -F(x) + \langle \nabla F(x),x - x^*\rangle) \nonumber \\ &= \frac{2}{\mu}(F(x^*) -F(x) + \langle\sqrt{W}\eta ,x - x^*\rangle), \end{align} where the equality follows from the fact that $\nabla F(x) = \sqrt{W}\eta$, since $x=x^*(\sqrt{W}\eta)$. Combining (\ref{5}) and (\ref{6}) leads to \begin{equation*} \|x-x^*\|^2 \le \frac{2}{\mu}(\varphi(\eta) - \varphi(\eta^*)). \end{equation*} Also, by the $L$-smoothness of $\varphi(\eta)$ with $L=\frac{\lambda_{max}(W)}{\mu}$, we have \begin{equation*} \|\nabla\varphi(\eta) -\nabla\varphi(\eta^*)\|^2 \le L(\varphi(\eta) -\varphi(\eta^*) -\langle \nabla \varphi(\eta^*),\eta - \eta^*\rangle).
\end{equation*} Substituting $\nabla \varphi(\eta) = \sqrt{W} x$ and $\nabla \varphi(\eta^*) =0$, we can conclude $$\|\sqrt{W} x\|^2 \le \frac{\lambda_{max}(W)}{\mu}(\varphi(\eta)-\varphi(\eta^*)).$$ \end{proof} In the following lemma, we summarize the properties of the sequence $\theta_k$ in ASBCDS. \begin{lemma}\label{lemma:theta} Assume that $\theta_1 = 1/m$ and $\theta_{k+1} = \frac{\sqrt{\theta^4_k+4 \theta_k^2}-\theta_k^2}{2}$ for $k \ge 1$; then $\theta_k$ satisfies $\frac{1}{k-1+2m}\le \theta_k \le \frac{2}{k-1+2m}$ and $\frac{1-\theta_{k+1}}{\theta_{k+1}^2} = \frac{1}{\theta_k^2}$. \end{lemma} \begin{proof} If we denote $h(a) = \frac{\sqrt{a^4+4a^2}-a^2}{2}$ as a function of $a$, then we have $h^{\prime}(a) > 0$ for $a\in (0,1)$. Thus, if $\frac{1}{k-1+2m}\le \theta_k \le \frac{2}{k-1+2m}$, then $h(\frac{1}{k-1+2m})\le \theta_{k+1} \le h(\frac{2}{k-1+2m})$. It can be verified that $h(\frac{1}{k-1+2m})\ge\frac{1}{k+2m}$ and $h(\frac{2}{k-1+2m})\le\frac{2}{k+2m}$. Since $\theta_1= 1/m$ satisfies the inequality, the first part of the lemma follows by induction. As for the second part, treating $\theta_{k+1}$ as the unknown in $\frac{1-\theta_{k+1}}{\theta_{k+1}^2} = \frac{1}{\theta_k^2}$ and solving the resulting quadratic equation yields $\theta_{k+1} = \frac{\sqrt{\theta^4_k+4 \theta_k^2}-\theta_k^2}{2}$. \end{proof} \begin{theorem} Under the assumption that $\varphi(\eta)$ is $L$-smooth, the stochastic gradient estimation is bounded as $\mathbb{E} \| \nabla \varphi(\omega_{j(k+1)})- \nabla \phi(\omega_{j(k+1)},\xi_{k+1})\|^2 \le \frac{mL\theta_{k+1}\epsilon}{8}$, and the delay $\tau \le m$, then for Algorithm ASBCDS, if the step size $\gamma$ satisfies $3 L \gamma + 12 L \gamma(\frac{\tau^2+\tau }{m}+2\tau)^2 \le 1$, we have $\varphi(\eta_k) -\varphi(\eta^*) \le \epsilon$ after $K = \frac{\sqrt{2m^2 (\varphi(\eta_0) - \varphi(\lambda^*)+\|\zeta_0-\lambda^*\|^2/(2\gamma))}}{\sqrt{\epsilon}} =\mathcal{O}(\frac{m\tau\sqrt{L}}{\sqrt{\epsilon}})$ iterations.
Furthermore, if we assume $\mathbb{E} \| \nabla \varphi(\lambda)- \nabla \phi(\lambda,\xi)\|^2 \le\sigma^2$ and sample $M_k$ mini-batches in the $k$-th iteration to satisfy the assumption on $\nabla \phi(\omega_{j(k+1)},\xi_{k+1})$, then the total stochastic oracle access is bounded by $K +1+ \frac{16\sigma^2 (K+2m)^2}{Lm\epsilon}=\mathcal{O}(\frac{m\tau \sqrt{L}}{\sqrt{\epsilon}}+\frac{m\tau^2\sigma^2}{\epsilon^2})$. \end{theorem} \begin{proof} {\bf Step 1:} According to the update rule in ASBCDS, we have \begin{displaymath} \theta_{k+1} \zeta_k =\lambda_{k+1}- (1-\theta_{k+1}) \eta_k , \end{displaymath} and \begin{align*} \eta_{k+1} &= \lambda_{k+1} + m \theta_{k+1}(\zeta_{k+1}-\zeta_k) \\ &=(1-\theta_{k+1})\eta_{k} + m \theta_{k+1}\zeta_{k+1} -(m-1) \theta_{k+1}\zeta_k. \end{align*} Combining these together and eliminating $\zeta_k$, we have \begin{align*} m\theta_{k+1} \zeta_{k+1} = &\eta_{k+1} -(1-\theta_{k+1})\eta_k +(m-1)\lambda_{k+1} \\& -(m-1)(1-\theta_{k+1})\eta_k. \end{align*} Then we have for $k \ge 1$ \begin{displaymath} \frac{\lambda_{k+1} -(1-\theta_{k+1})\eta_k}{\theta_{k+1}} = \frac{\eta_k -m(1-\theta_k)\eta_{k-1}+(m-1)\lambda_{k}}{m\theta_k}. \end{displaymath} Thus, we have for each block $p \in \{1,\cdots,m\}$, \begin{align}\label{lamda-eta} \lambda_{k+1}^{[p]} =& \eta_k^{[p]} -\theta_{k+1}\eta_k^{[p]} + \frac{\theta_{k+1}\eta_k^{[p]} }{m\theta_k} -\frac{\theta_{k+1}(1-\theta_k)\eta_{k-1}^{[p]} }{\theta_k} \nonumber\\ &+\frac{(m-1)\theta_{k+1}\lambda_k^{[p]} }{m\theta_k} \nonumber\\ =& \eta_k^{[p]} +\frac{\theta_{k+1}}{\theta_k}(\frac{1}{m} - \theta_k) (\eta_k^{[p]} -\lambda_k^{[p]} ) \nonumber\\ &+\frac{\theta_{k+1}(1-\theta_k)}{\theta_k}(\lambda_{k}^{[p]} -\eta_{k-1}^{[p]} ).
\end{align} By setting $d_k =\frac{\theta_{k+1}(1-\theta_k)}{\theta_k}$, $e_k = \frac{\theta_{k+1}}{\theta_k}(\frac{1}{m} - \theta_k)$, and $b(h,k) = \prod_{i=h}^k d_i$, we have for $k \ge jp(k+1) \ge 1$ \begin{align*} \lambda_{k+1}^{[p]} =& \eta_k^{[p]} + e_k(\eta_k^{[p]}-\lambda_k^{[p]})+d_k(\lambda_k^{[p]}-\eta_{k-1}^{[p]}) \\ =& \lambda_k^{[p]} + (e_k+1)(\eta_k^{[p]}-\lambda_k^{[p]})+d_k e_{k-1}(\eta_{k-1}^{[p]}-\lambda_{k-1}^{[p]}) \\ &+d_kd_{k-1}(\lambda_{k-1}^{[p]}-\eta_{k-2}^{[p]}) \\ =& \lambda_k^{[p]} + (e_k+1)(\eta_k^{[p]}-\lambda_k^{[p]}) \\ &+\sum_{i=jp(k+1)}^{k-1}b(i+1,k)e_i (\eta_{i}^{[p]}-\lambda_{i}^{[p]}) \\ &+b(jp(k+1),k)(\lambda_{jp(k+1)}^{[p]}-\eta_{jp(k+1)-1}^{[p]}) \end{align*} Replacing $k$ by $l$ in the above identity and applying it recursively for $l = jp(k+1),\dots,k$, we have \begin{small} \begin{align*} \lambda_{k+1}^{[p]} =& \eta_{jp(k+1)}^{[p]} + \sum_{l=jp(k+1)}^k (e_l+1)(\eta_{l}^{[p]}-\lambda_l^{[p]}) \\ &+\sum_{l=jp(k+1)+1}^k\sum_{i=jp(k+1)}^{l-1}b(i+1,l)e_i (\eta_{i}^{[p]}-\lambda_{i}^{[p]}) \\ &+\sum_{l = jp(k+1)}^k b(jp(k+1),l)(\lambda_{jp(k+1)}^{[p]}-\eta_{jp(k+1)-1}^{[p]}) \\ =& \eta_{jp(k+1)}^{[p]} + \sum_{l=jp(k+1)}^k (e_l+1)(\eta_{l}^{[p]}-\lambda_l^{[p]}) \\ &+\sum_{i=jp(k+1)}^{k}(\sum_{l=i+1}^{k}b(i+1,l)e_i )(\eta_{i}^{[p]}-\lambda_{i}^{[p]}) \\ &+\sum_{l = jp(k+1)}^k b(jp(k+1),l)(\lambda_{jp(k+1)}^{[p]}-\eta_{jp(k+1)-1}^{[p]}) \end{align*} \end{small} By summing over all the blocks, we have \begin{align}\label{eqn:lambda-omega} \lambda_{k+1} -\omega_{j(k+1)} = & \sum_{p=1}^{m} \sum_{h = jp(k+1)}^k(1+e_h+e_h\sum_{i=h+1}^k b(h,i)) \nonumber\\ &(\eta_h^{[p]}-\lambda_h^{[p]}).
\end{align} Then we have \begin{small} \begin{align}\label{eqn:omeganormp} &\|\lambda_{k+1} -\omega_{j(k+1)} \|^2 \nonumber\\ = &\sum_{p=1}^{m} \|\sum_{h = jp(k+1)}^k(1+e_h+e_h\sum_{i=h+1}^k b(h,i))(\eta_h^{[p]}-\lambda_h^{[p]})\|^2 \nonumber\\ \le&\sum_{p=1}^{m} (\sum_{h = jp(k+1)}^k(1+e_h+e_h\sum_{i=h+1}^k b(h,i))) \nonumber\\ &\cdot\sum_{h = jp(k+1)}^k(1+e_h+e_h\sum_{i=h+1}^k b(h,i))\|\eta_h^{[p]}-\lambda_h^{[p]}\|^2 \nonumber\\ \le&\sum_{p=1}^{m} (\sum_{h = jp(k+1)}^k(1+e_h\sum_{i=1}^{k-h+1} 1)) \nonumber\\ &\cdot\sum_{h = jp(k+1)}^k(1+e_h\sum_{i=1}^{k-h+1} 1)\|\eta_h^{[p]}-\lambda_h^{[p]}\|^2 \nonumber\\ \le&\sum_{p=1}^{m} (\sum_{ii=1}^{k- jp(k+1)+1}(1+\frac{1}{m}\sum_{i=1}^{ii} 1)) \nonumber\\ &\cdot\sum_{ii = 1}^{k-jp(k+1)+1}(1+\frac{1}{m}\sum_{i=1}^{ii} 1)\|\eta_{k-ii+1}^{[p]}-\lambda_{k-ii+1}^{[p]}\|^2 \nonumber\\ \le&\sum_{p=1}^{m} (\sum_{ii=1}^{\min(k-\tau,\tau)}(1+\frac{1}{m}\sum_{i=1}^{ii} 1)) \nonumber\\ &\cdot\sum_{ii = 1}^{\tau}(1+\frac{1}{m}\sum_{i=1}^{ii} 1)\|\eta_{k-ii+1}^{[p]}-\lambda_{k-ii+1}^{[p]}\|^2 \nonumber\\ \le&\sum_{p=1}^{m} (\frac{\tau^2+\tau}{2m} + \tau) \sum_{i=1}^{\min(k-\tau,\tau)}(1+\frac{i}{m}) \|\eta_{k-i+1}^{[p]}-\lambda_{k-i+1}^{[p]}\|^2, \end{align} \end{small} where the first inequality is due to the Cauchy-Schwarz inequality; in the second inequality, we use $b(h,i) \le 1$; in the third inequality, we change variables to $ii = k-h+1$ and use $e_h \le \frac{1}{m}$; the fourth inequality follows from the fact that $k+1 - jp(k+1) \le \tau$.
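The first inequality in (\ref{eqn:omeganormp}) is the weighted Cauchy-Schwarz bound $(\sum_h c_h w_h)^2 \le (\sum_h c_h)\sum_h c_h w_h^2$ for nonnegative coefficients $c_h$; a numerical sanity check with random placeholder data:

```python
import random

random.seed(1)
H = 6
c = [random.random() for _ in range(H)]        # nonnegative coefficients c_h
w = [random.uniform(-1, 1) for _ in range(H)]  # scalar stand-ins for eta - lambda

lhs = sum(ci * wi for ci, wi in zip(c, w)) ** 2
rhs = sum(c) * sum(ci * wi * wi for ci, wi in zip(c, w))
assert lhs <= rhs + 1e-12
```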
Dividing both sides of (\ref{eqn:omeganormp}) by $\theta_{k+1}^2$ and summing from $k=0$ to $K$, we have \begin{align}\label{eqn:omeganorm} &\sum_{k=0}^K \frac{1}{\theta_{k+1}^2} \|\lambda_{k+1} - \omega_{j(k+1)}\|^2 \nonumber\\ \le &\sum_{p=1}^{m} (\frac{\tau^2+\tau}{2m} + \tau)\sum_{k=0}^K \sum_{i=1}^{\min(\tau,k-\tau)}\frac{1+\frac{i}{m}}{\theta_{k+1}^2} \|\eta_{k-i+1}^{[p]}-\lambda_{k-i+1}^{[p]}\|^2 \nonumber\\ \le& \sum_{p=1}^{m} (\frac{\tau^2+\tau}{2m} + \tau) \nonumber\\ &\cdot\sum_{k=0}^K \sum_{i=1}^{\min(\tau,k-\tau)}\frac{16(1+\frac{i}{m})}{\theta_{k-i+1}^2} \|\eta_{k-i+1}^{[p]}-\lambda_{k-i+1}^{[p]}\|^2 \nonumber\\ \le&4(\frac{\tau^2+\tau}{m} + 2\tau)^2\sum_{k=0}^K \frac{1}{\theta_{k+1}^2}\|\eta_{k+1}-\lambda_{k+1}\|^2 \end{align} where the second inequality follows from the fact that $\frac{1}{\theta_{k+1}^2} \le (k+2m)^2 \le 4(k-i+2m)^2 \le \frac{16}{\theta_{k+1-i}^2}$ according to Lemma \ref{lemma:theta} and $\tau \le m$, and the third is due to the fact that each $\frac{1}{\theta_{k+1}^2}\|\eta_{k+1}^{[p]}-\lambda_{k+1}^{[p]}\|^2$ appears at most $\tau$ times with coefficients ranging from $1 + \frac{1}{m}$ to $1+\frac{\tau}{m}$.
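The properties of Lemma \ref{lemma:theta} invoked here, namely $\frac{1}{k-1+2m}\le \theta_k \le \frac{2}{k-1+2m}$ and $\frac{1-\theta_{k+1}}{\theta_{k+1}^2}=\frac{1}{\theta_k^2}$, can be verified numerically from the recursion (with placeholder values of $m$ and the horizon):

```python
m, K = 5, 300  # placeholder block count and horizon
theta = [1.0 / m]  # theta_1
for _ in range(K):
    t = theta[-1]
    theta.append(((t**4 + 4 * t * t) ** 0.5 - t * t) / 2)

# Bounds 1/(k-1+2m) <= theta_k <= 2/(k-1+2m), with theta[k-1] being theta_k.
for k in range(1, K + 1):
    tk = theta[k - 1]
    assert 1.0 / (k - 1 + 2 * m) - 1e-12 <= tk <= 2.0 / (k - 1 + 2 * m) + 1e-12

# Identity (1 - theta_{k+1}) / theta_{k+1}^2 == 1 / theta_k^2.
for k in range(1, K):
    tk, tk1 = theta[k - 1], theta[k]
    assert abs((1 - tk1) / tk1**2 - 1 / tk**2) < 1e-6
```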
{\bf Step 2:} From the smoothness of $\varphi$, we have \begin{align}\label{eta-oneiter} &\varphi(\eta_{k+1}) \nonumber\\ & \le \varphi(\lambda_{k+1}) +\langle \nabla \varphi(\lambda_{k+1})^{[i_{k}]} ,\eta_{k+1}^{[i_{k}]}-\lambda_{k+1}^{[i_{k}]}\rangle + \frac{L}{2}\|\eta_{k+1}^{[i_{k}]}-\lambda_{k+1}^{[i_{k}]}\|^2 \nonumber\\ &\le\varphi(\lambda_{k+1}) +\frac{L}{2}\|\eta_{k+1}^{[i_{k}]}-\lambda_{k+1}^{[i_{k}]}\|^2 \nonumber\\ &+ \langle \nabla \varphi(\lambda_{k+1})^{[i_{k}]} -\nabla \varphi(\omega_{j(k+1)})^{[i_{k}]},\eta_{k+1}^{[i_{k}]}-\lambda_{k+1}^{[i_{k}]}\rangle \nonumber\\ & + \langle \nabla \varphi(\omega_{j(k+1)})^{[i_{k}]} - \nabla\phi(\omega_{j(k+1)},\xi_{k+1})^{[i_{k}]} ,\eta_{k+1}^{[i_{k}]}-\lambda_{k+1}^{[i_{k}]}\rangle \nonumber\\ &+ \langle\nabla \phi(\omega_{j(k+1)},\xi_{k+1}) ^{[i_{k}]},\eta_{k+1}^{[i_{k}]}-\lambda_{k+1}^{[i_{k}]}\rangle \end{align} For the last three terms of the r.h.s., we have \begin{align}\label{lh-1} &\langle \nabla \varphi(\lambda_{k+1})^{[i_k]} -\nabla \varphi(\omega_{j(k+1)})^{[i_k]} ,\eta_{k+1}^{[i_k]} -\lambda_{k+1}^{[i_k]} \rangle \nonumber\\ &\le\frac{\gamma L^2}{2D_1}\|\lambda_{k+1}-\omega_{j(k+1)}\|^2 + \frac{\gamma D_1 }{2}\|\frac{\lambda_{k+1} ^{[i_k]} -\eta_{k+1}^{[i_k]} }{\gamma}\|^2, \end{align} \begin{align}\label{lh-2} & \langle \nabla \varphi(\omega_{j(k+1)})^{[i_k]} - \nabla\phi(\omega_{j(k+1)},\xi_{k+1})^{[i_k]} ,\eta_{k+1}^{[i_k]} -\lambda_{k+1}^{[i_k]} \rangle \nonumber\\ &\le \frac{\gamma }{2D_2}\| \nabla\varphi(\omega_{j(k+1)})^{[i_k]} - \nabla \phi(\omega_{j(k+1)},\xi_{k+1})^{[i_k]} \|^2 \nonumber\\ &+ \frac{\gamma D_2 }{2}\|\frac{\lambda_{k+1}^{[i_k]} -\eta_{k+1}^{[i_k]}}{\gamma} \|^2, \end{align} and \begin{align}\label{lh-3} &\langle\nabla\phi(\omega_{j(k+1)},\xi_{k+1})^{[i_k]} ,\eta_{k+1}^{[i_k]} -\lambda_{k+1}^{[i_k]} \rangle \nonumber\\ &=\langle g_{k+1},\eta_{k+1}^{[i_k]}-\lambda_{k+1}^{[i_k]}\rangle \nonumber\\ &=-\frac{\|\eta_{k+1}^{[i_k]}-\lambda_{k+1}^{[i_k]}\|^2}{\gamma} \end{align}
By substituting (\ref{lh-1}), (\ref{lh-2}), and (\ref{lh-3}) into (\ref{eta-oneiter}), we have \begin{align} &\varphi(\eta_{k+1}) \nonumber\\ &\le\varphi(\lambda_{k+1}) -\gamma(1-\frac{L\gamma}{2})\|\frac{\eta_{k+1}^{[i_k]}-\lambda_{k+1}^{[i_k]}}{\gamma}\|^2 \nonumber\\ &+ \frac{\gamma }{2D_2}\| \nabla \varphi(\omega_{j(k+1)}) ^{[i_k]}- \nabla \phi(\omega_{j(k+1)},\xi_{k+1})^{[i_k]}\|^2 \nonumber\\ &+ \frac{\gamma (D_1+D_2 )}{2}\|\frac{\lambda_{k+1}^{[i_k]} -\eta_{k+1}^{[i_k]}}{\gamma}\|^2 \nonumber\\ &+ \frac{\gamma L^2}{2D_1}\|\lambda_{k+1}-\omega_{j(k+1)}\|^2. \nonumber \end{align} Taking expectation on $i_k$, we have \begin{align}\label{eqn:phi} &\mathbb{E}_{i_k}\varphi(\eta_{k+1}) \nonumber\\ &\le\varphi(\lambda_{k+1})+ \frac{\gamma L^2}{2D_1}\|\lambda_{k+1}-\omega_{j(k+1)}\|^2 \nonumber\\ &+ \frac{\gamma }{2mD_2}\| \nabla \varphi(\omega_{j(k+1)})- \nabla \phi(\omega_{j(k+1)},\xi_{k+1})\|^2 \nonumber\\ &-\gamma(1-\frac{L\gamma}{2}-\frac{D_1+D_2}{2})\mathbb{E}_{i_k}\|\frac{\eta_{k+1}^{[i_k]}-\lambda_{k+1}^{[i_k]}}{\gamma}\|^2 \end{align} {\bf Step 3:} We have \begin{align} &\quad \frac{m^2}{2\gamma}\|\theta_{k+1}\zeta_{k+1} - \theta_{k+1}\eta^*\|^2 \nonumber\\ &= \frac{m^2}{2\gamma}\|\theta_{k+1}\zeta_{k+1} - \theta_{k+1}\zeta_{k}+ \theta_{k+1}\zeta_{k} -\theta_{k+1}\eta^*\|^2 \nonumber\\ &= \frac{m^2}{2\gamma}\|\theta_{k+1}\zeta_{k+1}^{[i_k]}-\theta_{k+1}\zeta_{k}^{[i_k]}\|^2 +\frac{m^2}{2\gamma}\|\theta_{k+1}\zeta_k -\theta_{k+1}\eta^* \|^2 \nonumber\\ &\quad -m\langle g_{k+1},\theta_{k+1}\zeta_k^{[i_k]} -\theta_{k+1}(\eta^*)^{[i_k]}\rangle.
\nonumber \end{align} Taking expectation on $i_k$, we have \begin{align}\label{eqn:zeta} &\quad \frac{m^2}{2\gamma}\mathbb{E}_{i_k}\|\theta_{k+1}\zeta_{k+1} - \theta_{k+1}\eta^*\|^2 \nonumber\\ &= \frac{1}{2\gamma}\mathbb{E}_{i_k}\|\eta_{k+1}^{[i_k]} - \lambda_{k+1}^{[i_k]}\|^2 +\frac{m^2}{2\gamma}\|\theta_{k+1}\zeta_k -\theta_{k+1}\eta^* \|^2 \nonumber\\ &\quad -\langle \nabla \phi(\omega_{j(k+1)},\xi_{k+1}),\theta_{k+1}\zeta_k -\theta_{k+1}\eta^*\rangle. \end{align} If we further take expectation on $\xi_{k+1}$, then we have \begin{align}\label{eqn:prod} &-\mathbb{E}\langle \nabla \phi(\omega_{j(k+1)},\xi_{k+1}),\theta_{k+1}\zeta_k -\theta_{k+1}\eta^*\rangle \nonumber\\ &=-\mathbb{E}\langle \nabla \phi(\omega_{j(k+1)},\xi_{k+1}),\lambda_{k+1}-(1-\theta_{k+1})\eta_{k}- \theta_{k+1}\eta^*\rangle \nonumber\\ &=-\langle \nabla \varphi(\omega_{j(k+1)}),\lambda_{k+1}-(1-\theta_{k+1})\eta_{k}- \theta_{k+1}\eta^*\rangle \nonumber\\ &=-\langle \nabla \varphi(\omega_{j(k+1)}),\omega_{j(k+1)}-(1-\theta_{k+1}) \eta_{k}- \theta_{k+1}\eta^*\rangle \nonumber\\ &\quad-\langle \nabla \varphi(\omega_{j(k+1)}), \lambda_{k+1} - \omega_{j(k+1)} \rangle \nonumber \\ &\le (1-\theta_{k+1})\varphi(\eta_{k}) + \theta_{k+1} \varphi(\eta^*)- \varphi(\omega_{j(k+1)}) \nonumber\\ &\quad-\langle \nabla \varphi(\omega_{j(k+1)}), \lambda_{k+1} - \omega_{j(k+1)} \rangle \nonumber\\ &\le (1-\theta_{k+1})\varphi(\eta_{k}) + \theta_{k+1} \varphi(\eta^*)- \varphi(\lambda_{k+1}) \nonumber\\ &\quad+\langle \nabla \varphi(\lambda_{k+1}) -\nabla \varphi(\omega_{j(k+1)}),\lambda_{k+1} - \omega_{j(k+1)} \rangle, \end{align} where the expectation is taken over $\xi_{k+1}$, the first equality follows from line (3) in ASBCDS, and the second equality is due to the independence of $\nabla \phi(\omega_{j(k+1)},\xi_{k+1})$ and $\lambda_{k+1}-(1-\theta_{k+1}) \eta_{k}- \theta_{k+1}\eta^*$. The two inequalities follow from the convexity of $\varphi$.
{\bf Step 4:} By summing up (\ref{eqn:phi}), (\ref{eqn:zeta}), and (\ref{eqn:prod}) and taking expectation on $\xi_{k+1}$, we have \begin{align}\label{eqn:varphi-oneiter} &\mathbb{E}_{i_{k}} \varphi(\eta_{k+1}) \le (1-\theta_{k+1}) \varphi(\eta_k) + \theta_{k+1} \varphi(\eta^*) \nonumber\\ &-\gamma(\frac{1}{2}-\frac{L\gamma}{2}-\frac{D_1+D_2}{2})\mathbb{E}_{\xi_{k+1}}[\mathbb{E}_{i_k}\|\frac{\eta_{k+1}-\lambda_{k+1}}{\gamma}\|^2 ] \nonumber\\ &+ \frac{\gamma }{2mD_2}\mathbb{E}_{\xi_{k+1}} \| \nabla \varphi(\omega_{j(k+1)})- \nabla \phi(\omega_{j(k+1)},\xi_{k+1})\|^2 \nonumber\\ &+ (\frac{\gamma L^2}{2D_1}+L)\|\lambda_{k+1}-\omega_{j(k+1)}\|^2 +\frac{m^2}{2\gamma}\|\theta_{k+1}\zeta_k -\theta_{k+1}\eta^* \|^2 \nonumber\\ &-\mathbb{E}_{\xi_{k+1}}[\frac{m^2}{2\gamma}\mathbb{E}_{i_k}\|\theta_{k+1}\zeta_{k+1} - \theta_{k+1}\eta^*\|^2 ], \end{align} where we use the fact that $\|\eta_{k+1}-\lambda_{k+1} \|^2= \|\eta_{k+1}^{[i_k]}-\lambda_{k+1}^{[i_k]}\|^2$. Dividing both sides of (\ref{eqn:varphi-oneiter}) by $\theta_{k+1}^2$, we have \begin{align}\label{eqn:varphi-zeta} &\frac{\mathbb{E} \varphi(\eta_{k+1}) -\varphi(\eta^*)}{\theta_{k+1}^2} +\frac{m^2}{2\gamma}\mathbb{E}\|\zeta_{k+1} - \eta^*\|^2 \nonumber\\ \le &\frac{1-\theta_{k+1}}{\theta_{k+1}^2} (\varphi(\eta_k) - \varphi(\eta^*)) +\frac{m^2}{2\gamma}\|\zeta_k -\eta^* \|^2 \nonumber\\ &-\frac{\gamma}{\theta_{k+1}^2}(\frac{1}{2}-\frac{L\gamma}{2}-\frac{D_1+D_2}{2})\mathbb{E}\|\frac{\eta_{k+1}-\lambda_{k+1}}{\gamma}\|^2 \nonumber\\ &+ \frac{\gamma }{2mD_2\theta_{k+1}^2}\mathbb{E}_{\xi_{k+1}} \| \nabla \varphi(\omega_{j(k+1)})- \nabla \phi(\omega_{j(k+1)},\xi_{k+1})\|^2 \nonumber\\ &+ \frac{1}{\theta_{k+1}^2}(\frac{\gamma L^2}{2D_1}+L)\|\lambda_{k+1}-\omega_{j(k+1)}\|^2. \end{align} Using the fact that $\frac{1-\theta_{k+1}}{\theta_{k+1}^2} = \frac{1}{\theta_k^2}$, summing from $k=0$ to $K$, and taking expectation on all the $i_k$'s and $\xi_{k+1}$'s, we have \begin{align} &\frac{\mathbb{E} \varphi(\eta_{K+1})
-\varphi(\eta^*)}{\theta_{K+1}^2} +\frac{m^2}{2\gamma}\mathbb{E}\|\zeta_{K+1} - \eta^*\|^2 \nonumber\\ \le &\frac{1}{\theta_1^2} (\varphi(\eta_0) - \varphi(\eta^*)) +\frac{m^2}{2\gamma}\|\zeta_0 -\eta^* \|^2 \nonumber\\ &-\sum_{k=0}^K\frac{\gamma}{\theta_{k+1}^2}(\frac{1}{2}-\frac{L\gamma}{2}-\frac{D_1+D_2}{2})\mathbb{E}\|\frac{\eta_{k+1}-\lambda_{k+1}}{\gamma}\|^2 \nonumber\\ &+ \sum_{k=0}^K\frac{\gamma }{2mD_2\theta_{k+1}^2}\mathbb{E} \| \nabla \varphi(\omega_{j(k+1)})- \nabla \phi(\omega_{j(k+1)},\xi_{k+1})\|^2 \nonumber\\ &+ \sum_{k=0}^K\frac{1}{\theta_{k+1}^2}(\frac{\gamma L^2}{2D_1}+L)\mathbb{E}\|\lambda_{k+1}-\omega_{j(k+1)}\|^2 \nonumber\\ \le &\frac{1}{\theta_1^2} (\varphi(\eta_0) - \varphi(\eta^*)) +\frac{m^2}{2\gamma}\|\zeta_0 -\eta^* \|^2 + \sum_{k=0}^K\frac{\epsilon }{16\theta_{k+1}} \nonumber\\ &-(\frac{1}{2}-\frac{L\gamma}{2}-\frac{D_1+D_2}{2} - 4(\frac{\gamma^2 L^2}{2D_1}+L\gamma) (\frac{\tau^2+\tau}{m} + 2\tau)^2) \nonumber\\ &\sum_{k=0}^K\frac{\gamma}{\theta_{k+1}^2}\mathbb{E}\|\frac{\eta_{k+1}-\lambda_{k+1}}{\gamma}\|^2, \end{align} where the second inequality follows from (\ref{eqn:omeganorm}) and $\mathbb{E} \| \nabla \varphi(\omega_{j(k+1)})- \nabla \phi(\omega_{j(k+1)},\xi_{k+1})\|^2 \le \frac{mD_2\theta_{k+1}\epsilon}{8\gamma}$. {\bf Step 5:} If we set $D_1 = D_2 = \gamma L$ and choose $\gamma$ satisfying $3 L \gamma + 12 L \gamma(\frac{\tau^2+\tau }{m}+2\tau)^2 \le 1$, then we have \begin{align} &\mathbb{E}\varphi(\eta_{K+1}) -\varphi(\eta^*) \nonumber \\ \le& \frac{\theta_{K+1}^2}{\theta_1^2}(\varphi(\eta_0) -\varphi(\eta^*) + \frac{\theta_1^2 m^2}{2\gamma}\|\zeta_0 -\eta^*\|^2) + \frac{\epsilon}{2}, \end{align} where we use the fact that $\sum_{k=0}^K \frac{\theta_{K+1}^2}{\theta_{k+1}}\le \frac{4\sum_{k=0}^K (k+2m)}{(K+2m)^2}\le 8$.
Then according to Lemma \ref{lemma:theta}, in order to get $\epsilon$ accuracy, we need to set $K = \frac{\sqrt{2m^2 (\varphi(\eta_0) - \varphi(\eta^*)+\|\zeta_0-\eta^*\|^2/(2\gamma))}}{\sqrt{\epsilon}} $. Since in the $k$-th iteration we need to sample a mini-batch of $M_k = \frac{8\sigma^2 \gamma}{m\epsilon L\gamma \theta_{k+1}} \le \frac{8 \sigma^2 (k+2m)}{mL\epsilon}$ samples, the overall number of stochastic gradient accesses is $\sum_{k=0}^K \max(1,M_k) \le K+1 + \sum_{k=0}^{K} M_k \le K +1+ \frac{16\sigma^2 (K+2m)^2}{Lm\epsilon}$. \end{proof} \begin{theorem}[Equivalence between ASBCDS and PASBCDS] If we take the same $jp(k+1)$ and $\xi_{k+1}$ in each iteration of Algorithms ASBCDS and PASBCDS, then we have $\lambda_{k+1} = u_k + \theta_{k+1}^2 v_k$, $\zeta_{k+1} = u_{k+1}$ and $\eta_{k+1} = u_{k+1} + \theta_{k+1}^2 v_{k+1}$ for all $k = 0,\cdots, K$. \end{theorem} We use $\omega_{jl(k)}^o$ and $g_k^o$ to denote the $\omega_{jl(k)}$ and $g_k$ generated by ASBCDS, and $\omega_{jl(k)}^n$ and $g_k^n$ to denote those generated by PASBCDS. Besides, we assume that if $\omega_{jl(k)}^o = \omega_{jl(k)}^n$, then $g_k^o = g_k^n$. We now prove that $u_k = \zeta_k$ and $\eta_k = u_k + \theta_k^2 v_k$ by induction. When $k = 0$, $\zeta_k = u_k = 0$ and $\eta_k = u_k + \theta_k^2 v_k = 0$. Since $jp(0) = 0$ for all $p\in\{1,\cdots,m\}$, we have $\omega_{jl(1)}^o = \lambda_1 = \omega_{jl(1)}^n = 0 = u_0 + \theta_1^2 v_0$ for all $l\in \{1,\cdots,m\}$, which ensures $g_1^o = g_1^n$. Then we have $\zeta_1 = u_1$ and \begin{align*} \eta_1 &= \lambda_1 + m \theta_1(\zeta_1 -\zeta_0) = u_0 + \theta_1^2 v_0 + m \theta_1(\zeta_1 -\zeta_0) \\ &= u_0 + \theta_1^2 v_0 + m \theta_1(u_1 -u_0) \\ &= u_1 + \theta_1^2(v_0 - \frac{1-m\theta_1}{\theta_1^2}(u_1-u_0)) \\ &= u_1 + \theta_1^2 v_1.
\end{align*} When $k > 0$, suppose that $u_k = \zeta_k$ and $\eta_k = u_k + \theta_k^2 v_k$; then \begin{align}\label{ind:lambda} \lambda_{k+1} &= (1-\theta_{k+1}) \eta_k + \theta_{k+1} \zeta_k \nonumber\\ &= (1-\theta_{k+1}) (\eta_k -\zeta_k) + \zeta_k \nonumber\\ &= (1-\theta_{k+1})\theta_k^2 v_k + u_k = \theta_{k+1}^2v_k + u_k, \end{align} where in the last line we use the fact that $(1-\theta_{k+1})\theta_k^2 = \theta_{k+1}^2$. If $\omega_{jl(k+1)}^o = \omega_{jl(k+1)}^n$ for all $l \in \{1,\cdots,m\}$, then $g_{k+1}^o = g_{k+1}^n$ and $\zeta_{k+1} = u_{k+1}$. For $\eta_{k+1}$, we have \begin{align}\label{ind:eta} \eta_{k+1} &= \lambda_{k+1} + m \theta_{k+1}(\zeta_{k+1} -\zeta_k) \nonumber\\ &= u_k + \theta_{k+1}^2 v_k + m \theta_{k+1}(\zeta_{k+1} -\zeta_k) \nonumber\\ &= u_k + \theta_{k+1}^2 v_k + m \theta_{k+1}(u_{k+1} -u_k) \nonumber\\ &= u_{k+1} + \theta_{k+1}^2(v_k - \frac{1-m\theta_{k+1}}{\theta_{k+1}^2}(u_{k+1}-u_k)) \nonumber\\ &= u_{k+1} + \theta_{k+1}^2 v_{k+1}. \end{align} It remains to prove that $\omega_{j(k+1)}^o = \omega_{j(k+1)}^n$. We consider the following auxiliary Algorithm~\ref{alg:ASBCD-A} and Algorithm~\ref{alg:PASBCD-A}. \begin{algorithm}[h] \caption{ASBCDS-Auxiliary} \label{alg:ASBCD-A} \textbf{Input}: $\hat \eta_{jp(k+1)} = \eta_{jp(k+1)}$, $\hat\zeta_{jp(k+1)} = \zeta_{jp(k+1)}$, and $\theta_{jp(k+1)}$, for all $p \in \{1,\cdots,m\}$. \begin{algorithmic}[1] \FOR{$p = 1,\cdots,m$} \FOR{$i = jp(k+1),\cdots,k$} \STATE $\hat\lambda_{i+1}^{[p]} = \theta_{i+1}\hat\zeta_i^{[p]} + (1-\theta_{i+1})\hat\eta_i^{[p]}$. \STATE $\hat\zeta_{i+1}^{[p]} = \hat\zeta_i^{[p]}$. \STATE $\hat\eta_{i+1}^{[p]} = \hat\lambda_{i+1}^{[p]} + \theta_{i+1}(\hat\zeta_{i+1}^{[p]} -\hat\zeta_i^{[p]})$.
\ENDFOR \ENDFOR \end{algorithmic} \textbf{Output}: $\hat\eta_{k+1}$. \end{algorithm} From Algorithm~\ref{alg:ASBCD-A}, according to (\ref{eqn:lambda-omega}), we can conclude that $\hat \lambda_{k+1} = \omega_{j(k+1)}^o$, since $\hat\eta_i^{[p]} = \hat \lambda_i^{[p]}$ for all $p\in \{1,\cdots,m\}$ and $i\in\{jp(k+1),\cdots,k\}$. Note that, in order to distinguish the variables, we use a hat accent $\hat{\cdot}$ to indicate the variables generated by the auxiliary algorithms. \begin{algorithm}[th] \caption{PASBCDS-Auxiliary} \label{alg:PASBCD-A} \textbf{Input}: $\hat u_{jp(k+1)} = u_{jp(k+1)}$, $\hat v_{jp(k+1)} = v_{jp(k+1)}$, and $\theta_{jp(k+1)}$ for all $p \in \{1,\cdots,m\}$. \begin{algorithmic}[1] \FOR{$p = 1,\cdots,m$} \FOR{$i = jp(k+1),\cdots,k$} \STATE $\hat\omega_{i+1}^{[p]} = \hat u_{i}^{[p]} +\theta_{i+1}^2\hat v_{i}^{[p]}$, \STATE $\hat u_{i+1}^{[p]} = \hat u_i^{[p]}$, $\hat v_{i+1}^{[p]} = \hat v_i^{[p]}$. \ENDFOR \ENDFOR \end{algorithmic} \textbf{Output}: $\hat\eta_{k+1} = \hat u_{k+1} + \theta_{k+1}^2 \hat v_{k+1}$. \end{algorithm} Then, following the same induction as in (\ref{ind:lambda}) and (\ref{ind:eta}), we can conclude that $\hat\eta_{k}^{[p]} = \hat u_{k}^{[p]} + \theta_k^2 \hat v_k^{[p]}$, $\hat \zeta_k^{[p]} = \hat u_k^{[p]}$ and $\hat \lambda_{k+1}^{[p]} = \hat u_k^{[p]} + \theta_{k+1}^2\hat v_k^{[p]}$. Since $\hat u_{jp(k+1)}^{[p]} = u_{jp(k+1)}^{[p]}$ and $\hat v_{jp(k+1)}^{[p]} = v_{jp(k+1)}^{[p]}$, we can conclude that $\hat \lambda_{k+1} = \omega_{j(k+1)}^n$ according to the update rule of $\omega^n$, i.e., line 3 in PASBCDS, which indicates $\omega_{j(k+1)}^o = \omega_{j(k+1)}^n$ and verifies the equivalence between ASBCDS and PASBCDS. \end{document}
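The induction above can be checked numerically. In the Python sketch below, a synthetic gradient sequence stands in for the sampled updates $g_k$ (the same sequence is fed to both recursions), the starting value $\theta_0 = 1/m$ and the gradient-step form of the $\zeta$/$u$ update are illustrative assumptions, and $\theta_{k+1}$ is generated from the recursion $(1-\theta_{k+1})\theta_k^2 = \theta_{k+1}^2$ used in the proof; the check confirms $\eta_k = u_k + \theta_k^2 v_k$ at every iteration.

```python
import numpy as np

def theta_next(th):
    # Positive root of x^2 = (1 - x) * th^2, i.e. the recursion
    # (1 - theta_{k+1}) * theta_k^2 = theta_{k+1}^2 used in the proof.
    return th * (np.sqrt(th * th + 4) - th) / 2

def run_equivalence(K=50, m=4, gamma=0.1, d=3, seed=0):
    """Return the largest gap |eta_k - (u_k + theta_k^2 v_k)| over K iterations."""
    rng = np.random.default_rng(seed)
    eta = np.zeros(d); zeta = np.zeros(d)   # direct (ASBCDS-style) variables
    u = np.zeros(d); v = np.zeros(d)        # reformulated (PASBCDS-style) variables
    th, gap = 1.0 / m, 0.0                  # theta_0 = 1/m is an arbitrary start
    for _ in range(K):
        g = rng.standard_normal(d)          # same toy "gradient" for both schemes
        th = theta_next(th)
        lam = (1 - th) * eta + th * zeta
        zeta_new = zeta - gamma * g         # stand-in for the zeta update
        eta = lam + m * th * (zeta_new - zeta)
        u_new = u - gamma * g               # u mirrors zeta
        v = v - (1 - m * th) / th**2 * (u_new - u)
        u, zeta = u_new, zeta_new
        gap = max(gap, np.max(np.abs(eta - (u + th**2 * v))))
    return gap

print(run_equivalence())   # small (round-off only): the two schemes coincide
```

The only property of $\theta$ the identity relies on is the recursion enforced by `theta_next`, matching the step "$(1-\theta_{k+1})\theta_k^2 = \theta_{k+1}^2$" in \eqref{ind:lambda}.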
\begin{document} \begin{sloppy} \title{ Iteration complexity analysis of random coordinate descent methods for $\ell_0$ regularized convex problems } \begin{abstract} In this paper we analyze a family of general random block coordinate descent methods for the minimization of $\ell_0$ regularized optimization problems, i.e. the objective function is composed of a smooth convex function and the $\ell_0$ regularization. Our family of methods covers particular cases such as random block coordinate gradient descent and random proximal coordinate descent methods. We analyze necessary optimality conditions for this nonconvex $\ell_0$ regularized problem and devise a separation of the set of local minima into restricted classes based on approximation versions of the objective function. We provide a unified analysis of the almost sure convergence for this family of block coordinate descent algorithms and prove that, for each approximation version, the limit points are local minima from the corresponding restricted class of local minimizers. Under the strong convexity assumption, we prove linear convergence in probability for our family of methods. \end{abstract} \begin{keywords} $\ell_0$ regularized convex problems, Lipschitz gradient, restricted classes of local minima, random coordinate descent methods, iteration complexity analysis. \end{keywords} \pagestyle{myheadings} \thispagestyle{plain} \markboth{A. Patrascu and I. 
Necoara}{Coordinate descent methods for $\ell_0$ regularized optimization} \section{Introduction} \noindent In this paper we analyze the properties of local minima and devise a family of random block coordinate descent methods for the following $\ell_0$ regularized optimization problem: \begin{equation}\label{l0regular} \min\limits_{x \in \rset^n} F(x) \quad \left(= f(x) + \norm{x}_{0,\lambda} \right), \end{equation} where the function $f$ is smooth and convex and the quasinorm of $x$ is defined as: \[ \norm{x}_{0,\lambda}= \sum\limits_{i=1}^N \lambda_i\norm{x_i}_0, \] where $\|x_i\|_0$ is the quasinorm which counts the number of nonzero components in the vector $x_i \in \rset^{n_i}$, which is the $i$th block component of $x$, and $\lambda_i \ge 0$ for all $i=1, \dots, N$. Note that in this formulation we do not impose sparsity on all block components of $x$, but only on those blocks $i$ for which the corresponding penalty parameter $\lambda_i>0$. However, in order to avoid the convex case, intensively studied in the literature, we assume that there is at least one $i$ such that $\lambda_i>0$. \noindent In many applications such as compressed sensing \cite{BluDav:08,CanTao:04}, sparse support vector machines \cite{Bah:13}, sparse nonnegative factorization \cite{Gil:12}, sparse principal component analysis \cite{JouNes:10} or robust estimation \cite{Kek:13} we deal with a convex optimization problem for which we would like to obtain an (approximate) solution that additionally has the property of sparsity (i.e. few nonzero components). The typical approach for obtaining a sparse minimizer of an optimization problem involves minimizing the number of nonzero components of the solution.
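As a concrete illustration of the block quasinorm $\norm{x}_{0,\lambda}$ above, the short Python sketch below (block sizes and penalty values chosen arbitrarily) counts nonzeros per block and weights them by $\lambda_i$; blocks with $\lambda_i = 0$ contribute nothing, so no sparsity is imposed on them:

```python
import numpy as np

def l0_quasinorm(x_blocks, lam):
    # ||x||_{0,lambda} = sum_i lambda_i * ||x_i||_0 (nonzeros per block)
    return sum(l * np.count_nonzero(xi) for l, xi in zip(lam, x_blocks))

# Three blocks; only blocks with lambda_i > 0 are penalized
x_blocks = [np.array([1.0, 0.0]), np.array([0.0, 0.0, 2.0]), np.array([3.0])]
lam = [0.5, 0.0, 1.0]
print(l0_quasinorm(x_blocks, lam))  # 0.5*1 + 0.0*1 + 1.0*1 = 1.5
```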
In the literature for sparse optimization two formulations are widespread: \textit{(i) the regularized formulation} obtained by adding an $\ell_0$ regularization term to the original objective function as in \eqref{l0regular}; \textit{(ii) the sparsity constrained formulation} obtained by including an additional constraint on the number of nonzero elements of the variable vector. However, both formulations are hard combinatorial problems, since solving them exactly would require trying all possible sparsity patterns in a brute-force way. Moreover, there is no clear equivalence between them in the general case. \noindent Several greedy algorithms have been developed in the last decade for the sparse linear least squares setting under certain restricted isometry assumptions \cite{Bah:13,BluDav:08,CanTao:04}. In particular, the iterative hard thresholding algorithm has gained a lot of interest lately due to its simple iteration \cite{BluDav:08}. Recently, in \cite{Lu:12}, a generalization of the iterative hard thresholding algorithm has been given for general $\ell_0$ regularized convex cone programming. The author shows linear convergence of this algorithm for strongly convex objective functions, while for general convex objective functions the author considers the minimization over a bounded box set. Moreover, since there could be an exponential number of local minimizers for the $\ell_0$ regularized problem, there is no characterization in \cite{Lu:12} of the local minima at which the iterative hard thresholding algorithm converges. Further, in \cite{LuZha:12}, penalty decomposition methods were devised for both regularized and constrained formulations of sparse nonconvex problems and convergence analysis was provided for these algorithms. An analysis of sparsity constrained problems was provided, e.g.,
in \cite{Bec:12}, where the authors introduced several classes of stationary points and developed greedy coordinate descent algorithms converging to different classes of stationary points. Coordinate descent methods are frequently used to solve sparse optimization problems \cite{Bec:12,LuXia:13,NecCli:13,NecNes:13,BanGha:08} since they are based on the strategy of updating one (block) coordinate of the vector of variables per iteration using some index selection procedure (e.g. cyclic, greedy or random). This often drastically reduces the cost per iteration and the memory requirements, making these methods simple and scalable. There exist numerous papers dealing with the convergence analysis of this type of methods: for deterministic index selection see \cite{HonWan:13,BecTet:13,LuoTse:92}, while for random index selection see \cite{LuXia:13,Nes:12,NecCli:13,Nec:13,NecPat:14,PatNec:14,RicTac:12}. \subsection{Main contribution} \noindent In this paper we analyze a family of general random block coordinate descent iterative hard thresholding based methods for the minimization of $\ell_0$ regularized optimization problems, i.e. the objective function is composed of a smooth convex function and the $\ell_0$ regularization. The family of algorithms we consider takes a very general form, consisting of the minimization of a certain approximate version of the objective function with respect to one block of variables at a time, while fixing the rest of the block variables. Such methods are particularly well suited for solving nonsmooth $\ell_0$ regularized problems since they solve an easy low-dimensional problem at each iteration, often in closed form. Our family of methods covers particular cases such as random block coordinate gradient descent and random proximal coordinate descent methods.
We analyze necessary optimality conditions for this nonconvex $\ell_0$ regularized problem and devise a procedure for the separation of the set of local minima into restricted classes based on approximation versions of the objective function. We provide a unified analysis of the almost sure convergence for this family of random block coordinate descent algorithms and prove that, for each approximation version, the limit points are local minima from the corresponding restricted class of local minimizers. Under the strong convexity assumption, we prove linear convergence in probability for our family of methods. We also provide numerical experiments which show the superior behavior of our methods in comparison with the usual iterative hard thresholding algorithm. \subsection{Notations and preliminaries} We consider the space $\rset^n$ composed of column vectors. For $x,y \in \rset^n$ we denote the scalar product by $\langle x,y \rangle = x^T y$ and the Euclidean norm by $\|x\|=\sqrt{x^T x}$. We use the same notation $\langle \cdot,\cdot \rangle$ ($\|\cdot\|$) for scalar product (norm) in spaces of different dimensions. For any matrix $A \in \rset^{m \times n}$ we use $\sigma_{\min}(A)$ for the minimal eigenvalue of matrix $A$. We use the notation $[n] = \{1,2, \dots, n\}$ and $e = [1 \cdots 1]^T \in \rset^n$. \noindent In the sequel, we consider the following decompositions of the variable dimension and of the $n \times n$ identity matrix: \begin{equation*} n = \sum\limits_{i=1}^N n_i, \qquad \qquad I_n= \left[ U_1 \dots U_N \right], \qquad \qquad I_n= \left[ U_{(1)} \dots U_{(n)} \right], \end{equation*} where $U_i \in \rset^{n \times n_i}$ and $U_{(j)} \in \rset^{n}$ for all $i \in [N]$ and $j \in [n]$. If the index set corresponding to block $i$ is given by $\mathcal{S}_i$, then $\abs{\mathcal{S}_i} = n_i$.
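The block decomposition $I_n = [U_1 \dots U_N]$ and the block components $x_i = U_i^T x$ can be illustrated with a small numerical sketch (the block sizes here are arbitrary); the final check confirms that $x = \sum_i U_i x_i$:

```python
import numpy as np

# Block sizes n_i (arbitrary here): n = 6, N = 3
n_blocks = [2, 3, 1]
n = sum(n_blocks)
offsets = np.cumsum([0] + n_blocks)
I_n = np.eye(n)
# U_i collects the columns of I_n belonging to block i
U = [I_n[:, offsets[i]:offsets[i + 1]] for i in range(len(n_blocks))]

x = np.arange(1.0, n + 1)                  # x = [1, 2, 3, 4, 5, 6]
x_blocks = [Ui.T @ x for Ui in U]          # x_i = U_i^T x
assert np.allclose(sum(Ui @ xi for Ui, xi in zip(U, x_blocks)), x)
print(x_blocks[1])                          # [3. 4. 5.]
```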
Given $x \in \rset^n$, for any $i \in [N]$ and $j \in [n]$, we denote: \begin{align*} x_i &= U_i^T x \in \rset^{n_i}, \quad \quad \quad \quad \ \ \nabla_i f(x)= U_i^T \nabla f(x) \in \rset^{n_i},\\ x_{(j)} &= U_{(j)}^T x \in \rset^{}, \quad \quad \quad \quad \ \ \ \nabla_{(j)} f(x)= U_{(j)}^T \nabla f(x) \in \rset^{}. \end{align*} \noindent For any vector $x \in \rset^n$, the support of $x$ is given by $\text{supp}(x)$, which denotes the set of indices corresponding to the nonzero components of $x$. We denote $\bar{x} = \max\limits_{j \in \text{supp}(x)} \abs{x_{(j)}}$ and $\underline{x} = \min\limits_{j \in \text{supp}(x)} \abs{x_{(j)}}$. Additionally, we introduce the following set of indices: $$I(x) = \text{supp}(x) \cup \{j \in [n]: \ j \in \mathcal{S}_i, \ \lambda_i=0\}$$ and $I^c(x) = [n] \backslash I(x)$. Given two scalars $p\ge 1, r>0$ and $ x \in \rset^n$, the $p$-ball of radius $r$ centered in $x$ is denoted by $\mathcal{B}_p(x,r) = \{y \in \rset^n: \; \norm{y-x}_p < r \}$. Let $I \subseteq [n]$ and denote by $S_I$ the subspace of all vectors $x \in \rset^n$ with support included in $I$, i.e. $S_I = \{x \in \rset^n: \; x_{(j)}=0 \;\; \forall j \notin I\}$. \noindent We denote with $f^*$ the optimal value of the convex problem $f^* = \min_{x \in \rset^n} f(x)$ and its optimal set with $X^*_f=\left\{x \in \rset^n: \nabla f(x)=0 \right\}$. In this paper we consider the following assumption on function $f$: \begin{assumption}\label{assump_grad_1} The function $f$ has (block) coordinatewise Lipschitz continuous gradient with constants $L_i>0$ for all $i \in [N]$, i.e. the convex function $f$ satisfies the following inequality for all $i \in [N]$: \begin{equation*} \norm{\nabla_i f(x+U_ih_i) - \nabla_i f(x)} \le L_i \norm{h_i} \quad \forall x \in \rset^n, h_i \in \rset^{n_i}.
\end{equation*} \end{assumption} \noindent An immediate consequence of Assumption \ref{assump_grad_1} is the following relation \cite{Nes:12}: \begin{equation}\label{Lipschitz_gradient} f(x+U_ih_i) \le f(x) + \langle \nabla_i f(x), h_i\rangle + \frac{L_i}{2}\norm{h_i}^2 \quad \forall x \in \rset^n, h_i \in \rset^{n_i}. \end{equation} We denote with $\lambda=[\lambda_1 \cdots \lambda_N]^T \in \rset^N$, $L=[L_1 \cdots L_N]^T $ and $L_f$ the global Lipschitz constant of the gradient $\nabla f(x)$. In the Euclidean settings, under Assumption \ref{assump_grad_1} a tight upper bound of the global Lipschitz constant is $L_f \le \sum_{i=1}^N L_i$ (see \cite[Lemma 2]{Nes:12}). Note that a global inequality based on $L_f$, similar to \eqref{Lipschitz_gradient}, can be also derived. Moreover, we should remark that Assumption \ref{assump_grad_1} has been frequently considered in coordinate descent settings (see e.g. \cite{Nec:13,Nes:12, NecNes:13,NecCli:13,NecPat:14,RicTac:12}). \section{Characterization of local minima} \noindent In this section we present the necessary optimality conditions for problem \eqref{l0regular} and provide a detailed description of local minimizers. First, we establish necessary optimality conditions satisfied by any local minimum. Then, we separate the set of local minima into restricted classes around the set of global minimizers. The next theorem provides conditions for obtaining local minimizers of problem \eqref{l0regular}: \begin{theorem}\label{lemmaaux} If Assumption \ref{assump_grad_1} holds, then any $z \in \rset^n \backslash \{0\}$ is a local minimizer of problem \eqref{l0regular} on the ball $\mathcal{B}_{\infty}(z,r)$, with $r=\min\left\{\underline{z}, \frac{\underline{\lambda}}{\norm{\nabla f(z)}_1}\right\}$, if and only if $z$ is a global minimizer of convex problem $\min\limits_{x \in S_{I(z)}} f(x)$. 
Moreover, $0$ is a local minimizer of problem \eqref{l0regular} on the ball $\mathcal{B}_{\infty} \left(0, \frac{\min_{i \in [N]} \lambda_i}{\norm{\nabla f(0)}_1} \right)$ provided that $0 \not \in X_f^*$; otherwise, $0$ is a global minimizer of \eqref{l0regular}. \end{theorem} \begin{proof} For the first implication, we assume that $z$ is a local minimizer of problem \eqref{l0regular} on the open ball $\mathcal{B}_{\infty}(z,r)$, i.e. we have: $$f(z) \le f(y) \quad \forall y \in \mathcal{B}_{\infty}(z,r) \cap S_{I(z)}.$$ \noindent Based on Assumption \ref{assump_grad_1} it follows that $f$ also has a global Lipschitz continuous gradient, with constant $L_f$, and thus we have: $$f(z) \le f(y) \le f(z) + \langle \nabla f(z),y- z\rangle + \frac{L_f}{2}\norm{y- z}^2 \quad \forall y \in \mathcal{B}_{\infty}(z,r)\cap S_{I(z)}.$$ Taking $\alpha = \min\{\frac{1}{L_f}, \frac{r}{\max\limits_{j \in I(z)}\abs{\nabla_{(j)} f(z) }}\}$ and $y= z - \alpha \nabla_{I(z)} f(z)$, we obtain: $$0 \le \left(\frac{L_f \alpha^2}{2} - \alpha\right)\norm{\nabla_{I(z)} f(z)}^2 \le 0.$$ Therefore, we have $ \nabla_{I(z)} f(z) =0$, which means that: \begin{equation}\label{localmin} z = \arg\min\limits_{x \in S_{I(z)}} f(x). \end{equation} \noindent For the second implication we first note that for any $y, d \in \rset^n$, with $ y \neq 0$ and $\norm{d}_{\infty} < \underline{y} $, we have: \begin{equation}\label{positive} \abs{y_{(i)}+d_{(i)}} \ge \abs{y_{(i)}} - \abs{d_{(i)}} \ge \underline{y} - \norm{d}_{\infty} > 0 \quad \forall i \in \text{supp}(y). \end{equation} Clearly, for any $d \in \mathcal{B}_{\infty}(0,r) \backslash S_{I(y)}$, with $r= \underline{y}$, we have: \begin{equation*} \norm{y+d}_{0,\lambda} = \norm{y}_{0,\lambda} + \sum\limits_{i \in I^c(y) \cap \text{supp}(d)} \norm{d_{(i)}}_{0,\lambda} \ge \norm{y}_{0,\lambda} + \underline{\lambda}.
\end{equation*} \noindent Let $d \in \mathcal{B}_{\infty}(0,r) \backslash S_{I(y)}$, with $r = \min\left\{ \underline{y}, \frac{\underline{\lambda}}{\norm{\nabla f(y)}_1} \right\}$. The convexity of the function $f$ and the H\"older inequality lead to: \begin{align} \label{parti} F(y+d) &\ge f(y) + \langle \nabla f(y), d \rangle + \norm{y+d}_{0,\lambda} \nonumber \\ &\ge F(y) - \norm{\nabla f(y)}_1 \norm{d}_{\infty} + \underline{\lambda} \ge F(y) \quad \forall y \in \rset^n. \end{align} We now assume that $z$ satisfies \eqref{localmin}. For any $x \in \mathcal{B}_{\infty}(z,r) \cap S_{I(z)}$ we have $\norm{x-z}_{\infty} < \underline{z}$, which by \eqref{positive} implies that $\abs{x_{(i)}} > 0$ whenever $\abs{z_{(i)}} > 0$. Therefore, we get: \begin{equation*} F(x) = f(x) + \norm{x}_{0,\lambda} \ge f(z) + \norm{z}_{0,\lambda} = F(z), \end{equation*} and combining with the inequality \eqref{parti} leads to the second implication. Furthermore, if $0 \not \in X_f^*$, then $\nabla f(0) \not =0$. Assuming that $\min_{i \in [N]} \lambda_i>0$, then $F(x) \geq f(0) + \langle \nabla f(0), x \rangle + \norm{x}_{0,\lambda} \geq F(0) - \norm{\nabla f(0)}_1 \norm{x}_{\infty} + \min_{i \in [N]} \lambda_i \geq F(0)$ for all $x \in \mathcal{B}_{\infty} \left(0, \frac{\min_{i \in [N]} \lambda_i}{\norm{\nabla f(0)}_1} \right)$. If $0 \in X_f^*$, then $\nabla f(0) =0$ and thus $F(x) \geq f(0) + \langle \nabla f(0), x \rangle + \norm{x}_{0,\lambda} \geq F(0)$ for all $x \in \rset^n$. \end{proof} \noindent From Theorem \ref{lemmaaux} we conclude that any vector $z \in \rset^n$ is a local minimizer of problem \eqref{l0regular} if and only if the following equality holds: $$ \nabla_{I(z)} f(z)=0.$$ We denote with $\mathcal{T}_f$ the set of all local minima of problem \eqref{l0regular}, i.e. \[ \mathcal{T}_f =\left\{ z \in \rset^n:\; \nabla_{I(z)} f(z)=0 \right \}, \] and we call them \textit{basic local minimizers}.
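The characterization $\nabla_{I(z)} f(z) = 0$ can be checked numerically. The Python sketch below (for a least squares $f$, scalar blocks $n_i = 1$ and a uniform penalty $\lambda_i = \lambda > 0$, all chosen purely for illustration) builds a basic local minimizer by solving the restricted convex problem $\min_{x \in S_I} f(x)$ on an arbitrary support $I$, then verifies that small perturbations cannot decrease $F$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m_rows, lam = 6, 10, 0.5                 # scalar blocks, uniform penalty
A = rng.standard_normal((m_rows, n))
b = rng.standard_normal(m_rows)

f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
F = lambda x: f(x) + lam * np.count_nonzero(x)
grad = lambda x: A.T @ (A @ x - b)

I = [0, 2, 5]                                # an arbitrary support set
z = np.zeros(n)
z[I] = np.linalg.lstsq(A[:, I], b, rcond=None)[0]  # solve min_{x in S_I} f(x)

assert np.allclose(grad(z)[I], 0.0, atol=1e-8)     # nabla_{I(z)} f(z) = 0
for _ in range(100):                         # small perturbations cannot improve F
    d = rng.uniform(-1e-3, 1e-3, size=n)
    assert F(z) <= F(z + d)
print("z is a basic local minimizer on support", I)
```

The perturbation test only probes a small $\ell_\infty$ ball, in line with the radius $r$ in Theorem \ref{lemmaaux}: any generic perturbation fills in extra nonzeros, so the $\ell_0$ penalty jump dominates the first-order change in $f$.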
It is not hard to see that when the function $f$ is strongly convex, the number of basic local minima of problem \eqref{l0regular} is finite, otherwise we might have an infinite number of basic local minimizers. \subsection{Strong local minimizers} \noindent In this section we introduce a family of strong local minimizers of problem \eqref{l0regular} based on an approximation of the function $f$. It can be easily seen that finding a basic local minimizer is a trivial procedure e.g.: $(a)$ if we choose some set of indices $I \subseteq [n]$ such that $\{j \in [n]: j \in \mathcal{S}_i, \lambda_i=0\} \subseteq I$, then from Theorem \ref{lemmaaux} the minimizer of the convex problem $\min_{x \in S_I} f(x)$ is a basic local minimizer for problem \eqref{l0regular}; $(b)$ if we minimize the convex function $f$ w.r.t. all blocks $i$ satisfying $\lambda_i=0$, then from Theorem \ref{lemmaaux} we obtain again some basic local minimizer for \eqref{l0regular}. This motivates us to introduce more restricted classes of local minimizers. Thus, we first define an approximation version of function $f$ satisfying certain assumptions. In particular, given $i \in [N]$ and $x \in \rset^n$, the convex function $u_i:\rset^{n_i} \to \rset^{}$ is an upper bound of function $f(x+U_i(y_i-x_i))$ if it satisfies: \begin{equation} \label{u_upperapr} f(x+U_i(y_i-x_i)) \le u_i(y_i;x) \quad \forall y_i \in \rset^{n_i}. \end{equation} \noindent We additionally impose the following assumptions on each function $u_i$. \begin{assumption}\label{approximation} The approximation function $u_i$ satisfies the assumptions: \\ (i) The function $u_i(y_i;x)$ is strictly convex and differentiable in the first argument, is continuous in the second argument and satisfies $u_i(x_i;x) = f(x)$ for all $x \in \rset^n$.\\ (ii) Its gradient in the first argument satisfies $\nabla u_i(x_i;x) = \nabla_i f(x) \quad \forall x \in \rset^n$. 
\\ (iii) For any $x \in \rset^n$, the function $u_i(y_i;x)$ has Lipschitz continuous gradient in the first argument with constant $M_i > L_i$, i.e.: $$\norm{\nabla u_i(y_i;x) - \nabla u_i(z_i;x)} \le M_i \norm{y_i-z_i} \quad \forall y_i, z_i \in \rset^{n_i}.$$ (iv) There exists $\mu_i$ such that $0< \mu_i \le M_i - L_i$ and $$u_i(y_i;x) \ge f(x+U_i(y_i-x_i)) + \frac{\mu_i}{2}\norm{y_i-x_i}^2 \quad \forall x \in \rset^n, y_i \in \rset^{n_i}.$$ \end{assumption} Note that a similar set of assumptions has been considered in \cite{HonWan:13}, where the authors derived a general framework for the block coordinate descent methods on composite convex problems. Clearly, Assumption \ref{approximation} $(iv)$ implies the upper bound \eqref{u_upperapr} and in \cite{HonWan:13} this inequality is replaced with the assumption of strong convexity of $u_i$ in the first argument. \noindent We now provide several examples of approximation versions of the objective function $f$ which satisfy Assumption \ref{approximation}. \begin{example} \label{ex_3u} We provide three examples of approximation versions for the function $f$; the reader can easily find many other examples of approximations satisfying Assumption \ref{approximation}. \\ \text{1. Separable quadratic approximation}: given $M \in \rset^N$, such that $M_i > L_i$ for all $i \in [N]$, we define the approximation version $$u_i^q(y_i; x, M_i) = f(x) + \langle \nabla_i f(x), y_i-x_i\rangle + \frac{M_i}{2}\norm{y_i-x_i}^2.$$ It satisfies Assumption \ref{approximation}, in particular condition $(iv)$ holds for $\mu_i=M_i - L_i$. This type of approximation was used by Nesterov for deriving the random coordinate gradient descent method for solving smooth convex problems \cite{Nes:12} and further extended to the composite convex case in \cite{NecCli:13,RicTac:12}. \noindent \text{2.
General quadratic approximation}: given $H_i \succ L_iI_{n_i}$ for all $i \in [N]$, we define the approximation version $$u_i^Q(y_i;x,H_i) = f(x) + \langle \nabla_i f(x), y_i-x_i\rangle + \frac{1}{2}\langle y_i-x_i, H_i(y_i-x_i)\rangle. $$ It satisfies Assumption \ref{approximation}, in particular condition $(iv)$ holds for $\mu_i = \sigma_{\min}(H_i - L_i I_{n_i})$ (the smallest eigenvalue). This type of approximation was used in deriving the greedy coordinate descent method based on the Gauss-Southwell rule for solving composite convex problems \cite{LuoTse:92,LuoTse:93,TseYun:09}. \noindent \text{3. Exact approximation}: given $\beta \in \rset^N$, such that $\beta_i > 0$ for all $i \in [N]$, we define the approximation version $$u_i^{e}(y_i;x,\beta_i) = f(x+U_i(y_i-x_i)) + \frac{\beta_i}{2}\norm{y_i-x_i}^2.$$ It satisfies Assumption \ref{approximation}, in particular condition $(iv)$ holds for $\mu_i = \beta_i$. This type of approximation function was used especially in the nonconvex setting \cite{GriSci:00,HonWan:13}. \end{example} \noindent Based on each approximation function $u_i$ satisfying Assumption \ref{approximation}, we introduce a class of restricted local minimizers for our nonconvex optimization problem \eqref{l0regular}. \begin{definition}\label{local_min_general} For any set of approximation functions $u_i$ satisfying Assumption \ref{approximation}, a vector $z$ is called a \textit{$u$-strong local minimizer} for problem \eqref{l0regular} if it satisfies: $$ F(z) \le \min\limits_{y_i \in \rset^{n_i}} u_i(y_i;z) + \norm{z+ U_i(y_i-z_i)}_{0,\lambda} \quad \forall i \in [N].$$ Moreover, we denote the set of strong local minima, corresponding to the approximation functions $u_i$, with $\mathcal{L}_u$.
\end{definition} \noindent It can be easily seen that \[ \min_{y_i \in \rset^{n_i}} u_i(y_i;z) + \norm{z+ U_i(y_i-z_i)}_{0,\lambda} \overset{y_i=z_i}{\leq} u_i(z_i;z) + \norm{z}_{0,\lambda} = F(z) \] and thus a $u$-strong local minimizer $z \in \mathcal{L}_u$ has the property that each block $z_i$ is a fixed point of the operator defined by the minimizers of the function $u_i(y_i;z) + \lambda_i\norm{y_i}_0$, i.e. we have for all $i \in [N]$: $$z_i = \arg\min\limits_{y_i \in \rset^{n_i}} u_i(y_i;z) + \lambda_i\norm{y_i}_0.$$ \begin{theorem} Let the set of approximation functions $u_i$ satisfy Assumption \ref{approximation}. Then any $u$-strong local minimizer is a local minimum of problem \eqref{l0regular}, i.e. the following inclusion holds: \[ \mathcal{L}_u \subseteq \mathcal{T}_f. \] \end{theorem} \begin{proof} From Definition \ref{local_min_general} and Assumption \ref{approximation} we have: \begin{align*} F(z) & \le \min\limits_{y_i \in \rset^{n_i}} u_i(y_i;z) + \norm{z+ U_i(y_i-z_i)}_{0,\lambda}\\ &\le \min\limits_{y_i \in \rset^{n_i}} u_i(z_i;z) + \langle \nabla u_i (z_i;z), y_i-z_i \rangle + \frac{M_i}{2}\norm{y_i-z_i}^2+ \norm{z+ U_i(y_i-z_i)}_{0,\lambda}\\ &= \min\limits_{y_i \in \rset^{n_i}} F(z) + \langle \nabla_i f(z), y_i-z_i \rangle + \frac{M_i}{2}\norm{y_i-z_i}^2+ \lambda_i(\norm{y_i}_0-\norm{z_i}_{0})\\ &\le F(z) + \langle \nabla_i f(z), h_i \rangle + \frac{M_i}{2}\norm{h_i}^2+ \lambda_i(\norm{z_i+h_i}_0-\norm{z_i}_{0}) \end{align*} for all $h_i \in \rset^{n_i}$ and $i \in [N]$. Choosing now $h_i$ as follows: \[ h_{i} = -\frac{1}{M_i}U_i^T U_{(j)}\nabla_{(j)} f(z) \;\;\; \text{for some} \;\; j \in I(z) \cap \mathcal{S}_i,\] we have from the definition of $I(z)$ that \[ \lambda_i(\norm{z_i+h_i}_0-\norm{z_i}_{0}) \leq 0 \] and thus $0 \leq -\frac{1}{2M_i} \abs{\nabla_{(j)} f(z)}^2$ or equivalently $\nabla_{(j)} f(z) = 0$. Since this holds for any $j \in I(z) \cap \mathcal{S}_i$, it follows that $z$ satisfies $\nabla_{I(z)} f(z) = 0$.
Using now Theorem \ref{lemmaaux} we obtain our statement. \end{proof} \noindent For the three approximation versions given in Example \ref{ex_3u} we obtain explicit expressions for the corresponding $u$-strong local minimizers. In particular, for some $M \in \rset^N_{++}$ and $i \in [N]$, if we consider the previous separable quadratic approximation $u_i^q(y_i;x,M_i)$, then any strong local minimizer $z \in \mathcal{L}_{u^q}$ satisfies the following relations: \begin{enumerate} \item[\textit{(i)}] $\nabla_{I(z)} f(z) = 0$ and additionally \item[\textit{(ii)}] $\begin{cases} \abs{\nabla_{(j)} f(z)} \le \sqrt{2\lambda_{i} M_{i} }, &\text{if} \ z_{(j)}=0, \\ \abs{z_{(j)}} \ge \sqrt{\frac{2\lambda_{i}}{M_{i}}}, &\text{if} \ z_{(j)} \neq 0, \end{cases}$ \ for all $i \in [N]$ and $j \in \mathcal{S}_i$. \end{enumerate} The relations given in $(ii)$ can be derived based on the separable structure of the approximation $u_i^q(y_i;x,M_i)$ and of the quasinorm $\|\cdot\|_0$ using arguments similar to those in Lemma 3.2 of \cite{Lu:12}. For completeness, we present the main steps in the derivation. First, it is clear that any $z \in \mathcal{L}_{u^q}$ satisfies: \begin{equation}\label{u^q_argmin} z_{(j)} = \arg\min_{y_{(j)} \in \rset^{}} \nabla_{(j)} f(z) (y_{(j)}-z_{(j)}) + \frac{M_i}{2}\abs{y_{(j)}-z_{(j)}}^2 + \lambda_i \norm{y_{(j)}}_0 \end{equation} for all $j \in \mathcal{S}_i $ and $ i\in [N]$. On the other hand, since the optimal point in the previous optimization problems can be either $0$ or different from $0$, we have: \begin{align*} &\min_{y_{(j)} \in \rset^{}} \nabla_{(j)} f(z) (y_{(j)}-z_{(j)}) + \frac{M_i}{2}\abs{y_{(j)}-z_{(j)}}^2 + \lambda_i \norm{y_{(j)}}_0 \\ &=\min \left\{ \frac{M_i}{2}\abs{z_{(j)}- \frac{1}{M_i}\nabla_{(j)} f(z)}^2 - \frac{1}{2M_i}\abs{\nabla_{(j)} f(z)}^2, \lambda_i - \frac{1}{2M_i}\abs{\nabla_{(j)} f(z)}^2 \right\}.
\end{align*} If $z_{(j)}=0$, then from the fixed-point relation \eqref{u^q_argmin} and the expression for its optimal value we have $\frac{M_i}{2}\abs{z_{(j)}- \frac{1}{M_i}\nabla_{(j)} f(z)}^2 - \frac{1}{2M_i}\abs{\nabla_{(j)} f(z)}^2 \leq \lambda_i - \frac{1}{2M_i}\abs{\nabla_{(j)} f(z)}^2$ and thus $\abs{\nabla_{(j)} f(z)} \le \sqrt{2\lambda_i M_i}$. Otherwise, $j \in I(z)$, so Theorem \ref{lemmaaux} gives $\nabla_{(j)} f(z) =0$, and combining this with $\frac{M_i}{2}\abs{z_{(j)}- \frac{1}{M_i}\nabla_{(j)} f(z)}^2 - \frac{1}{2M_i}\abs{\nabla_{(j)} f(z)}^2 \geq \lambda_i - \frac{1}{2M_i}\abs{\nabla_{(j)} f(z)}^2$ leads to $\abs{z_{(j)}} \ge \sqrt{\frac{2\lambda_i}{M_i}}$. Similar derivations can be carried out for the general quadratic approximations $u_i^Q(y_i;x,H_i)$ provided that $H_i$ is a diagonal matrix. For general matrices $H_i$, the corresponding strong local minimizers are fixed points of small $\ell_0$ regularized quadratic problems of dimension $n_i$. \noindent Finally, for some $\beta \in \rset^N_{++}$ and $i \in [N]$, considering the exact approximation $u_i^{e}(y_i;x,\beta_i)$ we obtain that any corresponding strong local minimizer $z \in \mathcal{L}_{u^e}$ satisfies: \begin{equation*} z_i = \arg \min\limits_{h_i \in \rset^{n_i}} F(z+ U_ih_i) + \frac{\beta_{i}}{2}\norm{h_i}^2 \quad \forall i \in [N]. \end{equation*} \begin{theorem}\label{inclusions} Let Assumption \ref{assump_grad_1} hold and $u^1, u^2$ be two approximation functions satisfying Assumption \ref{approximation}. Additionally, let $$u^1(y_i;x) \le u^2(y_i;x), \quad \forall y_i \in \rset^{n_i}, x \in \rset^n, i \in [N].$$ Then the following inclusions are valid: $$ \mathcal{X}^* \subseteq \mathcal{L}_{u^1} \subseteq \mathcal{L}_{u^2} \subseteq \mathcal{T}_f.$$ \end{theorem} \begin{proof} Assume $z \in \mathcal{X}^*$, i.e. it is a global minimizer of our original nonconvex problem \eqref{l0regular}.
Then, we have: \begin{align*} F(z) &\le \min\limits_{y_i \in \rset^{n_i}} F(z + U_i(y_i-z_i)) \\ & = \min\limits_{y_i \in \rset^{n_i}} f(z+U_i(y_i-z_i)) + \lambda_i \norm{y_i}_0 + \sum\limits_{j \neq i}\lambda_j \norm{z_j}_0\\ &\le \min\limits_{y_i \in \rset^{n_i}} u_i^1(y_i;z) +\norm{z+U_i(y_i-z_i)}_{0,\lambda} \quad \forall i\in [N], \end{align*} and thus $z \in \mathcal{L}_{u^1}$, i.e. we proved that $\mathcal{X}^* \subseteq \mathcal{L}_{u^1}$. Therefore, any class of $u$-strong local minimizers contains the global minima of problem \eqref{l0regular}. \noindent Further, let us take $z \in \mathcal{L}_{u^1}$. Using Definition \ref{local_min_general} and defining \[ t_i = \arg\min\limits_{y_i \in \rset^{n_i}} u_i^2(y_i;z) + \norm{z+U_i(y_i-z_i)}_{0,\lambda}, \] we get: \begin{align*} F(z) &\le \min\limits_{y_i \in \rset^{n_i}} u_i^1(y_i;z) + \norm{z+U_i(y_i-z_i)}_{0,\lambda}\\ &\le u_i^1(t_i;z) + \norm{z+U_i(t_i-z_i)}_{0,\lambda}\\ &\le u_i^2(t_i;z) + \norm{z+U_i(t_i-z_i)}_{0,\lambda}\\ &= \min\limits_{y_i \in \rset^{n_i}} u_i^2 (y_i;z) + \norm{z+U_i(y_i-z_i)}_{0,\lambda}. \end{align*} This shows that $z \in \mathcal{L}_{u^2}$ and thus $\mathcal{L}_{u^1} \subseteq \mathcal{L}_{u^2}$. \end{proof} \noindent Note that if the following inequalities hold \[ (L_i + \beta_i) I_{n_i} \preceq H_i \preceq M_i I_{n_i} \quad \forall i \in [N],\] using the Lipschitz gradient relation \eqref{Lipschitz_gradient}, we obtain that $$u_i^{e}(y_i;x,\beta_i) \le u_i^{Q}(y_i;x,H_i) \leq u_i^{q}(y_i;x,M_i) \quad \forall x \in \rset^n, y_i \in \rset^{n_i}.$$ \noindent Therefore, from Theorem \ref{inclusions} we observe that $u^{q} \ (u^{Q})$-strong local minimizers for problem \eqref{l0regular} are included in the class of all basic local minimizers $\mathcal{T}_f$. Thus, designing an algorithm which converges to a local minimum from $\mathcal{L}_{u^q}$ ($\mathcal{L}_{u^Q}$) will be of interest.
Moreover, $u^e$-strong local minimizers for problem \eqref{l0regular} are included in the class of all $u^{q} \ (u^Q)$-strong local minimizers. Thus, designing an algorithm which converges to a local minimum from $\mathcal{L}_{u^{e}}$ will be of interest. To illustrate the relationships between the previously defined classes of restricted local minima and see how much they are related to global minima of \eqref{l0regular}, let us consider an example. \begin{example} \label{classes_points} We consider the least squares setting $f(x) = \norm{Ax-b}^2$, where $A \in \rset^{m \times n}$ and $b \in \rset^m$ are given by: \begin{equation*} A = \begin{bmatrix} 1 & \alpha_1 &\cdots &\alpha_1^{n-1} \\ 1 &\alpha_2 &\cdots &\alpha_2^{n-1} \\ 1 &\alpha_3 &\cdots &\alpha_3^{n-1} \\ 1 &\alpha_4 &\cdots &\alpha_4^{n-1} \\ \end{bmatrix} + \left[p I_4 \quad O_{4,n-4} \right], \qquad b=q e, \end{equation*} with $e \in \rset^{4}$ the vector having all entries $1$. We choose the following parameter values: $\alpha = [1 \ 1.1 \ 1.2 \ 1.3]^T, n=7, p=3.3, q=25, \lambda=1$ and $\beta_i=0.0001$ for all $i\in [n]$. We further consider the scalar case, i.e. $n_i =1$ for all $i$. In this case we have that $u_i^q=u_i^Q$, i.e. the separable and general quadratic approximation versions coincide. The results are given in Table \ref{local_min}. Out of the $128$ possible local minima, we found $19$ local minimizers in $\mathcal{L}_{u^{q}}$ given by $u^q_i(y_i;x,L_f)$, and only $6$ local minimizers in $\mathcal{L}_{u^q}$ given by $u^q_i(y_i;x,L_i)$. Moreover, the class of $u^{e}$-strong local minima $\mathcal{L}_{u^{e}}$ given by $u^e_i(y_i;x,\beta_i)$ contains only one vector which is also the global optimum of problem \eqref{l0regular}, i.e. in this case $\mathcal{L}_{u^{e}} = \mathcal{X}^*$.
From Table \ref{local_min} we can clearly see that the newly introduced classes of local minimizers are much more restricted (in the sense of having a small number of elements, close to that of the set of global minimizers) than the much larger class of basic local minimizers. \setlength{\extrarowheight}{5pt} \setlength{\tabcolsep}{4pt} \begin{table}[ht] \begin{center}\caption{Strong local minima distribution on a least squares example.} \label{local_min} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Class of local minima} & $\mathcal{T}_f$ & $\overset{\mathcal{L}_{u^{q}}}{u^q_i(y_i;x,L_f)}$ & $\overset{\mathcal{L}_{u^{q}}}{u^q_i(y_i;x,L_i)}$ & $\overset{\mathcal{L}_{u^{e}}}{u^e_i(y_i;x,\beta_i)}$ \\ \hline \textbf{Number of local minima} & 128 & 19 & 6 & 1 \\ \hline \end{tabular} \end{center} \end{table} \end{example} \section{Random coordinate descent type methods} In this section we present a family of random block coordinate descent methods suitable for solving the class of problems \eqref{l0regular}. The family of algorithms we consider takes a very general form, consisting of minimizing an approximate version of the objective function over one block of variables at a time, while fixing the remaining blocks. Thus, these algorithms are a combination between an iterative hard thresholding scheme and a general random coordinate descent method, and they are particularly suited for solving nonsmooth $\ell_0$ regularized problems since they solve an easy low-dimensional problem at each iteration, often in closed form. Our family of methods covers particular cases such as random block coordinate gradient descent and random proximal coordinate descent methods. \noindent Let $x \in \rset^n$ and $i \in [N]$.
Then, we introduce the following \textit{thresholding map} for a given approximation version $u$ satisfying Assumption \ref{approximation}: \begin{align*} T^{u}_i(x) &= \arg\min\limits_{y_i \in \rset^{n_i}} u_i(y_i;x) + \lambda_i\norm{y_i}_{0}. \end{align*} \noindent In order to find a local minimizer of problem \eqref{l0regular}, we introduce the family of \textit{random block coordinate descent iterative hard thresholding} (RCD-IHT) methods, whose iteration is described as follows: \begin{algorithm}{{\bf RCD-IHT}} \begin{itemize} \item[1.] Choose $ x^0 \in \rset^n$ and approximation version $u$ satisfying Assumption \ref{approximation}. For $k \ge 0$ do: \item[2.] Choose a (block) coordinate $i_k \in [N]$ with uniform probability \item[3.] Set $x^{k+1}_{i_k} = T^{u}_{i_k}(x^k)$ and $x^{k+1}_i=x^k_i \;\; \forall i \neq i_k$. \end{itemize} \end{algorithm} \noindent Note that our algorithm is directly dependent on the choice of approximation $u$ and the computation of the operator $T^{u}_{i}(x)$ is in general easy, sometimes even in closed form. For example, when $u_i(y_i;x) = u_i^q(y_i;x,M_i)$ and $\nabla_{i_k} f(x^k)$ is available, we can easily compute the closed form solution of $T^{u}_{i_k}(x^k)$ as in the iterative hard thresholding schemes \cite{Lu:12}. Indeed, if we define $\Delta^i(x) \in \rset^{n_i}$ as follows: \begin{align} \label{deltaq} (\Delta^i(x))_{(j)} = \frac{M_i}{2} \abs{x_{(j)}- (1/M_i)\nabla_{(j)} f(x)}^2, \end{align} then the iteration of (RCD-IHT) method becomes: \begin{equation*} x^{k+1}_{(j)} = \begin{cases} x^k_{(j)} - \frac{1}{M_{i_k}}\nabla_{(j)}f(x^k), & \text{if} \quad (\Delta^{i_k}(x^k))_{(j)}\ge \lambda_{i_k}\\ 0, &\text{if} \quad (\Delta^{i_k}(x^k))_{(j)}\le \lambda_{i_k}, \end{cases} \end{equation*} for all $j \in \mathcal{S}_{i_k}$. 
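To make the closed-form update concrete, the following sketch implements one (RCD-IHT) iteration for the least squares objective $f(x) = \frac{1}{2}\|Ax-b\|^2$ in the scalar case $n_i=1$; the function name and the particular choice of the constants $M_j$ are ours, not from the paper.

```python
import numpy as np

def rcd_iht_step(x, A, b, lam, M, rng):
    # One (RCD-IHT) iteration for f(x) = 0.5*||Ax - b||^2 with scalar blocks
    # (n_i = 1) and the separable quadratic approximation u^q.
    j = rng.integers(x.size)                 # coordinate i_k chosen uniformly at random
    g = A[:, j] @ (A @ x - b)                # partial gradient of f at x, coordinate j
    t = x[j] - g / M[j]                      # coordinate gradient step
    delta = 0.5 * M[j] * t ** 2              # Delta^j(x) from the text
    x_new = x.copy()
    x_new[j] = t if delta > lam[j] else 0.0  # keep the step or hard-threshold to zero
    return x_new
```

Since each $M_j$ exceeds the coordinate Lipschitz constant $\|A_j\|^2$, every such step decreases the objective $F(x)=f(x)+\sum_j \lambda_j\norm{x_j}_0$.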
Note that if at some iteration $\lambda_{i_k}=0$, then the iteration of algorithm (RCD-IHT) is identical to the iteration of the usual \textit{random block coordinate gradient descent method} \cite{NecCli:13,Nes:12}. Further, our algorithm has, in this case, similarities with the iterative hard thresholding algorithm (IHTA) analyzed in \cite{Lu:12}. For completeness, we also present the algorithm (IHTA). \begin{algorithm}{{IHTA}} \cite{Lu:12} \begin{itemize} \item[1.] Choose $ M_f > L_f$. For $k \ge 0$ do: \item[2.] $x^{k+1} = \arg\min_{y \in \rset^{n}} f(x^k) + \langle \nabla f(x^k), y - x^k \rangle + \frac{M_f}{2}\norm{ y - x^k}^2 + \norm{y}_{0,\lambda} $, \end{itemize} \end{algorithm} or equivalently for each component we have the update: \begin{equation*} x^{k+1}_{(j)} = \begin{cases} x^k_{(j)} - \frac{1}{M_{f}}\nabla_{(j)}f(x^k), & \text{if} \quad \frac{M_f}{2} \abs{x^k_{(j)} - \frac{1}{M_{f}} \nabla_{(j)}f(x^k)}^2 \ge \lambda_{i}\\ 0, &\text{if} \quad \frac{M_f}{2} \abs{x^k_{(j)} - \frac{1}{M_{f}}\nabla_{(j)}f(x^k)}^2 \le \lambda_{i}, \end{cases} \end{equation*} for all $j \in \mathcal{S}_i$ and $i \in [N].$ Note that the arithmetic complexity of computing the next iterate $x^{k+1}$ in (RCD-IHT), once $\nabla_{i_k}f(x^k)$ is known, is of order $\mathcal{O}(n_{i_k})$, which is much lower than the arithmetic complexity per iteration $\mathcal{O}(n)$ of (IHTA) for $N \gg 1$, which additionally requires the computation of the full gradient $\nabla f(x^k)$. Similar derivations can be carried out for the general quadratic approximations $u_i^Q(y_i;x,H_i)$ provided that $H_i$ is a diagonal matrix. For general matrices $H_i$, the corresponding algorithm requires solving small $\ell_0$ regularized quadratic problems of dimension $n_i$. \noindent Finally, in the particular case when we consider the exact approximation $u_i(y_i;x)=u_i^{e}(y_i;x,\beta_i)$, at each iteration of our algorithm we need to perform an exact minimization of the objective function $f$ w.r.t.
one randomly chosen (block) coordinate. If $\lambda_{i_k} = 0$, then the iteration of algorithm (RCD-IHT) requires solving a small-dimensional subproblem with a strongly convex objective function as in the classical \textit{proximal block coordinate descent method} \cite{HonWan:13}. In the case when $\lambda_{i_k}>0$ and $n_i > 1$, this subproblem is nonconvex and usually hard to solve. However, for certain particular cases of the function $f$ and $n_i = 1$ (i.e. the scalar case $n=N$), we can easily compute the solution of the small-dimensional subproblem in algorithm (RCD-IHT). Indeed, for $x \in \rset^n$ let us define: \begin{align} \label{deltae} v^i(x) & = x + U_i h_i(x), \ \text{where} \ h_i(x) = \arg\min\limits_{h_i \in \rset^{}} f(x + U_ih_i) + \frac{\beta_i}{2}\norm{h_i}^2 \nonumber\\ \Delta^i(x) &= f(x-U_ix_i) + \frac{\beta_i}{2}\norm{x_i}^2 - f(v^{i}(x)) - \frac{\beta_i}{2}\norm{(v^{i}(x))_i - x_i}^2 \quad \forall i \in [n]. \end{align} Then, it can be seen that the iteration of (RCD-IHT) in the scalar case for the exact approximation $u_i^{e}(y_i;x,\beta_i)$ has the following form: \begin{equation*} x^{k+1}_{i_k} = \begin{cases} (v^{i_k}(x^k))_{i_k}, \ &\text{if} \ \Delta^{i_k}(x^k) \ge \lambda_{i_k} \\ 0 , \ &\text{if} \ \Delta^{i_k}(x^k) \le \lambda_{i_k}. \end{cases} \end{equation*} In general, if the function $f$ satisfies Assumption \ref{assump_grad_1}, computing $v^{i_k}(x^k)$ at each iteration of (RCD-IHT) requires the minimization of a one-dimensional smooth convex function, which can be efficiently performed using one-dimensional search algorithms. Let us analyze the least squares setting in order to highlight the simplicity of the iteration of algorithm (RCD-IHT) in the scalar case for the approximation $u_i^{e}(y_i;x,\beta_i)$. \begin{example} Let $A \in \rset^{m \times n}, b \in \rset^m$ and $f(x) = \frac{1}{2}\norm{Ax-b}^2$.
In this case (recall that we consider $n_i=1$ for all $i$) we have the following expression for $\Delta^{i}(x)$: $$\Delta^{i}(x) =\frac{1}{2}\norm{r-A_{i}x_{i}}^2+\frac{\beta_{i}}{2}\norm{x_{i}}^2 - \frac{1}{2}\left\lVert \left(I_m-\frac{A_{i}A_{i}^T}{\norm{A_{i}}^2+\beta_{i}}\right)r\right\rVert^2-\frac{\beta_{i}}{2}\left \lVert \frac{A_{i}^Tr}{\norm{A_{i}}^2+\beta_{i}}\right\rVert^2,$$ where $r = Ax-b$. Under these circumstances, the iteration of (RCD-IHT) has the following closed-form expression: \begin{equation} x^{k+1}_{i_k}= \begin{cases} x^k_{i_k} - \frac{A_{i_k}^Tr^k}{\norm{A_{i_k}}^2+\beta_{i_k}}, \ &\text{if} \ \Delta^{i_k}(x^k) \ge \lambda_{i_k} \\ 0 , \ &\text{if} \ \Delta^{i_k}(x^k) \le \lambda_{i_k}. \end{cases} \end{equation} \end{example} \noindent In the sequel we use the following notations for the entire history of index choices, the expected value of the objective function $f$ w.r.t. the entire history and for the support of the sequence $x^k$: \begin{equation*} \xi^k = \{i_0, \dots, i_{k-1} \}, \qquad f^k=\mathbb{E}[f(x^k)], \qquad I^k=I(x^k). \end{equation*} \noindent Due to the randomness of algorithm (RCD-IHT), at any iteration $k$ with $\lambda_{i_k} > 0$, the sequence $I^k$ changes if one of the following situations holds for some $j \in \mathcal{S}_{i_k}$: \begin{align*} (i)& \ x^k_{(j)} = 0 \ \text{and} \ (T^u_{i_k}(x^k))_{(j)} \neq 0 \\ (ii)& \ x^k_{(j)} \neq 0 \ \text{and} \ (T^u_{i_k}(x^k))_{(j)} = 0. \end{align*} \noindent In other words, at a given iteration $k$ with $\lambda_{i_k}>0$, we expect no change in the sequence $I^k$ of algorithm (RCD-IHT) if there is no index $j \in \mathcal{S}_{i_k}$ satisfying one of the above relations $(i)$ or $(ii)$.
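The scalar least-squares update from the example above can be checked numerically. The sketch below implements it for the exact approximation $u^e$, computing $\Delta^i(x)$ in closed form; all names are illustrative and not from an accompanying code base.

```python
import numpy as np

def exact_update(x, A, b, lam, beta, i):
    # Scalar (n_i = 1) (RCD-IHT) update with the exact approximation u^e
    # for f(x) = 0.5*||Ax - b||^2, following the closed form of the example.
    r = A @ x - b
    Ai = A[:, i]
    denom = Ai @ Ai + beta[i]
    h = -(Ai @ r) / denom                    # exact minimizer of f(x + U_i h) + beta_i/2 h^2
    # Delta^i(x): objective gain of keeping coordinate i versus zeroing it
    delta = (0.5 * np.linalg.norm(r - Ai * x[i]) ** 2 + 0.5 * beta[i] * x[i] ** 2
             - 0.5 * np.linalg.norm(r + Ai * h) ** 2 - 0.5 * beta[i] * h ** 2)
    x_new = x.copy()
    x_new[i] = x[i] + h if delta > lam[i] else 0.0
    return x_new
```

Here $r + A_i h = \left(I_m - A_iA_i^T/(\|A_i\|^2+\beta_i)\right)r$, so `delta` coincides with the closed-form expression for $\Delta^i(x)$ given in the example.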
We define the notion of \textit{change of $I^k$ in expectation} at iteration $k$ for algorithm (RCD-IHT) as follows: let $x^k$ be the sequence generated by (RCD-IHT); then the sequence $I^k=I(x^k)$ changes in expectation if the following situation occurs: \begin{equation}\label{expectation} \mathbb{E}[\abs{I^{k+1} \setminus I^k} + \abs{I^k \setminus I^{k+1}} \ | \ x^k] > 0, \end{equation} which implies (recall that we consider uniform probabilities for the index selection): \begin{align*} \mathbb{P}\left(\abs{I^{k+1} \setminus I^k} + \abs{I^k \setminus I^{k+1}} > 0 \ | \ x^k \right) \ge \frac{1}{N}. \end{align*} \noindent In the next section we show that algorithm (RCD-IHT) generates only a finite number of changes of $I^k$ in expectation and then we prove global convergence of this algorithm; in particular, we show that the limit points of the generated sequence converge to strong local minima from the class $\mathcal{L}_u$. \section{Global convergence analysis} \noindent In this section we analyze the descent properties of the previously introduced family of coordinate descent algorithms under Assumptions \ref{assump_grad_1} and \ref{approximation}. Based on these properties, we establish the nature of the limit points of the sequence generated by Algorithm (RCD-IHT). In particular, we derive that any accumulation point of this sequence is almost surely a local minimum which belongs to the class $\mathcal{L}_u$. Note that the classical results for any iterative algorithm used for solving general nonconvex problems state global convergence to stationary points, while for the $\ell_0$ regularized nonconvex and NP-hard problem \eqref{l0regular} we show that our family of algorithms has the property that the generated sequences converge to strong local minima. \noindent In order to prove almost sure convergence results for our family of algorithms, we use the following supermartingale convergence lemma of Robbins and Siegmund (see e.g.
\cite{PatNec:14}): \begin{lemma} \label{mart} Let $v_k, u_k$ and $\alpha_k$ be three sequences of nonnegative random variables satisfying the following conditions: \[ \mathbb{E}[v_{k+1} | {\cal F}_k] \leq (1+\alpha_k) v_k - u_k \;\; \forall k \geq 0 \;\; \text{a.s.} \;\;\; \text{and} \;\;\; \sum_{k=0}^\infty \alpha_k < \infty \;\; \text{a.s.}, \] where ${\cal F}_k$ denotes the collection $v_0, \dots, v_k, u_0, \dots, u_k$, $\alpha_0, \dots, \alpha_k$. Then, we have $\lim_{k \to \infty} v_k = v$ for a random variable $v \geq 0$ a.s. \; and \; $\sum_{k=0}^\infty u_k < \infty$ a.s. \end{lemma} \noindent Further, we analyze the convergence properties of algorithm (RCD-IHT). First, we derive a descent inequality for this algorithm. \begin{lemma} \label{descent_ramiht} Let $x^k$ be the sequence generated by (RCD-IHT) algorithm. Under Assumptions \ref{assump_grad_1} and \ref{approximation} the following descent inequality holds: \begin{align} \label{decrease} \mathbb{E}[F(x^{k+1})\;|\; x^k] \le F(x^k) - \mathbb{E}\left[\frac{\mu_{i_k}}{2}\norm{x^{k+1}-x^k}^2 \;|\; x^k \right]. \end{align} \end{lemma} \begin{proof} From Assumption \ref{approximation} we have: \begin{align*} F(x^{k+1}) + \frac{\mu_{i_k}}{2}\norm{x^{k+1}_{i_k}-x_{i_k}^k}^2 &\le u_{i_k}(x^{k+1}_{i_k}; x^k) + \norm{x^{k+1}}_{0,\lambda} \\ &\le u_{i_k}(x^{k}_{i_k}; x^k) + \norm{x^{k}}_{0,\lambda} \\ &\le f(x^k) + \norm{x^{k}}_{0,\lambda} = F(x^k). \end{align*} In conclusion, our family of algorithms belongs to the class of descent methods: \begin{align} \label{decrease_iter} F(x^{k+1}) & \le F(x^k) - \frac{\mu_{i_k}}{2}\norm{x^{k+1}_{i_k}-x_{i_k}^k}^2. \end{align} Taking expectation w.r.t. $i_k$ we get our descent inequality. \end{proof} \noindent We now prove the global convergence of the sequence generated by algorithm (RCD-IHT) to local minima which belong to the restricted set of local minimizers $\mathcal{L}_u$.
\begin{theorem} \label{convergence_rpamiht} Let $x^k$ be the sequence generated by algorithm (RCD-IHT). Under Assumptions \ref{assump_grad_1} and \ref{approximation} the following statements hold: \noindent $(i)$ There exists a scalar $\tilde{F}$ such that: $$ \lim\limits_{k \to \infty} F(x^{k})= \tilde{F} \ a.s. \quad \text{and} \quad \lim\limits_{k \to \infty} \norm{x^{k+1}-x^k} = 0 \ a.s.$$ \noindent $(ii)$ At each change of sequence $I^k$ in expectation we have the following relation: $$\mathbb{E}\left[\frac{\mu_{i_k}}{2}\norm{x^{k+1}-x^k}^2 \;|\; x^k \right] \ge \delta,$$ where $\delta= \frac{1}{N}\min\left\{\min\limits_{i \in [N]: \lambda_i>0} \frac{\mu_i\lambda_i }{M_i}, \min\limits_{i \in [N], j \in {\mathcal S}_i \cap \text{supp}(x^0)} \frac{ \mu_i}{2} |x^0_{(j)}|^2 \right\} > 0.$ \noindent $(iii)$ The sequence $I^k$ changes a finite number of times as $k \to \infty$ almost surely. The sequence $\norm{x^k}_0$ converges to some $\norm{x^*}_0$ almost surely. Furthermore, any limit point of the sequence $x^k$ belongs to the class of strong local minimizers $\mathcal{L}_u$ almost surely. \end{theorem} \begin{proof} \noindent $(i)$ From the descent inequality given in Lemma \ref{descent_ramiht} and Lemma \ref{mart} we have that there exists a scalar $\tilde{F}$ such that $\lim_{k \to \infty} F(x^{k})= \tilde{F}$ almost surely. Consequently, we also have $\lim_{k \to \infty} F(x^k) - F(x^{k+1}) = 0$ almost surely and since our method is of descent type, then from \eqref{decrease_iter} we get $\frac{\mu_{i_k}}{2}\norm{x^{k+1} - x^k}^2 \le F(x^k) - F(x^{k+1})$, which leads to $\lim_{k \to \infty} \norm{x^{k+1}-x^k} = 0$ almost surely. \noindent $(ii)$ For simplicity of the notation we denote $x^{+} = x^{k+1}$, $x = x^k$ and $i=i_k$. First, we show that any nonzero component of the sequence generated by (RCD-IHT) is bounded below by a positive constant. Let $x \in \rset^n$ and $i \in [N]$.
For any $j \in \text{supp}(T^{u}_i(x))$, we denote by $(T^{u}_i(x))_{(j)}$ the $j$th component of the minimizer $T^{u}_i(x)$ of the function $u_i(y_i;x) + \lambda_i\norm{y_i}_{0}$. Let us define $y^+ = x + U_i(T^{u}_i(x)-x_i)$. Then, for any $j \in \text{supp}(T^{u}_i(x))$ the following optimality condition holds: \begin{align} \label{nablaui} \nabla_{(j)} u_i(y^+_i;x)=0. \end{align} \noindent On the other hand, given $j \in \text{supp}(T^{u}_i(x))$, from the definition of $T^{u}_i(x)$ we get: \begin{align*} u_i(y^+_i;x) + \lambda_i \norm{y^+_i}_0 \le u_i(y^{+}_i-U_{(j)}y^+_{(j)}; x) + \lambda_i \norm{y^{+}_i-U_{(j)}y^{+}_{(j)}}_0. \end{align*} Subtracting $\lambda_i \norm{y^{+}_i-U_{(j)}y^{+}_{(j)}}_{0}$ from both sides leads to: \begin{equation} \label{ineq_iter} u_i(y^+_i;x) + \lambda_i \le u_i(y^{+}_i-U_{(j)}y^+_{(j)}; x) . \end{equation} Further, if we apply the Lipschitz gradient relation given in Assumption \ref{approximation} $(iii)$ to the right-hand side and use the optimality conditions for the unconstrained problem solved at each iteration, we get: \begin{align*} u_i(y^{+}_i-U_{(j)}y^+_{(j)}; x) &\le u_i(y^{+}_i;x) - \langle \nabla_{(j)} u_i(y^{+}_i;x),y^{+}_{(j)}\rangle\ + \frac{M_{i}}{2} |y^{+}_{(j)}|^2\\ & \overset{\eqref{nablaui}}{=} u_i(y^{+}_i;x) + \frac{M_{i}}{2} |y^{+}_{(j)}|^2. \end{align*} Combining with the left hand side of \eqref{ineq_iter} we get: \begin{equation}\label{bound_x_k} |(T^{u}_i(x))_{(j)}|^2 \ge \frac{2\lambda_{i}}{M_{i}} \qquad \forall j \in \text{supp}(T^{u}_i(x)). \end{equation} Setting $x = x^k$ for $k \ge 0$, it can be easily seen that, for any $j \in \text{supp}(x^k_i)$ and $i \in [N]$, we have: \begin{equation*} \abs{x^k_{(j)}}^2 \begin{cases} \ge \frac{2\lambda_{i}}{M_i}, &\text{if} \quad x^k_{(j)} \neq 0 \quad \text{and} \quad i \in \xi^k \\ = \abs{x^{0}_{(j)}}^2, &\text{if} \quad x^k_{(j)} \neq 0 \quad \text{and} \quad i \notin \xi^k.
\end{cases} \end{equation*} \noindent Further, assume that at some iteration $k > 0$ a change of the sequence $I^k$ in expectation occurs. Thus, there is an index $j \in [n]$ (and block $i$ containing $j$) such that either $\left( x^k_{(j)}=0 \ \text{and} \ \left(T^{u}_i(x^{k})\right)_{(j)} \neq 0\right)$ or $\left( x^k_{(j)} \neq 0 \ \text{and} \ \left(T^{u}_i(x^{k})\right)_{(j)} = 0\right)$. Analyzing these cases we have: \begin{equation*} \norm{T^{u}_i(x^k)-x^k_i}^2 \ge \left | \left(T^{u}_i(x^k)\right)_{(j)}-x^k_{(j)} \right |^2 \;\; \begin{cases} \ge \frac{2\lambda_{i}}{M_{i}} &\text{if} \quad x^k_{(j)} = 0 \\ \ge \frac{2\lambda_{i}}{M_{i}} &\text{if} \quad x^k_{(j)} \neq 0 \ \text{and} \ i \in \xi^k \\ = |x^{0}_{(j)}|^2 &\text{if} \quad x^k_{(j)} \neq 0 \ \text{and} \ i \notin \xi^k. \end{cases} \end{equation*} \noindent Observing that under uniform probabilities we have: $$\mathbb{E}\left[\frac{\mu_{i_k}}{2}\norm{x^{k+1}-x^k}^2| x^k \right] = \frac{1}{N}\sum\limits_{i=1}^N\frac{\mu_i}{2}\norm{T^{u}_i(x^k)-x^k_i}^2,$$ we can conclude that at each change of the sequence $I^k$ in expectation we get: \begin{align*} \mathbb{E}\left[\frac{\mu_{i_k}}{2}\norm{x^{k+1}-x^k}^2 | x^k\right] \ge \frac{1}{N}\min\left\{\min\limits_{i \in [N]: \lambda_i>0} \frac{\mu_i\lambda_i }{M_i}, \min\limits_{i \in [N], j \in {\mathcal S}_i \cap \text{supp}(x^0)} \frac{ \mu_i}{2} |x^0_{(j)}|^2 \right\}. \end{align*} \noindent $(iii)$ From $\lim\limits_{k \to \infty} \norm{x^{k+1}-x^k} = 0$ a.s. we have $\lim\limits_{k \to \infty} \mathbb{E}\left[\norm{x^{k+1}-x^k} \ | \ x^k\right] = 0$ a.s. On the other hand from part $(ii)$ we have that if the sequence $I^k$ changes in expectation, then $\mathbb{E}[\norm{x^{k+1}-x^{k}}^2 \ | \ x^{k}] \ge \delta > 0.$ These facts imply that there are a finite number of changes in expectation of the sequence $I^k$, i.e. there exists $K>0$ such that for any $k > K$ we have $I^k = I^{k+1}$.
\noindent Further, if the sequence $I^k$ is constant for $k > K$, then we have $I^k=I^*$ and $\norm{x^k}_{0,\lambda} = \norm{x^*}_{0,\lambda}$ for any vector $x^*$ satisfying $I(x^*)=I^*$. Also, for $k> K$ algorithm (RCD-IHT) is equivalent to the classical random coordinate descent method \cite{HonWan:13}, and thus shares its convergence properties; in particular any limit point of the sequence $x^k$ is a minimizer on the coordinates $I^*$ for $\min_{x \in S_{I^*}} f(x)$. Therefore, if the sequence $I^k$ is fixed, then we have for any $k > K$ and $i_k \in I^k$: \begin{equation}\label{fixedsupp} u_{i_k}(x^{k+1}_{i_k};x^k)+ \norm{x^{k+1}}_{0,\lambda} \le u_{i_k}(y_{i_k};x^k)+\norm{x^{k} + U_{i_k}(y_{i_k}-x^k_{i_k})}_{0,\lambda} \quad \forall y_{i_k} \in \rset^{n_{i_k}}. \end{equation} \noindent On the other hand, denoting with $x^*$ an accumulation point of $x^k$, taking the limit in \eqref{fixedsupp} and using that $\norm{x^k}_{0,\lambda} = \norm{x^*}_{0,\lambda}$ as $k \to \infty$, we obtain the following relation: $$F(x^*)\le \min_{y_{i} \in \rset^{n_i}} u_i(y_i;x^*)+\norm{x^*+U_i(y_i-x^*_i)}_{0,\lambda} \quad a.s.$$ for all $i \in [N]$ and thus $x^*$ is the minimizer of the previous right-hand-side expression. Using the definition of local minimizers from the set $\mathcal{L}_u$, we conclude that any limit point $x^*$ of the sequence $x^k$ belongs to this set, which proves our statement. \end{proof} \noindent It is important to note that the classical results for any iterative algorithm used for solving nonconvex problems usually state global convergence to stationary points, while for our algorithms we were able to prove global convergence to local minima of our nonconvex and NP-hard problem \eqref{l0regular}. Moreover, if $\lambda_i=0$ for all $i \in [N]$, then the optimization problem \eqref{l0regular} becomes convex and we see that our convergence results also cover this setting.
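The finiteness of support changes and the monotone decrease of $F(x^k)$ can be observed empirically. The simulation below runs the quadratic-approximation variant of (RCD-IHT) on a random least squares instance with scalar blocks; it is an illustrative sketch with our own naming, not an implementation supplied with the paper.

```python
import numpy as np

def run_rcd_iht(A, b, lam, M, iters, seed=0):
    # Run the quadratic-approximation (RCD-IHT) sketch on f(x) = 0.5*||Ax - b||^2
    # (scalar blocks) and record F(x^k) and the support I(x^k) at every iteration.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    F = lambda z: 0.5 * np.linalg.norm(A @ z - b) ** 2 + lam * np.count_nonzero(z)
    F_hist, supports = [F(x)], [tuple(np.flatnonzero(x))]
    for _ in range(iters):
        j = rng.integers(x.size)
        g = A[:, j] @ (A @ x - b)
        t = x[j] - g / M[j]
        x[j] = t if 0.5 * M[j] * t ** 2 > lam else 0.0
        F_hist.append(F(x))
        supports.append(tuple(np.flatnonzero(x)))
    return np.array(F_hist), supports
```

On a well-conditioned instance one observes that `F_hist` is non-increasing and that the recorded supports stop changing after finitely many iterations, after which the method behaves like plain random coordinate descent on the fixed support.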
\section{Rate of convergence analysis} \noindent In this section we prove the linear convergence in probability of the random coordinate descent algorithm (RCD-IHT) under the additional assumption of strong convexity for function $f$ with parameter $\sigma$ and for the scalar case, i.e. we assume $n_i=1$ for all $i \in [n] =[N]$. Note that, for algorithm (RCD-IHT) the scalar case is the most practical since it requires solving a simple one-dimensional convex subproblem, while for $n_i > 1$ it requires the solution of a small NP-hard subproblem at each iteration. First, let us recall that complexity results of random block coordinate descent methods for solving convex problems $f^* =\min_{x \in \rset^n} f(x)$, under convexity and Lipschitz gradient assumptions on the objective function, have been derived e.g. in \cite{HonWan:13}, where the authors showed sublinear rate of convergence for a general class of coordinate descent methods. Using a similar reasoning as in \cite{HonWan:13,NecPat:14a}, we obtain that the randomized version of the general block coordinate descent method, in the strongly convex case, exhibits a linear rate of convergence in expectation of the~form: \begin{equation*} \mathbb{E}[f(x^{k}) - f^*] \le \left(1-\theta\right)^k \left(f(x^{0}) - f^*\right), \end{equation*} where $\theta \in (0,1)$. Using the strong convexity property for $f$ we have: \begin{equation}\label{rcd_rate_of_conv2} \mathbb{E}\left[\norm{x^k - x^*} \right] \le \left(1- \theta\right)^{k/2} \sqrt{\frac{2}{\sigma}\left(f(x^{0}) - f^* \right)} \quad \forall x^* \in X_f^*, \end{equation} where we recall that we denote $X_f^* = \arg \min_{x \in \rset^n} f(x)$. For attaining an $\epsilon$-suboptimality this algorithm has to perform the following number of iterations: \begin{equation}\label{rcd_complexity2} k \ge \frac{2}{\theta} \log \frac{1}{\epsilon}\sqrt{\frac{2\left(f(x^0)-f^*\right)}{\sigma}}.
\end{equation} \noindent In order to derive the rate of convergence in probability for algorithm (RCD-IHT), we first define the following notion which is a generalization of relations \eqref{deltaq} and \eqref{deltae} for $u_i(y_i;x) = u_i^q(y_i;x,M_i)$ and $u_i(y_i;x) = u_i^e(y_i;x,\beta_i)$, respectively: \begin{align} \label{deltau1} & v^i(x) = x + U_i(h_i(x) - x_i), \quad \text{where} \quad h_i(x) = \arg\min\limits_{y_i \in \rset^{}} u_i(y_i;x) \\ &\Delta^i(x) = u_i(0;x) - u_i(h_i(x);x). \label{deltau2} \end{align} We make the following assumption on functions $u_i$ and consequently on $\Delta^i(x)$: \begin{assumption} \label{assump_delta} There exist some positive constants $C_i$ and $D_i$ such that the approximation functions $u_i$ satisfy for all $i \in [n]$: $$\abs{\Delta^i(x) - \Delta^i(z)} \le C_i\norm{x-z} + D_i\norm{x-z}^2 \quad \forall x \in \rset^n, z \in \mathcal{T}_f$$ and $$ \min_{z \in \mathcal{T}_f} \min\limits_{i \in [n]} \abs{\Delta^i(z) - \lambda_i} >0. $$ \end{assumption} \noindent Note that if $f$ is strongly convex, then the set $\mathcal{T}_f$ of basic local minima has a finite number of elements. Next, we show that this assumption holds for the most important approximation functions $u_i$ (recall that $u_i^q =u_i^Q$ in the scalar case $n_i=1$). \begin{lemma}\label{delta_xy} Under Assumption \ref{assump_grad_1} the following statements hold:\\ $(i)$ If we consider the separable quadratic approximation $u_i(y_i;x) = u_i^q(y_i;x,M_i)$,~then: $$ \abs{\Delta^i(x) - \Delta^i(z)} \le M_i v^i_{\max}\left(1 + \frac{L_f}{M_i}\right)\norm{x - z} + \frac{M_i}{2}\left(1 + \frac{L_f}{M_i}\right)^2 \norm{x - z}^2,$$ for all $x \in \rset^n$ and $z \in \mathcal{T}_f$, where we have defined $v^i_{\max}$ as follows: $v^i_{\max} = \max \{\norm{(v^i(y))_i} :\; y \in \mathcal{T}_f\}$ for all $i \in [n]$.
\\ $(ii)$ If we consider the exact approximation $u_i(y_i;x) = u_i^e(y_i;x,\beta_i)$, then we have: \[ \abs{\Delta^i(x) - \Delta^i(z)} \le \gamma^i\norm{x-z} + \frac{L_f+\beta_i}{2} \norm{x-z}^2,\] for all $x \in \rset^n$ and $z \in \mathcal{T}_f$, where we have defined $\gamma^i$ as follows: $\gamma^i = \max\{\norm{\nabla f(y-U_iy_i)}+ \norm{\nabla f(v^i(y))} + \beta_i\norm{y_i}: \; y \in \mathcal{T}_f \}$ for all $i \in [n]$. \end{lemma} \begin{proof} $(i)$ For the separable quadratic approximation $u_i(y_i;x) = u_i^q(y_i;x,M_i)$, using the definition of $\Delta^i(x)$ and $v^i(x)$ given in \eqref{deltau1}--\eqref{deltau2} (see also \eqref{deltaq}), we get: \begin{align} \label{deltaqq} \Delta^i(x) = \frac{M_i}{2}\norm{x_i-\frac{1}{M_i}\nabla_if(x)}^2 = \frac{M_i}{2}\norm{(v^i(x))_i}^2. \end{align} \noindent Then, since $\norm{\nabla_i f(x) - \nabla_i f(z)} \leq L_f \norm{x-z}$ and using the property of the norm $|\norm{a} - \norm{b}| \leq \norm{a-b}$ for any two vectors $a$ and $b$, we obtain: \begin{align*} \abs{\Delta^i(x) -\Delta^i(z)} &= \frac{M_i}{2}\left\lvert \norm{(v^i(x))_i}^2 - \norm{(v^i(z))_i}^2\right\rvert \\ & \le \frac{M_i}{2}\left\lvert \norm{(v^i(x))_i} - \norm{(v^i(z))_i}\right\rvert \; \left\lvert \norm{(v^i(x))_i} + \norm{(v^i(z))_i}\right\rvert \\ & \overset{\eqref{deltaqq}}{\le} \frac{M_i}{2}\left(1 + \frac{L_f}{M_i}\right)\norm{x - z} \left( 2\norm{(v^i(z))_i} + \left(1 + \frac{L_f}{M_i}\right)\norm{x - z}\right). \end{align*} \noindent $(ii)$ For the exact approximation $u_i(y_i;x) = u_i^e(y_i;x,\beta_i)$, using the definition of $\Delta^i(x)$ and $v^i(x)$ given in \eqref{deltau1}--\eqref{deltau2} (see also \eqref{deltae}), we get: \[ \Delta^i(x) = f(x-U_ix_i) - f(v^i(x)) + \frac{\beta_i}{2}\norm{x_i}^2 - \frac{\beta_i}{2}\norm{(v^i(x))_i - x_i}^2.
\] Then, using the triangle inequality we derive the following relation: \begin{align*} \abs{\Delta^i(x) - \Delta^i(z)} &\le \Big |f(x-U_ix_i)- f(z-U_iz_i) + f(v^i(z)) - f(v^i(x))\\ &\;\;\;\; + \frac{\beta_i}{2}\norm{(v^i(z))_i - z_i}^2 - \frac{\beta_i}{2}\norm{(v^i(x))_i - x_i}^2 \Big | +\Big\lvert\frac{\beta_i}{2}\norm{x_i}^2 - \frac{\beta_i}{2}\norm{z_i}^2\Big\rvert. \end{align*} \noindent For simplicity, we denote: \begin{align*} \delta_{1i}(x,z)= &f(x-U_ix_i)- f(z-U_iz_i) + f(v^i(z)) - f(v^i(x)) \\ & \;\; + \frac{\beta_i}{2}\norm{(v^i(z))_i - z_i}^2 - \frac{\beta_i}{2}\norm{(v^i(x))_i - x_i}^2\\ \delta_{2i}(x,z)=&\frac{\beta_i}{2}\norm{x_i}^2 - \frac{\beta_i}{2}\norm{z_i}^2. \end{align*} \noindent In order to bound $\Delta^i(x) - \Delta^i(z)$, it is sufficient to find upper bounds on $\abs{\delta_{1i}(x,z)}$ and $\abs{\delta_{2i}(x,z)}$. For a bound on $\abs{\delta_{1i}(x,z)}$ we use $\abs{\delta_{1i}(x,z)} = \max\{\delta_{1i}(x,z),-\delta_{1i}(x,z)\}$. Using the optimality conditions for the map $v^i(x)$ and the convexity of $f$ we obtain: \begin{align*} f(v^i(x)) & \ge f(v^i(z)) + \langle \nabla f(v^i(z)), v^i(x) - v^i(z)\rangle \nonumber\\ &= \! f(v^i(z)) \!+\! \langle \nabla f(v^i(z)), x \!-\! z\rangle \!+\! \langle \nabla_i f(v^i(z)), ((v^i(x))_i \!-\! x_i) \!-\! ((v^i(z))_i \!-\! z_i)\rangle \nonumber\\ &=\! f(v^i(z)\!) \!+\! \langle \nabla f(v^i(z)\!), x \!-\! z\rangle \!-\! \beta_i \langle (v^i(z)\!)_i \!-\! z_i,((v^i(x)\!)_i \!-\! x_i)\!-\! ((v^i(z)\!)_i \!-\! z_i) \rangle \nonumber\\ &= f(v^i(z)) + \langle \nabla f(v^i(z)), x-z\rangle + \frac{\beta_i}{2}\norm{(v^i(z))_i - z_i}^2 \nonumber\\ &\qquad + \frac{\beta_i}{2}\norm{(v^i(z))_i-z_i}^2 - \beta_i \langle (v^i(z))_i -z_i, (v^i(x))_i-x_i\rangle \nonumber\\ &=f(v^i(z)) + \langle \nabla f(v^i(z)), x-z\rangle + \frac{\beta_i}{2}\norm{(v^i(z))_i-z_i}^2 \nonumber \\ &\qquad + \frac{\beta_i}{2}\norm{(v^i(z))_i-z_i - ((v^i(x))_i-x_i)}^2 - \frac{\beta_i}{2}\norm{(v^i(x))_i - x_i}^2 \nonumber\\ & \ge \!
f(v^i(z)) \!+\! \frac{\beta_i}{2}\norm{(v^i(z))_i \!-\! z_i}^2 \!-\! \frac{\beta_i}{2}\norm{(v^i(x))_i \!-\! x_i}^2 \!-\! \norm{\nabla f(v^i(z))}\norm{x \!-\! z}, \end{align*} where in the last inequality we used the Cauchy-Schwarz inequality. On the other hand, from the global Lipschitz continuous gradient inequality we get: \begin{equation*} f(x-U_ix_i) \le f(z-U_iz_i) + \norm{\nabla f(z-U_iz_i)}\norm{x-z} + \frac{L_f}{2}\norm{x-z}^2. \end{equation*} \noindent From the previous two relations we obtain: \begin{equation}\label{delta} \delta_{1i}(x,z) \le \left( \norm{\nabla f(z-U_iz_i)}+ \norm{\nabla f(v^i(z))} \right) \norm{x-z} + \frac{L_f}{2}\norm{x-z}^2. \end{equation} \noindent In order to obtain a bound on $-\delta_{1i}(x,z)$ we observe that: \begin{align} & f(v^i(x)) + \frac{\beta_i}{2}\norm{(v^i(x))_i-x_i}^2 - f(v^i(z)) - \frac{\beta_i}{2}\norm{(v^i(z))_i -z_i}^2 \nonumber\\ &\quad \le f(x + U_i((v^i(z))_i-z_i)) - f(v^i(z)) \nonumber \\ & \quad \le \norm{\nabla f(v^i(z))}\norm{x-z} + \frac{L_f}{2}\norm{x-z}^2,\label{bound_aux3} \end{align} where in the last inequality we used the Lipschitz gradient relation and the Cauchy-Schwarz inequality. Also, from the convexity of $f$ and the Cauchy-Schwarz inequality we get: \begin{equation}\label{bound_aux4} f(x-U_ix_i) \ge f(z-U_iz_i) - \norm{\nabla f(z-U_iz_i)}\norm{x-z}. \end{equation} Combining now the bounds \eqref{bound_aux3} and \eqref{bound_aux4} we obtain: \begin{equation}\label{-delta} -\delta_{1i}(x,z)\le \left(\norm{\nabla f(z-U_iz_i)}+ \norm{\nabla f(v^i(z))}\right)\norm{x-z} + \frac{L_f}{2}\norm{x-z}^2. \end{equation} Therefore, from \eqref{delta} and \eqref{-delta} we obtain a bound on $\abs{\delta_{1i}(x,z)}$: \begin{equation}\label{delta1abs} \abs{\delta_{1i} (x,z)}\le \left( \norm{\nabla f(z-U_iz_i)}+ \norm{\nabla f(v^i(z))} \right) \norm{x-z} + \frac{L_f}{2}\norm{x-z}^2.
\end{equation} \noindent Regarding the second quantity $\delta_{2i}(x,z)$, we observe that: \begin{align} \abs{\delta_{2i}(x,z)} &= \frac{\beta_i}{2} \Big\lvert\norm{x_i}+\norm{z_i} \Big\rvert \Big\lvert \norm{x_i}-\norm{z_i}\Big\rvert = \frac{\beta_i}{2}\Big\lvert \norm{x_i}-\norm{z_i}+2\norm{z_i}\Big\rvert \Big\lvert\norm{x_i}-\norm{z_i}\Big\rvert \nonumber\\ & \le \frac{\beta_i}{2}\left(\norm{x - z}+2\norm{z_i}\right) \norm{x - z}.\label{delta2abs} \end{align} From the upper bounds on $\abs{\delta_{1i} (x,z)}$ and $\abs{\delta_{2i} (x,z)}$ given in \eqref{delta1abs} and \eqref{delta2abs}, respectively, we obtain our result. \end{proof} \noindent We further show that the second part of Assumption \ref{assump_delta} holds for the most important approximation functions $u_i$. \begin{lemma}\label{lemma_alpha} Under Assumption \ref{assump_grad_1} the following statements hold:\\ $(i)$ If we consider the separable quadratic approximation $u_i(y_i;x) = u_i^q(y_i;x,M_i)$, then for any fixed $z \in \mathcal{T}_f$ there exist at most two values of the parameter $M_i$ satisfying $\abs{\Delta^i(z) - \lambda_i} = 0$.\\ $(ii)$ If we consider the exact approximation $u_i(y_i;x) = u_i^e(y_i;x,\beta_i)$, then for any fixed $z \in \mathcal{T}_f$ there exists at most one $\beta_i$ satisfying $\abs{\Delta^i(z) - \lambda_i} = 0$.\\ \end{lemma} \begin{proof} $(i)$ For the approximation $u_i(y_i;x) = u_i^q(y_i;x,M_i)$ we have: $$\Delta^i(z) = \frac{M_i}{2}\norm{z_i - \frac{1}{M_i}\nabla_i f(z)}^2.$$ Thus, we observe that $\Delta^i(z) = \lambda_i$ is equivalent to the following relation: $$ \frac{\norm{z_i}^2 }{2}M_i^2 - \left( \langle \nabla_i f(z), z_i\rangle +\lambda_i \right)M_i + \frac{\norm{\nabla_i f(z)}^2}{2} = 0,$$ which holds for at most two values of $M_i$.
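\noindent Explicitly, if $z_i \neq 0$, the quadratic formula gives the two candidate values \[ M_i = \frac{\langle \nabla_i f(z), z_i\rangle + \lambda_i \pm \sqrt{\left(\langle \nabla_i f(z), z_i\rangle + \lambda_i\right)^2 - \norm{z_i}^2\norm{\nabla_i f(z)}^2}}{\norm{z_i}^2}, \] which are real only when the discriminant is nonnegative; if $z_i = 0$, the equation is linear in $M_i$ and admits at most one solution.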
\noindent $(ii)$ For the approximation $u_i(y_i;x) = u_i^e(y_i;x,\beta_i)$ we have: $$\Delta^i(z) = f(z-U_iz_i) + \frac{\beta_i}{2}\norm{z_i}^2 - f(v^i_{\beta}(z)) - \frac{\beta_i}{2}\norm{h^i_{\beta}(z) - z_i}^2,$$ where $v^i_{\beta}(z)$ and $h^i_{\beta}(z)$ are defined as in \eqref{deltau1} corresponding to the exact approximation. Assume, by contradiction, that there exist two distinct constants satisfying $\Delta^i(z) = \lambda_i$; without loss of generality, denote them $\beta_i > \gamma_i > 0$. In other words, we have: $$ \frac{\beta_i}{2}\norm{z_i}^2 - f(v^i_{\beta}(z)) - \frac{\beta_i}{2}\norm{h^i_{\beta}(z) - z_i}^2 = \frac{\gamma_i}{2}\norm{z_i}^2 - f(v^i_{\gamma}(z)) - \frac{\gamma_i}{2}\norm{h^i_{\gamma}(z) - z_i}^2.$$ We analyze two possible cases. Firstly, if $z_i = 0$, then the above equality leads to the following relation: \begin{align*} f(v^i_{\beta}(z)) + \frac{\beta_i}{2}\norm{h^i_{\beta}(z)}^2 &= f(v^i_{\gamma}(z)) + \frac{\gamma_i}{2}\norm{h^i_{\gamma}(z)}^2 \\ & \le f(v^i_{\beta}(z)) + \frac{\gamma_i}{2}\norm{h^i_{\beta}(z)}^2, \end{align*} which implies that $\beta_i \le \gamma_i$, which is a contradiction. Secondly, assuming $z_i \neq 0$, we observe from the optimality of $h^i_{\beta}(z)$ that: \begin{equation}\label{ineq1} \frac{\beta_i}{2}\norm{z_i}^2 - f(v^i_{\beta}(z)) - \frac{\beta_i}{2}\norm{h^i_{\beta}(z) - z_i}^2 \ge \frac{\beta_i}{2}\norm{z_i}^2 - f(z). \end{equation} On the other hand, taking into account that $z \in \mathcal{T}_f$ we have: \begin{equation}\label{ineq2} \frac{\gamma_i}{2}\norm{z_i}^2 - f(v^i_{\gamma}(z)) - \frac{\gamma_i}{2}\norm{h^i_{\gamma}(z) - z_i}^2 \le \frac{\gamma_i}{2}\norm{z_i}^2 - f(z). \end{equation} From \eqref{ineq1} and \eqref{ineq2} we get $\beta_i \le \gamma_i$, thus arriving at the same contradiction.
\end{proof} \noindent We use the following notations: \begin{align*} C_{\max} = \max_{1\le i \le n} C_i,\quad D_{\max} = \max_{1\le i \le n} D_i,\quad \tilde{\alpha} &= \min_{z \in \mathcal{T}_f} \min\limits_{i \in [n]} \abs{\Delta^i(z) - \lambda_i}. \end{align*} \noindent Since the set of basic local minima $\mathcal{T}_f$ has finite cardinality for strongly convex functions $f$, there is a finite number of possible values for $\abs{\Delta^i(z) - \lambda_i}$. Therefore, from the previous lemma we obtain that $\tilde{\alpha}=0$ only for a finite number of values of the parameters $(M_i,\mu_i)$ of the approximations $u_i = u_i^q$ or $u_i=u_i^e$. We can reason in a similar fashion for general approximations $u_i$ satisfying Assumption \ref{approximation}, i.e. $\tilde{\alpha}=0$ only for a finite number of values of the parameters $(M_i,\mu_i)$. In conclusion, if the parameters $(M_i,\mu_i)$ of the approximations $u_i$ are chosen randomly at the initialization stage of our algorithm, then $\tilde{\alpha}>0$ almost surely. \noindent Further, we state the linear rate of convergence with high probability for algorithm (RCD-IHT). Our analysis employs ideas from the convergence proof of the deterministic iterative hard thresholding method in \cite{Lu:12}. However, the random nature of our family of methods and the properties of the approximation functions $u_i$ require a new approach. We use the notation $k_p$ for the iterations at which a change in expectation of $I^k$ occurs, as given in the previous section. We also denote by $F^*$ the global optimal value of our original $\ell_0$ regularized problem \eqref{l0regular}. \begin{theorem} Let $x^k$ be the sequence generated by the family of algorithms (RCD-IHT) under Assumptions \ref{assump_grad_1}, \ref{approximation} and \ref{assump_delta} and the additional assumption of strong convexity of $f$ with parameter $\sigma$. Denote by $\kappa$ the number of changes in expectation of $I^k$ as $k \to \infty$.
Let $x^*$ be some limit point of $x^k$ and $\rho>0$ be some confidence level. Considering the scalar case $n_i=1$ for all $i \in [n]$, the following statements hold: \noindent \textit{(i)} The number of changes in expectation $\kappa$ of $I^k$ is bounded by $\left \lceil \frac{ \mathbb{E} \left [ F(x^0) - F(x^*) \right] }{\delta} \right \rceil$, where $\delta$ is specified in Theorem \ref{convergence_rpamiht} $(ii)$. \noindent \textit{(ii)} The sequence $x^k$ converges linearly in the objective function values with high probability, i.e. it satisfies $\mathbb{P}\left(F(x^k) - F(x^*) \le \epsilon \right) \ge 1-\rho$ for $k \ge \frac{1}{\theta}\log\frac{\tilde{\omega}}{\rho\epsilon}$, where $\tilde{\omega}=2^{\omega}(F(x^0)-F^*)$, with $\omega = \max\limits_{t \in \rset} \left\{ \alpha t -\beta t^2 : 0 \le t \le \left\lfloor\frac{\mathbb{E}[F(x^0) - F(x^*)]}{\delta}\right\rfloor \right\}, \beta = \frac{\delta}{2(F(x^0)-F^*)}$, $\alpha = \left(\log \left[2(F(x^0) - F^*)\right] + 2\log\frac{2 n }{\sqrt{\sigma}\xi} - \frac{\delta}{2 (F(x^0)-F^*)} + \theta \right)$ and $\xi=\frac{1}{2}\left(\sqrt{\frac{C_{\max}^2}{D_{\max}^2} + \frac{\tilde{\alpha}}{D_{\max}}} - \frac{C_{\max}}{D_{\max}}\right)$. \end{theorem} \begin{proof} $(i)$ From \eqref{decrease} and Theorem \ref{convergence_rpamiht} $(ii)$ it can be easily seen that: \begin{align*} \delta & \le \mathbb{E}\left[\frac{\mu_{i_{k_p}}}{2}\norm{x^{k_p+1}-x^{k_p}}^2 \Big| x^{k_p}\right] \le F(x^{k_p}) - \mathbb{E}[F(x^{k_p+1})|x^{k_p}] \\ & \leq F(x^{k_p}) - \mathbb{E}[F(x^{k_{p+1}})|x^{k_p}]. \end{align*} Taking expectation in this relation w.r.t. the entire history $\xi^{k_p}$ we get the bound: $\delta \le \mathbb{E}\left[ F(x^{k_p}) - F(x^{k_{p+1}}) \right].$ Further, summing up over $p \in [\kappa]$ we have: $$\kappa \delta \le \mathbb{E}\left[F(x^{k_1}) - F(x^{k_{\kappa}+1})\right]\le \mathbb{E}\left[F(x^0) - F(x^*)\right],$$ i.e. we have proved the first part of our theorem.
\noindent $(ii)$ In order to establish the linear rate of convergence in probability of algorithm (RCD-IHT), we first derive a bound on the number of iterations performed between two changes in expectation of $I^k$. Second, we derive a bound on the number of iterations performed after the support is fixed (a similar analysis for the deterministic iterative hard thresholding method was given in \cite{Lu:12}). Combining these two bounds, we obtain the linear convergence of our algorithm. Recall that for any $p \in [\kappa]$, at iteration $k_{p}+1$, there is a change in expectation of $I^{k_p}$, i.e. \begin{equation*} \mathbb{E}[\abs{I^{k_{p}} \setminus I^{k_{p}+1}} + \abs{I^{k_{p}+1} \setminus I^{k_{p}}} \Big| \ x^{k_{p}}] > 0, \end{equation*} which implies that $$\mathbb{P}\left( \abs{I^{k_{p}} \setminus I^{k_{p}+1}} + \abs{I^{k_{p}+1} \setminus I^{k_{p}}} > 0 | x^{k_p}\right) = \mathbb{P}\left( I^{k_{p}} \neq I^{k_{p}+1} | x^{k_p}\right) \ge \frac{1}{n}$$ and furthermore \begin{equation}\label{probabilitysupp2} \mathbb{P}\left( \abs{I^{k_{p}} \setminus I^{k_{p}+1}} + \abs{I^{k_{p}+1} \setminus I^{k_{p}}} = 0 | x^{k_p} \right) = \mathbb{P}\left( I^{k_{p}} = I^{k_{p}+1} | x^{k_p}\right) \le \frac{n-1}{n}. \end{equation} \noindent Let $p$ be an arbitrary integer from $[\kappa]$. Denote $\hat{x}^* = \arg\min\limits_{x \in S_{I^{k_p}}} f(x)$ and $\hat{f}^*=\mathbb{E}\left[f(\hat{x}^*) \ | \ x^{k_{p-1}+1} \right]$. \noindent Assume that the number of iterations performed between two changes in expectation satisfies: \begin{equation}\label{iter_rcd_aux2} k_{p} - k_{p-1} > \frac{1}{\theta} \left(\log \left[2(F(x^0) - F^* - (p-1)\delta )\right] + 2\log\frac{2 n }{\sqrt{\sigma}\xi} \right) + 1, \end{equation} where we recall that $\sigma$ is the strong convexity parameter of $f$. For any $k \in [k_{p-1}+1, k_{p}]$ we denote $f^k =\mathbb{E}[f(x^k) \ | \ x^{k_{p-1}+1}]$.
From Lemma \ref{descent_ramiht} and Theorem \ref{convergence_rpamiht} we have: $$f^{k_{p-1}+1} - \hat{f}^* \le \mathbb{E}[F(x^{k_{p-1}+1}) \ | \ x^{k_{p-1}+1}] - \mathbb{E}[F(\hat{x}^*)\ | \ x^{k_{p-1}+1}] \le F(x^0) - (p-1)\delta - F^*,$$ \noindent so that we can claim that \eqref{iter_rcd_aux2} implies \begin{equation}\label{iter_rcd2} k_{p} - k_{p-1} > \frac{2}{\theta} \log \frac{ 2\sqrt{2(f^{k_{p-1}+1} - \hat{f}^*)}n}{\sqrt{\sigma}\xi} + 1 \ge \frac{2}{\theta} \log \frac{ \sqrt{2n(f^{k_{p-1}+1} - \hat{f}^*)}}{\sqrt{\sigma}\xi(\sqrt{n}-\sqrt{n-1})} +1. \end{equation} \noindent We show that under relation \eqref{iter_rcd2}, the probability \eqref{probabilitysupp2} does not hold. First, we observe that between two changes in expectation of $I^k$, i.e. $k \in [k_{p-1}+1, k_{p}]$, the algorithm (RCD-IHT) is equivalent to the randomized coordinate descent method \cite{HonWan:13, NecPat:14a} for strongly convex problems. Therefore, the method has a linear rate of convergence \eqref{rcd_rate_of_conv2}, which in our case is given by the following expression: \begin{equation*} \mathbb{E}\left[\norm{x^k\!-\!\hat{x}^*} \;|\; x^{k_{p-1}+1} \right]\!\le\!\left(1\!-\!\theta\right)^{(k-k_{p-1}-1)/2}\sqrt{\frac{2}{\sigma}\left(f^{k_{p-1}+1}-\hat{f}^*\right)}, \end{equation*} for all $k \in [k_{p-1}+1, k_{p}]$. Taking $k=k_{p}$, if we apply the complexity estimate \eqref{rcd_complexity2} and use the bound \eqref{iter_rcd2}, we obtain: $$\mathbb{E}\left[\norm{x^{k_{p}} - \hat{x}^*} \;|\; x^{k_{p-1}+1} \right] \le \left(1 - \theta\right)^{(k_{p} - k_{p-1}-1)/2} \!\sqrt{\frac{2}{\sigma}\left(f^{k_{p-1}+1}-\hat{f}^*\right)} \!<\! \!\xi\!\left(1\!-\!\sqrt{\frac{n\!-\!1}{n}}\right).$$ From the Markov inequality, it can be easily seen that we have: \begin{equation*} \mathbb{P}\left(\norm{x^{k_{p}} - \hat{x}^*} < \xi \;|\; x^{k_{p-1}+1} \right) = 1- \mathbb{P}\left(\norm{x^{k_{p}} - \hat{x}^*} \ge \xi \;|\; x^{k_{p-1}+1}\right) > \sqrt{1-\frac{1}{n}}.
\end{equation*} \noindent Let $i \in [n]$ such that $\lambda_i>0$. From Assumption \ref{assump_delta} and the definition of the parameter $\xi$ we see that the event $\norm{x^{k_{p}} - \hat{x}^*} < \xi$ implies: \begin{equation*} \abs{\Delta^i(x^{k_p}) - \Delta^i(\hat{x}^*)} \le C_{\max}\norm{x^{k_p}-\hat{x}^*} + D_{\max}\norm{x^{k_p}-\hat{x}^*}^2 < \tilde{\alpha} \le \abs{\Delta^i(\hat{x}^*) - \lambda_i}. \end{equation*} \noindent The first and the last terms from the above inequality further imply: \begin{equation*} \begin{cases} \abs{\Delta^i(x^{k_p})} > \lambda_i, & \text{if} \quad \abs{\Delta^i(\hat{x}^*)} > \lambda_i\\ \abs{\Delta^i(x^{k_p})} < \lambda_i, & \text{if} \quad \abs{\Delta^i(\hat{x}^*)} < \lambda_i, \end{cases} \end{equation*} or equivalently $I^{k_p+1} = \hat{I}^* = \left\{ j \in [n]: \lambda_j=0 \right\} \cup \left\{ i \in [n]: \lambda_i>0, \abs{\Delta^i(\hat{x}^*)} > \lambda_i \right\}$. \noindent In conclusion, if \eqref{iter_rcd2} holds, then we have: \begin{equation*} \mathbb{P}\left( I^{k_{p}+1} = \hat{I}^* \;|\; x^{k_{p-1}+1} \right) > \sqrt{1-\frac{1}{n}}. \end{equation*} Applying the same procedure as before for iteration $k = k_{p} - 1$ we obtain: \begin{equation*} \mathbb{P}\left( I^{k_{p}} = \hat{I}^* \;|\; x^{k_{p-1}+1}\right) > \sqrt{1-\frac{1}{n}}. \end{equation*} \noindent Considering the events $\{I^{k_{p}} = \hat{I}^*\}$ and $\{I^{k_{p}+1} = \hat{I}^*\}$ to be independent (according to the definition of $k_p$), we have: \begin{equation*} \mathbb{P}\left( \left\{I^{k_{p}+1} = \hat{I}^*\right\} \cap \left\{I^{k_{p}} = \hat{I}^*\right\} \;|\; x^{k_{p-1}+1} \right) = \mathbb{P}\left( I^{k_{p}+1} = I^{k_{p}} \;|\; x^{k_{p-1}+1} \right) > \frac{n-1}{n}, \end{equation*} which contradicts the assumption $\mathbb{P}\left(I^{k_{p}} = I^{k_{p}+1} \ | \ x^{k_{p}}\right) \le \frac{n-1}{n}$ (see \eqref{probabilitysupp2} and the definition of $k_p$ regarding the support of $x$).
\noindent Therefore, between two changes of support the number of iterations is bounded by: \begin{equation*} k_{p} - k_{p-1} \le \frac{1}{\theta} \left(\log \left[2(F(x^0) - F^* - (p-1)\delta )\right] + 2\log\frac{2 n }{\sqrt{\sigma}\xi} \right) +1. \end{equation*} We can further derive the following: \begin{align*} &\frac{1}{\theta} \left(\log \left[2(F(x^0) - F^* - (p-1)\delta )\right] + 2\log\frac{2 n }{\sqrt{\sigma}\xi} \right) \\ &= \frac{1}{\theta} \left(\log \left[2 (F(x^0) - F^*)\left(1 - \frac{(p-1)\delta}{F(x^0)-F^*} \right)\right] + 2\log\frac{2 n }{\sqrt{\sigma}\xi} \right) \\ & = \frac{1}{\theta} \left(\log \left[2 (F(x^0) - F^*)\right] + \log\left[1 - \frac{(p-1)\delta}{F(x^0)-F^*}\right] + 2\log\frac{2 n }{\sqrt{\sigma}\xi} \right) \\ &\le \frac{1}{\theta} \left(\log \left[2(F(x^0) - F^*)\right] - \frac{(p-1)\delta}{F(x^0)-F^*} + 2\log\frac{2 n }{\sqrt{\sigma}\xi} \right), \end{align*} \noindent where we used the inequality $\log(1-t) \le -t$ for any $t \in (0,1)$. Denoting by $k_\kappa$ the number of iterations until the last change of support, we have: \begin{align*} k_\kappa & \le \sum\limits_{p=1}^{\kappa}\frac{1}{\theta} \left(\log \left[2 (F(x^0) - F^*)\right] - \frac{(p-1)\delta}{F(x^0)-F^*} + 2\log\frac{2 n }{\sqrt{\sigma}\xi} \right) +1 \\ & = \kappa \frac{1}{\theta}\left(\log \left[2(F(x^0) - F^*)\right] + 2\log\frac{2 n }{\sqrt{\sigma}\xi} + \frac{\delta}{2 (F(x^0)-F^*)} + \theta \right) - \frac{\kappa^2}{\theta} \underbrace{\frac{\delta}{2(F(x^0)-F^*)}}_{\beta}. \end{align*} \noindent Once the support is fixed (i.e. after $k_\kappa$ iterations), in order to reach some $\epsilon$-local minimum in probability with some confidence level $\rho$, the algorithm (RCD-IHT) has to perform additionally another $$\frac{1}{\theta}\log \frac{f^{k_\kappa+1}-f(x^*)}{\epsilon \rho}$$ iterations, where we again used \eqref{rcd_complexity2} and the Markov inequality.
Taking into account that the iteration $k_\kappa$ is the largest possible integer at which the support of the sequence $x^k$ could change, we can bound: $$f^{k_\kappa + 1}- f(x^*) = \mathbb{E}[F(x^{k_\kappa + 1})- F(x^*)] \le F(x^0) - F^* - \kappa\delta.$$ \noindent Thus, we obtain: \begin{align*} \frac{1}{\theta}&\log \frac{f^{k_\kappa+1}-f(x^*)}{\epsilon \rho} \le \frac{1}{\theta}\log \frac{F(x^0) - F^* - \kappa\delta}{\epsilon \rho} \\ &\le \frac{1}{\theta}\left(\log \left[(F(x^0) - F^*)\left(1 - \frac{\kappa\delta}{F(x^0) - F^*}\right)\right] - \log \epsilon \rho \right)\\ &\overset{\log (1-t) \leq -t}{\le} \frac{1}{\theta}\left(\log (F(x^0) - F^*) - \frac{\kappa\delta}{F(x^0) - F^*} - \log \epsilon \rho \right)\\ &\le \frac{1}{\theta}\left(\log \frac{F(x^0) - F^*}{\epsilon \rho } - \frac{\kappa\delta}{F(x^0) - F^*} \right). \end{align*} Adding up this quantity and the upper bound on $k_{\kappa}$, we get that the algorithm (RCD-IHT) has to perform at most $$\frac{1}{\theta} \left(\alpha\kappa - \beta \kappa^2 + \log \frac{F(x^0) - F^*}{\epsilon \rho }\right)\le \frac{1}{\theta} \left(\omega + \log \frac{F(x^0) - F^*}{\epsilon \rho }\right)$$ iterations in order to attain an $\epsilon$-suboptimal point with probability at least $1-\rho$, which proves the second statement of our theorem. \end{proof} \noindent Note that we have obtained global linear convergence for our family of random coordinate descent methods on the class of $\ell_0$ regularized problems with strongly convex objective function $f$. \section{Random data experiments on sparse learning} In this section we analyze the practical performance of our family of algorithms (RCD-IHT) and compare it with that of algorithm (IHTA) \cite{Lu:12}. We perform several numerical tests on sparse learning problems with randomly generated data. All algorithms were implemented in Matlab and the numerical simulations were performed on a PC with an Intel Xeon E5410 CPU and 8\,GB of RAM.
\noindent Sparse learning represents a collection of learning methods which seek a tradeoff between some goodness-of-fit measure and the sparsity of the result, the latter property allowing better interpretability. One of the models widely used in machine learning and statistics is the linear model (least-squares setting). Thus, in the first set of tests we consider the sparse linear least-squares formulation: $$\min\limits_{x \in \rset^n} F(x) \quad \left(=\frac{1}{2}\norm{Ax-b}^2 + \lambda \norm{x}_0 \right),$$ where $A \in \rset^{m \times n}$, $b \in \rset^m$ and $\lambda >0$. We analyze the practical efficiency of our algorithms in terms of the probability of reaching a global optimal point. Due to the difficulty of finding the global solution of this problem, we consider a small model with $m=6$ and $n=12$. For each penalty parameter $\lambda$, ranging from small values (0.01) to large values (2), we ran the family of algorithms (RCD-IHT), with the separable quadratic approximation (denoted (RCD-IHT-$u^q$)) and with the exact approximation (denoted (RCD-IHT-$u^e$)), as well as (IHTA) \cite{Lu:12}, from 100 randomly generated (with random support) initial vectors. The number of runs out of 100 in which each method found the global optimum is given in Table \ref{tabel2}. We observe that for all values of $\lambda$ our algorithms (RCD-IHT-$u^q$) and (RCD-IHT-$u^e$) are able to identify the global optimum with a rate of success superior to algorithm (IHTA), and for extreme values of $\lambda$ our algorithms perform much better than (IHTA).
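\noindent To make the update rule concrete, the following minimal Python sketch (illustrative only; the reported experiments were implemented in Matlab, and all function and variable names below are ours) implements the coordinate hard-thresholding step of (RCD-IHT-$u^q$) for this least-squares model, with $M_i = \norm{a_i}^2$ taken as the coordinate-wise Lipschitz constant of $f(x)=\frac{1}{2}\norm{Ax-b}^2$:

```python
import random

def grad_coord(A, b, x, i):
    """i-th partial derivative of f(x) = 0.5*||Ax - b||^2."""
    resid = [sum(Ar[j] * x[j] for j in range(len(x))) - br
             for Ar, br in zip(A, b)]
    return sum(Ar[i] * r for Ar, r in zip(A, resid))

def rcd_iht_step(A, b, x, i, lam):
    """One coordinate update for min_x 0.5*||Ax - b||^2 + lam*||x||_0
    using the separable quadratic approximation with M_i = ||A[:, i]||^2."""
    M = sum(Ar[i] ** 2 for Ar in A)          # coordinate-wise Lipschitz constant
    t = x[i] - grad_coord(A, b, x, i) / M    # minimiser of the quadratic model
    x = list(x)
    # Delta^i(x) = 0.5*M*t^2 is the model decrease from keeping coordinate i;
    # the coordinate is zeroed whenever this decrease does not exceed lam.
    x[i] = t if 0.5 * M * t * t > lam else 0.0
    return x

def rcd_iht(A, b, lam, iters=200, seed=0):
    """Randomized coordinate descent IHT: pick a coordinate uniformly at
    random and apply the hard-thresholding update."""
    rng = random.Random(seed)
    x = [0.0] * len(A[0])
    for _ in range(iters):
        x = rcd_iht_step(A, b, x, rng.randrange(len(x)), lam)
    return x
```

The thresholding test mirrors the support rule used in the analysis above: coordinate $i$ is kept in the support if and only if $\Delta^i(x) > \lambda$.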
\setlength{\tabcolsep}{4pt} \begin{table}[ht] \centering \caption{Number of runs out of 100 in which algorithms (IHTA), (RCD-IHT-$u^q$) and (RCD-IHT-$u^e$) found the global optimum.} {\small \label{tabel2} \begin{tabular}{|c|c|c|c|} \hline $\lambda $ &\textbf{(IHTA)} & \textbf{(RCD-IHT-$u^q$)} & \textbf{(RCD-IHT-$u^e$)}\\ \hline \hline $0.01$ & 95 & 96 & 100\\ \hline $0.07$ & 92 & 92 & 100\\ \hline $0.09$ & 43 & 51 & 70\\ \hline $0.15$ & 41 & 47 & 66\\ \hline $0.35$ & 24 & 28 & 31\\ \hline $0.8$ & 36 & 43 & 44\\ \hline $1.2$ & 29 & 29 & 54\\ \hline $1.8$ & 76 & 81 & 91\\ \hline $2$ & 79 & 86 & 97 \\ \hline \end{tabular} } \end{table} \noindent In the second set of experiments we consider the $\ell_2$ regularized logistic loss model from machine learning \cite{Bah:13}. In this model the relation between the data, represented by a random vector $a \in \rset^n$, and its associated label, represented by a random binary variable $y \in \{0, 1\}$, is determined by the conditional probability: $$P\{y | a;x \}= \frac{e^{y \langle a,x\rangle}}{1+e^{\langle a,x\rangle}},$$ where $x$ denotes a parameter vector. Then, for a set of $m$ independently drawn data samples $\{(a_i , y_i )\}_{i=1}^m$, the joint likelihood can be written as a function of $x$. To find the maximum likelihood estimate one should maximize the likelihood function, or equivalently minimize the negative log-likelihood (the logistic loss): $$\min\limits_{x \in \rset^n} \frac{1}{m}\sum\limits_{i=1}^m \log\left(1 + e^{\langle a_i,x\rangle} \right) - y_i\langle a_i,x \rangle. $$ Under the assumption of $n \le m$ and $A = \left[a_1, \dots, a_m \right] \in \rset^{n \times m}$ being full rank, it is well known that $f(\cdot)$ is strictly convex. However, there are important applications (e.g. feature selection) where these assumptions are not satisfied and the problem is highly ill-posed. In order to compensate for this drawback, the logistic loss is regularized by some penalty term (e.g.
$\ell_2$ norm $\norm{x}^2_2$, see \cite{Bah:13,Has:09}). Furthermore, the penalty term implicitly bounds the length of the minimizer, but does not promote sparse solutions. Therefore, it is desirable to impose an additional sparsity regularizer, such as the $\ell_0$ quasinorm. In conclusion, the problem to be minimized is given by: $$\min\limits_{x \in \rset^n} F(x) \quad \left(=\frac{1}{m}\sum\limits_{i=1}^m \log\left(1 + e^{\langle a_i,x\rangle} \right) - y_i\langle a_i,x \rangle + \frac{\nu}{2}\norm{x}^2 + \norm{x}_{0,\lambda}\right),$$ where now $f$ is strongly convex with parameter $\nu$. For the simulations, the data were generated uniformly at random and we fixed the parameters $\nu=0.5$ and $\lambda=0.2$. Once an instance of random data had been generated, we ran our algorithms (RCD-IHT-$u^q$) and (RCD-IHT-$u^e$) and algorithm (IHTA) \cite{Lu:12} 10 times, starting from 10 different initial points. We report in Table \ref{tabel1} the best results of each algorithm over all 10 trials, in terms of the best function value attained, together with the associated sparsity and number of iterations. In order to report relevant information, we have measured the performance of the coordinate descent methods (RCD-IHT-$u^q$) and (RCD-IHT-$u^e$) in terms of full iterations, obtained by dividing the total number of iterations by the dimension $n$. The column $F^*$ denotes the final function value attained by the algorithms, $\norm{x^*}_0$ represents the sparsity of the last generated point and \textit{iter} (\textit{full-iter}) represents the number of iterations (the number of full iterations). Note that our algorithms (RCD-IHT-$u^q$) and (RCD-IHT-$u^e$) have superior performance in comparison with algorithm (IHTA) on the reported instances. We observe that algorithm (RCD-IHT-$u^e$) performs very few full iterations in order to attain the best function value amongst all three algorithms.
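\noindent For reference, the objective value $F^*$ reported in Table \ref{tabel1} can be evaluated as in the following Python sketch (illustrative only; we use a uniform weight $\lambda$ in place of the weighted quasinorm $\norm{x}_{0,\lambda}$, matching the fixed $\lambda=0.2$ of the simulations, and a numerically stable evaluation $\log(1+e^z) = \max(z,0) + \log(1+e^{-|z|})$):

```python
import math

def logistic_l0_objective(A, y, x, nu, lam):
    """F(x) = (1/m) * sum_i [log(1 + exp(<a_i, x>)) - y_i * <a_i, x>]
              + (nu/2) * ||x||^2 + lam * ||x||_0."""
    m = len(A)
    loss = 0.0
    for a_i, y_i in zip(A, y):
        z = sum(aj * xj for aj, xj in zip(a_i, x))
        # stable log(1 + e^z): avoids overflow for large positive z
        loss += max(z, 0.0) + math.log1p(math.exp(-abs(z))) - y_i * z
    reg = 0.5 * nu * sum(xj * xj for xj in x)
    sparsity = lam * sum(1 for xj in x if xj != 0.0)
    return loss / m + reg + sparsity
```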
Moreover, the number of full iterations performed by algorithm (RCD-IHT-$u^e$) scales up very well with the dimension of the problem. \setlength{\tabcolsep}{4pt} \begin{table}[ht] \centering \caption{Performance of Algorithms (IHTA), (RCD-IHT-$u^q$), (RCD-IHT-$u^e$)} {\small \label{tabel1} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline $m\backslash n$ &\multicolumn{3}{c|}{\textbf{(IHTA)}} & \multicolumn{3}{c|}{\textbf{(RCD-IHT-$u^q$)}} & \multicolumn{3}{c|}{\textbf{(RCD-IHT-$u^e$)}}\\ \cline{2-10} & $F^*$ & $\norm{x^*}_0$ & iter & $F^*$ & $\norm{x^*}_0$ & full-iter & $F^*$ & $\norm{x^*}_0$ & full-iter\\ \hline \hline $20\backslash 100$ & 1.56 & 23 & 797 & 1.39 & 21 & 602 & -0.67 & 15 & 12 \\ \hline $50\backslash 100$ & -95.88 & 31 & 4847 & -95.85 & 31 & 4046 & -449.99 & 89 & 12 \\ \hline $30\backslash 200$ & -14.11 & 35 & 2349 & -14.30 & 33 & 1429 & -92.95 & 139 & 12 \\ \hline $50\backslash 200$ & -0.88 & 26 & 3115 & -0.98 & 25 & 2494 & -13.28 & 83 & 19 \\ \hline $70\backslash 300$ & -12.07 & 70 & 5849 & -11.94 & 71 & 5296 & -80.90 & 186 & 19 \\ \hline $70\backslash 500$ & -20.60 & 157 & 6017 & -19.95 & 163 & 5642 & -69.10 & 250 & 16 \\ \hline $100\backslash 500$ & -0.55 & 16 & 4898 & -0.52 & 16 & 5869 & -47.12 & 233 & 14 \\ \hline $80\backslash 1000$ & 13.01 & 197 & 9516 & 13.71 & 229 & 7073 & -0.56 & 19 & 13 \\ \hline $80\backslash 1500$ & 5.86 & 75 & 7825 & 6.06 & 77 & 7372 & -0.22 & 24 & 14 \\ \hline $150\backslash 2000$ & 26.43 & 418 & 21353 & 25.71 & 509 & 20093 & -30.59 & 398 & 16 \\ \hline $150\backslash 2500$ & 26.52 & 672 & 15000 & 27.09 & 767 & 15000 & -55.26 & 603 & 17 \\ \hline \end{tabular} } \end{table} \end{sloppy} \end{document}
\begin{document} \title{Quantum engine efficiency bound beyond the second law of thermodynamics} \author{Wolfgang Niedenzu} \email{Wolfgang.Niedenzu@weizmann.ac.il} \affiliation{Department of Chemical Physics, Weizmann Institute of Science, Rehovot~7610001, Israel} \author{Victor Mukherjee} \affiliation{Department of Chemical Physics, Weizmann Institute of Science, Rehovot~7610001, Israel} \affiliation{Department of Physics, Shanghai University, Baoshan District, Shanghai~200444, P.\,R.~China} \author{Arnab Ghosh} \affiliation{Department of Physics, Shanghai University, Baoshan District, Shanghai~200444, P.\,R.~China} \affiliation{Department of Chemical Physics, Weizmann Institute of Science, Rehovot~7610001, Israel} \author{Abraham G. Kofman} \affiliation{Department of Chemical Physics, Weizmann Institute of Science, Rehovot~7610001, Israel} \affiliation{CEMS, RIKEN, Saitama, 351-0198, Japan} \author{Gershon Kurizki} \affiliation{Department of Chemical Physics, Weizmann Institute of Science, Rehovot~7610001, Israel} \begin{abstract} According to the second law, the efficiency of cyclic heat engines is limited by the Carnot bound that is attained by engines that operate between two thermal baths under the reversibility condition whereby the total entropy does not increase. Quantum engines operating between a thermal and a squeezed-thermal bath have been shown to surpass this bound. Yet, their maximum efficiency cannot be determined by the reversibility condition, which may yield an unachievable efficiency bound above unity. Here we identify the fraction of the exchanged energy between a quantum system and a bath that necessarily causes an entropy change and derive an inequality for this change. This inequality reveals an efficiency bound for quantum engines energised by a non-thermal bath. This bound does not imply reversibility, unless the two baths are thermal. It cannot be solely deduced from the laws of thermodynamics. 
\end{abstract} \date{October 29, 2017} \maketitle \section{Introduction} Engines are machines that convert some form of energy (e.g., thermal or electrical energy) into work. Their efficiency, defined as the ratio of the extracted work to the invested energy, is restricted by the energy-conservation law to at most $1$. While mechanical engines may reach this bound, Carnot showed~\cite{carnotbook} that the efficiency of any heat engine that cyclically operates between two thermal baths is universally limited by the ratio of the bath temperatures, regardless of the concrete design~\cite{schwablbook,kondepudibook}. The universality of this bound led to the introduction of the notion of entropy by Clausius~\cite{clausius1865verschiedene} and the formalisation of the second law of thermodynamics. \par The Carnot bound is attained by (idealised) heat engines that operate reversibly between two (cold and hot) thermal baths, so that the total entropy of the engine and the two baths combined is unaltered over a cycle~\cite{kondepudibook,schwablbook,callenbook}. This corresponds to the minimum amount of heat being dumped into the cold bath, so as to close the cycle, and hence to the maximum input heat being transformed into work. By contrast, in an irreversible cycle, a larger amount of heat must be dumped into the cold bath, so that less input heat is available for conversion into work, causing the engine efficiency to decrease~\cite{kondepudibook,callenbook}.
Hence, to find out how to use available resources most efficiently it suffices to consider the two-bath scenario. \par As part of the effort to understand the rapport between quantum mechanics and thermodynamics~\cite{scovil1959three,pusz1978passive,lenard1978thermodynamical,alicki1979quantum,scully2003extracting,allahverdyan2004maximal,erez2008thermodynamic,delrio2011thermodynamic,horodecki2013fundamental,correa2014quantum,skrzypczyk2014work,brandao2015second,pekola2015towards,uzdin2015equivalence,campisi2016power,rossnagel2016single} (see~\cite{kosloff2013quantum,gelbwaser2015thermodynamics,goold2016role,vinjanampathy2016quantum,kosloff2017quantum} for recent reviews), the Carnot bound has been challenged for quantum engines in which one or both of the baths are non-thermal~\cite{scully2003extracting,dillenschneider2009energetics,huang2012effects,abah2014efficiency,rossnagel2014nanoscale,hardal2015superradiant,niedenzu2016operation,manzano2016entropy,klaers2017squeezed,agarwalla2017quantum}. In this respect, a distinction is to be drawn between two types of non-thermal engines~\cite{niedenzu2016operation,dag2016multiatom}, (i) engines wherein the working medium equilibrates to a thermal state whose temperature is adjustable (e.g., by the phase of the coherence in a ``phaseonium'' bath~\cite{scully2003extracting}), which qualify as genuine heat engines with a controllable Carnot bound, and (ii) engines wherein the non-thermal (e.g., squeezed~\cite{rossnagel2014nanoscale}) bath may render the working-medium state non-thermal, making the Carnot bound irrelevant. \par The efficiency bound of the latter type of engines has been addressed~\cite{abah2014efficiency,rossnagel2014nanoscale,niedenzu2016operation,manzano2016entropy,agarwalla2017quantum} but still needs elucidation. 
What is particularly puzzling is that, contrary to heat engines that operate between two thermal baths, their efficiency bound cannot be deduced from the requirement of reversible operation: Reversibility may entail an efficiency bound that not only surpasses the (as mentioned, irrelevant) Carnot bound but also unity~\cite{manzano2016entropy}, making it unachievable. Hence, the question naturally arises whether such engines are limited by constraints other than the second law. \par The second law for quantum relaxation processes is widely accepted~\cite{alicki1979quantum,alicki2004thermodynamics,boukobza2007three,parrondo2009entropy,deffner2011nonequilibrium,boukobza2013breaking,kosloff2013quantum,sagawa2013second,argentieri2014violation,binder2015quantum,gelbwaser2015thermodynamics,uzdin2015equivalence,goold2016role,manzano2016entropy,vinjanampathy2016quantum,brandner2016periodic,breuerbook} to be faithfully rendered by Spohn's inequality~\cite{spohn1978entropy}. According to this inequality, the entropy change of a system that interacts with a thermal bath is bounded from below by the exchanged energy divided by the bath temperature. What has not been considered so far is, however, that the bound on entropy change in quantum relaxation processes crucially depends on whether the state of the relaxing system is non-passive. The definition~\cite{pusz1978passive,lenard1978thermodynamical,allahverdyan2004maximal} of a non-passive state~\cite{pusz1978passive,lenard1978thermodynamical,allahverdyan2004maximal,anders2013thermodynamics,alicki2013entanglement,gelbwaser2013work,hovhannisyan2013entanglement,binder2015quantacell,binder2015quantum,gelbwaser2015thermodynamics,perarnau2015extractable,skrzypczyk2015passivity,brown2016passivity,dag2016multiatom,depalma2016passive,goold2016role,niedenzu2016operation,vinjanampathy2016quantum,bruschi2017gravitational} is that its energy can be unitarily reduced until the state becomes passive, thereby extracting work. 
Non-passive states may thus be thought of as being ``quantum batteries''~\cite{alicki2013entanglement,binder2015quantacell} or ``quantum flywheels''~\cite{levy2016quantum}. The maximum amount of work extractable from such states (their ``work capacity'') has been dubbed ``ergotropy'' in Ref.~\cite{allahverdyan2004maximal}. For example, every population-inverted state is non-passive and so are, e.g., coherent or squeezed field states, whereas thermal states are passive. \par Here we examine the adequacy of assessing the maximum efficiency via the standard reversibility criterion in experimentally-relevant~\cite{rossnagel2016single,klaers2017squeezed} cyclic engines that intermittently interact with two (thermal or non-thermal) baths. We show that the standard reversibility criterion provides an inequality for the change in the engine entropy which may be much too loose (non-tight) to be useful if non-passive states are involved. The distinction between non-passive and passive states is at the heart of our analysis and underlies our division of the energy exchanged between a quantum system and a bath into a part that necessarily causes an entropy change, and ergotropy. Our proposed division is in fact a new unraveling of the first law of thermodynamics for quantum systems. In scenarios where non-thermal baths may create non-passive states of the working medium, we derive a new inequality for the entropy change which yields a physical efficiency limit of the engine that never surpasses unity. This efficiency limit in general cannot be assessed by the standard reversibility criterion. We illustrate these results for the practically-relevant Carnot- and Otto cycles~\cite{klaers2017squeezed} energised by non-thermal baths. Both cycles are shown to be restricted by our new efficiency bound. 
\section{The first law of quantum thermodynamics} For an arbitrary process taking the initial state $\rho_0$ of a quantum system to an evolving state $\rho(t)$, which may be governed by a time-dependent Hamiltonian $H(t)$ and a bath, energy conservation implies \begin{equation}\label{eq_first_law} \Delta E(t)=\mathcal{E}_\mathrm{d}(t)+W(t), \end{equation} where $\Delta E(t)$ is the change in the system energy $E(t)=\operatorname{Tr}[\rho(t) H(t)]$. Its two constituents are \begin{subequations}\label{eq_defs_Ediss_work} \begin{equation}\label{eq_def_DeltaEdiss} \mathcal{E}_\mathrm{d}(t)\mathrel{\mathop:}=\int_0^t\operatorname{Tr}[\dot\rho(t^\prime)H(t^\prime)]\mathrm{d} t^\prime, \end{equation} which is the non-unitary dissipative energy change due to the interaction with the bath, and \begin{equation}\label{eq_def_work} W(t)\mathrel{\mathop:}=\int_0^t\operatorname{Tr}[\rho(t^\prime)\dot H(t^\prime)]\mathrm{d} t^\prime, \end{equation} \end{subequations} which is the work~\cite{pusz1978passive} due to changes of the system Hamiltonian. Contrary to the energy change $\Delta E(t)$, both $\mathcal{E}_\mathrm{d}(t)$ and $W(t)$ are process variables that generally depend on the evolution path, not only on the initial and final states. For thermal baths, the energy~\eqref{eq_def_DeltaEdiss} is commonly identified with the transferred heat~\cite{alicki1979quantum}. The energy $\mathcal{E}_\mathrm{d}(t)$ vanishes for a closed (isolated) system whose state evolves unitarily according to the von~Neumann equation $\dot\rho(t)=\frac{1}{i\hbar}[H(t),\rho(t)]$. The work~\eqref{eq_def_work} is either extracted or invested by the external agent that controls the system via a time-dependent Hamiltonian, as in driven engines. \par We here consider general scenarios, wherein the bath and/or the system may be in a non-thermal state and strive to better understand the nature of the exchanged energy~\eqref{eq_def_DeltaEdiss} and, in particular, its relation to entropy change. 
As we show, only part of the exchanged energy $\mathcal{E}_\mathrm{d}(t)$ is necessarily accompanied by a change in entropy. \par \begin{figure} \caption{\textbf{Visualisation of the concept of passive energy and ergotropy.} The different kinds of energy contained in a quantum state visualised by means of a battery at a certain temperature. The battery charge (yellow bars) represents ergotropy~$\mathcal{W}$ (extractable as work, here illustrated by a lighted bulb) and its temperature (colour of the battery: red---hot, blue---cold) represents passive (here: thermal) energy $E_\mathrm{pas}$---the higher the temperature the larger the passive energy. (a) The battery is partly charged and hot: This represents a non-passive state which allows for work extraction. As the battery is not completely charged, the light bulb appears dim. (b) The battery is discharged, but its temperature is the same as in (a). This state is the passive state of (a) and, consequently, the light bulb does not shine. (c) The battery is in a non-passive state whose ergotropy is higher than in (a) (the battery is fully charged) but the passive energy is lower (the battery is colder). Although the total energy in (a) and (c) may be the same, more work can be extracted from the state~(c), causing the light bulb to shine brighter than in~(a).} \label{fig_ergotropy} \end{figure} \par To elucidate this issue, we resort to the concept of non-passive states (see Fig.~\ref{fig_ergotropy} and Appendix~\ref{app_ergotropy}). The energy $E(t)$ of a non-passive state $\rho(t)$ can be decomposed into ergotropy $\mathcal{W}(t)\geq0$ and passive energy $E_\mathrm{pas}(t)$. Ergotropy is the maximum amount of work that can be extracted from such a state by means of unitary transformations~\cite{pusz1978passive,lenard1978thermodynamical,allahverdyan2004maximal}. By contrast, the passive energy, which is the energy of the passive state $\pi(t)$, cannot be extracted in the form of work. 
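To make this decomposition concrete: for a finite-dimensional state, the passive state pairs the largest populations with the lowest energy levels~\cite{allahverdyan2004maximal}, so ergotropy and passive energy follow from a simple sort of eigenvalues. The Python sketch below (our illustration, not part of the analysis; units $\hbar\omega=1$, with a helper function of our own naming) evaluates both for a population-inverted qubit:

```python
import numpy as np

def ergotropy(rho, H):
    """Ergotropy W and passive energy E_pas of a state rho under Hamiltonian H.

    The passive state pairs the largest populations of rho with the lowest
    energy eigenvalues of H; its energy cannot be extracted as work.
    """
    E = float(np.real(np.trace(rho @ H)))
    p = np.sort(np.linalg.eigvalsh(rho))[::-1]  # populations, descending
    e = np.sort(np.linalg.eigvalsh(H))          # level energies, ascending
    E_pas = float(p @ e)
    return E - E_pas, E_pas

# Population-inverted qubit (non-passive); H = diag(0, 1) in units hbar*omega = 1
H = np.diag([0.0, 1.0])
W, E_pas = ergotropy(np.diag([0.2, 0.8]), H)
print(round(W, 12), round(E_pas, 12))   # 0.6 0.2

# A thermal (passive) state has zero ergotropy
W_th, _ = ergotropy(np.diag([0.8, 0.2]), H)
print(round(W_th, 12))                  # 0.0
```

The inverted qubit thus carries ergotropy $0.6$ on top of a passive energy of $0.2$, whereas the thermal state, being passive, has no extractable work.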
\par The von~Neumann entropy $\mathcal{S}(\rho(t))=-k_\mathrm{B}\operatorname{Tr}[\rho(t)\ln\rho(t)]$ of a non-passive state $\rho(t)$ is the same as that of its passive state $\pi(t)$ since the two are related by a unitary transformation. Hence, a change in entropy requires a change in the passive state $\pi(t)$. Equation~\eqref{eq_def_DeltaEdiss}, however, does not discriminate between $\rho(t)$ and $\pi(t)$: A change in $\rho(t)$ may cause a non-zero $\mathcal{E}_\mathrm{d}(t)$ but not necessarily a change in entropy. By contrast, a change in $\pi(t)$ results in entropy change. \par In order to explicitly account for a change in the passive state, we may decompose the dissipative energy change~\eqref{eq_def_DeltaEdiss} as follows, \begin{equation}\label{eq_DeltaEdiss_decomposition} \mathcal{E}_\mathrm{d}(t)=\Delta E_\mathrm{pas}|_\mathrm{d}(t)+\Delta\mathcal{W}|_\mathrm{d}(t), \end{equation} where \begin{subequations}\label{eq_defs_heat_passive_work} \begin{equation}\label{eq_def_heat} \Delta E_\mathrm{pas}|_\mathrm{d}(t)\mathrel{\mathop:}=\int_{0}^{t} \operatorname{Tr}[\dot{\pi}(t^\prime)H(t^\prime)]\mathrm{d} t^\prime \end{equation} is the dissipative (non-unitary) change in passive energy and \begin{equation}\label{eq_def_DeltaW_diss} \Delta\mathcal{W}|_\mathrm{d}(t)\mathrel{\mathop:}=\int_0^t\operatorname{Tr}\Big[\big(\dot\rho(t^\prime)-\dot\pi(t^\prime)\big)H(t^\prime)\Big]\mathrm{d} t^\prime \end{equation} \end{subequations} is the dissipative (non-unitary) change in the system ergotropy due to its interaction with the bath. The microscopic decomposition of the exchanged energy~\eqref{eq_DeltaEdiss_decomposition} into dissipative change in passive energy~\eqref{eq_def_heat} and dissipative ergotropy change~\eqref{eq_def_DeltaW_diss} is a new unraveling of the first law of thermodynamics for quantum systems that constitutes one of our main results. 
\par The decomposition~\eqref{eq_DeltaEdiss_decomposition} carries with it the following insights: (a)~Although ergotropy may be transferred from a non-thermal bath to the system in a non-unitary fashion, it may afterwards still be extracted from the system in the form of work via a suitable unitary transformation. (b)~Consistently, any unitary changes (in either ergotropy or passive energy due to time-dependent changes of the Hamiltonian) are associated with work~\eqref{eq_def_work}. If the Hamiltonian is constant, then $\Delta E_\mathrm{pas}|_\mathrm{d}(t)$ is only the change in passive energy without work, $\Delta E_\mathrm{pas}|_\mathrm{d}(t)=\Delta E_\mathrm{pas}(t)=\operatorname{Tr}[\pi(t)H]-\operatorname{Tr}[\pi_0H]$, where $\pi_0$ is the passive counterpart of the initial state $\rho_0$. Likewise, $\Delta\mathcal{W}|_\mathrm{d}(t)=\Delta\mathcal{W}(t)=\mathcal{W}(\rho(t))-\mathcal{W}(\rho_0)$ is then the change in ergotropy without work performance. (c)~While a non-zero $\Delta E_\mathrm{pas}|_\mathrm{d}(t)$ entails a change in the passive state $\pi(t)$ and hence in entropy, a non-zero $\mathcal{E}_\mathrm{d}(t)$, by contrast, does not necessarily imply an entropy change, as shown below. The correspondence of $\Delta E_\mathrm{pas}|_\mathrm{d}(t)$ and $\Delta\mathcal{S}(t)$ is plausible since they have the same sign provided a majorisation relation~\cite{mari2014quantum,binder2015quantum} holds for $\rho(t)$, as detailed in Appendix~\ref{app_majorisation}. \par \begin{figure} \caption{\textbf{Interaction of a cavity mode with thermal and non-thermal baths.} (a)~A cavity mode initialised in a coherent state decays into the surrounding electromagnetic-field bath to the vacuum state. (b)~A cavity mode prepared in the vacuum state evolves to a squeezed-vacuum state due to its interaction with a squeezed bath.
The circles and the ellipse represent the respective phase-space distributions~\cite{gardinerbook} of the field states.} \label{fig_examples} \end{figure} \par \par Let us illustrate these insights for a single cavity mode (harmonic oscillator at frequency $\omega$) prepared in a pure coherent state $\rho_0=\proj{\alpha_0}$ that interacts (via a leaky mirror) with the surrounding electromagnetic-field bath (Fig.~\ref{fig_examples}a), which for optical frequencies is very close to the vacuum state~\cite{gardinerbook}. Being in contact with a bath, the cavity-mode state evolves in a non-unitary fashion (according to a quantum master equation~\cite{breuerbook}). Since the Hamiltonian is constant, the work~\eqref{eq_def_work} vanishes, $W(t)=0$. While the cavity field exponentially decays to the vacuum state, $\rho(t)=\proj{\alpha_0e^{-i\omega t-\kappa t}}$, where $\kappa$ is the leakage rate, its entropy does not change, $\mathcal{S}(\rho(t))=0$, so that the passive state $\pi(t)=\proj{0}$ is constant. Consequently, $\Delta E_\mathrm{pas}|_\mathrm{d}(t)=0$ and the entire energy change is due to dissipated ergotropy, $\Delta E(t)=\Delta\mathcal{W}|_\mathrm{d}(t)=\hbar\omega|\alpha_0|^2(e^{-2\kappa t}-1)\leq0$. \par \begin{figure} \caption{\textbf{Entropy and energy of a cavity mode interacting with a squeezed bath.} Entropy, ergotropy and energy changes for a single cavity mode prepared in the vacuum state that interacts with an outside bath in a squeezed-vacuum state (Fig.~\ref{fig_examples}b) obtained by a numerical integration of the master equation. The energies are given in units of $\hbar\omega$ and the entropy in units of $k_\mathrm{B}$. 
Parameters: $\omega=10\kappa$ and squeezing parameter $r=0.4$, $\kappa$ being the decay rate of the cavity.} \label{fig_squeezed_cavity} \end{figure} \par As another example, consider again a single cavity mode, this time prepared in its vacuum state $\rho_0=\proj{0}$, that interacts with an outside bath in a squeezed-vacuum state~\cite{gardinerbook} (Appendix~\ref{app_master_equation_squeezed_bath}), eventually converging to a squeezed-vacuum state inside the cavity (Fig.~\ref{fig_examples}b). Although the initial and the steady state have zero entropy, this is not true during the evolution (Fig.~\ref{fig_squeezed_cavity}). Consequently, both dissipative passive-energy change $\Delta E_\mathrm{pas}|_\mathrm{d}(t)$ and dissipative ergotropy change $\Delta\mathcal{W}|_\mathrm{d}(t)\geq 0$ occur. Figuratively, this process corresponds to a non-unitary charging of a battery. \section{Reversibility criterion}\label{sec_spohn} In non-equilibrium thermodynamics, the accepted criterion for the irreversibility or reversibility of the system relaxation to its steady state is the non-negativity of the entropy production~\cite{kondepudibook}. For quantum systems that are weakly coupled to (thermal or non-thermal) Markovian baths, Spohn~\cite{spohn1978entropy} put forward an expression for the entropy production $\Sigma(t)$. Here, we are interested in relaxation to steady state, for which we define $\Sigma\mathrel{\mathop:}=\Sigma(\infty)$, satisfying (Appendix~\ref{app_spohn}) \begin{equation}\label{eq_spohn_integrated} \Sigma\geq0, \end{equation} where the equality sign is the reversibility condition. For a constant Hamiltonian, it evaluates to $\Sigma=\Srel{\rho_0}{\rho_\mathrm{ss}}\geq0$, where $\Srel{\rho_0}{\rho_\mathrm{ss}}\mathrel{\mathop:}=k_\mathrm{B}\operatorname{Tr}[\rho_0(\ln \rho_0-\ln \rho_\mathrm{ss})]$ is the entropy of the system initialised in a state $\rho_0$ at $t=0$ relative to the steady state $\rho_\mathrm{ss}$ to which it relaxes. 
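When the initial state is diagonal in the eigenbasis of the steady state, the relative entropy $\Srel{\rho_0}{\rho_\mathrm{ss}}$ reduces to a sum over populations. The following minimal numerical check (our illustration, not from the analysis; units $\hbar\omega=k_\mathrm{B}=1$, helper name ours) evaluates $\Sigma$ for a population-inverted qubit relaxing to a thermal state:

```python
import math

def relative_entropy(p, q):
    """S(p||q) = sum_i p_i (ln p_i - ln q_i), in units k_B = 1,
    for states diagonal in the same basis (population lists p, q)."""
    return sum(pi * (math.log(pi) - math.log(qi))
               for pi, qi in zip(p, q) if pi > 0)

T = 0.5                                   # bath temperature; H = diag(0, 1)
g = math.exp(-1.0 / T)                    # Boltzmann factor between the levels
rho_ss = [1 / (1 + g), g / (1 + g)]       # thermal steady-state populations
rho_0 = [0.2, 0.8]                        # population-inverted initial state

Sigma = relative_entropy(rho_0, rho_ss)   # entropy production S(rho_0 || rho_ss)
print(Sigma > 0)                          # True: the relaxation is irreversible
```

The inequality $\Sigma\geq0$ holds for any initial populations; it is saturated only when $\rho_0$ already coincides with the steady state.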
For a slowly time-varying Hamiltonian~\cite{alicki1979quantum,alipour2016correlations}, Eq.~\eqref{eq_spohn_integrated} gives rise to an inequality for the change $\Delta\mathcal{S}$ of the system (von~Neumann) entropy, given in Appendix~\ref{app_spohn}. \par The common~\cite{alicki1979quantum,alicki2004thermodynamics,boukobza2007three,parrondo2009entropy,deffner2011nonequilibrium,boukobza2013breaking,kosloff2013quantum,sagawa2013second,argentieri2014violation,binder2015quantum,gelbwaser2015thermodynamics,uzdin2015equivalence,brandner2016periodic,goold2016role,manzano2016entropy,vinjanampathy2016quantum,breuerbook} identification of Eq.~\eqref{eq_spohn_integrated} with the second law appears plausible for systems in contact with thermal baths: It then evaluates to $\Sigma=\Delta\mathcal{S}-\mathcal{E}_\mathrm{d}/T\geq0$, where $\mathcal{E}_\mathrm{d}$ is the dissipative change in the system energy defined in Eq.~\eqref{eq_def_DeltaEdiss} (in the limit $t\rightarrow\infty$). \par Here we contend that although inequality~\eqref{eq_spohn_integrated} is a formally correct statement of the second law (under standard thermodynamic assumptions), it may not provide a meaningful estimate of $\Delta\mathcal{S}$ if a system is initialised in a non-passive state and/or interacts with a non-thermal bath. Physically, this is because, as discussed above, the exchanged energy $\mathcal{E}_\mathrm{d}$ may be non-zero even if the entropy does not change. \section{Entropy change in relaxation processes involving ergotropy}\label{sec_decay} Consider the decay of an initially non-passive state $\rho_0$ to a (passive) thermal state $\rho_\mathrm{th}$ via contact with a thermal bath at temperature $T$.
Based on the decomposition~\eqref{eq_DeltaEdiss_decomposition}, the reversibility condition~\eqref{eq_spohn_integrated} evaluates to (at $t\rightarrow\infty$) \begin{equation}\label{eq_DeltaS_QdT_DeltaW} \Delta\mathcal{S}\geq\frac{\mathcal{E}_\mathrm{d}}{T}=\frac{\Delta E_\mathrm{pas}|_\mathrm{d}+\left.\Delta\mathcal{W}\right|_\mathrm{d}}{T}, \end{equation} where both dissipative change in passive energy~\eqref{eq_def_heat} and dissipated ergotropy~\eqref{eq_def_DeltaW_diss} appear. In what follows we shall revise this inequality, which may greatly overestimate the actual entropy change. As shown below, a tight inequality for $\Delta\mathcal{S}$ is indispensable for correctly assessing the maximum efficiency of an engine. \subsection{Constant Hamiltonian} We first consider the case of a constant Hamiltonian. As we have seen, dissipative ergotropy change is not necessarily linked to a change in entropy. Therefore, the lower bound on $\Delta\mathcal{S}$ in Eq.~\eqref{eq_DeltaS_QdT_DeltaW} may not be tight (maximal). It is obtained from Spohn's inequality~\eqref{eq_spohn_integrated} for the relaxation of an initially non-passive state in a thermal bath. However, one may resort to the fact that the entropy $\mathcal{S}$ is a state variable, so that $\Delta\mathcal{S}=\mathcal{S}(\rho_\mathrm{th})-\mathcal{S}(\rho_0)$ is path-independent, i.e., its value only depends on the initial state $\rho_0$ and the (passive) thermal steady state $\rho_\mathrm{th}$. Hence, Spohn's inequality~\eqref{eq_spohn_integrated} may well be applied to alternative evolution paths from $\rho_0$ to $\rho_\mathrm{th}$, giving rise to different inequalities for the same $\Delta\mathcal{S}$. \par In particular, we now consider a path that does not involve any dissipation of ergotropy to the bath: Namely, one may start the process by performing a unitary transformation to the passive state, $\rho_0\mapsto\pi_0$.
Thereafter, this state is brought into contact with the thermal bath, yielding the steady-state solution $\rho_\mathrm{th}$. Inequality~\eqref{eq_spohn_integrated} applied to this alternative path yields \begin{equation}\label{eq_DeltaS_QdT} \Delta\mathcal{S}\geq\frac{\Delta E_\mathrm{pas}|_\mathrm{d}}{T}, \end{equation} where $\Delta E_\mathrm{pas}|_\mathrm{d}$ is the same as in Eq.~\eqref{eq_DeltaS_QdT_DeltaW}. \par The steady state attained via contact with a thermal bath is passive, hence the system ergotropy must decrease as a result of the relaxation, $\Delta\mathcal{W}|_\mathrm{d}=-\mathcal{W}_0\leq0$, where $\mathcal{W}_0\geq 0$ is the initial ergotropy stored in the state $\rho_0$. Hence, inequality~\eqref{eq_DeltaS_QdT} always entails inequality~\eqref{eq_DeltaS_QdT_DeltaW} and is thus a tighter and more relevant estimate of $\Delta\mathcal{S}$. This has a crucial consequence: If the initial state is non-passive, inequality~\eqref{eq_DeltaS_QdT} rules out the equality sign in inequality~\eqref{eq_DeltaS_QdT_DeltaW}, so that the considered decay via contact with a thermal bath can never be reversible according to criterion~\eqref{eq_spohn_integrated}. \par We now consider the more general situation wherein the system is governed by a constant Hamiltonian and interacts with an arbitrary bath (that may not be parameterised by a temperature) until it reaches the steady state $\rho_\mathrm{ss}$. In order to obtain an optimal (the tightest) inequality for the entropy change $\Delta\mathcal{S}$, we here propose, instead of inequality~\eqref{eq_spohn_integrated} ($\Sigma$ for such a bath is given in Appendix~\ref{app_sigma_non-thermal}), to adopt the mathematical relation \begin{equation}\label{eq_srel} \Srel{\pi_0}{\pi_\mathrm{ss}}\geq 0. \end{equation} As shown in Appendix~\ref{app_sigmap_optimal}, Eq.~\eqref{eq_srel} generally provides a tight inequality for $\Delta\mathcal{S}$.
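The gap between the two lower bounds~\eqref{eq_DeltaS_QdT_DeltaW} and~\eqref{eq_DeltaS_QdT} can be made explicit for a population-inverted qubit relaxing to a thermal state. The following numerical sketch (our illustration, not part of the derivation; units $\hbar\omega=k_\mathrm{B}=1$) evaluates both bounds and the actual entropy change:

```python
import math

def entropy(p):
    """von Neumann entropy of populations, in units k_B = 1."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

T = 0.5
g = math.exp(-1.0 / T)                   # H = diag(0, 1), hbar*omega = 1
p_th = [1 / (1 + g), g / (1 + g)]        # thermal steady-state populations

p0 = [0.2, 0.8]                          # non-passive (inverted) initial state
E0, E_th = p0[1], p_th[1]                # energies Tr[rho H]
E_pas0 = min(p0)                         # energy of the passive state pi_0
                                         # (smaller population in the upper level)
dS = entropy(p_th) - entropy(p0)         # actual entropy change
bound_loose = (E_th - E0) / T            # E_d / T: reversibility criterion
bound_tight = (E_th - E_pas0) / T        # Delta E_pas|_d / T: passive-energy bound

print(bound_loose <= bound_tight <= dS)  # True
```

For these parameters the loose bound lies far below the tight one (roughly $-1.36$ versus $-0.16$, against an actual $\Delta\mathcal{S}\approx-0.14$), illustrating why the dissipated ergotropy $\mathcal{W}_0$ should not enter the entropy estimate.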
The motivation for Eq.~\eqref{eq_srel} is, as before, that the entropy of any state $\rho$ is the same as that of its passive counterpart $\pi$. If $\pi_\mathrm{ss}$ is a thermal state, we recover Eq.~\eqref{eq_DeltaS_QdT}. \par We stress that, contrary to Spohn's inequality (Appendix~\ref{app_spohn}), Eqs.~\eqref{eq_DeltaS_QdT} and~\eqref{eq_srel} do not require weak coupling between the system and the bath (in the same spirit as in Refs.~\cite{schloegl1966zur,deffner2011nonequilibrium}) and are thus universally-valid whenever the reduced state of the system reaches a steady state. \subsection{Time-dependent Hamiltonian} We now allow the Hamiltonian $H(t)$ to slowly vary during the evolution~\cite{alicki1979quantum}. Contrary to the case of a constant Hamiltonian, the dissipative passive-energy change~\eqref{eq_def_heat} and the ergotropy change~\eqref{eq_def_DeltaW_diss} in the r.h.s.\ of Eq.~\eqref{eq_DeltaS_QdT_DeltaW} are now path-dependent. Namely, they are not only determined by the initial state $\rho_0$ and the steady state $\rho_\mathrm{th}(\infty)$, which is a thermal state under the Hamiltonian $H(\infty)$. \par Since during the evolution the time-dependent Hamiltonian may generate a non-passive state (even if the initial state is passive and the bath is thermal) we cannot, in general, find an alternative path void of dissipated ergotropy for the same $H(t)$. Notwithstanding, we may still consider a path void of initial ergotropy in the spirit of the previous section by extracting the ergotropy of the initial state in a unitary fashion prior to the interaction with the bath, resulting in the passive state $\pi_0$. Afterwards, this passive state is brought into contact with the thermal bath, yielding the steady state $\rho_\mathrm{th}(\infty)$. 
Spohn's inequality can be applied to the latter step, yielding \begin{equation}\label{eq_DeltaS_Qth} \Delta\mathcal{S}\geq\frac{\mathcal{E}_\mathrm{d}^\prime}{T}, \end{equation} with the energy \begin{equation}\label{eq_def_Qth} \mathcal{E}_\mathrm{d}^\prime\mathrel{\mathop:}=\int_0^\infty\operatorname{Tr}[\dot\varrho(t)H(t)]\mathrm{d} t \end{equation} exchanged with the bath along the alternative path. Here $\varrho(t)$ is the solution of the same thermal master equation that governs $\rho(t)$ but with the initial condition $\varrho_0=\pi_0$. In the case that the initial state $\rho_0$ is already passive, we have $\varrho(t)=\rho(t)$, $\mathcal{E}_\mathrm{d}^\prime=\mathcal{E}_\mathrm{d}$ and Eqs.~\eqref{eq_DeltaS_Qth} and~\eqref{eq_spohn_integrated} coincide. For a constant Hamiltonian, Eq.~\eqref{eq_DeltaS_Qth} evaluates to Eq.~\eqref{eq_DeltaS_QdT}. \par Consider now the more general situation where a quantum system interacts with a non-thermal bath and eventually relaxes to a unitarily-transformed thermal state $U\rho_\mathrm{th}(\infty)U^\dagger$. A prime example is a harmonic oscillator that interacts with a squeezed thermal bath~\cite{gardinerbook,ekert1990canonical}: Its steady state is a squeezed thermal state. Then one can show (Appendix~\ref{app_liouvillian}) that this situation can be traced back to the interaction of a unitarily-transformed state $\tilde\rho(t)\mathrel{\mathop:}= U^\dagger \rho(t) U$ with a thermal bath, provided that the Hamiltonian $H(t)$ commutes with itself at all times; a harmonic oscillator with a time-dependent frequency and time-independent eigenstates is an example. This requirement will be adopted in the remainder of this paper for any interaction of a system with a non-thermal bath. The relaxation of a possibly non-passive state $\tilde\rho(t)$ in a thermal bath pertains to the scenario considered above upon replacing $\rho(t)$ by $\tilde\rho(t)$ there. 
Equation~\eqref{eq_DeltaS_Qth} thus also holds for this class of non-thermal baths (the derivation and the generalisation to arbitrary non-thermal baths are discussed in Appendix~\ref{app_time-dependent_Hamiltonian}). \par The new entropic inequality~\eqref{eq_DeltaS_Qth} is the second main result of our work. For the special case of a constant Hamiltonian, it reduces to inequality~\eqref{eq_DeltaS_QdT}. \section{Maximal efficiency of engines powered by non-thermal baths}\label{sec_efficiency} In view of our new inequality~\eqref{eq_DeltaS_Qth}, does inequality~\eqref{eq_spohn_integrated} always provide a true bound on the engine efficiency? Namely, is reversibility indeed the key to operating a quantum engine at the highest possible efficiency? This question arises for cyclic engines fuelled by non-thermal (e.g., squeezed) baths, since such baths may transfer both passive thermal energy and ergotropy to the system while Eq.~\eqref{eq_spohn_integrated} does not distinguish between these two different kinds of energies. \par \begin{figure} \caption{\textbf{Engine fuelled by a non-thermal bath.} Schematics of an engine fuelled by a hot non-thermal (e.g., squeezed thermal) bath that provides the input energy $\mathcal{E}_\mathrm{d,h}$. The engine operates in an arbitrary cycle wherein work is extracted by a piston and an amount of energy $\mathcal{E}_\mathrm{d,c}$ is dumped into the cold thermal bath.} \label{fig_engine} \end{figure} \par Here we consider a quantum engine (Fig.~\ref{fig_engine}) that operates between a cold thermal bath (at temperature $T_\mathrm{c}$) and a hot non-thermal bath subject to a time-dependent drive (the ``piston''~\cite{alicki1979quantum}). As in common, experimentally-relevant situations~\cite{klaers2017squeezed}, the non-thermal bath drives the working medium into a non-passive state whose passive counterpart is assumed to be thermal. 
This allows us to maintain the notion of a ``hot'' bath with temperature $T_\mathrm{h}>T_\mathrm{c}$, where $T_\mathrm{h}$ is defined by the steady-state solution of the working medium. As an example, in the case of a single cavity mode interacting with the surrounding electromagnetic field in a squeezed-thermal state~\cite{breuerbook,gardinerbook}, the temperature $T_\mathrm{h}$ equals the thermodynamic temperature of the bath prior to its squeezing. The generalisation of the present analysis to arbitrary passive states is straightforward (Appendix~\ref{app_efficiency}). \par Existing treatments of engines powered by non-thermal baths have taken the system--baths interaction to be isochoric, i.e., subject to a constant Hamiltonian~\cite{huang2012effects,abah2014efficiency,rossnagel2014nanoscale,hardal2015superradiant,manzano2016entropy,niedenzu2016operation}. We here relax this restriction and allow for stroke cycles wherein the working-medium (WM) Hamiltonian may slowly change during the interaction with the baths~\cite{alicki1979quantum}. We only impose the condition that the WM attains its steady state at the end of the energising stroke (wherein it interacts with the hot non-thermal bath) and the resetting stroke (wherein it interacts with the cold thermal bath). \par The energising stroke is described by a master equation~\cite{breuerbook} that evolves the WM state to a unitarily-transformed thermal state $\rho_\mathrm{ss}(\infty)=U\rho_\mathrm{th}(\infty)U^\dagger$, hence Eq.~\eqref{eq_DeltaS_Qth} holds. After this stroke, the WM is in a non-passive state, whose ergotropy is subsequently extracted by the piston via a suitable unitary transformation. Since we seek the efficiency bound, we assume that no ergotropy is dissipated in the cold bath (and thus lost), hence the requirement to extract it from the WM before its interaction with that bath.
We note that in cycles where both baths are simultaneously coupled to the WM (as in continuous cycles~\cite{gelbwaser2013minimal}), part of the ergotropy is inevitably dissipated into the cold bath, so that such cycles are inherently less efficient than stroke cycles adhering to the above requirement. \par Similarly, Hamiltonians that do not commute with themselves at different times are known to reduce the efficiency due to ``quantum friction''~\cite{kosloff2013quantum,brandner2016periodic,mukherjee2016speed,kosloff2017quantum}, whereas we are here interested in principal limitations on the efficiency. Hence, during the interaction with the non-thermal bath, the Hamiltonian is assumed to commute with itself at all times, as already mentioned in the discussion on the validity of Eq.~\eqref{eq_DeltaS_Qth} for such a bath. \par The engine's WM must return to its initial state after each cycle. This implies that $\Delta\mathcal{S}=0$ over a cycle, hence the importance of having a tight estimate for the entropy change within each stroke. The entropy changes in the two relevant strokes satisfy $\Delta\mathcal{S}_\mathrm{c}\geq \mathcal{E}_\mathrm{d,c}/T_\mathrm{c}$ and $\Delta\mathcal{S}_\mathrm{h}\geq\mathcal{E}^\prime_\mathrm{d,h}/T_\mathrm{h}$. Here $\mathcal{E}_\mathrm{d,c}\leq0$ is the change in the WM energy due to its interaction with the cold thermal bath and $\mathcal{E}^\prime_\mathrm{d,h}\geq0$ is the change the WM energy would have, had the non-thermal bath been thermal [as in Eq.~\eqref{eq_DeltaS_Qth}]. 
Taking into account that the WM is passive prior to its interaction with the cold bath, so that Eqs.~\eqref{eq_DeltaS_Qth} and~\eqref{eq_spohn_integrated} coincide for that stroke, the condition of vanishing entropy change over a cycle (which must hold in any cycle) then yields the inequality \begin{equation}\label{eq_condition_gen} \Delta\mathcal{S}_\mathrm{c}+\Delta\mathcal{S}_\mathrm{h}=0\quad\Rightarrow\quad\frac{\mathcal{E}_\mathrm{d,c}}{T_\mathrm{c}}+\frac{\mathcal{E}^\prime_\mathrm{d,h}}{T_\mathrm{h}}\leq 0. \end{equation} \par The efficiency of the engine is defined as the ratio of the extracted work to the invested energy, $\eta\mathrel{\mathop:}=-W/\mathcal{E}_\mathrm{d,h}$, where $\mathcal{E}_\mathrm{d,h}$ is the total energy (the sum of passive thermal energy and ergotropy) imparted by the non-thermal bath during the energising stroke. Using the first-law statement~\eqref{eq_first_law}, this ratio may be expressed through the energy transfers $\mathcal{E}_\mathrm{d,c}$ and $\mathcal{E}_\mathrm{d,h}$. Condition~\eqref{eq_condition_gen} on $\mathcal{E}_\mathrm{d,c}$ (the energy lost to the cold bath) then restricts the efficiency to \begin{equation}\label{eq_etamax_gen} \eta\leq 1-\frac{T_\mathrm{c}}{T_\mathrm{h}}\frac{\mathcal{E}^\prime_\mathrm{d,h}}{\mathcal{E}_\mathrm{d,h}}=\mathrel{\mathop:}\eta_\mathrm{max}. \end{equation} Its derivation as well as a more general expression for the case where the passive state after the energising stroke is non-thermal are given in Appendix~\ref{app_efficiency}. \par The efficiency bound~\eqref{eq_etamax_gen} does not only depend on the two temperatures, which is to be expected, as non-thermal baths may occur in various forms that cannot be universally described by a common set of parameters. The physical details of the bath (e.g., its squeezing parameter) are thus encoded in the fraction of the two energies $\mathcal{E}^\prime_\mathrm{d,h}$ and $\mathcal{E}_\mathrm{d,h}$, whose forms are universal. 
This fraction expresses the ratio of generalised heat transfer to the total energy input from the hot bath. \par The bound~\eqref{eq_etamax_gen} underscores the physicality of our inequality~\eqref{eq_DeltaS_Qth}: In the usual regime of functioning of the engine, $\mathcal{E}^\prime_\mathrm{d,h}\geq0$ and $\mathcal{E}_\mathrm{d,h}>0$ (i.e., the hot bath provides energy and increases the WM entropy), the bound~\eqref{eq_etamax_gen} is limited by unity, $\eta_\mathrm{max}\leq 1$, which is reached in the ``mechanical''-engine limit $\mathcal{E}^\prime_\mathrm{d,h}\rightarrow 0$ where the non-thermal bath only provides ergotropy. By contrast, the bound $\eta_\Sigma$ that stems from the reversibility condition~\eqref{eq_spohn_integrated} (derived in Appendix~\ref{app_efficiency}) may surpass $1$ (see Ref.~\cite{manzano2016entropy}). In the opposite, heat-engine, limit $\mathcal{E}^\prime_\mathrm{d,h}\rightarrow \mathcal{E}_\mathrm{d,h}$ where only passive thermal energy but no ergotropy is imparted by the hot bath, Eq.~\eqref{eq_etamax_gen} reproduces the Carnot bound $\eta_\mathrm{C}=1-T_\mathrm{c}/T_\mathrm{h}$. As shown below, if the Hamiltonian is kept constant during the interaction with the non-thermal bath, then Eq.~\eqref{eq_etamax_gen} is restricted by $\eta_\mathrm{C}\leq\eta_\mathrm{max}\leq\eta_\Sigma$. Therefore, for such engines our new bound~\eqref{eq_etamax_gen} is always tighter than the second-law bound $\eta_\Sigma$. \par The bound~\eqref{eq_etamax_gen} is valid in the regime $\mathcal{E}_\mathrm{d,c}\leq0$ and $\mathcal{E}^\prime_\mathrm{d,h}\geq 0$ wherein the cold bath serves as an energy dump. As shown in~\cite{niedenzu2016operation}, there exists a regime wherein such a machine acts simultaneously as an engine and a refrigerator for the cold bath. The efficiency then evaluates to $\eta=1$ (see Appendix~\ref{app_efficiency}). 
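As a quick numerical illustration of these two limits (our sketch, not part of the derivation; arbitrary units), the bound~\eqref{eq_etamax_gen} interpolates between the Carnot bound and unity as the ergotropy share of the input energy grows:

```python
def eta_max(Tc, Th, E_dh_prime, E_dh):
    """Efficiency bound eta_max = 1 - (Tc/Th) * E'_dh / E_dh.

    E_dh: total energy (passive thermal energy plus ergotropy) imparted by
    the non-thermal bath; E_dh_prime: energy an alternative thermal engine
    would have received in the same energising stroke.
    Valid in the engine regime E_dh > 0, E_dh_prime >= 0.
    """
    return 1.0 - (Tc / Th) * E_dh_prime / E_dh

Tc, Th = 1.0, 2.0                        # eta_Carnot = 1 - Tc/Th = 0.5

# Heat-engine limit: only passive thermal energy is imparted
print(eta_max(Tc, Th, 1.0, 1.0))         # 0.5 -> Carnot bound
# "Mechanical"-engine limit: the bath provides ergotropy only
print(eta_max(Tc, Th, 0.0, 1.0))         # 1.0 -> unity
# Intermediate case: half of the input energy is ergotropy
print(eta_max(Tc, Th, 0.5, 1.0))         # 0.75, between eta_Carnot and 1
```

In this picture, the squeezing parameter of the bath enters only through the ratio $\mathcal{E}^\prime_\mathrm{d,h}/\mathcal{E}_\mathrm{d,h}$, while the functional form of the bound stays the same.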
\par We have thus reached a central conclusion: The efficiency bound of the engine increases with the decrease of the ratio of the energy that an alternative thermal engine would have received (in the same energising stroke) to the total energy imparted by the non-thermal bath (in the actual engine cycle). In the limit of thermal baths~\cite{alicki1979quantum} we recover the standard Carnot bound for the efficiency of heat engines, even if the engine (in any cycle) exhibits quantum signatures (e.g., quantum coherence in the WM due to the piston action~\cite{uzdin2015equivalence}) or the WM--bath interactions are time-dependent~\cite{mukherjee2016speed}. \par We note that the costs of bath preparation or the heat generated by a clock~\cite{erker2017autonomous,woods2016autonomous} required to implement a time-periodic Hamiltonian will reduce the efficiency. In the spirit of thermodynamics, however, the bound~\eqref{eq_etamax_gen} only takes into account limitations inherent to the cycle. \par Whilst our analysis is focused on the two-bath situation, Eq.~\eqref{eq_condition_gen} can be generalised to cycles where the working medium intermittently interacts with additional (thermal or non-thermal) baths. This generalisation shows (Appendix~\ref{app_multibath}) that the efficiency of multi-bath engines is always lower than the maximum efficiency~\eqref{eq_etamax_gen} of the appropriate two-bath engine, thus reaffirming the generality of the bound~\eqref{eq_etamax_gen}. \section{Specific quantum engines} We now pose the question: Which bound is more relevant, $\eta_\Sigma$ (whose explicit form is given in Appendix~\ref{app_efficiency}) that stems from the reversibility condition~\eqref{eq_spohn_integrated}, or $\eta_\mathrm{max}$ given by Eq.~\eqref{eq_etamax_gen}? 
Contrary to the Carnot bound, the efficiency bound~\eqref{eq_etamax_gen} depends not only on the parameters of the baths but also on the energising stroke through the stroke's initial condition and the Hamiltonian that determine the integrals $\mathcal{E}^\prime_\mathrm{d,h}$ and $\mathcal{E}_\mathrm{d,h}$. Yet, the functional form~\eqref{eq_etamax_gen} is independent of the choice of the non-thermal bath or the WM. Whether or not this bound is reached by an engine that implements this chosen energising stroke is then determined by condition~\eqref{eq_condition_gen}. \par In complete generality, the tighter of the alternative efficiency bounds derived here, \begin{equation}\label{eq_eta_minimum} \eta\leq\min\{\eta_\mathrm{max},\eta_\Sigma\}, \end{equation} is the relevant one. Relation~\eqref{eq_eta_minimum} is the universal thermodynamic limit on quantum engine efficiency, which never surpasses unity. \par Notwithstanding the alternatives that may be offered by Eq.~\eqref{eq_eta_minimum}, we now discuss two generic practically-relevant engine cycles for which one can explicitly show that $\eta_\mathrm{max}\leq\eta_\Sigma$. Such engines are thus not restricted by the second law, but by other constraints on their entropy. \subsection{Time-dependent Hamiltonian: A squeezed photonic Carnot engine} \par \begin{figure} \caption{\textbf{A photonic Carnot cycle for a squeezed thermal bath.} The cycle starts with a thermal state with frequency $\omega_\mathrm{c}$ and temperature $T_\mathrm{c}$ (lower left corner). In stroke~$1$, the mode undergoes an adiabatic compression to frequency $\omega_2=\omega_\mathrm{c}T_\mathrm{h}/T_\mathrm{c}$ and temperature $T_\mathrm{h}>T_\mathrm{c}$. Thereafter, in the energising stroke~$2$, the frequency is slowly reduced to $\omega_\mathrm{h}\leq\omega_2$ while the mode is connected to the squeezed thermal bath, yielding a squeezed thermal steady state.
Its ergotropy is extracted in stroke~$3$ by an ``unsqueezing'' unitary operation, resulting in a thermal state with temperature $T_\mathrm{h}$. In stroke~$4$, the frequency is again adiabatically reduced to $\omega_1=\omega_\mathrm{h}T_\mathrm{c}/T_\mathrm{h}$ such that the mode attains the temperature $T_\mathrm{c}$. Finally, stroke~$5$ is an isothermal compression back to the initial state.} \label{fig_carnot} \end{figure} \par We first consider a photonic Carnot-like engine fuelled by a squeezed-thermal bath, as depicted in Fig.~\ref{fig_carnot}. It contains the four strokes of the regular thermal Carnot cycle~\cite{carnotbook,clausius1865verschiedene,schwablbook,kondepudibook}, as well as an additional ergotropy-extraction stroke (stroke $3$ in the figure). In the regular thermal Carnot cycle, the interactions with the baths are isothermal. \par \begin{figure} \caption{\textbf{Entropy change in a Carnot cycle.} Change in entropy (in units of $k_\mathrm{B}$) during stroke~$2$ of the modified Carnot cycle in Fig.~\ref{fig_carnot} as a function of the stroke duration obtained by a numerical integration of the master equation. The upper (blue) curve corresponds to the reversibility criterion~\eqref{eq_spohn_integrated}; it is seen that the inequality $\Sigma\geq0$ is far from being saturated. By contrast, our proposed inequality~\eqref{eq_DeltaS_Qth} is saturated (i.e., the equality sign applies) for sufficiently long stroke duration (red lower curve); here $\Delta\mathcal{S}_\mathrm{h}(t)=\mathcal{S}(\rho(t))-\mathcal{S}(\rho_0)$. 
Parameters: Oscillator frequency $\omega(t)=(25-0.05\kappa t)\kappa$, $k_\mathrm{B} T_\mathrm{h}=5\hbar\kappa$ and squeezing parameter $r=0.2$, $\kappa$ being the decay rate of the cavity.} \label{fig_carnot_engine} \end{figure} \par Based on Eq.~\eqref{eq_DeltaS_Qth}, we have in the second stroke $\mathcal{E}^\prime_\mathrm{d,h}=T_\mathrm{h}\Delta\mathcal{S}_\mathrm{h}$, since the master equation devoid of squeezing induces an isothermal expansion wherein the state $\varrho(t)$ is always in thermal equilibrium (Fig.~\ref{fig_carnot_engine}). Stroke $5$ is an isothermal compression, i.e., $\mathcal{E}_\mathrm{d,c}=T_\mathrm{c}\Delta\mathcal{S}_\mathrm{c}$. The condition of vanishing entropy change over a cycle, $\Delta\mathcal{S}=\mathcal{E}_\mathrm{d,c}/T_\mathrm{c}+\mathcal{E}^\prime_\mathrm{d,h}/T_\mathrm{h}=0$, corresponds to the equality sign in condition~\eqref{eq_condition_gen}. Hence, the efficiency of this cycle is the bound in Eq.~\eqref{eq_etamax_gen}. \par Consequently, the bound $\eta_\mathrm{max}$ is lower than $\eta_\mathrm{\Sigma}$ for all possible engine cycles that contain a ``Carnot-like'' energising stroke, namely, a stroke characterised by a slowly-changing Hamiltonian and an initial thermal state at temperature $T_\mathrm{h}$, such that $\mathcal{E}^\prime_\mathrm{d,h}=T_\mathrm{h}\Delta\mathcal{S}_\mathrm{h}$. \par \begin{figure} \caption{\textbf{Squeezing and unsqueezing of a cavity mode.} (a) The interaction of a cavity mode with a squeezed thermal bath (stroke~2 in Fig.~\ref{fig_carnot}) may be realised in a micromaser setup where a beam of entangled atom pairs passes through the cavity~\cite{dag2016multiatom}.
(b) The unsqueezing operation in stroke~$3$ of Fig.~\ref{fig_carnot} may be implemented by a suitable modulation of the cavity frequency~\cite{graham1987squeezing,agarwal1991exact,averbukh1994enhanced}.} \label{fig_implementation} \end{figure} \par Such a photonic Carnot engine energised by a squeezed bath may be implemented as a modification of the photonic Carnot cycle based on a cavity in a micromaser setup in the seminal work by Scully et al.~\cite{scully2003extracting}: Instead of a beam of coherently-prepared three-level atoms (``phaseonium'') that constitute an effective thermal bath for the cavity-mode WM, we here suggest, following Ref.~\cite{dag2016multiatom}, to use a beam of suitably-entangled atom pairs passing through a cavity that may act as a squeezed-thermal bath for the same WM (Fig.~\ref{fig_implementation}a). The steady state of the cavity mode is then determined by a squeezing parameter $r$ and a temperature $T_\mathrm{h}$, which are both a function of the two-atom state~\cite{dag2016multiatom}. A major advantage of this method is that it allows for very high squeezing parameters. In order to extract the ergotropy that is stored in the cavity mode after its interaction with the squeezed bath and before its interaction with the cold bath (where it would be lost), a unitary transformation that ``unsqueezes'' the cavity field must be performed, e.g., as in Refs.~\cite{graham1987squeezing,agarwal1991exact,averbukh1994enhanced}, where the cavity-mode frequency is abruptly ramped up and then gradually ramped down (Fig.~\ref{fig_implementation}b). 
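The ergotropy stored in the cavity mode after the squeezing stroke can be checked numerically. The following sketch is our own consistency check (not part of the proposed implementation): it constructs a squeezed thermal state in a truncated Fock basis and compares its numerically extracted ergotropy with the closed-form value $\hbar\omega(2\bar{n}+1)\sinh^2 r$ obtained by applying the definitions of Appendix~\ref{app_ergotropy} to a squeezed thermal state; we set $\hbar\omega=1$.

```python
import numpy as np

def ergotropy(rho, energies):
    """W = Tr(rho H) minus the passive energy, i.e. descending
    eigenvalues of rho paired with ascending energies (Appendix A)."""
    p = np.sort(np.linalg.eigvalsh(rho))[::-1]   # descending populations
    E_n = np.sort(np.asarray(energies))          # ascending energies
    E = float(np.real(np.trace(rho @ np.diag(energies))))
    return E - float(p @ E_n)

dim, nbar, r = 60, 0.5, 0.2                      # truncation, occupation, squeezing

a = np.diag(np.sqrt(np.arange(1, dim)), 1)       # annihilation operator
K = 0.5 * r * (a @ a - a.T @ a.T)                # anti-Hermitian generator of S(r)
w, V = np.linalg.eigh(1j * K)                    # S = exp(K) via eigendecomposition
S = V @ np.diag(np.exp(-1j * w)) @ V.conj().T

p_th = (nbar / (nbar + 1)) ** np.arange(dim)
rho_th = np.diag(p_th / p_th.sum())              # thermal state
rho_sq = S @ rho_th @ S.conj().T                 # squeezed thermal state

energies = np.arange(dim) + 0.5                  # H = a^dag a + 1/2, hbar*omega = 1
W_num = ergotropy(rho_sq, energies)
W_ana = (2 * nbar + 1) * np.sinh(r) ** 2
assert abs(W_num - W_ana) < 1e-6                 # matches the closed form
assert abs(ergotropy(rho_th, energies)) < 1e-12  # a thermal state is passive
```

The check also confirms that the passive state of the squeezed thermal state is the underlying thermal state, so the entire squeezing-induced energy surplus is extractable by the unsqueezing unitary.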
\subsection{Constant Hamiltonian: An Otto-like cycle} Next, we consider a quantum Otto cycle~\cite{geva1992quantum,feldmann2004characteristics,quan2007quantum,delcampo2014more,uzdin2015equivalence,kosloff2017quantum} that consists of two isentropic strokes (adiabatic compression and decompression of the WM), two isochoric strokes (interaction with the baths at a fixed Hamiltonian) and an additional ergotropy-extraction stroke. This cycle amounts to setting $\omega_2=\omega_\mathrm{h}$ and $\omega_1=\omega_\mathrm{c}$ in Fig.~\ref{fig_carnot}. \par Since the Hamiltonian is now kept constant during the energising stroke, we have $\mathcal{E}^\prime_\mathrm{d,h}=\Delta E_\mathrm{pas,h}$, where $\Delta E_\mathrm{pas,h}$ is the change in passive energy during the hot stroke, and $\mathcal{E}_\mathrm{d,h}=\Delta E_\mathrm{pas,h}+\Delta \mathcal{W}_\mathrm{h}$, where $\Delta \mathcal{W}_\mathrm{h}$ is the change in ergotropy during that stroke. The efficiency of this Otto-like cycle is bounded by Eq.~\eqref{eq_etamax_gen}, \begin{equation}\label{eq_etamax_otto} \eta_\mathrm{max}^\mathrm{Otto}=1-\frac{T_\mathrm{c}}{T_\mathrm{h}}\frac{\Delta E_\mathrm{pas,h}}{\Delta E_\mathrm{pas,h}+\Delta \mathcal{W}_\mathrm{h}}\leq \eta_\Sigma, \end{equation} but this bound is only attained in the ``mechanical'' limit $\mathcal{E}^\prime_\mathrm{d,h}=\Delta E_\mathrm{pas,h}=0$, where only ergotropy is transferred from the non-thermal bath and no net entropy change occurs during the strokes. In this case the bound equals $1$, as one expects for mechanical engines. By contrast, the Carnot-like cycle always operates at maximum efficiency, even when both passive thermal energy and ergotropy are imparted by this bath. 
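The behaviour of the bound~\eqref{eq_etamax_otto} can be sketched numerically. We assume, for illustration only, that the WM enters the hot isochore in a thermal state with occupation $n_\mathrm{in}$ and relaxes to the squeezed thermal steady state with occupation $n_\mathrm{h}$ and squeezing $r$, so that $\Delta E_\mathrm{pas,h}=\hbar\omega_\mathrm{h}(n_\mathrm{h}-n_\mathrm{in})$ and $\Delta\mathcal{W}_\mathrm{h}=\hbar\omega_\mathrm{h}(2n_\mathrm{h}+1)\sinh^2 r$ (with $\hbar\omega_\mathrm{h}=1$); the variable names are ours and the explicit expressions of Appendix~\ref{app_figures} are not reproduced here.

```python
import math

def eta_otto_max(T_c, T_h, dE_pas, dW):
    """Bound of Eq. (eq_etamax_otto) for an Otto-like cycle."""
    return 1.0 - (T_c / T_h) * dE_pas / (dE_pas + dW)

# Assumed stroke bookkeeping (see lead-in): thermal input state with
# occupation n_in, squeezed thermal steady state (n_h, r) at the output.
T_c, T_h = 1.0, 3.0
n_in, n_h = 0.4, 1.2
eta_C = 1.0 - T_c / T_h

bounds = []
for r in (0.0, 0.25, 0.5, 0.75):
    dE_pas = n_h - n_in                        # change in passive energy
    dW = (2 * n_h + 1) * math.sinh(r) ** 2     # ergotropy imparted by the bath
    bounds.append(eta_otto_max(T_c, T_h, dE_pas, dW))

assert abs(bounds[0] - eta_C) < 1e-12          # r = 0: thermal bath -> Carnot
assert bounds == sorted(bounds)                # bound grows with squeezing
assert all(eta_C <= b <= 1.0 for b in bounds)
```

As in Fig.~\ref{fig_efficiency_otto}b, the bound starts at the Carnot value for a thermal bath ($r=0$) and grows monotonically towards unity with increasing squeezing.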
\par In general, any engine cycle wherein the interaction with the hot bath is isochoric (has constant Hamiltonian) and sufficiently long (for the WM to reach steady state) abides by the bound~\eqref{eq_etamax_otto}, which is lower than the bound $\eta_\Sigma$ imposed by the second law (Fig.~\ref{fig_efficiency_otto}). Moreover, the efficiency bound of such engines never falls below the Carnot bound, $\eta_\mathrm{max}^\mathrm{Otto}\geq\eta_\mathrm{C}$. \par \begin{figure} \caption{\textbf{Efficiency bounds for the Otto-like cycle.} Actual efficiency $\eta$ and alternative efficiency bounds (the explicit expressions are summarised in Appendix~\ref{app_figures}) for an Otto-like cycle implemented with a harmonic-oscillator working medium and a squeezed thermal bath as a function of (a) the frequency ratio and (b) the squeezing parameter. The bounds only hold in the regime $\mathcal{E}_\mathrm{d,c}\leq0$ (see text). Parameters: $T_\mathrm{h}=3T_\mathrm{c}$ and (a) squeezing parameter $r=0.5$ and (b) oscillator frequencies $\omega_\mathrm{c}/\omega_\mathrm{h}=0.5$.} \label{fig_efficiency_otto} \end{figure} \section{Discussion} Our analysis has been aimed at comparing the efficiency bounds and the conditions for their attainment in quantum engines energised by thermal and non-thermal baths. These respective bounds turn out to be very different since, unlike thermal baths, non-thermal baths may exchange both thermal (passive) energy and ergotropy with the working medium (WM). To this end we have revisited the first law of thermodynamics and identified as passive energy the part of the energy exchange with the bath that necessarily causes a change in the WM entropy [Eq.~\eqref{eq_def_heat}]. This division of the exchanged energy relies on the distinction between passive and non-passive states of the WM. Only the latter states store ergotropy that may be completely extracted in the form of work.
Our energetic division conceptually differs from the one involving ``housekeeping heat'' previously provided for classical systems~\cite{hatano2001steady}. It would be interesting to extend our analysis to situations where ``housekeeping heat'' has been considered in a quantum context~\cite{gardas2015thermodynamic,misra2015quantum}. \par Based on the distinction between passive and non-passive states, we have put forward a new estimate~\eqref{eq_DeltaS_Qth} of the entropy change in quantum relaxation processes, which turns out to be the key to understanding the limitations of quantum engines fuelled by arbitrary baths. Cyclic engines whose passive energy is altered by the baths are restricted in efficiency by limits on their entropy change. Yet, for a wide class of practically-relevant engines, including all engines whose energising stroke is either isochoric or Carnot-like, the restriction imposed by inequality~\eqref{eq_DeltaS_Qth} on the entropy change is stricter than what the second law~\eqref{eq_spohn_integrated} would allow. By contrast, the commonly used reversibility criterion is a global condition on the WM and the two baths combined that is imposed by the second law, and hence not necessarily a relevant characterisation of engine efficiency. \par An alternative formulation of our main insight is that, for any baths, entropy change limits the engine efficiency in the same way as in traditional heat engines---condition~\eqref{eq_condition_gen} is the same whether the energising bath is thermal or not. Namely, maximal efficiency is reached when (a)~no ergotropy (extractable work) is dumped into the cold bath and (b)~no entropy is generated within the engine, or, equivalently, minimal energy is dumped into the cold bath~\cite{kondepudibook}. For thermal engines, this criterion of minimal energy dumping and the reversibility criterion coincide, but the two criteria differ if the energising bath is non-thermal.
\par Another important insight is that the same efficiency bound~\eqref{eq_etamax_gen} ensues whether the WM is energised by a non-thermal bath or by a thermal bath (that supplies thermal energy) combined with a battery (that supplies ergotropy) provided the total energy imparted to the WM remains the same. This supports the description of non-thermal engines as hybrids of thermal (thermal-energy-fuelled) and ``mechanical'' (ergotropy-fuelled) engines~\cite{niedenzu2016operation}. \par Our theory provides a better understanding of the operation principles of quantum engines: These are shown not to follow only from the laws of thermodynamics, but require discrimination between different (passive and non-passive) quantum states of the system (WM) and the baths involved. The present generalisation of the treatment of standard thermal processes for quantum systems is not only the key to the construction of the most efficient hybrid engines that are unrestricted by the Carnot bound, as in the recent experimental implementation of an engine powered by a squeezed bath~\cite{klaers2017squeezed}. It may also open a new perspective on quantum-channel communications~\cite{mari2014quantum,depalma2016passive,qi2016thermal} where entropic constraints play a major role. \section*{Author contributions} W.\,N. conceived the idea. W.\,N. and A.G.\,K. performed the calculations. W.\,N. implemented and performed the numerical simulations. W.\,N., V.\,M., A.\,G., A.G.\,K. and G.\,K. contributed to the discussion and the interpretation of the results. All authors were involved in the discussion during the writing of the manuscript. W.\,N., A.G.\,K. and G.\,K. wrote the manuscript. \appendix \section{Non-passive states}\label{app_ergotropy} The energy $E$ of a state $\rho$ with respect to a Hamiltonian $H$ can be decomposed into ergotropy $\mathcal{W}$ and passive energy $E_\mathrm{pas}$.
Ergotropy is the maximum amount of work that can be extracted from the state by means of unitary transformations such that the Hamiltonians before and after the unitary coincide~\cite{pusz1978passive,lenard1978thermodynamical,allahverdyan2004maximal}. The passive energy, by contrast, cannot be extracted in the form of work. States that only contain passive energy are called passive states. \par Ergotropy is defined as \begin{equation}\label{eq_app_def_ergotropy} \mathcal{W}(\rho,H)\mathrel{\mathop:}=\operatorname{Tr}(\rho H)-\min_U\operatorname{Tr}(U\rho U^\dagger H)\geq0, \end{equation} where the minimisation is over the set of all possible unitary transformations. Consequently, any state $\rho$ can be written as $\rho=V_\rho\pi V_\rho^\dagger$, i.e., as a unitarily-transformed passive state $\pi$, where $V_\rho$ is the unitary that realises the minimum appearing on the r.h.s.\ of Eq.~\eqref{eq_app_def_ergotropy}. The energy of the state $\rho$ thus reads \begin{equation} E=E_\mathrm{pas}+\mathcal{W}=\operatorname{Tr}[\pi H]+\operatorname{Tr}[(\rho-\pi)H]. \end{equation} Explicitly, the passive state and its energy read \begin{subequations} \begin{align} \pi&\mathrel{\mathop:}=\sum_n r_n\proj{n}\label{eq_app_passive_state}\\ E_\mathrm{pas}&=\operatorname{Tr}[\pi H]=\sum_n r_n E_n,\label{eq_app_passive_energy} \end{align} \end{subequations} where $\{r_n\}$ are the ordered ($r_{n+1}\leq r_n\,\forall n$) eigenvalues of $\rho$ and $\{\ket{n}\}$ is the ordered ($E_{n+1}\geq E_n\,\forall n$) eigenbasis of $H$. When $H$ is non-degenerate, $\pi$ is unique. If $H$ is degenerate, its eigenbasis and, consequently, the passive state~\eqref{eq_app_passive_state}, may not be unique. However, the energies~\eqref{eq_app_passive_energy} of all passive states corresponding to $\rho$ are the same and equal the passive energy of $\rho$.
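In practice, the decomposition~\eqref{eq_app_passive_state}--\eqref{eq_app_passive_energy} amounts to sorting eigenvalues. A minimal sketch (the function and the example states are our own illustration):

```python
import numpy as np

def passive_decomposition(rho, H):
    """Split E = Tr(rho H) into passive energy and ergotropy by pairing
    descending eigenvalues of rho with ascending eigenvalues of H."""
    r = np.sort(np.linalg.eigvalsh(rho))[::-1]   # r_{n+1} <= r_n
    E_n = np.sort(np.linalg.eigvalsh(H))         # E_{n+1} >= E_n
    E = float(np.real(np.trace(rho @ H)))
    E_pas = float(r @ E_n)
    return E_pas, E - E_pas                      # (passive energy, ergotropy)

H = np.diag([0.0, 1.0, 2.0])                     # qutrit Hamiltonian
rho = np.diag([0.2, 0.3, 0.5])                   # population-inverted state
E_pas, W = passive_decomposition(rho, H)
assert abs(E_pas - 0.7) < 1e-12 and abs(W - 0.6) < 1e-12

# A passive state (descending populations) carries zero ergotropy ...
pi = np.diag([0.5, 0.3, 0.2])
assert abs(passive_decomposition(pi, H)[1]) < 1e-12

# ... and the passive energy is invariant under any unitary on rho,
# since it depends on the spectrum of rho alone.
U = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))[0]
E_pas_rot, _ = passive_decomposition(U @ rho @ U.T, H)
assert abs(E_pas_rot - E_pas) < 1e-9
```

Note that, unlike the passive energy, the ergotropy itself is not unitarily invariant: the unitary changes the total energy, which is precisely how work is extracted.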
\section{Majorisation relation}\label{app_majorisation} Assume $\rho(t^\prime)\succ\rho(t^{\prime\prime})$ for any $t^{\prime\prime}\geq t^\prime$ in some time interval $I$ ($t^\prime,t^{\prime\prime}\in I$), namely that $\rho(t^\prime)$ majorises~\cite{binder2015quantum,mari2014quantum} $\rho(t^{\prime\prime})$ in this interval, i.e., \begin{equation}\label{eq_app_majorisation} \sum_{m=1}^nr_m(t^\prime)\geq\sum_{m=1}^nr_m(t^{\prime\prime})\quad (1\leq n\leq N), \end{equation} where $r_{m+1}(\tau)\leq r_m(\tau)$ ($\tau\in I$) are the ordered eigenvalues of $\rho(\tau)$ [cf.\ Eq.~\eqref{eq_app_passive_state}] and $N$ is the dimension of the Hilbert space of the system. \par Let us consider the sign of the dissipative passive-energy change $\Delta E_\mathrm{pas}|_\mathrm{d}$ under this majorisation condition. We may write~\eqref{eq_def_heat} in the form \begin{equation} \Delta E_\mathrm{pas}|_\mathrm{d}(t) = \int_0^t\mathrm{d}\tau \operatorname{Tr}[\dot\pi(\tau)H(\tau)]=\int_{0}^{t}\mathrm{d} \tau \lim_{h\rightarrow 0}f(\tau,h), \end{equation} where we have defined \begin{equation} f(\tau,h)\mathrel{\mathop:}=\sum_{n=1}^N \frac{r_n(\tau+h)-r_n(\tau)}{h} E_n(\tau), \end{equation} where $E_{n+1}(\tau)\geq E_n(\tau)$ are the ordered eigenvalues of the Hamiltonian [cf.\ Eq.~\eqref{eq_app_passive_energy}]. Using summation by parts and the normalisation of the density matrix, this function may be rewritten as \begin{equation} f(\tau,h)=\sum_{n=1}^{N-1}[E_{n+1}(\tau)-E_n(\tau)]\sum_{m=1}^n\frac{r_m(\tau)-r_m(\tau+h)}{h}. \end{equation} The first factor is non-negative due to the monotonically-ordered energies. The second factor is also non-negative if Eq.~\eqref{eq_app_majorisation} holds in the entire integration domain $[0,t]$. In this case, the majorisation relation implies $\Delta E_\mathrm{pas}|_\mathrm{d}(t)\geq 0$. \par Let us now turn to the sign of the entropy change. 
If $\rho_1\succ\rho_2$, then $\mathcal{S}(\rho_2)\geq\mathcal{S}(\rho_1)$~\cite{allahverdyan2004maximal}. Hence, we have the relation \begin{multline}\label{eq_majorisation_Q_DeltaS_succ} \rho(t^\prime)\succ\rho(t^{\prime\prime})\quad\forall\ 0\leq t^\prime\leq t^{\prime\prime}\leq t\\\Rightarrow\quad \Delta E_\mathrm{pas}|_\mathrm{d}(t)\geq 0\,\wedge\, \Delta\mathcal{S}(t)\geq 0, \end{multline} where $\Delta\mathcal{S}(t)=\mathcal{S}(\rho(t))-\mathcal{S}(\rho_0)$. Similarly, one can show that the opposite relation holds, $\rho(t^\prime)\prec\rho(t^{\prime\prime})\Rightarrow \Delta E_\mathrm{pas}|_\mathrm{d}(t)\leq 0\,\wedge\, \Delta\mathcal{S}(t)\leq 0$. When the Hamiltonian is non-degenerate, $\Delta E_\mathrm{pas}|_\mathrm{d}(t)$ and $\Delta\mathcal{S}(t)$ can be shown to vanish iff the passive state corresponding to $\rho(\tau)$ is constant (i.e., the evolution of $\rho(\tau)$ is unitary) for $\tau \in [0,t]$. \par For the case of a constant Hamiltonian, relation~\eqref{eq_majorisation_Q_DeltaS_succ} was obtained in Ref.~\cite{binder2015quantum}. In this case, $\Delta E_\mathrm{pas}|_\mathrm{d}(t)=\Delta E_\mathrm{pas}(t)$ and hence Eq.~\eqref{eq_majorisation_Q_DeltaS_succ} implies that the passive energy of $\rho_2$ is greater than or equal to the passive energy of $\rho_1$ if $\rho_1\succ\rho_2$ or, equivalently, if $\pi_1\succ\pi_2$, where $\pi_i$ is the passive state corresponding to $\rho_i$ ($i=1,2$). \section{Master equation for a squeezed bath}\label{app_master_equation_squeezed_bath} In the interaction picture, the master equation for a harmonic oscillator that interacts with a squeezed thermal bath reads~\cite{gardinerbook} \begin{multline}\label{eq_app_master_squeezing} \dot\rho=\kappa(N+1)\mathcal{D}(a,a^\dagger)[\rho]+\kappa N\mathcal{D}(a^\dagger,a)[\rho]\\-\kappa M\mathcal{D}(a,a)[\rho]-\kappa M\mathcal{D}(a^\dagger,a^\dagger)[\rho], \end{multline} where $\mathcal{D}(A,B)[\rho]\mathrel{\mathop:}= 2A\rho B-BA\rho-\rho BA$. 
Here $\kappa$ denotes the decay rate and (w.l.o.g.\ we have set the squeezing phase to zero) \begin{subequations} \begin{align}\label{eq_master_squeezing_standard_coefficients} N&\mathrel{\mathop:}=\bar{n}(\cosh^2r+\sinh^2r)+\sinh^2r\\ M&\mathrel{\mathop:}=-\cosh r\sinh r (2\bar{n}+1), \end{align} \end{subequations} where $\bar{n}=[\exp(\hbar\omega/[k_\mathrm{B} T])-1]^{-1}$ is the thermal excitation number of the bath at the oscillator frequency $\omega$ and $r$ the squeezing parameter. The results in Fig.~\ref{fig_squeezed_cavity} were obtained by a numerical solution of Eq.~\eqref{eq_app_master_squeezing} with $\bar{n}=0$. \par Defining $b\mathrel{\mathop:}= S(r)a S^\dagger(r)=a\cosh r+a^\dagger\sinh r$, where $S(r)=\exp\left[\frac{r}{2}a^2-\frac{r}{2}(a^\dagger)^2\right]$ is the unitary squeezing operator, the master equation~\eqref{eq_app_master_squeezing} can be cast into the Lindblad form~\cite{breuerbook,ekert1990canonical} \begin{equation} \dot\rho=\kappa(\bar{n}+1)\mathcal{D}(b,b^\dagger)[\rho]+\kappa \bar{n}\mathcal{D}(b^\dagger,b)[\rho]. \end{equation} Its steady-state solution is the squeezed thermal state $S(r)\left[Z^{-1}\exp\left(-\hbar\omega a^\dagger a/[k_\mathrm{B} T]\right)\right]S^\dagger(r)$. \section{Entropy production $\Sigma$}\label{app_spohn} Spohn's inequality for the entropy-production rate reads~\cite{spohn1978entropy} \begin{equation}\label{eq_spohn} \sigma\mathrel{\mathop:}=-\frac{\mathrm{d}}{\mathrm{d} t}\Srel{\rho(t)}{\rho_\mathrm{ss}}\geq0, \end{equation} where $\Srel{\rho(t)}{\rho_\mathrm{ss}}\mathrel{\mathop:}=k_\mathrm{B}\operatorname{Tr}[\rho(t)(\ln \rho(t)-\ln \rho_\mathrm{ss})]$. Inequality~\eqref{eq_spohn} holds for any $\rho(t)$ that evolves according to a Lindblad master equation~\cite{breuerbook} \begin{equation}\label{eq_app_master} \dot\rho=\mathcal{L}\rho, \end{equation} $\mathcal{L}$ being the Liouvillian (Lindblad operator). The steady-state solution of Eq.~\eqref{eq_app_master} obeys $\mathcal{L}\rho_\mathrm{ss}=0$. 
Then, upon defining $\Sigma\mathrel{\mathop:}=\int_0^{\infty}\sigma\mathrm{d} t$, the time-integrated inequality~\eqref{eq_spohn} yields \begin{equation}\label{eq_spohn_integrated_app} \Sigma=\Srel{\rho_0}{\rho_\mathrm{ss}}\geq0. \end{equation} \par Inequality~\eqref{eq_spohn} requires the coupling between the system and the bath to be sufficiently weak and the bath relaxation to be sufficiently fast to allow for the perturbative derivation of the Lindblad master equation. In the spirit of traditional thermodynamics, the Lindblad approach excludes correlations or entanglement between the system and the bath~\cite{breuerbook}. In general, Eq.~\eqref{eq_spohn} may not hold for non-Markovian baths~\cite{erez2008thermodynamic}. In contrast, since the relative entropy is non-negative, Eq.~\eqref{eq_spohn_integrated_app} holds for arbitrary coupling between the system and the bath~\cite{schloegl1966zur,deffner2011nonequilibrium}. \par As shown in Refs.~\cite{alicki1979quantum,alipour2016correlations}, Spohn's inequality~\eqref{eq_spohn} can be generalised to time-dependent Hamiltonians under the condition that $H(t)$ varies slowly compared to the relaxation time of the reservoir~\cite{alicki1979quantum}. The corresponding master equation then reads \begin{equation} \dot\rho(t)=\mathcal{L}(t)\rho(t), \end{equation} where $\mathcal{L}(t)$ is the same Liouvillian as in Eq.~\eqref{eq_app_master}, but with time-dependent coefficients (cf.\ Ref.~\cite{alicki1979quantum}). Its invariant state $\rho_\mathrm{ss}(t)$ satisfies $\mathcal{L}(t)\rho_\mathrm{ss}(t)=0$. The generalisation of inequality~\eqref{eq_spohn} then reads~\cite{alipour2016correlations} \begin{equation}\label{eq_spohn_L_t_diff} \sigma=\left.-\frac{\mathrm{d}}{\mathrm{d} s}\Srel{e^{s\mathcal{L}(t)}\rho(t)}{\rho_\mathrm{ss}(t)}\right|_{s=0}\geq0.
\end{equation} Upon integration, Eq.~\eqref{eq_spohn_L_t_diff} evaluates to the inequality \begin{equation}\label{eq_spohn_L_t} \Sigma=\Delta\mathcal{S}+k_\mathrm{B}\int_0^\infty\operatorname{Tr}\Big[\big(\mathcal{L}(t)\rho(t)\big)\ln\rho_\mathrm{ss}(t)\Big]\mathrm{d} t\geq0 \end{equation} for the entropy change $\Delta\mathcal{S}=\mathcal{S}(\rho_\mathrm{ss}(\infty))-\mathcal{S}(\rho_0)$. In the case of a constant Hamiltonian, Eq.~\eqref{eq_spohn_L_t} reduces to Eq.~\eqref{eq_spohn_integrated_app}. \par If the Liouvillian describes the interaction with a thermal bath at temperature $T$, i.e., $\mathcal{L}(t)=\mathcal{L}_\mathrm{th}(t)$, then $\rho_\mathrm{ss}(t)=\rho_\mathrm{th}(t)$, where \begin{equation}\label{eq_app_rhoth_t} \rho_\mathrm{th}(t)=\frac{1}{Z(t)}\exp\left(-\frac{H(t)}{k_\mathrm{B} T}\right) \end{equation} is a thermal state for the (instantaneous) Hamiltonian $H(t)$. Equation~\eqref{eq_spohn_L_t} then yields \begin{equation}\label{eq_spohn_t_integrated} \Delta\mathcal{S}\geq\frac{1}{T}\int_{0}^{\infty}\operatorname{Tr}\left[\dot{\rho}(t)H(t)\right]\mathrm{d} t= \frac{\mathcal{E}_\mathrm{d}}{T}, \end{equation} with the dissipated energy $\mathcal{E}_\mathrm{d}$ defined in Eq.~\eqref{eq_def_DeltaEdiss}. \section{Entropy production $\Sigma$ for non-thermal baths}\label{app_sigma_non-thermal} Let us consider $\Sigma$ in the case of a constant Hamiltonian [Eq.~\eqref{eq_spohn_integrated_app}] for a non-thermal bath that gives rise to a non-passive steady state $\rho_\mathrm{ss}=U\pi_\mathrm{ss}U^\dagger$ via the Liouvillian $\mathcal{L}_U$. This $\Sigma$ can be related to that of a passive state, as follows. Since the relative entropy is invariant with respect to a unitary transformation of its arguments, Eq.~\eqref{eq_spohn_integrated_app} can be recast in the form \begin{equation}\label{eq_sigma_nonthermal_bath} \Sigma=\Srel{\tilde\rho_0}{\pi_\mathrm{ss}}\geq0, \end{equation} where $\tilde\rho_0\mathrel{\mathop:}= U^\dagger\rho_0U$. 
Thus, $\Sigma$ equals the entropy production obtained under the relaxation of an open system from the unitarily-transformed state $\tilde\rho_0$ to the passive state $\pi_\mathrm{ss}$. \par In particular, when $\pi_\mathrm{ss}$ is the thermal state $\rho_\mathrm{th}$, $\Sigma$ equals the entropy production obtained under thermalisation of the system starting from the state $\tilde\rho_0$ and we have \begin{equation}\label{eq_sigma_rhotilde_thermal_bath} \Sigma=\Delta\mathcal{S}-\frac{\tilde{\mathcal{E}}_\mathrm{d}}{T}\geq0, \end{equation} where $\tilde{\mathcal{E}}_\mathrm{d}$ is the change in the energy $\tilde E=\operatorname{Tr}[\tilde\rho H]$ of the transformed state $\tilde\rho$. \par Consider now a slowly-varying $H(t)$ such that inequality~\eqref{eq_spohn_L_t} holds. The invariant state of $\mathcal{L}_U(t)$ now reads $\rho_\mathrm{ss}(t)=U\rho_\mathrm{th}(t)U^\dagger$, with the (instantaneous) thermal state~\eqref{eq_app_rhoth_t}. Inequality~\eqref{eq_spohn_L_t} then yields \begin{equation}\label{eq_sigma_t_LU_integrated} \Sigma=\Delta\mathcal{S}-\frac{1}{T}\int_{0}^{\infty}\operatorname{Tr}\left[U^\dagger\dot{\rho}(t)UH(t)\right]\mathrm{d} t\geq 0, \end{equation} where the integral is the generalisation of $\tilde{\mathcal{E}}_\mathrm{d}$ from inequality~\eqref{eq_sigma_rhotilde_thermal_bath}. It is shown in Appendix~\ref{app_liouvillian} that $U^\dagger\dot{\rho}(t)U$ equals a thermal Liouvillian acting on a unitarily-transformed state [Eq.~\eqref{eq_liouvillian_U_th_t}]. Hence, also for a time-dependent Hamiltonian, the evaluation of $\Sigma$ in a non-thermal bath reduces to the case of a transformed state that decays via contact with a thermal bath. \section{Optimality of the inequality for relative entropy}\label{app_sigmap_optimal} Equation~\eqref{eq_srel} provides a generally tighter inequality for $\Delta\mathcal{S}$ than Eq.~\eqref{eq_sigma_nonthermal_bath} [or~\eqref{eq_spohn_integrated_app}].
Indeed, Eq.~\eqref{eq_sigma_nonthermal_bath} can be written as $\Delta\mathcal{S}\geq\mathcal{S}(\pi_\mathrm{ss})-k_\mathrm{B} A$, where $A=-\operatorname{Tr}[\tilde\rho_0\ln\pi_\mathrm{ss}]$. This inequality is the tightest (i.e., its r.h.s.\ is maximal) on the set of all states $\tilde\rho_0$ which differ from $\rho_0$ by a unitary transformation, when $A$ is minimal on this set. Note that $\pi_\mathrm{ss}$ commutes with the Hamiltonian and the eigenvalues of $-\ln\pi_\mathrm{ss}$ do not decrease as a function of the eigenvalues of the Hamiltonian. Thus, $-\ln\pi_\mathrm{ss}$ can be considered, in a sense, as an effective ``Hamiltonian'', for which $A$ is the average ``energy'' in the state $\tilde\rho_0$. The average energy amongst unitarily-accessible states is known to be minimised in the passive state. When $H$ is non-degenerate, then the passive state $\pi_0$ corresponding to $H$ is also the passive state corresponding to the effective ``Hamiltonian'' $-\ln\pi_\mathrm{ss}$; hence, $A$ is minimal for $\tilde\rho_0=\pi_0$. \par By contrast, if $H$ is degenerate there is generally no unique passive state (Appendix~\ref{app_ergotropy}). In this case, $A$ is minimal not for each $\pi_0$ but iff $\pi_0$ is also a passive state of the effective ``Hamiltonian'', i.e., iff $\pi_0$ commutes with $\pi_\mathrm{ss}$. One can show that there exists, at least, one such state $\pi_0$. Thus, Eq.~\eqref{eq_srel} provides the tightest inequality for $\Delta\mathcal{S}$ among all inequalities of the form~\eqref{eq_sigma_nonthermal_bath} or~\eqref{eq_spohn_integrated_app}. \section{Unitary equivalence of non-thermal and thermal baths}\label{app_liouvillian} The time evolution of an initial state $\rho_0$ under the Liouvillian $\mathcal{L}_U$ as defined in Appendix~\ref{app_sigma_non-thermal} may be replaced by an alternative time evolution involving a thermal bath. 
These two equivalent evolution paths can be lucidly represented by the diagram (see also~\cite{ekert1990canonical} and Appendix~\ref{app_master_equation_squeezed_bath}) \begin{equation}\label{eq_cd} \begin{tikzcd} \rho_0 \arrow[r,"\mathcal{L}_U",mapsto] \arrow[d,"U^\dagger" left,mapsto,bend right,dashed] & \rho(t)=U\tilde\rho(t)U^\dagger \\ \tilde{\rho}_0=U^\dagger\rho_0 U \arrow[r,"\mathcal{L}_\mathrm{th}",mapsto,bend right,dashed] & \tilde\rho(t) \arrow[u,"U" right,mapsto,bend right,dashed] \end{tikzcd}. \end{equation} According to Eq.~\eqref{eq_cd}, the evolution of $\rho_0$ induced by a non-thermal bath towards $\rho_\mathrm{ss}$ (solid arrow) may be replaced by a three-stage process (dashed arrows) wherein the system is in contact with a thermal bath only in the second step. \par This may be shown as follows. The Liouvillian $\mathcal{L}_U$ in the interaction picture may be cast into the general Lindblad form~\cite{breuerbook} \begin{equation}\label{eq_LU_Lalpha} \mathcal{L}_U\rho=\sum_\alpha \frac{\gamma_\alpha}{2}\left[2 L_\alpha\rho L_\alpha^\dagger - L_\alpha^\dagger L_\alpha \rho - \rho L_\alpha^\dagger L_\alpha\right]. \end{equation} We now consider the unitarily transformed master equation \begin{equation}\label{eq_LU_transformed} U^\dagger\left(\mathcal{L}_U\rho\right)U=\sum_\alpha \frac{\gamma_\alpha}{2}\left[2 \tilde{L}_\alpha\tilde{\rho} \tilde{L}_\alpha^\dagger - \tilde{L}_\alpha^\dagger \tilde{L}_\alpha \tilde{\rho} - \tilde{\rho} \tilde{L}_\alpha^\dagger \tilde{L}_\alpha\right], \end{equation} where we have defined $\tilde{\rho}\mathrel{\mathop:}= U^\dagger\rho U$ and $\tilde{L}_\alpha\mathrel{\mathop:}= U^\dagger L_\alpha U$. The right-hand side of Eq.~\eqref{eq_LU_transformed} is thus again a Lindblad superoperator, $U^\dagger(\mathcal{L}_U\rho)U=\mathrel{\mathop:} \tilde{\mathcal{L}}\tilde{\rho}$. 
Now, since $\rho_\mathrm{ss}=U\rho_\mathrm{th}U^\dagger$ is the steady-state solution of $\mathcal{L}_U$, the state $\tilde{\rho}_\mathrm{ss}\mathrel{\mathop:}= U^\dagger\rho_\mathrm{ss}U= \rho_\mathrm{th}$ must be the steady state of $\tilde{\mathcal{L}}$. Hence, $\tilde{\mathcal{L}}$ has to be a thermal generator, i.e., $\tilde{\mathcal{L}}=\mathcal{L}_\mathrm{th}$, and therefore \begin{equation}\label{eq_LU_Lth} U^\dagger(\mathcal{L}_U\rho)U=\mathcal{L}_\mathrm{th}\left(U^\dagger\rho U\right). \end{equation} Hence, the solution of $\dot\rho=\mathcal{L}_U\rho$ may be written as \begin{equation} \rho(t)=U\left[e^{t\mathcal{L}_\mathrm{th}}\left(U^\dagger\rho_0 U\right)\right]U^\dagger. \end{equation} \par If $H(t)$ is slowly varying in time and commutes with itself at all times, we have time-dependent $\gamma_\alpha(t)$ in Eq.~\eqref{eq_LU_Lalpha}~\cite{alicki1979quantum}. Since the above derivation does not depend on these rates, we have \begin{equation}\label{eq_liouvillian_U_th_t} U^\dagger\left(\mathcal{L}_U(t)\rho(t)\right)U=\mathcal{L}_\mathrm{th}(t)\left(U^\dagger\rho(t) U\right). \end{equation} \section{Entropy change for time-dependent Hamiltonians}\label{app_time-dependent_Hamiltonian} Equation~\eqref{eq_DeltaS_Qth} for a thermal bath was derived based on the alternative (dashed) path \begin{equation}\label{eq_app_Lth_t_alternative_paths} \begin{tikzcd}[row sep=huge, column sep = 6em] \rho_0 \arrow[r,"\dot\rho(t)=\mathcal{L}_\mathrm{th}(t)\rho(t)","\mathcal{E}_\mathrm{d}"',mapsto] \arrow[d,"\mathrm{unitary}"',mapsto,bend right,dashed] & \rho_\mathrm{th}(\infty) \\ \varrho_0=\pi_0 \arrow[ru,"\dot\varrho(t)=\mathcal{L}_\mathrm{th}(t)\varrho(t)"',"\mathcal{E}_\mathrm{d}^\prime",mapsto,bend right,dashed] \end{tikzcd}. 
\end{equation} The energies $\mathcal{E}_\mathrm{d}$ (along the original path) and $\mathcal{E}_\mathrm{d}^\prime$ (along the alternative path) are those that appear on the r.h.s.\ of the entropic inequalities~\eqref{eq_DeltaS_QdT_DeltaW} and~\eqref{eq_DeltaS_Qth}. \par The $\Sigma$-inequality for the situation where the invariant state is non-passive is given in Eq.~\eqref{eq_sigma_t_LU_integrated} and may be recast in the form \begin{equation}\label{eq_eq_sigma_t_LU_integrated_thermal} \Delta\mathcal{S}\geq\frac{1}{T}\int_{0}^{\infty}\operatorname{Tr}\Big[U^\dagger[\mathcal{L}_U(t)\rho(t)]UH(t)\Big]\mathrm{d} t. \end{equation} Owing to Eq.~\eqref{eq_liouvillian_U_th_t}, this inequality is equivalent to \begin{equation}\label{eq_LU_Lth_integral} \Delta\mathcal{S}\geq\frac{1}{T}\int_{0}^{\infty}\operatorname{Tr}\Big[[\mathcal{L}_\mathrm{th}(t)\tilde\rho(t)]H(t)\Big]\mathrm{d} t, \end{equation} where $\tilde\rho(t)\mathrel{\mathop:}= U^\dagger\rho(t)U$ and $\mathcal{L}_\mathrm{th}(t)$ is a thermal Liouvillian with the same temperature and the same $H(t)$ as in $\mathcal{L}_U(t)$. The problem of a state $\rho(t)$ that evolves subject to a non-thermal bath has thus been reduced to the problem of a state $\tilde\rho(t)$ that evolves according to a thermal bath. This is the situation considered in the original (solid) path in Eq.~\eqref{eq_app_Lth_t_alternative_paths} upon replacing $\rho(t)$ by $\tilde\rho(t)$ there. This yields again Eq.~\eqref{eq_DeltaS_Qth}, thus extending it to the case of a non-passive invariant state. \par In the general case that $\pi_\mathrm{ss}(t)$ is not a thermal state, inequality~\eqref{eq_eq_sigma_t_LU_integrated_thermal} is replaced by \begin{equation} \Delta\mathcal{S}\geq-k_\mathrm{B}\int_{0}^{\infty}\operatorname{Tr}\Big[U^\dagger[\mathcal{L}_U(t)\rho(t)]U\ln\pi_\mathrm{ss}(t)\Big]\mathrm{d} t. 
\end{equation} One can then proceed as above, but $\mathcal{L}_\mathrm{th}(t)$ is then replaced by a ``passive'' Liouvillian $\mathcal{L}_\mathrm{pas}(t)$ whose invariant state is $\pi_\mathrm{ss}(t)$. The resulting inequality for $\Delta\mathcal{S}$ [the generalisation of Eq.~\eqref{eq_DeltaS_Qth}, i.e., the counterpart of Eq.~\eqref{eq_srel}] then reads, \begin{equation}\label{eq_app_DeltaS_integral_passive} \Delta\mathcal{S}\geq -k_\mathrm{B}\int_{0}^{\infty}\operatorname{Tr}\Big[[\mathcal{L}_\mathrm{pas}(t)\varrho(t)]\ln\pi_\mathrm{ss}(t)\Big]\mathrm{d} t, \end{equation} where $\varrho(0)=\pi_0$. Note that the latter integral cannot be identified with energy transfer. Equation~\eqref{eq_app_DeltaS_integral_passive} holds also for the case of a passive invariant state $\rho_\mathrm{ss}(t)=\pi_\mathrm{ss}(t)$, where now $\mathcal{L}_\mathrm{pas}(t)=\mathcal{L}(t)$. \section{Derivation of the efficiency bound}\label{app_efficiency} Energy conservation [Eq.~\eqref{eq_first_law}] over a cycle yields \begin{equation} \mathcal{E}_\mathrm{d,c}+\mathcal{E}_\mathrm{d,h}+W=0, \end{equation} where $\mathcal{E}_\mathrm{d,c}$ ($\mathcal{E}_\mathrm{d,h}$) is the dissipative energy change of the WM due to its interaction with the cold thermal (hot non-thermal) bath (Fig.~\ref{fig_engine}). As mentioned in the main text, we assume that the WM is thermal and hence passive prior to its interaction with the cold thermal bath. \par The efficiency of the engine is defined as the ratio of the extracted work to the invested energy (passive thermal energy and ergotropy) $\mathcal{E}_\mathrm{d,h}=\int_0^\infty\operatorname{Tr}[(\mathcal{L}_U(t)\rho(t))H(t)]\mathrm{d} t$ provided by the non-thermal bath, yielding \begin{equation}\label{eq_app_eta} \eta\mathrel{\mathop:}=\frac{-W}{\mathcal{E}_\mathrm{d,h}}=1+\frac{\mathcal{E}_\mathrm{d,c}}{\mathcal{E}_\mathrm{d,h}}. 
\end{equation} This expression holds for $\mathcal{E}_\mathrm{d,c}\leq0$ and $\mathcal{E}_\mathrm{d,h}\geq0$; the opposite case is discussed below. From condition~\eqref{eq_condition_gen} it then follows that \begin{equation} \mathcal{E}_\mathrm{d,c}\leq -\frac{T_\mathrm{c}}{T_\mathrm{h}}\mathcal{E}_\mathrm{d,h}^\prime. \end{equation} Inserting this relation into~\eqref{eq_app_eta} yields the efficiency bound~\eqref{eq_etamax_gen}. \par The efficiency bound~\eqref{eq_etamax_gen} may be generalised to the case where the passive state of the working medium is not thermal after the interaction with the non-thermal bath. Condition~\eqref{eq_condition_gen} is then, following Eq.~\eqref{eq_app_DeltaS_integral_passive}, replaced by \begin{equation} \frac{\mathcal{E}_\mathrm{d,c}}{T_\mathrm{c}}-k_\mathrm{B}\int_{0}^{\infty}\operatorname{Tr}\Big[[\mathcal{L}_\mathrm{pas}(t)\varrho(t)]\ln\pi_\mathrm{ss}(t)\Big]\mathrm{d} t\leq 0 \end{equation} and we then find \begin{equation} \eta\leq 1+\frac{k_\mathrm{B} T_\mathrm{c}}{\mathcal{E}_\mathrm{d,h}}\int_{0}^{\infty}\operatorname{Tr}\Big[[\mathcal{L}_\mathrm{pas}(t)\varrho(t)]\ln\pi_\mathrm{ss}(t)\Big]\mathrm{d} t, \end{equation} where the integral is evaluated for the energising stroke. \par If $\mathcal{E}_\mathrm{d,c}>0$ ($\mathcal{E}_\mathrm{d,h}^\prime<0$), then the cold bath also provides energy, which has to be taken into account in the efficiency. The latter now reads~\cite{niedenzu2016operation} \begin{equation}\label{eq_app_eta_1} \eta=\frac{-W}{\mathcal{E}_\mathrm{d,h}+\mathcal{E}_\mathrm{d,c}}=\frac{\mathcal{E}_\mathrm{d,h}+\mathcal{E}_\mathrm{d,c}}{\mathcal{E}_\mathrm{d,h}+\mathcal{E}_\mathrm{d,c}}=1, \end{equation} which cannot be further restricted by any inequality for $\Delta\mathcal{S}$. \par We now derive the efficiency bound that follows from the reversibility condition~\eqref{eq_spohn_integrated}.
The requirement of vanishing entropy change over a cycle then yields \begin{equation} \frac{\mathcal{E}_\mathrm{d,c}}{T_\mathrm{c}}+\frac{\tilde{\mathcal{E}}_\mathrm{d,h}}{T_\mathrm{h}}\leq0, \end{equation} where $\tilde{\mathcal{E}}_\mathrm{d,h}$ [the integral in Eq.~\eqref{eq_sigma_t_LU_integrated}] is the energy change during the interaction with the thermal bath along the dashed path in Eq.~\eqref{eq_cd}. Consequently, according to this criterion the efficiency~\eqref{eq_app_eta} is bounded by \begin{equation}\label{eq_eta_bound_spohn} \eta\leq 1-\frac{T_\mathrm{c}}{T_\mathrm{h}}\frac{\tilde{\mathcal{E}}_\mathrm{d,h}}{\mathcal{E}_\mathrm{d,h}}=\mathrel{\mathop:}\eta_\Sigma. \end{equation} This bound surpasses $1$ if $\tilde{\mathcal{E}}_\mathrm{d,h}<0$, which, e.g., is the case if the bath is ``over-squeezed'': This means that, due to the excessive bath squeezing, the interaction with the thermal bath along the alternative path of Eq.~\eqref{eq_cd} decreases the energy while that with the non-thermal bath along the initial path increases it. \par If the Hamiltonian is constant during the energising stroke, then $\tilde{\mathcal{E}}_\mathrm{d,h}=\Delta E_\mathrm{pas,h}|_\mathrm{d}+\widetilde{\Delta\mathcal{W}|_\mathrm{d}}$, where $\widetilde{\Delta\mathcal{W}|_\mathrm{d}}\leq 0$ is the ergotropy lost to the effective thermal bath in the second step of the alternative path in Eq.~\eqref{eq_cd}. A comparison of Eq.~\eqref{eq_eta_bound_spohn} with our bound Eq.~\eqref{eq_etamax_otto} for a constant Hamiltonian then yields $\eta_\mathrm{max}^\mathrm{Otto}\leq\eta_\Sigma$. \section{Maximal efficiency of multi-bath quantum engines}\label{app_multibath} We consider a cycle operating between $N$ thermal baths (either heat sources or heat dumps) and $M$ non-thermal baths that are assumed to energise the engine. Namely, the non-thermal baths provide both passive energy and ergotropy to the working medium. 
As before (see main text and Appendix~\ref{app_efficiency}) we assume that the strokes are sufficiently long such that Eq.~\eqref{eq_DeltaS_Qth} is valid and that the ergotropy of the working medium is extracted before every stroke that involves a bath. \par For this situation, Eq.~\eqref{eq_condition_gen} can be generalised to \begin{equation}\label{eq_app_condition_multibath} 0\geq\sum_{i=1}^M \frac{\mathcal{E}_{\mathrm{d,h},i}^{\prime}}{T_{\mathrm{h},i}}+\sum_{\{1\leq i\leq N|\mathcal{E}_{\mathrm{d,}i}\geq 0\}}\frac{\mathcal{E}_{\mathrm{d,}i}}{T_i}+\sum_{\{1\leq i\leq N|\mathcal{E}_{\mathrm{d,}i}\leq 0\}}\frac{\mathcal{E}_{\mathrm{d},i}}{T_i}. \end{equation} Here the temperatures of the thermal baths are denoted by $T_i$ and the temperature parameters of the non-thermal baths by $T_{\mathrm{h},i}$. Note that under the assumptions made above $\mathcal{E}_{\mathrm{d,h},i}^{\prime}\geq0$ and that for thermal baths $\mathcal{E}_{\mathrm{d,}i}\equiv\mathcal{E}_{\mathrm{d,}i}^\prime$. \par By introducing the minimum and maximum temperatures $T_\mathrm{min}\leq\{T_i,T_{\mathrm{h},i}\}\leq T_\mathrm{max}$, we obtain~\cite{schwablbook} \begin{multline}\label{eq_app_condition_multibath_minmaxtemp} 0\geq\sum_{i=1}^M \frac{\mathcal{E}_{\mathrm{d,h},i}^{\prime}}{T_{\mathrm{h},i}}+\sum_{\{1\leq i\leq N|\mathcal{E}_{\mathrm{d,}i}^\prime\geq 0\}}\frac{\mathcal{E}_{\mathrm{d,}i}^\prime}{T_i}+\sum_{\{1\leq i\leq N|\mathcal{E}_{\mathrm{d,}i}\leq 0\}}\frac{\mathcal{E}_{\mathrm{d},i}}{T_i}\\\geq\frac{\sum_{i=1}^M \mathcal{E}_{\mathrm{d,h},i}^{\prime}+\sum_{\{i|\mathcal{E}_{\mathrm{d},i}^\prime\geq 0\}}\mathcal{E}_{\mathrm{d,}i}^\prime}{T_\mathrm{max}}+\frac{\sum_{\{i|\mathcal{E}_{\mathrm{d,}i}\leq 0\}}\mathcal{E}_{\mathrm{d,}i}}{T_\mathrm{min}}\\=\mathrel{\mathop:} \frac{\mathcal{E}_\mathrm{d,in}^\prime}{T_\mathrm{max}}+\frac{\mathcal{E}_\mathrm{d,out}}{T_\mathrm{min}}. 
\end{multline} Hence, we have the relation \begin{equation}\label{eq_app_Eout} \mathcal{E}_\mathrm{d,out}\leq-\frac{T_\mathrm{min}}{T_\mathrm{max}}\mathcal{E}^\prime_\mathrm{d,in}. \end{equation} \par The efficiency of the multi-bath engine is \begin{equation}\label{eq_app_eta_multibath} \eta=1+\frac{\mathcal{E}_\mathrm{d,out}}{\mathcal{E}_\mathrm{d,in}}, \end{equation} where \begin{equation} \mathcal{E}_\mathrm{d,in}\mathrel{\mathop:}=\sum_{i=1}^M \mathcal{E}_{\mathrm{d,h},i}+\sum_{\{1\leq i\leq N|\mathcal{E}_{\mathrm{d,}i}\geq 0\}}\mathcal{E}_{\mathrm{d,}i} \end{equation} is the total energy that the working medium obtains from the energising baths during a cycle. Owing to Eq.~\eqref{eq_app_Eout}, the efficiency~\eqref{eq_app_eta_multibath} is bounded by \begin{equation}\label{eq_app_etamax_multibath} \eta\leq 1-\frac{T_\mathrm{min}}{T_\mathrm{max}}\frac{\mathcal{E}^\prime_\mathrm{d,in}}{\mathcal{E}_\mathrm{d,in}}. \end{equation} Note that the equality sign in Eq.~\eqref{eq_app_etamax_multibath} is only fulfilled if both equality signs in Eq.~\eqref{eq_app_condition_multibath_minmaxtemp} hold. In particular, Eq.~\eqref{eq_app_etamax_multibath} is a strict inequality in the multi-bath case, i.e., if more than two temperatures appear in Eq.~\eqref{eq_app_condition_multibath}. \par Inequality~\eqref{eq_app_etamax_multibath} is the generalisation of Eq.~\eqref{eq_etamax_gen} to more than one energising bath. The efficiency of multi-bath engines is thus always lower than the maximum efficiency of a two-bath engine that operates between a cold thermal bath at temperature $T_\mathrm{min}$ and a hot non-thermal bath at temperature parameter $T_\mathrm{max}$ that results in the same ratio $\mathcal{E}_\mathrm{d,in}^\prime/\mathcal{E}_\mathrm{d,in}$ of the input energies.
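The chain of inequalities behind Eq.~\eqref{eq_app_etamax_multibath} can be probed numerically. The sketch below uses assumed, purely illustrative temperatures and energies (none taken from this work): the energising baths are drawn at random, and the heat dump is chosen such that the entropy condition~\eqref{eq_app_condition_multibath} is saturated; the resulting efficiency must then respect the bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters: M non-thermal baths, N thermal baths
M, N = 2, 3
T_h = rng.uniform(2.0, 5.0, M)        # temperature parameters, non-thermal baths
T = rng.uniform(0.5, 5.0, N)          # thermal-bath temperatures
E_h_prime = rng.uniform(0.0, 1.0, M)  # passive energies from the non-thermal baths
E_h = E_h_prime + rng.uniform(0.0, 1.0, M)  # total input: passive energy + ergotropy

E_in_th = rng.uniform(0.0, 1.0, N - 1)  # energising thermal baths
# Choose the energy dumped into the last thermal bath such that the entropy
# condition (sum of E'/T over all baths <= 0) is exactly saturated:
S_in = np.sum(E_h_prime / T_h) + np.sum(E_in_th / T[:-1])
E_out = -S_in * T[-1]

T_min = min(T.min(), T_h.min())
T_max = max(T.max(), T_h.max())
E_in = np.sum(E_h) + np.sum(E_in_th)            # E_{d,in}
E_in_prime = np.sum(E_h_prime) + np.sum(E_in_th)  # E'_{d,in}

eta = 1 + E_out / E_in                           # multi-bath efficiency
eta_bound = 1 - (T_min / T_max) * (E_in_prime / E_in)
assert eta <= eta_bound + 1e-12                  # the bound holds
```

The bound is strict here, as noted above, because more than two distinct temperatures enter the entropy condition.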
The latter statement also holds when the first equality sign in Eq.~\eqref{eq_app_condition_multibath_minmaxtemp} is fulfilled, which for thermal baths corresponds to the second law and hence the reversibility condition. \par The efficiency bound~\eqref{eq_app_etamax_multibath} thus contains as a special case the fact that the efficiency of multi-bath heat engines (i.e., the case where all the baths are thermal such that $\mathcal{E}^\prime_\mathrm{d,in}\equiv\mathcal{E}_\mathrm{d,in}$) is always lower than the Carnot efficiency determined by the minimum and the maximum temperatures of the cycle, even if the cycle is reversible~\cite{schwablbook}. In this sense, our bound~\eqref{eq_etamax_gen} is universal. \par The above considerations hold for the case $\mathcal{E}_{\mathrm{d,h},i}^\prime\geq0$. As discussed in Appendix~\ref{app_efficiency} for the two-bath situation, if $\mathcal{E}_\mathrm{d,h}^\prime<0$, the two-bath engine operates at efficiency $\eta=1$ [Eq.~\eqref{eq_app_eta_1}], which obviously cannot be surpassed by any engine powered by multiple thermal or non-thermal baths. \section{Expressions used in Figure~\ref{fig_efficiency_otto}}\label{app_figures} In Fig.~\ref{fig_efficiency_otto} we have used the energies \begin{subequations}\label{eq_energies_figure} \begin{align} \mathcal{E}_\mathrm{d,h}&=\hbar\omega_\mathrm{h} (\bar{n}_\mathrm{h}+\Delta\bar{n}_\mathrm{h}-\bar{n}_\mathrm{c})\label{eq_energies_figure_a}\\ \Delta E_\mathrm{pas,h}&=\hbar\omega_\mathrm{h} (\bar{n}_\mathrm{h}-\bar{n}_\mathrm{c})\label{eq_energies_figure_b}\\ \tilde{\mathcal{E}}_\mathrm{d}&=\Delta E_\mathrm{pas,h}|_\mathrm{d}-\hbar\omega_\mathrm{h}\Delta\bar{n}_\mathrm{c}.\label{eq_energies_figure_c} \end{align} \end{subequations} Here $\omega_\mathrm{c}$ ($\omega_\mathrm{h}$) is the oscillator frequency before (after) the compression stroke.
Furthermore, we have defined $\bar{n}_i=[\exp(\hbar\omega_i/[k_\mathrm{B} T_i])-1]^{-1}$ and $\Delta\bar{n}_i=(2\bar{n}_i+1)\sinh^2(r)$ for $i\in\{\mathrm{c},\mathrm{h}\}$, where $r$ denotes the squeezing parameter~\cite{breuerbook}. Using the energies~\eqref{eq_energies_figure}, the efficiency bounds $\eta_\Sigma$ [Eq.~\eqref{eq_eta_bound_spohn}] and $\eta_\mathrm{max}$ [Eq.~\eqref{eq_etamax_otto}] then evaluate to \begin{equation} \eta_\Sigma=1-\frac{T_\mathrm{c}}{T_\mathrm{h}}\frac{\bar{n}_\mathrm{h}-\bar{n}_\mathrm{c}-\Delta\bar{n}_\mathrm{c}}{\bar{n}_\mathrm{h}+\Delta\bar{n}_\mathrm{h}-\bar{n}_\mathrm{c}} \end{equation} and \begin{equation} \eta_\mathrm{max}=1-\frac{T_\mathrm{c}}{T_\mathrm{h}}\frac{\bar{n}_\mathrm{h}-\bar{n}_\mathrm{c}}{\bar{n}_\mathrm{h}+\Delta\bar{n}_\mathrm{h}-\bar{n}_\mathrm{c}}, \end{equation} respectively. Additionally, we have used the actual efficiency~\cite{niedenzu2016operation} \begin{equation} \eta=1-\frac{(\bar{n}_\mathrm{h}-\bar{n}_\mathrm{c})\omega_\mathrm{c}}{(\bar{n}_\mathrm{h}+\Delta\bar{n}_\mathrm{h}-\bar{n}_\mathrm{c})\omega_\mathrm{h}}, \end{equation} which is valid for $\mathcal{E}_\mathrm{d,c}\leq0$, i.e., $\bar{n}_\mathrm{c}\leq\bar{n}_\mathrm{h}$. For $\bar{n}_\mathrm{h}\leq\bar{n}_\mathrm{c}\leq\bar{n}_\mathrm{h}+\Delta\bar{n}_\mathrm{h}$ the efficiency evaluates to $\eta=1$. The machine acts as an engine for $\mathcal{E}_\mathrm{d,h}\geq0$, i.e., for $\bar{n}_\mathrm{h}+\Delta\bar{n}_\mathrm{h}\geq\bar{n}_\mathrm{c}$, which for the parameters of Fig.~\ref{fig_efficiency_otto} corresponds to $\omega_\mathrm{c}/\omega_\mathrm{h}\gtrsim 0.22$. 
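The closed-form expressions above are easily evaluated numerically. The sketch below uses assumed parameter values (not those of Fig.~\ref{fig_efficiency_otto}) and confirms, in the engine regime, the ordering $\eta_\mathrm{max}\leq\eta_\Sigma$ (which holds since $\Delta\bar{n}_\mathrm{c}\geq0$) as well as $\eta\leq\eta_\mathrm{max}$ for the chosen frequency ratios $\omega_\mathrm{c}/\omega_\mathrm{h}\geq T_\mathrm{c}/T_\mathrm{h}$; units are chosen such that $\hbar=k_\mathrm{B}=1$.

```python
import numpy as np

def nbar(omega, T):
    """Thermal occupation number (hbar = k_B = 1)."""
    return 1.0 / np.expm1(omega / T)

# Assumed illustrative parameters (not those of the figure)
Tc, Th, r = 0.5, 2.0, 0.5   # bath temperatures and squeezing parameter
omega_h = 1.0

for omega_c in np.linspace(0.3, 0.9, 7):
    n_c, n_h = nbar(omega_c, Tc), nbar(omega_h, Th)
    dn_c = (2 * n_c + 1) * np.sinh(r) ** 2
    dn_h = (2 * n_h + 1) * np.sinh(r) ** 2
    denom = n_h + dn_h - n_c
    if denom <= 0 or n_h <= n_c:      # outside the engine / E_dc <= 0 regime
        continue
    eta_sigma = 1 - (Tc / Th) * (n_h - n_c - dn_c) / denom
    eta_max = 1 - (Tc / Th) * (n_h - n_c) / denom
    eta = 1 - (n_h - n_c) * omega_c / (denom * omega_h)
    assert eta_max <= eta_sigma       # holds since dn_c >= 0
    assert eta <= eta_max             # holds here: omega_c/omega_h >= Tc/Th
```

For sufficiently strong squeezing, $\eta_\Sigma$ exceeds unity (the ``over-squeezed'' regime discussed in Appendix~\ref{app_efficiency}), whereas $\eta_\mathrm{max}$ and $\eta$ remain below one.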
\begin{thebibliography}{78} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{#1} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Carnot}(1824)}]{carnotbook} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Carnot}},\ }\href@noop {} {\emph {\bibinfo {title} {R\'eflexions sur la puissance motrice du feu et sur les machines propres \`a d\'evelopper cette puissance}}}\ (\bibinfo {publisher} {Bachelier},\ \bibinfo {address} {Paris},\ \bibinfo {year} {1824})\BibitemShut {NoStop} \bibitem 
[{\citenamefont {Schwabl}(2006)}]{schwablbook} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Schwabl}},\ }\href@noop {} {\emph {\bibinfo {title} {Statistical Mechanics}}},\ \bibinfo {edition} {2nd}\ ed.\ (\bibinfo {publisher} {Springer-Verlag},\ \bibinfo {address} {Berlin Heidelberg},\ \bibinfo {year} {2006})\BibitemShut {NoStop} \bibitem [{\citenamefont {Kondepudi}\ and\ \citenamefont {Prigogine}(2015)}]{kondepudibook} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Kondepudi}}\ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Prigogine}},\ }\href@noop {} {\emph {\bibinfo {title} {Modern Thermodynamics}}},\ \bibinfo {edition} {2nd}\ ed.\ (\bibinfo {publisher} {John Wiley \& Sons Ltd},\ \bibinfo {address} {Chichester},\ \bibinfo {year} {2015})\BibitemShut {NoStop} \bibitem [{\citenamefont {Clausius}(1865)}]{clausius1865verschiedene} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Clausius}},\ }\enquote {\bibinfo {title} {Ueber verschiedene f\"ur die Anwendung bequeme Formen der Hauptgleichungen der mechanischen W\"armetheorie},}\ \href {\doibase 10.1002/andp.18652010702} {\bibfield {journal} {\bibinfo {journal} {Ann. 
Phys.}\ }\textbf {\bibinfo {volume} {201}},\ \bibinfo {pages} {353} (\bibinfo {year} {1865})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Callen}(1985)}]{callenbook} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~B.}\ \bibnamefont {Callen}},\ }\href@noop {} {\emph {\bibinfo {title} {{Thermodynamics and an Introduction to Thermostatistics}}}},\ \bibinfo {edition} {2nd}\ ed.\ (\bibinfo {publisher} {John Wiley \& Sons, Inc.},\ \bibinfo {address} {New York},\ \bibinfo {year} {1985})\BibitemShut {NoStop} \bibitem [{\citenamefont {Scovil}\ and\ \citenamefont {Schulz-DuBois}(1959)}]{scovil1959three} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~E.~D.}\ \bibnamefont {Scovil}}\ and\ \bibinfo {author} {\bibfnamefont {E.~O.}\ \bibnamefont {Schulz-DuBois}},\ }\enquote {\bibinfo {title} {Three-Level Masers as Heat Engines},}\ \href {\doibase 10.1103/PhysRevLett.2.262} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {262} (\bibinfo {year} {1959})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pusz}\ and\ \citenamefont {Woronowicz}(1978)}]{pusz1978passive} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Pusz}}\ and\ \bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont {Woronowicz}},\ }\enquote {\bibinfo {title} {Passive states and KMS states for general quantum systems},}\ \href {\doibase 10.1007/BF01614224} {\bibfield {journal} {\bibinfo {journal} {Commun. Math. Phys.}\ }\textbf {\bibinfo {volume} {58}},\ \bibinfo {pages} {273} (\bibinfo {year} {1978})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lenard}(1978)}]{lenard1978thermodynamical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lenard}},\ }\enquote {\bibinfo {title} {Thermodynamical proof of the Gibbs formula for elementary quantum systems},}\ \href {\doibase 10.1007/BF01011769} {\bibfield {journal} {\bibinfo {journal} {J. Stat. 
Phys.}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo {pages} {575} (\bibinfo {year} {1978})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Alicki}(1979)}]{alicki1979quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Alicki}},\ }\enquote {\bibinfo {title} {The quantum open system as a model of the heat engine},}\ \href {\doibase 10.1088/0305-4470/12/5/007} {\bibfield {journal} {\bibinfo {journal} {J. Phys. A}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages} {L103} (\bibinfo {year} {1979})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Scully}\ \emph {et~al.}(2003)\citenamefont {Scully}, \citenamefont {Zubairy}, \citenamefont {Agarwal},\ and\ \citenamefont {Walther}}]{scully2003extracting} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~O.}\ \bibnamefont {Scully}}, \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Zubairy}}, \bibinfo {author} {\bibfnamefont {G.~S.}\ \bibnamefont {Agarwal}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Walther}},\ }\enquote {\bibinfo {title} {Extracting Work from a Single Heat Bath via Vanishing Quantum Coherence},}\ \href {\doibase 10.1126/science.1078955} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {299}},\ \bibinfo {pages} {862} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Allahverdyan}\ \emph {et~al.}(2004)\citenamefont {Allahverdyan}, \citenamefont {Balian},\ and\ \citenamefont {Nieuwenhuizen}}]{allahverdyan2004maximal} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~E.}\ \bibnamefont {Allahverdyan}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Balian}}, \ and\ \bibinfo {author} {\bibfnamefont {T.~M.}\ \bibnamefont {Nieuwenhuizen}},\ }\enquote {\bibinfo {title} {Maximal work extraction from finite quantum systems},}\ \href {\doibase 10.1209/epl/i2004-10101-2} {\bibfield {journal} {\bibinfo {journal} {EPL (Europhys. 
Lett.)}\ }\textbf {\bibinfo {volume} {67}},\ \bibinfo {pages} {565} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Erez}\ \emph {et~al.}(2008)\citenamefont {Erez}, \citenamefont {Gordon}, \citenamefont {Nest},\ and\ \citenamefont {Kurizki}}]{erez2008thermodynamic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Erez}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Gordon}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Nest}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kurizki}},\ }\enquote {\bibinfo {title} {Thermodynamic control by frequent quantum measurements},}\ \href {\doibase 10.1038/nature06873} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {452}},\ \bibinfo {pages} {724} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Del~Rio}\ \emph {et~al.}(2011)\citenamefont {Del~Rio}, \citenamefont {{\AA}berg}, \citenamefont {Renner}, \citenamefont {Dahlsten},\ and\ \citenamefont {Vedral}}]{delrio2011thermodynamic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Del~Rio}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {{\AA}berg}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Renner}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Dahlsten}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Vedral}},\ }\enquote {\bibinfo {title} {The thermodynamic meaning of negative entropy},}\ \href {\doibase 10.1038/nature10123} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {474}},\ \bibinfo {pages} {61} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Horodecki}\ and\ \citenamefont {Oppenheim}(2013)}]{horodecki2013fundamental} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Horodecki}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Oppenheim}},\ }\enquote {\bibinfo 
{title} {Fundamental limitations for quantum and nanoscale thermodynamics},}\ \href {\doibase 10.1038/ncomms3059} {\bibfield {journal} {\bibinfo {journal} {Nat. Commun.}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {2059} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Correa}\ \emph {et~al.}(2014)\citenamefont {Correa}, \citenamefont {Palao}, \citenamefont {Alonso},\ and\ \citenamefont {Adesso}}]{correa2014quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~A.}\ \bibnamefont {Correa}}, \bibinfo {author} {\bibfnamefont {J.~P.}\ \bibnamefont {Palao}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Alonso}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Adesso}},\ }\enquote {\bibinfo {title} {Quantum-enhanced absorption refrigerators},}\ \href {\doibase 10.1038/srep03949} {\bibfield {journal} {\bibinfo {journal} {Sci. Rep.}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {3949} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Skrzypczyk}\ \emph {et~al.}(2014)\citenamefont {Skrzypczyk}, \citenamefont {Short},\ and\ \citenamefont {Popescu}}]{skrzypczyk2014work} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Skrzypczyk}}, \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Short}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Popescu}},\ }\enquote {\bibinfo {title} {Work extraction and thermodynamics for individual quantum systems},}\ \href {\doibase 10.1038/ncomms5185} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Commun.}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {4185} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brand{\~a}o}\ \emph {et~al.}(2015)\citenamefont {Brand{\~a}o}, \citenamefont {Horodecki}, \citenamefont {Ng}, \citenamefont {Oppenheim},\ and\ \citenamefont {Wehner}}]{brandao2015second} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Brand{\~a}o}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Horodecki}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Ng}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Oppenheim}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Wehner}},\ }\enquote {\bibinfo {title} {The second laws of quantum thermodynamics},}\ \href {\doibase 10.1073/pnas.1411728112} {\bibfield {journal} {\bibinfo {journal} {Proc. Natl. Acad. Sci.}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages} {3275} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pekola}(2015)}]{pekola2015towards} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~P.}\ \bibnamefont {Pekola}},\ }\enquote {\bibinfo {title} {Towards quantum thermodynamics in electronic circuits},}\ \href {\doibase 10.1038/nphys3169} {\bibfield {journal} {\bibinfo {journal} {Nat. Phys.}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {118} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Uzdin}\ \emph {et~al.}(2015)\citenamefont {Uzdin}, \citenamefont {Levy},\ and\ \citenamefont {Kosloff}}]{uzdin2015equivalence} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Uzdin}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Levy}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kosloff}},\ }\enquote {\bibinfo {title} {Equivalence of Quantum Heat Machines, and Quantum-Thermodynamic Signatures},}\ \href {\doibase 10.1103/PhysRevX.5.031044} {\bibfield {journal} {\bibinfo {journal} {Phys. 
Rev. X}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {031044} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Campisi}\ and\ \citenamefont {Fazio}(2016)}]{campisi2016power} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Campisi}}\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Fazio}},\ }\enquote {\bibinfo {title} {The power of a critical heat engine},}\ \href {\doibase 10.1038/ncomms11895} {\bibfield {journal} {\bibinfo {journal} {Nat. Commun.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {11895} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ro{\ss}nagel}\ \emph {et~al.}(2016)\citenamefont {Ro{\ss}nagel}, \citenamefont {Dawkins}, \citenamefont {Tolazzi}, \citenamefont {Abah}, \citenamefont {Lutz}, \citenamefont {Schmidt-Kaler},\ and\ \citenamefont {Singer}}]{rossnagel2016single} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Ro{\ss}nagel}}, \bibinfo {author} {\bibfnamefont {S.~T.}\ \bibnamefont {Dawkins}}, \bibinfo {author} {\bibfnamefont {K.~N.}\ \bibnamefont {Tolazzi}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Abah}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Lutz}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Schmidt-Kaler}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Singer}},\ }\enquote {\bibinfo {title} {A single-atom heat engine},}\ \href {\doibase 10.1126/science.aad6320} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {352}},\ \bibinfo {pages} {325} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kosloff}(2013)}]{kosloff2013quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kosloff}},\ }\enquote {\bibinfo {title} {Quantum Thermodynamics: A Dynamical Viewpoint},}\ \href {\doibase 10.3390/e15062100} {\bibfield {journal} {\bibinfo {journal} {Entropy}\ }\textbf 
{\bibinfo {volume} {15}},\ \bibinfo {pages} {2100} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gelbwaser-Klimovsky}\ \emph {et~al.}(2015)\citenamefont {Gelbwaser-Klimovsky}, \citenamefont {Niedenzu},\ and\ \citenamefont {Kurizki}}]{gelbwaser2015thermodynamics} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Gelbwaser-Klimovsky}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Niedenzu}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kurizki}},\ }\enquote {\bibinfo {title} {Thermodynamics of Quantum Systems Under Dynamical Control},}\ \href {\doibase 10.1016/bs.aamop.2015.07.002} {\bibfield {journal} {\bibinfo {journal} {Adv. At. Mol. Opt. Phys.}\ }\textbf {\bibinfo {volume} {64}},\ \bibinfo {pages} {329} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Goold}\ \emph {et~al.}(2016)\citenamefont {Goold}, \citenamefont {Huber}, \citenamefont {Riera}, \citenamefont {del Rio},\ and\ \citenamefont {Skrzypczyk}}]{goold2016role} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Goold}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Huber}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Riera}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {del Rio}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Skrzypczyk}},\ }\enquote {\bibinfo {title} {The role of quantum information in thermodynamics---a topical review},}\ \href {\doibase 10.1088/1751-8113/49/14/143001} {\bibfield {journal} {\bibinfo {journal} {J. Phys. 
A}\ }\textbf {\bibinfo {volume} {49}},\ \bibinfo {pages} {143001} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vinjanampathy}\ and\ \citenamefont {Anders}(2016)}]{vinjanampathy2016quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Vinjanampathy}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Anders}},\ }\enquote {\bibinfo {title} {Quantum thermodynamics},}\ \href {\doibase 10.1080/00107514.2016.1201896} {\bibfield {journal} {\bibinfo {journal} {Contemp. Phys.}\ }\textbf {\bibinfo {volume} {57}},\ \bibinfo {pages} {1} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kosloff}\ and\ \citenamefont {Rezek}(2017)}]{kosloff2017quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kosloff}}\ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Rezek}},\ }\enquote {\bibinfo {title} {The Quantum Harmonic Otto Cycle},}\ \href {\doibase 10.3390/e19040136} {\bibfield {journal} {\bibinfo {journal} {Entropy}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo {pages} {136} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dillenschneider}\ and\ \citenamefont {Lutz}(2009)}]{dillenschneider2009energetics} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Dillenschneider}}\ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Lutz}},\ }\enquote {\bibinfo {title} {Energetics of quantum correlations},}\ \href {\doibase 10.1209/0295-5075/88/50003} {\bibfield {journal} {\bibinfo {journal} {EPL (Europhys. 
Lett.)}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo {pages} {50003} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ \emph {et~al.}(2012)\citenamefont {Huang}, \citenamefont {Wang},\ and\ \citenamefont {Yi}}]{huang2012effects} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.~L.}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Wang}}, \ and\ \bibinfo {author} {\bibfnamefont {X.~X.}\ \bibnamefont {Yi}},\ }\enquote {\bibinfo {title} {Effects of reservoir squeezing on quantum systems and work extraction},}\ \href {\doibase 10.1103/PhysRevE.86.051105} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo {pages} {051105} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Abah}\ and\ \citenamefont {Lutz}(2014)}]{abah2014efficiency} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Abah}}\ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Lutz}},\ }\enquote {\bibinfo {title} {Efficiency of heat engines coupled to nonequilibrium reservoirs},}\ \href {\doibase 10.1209/0295-5075/106/20001} {\bibfield {journal} {\bibinfo {journal} {EPL (Europhys. 
Lett.)}\ }\textbf {\bibinfo {volume} {106}},\ \bibinfo {pages} {20001} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ro\ss{}nagel}\ \emph {et~al.}(2014)\citenamefont {Ro\ss{}nagel}, \citenamefont {Abah}, \citenamefont {Schmidt-Kaler}, \citenamefont {Singer},\ and\ \citenamefont {Lutz}}]{rossnagel2014nanoscale} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Ro\ss{}nagel}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Abah}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Schmidt-Kaler}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Singer}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Lutz}},\ }\enquote {\bibinfo {title} {Nanoscale Heat Engine Beyond the Carnot Limit},}\ \href {\doibase 10.1103/PhysRevLett.112.030602} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages} {030602} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hardal}\ and\ \citenamefont {M{\"u}stecapl{\i}o{\u{g}}lu}(2015)}]{hardal2015superradiant} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~{\"U}.~C.}\ \bibnamefont {Hardal}}\ and\ \bibinfo {author} {\bibfnamefont {{\"O}.~E.}\ \bibnamefont {M{\"u}stecapl{\i}o{\u{g}}lu}},\ }\enquote {\bibinfo {title} {Superradiant Quantum Heat Engine},}\ \href {\doibase 10.1038/srep12953} {\bibfield {journal} {\bibinfo {journal} {Sci. 
Rep.}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {12953} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Niedenzu}\ \emph {et~al.}(2016)\citenamefont {Niedenzu}, \citenamefont {Gelbwaser-Klimovsky}, \citenamefont {Kofman},\ and\ \citenamefont {Kurizki}}]{niedenzu2016operation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Niedenzu}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Gelbwaser-Klimovsky}}, \bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont {Kofman}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kurizki}},\ }\enquote {\bibinfo {title} {On the operation of machines powered by quantum non-thermal baths},}\ \href {\doibase 10.1088/1367-2630/18/8/083012} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages} {083012} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Manzano}\ \emph {et~al.}(2016)\citenamefont {Manzano}, \citenamefont {Galve}, \citenamefont {Zambrini},\ and\ \citenamefont {Parrondo}}]{manzano2016entropy} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Manzano}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Galve}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Zambrini}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~M.~R.}\ \bibnamefont {Parrondo}},\ }\enquote {\bibinfo {title} {Entropy production and thermodynamic power of the squeezed thermal reservoir},}\ \href {\doibase 10.1103/PhysRevE.93.052120} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
E}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {052120} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Klaers}\ \emph {et~al.}(2017)\citenamefont {Klaers}, \citenamefont {Faelt}, \citenamefont {Imamoglu},\ and\ \citenamefont {Togan}}]{klaers2017squeezed} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Klaers}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Faelt}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Imamoglu}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Togan}},\ }\enquote {\bibinfo {title} {Squeezed Thermal Reservoirs as a Resource for a Nanomechanical Engine beyond the Carnot Limit},}\ \href {\doibase 10.1103/PhysRevX.7.031044} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {031044} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Agarwalla}\ \emph {et~al.}(2017)\citenamefont {Agarwalla}, \citenamefont {Jiang},\ and\ \citenamefont {Segal}}]{agarwalla2017quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~K.}\ \bibnamefont {Agarwalla}}, \bibinfo {author} {\bibfnamefont {J.-H.}\ \bibnamefont {Jiang}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Segal}},\ }\enquote {\bibinfo {title} {Quantum efficiency bound for continuous heat engines coupled to noncanonical reservoirs},}\ \href {\doibase 10.1103/PhysRevB.96.104304} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
B}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo {pages} {104304} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Da{\u{g}}}\ \emph {et~al.}(2016)\citenamefont {Da{\u{g}}}, \citenamefont {Niedenzu}, \citenamefont {M{\"u}stecapl{\i}o{\u{g}}lu},\ and\ \citenamefont {Kurizki}}]{dag2016multiatom} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~B.}\ \bibnamefont {Da{\u{g}}}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Niedenzu}}, \bibinfo {author} {\bibfnamefont {{\"O}.~E.}\ \bibnamefont {M{\"u}stecapl{\i}o{\u{g}}lu}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kurizki}},\ }\enquote {\bibinfo {title} {Multiatom Quantum Coherences in Micromasers as Fuel for Thermal and Nonthermal Machines},}\ \href {\doibase 10.3390/e18070244} {\bibfield {journal} {\bibinfo {journal} {Entropy}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages} {244} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Alicki}\ \emph {et~al.}(2004)\citenamefont {Alicki}, \citenamefont {Horodecki}, \citenamefont {Horodecki},\ and\ \citenamefont {Horodecki}}]{alicki2004thermodynamics} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Alicki}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Horodecki}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Horodecki}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Horodecki}},\ }\enquote {\bibinfo {title} {Thermodynamics of Quantum Information Systems — Hamiltonian Description},}\ \href {\doibase 10.1023/B:OPSY.0000047566.72717.71} {\bibfield {journal} {\bibinfo {journal} {Open Syst. Inf. 
Dyn.}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {205} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Boukobza}\ and\ \citenamefont {Tannor}(2007)}]{boukobza2007three} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Boukobza}}\ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Tannor}},\ }\enquote {\bibinfo {title} {Three-Level Systems as Amplifiers and Attenuators: A Thermodynamic Analysis},}\ \href {\doibase 10.1103/PhysRevLett.98.240601} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {240601} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Parrondo}\ \emph {et~al.}(2009)\citenamefont {Parrondo}, \citenamefont {Van~den Broeck},\ and\ \citenamefont {Kawai}}]{parrondo2009entropy} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.~R.}\ \bibnamefont {Parrondo}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Van~den Broeck}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kawai}},\ }\enquote {\bibinfo {title} {Entropy production and the arrow of time},}\ \href {\doibase 10.1088/1367-2630/11/7/073008} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {073008} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Deffner}\ and\ \citenamefont {Lutz}(2011)}]{deffner2011nonequilibrium} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Deffner}}\ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Lutz}},\ }\enquote {\bibinfo {title} {Nonequilibrium Entropy Production for Open Quantum Systems},}\ \href {\doibase 10.1103/PhysRevLett.107.140404} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev.
Lett.}\ }\textbf {\bibinfo {volume} {107}},\ \bibinfo {pages} {140404} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Boukobza}\ and\ \citenamefont {Ritsch}(2013)}]{boukobza2013breaking} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Boukobza}}\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Ritsch}},\ }\enquote {\bibinfo {title} {Breaking the Carnot limit without violating the second law: A thermodynamic analysis of off-resonant quantum light generation},}\ \href {\doibase 10.1103/PhysRevA.87.063845} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {063845} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sagawa}(2013)}]{sagawa2013second} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Sagawa}},\ }in\ \href {\doibase 10.1142/9789814425193_0003} {{\selectlanguage {English}\emph {\bibinfo {booktitle} {Lectures on Quantum Computing, Thermodynamics and Statistical Physics}}}},\ \bibinfo {editor} {edited by\ \bibinfo {editor} {\bibfnamefont {M.}~\bibnamefont {Nakahara}}\ and\ \bibinfo {editor} {\bibfnamefont {S.}~\bibnamefont {Tanaka}}}\ (\bibinfo {publisher} {World Scientific},\ \bibinfo {address} {Singapore},\ \bibinfo {year} {2013})\ pp.\ \bibinfo {pages} {125--190}\BibitemShut {NoStop} \bibitem [{\citenamefont {Argentieri}\ \emph {et~al.}(2014)\citenamefont {Argentieri}, \citenamefont {Benatti}, \citenamefont {Floreanini},\ and\ \citenamefont {Pezzutto}}]{argentieri2014violation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Argentieri}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Benatti}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Floreanini}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Pezzutto}},\ }\enquote {\bibinfo {title} {Violations of the second law of thermodynamics by a non-completely 
positive dynamics},}\ \href {\doibase 10.1209/0295-5075/107/50007} {\bibfield {journal} {\bibinfo {journal} {EPL (Europhys. Lett.)}\ }\textbf {\bibinfo {volume} {107}},\ \bibinfo {pages} {50007} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Binder}\ \emph {et~al.}(2015{\natexlab{a}})\citenamefont {Binder}, \citenamefont {Vinjanampathy}, \citenamefont {Modi},\ and\ \citenamefont {Goold}}]{binder2015quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Binder}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Vinjanampathy}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Modi}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Goold}},\ }\enquote {\bibinfo {title} {Quantum thermodynamics of general quantum processes},}\ \href {\doibase 10.1103/PhysRevE.91.032119} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {032119} (\bibinfo {year} {2015}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brandner}\ and\ \citenamefont {Seifert}(2016)}]{brandner2016periodic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Brandner}}\ and\ \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Seifert}},\ }\enquote {\bibinfo {title} {Periodic thermodynamics of open quantum systems},}\ \href {\doibase 10.1103/PhysRevE.93.062134} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
E}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {062134} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Breuer}\ and\ \citenamefont {Petruccione}(2002)}]{breuerbook} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-P.}\ \bibnamefont {Breuer}}\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Petruccione}},\ }\href@noop {} {\emph {\bibinfo {title} {The Theory of Open Quantum Systems}}}\ (\bibinfo {publisher} {Oxford University Press},\ \bibinfo {year} {2002})\BibitemShut {NoStop} \bibitem [{\citenamefont {Spohn}(1978)}]{spohn1978entropy} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Spohn}},\ }\enquote {\bibinfo {title} {Entropy production for quantum dynamical semigroups},}\ \href {\doibase 10.1063/1.523789} {\bibfield {journal} {\bibinfo {journal} {J. Math. Phys.}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo {pages} {1227} (\bibinfo {year} {1978})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Anders}\ and\ \citenamefont {Giovannetti}(2013)}]{anders2013thermodynamics} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Anders}}\ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Giovannetti}},\ }\enquote {\bibinfo {title} {Thermodynamics of discrete quantum processes},}\ \href {\doibase 10.1088/1367-2630/15/3/033022} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys.}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {033022} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Alicki}\ and\ \citenamefont {Fannes}(2013)}]{alicki2013entanglement} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Alicki}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fannes}},\ }\enquote {\bibinfo {title} {Entanglement boost for extractable work from ensembles of quantum batteries},}\ \href {\doibase 10.1103/PhysRevE.87.042123} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {042123} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gelbwaser-Klimovsky}\ \emph {et~al.}(2013{\natexlab{a}})\citenamefont {Gelbwaser-Klimovsky}, \citenamefont {Alicki},\ and\ \citenamefont {Kurizki}}]{gelbwaser2013work} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Gelbwaser-Klimovsky}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Alicki}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kurizki}},\ }\enquote {\bibinfo {title} {Work and energy gain of heat-pumped quantized amplifiers},}\ \href {\doibase 10.1209/0295-5075/103/60005} {\bibfield {journal} {\bibinfo {journal} {EPL (Europhys. 
Lett.)}\ }\textbf {\bibinfo {volume} {103}},\ \bibinfo {pages} {60005} (\bibinfo {year} {2013}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hovhannisyan}\ \emph {et~al.}(2013)\citenamefont {Hovhannisyan}, \citenamefont {Perarnau-Llobet}, \citenamefont {Huber},\ and\ \citenamefont {Ac\'{\i}n}}]{hovhannisyan2013entanglement} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~V.}\ \bibnamefont {Hovhannisyan}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Perarnau-Llobet}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Huber}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ac\'{\i}n}},\ }\enquote {\bibinfo {title} {Entanglement Generation is Not Necessary for Optimal Work Extraction},}\ \href {\doibase 10.1103/PhysRevLett.111.240401} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {111}},\ \bibinfo {pages} {240401} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Binder}\ \emph {et~al.}(2015{\natexlab{b}})\citenamefont {Binder}, \citenamefont {Vinjanampathy}, \citenamefont {Modi},\ and\ \citenamefont {Goold}}]{binder2015quantacell} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~C.}\ \bibnamefont {Binder}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Vinjanampathy}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Modi}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Goold}},\ }\enquote {\bibinfo {title} {Quantacell: powerful charging of quantum batteries},}\ \href {\doibase 10.1088/1367-2630/17/7/075015} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys.}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {075015} (\bibinfo {year} {2015}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Perarnau-Llobet}\ \emph {et~al.}(2015)\citenamefont {Perarnau-Llobet}, \citenamefont {Hovhannisyan}, \citenamefont {Huber}, \citenamefont {Skrzypczyk}, \citenamefont {Brunner},\ and\ \citenamefont {Ac\'{\i}n}}]{perarnau2015extractable} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Perarnau-Llobet}}, \bibinfo {author} {\bibfnamefont {K.~V.}\ \bibnamefont {Hovhannisyan}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Huber}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Skrzypczyk}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Brunner}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ac\'{\i}n}},\ }\enquote {\bibinfo {title} {Extractable Work from Correlations},}\ \href {\doibase 10.1103/PhysRevX.5.041011} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {041011} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Skrzypczyk}\ \emph {et~al.}(2015)\citenamefont {Skrzypczyk}, \citenamefont {Silva},\ and\ \citenamefont {Brunner}}]{skrzypczyk2015passivity} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Skrzypczyk}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Silva}}, \ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Brunner}},\ }\enquote {\bibinfo {title} {Passivity, complete passivity, and virtual temperatures},}\ \href {\doibase 10.1103/PhysRevE.91.052133} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
E}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {052133} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brown}\ \emph {et~al.}(2016)\citenamefont {Brown}, \citenamefont {Friis},\ and\ \citenamefont {Huber}}]{brown2016passivity} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~G.}\ \bibnamefont {Brown}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Friis}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Huber}},\ }\enquote {\bibinfo {title} {Passivity and practical work extraction using Gaussian operations},}\ \href {\doibase 10.1088/1367-2630/18/11/113028} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages} {113028} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {De~Palma}\ \emph {et~al.}(2016)\citenamefont {De~Palma}, \citenamefont {Mari}, \citenamefont {Lloyd},\ and\ \citenamefont {Giovannetti}}]{depalma2016passive} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {De~Palma}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mari}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Giovannetti}},\ }\enquote {\bibinfo {title} {Passive states as optimal inputs for single-jump lossy quantum channels},}\ \href {\doibase 10.1103/PhysRevA.93.062328} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {062328} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bruschi}(2017)}]{bruschi2017gravitational} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont {Bruschi}},\ }\enquote {\bibinfo {title} {On the gravitational nature of energy},}\ \href {https://arxiv.org/abs/1701.00699} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:1701.00699}\ } (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Levy}\ \emph {et~al.}(2016)\citenamefont {Levy}, \citenamefont {Di\'osi},\ and\ \citenamefont {Kosloff}}]{levy2016quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Levy}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Di\'osi}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kosloff}},\ }\enquote {\bibinfo {title} {Quantum flywheel},}\ \href {\doibase 10.1103/PhysRevA.93.052119} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {052119} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mari}\ \emph {et~al.}(2014)\citenamefont {Mari}, \citenamefont {Giovannetti},\ and\ \citenamefont {Holevo}}]{mari2014quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mari}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Giovannetti}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont {Holevo}},\ }\enquote {\bibinfo {title} {Quantum state majorization at the output of bosonic Gaussian channels},}\ \href {\doibase 10.1038/ncomms4826} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Commun.}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {3826} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gardiner}\ and\ \citenamefont {Zoller}(2000)}]{gardinerbook} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~W.}\ \bibnamefont {Gardiner}}\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zoller}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum Noise}}},\ \bibinfo {edition} {2nd}\ ed.\ (\bibinfo {publisher} {Springer-Verlag},\ \bibinfo {address} {Berlin},\ \bibinfo {year} {2000})\BibitemShut {NoStop} \bibitem [{\citenamefont {Alipour}\ \emph {et~al.}(2016)\citenamefont {Alipour}, \citenamefont {Benatti}, \citenamefont {Bakhshinezhad}, \citenamefont {Afsary}, \citenamefont {Marcantoni},\ and\ \citenamefont {Rezakhani}}]{alipour2016correlations} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Alipour}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Benatti}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Bakhshinezhad}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Afsary}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Marcantoni}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~T.}\ \bibnamefont {Rezakhani}},\ }\enquote {\bibinfo {title} {Correlations in quantum thermodynamics: Heat, work, and entropy production},}\ \href {\doibase 10.1038/srep35568} {\bibfield {journal} {\bibinfo {journal} {Sci. Rep.}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {35568} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schl{\"o}gl}(1966)}]{schloegl1966zur} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Schl{\"o}gl}},\ }\enquote {\bibinfo {title} {Zur statistischen Theorie der Entropieproduktion in nicht abgeschlossenen Systemen},}\ \href {\doibase 10.1007/BF01362471} {\bibfield {journal} {\bibinfo {journal} {Z. 
Phys.}\ }\textbf {\bibinfo {volume} {191}},\ \bibinfo {pages} {81} (\bibinfo {year} {1966})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ekert}\ and\ \citenamefont {Knight}(1990)}]{ekert1990canonical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~K.}\ \bibnamefont {Ekert}}\ and\ \bibinfo {author} {\bibfnamefont {P.~L.}\ \bibnamefont {Knight}},\ }\enquote {\bibinfo {title} {Canonical transformation and decay into phase-sensitive reservoirs},}\ \href {\doibase 10.1103/PhysRevA.42.487} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {42}},\ \bibinfo {pages} {487} (\bibinfo {year} {1990})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gelbwaser-Klimovsky}\ \emph {et~al.}(2013{\natexlab{b}})\citenamefont {Gelbwaser-Klimovsky}, \citenamefont {Alicki},\ and\ \citenamefont {Kurizki}}]{gelbwaser2013minimal} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Gelbwaser-Klimovsky}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Alicki}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kurizki}},\ }\enquote {\bibinfo {title} {Minimal universal quantum heat machine},}\ \href {\doibase 10.1103/PhysRevE.87.012140} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
E}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {012140} (\bibinfo {year} {2013}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mukherjee}\ \emph {et~al.}(2016)\citenamefont {Mukherjee}, \citenamefont {Niedenzu}, \citenamefont {Kofman},\ and\ \citenamefont {Kurizki}}]{mukherjee2016speed} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Mukherjee}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Niedenzu}}, \bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont {Kofman}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kurizki}},\ }\enquote {\bibinfo {title} {Speed and efficiency limits of multilevel incoherent heat engines},}\ \href {\doibase 10.1103/PhysRevE.94.062109} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {062109} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Erker}\ \emph {et~al.}(2017)\citenamefont {Erker}, \citenamefont {Mitchison}, \citenamefont {Silva}, \citenamefont {Woods}, \citenamefont {Brunner},\ and\ \citenamefont {Huber}}]{erker2017autonomous} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Erker}}, \bibinfo {author} {\bibfnamefont {M.~T.}\ \bibnamefont {Mitchison}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Silva}}, \bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {Woods}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Brunner}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Huber}},\ }\enquote {\bibinfo {title} {Autonomous Quantum Clocks: Does Thermodynamics Limit Our Ability to Measure Time?}}\ \href {\doibase 10.1103/PhysRevX.7.031022} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
X}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {031022} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Woods}\ \emph {et~al.}(2016)\citenamefont {Woods}, \citenamefont {Silva},\ and\ \citenamefont {Oppenheim}}]{woods2016autonomous} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {Woods}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Silva}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Oppenheim}},\ }\enquote {\bibinfo {title} {Autonomous quantum machines and finite sized clocks},}\ \href {https://arxiv.org/abs/1607.04591} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:1607.04591}\ } (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Graham}(1987)}]{graham1987squeezing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Graham}},\ }\enquote {\bibinfo {title} {Squeezing and Frequency Changes in Harmonic Oscillations},}\ \href {\doibase 10.1080/09500348714550801} {\bibfield {journal} {\bibinfo {journal} {J. Mod. Opt.}\ }\textbf {\bibinfo {volume} {34}},\ \bibinfo {pages} {873} (\bibinfo {year} {1987})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Agarwal}\ and\ \citenamefont {Kumar}(1991)}]{agarwal1991exact} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~S.}\ \bibnamefont {Agarwal}}\ and\ \bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont {Kumar}},\ }\enquote {\bibinfo {title} {Exact quantum-statistical dynamics of an oscillator with time-dependent frequency and generation of nonclassical states},}\ \href {\doibase 10.1103/PhysRevLett.67.3665} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
\end{thebibliography}
\end{document}
\begin{document} \begin{abstract} Let $\mathcal{A}$ consist of analytic functions $f:\mathbb{D}\to\mathbb{C}$ satisfying $f(0)=f'(0)-1=0$. Let $\mathcal{S}^*_{Ne}$ be the recently introduced Ma-Minda type function family associated with the $2$-cusped kidney-shaped {\it nephroid} curve $\left((u-1)^2+v^2-\frac{4}{9}\right)^3-\frac{4 v^2}{3}=0$ given by \begin{align*} \mathcal{S}^*_{Ne}:= \left\{f\in\mathcal{A}:\frac{zf'(z)}{f(z)}\prec\varphi_{\scriptscriptstyle {Ne}}(z)=1+z-z^3/3\right\}. \end{align*} In this paper, we adopt a novel technique that uses the geometric properties of {\it hypergeometric functions} to determine sharp estimates on $\beta$ so that each of the differential subordinations \begin{align*} p(z)+\beta zp'(z)\prec \begin{cases} \sqrt{1+z};\\ 1+z;\\ e^z; \end{cases} \end{align*} implies $p(z)\prec\varphi_{\scriptscriptstyle{Ne}}(z)$, where $p(z)$ is analytic and satisfies $p(0)=1$. As applications, we establish conditions that are sufficient to deduce that $f\in\mathcal{A}$ is a member of $\mathcal{S}^*_{Ne}$. \end{abstract} \subjclass[2010] {30C45, 30C80, 33C05, 33C15} \keywords{Differential Subordination, Starlike functions, Hypergeometric Functions, Nephroid, Bernoulli Lemniscate} \title{Sufficiency for Nephroid Starlikeness using Hypergeometric Functions} \markboth{A. Swaminathan and Lateef Ahmad Wani}{Sufficiency for nephroid starlikeness using hypergeometric functions} \section{Introduction} Let $\mathcal{A}$ be the family of analytic functions $f$ defined on the open unit disk $\mathbb{D}:=\left\{z:|z|<1\right\}$ and satisfying $f(0)=f'(0)-1=0$. Let $\mathcal{S}\subset\mathcal{A}$ be the family of one-to-one ({\it univalent}) functions defined on $\mathbb{D}$. Further, let $\mathcal{S}^*\subset\mathcal{S}$ and $\mathcal{C}\subset\mathcal{S}$ be, respectively, the well-known classes of {\it starlike} and {\it convex} functions defined on $\mathbb{D}$.
We note that the functions in $\mathcal{S}^*$ are analytically characterized by the condition that for each $z\in\mathbb{D}$, the quantity $zf'(z)/f(z)$ lies in the interior of the half-plane $\mathrm{Re}(w)>0$. \par Let $f$ be analytic and $g$ be univalent. Then $f$ is {\it subordinate} to $g$, written as $f\prec{g}$, if, and only if, \begin{align*} f(0)=g(0) \quad \text{and} \quad f(\mathbb{D})\subset g(\mathbb{D}). \end{align*} \begin{definition} Let $\Lambda:\mathbb{C}^2\times\mathbb{D}\to\mathbb{C}$ be analytic, and let $u$ be univalent. The analytic function $p$ is said to satisfy the first-order differential subordination if \begin{align}\label{Def-Diff-Subord-Psi} \Lambda(p(z),\,zp'(z);\,z)\prec u(z), \qquad z\in\mathbb{D}. \end{align} \end{definition} If $q:\mathbb{D}\to\mathbb{C}$ is univalent and $p{\prec}q$ for all $p$ satisfying \eqref{Def-Diff-Subord-Psi}, then $q$ is said to be a dominant of the differential subordination \eqref{Def-Diff-Subord-Psi}. A dominant $\tilde{q}$ that satisfies $\tilde{q}\prec{q}$ for all dominants $q$ of \eqref{Def-Diff-Subord-Psi} is called the best dominant of \eqref{Def-Diff-Subord-Psi}. If $\tilde{q}_1$ and $\tilde{q}_2$ are two best dominants of \eqref{Def-Diff-Subord-Psi}, then $\tilde{q}_2(z)=\tilde{q}_1(e^{i\theta}z)$ for some $\theta\in\mathbb{R}$. For further details related to differential subordinations, we refer to the monograph of Miller and Mocanu \cite{Miller-Mocanu-Book-2000-Diff-Sub} (see also Bulboac\v{a} \cite{Bulboaca-2005-Diff-Sub-Book}). Due to its straightforward consequences, the theory of differential subordinations (a complex analogue of differential inequalities) developed by Miller and Mocanu \cite{Miller-Mocanu-Book-2000-Diff-Sub} is being extensively used in studying the analytic and geometric properties of univalent functions. 
For some recent works, see \cite{Antonio-Miller-2020-AMP, Ebadian-Bulboaca-Cho-RACSAM-2020, Ebadian-Adegani-Bulboaca-2020-JFS, Gavris-2020-Slovaca, S.Kumar-Goel-2020-RACSAM, Naz-Ravi-2020-MJM, Swami-Wani-2020-BKMS, HMS-DS-Caratheo-2020-JIA}. \par Following \cite{Ali-Jain-Ravi-2012-Radii-LemB-AMC, Sokol-J.Stankwz-1996-Lem-of-Ber, Mendiratta-2014-Shifted-Lemn-Bernoulli, Gandhi-Ravi-2017-Lune, Sharma-Raina-Sokol-2019-Ma-Minda-Crescent-Shaped, Mendiratta-Ravi-2015-Expo-BMMS, Sharma-Ravi-2016-Cardioid, Kumar-Ravi-2016-Starlike-Associated-Rational-Function, Kargar-2019-Booth-Lem-A.M.Physics, Cho-2019-Sine-BIMS, Khatter-Ravi-2019-Lem-Exp-Alpha-RACSAM, Goel-Siva-2019-Sigmoid-BMMS, Yunus-2018-Limacon}, the authors in \cite{Wani-Swami-Nephroid-Basic,Wani-Swami-Radius-Problems-Nephroid-RACSAM} introduced and studied the geometric properties of the function $\varphi_{\scriptscriptstyle{Ne}}(z):=1+z-z^3/3$ and the associated Ma-Minda type (see \cite{Ma-Minda-1992-A-unified-treatment, HMS-MM-2018-RACSAM, HMS-MM-2013-JCA}) function family $\mathcal{S}^*_{Ne}$ given by \begin{align*} \mathcal{S}^*_{Ne}:=\left\{f\in\mathcal{A}:\frac{zf'(z)}{f(z)}\prec\varphi_{\scriptscriptstyle {Ne}}(z)\right\}. \end{align*} It was proved by Wani and Swaminathan \cite{Wani-Swami-Nephroid-Basic} that the function $\varphi_{\scriptscriptstyle{Ne}}(z)$ maps the boundary $\partial\mathbb{D}$ of the unit disk $\mathbb{D}$ univalently onto the {\it nephroid}, a $2$--cusped kidney--shaped curve (see \Cref{Figure-Nephroid}), given by \begin{align}\label{Equation-of-Nephroid} \left((u-1)^2+v^2-\frac{4}{9}\right)^3-\frac{4 v^2}{3}=0. \end{align} Geometrically, a nephroid is the locus of a point fixed on the circumference of a circle of radius $\rho$ that rolls (without slipping) on the outside of a fixed circle having radius $2\rho$. 
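As a quick numerical sanity check of \eqref{Equation-of-Nephroid} (a sketch of our own; the helper name and grid size are illustrative), one can confirm that every boundary point $\varphi_{\scriptscriptstyle{Ne}}(e^{i\theta})=1+e^{i\theta}-\frac{1}{3}e^{3i\theta}$ satisfies the implicit nephroid equation:

```python
import cmath

def nephroid_residual(theta):
    # Boundary image point w = phi_Ne(e^{i*theta}) = 1 + z - z^3/3 with z = e^{i*theta}
    z = cmath.exp(1j * theta)
    w = 1 + z - z**3 / 3
    u, v = w.real, w.imag
    # Left-hand side of the implicit nephroid equation (should vanish on the curve)
    return ((u - 1)**2 + v**2 - 4/9)**3 - 4 * v**2 / 3

# The residual vanishes (up to rounding error) for every theta in [0, 2*pi)
max_res = max(abs(nephroid_residual(2 * cmath.pi * k / 720)) for k in range(720))
```

Indeed, since $v=\sin\theta-\frac{1}{3}\sin3\theta=\frac{4}{3}\sin^3\theta$ and $(u-1)^2+v^2-\frac{4}{9}=\frac{4}{3}\sin^2\theta$, both terms of \eqref{Equation-of-Nephroid} reduce to $\frac{64}{27}\sin^6\theta$.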
First studied by Huygens and Tschirnhausen in 1697, the nephroid curve was shown to be the catacaustic (envelope of rays emanating from a specified point) of a circle when the light source is at infinity. In 1692, Jakob Bernoulli showed that the nephroid is the catacaustic of a cardioid when the light source is at a cusp. However, the word nephroid was first used by Richard A. Proctor in 1878 in his book `{\it The Geometry of Cycloids}'. For further details related to the nephroid curve, we refer to \cite{Lockwood-Book-of-Curves-2007, Yates-1947-Handbook-Curves}. \begin{figure} \caption{The nephroid curve \eqref{Equation-of-Nephroid}.} \label{Figure-Nephroid} \end{figure} Thus $f\in\mathcal{S}^*_{Ne}$ if, and only if, all the values taken by the expression ${zf'(z)}/{f(z)}$ lie in the region $\Omega_{Ne}$ bounded by the nephroid curve \eqref{Equation-of-Nephroid}. Since $\mathcal{S}^*_{Ne}\subset\mathcal{S}^*$, we call $f\in\mathcal{S}^*_{Ne}$ a {\it nephroid starlike} function. \par In this paper, we employ differential subordination techniques and use the geometric properties of {\it Gaussian and confluent hypergeometric functions} to establish conditions which ensure that the analytic function $f\in\mathcal{A}$ is nephroid starlike in $\mathbb{D}$. More specifically, we determine the best possible bounds on the real parameter $\beta$ so that, for an analytic function $p$ satisfying $p(0)=1$, the following implication holds: \begin{align*} p(z)+\beta zp'(z)\prec \begin{cases} \sqrt{1+z};\\ 1+z;\\ e^z; \end{cases} \implies p(z)\prec\varphi_{\scriptscriptstyle{Ne}}(z). \end{align*} Replacing $p(z)$ by the expression ${zf'(z)}/{f(z)}$ for any $f\in\mathcal{A}$, we obtain conditions that are sufficient to imply that the function $f$ is nephroid starlike in $\mathbb{D}$.
\par Although differential subordination implication problems of a similar type have been studied for several other function families (see, for instance, \cite{Ahuja-Ravi-2018-App-Diff-Sub-Stud-Babe-Bolyai, Ali-Ravi-2012-Diff-Sub-LoB-TaiwanJM, Aktas-LemExpo-Coulomb-2020-SSMH, Kumar-Ravi-2013-Suff-Conditions-LoB-JIA, SushilKumar-Ravi-2018-Sub-Positive-RP-CAOT, Bohra-Ravi-2019-Diff-Subord-Hacet.J, Cho-Ravi-2018-Diff-Sub-Booth-Lem-TJM, Ebadian-Bulboaca-Cho-RACSAM-2020, Madaan-Ravi-2019-Filomat, NazAdiba-Ravi-2019-Exponential-TJM, Naz-Ravi-2020-MJM, S.Kumar-Goel-2020-RACSAM}), the approach of utilizing the properties of hypergeometric functions to arrive at the desired implication is entirely new. In addition, this paper verifies {\it analytically} certain crucial facts which some of the above-cited authors concluded geometrically without providing any analytic justification. Nevertheless, graphical illustrations are also provided in this manuscript to enhance the clarity of the results and to facilitate comparison with the existing related literature. \par In the sequel, it is always assumed that $z\in\mathbb{D}$ unless stated otherwise. \section{Preliminaries on Hypergeometric Functions} The following lemma related to differential subordination will be used in our discussion. \begin{lemma}[{Miller and Mocanu \cite[p. 132]{Miller-Mocanu-Book-2000-Diff-Sub}}]\label{Lemma-3.4h-p132-Miller-Mocanu} Let $q:\mathbb{D}\to\mathbb{C}$ be univalent, and let $\lambda$ and $\vartheta$ be analytic in a domain $\Omega\supseteq q(\mathbb{D})$ with $\lambda(\xi)\neq0$ whenever $\xi\in{q(\mathbb{D})}$. Define \begin{align*} \Theta(z):=zq'(z)\,\lambda(q(z)) \quad \text{ and } \quad h(z):=\vartheta(q(z))+\Theta(z), \qquad z\in\mathbb{D}. \end{align*} Suppose that either \begin{enumerate}[\rm(i)] \item $h(z)$ is convex, or \item $\Theta(z)$ is starlike.\\ In addition, assume that \item $\mathrm{Re}\left({zh'(z)}/{\Theta(z)}\right)>0$ in $\mathbb{D}$.
\end{enumerate} If $p\in\mathcal{H}$ with $p(0)=q(0)$, $p(\mathbb{D})\subset{\Omega}$ and \begin{align*} \vartheta(p(z))+zp'(z)\,\lambda(p(z))\prec\vartheta(q(z))+zq'(z)\,\lambda(q(z)), \qquad z\in\mathbb{D}, \end{align*} then $p\prec{q}$, and $q$ is the best dominant. \end{lemma} \begin{definition}[{\bf Gaussian hypergeometric function}] Let $a,b\in\mathbb{C}$ and $c\in\mathbb{C}\setminus\{0,-1,-2,\ldots\}$. Define \begin{align}\label{Gaussian-HG} F(a,b;c;z):={_2F_1}(a,b;c;z)=\sum_{j=0}^\infty\frac{(a)_j(b)_j}{j!\;(c)_j}\,z^j, \quad z\in\mathbb{D}, \end{align} where $(x)_j$ is the Pochhammer symbol given by \begin{align}\label{Pochhammer-Symbol} (x)_j= \begin{cases} 1, \quad j=0\\ x(x+1)(x+2)\cdots(x+j-1), \qquad j\in\{1,2,\ldots\}. \end{cases} \end{align} The analytic function $F(a,b;c;z)$ given in \eqref{Gaussian-HG} is called the Gaussian hypergeometric function. \end{definition} Prior to the use of hypergeometric functions in the proof of Bieberbach's conjecture by de Branges \cite{de-Branges-1985-Proof-BC}, there were few known connections between univalent function theory and the theory of special functions. This surprising use of hypergeometric functions gave function theorists renewed interest in studying the interrelatedness of these two fields and, as a result, a number of papers have been published in this direction. For instance, see \cite{Ahuja-2008-Connections-HGFs-AMC, Bohra-Ravi-CHGfs-Bessel-2017-AM, Kustner-2002-HGF-CMFT, Kustner-2007-HGF-JMAA, Miller-Mocanu-1990-HGFs-PAMS, Mostafa-2010-HGFs-CMA, Ruscheweyh-Singh-1986-HGFs-JMAA, Swaminathan-2004-HGFs-TamsuiOxf, Swaminathan-2006-HGFs-Conic-Regions, Swaminathan-2006-Inclusion-HGFs-JCAA, Swaminathan-2007-IncBeta-ITSF, Swaminathan-2010-HGFs-CMA, Wani-Swami-2020-BabesBolyai, HMS-HGF-2017-Math-Methods-Appl-Sci, HMS-HGF-2019-RACSAM, HMS-HGF-2020-JNCA, HMS-HGF-2021-Quaest}.
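For readers who wish to experiment with the series \eqref{Gaussian-HG}, the following minimal sketch (helper name and truncation level our own) sums it term by term via the Pochhammer recurrence $(x)_{j+1}=(x)_j(x+j)$:

```python
def hyp2f1(a, b, c, z, terms=200):
    """Truncated Gaussian hypergeometric series: sum_j (a)_j (b)_j z^j / (j! (c)_j)."""
    total, term = 0.0, 1.0          # term_j = (a)_j (b)_j z^j / (j! (c)_j); term_0 = 1
    for j in range(terms):
        total += term
        # (x)_{j+1} = (x)_j (x + j); the factorial picks up the factor (j + 1)
        term *= (a + j) * (b + j) * z / ((c + j) * (j + 1))
    return total

# When a = -1 the series terminates, giving the polynomial F(-1, b; c; z) = 1 - (b/c) z
val = hyp2f1(-1, 2.0, 3.0, 0.5)     # 1 - (2/3)(0.5) = 2/3
geo = hyp2f1(1.0, 1.0, 1.0, 0.5)    # geometric series: 1/(1 - 0.5) = 2
```

The terminating case $F(-1,b;c;z)=1-(b/c)z$ follows at once from $(-1)_0=1$, $(-1)_1=-1$, $(-1)_j=0$ for $j\geq2$, and is used later in the proof of \Cref{Thrm-GHGF2}.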
The function $F(a,b;c;z)$ defined in \eqref{Gaussian-HG} has many interesting properties, among which the following will be used to prove our results. For further details, we refer to Rainville \cite{Rainville-Special-Functions-Book}. \begin{enumerate}[(i)] \item $F(a,b;c;z)$ is a solution of the differential equation \begin{align*} z(1-z)w''(z)+(c-(a+b+1)z)w'(z)-abw(z)=0. \end{align*} \item $F(a,b;c;z)$ has a representation in terms of the gamma function \begin{align*} \Gamma(z)=\int_0^\infty t^{z-1}e^{-t}dt,\quad \mathrm{Re}(z)>0, \end{align*} as \begin{align}\label{Gamma-Function-Rep-Gaussian-HGF} F(a,b;c;z)=\frac{\Gamma(c)}{\Gamma(a)\Gamma(b)}\sum_{j=0}^\infty\frac{\Gamma(a+j)\Gamma(b+j)}{j!\;\Gamma(c+j)}\,z^j. \end{align} \item $F(a,b;c;z)$ satisfies \begin{align}\label{Derivative-Property-GHGF} F'(a,b;c;z)=\frac{ab}{c}F(a+1,b+1;c+1;z). \end{align} \item If $\mathrm{Re}\,c>\mathrm{Re}\,b>0$, then $F(a,b;c;z)$ has the following integral representation: \begin{align}\label{Integral-Rep-Gaussian-HGF} F(a,b;c;z)=\frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)}\int_{0}^1\frac{t^{b-1}(1-t)^{c-b-1}}{(1-tz)^{a}}\,dt, \quad z\in\mathbb{D}. \end{align} \end{enumerate} We hereby mention that the function $zF(a,b;c;z)$ given by \begin{align*} zF(a,b;c;z)=z{_2F_1}(a,b;c;z)=z+\sum_{j=2}^\infty\frac{(a)_{j-1}(b)_{j-1}}{(j-1)!\;(c)_{j-1}}\,z^j, \quad z\in\mathbb{D}, \end{align*} is known as the {\it normalized} or {\it shifted} Gaussian hypergeometric function. \subsection*{Order of Starlikeness} Let $f\in\mathcal{A}$. The order of starlikeness (with respect to zero) of the function $f(z)$ is defined to be the number $\sigma(f)$ given by \begin{align}\label{Order-ST} \sigma(f):= \inf_{z\in\mathbb{D}}\mathrm{Re}\left(\frac{zf'(z)}{f(z)}\right)\in[-\infty,1]. \end{align} In terms of $\sigma(f)$, we observe that $f\in\mathcal{A}$ is starlike if, and only if, $\sigma(f)\geq0$, or precisely, \begin{align*} f\in\mathcal{S}^* \iff \sigma(f) \geq 0.
\end{align*} Related to the order of starlikeness of the normalized Gaussian hypergeometric function $zF(a,b;c;z)$, K\"{u}stner \cite{Kustner-2002-HGF-CMFT, Kustner-2007-HGF-JMAA} proved the following result. \begin{lemma}[{K\"{u}stner \cite[Theorem 1 (a)]{Kustner-2007-HGF-JMAA}}]\label{Lemma-Kustner-2007-HGF-JMAA} If $0<a\leq b\leq c$, then \begin{align*} 1-\frac{ab}{b+c} \leq \sigma\left(zF(a,b;c;z)\right) \leq 1-\frac{ab}{2c}. \end{align*} \end{lemma} \begin{definition}[{\bf Confluent hypergeometric function}] Let $a\in\mathbb{C}$ and $c\in\mathbb{C}\setminus\{0,-1,-2,\ldots\}$. The confluent (or Kummer) hypergeometric function is defined as the convergent power series \begin{align}\label{Confluent-HG} \Phi(a;c;z):={_1F_1}(a;c;z)=\sum_{j=0}^\infty\frac{(a)_j}{(c)_j}\frac{z^j}{j!}, \quad z\in\mathbb{D}, \end{align} where $(x)_j$ is the Pochhammer symbol defined in \eqref{Pochhammer-Symbol}. \end{definition} The function $\Phi(a;c;z)$ is analytic in $\mathbb{C}$ and satisfies Kummer's differential equation \begin{align*} zw''(z)+(c-z)w'(z)-aw(z)=0. \end{align*} If we replace $b$ by $1/\varrho$ and $z$ by $z\varrho$ in the series \eqref{Gaussian-HG} and allow $\varrho\to0$, we obtain the series \eqref{Confluent-HG}. Below, we mention certain well-known properties of $\Phi(a;c;z)$ given by \eqref{Confluent-HG}. \begin{align}\label{Gamma-Function-Rep-CHGF} \Phi(a;c;z) =\frac{\Gamma(c)}{\Gamma(a)}\sum_{j=0}^\infty\frac{\Gamma(a+j)}{\Gamma(c+j)}\frac{z^j}{j!}, \end{align} \begin{align}\label{Derivative-Property-CHGF} \Phi'(a;c;z)=\frac{a}{c}\Phi(a+1;c+1;z), \end{align} and \begin{align}\label{Integral-Rep-CHGF} \Phi(a;c;z)= \frac{\Gamma(c)}{\Gamma(a)\Gamma(c-a)}\int_{0}^1{t^{a-1}(1-t)^{c-a-1}e^{tz}}\,dt, \quad (\mathrm{Re}\,c>\mathrm{Re}\,a>0).
\end{align} Further, the function \begin{align*} z\Phi(a;c;z)=z{_1F_1}(a;c;z)=z+\sum_{j=2}^\infty\frac{(a)_{j-1}}{(c)_{j-1}}\,\frac{z^j}{(j-1)!}, \quad z\in\mathbb{D}, \end{align*} is the {\it normalized} (shifted) confluent hypergeometric function. The following result related to the starlikeness of $z\Phi(a;c;z)$ will be used to prove \Cref{Thrm-CHGF1}. \begin{lemma}[{Miller and Mocanu \cite[p. 236]{Miller-Mocanu-Book-2000-Diff-Sub}}]\label{Lemma-Corollary-MilMocanu} If $c\geq 1+N(a-1)$, where \begin{align*} N(a)= \begin{cases} |a|+\frac{1}{2} \quad\text{if}\quad |a|\geq\frac{1}{3},\\ \frac{3a^2}{2}+\frac{2}{3} \quad\text{if}\quad |a|\leq\frac{1}{3}, \end{cases} \end{align*} then $z\Phi(a;c;z)$ is starlike in $\mathbb{D}$. \end{lemma} \section{Main Results} By making use of \Cref{Lemma-3.4h-p132-Miller-Mocanu}, \Cref{Lemma-Kustner-2007-HGF-JMAA}, and the above-mentioned properties of the Gaussian hypergeometric function, we prove \Cref{Thrm-LemB-Impl-Neph-GHGF} and \Cref{Thrm-GHGF2}. \begin{theorem}\label{Thrm-LemB-Impl-Neph-GHGF} Let $p:\mathbb{D}\to\mathbb{C}$ be analytic and satisfy $p(0)=1$. Let $\varphi_{\scriptscriptstyle{L}}(z):=\sqrt{1+z}$ and \begin{align*} p(z)+\beta zp'(z)\prec\varphi_{\scriptscriptstyle{L}}(z),\qquad \beta>0. \end{align*} If $\beta\geq\beta_L\approx0.158379$, then $p(z)\prec\varphi_{\scriptscriptstyle{Ne}}(z)$, where $\beta_L$ is the unique root of \begin{align}\label{Eq-BetaL} \frac{3}{\Gamma(-\frac{1}{2})}\sum_{j=0}^\infty\frac{\Gamma(-\frac{1}{2}+j)}{j!\,(1+j\beta)}-1 = 0. \end{align} The estimate on $\beta$ is best possible. \end{theorem} \begin{proof} An elementary analysis shows that the analytic function \begin{align}\label{Def-qbeta-Integral-Form-Lem} \Psi_\beta(z)=\frac{1}{\beta}\int_0^1 t^{\frac{1}{\beta}-1}(1+zt)^{1/2}\,dt, \qquad \beta>0, \end{align} is a solution of the first-order linear differential equation \begin{align*} \Psi_\beta(z)+\beta z\Psi'_\beta(z)=\varphi_{\scriptscriptstyle{L}}(z).
\end{align*} In view of the representation \eqref{Integral-Rep-Gaussian-HGF} of the Gaussian hypergeometric function, it is easy to see that the function $\Psi_\beta(z)$ given by \eqref{Def-qbeta-Integral-Form-Lem} has the form \begin{align}\label{qbeta-GHF-Form} \Psi_\beta(z)=F\left(-\frac{1}{2},\frac{1}{\beta};\frac{1}{\beta}+1;-z\right). \end{align} For brevity, we now split the proof into two steps.\\ {\bf Step I}. {\it In this step, we prove that $p(z)+\beta zp'(z)\prec\varphi_{\scriptscriptstyle{L}}(z)$ implies $p(z)\prec \Psi_\beta(z)$, $\beta>0$}. \par For $\xi\in\mathbb{C}$, define $\vartheta(\xi)=\xi$ and $\lambda(\xi)=\beta$ so that \begin{align*} \Theta(z)=z\Psi'_\beta(z)\lambda(\Psi_\beta(z))=\beta z\Psi'_\beta(z)= \beta zF'\left(-\frac{1}{2},\frac{1}{\beta};\frac{1}{\beta}+1;-z\right). \end{align*} This, on using the identity \eqref{Derivative-Property-GHGF}, gives \begin{align}\label{Def-qbeta-GHGF-Form-Lem} \Theta(z)=\frac{\beta}{2(1+\beta)} zF\left(\frac{1}{2},\frac{1}{\beta}+1;\frac{1}{\beta}+2;-z\right). \end{align} We prove that the function $\Theta(z)$ given by \eqref{Def-qbeta-GHGF-Form-Lem} is starlike in $\mathbb{D}$ by showing that $\sigma(\Theta)\geq0$, where $\sigma(\cdot)$ is defined in \eqref{Order-ST}. For the normalized hypergeometric function on the right side of \eqref{Def-qbeta-GHGF-Form-Lem}, we have $a=1/2,b=1/\beta+1$ and $c=1/\beta+2$, so that the condition $0<a\leq b\leq c$ easily holds. Therefore, by \Cref{Lemma-Kustner-2007-HGF-JMAA}, it follows that \begin{align*} \sigma\left(zF(a,b;c;z)\right) \geq 1-\frac{ab}{b+c} =1-\frac{1+\beta}{2\left(2+3\beta\right)} =\frac{3+5\beta}{2\left(2+3\beta\right)} >0 \qquad (\because\beta>0). \end{align*} This shows that the hypergeometric function $zF\left(\frac{1}{2},\frac{1}{\beta}+1;\frac{1}{\beta}+2;-z\right)$ is starlike in $\mathbb{D}$, thereby proving the starlikeness of $\Theta(z)$ defined in \eqref{Def-qbeta-GHGF-Form-Lem}. 
Since $\beta>0$ and $\Theta(z)$ is starlike, the function $$h(z)=\vartheta\left(\Psi_\beta(z)\right)+\Theta(z)=\Psi_\beta(z)+\Theta(z)$$ satisfies \begin{align*} \mathrm{Re}\left(\frac{zh'(z)}{\Theta(z)}\right)= \mathrm{Re}\left(\frac{1}{\beta}+\frac{z\Theta'(z)}{\Theta(z)}\right)>0. \end{align*} In view of \Cref{Lemma-3.4h-p132-Miller-Mocanu}, we conclude that the differential subordination $$p(z)+\beta zp'(z)\prec \Psi_\beta(z)+\beta z\Psi'_\beta(z)=\varphi_{\scriptscriptstyle{L}}(z)$$ implies the subordination $p(z)\prec \Psi_\beta(z)$, where $\Psi_\beta(z)$ is given by \eqref{Def-qbeta-Integral-Form-Lem} (or \eqref{qbeta-GHF-Form}). \par Now the desired subordination $p(z)\prec\varphi_{\scriptscriptstyle{Ne}}(z)$ will hold true if the subordination $\Psi_\beta(z)\prec\varphi_{\scriptscriptstyle{Ne}}(z)$ holds.\\ {\bf Step II}. {\it In this step, we prove that $\Psi_\beta(z)\prec\varphi_{\scriptscriptstyle{Ne}}(z)$ if, and only if, $\beta\geq\beta_L\approx0.158379$}. \par {\it Necessity.} Let $\Psi_\beta(z)\prec\varphi_{\scriptscriptstyle{Ne}}(z)$, $z\in\mathbb{D}$. Then \begin{align}\label{NS-DSI-GHGFN} \varphi_{\scriptscriptstyle {Ne}}(-1)<\Psi_\beta(-1)<\Psi_\beta(1)<\varphi_{\scriptscriptstyle {Ne}}(1). \end{align} On using the representation \eqref{qbeta-GHF-Form} and the identity \eqref{Gamma-Function-Rep-Gaussian-HGF}, the condition \eqref{NS-DSI-GHGFN} yields the two inequalities \begin{align*} \frac{1}{3}\leq F\left(-\frac{1}{2},\frac{1}{\beta};\frac{1}{\beta}+1;1\right)=\frac{1}{\Gamma(-\frac{1}{2})}\sum_{j=0}^\infty\frac{\Gamma(-\frac{1}{2}+j)}{j!\,(1+j\beta)} \end{align*} and \begin{align*} \frac{5}{3} \geq F\left(-\frac{1}{2},\frac{1}{\beta};\frac{1}{\beta}+1;-1\right)=\frac{1}{\Gamma(-\frac{1}{2})}\sum_{j=0}^\infty\frac{\Gamma(-\frac{1}{2}+j)}{j!\,(1+j\beta)}(-1)^j. 
\end{align*} Equivalently, \begin{align*} \tau(\beta):= \frac{1}{\Gamma(-\frac{1}{2})}\sum_{j=0}^\infty\frac{\Gamma(-\frac{1}{2}+j)}{j!\,(1+j\beta)}-\frac{1}{3} \geq 0 \end{align*} and \begin{align*} \delta(\beta):=\frac{5}{3} - \frac{1}{\Gamma(-\frac{1}{2})} \sum_{j=0}^\infty\frac{\Gamma(-\frac{1}{2}+j)}{j!\,(1+j\beta)}(-1)^j \geq 0. \end{align*} A computer-based numerical computation shows that, for $\beta\in(0,\infty)$, \begin{align*} \tau(\beta)\in\left(-\frac{1}{3},\frac{2}{3}\right) \quad \text{ and } \quad \delta(\beta)\in\left(\frac{5}{3}-\sqrt{2},\;\frac{2}{3}\right). \end{align*} That is, as $\beta$ varies from $0$ to $\infty$, $\delta(\beta)$ is positive, while $\tau(\beta)$ takes positive as well as negative values. Moreover, $$\tau'(\beta)=\frac{-1}{\Gamma(-\frac{1}{2})} \sum_{j=1}^\infty\frac{\Gamma(-\frac{1}{2}+j)}{(j-1)!\,(1+j\beta)^2}$$ is positive for each $\beta\in(0,\infty)$. This shows that $\tau(\beta)$ is strictly increasing on $(0,\infty)$. Therefore, both conditions $\tau(\beta)\geq0$ and $\delta(\beta)\geq0$ hold true for $\beta\geq\beta_L\approx0.158379$, where $\beta_L$ is the unique root of $\tau(\beta)$. See the plots of $\tau(\beta)$ and $\delta(\beta)$ in \Cref{Plots-TauDel-HGF-SRN}.
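The computer-based computation referred to above can be reproduced as follows (a sketch under our own truncation and bracketing choices; the coefficients $\Gamma(-\frac{1}{2}+j)/(j!\,\Gamma(-\frac{1}{2}))$ are generated by the recurrence $r_{j+1}=r_j\,(j-\frac{1}{2})/(j+1)$ with $r_0=1$):

```python
def tau(beta, terms=60_000):
    # tau(beta) = (1/Gamma(-1/2)) * sum_j Gamma(-1/2 + j)/(j! (1 + j*beta)) - 1/3
    total, r = 0.0, 1.0
    for j in range(terms):
        total += r / (1 + j * beta)
        r *= (j - 0.5) / (j + 1)          # r_{j+1} = r_j (j - 1/2)/(j + 1)
    return total - 1/3

def delta(beta, terms=60_000):
    # delta(beta) = 5/3 - (1/Gamma(-1/2)) * sum_j (-1)^j Gamma(-1/2 + j)/(j! (1 + j*beta))
    total, r, sign = 0.0, 1.0, 1.0
    for j in range(terms):
        total += sign * r / (1 + j * beta)
        r *= (j - 0.5) / (j + 1)
        sign = -sign
    return 5/3 - total

# Bisection for the unique root beta_L of the increasing function tau on (0, infinity)
lo, hi = 0.05, 0.5                        # tau(0.05) < 0 < tau(0.5)
for _ in range(40):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if tau(mid) < 0 else (lo, mid)
beta_L = (lo + hi) / 2                    # close to the value quoted above
```

The coefficients decay like $j^{-5/2}$, so the chosen truncation already fixes the root to several decimal places.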
\begin{figure} \caption{Plots of $\tau(\beta)$ and $\delta(\beta)$, $\beta>0$.} \label{Plots-TauDel-HGF-SRN} \end{figure} \par {\it Sufficiency.} Since the function $\varphi_{\scriptscriptstyle{Ne}}(z)$ is univalent in $\mathbb{D}$ and $\Psi_\beta(0)=\varphi_{\scriptscriptstyle{Ne}}(0)=1$, it is sufficient to prove that $\Psi_\beta(\mathbb{D})\subset\varphi_{\scriptscriptstyle{Ne}}(\mathbb{D})$ for $\beta\geq\beta_L$.\\ The square of the distance from the point (1,0) to the points on the nephroid curve \eqref{Equation-of-Nephroid} is \begin{align*} d_1(\theta):=\frac{16}{9}-\frac{4 \cos^2\theta}{3}, \quad 0\leq\theta<2\pi, \end{align*} and the square of the distance from (1,0) to the points on the curve \begin{align*} \Psi_\beta(e^{i\theta})=&F\left(-\frac{1}{2},\frac{1}{\beta};\frac{1}{\beta}+1;-e^{i\theta}\right)\\ =&\frac{1}{\Gamma(-\frac{1}{2})}\sum_{j=0}^\infty\frac{(-1)^j\Gamma(-\frac{1}{2}+j)}{j!\,(1+j\beta)}(\cos{j\theta}+i\sin{j\theta}), \quad 0\leq\theta<2\pi, \end{align*} is given by \begin{align*} d_2(\theta,\beta):=\left(\frac{1}{\Gamma\left(-\frac{1}{2}\right)} \sum_{j=0}^{\infty} C(j,\beta) \cos(j\theta)-1\right)^2 +\left(\frac{1}{\Gamma\left(-\frac{1}{2}\right)} \sum_{j=0}^{\infty} C(j,\beta) \sin(j\theta)\right)^2, \end{align*} where \begin{align*} C(j,\beta):=\frac{(-1)^j\Gamma(-\frac{1}{2}+j)}{j!\,(1+j\beta)}. \end{align*} Since the curves $\varphi_{\scriptscriptstyle{Ne}}(e^{i\theta})$ and $\Psi_\beta(e^{i\theta})$ are symmetric about the real axis, we may choose $\theta\in[0,\pi]$. 
Now, the difference of the squared distances from the point $(1,0)$ to the points on the boundary curves $\varphi_{\scriptscriptstyle{Ne}}(e^{i\theta})$ and $\Psi_\beta(e^{i\theta})$, respectively, is \begin{align*} d(\theta,\beta):=&d_1(\theta)-d_2(\theta,\beta)\\ =&\frac{16}{9}-\frac{4 \cos^2\theta}{3}- \left( \frac{\sum_{j=0}^{\infty}C(j,\beta)\cos(j\theta)}{\Gamma\left(-\frac{1}{2}\right)} -1\right)^2 -\left( \frac{\sum_{j=0}^{\infty}C(j,\beta)\sin(j\theta)}{\Gamma\left(-\frac{1}{2}\right)} \right)^2. \end{align*} A computer-based numerical computation shows that $d(\theta,\beta)\geq0$ for each $\theta\in[0,\pi]$ whenever $\beta\geq\beta_L\approx0.158379$, and that $d(\theta,\beta)<0$ for some $\theta\in(\pi-\epsilon,\pi)$ with $\epsilon>0$ arbitrarily small whenever $\beta<\beta_L$; see \Cref{Table-Numeric-Values-H}. This shows that the region bounded by the curve $\Psi_\beta(e^{i\theta})$ is completely contained in $\varphi_{\scriptscriptstyle{Ne}}(\mathbb{D})$ whenever $\beta\geq\beta_L$. Moreover, the estimate on $\beta$ is best possible as \begin{align*} d(\pi,\beta_L) =&\frac{4}{9}- \left(\frac{1}{\Gamma\left(-\frac{1}{2}\right)} \sum_{j=0}^{\infty} (-1)^jC(j,\beta_L)-1\right)^2\\ =&\frac{4}{9}- \left(\frac{1}{\Gamma\left(-\frac{1}{2}\right)} \sum_{j=0}^{\infty}\frac{\Gamma(-\frac{1}{2}+j)}{j!\,(1+j\beta_L)} -1\right)^2\\ =&\frac{4}{9}- \left(\left(\tau(\beta_L)+\frac{1}{3}\right)-1\right)^2\\ =&\frac{4}{9}- \left(\frac{1}{3}-1\right)^2 \qquad\qquad \left(\because \tau(\beta_L)=0\right)\\ =&0. \end{align*} See \Cref{Figure-HGT1-I} for a graphical illustration of the facts proved above and the sharpness of the bound $\beta_L$ for the containment $\Psi_\beta(\mathbb{D})\subset\varphi_{\scriptscriptstyle{Ne}}(\mathbb{D})$. This proves the sufficiency of $\beta\geq\beta_L$ for the subordination $\Psi_\beta\prec\varphi_{\scriptscriptstyle{Ne}}$.\\ The desired result now follows by combining the conclusions of Step I and Step II.
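The sign pattern of $d(\theta,\beta)$ can be checked directly with the sketch below (helper names ours; the series for $\Psi_\beta(e^{i\theta})$ is truncated at $5000$ terms, which suffices since its coefficients decay like $j^{-5/2}$):

```python
import cmath
import math

def psi_boundary(theta, beta, terms=5000):
    # Psi_beta(e^{i*theta}) = sum_j r_j (-e^{i*theta})^j / (1 + j*beta),
    # where r_j = Gamma(-1/2 + j)/(j! Gamma(-1/2)), r_0 = 1
    zeta = -cmath.exp(1j * theta)
    total, r, w = 0j, 1.0, 1 + 0j
    for j in range(terms):
        total += r * w / (1 + j * beta)
        r *= (j - 0.5) / (j + 1)
        w *= zeta
    return total

def d(theta, beta):
    d1 = 16/9 - (4/3) * math.cos(theta)**2        # squared distance to the nephroid
    d2 = abs(psi_boundary(theta, beta) - 1)**2    # squared distance to Psi_beta(e^{i*theta})
    return d1 - d2

# d(theta, beta) stays nonnegative on [0, pi] for a beta above beta_L ...
margin = min(d(math.pi * k / 100, 0.2) for k in range(101))
# ... but dips below zero near theta = pi once beta < beta_L
dip = d(math.pi, 0.15)
```

At $\theta=\pi$ the series is real and the computation reduces to $d(\pi,\beta)=\frac{4}{3}\tau(\beta)-\tau(\beta)^2$, which is negative exactly when $\tau(\beta)<0$, i.e. when $\beta<\beta_L$.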
\end{proof} \begin{center} \begin{table}[H] \centering \begin{tabular}{ |c|c|c| } \hline {\boldmath{$\theta$}} & {\boldmath{$d(\theta,\beta)$, $\beta=\beta_L\approx0.158379$}} & {\boldmath{$d(\theta,\beta)$, $\beta=0.1583737<\beta_L$}}\\ \hline $3$ & $0.0893992$ & $0.0893943$\\ \hline $3.14$ & $0.000230464$ & $0.000223834$\\ \hline $3.141$ & $0.0000596419$ & $0.0000530052$\\ \hline $3.1415$ & $9.83806\times10^{-6}$ & $3.19942\times10^{-6}$\\ \hline $3.14159$ & $6.4166\times10^{-6}$ & $-2.22177\times10^{-7}$\\ \hline $3.141592$ & $6.40162\times10^{-6}$ & $-2.37156\times10^{-7}$\\ \hline $3.1415926$ & $6.39958\times10^{-6}$ & $-2.39199\times10^{-7}$\\ \hline $3.14159265$ & $6.39953\times10^{-6}$ & $-2.39247\times10^{-7}$\\ \hline $\pi$ & $0$ & $-2.39248\times10^{-7}$\\ \hline \end{tabular} \caption{Numerical values of $d(\theta,\beta)$ near $\theta=\pi$} \label{Table-Numeric-Values-H} \end{table} \end{center} \begin{figure} \caption{Sharpness of the bound $\beta_L$ for the containment $\Psi_\beta(\mathbb{D})\subset\varphi_{\scriptscriptstyle{Ne}}(\mathbb{D})$.} \label{Figure-HGT1-I} \end{figure} The following sufficient condition for the function class $\mathcal{S}^*_{Ne}$ is a direct application of \Cref{Thrm-LemB-Impl-Neph-GHGF} obtained by setting $p(z)=zf'(z)/f(z)$. \begin{corollary} Let $f\in\mathcal{A}$, and let \begin{align}\label{Definition-J} \mathcal{G}(z):=1-\frac{zf'(z)}{f(z)}+\frac{zf''(z)}{f'(z)},\quad z\in\mathbb{D}. \end{align} If the function $f(z)$ satisfies \begin{align*} \left(1+\beta\,\mathcal{G}(z)\right)\frac{zf'(z)}{f(z)}\prec \varphi_{\scriptscriptstyle{L}}(z), \end{align*} then $f\in\mathcal{S}^*_{Ne}$ for $\beta\geq\beta_L$, where $\beta_L$ is the unique root of \eqref{Eq-BetaL}. \end{corollary} \begin{theorem}\label{Thrm-GHGF2} Let the analytic function $p$ satisfy $p(0)=1$, and let \begin{align*} p(z)+\beta zp'(z)\prec 1+z, \qquad \beta>0. \end{align*} Then $p\prec\varphi_{\scriptscriptstyle{Ne}}$ whenever $\beta\geq1/2$, and this estimate on $\beta$ is sharp.
\end{theorem} \begin{proof} Consider the first-order linear differential equation $$q_\beta(z)+\beta zq'_\beta(z)=1+z$$ whose analytic solution is the function $q_\beta(z)$ given by \begin{align*} q_\beta(z)=\frac{1}{\beta}\int_0^1 t^{\frac{1}{\beta}-1}(1+zt)\,dt=F\left(-1,\frac{1}{\beta};\frac{1}{\beta}+1;-z\right)=1+\frac{z}{1+\beta},\quad z\in\mathbb{D}. \end{align*} Defining the functions $\vartheta$ and $\lambda$ as in \Cref{Thrm-LemB-Impl-Neph-GHGF}, we obtain \begin{align*} \Theta(z)=zq'_\beta(z)\lambda(q_\beta(z))=\beta zq'_\beta(z) =\frac{\beta}{1+\beta} zF\left(0,\frac{1}{\beta}+1;\frac{1}{\beta}+2;-z\right) =\frac{\beta}{1+\beta} z, \end{align*} which is clearly a starlike function in $\mathbb{D}$. Also, $$h(z)=\vartheta\left(q_\beta(z)\right)+\Theta(z)=q_\beta(z)+\Theta(z)$$ satisfies $\mathrm{Re}\left({zh'(z)}/{\Theta(z)}\right)>0$ in $\mathbb{D}$. Therefore, it follows from \Cref{Lemma-3.4h-p132-Miller-Mocanu} that $$p(z)+\beta zp'(z)\prec 1+z=q_\beta(z)+\beta zq'_\beta(z)$$ implies $p\prec {q_\beta}$.
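Since $F(-1,\frac{1}{\beta};\frac{1}{\beta}+1;-z)$ terminates, $q_\beta(z)=1+\frac{z}{1+\beta}$, so $q_\beta(\mathbb{D})$ is the open disk of radius $\frac{1}{1+\beta}$ centred at $1$; the containment in $\varphi_{\scriptscriptstyle{Ne}}(\mathbb{D})$ thus amounts to comparing this radius with the minimal distance from $1$ to the nephroid. A small numerical sketch (helper names ours) illustrates this:

```python
import math

def q_radius(beta):
    # q_beta(D) is the open disk centred at 1 with radius 1/(1 + beta)
    return 1 / (1 + beta)

# Minimal squared distance from (1, 0) to the nephroid:
# min over theta of d1(theta) = 16/9 - (4/3) cos^2(theta), attained at theta = 0 and pi
min_d1 = min(16/9 - (4/3) * math.cos(math.pi * k / 1000)**2 for k in range(1001))

# Containment, hence q_beta subordinate to phi_Ne, holds exactly when
# 1/(1 + beta) <= sqrt(min_d1) = 2/3, i.e. beta >= 1/2
ok_at_half = q_radius(0.5) <= math.sqrt(min_d1) + 1e-12   # beta = 1/2: boundary case
fails_below = q_radius(0.49) > math.sqrt(min_d1)          # beta < 1/2: disk pokes out
```

This disk picture also explains the sharpness: at $\beta=1/2$ the circle $|w-1|=2/3$ touches the nephroid at the real points $1/3$ and $5/3$.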
\par To get $p\prec\varphi_{\scriptscriptstyle{Ne}}$, it now remains to prove that $q_\beta\prec\varphi_{\scriptscriptstyle{Ne}}$.\\ If $q_\beta\prec\varphi_{\scriptscriptstyle{Ne}}$, then $\varphi_{\scriptscriptstyle {Ne}}(-1)<q_\beta(-1)<q_\beta(1)<\varphi_{\scriptscriptstyle {Ne}}(1)$, which is equivalent to \begin{align*} F\left(-1,\frac{1}{\beta};\frac{1}{\beta}+1;1\right)-\frac{1}{3}\geq0, \; \quad \text{i.e.,} \quad \beta\geq \frac{1}{2}, \end{align*} and \begin{align*} \frac{5}{3} - F\left(-1,\frac{1}{\beta};\frac{1}{\beta}+1;-1\right)\geq0, \; \quad \text{i.e.,} \quad \beta\geq \frac{1}{2}. \end{align*} Therefore the necessary condition for $q_\beta\prec\varphi_{\scriptscriptstyle{Ne}}$ is that $\beta\geq1/2$.\\ As in \Cref{Thrm-LemB-Impl-Neph-GHGF}, it can be easily verified that whenever $\beta\geq1/2$, the squared distance $d_1(\theta)$ from $(1,0)$ to the points on the nephroid curve $\varphi_{\scriptscriptstyle{Ne}}(e^{i\theta})$ is always greater than or equal to the squared distance $d_2(\theta,\beta)$ from $(1,0)$ to the points on the curve $q_\beta(e^{i\theta})$, $0\leq\theta<2\pi$. This shows that $q_\beta(\mathbb{D})\subset\varphi_{\scriptscriptstyle{Ne}}(\mathbb{D})$ whenever $\beta\geq1/2$. Hence $\beta\geq1/2$ is also sufficient for the subordination $q_\beta\prec\varphi_{\scriptscriptstyle{Ne}}$ to hold true. Moreover, \begin{align*} d_1(0)=d_2(0,1/2) \quad \text{and} \quad d_1(\pi)=d_2(\pi,1/2). \end{align*} Therefore the estimate on $\beta$ cannot be decreased further. \end{proof} \begin{corollary} Let $\mathcal{G}(z)$ be given by \eqref{Definition-J}. If $f\in\mathcal{A}$ satisfies the subordination \begin{align*} \left(1+\beta\,\mathcal{G}(z)\right)\frac{zf'(z)}{f(z)}\prec 1+z, \end{align*} then $f\in\mathcal{S}^*_{Ne}$ whenever $\beta\geq1/2$.
\end{corollary} In the following theorem, we make use of \Cref{Lemma-3.4h-p132-Miller-Mocanu}, \Cref{Lemma-Corollary-MilMocanu}, and the properties of the confluent hypergeometric function $\Phi(a;c;z)$ defined in \eqref{Confluent-HG}. \begin{theorem}\label{Thrm-CHGF1} Let $p$ be analytic in $\mathbb{D}$ satisfying $p(0)=1$. For $\varphi_{\scriptscriptstyle{e}}(z):=e^z$, suppose that the differential subordination \begin{align*} p(z)+\beta zp'(z)\prec \varphi_{\scriptscriptstyle{e}}(z), \qquad \beta>0, \end{align*} holds. Then $p(z)\prec\varphi_{\scriptscriptstyle{Ne}}(z)$ whenever $\beta\geq\beta_e\approx1.14016$, where $\beta_e$ is the unique solution of \begin{align*} \sum_{j=0}^\infty\frac{1}{j!\,(1+j\beta)}-\frac{5}{3}=0. \end{align*} The estimate on $\beta$ cannot be improved further. \end{theorem} \begin{proof} Consider the function \begin{align}\label{Def-Sol-CHGF} \psi_\beta(z)=\frac{1}{\beta}\int_0^1 e^{zt}\,t^{\frac{1}{\beta}-1}\,dt, \qquad \beta>0. \end{align} It can be easily verified that $\psi_\beta(z)$ given by \eqref{Def-Sol-CHGF} is an analytic solution of the linear differential equation \begin{align*} \psi_\beta(z)+\beta z\psi'_\beta(z)=\varphi_{\scriptscriptstyle{e}}(z). \end{align*} Using the representation \eqref{Integral-Rep-CHGF} of the confluent hypergeometric function, it can be observed that \begin{align}\label{qbeta-CGHF-Form} \psi_\beta(z)=\Phi\left(\frac{1}{\beta};\frac{1}{\beta}+1;z\right). \end{align} Define $\vartheta(\xi)=\xi$ and $\lambda(\xi)=\beta$ so that \begin{align*} \Theta(z)=z\psi'_\beta(z)\lambda(\psi_\beta(z))=\beta z\psi'_\beta(z)= \beta z\Phi'\left(\frac{1}{\beta};\frac{1}{\beta}+1;z\right), \end{align*} which upon using \eqref{Derivative-Property-CHGF} yields \begin{align}\label{Def-qbeta-CHGF-Form-Lem} \Theta(z)=\frac{\beta}{(1+\beta)} z\Phi\left(\frac{1}{\beta}+1;\frac{1}{\beta}+2;z\right).
\end{align} We now use \Cref{Lemma-Corollary-MilMocanu} to prove that $\Theta(z)$ given by \eqref{Def-qbeta-CHGF-Form-Lem} is starlike in $\mathbb{D}$. Here $a=1/\beta+1$ and $c=1/\beta+2$ so that $c-1=1/\beta+1$ and \begin{align*} N(a-1)= \begin{cases} \frac{1}{\beta}+\frac{1}{2} \quad\text{if}\quad \frac{1}{\beta}\geq\frac{1}{3},\\ \frac{3}{2\beta^2}+\frac{2}{3} \quad\text{if}\quad \frac{1}{\beta}\leq\frac{1}{3}. \end{cases} \end{align*} In both cases, the inequality $c-1\geq N(a-1)$ holds. Therefore, $\Theta(z)$ is starlike and consequently the function $$h(z)=\vartheta\left(\psi_\beta(z)\right)+\Theta(z)=\psi_\beta(z)+\Theta(z)$$ satisfies $\mathrm{Re}\left({zh'(z)}/{\Theta(z)}\right)>0$ for each $z\in\mathbb{D}$. In view of \Cref{Lemma-3.4h-p132-Miller-Mocanu}, we conclude that the differential subordination $p(z)+\beta zp'(z)\prec \psi_\beta(z)+\beta z\psi'_\beta(z)=\varphi_{\scriptscriptstyle{e}}(z)$ implies $p(z)\prec \psi_\beta(z)$, where $\psi_\beta(z)$ is given by \eqref{qbeta-CGHF-Form}. \par Now the subordination $p(z)\prec\varphi_{\scriptscriptstyle{Ne}}(z)$ will be attained if $\psi_\beta(z)\prec\varphi_{\scriptscriptstyle{Ne}}(z)$.\\ {\bf Claim.} The necessary and sufficient condition for the subordination $\psi_\beta\prec\varphi_{\scriptscriptstyle{Ne}}$ to hold is that $\beta\geq\beta_e\approx1.14016$. \par {\it Necessity.} Let $\psi_\beta(z)\prec\varphi_{\scriptscriptstyle{Ne}}(z)$, $z\in\mathbb{D}$. Then \begin{align}\label{NS-DSI-CHGFN} \varphi_{\scriptscriptstyle {Ne}}(-1)<\psi_\beta(-1)<\psi_\beta(1)<\varphi_{\scriptscriptstyle {Ne}}(1). \end{align} On using \eqref{qbeta-CGHF-Form} and \eqref{Gamma-Function-Rep-CHGF} in \eqref{NS-DSI-CHGFN}, we obtain the two inequalities \begin{align*} \mu(\beta):= \sum_{j=0}^\infty\frac{(-1)^j}{j!\,(1+j\beta)}- \frac{1}{3} \geq0 \end{align*} and \begin{align*} \rho(\beta):= \frac{5}{3} -\sum_{j=0}^\infty\frac{1}{j!\,(1+j\beta)} \geq 0.
\end{align*} We note that: \begin{align*} \lim_{\beta\searrow0}\mu(\beta)=\frac{1}{e}-\frac{1}{3}>0, \qquad \lim_{\beta\nearrow\infty}\mu(\beta)=\frac{2}{3}>0 \end{align*} and \begin{align*} \lim_{\beta\searrow0}\rho(\beta)=\frac{5}{3}-e<0, \qquad \lim_{\beta\nearrow\infty}\rho(\beta)=\frac{2}{3}>0. \end{align*} Moreover, a computation shows that both $\mu(\beta)$ and $\rho(\beta)$ are strictly increasing functions of $\beta\in(0,\infty)$. Therefore, the inequalities $\mu(\beta)\geq0$ and $\rho(\beta)\geq0$ are true for $\beta\geq\beta_e$, where $\beta_e\approx1.14016$ is the unique root of $\rho(\beta)=0$. See the plots of $\mu(\beta)$ and $\rho(\beta)$ in \Cref{Plot-mu,Plot-rho}. \par {\it Sufficiency.} In order to attain the subordination $\psi_\beta(z)\prec\varphi_{\scriptscriptstyle{Ne}}(z)$, we only need to prove that $\psi_\beta(\mathbb{D})\subset\varphi_{\scriptscriptstyle{Ne}}(\mathbb{D})$ whenever $\beta\geq\beta_e$. Likewise in \Cref{Thrm-LemB-Impl-Neph-GHGF}, the difference of the square of the distances from the point (1, 0) to the points on the boundary curves $\varphi_{\scriptscriptstyle{Ne}}(e^{i\theta})$ and $\psi_\beta(e^{i\theta})$, respectively, is \begin{align*} d(\theta,\beta):=&d_1(\theta)-d_2(\theta,\beta)\\ =&\frac{4}{3}\left(\frac{4}{3}-{\cos^2\theta}\right)- \left(\sum_{j=0}^{\infty}\frac{\cos(j\theta)}{{j!\,(1+j\beta)}}-1\right)^2 -\left(\sum_{j=0}^{\infty}\frac{\sin(j\theta)}{{j!\,(1+j\beta)}}\right)^2. \end{align*} Since both the curves $\varphi_{\scriptscriptstyle{Ne}}(e^{i\theta})$ and $\psi_\beta(e^{i\theta})$ are symmetric about the real line, we restrict $\theta$ to $[0,\pi]$. A computation shows that $d(\theta,\beta)\geq0$ for each $\theta\in[0,\pi]$ whenever $\beta\geq\beta_e\approx1.14016$ and $d(\theta,\beta)<0$ for some $\theta$ whenever $\beta<1.14016$. This shows that the region bounded by the curve $\psi_\beta(e^{i\theta})$ lies in the interior of $\varphi_{\scriptscriptstyle{Ne}}(\mathbb{D})$ whenever $\beta\geq\beta_e$. 
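As a purely numerical sanity check (outside the formal argument; the truncation level and the bracketing interval below are our own choices), the root of $\rho(\beta)=0$ can be located by bisection, since $\rho$ is strictly increasing on $(0,\infty)$:

```python
from math import factorial

def rho(beta, terms=60):
    # rho(beta) = 5/3 - sum_{j>=0} 1/(j! * (1 + j*beta)), truncated;
    # the factorial in the denominator makes the truncation error negligible.
    return 5.0/3.0 - sum(1.0/(factorial(j)*(1.0 + j*beta)) for j in range(terms))

# A sign change brackets the unique root of rho.
lo, hi = 1.0, 1.3
assert rho(lo) < 0 < rho(hi)
for _ in range(60):
    mid = 0.5*(lo + hi)
    if rho(mid) < 0:
        lo = mid
    else:
        hi = mid
beta_e = 0.5*(lo + hi)
print(round(beta_e, 2))  # approximately 1.14
```

The computed root agrees with the value $\beta_e\approx 1.14$ used in the theorem to two decimal places.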
Further, the estimate on $\beta$ cannot be improved, since \begin{align*} d(0,\beta_e) =&\frac{4}{9}- \left(\sum_{j=0}^{\infty}\frac{1}{{j!\,(1+j\beta_e)}}-1\right)^2\\ =&\frac{4}{9}- \left(\left(\frac{5}{3}-\rho(\beta_e)\right)-1\right)^2\\ =&\frac{4}{9}- \left(\frac{5}{3}-1\right)^2 \qquad\qquad \left(\because \rho(\beta_e)=0\right)\\ =&0. \end{align*} See \Cref{Figure-CHGFT} for the geometrical interpretation of the sharpness of $\beta_e$. \end{proof} \begin{figure} \caption{Plot of $\mu(\beta)$, $\beta>0$.} \label{Plot-mu} \caption{Plot of $\rho(\beta)$, $\beta>0$.} \label{Plot-rho} \end{figure} The following sufficient condition for the nephroid starlikeness of $f\in\mathcal{A}$ is obtained on setting $p(z)={zf'(z)}/{f(z)}$ in \Cref{Thrm-CHGF1}. \begin{corollary} Let $\mathcal{G}(z)$ be defined as in \eqref{Definition-J}. If $f\in\mathcal{A}$ satisfies the subordination \begin{align*} \left(1+\beta\,\mathcal{G}(z)\right)\frac{zf'(z)}{f(z)}\prec \varphi_{\scriptscriptstyle{e}}(z), \end{align*} then $f\in\mathcal{S}^*_{Ne}$ for $\beta\geq\beta_e\approx1.14016$. \end{corollary} \section{Conclusion} In this work, certain differential subordination-implication problems have been discussed. We have employed a new technique to solve the problem by applying the well-known properties of Gaussian and confluent (or Kummer) hypergeometric functions. In addition, this paper supplies analytic clarification of certain set-inclusions which earlier authors claimed to be true without providing any details. All the results proved yield sufficient conditions for the nephroid starlikeness of a normalized analytic function. For a quick overview, we summarize the subordination implications discussed in this paper in \Cref{Table:II}. \par In view of the recent papers of Srivastava et al.
\cite{HMS-q-Hankel-2019-MPaid, HMS-q-Coeff-2019-HMJ, HMS-q-ConicD-2019-Rocky, HMS-q-Janowski-2019-Filomat, HMS-q-Hankel-2021-BSM}, we remark that the work related to the nephroid domain carried out by the authors of this manuscript has several future prospects for $q$-extension. \begin{center} \begin{longtable}{ |M{4cm}|c|c|M{2.1cm}|M{2.4cm}| } \hline {\bf Differential Subordination} & {\boldmath{$\mathcal{P}(z)$}} & {\bf Implication} & {\boldmath$\beta$} & {\bf Sharp/ non-sharp \boldmath$\beta$}\\ \hline \multirow{3}{10em}{$p(z)+\beta{zp'(z)}\prec\mathcal{P}(z)$} & $\sqrt{1+z}$ & \multirow{3}{6em}{$p(z)\prec\varphi_{\scriptscriptstyle{Ne}}(z)$ $=1+z-{z^3}/{3}$} & $0.158379$ &\multirow{3}{2em}{sharp}\\ \cline{2-2} \cline{4-4} & $1+z$ & & $1/2$ &\\ \cline{2-2} \cline{4-4} & $e^z$ & & $1.14016$ &\\ \hline \caption{Subordination implications studied in this paper} \label{Table:II} \end{longtable} \end{center} {\bf Acknowledgment.} This work was supported by Project No. CRG/2019/000200/MS of the Science and Engineering Research Board, Department of Science and Technology, New Delhi, India. \end{document}
\begin{document} \title[On $(\sigma,\delta)$-skew McCoy modules]{On $(\sigma,\delta)$-skew McCoy modules} \date{\today} \subjclass[2010]{16S36, 16U80} \keywords{McCoy module, $(\sigma,\delta)$-skew McCoy module, semicommutative module, Armendariz module, $(\sigma,\delta)$-skew Armendariz module, reduced module} \author[M. Louzari]{Mohamed Louzari} \address{Department of Mathematics\\ Faculty of Sciences \\ Abdelmalek Essaadi University\\ BP. 2121 Tetouan, Morocco} \email{mlouzari@yahoo.com} \author[L. Ben Yakoub]{L'moufadal Ben Yakoub} \address{Department of Mathematics\\ Faculty of Sciences \\ Abdelmalek Essaadi University\\ BP. 2121 Tetouan, Morocco} \email{benyakoub@hotmail.com} \begin{abstract}Let $(\sigma,\delta)$ be a quasi-derivation of a ring $R$ and $M_R$ a right $R$-module. In this paper, we introduce the notion of $(\sigma,\delta)$-skew McCoy modules, which extends the notion of McCoy modules and $\sigma$-skew McCoy modules. This concept can also be regarded as a generalization of $(\sigma,\delta)$-skew Armendariz modules. Some properties of this concept are established and some connections between $(\sigma,\delta)$-skew McCoyness and $(\sigma,\delta)$-compatible reduced modules are examined. Also, we study the $(\sigma,\delta)$-skew McCoy property of some skew triangular matrix extensions $V_n(M,\sigma)$, for any integer $n\geq 2$. As a consequence, we obtain: (1) $M_R$ is $(\sigma,\delta)$-skew McCoy if and only if $M[x]/M[x](x^n)$ is $(\overline{\sigma},\overline{\delta})$-skew McCoy, and (2) $M_R$ is $\sigma$-skew McCoy if and only if $M[x;\sigma]/M[x;\sigma](x^n)$ is $\overline{\sigma}$-skew McCoy. \end{abstract} \maketitle \section{Introduction} Throughout this paper, $R$ denotes an associative ring with unity and $M_R$ a right $R$-module. For a subset $X$ of a module $M_R$, $r_R(X)=\{a\in R|Xa=0\}$ and $\ell_R(X)=\{a\in R|aX=0\}$ will stand for the right and the left annihilator of $X$ in $R$ respectively.
An Ore extension of a ring $R$ is denoted by $R[x;\sigma,\delta]$, where $\sigma$ is an endomorphism of $R$ and $\delta$ is a $\sigma$-derivation, i.e., $\delta\colon R\rightarrow R$ is an additive map such that $\delta(ab)=\sigma(a)\delta(b)+\delta(a)b$ for all $a,b\in R$ (the pair $(\sigma,\delta)$ is also called a quasi-derivation of $R$). Recall that elements of $R[x;\sigma,\delta]$ are polynomials in $x$ with coefficients written on the left. Multiplication in $R[x;\sigma,\delta]$ is given by the multiplication in $R$ and the condition $xa=\sigma(a)x+\delta(a)$, for all $a\in R$. In what follows, $S$ will stand for the Ore extension $R[x;\sigma,\delta]$. On the other hand, we have a natural functor $-\otimes_RS$ from the category of right $R$-modules into the category of right $S$-modules. For a right $R$-module $M$, the right $S$-module $M\otimes_R S$ is called {\it the induced module} \cite{matczuk/induced}. Since $R[x;\sigma,\delta]$ is a free left $R$-module, elements of $M\otimes_R S$ can be seen as polynomials in $x$ with coefficients in $M$ with natural addition and right $S$-module multiplication. \par For any $0\leq i\leq j\;(i,j\in \Bbb N)$, $f_i^j\in End(R,+)$ will denote the map which is the sum of all possible words in $\sigma,\delta$ built with $i$ factors of $\sigma$ and $j-i$ factors of $\delta$ (e.g., $f_n^n=\sigma^n$ and $f_0^n=\delta^n, n\in \Bbb N $). We have $x^ja=\sum_{i=0}^jf_i^j(a)x^i$ for all $a\in R$, where $i,j$ are nonnegative integers with $j\geq i$ (see \cite[Lemma 4.1]{lam}). \par Following Lee and Zhou \cite{lee/zhou}, we introduce the notation $M[x;\sigma,\delta]$ to denote the $S$-module $M\otimes_R S$.
Consider $$M[x;\sigma,\delta]:=\set{\sum_{i=0}^nm_ix^i\mid n\geq 0,m_i\in M},$$ which is an $S$-module under an obvious addition and the action of monomials of $R[x;\sigma,\delta]$ on monomials in $M[x;\sigma,\delta]_{R[x;\sigma,\delta]}$ via $(mx^j)(ax^{\ell})=m\sum_{i=0}^jf_i^j(a)x^{i+\ell}$ for all $a\in R$ and $j,\ell\in \Bbb N$. The $S$-module $M[x;\sigma,\delta]$ is called the {\it skew polynomial extension} related to the quasi-derivation $(\sigma,\delta)$. \par A module $M_R$ is semicommutative, if for any $m\in M$ and $a\in R$, $ma=0$ implies $mRa=0$ \cite{rege2002}. Let $\sigma$ be an endomorphism of $R$; $M_R$ is called a $\sigma$-semicommutative module \cite{zhang/chen} if, for any $m\in M$ and $a\in R$, $ma=0$ implies $mR\sigma(a)=0$. For a module $M_R$ and a quasi-derivation $(\sigma,\delta)$ of $R$, we say that $M_R$ is $\sigma$-compatible, if for each $m\in M$ and $a\in R$, we have $ma=0 \Leftrightarrow m\sigma(a)=0$. Moreover, we say that $M_R$ is $\delta$-compatible, if for each $m\in M$ and $a\in R$, we have $ma=0\Rightarrow m\delta(a)=0$. If $M_R$ is both $\sigma$-compatible and $\delta$-compatible, we say that $M_R$ is $(\sigma,\delta)$-compatible (see \cite{annin/2004}). In \cite{zhang/chen}, a module $M_R$ is called $\sigma$-{\it skew Armendariz}, if $m(x)f(x)=0$ where $m(x)=\sum_{i=0}^nm_ix^i\in M[x;\sigma]$ and $f(x)=\sum_{j=0}^ma_jx^j\in R[x;\sigma]$ implies $m_i\sigma^i(a_j)=0$ for all $i,j$. According to Lee and Zhou \cite{lee/zhou}, $M_R$ is called $\sigma$-{\it Armendariz}, if it is $\sigma$-compatible and $\sigma$-skew Armendariz. \par Following Alhevas and Moussavi \cite{moussavi/2012}, a module $M_R$ is called $(\sigma,\delta)$-skew Armendariz, if whenever $m(x)g(x)=0$ where $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $g(x)=\sum_{j=0}^qb_jx^j\in R[x;\sigma,\delta]$, we have $m_ix^ib_jx^j=0$ for all $i,j$.
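To make the skew multiplication rule $xa=\sigma(a)x+\delta(a)$ concrete, the following is a small computational sketch (ours, not part of the paper). It models $R=\Bbb Z_2\oplus\Bbb Z_2$ with the swap endomorphism $\sigma((a,b))=(b,a)$ and $\delta=id_R-\sigma$, the data of Example \ref{exp2} below, and checks that $p(x)=(1,0)x$ annihilates $q(x)=(1,1)+(1,0)x$ in $R[x;\sigma,\delta]$ while $p(x)r\neq 0$ for every nonzero constant $r$; all helper names are our own.

```python
# Elements of R = Z2 + Z2 are pairs with componentwise mod-2 arithmetic.
def add(u, v): return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)
def mul(u, v): return ((u[0] * v[0]) % 2, (u[1] * v[1]) % 2)
def sigma(u): return (u[1], u[0])          # sigma((a,b)) = (b,a)
def delta(u): return add(u, sigma(u))      # delta = id - sigma (= id + sigma mod 2)

ZERO = (0, 0)

def padd(f, g):
    """Add two skew polynomials given as coefficient lists (f[i] goes with x^i)."""
    n = max(len(f), len(g))
    f = f + [ZERO] * (n - len(f)); g = g + [ZERO] * (n - len(g))
    return [add(a, b) for a, b in zip(f, g)]

def xtimes(f):
    """Left-multiply f by x, using x*a = sigma(a)*x + delta(a) coefficientwise."""
    out = [ZERO] * (len(f) + 1)
    for i, a in enumerate(f):
        out[i + 1] = add(out[i + 1], sigma(a))  # sigma(a) x^{i+1}
        out[i] = add(out[i], delta(a))          # delta(a) x^i
    return out

def skew_mul(f, g):
    """Product f*g in R[x; sigma, delta], coefficients written on the left."""
    result = [ZERO]
    for i, a in enumerate(f):
        term = list(g)
        for _ in range(i):                      # compute x^i * g
            term = xtimes(term)
        result = padd(result, [mul(a, c) for c in term])
    return result

p = [ZERO, (1, 0)]          # p(x) = (1,0) x
q = [(1, 1), (1, 0)]        # q(x) = (1,1) + (1,0) x
print(skew_mul(p, q))       # every coefficient is (0, 0)
```

In particular, the computation confirms the behaviour exhibited in Example \ref{exp2}: $p(x)q(x)=0$ although no nonzero constant annihilates $p(x)$ on the right.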
In this paper, we introduce the concept of $(\sigma,\delta)$-skew McCoy modules, which is a generalization of McCoy modules and $\sigma$-skew McCoy modules. This concept can also be regarded as a generalization of $(\sigma,\delta)$-skew Armendariz modules and rings. We study connections between reduced modules, $(\sigma,\delta)$-compatible modules and $(\sigma,\delta)$-skew McCoy modules. Also, we show that $(\sigma,\delta)$-skew McCoyness passes from a module $M_R$ to its skew triangular matrix extension $V_n(M,\sigma)$. In this sense, we complete the definition of skew triangular matrix rings $V_n(R,\sigma)$ given by Isfahani \cite{isfahani/2011}, by introducing the notion of skew triangular matrix modules. Moreover, we give some results on $(\sigma,\delta)$-skew McCoyness for skew triangular matrix modules. \section{$(\sigma,\delta)$-skew McCoy modules} \par Cui and Chen \cite{cui/2011,cui/2012} introduced both concepts of McCoy modules and $\sigma$-skew McCoy modules. A module $M_R$ is called {\it McCoy} if $m(x)g(x)=0$, where $m(x)=\sum_{i=0}^pm_ix^i\in M[x]$ and $g(x)=\sum_{j=0}^qb_jx^j\in R[x]\setminus\{0\}$, implies that there exists $a\in R\setminus\{0\}$ such that $m(x)a=0$. A module $M_R$ is called {\it $\sigma$-skew McCoy} if $m(x)g(x)=0$, where $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma]$ and $g(x)=\sum_{j=0}^qb_jx^j\in R[x;\sigma]\setminus\{0\}$, implies that there exists $a\in R\setminus\{0\}$ such that $m(x)a=0$. In the same manner, we introduce the concept of {\it $(\sigma,\delta)$-skew McCoy} modules, which is a generalization of McCoy modules, $\sigma$-skew McCoy modules and $(\sigma,\delta)$-skew Armendariz modules. \begin{definition}Let $M_R$ be a module and $M[x;\sigma,\delta]$ the corresponding $(\sigma,\delta)$-skew polynomial module over $R[x;\sigma,\delta]$.
\par$\mathbf{(1)}$ The module $M_R$ is called {\it $(\sigma,\delta)$-skew McCoy} if $m(x)g(x)=0$, where $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $g(x)=\sum_{j=0}^qb_jx^j\in R[x;\sigma,\delta]\setminus\{0\}$, implies that there exists $a\in R\setminus\{0\}$ such that $m(x)a=0$ $($i.e., $\sum_{i=\ell}^pm_if_{\ell}^i(a)=0$, for all $\ell=0,1,\cdots,p)$. \par$\mathbf{(2)}$ The ring $R$ is called {\it $(\sigma,\delta)$-skew McCoy} if $R$ is $(\sigma,\delta)$-skew McCoy as a right $R$-module. \end{definition} \begin{remark}\label{rem2}$\mathbf{(1)}$ If $M_R$ is an $(\sigma,\delta)$-skew Armendariz module, then it is $(\sigma,\delta)$-skew McCoy $($Proposition \ref{prop1}$)$. But the converse is not true $($Example \ref{exp mcnotarm}$)$. \par$\mathbf{(2)}$ If $\sigma=id_R$ and $\delta=0$, we get the concept of McCoy module; if only $\delta=0$, we get the concept of $\sigma$-skew McCoy module. \par$\mathbf{(3)}$ A module $M_R$ is $(\sigma,\delta)$-skew McCoy if and only if for all $m(x)\in M[x;\sigma,\delta]$, $r_{R[x;\sigma,\delta]}(m(x))\neq 0\Rightarrow r_{R[x;\sigma,\delta]}(m(x))\cap R\neq 0.$ \end{remark} An ideal $I$ of a ring $R$ is called $(\sigma,\delta)$-stable, if $\sigma(I)\subseteq I$ and $\delta(I)\subseteq I$. \begin{proposition}\label{prop2}$\mathbf{(1)}$ Let $I$ be a nonzero right ideal of $R$. If $I$ is $(\sigma,\delta)$-stable, then $R/I$ is an $(\sigma,\delta)$-skew McCoy $R$-module. \par$\mathbf{(2)}$ For any index set $I$, if $M_i$ is $(\sigma_i,\delta_i)$-skew McCoy as an $R_i$-module for each $i\in I$, then $\prod_{i\in I}M_i$ is $(\sigma,\delta)$-skew McCoy as a $\prod_{i\in I}R_i$-module, where $(\sigma,\delta)=(\sigma_i,\delta_i)_{i\in I}$. \par$\mathbf{(3)}$ Every submodule of an $(\sigma,\delta)$-skew McCoy module is $(\sigma,\delta)$-skew McCoy. In particular, if $I$ is a right ideal of an $(\sigma,\delta)$-skew McCoy ring, then $I_R$ is an $(\sigma,\delta)$-skew McCoy module.
\par$\mathbf{(4)}$ A module $M_R$ is $(\sigma,\delta)$-skew McCoy if and only if every finitely generated submodule of $M_R$ is $(\sigma,\delta)$-skew McCoy. \end{proposition} \begin{proof}$\mathbf{(1)}$ Let $m(x)=\sum_{i=0}^p\overline{m}_ix^i\in (R/I)[x;\sigma,\delta]$, where $\overline{m}_i=r_i+I\in R/I$ for all $i=0,1,\cdots,p$, and let $r$ be an arbitrary nonzero element of $I$. We have $m(x)r=\sum_{i=0}^p(r_i+I)\sum_{\ell=0}^if_{\ell}^i(r)x^{\ell}\in I[x;\sigma,\delta]$, because $f_{\ell}^i(r)\in I$ for all $\ell=0,1,\cdots,i$. Hence $m(x)r=\bar{0}$. \par$\mathbf{(2)}$ Let $M=\prod_{i\in I}M_i$ and $R=\prod_{i\in I}R_i$ such that each $M_i$ is $(\sigma_i,\delta_i)$-skew McCoy as an $R_i$-module for all $i\in I$. Take $m(x)=(m_i(x))_{i\in I}\in M[x;\sigma,\delta]$ and $f(x)=(f_i(x))_{i\in I}\in R[x;\sigma,\delta]\setminus\{0\}$, where $m_i(x)=\sum_{s=0}^pm_i(s)x^s\in M_i[x;\sigma_i,\delta_i]$ and $f_i(x)=\sum_{t=0}^qa_i(t)x^t\in R_i[x;\sigma_i,\delta_i]$ for each $i\in I$. Suppose that $m(x)f(x)=0$; then $m_i(x)f_i(x)=0$ for each $i\in I$. For each $i\in I$ with $f_i(x)\neq 0$ (such an $i$ exists since $f(x)\neq 0$), $M_i$ being $(\sigma_i,\delta_i)$-skew McCoy yields $0\neq r_i\in R_i$ such that $m_i(x)r_i=0$; for the remaining indices we take $r_i=0$. Thus $m(x)r=0$ where $0\neq r=(r_i)_{i\in I}\in R$. \par$\mathbf{(3)}$ and $\mathbf{(4)}$ are obvious. \end{proof} \begin{proposition}\label{prop1}If $M_R$ is an $(\sigma,\delta)$-skew Armendariz module, then it is $(\sigma,\delta)$-skew McCoy. \end{proposition} \begin{proof} Let $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $g(x)=\sum_{j=0}^qb_jx^j\in R[x;\sigma,\delta]\setminus\{0\}$. Suppose that $m(x)g(x)=0$; then $m_ix^ib_jx^j=0$ for all $i,j$. Since $g(x)\neq 0$, we have $b_{j_0}\neq 0$ for some $j_0\in\{0,1,\cdots,q\}$. Thus $m_ix^ib_{j_0}x^{j_0}=0$ for all $i$. On the other hand, $\sum_{i=0}^pm_ix^ib_{j_0}x^{j_0}=\sum_{\ell=0}^p(\sum_{i=\ell}^pm_if_{\ell}^i(b_{j_0}))x^{\ell+j_0}=0$, and so $\sum_{i=\ell}^pm_if_{\ell}^i(b_{j_0})=0$ for all $\ell=0,1,\cdots,p$.
Thus $m(x)b_{j_0}=0$, and therefore $M_R$ is $(\sigma,\delta)$-skew McCoy. \end{proof} By the next example, we see that the converse of Proposition \ref{prop1} does not hold. \begin{example}\label{exp mcnotarm}Let $R$ be a reduced ring. Consider the ring $$R_4=\set{\left( \begin{array}{cccc} a & a_{12} & a_{13}& a_{14} \\ 0 & a & a_{23}& a_{24} \\ 0 & 0 & a & a_{34}\\ 0 & 0 &0& a \\ \end{array} \right)\mid a,a_{ij}\in R}.$$ Since $R$ is reduced, it is right McCoy, and so $R_4$ is right McCoy by \cite[Proposition 2.1]{zhao/liu}. But $R_4$ is not Armendariz by \cite[Example 3]{kim/lee}. \end{example} An $(\sigma,\delta)$-skew McCoy module need not be McCoy, by \cite[Example 2.3(2)]{cui/2012}. Also, the following example shows that there exists a module which is McCoy but not $(\sigma,\delta)$-skew McCoy. \begin{example}\label{exp2}Let $\Bbb Z_2$ be the ring of integers modulo $2$, and consider the ring $R=\Bbb Z_2\oplus \Bbb Z_2$ with the usual addition and multiplication. Let $\sigma$ be the endomorphism of $R$ defined by $\sigma((a,b))=(b,a)$ and $\delta$ the $\sigma$-derivation of $R$ defined by $\delta((a,b))=(a,b)-\sigma((a,b))$. The ring $R$ is commutative reduced, hence McCoy. However, for $p(x)=(1,0)x$ and $q(x)=(1,1)+(1,0)x\in R[x;\sigma,\delta]$, we have $p(x)q(x)=0$, but $p(x)(a,b)\neq 0$ for any $0\neq (a,b)\in R$. Therefore, $R$ is not $(\sigma,\delta)$-skew McCoy. Also, $R$ is not $(\sigma,\delta)$-compatible, because $(0,1)(1,0)=(0,0)$, but $(0,1)\sigma((1,0))=(0,1)^2\neq (0,0)$ and $(0,1)\delta((1,0))=(0,1)(1,1)=(0,1)\neq (0,0)$. \end{example} \begin{lemma}\label{rem3} Let $M_R$ be an $(\sigma,\delta)$-compatible module. For any $m\in M_R$, $a\in R$ and nonnegative integers $i,j$, we have the following: \par$\mathbf{(1)}$ $ma=0\Rightarrow m\sigma^i(a)=m\delta^j(a)=0$. \par$\mathbf{(2)}$ $ma=0\Rightarrow m\sigma^i(\delta^j(a))=m\delta^i(\sigma^j(a))=0$. \end{lemma} \begin{proof}The verification is straightforward.
\end{proof} If $M_R$ is an $(\sigma,\delta)$-compatible module, then $ma=0\Rightarrow mf_i^j(a)=0$ for any nonnegative integers $i,j$ such that $j\geq i$, where $m\in M_R$ and $a\in R$. For a subset $U$ of $M_R$ and a quasi-derivation $(\sigma,\delta)$ of $R$, the set of all skew polynomials with coefficients in $U$ is denoted by $U[x;\sigma,\delta]$. \begin{lemma}\label{prop3}Let $M_R$ be a module and $(\sigma,\delta)$ a quasi-derivation of $R$. The following are equivalent: \par$\mathbf{(1)}$ For any $U\subseteq M[x;\sigma,\delta]$, $(r_{R[x;\sigma,\delta]}(U)\cap R)[x;\sigma,\delta]=r_{R[x;\sigma,\delta]}(U)$. \par$\mathbf{(2)}$ For any $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $f(x)=\sum_{j=0}^qa_jx^j\in R[x;\sigma,\delta]$, $m(x)f(x)=0$ implies $\sum_{\ell=i}^pm_{\ell}f_i^{\ell}(a_j)=0$ for all $i,j$. \end{lemma} \begin{proof}$(1)\Rightarrow (2)$. Let $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $f(x)=\sum_{j=0}^qa_jx^j\in R[x;\sigma,\delta]$. If $m(x)f(x)=0$, we have $f(x)\in r_{R[x;\sigma,\delta]}(m(x))=(r_{R[x;\sigma,\delta]}(m(x))\cap R)[x;\sigma,\delta]$. Then $a_j\in r_{R[x;\sigma,\delta]}(m(x))$ for all $j$, so that $m(x)a_j=0$ for all $j$. But $m(x)a_j=0 \Leftrightarrow \sum_{\ell=i}^pm_{\ell}f_i^{\ell}(a_j)=0$ for all $0\leq i\leq p$. Thus $\sum_{\ell=i}^pm_{\ell}f_i^{\ell}(a_j)=0$ for all $i,j$. \break $(2)\Rightarrow (1)$. Let $U\subseteq M[x;\sigma,\delta]$; we always have $(r_{R[x;\sigma,\delta]}(U)\cap R)[x;\sigma,\delta]\subseteq r_{R[x;\sigma,\delta]}(U)$. Conversely, let $f(x)\in r_{R[x;\sigma,\delta]}(U)$; then by $(2)$, we have $Ua_j=0$ for all $j$ and so $a_j\in r_{R[x;\sigma,\delta]}(U)\cap R$. Therefore $f(x)\in (r_{R[x;\sigma,\delta]}(U)\cap R)[x;\sigma,\delta]$. \end{proof} \begin{theorem}[McCoy's Theorem for module extensions]\label{theo mccoy}Let $M_R$ be a module and $N$ a nonzero submodule of $M[x;\sigma,\delta]$. If one of the equivalent conditions of Lemma \ref{prop3} is satisfied, then $r_{R[x;\sigma,\delta]}(N)\neq 0$ implies $r_{R}(N)\neq 0$. \end{theorem} \begin{proof}Suppose that $r_{R[x;\sigma,\delta]}(N)\neq 0$; then there exists $0\neq f(x)=\sum_{i=0}^pa_ix^i\in r_{R[x;\sigma,\delta]}(N)$. But $r_{R[x;\sigma,\delta]}(N)=(r_{R[x;\sigma,\delta]}(N)\cap R)[x;\sigma,\delta]$ by Lemma \ref{prop3}. Therefore all $a_i$ are in $r_{R[x;\sigma,\delta]}(N)$, so $a_i\in r_R(N)$ for all $i$. Since $f(x)\neq 0$, there exists $i_0\in \{0,1,\cdots,p\}$ such that $0\neq a_{i_0}\in r_R(N)$. Hence $r_{R}(N)\neq 0$. \end{proof} \begin{definition}\label{def}Let $M_R$ be a module and $\sigma$ an endomorphism of $R$. We say that $M_R$ satisfies the condition $(\mathcal{C_{\sigma}})$ if whenever $m\sigma(a)=0$ with $m\in M$ and $a\in R$, we have $ma=0$. \end{definition} \begin{proposition}\label{prop/combine}Let $m(x)=\sum_{i=0}^{p}m_ix^i\in M[x;\sigma,\delta]$ and $f(x)=\sum_{j=0}^qa_jx^j$ $\in R[x;\sigma,\delta]$ be such that $m(x)f(x)=0$. If one of the following conditions holds: \par$\mathbf{(a)}$ $M_R$ is $(\sigma,\delta)$-skew Armendariz and satisfies the condition $(\mathcal{C_{\sigma}})$; \par$\mathbf{(b)}$ $M_R$ is reduced and $(\sigma,\delta)$-compatible; \break then $m_ia_j=0$ for all $i,j$. \end{proposition} \begin{proof}\par$\mathbf{(a)}$ Since $M_R$ is $(\sigma,\delta)$-skew Armendariz, from $m(x)f(x)=0$ we get $m_ix^ia_jx^j=0$ for all $i,j$. But $m_ix^ia_jx^j=m_i\sum_{\ell=0}^if_{\ell}^i(a_j)x^{j+\ell}=m_i\sigma^i(a_j)x^{i+j}+Q(x)=0$ where $Q(x)$ is a polynomial in $M[x;\sigma,\delta]$ of degree strictly less than $i+j$. Thus $m_i\sigma^i(a_j)=0$, and therefore $m_ia_j=0$ for all $i,j$ by the condition $(\mathcal{C_{\sigma}})$. \par$\mathbf{(b)}$ We will use freely the fact that if $ma=0$, then $m\sigma^i(a)=m\delta^{j}(a)=mf_i^{j}(a)=0$ for any nonnegative integers $i,j$ with $j\geq i$.
From $m(x)f(x)=0$, we have the following system of equations: $$\;\qquad\qquad\qquad\qquad\qquad m_p\sigma^p(a_q)=0,\leqno(0)$$ $$m_p\sigma^p(a_{q-1})+m_{p-1}\sigma^{p-1}(a_q)+m_pf_{p-1}^p(a_q)=0,\qquad\qquad\leqno(1)$$ $$m_p\sigma^p(a_{q-2})+m_{p-1}\sigma^{p-1}(a_{q-1})+m_pf_{p-1}^p(a_{q-1})+m_{p-2}\sigma^{p-2}(a_q)+m_{p-1}f_{p-2}^{p-1}(a_q)\leqno(2)$$ $$\quad\qquad\qquad\qquad\quad\;\; +m_pf_{p-2}^p(a_q)=0,$$ $$m_p\sigma^p(a_{q-3})+m_{p-1}\sigma^{p-1}(a_{q-2})+ m_pf_{p-1}^p(a_{q-2})+m_{p-2}\sigma^{p-2}(a_{q-1})\leqno(3)$$ $$+m_{p-1}f_{p-2}^{p-1}(a_{q-1})+m_pf_{p-2}^p(a_{q-1})+m_{p-3} \sigma^{p-3}(a_q)+m_{p-2}f_{p-3}^{p-2}(a_q)$$ $$\;\;\quad +m_{p-1}f_{p-3}^{p-1}(a_q)+m_pf_{p-3}^p(a_q)=0,$$ $$\qquad\qquad\qquad\qquad\vdots$$ $$\qquad\sum_{\substack{0\leq j\leq i\leq p\\ 0\leq k\leq q\\ j+k=p+q-\ell}}m_if_j^i(a_k)=0,\leqno(\ell)$$ $$\qquad\qquad\qquad\qquad\vdots$$ $$\;\qquad\qquad\qquad\qquad\quad\sum_{i=0}^pm_i\delta^i(a_0)=0.\leqno(p+q)$$ From equation $(0)$, we have $m_pa_q=0$ by $\sigma$-compatibility. Multiplying equation $(1)$ on the right by $a_q$, we get $$m_p\sigma^p(a_{q-1})a_q+m_{p-1}\sigma^{p-1}(a_q)a_q+m_pf_{p-1}^p(a_q)a_q=0.\;\qquad\qquad\qquad\qquad\leqno(1')$$ Since $M_R$ is semicommutative, $$m_pa_q=0\Rightarrow m_p\sigma^p(a_{q-1})a_q=m_pf_{p-1}^p(a_q)a_q=0.$$ By Lemma \ref{lemma banal}, equation $(1')$ gives $m_{p-1}a_q=0$. Also, by $(\sigma,\delta)$-compatibility, equation $(1)$ implies $m_p\sigma^p(a_{q-1})=0$, because $m_pa_q=m_{p-1}a_q=0$. Thus $m_pa_{q-1}=0$. \par Summarizing at this point, we have $$m_pa_q=m_{p-1}a_q=m_pa_{q-1}=0.\leqno(\alpha)$$ Now, multiplying equation $(2)$ on the right by $a_q$, we get $$m_p\sigma^p(a_{q-2})a_q+m_{p-1}\sigma^{p-1}(a_{q-1})a_q+m_pf_{p-1}^p(a_{q-1})a_q+m_{p-2}\sigma^{p-2}(a_q)a_q\leqno(2')$$ $$+m_{p-1}f_{p-2}^{p-1}(a_q)a_q+m_pf_{p-2}^p(a_q)a_q=0.\;\;$$ In the same manner as above, equation $(2')$ gives $m_{p-2}\sigma^{p-2}(a_q)a_q=0$ and thus $m_{p-2}a_q=0\;(\beta)$.
Also, multiplying equation $(2)$ on the right by $a_{q-1}$, we get $$m_p\sigma^p(a_{q-2})a_{q-1}+m_{p-1}\sigma^{p-1}(a_{q-1})a_{q-1}+m_pf_{p-1}^p(a_{q-1})a_{q-1}\leqno(2'')$$ $$+m_{p-2}\sigma^{p-2}(a_q)a_{q-1}+m_{p-1}f_{p-2}^{p-1}(a_q)a_{q-1}+m_pf_{p-2}^p(a_q)a_{q-1}=0.$$ Equations $(\alpha)$ and $(\beta)$ imply $$\;0=m_p\sigma^p(a_{q-2})a_{q-1}=m_pf_{p-1}^p(a_{q-1})a_{q-1}=m_{p-2}\sigma^{p-2}(a_q)a_{q-1}$$ $$=m_{p-1}f_{p-2}^{p-1}(a_q)a_{q-1}=m_pf_{p-2}^p(a_q)a_{q-1}.\qquad\qquad\qquad\qquad\;$$ Hence, equation $(2'')$ gives $m_{p-1}\sigma^{p-1}(a_{q-1})a_{q-1}=0$ and, by Lemma \ref{lemma banal}, we get $m_{p-1}a_{q-1}=0\;(\gamma)$. Now, by equations $(\alpha)$, $(\beta)$ and $(\gamma)$, we get $m_{p-1}\sigma^{p-1}(a_{q-1})=m_pf_{p-1}^p(a_{q-1})=m_{p-2}\sigma^{p-2}(a_q)=m_{p-1}f_{p-2}^{p-1}(a_q)=m_pf_{p-2}^p(a_q)=0$. Therefore equation $(2)$ implies $m_p\sigma^p(a_{q-2})=0$, so that $m_pa_{q-2}=0$. \par Summarizing at this point, we have $m_ia_j=0$ with $i+j\in \{p+q,p+q-1,p+q-2\}$. Continuing this procedure yields $m_ia_j=0$ for all $i,j$. \end{proof} \begin{lemma}\label{lemma banal}Let $M_R$ be an $(\sigma,\delta)$-compatible module such that $ma^2=0$ implies $ma=0$ for any $m\in M$ and $a\in R$. Then \par$\mathbf{(1)}$ $m\sigma(a)a=0$ implies $ma=m\sigma(a)=0$. \par$\mathbf{(2)}$ $ma\sigma(a)=0$ implies $ma=m\sigma(a)=0$. \end{lemma} \begin{proof}The proof is straightforward. \end{proof} According to Lee and Zhou \cite{lee/zhou}, a module $M_R$ is called $\sigma$-{\it reduced}, if for any $m\in M$ and $a\in R$ we have \begin{enumerate} \item [$\mathbf{(1)}$] $ma=0$ implies $mR\cap Ma=0$. \item [$\mathbf{(2)}$] $ma=0$ if and only if $m\sigma(a)=0$. \end{enumerate} The module $M_R$ is called reduced if $M_R$ is $id_R$-reduced. \begin{lemma}[{\cite[Lemma 1.2]{lee/zhou}}]\label{lemma zhou}The following are equivalent for a module $M_R$: \begin{enumerate} \item [$\mathbf{(1)}$] $M_R$ is $\sigma$-reduced.
\item [$\mathbf{(2)}$] The following three conditions hold: For any $m\in M$ and $a\in R$, \begin{enumerate} \item [$\mathbf{(a)}$] $ma=0$ implies $mRa=mR\sigma(a)=0$. \item [$\mathbf{(b)}$] $ma\sigma(a)=0$ implies $ma=0$. \item [$\mathbf{(c)}$] $ma^2=0$ implies $ma=0$. \end{enumerate} \end{enumerate} \end{lemma} By Lemma \ref{lemma zhou}, a module $M_R$ is reduced if and only if it is semicommutative and satisfies the condition that $ma^2=0$ implies $ma=0$ for any $m\in M$ and $a\in R$. \begin{corollary}[{\cite[Theorem 2.19]{moussavi/2012}}]Every $(\sigma,\delta)$-compatible and reduced module is $(\sigma,\delta)$-skew Armendariz. \end{corollary} \begin{proof}This is clear from Proposition \ref{prop/combine}(b). \end{proof} Let $M_R$ be a module and $(\sigma,\delta)$ a quasi-derivation of $R$. We say that $M_R$ satisfies the condition $(*)$, if for any $m(x)\in M[x;\sigma,\delta]$ and $f(x)\in R[x;\sigma,\delta]$, $m(x)f(x)=0$ implies $m(x)Rf(x)=0$. A module $M_R$ which satisfies the condition $(*)$ is semicommutative, but the converse is not true, as the next example shows. \begin{example}\label{ex2}Take the ring $R=\Bbb Z_2\oplus \Bbb Z_2$ with $(\sigma,\delta)$ as considered in Example \ref{exp2}. Since $R$ is commutative, the module $R_R$ is semicommutative. However, it does not satisfy the condition $(*)$: for $p(x)=(1,0)x$ and $q(x)=(1,1)+(1,0)x\in R[x;\sigma,\delta]$, we have $p(x)q(x)=0$, but $p(x)(1,0)q(x)=(1,0)+(1,0)x\neq 0$. Thus $p(x)Rq(x)\neq 0$. \end{example} \begin{theorem}\label{th2}If a module $M_R$ is $(\sigma,\delta)$-compatible and reduced, then it satisfies the condition $(*)$. \end{theorem} \begin{proof}Let $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $f(x)=\sum_{j=0}^qa_jx^j\in R[x;\sigma,\delta]$ be such that $m(x)f(x)=0$. By Proposition \ref{prop/combine}(b) and semicommutativity of $M_R$, we have $m_iRa_j=0$ for all $i$ and $j$. Moreover, compatibility implies $m_if_k^{\ell}(Ra_j)=0$ for all $i,j,k,\ell$. Therefore $m(x)Rf(x)=0$.
\end{proof} Since the ring $R=\Bbb Z_2\oplus \Bbb Z_2$ is reduced, Example \ref{ex2} shows that the condition ``$(\sigma,\delta)$-compatible" in Theorem \ref{th2} is not superfluous. \begin{proposition}\label{prop4}Let $M_R$ be an $(\sigma,\delta)$-compatible module which satisfies $(*)$, and let $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $f(x)=\sum_{j=0}^qa_jx^j\in R[x;\sigma,\delta]\setminus\{0\}$ be such that $m(x)f(x)=0$. Then $m_ia_q^{p+1}=0$ for all $i=0,1,\cdots,p$. \end{proposition} \begin{proof}Let $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $f(x)=\sum_{j=0}^qa_jx^j\in R[x;\sigma,\delta]\setminus\{0\}$ be such that $m(x)f(x)=0$. We can suppose that $a_q\neq 0$. From $m(x)f(x)=0$, we get $m_p\sigma^p(a_q)=0$. Since $M_R$ is $(\sigma,\delta)$-compatible, we have $m_pa_q=0$, which implies $m_px^pa_q=0$. Since $M_R$ satisfies $(*)$, $m(x)f(x)=0$ implies $m(x)a_qf(x)=0$. Then $$0=(m_px^p+m_{p-1}x^{p-1}+\cdots+m_1x+m_0)(a_q^2x^q+a_qa_{q-1}x^{q-1}+\cdots+a_qa_1x+a_qa_0)$$ $$\;\;\;=(m_{p-1}x^{p-1}+\cdots+m_1x+m_0)(a_q^2x^q+a_qa_{q-1}x^{q-1}+\cdots+a_qa_1x+a_qa_0).\qquad\;$$ If we put $f'(x)=a_qf(x)$ and $m'(x)=\sum_{i=0}^{p-1}m_ix^i$, then we get $m_{p-1}a_q^2=0$. Continuing this procedure yields $m_ia_q^{p+1-i}=0$ for all $i=0,1,\cdots,p$. Consequently $m_ia_q^{p+1}=0$ for all $i=0,1,\cdots,p$. \end{proof} \begin{corollary}Let $M_R$ be an $(\sigma,\delta)$-compatible module over a reduced ring $R$. If $M_R$ satisfies $(*)$, then it is $(\sigma,\delta)$-skew McCoy. \end{corollary} \begin{proof}Let $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $f(x)=\sum_{j=0}^qa_jx^j\in R[x;\sigma,\delta]\setminus\{0\}$ be such that $m(x)f(x)=0$. We can suppose that $a_q\neq 0$. By Proposition \ref{prop4}, we have $m_ia_q^{p+1}=0$ for all $i=0,1,\cdots,p$. Since $M_R$ is $(\sigma,\delta)$-compatible, we get $m_ix^ia_q^{p+1}=m_i\sum_{\ell=0}^if_{\ell}^i(a_q^{p+1})x^{\ell}=0$ for all $i$. Hence $m(x)a_q^{p+1}=0$ where $a_q^{p+1}\neq 0$, because $R$ is reduced.
Consequently $M_R$ is $(\sigma,\delta)$-skew McCoy. \end{proof} \begin{example}\label{ex2.2}Consider the polynomial ring $R=\Bbb Z_2[x]$ over $\Bbb Z_2$, and let $\sigma\colon R\rightarrow R$ be the endomorphism defined by $\sigma(f(x))=f(0)$. Then \par$\mathbf{(1)}$ $R$ is not $\sigma$-compatible. Indeed, for $f=\overline{1}+x$ and $g=x\in R$, we have $fg=(\overline{1}+x)x\neq 0$, whereas $f\sigma(g)=(\overline{1}+x)\sigma(x)=0$. \par$\mathbf{(2)}$ $R$ is $\sigma$-skew Armendariz \cite[Example~5]{hong/2003}. \end{example} From Example \ref{ex2.2}, we see that the ring $R=\Bbb Z_2[x]$ is $\sigma$-skew McCoy, because it is $\sigma$-skew Armendariz, but it is not $\sigma$-compatible. Thus the $(\sigma,\delta)$-compatibility condition is not essential to obtain $(\sigma,\delta)$-skew McCoyness. \begin{example}[{\cite[Example 2.5]{louzari2}}]\label{ex5}Let $R$ be a ring, $\sigma$ an endomorphism of $R$ and $\delta$ a $\sigma$-derivation of $R$. Suppose that $R$ is $\sigma$-rigid. Consider the ring $$V_3(R)=\set{\left( \begin{array}{ccc} a & b&c\\ 0 & a& b \\ 0 & 0 & a\\ \end{array} \right)\mid a,b,c\in R}.$$ The ring $V_3(R)$ is $(\overline{\sigma},\overline{\delta})$-skew McCoy, reduced and $(\overline{\sigma},\overline{\delta})$-compatible, and by Theorem \ref{th2}, it satisfies the condition $(*)$. \end{example} \section{$(\sigma,\delta)$-skew McCoyness of some matrix extensions} For an integer $n\geq 2$, let $R$ be a ring and $M$ a right $R$-module. 
Consider $$S_n(R):=\set{\left( \begin{array}{ccccc} a & a_{12} & a_{13} & \ldots & a_{1n} \\ 0 & a & a_{23} & \ldots & a_{2n} \\ 0 & 0 & a & \ldots & a_{3n}\\ \vdots & \vdots &\vdots&\ddots &\vdots \\ 0 & 0 & 0 & \ldots & a \\ \end{array} \right)\mid a,a_{ij}\in R}$$ and $$S_n(M):=\set{\left( \begin{array}{ccccc} m & m_{12} & m_{13} & \ldots & m_{1n} \\ 0 & m & m_{23} & \ldots & m_{2n} \\ 0 & 0 & m & \ldots & m_{3n}\\ \vdots & \vdots &\vdots&\ddots &\vdots \\ 0 & 0 & 0 & \ldots & m \\ \end{array} \right)\mid m,m_{ij}\in M}$$ Clearly, $S_n(M)$ is a right $S_n(R)$-module under the usual matrix addition operation and the following scalar product operation: for $U=(u_{ij})\in S_n(M)$ and $A=(a_{ij})\in S_n(R)$, $UA=(m_{ij})\in S_n(M)$ with $m_{ij}=\sum_{k=1}^nu_{ik}a_{kj}$ for all $i,j$. A quasi derivation $(\sigma,\delta)$ of $R$ can be extended to a quasi derivation $(\overline{\sigma},\overline{\delta})$ of $S_n(R)$ as follows: $\overline{\sigma}((a_{ij}))=(\sigma(a_{ij}))$ and $\overline{\delta}((a_{ij}))=(\delta(a_{ij}))$. We can easily verify that $\overline{\delta}$ is a $\overline{\sigma}$-derivation of $S_n(R)$. \begin{theorem} A module $M_R$ is $(\sigma,\delta)$-skew McCoy if and only if $S_n(M)$ is $(\overline{\sigma},\overline{\delta})$-skew McCoy as an $S_n(R)$-module for any integer $n\geq 2$. \end{theorem} \begin{proof} The proof is similar to that of \cite[Theorem 14]{baser/2009}. \end{proof} Now let $n\geq 2$ be an integer. 
Consider $$V_n(R):=\set{\left( \begin{array}{cccccc} a_0 & a_1 & a_2 & a_3 & \ldots & a_{n-1} \\ 0 & a_0 & a_1 & a_2 & \ldots & a_{n-2} \\ 0 & 0 & a_0 & a_1 & \ldots & a_{n-3}\\ \vdots & \vdots &\vdots& \vdots & \ddots &\vdots \\ 0 & 0 & 0 & 0 & \ldots & a_1 \\ 0 & 0 & 0 & 0& \ldots & a_0 \\ \end{array} \right)\mid a_0,a_1,a_2,\cdots,a_{n-1}\in R}$$ and $$V_n(M):=\set{\left( \begin{array}{cccccc} m_0 & m_1 & m_2 & m_3 & \ldots & m_{n-1} \\ 0 & m_0 & m_1 & m_2 & \ldots & m_{n-2} \\ 0 & 0 & m_0 & m_1 & \ldots & m_{n-3}\\ \vdots & \vdots &\vdots& \vdots & \ddots &\vdots \\ 0 & 0 & 0 & 0 & \ldots & m_1 \\ 0 & 0 & 0 & 0& \ldots & m_0 \\ \end{array} \right)\mid m_0,m_1,m_2,\cdots,m_{n-1}\in M}$$ In the same way as above, $V_n(M)$ is a right $V_n(R)$-module, and a quasi derivation $(\sigma,\delta)$ of $R$ can be extended to a quasi derivation $(\overline{\sigma},\overline{\delta})$ of $V_n(R)$. Note that $V_n(M)\cong M[x]/M[x](x^n)$, where $M[x](x^n)$ is the submodule of $M[x]$ generated by $x^n$, and $V_n(R)\cong R[x]/(x^n)$, where $(x^n)$ is the ideal of $R[x]$ generated by $x^n$. \begin{proposition} A module $M_R$ is $(\sigma,\delta)$-skew McCoy if and only if $V_n(M)$ is $(\overline{\sigma},\overline{\delta})$-skew McCoy as a $V_n(R)$-module for any integer $n\geq 2$. \end{proposition} \begin{proof}The proof is similar to that of \cite[Theorem 14]{baser/2009} or \cite[Proposition 2.27]{cui/2012}. \end{proof} \begin{corollary} For an integer $n\geq 2$, we have: \par$\mathbf{(1)}$ $M_R$ is $(\sigma,\delta)$-skew McCoy if and only if $M[x]/M[x](x^n)$ is $(\overline{\sigma},\overline{\delta})$-skew McCoy. \par$\mathbf{(2)}$ $R$ is $(\sigma,\delta)$-skew McCoy if and only if $R[x]/(x^n)$ is $(\overline{\sigma},\overline{\delta})$-skew McCoy. \par$\mathbf{(3)}$ $R$ is McCoy if and only if $R[x]/(x^n)$ is McCoy. 
\end{corollary} Next, we define {\it skew triangular matrix modules} $V_n(M,\sigma)$, based on the definition of skew triangular matrix rings $V_n(R,\sigma)$ given by Isfahani \cite{isfahani/2011}. Let $\sigma$ be an endomorphism of a ring $R$ and $M_R$ a right $R$-module. For an integer $n\geq 2$, consider $$V_n(R,\sigma):=\set{\left( \begin{array}{cccccc} a_0 & a_1 & a_2 & a_3 & \ldots & a_{n-1} \\ 0 & a_0 & a_1 & a_2 & \ldots & a_{n-2} \\ 0 & 0 & a_0 & a_1 & \ldots & a_{n-3}\\ \vdots & \vdots &\vdots& \vdots & \ddots &\vdots \\ 0 & 0 & 0 & 0 & \ldots & a_1 \\ 0 & 0 & 0 & 0& \ldots & a_0 \\ \end{array} \right)\mid a_0,a_1,\cdots,a_{n-1}\in R}$$ and $$V_n(M,\sigma):=\set{\left( \begin{array}{cccccc} m_0 & m_1 & m_2 & m_3 & \ldots & m_{n-1} \\ 0 & m_0 & m_1 & m_2 & \ldots & m_{n-2} \\ 0 & 0 & m_0 & m_1 & \ldots & m_{n-3}\\ \vdots & \vdots &\vdots& \vdots & \ddots &\vdots \\ 0 & 0 & 0 & 0 & \ldots & m_1 \\ 0 & 0 & 0 & 0& \ldots & m_0 \\ \end{array} \right)\mid m_0,m_1,\cdots,m_{n-1}\in M}$$ Clearly $V_n(M,\sigma)$ is a right $V_n(R,\sigma)$-module under the usual matrix addition operation and the following scalar product operation. 
$$\left( \begin{array}{cccccc} m_0 & m_1 & m_2 & m_3 & \ldots & m_{n-1} \\ 0 & m_0 & m_1 & m_2 & \ldots & m_{n-2} \\ 0 & 0 & m_0 & m_1 & \ldots & m_{n-3}\\ \vdots & \vdots &\vdots& \vdots & \ddots &\vdots \\ 0 & 0 & 0 & 0 & \ldots & m_1 \\ 0 & 0 & 0 & 0& \ldots & m_0 \\ \end{array} \right) \left( \begin{array}{cccccc} a_0 & a_1 & a_2 & a_3 & \ldots & a_{n-1} \\ 0 & a_0 & a_1 & a_2 & \ldots & a_{n-2} \\ 0 & 0 & a_0 & a_1 & \ldots & a_{n-3}\\ \vdots & \vdots &\vdots& \vdots & \ddots &\vdots \\ 0 & 0 & 0 & 0 & \ldots & a_1 \\ 0 & 0 & 0 & 0& \ldots & a_0 \\ \end{array} \right)=$$ $$\left( \begin{array}{cccccc} c_0 & c_1 & c_2 & c_3 & \ldots & c_{n-1} \\ 0 & c_0 & c_1 & c_2 & \ldots & c_{n-2} \\ 0 & 0 & c_0 & c_1 & \ldots & c_{n-3}\\ \vdots & \vdots &\vdots& \vdots & \ddots &\vdots \\ 0 & 0 & 0 & 0 & \ldots & c_1 \\ 0 & 0 & 0 & 0& \ldots & c_0 \\ \end{array} \right),\; \mathrm{where} $$ $c_i=m_0\sigma^{0}(a_i)+m_1\sigma^1(a_{i-1})+m_2\sigma^2(a_{i-2})+\cdots+m_i\sigma^{i}(a_0)$ for each $0\leq i\leq n-1$. \par We denote elements of $V_n(R,\sigma)$ by $(a_0,a_1,\cdots,a_{n-1})$ and elements of $V_n(M,\sigma)$ by $(m_0,m_1,\cdots,m_{n-1})$. There is a ring isomorphism $\varphi\colon R[x;\sigma]/(x^n)\rightarrow V_n(R,\sigma)$ given by $\varphi(a_0+a_1x+a_2x^2+\cdots+a_{n-1}x^{n-1}+(x^n))=(a_0,a_1,a_2,\cdots,a_{n-1})$, and an abelian group isomorphism $\phi\colon M[x;\sigma]/M[x;\sigma](x^n)\rightarrow V_n(M,\sigma)$ given by $\phi(m_0+m_1x+m_2x^2+\cdots+m_{n-1}x^{n-1}+M[x;\sigma](x^n))=(m_0,m_1,m_2,\cdots,m_{n-1})$ such that $$\phi(N(x)A(x))=\phi(N(x))\varphi(A(x))$$ for any $N(x)=m_0+m_1x+m_2x^2+\cdots+m_{n-1}x^{n-1}+M[x;\sigma](x^n)\in M[x;\sigma]/M[x;\sigma](x^n)$ and $A(x)=a_0+a_1x+a_2x^2+\cdots+a_{n-1}x^{n-1}+(x^n)\in R[x;\sigma]/(x^n)$. The endomorphism $\sigma$ of $R$ can be extended to $V_n(R,\sigma)$ and $R[x;\sigma]$, and we will denote it in both cases by $\overline{\sigma}$. 
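The scalar product above is exactly the multiplication rule of skew polynomials truncated modulo $x^n$. As a quick sanity check (our own numerical sketch, not part of the original development), the following snippet verifies that the product $c_i=\sum_{k=0}^{i}m_k\sigma^k(a_{i-k})$ is associative when $\sigma$ is complex conjugation, a ring endomorphism of $\Bbb C$, as required for $V_n(M,\sigma)$ to be a right $V_n(R,\sigma)$-module.

```python
# Associativity check (illustrative, with R = M = C and sigma = complex
# conjugation) of the product c_i = sum_{k<=i} m_k sigma^k(a_{i-k})
# defining the right V_n(R, sigma)-module structure on V_n(M, sigma).

def sigma_pow(a, k):
    """Apply the endomorphism sigma (complex conjugation) k times."""
    return a.conjugate() if k % 2 else a

def vn_mul(u, a):
    """Product of tuples (u_0,...,u_{n-1}) and (a_0,...,a_{n-1})."""
    n = len(u)
    return tuple(
        sum(u[k] * sigma_pow(a[i - k], k) for k in range(i + 1))
        for i in range(n)
    )

u = (1 + 2j, 3 - 1j, 0.5j)
a = (2 - 1j, 1j, 4 + 0j)
b = (0.5 + 1j, 2 + 0j, -1j)

lhs = vn_mul(vn_mul(u, a), b)   # (u a) b
rhs = vn_mul(u, vn_mul(a, b))   # u (a b)
```

The check relies only on $\sigma$ being a ring endomorphism; any other endomorphism could be substituted for conjugation in `sigma_pow`.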
\begin{theorem} A module $M_R$ is $\sigma$-skew McCoy if and only if $V_n(M,\sigma)$ is $\overline{\sigma}$-skew McCoy as a $V_n(R,\sigma)$-module for any integer $n\geq 2$. \end{theorem} \begin{proof}We shall adapt the proof of \cite[Theorem 14]{baser/2009} to this situation. Note that $V_n(R,\sigma)[x,\overline{\sigma}]\cong V_n(R[x,\sigma], \overline{\sigma})$ and $V_n(M,\sigma)[x,\overline{\sigma}]\cong V_n(M[x,\sigma], \overline{\sigma})$. We prove only the case $n=2$, since the other cases can be proved in the same manner. Suppose that $M_R$ is $\sigma$-skew McCoy. Let $0\neq m(x)\in V_2(M,\sigma)[x,\overline{\sigma}]$ and $0\neq f(x)\in V_2(R,\sigma)[x,\overline{\sigma}]$ such that $m(x)f(x)=0$, where $$m(x)=\sum_{i=0}^p \left(\begin{array}{cc} m_{11}^{(i)} & m_{12}^{(i)} \\ 0 & m_{11}^{(i)} \\ \end{array}\right)x^i= \left(\begin{array}{cc} \sum_{i=0}^pm_{11}^{(i)}x^i & \sum_{i=0}^pm_{12}^{(i)}x^i \\ 0 & \sum_{i=0}^pm_{11}^{(i)}x^i \\ \end{array}\right)=\left(\begin{array}{cc} \alpha_{11} & \alpha_{12} \\ 0 & \alpha_{11} \\ \end{array}\right)$$ $$f(x)=\sum_{j=0}^q \left(\begin{array}{cc} a_{11}^{(j)} & a_{12}^{(j)} \\ 0 & a_{11}^{(j)} \\ \end{array}\right)x^j= \left(\begin{array}{cc} \sum_{j=0}^q a_{11}^{(j)}x^j & \sum_{j=0}^q a_{12}^{(j)}x^j \\ 0 & \sum_{j=0}^q a_{11}^{(j)}x^j \\ \end{array}\right)= \left(\begin{array}{cc} \beta_{11} & \beta_{12} \\ 0 & \beta_{11} \\ \end{array}\right)$$ Then $\left(\begin{array}{cc} \alpha_{11} & \alpha_{12} \\ 0 & \alpha_{11} \\ \end{array}\right)\left(\begin{array}{cc} \beta_{11} & \beta_{12} \\ 0 & \beta_{11} \\ \end{array}\right)=0$, which gives $\alpha_{11}\beta_{11}=0$ and $\alpha_{11}\beta_{12}+\alpha_{12}\overline{\sigma}(\beta_{11})=0$ in $M[x;\sigma]$. If $\alpha_{11}\neq 0$, then there exists $0\neq \beta\in \{\beta_{11},\beta_{12}\}$ such that $\alpha_{11}\beta=0$. 
Since $M_R$ is $\sigma$-skew McCoy, there exists $0\neq c\in R$ such that $\alpha_{11}c=0$; thus $\left(\begin{array}{cc} \alpha_{11} & \alpha_{12} \\ 0 & \alpha_{11} \\ \end{array}\right)\left(\begin{array}{cc} 0 & c \\ 0 & 0 \\ \end{array}\right)=\left(\begin{array}{cc} 0 & \alpha_{11}c \\ 0 & 0 \\ \end{array}\right)=0$. If $\alpha_{11}=0$, then $\left(\begin{array}{cc} 0 & \alpha_{12} \\ 0 & 0 \\ \end{array}\right)\left(\begin{array}{cc} 0 & c \\ 0 & 0 \\ \end{array}\right)=0$ for any $0\neq c\in R$. Therefore, $V_2(M,\sigma)$ is $\overline{\sigma}$-skew McCoy. \par Conversely, suppose that $V_2(M,\sigma)$ is a $\overline{\sigma}$-skew McCoy module. Let $0\neq m(x)=m_0+m_1x+\cdots+m_px^p\in M[x;\sigma]$ and $0\neq f(x)=a_0+a_1x+\cdots+a_qx^q\in R[x;\sigma]$, such that $m(x)f(x)=0$. Then $\left(\begin{array}{cc} m(x) & 0 \\ 0 & m(x) \\ \end{array}\right)\left(\begin{array}{cc} f(x) & 0 \\ 0 & f(x) \\ \end{array}\right)=\left(\begin{array}{cc} m(x)f(x) & 0 \\ 0 & m(x)f(x) \\ \end{array}\right)=0$, so there exists $0\neq \left(\begin{array}{cc} a & b \\ 0 & a \\ \end{array}\right)\in V_2(R,\sigma)$ such that $\left(\begin{array}{cc} m(x) & 0 \\ 0 & m(x) \\ \end{array}\right)\left(\begin{array}{cc} a & b \\ 0 & a \\ \end{array}\right)=0$, because $V_2(M,\sigma)$ is $\overline{\sigma}$-skew McCoy. Thus $m(x)a=m(x)b=0$, where $a\neq 0$ or $b\neq 0$. Therefore, $M_R$ is $\sigma$-skew McCoy. \end{proof} \begin{corollary} For an integer $n\geq 2$, we have: \par$\mathbf{(1)}$ $M_R$ is $\sigma$-skew McCoy if and only if $M[x;\sigma]/M[x;\sigma](x^n)$ is $\overline{\sigma}$-skew McCoy. \par$\mathbf{(2)}$ $R$ is $\sigma$-skew McCoy if and only if $R[x;\sigma]/(x^n)$ is $\overline{\sigma}$-skew McCoy. \par$\mathbf{(3)}$ $M_R$ is McCoy if and only if $M[x]/M[x](x^n)$ is McCoy. \par$\mathbf{(4)}$ $R$ is McCoy if and only if $R[x]/(x^n)$ is McCoy. \end{corollary} \end{document}
\begin{document} \title{Lower and upper bounds of quantum battery power in multiple central spin systems} \author{Li Peng} \affiliation{State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China} \affiliation{University of Chinese Academy of Sciences, Beijing 100049, China.} \author{Wen-Bin He} \email[]{hewenbin18@csrc.ac.cn} \affiliation{Beijing Computational Science Research Center, Beijing 100193, China} \affiliation{The Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34151 Trieste, Italy.} \author{Stefano Chesi} \affiliation{Beijing Computational Science Research Center, Beijing 100193, China} \affiliation{Department of Physics, Beijing Normal University, Beijing 100875, China} \affiliation{The Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34151 Trieste, Italy.} \author{Hai-Qing Lin } \affiliation{Beijing Computational Science Research Center, Beijing 100193, China} \affiliation{Department of Physics, Beijing Normal University, Beijing 100875, China} \author{Xi-Wen Guan} \email[]{xwe105@wipm.ac.cn} \affiliation{State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China} \affiliation{NSFC-SPTP Peng Huanwu Center for Fundamental Theory, Xian 710127, China} \affiliation{Department of Theoretical Physics, Research School of Physics and Engineering, Australian National University, Canberra ACT 0200, Australia} \pacs{03.67.-a, 02.30.Ik,42.50.Pq} \begin{abstract} We study the energy transfer process in quantum battery systems consisting of multiple central spins and bath spins. 
Here with ``quantum battery'' we refer to the central spins, whereas the bath serves as the ``charger''. For the single central-spin battery, we analytically derive the time evolution of the energy transfer and the charging power for an arbitrary number of bath spins. For the case of multiple central spins in the battery, we find a scaling-law relation between the maximum power $P_{max}$ and the number of central spins $N_B$: it approximately satisfies $P_{max}\propto N_{B}^{\alpha}$, where the scaling exponent $\alpha$ varies with the bath spin number $N$ from the lower bound $\alpha =1/2$ to the upper bound $\alpha =3/2$. The lower and upper bounds correspond to the limits $N\to 1$ and $N\gg N_B$, respectively. In the thermodynamic limit, by applying the Holstein-Primakoff (H-P) transformation, we rigorously prove that the upper bound is $P_{max}=0.72 B A \sqrt{N} N_{B}^{3/2}$, which shows the same scaling advantage as a recent charging protocol based on the Tavis-Cummings model. Here $B$ and $A$ are the external magnetic field and the coupling constant between the battery and the charger. \end{abstract} \date{\today} \maketitle \section{I. Introduction} Energy resources have always been an important subject of modern science \cite{iea}, from coal and nuclear energy \cite{Gamow} to present renewable sources including wind and solar energy \cite{iea,Dolf,Joel}. The exploitation of energy resources significantly involves the study of energy transfer, storage and generation. Recently, the study of quantum heat engines \cite{Medley2011, Chen2019} and refrigeration \cite{Weld2010, Yu2020, Peng2019, Wolf2011}, as well as of energy storage and transfer in quantum mechanical systems, has attracted enormous attention. The latter devices are known as ``quantum batteries'' \cite{Alicki, Hovhannisyan, Campaioli, Ferraro, Gian18, Campaioli2018, le2018spin, Lewenstein, Rossini2019, Andolina2019, Caravelli2020, Gian, campo, Sergi}. 
A classical electrical battery stores energy in an electric field, which can be understood in the framework of electrodynamics. In contrast, a quantum battery usually refers to a device that utilizes quantum degrees of freedom to store and transfer energy. In general, the quantum degrees of freedom and their interplay can endow the quantum battery with advantages beyond the classical picture. In the last few years, a variety of methods have been developed to study quantum batteries, including realization schemes, battery power and the charging process \cite{Pirmoradian2019, yyzhang, An2020, Rossini, Santos2019, Santos2020, Friis2018}. In these studies, quantum coherence and entanglement seem to play a key role in the manipulation of quantum batteries. R. Alicki and M. Fannes \cite{Alicki} showed that entanglement can help extract more work in the charging process. However, the role of entanglement in work extraction is still under debate \cite{Hovhannisyan,Campaioli}. D. Ferraro et al. \cite{Ferraro} showed that a quantum advantage in charging power is manifested by an array of $N$ collective two-level systems in a cavity, in comparison to the $N$ parallel quantum battery cells of the Dicke model. G. M. Andolina et al. \cite{Gian18} considered the role of correlations in different systems serving as a quantum battery, including combinations of two-level systems and quantum harmonic oscillators. There are also other schemes to realize quantum batteries, for example, using open systems \cite{Gherardini2020, Carrega2020, Farina2019, Barra,Watanabe} and externally driven systems \cite{Crescente2020}. However, many open questions concerning quantum batteries remain, mainly regarding the battery's maximum energy, power, extractable energy, etc. Firstly, the number of quantum battery cells cannot be increased to infinity in order to reach an infinite power. 
Therefore, due to the decoherent nature of quantum systems, it poses a theoretical and practical challenge to manipulate as many quantum battery cells as possible. Secondly, the number of quantum degrees of freedom in the charger is usually not large enough, so that the transferred energy is not able to saturate all the cells of a battery during the charging process. Nevertheless, both the numbers of quantum degrees of freedom and the coupling strength between the battery and charger can alter the quality and power of the quantum battery. This essentially raises the issue of how the storage capability of a quantum battery depends on the cell numbers of both battery and charger. In this paper, we study the energy transfer process in quantum batteries of the multiple central spin model. Here the battery consists of $N_B$ spins which evolve in a collective mode during the charging process, whereas the charger has $N$ bath spins, see Fig.~\ref{fig_spinc}. We analyze the dependence of the energy transfer and the power of the battery on the number of battery spins $N_B$ and the number of charger spins $N$. We find that the transferred energy increases linearly with the number of battery spins $N_B$ when $N$ and $N_B$ are comparable, and then saturates at a certain value. The maximum power monotonically increases with the number $N_B$ in a power-law form $P_{max} \propto N_{B}^{\alpha}$, where $\alpha$ depends on the number of charger spins $N$. In the limit $N\ll N_B$, the lower bound reads $\alpha =1/2$. In the case $N\gg N_B$, the maximum energy of the battery always increases linearly with the number of battery spins, while for $N\gg N_B$ and in the thermodynamic limit, the power-law relation for the maximum power $P_{max} \propto N_{B}^{3/2}$ is verified by numerical calculation. 
In the thermodynamic limit, using the Holstein-Primakoff transformation, we also rigorously prove that $P_{max}=0.72 B A \sqrt{N} N_{B}^{\alpha}$, where the exponent reaches the upper bound $\alpha=3/2$. However, for $N_B$ incoherent single-spin batteries, we prove that the maximum power is given by $P_{max} \approx 0.72B A \sqrt{N} N_B$. Here $B$ and $A$ are respectively the external magnetic field and the coupling constant between the battery and the charger. It turns out that the battery power essentially depends on the cell numbers of the battery and the charger. Our analytical results shed light on the high-power charging of quantum batteries. \section{II. The quantum battery and the model} \emph{Quantum battery.}---In this section, we discuss the basic setup of the quantum battery. The protocol of the underlying quantum battery consists of two parts, i.e., the quantum battery $H_{B}$ serving as the energy reservoir and the energy charger $H_{C}$. Both the battery and the charger are composed of quantum particles that have discrete energy levels and degeneracies. The charging process is accomplished by switching on the interaction $H_I$ between the battery and the charger so as to complete the energy transfer, see Fig.~\ref{fig_spinc}. For this purpose, the whole Hamiltonian of this model is given by \begin{equation} H(t)=H_{B}+H_{C}+\lambda(t)H_{I}, \label{qb} \end{equation} where the coupling constant $\lambda(t)$ controls the charging period: it equals 1 during the charging period $t \in [0,\tau]$ and 0 otherwise. There is energy input and output between the battery and the charger during the charging period from $t=0$ to $t=\tau$. The energy transfer, the charging speed and the power of the battery essentially depend on the cell numbers of the battery and the charger, the interaction strength between them, and other external drives if present. 
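As an illustration of this switched charging protocol, the following sketch (our own minimal example, not from the original text) simulates the smallest instance of Eq. (\ref{qb}), a single central spin coupled to a single bath spin, with the interaction switched on at $t=0$. On resonance ($B=h$, $\Delta=0$) the battery energy gain can be checked against the closed form $B\sin^2(At)$, the $N=n=1$ case of the single-central-spin result derived below.

```python
# Minimal charging simulation (illustrative sketch): one central spin
# (battery) and one bath spin (charger),
#   H = B s^z + h tau^z + A (s^+ tau^- + s^- tau^+) + 2 Delta s^z tau^z,
# with initial state |down, up> (battery empty, charger full).
import numpy as np

B = h = 1.0; A = 1.0; Delta = 0.0        # resonant parameters
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
sz = np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])  # raising operator s^+
sm = sp.T                                 # lowering operator s^-
I2 = np.eye(2)

H = (B * np.kron(sz, I2) + h * np.kron(I2, sz)
     + A * (np.kron(sp, sm) + np.kron(sm, sp))
     + 2 * Delta * np.kron(sz, sz))

psi0 = np.kron(dn, up)                    # battery down, charger up
evals, evecs = np.linalg.eigh(H)

def dE_B(t):
    """Transferred battery energy Delta E_B(t) = B (<s^z>_t + 1/2)."""
    psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.T @ psi0))
    sz_B = np.kron(sz, I2)
    return B * (np.vdot(psi_t, sz_B @ psi_t).real + 0.5)
```

For these resonant parameters the dynamics reduces to a Rabi oscillation between $\vert\!\downarrow\uparrow\rangle$ and $\vert\!\uparrow\downarrow\rangle$, so $\Delta E_B(t)=B\sin^2(At)$.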
\begin{figure} \caption{Illustration of the charging of the multiple central spin model working as a quantum battery, whereas the bath spins serve as the charger. At $t<0$, there is no interaction between battery and charger. When the interaction is switched on during the charging process $t\in [0,\tau]$, the battery is charged. } \label{fig_spinc} \end{figure} In order to comply with the terminology used in previous work \cite{Gian18,Ferraro}, we first introduce the definitions of the energy and power of the quantum battery. We consider a system that evolves unitarily, so that the wave function $\psi(t)$ describes the state of the system. Meanwhile, the state of the battery spins can be described by the reduced density matrix of the battery, $\rho_B(t)=\mathrm{tr}_{C}[\vert \psi(t) \rangle \langle \psi(t) \vert]$, where $\mathrm{tr}_{C}$ denotes the trace taken over the spins in the charger. The energy of the battery is defined as the expectation value of the Hamiltonian $H_B$, \begin{eqnarray}\label{eq_Ebc} E_B(t)&=&\mathrm{tr}[H_B\rho_B(t)]. \end{eqnarray} The transferred energy of the quantum battery is given by $\Delta E_B(t)=E_B(t)-E_B(0)$, where $E_B(0)$ is the energy before the charging process. Meanwhile, the charging power of the battery is defined as \begin{equation} P_{B}(t)=\Delta E_B(t)/{t}. \end{equation} Owing to the unitary evolution of the whole system during the charging period, the energy flows back and forth between charger and battery, and it is not necessary to track the energy and power at every moment. Usually, one chooses the maximum energy as a measure of the capability for storing energy, $E_{max}=\max[\Delta E_B(t)]$, and accordingly the maximum power reads $P_{max}=\max[ P_{B}(t)]$. 
It has been demonstrated \cite{Ferraro} that collective battery cells of two-level systems coupled to a cavity mode can enhance the energy transfer by manipulating the detuning between the two-level systems and the cavity mode. They argued that the collective evolution proceeds through states characterized by quantum entanglement among the battery cells. In general, we naturally expect such a quantum advantage to be generated during the time evolution of the whole many-particle system of the Hamiltonian (\ref{qb}). Here we aim to investigate the scaling laws of the maximum energy $E_{max}$ and the maximum power $P_{max}$ with respect to the number of battery spins. Similarly, in our work, the multiple central spins are prepared in a collective way, so that a certain form of quantum advantage also exists in the system considered below. Such scaling laws reveal the coherent nature of the coupling between the battery and the charger, as well as the quantum entanglement among the spin qubits in the battery induced by the unitary evolution. \emph{The model.}--- In order to realize a high-power quantum battery, we consider the multiple central spin model with the Hamiltonian (\ref{qb}) given by \begin{eqnarray} H_{B}&=&B\mathbf{S} ^{z}, \label{H-B}\\ H_{C}&=&h \mathbf{J}^{z},\label{H_C} \\ H_I&=&A(\mathbf{S} ^{+} \mathbf{J}^{-}+\mathbf{S} ^{-} \mathbf{J}^{+})+2\Delta\mathbf{S} ^{z} \mathbf{J}^{z}. \label{H-I} \label{Hhcs} \end{eqnarray} Here, for convenience, we have introduced the collective spin operators $\mathbf{S}^{\alpha}=\sum_{i=1}^{N_B} s_{i}^{\alpha}$ and $\mathbf{J}^{\alpha}=\sum_{j=1}^{N}\mathbf{\tau}_{j}^{\alpha}$, with $\alpha=\{z,+,-\}$, for the battery and the charger, respectively. We adopt different notations for the central spins $s_{i}^{\alpha}$ and the bath spins $\tau_{j}^{\alpha}$ in order to avoid confusion; they are both spin-$\frac{1}{2}$ operators. We regard the central spins as the storage cells of the quantum battery, and the bath spins as the charging energy carriers. 
The energy can be exchanged between the battery and the charger through the spin-exchange interaction term $H_I$, see also Fig.~\ref{fig_spinc}. The term $H_I$ contains the spin flip-flop interaction and the Ising-type interaction, whose strengths are denoted by $A$ and $\Delta$, i.e. the exchange coupling constant and the anisotropy parameter, respectively. We also set the coupling strength $A=1$ as our rescaled unit throughout the paper \footnote{For the units of the other parameters, we compare them with $A$ to obtain their units. At present, superconducting qubits may serve as a quantum battery platform to observe the results of this work, since spin-exchange interactions can be realized experimentally. In practical experiments, the spin-exchange coupling usually takes the unit $[{\mathrm time}]^{-1}$; for instance, in \cite{Guo:2021} the Hamiltonian is written as $H/\hbar$ and the spin-exchange coupling is $J_{m,m+1}\sim 1/60\,\mathrm{ns}^{-1}$.}. The parameters $B$ and $h$ are the effective external magnetic fields for the central spins and bath spins, respectively, $N_B$ is the number of central spins, and $N$ is the number of bath spins. We introduce the Dicke state $\vert n \rangle= \vert \frac{N}{2},n- \frac{N}{2}\rangle $, which is the eigenstate of $\mathbf{J^{2}}$ and $\mathbf{J^{z}}$. The Dicke state can be expressed as \begin{equation} \vert n \rangle=\frac{1}{\sqrt{C^{n}_{N}}} \sum_{j_{1}<\cdots <j_{n}} \vert j_{1},\cdots ,j_{n} \rangle, \end{equation} where $\vert j_{1},\cdots ,j_{n} \rangle=\tau_{j_{1}}^{+}\cdots \tau_{j_{n}}^{+} \vert \Downarrow\rangle$, the normalization coefficient $C^{n}_{N}=\frac{N!}{n!(N-n)!}$ is the binomial coefficient, and $ \vert \Downarrow\rangle$ denotes the all-spin-down reference state. The Dicke state is a highly entangled many-body quantum state. 
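As a sanity check of these Dicke-state conventions (our own numerical sketch, not part of the original text), one can build $\vert n\rangle$ and the collective lowering operator $\mathbf{J}^-=\sum_j\tau_j^-$ explicitly for a few bath spins and verify the standard action $\mathbf{J}^-\vert n\rangle=\sqrt{n(N-n+1)}\,\vert n-1\rangle$:

```python
# Explicit construction of Dicke states and J^- for N = 4 bath spins.
# Basis convention: bit i of the index set <=> spin i up; |0...0> = all down.
import numpy as np
from itertools import combinations
from math import comb, sqrt

N = 4
dim = 2 ** N
sm = np.array([[0.0, 1.0], [0.0, 0.0]])  # tau^- = |down><up|, basis (down, up)
I2 = np.eye(2)

def dicke(n):
    """Normalized Dicke state with n flipped (up) spins among N."""
    v = np.zeros(dim)
    for flips in combinations(range(N), n):
        v[sum(1 << i for i in flips)] = 1.0
    return v / sqrt(comb(N, n))

# Collective lowering operator J^- = sum_j tau_j^-
Jm = np.zeros((dim, dim))
for j in range(N):
    op = np.eye(1)
    for i in range(N - 1, -1, -1):       # spin N-1 is the most significant bit
        op = np.kron(op, sm if i == j else I2)
    Jm += op
```

For $N=4$ one finds, e.g., $\mathbf{J}^-\vert 2\rangle=\sqrt{6}\,\vert 1\rangle$ since $b_{4,2}=2\cdot 3=6$.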
The action of the above spin operators on the state $\vert n \rangle$ is given by \begin{eqnarray*} \mathbf{J^{z}}\vert n \rangle &=& (-\frac{N}{2}+n)\vert n \rangle,\\ \mathbf{J^{-}} \vert n \rangle &=& \sqrt{b_{N,n}} \vert n-1 \rangle,\\ \mathbf{J^{+}} \vert n \rangle &=& \sqrt{b_{N,n+1}} \vert n+1 \rangle, \end{eqnarray*} where we have denoted the coefficient $b_{N,n}=n(N-n+1)$. For the large spin operator of the battery $\mathbf{S}$, similar relations hold, with $N$ replaced by $N_B$ and the spin operators $\tau^\alpha_j$ replaced by $s^\alpha _j$. We consequently introduce the state basis of the whole system $\vert m,n \rangle$, with $m\in \{0,1,\cdots,N_B\}$ for the battery and $n\in \{0,1,\cdots,N\}$ for the charger. The Hamiltonian of the whole system $H$ can be diagonalized by the recurrence relation developed in \cite{He-WB:2019}. For the special case $N_B=1$, we can analytically obtain the whole dynamical evolution of the spin polarization, see the Appendix. \section{III. Numerical and analytical Results} We first consider a numerical study of the general form of the quantum battery (\ref{qb}). We take the initial state to be a product state, \begin{align} \vert \Phi_{0}\rangle={\vert \varphi_{0}\rangle}_{B} \otimes {\vert \phi_{0}\rangle}_{C}. \end{align} Usually, the battery spins are in the lowest states while the charger is in a highly excited state. For our numerical study, we choose the initial state $\vert \Phi_{0}\rangle=\vert 0, N \rangle=\vert \Downarrow, \Uparrow \rangle$. The wave function of the system evolves with time, namely, \begin{equation} \vert\psi(t)\rangle=\exp(-iHt)\vert \Phi_{0}\rangle. \end{equation} By the definition Eq. (\ref{eq_Ebc}), we may calculate the evolution of the energy of the battery as a function of time $t$. \subsection{A. 
Special $N_B=1$ case} At the beginning of this subsection, we first study the special case $N_B=1$ with $\vert \Phi_{0}\rangle=\vert \downarrow \rangle_{B} \otimes {\vert \phi_{0}\rangle}_{C}$ in order to gain an intuitive understanding of the energy transfer. Usually one can choose the state of the bath spins to be a Fock state or a spin coherent state. Here we take a Fock state as the initial state of the bath spins, \begin{align} \vert \Phi_{0}\rangle=\vert \downarrow \rangle \otimes {\vert n\rangle}, \end{align} where the bath spin state ${\vert n\rangle}$ represents $n$ flipped spins among the $N$ spins. The time evolution of the wave function can be obtained from the Hamiltonian $H$ with Eqs. (\ref{H-B}-\ref{H-I}), i.e. \begin{equation}\label{wave_function} \vert \psi(t)\rangle = e^{-i\theta t}\Big[P_{\uparrow}^{n}(t) \vert \uparrow \rangle \vert n-1\rangle+ P_{\downarrow}^{n} (t) \vert \downarrow \rangle \vert n\rangle \Big]. \end{equation} Here the global phase $\theta$ can be omitted, and the two probability amplitudes are given by $P_{\uparrow}^{n}=-i\frac{2\sqrt{b_{N,n}}A}{ \Omega_{n}}\sin( \frac{\Omega_{n}t}{2})$ and $ P_{\downarrow}^{n}= i\frac{\Delta_{n}}{ \Omega_{n}}\sin( \frac{\Omega_{n}t}{2})+\cos( \frac{\Omega_{n}t}{2}) $. The wave function satisfies the normalization condition $|P_{\uparrow}^{n}|^2+|P_{\downarrow}^{n}|^2=1$. In the above equations, we have denoted the parameters \begin{eqnarray} \Delta_{n}&=&B-h+(2n-1-N)\Delta,\nonumber \\ \Omega_{n}&=&\sqrt{\Delta_{n}^2+4b_{N,n}A^2}.\nonumber \end{eqnarray} Using the wave function (\ref{wave_function}), the charging energy and the power of the quantum battery are obtained explicitly, \begin{small} \begin{eqnarray} \Delta E_{B}(t)& =& B\frac{4b_{N,n}A^2}{ \Omega_{n}^2} \sin^{2}( \frac{\Omega_{n}t}{2}), \label{Energy_BAI_3}\\ P_B(t)&=&\Delta E_{B}(t)/t= B\frac{4b_{N,n}A^2}{ \Omega_{n}^2\,t} \sin^{2}( \frac{\Omega_{n}t}{2}). 
\label{power_BAI_3} \end{eqnarray} \end{small} The detailed calculation can be found in the Appendix; see also the analogous calculation for the Jaynes-Cummings (JC) model \cite{Gian18}. Based on this result, we briefly discuss the energy transfer of the quantum battery below. \begin{figure} \caption{The charging energy of the quantum battery $\Delta E_B(t)$ (blue solid line), the energy of the charger $\Delta E_C(t)$ (dashed red line) and the interaction energy $E_I(t)$ (dashed green line) as functions of $\Omega_{n}t/2$. (a) Charger and quantum battery at resonance for $B=h=1$ and $\Delta=0$. (b) Charger and quantum battery at resonance for $B=h=1$ and $\Delta=5$. (c) Charger and quantum battery off resonance for $B=5$, $h=1$ and $\Delta=0$. (d) Charger and quantum battery off resonance for $B=5$, $h=1$ and $\Delta=5$. In subplots (a)-(d), we set $A=1$, $N=10$ and $n=N/2=5$. In (a) and (b), the interaction energy is always zero over the whole time regime and the energy can be totally transferred from the charger to the battery. } \label{diff_initialstate} \end{figure} (i) Resonant case $B=h$, $\Delta=0$: the charging energy is given by \begin{eqnarray} \Delta E_{B}(t)&=&B\sin^2(\sqrt{b_{N,n}}A t). \end{eqnarray} Maximizing the power $P_B(t)=BA\sqrt{b_{N,n}}\,\sin^2(x)/x$ over $x=\sqrt{b_{N,n}}At$, the maximum of the power is given by \begin{eqnarray} P_{max}\approx 0.72B A\sqrt{b_{N,n}}.\label{single-central-s-P} \end{eqnarray} From the expression of $\Delta E_B(t)$, the maximum transferred energy and the corresponding charging time are given by \begin{equation} E_{max}=B ,\quad \tau_{min}=\frac{\pi}{2A\sqrt{b_{N,n}}}. \label{tmin} \end{equation} From the definition of $b_{N,n}$, we may obtain the minimum time to transfer the maximum energy, namely $\tau_{min}=\frac{\pi}{2A}\frac{1}{{(N+1)}/2}$, attained at $n=\frac{N+1}{2}$, which maximizes $b_{N,n}$. This means that the quantum battery is able to store the maximum energy in the shortest time for the initial state with $n=(N+1)/2$ flipped bath spins. 
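The numerical constant $0.72$ above is the maximum of the dimensionless function $\sin^2(x)/x$, which is $\approx 0.7246$. The short sketch below (our own check, not from the paper) verifies this constant together with the normalization $|P_{\uparrow}^{n}|^2+|P_{\downarrow}^{n}|^2=1$ of the amplitudes given above:

```python
# (i) max_x sin^2(x)/x ~ 0.7246, the prefactor quoted as 0.72 in
# Eq. (single-central-s-P); (ii) normalization of the single-central-spin
# amplitudes P_up, P_down for generic (non-resonant) parameters.
import numpy as np

# (i) dimensionless maximum of P_B(t) / (B A sqrt(b_{N,n}))
x = np.linspace(1e-8, np.pi, 200001)
c_max = np.max(np.sin(x) ** 2 / x)

# (ii) |P_up|^2 + |P_down|^2 = 1
def amplitudes(t, N, n, A, B, h, Delta):
    b = n * (N - n + 1)                      # b_{N,n}
    Dn = B - h + (2 * n - 1 - N) * Delta     # Delta_n
    On = np.sqrt(Dn ** 2 + 4 * b * A ** 2)   # Omega_n
    P_up = -2j * np.sqrt(b) * A / On * np.sin(On * t / 2)
    P_dn = 1j * Dn / On * np.sin(On * t / 2) + np.cos(On * t / 2)
    return P_up, P_dn

P_up, P_dn = amplitudes(t=0.37, N=10, n=5, A=1.0, B=5.0, h=1.0, Delta=5.0)
norm = abs(P_up) ** 2 + abs(P_dn) ** 2
```

The normalization holds identically because $\Delta_n^2+4b_{N,n}A^2=\Omega_n^2$.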
(ii) Non-resonant case $B\neq h$ or $\Delta\neq 0$. In this case, we observe that the charging energy of the quantum battery satisfies $|\Delta E_B(t)/B | <1$ and the interaction energy $E_I(t)=\langle H_{I}\rangle \neq 0$. In Fig.~\ref{diff_initialstate} (a) and (b), we show the results for the battery and charger at resonance. There is no interaction energy between the battery and charger in panel (a). For Fig.~\ref{diff_initialstate} (b), we chose $B=h$, $\Delta=5$, $n=N/2=5$, so that the terms involving the factors $(N/2-n)$ and $(B-h)$ in the charging energy of the quantum battery vanish (see Appendix Eqs. (A7) and (A8)). In this case, the maximum energy intake is limited by $N$ due to energy conservation. Fig.~\ref{diff_initialstate} (c) and (d) present the non-resonant case, in which there exists a nonzero interaction energy between the battery and charger. This indicates that the energy transferred from the charger to the quantum battery essentially depends on the form of the interaction. In this scenario, the maximum transferred energy strongly depends on $\Delta, \, B$ and $h$. \subsection{B. Arbitrary $N_B$ case } For an arbitrary number of battery spins $N_B$, the eigenfunction is constructed as $\varphi=\sum_{m}\sum_{n} c_{m,n}\vert m, n \rangle$. Substituting this ansatz into the eigenvalue equation, the superposition coefficients $c_{m,n}$ are determined by the following recurrence relation \begin{eqnarray} w_{mn}c_{m,n}+A\sqrt{b_{N_B,m}b_{N,n+1}}c_{m-1,n+1}+\nonumber \\ A\sqrt{b_{N_B,m+1}b_{N,n}}c_{m+1,n-1}=E c_{m,n},\label{Recurrence} \end{eqnarray} where the coefficient $b_{N_B,m}=m(N_B-m+1)$ is defined analogously to $b_{N,n}$ above, and $w_{mn}=B(-\frac{N_B}{2}+m)+h(-\frac{N}{2}+n)+2\Delta(-\frac{N_B}{2}+m)(-\frac{N}{2}+n)$. Here, for the battery, $m\in \{0,1,\cdots,N_B\}$ and, for the charger, $n\in \{0,1,\cdots,N\}$. However, the recurrence relation (\ref{Recurrence}) with two variables $m, n$ is very difficult to solve analytically.
In order to study the energy transfer, we exactly diagonalize the Hamiltonian to obtain the time evolution of the system. Without losing the essential properties of the battery, we set the interaction energy between charger and battery to zero by choosing the parameters $\Delta=0$ and $B=h=1$ in our numerical calculations. We will show that in this case the Hamiltonian can be mapped to the Tavis-Cummings model \cite{sm,JC,dicke}. In addition, the system is prepared in the initial state $\vert \Phi_{0}\rangle=\vert \Downarrow, \Uparrow \rangle$, i.e. $m=0$ and $n=N$. The time evolution of the energy and power of the battery can be obtained numerically and analytically. \begin{figure} \caption{The maximum energy (a) and power (b) of the multiple central spin model vs. the number of battery spins $N_B$ for different charger sizes $N$. The dashed lines in (b) show the numerical fit of the power relation Eq. (\ref{P_alpha}) in logarithmic scale for $N_B \in [20,80]$, i.e. $N=5$, $\alpha=0.5013$, $\beta=4.3706$ (red line); $N=10$, $\alpha=0.5067$, $\beta=9.9668$ (green line) and $N=15$, $\alpha=0.5172$, $\beta=15.9241$ (blue line), which agree with the numerical results shown by the corresponding symbols. This confirms the lower bound of the scaling exponent of the maximum power, $\alpha \to 1/2$. Here we set $A=1$, $B=h=1$, $\Delta=0$ with the initial state $n=N$, and $m=0$.} \label{lmgEP} \end{figure} For a classical battery device, the electric current is static, so that a charging process can be completed in a certain time. However, for the quantum battery, the energy transfer is essentially subject to the dynamical evolution and depends not only on the devices but also on the charging time. Let us first understand how the charging process depends on the number of battery spins when the number of charger spins is fixed.
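The exact-diagonalization procedure used here (and detailed in the Appendix) can be sketched in a few lines; the sizes below are small illustrative choices, and the $N_B=1$ assertion checks the numerics against the analytic result $\Delta E_B(t)=B\sin^2(\sqrt{b_{N,n}}At)$ of the previous subsection (with $n=N$, so $b_{N,N}=N$):

```python
import numpy as np
from scipy.linalg import expm

def collective_ops(Ns):
    """S^z and S^- for a collective spin in the (Ns+1)-dim Dicke basis |n>."""
    n = np.arange(Ns + 1)
    Sz = np.diag(-Ns / 2 + n)
    Sm = np.diag(np.sqrt(n[1:] * (Ns - n[1:] + 1)), 1)  # S^-|n> = sqrt(b_{Ns,n})|n-1>
    return Sz, Sm

A, B, h = 1.0, 1.0, 1.0          # resonant setting with Delta = 0
NB, N = 1, 4                     # small illustrative sizes
Sz, Sm = collective_ops(NB)
Jz, Jm = collective_ops(N)
IB, IC = np.eye(NB + 1), np.eye(N + 1)

# H = B S^z + h J^z + A (S^+ J^- + S^- J^+), i.e. Delta = 0
H = (B * np.kron(Sz, IC) + h * np.kron(IB, Jz)
     + A * (np.kron(Sm.T, Jm) + np.kron(Sm, Jm.T)))

psi0 = np.zeros((NB + 1) * (N + 1)); psi0[N] = 1.0   # |m=0>_B |n=N>_C
ts = np.linspace(1e-3, 3.0, 600)
nB = np.kron(Sz, IC)                                 # battery magnetization
dE = np.array([B * np.real((expm(-1j * H * t) @ psi0).conj()
               @ nB @ (expm(-1j * H * t) @ psi0)
               - psi0 @ nB @ psi0) for t in ts])

# check against the N_B = 1 analytic formula: Delta E_B(t) = sin^2(sqrt(N) A t)
assert np.allclose(dE, np.sin(np.sqrt(N) * ts) ** 2, atol=1e-8)
```

For the larger $N_B$, $N$ used in the figures, the same construction applies; one then extracts $E_{max}=\max_t \Delta E_B(t)$ and $P_{max}=\max_t \Delta E_B(t)/t$.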
If the battery spins are taken to be in a Fock state, as for the charger spins, the dynamical evolution of the battery involves the highly entangled Dicke states $\vert m \rangle, m=1,...,N_B$ during the charging process. Such a setting leads to collective charging of the multiple central spin quantum battery, similar to two-level systems coupled to a single cavity mode, i.e. the Dicke model \cite{Ferraro}. By using the Holstein-Primakoff transformation, we will prove that our model can be mapped to the Tavis-Cummings model, see Eq.~(\ref{Htc}) in the analytical study below. The Tavis-Cummings model is in turn related to the Dicke model by the rotating-wave approximation, see \cite{Ferraro}. Therefore, we naturally expect the existence of a general scaling relation between the battery power and the number of battery spins $N_B$ in the quantum battery of the Tavis-Cummings-like model. After performing the numerical calculation, we find that the maximum power \begin{equation} P_{max}\propto \beta (N) N_{B}^{\alpha}, \label{P_alpha} \end{equation} where the exponent $\alpha$ is strongly affected by the number of charger spins and the initial state, and $\beta $ is a function of the number of charger spins $N$. Here the scaling exponent $\alpha$ essentially marks the collective nature of the battery in transferring energy. \begin{figure} \caption{ The rescaled maximum energy (a) and the maximum power (b) vs. the number of battery spins $N_B$ for different numbers of charger spins $N$. The dashed lines in (b) show the numerical fit of the power relation Eq. (\ref{P_alpha}) in logarithmic scale for $N_B \in [1,50]$, i.e. $N=100$, $\alpha=1.4075$, $\beta=7.0056$ (red line); $N=200$, $\alpha=1.4434$, $\beta=9.4058$ (green line) and $N=300$, $\alpha=1.4540$, $\beta=11.3456$ (blue line), which agree with the numerical results shown by the corresponding symbols.
This agreement confirms the upper bound of the scaling exponent of the maximum power, $\alpha \to 3/2$, in the thermodynamic limit. Here we set $A=1$, $B=h=1$, $\Delta=0$ with the initial state $n=N_B$, and $m=0$. } \label{lmgEP_N} \end{figure} \begin{figure} \caption{Logarithmic contour plot of the maximum power vs. the numbers of battery spins $N_B$ and charger spins $N$. It clearly shows different values of the power scaling exponent $\alpha$ in the regimes $N_B\gg N$ and $N\gg N_B$. Here we set $A=1$, $B=h=1$, $\Delta=0$ with the initial state $n=N$, and $m=0$. } \label{qbalpha_Nall} \end{figure} Using the above setting and the initial state, i.e. $m=0,\, n=N$, we first compute the time evolution of the energy and the maximum power; a more detailed explanation of the numerical calculation is given in the Appendix. In Fig.~\ref{lmgEP}, we show the maximum energy and maximum power as functions of the number $N_B$ of battery spins for different numbers of charger spins. In Fig.~\ref{lmgEP} (a), we observe that the maximum energy $E_{max}$ increases linearly with respect to the number of battery spins $N_B$ when $N_B<N$ and saturates to a constant value when $N_B>N$. The maximum energy clearly shows a kink. In Fig.~\ref{lmgEP} (b), we observe that the maximum power $P_{max}$ increases monotonically with respect to the number of battery spins $N_B$ for different numbers of charger spins $N=5$ (red circle), $N=10$ (green square) and $N=15$ (blue triangle). The logarithmic plot of the maximum power $P_{max}$ directly gives the scaling exponent $\alpha$, which fits the relation (\ref{P_alpha}) in the region $N_B>N$, see Fig.~\ref{lmgEP}(b) and the Appendix. This result confirms the lower bound of the scaling exponent of the maximum power, i.e. $\alpha \to 1/2$, in the region $N_B>N$. In Fig.~\ref{lmgEP_N}, we demonstrate the maximum energy and the power-law relation (\ref{P_alpha}) of the battery maximum power for $N\gg N_B$ with the initial condition $n=N_B$.
We observe that the rescaled maximum energy $E_{max}/N_B$ exhibits plateaux in the thermodynamic limit, see Fig.~\ref{lmgEP_N} (a). In Fig.~\ref{lmgEP_N} (b), the plot of the maximum power $P_{max}$ in logarithmic scale shows the scaling relation (\ref{P_alpha}) in agreement with the analytical result given in Eq. (\ref{pmaxhp}), for which the analytical values are $N=100$, $\alpha=1.5$, $\beta=7.2$; $N=200$, $\alpha=1.5$, $\beta=10.1823$ and $N=300$, $\alpha=1.5$, $\beta=12.4708$. When both $N_B$ and $N$ approach the thermodynamic limit, the result Eq. (\ref{pmaxhp}) holds exactly. In Fig.~\ref{qbalpha_Nall}, we further demonstrate the power-law relation (\ref{P_alpha}) of the maximum power with respect to the numbers of battery spins $N_B$ and charger spins $N$, where we set the initial state $n=N$ and consider the ranges $N_B\in [1,40]$ and $N\in [1,200]$. This figure also confirms the observations shown in Fig.~\ref{lmgEP} and Fig.~\ref{lmgEP_N}. Our numerical results show that the collective battery is able to enhance the power by increasing the number of battery cells when the charger resources are large enough. In certain regions there exist lower and upper bounds on the scaling exponent of the maximum power. In the next subsection, we will present an analytical proof of these two bounds. \subsection{Analytical study } In order to get a comprehensive understanding of the lower and upper bounds of the scaling exponent found numerically in the last subsection, we now present a rigorous calculation of the maximum energy and power of the quantum battery of Tavis-Cummings type. Applying the Holstein-Primakoff transformation to both the bath and battery spins, the whole Hamiltonian of the system (\ref{H-B}-\ref{H-I}) can be mapped to the Tavis-Cummings model \cite{sm,JC,dicke}, where the $N_B$ central spins are regarded as $N_B$ two-level atoms.
For $N\gg1$, $N_B\gg1$, we apply the transformation to the charger spins \begin{eqnarray}\label{HP-1} \mathbf{J}^{+}&=&\sqrt{N}a^{\dagger}\sqrt{1-a^{\dagger}a/N}\nonumber\\ \mathbf{J}^{-}&=&\sqrt{N}\sqrt{1-a^{\dagger}a/N}a\nonumber\\ \mathbf{J}^{z}&=&-\frac{N}{2}+a^{\dagger}a. \end{eqnarray} Without loss of generality, we can obtain the Tavis-Cummings model for the case $\Delta=0$ \begin{eqnarray} H_{TC}&=& B\mathbf{S} ^{z}+h(a^{\dagger}a-\frac{N}{2})+A\sqrt{N}(\mathbf{S} ^{+} {a}+ \nonumber \\ && \mathbf{S} ^{-}{a}^{\dagger}). \label{Htc} \end{eqnarray} We then apply the Holstein-Primakoff transformation to the battery spins \begin{eqnarray}\label{HP-2} \mathbf{S}^{+}&=&\sqrt{N_B}b^{\dagger}\sqrt{1-b^{\dagger}b/N_B}\nonumber\\ \mathbf{S}^{-}&=&\sqrt{N_B}\sqrt{1-b^{\dagger}b/N_B}b\nonumber\\ \mathbf{S}^{z}&=&-\frac{N_B}{2}+b^{\dagger}b. \end{eqnarray} In the above formulas, $a$ ($b$) and $a^{\dagger}$ ($b^{\dagger}$) are bosonic annihilation and creation operators. Substituting Eq.(\ref{HP-1}) and Eq.(\ref{HP-2}) into the Hamiltonian Eqs. (\ref{H-B}-\ref{H-I}), we obtain \begin{eqnarray}\label{TC-Hamiltonian} H&\approx& B(-\frac{N_B}{2}+b^{\dagger}b)+h(-\frac{N}{2}+a^{\dagger}a)\nonumber\\ && +A\sqrt{N_BN}(a^{\dagger}b+ab^{\dagger}). \end{eqnarray} Here we neglected the terms $a^{\dagger}a/N$ and $b^{\dagger}b/N_B$ since $N\gg1$, $N_B\gg1$, and we set $\Delta=0$ in $H_I$ to simplify our analytical study. Later, based on the whole Hamiltonian (\ref{TC-Hamiltonian}), we will analytically derive the scaling laws of the maximum energy and the maximum power with respect to the numbers of battery and charger spins. In this model, the total particle number is conserved, i.e. $[H, a^{\dagger}a+b^{\dagger}b]=0$. Without loss of generality, for $B=h$ we can restrict the dynamics to the interaction part (the remaining diagonal terms are then proportional to the conserved total particle number and only contribute a global phase) \begin{eqnarray} H_I=A\sqrt{N_B N}(a^{\dagger}b+ab^{\dagger}).
\end{eqnarray} We take the initial state as before, $\vert \Phi_{0}\rangle=\vert m,n\rangle=\vert m \rangle_{B} \otimes {\vert n\rangle}_{C}$, with the quantum battery in its lowest state, namely $m\rightarrow 0$. The maximum charging energy of the quantum battery is influenced not only by the energy levels of the battery and charger but also by the choice of their initial states. In quantum optics, the number of photon energy levels can be infinite. For the multiple central spins, the maximum transferred energy is $\Delta E_B\propto B\cdot N_B$. We therefore choose $n-m \sim N_B$, i.e. the charger contains enough energy to charge the battery to its maximum energy. The wave function at time $t$ is given by the previous expression $ \vert\psi(t)\rangle=\exp(-\mathrm{i} H_{I}t)\vert \Phi_{0}\rangle$. By definition, the charging energy of the quantum battery is \begin{eqnarray} \Delta E_B(t)=B\Big[\left\langle \psi(t)\right|b^{\dagger}b\left|\psi(t)\right>-\left\langle \Phi_{0}\right|b^{\dagger}b\left|\Phi_{0}\right>\Big]. \end{eqnarray} Let us further define the operator \begin{eqnarray}\label{F_definition} \hat{\mathbf{F}}=b^{\dagger}b-a^{\dagger}a. \end{eqnarray} Its time evolution is given by \begin{eqnarray} F(t)&=&\left\langle \Phi_{0}\right|e^{iH_I t} \hat{\mathbf{F}}e^{-iH_I t}\left|\Phi_{0}\right>.
\label{Ft_definition} \end{eqnarray} After carefully evaluating the nested commutators between the operators $H_I$ and $\hat{\mathbf{F}}$, we obtain the following expression \begin{eqnarray}\label{Ft_opeartor} &e^{iH_I t}& \hat{\mathbf{F}}e^{-iH_I t}=\hat{\mathbf{F}}+\sum_{k\geq 1} \frac{1}{k!} [iH_I t,[iH_It,\cdots,[iH_I t,\hat{\mathbf{F}}]\cdots]]\nonumber\\ &=&\sum_{j=0}^{\infty}\frac{i^{2j+1}}{(2j+1)!}(2tA{\sqrt{N_BN}})^{2j+1}(a^{\dagger}b-ab^{\dagger})\nonumber\\ &&+\sum_{j=0}^{\infty}\frac{i^{2j}}{(2j)!}(2tA{\sqrt{N_BN}})^{2j}\hat{\mathbf{F}}\nonumber\\ &=&i\sin(2A{\sqrt{N_BN}}t)(a^{\dagger}b-ab^{\dagger})\nonumber\\ &&+\cos(2A{\sqrt{N_BN}}t)\hat{\mathbf{F}}. \end{eqnarray} Substituting Eq.~(\ref{Ft_opeartor}) and Eq.~(\ref{F_definition}) into Eq.~(\ref{Ft_definition}), we further obtain the simple expression \begin{equation} F(t)=(m-n)\cos(2A{\sqrt{N_BN}}t). \end{equation} Moreover, the total particle number $\hat{\mathbf{N}}=b^{\dagger}b+a^{\dagger}a$ is a conserved quantity, i.e. $[H_I,\hat{\mathbf{N}}]=0$. Therefore we have $N(t)=\left\langle \psi(t)\right|\hat{\mathbf{N}}\left|\psi(t)\right>=m+n$. It follows that \begin{eqnarray} \left\langle \psi(t)\right|b^{\dagger}b\left|\psi(t)\right>&=&\frac{N(t)+F(t)}{2}\nonumber\\ &=&\frac{m+n}{2}+\frac{m-n}{2}\cos{(2A{\sqrt{N_BN}}t)}.\nonumber \end{eqnarray} Thus the charging energy and the power of the quantum battery are given by \begin{eqnarray} \Delta E_B(t)&=&B\cdot(n-m)\sin^2(A{\sqrt{N_BN}}t), \label{Ebhp} \\ P_B(t)&=&B\cdot(n-m)\frac{\sin^2(A{\sqrt{N_BN}}t)}{t} \label{pbhp}, \end{eqnarray} respectively. It is straightforward to obtain the maximum power ${P}_{max}\approx 0.72BA\sqrt{N_BN}(n-m)$, reached at the time $\tau\approx 1.16/(A\sqrt{N_BN})$.
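The resulting $\langle b^{\dagger}b\rangle(t)$ can be verified numerically within a single excitation-number sector of the two-mode Hamiltonian; in the sketch below, $g$ stands for $A\sqrt{N_BN}$ and the occupations $m$, $n$ are illustrative values:

```python
import numpy as np
from scipy.linalg import expm

g = 0.7                # stands for A*sqrt(N_B*N) in H_I = g(a†b + a b†)
m, n = 0, 8            # initial Fock state |m>_B |n>_C (illustrative values)
Ntot = m + n           # conserved total particle number

# H_I restricted to the sector |k>_b |Ntot-k>_a, k = 0..Ntot: a tridiagonal matrix
k = np.arange(Ntot)
off = g * np.sqrt((k + 1) * (Ntot - k))     # <k+1| H_I |k>
H = np.diag(off, 1) + np.diag(off, -1)

psi0 = np.zeros(Ntot + 1); psi0[m] = 1.0
for t in np.linspace(0.0, 2.0, 9):
    psi = expm(-1j * H * t) @ psi0
    nb = np.sum(np.arange(Ntot + 1) * np.abs(psi) ** 2)       # <b†b>(t)
    exact = (m + n) / 2 + (m - n) / 2 * np.cos(2 * g * t)     # from F(t) and N(t)
    assert abs(nb - exact) < 1e-8
```

The agreement holds for any Fock occupations, since the operator identity (\ref{Ft_opeartor}) is exact for the bilinear $H_I$.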
As mentioned in the previous section, we require $n-m \sim N_B$ and $N\gg N_B$; thus the maximum power is given by \begin{eqnarray} {P}_{max}=0.72BA\sqrt{N}N_{B}^{3/2}, \label{pmaxhp} \end{eqnarray} which reveals a significant advantage of this charging protocol and leads to the upper bound of the scaling exponent $\alpha=3/2$. We observe that early in the charging process the power reaches its maximum while the energy does not. This means that the maximum power $P_{max} \propto N_{B}^{3/2}$ can indeed occur in the early stage of the charging process, when the number of flipped spins $\langle b^{\dagger}b \rangle$ in the battery is much smaller than the number of battery cells $N_{B}$. Therefore the Holstein-Primakoff transformation remains valid for our analytical results. On the other hand, in the limit $N\rightarrow 1$, the maximum power shows the lower bound of this advantage, see Fig.\ref{lmgEP}(b). The evolution of the system can be easily obtained for the $N=1$ case with the initial state $\vert m,\uparrow\rangle$. The energy is $\Delta E_{B}=B\sin^2(\sqrt{b_{N_B,m+1}}A t)$ and the power $P_{B}= B\sin^2(\sqrt{b_{N_B,m+1}}A t)/t$, so that the maximum power is $P_{max}\approx 0.72B \sqrt{b_{N_B,m+1}}A$, reached at the charging time $1.16/(\sqrt{b_{N_B,m+1}}A)$. According to the previous setting, the battery spins are initially in the lowest state, $m\rightarrow 0 $, which gives $\sqrt{b_{N_B,m+1}}=\sqrt{N_B}$ and leads to $P_{max} \propto \sqrt{N_B}$. This is consistent with the numerical result given in Fig.\ref{lmgEP}(b), i.e. the scaling exponent $\alpha$ varies from the lower bound $\alpha =1/2$ to the upper bound $3/2$ as the number of charger spins $N$ changes from small values to the thermodynamic limit, i.e. $N\gg1$ and $N_B\gg1$, while the condition $N\gg N_B$ holds. \section{IV. Conclusion} We have studied numerically and analytically the high-power quantum battery based on the multiple central spin model.
The advantage of the quantum battery has been demonstrated through its maximum power ${P}_{max}=0.72BA\sqrt{N}N_{B}^{\alpha }$, which exhibits a universal power-law dependence on the number of battery cells (spins) under the condition $N_B\ll N$. Such a power-law relation is analytically derived for the quantum battery of Tavis-Cummings type. We have also observed that the power-law exponent of the battery power depends on the number of charger spins $N$, namely the scaling exponent $\alpha$ varies with the number of bath spins $N$ from the lower bound $\alpha =1/2$ to the upper bound $\alpha =3/2$. From the maximum power (\ref{single-central-s-P}) of the single central spin battery, we see clearly that the maximum power of $N_B$ incoherent quantum batteries of single central spin systems is given by $P_{ max} \approx 0.72B A \sqrt{N} N_B$. Therefore, a quantum advantage is revealed by the maximum power Eq. (\ref{pmaxhp}) of the quantum battery of $N_B$ central spins. In the latter case, coherence of the $N_B$ central spins is naturally created by the interaction between the battery and charger spins. In the Appendix, we have presented the analytical results of the quantum battery with $N_B=1$ and an introduction to our numerical method. Our results show how both the charger and the battery can enhance the quantum advantage of Tavis-Cummings-type systems. Our rigorous results on dynamical energy transfer shed light on the design of quantum batteries. \textit{Acknowledgements. }W.B.H. acknowledges support from NSAF (Grant No. U1930402). X.W.G. is supported by the NSFC grant No.\ 11874393, and the National Key R\&D Program of China No.\ 2017YFA0304500. S.C. acknowledges support from NSFC (Grants No. 11974040 and No. 1171101295) and the National Key R\&D Program of China No. 2016YFA0301200. H. Q. L.
acknowledges financial support from National Science Association Funds U1930402 and NSFC 11734002, as well as computational resources from the Beijing Computational Science Research Center. \widetext \begin{center} \textbf{\large Appendix } \end{center} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{section}{0} \makeatletter \renewcommand{\thefigure}{A\arabic{figure}} \renewcommand{\thesection}{A\arabic{section}} \renewcommand{\theequation}{A\arabic{equation}} \makeatother \section{The explicit forms of the charging energy} For the special case $N_{B}=1$, the Hamiltonian can be written as a $2\times2 $ matrix in the basis $ \vert \downarrow \rangle \vert n\rangle, \vert \uparrow \rangle \vert n-1\rangle$, where $n=1,2,...,N$: \begin{equation} H_n=\left( \begin{array}{cc} \frac{B-h}{2}+(n-1-N/2)\Delta &\sqrt{b_{N,n}}A \\ \sqrt{b_{N,n}}A & -\frac{B-h}{2}-(n-N/2)\Delta \\ \end{array} \right) \end{equation} It is easy to diagonalize the above small matrix $H_n$ analytically to obtain the evolution operator $U(t)=\exp(-iH_nt)$. The wave function is then given by $\vert \psi(t)\rangle=U(t)\vert \Phi_{0}\rangle$. The Hamiltonian can be written as $H_n=\left( \Delta_{n}/2 \right) \hat{\sigma}_{z}+\sqrt{b_{N,n}}A \hat{\sigma}_{x}+C$, where $C$ is a constant. The evolution operator $U(t)$ can be obtained by using the property of Pauli matrices, namely $\exp(i \theta \hat{n}\cdot \hat{\sigma})=\cos(\theta) I+i \sin(\theta) \hat{n}\cdot \hat{\sigma}$. It reads $ U(t)=\cos( \Omega_{n}t/2) I-i \sin( \Omega_{n}t/2)\left[\left( \Delta_{n}/\Omega_{n} \right)\hat{ \sigma}_{z}+2\left( \sqrt{b_{N,n}}A/\Omega_{n}\right) \hat{ \sigma}_{x}\right]$, where \begin{eqnarray} \Delta_{n}&=&B-h+(2n-1-N)\Delta,\nonumber \\ \Omega_{n}&=&\sqrt{\Delta_{n}^2+4b_{N,n}A^2}.\nonumber \end{eqnarray} Acting with the evolution operator $U(t)$ on the initial state $\vert \downarrow \rangle \vert n \rangle$, we finally obtain the time-dependent wave function.
We explicitly rewrite the wave function for the $N_B=1$ case, Eq.(\ref{wave_function}), as \begin{equation} \vert \psi(t)\rangle = e^{-i\theta t}\Big[P_{\uparrow}^{n}(t) \vert \uparrow \rangle \vert n-1\rangle+ P_{\downarrow}^{n} (t) \vert \downarrow \rangle \vert n\rangle \Big], \end{equation} where the global phase $\theta$ can be omitted and the two amplitudes are given by \begin{eqnarray*}\label{amplitude_P} P_{\uparrow}^{n}(t)&=&-i\frac{2\sqrt{b_{N,n}}A}{ \Omega_{n}}\sin( \frac{\Omega_{n}t}{2}),\\ P_{\downarrow}^{n}(t)&=& i\frac{\Delta_{n}}{ \Omega_{n}}\sin( \frac{\Omega_{n}t}{2})+\cos( \frac{\Omega_{n}t}{2}). \end{eqnarray*} The wave function satisfies the normalization condition $|P_{\uparrow}^{n}(t)|^2+|P_{\downarrow}^{n}(t)|^2=1$. The parameters are $\Delta_{n}=B-h+(2n-1-N)\Delta$ and $\Omega_{n}=\sqrt{\Delta_{n}^2+4b_{N,n}A^2}$. The density matrix of the system is \begin{eqnarray} \rho(t)&=&\left|\Psi(t)\right>\left<\Psi(t)\right|\nonumber\\ &=&P^n_{\uparrow}(t) {P_{\uparrow}^{n}(t)}^{\ast}\left|\uparrow \right> \left|n-1 \right>\left<n-1 \right| \left<\uparrow \right| +P^n_{\uparrow}(t) {P_{\downarrow}^{n}(t)}^{\ast}\left|\uparrow \right> \left|n-1 \right>\left<n \right| \left<\downarrow \right| \nonumber\\ &&+P^n_{\downarrow}(t) {P_{\uparrow}^{n}(t)}^{\ast}\left|\downarrow \right> \left|n \right>\left<n-1 \right| \left<\uparrow \right| + P^n_{\downarrow}(t) {P_{\downarrow}^{n}(t)}^{\ast}\left|\downarrow \right> \left|n \right>\left<n \right| \left<\downarrow \right|.
\end{eqnarray} Then the reduced density matrices $\rho_B$ and $\rho_C$ are given respectively by \begin{eqnarray} \rho_B(t) &=& {\mathrm{tr}}_C\Big[\left|\Psi(t)\right>\left<\Psi(t)\right| \Big]\nonumber\\ &=&P^n_{\uparrow}(t) {P_{\uparrow}^{n}(t)}^{\ast}\left|\uparrow \right> \left< \uparrow \right|+ P^n_{\downarrow}(t) {P_{\downarrow}^{n}(t)}^{\ast} \left|\downarrow \right> \left< \downarrow \right|,\\ \rho_C(t) &=& {\mathrm{tr}}_B\Big[\left|\Psi(t)\right>\left<\Psi(t)\right| \Big]\nonumber\\ &=& P^n_{\uparrow}(t) {P_{\uparrow}^{n}(t)}^{\ast}\left| n-1 \right> \left< n-1 \right|+ P^n_{\downarrow}(t) {P_{\downarrow}^{n}(t)}^{\ast} \left| n \right> \left< n \right|. \end{eqnarray} After simple algebra, we derive the energy of the quantum battery, the energy of the charger, and the interaction energy between charger and battery by substituting the above density matrices into the definition Eq.(\ref{eq_Ebc}): \begin{eqnarray} E_B(t)&=&\mathrm{tr}[H_B\rho_B(t)] =B\Big[\frac{4b_{N,n}A^2}{ \Omega_{n}^2} \sin^{2}( \frac{\Omega_{n}t}{2})-\frac{1}{2}\Big],\label{energy_B} \\ E_C(t)&=&\mathrm{tr}[H_C\rho_C(t)] =h\Big[(-\frac{N}{2}+n)-\frac{4b_{N,n}A^2}{ \Omega_{n}^2} \sin^{2}( \frac{\Omega_{n}t}{2})\Big],\label{energy_C} \\ E_I(t)&=&\mathrm{tr}[H_I\rho(t)] =\Delta(\frac{N}{2}-n)-(B-h)\frac{4b_{N,n}A^2}{ \Omega_{n}^2} \sin^{2}( \frac{\Omega_{n}t}{2}). \label{energy_I} \end{eqnarray} \section{The exact diagonalization and fitting the scaling law} In this part, we present in detail the exact diagonalization method.
According to the action of the collective spin operators on the Dicke states, we have \begin{eqnarray} \mathbf{J^{z}}\vert n \rangle &=& (-\frac{N}{2}+n)\vert n \rangle,\\ \mathbf{J^{-}} \vert n \rangle &=& \sqrt{b_{N,n}} \vert n-1 \rangle,\\ \mathbf{J^{+}} \vert n \rangle &=& \sqrt{b_{N,n+1}} \vert n+1 \rangle, \end{eqnarray} so that $J^{z},J^{-},J^{+}$ can be written as $(N+1)\times(N+1)$ matrices; for example, $J^{z}$ and $J^{-}$ are given by \begin{equation} (J^{z})_{mn}= \left\lbrace \begin{array}{c c c} (-\frac{N}{2}+n) ,& \mbox{for} &m=n \\ 0 ,& \mbox{for} & \mbox{others} \\ \end{array} \right. \end{equation} and \begin{equation} (J^{-})_{mn}= \left\lbrace \begin{array}{c c c} \sqrt{b_{N,n}} ,& \mbox{for} &m=n-1 \\ 0 ,& \mbox{for} & \mbox{others} \\ \end{array} \right. , \end{equation} respectively. Similarly, the operators $S^{z},S^{-},S^{+}$ can be written as $(N_{B}+1)\times(N_{B}+1)$ matrices. By combining the matrices of $J$ and $S$, we obtain the matrix form of the whole Hamiltonian Eqs. (\ref{H-B}-\ref{H-I}). Thus the dimension of the Hamiltonian in the Dicke basis is $(N_{B}+1)(N+1)$. For $N_{B}\le 40$ and $N\le 300$, the Hamiltonian can be diagonalized directly to obtain the evolution operator $U(dt)=\exp(-iH dt)$ with a suitable time step $dt$. The time-dependent wave function is obtained numerically as $\vert \psi(t) \rangle =U(dt) \cdots U(dt) \vert \Phi_{0} \rangle$. Then, according to Eqs.~(2) and (3), the energy and power can be computed. \textbf{Scaling relation.} The scaling relation of the maximum power of the battery reads \begin{equation} P_{max}\propto \beta (N) N_{B}^{\alpha}. \end{equation} By taking the logarithm, we use a linear fit to obtain the scaling exponent $\alpha$, \begin{equation} \log(P_{max})= \alpha \log(N_{B})+\log(\beta (N)), \label{app_plog} \end{equation} where $\beta $ is a constant for fixed $N$. In the numerical fitting in Fig.~(\ref{lmgEP}), we fixed the range of $N_{B}$ to $[1,80]$.
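The fitting step of Eq. (\ref{app_plog}) amounts to a linear least-squares fit in log-log coordinates. A minimal sketch follows; the data are synthetic, generated from the assumed scaling law with the analytic $N=100$ values $\alpha=1.5$, $\beta=7.2$ quoted in the main text, purely to illustrate the procedure:

```python
import numpy as np

# synthetic P_max data obeying P_max = beta(N) * N_B**alpha
alpha_true, beta_true = 1.5, 7.2        # analytic N = 100 values from the main text
NB = np.arange(5, 81)                   # fitting window inside N_B in [1, 80]
Pmax = beta_true * NB.astype(float) ** alpha_true

# Eq. (app_plog): log(P_max) = alpha log(N_B) + log(beta(N))
alpha_fit, logbeta_fit = np.polyfit(np.log(NB), np.log(Pmax), 1)
beta_fit = np.exp(logbeta_fit)
```

Applied to the numerically computed $P_{max}$ instead of synthetic data, the same two lines of fitting produce the exponents reported in the figure captions.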
Due to total energy conservation, the energy reaches a saturation value for $N_B>N$, see Fig.\ref{lmgEP}(a). Therefore we use the data after the kink to fit the scaling relation Eq.(\ref{app_plog}) in the region $N<N_B$ in Fig. \ref{lmgEP}(b). Similarly, for the region $N\gg N_B$, we fit the scaling relation Eq.(\ref{app_plog}) and find agreement with our analytical relation (\ref{pmaxhp}), see the main text. \end{document}
\begin{document} \large \title{Linear complementarity on simplicial cones and the congruence orbit of matrices \thanks{{\it 2010 AMS Subject Classification:} 90C33, 15A04; {\it Key words and phrases:} linear complementarity, \textbf{Q}-matrix, \textbf{P}-matrix, congruence orbit} } \author{A. B. N\'emeth\\Faculty of Mathematics and Computer Science\\Babe\c s Bolyai University, Str. Kog\u alniceanu nr. 1-3\\RO-400084 Cluj-Napoca, Romania\\email: nemab@math.ubbcluj.ro \and S. Z. N\'emeth\\School of Mathematics, University of Birmingham\\Watson Building, Edgbaston\\Birmingham B15 2TT, United Kingdom\\email: s.nemeth@bham.ac.uk} \date{} \maketitle \begin{abstract} The congruence orbit of a matrix has a natural connection with the linear complementarity problem on simplicial cones formulated for the matrix. In terms of the two approaches -- the congruence orbit and the family of all simplicial cones -- we give an equivalent classification of matrices from the point of view of complementarity theory. \end{abstract} \section{Introduction} We use in this introduction some standard terms and notations which will be specified in the next section. Let $K\subset \mathbb R^m$ be a cone, $K^*\subset \mathbb R^m$ its dual, $M:\mathbb R^m\to \mathbb R^m$ a linear mapping and $q\in \mathbb R^m$. The problem \begin{equation*} \LCP(K,q,M): \;\textrm{find}\;x\in K \;\textrm{with}\; Mx+q\in K^*\;\textrm{and}\; \langle x,Mx+q\rangle=0 \end{equation*} is called \emph{the linear complementarity problem on the cone $K$}. In the case $K=\mathbb R^m_+$ it is denoted by $\LCP(q,M)$ and called \emph{the classical linear complementarity problem}. As one of the most important problems in optimization theory, the classical linear complementarity problem has a broad literature (see \cite{MR3396730} and the references therein). Despite the important progress made in this field over the last decades, it is still at the center of interest nowadays.
Besides the classical case, linear complementarity on the Lorentz cone and on the cone of positive semi-definite matrices emerged as an important topic in the previous decade \cite{MR1782157,GowdaSznajderTao2004}. When $K\subset \mathbb R^m$ is a simplicial cone, the linear complementarity problem can be transformed by a linear mapping into the classical one. But general simplicial cones can differ substantially from each other in some respects, one of them being the projection onto the cone, a mapping playing an essential role in the solution of optimization problems. If the linear mapping $M$ is given, such an approach relates the problem to the \emph{congruence orbit} of $M$, i.e. to the set of maps \begin{equation}\label{orbit} \dy (M) =\{L^\top ML:\;L\in \glm \}, \end{equation} where $\glm$ denotes the \emph{general linear group} of $\mathbb R^m$, i.e., the group of invertible linear maps of the vector space $\mathbb R^m$. Among other results, in this note we will show that $\LCP(K,q,M)$ is feasible for an arbitrary simplicial cone $K\subset \mathbb R^m$ and an arbitrary $q\in \mathbb R^m$ if and only if $M$ is a \emph{positive definite} mapping \cite{MR3396730}, i.e., if $\langle Mx,x \rangle > 0$, $\forall x\in \mathbb R^m$, $x\ne 0$, which is equivalent to saying that this property holds for each member of $\dy (M)$. It turns out that this property is also equivalent to the much stronger $\textbf{P}$- and $\textbf{Q}$-properties of all members of $\dy (M)$ and, equivalently, to the corresponding properties of $M$ for each simplicial cone $K$. It is possible that some of the problems considered in the present note have already occurred in a different setting in the huge literature on linear complementarity.
Even so, considering the classical $\textbf{P}$- and $\textbf{Q}$-properties of a matrix for the \emph{whole family} of simplicial cones, and their relation with the congruence orbit of the matrix, seems a novel approach which justifies the following investigation. \section{Terminology and notations} \label{terminology} Denote by $\mathbb R^m$ the $m$-dimensional Euclidean space endowed with the scalar product $\langle\cdot,\cdot\rangle:\mathbb R^m\times\mathbb R^m\to\mathbb R,$ and the Euclidean norm $\|\cdot\|$ and topology this scalar product defines. Denote $\langle x,y\rangle=0$ by $x\perp y$. Throughout this note we shall use some standard terms and results from convex geometry (see e.g. \cite{MR1451876}). Let $K$ be a \emph{convex cone} in $\mathbb R^m$, i.e., a nonempty set with (i) $K+K\subset K$ and (ii) $tK\subset K,\;\forall \;t\in \mathbb R_+ =[0,+\infty)$. The convex cone $K$ is called \emph{pointed} if $K\cap (-K)=\{0\}.$ The cone $K$ is {\it generating} if $K-K=\mathbb R^m$. $K$ is generating if and only if $\intr K\not= \varnothing.$ A closed, pointed, generating convex cone is called a \emph{proper cone}. The set \begin{equation}\label{simplicial} K= \cone\{x^1,\dots,x^m\}:=\{t_1x^1+\dots+t_m x^m:\;t_i\in \mathbb R_+,\;i=1,\dots,m\} \end{equation} with $x^1,\,\dots,\,x^m$ linearly independent vectors in $\mathbb R^m$ is called a \emph{simplicial cone}. A simplicial cone is proper. The \emph{dual} of the convex cone $K$ is the set $$K^*:=\{y\in \mathbb R^m:\;\langle x,y\rangle \geq 0,\;\forall \;x\in K\},$$ where $\langle\cdot,\cdot\rangle $ is the standard scalar product in $\mathbb R^m$. The cone $K$ is called \emph{self-dual} if $K=K^*.$ If $K$ is self-dual, then it is proper.
In all that follows we will suppose that $\mathbb R^m$ is endowed with a Cartesian system having an orthonormal basis $e^1,\dots,e^m$, and the elements $x\in \mathbb R^m$ are represented by the column vectors $x=(x_1,\dots,x_m)^\top$, with $x_i$ the coordinates of $x$ with respect to this basis. (That is, $\mathbb R^m$ will be the vector space of $m$-dimensional column vectors.) The set \[\mathbb R^m_+=\{x=(x_1,\dots,x_m)^\top\in \mathbb R^m:x_i\geq 0,\mbox{ } i=1,\dots,m\}\] is called the \emph{non-negative orthant} of the above introduced Cartesian reference system. In fact $$\mathbb R^m_+=\cone \{e^1,...,e^m\}.$$ A direct verification shows that $\mathbb R^m_+$ is a self-dual cone. If $K$ is the simplicial cone defined by (\ref{simplicial}) and $A$ is the non-singular matrix transforming the basis $e^1,...,e^m$ into the linearly independent vectors $x^1,...,x^m$, then obviously \begin{equation}\label{repre} K=A\mathbb R^m_+. \end{equation} For simplicity, from now on we will call a convex cone simply a cone. \section{Changing the cone linearly} \begin{lemma}\label{gentran} Let $W\subset \mathbb R^m$ be a cone and $A\in \glm$. Then $K=AW$ is a cone too and $K^*=A^{-T}W^*$. \end{lemma} \begin{proof} The first assertion is trivial. Take $y\in K^*$. This is equivalent to $$\langle Aw,y\rangle =\langle w,A^\top y\rangle \geq 0,\mbox{ }\forall w\in W.$$ Thus, $y\in K^*\iff A^\top y\in W^* \iff y\in A^{-T}W^*.$ \end{proof} \begin{corollary}\label{gentranself} If $W^*=W$ and $A\in\glm$, then $$ K=AW \iff K^*=A^{-T}W.$$ If $K$ is the simplicial cone (\ref{simplicial}), then, because of the representation (\ref{repre}) and the self-duality of $\mathbb R^m_+$, we have $$K^*=A^{-T}\mathbb R^m_+.$$ \end{corollary} \section{Linear transformation of a cone and the complementarity problem} For the mapping $F:\mathbb R^m \to \mathbb R^m$ the \emph{complementarity problem} $\CP(F,K)$ is to find $x\in\mathbb R^m$ such that \begin{equation*} K\ni x\perp F(x)\in K^*.
\end{equation*} The solution set of $\CP(F,K)$ will be denoted by $\SOL(F,K)$. We have \begin{equation}\label{-T} x\perp y\iff Ax\perp A^{-\top}y. \end{equation} Hence, by using Lemma \ref{gentran} and (\ref{-T}), we conclude the following result: \begin{proposition}\label{CPequ} If $W$ is a cone, $A\in \glm$ and $K=AW$, then $$\SOL(F,K)=A(\SOL(A^\top FA,W)).$$ \end{proposition} \begin{proof} Indeed, \begin{gather*} Ax\in A(\SOL(A^\top FA,W))\iff x\in\SOL(A^\top FA,W)\iff W\ni x\perp A^\top F(Ax)\in W^*\\\iff K\ni Ax\perp F(Ax)\in K^* \iff Ax\in \SOL(F,K). \end{gather*} \end{proof} \section{The case of linear complementarity} The complementarity problem $\CP(F,K)$ with $F(x)=Mx+q$, where $M\in \mathbb R^{m\times m}$ and $q\in \mathbb R^m$, will be denoted by $\LCP(K,M,q)$ and called the \emph{linear complementarity problem}. Thus, the linear complementarity problem $\LCP(K,M,q)$ is to find $x\in\mathbb R^m$ such that \begin{equation*} K\ni x\perp Mx+q\in K^*. \end{equation*} The solution set of $\LCP(K,M,q)$ will be denoted by $\SOL(K,M,q)$. In this case Proposition \ref{CPequ} becomes \begin{proposition}\label{LCPequ} If $W$ is a cone, $A\in \glm$ and $K=AW$, then $$\SOL(K, M,q)=A(\SOL(W,A^\top MA, A^\top q)).$$ \end{proposition} If $K=\mathbb R^m_+$, then $\CP(F,K)$, $\SOL(F,K)$, $\LCP(K,M,q)$ and $\SOL(K,M,q)$ will simply be denoted by $\CP(F)$, $\SOL(F)$, $\LCP(M,q)$ and $\SOL(M,q)$, respectively. \section{The congruence orbit of a matrix and the complementarity problem} If $A$ and $B$ are in $\mathbb R^{m\times m}$, then they are \emph{congruent}, and we write $A\sim B$, if there exists $L\in \glm$ such that $$L^\top AL=B,$$ that is, $B$ is in the congruence orbit $\dy (A)$ of $A$ defined at (\ref{orbit}).
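Proposition \ref{LCPequ} can be illustrated numerically for $m=2$ by solving the standard LCP through brute-force enumeration of the complementary supports. The matrices $M$, $L$ and the vector $q$ below are hypothetical example data; the solution $x'$ of the transformed problem is mapped by $x=Lx'$ to a solution on the cone $K=L\mathbb R^2_+$:

```python
def lcp2(M, q, tol=1e-9):
    """All solutions of the standard LCP: x >= 0, w = Mx + q >= 0, x.w = 0,
    found by enumerating the four complementary supports (m = 2)."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    cands = [[0.0, 0.0]]
    if abs(M[0][0]) > tol:
        cands.append([-q[0]/M[0][0], 0.0])
    if abs(M[1][1]) > tol:
        cands.append([0.0, -q[1]/M[1][1]])
    if abs(det) > tol:              # full support: M x = -q (Cramer's rule)
        cands.append([(-q[0]*M[1][1] + q[1]*M[0][1])/det,
                      (-q[1]*M[0][0] + q[0]*M[1][0])/det])
    sols = []
    for x in cands:
        w = [M[i][0]*x[0] + M[i][1]*x[1] + q[i] for i in range(2)]
        if min(x) >= -tol and min(w) >= -tol \
           and abs(x[0]*w[0] + x[1]*w[1]) <= tol and x not in sols:
            sols.append(x)
    return sols

M = [[2.0, 1.0], [1.0, 3.0]]        # positive definite, hence a P-matrix
L = [[1.0, 1.0], [0.0, 1.0]]        # K = L R^2_+ is a simplicial cone
q = [-1.0, 1.0]

Mp = [[2.0, 3.0], [3.0, 7.0]]       # M' = L^T M L
qp = [-1.0, 0.0]                    # q' = L^T q

xp = lcp2(Mp, qp)[0]                # unique solution of the standard LCP
x = [L[0][0]*xp[0] + L[0][1]*xp[1],
     L[1][0]*xp[0] + L[1][1]*xp[1]]        # x = L x'
y = [M[0][0]*x[0] + M[0][1]*x[1] + q[0],
     M[1][0]*x[0] + M[1][1]*x[1] + q[1]]   # y = M x + q
LTy = [L[0][0]*y[0] + L[1][0]*y[1],
       L[0][1]*y[0] + L[1][1]*y[1]]        # y in K*  <=>  L^T y >= 0
print(x, y, LTy)   # -> [0.5, 0.0] [0.0, 1.5] [0.0, 1.5]
```

The check confirms $x\in K$, $Mx+q\in K^*$ and $x\perp Mx+q$, exactly as the solution-set mapping predicts.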
Obviously, $\sim$ is an equivalence relation and in this case $\dy (B)=\dy (A).$ In the case of simplicial cones Proposition \ref{LCPequ} reduces to \begin{proposition}\label{LCPclassic} If $K=L\mathbb R^m_+$ is a simplicial cone, then $$\SOL(K, A,q)=L(\SOL(\mathbb R^m_+,L^\top AL,L^\top q)).$$ Hence the linear complementarity problem on a simplicial cone is equivalent to a linear complementarity problem on the non-negative orthant, that is, to the classical linear complementarity problem. \end{proposition} \begin{remark} Proposition \ref{LCPclassic} shows that for linear complementarity problems with matrix $A$ on simplicial cones the congruence orbit $\dy (A)$ of $A$ appears in a natural way. \end{remark} \begin{definition} Let $A$ be a linear transformation. Then, we say that \begin{enumerate} \item $A$ has the \emph{$K$-\textbf{Q}-property} if $\LCP(K,A,q)$ has a solution for all $q$. \item $A$ has the \emph{$K$-\textbf{P}-property} if $\LCP(K,A,q)$ has a unique solution for all $q$. \item The $\mathbb R^m_+$-\textbf{Q}-property ($\mathbb R^m_+$-\textbf{P}-property) is called \emph{\textbf{Q}-property, (\textbf{P}-property)} and the matrix of the linear transformation $A$ with the \textbf{Q}-property (\textbf{P}-property) is called \textbf{Q}-matrix (\textbf{P}-matrix) (for simplicity we denote a linear transformation and its matrix by the same letter). \item $A$ has the \emph{general feasibility property} with respect to $K$, denoted the $K$-\textbf{FS}-property, if \[(AK+q)\cap K^*\ne\varnothing,\mbox{ }\forall q\in \mathbb R^m.\] If $K=\mathbb R^m_+$, then the $K$-\textbf{FS}-property is called \textbf{FS}-property and it is characterized by the relation \[(A\mathbb R^m_++q)\cap\mathbb R^m_+\ne\varnothing,\mbox{ }\forall q\in \mathbb R^m.\] The matrix $A$ with the \textbf{FS}-property is called an \textbf{FS}-matrix.
\end{enumerate} \end{definition} \begin{remark}\label{pbpos} Obviously, the \textbf{P}-property of a matrix $A$ implies its \textbf{Q}-property, and its \textbf{Q}-property implies its \textbf{FS}-property as well. A classical result going back to the paper \cite{Samelson} asserts that $A$ possesses the \textbf{P}-property if and only if all its principal minors are positive. Theorem 3.1.6 in \cite{MR3396730} asserts that a positive definite matrix possesses the $\textbf{P}$-property. The \textbf{FS}-property of a matrix can be considered the weakest one in the context of linear complementarity. It is easy to see that the \textbf{FS}-property ($K$-\textbf{FS}-property) of the matrix $A$ is equivalent to $-A\mathbb R^m_++\mathbb R^m_+=\mathbb R^m$ ($-AK+K^*=\mathbb R^m$). \end{remark} With the notations in the above definition we have \begin{proposition}\label{ekviv} If $A\in \mathbb R^{m\times m}$ has the $K$-\textbf{Q}-property ($K$-\textbf{P}-property) then $M=L^\top AL\in \dy (A)$ has the $L^{-1}K$-\textbf{Q}-property ($L^{-1}K$-\textbf{P}-property). \end{proposition} \begin{example}\label{Icongr} \emph{The congruence orbit of the identity.} We have $\dy (I)=\{L^\top L:\;L\in\glm\}$. Hence, each member of $\dy (I)$ is a \emph{symmetric positive definite matrix}. \end{example} The following lemma is based on Example \ref{Icongr} and shows that if the congruence orbit of a matrix contains a symmetric positive definite matrix, then it consists entirely of symmetric positive definite matrices. \begin{lemma}\label{ulem} If $\dy (A)$ contains a symmetric positive definite matrix, then $$\dy (A)= \dy (I).$$ \end{lemma} \begin{proof} We can suppose that $A$ itself is a symmetric positive definite matrix. If we denote by $R$ the square root of $A$ \cite{MR2978290}, then we can write \begin{equation*} \dy (A) =\{L^\top AL:\;L\in \glm\} = \{(RL)^\top (RL):\;L\in \glm\}= \end{equation*} \begin{equation*} \{M^\top M:\;M\in\glm\}=\dy (I).
\end{equation*} \end{proof} Obviously, a symmetric positive definite matrix is nothing else but a symmetric \textbf{P}-matrix. Hence, by Lemma \ref{ulem} we conclude \begin{corollary}\label{corulem} Each member of the congruence orbit of a symmetric positive definite matrix is a \textbf{P}-matrix. \end{corollary} What about the congruence orbit of a non-symmetric \textbf{P}-matrix? Can it have a property similar to the one stated in Corollary \ref{corulem}? We will show that this holds if and only if the matrix is positive definite. \begin{lemma}\label{dgnl} If the diagonal of the matrix $A\in \mathbb R^{m\times m}$ contains some non-positive element, then $\dy (A)$ contains non-\textbf{FS}-matrices. \end{lemma} \begin{proof} (a) Let $D_k$ be a diagonal matrix with $d_{kk}=-1$ and $d_{ii}=1$ if $i\ne k$. If $A=(a_{ij})_{i,j=1,...,m}$, then $B=D_k^\top AD_k$ is a matrix with $b_{ij}=a_{ij}$ if $i\ne k$ and $j\ne k$, $b_{ki}=-a_{ki}$, $i\ne k$, $b_{jk}=-a_{jk}$, $j\ne k$, and $b_{kk}=a_{kk}$. (b) Without loss of generality we can assume that $a_{11}\leq 0$. Assume that the positive entries of the first row of $A$ are $a_{1j},\,a_{1k},\ldots,a_{1l}$. Then if $D$ is the diagonal matrix with $d_{ii}=-1,\;i \in \{j,k,...,l\}$ and $d_{ii}=1,\;i\notin \{j,k,...,l\}$, then using the observation in (a) we conclude that $$C=D^\top AD$$ is a matrix whose first row contains only non-positive elements. Hence, for any $q=(-1,*,\ldots,*)^\top$ we have $(C\mathbb R^m_++q)\cap \mathbb R^m_+=\varnothing$. \end{proof} As we have stated in Remark \ref{pbpos}, a positive definite matrix possesses the \textbf{P}-property, but simple examples show that the converse is not true. We have the obvious assertion: \begin{lemma}\label{szimmertiz} If $A\in \mathbb R^{m\times m}$ is not positive definite, then its \emph{symmetrizant} \begin{equation}\label{szimm} S(A)=\frac{A+A^\top}{2} \end{equation} is not positive definite either.
\end{lemma} \begin{remark}\label{szimmdiag} Observe that the diagonal of $S(A)$ defined as in (\ref{szimm}) coincides with the diagonal of $A$. \end{remark} \begin{theorem}\label{eqn} Let $A\in \mathbb R^{m\times m}$. Then the following assertions are equivalent: \begin{enumerate} \item Each member of $\dy (A)$ is an \textbf{FS}-matrix. \item Each member of $\dy (A)$ is a \textbf{Q}-matrix. \item Each member of $\dy (A)$ is a \textbf{P}-matrix. \item $A$ is a positive definite matrix. \end{enumerate} Hence, if any of the above conditions holds, then $A$ possesses the $K$-\textbf{P}, $K$-\textbf{Q}, $K$-\textbf{FS}-properties for any simplicial cone $K$. \end{theorem} \begin{proof} Suppose that assertion 1 holds. (a) Assume that $A$ is not positive definite, that is, item 4 does not hold. Then, by Lemma \ref{szimmertiz}, the same is true for $S(A)$. That is, $S(A)$ is a symmetric matrix which is not positive definite. Then it has at least one non-positive eigenvalue; that is, in the spectral decomposition $$S(A)=ODO^\top$$ the diagonal matrix $D$ has some non-positive diagonal elements. On the other hand we have \begin{equation*} D= O^\top S(A)O=\frac{O^\top AO +O^\top A^\top O}{2}. \end{equation*} Now, since $O^\top A^\top O=(O^\top AO)^\top$, it follows, according to Remark \ref{szimmdiag}, that the diagonal of $D$ coincides with the diagonal of $O^\top AO$. Since this diagonal contains non-positive elements, it follows by Lemma \ref{dgnl} that the orbit $\dy (A)$ contains elements which are not \textbf{FS}-matrices. This shows the implication $1\Rightarrow 4$. (b) If 4 holds, then for each $L\in \glm$ and any $x\not= 0$ we have $$\langle L^\top AL x,x\rangle =\langle ALx,Lx\rangle >0,$$ that is, $L^\top AL$ is positive definite and hence, by Theorem 3.1.6 in \cite{MR3396730}, it is a \textbf{P}-matrix, and hence also a \textbf{Q}-matrix and an \textbf{FS}-matrix. Thus, we have the implications $4 \Rightarrow 3 \Rightarrow 2 \Rightarrow 1$.
(c) The last assertion of the theorem follows from Propositions \ref{LCPclassic} and \ref{ekviv}. \end{proof} \begin{lemma}\label{posdiag} If for $A \in \glm$ there exists $x\in \mathbb R^m$ with $\langle Ax,x\rangle >0,$ then $\dy (A)$ contains a matrix with at least one positive element on its diagonal. \end{lemma} \begin{proof} Let $S(A)$ be the symmetrizant of $A$. Then $\langle S(A)x,x\rangle >0$. Consider the spectral decomposition $$S(A)=ODO^\top $$ of $S(A)$. Then, $$\langle S(A)x,x\rangle =\langle ODO^\top x,x\rangle = \langle DO^\top x,O^\top x\rangle >0.$$ Hence, the diagonal matrix $D$ must contain positive elements, since $\langle Dy,y\rangle= \sum_i d_{ii}y_i^2 >0$ with $y=O^\top x$. Now, $$ D= \frac{O^{-1}A(O^\top)^{-1}+O^{-1}A^\top(O^\top)^{-1}}{2}=\frac{O^{-1}A(O^\top)^{-1}+(O^{-1}A(O^\top)^{-1})^\top}{2},$$ hence the diagonal of $O^{-1}A(O^\top)^{-1}$ coincides with the diagonal of $D$ and hence it must contain positive elements. \end{proof} The matrix $A=(a_{ij})_{i,j=1,...,m}\in \mathbb R^{m\times m}$ is called \emph{positive} if $a_{ij}>0$, $\forall i,j$. \begin{lemma}\label{strpos} For the matrix $A\in \mathbb R^{m\times m}$ the following two assertions are equivalent: \begin{enumerate} \item $\dy (A)$ contains a matrix with a positive element on its diagonal, \item $\dy (A)$ contains a positive matrix. \end{enumerate} \end{lemma} \begin{proof} (a) We first prove that if the matrix $A:=(a_{ij})_{i,j=1,\dots,m}\in \mathbb R^{m\times m}$ contains a positive principal submatrix of order $n-1<m$, then it has a congruent matrix containing a positive principal submatrix of order $n$. Suppose that $A(1:n,1:n):=(a_{ij})_{i,j=1,\dots,n}$ has the property that $a_{ij}>0$ whenever $i,j\in\{2,\dots,n\}$ and let $A(2:n,2:n)=(a_{ij})_{i,j=2,\dots,n}$. Denote by $I\in\mathbb R^{n\times n}$ the unit matrix and let $E_{12}\in\mathbb R^{n\times n}$ be the matrix with $1$ in the position $(i,j)=(1,2)$ and $0$ elsewhere. Let $L_t=I+tE_{12}$ with $t$ a real parameter.
Put \[B=L_tA(1:n,1:n)L_t^\top=(b_{ij})_{i,j=1,...,n}\] and $B(2:n,2:n)=(b_{ij})_{i,j=2,\dots,n}$. Then, we have \begin{equation}\label{B22} B(2:n,2:n)=A(2:n,2:n), \end{equation} \begin{equation}\label{b11} b_{11}=a_{11}+ta_{21}+ta_{12}+t^2a_{22}, \end{equation} \begin{equation}\label{b1i} b_{1i}=a_{1i}+ta_{2i} ,\;\;i\geq 2, \end{equation} \begin{equation}\label{bi1} b_{i1}=a_{i1}+ta_{i2} ,\;\;i\geq 2. \end{equation} From (\ref{B22}), (\ref{b11}), (\ref{b1i}) and (\ref{bi1}), it follows that for $t>0$ large enough we will have $b_{ij}>0,\;i,j=1,...,n$. (b) Applying the procedure from (a) repeatedly, the element $a_{ii}>0$ on the diagonal of $A$ can be augmented to obtain in $\dy (A)$ a matrix with a positive principal submatrix of order $2$, then a matrix with a positive principal submatrix of order $3$ in $\dy (A)$, and so on, until a positive matrix in $\dy (A)$ is obtained. \end{proof} \begin{theorem}\label{nempd} If for $A\in \glm$ there exists an $x\in \mathbb R^m$ with $\langle Ax,x\rangle >0$ and a $y\in \mathbb R^m\setminus \{0\}$ with $\langle Ay,y\rangle \leq 0$, then $\dy (A)$ contains non-\textbf{FS}-matrices and \textbf{Q}-matrices as well. \end{theorem} \begin{proof} Since $A$ is not positive definite, by Theorem \ref{eqn} $\dy (A)$ must contain non-\textbf{FS}-matrices. By Lemma \ref{posdiag}, $\dy (A)$ must contain a matrix $B$ with at least one positive element on its diagonal. Then, by Lemma \ref{strpos}, $\dy (B)=\dy (A)$ must contain a positive matrix. By Theorem 3.8.5 in \cite{MR3396730}, it follows that such a matrix is a \textbf{Q}-matrix. \end{proof} \begin{corollary}\label{vegso} Suppose that $A\in \glm$. Then, exactly one of the following alternatives holds: \begin{enumerate} \item $\langle Ax,x\rangle >0$, $\forall x\in \mathbb R^m\setminus \{0\}$ $\iff$ each member of $\dy (A)$ is an \textbf{FS}-matrix $\iff$ each member of $\dy (A)$ is a \textbf{Q}-matrix $\iff$ each member of $\dy (A)$ is a \textbf{P}-matrix.
\item If $\langle Ax,x\rangle \leq 0$, $\forall x\in \mathbb R^m$, then no matrix in $\dy (A)$ can have the \textbf{FS}-property. \item If for some non-zero elements $x$ and $y$ in $\mathbb R^m$ one has $\langle Ax,x\rangle >0$ and $\langle Ay,y\rangle \leq 0$, then $\dy (A)$ contains non-\textbf{FS}-matrices and \textbf{Q}-matrices as well. \end{enumerate} \end{corollary} \begin{proof} Only item 2 needs proof. Let $A\in \glm$ be a matrix with $\langle Ax,x\rangle \leq 0$, $\forall x\in \mathbb R^m$. Suppose to the contrary that there is a matrix $L^\top AL\in\dy (A)$ which has the \textbf{FS}-property. Let $q\in\mathbb R^m$ be a vector with all components negative. Then, there exists $x\in\mathbb R^m_+$ such that \begin{equation}\label{ef} L^\top ALx+q\in\mathbb R^m_+. \end{equation} Thus, \begin{equation}\label{epsp} 0\le \langle x,L^\top ALx+q\rangle=\langle Lx,ALx\rangle+\langle x,q\rangle. \end{equation} If $x\ne 0$, then the right-hand side of equation \eqref{epsp} is negative, which is a contradiction. Hence, $x=0$. Then, \eqref{ef} implies $q\in\mathbb R^m_+$, which contradicts the choice of $q$. In conclusion, no matrix in $\dy (A)$ can have the \textbf{FS}-property. \end{proof} The alternatives listed in Corollary \ref{vegso} can be formulated in complementarity terms too: \begin{corollary}\label{vegso2} Suppose that $A\in \glm$. Then exactly one of the following alternatives holds: \begin{enumerate} \item $A$ possesses the $K$-\textbf{FS}-property for any simplicial cone $K$ $\iff$ $A$ possesses the $K$-\textbf{Q}-property for any simplicial cone $K$ $\iff$ $A$ possesses the $K$-\textbf{P}-property for any simplicial cone $K$. \item There is no simplicial cone $K$ for which $A$ possesses the $K$-\textbf{FS}-property. \item There exists a simplicial cone $K$ for which $A$ does not have the $K$-\textbf{FS}-property and there exists a simplicial cone $L$ for which $A$ possesses the $L$-\textbf{Q}-property. \end{enumerate} \end{corollary} \end{document}
\begin{document} \title{Canonical steering ellipsoids of pure symmetric multiqubit states with two distinct spinors and volume monogamy of steering} \author{B. G. Divyamani} \affiliation{Tunga Mahavidyalaya, Thirthahalli-577432, Karnataka, India} \author{I. Reena} \affiliation{Department of Physics, Bangalore University, Bangalore-560 056, India} \author{Prasanta K. Panigrahi} \affiliation{Department of Physical Sciences, Indian Institute of Science Education and Research Kolkata, Mohanpur-741246, West Bengal, India} \author{A. R. Usha Devi} \affiliation{Department of Physics, Bangalore University, Bangalore-560 056, India} \affiliation{Inspire Institute Inc., Alexandria, Virginia, 22303, USA.} \author{Sudha} \email{tthdrs@gmail.com} \affiliation{Department of Physics, Kuvempu University, Shankaraghatta-577 451, Karnataka, India} \affiliation{Inspire Institute Inc., Alexandria, Virginia, 22303, USA.} \date{\today} \begin{abstract} Quantum steering ellipsoid formalism provides a faithful representation of all two-qubit states and is useful in obtaining their correlation properties. The steering ellipsoids of two-qubit states that have undergone local operations on both the qubits so as to bring the state to its canonical form are the so-called {\emph{canonical steering ellipsoids}}. The steering ellipsoids corresponding to the two-qubit subsystems of permutation symmetric $N$-qubit states are considered here. We construct and analyze the geometric features of the canonical steering ellipsoids corresponding to pure permutation symmetric $N$-qubit states with two distinct spinors. Depending on the degeneracy of the two spinors in the pure symmetric $N$-qubit state, there arise several families which cannot be converted into one another through Stochastic Local Operations and Classical Communications (SLOCC). 
The canonical steering ellipsoids of the two-qubit states drawn from the pure symmetric $N$-qubit states with two distinct spinors allow for a geometric visualization of the SLOCC equivalence classes of states. We show that the states belonging to the W-class correspond to oblate spheroids centered at $(0,0,1/(N-1))$ with fixed semiaxes lengths $1/\sqrt{N-1}$ and $1/(N-1)$. The states belonging to all other SLOCC inequivalent families correspond to ellipsoids centered at the origin of the Bloch sphere. We also explore volume monogamy relations of states belonging to these families, mainly the W-class of states. \end{abstract} \pacs{03.65.Ud, 03.67.Bg} \maketitle \section{Introduction} The Bloch sphere representation of a single qubit contains valuable geometric information needed for quantum information processing tasks. A natural generalization and an analogous picture for a two-qubit system is provided by the {\emph{quantum steering ellipsoid}}~\cite{jevtic2014,MilneNJP2014,MilnePRA2016} and is helpful in understanding correlation properties such as quantum discord~\cite{shi2011,shi}, volume monogamy of steering~\cite{MilneNJP2014,MilnePRA2016} etc. The quantum steering ellipsoid is the set of all Bloch vectors to which one party's qubit could be `steered' when all possible measurements are carried out on the qubit belonging to the other party. The volume of the steering ellipsoids~\cite{jevtic2014} corresponding to the two-qubit subsystems of an $N$-qubit state, $N>3$, captures monogamy properties of the state effectively~\cite{MilneNJP2014,MilnePRA2016} and provides insightful information about two-qubit entanglement.
While the quantum steering ellipsoid~\cite{jevtic2014,MilneNJP2014,MilnePRA2016} is the set of all Bloch vectors of the first qubit steered by local operations on the second qubit, the so-called {\emph{canonical steering ellipsoid}}~\cite{verstraete2001,fvthesis,supra} is the steering ellipsoid of a two-qubit state that has attained a canonical form under suitable SLOCC operations on {\emph {both the qubits}}. It has been shown that the SLOCC canonical forms of a two-qubit state can either be a Bell diagonal form or a nondiagonal one (when the two-qubit state is rank-deficient)~\cite{verstraete2001,supra}. The canonical steering ellipsoids corresponding to the two-qubit states can thus have only two distinct forms~\cite{verstraete2001,supra} and provide a much simpler geometric picture representing the set of all SLOCC equivalent two-qubit states. The canonical steering ellipsoids corresponding to the two-qubit subsystems of pure three-qubit permutation symmetric states are analyzed in Ref.~\cite{can}. It has been shown in Ref.~\cite{can} that the two SLOCC inequivalent families of pure three-qubit permutation symmetric states, the W-class of states (with two distinct spinors) and the GHZ class of states (with three distinct spinors), correspond to distinct canonical steering ellipsoids. While an ellipsoid centered at the origin of the Bloch sphere is the canonical steering ellipsoid for the GHZ class of states, an oblate spheroid with its center shifted along the polar axis is the one for the W-class of states. Using these, the volume monogamy relations are established and the obesity of the steering ellipsoids is used to obtain expressions for concurrence of states belonging to these two SLOCC inequivalent families in Ref.~\cite{can}. In this paper, we extend the analysis to a class of $N$-qubit pure states which are symmetric under exchange of qubits.
Through the SLOCC canonical forms of the two-qubit reduced state, extracted from pure symmetric {\emph{multiqubit}} states with two distinct spinors, and the Lorentz canonical forms of their real representatives, we examine the features of {\emph {canonical steering ellipsoids}} associated with them. We identify the special features of the canonical steering ellipsoid representing $N$-qubit states of the W-class; these features distinguish this class from all other SLOCC inequivalent families of pure symmetric $N$-qubit states. We discuss the volume monogamy of steering for pure permutation symmetric $N$-qubit states and obtain the volume monogamy relation satisfied by the W-class of states. An expression for the obesity of the steering ellipsoid and thereby an expression for the concurrence of two-qubit subsystems of $N$-qubit states belonging to the W-class is obtained. Contents of this paper are organized as follows: In Sec.~II, we give a brief review on the SLOCC classification of pure permutation symmetric multiqubit states based on the Majorana representation~\cite{majorana,bastin,solano,arus} and obtain the two-qubit subsystems of the states belonging to SLOCC inequivalent families of pure symmetric multiqubit states with two distinct spinors. Sec.~III provides an outline of the real matrix representation of a two-qubit density matrix and its Lorentz canonical forms under SLOCC transformation of the two-qubit density matrix. We also obtain the Lorentz canonical forms of two-qubit subsystems corresponding to SLOCC inequivalent families, in Sec.~III. In Sec.~IV, we analyze the nature of steering ellipsoids associated with the distinct Lorentz canonical forms obtained in Sec.~III. The volume monogamy of steering for pure symmetric multiqubit states with two distinct spinors is discussed along with an illustration for the W-class of states, in Sec.~V. A summary of our results is presented in Sec.~VI.
\section{Majorana geometric representation of pure symmetric $N$-qubit states with two distinct spinors} Ettore Majorana, in his 1932 paper~\cite{majorana}, proposed that a pure spin $j=\frac{N}{2}$ quantum state can be represented as a {\em symmetrized} combination of $N$ constituent spinors as follows: \begin{equation} \label{Maj} \vert \Psi_{\rm sym}\rangle={\mathcal N}\, \sum_{P}\, \hat{P}\, \{\vert \epsilon_1, \epsilon_2, \ldots \epsilon_N \rangle\}, \end{equation} where \begin{equation} \label{spinor} \vert\epsilon_l\rangle= \left( \cos(\alpha_l/2)\, \vert 0\rangle + \sin(\alpha_l/2) \, \vert 1\rangle\right) e^{i\beta_l/2},\ \ l=1,\,2,\ldots,\,N. \end{equation} The symbol $\hat{P}$ denotes the set of all $N!$ permutations of the spinors (qubits) and ${\mathcal N}$ is an overall normalization factor. The name Majorana {\emph {geometric}} representation is owing to the fact that it leads to an intrinsic geometric picture of the state in terms of $N$ points on the unit sphere. In fact, the spinors $\vert \epsilon_l\rangle$, $l=1,\,2,\ldots,\,N$ of (\ref{spinor}) correspond geometrically to $N$ points on the unit sphere $S^2$, with the pair of angles $(\alpha_l,\beta_l)$ determining the orientation of each point on the sphere. The pure symmetric $N$-qubit states characterized by {\emph{two}} distinct spinors are given by~\cite{bastin,solano,arus}, \begin{eqnarray} \label{dnk} \vert D_{N-k, k}\rangle &=& {\mathcal N}\, \sum_{P}\, \hat{P}\,\{ \vert \underbrace{\epsilon_1, \epsilon_1, \ldots , \epsilon_1}_{N-k};\ \underbrace{\epsilon_2, \epsilon_2,\ldots, \epsilon_2}_{k}\rangle\}. \end{eqnarray} Here, one of the spinors, say $\vert \epsilon_1 \rangle$, occurs $N-k$ times whereas the other spinor $\vert \epsilon_2 \rangle$ occurs $k$ times in each term of the symmetrized combination.
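For a concrete instance of the symmetrization in (\ref{Maj}), take $N=3$ with $\vert\epsilon_1\rangle=\vert 0\rangle$ occurring twice and $\vert\epsilon_2\rangle=\vert 1\rangle$ occurring once ($k=1$); summing the product ket over all $3!$ permutations and normalizing reproduces the three-qubit W state. A minimal pure-Python sketch:

```python
from itertools import permutations
from math import sqrt

# epsilon_1 = |0> occurs N - k = 2 times, epsilon_2 = |1> occurs k = 1 time
spinors = ['0', '0', '1']
state = {}                       # amplitude of each computational basis ket
for p in permutations(spinors):
    key = ''.join(p)
    state[key] = state.get(key, 0.0) + 1.0

norm = sqrt(sum(a*a for a in state.values()))
state = {k: a/norm for k, a in state.items()}

# each of |001>, |010>, |100> carries amplitude 1/sqrt(3): the W state
print(sorted(state.items()))
```

Note that `itertools.permutations` treats repeated entries as distinct, so each basis ket is produced $2!$ times; the multiplicity cancels upon normalization.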
Under identical local unitary transformations, the pure symmetric $N$-qubit states with two distinct spinors can be brought to the canonical form~\cite{arus}, \begin{eqnarray} \label{nono} \vert D_{N-k, k}\rangle &\equiv & \sum_{r=0}^k\, \beta^{(k)}_{r}\,\, \left\vert\frac{N}{2},\frac{N}{2}-r \right\rangle, \ \ \ \ k=1,\,2,\,3,\ldots \left[\frac{N}{2}\right]\\ \label{nono1} \beta^{(k)}_{r}&=&{\mathcal N}\,\, \sqrt{\frac{N!(N-r)!}{r!}}\,\frac{a^{k-r}\, b^r}{(N-k)! (k-r)!},\ \ \ \ 0\leq a<1, \ \ b=\sqrt{1-a^2}. \end{eqnarray} Notice that $\left\vert\frac{N}{2},\frac{N}{2}-r \right\rangle$, $r=0,\,1,\,2,\ldots$, are the Dicke states, which are common eigenstates of the collective angular momentum operators $J^2$ and $J_z$. They are basis states of the $(N+1)$-dimensional symmetric subspace of the collective angular momentum space of $N$ qubits. The states $\vert D_{N-k, k}\rangle$ (see (\ref{nono}), (\ref{nono1})) are characterized by only one real parameter `$a$' and thus form a one-parameter family of states $\{{\mathcal{D}}_{N-k,k}\}$~\cite{arus,saruakr}. When $a=0$, the states $\vert D_{N-k, k}\rangle$ reduce to the Dicke states $\left\vert N/2,\,N/2-k \right\rangle$~\cite{arus,saruakr} in which $\vert\epsilon_1\rangle=\vert 0\rangle$ and $\vert \epsilon_2\rangle=\vert 1\rangle$ (see (\ref{dnk})). When $a\longrightarrow 1$, $\vert D_{N-k, k}\rangle$ becomes a separable state consisting of only one spinor $\vert \epsilon_1\rangle$ or $\vert \epsilon_2\rangle$. It is important to notice that in the family $\{{\mathcal{D}}_{N-k,k}\}$, different values of $k$, ($k=1,\,2,\,3,\ldots \left[\frac{N}{2}\right]$), correspond to different SLOCC inequivalent classes~\cite{arus}. That is, a state $\vert D_{N-k, k}\rangle$ cannot be converted into $\vert D_{N-k',k'}\rangle$, $k\neq k'$, through any choice of local unitary (identical) transformations.
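The coefficients (\ref{nono1}) are straightforward to evaluate numerically, with ${\mathcal N}$ fixed by normalization. The sketch below uses hypothetical example values ($N=5$, $a=0.6$) and checks that for $k=1$ the result agrees with the closed forms $\beta^{(1)}_0={\mathcal N}Na$ and $\beta^{(1)}_1={\mathcal N}\sqrt{N(1-a^2)}$ quoted later for the W class:

```python
from math import factorial, sqrt

def betas(N, k, a):
    """Coefficients beta_r^{(k)}, r = 0..k, with the overall constant fixed
    by normalization (illustrative reimplementation of the formula)."""
    b = sqrt(1 - a*a)
    raw = [sqrt(factorial(N) * factorial(N - r) / factorial(r))
           * a**(k - r) * b**r / (factorial(N - k) * factorial(k - r))
           for r in range(k + 1)]
    norm = sqrt(sum(c*c for c in raw))
    return [c/norm for c in raw]

# hypothetical example values
N, a = 5, 0.6
beta = betas(N, 1, a)

# closed forms for k = 1: beta_0 = Nc*N*a, beta_1 = Nc*sqrt(N(1-a^2))
Nc = 1 / sqrt(N*N*a*a + N*(1 - a*a))
assert abs(beta[0] - Nc*N*a) < 1e-12
assert abs(beta[1] - Nc*sqrt(N*(1 - a*a))) < 1e-12
assert abs(sum(c*c for c in beta) - 1) < 1e-12
```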
In fact, different values of $k$ lead to different {\emph {degeneracy configurations}}~\cite{arus} of the two spinors $\vert \epsilon_1 \rangle$, $\vert \epsilon_2 \rangle$ in the state $\vert D_{N-k, k}\rangle$. When $k=1$, one gets the W-class of states $\{{\mathcal{D}}_{N-1,1}\}$ where one of the spinors, say $\vert \epsilon_2 \rangle$, repeats only once in each term of the symmetrized combination (see (\ref{dnk})) and the other spinor $\vert \epsilon_1 \rangle$ repeats $N-1$ times. The $N$-qubit W state \begin{equation*} \vert {{W}}_N\rangle=\frac{1}{\sqrt{N}}\left[\vert 000\ldots 1\rangle+\vert 000\ldots 10\rangle+\cdots+\vert 100\ldots 00\rangle\right]\equiv \left\vert \frac{N}{2},\frac{N}{2}-1\right\rangle \end{equation*} belongs to the family $\{{\mathcal{D}}_{N-1,1}\}$ and hence the name {\emph{W-class of states}}. The Dicke state \begin{equation*} \left\vert \frac{N}{2},\frac{N}{2}-2\right\rangle=\sqrt{\frac{2}{N(N-1)}}\left[\vert 000\ldots011\rangle+\vert 000\ldots 0110\rangle+\cdots+\vert 110\ldots 00\rangle\right] \end{equation*} is a typical state of the family $\{{\mathcal{D}}_{N-2,2}\}$. In all, there are $\left[\frac{N}{2}\right]$ SLOCC inequivalent families in the set of all pure permutation symmetric $N$-qubit states with two distinct spinors~\cite{fn}. \subsection{Two-qubit reduced density matrices of the states $\vert D_{N-k,\,k}\rangle$} The two-qubit marginal $\rho^{(k)}$ corresponding to an arbitrary pair of qubits in the pure symmetric $N$-qubit state $\vert D_{N-k,\,k}\rangle\in \{{\mathcal{D}}_{N-k,k}\}$ is obtained by tracing over the remaining $N-2$ qubits in it. In Ref.
\cite{akhiss}, it has been shown, using the algebra of addition of angular momenta, $j_1=1$ (corresponding to the two-qubit marginal) and $j_2=(N-2)/2$ (corresponding to the remaining $N-2$ qubits), that the two-qubit reduced density matrix $\rho^{(k)}$ has the form \begin{eqnarray} \label{rhok_matrix} \rho^{(k)}&=&\ba{cccc} A^{(k)} \ \ & B^{(k)} \ \ & B^{(k)}\ \ & C^{(k)} \ \ \\ B^{(k)} \ \ & D^{(k)}\ \ & D^{(k)}\ \ & E^{(k)} \ \ \\ B^{(k)} \ \ & D^{(k)} \ \ & D^{(k)} \ \ & E^{(k)} \ \ \\ C^{(k)} \ \ & E^{(k)} \ \ & E^{(k)} \ \ & F^{(k)} \ \ \end{array}\right). \end{eqnarray} The elements $A^{(k)},\, B^{(k)},\, C^{(k)},\, D^{(k)},\, E^{(k)}$ and $F^{(k)}$ are real and are explicitly given by~\cite{akhiss} \begin{eqnarray} \label{elements} A^{(k)}=\sum_{r=0}^k\, \left({\beta_r^{(k)}}\right)^2 \left({c^{(r)}_{1}}\right)^2, \ & & \ B^{(k)}=\frac{1}{\sqrt{2}}\sum_{r=0}^{k-1}\, {\beta^{(k)}_r} \beta^{(k)}_{r+1}\, c^{(r)}_{1} c^{(r+1)}_{0} \nonumber \\ & & \nonumber \\ C^{(k)}=\sum_{r=0}^{k-2}\, \beta^{(k)}_r \beta^{(k)}_{r+2}\,\, c^{(r)}_{1} c^{(r+2)}_{-1}, \ & & \ D^{(k)}=\frac{1}{2}\sum_{r=1}^{k}\, \left({\beta_r^{(k)}}\right)^2 \left({c^{(r)}_{0}}\right)^2 \\ & & \nonumber \\ E^{(k)}=\frac{1}{\sqrt{2}}\sum_{r=0}^{k-1}\, \beta^{(k)}_r \beta^{(k)}_{r+1}\,\, c^{(r)}_{0}c^{(r+1)}_{-1}, \ & & \ \ \ \ F^{(k)}=\sum_{r=0}^k\, \left({\beta_r^{(k)}}\right)^2 \left({c^{(r)}_{-1}}\right)^2, \nonumber \end{eqnarray} where $\beta_r^{(k)}$ are given as functions of the parameter `$a$' in (\ref{nono1}) and \begin{eqnarray} \label{cg_explicit} c^{(r)}_{1}&=&\sqrt{\frac{(N-r)(N-r-1)}{N(N-1)}},\ \ \ c^{(r)}_{-1}=\sqrt{\frac{r\, (r-1)}{N(N-1)}},\nonumber \\ c^{(r)}_{0}&=&\sqrt{\frac{2r\, (N-r)}{N(N-1)}} \end{eqnarray} are the Clebsch-Gordan coefficients $c^{(r)}_{m_2}~=~C\left(\frac{N}{2}-1,\, 1,\, \frac{N}{2};m-m_2,\, m_2, m \right)$, $m~=~\frac{N}{2}-r$, $m_2=1,\,0,\,-1$~\cite{Var}.
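As a quick consistency check, the three coefficients (\ref{cg_explicit}) satisfy $(c^{(r)}_{1})^2+(c^{(r)}_{0})^2+(c^{(r)}_{-1})^2=1$ for every $r$, reflecting the normalization of the coupled basis state when expanded in the product basis. An illustrative sketch (the value $N=6$ is hypothetical example data):

```python
from math import sqrt

def cg(N, r):
    """The three Clebsch-Gordan coefficients c_1, c_0, c_{-1} of the text."""
    c1 = sqrt((N - r)*(N - r - 1) / (N*(N - 1)))
    c0 = sqrt(2*r*(N - r) / (N*(N - 1)))
    cm1 = sqrt(r*(r - 1) / (N*(N - 1)))
    return c1, c0, cm1

N = 6   # hypothetical example value
for r in range(N + 1):
    c1, c0, cm1 = cg(N, r)
    # sum of squares equals ((N-r) + r)^2 - N, divided by N(N-1): unity
    assert abs(c1*c1 + c0*c0 + cm1*cm1 - 1) < 1e-12
```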
In particular, for W-class of states i.e., when $k=1$, we have \begin{eqnarray} \label{rho1} &&\rho^{(1)}=\mbox{Tr}_{N-2}\left(\vert D_{N-1,\,1}\rangle\langle D_{N-1,\,1}\vert\right)\nonumber \\ &&\ = \left( \left(\beta^{(1)}_0\right)^2+ \left(\beta^{(1)}_1\, c^{(1)}_1\right)^2 \right)\vert 1,\,1\rangle\langle 1,\,1 \vert \nonumber \\ && \ \ \ + \left(\beta^{(1)}_1\, c^{(1)}_0\right)^2 \vert 1,\,0\rangle\langle 1,\,0 \vert +\beta^{(1)}_0 \beta^{(1)}_1\, c^{(1)}_0 \vert 1,\,1\rangle\langle 1,\,0 \vert \nonumber \\ &&\ \ \ +\beta^{(1)}_0 \beta^{(1)}_1\, c^{(1)}_0 \vert 1,\,0\rangle\langle 1,\,1 \vert \end{eqnarray} Here (see (\ref{nono1})) we have $\beta^{(1)}_0={\mathcal N}N\, a $, $\beta^{(1)}_1={\mathcal N}\, \sqrt{N(1- a^2)}$ with ${\mathcal N}=\frac{1}{\sqrt{N^2\,a^2+N(1-a^2)}}$ and the associated non-zero Clebsch-Gordan coefficients (see (\ref{cg_explicit})) are given by \begin{equation} \label{cg_explicit1} c^{(1)}_1=\sqrt{\frac{N-2}{N}},\ \ \ c^{(1)}_0=\sqrt{\frac{2}{N}}. \end{equation} In the standard two-qubit basis $\{\vert 0_A,0_B \rangle, \vert 0_A,1_B \rangle, \vert 1_A,0_B \rangle, \vert 1_A,1_B \rangle\}$, the two-qubit density matrix $\rho^{(1)}$ drawn from the states $\vert D_{N-1,1}\rangle$ takes the form \begin{eqnarray} \label{rho1_matrix} \rho^{(1)}&=&\ba{cccc} A^{(1)} \ \ & B^{(1)} \ \ & B^{(1)}\ \ & 0 \ \ \\ B^{(1)} \ \ & D^{(1)}\ \ & D^{(1)}\ \ & 0 \ \ \\ B^{(1)} \ \ & D^{(1)} \ \ & D^{(1)} \ \ & 0 \ \ \\ 0 \ \ & 0 \ \ & 0 \ \ & 0 \ \ \end{array}\right) \end{eqnarray} where \begin{eqnarray} \label{rho1ele} A^{(1)}&=&\frac{N^2a^2+(N-2)(1-a^2)}{N^2\,a^2+N(1-a^2)},\ \ B^{(1)}=\frac{a\sqrt{1-a^2}}{1+a^2(N-1)},\ \nonumber \\ D^{(1)}&=& \frac{1-a^2}{N^2\,a^2+N(1-a^2)},\ \end{eqnarray} In a similar manner, the two-qubit subsystems of pure symmetric $N$-qubit states $\vert D_{N-k,k}\rangle$ belonging to each SLOCC inequivalent family $\{{\mathcal D}_{N-k,\, k}\}$, $k=2,\,3,\ldots,\,\left[\frac{N}{2}\right]$ can be obtained as a function of $N$ and 
`$a$' using Eqs. (\ref{rhok_matrix}), (\ref{elements}), (\ref{cg_explicit}). As is shown in Refs.~\cite{supra,can}, the real representative $\Lambda^{(k)}$ of the two-qubit subsystem $\rho^{(k)}$ and its Lorentz canonical form $\widetilde{\Lambda}^{(k)}$ are essential in obtaining the geometric representation of the states $\vert D_{N-k,k}\rangle$, for all $k$. We thus proceed to obtain $\Lambda^{(k)}$ and its Lorentz canonical form $\widetilde{\Lambda}^{(k)}$ in the following. \section{The real representation of $\rho^{(k)}$ and its Lorentz canonical forms} The real representative $\Lambda^{(k)}$ of the two-qubit state $\rho^{(k)}$ is a $4\times 4$ real matrix with its elements given by \begin{eqnarray} \label{lambda} \Lambda^{(k)}_{\mu \, \nu}&=& {\rm Tr}\,\left[\rho^{(k)}\, (\sigma_\mu\otimes\sigma_\nu)\,\right] \end{eqnarray} That is, $\Lambda^{(k)}_{\mu \, \nu}$, $\mu, \nu=0,\,1,\,2,\,3$ are the coefficients of expansion of $\rho^{(k)}$, expanded in the Hilbert-Schmidt basis $\{\sigma_\mu\otimes \sigma_\nu\}$: \begin{eqnarray} \label{rho2q} \rho^{(k)}&=&\frac{1}{4}\, \sum_{\mu,\,\nu=0}^{3}\, \Lambda^{(k)}_{\mu \, \nu}\, \left( \sigma_\mu\otimes\sigma_\nu \right), \end{eqnarray} Here, $\sigma_i$, $i=1,\,2,\,3$ are the Pauli spin matrices and $\sigma_0$ is the $2\times 2$ identity matrix; \begin{eqnarray} \label{sigmamu} \sigma_0=\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right),\ \ \sigma_1=\left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right),\ \ \sigma_2=\left(\begin{array}{cc} 0 & -i \\ i & 0 \end{array}\right),\ \ \sigma_3=\left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right). 
\end{eqnarray} It can be readily seen that (see (\ref{lambda}), (\ref{rho2q})) the real $4\times 4$ matrix $\Lambda^{(k)}$ has the form \begin{eqnarray} \label{Lg} \Lambda^{(k)}&=&\left(\begin{array}{llll} 1 & s_1 & s_2 & s_3 \\ r_1 & t_{11} & t_{12} & t_{13} \\ r_2 & t_{21} & t_{22} & t_{23} \\ r_3 & t_{31} & t_{32} & t_{33} \\ \end{array}\right), \end{eqnarray} where ${\mathbf r}=(r_1,\,r_2,\,r_3)^T$, ${\mathbf s}=(s_1,\,s_2,\,s_3)^T$ are the Bloch vectors of the individual qubits and $T=(t_{ij})$ is the correlation matrix; \begin{eqnarray} \label{ri} r_i&=&\Lambda^{(k)}_{i \, 0}= {\rm Tr}\,\left[\rho^{(k)}\, (\sigma_i\otimes\sigma_0)\,\right] \ \ \\ \label{sj} s_j&=& \Lambda^{(k)}_{0 \, j}={\rm Tr}\,\left[\rho^{(k)}\, (\sigma_0\otimes\sigma_j)\,\right] \\ \label{tij} t_{ij}&=& \Lambda^{(k)}_{i \, j}= {\rm Tr}\,\left[\rho^{(k)}\, (\sigma_i\otimes\sigma_j)\,\right],\ \ \ \ i,\,j=1,\,2,\,3. \end{eqnarray} For a symmetric two-qubit density matrix, the Bloch vectors $\mathbf{r}$ and $\mathbf{s}$ are identical and hence $r_i=s_i$, $i=1,\,2,\,3$. From the structure of $\rho^{(k)}$ in (\ref{rhok_matrix}) and using (\ref{ri}), (\ref{sj}), (\ref{tij}) we obtain the general form of the real matrix $\Lambda^{(k)}$ as \begin{eqnarray} \label{Lk} \Lambda^{(k)}&=&\left(\begin{array}{cccc} 1& \frac{2(B^{(k)}+E^{(k)})}{A^{(k)}+2D^{(k)}+F^{(k)}} & 0 & \frac{A^{(k)}-F^{(k)}}{A^{(k)}+2D^{(k)}+F^{(k)}} \\ \frac{2(B^{(k)}+E^{(k)})}{A^{(k)}+2D^{(k)}+F^{(k)}} & \frac{2(C^{(k)}+D^{(k)})}{A^{(k)}+2D^{(k)}+F^{(k)}} & 0 & \frac{2(B^{(k)}-E^{(k)})}{A^{(k)}+2D^{(k)}+F^{(k)}} \\ 0 & 0 & \frac{2(D^{(k)}-C^{(k)})}{A^{(k)}+2D^{(k)}+F^{(k)}} & 0 \\ \frac{A^{(k)}-F^{(k)}}{A^{(k)}+2D^{(k)}+F^{(k)}} & \frac{2(B^{(k)}-E^{(k)})}{A^{(k)}+2D^{(k)}+F^{(k)}} & 0 & 1-\frac{4D^{(k)}}{A^{(k)}+2D^{(k)}+F^{(k)}} \\ \end{array}\right).
\end{eqnarray} The elements of $\Lambda^{(k)}$, for different $k$, can be evaluated using (\ref{elements}) and (\ref{cg_explicit}). \subsection{Lorentz canonical forms of $\Lambda^{(k)}$} Under a SLOCC transformation, the two-qubit density matrix $\rho^{(k)}$ transforms to $\widetilde{\rho}^{(k)}$ as \begin{eqnarray} \label{rhokab} \rho^{(k)}\longrightarrow\widetilde{\rho}^{(k)}&=&\frac{(A\otimes B)\, \rho^{(k)}\, (A^\dag\otimes B^\dag)} {{\rm Tr}\left[\rho^{(k)}\, (A^\dag\, A\otimes B^\dag\, B)\right]}. \end{eqnarray} Here, $A, B\in {\rm SL(2,C)}$ denote $2\times 2$ complex matrices with unit determinant. A suitable choice of $A$ and $B$ takes the two-qubit density matrix $\rho^{(k)}$ to its canonical form $\widetilde{\rho}^{(k)}$. The transformation of $\rho^{(k)}$ in (\ref{rhokab}) leads to the transformation~\cite{supra,can} \begin{eqnarray} \label{sl2c} \Lambda^{(k)}\longrightarrow \widetilde{\Lambda}^{(k)}&=&\frac{L_A\,\Lambda^{(k)}\, L^T_B}{\left(L_A\,\Lambda^{(k)}\, L^T_B\right)_{00}} \end{eqnarray} of its real representative $\Lambda^{(k)}$. In (\ref{sl2c}), $L_A,\, L_B\in SO(3,1)$ are $4\times 4$ proper orthochronous Lorentz transformation matrices~\cite{KNS} corresponding respectively to $A$, $B\in SL(2,C)$, and the superscript `$T$' denotes the transpose operation. The Lorentz canonical form $\widetilde{\Lambda}^{(k)}$ of $\Lambda^{(k)}$, and thereby the SLOCC canonical form of the two-qubit density matrix $\rho^{(k)}$ (see (\ref{rhokab})), can be obtained by constructing the $4\times 4$ real symmetric matrix $\Omega^{(k)}=\Lambda^{(k)}\, G\, \left(\Lambda^{(k)}\right)^T$, where $G={\rm diag}\,(1,-1,-1,-1)$ denotes the Lorentz metric.
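As a quick numerical illustration of the construction of $\Lambda^{(k)}$ in (\ref{lambda}) and of the matrix $\Omega^{(k)}=\Lambda^{(k)}G\left(\Lambda^{(k)}\right)^T$, the following Python sketch (our own verification aid; the Bell state used as input is an arbitrary example, not taken from the text) computes the real representative of a two-qubit density matrix:

```python
import numpy as np

# Pauli basis: sigma_0 (identity), sigma_1, sigma_2, sigma_3
sig = [np.eye(2, dtype=complex),
       np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

G = np.diag([1.0, -1.0, -1.0, -1.0])   # Lorentz metric

def real_rep(rho):
    """Lambda_{mu,nu} = Tr[rho (sigma_mu x sigma_nu)], Eq. (lambda)."""
    return np.array([[np.trace(rho @ np.kron(sig[m], sig[n])).real
                      for n in range(4)] for m in range(4)])

# Example input: the Bell state |Phi+> = (|00> + |11>)/sqrt(2)
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
Lam = real_rep(rho)              # diag(1, 1, -1, 1) for this state
Omega = Lam @ G @ Lam.T          # here Omega = G, so G @ Omega is the identity
```

For this maximally entangled input, $G\,\Omega$ is the identity matrix, i.e., all four eigenvalues $\lambda_\mu$ are equal to $1$.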
Using the defining property~\cite{KNS} $L^T\,G\,L=G$ of Lorentz transformation $L$, it can be seen that $\Omega^{(k)}$ undergoes a {\em Lorentz congruent transformation} under SLOCC (up to an overall factor)~\cite{supra} as \begin{eqnarray} \label{oa} \Omega^{(k)}\rightarrow \widetilde{\Omega}^{(k)}_A&=& \widetilde{\Lambda}^{(k)}\, G\, \left(\widetilde{\Lambda}^{(k)}\right)^T \nonumber \\ &=& L_{A}\, \Lambda^{(k)}\, L_{B}^T\, G \, L_{B}\, {\Lambda^{(k)}}^T L_{A}^T \nonumber \\ &=& L_{A}\, \Omega^{(k)}\, L_{A}^T. \end{eqnarray} It has been shown in Ref.~\cite{supra} that $\widetilde{\Lambda}^{(k)}$ can either be a real $4\times 4$ diagonal matrix or a non-diagonal matrix with only one off-diagonal element, depending on the eigenvalues, eigenvectors of $G\,\Omega^{(k)}=G\left(\Lambda^{(k)}\, G\, \left(\Lambda^{(k)}\right)^T\right)$. \begin{itemize} \item[(i)] The diagonal canonical form $\widetilde{\Lambda}^{(k)}_{I_c}$ results when the eigenvector $X_0$ associated with the highest eigenvalue $\lambda_0$ of $G\,\Omega^{(k)}$ obeys the Lorentz invariant condition $X_0^T\, G\, X_0>0$. The diagonal canonical form $\widetilde{\Lambda}^{(k)}_{I_c}$ is explicitly given by \begin{eqnarray} \label{lambda1c} \Lambda^{(k)}\longrightarrow\widetilde{\Lambda}^{(k)}_{I_c}&=&\frac{L_{A_1}\,\Lambda^{(k)}\, L^T_{B_1}}{\left(L_{A_1}\,\Lambda^{(k)}\, L^T_{B_1}\right)_{00}}\nonumber \\ &=&{\rm diag}\, \left(1,\,\sqrt{\frac{\lambda_1}{\lambda_0}},\sqrt{\frac{\lambda_2}{\lambda_0}},\, \pm\, \sqrt{\frac{\lambda_3}{\lambda_0}}\right), \end{eqnarray} where $\lambda_0\geq\lambda_1\geq\lambda_2\geq \lambda_3> 0$ are the {\em non-negative} eigenvalues of $G\,\Omega^{(k)}$. The Lorentz transformations $L_{A_1},\, L_{B_1}\in SO(3,1)$ in (\ref{lambda1c}) respectively correspond to $SL(2,C)$ transformation matrices $A_1,\, B_1$ which take the two-qubit density matrix $\rho^{(k)}$ to its SLOCC canonical form $\widetilde{\rho}^{(k)}_{I_c}$ through the transformation (\ref{rhokab}). 
The diagonal form of $\widetilde{\Lambda}^{(k)}_{I_c}$ readily leads, on using (\ref{rho2q}), to Bell-diagonal form \begin{eqnarray} \label{rhobd} \widetilde{\rho}^{(k)}_{\,I_c} &=& \frac{1}{4}\, \left( \sigma_0\otimes \sigma_0 + \sum_{i=1,2}\, \sqrt{\frac{\lambda_i}{\lambda_0}}\, \left(\sigma_i\otimes\sigma_i\right) \pm \sqrt{\frac{\lambda_3}{\lambda_0}}\, \left(\sigma_3\otimes\sigma_3\right) \right) \end{eqnarray} as the canonical form of the two-qubit state $\rho^{(k)}$. \item[(ii)] The Lorentz canonical form of $\Lambda^{(k)}$ turns out to be a non-diagonal matrix (with only one non-diagonal element) given by \begin{eqnarray} \label{lambda2c} \Lambda^{(k)}\longrightarrow\widetilde{\Lambda}^{(k)}_{II_c}&=&\frac{L_{A_2}\,\Lambda^{(k)}\, L^T_{B_2}}{\left(L_{A_{2}}\, \Lambda^{(k)}\, L^T_{B_{2}}\right)_{00}} = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & a_1 & 0 & 0 \\ 0 & 0 & -a_1 & 0 \\ 1-a_0 & 0 & 0 & a_0 \ \ \end{array}\right) \ \ \end{eqnarray} when the non-negative eigenvalues of $G\Omega^{(k)}$ are doubly degenerate with $\lambda_0\geq \lambda_1$ and the eigenvector $X_0$ belonging to the highest eigenvalue $\lambda_0$ satisfies the Lorentz invariant condition $X_0^T\, G\, X_0 =0$. In Ref.~\cite{supra}, it has been shown that when the maximum amongst the doubly degenerate eigenvalues of $G\Omega^{(k)}$ possesses an eigenvector $X_0$ satisfying the condition $X_0^T\, G\, X_0 =0$, the real symmetric matrix $\Omega^{(k)}=\Lambda^{(k)}G\left(\Lambda^{(k)}\right)^T$ attains the non-diagonal Lorentz canonical form given by \begin{eqnarray} \label{yyy} \Omega^{(k)}_{II_c}&=&\widetilde{\Lambda}^{(k)}_{II_c}\,G\,\left(\widetilde{\Lambda}^{(k)}_{II_c}\right)^T=L_{A_2}\,\Omega^{(k)}\, L^T_{A_2} \nonumber \\ &=& \,\left(\begin{array}{cccc} \phi_0 & 0 & 0 & \phi_0-\lambda_0 \\ 0 & -\lambda_1 & 0 & 0 \\ 0 & 0 & -\lambda_1 & 0 \\ \phi_0-\lambda_0 & 0 & 0 & \phi_0-2\lambda_0 \ \ \end{array}\right). 
\end{eqnarray} The parameters $a_0$, $a_1$ in (\ref{lambda2c}) are related to the eigenvalues $\lambda_0$, $\lambda_1$ of $G\Omega^{(k)}$ and the $00^{\rm th}$ element of $\widetilde{\Omega}^{(k)}_{II_c}$ (see (\ref{yyy})). It can be seen that~\cite{supra} \begin{eqnarray} \label{phi0} && a_0=\frac{\lambda_0}{\phi_0},\ \ a_1=\sqrt{\frac{\lambda_1}{\phi_0}},\ \ \mbox{where} \ \ \phi_0=\left(\Omega^{(k)}_{II_c}\right)_{00}=\left[\left(L_{A_2}\,\Lambda^{(k)}\, L^T_{B_2}\right)_{00}\right]^2. \end{eqnarray} The Lorentz matrices $L_{A_{2}},\, L_{B_{2}}\in SO(3,1)$ correspond to the SL(2,C) transformations $A_{2}$, $B_{2}$ that transform $\rho^{(k)}$ to its SLOCC canonical form $\widetilde{\rho}^{(k)}_{II_c}$ (see (\ref{rhokab})). The non-diagonal canonical form $\widetilde{\Lambda}^{(k)}_{II_c}$ leads to the SLOCC canonical form $\widetilde{\rho}^{(k)}_{\,II_c}$ of the two-qubit density matrix $\rho^{(k)}$, on using (\ref{rho2q}); \begin{equation} \label{rho2} \widetilde{\rho}^{(k)}_{\,II_c}=\frac{1}{2}\left(\begin{array}{cccc} 1 & 0 & 0 & a_1 \\ 0 & 1-a_0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ a_1 & 0 & 0 & a_0 \ \ \end{array}\right); \ \ \ 0\leq a_1^2\leq a_0 \leq 1. \end{equation} \end{itemize} \subsection{Lorentz canonical form of $\Lambda^{(1)}$ corresponding to the W-class of states $\{{\mathcal{D}}_{N-1,\,1}\}$} Using the explicit structure of the two-qubit state $\rho^{(1)}$ given in (\ref{rho1_matrix}), (\ref{rho1ele}), its real representative $\Lambda^{(1)}$ is obtained as (see (\ref{lambda})) \begin{eqnarray} \label{lambda32} \Lambda^{(1)}&=&\left(\begin{array}{cccc}1 & \frac{2a\sqrt{1-a^2}}{1+a^2(N-1)} &0& 1+\frac{2a^2}{1+a^2(N-1)}-\frac{2}{N} \\ \frac{2a\sqrt{1-a^2}}{1+a^2(N-1)} & \frac{2(1-a^2)}{N\left(1+a^2(N-1)\right)} & 0 & \frac{2a\sqrt{1-a^2}}{1+a^2(N-1)} \\ 0 & 0 & \frac{2(1-a^2)}{N\left(1+a^2(N-1)\right)} & 0 \\ 1+\frac{2a^2}{1+a^2(N-1)}-\frac{2}{N} & \frac{2a\sqrt{1-a^2}}{1+a^2(N-1)} & 0 & 1+\frac{4a^2}{1+a^2(N-1)}-\frac{4}{N} \end{array} \right)= \left(\Lambda^{(1)}\right)^T.
\end{eqnarray} We now construct the $4\times 4$ symmetric matrix $\Omega^{(1)}$ and obtain \begin{eqnarray} \label{omega32} \Omega^{(1)}&=&\Lambda^{(1)}\, G\, \left(\Lambda^{(1)}\right)^T= \Lambda^{(1)}\, G\, \Lambda^{(1)}\nonumber \\ &=& \chi\left(\begin{array}{cccc} N-1 & 0 &0 & N-2 \\ 0& -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ N-2 & 0 & 0 & N-3\end{array} \right),\ \ \ \chi =\left[\frac{2(1-a^2)}{N\left(1+a^2(N-1)\right)}\right]^2. \end{eqnarray} The eigenvalues of the matrix $G\,\Omega^{(1)},\ G={\rm diag}\, (1,\,-1,\,-1,\,-1)$ are readily seen to be four-fold degenerate and are given by \begin{eqnarray} \label{ev32} \lambda_0&=&\lambda_1=\lambda_2=\lambda_3=\chi=\left[\frac{2(1-a^2)}{N\left(1+a^2(N-1)\right)}\right]^2. \end{eqnarray} It can be seen that $X_0=(1,\, 0,\, 0,\, -1)$ is an eigenvector of $G\,\Omega^{(1)}$ belonging to the four-fold degenerate eigenvalue $\lambda_0$ and obeys the Lorentz invariant condition $X_0^T\, G\, X_0=0$. We notice here that $\Omega^{(1)}$ is already in the canonical form (\ref{yyy}). On comparing (\ref{omega32}) with (\ref{yyy}), we get \begin{equation} \label{phi0g} \phi_0=(\Omega^{(1)})_{00}=(N-1)\chi. \end{equation} On substituting the parameters $a_0$, $a_1$ (see (\ref{phi0}), (\ref{ev32}), (\ref{phi0g})) in (\ref{lambda2c}), we arrive at the Lorentz canonical form of the real matrix $\Lambda^{(1)}$ as \begin{eqnarray} \label{l32c} \widetilde{\Lambda}^{(1)}&=&\left(\begin{array}{cccc}1 & 0 &0& 0 \\ 0& \frac{1}{\sqrt{N-1}} & 0 & 0\\ 0 & 0 & -\frac{1}{\sqrt{N-1}} & 0 \\ \frac{N-2}{N-1} & 0 & 0 & \frac{1}{N-1} \end{array} \right). \end{eqnarray} It can be readily seen that $\widetilde{\Lambda}^{(1)}$, the Lorentz canonical form corresponding to the W-class of states, is independent of the parameter `$a$'. 
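The identities (\ref{lambda32})--(\ref{ev32}) lend themselves to a direct numerical cross-check. The sketch below (our own verification aid, not part of the derivation) builds $\Lambda^{(1)}$, confirms the form (\ref{omega32}) of $\Omega^{(1)}$, and checks that $X_0=(1,\,0,\,0,\,-1)^T$ is a null eigenvector of $G\,\Omega^{(1)}$ with eigenvalue $\chi$:

```python
import numpy as np

def lambda1(N, a):
    """Real representative of rho^(1) for the W-class, Eq. (lambda32)."""
    b  = 1 + a**2 * (N - 1)
    r1 = 2 * a * np.sqrt(1 - a**2) / b
    r3 = 1 + 2 * a**2 / b - 2 / N
    d  = 2 * (1 - a**2) / (N * b)          # the repeated diagonal element
    return np.array([[1,  r1, 0, r3],
                     [r1, d,  0, r1],
                     [0,  0,  d, 0],
                     [r3, r1, 0, 1 + 4 * a**2 / b - 4 / N]])

N, a = 6, 0.4                              # arbitrary test values
G = np.diag([1.0, -1, -1, -1])
Lam = lambda1(N, a)
Omega = Lam @ G @ Lam.T
chi = (2 * (1 - a**2) / (N * (1 + a**2 * (N - 1))))**2

# Eq. (omega32): Omega^(1) = chi * [[N-1,0,0,N-2],[0,-1,0,0],[0,0,-1,0],[N-2,0,0,N-3]]
target = chi * np.array([[N - 1, 0, 0, N - 2],
                         [0, -1, 0, 0],
                         [0, 0, -1, 0],
                         [N - 2, 0, 0, N - 3]])
X0 = np.array([1.0, 0, 0, -1])             # satisfies X0^T G X0 = 0
```

The same check passes for any admissible $(N,\,a)$, consistent with the four-fold degeneracy stated in (\ref{ev32}).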
\subsection{Lorentz canonical form of $\Lambda^{(k)}$, $k=2,\,3,\ldots,\left[\frac{N}{2}\right]$} Here, we evaluate the real representative $\Lambda^{(k)}$ of $\rho^{(k)}$ for different values of $k$ ($k=2,\,3,\ldots,\left[\frac{N}{2}\right]$) making use of Eqs. (\ref{elements}), (\ref{cg_explicit}), (\ref{Lk}). We then construct the real symmetric matrix $\Omega^{(k)}={\Lambda^{(k)}}\,G\left(\Lambda^{(k)}\right)^T$ for $k=2,\,3,\ldots,\left[\frac{N}{2}\right]$ and observe that $G\Omega^{(k)}=G{\Lambda^{(k)}}\,G\,{(\Lambda^{(k)})}^T$ has {\emph{non-degenerate eigenvalues}} $\lambda_0\neq\lambda_1\neq\lambda_2\neq\lambda_3$, and that the highest eigenvalue $\lambda_0$ possesses an eigenvector $X_0$ satisfying the relation $X_0^T\,G\,X_0>0$. The Lorentz canonical form $\widetilde{\Lambda}^{(k)}$, $k=2,\,3,\ldots,\left[\frac{N}{2}\right]$, is thus given by the diagonal matrix (see (\ref{lambda1c})) \[ \widetilde{\Lambda}^{(k)}=\mbox{diag}\,\left(1,\,\sqrt{\lambda_1/\lambda_0},\,\sqrt{\lambda_2/\lambda_0},\,\pm\sqrt{\lambda_3/\lambda_0} \right). \] The eigenvalues $\lambda_\mu$, $\mu=0,\,1,\,2,\,3$, of $G\Omega^{(k)}$ depend on the parameters `$a$', $k$ and $N$ characterizing the state $\vert D_{N-k,\,k}\rangle$ for every integer $k$ with $2\leq k\leq\left[\frac{N}{2}\right]$. Hence the canonical form $\widetilde{\Lambda}^{(k)}$, $k=2,3,\,\ldots,\left[\frac{N}{2}\right]$, is different for different states $\vert D_{N-k,\,k}\rangle$, unlike in the case of $\widetilde{\Lambda}^{(1)}$ (see (\ref{l32c})), the canonical form of the W-class of states, which depends only on the number of qubits $N$.
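The extraction of the diagonal canonical form (\ref{lambda1c}) from the eigenvalues of $G\,\Omega^{(k)}$ can be sketched as follows. The Werner-state input and the way the overall sign of the last entry is fixed are our own simplifications for illustration, not taken from the text:

```python
import numpy as np

G = np.diag([1.0, -1, -1, -1])

def diagonal_canonical_form(Lam):
    """Eq. (lambda1c): diag(1, sqrt(l1/l0), sqrt(l2/l0), +/- sqrt(l3/l0)),
    with l0 >= l1 >= l2 >= l3 the eigenvalues of G Omega.  The sign of the
    last entry is fixed here via sign(det Lam), a simplification."""
    lam = np.sort(np.linalg.eigvals(G @ Lam @ G @ Lam.T).real)[::-1]
    s = 1.0 if np.linalg.det(Lam) >= 0 else -1.0
    return np.diag([1.0,
                    np.sqrt(lam[1] / lam[0]),
                    np.sqrt(lam[2] / lam[0]),
                    s * np.sqrt(lam[3] / lam[0])])

# Werner-state example (our illustration): Lam = diag(1, -p, -p, -p)
p = 0.6
Lam = np.diag([1.0, -p, -p, -p])
canon = diagonal_canonical_form(Lam)    # diag(1, p, p, -p)
```

For the states $\vert D_{N-k,k}\rangle$ themselves, one would feed in the $\Lambda^{(k)}$ built from (\ref{elements}), (\ref{cg_explicit}), (\ref{Lk}).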
\section{Geometric representation of the states $\vert D_{N-k,k}\rangle$} In this section, based on the two different canonical forms of $\Lambda^{(k)}$ obtained in Sec.~III, we find the nature of the canonical steering ellipsoids associated with the pure symmetric multiqubit states $\vert D_{N-k,k}\rangle$ belonging to the SLOCC inequivalent families $\{{\mathcal{D}}_{N-k,\,k}\}$. To begin with, we give a brief outline~\cite{supra,can} of obtaining the steering ellipsoids of a two-qubit density matrix $\rho^{(k)}$ based on the form of its real representative $\Lambda^{(k)}$. In the two-qubit state $\rho^{(k)}$, a local projective measurement (PVM) $Q>0$, $Q=\sum_{\mu=0}^{3}\, q_\mu\, \sigma_\mu$, $q_0=1$, $\sum_{i=1}^3\,q_i^2=1$ on Bob's qubit leads to the collapsed state of Alice's qubit, characterized by its Bloch vector ${\mathbf p}_A=(p_1,\,p_2,\,p_3)^T$, through the transformation~\cite{supra} \begin{equation} \label{funda} \left(1, p_1,\,p_2,\,p_3 \right)^T=\Lambda^{(k)}\,\left(1, q_1,\,q_2,\,q_3 \right)^T, \ \ q_1^2+q_2^2+q_3^2=1. \end{equation} Notice that the vectors ${\mathbf{q}}_B=\left(q_1,\,q_2,\,q_3 \right)^T$, $q_1^2+q_2^2+q_3^2=1$, cover the entire Bloch sphere, while the steered Bloch vectors ${\mathbf p}_A$ of Alice's qubit constitute an ellipsoidal surface ${\mathcal E}_{A\vert\,B}$ enclosed within the Bloch sphere. When Bob employs convex combinations of PVMs, i.e., positive operator valued measures (POVMs), to steer Alice's qubit, he can access the points inside the steering ellipsoid. The situation is similar when Bob's qubit is steered by Alice through local operations on her qubit.
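The steering map (\ref{funda}) is easy to sample numerically. In the sketch below (our own illustration) the $0$-th component of $\Lambda^{(k)}(1,\,{\mathbf q})^T$ is normalized away explicitly, since for a non-canonical $\Lambda^{(k)}$ the first row is generally $(1,\,s_1,\,s_2,\,s_3)$ rather than $(1,\,0,\,0,\,0)$; the input used is $\Lambda^{(1)}$ of the $4$-qubit W state ($N=4$, $a=0$), for which (\ref{lambda32}) gives simple rational entries:

```python
import numpy as np

rng = np.random.default_rng(0)

def steered_bloch_vectors(Lam, n=1000):
    """Sample PVM directions q on the unit sphere and map them through
    Eq. (funda); the 0-th component is normalised away, since for a
    non-canonical Lambda it is generally 1 + s.q rather than 1."""
    q = rng.normal(size=(n, 3))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    v = (Lam @ np.vstack([np.ones(n), q.T])).T
    return v[:, 1:] / v[:, :1]

# Lambda^(1) of the 4-qubit W state (N = 4, a = 0), from Eq. (lambda32)
Lam = np.array([[1.0, 0,   0,   0.5],
                [0,   0.5, 0,   0],
                [0,   0,   0.5, 0],
                [0.5, 0,   0,   0]])
p = steered_bloch_vectors(Lam)
radii = np.linalg.norm(p, axis=1)    # every steered point stays in the Bloch ball
```

All sampled points lie on a surface inside (and touching) the Bloch sphere, the steering ellipsoid ${\mathcal E}_{A\vert\,B}$ of this state.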
For the Lorentz canonical form $\widetilde{\Lambda}^{(k)}_{I_c}$ (see (\ref{lambda1c})) of the two-qubit state $\widetilde{\rho}^{(k)}_{\,I_c}$, it follows from (\ref{funda}) that \begin{equation} p_1=\sqrt{\frac{\lambda_1}{\lambda_0}}\,q_1, \ \ p_2=\sqrt{\frac{\lambda_2}{\lambda_0}}\,q_2,\ \ p_3=\pm \sqrt{\frac{\lambda_3}{\lambda_0}}\, q_3 \ \ \end{equation} are the steered Bloch points ${\mathbf {p}}_A$ of Alice's qubit. They are seen to obey the equation \begin{equation} \label{ellI} \frac{\lambda_0\, p_1^2}{\lambda_1}+ \frac{\lambda_0\, p_2^2}{\lambda_2}+ \frac{\lambda_0\, p_3^2}{\lambda_3}=1 \end{equation} of an ellipsoid with semiaxes $(\sqrt{\lambda_1/\lambda_0}, \,\sqrt{\lambda_2/\lambda_0},\, \sqrt{\lambda_3/\lambda_0})$ and center $(0,0,0)$ inside the Bloch sphere $q_1^2+q_2^2+q_3^2=1$. We refer to this as the {\em canonical steering ellipsoid} representing the set of all two-qubit density matrices which are on the SLOCC orbit of the state $\widetilde{\rho}^{(k)}_{\,I_c}$ (see (\ref{rhokab})). For the second Lorentz canonical form $\widetilde{\Lambda}^{(k)}_{II_c}$ (see (\ref{lambda2c})), we get the coordinates of Alice's steered Bloch vector ${\mathbf{p}}_A$, on using (\ref{funda}); \begin{equation} \label{e2A} p_1=a_1q_1,\ \ p_2=-a_1q_2,\ \ p_3= \left(1-a_0\right)+a_0q_3, \ \ \ q_1^2+q_2^2+q_3^2=1 \end{equation} and they satisfy the equation \begin{eqnarray} \label{sph} && \frac{p_1^2}{a_1^2}+ \frac{p_2^2}{a_1^2}+ \frac{\left(p_3-(1-a_0)\right)^2}{a_0^2}=1. \end{eqnarray} Eq. (\ref{sph}) represents the canonical steering spheroid (traced by Alice's Bloch vector ${\mathbf{p}}_A$) inside the Bloch sphere, with its center at $(0,\,0,\, 1-a_0)$ and semiaxes of lengths $a_1,\,a_1,\,a_0$, where $a_0=\lambda_0/\phi_0$ and $a_1=\sqrt{\lambda_1/\phi_0}$ are given in (\ref{phi0}). In other words, a shifted spheroid inscribed within the Bloch sphere represents two-qubit states that are SLOCC equivalent to $\widetilde{\rho}^{(k)}_{II_c}$ (see (\ref{rho2})).
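Both surface equations can be verified by direct sampling. In the sketch below (our own check), the semiaxes for case $I_c$ and the spheroid parameters for case $II_c$ are arbitrary choices of ours, subject only to $0\leq a_1^2\leq a_0\leq 1$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
q = rng.normal(size=(n, 3))
q /= np.linalg.norm(q, axis=1, keepdims=True)       # PVM directions on the sphere

# Case I_c: diagonal canonical form, semiaxes sqrt(lambda_i/lambda_0) (our values)
s = np.array([0.8, 0.5, 0.3])
p = q * s                                           # p_i = sqrt(l_i/l_0) q_i
ell = np.sum((p / s)**2, axis=1)                    # left side of Eq. (ellI)

# Case II_c: spheroid of Eq. (lambda2c), with 0 <= a1^2 <= a0 <= 1 (our values)
a0, a1 = 0.5, 0.6
p1, p2, p3 = a1 * q[:, 0], -a1 * q[:, 1], (1 - a0) + a0 * q[:, 2]
sph = (p1**2 + p2**2) / a1**2 + (p3 - (1 - a0))**2 / a0**2   # left side of Eq. (sph)
```

Every sampled point gives the value $1$ for the corresponding quadratic form, confirming (\ref{ellI}) and (\ref{sph}).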
\subsection{Canonical steering ellipsoids of the W-class of states} We have seen in Sec.~III B that the Lorentz canonical form of $\Lambda^{(1)}$, the real representative of the symmetric two-qubit state $\rho^{(1)}$ drawn from the W-class of states $\vert D_{N-1,1}\rangle$, has a {\emph{non-diagonal}} form (see (\ref{l32c})). On comparing (\ref{l32c}) with the canonical form in (\ref{lambda2c}), we get \begin{equation} a_1=\frac{1}{\sqrt{N-1}},\ \ a_0=\frac{1}{N-1}. \end{equation} From (\ref{sph}) and the discussion prior to it, it can be readily seen that the quantum steering ellipsoid associated with $\widetilde{\Lambda}^{(1)}$ in (\ref{l32c}) is a spheroid centered at $(0,0,\frac{N-2}{N-1})$ inside the Bloch sphere, with fixed semiaxes of lengths $(\frac{1}{\sqrt{N-1}},\, \frac{1}{\sqrt{N-1}},\, \frac{1}{N-1})$ (see Fig.~1). It is interesting to note that the Lorentz canonical form $\widetilde{\Lambda}^{(1)}$ does not depend on the state parameter `$a$', $0\leq a<1$, and hence all states $\vert D_{N-1,\,1}\rangle$ in the family $\{{\mathcal{D}}_{N-1,\,1}\}$ are represented by a single spheroid, whose parameters (center, semiaxes, volume, etc.) depend only on the number of qubits $N$. \subsection{Canonical steering ellipsoids of the states $\vert D_{N-k,k}\rangle$, $k=2,\,3,\ldots,\left[\frac{N}{2}\right]$} As seen in Sec.~III C, the Lorentz canonical form of $\Lambda^{(k)}$, $k=2,\,3,\ldots,\left[\frac{N}{2}\right]$, the real representative of the two-qubit states $\rho^{(k)}$ drawn from the pure symmetric $N$-qubit states $\vert D_{N-k,k}\rangle$, has the diagonal form (see (\ref{lambda1c})). The eigenvalues $\lambda_0$, $\lambda_1$, $\lambda_2$, $\lambda_3$ of the matrix $G\,\Omega^{(k)}$ can be evaluated for each value of $k$, $k=2,\,3,\ldots,\left[\frac{N}{2}\right]$, for a chosen $N$.
From (\ref{ellI}) and the discussion therein, it follows that the canonical steering ellipsoid of a state $\vert D_{N-k,k}\rangle$, $k=2,\,3,\ldots,\left[\frac{N}{2}\right]$, is an ellipsoid centered at the origin of the Bloch sphere with semiaxes of lengths $\sqrt{\lambda_1/\lambda_0}$, $\sqrt{\lambda_2/\lambda_0}$, $\sqrt{\lambda_3/\lambda_0}$. The eigenvalues $\lambda_\mu$, $\mu=0,\,1,\,2,\,3$ of $G\Omega^{(k)}$ depend on the parameter `$a$' also, unlike in the case of the W-class of states where they depend only on $N$, the number of qubits. Thus each state $\vert D_{N-k,k}\rangle$ belonging to the family $\{{\mathcal{D}}_{N-k,\,k}\}$, $k=2,\,3,\ldots,\left[\frac{N}{2}\right]$ is represented by an ellipsoid whose semiaxes depend on the values of $k$, $N$ and `$a$'. The canonical steering ellipsoids corresponding to the $10$-qubit pure symmetric states $\vert D_{10-k,k}\rangle$ with chosen values of $k$ and `$a$' are shown in Fig.~2. \begin{figure} \caption{(Colour online) Steering ellipsoids centered at the origin of the Bloch sphere representing the Lorentz canonical form of pure symmetric $10$-qubit states $\vert D_{10-k,k}\rangle$ for $k=2$ to $k=5$. The lengths of the semiaxes of the ellipsoids for the $10$-qubit states chosen here are (i) $\left(0.91,\,0.71,\,0.62\right)$ (ii) $\left(0.83,\,0.59,\,0.41\right)$ (iii) $\left(0.745,\,0.533,\,0.279\right)$ (iv) $\left(0.656,\,0.53,\,0.185\right)$.} \end{figure} In particular, the canonical steering ellipsoids corresponding to the Dicke states are {\emph{oblate spheroids}} centered at the {\emph{origin}} (see Fig.~3).
\begin{figure} \caption{(Colour online) Oblate spheroids centered at the origin representing the Lorentz canonical form of the $N$-qubit Dicke states $\left\vert N/2,N/2-k\right\rangle$ (equivalently, the states $\vert D_{N-k,k}\rangle$ with $a=0$).} \end{figure} \section{Volume monogamy relations for pure symmetric multiqubit states $\vert D_{N-k,k}\rangle$} Monogamy relations restrict the shareability of quantum correlations in a multipartite state. They find potential applications in ensuring security in quantum key distribution~\cite{Tehral,Paw}. Milne {\em et al.}~\cite{MilneNJP2014, MilnePRA2016} introduced a geometrically intuitive monogamy relation for the volumes of the steering ellipsoids representing the two-qubit subsystems of multiqubit pure states, which is stronger than the well-known Coffman-Kundu-Wootters monogamy relation~\cite{CKW}. In this section we explore how the volume monogamy relation~\cite{MilneNJP2014} imposes limits on the volumes of the quantum steering ellipsoids representing the two-qubit subsystems $\rho^{(k)}=\mbox{Tr}_{N-2}\,\left[\vert D_{N-k, k}\rangle\langle D_{N-k, k}\vert\right]$ of the pure symmetric multiqubit states $\vert D_{N-k, k}\rangle$. For the two-qubit state $\rho_{AB}(=\rho^{(k)})$ (see (\ref{rho2q})), we denote by ${\mathcal E}_{A\vert\,B}$ the quantum steering ellipsoid containing all steered Bloch vectors of Alice when Bob carries out local operations on his qubit. The volume of ${\mathcal E}_{A\vert\,B}$ is given by~\cite{jevtic2014} \begin{equation} \label{xxx} V_{A\vert B}=\left(\frac{4\pi}{3}\right)\, \frac{\vert\det \Lambda \vert}{(1-r^2)^2}, \end{equation} where $r^2=\mathbf{r}\cdot\mathbf{r}=r_1^2+r_2^2+r_3^2$ (see (\ref{ri})).
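As a consistency check of (\ref{xxx}), anticipating the W-class result (\ref{vnW}) obtained below, one may evaluate the normalized volume of the steering ellipsoid of $\Lambda^{(1)}$ for several values of `$a$'. The sketch is our own verification aid:

```python
import numpy as np

def lambda1(N, a):
    """Real representative of rho^(1), Eq. (lambda32)."""
    b  = 1 + a**2 * (N - 1)
    r1 = 2 * a * np.sqrt(1 - a**2) / b
    r3 = 1 + 2 * a**2 / b - 2 / N
    d  = 2 * (1 - a**2) / (N * b)
    return np.array([[1,  r1, 0, r3],
                     [r1, d,  0, r1],
                     [0,  0,  d, 0],
                     [r3, r1, 0, 1 + 4 * a**2 / b - 4 / N]])

def normalized_volume(Lam):
    """v = |det Lambda| / (1 - r^2)^2, Eq. (xxx) divided by 4*pi/3,
    with r_i = Lambda_{i0}."""
    r2 = float(np.sum(Lam[1:, 0]**2))
    return abs(np.linalg.det(Lam)) / (1 - r2)**2

N = 7
vols = [normalized_volume(lambda1(N, a)) for a in (0.0, 0.3, 0.8)]
# per Eq. (vnW), every entry equals 1/(N-1)^2, independently of a
```

The computed volumes are indeed independent of the state parameter `$a$'.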
As the steering ellipsoid is constrained to lie within the Bloch sphere, i.e., $V_{A\vert B}\leq V_{\rm unit}=(4\pi/3)$, one can choose to work with the {\emph {normalized volumes}} $v_{A\vert B}=\frac{V_{A\vert B}}{4\pi/3}$, the ratio of the volume of the steering ellipsoid to the volume of the unit sphere~\cite{MilnePRA2016}. The volume monogamy relation satisfied by a {\emph{pure}} three-qubit state shared by Alice, Bob and Charlie is given by~\cite{jevtic2014,MilneNJP2014,MilnePRA2016} \begin{equation} \label{vm} \sqrt{V_{A\vert B}} + \sqrt{V_{C\vert B}} \leq \sqrt{\frac{4\pi}{3}}, \end{equation} where $V_{A\vert B}, \ V_{C\vert B}$ are respectively the volumes of the ellipsoids corresponding to the steered states of Alice and Charlie when Bob performs all possible local measurements on his qubit. The {\emph {normalized}} form of the volume monogamy relation (\ref{vm}) turns out to be \begin{equation} \label{vmn} \sqrt{v_{A\vert B}} + \sqrt{v_{C\vert B}} \leq 1, \end{equation} where $v_{A\vert B}=\frac{V_{A\vert B}}{4\pi/3}$, $v_{C\vert B}=\frac{V_{C\vert B}}{4\pi/3}$ are the {\emph{normalized volumes}}. The monogamy relation (\ref{vmn}) is not, in general, satisfied by mixed three-qubit states \cite{MilnePRA2016} and it has been shown that \begin{equation} \label{vmnM} \left(v_{A\vert B}\right)^{\frac{2}{3}} + \left(v_{C\vert B}\right)^{\frac{2}{3}} \leq 1 \end{equation} is the volume monogamy relation for pure as well as mixed three-qubit states~\cite{MilnePRA2016}. As there are $\frac{1}{2}(N-2)(N-1)$ three-qubit subsystems in an $N$-qubit state, each of which obeys the monogamy relation (\ref{vmnM}), on adding these relations and simplifying, one gets~\cite{MilnePRA2016} \begin{equation} \label{vmnN} \left(v_{A\vert B}\right)^{\frac{2}{3}} + \left(v_{C\vert B}\right)^{\frac{2}{3}}+\left(v_{D\vert B}\right)^{\frac{2}{3}}+\cdots \leq \frac{N-1}{2}. \end{equation} The relation (\ref{vmnN}) is the volume monogamy relation satisfied by pure as well as mixed $N$-qubit states~\cite{MilnePRA2016}.
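The chain of relations above can be probed with elementary arithmetic. The pair volume $v=1/(N-1)^2$ for the W-class used below is the value obtained later in this section, and the symmetric-state bound $(v_N)^{2/3}\leq 1/2$ follows from (\ref{vmnN}) when all pair volumes coincide; both inputs are stated here ahead of their derivation purely for checking purposes:

```python
import math

# Pair volume of the W-class, v_N = 1/(N-1)^2 (derived later in this section).
# (i) N = 3: the pure-state relation (vmn) is exactly saturated by the W state
v3 = 1.0 / (3 - 1)**2                       # = 1/4
lhs_pure = math.sqrt(v3) + math.sqrt(v3)    # = 1, the bound in Eq. (vmn)

# (ii) general N: the symmetric-state bound (v_N)^(2/3) <= 1/2 from Eq. (vmnN)
ok = all((1.0 / (N - 1)**2)**(2 / 3) <= 0.5 for N in range(3, 100))
```

The saturation for $N=3$ shows that the W state is an extremal case of the pure-state relation (\ref{vmn}).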
For $N=3$, it reduces to (\ref{vmnM}). For multiqubit states that are invariant under exchange of qubits, $v_{A\vert B}=v_{C\vert B}=v_{D\vert B}=\cdots=v_N$, where $v_N$ denotes the normalized volume of the steering ellipsoid corresponding to any of the $N-1$ qubits, the steering being performed by, say, the $N$th qubit. Eq. (\ref{vmnN}) thus reduces to \begin{equation} \label{vmnNs} (N-1) \left(v_N\right)^{\frac{2}{3}} \leq \frac{N-1}{2} \Longrightarrow \left(v_N\right)^{\frac{2}{3}} \leq \frac{1}{2}, \end{equation} implying that $\left(v_N\right)^{\frac{2}{3}} \leq \frac{1}{2}$ is the volume monogamy relation for permutation symmetric multiqubit states. \subsection{Volume monogamy relations governing the W-class of states $\{{\mathcal{D}}_{N-1,1}\}$} On denoting the normalized volume of a steering ellipsoid corresponding to the states $\vert D_{N-1,1}\rangle$ by $v^{(1)}_N$, we have (see (\ref{xxx})) \begin{equation} \label{v32} v^{(1)}_N=\frac{\vert\det \Lambda^{(1)}\vert}{(1-r^2)^2}, \end{equation} where $\Lambda^{(1)}$ is given in (\ref{lambda32}) and \begin{equation} \label{r32} r_1=\frac{2a\sqrt{1-a^2}}{1+a^2(N-1)},\ \ r_2=0, \ \ r_3=1+\frac{2a^2}{1+a^2(N-1)}-\frac{2}{N}. \end{equation} Under suitable Lorentz transformations, the real matrix $\Lambda^{(1)}$ (see (\ref{lambda32})) associated with the state $\rho^{(1)}$ (see (\ref{rho1_matrix})) gets transformed to its Lorentz canonical form $\widetilde{\Lambda}^{(1)}$ (see (\ref{l32c})). It follows that (see (\ref{phi0}), (\ref{ev32})) \begin{equation} \label{phi032} \left(L_A\,\Lambda^{(1)}\, L^T_B\right)_{00}=\sqrt{\phi_0}=2\sqrt{N-1}\left[\frac{1-a^2}{N(1+(N-1)\,a^2)}\right].
\end{equation} Using the property $\det L_A=\det L_B=1$ of orthochronous proper Lorentz transformations~\cite{KNS} and substituting $\vert\det\widetilde{\Lambda}^{(1)}\vert=\frac{1}{(N-1)^2}$ in (\ref{sl2c}), we obtain \begin{equation} \label{int} \vert\det\widetilde{\Lambda}^{(1)}\vert=\frac{1}{(N-1)^2}=\vert\det L_A\vert\, \vert\det L_B\vert \left\vert\det\left(\frac{\Lambda^{(1)}}{\sqrt{\phi_0}}\right)\right\vert =\frac{ \vert\det\,\Lambda^{(1)}\vert}{\phi_0^2}. \end{equation} Eq. (\ref{int}) leads to $\vert\det\,\Lambda^{(1)}\vert=\phi_0^2\,\vert\det\widetilde{\Lambda}^{(1)}\vert$. The normalized volume $v^{(1)}_N$ of the quantum steering ellipsoid corresponding to the W-class of states thus becomes (see (\ref{v32})) \begin{equation} \label{det32} v^{(1)}_N=\vert\det\widetilde{\Lambda}^{(1)}\vert\frac{\phi_0^2}{(1-r^2)^2}. \end{equation} From (\ref{r32}) and (\ref{phi032}) it readily follows that $\phi_0^2=(1-r^2)^2$, and hence (see (\ref{det32})) the normalized volume of the steering ellipsoid associated with the two-qubit state $\rho^{(1)}$ takes the simple form \begin{eqnarray} \label{vnW} v^{(1)}_{N}= \frac{\phi_0^2}{(N-1)^2\,(1-r^2)^2}=\frac{1}{(N-1)^2}. \end{eqnarray} The volume monogamy relation $\left(v^{(1)}_{N}\right)^{\frac{2}{3}} \leq \frac{1}{2}$ (see (\ref{vmnNs})) takes the form \begin{equation} \label{mnrW} \left(\frac{1}{(N-1)^2}\right)^{2/3}\leq \frac{1}{2} \, \Longrightarrow \, (N-1)^{\frac{-4}{3}} \leq \frac{1}{2} \end{equation} and is readily satisfied for any $N\geq 3$, as can be seen in Fig.~4. \subsection{Relation between obesity of steering ellipsoids and concurrence} We recall here that the {\em obesity} ${\mathcal O}(\rho_{AB})=\vert \det\Lambda\vert^{1/4}$ of the quantum steering ellipsoid~\cite{MilneNJP2014} depicting a two-qubit state $\rho_{AB}$ is an upper bound for the concurrence $C(\rho_{AB})$: \begin{equation} \label{c&v} C(\rho_{AB})\leq {\mathcal O}(\rho_{AB}) =\vert \det\Lambda\vert^{1/4}.
\end{equation} Furthermore, if $\rho_{AB}\longrightarrow\widetilde{\rho}_{AB}=(A\otimes B)\rho_{AB}\, (A^\dag\otimes B^\dag)/{\rm Tr}\left[(A^\dag\,A\otimes B^\dag\, B)\,\rho_{AB}\right]$, $A,B\in SL(2,C)$, it follows that~\cite{MilneNJP2014} \begin{equation} \label{cvratio} \frac{{\mathcal O}(\rho_{AB})}{C(\rho_{AB})}=\frac{{\mathcal O}(\widetilde{\rho}_{AB})}{C(\widetilde{\rho}_{AB})}. \end{equation} We make use of the relation (\ref{cvratio}) to obtain the concurrence~\cite{Wootters} of a pair of qubits in the symmetric $N$-qubit pure states $\vert D_{N-k,k}\rangle$, $k=1,\,2,\ldots,\left[\frac{N}{2}\right]$. For the states $\vert D_{N-1,1}\rangle$ belonging to the W-class, we readily get (see (\ref{lambda32}), (\ref{l32c})) \begin{equation} \det\Lambda^{(1)}=\left(\frac{2(1-a^2)}{N(1+a^2 (N-1))}\right)^4, \ \ \ \det\widetilde{\Lambda}^{(1)}=\left(\frac{1}{N-1}\right)^2 \end{equation} and thereby the obesities ${\mathcal O}(\rho^{(1)})$, ${\mathcal O}(\widetilde{\rho}^{(1)})$: \begin{equation} \label{obwclass} {\mathcal O}(\rho^{(1)})=\frac{2(1-a^2)}{N(1+a^2 (N-1))}, \ \ \ {\mathcal O}(\widetilde{\rho}^{(1)})=\frac{1}{\sqrt{N-1}}. \end{equation} It is not difficult to evaluate the concurrence of the canonical state $\widetilde{\rho}^{(1)}$, and it is seen that \begin{equation} \label{co} C(\widetilde{\rho}^{(1)})={\mathcal O}(\widetilde{\rho}^{(1)})=\frac{1}{\sqrt{N-1}}. \end{equation} We thus obtain (see (\ref{cvratio}), (\ref{co})) \begin{eqnarray} \label{cfin00} C(\rho^{(1)})&=&{\mathcal O}(\rho^{(1)})=\frac{2(1-a^2)}{N(1+a^2 (N-1))}. \end{eqnarray} The value of the concurrence in (\ref{cfin00}) matches exactly with that obtained using $C(\rho^{(1)})={\rm max}\, (0,\mu_1-\mu_2-\mu_3-\mu_4)$, where $\mu_1\geq\mu_2\geq\mu_3\geq \mu_4$ are the square roots of the eigenvalues of the matrix $R=\rho^{(1)}\,(\sigma_2\otimes\sigma_2)\, {\rho^{(1)}}^*\, (\sigma_2\otimes\sigma_2)$~\cite{Wootters}.
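The agreement between (\ref{cfin00}) and Wootters' formula can be confirmed numerically. The sketch below (our own verification aid) builds $\rho^{(1)}$ from (\ref{rho1_matrix}), (\ref{rho1ele}) and compares the two expressions:

```python
import numpy as np

def rho1(N, a):
    """Two-qubit marginal of |D_{N-1,1}>, Eqs. (rho1_matrix), (rho1ele)."""
    den = N**2 * a**2 + N * (1 - a**2)
    A = (N**2 * a**2 + (N - 2) * (1 - a**2)) / den
    B = a * np.sqrt(1 - a**2) / (1 + a**2 * (N - 1))
    D = (1 - a**2) / den
    return np.array([[A, B, B, 0],
                     [B, D, D, 0],
                     [B, D, D, 0],
                     [0, 0, 0, 0]])

def concurrence(rho):
    """Wootters: C = max(0, mu1 - mu2 - mu3 - mu4), with mu_i the sorted
    square roots of the eigenvalues of rho (s2 x s2) rho* (s2 x s2)."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    R = rho @ YY @ rho.conj() @ YY
    mu = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return max(0.0, mu[0] - mu[1] - mu[2] - mu[3])

N, a = 6, 0.3                     # arbitrary test values
C = concurrence(rho1(N, a))
obesity = 2 * (1 - a**2) / (N * (1 + a**2 * (N - 1)))   # Eq. (cfin00)
```

For $a=0$ the same routine reproduces the well-known pairwise concurrence $2/N$ of the $N$-qubit W state.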
We have seen that the state $\vert D_{N-1,\,1}\rangle$ reduces to the W state when $a=0$ and hence, for the $N$-qubit W state, the concurrence of any pair of qubits is given by $C(\rho_W^{(1)})=\frac{2}{N}$ (see (\ref{cfin00})). \section{Summary} In this work, we have analyzed the canonical steering ellipsoids and volume monogamy relations of the pure symmetric $N$-qubit states characterized by two distinct Majorana spinors. We have shown that the entire W-class of states has a geometric representation in terms of a {\emph{shifted oblate spheroid}} inscribed within the Bloch sphere. The center of the spheroid, the lengths of its semiaxes and its volume are shown to depend only on the number of qubits $N$, implying that all states in the $N$-qubit W-class are characterized by a {\emph {single}} spheroid, shifted along the polar axis of the Bloch sphere. All other SLOCC inequivalent families of pure symmetric $N$-qubit states with two distinct spinors are shown to be geometrically represented by {\emph {ellipsoids centered at the origin}}. Except for the W state (and its obverse counterpart), which is represented by a {\emph{shifted spheroid}}, all other $N$-qubit Dicke states are represented by {\emph{oblate spheroids centered at the origin}}. A discussion on the volume monogamy relations applicable to identical subsystems of a pure symmetric $N$-qubit state is given, and a volume monogamy relation applicable to the W-class of states is obtained. A relation connecting the concurrence of a two-qubit state and the obesity of the associated quantum steering ellipsoid with their canonical counterparts is made use of to obtain the concurrence of the states belonging to the W-class.
It would be interesting to examine the features of canonical steering ellipsoids and volume monogamy relations for the SLOCC inequivalent families of pure symmetric multiqubit states with more than two distinct spinors, in particular, the pure symmetric $N$-qubit states belonging to the GHZ class (with three distinct spinors). \end{document}
\begin{document} \title{Fuzzy propositional configuration logics} \author{Paulina Paraponiari\thanks{\protect\includegraphics[height=0.3cm]{elidek_logo_en}The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number: 1200).} \\Department of Mathematics\\Aristotle University of Thessaloniki\\54124 Thessaloniki, Greece\\parapavl@math.auth.gr} \date{} \maketitle \begin{abstract} We introduce and investigate a weighted propositional configuration logic over De Morgan algebras. This logic is able to describe software architectures with quantitative features, especially the uncertainty of the interactions that occur in the architecture. We deal with the equivalence problem of formulas in our logic by showing that every formula can be written in a specific form. Surprisingly, there are formulas which are equivalent only over specific De Morgan algebras. We provide examples of formulas in our logic which describe well-known software architectures equipped with quantitative features such as the uncertainty and reliability of their interactions. \end{abstract} \keywords{Software architectures, Formal methods, Propositional configuration logics, Fuzzy logic, Quantitative features, Uncertainty.} \section{Introduction} Uncertainty is inevitable in software architecture \cite{uncerta:risk}. Software architectures are increasingly composed of many components such as workloads and servers. Computations between the components run in environments in which resources may have radical variability \cite{soft:uncertain:world}. For instance, software architects may be uncertain about the cost and performance impact of a proposed software architecture. They may be aware of the cost and performance of the interactions in the architecture. However, there may be undesirable outcomes such as the failure of a component to interact and complete its task \cite{uncerta:risk}.
Uncertainty may affect functional and non-functional architecture requirements \cite{relax}. Hence, it is necessary to consider uncertainty as a basic quantitative characteristic of software architectures. So far, the existing architecture decision-making approaches do not provide a quantitative method of dealing with uncertainty \cite{Deali:uncert}. The motivation of our work is to formally describe and compare software architectures with quantitative features such as the uncertainty. Fuzzy logics rely on the idea that truth comes in degrees; hence, they constitute a suitable tool for dealing with uncertainty. Moreover, recently the authors in \cite{fuzzy:iot} used fuzzy logic on IoT devices to assist blind people in moving safely. This is a strong indication of the possible future applications of fuzzy logic. In this paper we extend the work of \cite{Ka:Pa,Pa_Rah_1,Pa_Rah} by introducing and investigating the fuzzy PCL (fPCL for short) over De Morgan algebras. This work is motivated as follows. In \cite{Pa_Rah_1,Pa_Rah} we introduced the weighted PCL over commutative semirings (wPCL for short). This logic serves as a specification language for the study of software architectures with quantitative features such as the maximum cost of an architecture or the maximum priority of the involvement of a component. Then in \cite{Ka:Pa}, we introduced the weighted PCL over product valuation monoids (w$_{\text{pvm}}$PCL for short), which serves as a specification language for software architectures with quantitative features such as the average of all interactions' costs of the architecture, and the maximum cost among all costs occurring most frequently within a specific number of components. Those features are not covered in \cite{Pa_Rah_1,Pa_Rah}. The aforementioned works are not able to model the uncertainty that occurs in the interactions of the architecture.
In this paper we deal with this problem by introducing and investigating the fuzzy PCL (fPCL for short), which is a weighted PCL over De Morgan algebras. The contributions of our work are the following. \begin{enumerate}[$\bullet$] \item We introduce the syntax and semantics of fPCL. The semantics of fPCL formulas are series with values in the De Morgan algebra. This logic is able to describe software architectures with quantitative features such as the uncertainty. Moreover, we are able to compute the weight of an architecture even when unwanted components participate. This is possible since De Morgan algebras are equipped with a complement mapping, whereas the algebraic structures in \cite{Ka:Pa,Pa_Rah_1,Pa_Rah} are not. \item In the sequel, we construct fPCL formulas which describe the Peer-to-Peer architecture and the Master/Slave architecture for a finite number of components. \item Lastly, we deal with the decidability of equivalence of fPCL formulas. For this, we examine the existence of a normal form. We show that the construction of the normal form of an fPCL formula depends on the properties of the De Morgan algebra. Hence, there may be fPCL formulas which have the same normal form over the fuzzy algebra but different ones over the Boolean algebra. In other words, two fPCL formulas can be equivalent over the fuzzy algebra but not over the Boolean algebra. We give examples to illustrate this point. In our paper, we prove that for every fPCL formula over a set of ports and a Kleene algebra we can effectively construct an equivalent one in normal form. We note that this construction can be easily adapted for fPCL formulas over a Boolean algebra. We conclude that two fPCL formulas are equivalent over a De Morgan algebra if they have the same normal form considering the properties of the aforementioned De Morgan algebra. For this, we give an algorithm which is able to decide the equivalence of two fPCL formulas in normal form, in polynomial time.
\end{enumerate} \section{Related Work} Existing work has investigated the formal description of the qualitative and quantitative properties of software architectures. In particular, the authors in \cite{Ma:Co} introduced the propositional configuration logic (PCL for short) which was proved sufficient to describe the qualitative properties of software architectures. Later in \cite{Pa_Rah_1,Pa_Rah}, we introduced and investigated a weighted PCL (wPCL for short) over a commutative semiring which serves as a specification language for the study of software architectures with quantitative features such as the maximum cost of an architecture or the maximum priority of a component. We proved that the equivalence problem of wPCL formulas is decidable. In \cite{Ka:Pa} we extended the work of \cite{Pa_Rah_1,Pa_Rah} by introducing and investigating weighted PCL over product valuation monoids (w$_\text{pvm}$PCL for short). This logic is proved to be sufficient to serve as a specification language for software architectures with quantitative properties, such as the average of all interactions' costs of the architecture and the maximum cost among all costs occurring most frequently within a specific number of components in an architecture. However, the aforementioned works do not cover quantitative properties such as the uncertainty and reliability of an architecture. The authors in \cite{stoch} address the problem of evaluating the system reliability as a stochastic property of software architectural models in the presence of uncertainty. Also, the authors in \cite{Fram:Unc} develop a conceptual framework for the management of uncertainty in software architecture in order to reduce its impact during the system's life cycle. However, the aforementioned works lack formality in the architecture description, which is crucial since non-formal systems can be unreliable at some point.
\section{Preliminaries} \subsection{Lattices} Let $K$ be a nonempty set, and $\leq$ a binary relation over $K$ which is reflexive, antisymmetric, and transitive. Then $\leq$ is called a partial order and the pair $(K, \leq)$ a partially ordered set (poset for short). If the partial order $\leq $ is understood, then we shall denote the poset $(K,\leq)$ simply by $K$. For $k, k' \in K$ we denote by $k \vee k'$ (resp. $k \wedge k'$) the least upper bound or supremum (resp. the greatest lower bound or infimum) of $k$ and $k'$ if it exists in $K$. A poset $K$ is called a \emph{lattice} if $k\vee k'$ and $k\wedge k'$ exist in $K$ for every $k,k'\in K$. A lattice $K$ is called \emph{distributive} if $ k\wedge(k'\vee k'')=(k\wedge k')\vee(k\wedge k'')$ and $(k \vee k')\wedge k''=(k \wedge k'')\vee(k'\wedge k'')$ for every $k,k',k'' \in K$. Moreover, in every lattice the absorption laws $k \vee \left( k \wedge k^\prime \right) = k$ and $ k \wedge \left( k \vee k^\prime \right) = k$ hold for every $k, k^\prime \in K.$ A poset $K$ is called \emph{bounded} if there are two elements $0,1 \in K$ such that $0 \leq k \leq 1$ for every $k \in K$. A \emph{De Morgan algebra} is a structure $(K,\leq,^{-})$, where $K$ is a bounded distributive lattice (bdl for short) with a complement mapping $^- : K \rightarrow K$ which satisfies the involution law $\overline{\overline{k}}=k$ and the De Morgan laws $\overline{k \vee k'}=\overline{k} \wedge \overline{k'}$ and $\overline{k \wedge k'} = \overline{k} \vee \overline{k'} $ for every $k, k' \in K$. A well-known De Morgan algebra is the structure $([0,1], \leq, ^-)$ where $\leq$ is the usual order on real numbers and the complement mapping is defined by $\overline{k}=1-k$ for every $k \in [0,1]$. The authors in \cite{Dr:Mu,Ra:Fu} show that a semiring $(K, +, \cdot, 0,1)$ equipped with a complement mapping $^-$, which is a monoid morphism from $(K,+,0)$ to $(K,\cdot,1)$ satisfying $\overline{\overline{k}}=k$ for every $k \in K$, is a De Morgan algebra $(K, \leq, ^-)$.
The relation $\leq$ is defined as follows: $k \leq k'$ iff $k+k'=k'$. On the other hand, a De Morgan algebra $(K, \leq, ^-)$ induces a semiring $(K, \vee, \wedge, 0, 1)$ with a complement mapping $^-$. In the following, we denote a De Morgan algebra by $(K, \vee, \wedge , 0,1, ^-)$. Moreover, a \emph{Kleene algebra} is a De Morgan algebra that satisfies $k_1 \wedge \overline{k_1} \leq k_2 \vee \overline{k_2}$, or equivalently, $(k_1 \wedge \overline{k_1}) \wedge (k_2 \vee \overline{k_2}) = (k_1\wedge \overline{k_1})$ for every $k_1, k_2 \in K$. A \emph{Boolean algebra} is a Kleene algebra that satisfies $k\wedge \overline{k} =0$ and $k\vee \overline{k} = 1$ for every $k\in K.$ In the following we present the most well-known De Morgan algebras. We refer the reader to \cite{Wa:Ge,Mo:Fu} for further examples of De Morgan algebras. \begin{figure} \caption{Three element Kleene algebra} \label{three_element} \caption{Four element algebra} \label{four_element} \caption{Operators of De Morgan algebras} \label{op_de_morg} \end{figure} \begin{enumerate}[$\bullet$] \item The two element Boolean algebra $\textbf{2} = \left( \{0,1\}, \vee, \wedge, 0,1, ^- \right)$, where $\overline{0}=1$ and $\overline{1}=0$. \item The three element Kleene algebra $\textbf{3}= \left( \{0,u,1\}, \vee, \wedge, 0,1, ^- \right)$, where $\overline{0}=1$, $\overline{1}=0$, $\overline{u}=u$. The operators $ \vee, \wedge$ are shown in Figure \ref{three_element}. \item The four element algebra $\textbf{4}=\left( \{0,u,w,1\}, \vee, \wedge, 0,1, ^- \right)$, where $\overline{u} = u$, $\overline{w} = w$, $u\vee w = 1$ and $u\wedge w = 0$. The operators $ \vee$ and $\wedge$ are shown in Figure \ref{four_element}. \item The fuzzy algebra $\textbf{F}=\left( [0,1], \max, \min, 0,1, ^- \right)$, where for every $k\in [0,1]$ the complement mapping is defined by $\overline{k} = 1-k.$ This algebra is a Kleene algebra. 
To see this, let $k, k^\prime \in [0,1]$ and note that $\min \{ \min\{ k, \overline{k} \}, \max\{ k^\prime, \overline{k^\prime} \} \} = \min \{ k, \overline{k} \}.$ \end{enumerate} \begin{quotation} \emph{Throughout the paper, $\mathbf{3}$ and $K_\mathbf{3}$ will denote respectively, the three element Kleene algebra and a De Morgan algebra which is a Kleene algebra. Also, by $\mathbf{2}$ and $\textbf{B}$ we will denote respectively, the two element Boolean algebra and a De Morgan algebra which is a Boolean algebra. By $K$ we will denote an arbitrary De Morgan algebra.} \end{quotation} Lastly, let $K$ be a De Morgan algebra and $Q$ a set. A \emph{formal series} (or simply \emph{series}) \emph{over} $Q$ \emph{and} $K$ is a mapping $s:Q\rightarrow K$. We denote by $K\left\langle \left\langle Q \right\rangle \right\rangle $ the class of all series over $Q$ and $K$. \section{Fuzzy Propositional Interaction Logic} In this section we introduce a quantitative version of PIL where the weights are taken in the De Morgan algebra $K$; in what follows, $P$ denotes a set of ports. Since De Morgan algebras, and more generally bdl's, have found applications in fuzzy theory, we call our weighted PIL a fuzzy PIL. \begin{definition} The syntax of formulas of \emph{fuzzy PIL} (\emph{fPIL} for short) over $P$ and $K$ is given by the grammar: $$\varphi::= true \mid p \mid \ ! \varphi \mid \varphi \ {\scriptstyle \ovee} \ \varphi $$ where $p \in P$ and the operators $!, \ {\scriptstyle \ovee} \ $ denote the fuzzy negation and the fuzzy disjunction, respectively, among \emph{fPIL} formulas. \end{definition} The fuzzy conjunction operator among fPIL formulas $\ { \scriptstyle \owedge} \ $ is defined by $\varphi_1 \ { \scriptstyle \owedge} \ \varphi_2 : = \ ! (! \varphi_1 \ {\scriptstyle \ovee} \ ! \varphi_2).$ For the semantics of fPIL formulas over $P$ and $K$ we introduce the notion of a $K$-fuzzy interaction. For this we need to recall the $K$-fuzzy sets from \cite{l:fuzzy}.
A $K$-fuzzy set $S$ on a nonempty set $X$ is a function $S: X\to K$. A \emph{$K$-fuzzy interaction} $\alpha$ on $P$ is a $K$-fuzzy set on $P$ with the restriction that $\alpha(p) \neq 0$ for at least one port $p\in P$. We denote by $fI(P,K)$ the set of $K$-fuzzy interactions $\alpha$ on $P$ and by $fPIL(K,P)$ the set of all fPIL formulas over $P$ and $K$. We interpret fPIL formulas over $P$ and $K$ as series in $K \left\langle \left \langle fI(P,K) \right\rangle \right \rangle$. \begin{definition} Let $\varphi \in fPIL(K,P)$. The semantics of $\varphi$ is a series $\left\Vert \varphi \right\Vert \in K \left\langle \left\langle fI(P,K) \right\rangle \right\rangle$. For every $K$-fuzzy interaction $\alpha\in fI(P,K)$ the value $\left\Vert \varphi \right\Vert (\alpha)$ is defined inductively on the structure of $\varphi$ as follows: \begin{enumerate}[$\bullet$] \item $\left\Vert true\right\Vert (\alpha) = 1$, \item $\left\Vert p\right\Vert (\alpha) = \alpha(p)$, \item $\left\Vert ! \varphi \right\Vert (\alpha) = \overline{\left\Vert \varphi \right\Vert (\alpha)}$, \item $\left\Vert \varphi_1 \ {\scriptstyle \ovee} \ \varphi_2 \right\Vert (\alpha) = \left\Vert \varphi_1 \right\Vert (\alpha) \vee \left\Vert \varphi_2 \right\Vert (\alpha) $. \end{enumerate} \end{definition} Trivially, we get $\left\Vert\varphi_1 \ { \scriptstyle \owedge} \ \varphi_2 \right\Vert(\alpha)=\left\Vert\varphi_1\right\Vert(\alpha)\wedge \left\Vert\varphi_2\right\Vert(\alpha)$ for every $\alpha\in fI(P,K)$. Moreover, we define the fPIL formula $! true := false$, and it holds that $\left\Vert false \right\Vert (\alpha) = 0$ for every $\alpha \in fI(P,K).$ Next, we define the equivalence relation among fPIL formulas. For this, recall that De Morgan algebras such as Kleene and Boolean algebras satisfy some extra properties in addition to those that hold in every De Morgan algebra by definition.
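As a concrete illustration of the inductive semantics above, the following Python sketch evaluates fPIL formulas over the fuzzy algebra $\mathbf{F}=([0,1],\max,\min,0,1,{}^-)$. The tuple encoding of formulas and all identifiers (\texttt{sem}, \texttt{f\_and}, \texttt{neg}) are our own illustrative choices, not notation from the paper.

```python
# Illustrative sketch (not from the paper): fPIL semantics over the
# fuzzy algebra F = ([0,1], max, min, 0, 1, k -> 1 - k).

def neg(k):
    """Complement mapping of F."""
    return 1 - k

def sem(phi, a):
    """Evaluate ||phi||(a), where a is a K-fuzzy interaction: ports -> [0,1].
    Formulas are nested tuples: ("true",), ("port", p), ("not", phi),
    ("or", phi1, phi2)."""
    tag = phi[0]
    if tag == "true":
        return 1.0
    if tag == "port":
        return a[phi[1]]
    if tag == "not":
        return neg(sem(phi[1], a))
    if tag == "or":
        return max(sem(phi[1], a), sem(phi[2], a))
    raise ValueError(f"unknown connective: {tag}")

def f_and(phi1, phi2):
    """Derived fuzzy conjunction: phi1 ^ phi2 := !(!phi1 v !phi2)."""
    return ("not", ("or", ("not", phi1), ("not", phi2)))

# A fuzzy interaction on P = {p, q} (values chosen to be exact binary floats).
a = {"p": 0.75, "q": 0.25}
disj = ("or", ("port", "p"), ("port", "q"))

# De Morgan law at the semantic level: ||!(p v q)||(a) = ||(!p) ^ (!q)||(a).
assert sem(("not", disj), a) == sem(
    f_and(("not", ("port", "p")), ("not", ("port", "q"))), a
)
print(sem(disj, a))  # max(0.75, 0.25) -> prints 0.75
```

Note that \texttt{f\_and} mirrors the definition $\varphi_1 \ {\scriptstyle \owedge} \ \varphi_2 := \ ! (! \varphi_1 \ {\scriptstyle \ovee} \ ! \varphi_2)$, so the identity $\left\Vert\varphi_1 \ {\scriptstyle \owedge} \ \varphi_2\right\Vert(\alpha)=\left\Vert\varphi_1\right\Vert(\alpha)\wedge\left\Vert\varphi_2\right\Vert(\alpha)$ holds by construction over $\mathbf{F}$.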
\begin{definition}\label{fpil_equiv} Two \emph{fPIL} formulas $\varphi_1, \varphi_2$ over $P$ and a concrete De Morgan algebra $K_{con}$ are called $K_{con}$-equivalent, and we write $\varphi_1 \ \dot{\equiv}_{K_{con}} \ \varphi_2$, whenever $\left\Vert \varphi_1 \right\Vert (\alpha)=\left\Vert \varphi_2\right\Vert(\alpha)$ for every $\alpha \in fI(P,K_{con}).$ Two \emph{fPIL} formulas $\varphi_1, \varphi_2$ over $P$ and an arbitrary De Morgan algebra $K$ are called simply equivalent, and we write $\varphi_1\ \dot{\equiv} \ \varphi_2 $, whenever $\left\Vert \varphi_1 \right\Vert (\alpha)=\left\Vert \varphi_2\right\Vert(\alpha)$ for every $\alpha \in fI(P,K).$ \end{definition} Let $P=\{ p,q,r \}$ be a set of ports. Following the previous definition and by the properties of De Morgan algebras, we prove that $p\ { \scriptstyle \owedge} \ !p \ { \scriptstyle \owedge} \ \left( q \ {\scriptstyle \ovee} \ r \right) \ \dot{\equiv}_{\mathbf{2}} \ false$ and $p\ { \scriptstyle \owedge} \ !p \ { \scriptstyle \owedge} \ \left( q \ {\scriptstyle \ovee} \ r \right) \ \dot{\equiv} \ \left(p\ { \scriptstyle \owedge} \ !p\ { \scriptstyle \owedge} \ q \right) \ {\scriptstyle \ovee} \ \left(p\ { \scriptstyle \owedge} \ !p\ { \scriptstyle \owedge} \ r \right) $. We proceed with some properties of our fPIL formulas. \begin{proposition}\label{neg_i_oplus} Let $\varphi_1, \varphi_2 $ be fPIL formulas over $P$ and $K$. Then \[ ! \left( \varphi_1 \ {\scriptstyle \ovee} \ \varphi_2 \right) \ \dot{\equiv} \ (!\varphi_1) \ { \scriptstyle \owedge} \ \left( ! \varphi_2 \right). \] \end{proposition} \begin{proof} Let $\alpha \in fI(P,K)$. Then \begin{align*} \left\Vert ! 
\left( \varphi_1 \ {\scriptstyle \ovee} \ \varphi_2 \right) \right\Vert(\alpha) & = \overline{\left\Vert \varphi_1 \ {\scriptstyle \ovee} \ \varphi_2 \right\Vert(\alpha)} \\ & = \overline{ \left\Vert \varphi_1 \right\Vert(\alpha) \vee \left\Vert \varphi_2 \right\Vert(\alpha) } \\ & = \overline{ \left\Vert \varphi_1 \right\Vert(\alpha) } \wedge \overline{\left\Vert \varphi_2 \right\Vert(\alpha)} \\ & = \left\Vert ! \varphi_1\right\Vert(\alpha) \wedge \left\Vert !\varphi_2\right\Vert(\alpha) \\ & = \left\Vert \left( ! \varphi_1 \right) \ { \scriptstyle \owedge} \ \left( ! \varphi_2 \right) \right\Vert(\alpha). \end{align*} \end{proof} \begin{proposition}\label{pil_true_false} Let $\varphi$ be a \emph{fPIL} formula over $P$ and $K$. Then the following hold: \begin{flushleft} \begin{tabular}{l l l} $ (1)$ $ \varphi \ {\scriptstyle \ovee} \ true \ \dot{\equiv} \ true,$ & \hspace*{1cm} $ (3) $ $\varphi \ { \scriptstyle \owedge} \ true \ \dot{\equiv} \ \varphi$, & \hspace*{1cm} $ (5)$ $ !!\varphi \ \dot{\equiv} \ \varphi. $ \\ $(2)$ $ \varphi \ {\scriptstyle \ovee} \ false \ \dot{\equiv} \ \varphi,$ & \hspace*{1cm} $(4) $ $ \varphi \ { \scriptstyle \owedge} \ false \ \dot{\equiv} \ false ,$ & \end{tabular} \end{flushleft} \end{proposition} \begin{proof} The proofs are straightforward. \end{proof} \begin{proposition}\label{fpil_associa} The operators $\ { \scriptstyle \owedge} \ $ and $ \ {\scriptstyle \ovee} \ $ of the \emph{fPIL} are associative. \end{proposition} \begin{proof} The proposition holds since the operators $\wedge$ and $\vee$ are associative. \end{proof} \begin{proposition}\label{otimes_over_oplus_i} Let $\varphi, \varphi_1, \varphi_2 \in fPIL(K,P)$. Then \[ \varphi \ { \scriptstyle \owedge} \ \left( \varphi_1 \ {\scriptstyle \ovee} \ \varphi_2 \right) \ \dot{\equiv} \ \left( \varphi \ { \scriptstyle \owedge} \ \varphi_1\right) \ {\scriptstyle \ovee} \ \left( \varphi \ { \scriptstyle \owedge} \ \varphi_2 \right). 
\] \end{proposition} \begin{proof} Since $\wedge$ distributes over $\vee$, we get the proposition. \end{proof} Next, we give the absorption and idempotent laws among fPIL formulas. \begin{proposition}\label{absorpt_pil} Let $\varphi, \varphi^\prime \in fPIL(K,P)$. Then \begin{tabular}{l l l l} $(1)$ & $\varphi \ { \scriptstyle \owedge} \ \left( \varphi \ {\scriptstyle \ovee} \ \varphi^\prime \right) \ \dot{\equiv} \ \varphi $. & \hspace*{1cm} $(3)$ & $\varphi \ {\scriptstyle \ovee} \ \varphi \ \dot{\equiv} \ \varphi $. \\[0.1cm] $(2)$ & $\varphi \ {\scriptstyle \ovee} \ \left( \varphi \ { \scriptstyle \owedge} \ \varphi^\prime \right) \ \dot{\equiv} \ \varphi $. & \hspace*{1cm} $(4)$ & $\varphi \ { \scriptstyle \owedge} \ \varphi \ \dot{\equiv} \ \varphi.$ \end{tabular} \end{proposition} \begin{proof} For the proof of (1) and (2) we apply the absorption laws of De Morgan algebras. The others hold since $\wedge $ and $\vee$ are idempotent. \end{proof} \section{Fuzzy Propositional Configuration Logic} In this section we introduce and investigate the fuzzy PCL over $P$ and $K$. \begin{definition} The syntax of formulas of \emph{fuzzy PCL} (\emph{fPCL} for short) \emph{over} $P$ \emph{and} $K$ is given by the grammar: $$ \zeta :: = \varphi \mid \neg \zeta \mid \zeta \oplus \zeta \mid \zeta \uplus \zeta $$ where $\varphi$ is an \emph{fPIL} formula over $P$ and $K$, and $\neg$, $\oplus$ and $\uplus$ denote the fuzzy negation, the fuzzy disjunction and the fuzzy coalescing operator, respectively. \end{definition} Let $\zeta, \zeta^\prime $ be fPCL formulas over $P$ and $K$. The fuzzy conjunction operator among $\zeta$ and $\zeta^\prime$ and the closure operator of $\zeta$ are defined, respectively, as follows: \begin{flushleft} $\begin{array}{l l} (1) \ \zeta \otimes \zeta^\prime : = \neg (\neg \zeta \oplus \neg \zeta^\prime), & \hspace*{1cm} (2) \ \sim \zeta := \zeta \uplus true.
\end{array}$ \end{flushleft} Next, we denote by $fC(P,K)$ the set of nonempty sets of $K$-fuzzy interactions in $fI(P,K)$, and by $fPCL(K,P)$ the set of fPCL formulas over $P$ and $K$. We define the semantics of fPCL formulas over $P$ and $K$ as series in $K \left\langle\left\langle fC(P,K) \right\rangle\right\rangle $. \begin{definition} Let $\zeta$ be an \emph{fPCL} formula over $P$ and $K$. The semantics of $\zeta$ is a series $\left\Vert \zeta \right\Vert \in K \left\langle\left\langle fC(P,K) \right\rangle\right\rangle$. For every set $\gamma \in fC(P,K)$ the value $\left\Vert \zeta \right\Vert (\gamma)$ is defined inductively on the structure of ${\zeta}$ as follows: \begin{enumerate}[$\bullet$] \item $\left\Vert \varphi \right\Vert (\gamma) = \underset{\alpha\in\gamma}{\bigwedge} \left\Vert \varphi \right \Vert(\alpha)$, \item $\left\Vert \neg \zeta \right\Vert (\gamma) = \overline{\left\Vert \zeta \right\Vert (\gamma)}$, \item $\left\Vert \zeta_1 \oplus \zeta_2 \right\Vert (\gamma) = \left\Vert \zeta_1 \right\Vert (\gamma) \vee \left\Vert \zeta_2 \right\Vert (\gamma), $ \item $\left\Vert \zeta_1 \uplus \zeta_2\right\Vert (\gamma) = \underset{\gamma=\gamma_1 \cup \gamma_2}{\bigvee} \left( \left\Vert \zeta_1 \right\Vert (\gamma_1) \wedge \left\Vert \zeta_2 \right\Vert (\gamma_2) \right)$. \end{enumerate} \end{definition} \noindent It is easy to prove that $\left\Vert true \right\Vert (\gamma) = 1$ and $\left\Vert false \right\Vert (\gamma) = 0$ for every $\gamma \in fC(P,K).
$ \begin{definition}\label{fpcl_equiv} Two \emph{fPCL} formulas $\zeta_1, \zeta_2$ over $P$ and a concrete De Morgan algebra $K_{con}$ are called $K_{con}$-equivalent, and we write $\zeta_1\equiv_{K_{con}} \zeta_2$, whenever $\left\Vert \zeta_1 \right\Vert (\gamma)=\left\Vert \zeta_2\right\Vert(\gamma)$ for every $\gamma \in fC(P,K_{con}).$ Two \emph{fPCL} formulas $\zeta_1, \zeta_2$ over $P$ and an arbitrary De Morgan algebra $K$ are called simply equivalent, and we write $\zeta_1\equiv \zeta_2 $, whenever $\left\Vert \zeta_1 \right\Vert (\gamma)=\left\Vert \zeta_2\right\Vert(\gamma)$ for every $\gamma \in fC(P,K).$ \end{definition} In the following, we examine the relation between the fPIL and fPCL operators on fPIL formulas. Firstly, we show that the application of the negation operators $!$ and $\neg$ on an fPIL formula derives, in general, non-equivalent fPCL formulas. Indeed, let $p\in P$ and $\gamma =\{\alpha_1,\alpha_2\} \in fC(P,K)$. Then we have $$ \Vert ! p \Vert (\gamma) = \bigwedge_{\alpha\in \gamma} \Vert ! p \Vert(\alpha) = \overline{\Vert p \Vert (\alpha_1)} \wedge \overline{\Vert p \Vert (\alpha_2)} = \overline{\alpha_1(p)} \wedge \overline{\alpha_2(p)} $$ and $$ \Vert\neg p \Vert (\gamma) = \overline{\Vert p \Vert(\gamma)} = \overline{\bigwedge_{\alpha\in \gamma} \Vert p \Vert(\alpha)} = \bigvee_{\alpha\in \gamma} \overline{ \Vert p \Vert(\alpha)} = \overline{\Vert p \Vert (\alpha_1)} \vee \overline{\Vert p \Vert (\alpha_2)} = \overline{\alpha_1(p)} \vee \overline{\alpha_2(p)}$$ which implies that $ ! p \not \equiv \neg p$. Similarly, we show that in general $\varphi \ {\scriptstyle \ovee} \ \varphi^\prime \not \equiv \varphi \oplus \varphi^\prime$ where $\varphi, \varphi^\prime $ are fPIL formulas. For this, let $P$ be a set of ports, $\varphi=p\in P$ and $\varphi^\prime =p^\prime \in P$, where $p\not = p^\prime$.
If $\gamma = \{ \alpha_1, \alpha_2 \}\in fC(P,K)$, then in general we get $\left\Vert p \ {\scriptstyle \ovee} \ p' \right\Vert (\gamma) \not = \left\Vert p \oplus p' \right\Vert (\gamma) $ and so $p \ {\scriptstyle \ovee} \ p' \not \equiv p \oplus p'. $ However, as we show in the next proposition, the application of the operators $\ { \scriptstyle \owedge} \ $ and $\otimes$ on fPIL formulas produces equivalent fPCL formulas. \begin{proposition}\label{otimes_i} Let $\varphi_1 , \varphi_2$ be \emph{fPIL} formulas over $P$ and $K$. Then \[ \varphi_1 \ { \scriptstyle \owedge} \ \varphi_2 \equiv \varphi_1 \otimes \varphi_2. \] \end{proposition} \begin{proof} For every $\gamma \in fC(P,K)$ we compute \begin{align*} \left\Vert \varphi_1 \ { \scriptstyle \owedge} \ \varphi_2 \right\Vert (\gamma) & = \bigwedge_{\alpha\in \gamma} \left\Vert \varphi_1 \ { \scriptstyle \owedge} \ \varphi_2 \right\Vert (\alpha) \\ & = \bigwedge_{\alpha\in \gamma} \left(\left\Vert \varphi_1\right\Vert(\alpha) \wedge\left\Vert \varphi_2 \right\Vert (\alpha) \right) \\ & = \left(\bigwedge_{\alpha\in \gamma} \left\Vert \varphi_1\right\Vert(\alpha)\right) \wedge \left( \bigwedge_{\alpha\in \gamma} \left\Vert \varphi_2 \right\Vert (\alpha) \right) \\ & = \left\Vert \varphi_1\right\Vert(\gamma) \wedge \left\Vert \varphi_2\right\Vert(\gamma) \\ & = \overline{ \overline{\left\Vert \varphi_1 \right\Vert(\gamma)} \vee \overline{\left\Vert \varphi_2 \right\Vert(\gamma)} } \\ & = \left\Vert \neg \left( \neg \varphi_1 \oplus \neg \varphi_2 \right) \right\Vert (\gamma) \\ & = \left\Vert \varphi_1 \otimes \varphi_2 \right\Vert (\gamma), \end{align*} where the third equality holds by the commutativity and associativity of $\wedge$. \end{proof} In the sequel, we prove several properties of our fPCL formulas. \begin{proposition} The \emph{fPCL} operators $\oplus, \otimes $ and $\uplus $ are associative and commutative. \end{proposition} \begin{proof} We prove only the associativity of the $\uplus$ operator.
The rest are analogously proved. Let $\zeta_1, \zeta_2, \zeta_3 \in fPCL(K,P)$ and $\gamma \in fC(P,K)$. Then \begin{align*} \left\Vert \zeta_1 \uplus \left( \zeta_2 \uplus \zeta_3 \right) \right\Vert(\gamma) & = \bigvee_{\gamma = \gamma_1 \cup \gamma^\prime} \left( \left\Vert \zeta_1 \right\Vert (\gamma_1) \wedge \left\Vert \zeta_2\uplus \zeta_3 \right\Vert(\gamma^\prime) \right) \\ & = \bigvee_{\gamma = \gamma_1 \cup \gamma^\prime} \left( \left\Vert \zeta_1 \right\Vert (\gamma_1) \wedge \left( \bigvee_{\gamma^\prime =\gamma_2\cup \gamma_3} \left( \left\Vert \zeta_2\right\Vert(\gamma_2) \wedge \left\Vert\zeta_3\right\Vert(\gamma_3) \right) \right) \right) \\ & = \bigvee_{\gamma = \gamma_1 \cup \gamma^\prime} \bigvee_{\gamma^\prime =\gamma_2\cup \gamma_3} \left( \left\Vert \zeta_1\right\Vert(\gamma_1) \wedge \left( \left\Vert \zeta_2\right\Vert(\gamma_2) \wedge \left\Vert \zeta_3\right\Vert(\gamma_3) \right) \right) \\ & = \bigvee_{\gamma = \gamma^\prime \cup \gamma_3} \bigvee_{\gamma^\prime = \gamma_1\cup \gamma_2} \left(\left( \left\Vert \zeta_1\right\Vert(\gamma_1) \wedge \left\Vert \zeta_2\right\Vert(\gamma_2) \right) \wedge \left\Vert \zeta_3\right\Vert(\gamma_3) \right) \\ & = \bigvee_{\gamma = \gamma^\prime \cup \gamma_3} \left(\left(\bigvee_{\gamma^\prime = \gamma_1\cup \gamma_2} \left( \left\Vert \zeta_1\right\Vert(\gamma_1) \wedge \left\Vert \zeta_2\right\Vert(\gamma_2) \right)\right) \wedge \left\Vert \zeta_3\right\Vert(\gamma_3) \right) \\ & = \bigvee_{\gamma = \gamma^\prime \cup \gamma_3} \left( \left\Vert \zeta_1 \uplus \zeta_2\right\Vert(\gamma^\prime) \wedge \left\Vert \zeta_3\right\Vert(\gamma_3) \right) \\ & = \left\Vert \left( \zeta_1\uplus \zeta_2 \right) \uplus \zeta_3 \right\Vert(\gamma) \end{align*} \noindent where the third and fifth equalities hold since $\wedge$ distributes over $\vee$ and the fourth one by the associativity of the $\wedge $ operator. \end{proof} \begin{proposition} Let $\zeta\in fPCL(K,P)$. 
Then \[ \left\Vert \sim \zeta \right\Vert(\gamma) = \bigvee_{\gamma^\prime \subseteq \gamma} \left\Vert \zeta \right\Vert(\gamma^\prime) \] \noindent for every $\gamma\in fC(P,K). $ \end{proposition} \begin{proof} For every $\gamma\in fC(P,K)$ we have \begin{align*} \left\Vert \sim \zeta \right\Vert (\gamma) & = \bigvee_{\gamma=\gamma^\prime \cup \gamma^{\prime \prime}} \left( \left\Vert \zeta\right\Vert(\gamma^\prime) \wedge \left\Vert true\right\Vert(\gamma^{\prime \prime}) \right) \\ & = \bigvee_{\gamma^\prime \subseteq \gamma } \left\Vert \zeta \right\Vert (\gamma^\prime). \end{align*} \end{proof} \begin{proposition}\label{uplus_over_oplus} Let $\zeta, \zeta_1, \zeta_2 \in fPCL(K,P)$. Then \[ \zeta \uplus (\zeta_1 \oplus\zeta_2) \equiv (\zeta \uplus \zeta_1) \oplus (\zeta \uplus \zeta_2). \] \end{proposition} \begin{proof} For every $\gamma\in fC(P,K)$ we have \begin{align*} \left\Vert \zeta \uplus (\zeta_1 \oplus \zeta_2) \right\Vert(\gamma) & = \bigvee_{\gamma=\gamma_1\cup \gamma_2} \left( \left\Vert \zeta \right\Vert(\gamma_1) \wedge \left\Vert \zeta_1\oplus \zeta_2\right\Vert(\gamma_2) \right) \\ & = \bigvee_{\gamma=\gamma_1\cup \gamma_2} \left( \left\Vert \zeta \right\Vert(\gamma_1) \wedge \left(\left\Vert \zeta_1\right\Vert(\gamma_2)\vee \left\Vert \zeta_2\right\Vert(\gamma_2) \right)\right) \\ & = \bigvee_{\gamma=\gamma_1\cup \gamma_2}\left( \left( \left\Vert \zeta \right\Vert(\gamma_1) \wedge\left\Vert \zeta_1\right\Vert(\gamma_2)\right) \vee \left( \left\Vert \zeta \right\Vert(\gamma_1) \wedge\left\Vert \zeta_2\right\Vert(\gamma_2)\right) \right) \\ & = \bigvee_{\gamma=\gamma_1\cup \gamma_2} \left( \left\Vert \zeta \right\Vert(\gamma_1) \wedge\left\Vert \zeta_1\right\Vert(\gamma_2)\right) \vee \bigvee_{\gamma=\gamma_1\cup \gamma_2} \left( \left\Vert \zeta \right\Vert(\gamma_1) \wedge\left\Vert \zeta_2\right\Vert(\gamma_2)\right) \\ & = \left\Vert \left(\zeta \uplus \zeta_1\right) \oplus \left(\zeta \uplus \zeta_2 \right)\right\Vert(\gamma) \end{align*}
where the third equality holds since $\wedge $ distributes over $\vee$ and the fourth one by the associativity of $\vee$. \end{proof} \begin{proposition}\label{otimes_over_oplus} Let $\zeta, \zeta_1, \zeta_2 \in fPCL(K,P)$. Then \[ \zeta \otimes (\zeta_1 \oplus\zeta_2) \equiv (\zeta \otimes \zeta_1) \oplus (\zeta \otimes \zeta_2). \] \end{proposition} \begin{proof} By the distributivity of $\wedge$ over $\vee$ we get the proposition. \end{proof} \begin{proposition}\label{absorpt_fpcl} Let $ \zeta \in fPCL(K,P)$. Then \begin{tabular}{l l l l} $(1)$ & $\neg \neg \zeta \equiv \zeta .$ & \hspace*{1.5cm} $(5)$ & $\zeta \oplus false \equiv \zeta.$ \\[0.1cm] $(2)$ & $\zeta\oplus \zeta \equiv \zeta.$ & \hspace*{1.5cm} $(6)$ & $\zeta \otimes true \equiv \zeta.$ \\[0.1cm] $(3)$ & $\zeta\otimes \zeta \equiv \zeta.$ & \hspace*{1.5cm} $(7)$ & $\zeta \otimes false \equiv false \equiv \zeta \uplus false.$ \\[0.1cm] $(4)$ & $\zeta \oplus true \equiv true.$ & & \end{tabular} \end{proposition} \begin{proof} By the properties of De Morgan algebras we can prove the above properties. \end{proof} \begin{proposition}\label{neg} Let $ \zeta_1, \zeta_2 \in fPCL(K,P)$. Then \begin{enumerate}[$(1)$] \item $\neg \left( \zeta_1 \oplus \zeta_2 \right) \equiv \left( \neg \zeta_1 \right) \otimes \left( \neg \zeta_2 \right).$ \item $\neg \left( \zeta_1 \otimes \zeta_2 \right) \equiv \left( \neg \zeta_1 \right) \oplus \left( \neg \zeta_2 \right).$ \end{enumerate} \end{proposition} \begin{proof} This proposition holds since $\overline{\overline{k}}=k$, $\overline{k\vee k^\prime} = \overline{k} \wedge \overline{k^\prime}$, and $\overline{k \wedge k^\prime} = \overline{k} \vee \overline{k^\prime}$ for every $k,k^\prime \in K$. \end{proof} In the following proposition, we present the absorption laws of our fPCL formulas. \begin{proposition}\label{absorpt_pcl} Let $\zeta, \zeta^\prime \in fPCL(K,P)$. Then \begin{enumerate}[$(1)$] \item $\zeta \otimes \left( \zeta \oplus \zeta^\prime \right) \equiv \zeta $. \item $\zeta \oplus \left( \zeta \otimes \zeta^\prime \right) \equiv \zeta $.
\end{enumerate} \end{proposition} \begin{proof} This proof is done analogously to the proof of Proposition \ref{absorpt_pil}. \end{proof} \begin{proposition}\label{otimes_distib_coale} Let $\varphi \in fPIL(K,P)$ and $\zeta_1, \zeta_2 \in fPCL(K,P)$. Then \[ \varphi \otimes (\zeta_1 \uplus \zeta_2) \equiv (\varphi \otimes \zeta_1) \uplus (\varphi \otimes \zeta_2). \] \end{proposition} \begin{proof} For every $\gamma\in fC(P,K)$ we compute \begin{align*} \left\Vert \varphi \ \otimes \right. & \left. (\zeta_1 \uplus \zeta_2) \right\Vert (\gamma) \\ & = \left\Vert \varphi \right\Vert(\gamma) \wedge \left\Vert \zeta_1 \uplus \zeta_2 \right\Vert(\gamma) \\ & = \left\Vert \varphi \right\Vert(\gamma) \wedge \bigvee_{\gamma=\gamma_1 \cup \gamma_2} \left( \left\Vert \zeta_1 \right\Vert(\gamma_1) \wedge \left\Vert \zeta_2\right\Vert(\gamma_2) \right) \\ & = \bigvee_{\gamma=\gamma_1 \cup \gamma_2} \left\Vert \varphi \right\Vert(\gamma) \wedge \left( \left\Vert \zeta_1 \right\Vert(\gamma_1) \wedge \left\Vert \zeta_2\right\Vert(\gamma_2) \right) \\ & = \bigvee_{\gamma=\gamma_1 \cup\gamma_2} \left(\bigwedge_{\alpha\in \gamma} \left\Vert \varphi \right\Vert(\alpha) \wedge \left\Vert \zeta_1 \right\Vert(\gamma_1) \wedge \left\Vert \zeta_2\right\Vert(\gamma_2) \right) \\ & = \bigvee_{\gamma=\gamma_1 \cup \gamma_2} \left(\bigwedge_{\alpha_1\in \gamma_1} \left\Vert \varphi \right\Vert(\alpha_1) \wedge \bigwedge_{\alpha_2\in \gamma_2} \left\Vert \varphi \right\Vert(\alpha_2) \wedge \left\Vert \zeta_1 \right\Vert(\gamma_1) \wedge \left\Vert \zeta_2\right\Vert(\gamma_2) \right) \\ & = \bigvee_{\gamma=\gamma_1 \cup \gamma_2} \left( \left\Vert \varphi \right\Vert(\gamma_1) \wedge \left\Vert \varphi \right\Vert(\gamma_2) \wedge \left\Vert \zeta_1 \right\Vert(\gamma_1) \wedge \left\Vert \zeta_2\right\Vert(\gamma_2) \right) \\ & = \bigvee_{\gamma=\gamma_1 \cup \gamma_2} \left( \left\Vert \varphi \otimes \zeta_1 \right\Vert(\gamma_1) \wedge \left\Vert \varphi \otimes \zeta_2 
\right\Vert(\gamma_2) \right) \\ & = \left\Vert \left( \varphi \otimes \zeta_1\right) \uplus \left( \varphi \otimes \zeta_2 \right) \right\Vert(\gamma) \end{align*} where the third equality holds since $\wedge $ distributes over $\vee$ and the fifth one by the idempotency and associativity of the $\wedge$ operator. \end{proof} \begin{proposition}\label{pil_prop} Let $\varphi \in fPIL(K,P)$. Then \begin{flushleft} $\begin{array}{l l l} (1) \ \varphi \uplus \varphi \equiv \varphi. & \hspace*{1.2cm} (2) \ \neg \sim \varphi \equiv \ ! \varphi. & \hspace*{1.2cm} (3) \ \neg \varphi \equiv \ \sim \ ! \varphi. \end{array}$ \end{flushleft} \end{proposition} \begin{proof} For every $\gamma\in fC(P,K)$ we have \begin{enumerate}[(1)] \item \begin{align*} \left\Vert \varphi \uplus \varphi \right\Vert(\gamma) & = \bigvee_{\gamma=\gamma_1\cup \gamma_2} \left(\left\Vert\varphi \right\Vert(\gamma_1) \wedge \left\Vert\varphi \right\Vert(\gamma_2)\right) \\ & = \bigvee_{\gamma=\gamma_1 \cup\gamma_2} \left( \bigwedge_{\alpha_1\in \gamma_1} \left\Vert \varphi\right\Vert(\alpha_1) \wedge \bigwedge_{\alpha_2\in \gamma_2}\left\Vert \varphi\right\Vert(\alpha_2) \right) \\ & = \bigvee_{\gamma=\gamma_1 \cup\gamma_2} \left( \bigwedge_{\alpha\in \gamma_1\cup \gamma_2} \left\Vert \varphi\right\Vert(\alpha) \right)\\ & = \bigwedge_{\alpha\in \gamma} \left\Vert\varphi\right\Vert(\alpha) = \left\Vert \varphi \right\Vert(\gamma) \end{align*} where the third and fourth equalities hold since the operators $\wedge$ and $ \vee $ are idempotent. 
\item \begin{align*} \left\Vert \neg \sim \varphi \right\Vert (\gamma) & = \left\Vert \neg \left( \varphi \uplus true \right) \right\Vert (\gamma) \\ & = \overline{\left\Vert \varphi \uplus true \right\Vert (\gamma) } \\ & = \overline{\bigvee_{\gamma=\gamma_1\cup \gamma_2}\left(\left\Vert \varphi \right\Vert(\gamma_1) \wedge \left\Vert true \right\Vert (\gamma_2) \right) } \\ & = \overline{\bigvee_{\gamma^\prime \subseteq \gamma}\left\Vert \varphi \right\Vert(\gamma^\prime) } \\ & = \overline{\bigvee_{\gamma^\prime \subseteq \gamma} \bigwedge_{\alpha\in \gamma^\prime}\left\Vert \varphi \right\Vert(\alpha) } \\ & = \bigwedge_{\gamma^\prime \subseteq \gamma} \bigvee_{\alpha\in \gamma^\prime}\overline{\left\Vert \varphi \right\Vert(\alpha) }. \end{align*} \noindent Let $\gamma=\{ \alpha_1, \dots, \alpha_n \}$ and $\{\alpha_1\},\dots, \{\alpha_n\}, \gamma_1, \dots, \gamma_k $ be all possible non-empty subsets of $\gamma$, where $k=2^n - (n+1)$ and $|\gamma_i|>1$ for every $i\in \{1, \dots,k\}$. Therefore, we get \begin{align*} \left\Vert \neg(\varphi \uplus true) \right\Vert(\gamma) & = \bigwedge_{\gamma^\prime \subseteq \gamma} \bigvee_{\alpha\in \gamma^\prime}\overline{\left\Vert \varphi \right\Vert(\alpha) } \\ & = \left(\bigwedge_{i=1}^n \bigvee_{\alpha\in \{\alpha_i\}} \overline{\left\Vert \varphi \right\Vert(\alpha)} \right)\wedge \left( \bigwedge_{j=1}^k \bigvee_{\alpha\in \gamma_j} \overline{\left\Vert\varphi \right\Vert(\alpha)} \right) \\ & = \left(\overline{\left\Vert \varphi\right\Vert(\alpha_1)} \wedge \dots \wedge \overline{\left\Vert \varphi\right\Vert(\alpha_n)} \right)\wedge \left( \bigwedge_{j=1}^k \bigvee_{\alpha\in \gamma_j} \overline{\left\Vert\varphi \right\Vert(\alpha)} \right) \\ & = \left(\bigwedge_{\alpha\in \gamma}\overline{\left\Vert \varphi\right\Vert(\alpha)} \right)\wedge \left( \bigwedge_{j=1}^k \bigvee_{\alpha^\prime\in \gamma_j} \overline{\left\Vert\varphi \right\Vert(\alpha^\prime)} \right) \\ & = \left(\bigwedge_{\alpha\in\gamma}\left\Vert !
\varphi\right\Vert(\alpha) \right)\wedge \left( \bigwedge_{j=1}^k \bigvee_{\alpha^\prime\in \gamma_j} \left\Vert !\varphi \right\Vert(\alpha^\prime) \right) \\ & = \bigwedge_{j=1}^k \bigvee_{\alpha^\prime\in \gamma_j} \left( \bigwedge_{\alpha\in \gamma}\left\Vert ! \varphi\right\Vert(\alpha) \wedge \left\Vert ! \varphi \right\Vert(\alpha^\prime) \right) \\ & = \bigwedge_{\alpha\in \gamma}\left\Vert ! \varphi\right\Vert(\alpha) \\ & = \left\Vert ! \varphi\right\Vert(\gamma) \end{align*} \noindent where for the validity of the last equality we give the following explanation. For every $j\in \{1, \dots, k\}$ and for every $\alpha^\prime\in \gamma_j$ we have $\bigwedge_{\alpha\in \gamma}\left\Vert ! \varphi\right\Vert(\alpha) \wedge \left\Vert ! \varphi \right\Vert(\alpha^\prime) = \bigwedge_{\alpha\in \gamma}\left\Vert ! \varphi\right\Vert(\alpha)$ since $\alpha^\prime \in \gamma$ and $\wedge $ is idempotent. Lastly, by the idempotency of $\vee$ we get the last equality. \item \begin{align*} \left\Vert \sim \ ! \varphi \right\Vert (\gamma) & = \left\Vert \neg \left( \neg \sim \ ! \varphi \right) \right\Vert (\gamma) \\ & = \left\Vert \neg ! (! \varphi) \right\Vert (\gamma) \\ & = \left\Vert \neg \left( ! ! \varphi \right) \right\Vert(\gamma) \\ & = \left\Vert \neg \varphi \right\Vert(\gamma) \end{align*} where the second equality holds by Proposition \ref{pil_prop}(2). \end{enumerate} \end{proof} \begin{proposition}\label{fpil_ovee_to_fpcl} Let $J$ be a finite index set and $\varphi_j $ a \emph{fPIL} formula over $P$ and $K$ for every $j\in J$. Let $\gamma\in fC(P,K)$. 
Then \[ \left\Vert \underset{j\in J}{\pmb{{\Large \ovee}}} \varphi_j \right\Vert(\gamma) = \bigvee_{J^\prime \subseteq J} \bigvee_{ \underset{j^\prime \in J^\prime}{\bigcup\mkern-12.5mu\cdot\mkern6mu}\gamma_{j^\prime} =\gamma } \left( \bigwedge_{j^\prime \in J^\prime} \left\Vert \varphi_{j^\prime} \right\Vert(\gamma_{j^\prime}) \right) \] \noindent where $\mathbin{\mathaccent\cdot\cup}$ denotes the disjoint union of sets. \end{proposition} \begin{proof} Let $\gamma = \{ \alpha_1, \dots, \alpha_n \} \in fC(P,K)$ and let $J$ be a finite index set. Then \begin{align*} \left\Vert \underset{j\in J}{\pmb{{\Large \ovee}}} \varphi_j \right\Vert(\gamma) & = \bigwedge_{\alpha\in \gamma} \left\Vert \underset{j\in J}{\pmb{{\Large \ovee}}} \varphi_j \right\Vert(\alpha) \\ & = \bigwedge_{\alpha\in \gamma} \left( \bigvee_{j\in J} \left\Vert \varphi_j \right\Vert(\alpha) \right) \\ & = \bigvee_{(j_1, \dots, j_n) \in J^n} \left( \left\Vert \varphi_{j_1} \right\Vert(\alpha_1) \wedge \left\Vert \varphi_{j_2} \right\Vert(\alpha_2) \wedge \dots \wedge \left\Vert \varphi_{j_n} \right\Vert(\alpha_n) \right) \\ & = \bigvee_{J^\prime \subseteq J} \bigvee_{ \underset{j^\prime \in J^\prime}{\bigcup\mkern-12.5mu\cdot\mkern6mu} \gamma_{j^\prime} = \gamma} \left( \bigwedge_{j^\prime\in J^\prime} \left\Vert \varphi_{j^\prime} \right\Vert(\gamma_{j^\prime}) \right) \end{align*} \noindent where for the validity of the last equality we give the following explanation. In the third equality we have all possible $n$-tuples with elements from $J$. Hence, there are cases where we have repetitions of some $j_{i}$'s. Moreover, in every parenthesis in the third equality, each $\alpha_{i}$ appears exactly once and therefore we get the disjoint unions of sets that are equal to $\gamma$ in the last equality. Lastly, taking the above into account, together with the commutativity of the operators $\wedge$, $\vee$ and the idempotency of the $\vee$ operator, we get the last equality.
\end{proof} \begin{proposition}\label{pil_coal_n} Let $J$ be a finite index set and $\varphi_j $ a \emph{fPIL} formula over $P$ and $K$ for every $j\in J$. Then \[ \biguplus_{j\in J} \varphi_j \equiv \bigotimes_{j\in J} \left( \sim \varphi_j \right) \otimes \left( \underset{j\in J}{\pmb{{\Large \ovee}}} \ \varphi_j \right). \] \end{proposition} \begin{proof} Let $\gamma\in fC(P,K)$ and $J=\{ 1, \dots, n \}$ a finite index set. Then we get \begin{align*} & \left\Vert \bigotimes_{j\in J} \left( \sim \varphi_j \right) \otimes \left( \underset{j\in J}{\pmb{{\Large \ovee}}} \ \varphi_j \right) \right\Vert(\gamma) \\& = \bigwedge_{j\in J}\left( \bigvee_{\gamma_j\subseteq \gamma} \left\Vert \varphi_j\right\Vert(\gamma_j) \right)\wedge \left( \bigvee_{J^\prime \subseteq J} \bigvee_{ \underset{j^\prime \in J^\prime}{\bigcup\mkern-12.5mu\cdot\mkern6mu}\gamma_{j^\prime}^\prime = \gamma } \left( \bigwedge_{j^\prime \in J^\prime} \left\Vert \varphi_{j^\prime} \right\Vert(\gamma_{j^\prime}^\prime) \right) \right) \\ & = \bigvee_{\gamma_1\subseteq \gamma} \ldots \bigvee_{\gamma_n\subseteq \gamma}\left( \bigwedge_{j\in J}\left\Vert \varphi_j\right\Vert(\gamma_j) \right) \wedge \left( \bigvee_{J^\prime \subseteq J} \bigvee_{ \underset{j^\prime \in J^\prime}{\bigcup\mkern-12.5mu\cdot\mkern6mu}\gamma_{j^\prime}^\prime =\gamma } \left( \bigwedge_{j^\prime \in J^\prime} \left\Vert \varphi_{j^\prime} \right\Vert(\gamma_{j^\prime}^\prime) \right) \right) \\ & = \bigvee_{\gamma_1\subseteq \gamma} \ldots \bigvee_{\gamma_n\subseteq \gamma}\bigvee_{J^\prime \subseteq J} \bigvee_{ \underset{j^\prime \in J^\prime}{\bigcup\mkern-12.5mu\cdot\mkern6mu}\gamma_{j^\prime}^\prime =\gamma } \left( \left\Vert \varphi_1\right\Vert(\gamma_1) \wedge \dots \wedge \left\Vert \varphi_n\right\Vert(\gamma_n) \wedge \bigwedge_{j^\prime \in J^\prime} \left\Vert \varphi_{j^\prime} \right\Vert(\gamma_{j^\prime}^\prime)\right) \\ & = \bigvee_{\gamma_1\subseteq \gamma} \ldots \bigvee_{\gamma_n\subseteq 
\gamma}\bigvee_{J^\prime \subseteq J} \bigvee_{ \underset{j^\prime \in J^\prime}{\bigcup\mkern-12.5mu\cdot\mkern6mu}\gamma_{j^\prime}^\prime =\gamma } \left( \bigwedge_{j^\prime \in J^\prime} \left\Vert \varphi_{j^\prime } \right\Vert(\gamma_{j^\prime}^\prime \cup \gamma_{j^\prime}) \wedge \bigwedge_{j\in J\backslash J^\prime} \left\Vert \varphi_j \right\Vert(\gamma_j) \right) \end{align*} \noindent where the first equality holds by Proposition \ref{fpil_ovee_to_fpcl} and the fourth one by the idempotency of $\wedge$. We observe that the sets $\gamma_{j^\prime}^\prime \cup \gamma_{j^\prime}$ and $\gamma_{j}$, for every $j^\prime \in J^\prime$ and $j\in J\backslash J^\prime$, constitute all possible subsets of $\gamma$ whose union equals $\gamma$. So, by the idempotency of $\wedge$ and $\vee$ we get \begin{align*} \left\Vert \bigotimes_{j\in J} \left( \sim \varphi_j \right) \otimes \left( \underset{j\in J}{\pmb{{\Large \ovee}}} \ \varphi_j \right) \right\Vert(\gamma) & = \bigvee_{\gamma_1^{\prime \prime } \cup \dots \cup \gamma_n^{\prime \prime}=\gamma} \left( \left\Vert \varphi_1\right\Vert(\gamma_1^{\prime \prime}) \wedge \dots \wedge \left\Vert \varphi_n\right\Vert(\gamma_n^{\prime \prime})\right) \\ & = \left\Vert \varphi_1\uplus \dots \uplus \varphi_n \right\Vert (\gamma). \end{align*} \end{proof} \begin{proposition}\label{pil_coal_neg} Let $J$ be a finite index set and $\varphi_j$ a \emph{fPIL} formula for every $j\in J$. Then \[ \neg \biguplus_{j\in J} \varphi_j \equiv \bigoplus_{j\in J} \left( ! \varphi_j \right) \oplus \sim \left( \underset{j\in J}{\pmb{{\Large \owedge}}} !
\varphi_j \right) \] \end{proposition} \begin{proof} We get \begin{align*} \neg \biguplus_{j\in J} \varphi_j & \equiv \neg \left( \bigotimes_{j\in J} \left( \sim \varphi_j \right) \otimes \left( \underset{j\in J}{\pmb{{\Large \ovee}}} \varphi_j \right) \right) \\ & \equiv \bigoplus_{j\in J} \left( \neg \sim \varphi_j \right) \oplus \neg \left( \underset{j\in J}{\pmb{{\Large \ovee}}} \ \varphi_j \right) \\ & \equiv \bigoplus_{j\in J} \left( ! \varphi_j \right) \oplus \sim \ ! \left( \underset{j\in J}{\pmb{{\Large \ovee}}} \varphi_j \right) \\ & \equiv \bigoplus_{j\in J} \left( ! \varphi_j \right) \oplus \sim \underset{j\in J}{\pmb{{\Large \owedge}}} \left(! \varphi_j\right) \end{align*} \noindent where the third equivalence holds by Proposition \ref{pil_prop}. \end{proof} \begin{proposition}\label{otimes_to_uplus} Let $J$ be a finite index set and $\varphi_j$ a \emph{fPIL} formula for every $j\in J$. Then \[ \bigotimes_{j\in J} (\sim \varphi_j) \equiv \ \sim \biguplus_{j\in J} \varphi_j . \] \end{proposition} \begin{proof} Let $\gamma\in fC(P,K)$. Then \begin{align*} \left\Vert \bigotimes_{j\in J} (\sim \varphi_j) \right\Vert(\gamma) & = \bigwedge_{j\in J} \left( \bigvee_{\gamma_j\subseteq \gamma} \left\Vert \varphi_j\right\Vert(\gamma_j) \right) \\ & = \bigvee_{\bigcup_{j\in J}\gamma_j \subseteq \gamma} \left( \bigwedge_{j\in J} \left\Vert\varphi_j\right\Vert(\gamma_j) \right) \\ & = \bigvee_{\gamma^\prime \subseteq \gamma} \left(\bigvee_{\bigcup_{j\in J}\gamma_j = \gamma^\prime} \left( \bigwedge_{j\in J} \left\Vert\varphi_j\right\Vert(\gamma_j) \right) \right) \\ & = \bigvee_{\gamma^\prime \subseteq \gamma} \left\Vert \biguplus_{j\in J} \varphi_j \right\Vert(\gamma^\prime) \\ & = \left\Vert \sim \biguplus_{j\in J} \varphi_j \right\Vert(\gamma) \end{align*} \noindent where the second equality holds since $\wedge$ distributes over $\vee$. 
\end{proof} \begin{proposition}\label{pil_coal_conj} Let $J$ and $K$ be finite index sets and $\varphi_j$, $\varphi_k^\prime$ \emph{fPIL} formulas for every $j\in J$ and $k\in K$. Then \[ \left(\biguplus_{j\in J} \varphi_{j}\right) \otimes \left( \biguplus_{k\in K} \varphi_{k}^\prime \right) \equiv \ \sim \left( \biguplus_{j\in J} \varphi_j \uplus \biguplus_{k\in K} \varphi_k^\prime \right) \otimes \left( \underset{(j,k)\in J\times K}{\pmb{{\Large \ovee}}} \left( \varphi_j \ { \scriptstyle \owedge} \ \varphi_k^\prime \right) \right) \] \end{proposition} \begin{proof} By Proposition \ref{absorpt_fpcl}(1) we get \begin{align*} \left(\biguplus_{j\in J} \varphi_{j}\right) \otimes \left( \biguplus_{k\in K} \varphi_{k}^\prime \right) \equiv \neg \neg \left(\left(\biguplus_{j\in J} \varphi_{j}\right) \otimes \left( \biguplus_{k\in K} \varphi_{k}^\prime \right) \right). \end{align*} \noindent By Proposition \ref{pil_coal_neg} it is valid that \begin{align*} \neg \left( \left(\biguplus_{j\in J} \varphi_{j}\right) \otimes \right. & \left. \left( \biguplus_{k\in K} \varphi_{k}^\prime \right) \right) \\ & \equiv \neg \left( \biguplus_{j\in J} \varphi_{j} \right) \oplus \neg \left( \biguplus_{k\in K} \varphi_{k}^\prime \right) \\ & \equiv \bigoplus_{j\in J} \left( ! \varphi_j \right) \oplus \sim \left( \underset{j\in J}{\pmb{{\Large \owedge}}} ! \varphi_j \right) \oplus \bigoplus_{k \in K} \left( ! \varphi_k^\prime \right) \oplus \sim \left( \underset{k\in K}{\pmb{{\Large \owedge}}} ! \varphi_k^\prime \right) \end{align*} \noindent and so \begin{align*} \neg \neg \left( \left(\biguplus_{j\in J} \right. \right. & \left. \left.\varphi_{j}\right) \otimes \left( \biguplus_{k\in K} \varphi_{k}^\prime \right) \right) \\ & \equiv \neg \left( \bigoplus_{j\in J} \left( ! \varphi_j \right) \oplus \sim \left( \pmb{{\Large \owedge}}_{j\in J} ! \varphi_j \right) \oplus \bigoplus_{k \in K} \left( ! \varphi_k^\prime \right) \oplus \sim \left( \underset{k\in K}{\pmb{{\Large \owedge}}} !
\varphi_k^\prime \right) \right) \\ & \equiv \bigotimes_{j\in J} \left( \neg ! \varphi_j \right) \otimes \neg\sim \left( \pmb{{\Large \owedge}}_{j\in J} ! \varphi_j \right) \otimes \bigotimes_{k\in K} \left( \neg ! \varphi_{k}^\prime \right) \otimes \neg \sim \left( \underset{k\in K}{\pmb{{\Large \owedge}}} ! \varphi_k^\prime \right) \\ & \equiv \bigotimes_{j\in J} \left( \sim \varphi_j \right) \otimes ! \left( \underset{j\in J}{\pmb{{\Large \owedge}}} !\varphi_j \right) \otimes \bigotimes_{k\in K} \left(\sim \varphi_{k}^\prime \right) \otimes ! \left( \underset{k\in K}{\pmb{{\Large \owedge}}} ! \varphi_k^\prime \right) \\ & \equiv \bigotimes_{j\in J} \left( \sim \varphi_j \right) \otimes \left(\underset{j\in J}{\pmb{{\Large \ovee}}} \varphi_j \right) \otimes \bigotimes_{k\in K} \left(\sim \varphi_{k}^\prime \right) \otimes \left( \underset{k\in K}{\pmb{{\Large \ovee}}} \varphi_k^\prime \right) \\ & \equiv \bigotimes_{j\in J} \left( \sim \varphi_j \right) \otimes \bigotimes_{k\in K} \left(\sim \varphi_{k}^\prime \right) \otimes \left(\underset{j\in J}{\pmb{{\Large \ovee}}} \varphi_j \right) \otimes \left( \underset{k\in K}{\pmb{{\Large \ovee}}} \varphi_k^\prime \right) \\ & \equiv \bigotimes_{j\in J} \left( \sim \varphi_j \right) \otimes \bigotimes_{k\in K} \left(\sim \varphi_{k}^\prime \right) \otimes \left(\left(\underset{j\in J}{\pmb{{\Large \ovee}}} \varphi_j \right) \ { \scriptstyle \owedge} \ \left( \underset{k\in K}{\pmb{{\Large \ovee}}} \varphi_k^\prime \right) \right) \\ & \equiv \sim \left( \biguplus_{j\in J} \varphi_j \uplus \biguplus_{k\in K} \varphi_k^\prime \right) \otimes \left(\underset{(j,k)\in J\times K}{\pmb{{\Large \ovee}}} \left( \varphi_j \ { \scriptstyle \owedge} \ \varphi_k^\prime \right) \right) \end{align*} \noindent where the third equivalence holds by Proposition \ref{pil_prop} and the last one by Proposition \ref{otimes_to_uplus}. \end{proof} We proceed with an important property of fPCL formulas over $P$ and a Kleene algebra. 
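Before stating it, we note that the algebra-level identities invoked in the propositions above (involution, idempotency, the De Morgan dualities, and absorption) can be checked exhaustively on small De Morgan algebras. The following Python sketch does this for the three-element Kleene algebra $K_\mathbf{3}$, under our illustrative encoding $\{0, 1/2, 1\}$ with $\wedge = \min$, $\vee = \max$ and $\overline{k} = 1 - k$; it is an aid to the reader, not part of the formal development.

```python
# Exhaustive check of De Morgan-algebra identities on the three-element
# Kleene algebra K3 = {0, u, 1}, encoded here as {0.0, 0.5, 1.0}.
K3 = [0.0, 0.5, 1.0]

def meet(a, b): return min(a, b)   # interprets the meet (wedge)
def join(a, b): return max(a, b)   # interprets the join (vee)
def neg(a): return 1.0 - a         # interprets the complement

for k in K3:
    assert neg(neg(k)) == k                       # involution
    assert meet(k, k) == k and join(k, k) == k    # idempotency
    for kp in K3:
        assert neg(join(k, kp)) == meet(neg(k), neg(kp))  # De Morgan law
        assert neg(meet(k, kp)) == join(neg(k), neg(kp))  # De Morgan law
        assert meet(k, join(k, kp)) == k                  # absorption
        assert join(k, meet(k, kp)) == k                  # absorption
print("all identities hold on K3")
```

Since $\oplus$, $\otimes$ and $\neg$ are interpreted pointwise through $\vee$, $\wedge$ and $\overline{\cdot}$, such a brute-force check covers the algebraic core of Propositions \ref{absorpt_fpcl}, \ref{neg} and \ref{absorpt_pcl}.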
\begin{proposition}\label{kleene_ports} Let $P$ be a set of ports and $K_\mathbf{3}$ a Kleene algebra. Then \[ \left( p \ \otimes \ ! p \right) \otimes \left(q \ {\scriptstyle \ovee} \ ! q \right) \equiv_{K_\mathbf{3}} p \ \otimes \ ! p \] \noindent where $p,q \in P.$ \end{proposition} \begin{proof} Let $\gamma\in fC(P,K_\mathbf{3})$. Then \begin{align*} \left\Vert \left( p\otimes ! p \right) \otimes \left(q \ {\scriptstyle \ovee} \ ! q \right) \right\Vert(\gamma) & = \bigwedge_{\alpha\in \gamma} \left\Vert \left( p\otimes ! p \right) \otimes \left(q \ {\scriptstyle \ovee} \ ! q \right)\right\Vert(\alpha) \\ & = \bigwedge_{\alpha\in \gamma} \left( \left(\alpha(p)\wedge \overline{\alpha(p)} \right) \wedge \left( \alpha(q) \vee \overline{\alpha(q)} \right)\right) \\ & = \bigwedge_{\alpha\in \gamma} \left(\alpha(p)\wedge \overline{\alpha(p)} \right) \\ & = \left\Vert p \otimes ! p\right\Vert(\gamma) \end{align*} where the third equality holds since $\left( k\wedge \overline{k} \right) \wedge \left( k^\prime \vee \overline{k^\prime} \right) = k\wedge \overline{k}$ for every $k,k^\prime\in K_{\mathbf{3}}$. \end{proof} Next, we give properties of $fPCL$ formulas over $P$ and a Boolean algebra. \begin{proposition}\label{boolean_prop} Let $\varphi$ and $\zeta$ be a \emph{fPIL} and a \emph{fPCL} formula, respectively, over $P$ and a Boolean algebra. Then \begin{tabular}{l l l l} $(1)$ & $\varphi \ \otimes \ ! \varphi \equiv_{\mathbf{B}} false .$ & \hspace*{1.5cm} $(3)$ & $\zeta \otimes \neg \zeta \equiv_{\mathbf{B}} false.$ \\[0.1cm] $(2)$ & $\varphi \ {\scriptstyle \ovee} \ ! \varphi \equiv_{\mathbf{B}} true .$ & \hspace*{1.5cm} $(4)$ & $ \zeta \oplus \neg \zeta \equiv_{\mathbf{B}} true. $ \end{tabular} \end{proposition} \begin{proof} By the properties of the Boolean algebra we get the proposition. \end{proof} Let $P$ be a finite set of ports. 
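The two propositions above can likewise be verified numerically. A short Python sketch, encoding the two-element Boolean algebra as $\{0,1\}$ and the three-element Kleene algebra as $\{0, 1/2, 1\}$, again with $\wedge = \min$, $\vee = \max$ and $\overline{k} = 1 - k$ (the encodings are our illustrative assumptions):

```python
# k AND (NOT k) vanishes on the two-element Boolean algebra, but not on
# the three-element Kleene algebra, where it is absorbed by k' OR (NOT k').
def neg(a): return 1.0 - a

B  = [0.0, 1.0]        # two-element Boolean algebra
K3 = [0.0, 0.5, 1.0]   # three-element Kleene algebra

# Over B: k AND (NOT k) = 0 for every k (cf. Proposition boolean_prop(1)).
assert all(min(k, neg(k)) == 0.0 for k in B)

# Over K3 the middle element witnesses that k AND (NOT k) need not be 0 ...
assert min(0.5, neg(0.5)) == 0.5

# ... and (k AND NOT k) AND (k' OR NOT k') = k AND (NOT k) for all k, k'
# (cf. Proposition kleene_ports).
assert all(min(min(k, neg(k)), max(kp, neg(kp))) == min(k, neg(k))
           for k in K3 for kp in K3)
print("Boolean/Kleene checks pass")
```

The key point in $K_\mathbf{3}$ is that $k \wedge \overline{k} \leqslant 1/2 \leqslant k^\prime \vee \overline{k^\prime}$ for all $k, k^\prime$, which is exactly why the factor $q \ {\scriptstyle \ovee} \ ! q$ is absorbed.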
By the previous results we get that $p \ \otimes \ !p \equiv_{\mathbf{2}} false $ but $p \ \otimes \ !p \not \equiv_{\mathbf{3}} false$ for every $p\in P.$ \section{Examples} \begin{figure} \caption{Peer-to-peer architecture.} \label{p2p:image} \end{figure} \begin{example}[Peer-to-Peer architecture] Peer-to-peer architecture (P2P for short) is a commonly used computer networking architecture. All peers in the architecture have some available resources, such as processing power and network bandwidth (cf. \cite{p2p_def}). Those resources are available to the other peers that participate in the architecture without the need of a central component to coordinate their interactions. This is not the case in Request/Response architecture where a coordinator is needed (cf. \cite{Ma:Co}). All peers in the architecture are both suppliers and consumers of their resources and so there is no distinction between them. In our example, we consider four components $C_1, C_2, C_3$ and $C_4$ (Figure \ref{p2p:image}). Every component has two ports denoted by $r$ and $s$ which represent, respectively, the functions receive and send. Let $J=\{1,2,3,4\}$. So, the set of ports is $P= \bigcup_{j\in J}\{r_j,s_j\}$. Each component can receive and send information to any number of other components in the architecture except itself. One possible architecture scheme is shown in Figure \ref{p2p:image}. In the sequel we construct a fPCL formula which describes the P2P architecture with four components. Firstly, consider two distinct components $C_j$ and $C_{j^\prime}$, where $j,j^\prime \in J$. The interaction between $C_j$ and $C_{j^\prime}$, where $C_j$ receives information from $C_{j^\prime}$, is characterized by the following \emph{fPIL} formula \[ \varphi_{j,j^\prime} = r_j\otimes s_{j^\prime } \otimes ! s_j \otimes ! r_{j^\prime} \otimes \bigotimes_{j^{\prime \prime} \in J \backslash\{ j,j^\prime \} } \left( ! r_{j^{\prime \prime}} \otimes !
s_{j^{\prime \prime}} \right) \] \noindent Next, as it was mentioned above, $C_j$ can receive information from more than one component. Let $J^\prime \subseteq J\backslash \{j \} $. The interactions between $C_{j}$ and $C_{j^\prime }$, where $j^\prime \in J^\prime$, are characterized by $ \zeta_{j, J^\prime} = \biguplus_{j^\prime \in J^\prime} \varphi_{j,j^\prime}.$ However, $J^\prime $ can be any non-empty subset of $J\backslash\{ j \}$. So the \emph{fPCL} formula \[ \zeta_j = \bigoplus_{J^\prime \in \mathcal{P}(J\backslash\{j\})\backslash \{ \emptyset\} } \zeta_{j, J^\prime} \] \noindent where $\mathcal{P}(J\backslash\{j\})$ denotes the power set of $J\backslash\{j\}$, describes all possible architecture schemes between $C_j$ and the remaining components in the architecture. Lastly, some components may not interact at all with the others. Therefore, the \emph{fPCL} formula \[ \zeta = \bigoplus_{J^{\prime\prime} \in \mathcal{P}(J)\backslash\{ \emptyset \}} \biguplus_{j \in J^{\prime\prime}} \zeta_{j} \] \noindent describes all possible architecture schemes of the P2P architecture with four components. In our example, we consider that every port has a degree of uncertainty. Consider the fuzzy algebra $\mathbf{F}$ and a configuration set $\gamma\in fC(P,\mathbf{F})$. For every $\alpha\in fI(P,\textbf{F})$ the value $\alpha(p)$ represents the degree of uncertainty of the port $p\in P.$ If $\alpha(p) = 0$ then the port has a completely uncertain behavior. If $\alpha(p) = 1$, then the port will participate with no uncertainty, i.e., it will participate with no fault in its behavior. Then the value $\left\Vert \sim \zeta \right\Vert(\gamma)$ gives the maximum uncertainty that can occur in the architecture considering the given interactions of $\gamma$. \end{example} \begin{example} We recall from \cite{Ma:Co} the Master/Slave architecture for two masters $M_1, M_2$ and two slaves $S_1, S_2$ with ports $m_1, m_2$ and $s_1, s_2$, respectively. Masters can interact only with slaves, and vice versa.
Each slave can interact with only one master. As it was mentioned in the Introduction, software architectures have a degree of uncertainty. We show how we can compute the uncertainty of the Master/Slave architecture over a finite number of components. We consider the fuzzy algebra $\mathbf{F}$ and the set of ports $P=\{s_1, m_1, s_2, m_2 \}$. Next, we construct the \emph{fPCL} formula which describes the architecture. The interaction between a master $m \in \{m_1, m_2\}$ and a slave $s\in \{s_1, s_2\}$ is described by the \emph{fPIL} formula \[ \varphi_{s,m} = s\otimes m\otimes \ ! s^\prime \otimes \ ! m^\prime \] \noindent where $s^\prime \not = s$ and $m^\prime \not = m$. Moreover, as it was mentioned above, every master can interact with only one slave, and vice versa. Hence, the \emph{fPCL} formula \[ \zeta = \left( \varphi_{s_1,m_1} \oplus \varphi_{s_2, m_1} \right) \uplus \left( \varphi_{s_1,m_2} \oplus \varphi_{s_2, m_2} \right) \] \noindent describes the Master/Slave architecture with two masters and two slaves. Let $\gamma\in fC(P,\mathbf{F})$ be a set of estimations of the uncertainty of the ports in the architecture and consider the \emph{fPCL} formula $\sim \zeta$. The value $\left\Vert \sim \zeta \right\Vert (\gamma)$ gives the maximum uncertainty among the possible architecture patterns. \end{example} \section{Normal Form and Decidability of Equivalence} \label{section_normal_form} In this section we examine the decidability of equivalence of fPCL formulas. Let $\zeta $ and $\zeta^\prime$ be fPCL formulas over the set of ports $P=\{p,q,r\}$ and the fuzzy algebra $\mathbf{F}$. By Definition \ref{fpcl_equiv}, $\zeta \equiv_{\textbf{F}} \zeta^\prime$ if $\left\Vert \zeta \right\Vert(\gamma) = \left\Vert\zeta^\prime \right\Vert(\gamma)$ for every $\gamma\in fC(P,\mathbf{F})$. However, the set $fC(P,\mathbf{F})$ is infinite and so it is impossible to check the equivalence in this way.
This is not the case for fPCL formulas over De Morgan algebras with a finite set $K$, such as the two-element Boolean algebra and the three-element Kleene algebra. However, if we prove that two fPCL formulas have the same normal form, then they are equivalent. In the sequel, we show that every fPCL formula over $P$ and a Kleene algebra $K_\mathbf{3}$, can be equivalently written in a normal form. Consequently, we show that the equivalence problem for fPCL formulas over $P$ and a Kleene algebra $K_\mathbf{3}$ is decidable. In the following, we give some definitions needed for the normal form of our fPCL formulas. \begin{definition} Let $P$ be a set of ports. A \emph{fPIL} formula $\varphi $ is called an f-monomial if it is of the form $$ \varphi = \underset{p_1\in P_1}{\pmb{{\Large \owedge}}} p_1 \ { \scriptstyle \owedge} \ \underset{p_2\in P_2}{\pmb{{\Large \owedge}}} ! p_2$$ \noindent where $P_1, P_2 \subseteq P $ and $P_1\cup P_2 \not = \emptyset$. \end{definition} In the previous definition, $P_1 \cap P_2$ can be either empty or not. Let $P=\{ p,q,r \}$ be a set of ports. The fPIL formulas $p \ { \scriptstyle \owedge} \ ! p \ { \scriptstyle \owedge} \ ! q$ and $p\ { \scriptstyle \owedge} \ r$ are f-monomials. \begin{definition} Let $P$ be a set of ports and $K$ a De Morgan algebra. A \emph{fPIL} formula $\varphi$ is said to be in $fpil$-normal form if it is of the form \begin{enumerate}[$(1)$] \item $\varphi \ \dot{\equiv} \ \underset{i\in I}{\pmb{{\Large \ovee}}} \ \varphi_i $, where $I$ is a finite index set, $\varphi_i$ is an f-monomial for every $i\in I$ and $\varphi_i \dot{\not \equiv} \varphi_{i^\prime}$ for every $i, i^\prime \in I$ with $i\not = i^\prime$, or \item $\varphi \ \dot{\equiv} \ true$, or \item $\varphi \ \dot{\equiv} \ false.$ \end{enumerate} \end{definition} \begin{definition} Let $P$ be a finite set of ports and $K$ a De Morgan algebra.
A \emph{fPCL} formula $\zeta$ over $P$ and $K$ is said to be in normal form if it is of the following form: \begin{enumerate}[$(1)$] \item $\zeta = \bigoplus_{i\in I}\biguplus_{j\in J_i} \varphi_{i,j} $, where $I, J_i$ are finite index sets for every $i\in I$ and $\varphi_{i,j}\not \equiv false$ is in $fpil$-normal form for every $i\in I$ and $j\in J_i$, or \item $\zeta = true$, or \item $\zeta=false.$ \end{enumerate} \end{definition} By Propositions \ref{absorpt_pil}, \ref{absorpt_fpcl}, \ref{pil_prop}, for every fPCL formula in normal form we can construct its equivalent one in normal form satisfying the following statements: \begin{enumerate}[(1)] \item Let $i\in I$. Then $\varphi_{i, j} \not \equiv \varphi_{i, j^\prime}$ for every $j \not = j^\prime$. \item Let $i,i^\prime \in I$ with $i\not = i^\prime$. Then $\biguplus_{j\in J_i} \varphi_{i,j} \not \equiv \biguplus_{j\in J_{i^\prime}} \varphi_{i^\prime,j}$. \end{enumerate} \begin{quotation} In the sequel, every fPCL formula in normal form is considered to satisfy the above statements. \end{quotation} Next, we present our results on the existence and the construction of the normal form fPCL formulas. But first, we need to note a very important observation. For this we give the following example. \begin{example}\label{example_equiv} Let $P=\{ p,q \}$ be a set of ports and the \emph{fPIL} formulas $\varphi = p \ \ { \scriptstyle \owedge} \ \ ! p$ and $\varphi^\prime = \left( p \ \ { \scriptstyle \owedge} \ \ ! p \ { \scriptstyle \owedge} \ q \right) \ {\scriptstyle \ovee} \ \left( p \ \ { \scriptstyle \owedge} \ \ ! p \ { \scriptstyle \owedge} \ \ ! q \right) $. Those two formulas are in normal form. Considering an arbitrary De Morgan algebra we get that $\varphi\not \equiv \varphi^\prime$ since their normal forms are not equivalent. However, we prove they are equivalent over the fuzzy algebra. For this \begin{align*} \varphi^\prime & = \left( p \ \ { \scriptstyle \owedge} \ \ ! 
p \ { \scriptstyle \owedge} \ q \right) \ {\scriptstyle \ovee} \ \left( p \ \ { \scriptstyle \owedge} \ \ ! p \ \ { \scriptstyle \owedge} \ \ ! q \right) \\ & \equiv_{\textbf{F}} \left( p \ \ { \scriptstyle \owedge} \ \ ! p \right) \ { \scriptstyle \owedge} \ \left( q \ {\scriptstyle \ovee} \ !q \right) \\ & \equiv_{\textbf{F}} p \ \ { \scriptstyle \owedge} \ \ ! p = \varphi \end{align*} \noindent where the first equivalence holds since $\ { \scriptstyle \owedge} \ $ distributes over $ \ {\scriptstyle \ovee} \ $ and the second one by Proposition \ref{kleene_ports}. We conclude that $\varphi \equiv_{\textbf{F}} \varphi^\prime $. Analogously, we prove that $\varphi \equiv_{K_\textbf{3}} \varphi^\prime$. Also, $\varphi \equiv_{\textbf{B}} \varphi^\prime $ since both formulas are equivalent to $false$ over a Boolean algebra. However, $\varphi$ and $\varphi^\prime$ are not equivalent if we consider the four element algebra $\textbf{4}$. Let $\gamma=\{ \alpha \} \in fC(P,\textbf{4})$, where $\alpha(p) = u$ and $\alpha(q)=w$. Then $\left\Vert\varphi \right \Vert (\gamma) = u \not = 0 = \left\Vert \varphi ^\prime \right\Vert(\gamma)$. So $\varphi \not \equiv_{\textbf{4}} \varphi^\prime $. \end{example} By Example \ref{example_equiv}, we observe that for the construction of the normal form of a fPCL formula we need to take into account the properties of the De Morgan algebra. In the following, we show that for every fPCL formula over $P$ and a Kleene algebra, we can effectively construct its equivalent fPCL formula in normal form. \begin{theorem}\label{kleene_normal_form_theorem} Let $P$ be a set of ports and $K$ an arbitrary De Morgan algebra. 
Then for every \emph{fPCL} formula $\zeta_1 \in fPCL(K,P)$, $\zeta_2 \in fPCL(K_{\mathbf{3}},P)$ and $\zeta_3 \in fPCL(\mathbf{B},P)$, we can effectively construct an equivalent \emph{fPCL} formula $\zeta_1^\prime\in fPCL(K,P)$, $\zeta_2^\prime\in fPCL(K_{\mathbf{3}},P)$ and $\zeta_3^\prime\in fPCL(\mathbf{B},P)$, respectively, in normal form. The time complexity of the construction is polynomial. \end{theorem} \begin{proof} We prove our theorem by induction on the structure of fPCL formulas over $P$ and a Kleene algebra $K_{\mathbf{3}}$. We deal with the other cases at the end of this proof. Let $\zeta=\varphi \in fPIL(K_{\mathbf{3}}, P)$. If $\zeta$ is equal to $true$ or $false$, then we are done. Otherwise, by Propositions \ref{neg_i_oplus}, \ref{pil_true_false}, \ref{fpil_associa}, \ref{otimes_over_oplus_i} and \ref{absorpt_pil} we get its equivalent formula in $fpil$-normal form. Then we go to Step 2(2) and by Propositions \ref{otimes_over_oplus_i} and \ref{absorpt_pil} we get the equivalent formula of $\zeta$ in normal form. Now, let $\zeta_1, \zeta_2$ be fPCL formulas and assume that neither $\zeta_1$ nor $\zeta_2$ is equivalent to $true$ or $false$. The remaining cases can be treated analogously, using the cases we show below and the properties of the De Morgan algebra. Let $\zeta_1^\prime= \bigoplus_{i_1\in I_1}\biguplus_{j_1\in J_{i_1}} \varphi_{i_1,j_1}$ and $ \zeta_2^\prime = \bigoplus_{i_2\in I_2}\biguplus_{j_2\in J_{i_2}} \varphi_{i_2,j_2}$ be their equivalent normal forms, respectively. Then we go to Step 1. \begin{flushleft} \textbf{\underline{Step 1}} \end{flushleft} \begin{enumerate}[(1)] \item Firstly, let $\zeta=\zeta_1 \oplus \zeta_2$. The formula $\zeta$ is equivalent to $\zeta_1^\prime \oplus \zeta_2^\prime$, which is of the form $\bigoplus_{i\in I} \biguplus_{j\in J_i} \varphi_{i,j} $ where $\varphi_{i,j}$ is in $fpil$-normal form for every $j\in J_i$. Then we go to Step 2.
\item Next, let $\zeta = \zeta_1 \uplus \zeta_2.$ Then \begin{align*} \zeta & \equiv \zeta_1^\prime \uplus \zeta_2^\prime \\ & \equiv \left( \bigoplus_{i_1\in I_1}\biguplus_{j_1\in J_{i_1}} \varphi_{i_1,j_1} \right) \uplus \left( \bigoplus_{i_2\in I_2}\biguplus_{j_2\in J_{i_2}} \varphi_{i_2,j_2} \right) \\ & \equiv \bigoplus_{i_1\in I_1} \bigoplus_{i_2\in I_2} \left( \biguplus_{j_1\in J_{i_1}} \varphi_{i_1,j_1} \uplus \biguplus_{j_2\in J_{i_2}} \varphi_{i_2,j_2} \right) \end{align*} \noindent where the last equivalence holds by Proposition \ref{uplus_over_oplus}. Then we go to Step 2. \item Now, let $\zeta=\zeta_1 \otimes \zeta_2.$ Then we get \begin{align*} \zeta & \equiv \zeta_1^\prime \otimes \zeta_2^\prime \\ & \equiv \left( \bigoplus_{i_1\in I_1}\biguplus_{j_1\in J_{i_1}} \varphi_{i_1,j_1} \right) \otimes \left( \bigoplus_{i_2\in I_2}\biguplus_{j_2\in J_{i_2}} \varphi_{i_2,j_2} \right) \\ & \equiv \bigoplus_{(i_1, i_2) \in I_1\times I_2} \left( \left( \biguplus_{j_1\in J_{i_1}} \varphi_{i_1,j_1} \right) \otimes \left( \biguplus_{j_2\in J_{i_2}} \varphi_{i_2,j_2} \right) \right) \\ & \equiv \bigoplus_{(i_1, i_2) \in I_1\times I_2} \left( \left( \sim \biguplus_{j_1\in J_{i_1}} \varphi_{i_1,j_1} \uplus \biguplus_{j_2\in J_{i_2}} \varphi_{i_2,j_2} \right) \otimes \right. \\ & \hspace*{5cm} \left.\underset{(j_1,j_2)\in J_{i_1}\times J_{i_2}}{\pmb{{\Large \ovee}}} \left( \varphi_{i_1,j_1} \ { \scriptstyle \owedge} \ \varphi_{i_2,j_2} \right) \right) \end{align*} \noindent where the third equivalence holds by Proposition \ref{otimes_over_oplus} and the fourth one by Proposition \ref{pil_coal_conj}. Consider the fPIL formula $\varphi_{(i_1, i_2)} = \underset{(j_1,j_2)\in J_{i_1}\times J_{i_2}}{\pmb{{\Large \ovee}}} $ $\left( \varphi_{i_1,j_1} \ { \scriptstyle \owedge} \ \varphi_{i_2,j_2} \right)$ for every $(i_1, i_2) \in I_1\times I_2$. 
Then by Propositions \ref{absorpt_fpcl} and \ref{otimes_distib_coale} we get \begin{align*} \zeta & \equiv \bigoplus_{(i_1, i_2) \in I_1\times I_2} \left( \biguplus_{j_1\in J_{i_1}} \left(\varphi_{i_1,j_1} \otimes \varphi_{(i_1, i_2)}\right) \uplus \biguplus_{j_2\in J_{i_2}} \left(\varphi_{i_2,j_2} \otimes \varphi_{(i_1, i_2)} \right) \uplus \right.\\ & \left. \hspace*{8cm}\left(\varphi_{(i_1, i_2)} \otimes true\right) \right) \\ & \equiv \bigoplus_{(i_1, i_2) \in I_1\times I_2} \left( \biguplus_{j_1\in J_{i_1}} \left(\varphi_{i_1,j_1} \ { \scriptstyle \owedge} \ \varphi_{(i_1, i_2)}\right) \uplus \biguplus_{j_2\in J_{i_2}} \left(\varphi_{i_2,j_2} \ { \scriptstyle \owedge} \ \varphi_{(i_1, i_2)} \right) \uplus \varphi_{(i_1, i_2)} \right) . \end{align*} \noindent Then we go to Step 2. \item Let us assume that $\zeta = \neg \zeta_1$. Then \begin{align*} \zeta & \equiv \neg \zeta_1^\prime \\ & \equiv \neg \left( \bigoplus_{i_1\in I_1}\biguplus_{j_1\in J_{i_1}} \varphi_{i_1,j_1} \right) \\ & \equiv \bigotimes_{i_1\in I_1}\left(\neg \left(\biguplus_{j_1\in J_{i_1}} \varphi_{i_1,j_1} \right)\right) \\ & \equiv \bigotimes_{i_1\in I_1}\left( \bigoplus_{j_1\in J_{i_1}} \left( ! \varphi_{i_1, j_1} \right) \oplus \sim \left( \pmb{{\Large \owedge}}_{j_1\in J_{i_1}} ! \varphi_{i_1, j_1} \right) \right) \end{align*} \noindent where the third equivalence holds by Proposition \ref{neg} and the fourth one by Proposition \ref{pil_coal_neg}. In the sequel, by applying Propositions \ref{neg_i_oplus}, \ref{otimes_over_oplus_i}, \ref{otimes_over_oplus}, \ref{otimes_distib_coale} and \ref{pil_coal_conj} we get a formula of the form $\bigoplus_{i\in I} \biguplus_{j\in J_i} \varphi_{i,j} $ where $\varphi_{i,j} = \underset{k\in K_{i,j}}{\pmb{{\Large \ovee}}} \varphi_{i,j,k} $ and $\varphi_{i,j,k}$ is an f-monomial for every $k\in K_{i,j}$ and $j\in J_i$. Next, we proceed to Step 2.
\end{enumerate} \begin{flushleft} \textbf{\underline{Step 2}} \end{flushleft} Consider a formula of the form $\bigoplus_{i\in I} \biguplus_{j\in J_i} \varphi_{i,j} $ where $\varphi_{i,j} = \pmb{{\Large \ovee}}_{k\in K_{i,j}} \varphi_{i,j,k} $ and $\varphi_{i,j,k}$ is an f-monomial for every $k\in K_{i,j}$ and $j\in J_i$. In order to get its equivalent formula in normal form we apply the following. \begin{enumerate}[(1)] \item Firstly, we apply Propositions \ref{pil_true_false}, \ref{absorpt_pil}, \ref{absorpt_fpcl} and \ref{pil_prop} (1) in order to discard any repetitions when the operations allow it. So we get a formula of the form $\bigoplus_{i^\prime\in I^\prime} \biguplus_{j^\prime\in J^\prime_{i^\prime}} \varphi_{i^\prime,j^\prime} $ where $\varphi_{i^\prime,j^\prime}= \bigoplus_{k^\prime\in K_{i^\prime,j^\prime}^\prime} \varphi_{i^\prime,j^\prime,k^\prime}$ is in $fpil$-normal form for every $(i^\prime, j^\prime) \in I^\prime\times J^\prime_{i^\prime}$. \item Next, since we consider a Kleene algebra, we apply Proposition \ref{kleene_ports}. Consider, for instance, an f-monomial $\varphi$ of the following form: \[ \varphi = \bigotimes_{p_1\in P_1} \left( p_1\otimes ! p_1 \right)\otimes \bigotimes_{p_2\in P_2} p_2 \otimes \bigotimes_{p_3\in P_3} ! p_3 \] \noindent where the sets $P_1, P_2, P_3 \subseteq P$ are pairwise disjoint. Then we consider the set $P^\prime = P\backslash (P_1\cup P_2\cup P_3) $ and by Proposition \ref{kleene_ports} we get the following: \[ \varphi \equiv_{K_\mathbf{3}} \bigotimes_{p_1\in P_1} \left( p_1\otimes ! p_1 \right)\otimes \bigotimes_{p_2\in P_2} p_2 \otimes \bigotimes_{p_3\in P_3} ! p_3 \otimes \bigotimes_{p\in P^\prime } \left( p \ {\scriptstyle \ovee} \ ! p \right). \] \noindent We follow the above procedure for every f-monomial $\varphi$ of the form $ \bigotimes_{p_1\in P_1} \left( p_1\otimes ! p_1 \right)\otimes \bigotimes_{p_2\in P_2} p_2 \otimes \bigotimes_{p_3\in P_3} ! p_3 $.
By this step we make the ports that were eliminated by the property of the Kleene algebra appear explicitly. \item Lastly, we apply Propositions \ref{pil_true_false}, \ref{absorpt_pil}, \ref{absorpt_fpcl} and \ref{pil_prop}(1) to discard again any repetitions created by the previous step. \end{enumerate} By following the steps given above, we get an equivalent formula in normal form. To complete the proof, it remains to justify the claimed time complexity of the construction presented above. In every step of our construction, we applied the distributivity and idempotency properties of our logic, and each application is done in polynomial time. This concludes our proof. Consider a Boolean algebra and an \emph{fPCL} formula $\zeta\in fPCL(\mathbf{B}, P)$. For the construction of its equivalent \emph{fPCL} formula over $P$ and $\mathbf{B}$ in normal form, we follow the above proof where we replace Step 2(2) with the application of Proposition \ref{boolean_prop}. If $K$ is an arbitrary De Morgan algebra and $\zeta\in fPCL(K,P)$, then for the construction of its equivalent \emph{fPCL} formula over $P$ and $K$ in normal form, we follow the above proof without Steps 2(2) and 2(3). The complexity of those constructions is again polynomial. \end{proof} Next, we prove that the equivalence problem for \emph{fPCL} formulas is decidable. \begin{theorem}\label{theor_equiv} Let $K$ be a De Morgan algebra and $P$ a set of ports. Then, for every $\zeta_1, \zeta_2 \in fPCL(K,P)$ the equivalence $\zeta_1\equiv \zeta_2 $ is decidable. The run time is polynomial. \end{theorem} \begin{proof} By Theorem \ref{kleene_normal_form_theorem} we can effectively construct fPCL formulas $\zeta_1^\prime , \zeta_2^\prime$ in normal form such that $\zeta_1\equiv\zeta_1^\prime$ and $\zeta_2\equiv\zeta_2^\prime$. To decide whether $\zeta_1$ and $\zeta_2$ are equivalent, it suffices to examine whether $\zeta_1^\prime \equiv \zeta_2^\prime$.
For this, we write our formulas in the form of sets which we compare using Algorithm 1 given in Appendix~\ref{fig:equiv}. Firstly, we consider that $\zeta_1^\prime=\bigoplus_{i\in I}\biguplus_{j\in J_{i}} \varphi_{i,j}$ where $\varphi_{i,j} \not = false$ is in $fpil$-normal form for every $i\in I$ and $j\in J_i$. Hence, $\zeta_1^\prime= \bigoplus_{i\in I}\biguplus_{j\in J_{i}} \pmb{{\Large \ovee}}_{k\in K_{i,j}} \varphi_{i,j,k}$ where $\varphi_{i,j,k}$ is an f-monomial for every $k\in K_{i,j}$. If there exist $i\in I$ and $j\in J_i$ such that $\varphi_{i,j} \equiv true$, then $\varphi_{i,j}$ can be written as $\pmb{{\Large \ovee}}_{k\in K_{i,j}} true$ where $K_{i,j} = \{1\}$. Analogously, $\zeta_2^\prime \equiv \bigoplus_{m\in M}\biguplus_{n\in N_m} \pmb{{\Large \ovee}}_{l\in L_{m,n}} \varphi_{m,n,l}^\prime$. Next, for every f-monomial $\varphi_{i,j,k} = \pmb{{\Large \owedge}}_{p\in P_{i,j,k}}p \ { \scriptstyle \owedge} \ \pmb{{\Large \owedge}}_{p\in P_{i,j,k}^\prime} !p $ in $\zeta_1^\prime$, we let the set $S_{i,j,k} = \underset{p\in P_{i,j,k}}{\bigcup} \{p \} \cup \underset{p\in P_{i,j,k}^\prime}{\bigcup} \{!p\} $. If $\varphi_{i,j,k} =true$, then $S_{i,j,k} = \{true\} $. Then the following set \begin{enumerate}[$\bullet$] \item $S_{\zeta_1^\prime} = \underset{i\in I}{\bigcup } \{ \underset{j\in J_i}{\bigcup } \{ \underset{k\in K_{i,j}}{\bigcup } \{ S_{i,j,k} \} \} \} $ \end{enumerate} \noindent represents $\zeta_1^\prime$ in the form of sets. If $\zeta_1^\prime =true$, then $S_{\zeta_1^\prime} = \{ \{ \{ \{ true \} \} \} \}$ since $I = \{1\}$, $J_1 = \{1\}$, $K_{1,1} = \{1\}$ and $S_{1,1,1}=\{ true \}$. Analogously, if $\zeta_1^\prime =false$, then $S_{\zeta_1^\prime} = \{ \{ \{ \{ false \} \} \} \}$. Next, we compute the set $S_{\zeta_2^\prime}$ which represents $\zeta_2^\prime $ in the form of sets.
We need to note that the representation of an fPCL formula $\zeta = \bigoplus_{i\in I} \biguplus_{j\in J_i} \pmb{{\Large \ovee}}_{k\in K_{i,j}}\varphi_{i,j,k}$, which is in normal form, in the form of sets is possible since \begin{enumerate}[(1)] \item $\biguplus_{j\in J_i} \pmb{{\Large \ovee}}_{k\in K_{i,j}}\varphi_{i,j,k} \not \equiv \biguplus_{j\in J_{i^\prime}} \pmb{{\Large \ovee}}_{k\in K_{i^\prime,j}}\varphi_{i^\prime,j,k}$ for every $i,i^\prime \in I$ with $i\not = i^\prime$, \item $ \pmb{{\Large \ovee}}_{k\in K_{i,j}}\varphi_{i,j,k} \not \equiv \pmb{{\Large \ovee}}_{k\in K_{i,j^\prime}}\varphi_{i,j^\prime,k} $ for every $j, j^\prime\in J_i$ with $j\not = j^\prime$, and \item $ \varphi_{i,j,k} \not \equiv \varphi_{i,j,k^\prime} $ for every $k, k^\prime \in K_{i,j}$ with $k\not = k^\prime.$ \end{enumerate} In order to prove if $\zeta_1^\prime$ and $\zeta_2^\prime $ are equivalent, we need to examine if the sets $S_{\zeta_1^\prime}$ and $S_{\zeta_2^\prime}$ are equal. For this we use Algorithm 1 given in Appendix~\ref{fig:equiv}. Given the sets $S_{\zeta_1^\prime}$ and $S_{\zeta_2^\prime}$ as inputs for Algorithm 1, we can decide whether $S_{\zeta_1^\prime}=S_{\zeta_2^\prime}$ or not. As for the time complexity of the algorithm, we prove that it is polynomial. The construction of the sets $S_{\zeta_1^\prime} $ and $S_{\zeta_2^\prime} $ is done in polynomial time. As for the algorithm in Appendix~\ref{fig:equiv}, we observe that there are eight nested for loops. Assume that the variable of the $i$-th for loop, where $i\in \{1, \dots, 8\}$, ranges from $1$ to $n_i \in \mathbb{N}^*$. So, the total number of computations is $n_1\cdot \ldots \cdot n_8$. Let $n=\max\{ n_1, \dots, n_8 \}$.
Then $n_1\cdot \ldots \cdot n_8\leq n^8$ and the complexity is $ \mathcal{O}\left(n_1\cdot \ldots \cdot n_8\right) = \mathcal{O}\left( n^8\right).$ Hence, considering the complexity of the construction of the normal forms of $\zeta_1$ and $\zeta_2$, we conclude that the run time of the equivalence problem is polynomial. \end{proof} \begin{remark} Let $\zeta_1, \zeta_2 \in fPCL(K,P)$. By Theorem \ref{theor_equiv} we can decide whether $\zeta_1\equiv \zeta_2$ or not. If $\zeta_1\equiv \zeta_2$, then $\zeta_1\equiv_{K_{con}} \zeta_2$ for every De Morgan algebra $K_{con}$. However, as shown in Example \ref{example_equiv}, it is possible that $\zeta_1\equiv_{K_\mathbf{3}} \zeta_2$ but $\zeta_1\not \equiv \zeta_2$. Hence, if $\zeta_1 \not \equiv \zeta_2$, then we can examine whether $\zeta_1 \equiv_{K_\mathbf{3}} \zeta_2$ and/or $\zeta_1 \equiv_{\mathbf{B}} \zeta_2$ following the constructions in Theorems \ref{kleene_normal_form_theorem} and \ref{theor_equiv}. \end{remark} \appendix \section{Algorithm for Decidability of Equivalence}\label{fig:equiv} Let $\zeta_1$ and $\zeta_2$ be fPCL formulas over $P$ and $K$. By Theorem \ref{kleene_normal_form_theorem} we can effectively construct their equivalent fPCL formulas $\zeta_1^\prime$ and $\zeta_2^\prime$, respectively, in normal form. By Theorem \ref{theor_equiv}, if $S_{\zeta_1^\prime} = S_{\zeta_2^\prime}$, then $\zeta_1^\prime \equiv \zeta_2^\prime$. In order to show whether $S_{\zeta_1^\prime}$ and $S_{\zeta_2^\prime}$ are equal or not, we give Algorithm 1, which takes the sets $S_{\zeta_1^\prime}$ and $S_{\zeta_2^\prime}$ as input. Let $P=\{p,q,r\}$ be a set of ports.
For better understanding, the reader can apply the algorithm for the sets \begin{enumerate}[$\bullet$] \item $S_{\zeta_1^\prime} = \left\{ \left\{ \left\{ \left\{ p,q\right\}, \left\{ r\right\} \right\}, \left\{ \left\{ p,!r\right\} \right\} \right\}, \left\{ \left\{ \left\{ !p,!r\right\}, \left\{p, r\right\} \right\}, \left\{ \left\{ p\right\} \right\}, \left\{ \left\{ true \right\}\right\} \right\} \right\} $ and \item $S_{\zeta_2^\prime} = \left\{ \left\{ \left\{ \left\{ p,q\right\}, \left\{! r\right\} \right\}, \left\{ \left\{ p\right\} \right\} \right\}, \left\{ \left\{ \left\{ !q\right\} \right\}, \left\{ \left\{ !p\right\} \right\}, \left\{ \left\{ r \right\}\right\} \right\} \right\} $ \end{enumerate} \noindent which represent the fPCL formulas \begin{itemize} \item $\zeta_1^\prime = \left(\left( \left(p\ { \scriptstyle \owedge} \ q\right) \ {\scriptstyle \ovee} \ r \right) \uplus \left( p\ { \scriptstyle \owedge} \ !r \right)\right) \oplus \left( \left((!p\ { \scriptstyle \owedge} \ !r) \ {\scriptstyle \ovee} \ (p\ { \scriptstyle \owedge} \ r)\right)\uplus p \uplus true \right)$ and \item $\zeta_2^\prime = \left(\left( \left(p\ { \scriptstyle \owedge} \ q\right) \ {\scriptstyle \ovee} \ !r \right) \uplus q\right) \oplus \left( !q\uplus !p \uplus r \right)$, \end{itemize} \noindent respectively. 
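The comparison that Algorithm 1 performs on these example sets can be sketched in Python. The encoding is our own assumption: ports and negated ports are strings (e.g. \verb|"!r"|), and each nesting level of $S_{\zeta_1^\prime}$ becomes one level of \verb|frozenset|.

```python
# Hypothetical string encoding of the example sets: "p" is a port, "!p" its
# negation, and every nesting level of the formula is one level of frozenset.
def FS(*xs):
    return frozenset(xs)

S_zeta1 = FS(
    FS(FS(FS("p", "q"), FS("r")), FS(FS("p", "!r"))),
    FS(FS(FS("!p", "!r"), FS("p", "r")), FS(FS("p")), FS(FS("true"))),
)
S_zeta2 = FS(
    FS(FS(FS("p", "q"), FS("!r")), FS(FS("p"))),
    FS(FS(FS("!q")), FS(FS("!p")), FS(FS("r"))),
)

# Frozenset equality compares every level up to order and repetition, which is
# exactly what the eight nested for loops of Main and SetEq_1..SetEq_3 decide.
verdict = "Equivalent" if S_zeta1 == S_zeta2 else "Not equivalent"
print(verdict)  # -> Not equivalent
```

Since \verb|frozenset| equality already disregards the order and multiplicity of elements at every level, this single comparison plays the role of the eight nested loops of Algorithm 1.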
\noindent \scalebox{0.8}{\begin{minipage}[t!]{.6\textwidth} \fbox{\parbox{0.8\linewidth}{\begin{algorithm}[H] \caption{Main }\label{main:alg} \textbf{Input \hspace*{0.2cm}: $S_{\zeta_1^\prime}$, $S_{\zeta_2^\prime}$} \\ \hspace*{0.5cm} \begin{algorithmic} \If{$card(S_{\zeta_1^\prime})=card(S_{\zeta_2^\prime})$} \State $k\gets 0 $ \For{$i$ in range $(1, card(S_{\zeta_1^\prime}))$} \For{$j$ in range $(1, card(S_{\zeta_2^\prime}))$} \If{SetEq$_1(S_{\zeta_1^\prime}[i], S_{\zeta_2^\prime}[j]) = true$} \State $k \gets k+1$ \EndIf \EndFor \EndFor \If{$k=card(S_{\zeta_1^\prime})$} \State \text{``Equivalent"} \Else \State \text{``Not equivalent"} \EndIf \Else \State \text{``Not equivalent"} \EndIf \end{algorithmic} \end{algorithm}}} \end{minipage}} \hspace*{0.2cm} \scalebox{0.8}{\begin{minipage}[t!]{.6\textwidth} \fbox{\parbox{0.8\linewidth}{\begin{algorithm}[H] \caption{SetEq$_1$ } \textbf{Input \hspace*{0.2cm}: A, B} \\ \textbf{Output: E} \begin{algorithmic} \If{$card(A)=card(B)$} \State $k\gets 0 $ \For{$i$ in range $(1, card(A))$} \For{$j$ in range $(1, card(B))$} \If{$SetEq_2(A[i],B[j]) = true$} \State $k \gets k+1$ \EndIf \EndFor \EndFor \If{$k=card(A)$} \State $E \gets true$ \Else \State $E \gets false$ \EndIf \EndIf \\ \Return $E$ \end{algorithmic} \end{algorithm} }} \end{minipage}} \noindent \scalebox{0.8}{\begin{minipage}[t]{.6\textwidth} \fbox{\parbox{0.8\linewidth}{\begin{algorithm}[H] \caption{SetEq$_2$ } \textbf{Input \hspace*{0.2cm}: A, B} \\ \textbf{Output: E} \begin{algorithmic} \If{$card(A)=card(B)$} \State $k\gets 0 $ \For{$i$ in range $(1, card(A))$} \For{$j$ in range $(1, card(B))$} \If{$SetEq_3(A[i],B[j]) = true$} \State $k \gets k+1$ \EndIf \EndFor \EndFor \If{$k=card(A)$} \State $E \gets true$ \Else \State $E \gets false$ \EndIf \EndIf \\ \Return $E$ \end{algorithmic} \end{algorithm}}} \end{minipage} } \hspace*{0.2cm} \scalebox{0.8}{\begin{minipage}[t!]{.6\textwidth} \fbox{\parbox{0.8\linewidth}{\begin{algorithm}[H] \caption{SetEq$_3$ } \textbf{Input \hspace*{0.2cm}:
A, B} \\ \textbf{Output: E} \begin{algorithmic} \If{$card(A)=card(B)$} \State $k\gets 0 $ \For{$i$ in range $(1, card(A))$} \For{$j$ in range $(1, card(B))$} \If{$A[i]=B[j] $} \State $k \gets k+1$ \EndIf \EndFor \EndFor \If{$k=card(A)$} \State $E \gets true$ \Else \State $E \gets false$ \EndIf \EndIf \\ \Return $E$ \end{algorithmic} \end{algorithm}}} \end{minipage}} \end{document}
\begin{document} \title {Time-Energy Costs of Quantum Measurements} \author{Chi-Hang Fred Fung} \email{chffung@hku.hk} \affiliation{Department of Physics and Center of Theoretical and Computational Physics, University of Hong Kong, Pokfulam Road, Hong Kong} \author{H.~F. Chau} \affiliation{Department of Physics and Center of Theoretical and Computational Physics, University of Hong Kong, Pokfulam Road, Hong Kong} \begin{abstract} Time and energy of quantum processes are a tradeoff against each other. We propose to ascribe to any given quantum process a time-energy cost to quantify how much computation it performs. Here, we analyze the time-energy costs for general quantum measurements, along similar lines to our previous work for quantum channels, and prove exact and lower bound formulae for the costs. We use these formulae to evaluate the efficiencies of actual measurement implementations. We find that one implementation for a Bell measurement is optimal in time-energy. We also analyze the time-energy cost for unambiguous state discrimination and find evidence that only a finite time-energy cost is needed to distinguish any number of states. \end{abstract} \pacs{03.67.-a, 03.67.Lx, 89.70.Eg} \maketitle \section{Introduction} Quantum mechanical systems cannot evolve with an arbitrary speed and an arbitrary energy. The evolution speed and system energy are constrained by time-energy uncertainty relations (TEURs)~\cite{Lloyd2000}. Many TEURs have been proposed and investigated~\cite{Mandelstam1945, Bhattacharyya1983,Anandan1990,Uhlmann1992,Vaidman1992,Pfeifer1993,Margolus1996,Margolus1998,Chau2010, Giovannetti2003,Giovannetti2003b,Zander2007, Taddei2013,delCampo2013} and they follow a general form in which the product of the evolution time (needed to evolve the initial state to the final state) and the system energy (or a function of the eigen-energies) is lower bounded by some number dependent on the closeness between the initial and final states.
Recognizing that time and energy are a tradeoff against each other, we proposed to regard time energy as a single measure for the resource consumed by a quantum process~\cite{Chau2011,Fung:2013:Time-energy}. Essentially, a high time-energy cost indicates that the process requires a long time to complete at a low system energy level or a high system energy level for a short completion time. We motivated definitions for the time-energy measures for unitary transformations~\cite{Chau2011} and quantum channels~\cite{Fung:2013:Time-energy} by a TEUR proved earlier~\cite{Chau2010}. In this work, we investigate the time-energy measure for general quantum measurements, also called positive operator-valued measures (POVMs). Quantum measurements are quantum evolutions of some quantum states that eventually produce classical outputs (e.g., by triggering a detector). Thus, quantum measurements are also constrained by TEURs and the concept of time-energy cost also applies to them. Essentially, ``easy'' measurements (e.g., directly detecting the input states) would incur small time-energy costs. More specifically, a quantum measurement can be considered as a unitary operation in a larger Hilbert space containing the system to be measured and an ancillary system indicating the measurement outcome. We define the time-energy cost of a measurement as the time-energy cost for this unitary operation, which we have already quantified before~\cite{Chau2011,Fung:2013:Time-energy}. The time-energy cost of a measurement given the POVM description may be used to judge the efficiency of an actual implementation. The time-energy cost of an implementation can be computed based on the actual experimental components (such as beam splitters) used, and the time-energy cost of the POVM can be computed (or bounded) using the results of this work.
A small difference between these cost values indicates that the actual implementation is quite efficient already, consuming close to the fundamental minimal time and energy to run. In this work, we derive lower bounds on the time-energy cost of POVM and obtain the exact value for the time-energy cost in some special cases. These results are applied to some examples. In particular, we compute the time-energy costs of linear-optics-based implementations of Bell measurements and a POVM with rank-2 elements, and compare them with the ideal time-energy costs given the POVM descriptions. We find that the Bell measurement implementation that projects onto one Bell state is optimal, but the one that projects onto two Bell states is not. Also, our calculation indicates that the implementation of the POVM with rank-2 elements may be far from optimal. In addition, we study the time-energy cost for the optimal unambiguous state discrimination (USD) for distinguishing symmetric coherent states. Interestingly, the cost lower bound increases but saturates to some value as the number of states increases. This may indicate that a finite time-energy resource is enough to distinguish any number of states. We motivate a time-energy measure based on the following TEUR by Chau~\cite{Chau2010}. Given a time-independent Hamiltonian $H$ of a system, the time $t$ needed to evolve a state $\ket{\Phi}$ under the action of $H$ to a state whose fidelity~\footnote{We adopt the fidelity definition $F(\rho,\sigma)=\big({\rm Tr} \sqrt{\rho^{1/2} \sigma \rho^{1/2}}\big)^2$ for two quantum states $\rho$ and $\sigma$.} is less than or equal to $\epsilon$ satisfies the TEUR \begin{align} \label{eqn-time-energy-relation1} t \geq \frac{(1-\sqrt{\epsilon})\hbar}{A \sum_j |\alpha_j|^2 |E_j|} \end{align} where $E_j$'s are the eigenvalues of $H$ with the corresponding normalized energy eigenvectors $\ket{E_j}$'s, $\ket{\Phi}=\sum_j \alpha_j \ket{E_j}$, and $A \approx 0.725$ is a universal constant.
Based on this equation, a weighted sum of $|t E_j|$'s serves as an indicator of the time-energy resource needed to perform $U=\exp(-i H t / \hbar)$. Thus, this motivates the following definition of the time-energy cost of a unitary matrix $U \in \myUgrp(r)$~\cite{Chau2011}: \begin{align} \label{eqn-definition-maxnorm-for-U} \maxnorm{U}&=\max_{1 \le j \le r} |\theta_j| \end{align} where $U$ has eigenvalues $\exp(-i E_j t/\hbar)\equiv \exp(-i \theta_j)$ for $j=1,\dots,r$ and $E_j$ are the eigenvalues of the Hamiltonian $H$~\footnote{We remark that our previous works~\cite{Chau2011,Fung:2013:Time-energy} consider more general measures by taking linear combinations of $|\theta_j|$'s. Here, we only consider the maximum $|\theta_j|$.}. We assume that all angles are taken in the range $(-\pi,\pi]$. The concept of the time-energy cost has been extended to quantum channels by considering a unitary extension in a larger Hilbert space and regarding the cost of the unitary as the cost of the quantum channel~\cite{Fung:2013:Time-energy}. The time-energy resource for a quantum channel $\mathcal F$ with Kraus operators $\{F_1,\ldots, F_K\}$ is defined as \dmathX2{ \maxnorm{\mathcal{F}} &\equiv& \min_U & \maxnorm{U} &eqn-energy-measure-general-channel\cr && \text{s.t.} & \mathcal{F}(\rho) = {\rm Tr}_B [ U_{BA} (\ket{0}_B\bra{0} \otimes \rho_A) U_{BA}^\dag ] \: \forall \rho. \cr } where the channel $\mathcal{F}$ acts on state $\rho$ in system $A$ and the unitary extension $U_{BA}$ includes system $B$ prepared in a standard state. In this definition, we seek the unitary extension that consumes the least time energy. We previously found bounds on $\maxnorm{\mathcal{F}}$ for general channels and obtained the exact value of $\maxnorm{\mathcal{F}}$ for some special channels including the depolarizing channel~\cite{Fung:2013:Time-energy}. In this paper, we consider the time-energy cost for general quantum measurements on finite-dimensional systems.
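The eigenphase definition of $\maxnorm{U}$ above can be evaluated numerically; a minimal numpy sketch (the example unitary is our own illustrative choice):

```python
import numpy as np

def maxnorm(U):
    """Time-energy cost max_j |theta_j| of a unitary U, with the eigenphases
    theta_j taken in (-pi, pi] as assumed in the text."""
    phases = np.angle(np.linalg.eigvals(U))  # np.angle returns values in (-pi, pi]
    return np.max(np.abs(phases))

# Example: the phase gate diag(1, i) has eigenphases {0, pi/2}, so its cost is pi/2.
U = np.diag([1.0, 1j])
print(maxnorm(U))  # ~ 1.5708 (pi/2)
```

The identity has cost zero under this measure, consistent with a process that performs no computation.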
A POVM can be cast as a quantum channel, and thus our previous result~\cite{Fung:2013:Time-energy} may be applied. However, since there are extra unitary degrees of freedom on the POVM elements and freedom in the labelings of the detection events (more explanation later), more analysis is needed to reuse the previous result for quantum channels. We remark that a similar work by Uzdin and Gat \cite{Uzdin:2013:time-energy} derives results for the time-energy cost for USD measurements with rank-1 projectors. In this work, we derive results for the time-energy cost for general POVM. The organization of this paper is as follows. We first introduce some notations and review some existing results in Sec.~\ref{sec-preliminary}. These results are used to prove formulae for the time-energy cost for POVM in Sec.~\ref{sec-TE-POVM}. In Sec.~\ref{sec-examples}, we apply the lower bound and exact formulae for the POVM time-energy cost to a few examples. Finally, we conclude in Sec.~\ref{sec-conclusion}. \section{Preliminary} \label{sec-preliminary} Denote by $\myUgrp(r)$ the group of $r \times r$ unitary matrices. Given a matrix $U$, its $(i,j)$ element is denoted by $U(i,j)$, row $i$ by $U(i,*)$, and column $j$ by $U(*,j)$. We adopt the convention that $\cos^{-1}$ always returns an angle in the range $[0,\pi]$. The quantum channel $\mathcal F$ is described by $$ {\mathcal F}(\rho) = \sum_{i=1}^{K} F_i \rho F_i^\dag $$ where the Kraus operators are $F_i \in {\mathbb C}^{m \times n}$. We assume without loss of generality that $m \ge n$, since we can zero-pad the Kraus operators and extract the non-zero subspace of the channel output. We only consider finite-dimensional systems, i.e., $m,n < \infty$. Define a map from a sequence of Kraus operators $(F_1,F_2,\ldots, F_{K})$ to a ${K} {m} \times {n}$ matrix as follows: \begin{align} g(F_1,F_2,\dots,F_{{K}}) &\triangleq \begin{bmatrix} F_1 \\ F_2 \\ \vdots \\ F_{{K}} \end{bmatrix} \in {\mathbb C}^{{K} {m} \times {n}}.
\end{align} Because $\sum_{j=1}^{{K}} F_j^\dag F_j = I$, the columns of $g(F_1,F_2,\dots,F_{{K}})$ are orthonormal and $g(F_1,F_2,\dots,F_{{K}})$ can be regarded as a submatrix of a unitary one. \subsection{ Partial $U$ problem} Problem~\eqref{eqn-energy-measure-general-channel} defines the time-energy cost for a general quantum channel. Note that two sets of Kraus operators $\{F_1,\ldots, F_K\}$ and $\{F_1',\ldots, F_K'\}$ represent the same quantum channel if and only if $F_i'=\sum_{j=1}^K w_{ij} F_j$ for all $i$ and for some unitary matrix $[w_{ij}]$ (see Ref.~\cite{Nielsen2000}). Thus, to solve problem~\eqref{eqn-energy-measure-general-channel}, one needs to consider all possible Kraus representations. Let us propose a simpler but related problem, which will be useful for analyzing the time-energy cost for POVM in Sec.~\ref{sec-TE-POVM}. Consider the time-energy cost for a sequence of Kraus operators. We define the partial $U$ problem for the submatrix $g(F_1,F_2,\dots,F_{{K}})$ as \dmathX2{ \maxnorm{ g(F_1,F_2,\dots,F_{{K}}) } \equiv\hspace{-2.6cm}& \cr && \displaystyle\min_{U} & \maxnorm{ U }&eqn-problem-partial-with-g\cr &&\text{s.t.}& U= \begin{bmatrix} F_1 & * & * & \cdots & * \\ F_2 & * & \ddots & & * \\ \vdots & \vdots &&& \vdots \\ \undermat{n}{F_{{K}}} & * & * & \cdots & * \end{bmatrix} \in \myUgrp({K} m). \cr \cr } Here, the first $n$ columns are fixed and the optimization is over the remaining ${K} m-n$ columns. We proved formulae that upper and lower bound this problem in Ref.~\cite{Fung:2013:Time-energy} and we summarize the results in Appendix~\ref{app-summary}. 
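The orthonormality of the columns of $g(F_1,\dots,F_{K})$ noted above is easy to verify numerically; a minimal sketch using the amplitude-damping channel as an example (the channel and the value of $\gamma$ are our own illustrative choices):

```python
import numpy as np

# Amplitude-damping channel with damping parameter gamma (illustrative choice).
gamma = 0.3
F1 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
F2 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

g = np.vstack([F1, F2])      # the Km x n matrix g(F1, F2), here 4 x 2
gram = g.conj().T @ g        # equals sum_j F_j^dagger F_j

# Trace preservation (sum_j F_j^dagger F_j = I) makes g an isometry:
print(np.allclose(gram, np.eye(2)))  # -> True
```

Any such isometry can then be completed to a unitary matrix by appending further orthonormal columns, which is the content of the partial $U$ problem below.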
Note that $g(F_1,F_2,\dots,F_{{K}})$ has the following property: \begin{lemma} \label{lemma-U-norm-unchanged-by-conjugation-2} {\rm \begin{align*} &\maxnorm{ g(F_1,F_2,\dots,F_{{K}}) } \\ = & \maxnorm{ g(\hat{Q}F_1Q^\dag,F_2Q^\dag,\dots,F_{{K}}Q^\dag) } \end{align*} for any unitary matrix $Q \in \myUgrp(n)$ and \begin{equation} \label{eqn-U-norm-unchanged-by-conjugation-Qhat} \hat{Q}= \begin{bmatrix} Q & 0 \\ 0& \bf{1} \end{bmatrix} \in \myUgrp(m). \end{equation} } \end{lemma} This lemma is Lemma~\ref{lemma-U-norm-unchanged-by-conjugation} in Appendix~\ref{app-summary} in another form. This form facilitates our later analysis. \section{Time-energy cost of POVM} \label{sec-TE-POVM} \begin{figure} \caption{ Example implementation of a POVM based on linear optics. In this example, the first $m$ detection events map to the first POVM element $M_1$, and the next $m$ detection events map to the second POVM element $M_2$, and so on. } \label{fig-POVM} \end{figure} We are given a POVM $\mathcal M$ with elements $\{ M_i \in {\mathbb C}^{n \times n}: i=1,\dots,K \}$ expressed in the basis $\{\ket{\overline{0}},\dots,\ket{\overline{n-1}}\}$, which, for example, may correspond to the input modes of beam splitters. Note that $\sum_{i=1}^K M_i = I$ and $M_i$ is positive semidefinite. An experiment implementing the POVM takes an input state in that basis and runs a quantum circuit to produce detection events corresponding to $\{M_i\}$. We can label the detection events using another basis $\{\ket{0}, \dots, \ket{{K} {m}-1}\}$, which, for example, may correspond to the output modes of beam splitters. Figure~\ref{fig-POVM} shows an example using linear optics to implement the POVM where each detection event corresponds to a detector click. In the simplest case, the ${m}$ detection events $\ket{(i-1) {m}},\dots,\ket{i {m}-1}$ map to $M_i$. 
This corresponds to embedding the POVM in a unitary matrix $U$ in a larger space of dimension ${K} {m}$ and the projection onto detection event $\ket{j}$ indicates an outcome for $M_i$ according to the above mapping. (We note that in reality, these projections need not be separately detected.) This means that $U$ has to satisfy \begin{align*} \sum_{z=(i-1) {m}}^{i {m}-1} \bra{z} U \rho U^\dag \ket{z} &= {\rm Tr} ( M_i \bar{\rho}) \hbox{ for all } i=1,\dots,K \end{align*} for any input state $\bar{\rho} \in {\mathbb C}^{n \times n}$ and \begin{align*} \rho= \begin{bmatrix} \bar{\rho} & \bf{0} \\ \bf{0} & \bf{0} \end{bmatrix} \in {\mathbb C}^{{K} {m} \times {K} {m}} \end{align*} is the input state in the larger space using basis $\{\ket{\bar{0}}, \dots, \ket{\overline{{K} {m}-1}}\}$. Thus, $U$ is of the form \begin{align} U= \begin{bmatrix} F_1 & * & * & \cdots & * \\ F_2 & * & \ddots & & * \\ \vdots & \vdots &&& \vdots \\ \undermat{n}{F_{{K}}} & * & * & \cdots & * \end{bmatrix} \in \myUgrp({K} m) \\ \nonumber \end{align} in which element $(i,j)$ corresponds to $\ket{i}\bra{\bar{j}}$, and the Kraus operators are of the form \begin{align} \label{eqn-general-form-Kraus-operator} F_i=V_i \begin{bmatrix} \sqrt{M_i} \\ \bf{0} \end{bmatrix} \in {\mathbb C}^{m \times n}, \end{align} where $V_i \in \myUgrp(m)$ is a unitary matrix that we may freely choose. To maintain generality, we allow zeros to be padded in $F_i$. In essence, the projections corresponding to the first $m$ rows of $U$ correspond to POVM outcome 1, and the next $m$ rows to POVM outcome 2, and so on. These projections are the detection events when $U$ is directly implemented in an experiment and the order of them (i.e., the order of the rows of $U$) is immaterial. In other words, we may arbitrarily label the projection outcomes $\ket{z}$. So if $U$ describes an experiment implementing the POVM, $PU$ also describes the same experiment for some permutation matrix $P$.
Overall, we define the time-energy cost of POVM $\mathcal M$ by \begin{equation} \label{eqn-TE-resource-POVM-1} \maxnorm{\mathcal M} \equiv \min_{P,\{V_i\}} \maxnorm{ P g(F_1,F_2,\dots,F_K) } \end{equation} where $P$ is some ${K} {m} \times {K} {m}$ permutation matrix, and $\maxnorm{Pg}$ is the solution to the partial $U$ problem~\eqref{eqn-problem-partial-with-g}. As we shall see, the number of zeros padded in $F_i$ (i.e., $m-n$) does not matter. In the following, we first investigate the special case where $P$ only swaps the POVM elements $\{F_i\}$, i.e., we restrict $P$ to be of the form $\hat{P} \otimes I_m$ where $\hat{P}$ is some ${K} \times {K}$ permutation matrix and $I_m$ is the $m$-dimensional identity matrix. Then, using the result of this special case, we investigate the case with a general $P$. \subsection{With arbitrary POVM element labelings} We first focus on the problem without the optimization over $P$ and $\{V_i\}$ (assumed to be fixed), and with a specific ordering of the POVM elements $(M_k)_{k=1}^K$: \begin{align} & \maxnorm{(M_k)_{k=1}^K} \nonumber \\ \equiv& \maxnorm{ g(F_1,F_2,\dots,F_K) } \nonumber \\ =& \maxnorm{ g(\hat{Q}F_1Q^\dag,F_2Q^\dag,\dots,F_{{K}}Q^\dag) } \text{for all } Q \nonumber \\ \ge& \max_{1\leq i \leq n} \cos^{-1} \left[ \operatorname{Re}( (\hat{Q} F_1 Q^\dag) (i,i) ) \right] \nonumber \\ =& \cos^{-1} \left[ \min_{1\leq i \leq n} \operatorname{Re}( (\hat{Q} F_1 Q^\dag) (i,i) ) \right] \label{eqn-max-norm-POVM-lower-bound-no-perm-1} \end{align} where (i) the third line is due to Lemma~\ref{lemma-U-norm-unchanged-by-conjugation-2}, $Q \in U(n)$ and $\hat{Q}$ is of the form in Eq.~\eqref{eqn-U-norm-unchanged-by-conjugation-Qhat}; (ii) the inequality in the fourth line is due to Eq.~\eqref{eqn-maxnorm-lower-bound-diagonal1}; and (iii) the last equality is because $\cos^{-1}$ is a decreasing function in the range $[0,\pi]$. Different $Q$ gives different bounds. 
With an argument similar to that for Eq.~\eqref{eqn-maxnorm-lower-bound-diagonal3}, we choose $Q$ to be the right singular matrix of $\sqrt{M_1}$ and this gives $ \min_i \operatorname{Re}( (\hat{Q} F_1 Q^\dag) (i,i) ) \le \sigma_\text{min} (F_1) = \sigma_\text{min} (\sqrt{M_1}) $ since every element of a unitary matrix (corresponding to the product of $\hat{Q}$, $V_1$, and the left singular matrix of $\sqrt{M_1}$) has a norm no larger than unity, where $\sigma_\text{min}$ denotes the minimum singular value of its argument. This shows that \begin{align} \maxnorm{(M_k)_{k=1}^K} &\ge \cos^{-1} \left[ \sigma_\text{min} (\sqrt{M_1}) \right]. \label{eqn-max-norm-POVM-lower-bound-no-perm-2} \end{align} Since this lower bound is independent of $\{V_i\}$, we have \begin{align} \label{eqn-max-norm-POVM-lower-bound-no-perm-3} \min_{\{V_i\}} \maxnorm{ (M_k)_{k=1}^K } \ge \cos^{-1} \left[ \sigma_\text{min} (\sqrt{M_1}) \right]. \end{align} On the other hand, this bound can be attained by choosing $V_1$ so that the product of $\hat{Q}$, $V_1$, and the left singular matrix of $\sqrt{M_1}$ is the identity matrix. Upper bound --- We upper bound the above quantity $ \min_{\{V_i\}} \maxnorm{(M_k)_{k=1}^K} $ by letting $V_1$ be the unitary matrix that transforms the left singular matrix of $\sqrt{M_1}$ to become its right singular matrix. Applying Eq.~\eqref{eqn-maxnorm-special-case2} gives \begin{align} \label{eqn-max-norm-POVM-upper-bound-no-perm-1} \min_{\{V_i\}} \maxnorm{ g(F_1,F_2,\dots,F_K) } \le \cos^{-1} \left[ \sigma_\text{min} (\sqrt{M_1}) \right] . \end{align} It is an inequality because we chose one particular $V_1$. Combining Eqs.~\eqref{eqn-max-norm-POVM-lower-bound-no-perm-3} and \eqref{eqn-max-norm-POVM-upper-bound-no-perm-1} gives \begin{align} \label{eqn-max-norm-POVM-exact-no-perm-1} \min_{\{V_i\}} \maxnorm{ g(F_1,F_2,\dots,F_K) } = \cos^{-1} \left[ \sigma_\text{min} (\sqrt{M_1}) \right] . \end{align} We now consider the minimization over permutations.
For the special case that $P$ permutes only the POVM elements, we have the following. \begin{theorem} {\rm \begin{align} &\maxnorm{\{M_{k}\}_{k=1}^K} \nonumber \\ \equiv& \min_{{\pmb\pi}} \min_{\{V_i\}} \maxnorm{ g(F_{{\pmb\pi}(1)},F_{{\pmb\pi}(2)},\dots,F_{{\pmb\pi}(K)}) } \nonumber \\ =& \min_{1\leq k \leq K} \cos^{-1} \left[ \sigma_\text{min} (\sqrt{M_k}) \right] , \label{eqn-TE-POVM-permute-element-index} \end{align} where $\pmb\pi$ denotes a permutation of the element indices $\{1,\dots,K\}$. } \end{theorem} \subsection{With arbitrary detection event labelings} We now consider general permutations over all detection events of all POVM elements and bound $\maxnorm{\mathcal M}$ in Eq.~\eqref{eqn-TE-resource-POVM-1}. Essentially, the permutation $P$ in $Pg(F_1,F_2,\dots,F_K)$ serves to produce a new top-left $n \times n$ block, which we denote as $\tilde{F}$. We may reuse Eqs.~\eqref{eqn-max-norm-POVM-lower-bound-no-perm-1} and \eqref{eqn-max-norm-POVM-lower-bound-no-perm-2} with this $\tilde{F}$ in place of $F_1$. Depending on how we choose $Q$ in Eq.~\eqref{eqn-max-norm-POVM-lower-bound-no-perm-1}, we have two methods to lower bound $\maxnorm{\mathcal M}$. In general, we may take the maximum of the two bounds obtained from the two methods [cf. Eqs.~\eqref{eqn-general-permutation-method-Q-identity-1}, \eqref{eqn-column-Mj-bound}, \eqref{eqn-TE-POVM-general-lower-bound}, and \eqref{eqn-TE-POVM-general-lower-bound2}]. Later, we will apply Method~1 in the examples in Sec.~\ref{sec-example-bell-state} and Method~2 in the examples in Secs.~\ref{sec-example-linear-optics} and \ref{sec-example-usd}. \subsubsection{Method 1} Let us consider the first way to bound $\maxnorm{\mathcal M}$ in Eq.~\eqref{eqn-TE-resource-POVM-1}.
Starting from Eq.~\eqref{eqn-max-norm-POVM-lower-bound-no-perm-1} with $Q$ being the identity matrix, we have \begin{align*} \maxnorm{\mathcal M} &= \min_{P,\{V_j\}} \maxnorm{ P g(F_1,F_2,\dots,F_K) } \\ &\ge \cos^{-1} \Big[ \max_{P,\{V_j\}} \min_{1\leq i \leq n} \operatorname{Re}( \tilde{F} (i,i) ) \Big] \equiv \cos^{-1} A \end{align*} where we used the fact that $\cos^{-1}$ is a decreasing function in the range $[0,\pi]$. Using the max-min inequality (see, e.g., Ref.~\cite{Boyd:2004}), \begin{align*} A &\le \min_{1\leq i \leq n} \max_{P,\{V_j\}} \operatorname{Re}( \tilde{F} (i,i) ) \\ &= \min_{1\leq i \leq n} \max_{j} \lVert \sqrt{M_j}(*,i) \rVert_2 \end{align*} where the term on the RHS of the second line is the $\ell_2$-norm of the $i$th column of $\sqrt{M_j}$. The second line holds because whenever we choose through $P$ the $i$th row of $\tilde{F}$ to be the $l$th row of the $j$th POVM element $F_j=V_j \sqrt{M_j}$, we can always maximize this $l$th row's $i$th column element by choosing the best rotation $V_j$: the best rotation concentrates all elements of the $i$th column of $\sqrt{M_j}$ into the $l$th row. This gives one way to lower bound $\maxnorm{\mathcal M}$: \begin{theorem} {\rm \begin{align} \maxnorm{\mathcal M} \ge \cos^{-1} \left[ \min_{1\leq i \leq n} \max_{1\le j \le K} \lVert \sqrt{M_j}(*,i) \rVert_2 \right] . \label{eqn-general-permutation-method-Q-identity-1} \end{align} } \end{theorem} This lower bound is easy to compute: first obtain the norm of every column of every $\sqrt{M_j}$ and then compare them. \begin{corollary} \label{cor-column-Mj-bound} {\rm If there is a $\sqrt{M_j}$ having a column with norm $c \ge 1/\sqrt{2}$, \begin{align} \maxnorm{\mathcal M} \ge \cos^{-1} (c) . \label{eqn-column-Mj-bound} \end{align} } \end{corollary} \begin{proof} For any POVM, the trace-preserving constraint implies that $\sum_{j=1}^K \lVert \sqrt{M_j}(*,i) \rVert_2^2 =1$. Hence, for the column $i$ in question, every other element satisfies $\lVert \sqrt{M_{j'}}(*,i) \rVert_2^2 \le 1-c^2 \le c^2$, and thus $\max_{j} \lVert \sqrt{M_j}(*,i) \rVert_2 =c$.
Finally, we can drop the minimization over $i$ since each fixed $i$ already yields a valid lower bound. \end{proof} \subsubsection{Method 2} Let us consider the second way to bound $\maxnorm{\mathcal M}$ in Eq.~\eqref{eqn-TE-resource-POVM-1}. We start from Eqs.~\eqref{eqn-max-norm-POVM-lower-bound-no-perm-1} and \eqref{eqn-max-norm-POVM-lower-bound-no-perm-2} with $\tilde{F}$ in place of $F_1$. Note that the upper bound in Eq.~\eqref{eqn-max-norm-POVM-upper-bound-no-perm-1} does not apply here since we now do not have the unitary degree of freedom on the left (i.e., $V_1$) to make the top-left $n \times n$ block of $Pg(F_1,F_2,\dots,F_K)$ Hermitian. The $n$ rows of $\tilde{F}$ are constructed by selecting rows from any of the Kraus operators $F_i$ of Eq.~\eqref{eqn-general-form-Kraus-operator}, $i=1,\dots,K$ (not necessarily from the same element). Thus we have the following. \begin{theorem} \label{thm-maxnorm-lower-bound-all-permutations} {\rm \begin{align} \maxnorm{\mathcal M} \ge \min_{P,\{V_i\}} \cos^{-1} \left[ \sigma_\text{min}(\tilde{F}) \right] \label{eqn-TE-POVM-general-lower-bound} \end{align} where $P$ denotes the selection of the rows of $\tilde{F}$ from any of the Kraus operators $F_i$ of Eq.~\eqref{eqn-general-form-Kraus-operator}, $i=1,\dots,K$. } \end{theorem} In general, we need to iterate over all permutations of the rows to find the best $\tilde{F}$ achieving the minimum on the RHS. Also, this lower bound may not be tight. On the other hand, we may bound $\sigma_\text{min}(\tilde{F})$ as follows.
First, it is no larger than the norm of any row $j$ of $\tilde{F}$: \begin{align} \tilde{F}(j,*) \tilde{F}(j,*)^\dag&= [W_L(j,*) S W_R^\dag] [W_R S^\dag W_L(j,*)^\dag] \nonumber \\ &= \sum_{i=1}^n |W_L(j,i)|^2 \sigma_i^2(\tilde{F}) \nonumber \\ &\ge \sigma_\text{min}^2(\tilde{F}) \hspace{.7cm} \text{for }\: 1 \le j \le n \label{eqn-bound-of-sigma-F} \end{align} where we take the singular value decomposition $\tilde{F}=W_L S W_R^\dag$ and $\sigma_i(\tilde{F}), i=1,\dots,n$ are the diagonal elements of $S$. Second, $\sigma_\text{min}(\tilde{F})$ is no larger than the minimum singular value of any subset of rows of $\tilde{F}$. This follows by simply multiplying the left singular matrix of this submatrix to the left of $\tilde{F}$ and applying the above result to this new $\tilde{F}$~\footnote {For example, suppose that the subset of rows comes from the first two rows of $\tilde{F}$ and $R$ is the $2\times 2$ left singular matrix of it. Then, let $\tilde{F}'=\begin{bmatrix}R^\dag&0\\0&{\mathbf 1}\end{bmatrix} \tilde{F}$ and apply Eq.~\eqref{eqn-bound-of-sigma-F} to $\tilde{F}'$. Note that $\tilde{F}'$ and $\tilde{F}$ have the same singular values.}. Thus, we construct $\tilde{F}$ by taking rows from $\{F_i\}$ with as large singular values as possible which can be done by choosing $V_i$ to cancel out the left singular matrix of $\sqrt{M_i}$. Therefore, a strategy to find a lower bound of $\maxnorm{\mathcal M}$ in Eq.~\eqref{eqn-TE-resource-POVM-1} is the following. \begin{lemma} {\rm Order all singular values of all $\sqrt{M_i}$, $i=1,\dots,K$, and obtain the $n$th largest singular value $\sigma_n$. Then, \begin{align} \maxnorm{\mathcal M} \ge \cos^{-1} (\sigma_n) . 
\label{eqn-TE-POVM-general-lower-bound2} \end{align} } \end{lemma} We remark that we do not take into account the amounts of overlap between the rows of $\tilde{F}$ when we select them, and thus this lower bound can be loose in some cases [i.e., the RHS of Eq.~\eqref{eqn-TE-POVM-general-lower-bound2} is lower than that of Eq.~\eqref{eqn-TE-POVM-general-lower-bound}]. As an extreme example, suppose two rows of $\tilde{F}$ come from different $\sqrt{M_i}$ and $\sqrt{M_j}$ but one row is a scalar multiple of the other. This makes the smallest singular value of $\tilde{F}$ zero instead of $\sigma_n$. In general, we need to go through all permutations in Eq.~\eqref{eqn-TE-POVM-general-lower-bound} to obtain good lower bounds. We consider optimality for special cases. \begin{lemma} {\rm If an $\tilde{F}$ can be found such that the RHS of Eq.~\eqref{eqn-TE-POVM-general-lower-bound2} is equal to that of Eq.~\eqref{eqn-TE-POVM-general-lower-bound} [i.e., $\sigma_\text{min}(\tilde{F})=\sigma_n$], then such an $\tilde{F}$ is the minimizing $\tilde{F}$ for Eq.~\eqref{eqn-TE-POVM-general-lower-bound}. } \end{lemma} Furthermore, if the minimizing $\tilde{F}$ in Eq.~\eqref{eqn-TE-POVM-general-lower-bound} is Hermitian, we upper bound $\maxnorm{\mathcal M}$ in Eq.~\eqref{eqn-TE-resource-POVM-1} by using Eq.~\eqref{eqn-maxnorm-special-case2} [similar to the argument for Eq.~\eqref{eqn-max-norm-POVM-upper-bound-no-perm-1}]: $$ \maxnorm{\mathcal M} \le \cos^{-1} \left[ \sigma_\text{min}(\tilde{F}) \right] . $$ Combining this with Eq.~\eqref{eqn-TE-POVM-general-lower-bound} gives the following. \begin{theorem} \label{thm-TE-POVM-general-special-case} {\rm If the minimizing $\tilde{F}$ for Eq.~\eqref{eqn-TE-POVM-general-lower-bound} is Hermitian, \begin{align} \label{eqn-TE-POVM-general-special-case1} \maxnorm{\mathcal M} = \cos^{-1} \left[ \sigma_\text{min}(\tilde{F}) \right] . 
\end{align} } \end{theorem} \section{Examples} \label{sec-examples} We compute the time-energy costs for a few quantum measurements and also compare them with the costs of some actual experiments based on the linear optical components used. We do not include the detectors in any of the time-energy cost calculations below. \subsection{Time-energy cost for $\myUgrp(2)$} The most general unitary operator in $\myUgrp(2)$ can be implemented by a beam splitter (BS) with the freedom to choose the reflectivity and phase as follows~\cite{Zeilinger:1994:two-particle}: \begin{align} U_\text{BS}= \exp (i \chi) \begin{bmatrix} r & i t^*\\ i t & r^* \end{bmatrix} \label{eqn-U-BS} \end{align} where $\chi$ is an arbitrary real number, and $r$ and $t$ are the (complex) reflection and transmission amplitudes with $|r|^2+|t|^2=1$. We seek the most efficient $U_\text{BS}$ for a fixed reflectivity $|r|$ based on $\maxnorm{U_\text{BS}}$. The eigenvalues of $U_\text{BS}$ are $\exp(i \chi) \left[ \text{Re}(r) \pm i \sqrt{ |t|^2+\text{Im}^2(r)} \right]$. It can be easily seen that the best parameters are $\chi=0$ and $r=|r|$, giving \begin{equation} \label{eqn-maxnorm-U-BS} \maxnorm{U_\text{BS}}=\cos^{-1} |r|. \end{equation} \begin{figure} \caption{ Bell measurement for $\ket{\Psi^-}$. } \label{fig-Bell1} \end{figure} \subsection{Time-energy cost for Bell state analysis} \label{sec-example-bell-state} \subsubsection{One Bell state} A 50-50 beam splitter can be used to project the two-photon input state onto the singlet Bell state~\cite{Braunstein:1995:Bell,Lutkenhaus:1999:Bell} (see Fig.~\ref{fig-Bell1}).
The four Bell states are \begin{align*} \ket{\Psi^\pm} &= ( \ket{\updownarrow}_a \ket{\leftrightarrow}_b \pm \ket{\leftrightarrow}_a \ket{\updownarrow}_b ) /\sqrt{2} \\ \ket{\Phi^\pm} &= ( \ket{\updownarrow}_a \ket{\updownarrow}_b \pm \ket{\leftrightarrow}_a \ket{\leftrightarrow}_b ) /\sqrt{2} \end{align*} where the two photons are in modes $a$ and $b$, and $\ket{\updownarrow}$ and $\ket{\leftrightarrow}$ are single-photon states with vertical and horizontal polarizations. Two detectors are installed at the two output ports of the BS, and when both report a click, the input state is collapsed to the singlet state $\ket{\Psi^-}$. This simple setup cannot make projections onto the other three Bell states; that is possible with more complicated setups~\cite{Braunstein:1995:Bell,Lutkenhaus:1999:Bell}. Based on the previous analysis resulting in Eq.~\eqref{eqn-maxnorm-U-BS}, the time-energy cost to collapse a two-photon state to $\ket{\Psi^-}$ with this simple setup is $\cos^{-1} (1/\sqrt{2})=\pi/4$, using the fact that it is a 50-50 BS. Let us consider the time-energy cost for the ideal measurement with a projection onto $\ket{\Psi^-}$. There is a POVM element $\ket{\Psi^-}\bra{\Psi^-}$, which, being a projector, equals its own square root, and following Corollary~\ref{cor-column-Mj-bound} we can see that a column of it has norm $1/\sqrt{2}$. So, by Eq.~\eqref{eqn-column-Mj-bound}, the cost lower bound is $\pi/4$. Therefore, the above implementation with one BS is optimal since it achieves this bound. \begin{figure} \caption{ Bell measurement for $\ket{\Psi^-}$ and $\ket{\Psi^+}$. } \label{fig-Bell2} \end{figure} \subsubsection{Two Bell states} A more complicated setup, the Innsbruck detection scheme~\cite{Weinfurter:1994:Bell,Braunstein:1995:Bell,Michler:1996:Bell}, as shown in Fig.~\ref{fig-Bell2}, can project onto two Bell states. Coincidence detections at detectors 1 and 4 or at 2 and 3 correspond to projection onto $\ket{\Psi^-}$.
Coincidence detections at detectors 1 and 2 or at 3 and 4 correspond to projection onto $\ket{\Psi^+}$. The event of having two particles at any one of the four detectors could have been triggered by $\ket{\Phi^+}$ or $\ket{\Phi^-}$. The time-energy cost for the ideal measurement with projections onto $\ket{\Psi^\pm}$ is lower bounded by $\pi/4$, as argued above. We construct a $U$ with these two projections in order to obtain an upper bound: \begin{align*} U&=\ket{0} \bra{\Psi^-}_{ab} + \ket{1} \bra{\Psi^+}_{ab} + \ket{2} \bra{ \updownarrow \updownarrow}_{ab} + \ket{3} \bra{ \leftrightarrow \leftrightarrow}_{ab} \\ &= \begin{bmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 & 0 \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \end{align*} where $U$ acts on states specified in the basis $\{\ket{\updownarrow \leftrightarrow}, \ket{\leftrightarrow \updownarrow }, \ket{ \updownarrow \updownarrow}, \ket{ \leftrightarrow \leftrightarrow}\}$ and produces the detection events labeled as $\ket{j}$, $j=0,1,2,3$. It is clear that $\maxnorm{U}=\pi/4$. Therefore, the time-energy cost for the ideal measurement with projections onto $\ket{\Psi^\pm}$ is $\pi/4$. Comparison between the time-energy cost for the ideal measurement and the cost for the actual implementation may be subject to interpretation. We may compute the overall cost for all the linear optics devices responsible for (i) only the transformation or (ii) the transformation and detection. The detection part is for detecting the horizontal and vertical qubit states and it consists of a polarizing beam splitter (PBS) and two detectors. One may argue that this part is used anyway to detect the original input qubit when no transformation is involved and so it should not be included.
On the other hand, including the detection part in the overall cost also makes sense since sometimes it is not needed (for example, in the one Bell state measurement); also, it is specific to linear optics implementations and we may want to include all costs due to this type of implementation when our consideration is not restricted to this type. Here, we adopt interpretation (ii) since it is the presence of the two PBSs that enables the projections onto two Bell states. As such, the Innsbruck scheme certainly costs more than $\pi/4$ since it contains a 50-50 BS and two PBSs, and the BS alone already costs $\pi/4$. To find the time-energy cost of a PBS, consider its unitary representation for transforming the polarization states of the two input modes: \begin{align} U_\text{PBS}= e^{i \chi} \begin{bmatrix} \ket{\updownarrow}\bra{\updownarrow} & \ket{\leftrightarrow}\bra{\leftrightarrow} \\ \ket{\leftrightarrow}\bra{\leftrightarrow} & \ket{\updownarrow}\bra{\updownarrow} \end{bmatrix} \label{eqn-U-PBS} \end{align} which has eigenvalues $-e^{i \chi}$, $e^{i \chi}$, $e^{i \chi}$, and $e^{i \chi}$. With $\chi=\pi/2$, the smallest time-energy cost is $\maxnorm{U_\text{PBS}}=\pi/2$. \subsection{Time-energy cost for general measurements on linear optical qubits} \label{sec-example-linear-optics} \begin{figure} \caption{ An implementation of the POVM in Eq.~\eqref{eqn-example-linear-optics-POVM}. } \label{fig-rank-2} \end{figure} A scheme for general measurements on linear optical qubits was proposed in Ref.~\cite{Ota:2012:optics-measurement}. We analyze the time-energy cost for their measurement implementation shown in Fig.~\ref{fig-rank-2} (which is Fig.~1 of Ref.~\cite{Ota:2012:optics-measurement}), consisting of, sequentially, a PBS, two wave plates (WP), a BS, and two WP. The input state is polarization encoded: $\ket{\psi}=c_\text{H} \ket{\leftrightarrow}+c_\text{V} \ket{\updownarrow}$.
The POVM elements to be implemented are \begin{equation} \begin{aligned} M_1&= \cos^2 \varphi \ket{m_+}\bra{m_+}+\sin^2 \varphi \ket{m_-}\bra{m_-} \\ M_2&= \sin^2 \varphi \ket{m_+}\bra{m_+}+\cos^2 \varphi \ket{m_-}\bra{m_-} \end{aligned} \label{eqn-example-linear-optics-POVM} \end{equation} where we assume $w=0$ in the implementation of Ref.~\cite{Ota:2012:optics-measurement}. Here, $\{\ket{m_\pm}\}$ form an orthonormal basis. We assume that $0\le\varphi\le\pi/2$. We first compute the time-energy cost for the implementation. For simplicity, we only consider the PBS and BS, which will give us a cost lower bound. In the implementation, the PBS is the one in Eq.~\eqref{eqn-U-PBS} and the BS is the one in Eq.~\eqref{eqn-U-BS} with reflectivity $|r|=\cos \varphi$. Thus, $\maxnorm{U_\text{PBS}}=\pi/2$ and $\maxnorm{U_\text{BS}}=\varphi$. The total evolution time $t_\text{tot}$ is split between the PBS and BS: \begin{equation} \label{eqn-example-optics-total-time} t_\text{tot}=t_\text{PBS}+t_\text{BS} \end{equation} and the total energy is thus $\pi/(2 t_\text{PBS})+\varphi/t_\text{BS}$. The optimal split between $t_\text{PBS}$ and $t_\text{BS}$ is found by \dmathX2{ E_\text{tot}^\text{impl} &=& \min_{t_\text{PBS},t_\text{BS}} & \frac{\pi}{2 t_\text{PBS}}+\frac{\varphi}{t_\text{BS}} &eqn-example-linear-optics-Eimpl\cr && \text{s.t.} & t_\text{tot}=t_\text{PBS}+t_\text{BS} \cr } which can easily be solved analytically. Next, we obtain the time-energy cost for the POVM in Eq.~\eqref{eqn-example-linear-optics-POVM} by solving Eq.~\eqref{eqn-TE-POVM-general-lower-bound}.
We can solve it by going through all $12$ permutations for $\tilde{F}$ to get $$ \maxnorm{\mathcal M} \ge \begin{cases} \varphi & \text{if } 0 \le \varphi < \pi/4 \\ \frac{\pi}{2}-\varphi & \text{if } \pi/4 \le \varphi < \pi/2 \end{cases} $$ with, for the case $\varphi \in [0,\pi/4)$, $$ \tilde{F}= \begin{bmatrix} \cos \varphi & 0\\ 0 & \cos \varphi \end{bmatrix} $$ which is formed by taking the first row of $\sqrt{M_1}$ and the second row of $\sqrt{M_2}$, and for the case $\varphi \in [\pi/4,\pi/2)$, $$ \tilde{F}= \begin{bmatrix} \sin \varphi & 0\\ 0 & \sin \varphi \end{bmatrix} $$ which is formed by taking the first row of $\sqrt{M_2}$ and the second row of $\sqrt{M_1}$. Since these minimizing $\tilde{F}$ are diagonal for both cases, Theorem~\ref{thm-TE-POVM-general-special-case} implies that \begin{align*} \maxnorm{\mathcal M} &= \begin{cases} \varphi & \text{if } 0 \le \varphi < \pi/4 \\ \frac{\pi}{2}-\varphi & \text{if } \pi/4 \le \varphi < \pi/2 \end{cases}. \end{align*} Using $\maxnorm{\mathcal M}$ as the time-energy cost, we have \begin{align} E_\text{tot}^\text{ideal} \equiv \frac{ \maxnorm{\mathcal M} } {t_\text{tot}}. \label{eqn-example-linear-optics-Eideal} \end{align} We can see how much more energy is used for the same time $t_\text{tot}$ in the actual implementation compared to the ideal one by computing $E_\text{tot}^\text{impl}/E_\text{tot}^\text{ideal} \triangleq r(\varphi)$ which turns out to be independent of $t_\text{tot}$. Figure~\ref{fig-linear-optics} shows that result and it can be seen that the PBS causes a significant increase in the energy cost for small and large $\varphi$. \begin{figure} \caption{ Ratio of the energy of the measurement implementation with linear optics (Eq.~\eqref{eqn-example-linear-optics-Eimpl}) to the minimum energy (Eq.~\eqref{eqn-example-linear-optics-Eideal}): $r(\varphi)=E_\text{tot}^\text{impl}/E_\text{tot}^\text{ideal}$. 
} \label{fig-linear-optics} \end{figure} \subsection{Time-energy cost for unambiguous state discrimination} \label{sec-example-usd} We analyze the time-energy cost for unambiguous state discrimination (USD) of geometrically uniform (GU) states~\cite{Eldar:2003:USD}. A set of GU states generated by a single normalized state $\ket{\phi} \in {\mathbb C}^n$ is $\mathcal S=\{\ket{\phi_i}=U_i \ket{\phi}, U_i \in \mathcal G \}$, where $\mathcal G$ is a finite group of unitary matrices $\{U_i \in \myUgrp(n), i=1,\dots,K-1\}$ such that $U_i U_j \in \mathcal G$ and $U_i^\dag \in \mathcal G$ for all $i,j$. We assume the states in $\mathcal S$ have equal prior probabilities $1/(K-1)$. Theorem~4 of Ref.~\cite{Eldar:2003:USD} proves that the POVM $\mathcal M$ that unambiguously discriminates these states with the minimum probability of an inconclusive result consists of $K$ POVM elements \begin{align*} M_i&=p \kett{\tilde{\phi}_i} \braa{\tilde{\phi}_i}, \text{ for } i=1,\dots,K-1 \\ M_K&=I-\sum_{i=1}^{K-1} M_i \end{align*} where $\{ \kett{\tilde{\phi}_i} = U_i \kett{\tilde{\phi}} , U_i \in \mathcal G\}$, $\kett{\tilde{\phi}}=(\Phi \Phi^\dag)^{-1} \ket{\phi}$, $\Phi$ is a matrix of columns $\ket{\phi_i}$, and $\sqrt{p}$ is the smallest singular value of $\Phi$. Here, $(\Phi \Phi^\dag)^{-1}$ is the Moore-Penrose pseudoinverse of $\Phi \Phi^\dag$. Note that $\kett{\tilde{\phi}_i}$ is not necessarily normalized. It turns out that this optimal USD measurement produces equal probabilities for detecting each state in $\mathcal S$. This detection probability is \begin{align*} & \operatorname{Pr}( \text{concluding } i \text{ given that }\ket{\phi_i} \text{ is emitted}) \\ =& \bra{\phi_i} M_i\ket{\phi_i} = p = \sigma_\text{min}^{2}(\Phi). \end{align*} We are interested in the time-energy cost of this USD measurement. We apply Eq.~\eqref{eqn-TE-POVM-general-lower-bound2} to lower bound $\maxnorm{\mathcal M}$.
The single non-zero singular value of each $M_i$, $i=1,\dots,K-1$, is \begin{align*} p \braakett{\tilde{\phi}_i}{\tilde{\phi}_i} &= p \: \bra{\phi} (\Phi \Phi^\dag)^{-2} \ket{\phi} \\ &= \sigma_\text{min}^{2}(\Phi) \bra{\phi} (\Phi \Phi^\dag)^{-2} \ket{\phi}. \end{align*} Now let us focus on $M_K$. Note that $T=\sum_{i=1}^{K-1} M_i$ has rank at most $K-1$ and thus $M_K$ has at least $n-K+1$ eigenvalues of one. Also, $T$ has an eigenvalue of one, since otherwise $p$ could be increased further, contradicting the optimality of the original POVM. This means that $M_K$ has at least one eigenvalue of zero. We need to find the $n$th largest singular value $\sigma_n$ among all singular values of all $\sqrt{M_i}$. The first $n-K+1$ largest singular values are equal to one, coming from $\sqrt{M_K}$. The next $K-2$ singular values may come from any of the $\sqrt{M_i}$, $i=1,\dots,K$, and the next one (i.e., the $n$th one) must be $$ \sigma_n= \sigma_\text{min}(\Phi) \sqrt{ \bra{\phi} (\Phi \Phi^\dag)^{-2} \ket{\phi} } , $$ coming from one of the $\sqrt{M_i}$, $i=1,\dots,K-1$. Therefore, the time-energy cost for the optimal USD measurement $\mathcal M$ for GU states with equal prior probabilities satisfies \begin{align} \maxnorm{\mathcal M} \ge \cos^{-1} \left[ \sigma_\text{min}(\Phi) \sqrt{ \bra{\phi} (\Phi \Phi^\dag)^{-2} \ket{\phi} } \right]. \label{eqn-cost-USD-lower-bound} \end{align} As a numerical example, we consider $\bar{K}\equiv K-1$ coherent states of the same mean photon number $|\alpha|^2$ but with different phases: $$ \ket{\phi_j}=e^{-|\alpha|^2/2} \sum_{m=0}^\infty \frac{\alpha_j^m}{\sqrt{m!}} \ket{m} $$ where $j=1,\dots,\bar{K}$, $\alpha_j=\alpha \: e^{i 2 \pi (j-1)/\bar{K}}$, and $\ket{m}$ are the boson number states. Note that $\ket{\phi_j}=U^{j-1} \ket{\phi_1}$ with $$ U=\sum_{m=0}^\infty e^{i 2\pi m/\bar{K}} \ket{m}\bra{m}. $$ Therefore, the $\ket{\phi_j}$ are GU states.
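For these symmetric coherent states, the bound in Eq.~\eqref{eqn-cost-USD-lower-bound} is straightforward to evaluate numerically. A minimal sketch (Python with NumPy; the truncation dimension, the function names, and the clamp against rounding are our own illustrative choices):

```python
import numpy as np
from math import factorial

def coherent(alpha, dim=50):
    """Truncated coherent state |alpha> in the number basis."""
    m = np.arange(dim)
    fact = np.array([float(factorial(k)) for k in m])
    return np.exp(-abs(alpha)**2 / 2) * alpha**m / np.sqrt(fact)

def usd_cost_lower_bound(alpha, Kbar, dim=50):
    """RHS of the cost bound for Kbar symmetric coherent states."""
    # Columns of Phi are the GU states |phi_j>, j = 1, ..., Kbar.
    Phi = np.column_stack([coherent(alpha * np.exp(2j * np.pi * j / Kbar), dim)
                           for j in range(Kbar)])
    sigma_min = np.linalg.svd(Phi, compute_uv=False).min()
    Ginv = np.linalg.pinv(Phi @ Phi.conj().T)  # Moore-Penrose pseudoinverse
    phi = Phi[:, 0]  # the generating state
    val = sigma_min * np.sqrt((phi.conj() @ Ginv @ Ginv @ phi).real)
    return np.arccos(min(val, 1.0))  # clamp against rounding

# Smaller mean photon number -> more state overlap -> larger cost bound.
print(usd_cost_lower_bound(np.sqrt(0.1), Kbar=4))
print(usd_cost_lower_bound(np.sqrt(3.0), Kbar=4))
```

The ordering of the two printed values is consistent with the trend in Fig.~\ref{fig-usd}, where the curves for smaller $|\alpha|^2$ lie higher.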
We compute the lower bound of the time-energy cost for the optimal USD measurement $\mathcal M$ that distinguishes $\ket{\phi_j}$, $j=1,\dots,\bar{K}$. For simplicity, we approximate $\ket{\phi_j}$ and $U$ by truncating the sums to the first $50$ terms, which is reasonable since we consider $|\alpha|$ to be small. Thus, we consider the states to be 50-dimensional. The lower bounds of the time-energy costs using Eq.~\eqref{eqn-cost-USD-lower-bound} are shown in Fig.~\ref{fig-usd}. Among the four intensities plotted, the USD measurement corresponding to the highest intensity has the smallest lower bound on the time-energy cost and thus may actually require the least time-energy. Also, the figure suggests that distinguishing a larger number of states incurs a higher time-energy cost. Interestingly, the cost lower bound saturates to some value as the number of states increases. This may indicate that a finite time-energy resource is enough to distinguish any number of states (for a fixed mean photon number). \begin{figure} \caption{ Lower bound of the time-energy cost for the optimal USD for distinguishing $K-1$ symmetric coherent states (Eq.~\eqref{eqn-cost-USD-lower-bound}). The four curves from top to bottom correspond to mean photon numbers $|\alpha|^2=0.1, 0.5, 1, 3$. } \label{fig-usd} \end{figure} \section{Concluding remarks} \label{sec-conclusion} We propose and investigate the time-energy cost for POVMs, along similar lines to our previous work for unitary transformations and quantum channels. We motivate our definition for the time-energy cost by a TEUR. To find the cost, a POVM is regarded as a quantum channel embedded in a unitary transformation in a larger Hilbert space. The minimum cost among all unitary transformations implementing this POVM is the cost of the POVM. We prove formulae for computing the POVM time-energy cost based on the POVM elements.
When we only optimize over the ordering of the POVM elements in the larger unitary transformation, we obtain the cost in Eq.~\eqref{eqn-TE-POVM-permute-element-index}, which depends on the minimum singular value of the square root of one of the elements. A POVM element may correspond to multiple detection events. When we also optimize over the detection events of the POVM elements, we obtain lower bounds to the cost in Eqs.~\eqref{eqn-general-permutation-method-Q-identity-1} and \eqref{eqn-TE-POVM-general-lower-bound}. In a special case satisfying a Hermitian condition, the cost is given by Eq.~\eqref{eqn-TE-POVM-general-special-case1}. The time-energy cost of a POVM can be used as a benchmark for the efficiency of actual experiments. We compared the costs of the ideal POVMs and the actual linear optics experiments for the Bell measurements and a POVM with rank-2 elements. We saw that the Bell measurement for one Bell state is optimal but that for two Bell states is not. Also, the implementation for the POVM with rank-2 elements may not be optimal. We computed the lower bound to the time-energy cost for the optimal USD for distinguishing symmetric coherent states. Our result suggests that more time-energy resource is needed to distinguish more states, in line with intuition, but interestingly the cost lower bound saturates as the number of states increases. This may indicate that a finite time-energy resource is enough to distinguish any number of states. \section*{Acknowledgments} We thank H.-K. Lo for enlightening discussion. This work is supported in part by RGC under Grant No. 700712P from the HKSAR Government. \appendix \section{Summary of previous work} \label{app-summary} We summarize the results of Ref.~\cite{Fung:2013:Time-energy} for quantum channels that are useful for this work. Given a matrix $U$, the submatrix formed from columns $a$ to $b$ inclusively is denoted by $U_{[a,b]}$ with $a \le b$.
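The single-vector solution recalled below, $\cos^{-1}\left[\operatorname{Re}(\braket{e_i}{b_i})\right]$, can be checked by brute force for $r=2$. In this sketch (Python with NumPy; the grid search and the parameterization are our own illustrative choices), every unitary with fixed first column $\ket{b}$ is $[\, \ket{b} \;\; e^{i\gamma}\ket{b_\perp} \,]$ for some phase $\gamma$, which we scan:

```python
import numpy as np

def maxnorm(U):
    """Largest-magnitude eigenvalue phase of a unitary matrix."""
    return np.max(np.abs(np.angle(np.linalg.eigvals(U))))

# Fixed first column |b> (real, unit norm); |e_1> = (1, 0).
c = 0.6
b = np.array([c, np.sqrt(1 - c**2)])
b_perp = np.array([-b[1], b[0]])  # orthogonal complement of |b>

# Every U in U(2) with first column |b> is [b, e^{i*gamma} b_perp];
# scan the remaining phase freedom gamma on a grid.
gammas = np.linspace(-np.pi, np.pi, 4001)
best = min(maxnorm(np.column_stack([b, np.exp(1j * g) * b_perp]))
           for g in gammas)

print(best, np.arccos(c))  # grid minimum matches cos^{-1}[Re<e_1|b>]
```

The grid minimum occurs at $\gamma=0$, where $U$ is the real rotation by $\cos^{-1}(c)$, in agreement with the closed-form solution.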
\subsection{Partial $U$ problem with $n$ vectors} Solving problem~\eqref{eqn-problem-partial-with-g} means finding the $U \in \myUgrp(r)$, where $r={K} m$, with the smallest $\maxnorm{U}$ of the form \begin{align} \label{eqn-U-with-missing-columns} U= \begin{bmatrix} \ket{b_1} & \ket{b_2} & \dots & \ket{b_n} & * & * & \cdots & * \end{bmatrix} \end{align} where the first $n$ columns are orthogonal and $n \le r$. We formulated this problem in Ref.~\cite{Fung:2013:Time-energy} as finding a $U$ that transforms $\ket{e_i} \longrightarrow \ket{b_i}$ for all $i=1,\ldots,n$: \dmathX2{ \maxnorm{U_{[1,n]}} &\equiv& \displaystyle\min_{U} & \maxnorm{ U }\cr &&\text{s.t.}&U\ket{e_i} = \ket{b_i} \:\: \text{for all }i=1,\ldots,n,\cr &&&\text{with }U \in \myUgrp(r).&eqn-problem-original-min-U\cr } where $\ket{e_i}$ is the unit vector with $1$ at the $i$th entry and $0$ everywhere else. Note that the notation $U_{[1,n]}$ means that columns $1$ to $n$ of $U$ are fixed as in Eq.~\eqref{eqn-U-with-missing-columns}. In other words, $$ \maxnorm{ g(F_1,F_2,\dots,F_{{K}}) } = \maxnorm{U_{[1,n]}}. $$ \subsection{Partial $U$ problem with one vector} Consider a special case: the ``partial $U$ problem''~\eqref{eqn-problem-original-min-U} with only one vector has the following solution~\cite{Fung:2013:Time-energy}: \dmathX2{ \maxnorm{U_{[i,i]}} &\equiv& \min_{U} & \maxnorm{ U }\cr &&\text{s.t.}&U\ket{e_i} = \ket{b_i}\text{ with } U \in \myUgrp(r)&eqn-problem-min-U-single-vector\cr &=&&\hspace{-18pt}\cos^{-1}\left[\operatorname{Re}(\braket{e_i}{b_i})\right].\cr } We remark that the solution depends on $\ket{e_i}$ and $\ket{b_i}$ only through $\operatorname{Re}(\braket{e_i}{b_i})$, and not on their actual forms. Note that the notation $U_{[i,i]}$ means that column $i$ of $U$ is fixed. \subsection{Partial $U$ problem -- lower bound} Since the feasible set of problem \eqref{eqn-problem-min-U-single-vector} contains that of problem \eqref{eqn-problem-original-min-U}, $$ \maxnorm{U_{[1,n]}} \ge \maxnorm{U_{[i,i]}} \text{ for all } i=1,\dots,n. 
$$ Thus, a lower bound to the time-energy cost is \begin{equation} \label{eqn-maxnorm-lower-bound-diagonal1} \maxnorm{U_{[1,n]}} \ge \max_{1\leq i \leq n} \cos^{-1}\{\operatorname{Re}[U(i,i)]\} \end{equation} where $\cos^{-1}$ always returns an angle in the range $[0,\pi]$. Note that $\braket{e_i}{b_i}$ simply corresponds to the $i$th diagonal element of $U$. Based on Eq.~\eqref{eqn-maxnorm-lower-bound-diagonal1}, two more bounds using the eigenvalues and singular values are derived: \begin{align} \label{eqn-maxnorm-lower-bound-diagonal2} \maxnorm{U_{[1,n]}} &\ge \max_{1\leq i \leq n} \cos^{-1}\{\operatorname{Re}[ \lambda_i(F_1^\text{top}) ]\}, \text{ and} \\ \label{eqn-maxnorm-lower-bound-diagonal3} \maxnorm{U_{[1,n]}} &\ge \cos^{-1} \left[ \sigma_\text{min}(F_1) \right] \end{align} where $\lambda_i$ denotes the $i$th eigenvalue of its argument and $\sigma_\text{min}$ denotes the minimum singular value of its argument. To get Eqs.~\eqref{eqn-maxnorm-lower-bound-diagonal2} and \eqref{eqn-maxnorm-lower-bound-diagonal3}, we need the following lemma. \begin{lemma} \label{lemma-U-norm-unchanged-by-conjugation} {\rm (Lemma~1 in Ref.~\cite{Fung:2013:Time-energy}) $$ \maxnorm{U_{[1,n]}} = \maxnorm{(\tilde{Q} U \tilde{Q}^\dag)_{[1,n]}} $$ for any unitary matrix $Q \in \myUgrp({n})$ with \begin{equation*} \tilde{Q}= \begin{bmatrix} Q & 0 \\ 0& \bf{1} \end{bmatrix} \in \myUgrp(r) . \end{equation*} } \end{lemma} To get Eq.~\eqref{eqn-maxnorm-lower-bound-diagonal2}, we apply the Schur decomposition to the first $n$ rows of $F_1$ (which form a square matrix, denoted $F_1^\text{top}$) to obtain its eigenvalues on the diagonal of a triangular matrix and use Lemma~\ref{lemma-U-norm-unchanged-by-conjugation} to cancel out the left and right unitary matrices. This triangular matrix becomes the new top-left block of $U$.
To obtain Eq.~\eqref{eqn-maxnorm-lower-bound-diagonal3}, we apply singular value decomposition to $F_1$ to get $F_1=V D Q$ ($V$ and $Q$ are unitary and $D$ is diagonal) and use Lemma~\ref{lemma-U-norm-unchanged-by-conjugation} to cancel out the right unitary matrix $Q$ giving the new $U(i,i)= (V D)(i,i)$. Next, note that $\operatorname{Re}[(V D)(i,i)] \le D(i,i)$ since the magnitude of every element of $V$ (being unitary) is at most one. Thus, Eq.~\eqref{eqn-maxnorm-lower-bound-diagonal3} is a looser bound than Eq.~\eqref{eqn-maxnorm-lower-bound-diagonal1}. In general, we may take the maximum of the RHS of Eqs.~\eqref{eqn-maxnorm-lower-bound-diagonal1}-\eqref{eqn-maxnorm-lower-bound-diagonal3} to serve as the lower bound. \subsection{Partial $U$ problem -- diagonal $F_1$} An exact time-energy cost is obtained for a special case. If the top-left $n \times n$ block of $U$ is diagonal (i.e., $F_1$ is diagonal if it is square), we have \begin{align} \label{eqn-maxnorm-special-case1} \maxnorm{U_{[1,n]}} = \max_{1 \le i \le n} \cos^{-1}\{\operatorname{Re}[U(i,i)]\} \end{align} (c.f. Eq.~(44) of Ref.~\cite{Fung:2013:Time-energy}). In general, if $F_1$ is Hermitian, it can be diagonalized and, based on Lemma~\ref{lemma-U-norm-unchanged-by-conjugation}, Eq.~\eqref{eqn-maxnorm-special-case1} becomes \begin{align} \label{eqn-maxnorm-special-case2} \maxnorm{U_{[1,n]}} = \cos^{-1} \left[ \lambda_\text{min} (F_1) \right], \end{align} where $\lambda_\text{min}$ denotes the minimum eigenvalue of its argument. \end{document}